"""
    pygments.lexer
    ~~~~~~~~~~~~~~

    Base lexer classes.

    :copyright: Copyright 2006-2024 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re
import sys
import time

from pip._vendor.pygments.filter import apply_filters, Filter
from pip._vendor.pygments.filters import get_filter_by_name
from pip._vendor.pygments.token import Error, Text, Other, Whitespace, _TokenType
from pip._vendor.pygments.util import get_bool_opt, get_int_opt, get_list_opt, \
    make_analysator, Future, guess_decode
from pip._vendor.pygments.regexopt import regex_opt

__all__ = ['Lexer', 'RegexLexer', 'ExtendedRegexLexer', 'DelegatingLexer',
           'LexerContext', 'include', 'inherit', 'bygroups', 'using', 'this',
           'default', 'words', 'line_re']

line_re = re.compile('.*?\n')

_encoding_map = [(b'\xef\xbb\xbf', 'utf-8'),
                 (b'\xff\xfe\0\0', 'utf-32'),
                 (b'\0\0\xfe\xff', 'utf-32be'),
                 (b'\xff\xfe', 'utf-16'),
                 (b'\xfe\xff', 'utf-16be')]

_default_analyse = staticmethod(lambda x: 0.0)


class LexerMeta(type):
    """
    This metaclass automagically converts ``analyse_text`` methods into
    static methods which always return float values.
    """

    def __new__(mcs, name, bases, d):
        if 'analyse_text' in d:
            d['analyse_text'] = make_analysator(d['analyse_text'])
        return type.__new__(mcs, name, bases, d)


class Lexer(metaclass=LexerMeta):
    """
    Lexer for a specific language.

    See also :doc:`lexerdevelopment`, a high-level guide to writing
    lexers.

    Lexer classes have attributes used for choosing the most appropriate
    lexer based on various criteria.

    .. autoattribute:: name
       :no-value:
    .. autoattribute:: aliases
       :no-value:
    .. autoattribute:: filenames
       :no-value:
    .. autoattribute:: alias_filenames
    .. autoattribute:: mimetypes
       :no-value:
    .. autoattribute:: priority

    Lexers included in Pygments should have two additional attributes:

    .. autoattribute:: url
       :no-value:
    .. autoattribute:: version_added
       :no-value:

    Lexers included in Pygments may have additional attributes:

    .. autoattribute:: _example
       :no-value:

    You can pass options to the constructor.  The basic options recognized
    by all lexers and processed by the base `Lexer` class are:

    ``stripnl``
        Strip leading and trailing newlines from the input (default: True).
    ``stripall``
        Strip all leading and trailing whitespace from the input
        (default: False).
    ``ensurenl``
        Make sure that the input ends with a newline (default: True).  This
        is required for some lexers that consume input linewise.

        .. versionadded:: 1.3

    ``tabsize``
        If given and greater than 0, expand tabs in the input (default: 0).
    ``encoding``
        If given, must be an encoding name.  This encoding will be used to
        convert the input string to Unicode, if it is not already a Unicode
        string (default: ``'guess'``, which uses a simple UTF-8 / Locale /
        Latin1 detection.  Can also be ``'chardet'`` to use the chardet
        library, if it is installed.
    ``inencoding``
        Overrides the ``encoding`` if given.
    """

    #: Full name of the lexer, in human-readable form
    name = None

    #: A list of short, case-insensitive names for the lexer
    aliases = []

    #: A list of fnmatch patterns that match filenames which contain
    #: content for this lexer
    filenames = []

    #: A list of fnmatch patterns that match filenames which may or may not
    #: contain content for this lexer
    alias_filenames = []

    #: A list of MIME types for content that can be lexed with this lexer
    mimetypes = []

    #: Priority, should multiple lexers match and no content is provided
    priority = 0

    #: URL of the language specification or implementation
    url = None

    #: Version of Pygments in which the lexer was added
    version_added = None

    #: Example file name, relative to the ``tests/examplefiles`` directory
    _example = None

    def __init__(self, **options):
        """
        This constructor takes arbitrary options as keyword arguments.
        Every subclass must first process its own options and then call
        the `Lexer` constructor, since it processes the basic options
        like `stripnl`.

        An example looks like this:

        .. sourcecode:: python

           def __init__(self, **options):
               self.compress = options.get('compress', '')
               Lexer.__init__(self, **options)

        As these options must all be specifiable as strings (due to the
        command line usage), there are various utility functions
        available to help with that, see `Utilities`_.
        """
        self.options = options
        self.stripnl = get_bool_opt(options, 'stripnl', True)
        self.stripall = get_bool_opt(options, 'stripall', False)
        self.ensurenl = get_bool_opt(options, 'ensurenl', True)
        self.tabsize = get_int_opt(options, 'tabsize', 0)
        self.encoding = options.get('encoding', 'guess')
        self.encoding = options.get('inencoding') or self.encoding
        self.filters = []
        for filter_ in get_list_opt(options, 'filters', ()):
            self.add_filter(filter_)

    def __repr__(self):
        if self.options:
            return f'<pygments.lexers.{self.__class__.__name__} with {self.options!r}>'
        else:
            return f'<pygments.lexers.{self.__class__.__name__}>'

    def add_filter(self, filter_, **options):
        """
        Add a new stream filter to this lexer.
        """
        if not isinstance(filter_, Filter):
            filter_ = get_filter_by_name(filter_, **options)
        self.filters.append(filter_)

    def analyse_text(text):
        """
        A static method which is called for lexer guessing.

        It should analyse the text and return a float in the range
        from ``0.0`` to ``1.0``.  If it returns ``0.0``, the lexer
        will not be selected as the most probable one, if it returns
        ``1.0``, it will be selected immediately.  This is used by
        `guess_lexer`.

        The `LexerMeta` metaclass automatically wraps this function so
        that it works like a static method (no ``self`` or ``cls``
        parameter) and the return value is automatically converted to
        `float`. If the return value is an object that is boolean `False`
        it's the same as if the return values was ``0.0``.
        """

    def _preprocess_lexer_input(self, text):
        """Apply preprocessing such as decoding the input, removing BOM
        and normalizing newlines."""

        if not isinstance(text, str):
            if self.encoding == 'guess':
                text, _ = guess_decode(text)
            elif self.encoding == 'chardet':
                try:
                    from pip._vendor import chardet
                except ImportError as e:
                    raise ImportError('To enable chardet encoding guessing, '
                                      'please install the chardet library '
                                      'from http://chardet.feedparser.org/') from e
                # check for BOM first
                decoded = None
                for bom, encoding in _encoding_map:
                    if text.startswith(bom):
                        decoded = text[len(bom):].decode(encoding, 'replace')
                        break
                # no BOM found, so use chardet
                if decoded is None:
                    enc = chardet.detect(text[:1024])  # Guess using first 1KB
                    decoded = text.decode(enc.get('encoding') or 'utf-8',
                                          'replace')
                text = decoded
            else:
                text = text.decode(self.encoding)
                if text.startswith('\ufeff'):
                    text = text[len('\ufeff'):]
        else:
            if text.startswith('\ufeff'):
                text = text[len('\ufeff'):]

        # text now *is* a unicode string
        text = text.replace('\r\n', '\n')
        text = text.replace('\r', '\n')
        if self.stripall:
            text = text.strip()
        elif self.stripnl:
            text = text.strip('\n')
        if self.tabsize > 0:
            text = text.expandtabs(self.tabsize)
        if self.ensurenl and not text.endswith('\n'):
            text += '\n'

        return text

    def get_tokens(self, text, unfiltered=False):
        """
        This method is the basic interface of a lexer.  It is called by
        the `highlight()` function.  It must process the text and return an
        iterable of ``(tokentype, value)`` pairs from `text`.

        Normally, you don't need to override this method.  The default
        implementation processes the options recognized by all lexers
        (`stripnl`, `stripall` and so on), and then yields all tokens
        from `get_tokens_unprocessed()`, with the ``index`` dropped.

        If `unfiltered` is set to `True`, the filtering mechanism is
        bypassed even if filters are defined.
        """
        text = self._preprocess_lexer_input(text)

        def streamer():
            for _, t, v in self.get_tokens_unprocessed(text):
                yield t, v
        stream = streamer()
        if not unfiltered:
            stream = apply_filters(stream, self.filters, self)
        return stream

    def get_tokens_unprocessed(self, text):
        """
        This method should process the text and return an iterable of
        ``(index, tokentype, value)`` tuples where ``index`` is the starting
        position of the token within the input text.

        It must be overridden by subclasses.  It is recommended to
        implement it as a generator to maximize effectiveness.
        """
        raise NotImplementedError


class DelegatingLexer(Lexer):
    """
    This lexer takes two lexer as arguments.  A root lexer and
    a language lexer.  First everything is scanned using the language
    lexer, afterwards all ``Other`` tokens are lexed using the root
    lexer.

    The lexers from the ``template`` lexer package use this base lexer.
    """

    def __init__(self, _root_lexer, _language_lexer, _needle=Other, **options):
        self.root_lexer = _root_lexer(**options)
        self.language_lexer = _language_lexer(**options)
        self.needle = _needle
        Lexer.__init__(self, **options)

    def get_tokens_unprocessed(self, text):
        buffered = ''
        insertions = []
        lng_buffer = []
        for i, t, v in self.language_lexer.get_tokens_unprocessed(text):
            if t is self.needle:
                if lng_buffer:
                    insertions.append((len(buffered), lng_buffer))
                    lng_buffer = []
                buffered += v
            else:
                lng_buffer.append((i, t, v))
        if lng_buffer:
            insertions.append((len(buffered), lng_buffer))
        return do_insertions(insertions,
                             self.root_lexer.get_tokens_unprocessed(buffered))


# ------------------------------------------------------------------------------
# auxiliary objects: include, inherit and combined


class include(str):
    """
    Indicates that a state should include rules from another state.
    """
    pass


class _inherit:
    """
    Indicates that a state should inherit from its superclass.
    """
    def __repr__(self):
        return 'inherit'

inherit = _inherit()


class combined(tuple):
    """
    Indicates a state combined from multiple states.
    """

    def __new__(cls, *args):
        return tuple.__new__(cls, args)

    def __init__(self, *args):
        # tuple.__init__ doesn't do anything
        pass


class _PseudoMatch:
    """
    A pseudo match object constructed from a string.
    """

    def __init__(self, start, text):
        self._text = text
        self._start = start

    def start(self, arg=None):
        return self._start

    def end(self, arg=None):
        return self._start + len(self._text)

    def group(self, arg=None):
        if arg:
            raise IndexError('No such group')
        return self._text

    def groups(self):
        return (self._text,)

    def groupdict(self):
        return {}


def bygroups(*args):
    """
    Callback that yields multiple actions for each group in the match.
    """
    def callback(lexer, match, ctx=None):
        for i, action in enumerate(args):
            if action is None:
                continue
            elif type(action) is _TokenType:
                data = match.group(i + 1)
                if data:
                    yield match.start(i + 1), action, data
            else:
                data = match.group(i + 1)
                if data is not None:
                    if ctx:
                        ctx.pos = match.start(i + 1)
                    for item in action(lexer,
                                       _PseudoMatch(match.start(i + 1), data), ctx):
                        if item:
                            yield item
        if ctx:
            ctx.pos = match.end()
    return callback


class _This:
    """
    Special singleton used for indicating the caller class.
    Used by ``using``.
    """

this = _This()


def using(_other, **kwargs):
    """
    Callback that processes the match with a different lexer.

    The keyword arguments are forwarded to the lexer, except `state` which
    is handled separately.

    `state` specifies the state that the new lexer will start in, and can
    be an enumerable such as ('root', 'inline', 'string') or a simple
    string which is assumed to be on top of the root state.

    Note: For that to work, `_other` must not be an `ExtendedRegexLexer`.
    """
    gt_kwargs = {}
    if 'state' in kwargs:
        s = kwargs.pop('state')
        if isinstance(s, (list, tuple)):
            gt_kwargs['stack'] = s
        else:
            gt_kwargs['stack'] = ('root', s)

    if _other is this:
        def callback(lexer, match, ctx=None):
            # if keyword arguments are given the callback
            # function has to create a new lexer instance
            if kwargs:
                # XXX: cache that somehow
                kwargs.update(lexer.options)
                lx = lexer.__class__(**kwargs)
            else:
                lx = lexer
            s = match.start()
            for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs):
                yield i + s, t, v
            if ctx:
                ctx.pos = match.end()
    else:
        def callback(lexer, match, ctx=None):
            # XXX: cache that somehow
            kwargs.update(lexer.options)
            lx = _other(**kwargs)

            s = match.start()
            for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs):
                yield i + s, t, v
            if ctx:
                ctx.pos = match.end()
    return callback


class default:
    """
    Indicates a state or state action (e.g. #pop) to apply.
    For example default('#pop') is equivalent to ('', Token, '#pop')
    Note that state tuples may be used as well.

    .. versionadded:: 2.0
    """
    def __init__(self, state):
        self.state = state


class words(Future):
    """
    Indicates a list of literal words that is transformed into an optimized
    regex that matches any of the words.

    .. versionadded:: 2.0
    """
    def __init__(self, words, prefix='', suffix=''):
        self.words = words
        self.prefix = prefix
        self.suffix = suffix

    def get(self):
        return regex_opt(self.words, prefix=self.prefix, suffix=self.suffix)


class RegexLexerMeta(LexerMeta):
    """
    Metaclass for RegexLexer, creates the self._tokens attribute from
    self.tokens on the first instantiation.
    """

    def _process_regex(cls, regex, rflags, state):
        """Preprocess the regular expression component of a token definition."""
        if isinstance(regex, Future):
            regex = regex.get()
        return re.compile(regex, rflags).match

    def _process_token(cls, token):
        """Preprocess the token component of a token definition."""
        assert type(token) is _TokenType or callable(token), \
            f'token type must be simple type or callable, not {token!r}'
        return token

    def _process_new_state(cls, new_state, unprocessed, processed):
        """Preprocess the state transition action of a token definition."""
        if isinstance(new_state, str):
            # an existing state
            if new_state == '#pop':
                return -1
            elif new_state in unprocessed:
                return (new_state,)
            elif new_state == '#push':
                return new_state
            elif new_state[:5] == '#pop:':
                return -int(new_state[5:])
            else:
                assert False, f'unknown new state {new_state!r}'
        elif isinstance(new_state, combined):
            # combine a new state from existing ones
            tmp_state = '_tmp_%d' % cls._tmpname
            cls._tmpname += 1
            itokens = []
            for istate in new_state:
                assert istate != new_state, f'circular state ref {istate!r}'
                itokens.extend(cls._process_state(unprocessed,
                                                  processed, istate))
            processed[tmp_state] = itokens
            return (tmp_state,)
        elif isinstance(new_state, tuple):
            # push more than one state
            for istate in new_state:
                assert (istate in unprocessed or
                        istate in ('#pop', '#push')), \
                    'unknown new state ' + istate
            return new_state
        else:
            assert False, f'unknown new state def {new_state!r}'

    def _process_state(cls, unprocessed, processed, state):
        """Preprocess a single state definition."""
        assert isinstance(state, str), f"wrong state name {state!r}"
        assert state[0] != '#', f"invalid state name {state!r}"
        if state in processed:
            return processed[state]
        tokens = processed[state] = []
        rflags = cls.flags
        for tdef in unprocessed[state]:
            if isinstance(tdef, include):
                # it's a state reference
                assert tdef != state, f"circular state reference {state!r}"
                tokens.extend(cls._process_state(unprocessed, processed,
                                                 str(tdef)))
                continue
            if isinstance(tdef, _inherit):
                # should be processed already, but may not in the case of:
                # 1. the state has no counterpart in superclass
                # 2. the state includes more than one 'inherit'
                continue
            if isinstance(tdef, default):
                new_state = cls._process_new_state(tdef.state,
                                                   unprocessed, processed)
                tokens.append((re.compile('').match, None, new_state))
                continue

            assert type(tdef) is tuple, f"wrong rule def {tdef!r}"

            try:
                rex = cls._process_regex(tdef[0], rflags, state)
            except Exception as err:
                raise ValueError(f"uncompilable regex {tdef[0]!r} in state "
                                 f"{state!r} of {cls!r}: {err}") from err

            token = cls._process_token(tdef[1])

            if len(tdef) == 2:
                new_state = None
            else:
                new_state = cls._process_new_state(tdef[2],
                                                   unprocessed, processed)

            tokens.append((rex, token, new_state))
        return tokens

    def process_tokendef(cls, name, tokendefs=None):
        """Preprocess a dictionary of token definitions."""
        processed = cls._all_tokens[name] = {}
        tokendefs = tokendefs or cls.tokens[name]
        for state in list(tokendefs):
            cls._process_state(tokendefs, processed, state)
        return processed

    def get_tokendefs(cls):
        """
        Merge tokens from superclasses in MRO order, returning a single
        tokendef dictionary.

        Any state that is not defined by a subclass will be inherited
        automatically.  States that *are* defined by subclasses will, by
        default, override that state in the superclass.  If a subclass
        wishes to inherit definitions from a superclass, it can use the
        special value "inherit", which will cause the superclass' state
        definition to be included at that point in the state.
        """
        tokens = {}
        inheritable = {}
        for c in cls.__mro__:
            toks = c.__dict__.get('tokens', {})

            for state, items in toks.items():
                curitems = tokens.get(state)
                if curitems is None:
                    # N.b. because this is assigned by reference, sufficiently
                    # deep hierarchies are processed incrementally (e.g. for
                    # A(B), B(C), C(RegexLexer), B will be premodified so X(B)
                    # will not see any inherits in B).
                    tokens[state] = items
                    try:
                        inherit_ndx = items.index(inherit)
                    except ValueError:
                        continue
                    inheritable[state] = inherit_ndx
                    continue

                inherit_ndx = inheritable.pop(state, None)
                if inherit_ndx is None:
                    continue

                # Replace the "inherit" value with the items
                curitems[inherit_ndx:inherit_ndx+1] = items
                try:
                    # N.b. this is the index in items (that is, the superclass
                    # copy), so offset required when storing below.
                    new_inh_ndx = items.index(inherit)
                except ValueError:
                    pass
                else:
                    inheritable[state] = inherit_ndx + new_inh_ndx

        return tokens

    def __call__(cls, *args, **kwds):
        """Instantiate cls after preprocessing its token definitions."""
        if '_tokens' not in cls.__dict__:
            cls._all_tokens = {}
            cls._tmpname = 0
            if hasattr(cls, 'token_variants') and cls.token_variants:
                # don't process yet
                pass
            else:
                cls._tokens = cls.process_tokendef('', cls.get_tokendefs())

        return type.__call__(cls, *args, **kwds)


class RegexLexer(Lexer, metaclass=RegexLexerMeta):
    """
    Base for simple stateful regular expression-based lexers.
    Simplifies the lexing process so that you need only
    provide a list of states and regular expressions.
    """

    #: Flags for compiling the regular expressions.
    #: Defaults to MULTILINE.
    flags = re.MULTILINE

    #: At all time there is a stack of states.  Initially, the stack contains
    #: a single state 'root'.  The top of the stack is called "the current
    #: state".
    #:
    #: Dict of ``{'state': [(regex, tokentype, new_state), ...], ...}``
    tokens = {}

    def get_tokens_unprocessed(self, text, stack=('root',)):
        """
        Split ``text`` into (tokentype, text) pairs.

        ``stack`` is the initial stack (default: ``['root']``)
        """
        pos = 0
        tokendefs = self._tokens
        statestack = list(stack)
        statetokens = tokendefs[statestack[-1]]
        while 1:
            for rexmatch, action, new_state in statetokens:
                m = rexmatch(text, pos)
                if m:
                    if action is not None:
                        if type(action) is _TokenType:
                            yield pos, action, m.group()
                        else:
                            yield from action(self, m)
                    pos = m.end()
                    if new_state is not None:
                        # state transition
                        if isinstance(new_state, tuple):
                            for state in new_state:
                                if state == '#pop':
                                    if len(statestack) > 1:
                                        statestack.pop()
                                elif state == '#push':
                                    statestack.append(statestack[-1])
                                else:
                                    statestack.append(state)
                        elif isinstance(new_state, int):
                            # pop, but keep at least one state on the stack
                            # (random code leading to unexpected pops should
                            # not allow exceptions)
                            if abs(new_state) >= len(statestack):
                                del statestack[1:]
                            else:
                                del statestack[new_state:]
                        elif new_state == '#push':
                            statestack.append(statestack[-1])
                        else:
                            assert False, f"wrong state def: {new_state!r}"
                        statetokens = tokendefs[statestack[-1]]
                    break
            else:
                # We are here only if all state tokens do not match.
                try:
                    if text[pos] == '\n':
                        # at EOL, reset state to "root"
                        statestack = ['root']
                        statetokens = tokendefs['root']
                        yield pos, Whitespace, '\n'
                        pos += 1
                        continue
                    yield pos, Error, text[pos]
                    pos += 1
                except IndexError:
                    break


class LexerContext:
    """
    A helper object that holds lexer position data.
    """

    def __init__(self, text, pos, stack=None, end=None):
        self.text = text
        self.pos = pos
        self.end = end or len(text)  # end=0 not supported ;-)
        self.stack = stack or ['root']

    def __repr__(self):
        return f'LexerContext({self.text!r}, {self.pos!r}, {self.stack!r})'


class ExtendedRegexLexer(RegexLexer):
    """
    A RegexLexer that uses a context object to store its state.
    """

    def get_tokens_unprocessed(self, text=None, context=None):
        """
        Split ``text`` into (tokentype, text) pairs.
        If ``context`` is given, use this lexer context instead.
        """
        tokendefs = self._tokens
        if not context:
            ctx = LexerContext(text, 0)
            statetokens = tokendefs['root']
        else:
            ctx = context
            statetokens = tokendefs[ctx.stack[-1]]
            text = ctx.text
        while 1:
            for rexmatch, action, new_state in statetokens:
                m = rexmatch(text, ctx.pos, ctx.end)
                if m:
                    if action is not None:
                        if type(action) is _TokenType:
                            yield ctx.pos, action, m.group()
                            ctx.pos = m.end()
                        else:
                            yield from action(self, m, ctx)
                            if not new_state:
                                # altered the state stack?
                                statetokens = tokendefs[ctx.stack[-1]]
                    # CAUTION: callback must set ctx.pos!
                    if new_state is not None:
                        # state transition
                        if isinstance(new_state, tuple):
                            for state in new_state:
                                if state == '#pop':
                                    if len(ctx.stack) > 1:
                                        ctx.stack.pop()
                                elif state == '#push':
                                    ctx.stack.append(ctx.stack[-1])
                                else:
                                    ctx.stack.append(state)
                        elif isinstance(new_state, int):
                            # see RegexLexer above
                            if abs(new_state) >= len(ctx.stack):
                                del ctx.stack[1:]
                            else:
                                del ctx.stack[new_state:]
                        elif new_state == '#push':
                            ctx.stack.append(ctx.stack[-1])
                        else:
                            assert False, f"wrong state def: {new_state!r}"
                        statetokens = tokendefs[ctx.stack[-1]]
                    break
            else:
                try:
                    if ctx.pos >= ctx.end:
                        break
                    if text[ctx.pos] == '\n':
                        # at EOL, reset state to "root"
                        ctx.stack = ['root']
                        statetokens = tokendefs['root']
                        yield ctx.pos, Text, '\n'
                        ctx.pos += 1
                        continue
                    yield ctx.pos, Error, text[ctx.pos]
                    ctx.pos += 1
                except IndexError:
                    break


def do_insertions(insertions, tokens):
    """
    Helper for lexers which must combine the results of several
    sublexers.

    ``insertions`` is a list of ``(index, itokens)`` pairs.
    Each ``itokens`` iterable should be inserted at position
    ``index`` into the token stream given by the ``tokens``
    argument.

    The result is a combined token stream.

    TODO: clean up the code here.
    """
    insertions = iter(insertions)
    try:
        index, itokens = next(insertions)
    except StopIteration:
        # no insertions
        yield from tokens
        return

    realpos = None
    insleft = True

    # iterate over the token stream where we want to insert
    # the tokens from the insertion list.
    for i, t, v in tokens:
        # first iteration. store the position of first item
        if realpos is None:
            realpos = i
        oldi = 0
        while insleft and i + len(v) >= index:
            tmpval = v[oldi:index - i]
            if tmpval:
                yield realpos, t, tmpval
                realpos += len(tmpval)
            for it_index, it_token, it_value in itokens:
                yield realpos, it_token, it_value
                realpos += len(it_value)
            oldi = index - i
            try:
                index, itokens = next(insertions)
            except StopIteration:
                insleft = False
                break  # not strictly necessary
        if oldi < len(v):
            yield realpos, t, v[oldi:]
            realpos += len(v) - oldi

    # leftover tokens
    while insleft:
        # no normal tokens, set realpos to zero
        realpos = realpos or 0
        for p, t, v in itokens:
            yield realpos, t, v
            realpos += len(v)
        try:
            index, itokens = next(insertions)
        except StopIteration:
            insleft = False
            break  # not strictly necessary


class ProfilingRegexLexerMeta(RegexLexerMeta):
    """Metaclass for ProfilingRegexLexer, collects regex timing info."""

    def _process_regex(cls, regex, rflags, state):
        if isinstance(regex, words):
            rex = regex_opt(regex.words, prefix=regex.prefix,
                            suffix=regex.suffix)
        else:
            rex = regex
        compiled = re.compile(rex, rflags)

        def match_func(text, pos, endpos=sys.maxsize):
            info = cls._prof_data[-1].setdefault((state, rex), [0, 0.0])
            t0 = time.time()
            res = compiled.match(text, pos, endpos)
            t1 = time.time()
            info[0] += 1
            info[1] += t1 - t0
            return res
        return match_func


class ProfilingRegexLexer(RegexLexer, metaclass=ProfilingRegexLexerMeta):
    """Drop-in replacement for RegexLexer that does profiling of its regexes."""

    _prof_data = []
    _prof_sort_index = 4  # defaults to time per call

    def get_tokens_unprocessed(self, text, stack=('root',)):
        # this needs to be a stack, since using(this) will produce nested calls
        self.__class__._prof_data.append({})
        yield from RegexLexer.get_tokens_unprocessed(self, text, stack)
        rawdata = self.__class__._prof_data.pop()
        data = sorted(((s, repr(r).strip('u').strip("'")[1:-1][:65],
                        n, 1000 * t, 1000 * t / n)
                       for ((s, r), (n, t)) in rawdata.items()),
                      key=lambda x: x[self._prof_sort_index],
                      reverse=True)
        sum_total = sum(x[3] for x in data)

        print()
        print('Profiling result for %s lexing %d chars in %.3f ms' %
              (self.__class__.__name__, len(text), sum_total))
        print('=' * 110)
        print('%-20s %-64s ncalls  tottime  percall' % ('state', 'regex'))
        print('-' * 110)
        for d in data:
            print('%-20s %-65s %5d %8.4f %8.4f' % d)
        print('=' * 110)
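To make the state-machine semantics of `RegexLexer.get_tokens_unprocessed` concrete, here is a minimal standalone sketch of the same loop, independent of Pygments: each state maps to a list of `(regex, token, new_state)` rules, a plain state name pushes onto the stack, and `'#pop'` pops it. The token names are plain strings standing in for Pygments token types, and the rule table is an invented toy grammar, not part of this module.

```python
import re

# Toy rule table mirroring the RegexLexer `tokens` layout: state -> list of
# (compiled regex, token name, optional new state).
TOKENS = {
    'root': [
        (re.compile(r'\s+'), 'Whitespace', None),
        (re.compile(r'"'), 'String', 'string'),   # push the 'string' state
        (re.compile(r'\d+'), 'Number', None),
        (re.compile(r'\w+'), 'Name', None),
    ],
    'string': [
        (re.compile(r'"'), 'String', '#pop'),     # pop back to 'root'
        (re.compile(r'[^"]+'), 'String', None),
    ],
}


def tokenize(text):
    """Yield (index, token, value) like get_tokens_unprocessed does."""
    pos = 0
    stack = ['root']
    while pos < len(text):
        for rex, token, new_state in TOKENS[stack[-1]]:
            m = rex.match(text, pos)
            if m:
                yield pos, token, m.group()
                pos = m.end()
                if new_state == '#pop':
                    if len(stack) > 1:       # never pop the root state
                        stack.pop()
                elif new_state is not None:
                    stack.append(new_state)
                break
        else:
            # no rule matched: emit an Error token, as RegexLexer does
            yield pos, 'Error', text[pos]
            pos += 1


print(list(tokenize('say "hi" 42')))
# → [(0, 'Name', 'say'), (3, 'Whitespace', ' '), (4, 'String', '"'),
#    (5, 'String', 'hi'), (7, 'String', '"'), (8, 'Whitespace', ' '),
#    (9, 'Number', '42')]
```

The real class adds what the sketch omits: compiled rule inheritance via the metaclass, `#push`/tuple transitions, multi-level `#pop:n`, callback actions such as `bygroups`, and the EOL reset to `'root'`.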