"""Tokenization help for Python programs.

generate_tokens(readline) is a generator that breaks a stream of
text into Python tokens.  It accepts a readline-like method which
is called repeatedly to get the next line of input (or "" for EOF).
It generates 5-tuples with these members:

    the token type (see token.py)
    the token (a string)
    the starting (row, column) indices of the token (a 2-tuple of ints)
    the ending (row, column) indices of the token (a 2-tuple of ints)
    the original line (string)

It is designed to match the working of the Python tokenizer exactly,
except that it produces COMMENT tokens for comments and gives type OP
for all operators.

Older entry points
    tokenize_loop(readline, tokeneater)
    tokenize(readline, tokeneater=printtoken)
are the same, except instead of generating tokens, tokeneater is a
callback function to which the 5 fields described above are passed as
5 arguments, each time a new token is found."""

__author__ = 'Ka-Ping Yee <ping@lfw.org>'
__credits__ = \
    'GvR, ESR, Tim Peters, Thomas Wouters, Fred Drake, Skip Montanaro'

import string, re
from codecs import BOM_UTF8, lookup
from lib2to3.pgen2.token import *

from . import token
__all__ = [x for x in dir(token) if x[0] != '_'] + ["tokenize",
           "generate_tokens", "untokenize"]
del token

try:
    bytes
except NameError:
    # Support bytes type in Python <= 2.5, so 2to3 turns itself into
    # valid Python 3 code.
    bytes = str

def group(*choices): return '(' + '|'.join(choices) + ')'
def any(*choices): return group(*choices) + '*'
def maybe(*choices): return group(*choices) + '?'

Whitespace = r'[ \f\t]*'
Comment = r'#[^\r\n]*'
Ignore = Whitespace + any(r'\\\r?\n' + Whitespace) + maybe(Comment)
Name = r'[a-zA-Z_]\w*'

Binnumber = r'0[bB][01]*'
Hexnumber = r'0[xX][\da-fA-F]*[lL]?'
Octnumber = r'0[oO]?[0-7]*[lL]?'
Decnumber = r'[1-9]\d*[lL]?'
Intnumber = group(Binnumber, Hexnumber, Octnumber, Decnumber)
Exponent = r'[eE][-+]?\d+'
Pointfloat = group(r'\d+\.\d*', r'\.\d+') + maybe(Exponent)
Expfloat = r'\d+' + Exponent
Floatnumber = group(Pointfloat, Expfloat)
Imagnumber = group(r'\d+[jJ]', Floatnumber + r'[jJ]')
Number = group(Imagnumber, Floatnumber, Intnumber)

# Tail end of ' string.
Single = r"[^'\\]*(?:\\.[^'\\]*)*'"
# Tail end of " string.
Double = r'[^"\\]*(?:\\.[^"\\]*)*"'
# Tail end of ''' string.
Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
# Tail end of """ string.
Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'
Triple = group("[ubUB]?[rR]?'''", '[ubUB]?[rR]?"""')
# Single-line ' or " string.
String = group(r"[uU]?[rR]?'[^\n'\\]*(?:\\.[^\n'\\]*)*'",
               r'[uU]?[rR]?"[^\n"\\]*(?:\\.[^\n"\\]*)*"')

# Because of leftmost-then-longest match semantics, be sure to put the
# longest operators first (e.g., if = came before ==, == would get
# recognized as two instances of =).
Operator = group(r"\*\*=?", r">>=?", r"<<=?", r"<>", r"!=",
                 r"//=?", r"->",
                 r"[+\-*/%&@|^=<>]=?",
                 r"~")

Bracket = '[][(){}]'
Special = group(r'\r?\n', r'[:;.,`@]')
Funny = group(Operator, Bracket, Special)

PlainToken = group(Number, Funny, String, Name)
Token = Ignore + PlainToken

# First (or only) line of ' or " string.
ContStr = group(r"[uUbB]?[rR]?'[^\n'\\]*(?:\\.[^\n'\\]*)*" +
                group("'", r'\\\r?\n'),
                r'[uUbB]?[rR]?"[^\n"\\]*(?:\\.[^\n"\\]*)*' +
                group('"', r'\\\r?\n'))
PseudoExtras = group(r'\\\r?\n', Comment, Triple)
PseudoToken = Whitespace + group(PseudoExtras, Number, Funny, ContStr, Name)

tokenprog, pseudoprog, single3prog, double3prog = map(
    re.compile, (Token, PseudoToken, Single3, Double3))
endprogs = {"'": re.compile(Single), '"': re.compile(Double),
            "'''": single3prog, '"""': double3prog,
            "r'''": single3prog, 'r"""': double3prog,
            "u'''": single3prog, 'u"""': double3prog,
            "b'''": single3prog, 'b"""': double3prog,
            "ur'''": single3prog, 'ur"""': double3prog,
            "br'''": single3prog, 'br"""': double3prog,
            "R'''": single3prog, 'R"""': double3prog,
            "U'''": single3prog, 'U"""': double3prog,
            "B'''": single3prog, 'B"""': double3prog,
            "uR'''": single3prog, 'uR"""': double3prog,
            "Ur'''": single3prog, 'Ur"""': double3prog,
            "UR'''": single3prog, 'UR"""': double3prog,
            "bR'''": single3prog, 'bR"""': double3prog,
            "Br'''": single3prog, 'Br"""': double3prog,
            "BR'''": single3prog, 'BR"""': double3prog,
            'r': None, 'R': None,
            'u': None, 'U': None,
            'b': None, 'B': None}

triple_quoted = {}
for t in ("'''", '"""',
          "r'''", 'r"""', "R'''", 'R"""',
          "u'''", 'u"""', "U'''", 'U"""',
          "b'''", 'b"""', "B'''", 'B"""',
          "ur'''", 'ur"""', "Ur'''", 'Ur"""',
          "uR'''", 'uR"""', "UR'''", 'UR"""',
          "br'''", 'br"""', "Br'''", 'Br"""',
          "bR'''", 'bR"""', "BR'''", 'BR"""'):
    triple_quoted[t] = t
single_quoted = {}
for t in ("'", '"',
          "r'", 'r"', "R'", 'R"',
          "u'", 'u"', "U'", 'U"',
          "b'", 'b"', "B'", 'B"',
          "ur'", 'ur"', "Ur'", 'Ur"',
          "uR'", 'uR"', "UR'", 'UR"',
          "br'", 'br"', "Br'", 'Br"',
          "bR'", 'bR"', "BR'", 'BR"'):
    single_quoted[t] = t

tabsize = 8

class TokenError(Exception): pass

class StopTokenizing(Exception): pass

def printtoken(type, token, start, end, line): # for testing
    (srow, scol) = start
    (erow, ecol) = end
    print "%d,%d-%d,%d:\t%s\t%s" % \
        (srow, scol, erow, ecol, tok_name[type], repr(token))

def tokenize(readline, tokeneater=printtoken):
    """
    The tokenize() function accepts two parameters: one representing the
    input stream, and one providing an output mechanism for tokenize().

    The first parameter, readline, must be a callable object which provides
    the same interface as the readline() method of built-in file objects.
    Each call to the function should return one line of input as a string.

    The second parameter, tokeneater, must also be a callable object.  It is
    called once for each token, with five arguments, corresponding to the
    tuples generated by generate_tokens().
    """
    try:
        tokenize_loop(readline, tokeneater)
    except StopTokenizing:
        pass

# backwards compatible interface
def tokenize_loop(readline, tokeneater):
    for token_info in generate_tokens(readline):
        tokeneater(*token_info)
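# Example (an illustrative sketch, not part of the original module): driving
# the callback-style tokenize() entry point above with a readline-like
# callable.  The helper name and the sample source are made up for the demo.
def _example_tokenize_callback():
    from StringIO import StringIO
    pairs = []
    def eater(type, token, start, end, line):
        # Each callback invocation receives the same five fields that
        # generate_tokens() yields as a tuple.
        pairs.append((tok_name[type], token))
    tokenize(StringIO("x = 1 + 2\n").readline, eater)
    # pairs now starts with [('NAME', 'x'), ('OP', '='), ('NUMBER', '1'), ...]
    return pairs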
class Untokenizer:

    def __init__(self):
        self.tokens = []
        self.prev_row = 1
        self.prev_col = 0

    def add_whitespace(self, start):
        row, col = start
        col_offset = col - self.prev_col
        if col_offset:
            self.tokens.append(" " * col_offset)

    def untokenize(self, iterable):
        for t in iterable:
            if len(t) == 2:
                self.compat(t, iterable)
                break
            tok_type, token, start, end, line = t
            self.add_whitespace(start)
            self.tokens.append(token)
            self.prev_row, self.prev_col = end
            if tok_type in (NEWLINE, NL):
                self.prev_row += 1
                self.prev_col = 0
        return "".join(self.tokens)

    def compat(self, token, iterable):
        startline = False
        indents = []
        toks_append = self.tokens.append
        toknum, tokval = token
        if toknum in (NAME, NUMBER):
            tokval += ' '
        if toknum in (NEWLINE, NL):
            startline = True
        for tok in iterable:
            toknum, tokval = tok[:2]

            if toknum in (NAME, NUMBER):
                tokval += ' '

            if toknum == INDENT:
                indents.append(tokval)
                continue
            elif toknum == DEDENT:
                indents.pop()
                continue
            elif toknum in (NEWLINE, NL):
                startline = True
            elif startline and indents:
                toks_append(indents[-1])
                startline = False
            toks_append(tokval)

cookie_re = re.compile(r'^[ \t\f]*#.*?coding[:=][ \t]*([-\w.]+)')
blank_re = re.compile(r'^[ \t\f]*(?:[#\r\n]|$)')

def _get_normal_name(orig_enc):
    """Imitates get_normal_name in tokenizer.c."""
    # Only care about the first 12 characters.
    enc = orig_enc[:12].lower().replace("_", "-")
    if enc == "utf-8" or enc.startswith("utf-8-"):
        return "utf-8"
    if enc in ("latin-1", "iso-8859-1", "iso-latin-1") or \
       enc.startswith(("latin-1-", "iso-8859-1-", "iso-latin-1-")):
        return "iso-8859-1"
    return orig_enc

def detect_encoding(readline):
    """
    The detect_encoding() function is used to detect the encoding that
    should be used to decode a Python source file.  It requires one
    argument, readline, in the same way as the tokenize() generator.

    It will call readline a maximum of twice, and return the encoding used
    (as a string) and a list of any lines (left as bytes) it has read in.

    It detects the encoding from the presence of a utf-8 bom or an encoding
    cookie as specified in pep-0263.  If both a bom and a cookie are present,
    but disagree, a SyntaxError will be raised.  If the encoding cookie is
    an invalid charset, raise a SyntaxError.  Note that if a utf-8 bom is
    found, 'utf-8-sig' is returned.

    If no encoding is specified, then the default of 'utf-8' will be
    returned.
    """
    bom_found = False
    encoding = None
    default = 'utf-8'

    def read_or_stop():
        try:
            return readline()
        except StopIteration:
            return bytes()

    def find_cookie(line):
        try:
            line_string = line.decode('ascii')
        except UnicodeDecodeError:
            return None
        match = cookie_re.match(line_string)
        if not match:
            return None
        encoding = _get_normal_name(match.group(1))
        try:
            codec = lookup(encoding)
        except LookupError:
            # This behaviour mimics the Python interpreter
            raise SyntaxError("unknown encoding: " + encoding)

        if bom_found:
            if codec.name != 'utf-8':
                # This behaviour mimics the Python interpreter
                raise SyntaxError('encoding problem: utf-8')
            encoding += '-sig'
        return encoding

    first = read_or_stop()
    if first.startswith(BOM_UTF8):
        bom_found = True
        first = first[3:]
        default = 'utf-8-sig'
    if not first:
        return default, []

    encoding = find_cookie(first)
    if encoding:
        return encoding, [first]
    if not blank_re.match(first):
        return default, [first]

    second = read_or_stop()
    if not second:
        return default, [first]

    encoding = find_cookie(second)
    if encoding:
        return encoding, [first, second]

    return default, [first, second]
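# Example (an illustrative sketch, not part of the original module): a
# PEP 263 coding cookie on the first line is picked up by detect_encoding()
# above.  The helper name and sample source are made up for the demo.
def _example_detect_encoding():
    from StringIO import StringIO
    buf = StringIO("# -*- coding: iso-8859-1 -*-\nx = 1\n")
    encoding, lines = detect_encoding(buf.readline)
    # Only the lines actually consumed while searching are returned.
    assert encoding == "iso-8859-1"
    assert lines == ["# -*- coding: iso-8859-1 -*-\n"]
    return encoding, lines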
def untokenize(iterable):
    """Transform tokens back into Python source code.

    Each element returned by the iterable must be a token sequence
    with at least two elements, a token number and token value.  If
    only two tokens are passed, the resulting output is poor.

    Round-trip invariant for full input:
        Untokenized source will match input source exactly

    Round-trip invariant for limited input:
        # Output text will tokenize back to the input
        t1 = [tok[:2] for tok in generate_tokens(f.readline)]
        newcode = untokenize(t1)
        readline = iter(newcode.splitlines(1)).next
        t2 = [tok[:2] for tok in generate_tokens(readline)]
        assert t1 == t2
    """
    ut = Untokenizer()
    return ut.untokenize(iterable)

def generate_tokens(readline):
    """
    The generate_tokens() generator requires one argument, readline, which
    must be a callable object which provides the same interface as the
    readline() method of built-in file objects.  Each call to the function
    should return one line of input as a string.  Alternately, readline
    can be a callable function terminating with StopIteration:
        readline = open(myfile).next    # Example of alternate readline

    The generator produces 5-tuples with these members: the token type; the
    token string; a 2-tuple (srow, scol) of ints specifying the row and
    column where the token begins in the source; a 2-tuple (erow, ecol) of
    ints specifying the row and column where the token ends in the source;
    and the line on which the token was found.  The line passed is the
    logical line; continuation lines are included.
    """
    lnum = parenlev = continued = 0
    namechars, numchars = string.ascii_letters + '_', '0123456789'
    contstr, needcont = '', 0
    contline = None
    indents = [0]

    while 1:                                   # loop over lines in stream
        try:
            line = readline()
        except StopIteration:
            line = ''
        lnum = lnum + 1
        pos, max = 0, len(line)

        if contstr:                            # continued string
            if not line:
                raise TokenError("EOF in multi-line string", strstart)
            endmatch = endprog.match(line)
            if endmatch:
                pos = end = endmatch.end(0)
                yield (STRING, contstr + line[:end],
                       strstart, (lnum, end), contline + line)
                contstr, needcont = '', 0
                contline = None
            elif needcont and line[-2:] != '\\\n' and line[-3:] != '\\\r\n':
                yield (ERRORTOKEN, contstr + line,
                       strstart, (lnum, len(line)), contline)
                contstr = ''
                contline = None
                continue
            else:
                contstr = contstr + line
                contline = contline + line
                continue

        elif parenlev == 0 and not continued:  # new statement
            if not line: break
            column = 0
            while pos < max:                   # measure leading whitespace
                if line[pos] == ' ': column = column + 1
                elif line[pos] == '\t': column = (column//tabsize + 1)*tabsize
                elif line[pos] == '\f': column = 0
                else: break
                pos = pos + 1
            if pos == max: break

            if line[pos] in '#\r\n':           # skip comments or blank lines
                if line[pos] == '#':
                    comment_token = line[pos:].rstrip('\r\n')
                    nl_pos = pos + len(comment_token)
                    yield (COMMENT, comment_token,
                           (lnum, pos), (lnum, pos + len(comment_token)), line)
                    yield (NL, line[nl_pos:],
                           (lnum, nl_pos), (lnum, len(line)), line)
                else:
                    yield ((NL, COMMENT)[line[pos] == '#'], line[pos:],
                           (lnum, pos), (lnum, len(line)), line)
                continue

            if column > indents[-1]:           # count indents or dedents
                indents.append(column)
                yield (INDENT, line[:pos], (lnum, 0), (lnum, pos), line)
            while column < indents[-1]:
                if column not in indents:
                    raise IndentationError(
                        "unindent does not match any outer indentation level",
                        ("<tokenize>", lnum, pos, line))
                indents = indents[:-1]
                yield (DEDENT, '', (lnum, pos), (lnum, pos), line)

        else:                                  # continued statement
            if not line:
                raise TokenError("EOF in multi-line statement", (lnum, 0))
            continued = 0

        while pos < max:
            pseudomatch = pseudoprog.match(line, pos)
            if pseudomatch:                                # scan for tokens
                start, end = pseudomatch.span(1)
                spos, epos, pos = (lnum, start), (lnum, end), end
                token, initial = line[start:end], line[start]

                if initial in numchars or \
                   (initial == '.' and token != '.'):      # ordinary number
                    yield (NUMBER, token, spos, epos, line)
                elif initial in '\r\n':
                    newline = NEWLINE
                    if parenlev > 0:
                        newline = NL
                    yield (newline, token, spos, epos, line)
                elif initial == '#':
                    yield (COMMENT, token, spos, epos, line)
                elif token in triple_quoted:
                    endprog = endprogs[token]
                    endmatch = endprog.match(line, pos)
                    if endmatch:                           # all on one line
                        pos = endmatch.end(0)
                        token = line[start:pos]
                        yield (STRING, token, spos, (lnum, pos), line)
                    else:
                        strstart = (lnum, start)           # multiple lines
                        contstr = line[start:]
                        contline = line
                        break
                elif initial in single_quoted or \
                     token[:2] in single_quoted or \
                     token[:3] in single_quoted:
                    if token[-1] == '\n':                  # continued string
                        strstart = (lnum, start)
                        endprog = (endprogs[initial] or endprogs[token[1]] or
                                   endprogs[token[2]])
                        contstr, needcont = line[start:], 1
                        contline = line
                        break
                    else:                                  # ordinary string
                        yield (STRING, token, spos, epos, line)
                elif initial in namechars:                 # ordinary name
                    yield (NAME, token, spos, epos, line)
                elif initial == '\\':                      # continued stmt
                    continued = 1
                else:
                    if initial in '([{': parenlev = parenlev + 1
                    elif initial in ')]}': parenlev = parenlev - 1
                    yield (OP, token, spos, epos, line)
            else:
                yield (ERRORTOKEN, line[pos],
                       (lnum, pos), (lnum, pos + 1), line)
                pos = pos + 1

    for indent in indents[1:]:                 # pop remaining indent levels
        yield (DEDENT, '', (lnum, 0), (lnum, 0), '')
    yield (ENDMARKER, '', (lnum, 0), (lnum, 0), '')

if __name__ == '__main__':                     # testing
    import sys
    if len(sys.argv) > 1:
        tokenize(open(sys.argv[1]).readline)
    else:
        tokenize(sys.stdin.readline)
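# Example (an illustrative sketch, not part of the original module): the
# full-input round-trip invariant documented in untokenize() above.  With
# complete 5-tuples, Untokenizer.add_whitespace() restores the original
# spacing exactly.  The helper name and sample source are made up.
def _example_untokenize_roundtrip():
    from StringIO import StringIO
    src = "if x:\n    y = 1\n"
    toks = list(generate_tokens(StringIO(src).readline))
    # Full 5-tuples reproduce the source character for character.
    assert untokenize(toks) == src
    return toks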