[Compiled CPython 3.11 bytecode of pylint's `checkers/similar.py`; the binary marshal stream has been removed. Source path embedded in the file: `/builddir/build/BUILD/cloudlinux-venv-1.0.7/venv/lib/python3.11/site-packages/pylint/checkers/similar.py`. Only the legible embedded text is kept below.]

Module docstring:

A similarities / code duplication command line tool and pylint checker.

The algorithm is based on comparing the hash value of n successive lines of a file. First the files are read and any line that doesn't fulfill the requirements is removed (comments, docstrings...). Those stripped lines are stored in the LineSet class, which gives access to them. Then each index of the stripped lines collection is associated with the hash of n successive entries of the stripped lines starting at the current index (n is the minimum common lines option).

The common hashes between both linesets are then looked for. If there are matches, then the match indices in both linesets are stored and associated with the corresponding couples (start line number / end line number) in both files.

This association is then post-processed to handle the case of successive matches. For example, if the minimum common lines setting is set to four, then the hashes are computed with four lines. If one of the match index couples (12, 34) is the successor of another one (11, 33), then it means that there are in fact five lines which are common.

Once post-processed, the values of the association table are the result looked for, i.e. start and end line numbers of common lines in both files.

Other structure recoverable from the embedded strings:

- Classes: LineSpecifs, LinesChunk ("computes and stores the hash of some consecutive stripped lines of a lineset"), SuccessiveLinesLimits ("handles the numbering of begin and end of successive lines; only the end line number can be updated"), CplSuccessiveLinesLimits, LineSetStartCouple ("indices in both linesets that mark the beginning of successive lines"), Commonality, Similar ("finds copy-pasted lines of code in a project"), LineSet ("holds and indexes all the lines of a single source file"), SimilarChecker ("checks for similarities and duplicated code; this computation may be memory / CPU intensive, so you should disable it if you experience some problems").
- Functions: stripped_lines, hash_lineset, remove_successive, filter_noncode_lines, report_similarities, register, usage, Run.
- Message: R0801 "duplicate-code" — "Similar lines in %s files"; indicates that a set of similar lines has been detected among multiple files, which usually means that the code should be refactored to avoid this duplication.
- Checker options: min-similarity-lines, ignore-comments, ignore-docstrings, ignore-imports, ignore-signatures.
- Standalone command line tool: "finds copy pasted blocks in a set of files" — Usage: symilar [-d|--duplicates min_duplicated_lines] [-i|--ignore-comments] [--ignore-docstrings] [--ignore-imports] [--ignore-signatures] file1...
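The n-successive-lines hashing scheme described in the module docstring can be sketched in a few lines. This is a minimal illustration, not pylint's actual implementation: the helper names (`strip`, `hash_chunks`, `common_chunks`) are made up, the stripping here only drops blanks and `#` comments (pylint can additionally ignore docstrings, imports, and signatures), and the post-processing that merges successive matches into larger blocks is omitted.

```python
from collections import defaultdict

# Assumed default chunk size; pylint exposes this as --duplicates /
# min-similarity-lines.
MIN_COMMON_LINES = 4


def strip(lines):
    """Drop blank lines and '#' comments, keeping (original_lineno, text)."""
    out = []
    for lineno, line in enumerate(lines, start=1):
        text = line.strip()
        if text and not text.startswith("#"):
            out.append((lineno, text))
    return out


def hash_chunks(stripped, n=MIN_COMMON_LINES):
    """Map hash of n successive stripped lines -> starting indices."""
    table = defaultdict(list)
    for i in range(len(stripped) - n + 1):
        chunk = tuple(text for _, text in stripped[i : i + n])
        table[hash(chunk)].append(i)
    return table


def common_chunks(lines_a, lines_b, n=MIN_COMMON_LINES):
    """Yield (start_line_in_a, start_line_in_b) per matching n-line chunk."""
    sa, sb = strip(lines_a), strip(lines_b)
    ha, hb = hash_chunks(sa, n), hash_chunks(sb, n)
    # Intersect the hash tables, then report the original line numbers.
    for h in ha.keys() & hb.keys():
        for i in ha[h]:
            for j in hb[h]:
                yield sa[i][0], sb[j][0]


if __name__ == "__main__":
    file_a = ["a = 1\n", "b = 2\n", "# comment\n", "c = 3\n", "d = 4\n", "e = 5\n"]
    file_b = ["x = 0\n", "a = 1\n", "b = 2\n", "c = 3\n", "d = 4\n", "y = 9\n"]
    print(list(common_chunks(file_a, file_b)))  # -> [(1, 2)]
```

Note how the comment line in `file_a` is stripped before hashing, so the four shared statements still match even though a comment interrupts them; the real checker applies the same idea, then merges overlapping successive matches into a single larger common block.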