from io import BytesIO

import mmap
import os
import sys
import zlib

from gitdb.fun import (
    msb_size,
    stream_copy,
    apply_delta_data,
    connect_deltas,
    delta_types,
)

from gitdb.util import (
    allocate_memory,
    LazyMixin,
    make_sha,
    write,
    close,
)

from gitdb.const import NULL_BYTE, BYTE_SPACE
from gitdb.utils.encoding import force_bytes

has_perf_mod = False
try:
    from gitdb_speedups._perf import apply_delta as c_apply_delta
    has_perf_mod = True
except ImportError:
    pass

__all__ = ('DecompressMemMapReader', 'FDCompressedSha1Writer', 'DeltaApplyReader',
           'Sha1Writer', 'FlexibleSha1Writer', 'ZippedStoreShaWriter',
           'FDStream', 'NullStream')


class DecompressMemMapReader(LazyMixin):

    """Reads data in chunks from a memory map and decompresses it. The client sees
    only the uncompressed data, respective file-like read calls are handling on-demand
    buffered decompression accordingly

    A constraint on the total size of bytes is activated, simulating
    a logical file within a possibly larger physical memory area

    To read efficiently, you clearly don't want to read individual bytes, instead,
    read a few kilobytes at least.

    **Note:** The chunk-size should be carefully selected as it will involve quite a bit
    of string copying due to the way zlib is implemented. It's very wasteful,
    hence we try to find a good tradeoff between allocation time and number of
    times we actually allocate. An own zlib implementation would be good here
    to better support streamed reading - it would only need to keep the mmap
    and decompress it into chunks, that's all ..."""
    __slots__ = ('_m', '_zip', '_buf', '_buflen', '_br', '_cws', '_cwe', '_s', '_close',
                 '_cbr', '_phi')

    max_read_size = 512 * 1024      # currently unused

    def __init__(self, m, close_on_deletion, size=None):
        """Initialize with mmap for stream reading
        :param m: must be content data - use new if you have object data and no size"""
        self._m = m
        self._zip = zlib.decompressobj()
        self._buf = None                        # buffer of decompressed bytes
        self._buflen = 0                        # length of bytes in buffer
        if size is not None:
            self._s = size                      # size of uncompressed data to read in total
        self._br = 0                            # num uncompressed bytes read
        self._cws = 0                           # start byte of compression window
        self._cwe = 0                           # end byte of compression window
        self._cbr = 0                           # number of compressed bytes read
        self._phi = False                       # is True if we parsed the header info
        self._close = close_on_deletion         # close the memmap on deletion?

    def _set_cache_(self, attr):
        assert attr == '_s'
        # only happens for size, which is a marker to indicate we still
        # have to parse the header from the stream
        self._parse_header_info()

    def __del__(self):
        self.close()

    def _parse_header_info(self):
        """If this stream contains object data, parse the header info and skip the
        stream to a point where each read will yield object content

        :return: parsed type_string, size"""
        # read header - should really be enough, cgit uses 8192 I believe
        maxb = 8192
        self._s = maxb
        hdr = self.read(maxb)
        hdrend = hdr.find(NULL_BYTE)
        typ, size = hdr[:hdrend].split(BYTE_SPACE)
        size = int(size)
        self._s = size

        # adjust internal state to match actual header length that we ignore
        # The buffer will be depleted first on future reads
        self._br = 0
        hdrend += 1
        self._buf = BytesIO(hdr[hdrend:])
        self._buflen = len(hdr) - hdrend

        self._phi = True

        return typ, size

    #{ Interface

    @classmethod
    def new(cls, m, close_on_deletion=False):
        """Create a new DecompressMemMapReader instance for acting as a read-only stream
        This method parses the object header from m and returns the parsed
        type and size, as well as the created stream instance.

        :param m: memory map on which to operate. It must be object data ( header + contents )
        :param close_on_deletion: if True, the memory map will be closed once we are
            being deleted"""
        inst = cls(m, close_on_deletion, 0)
        typ, size = inst._parse_header_info()
        return typ, size, inst

    def data(self):
        """:return: random access compatible data we are working on"""
        return self._m

    def close(self):
        """Close our underlying stream of compressed bytes if this was allowed during initialization
        :return: True if we closed the underlying stream
        :note: can be called safely"""
        if self._close:
            if hasattr(self._m, 'close'):
                self._m.close()
            self._close = False
        # END handle resource freeing

    def compressed_bytes_read(self):
        """
        :return: number of compressed bytes read. This includes the bytes it
            took to decompress the header ( if there was one )"""
        # ABSTRACT: if the uncompressed size was reached already, the decompressor
        # may not have seen the terminating bytes of the zlib stream yet. Scrub
        # the stream to its end so the compressed byte count becomes accurate.
        if self._br == self._s and not self._zip.unused_data:
            # manipulate the bytes-read to allow our own read method to continue
            # but keep the window at its current position
            self._br = 0
            if hasattr(self._zip, 'status'):
                while self._zip.status == zlib.Z_OK:
                    self.read(mmap.PAGESIZE)
                # END scrub-loop custom zlib
            else:
                # pass in additional pages, until we have unused data left
                # as well as decompressed all bytes
                while not self._zip.unused_data and self._cbr != len(self._m):
                    self.read(mmap.PAGESIZE)
                # END scrub-loop default zlib
            # END handle stream scrubbing

            # reset bytes read, just to be sure
            self._br = self._s
        # END handle stream scrubbing

        # unused data ends up in the unconsumed tail, which was removed
        # from the count already
        return self._cbr

    #} END interface

    def seek(self, offset, whence=getattr(os, 'SEEK_SET', 0)):
        """Allows to reset the stream to restart reading
        :raise ValueError: If offset and whence are not 0"""
        if offset != 0 or whence != getattr(os, 'SEEK_SET', 0):
            raise ValueError("Can only seek to position 0")
        # END handle offset

        self._zip = zlib.decompressobj()
        self._br = self._cws = self._cwe = self._cbr = 0
        if self._phi:
            self._phi = False
            del(self._s)        # trigger header parsing on first access
        # END skip header

    def read(self, size=-1):
        if size < 1:
            size = self._s - self._br
        else:
            size = min(size, self._s - self._br)
        # END clamp size

        if size == 0:
            return b''
        # END handle depletion

        # deplete the buffer, then just continue using the decompress object
        # which has an own buffer. To make things complicated, though,
        # we have to constantly rewrite the window into our memory map
        dat = b''
        if self._buf:
            if self._buflen >= size:
                # have enough data
                dat = self._buf.read(size)
                self._buflen -= size
                self._br += size
                return dat
            else:
                dat = self._buf.read()          # ouch, duplicates data
                size -= self._buflen
                self._br += self._buflen

                self._buflen = 0
                self._buf = None
            # END handle buffer len
        # END handle buffer

        # decompress some data
        # Abstract: zlib needs to operate on chunks of our memory map ( which may
        # be large ), as it will otherwise and always fill in the 'unconsumed_tail'
        # attribute which possibly reads our whole map to the end, forcing
        # everything to be read from disk even though just a portion was requested.
        # As this would be a nogo, we workaround it by passing only chunks of data,
        # moving the window into the memory map along as we decompress, which keeps
        # the tail smaller than our chunk-size. This causes 'only' the chunk to be
        # copied once, and another copy of a part of it when it creates the unconsumed
        # tail. We have to use it to hand in the appropriate amount of bytes during
        # the next read.
        tail = self._zip.unconsumed_tail
        if tail:
            # move the window, make it as large as size demands. For code-clarity,
            # we just take the chunk from our map again instead of reusing the unconsumed
            # tail. The latter one would save some memory copying, but we could end up
            # with not getting enough data uncompressed, so we had to sort that out as well.
            # Now we just assume the worst case, hence the data is uncompressed and the window
            # needs to be as large as the uncompressed bytes we want to read.
            self._cws = self._cwe - len(tail)
            self._cwe = self._cws + size
        else:
            cws = self._cws
            self._cws = self._cwe
            self._cwe = cws + size
        # END handle tail

        # if window is too small, make it larger so zip can decompress something
        if self._cwe - self._cws < 8:
            self._cwe = self._cws + 8
        # END adjust winsize

        # takes a slice, but doesn't copy the data, it says ...
        indata = self._m[self._cws:self._cwe]

        # get the actual window end to be sure we don't use it for computations
        self._cwe = self._cws + len(indata)
        dcompdat = self._zip.decompress(indata, size)

        # update the amount of compressed bytes read
        # We feed possibly overlapping chunks, which is why the unconsumed tail
        # has to be taken into consideration, as well as the unused data
        # if we hit the end of the stream
        # NOTE: the tail attribute behaves differently depending on zlib version
        # and platform, hence the special case
        if zlib.ZLIB_VERSION in ('1.2.7', '1.2.5') and not sys.platform == 'darwin':
            unused_datalen = len(self._zip.unconsumed_tail)
        else:
            unused_datalen = len(self._zip.unconsumed_tail) + len(self._zip.unused_data)
        # END handle very special case ...

        self._cbr += len(indata) - unused_datalen
        self._br += len(dcompdat)

        if dat:
            dcompdat = dat + dcompdat
        # END prepend our cached data

        # it can happen, depending on the compression, that we get less bytes
        # than ordered as it needs the final portion of the data as well.
        # Recursively resolve that.
        # Note: dcompdat can be empty even though we still appear to have bytes
        # to read, if we are called by compressed_bytes_read - it manipulates
        # us to empty the stream
        if dcompdat and (len(dcompdat) - len(dat)) < size and self._br < self._s:
            dcompdat += self.read(size - len(dcompdat))
        # END handle special case
        return dcompdat
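For orientation, the loose-object layout that DecompressMemMapReader consumes can be reproduced with the stdlib alone: a zlib stream whose decompressed payload starts with a `<type> <size>\0` header. The sketch below is an illustration only (not gitdb code, all names invented) showing the header parse and the chunk-wise decompression technique the reader's windowed `read` relies on:

```python
import zlib

def parse_loose_header(buf):
    """Split b"<type> <size>\\0<content>" into (type, size, payload offset)."""
    hdrend = buf.find(b"\0")
    typ, size = buf[:hdrend].split(b" ")
    return typ, int(size), hdrend + 1

# build a fake loose object: header plus content, zlib-compressed
content = b"hello git"
raw = b"blob %d\0" % len(content) + content
compressed = zlib.compress(raw)

# stream-decompress in small chunks, similar to the reader's moving window
dec = zlib.decompressobj()
out = b""
for i in range(0, len(compressed), 4):
    out += dec.decompress(compressed[i:i + 4])
out += dec.flush()

typ, size, ofs = parse_loose_header(out)
assert (typ, size, out[ofs:]) == (b"blob", 9, content)
```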
class DeltaApplyReader(LazyMixin):

    """A reader which dynamically applies pack deltas to a base object, keeping the
    memory demands to a minimum.

    The size of the final object is only obtainable once all deltas have been
    applied, unless it is retrieved from a pack index.

    The uncompressed Delta has the following layout (MSB being a most significant
    bit encoded dynamic size):

    * MSB Source Size - the size of the base against which the delta was created
    * MSB Target Size - the size of the resulting data after the delta was applied
    * A list of one byte commands (cmd) which are followed by a specific protocol:

     * cmd & 0x80 - copy delta_data[offset:offset+size]

      * Followed by an encoded offset into the delta data
      * Followed by an encoded size of the chunk to copy

     * cmd & 0x7f - insert

      * insert cmd bytes from the delta buffer into the output stream

     * cmd == 0 - invalid operation ( or error in delta stream )
    """
    __slots__ = (
        "_bstream",             # base stream to which to apply the deltas
        "_dstreams",            # tuple of delta stream readers
        "_mm_target",           # memory map of the delta-applied data
        "_size",                # actual number of bytes in _mm_target
        "_br",                  # number of bytes read
    )

    #{ Configuration
    k_max_memory_move = 250 * 1000 * 1000
    #} END configuration

    def __init__(self, stream_list):
        """Initialize this instance with a list of streams, the first stream being
        the delta to apply on top of all following deltas, the last stream being the
        base object onto which to apply the deltas"""
        assert len(stream_list) > 1, "Need at least one delta and one base stream"

        self._bstream = stream_list[-1]
        self._dstreams = tuple(stream_list[:-1])
        self._br = 0

    def _set_cache_too_slow_without_c(self, attr):
        # the direct algorithm is fastest and most direct if there is only one
        # delta. Also, the extra overhead might not be worth it for items smaller
        # than X - definitely the case in python, every function call costs
        # huge amounts of time
        if len(self._dstreams) == 1:
            return self._set_cache_brute_(attr)
        # END single delta shortcut

        # aggregate all deltas into one delta in reverse order - hence we take
        # the last delta, and reverse-merge its ancestor delta, until we receive
        # the final delta data stream
        dcl = connect_deltas(self._dstreams)

        # a right-bound of zero means the delta resolves to an empty stream
        if dcl.rbound() == 0:
            self._size = 0
            self._mm_target = allocate_memory(0)
            return
        # END handle empty list

        self._size = dcl.rbound()
        self._mm_target = allocate_memory(self._size)

        bbuf = allocate_memory(self._bstream.size)
        stream_copy(self._bstream.read, bbuf.write, self._bstream.size, 256 * mmap.PAGESIZE)

        # APPLY CHUNKS
        write = self._mm_target.write
        dcl.apply(bbuf, write)

        self._mm_target.seek(0)
    def _set_cache_brute_(self, attr):
        """If we are here, we apply the actual deltas"""
        # read the delta headers, i.e. base and target sizes
        buffer_info_list = list()
        max_target_size = 0
        for dstream in self._dstreams:
            buf = dstream.read(512)             # read the header information + X
            offset, src_size = msb_size(buf)
            offset, target_size = msb_size(buf, offset)
            buffer_info_list.append((buf[offset:], offset, src_size, target_size))
            max_target_size = max(max_target_size, target_size)
        # END for each delta stream

        # the first delta to apply should have the same source size
        # as our actual base stream
        base_size = self._bstream.size
        target_size = max_target_size

        # if we have more than 1 delta to apply, we will swap buffers, hence we must
        # assure that all buffers we use are large enough to hold all the results
        if len(self._dstreams) > 1:
            base_size = target_size = max(base_size, max_target_size)
        # END adjust buffer sizes

        # allocate a private memory map big enough to hold the first base buffer,
        # as we need random access to it
        bbuf = allocate_memory(base_size)
        stream_copy(self._bstream.read, bbuf.write, base_size, 256 * mmap.PAGESIZE)

        # allocate a memory map large enough to hold the largest (intermediate)
        # target. We will use it as scratch space for all delta ops. If the final
        # target buffer is smaller than our allocated space, we just use parts
        # of it upon return.
        tbuf = allocate_memory(target_size)

        # for each delta to apply, memory map the decompressed delta and
        # work on the op-codes to reconstruct everything
        final_target_size = None
        for (dbuf, offset, src_size, target_size), dstream in zip(reversed(buffer_info_list), reversed(self._dstreams)):
            # allocate a buffer to hold all delta data - fill in the data for
            # fast access. We do this as we know that reading individual bytes
            # will be faster than streaming it repeatedly
            ddata = allocate_memory(dstream.size - offset)
            ddata.write(dbuf)
            # read the rest from the stream. The size we give is larger than necessary
            stream_copy(dstream.read, ddata.write, dstream.size, 256 * mmap.PAGESIZE)

            #######################################################################
            if 'c_apply_delta' in globals():
                c_apply_delta(bbuf, ddata, tbuf)
            else:
                apply_delta_data(bbuf, src_size, ddata, len(ddata), tbuf.write)
            #######################################################################

            # finally, swap out source and target buffers. The target is now the
            # base for the next delta to apply
            bbuf, tbuf = tbuf, bbuf
            bbuf.seek(0)
            tbuf.seek(0)
            final_target_size = target_size
        # END for each delta to apply

        # it is already seeked to 0, constrain it to the actual size
        # NOTE: at the end of the loop the buffers were swapped, hence the
        # target data ended up in bbuf
        self._mm_target = bbuf
        self._size = final_target_size

    # pick the fastest implementation available for resolving the cache
    if has_perf_mod:
        _set_cache_ = _set_cache_too_slow_without_c
    else:
        _set_cache_ = _set_cache_brute_

    def read(self, count=0):
        bl = self._size - self._br      # bytes left
        if count < 1 or count > bl:
            count = bl
        # NOTE: we could check for certain size limits, and possibly
        # return buffers instead of strings to prevent byte copying
        data = self._mm_target.read(count)
        self._br += len(data)
        return data

    def seek(self, offset, whence=getattr(os, 'SEEK_SET', 0)):
        """Allows to reset the stream to restart reading

        :raise ValueError: If offset and whence are not 0"""
        if offset != 0 or whence != getattr(os, 'SEEK_SET', 0):
            raise ValueError("Can only seek to position 0")
        # END handle offset
        self._br = 0
        self._mm_target.seek(0)

    #{ Interface

    @classmethod
    def new(cls, stream_list):
        """
        Convert the given list of streams into a stream which resolves deltas
        when reading from it.

        :param stream_list: two or more stream objects, first stream is a Delta
            to the object that you want to resolve, followed by N additional delta
            streams. The list's last stream must be a non-delta stream.

        :return: Non-Delta OPackStream object whose stream can be used to obtain
            the decompressed resolved data
        :raise ValueError: if the stream list cannot be handled"""
        if len(stream_list) < 2:
            raise ValueError("Need at least two streams")
        # END single object special handling

        if stream_list[-1].type_id in delta_types:
            raise ValueError(
                "Cannot resolve deltas if there is no base object stream, last one was type: %s" % stream_list[-1].type)
        # END check stream
        return cls(stream_list)

    #} END interface

    #{ OInfo like Interface

    @property
    def type(self):
        return self._bstream.type

    @property
    def type_id(self):
        return self._bstream.type_id

    @property
    def size(self):
        """:return: number of uncompressed bytes in the stream"""
        return self._size

    #} END oinfo like interface
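The "MSB encoded dynamic size" used by the delta header above is a little-endian base-128 varint: seven payload bits per byte, with the high bit set while more bytes follow. A hypothetical stand-in for `gitdb.fun.msb_size` (illustration only; the name `decode_msb_size` is invented to avoid clashing with the real import):

```python
def decode_msb_size(data, offset=0):
    """Decode a delta-style variable-length size.

    Returns (offset past the size bytes, decoded size)."""
    size = 0
    shift = 0
    i = offset
    while True:
        c = data[i]
        i += 1
        size |= (c & 0x7f) << shift     # 7 payload bits per byte, little-endian
        shift += 7
        if not c & 0x80:                # high bit clear: this was the last byte
            break
    return i, size

# 0x90 contributes 16, 0x01 contributes 1 << 7 = 128 -> 144
assert decode_msb_size(bytes([0x90, 0x01])) == (2, 144)
assert decode_msb_size(bytes([0x05])) == (1, 5)
```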
"k 1 1`cnoqcrcwwyy ys;r/c|jjSr6)rvrr7s r-rzDeltaApplyReader.types }!!r/c|jjSr6)rvrr7s r-rzDeltaApplyReader.type_ids }$$r/c|jS)z3:return: number of uncompressed bytes in the stream)ryr7s r-r,zDeltaApplyReader.sizes zr/Nr)rmrnrorprqk_max_memory_mover.rr has_perf_modr4r;rUrVr[rsrFpropertyrrr,rtr/r-rrBs0I*    DH'H'H'V 4' 3 #*'"j!"<"<      [ 4""X"%%X%Xr/rc*eZdZdZdZdZdZddZdS) rzpSimple stream writer which produces a sha whenever you like as it degests everything it is supposed to writesha1c,t|_dSr6)r rr7s r-r.zSha1Writer.__init__2sJJ r/cT|j|t|S)z{:raise IOError: If not all bytes could be written :param data: byte object :return: length of incoming data)rupdater?r)rHs r-r zSha1Writer.write7s& 4yyr/Fcj|r|jS|jS)z]:return: sha so far :param as_hex: if True, sha will be hex-encoded, binary otherwise)r hexdigestdigest)r)as_hexs r-shazSha1Writer.shaDs4  )9&&(( (y!!!r/Nrl)rmrnrorprqr.r rrtr/r-rr,sU**I """"""r/rc"eZdZdZdZdZdZdS)rzZWriter producing a sha1 while passing on the written bytes to the given write functionwritercHt|||_dSr6)rr.r)r)rs r-r.zFlexibleSha1Writer.__init__Ts!D!!! 
class FlexibleSha1Writer(Sha1Writer):

    """Writer producing a sha1 while passing on the written bytes to the given
    write function"""
    __slots__ = 'writer'

    def __init__(self, writer):
        Sha1Writer.__init__(self)
        self.writer = writer

    def write(self, data):
        Sha1Writer.write(self, data)
        self.writer(data)


class ZippedStoreShaWriter(Sha1Writer):

    """Remembers everything someone writes to it and generates a sha"""
    __slots__ = ('buf', 'zip')

    def __init__(self):
        Sha1Writer.__init__(self)
        self.buf = BytesIO()
        self.zip = zlib.compressobj(zlib.Z_BEST_SPEED)

    def __getattr__(self, attr):
        return getattr(self.buf, attr)

    def write(self, data):
        alen = Sha1Writer.write(self, data)
        self.buf.write(self.zip.compress(data))

        return alen

    def close(self):
        self.buf.write(self.zip.flush())

    def seek(self, offset, whence=getattr(os, 'SEEK_SET', 0)):
        """Seeking currently only supports to rewind written data
        Multiple writes are not supported"""
        if offset != 0 or whence != getattr(os, 'SEEK_SET', 0):
            raise ValueError("Can only seek to position 0")
        # END handle offset
        self.buf.seek(0)

    def getvalue(self):
        """:return: string value from the current stream position to the end"""
        return self.buf.getvalue()


class FDCompressedSha1Writer(Sha1Writer):

    """Digests data written to it, making the sha available, then compress the
    data and write it to the file descriptor

    **Note:** operates on raw file descriptors
    **Note:** for this to work, you have to use the close-method of this instance"""
    __slots__ = ('fd', 'sha1', 'zip')

    # default exception
    exc = IOError("Failed to write all bytes to filedescriptor")

    def __init__(self, fd):
        super().__init__()
        self.fd = fd
        self.zip = zlib.compressobj(zlib.Z_BEST_SPEED)

    #{ Stream Interface

    def write(self, data):
        """:raise IOError: If not all bytes could be written
        :return: length of incoming data"""
        self.sha1.update(data)
        cdata = self.zip.compress(data)
        bytes_written = write(self.fd, cdata)

        if bytes_written != len(cdata):
            raise self.exc

        return len(data)

    def close(self):
        remainder = self.zip.flush()
        if write(self.fd, remainder) != len(remainder):
            raise self.exc
        return close(self.fd)

    #} END stream interface
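Both ZippedStoreShaWriter.close and FDCompressedSha1Writer.close exist primarily to flush the compressor: `zlib.compressobj` buffers input internally, and without the final `flush()` the emitted stream is truncated and cannot be decompressed. A minimal stdlib demonstration of why the close step matters:

```python
import zlib

payload = b"x" * 4096
c = zlib.compressobj(zlib.Z_BEST_SPEED)
partial = c.compress(payload)   # the compressor may retain data internally
tail = c.flush()                # this is what close() must still write out

# only the flushed stream decompresses back to the payload
assert zlib.decompress(partial + tail) == payload

# the unflushed prefix alone is an incomplete zlib stream
try:
    zlib.decompress(partial)
    truncated_ok = True
except zlib.error:
    truncated_ok = False
assert not truncated_ok
```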
class FDStream:

    """A simple wrapper providing the most basic functions on a file descriptor
    with the fileobject interface. Cannot use os.fdopen as the resulting stream
    takes ownership"""
    __slots__ = ("_fd", "_pos")

    def __init__(self, fd):
        self._fd = fd
        self._pos = 0

    def write(self, data):
        self._pos += len(data)
        os.write(self._fd, data)

    def read(self, count=0):
        if count == 0:
            # read everything - query the size through the descriptor itself
            count = os.fstat(self._fd).st_size
        # END handle read everything

        data = os.read(self._fd, count)
        self._pos += len(data)
        return data

    def fileno(self):
        return self._fd

    def tell(self):
        return self._pos

    def close(self):
        close(self._fd)


class NullStream:

    """A stream that does nothing but providing a stream interface.
    Use it like /dev/null"""
    __slots__ = tuple()

    def read(self, size=0):
        return b''

    def close(self):
        pass

    def write(self, data):
        return len(data)