Qf3UdZgdZddlZddlZddlZddlZddlmZddlm Z ddl m Z m Z m Z ddlmZmZddlmZmZmZmZmZmZmZmZmZdd lmZdd lmZdd lmZm Z m!Z!ed Z"Gd de#Z$dZ%dCdZ&dZ'dZ(dZ)dZ*dDdZ+dddddde,e-fdZ.de/de/de/fdZ0dejbjdzd zZ3e/e4d!<de/de/de-fd"Z5de/de/de fd#Z6d$Z7dCd%Z8d&Z9dCd'Z:d(Z;d)Zd,Z?d-Z@d.d/d0d1ZAdCd2ZBdCd3ZCdCd4ZDdCd5ZEd6ZFd7ZGd8d9d:ZHe d;d<ZIdd=d>ZJd?ZK dd@lLmKZKGdAdBZNy#eM$rYwxYw)Fa Basic statistics module. This module provides functions for calculating statistics of data, including averages, variance, and standard deviation. Calculating averages -------------------- ================== ================================================== Function Description ================== ================================================== mean Arithmetic mean (average) of data. fmean Fast, floating-point arithmetic mean. geometric_mean Geometric mean of data. harmonic_mean Harmonic mean of data. median Median (middle value) of data. median_low Low median of data. median_high High median of data. median_grouped Median, or 50th percentile, of grouped data. mode Mode (most common value) of data. multimode List of modes (most common values of data). quantiles Divide data into intervals with equal probability. ================== ================================================== Calculate the arithmetic mean ("the average") of data: >>> mean([-1.0, 2.5, 3.25, 5.75]) 2.625 Calculate the standard median of discrete data: >>> median([2, 3, 4, 5]) 3.5 Calculate the median, or 50th percentile, of data grouped into class intervals centred on the data values provided. E.g. if your data points are rounded to the nearest whole number: >>> median_grouped([2, 2, 3, 3, 3, 4]) #doctest: +ELLIPSIS 2.8333333333... This should be interpreted in this way: you have two data points in the class interval 1.5-2.5, three data points in the class interval 2.5-3.5, and one in the class interval 3.5-4.5. The median of these data points is 2.8333... Calculating variability or spread --------------------------------- ================== ============================================= Function Description ================== ============================================= pvariance Population variance of data. variance Sample variance of data. pstdev Population standard deviation of data. stdev Sample standard deviation of data. ================== ============================================= Calculate the standard deviation of sample data: >>> stdev([2.5, 3.25, 5.5, 11.25, 11.75]) #doctest: +ELLIPSIS 4.38961843444... If you have previously calculated the mean, you can pass it as the optional second argument to the four "spread" functions to avoid recalculating it: >>> data = [1, 2, 2, 4, 4, 4, 5, 6] >>> mu = mean(data) >>> pvariance(data, mu) 2.5 Statistics for relations between two inputs ------------------------------------------- ================== ==================================================== Function Description ================== ==================================================== covariance Sample covariance for two variables. correlation Pearson's correlation coefficient for two variables. linear_regression Intercept and slope for simple linear regression. ================== ==================================================== Calculate covariance, Pearson's correlation, and simple linear regression for two inputs: >>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> y = [1, 2, 3, 1, 2, 3, 1, 2, 3] >>> covariance(x, y) 0.75 >>> correlation(x, y) #doctest: +ELLIPSIS 0.31622776601... >>> linear_regression(x, y) #doctest: LinearRegression(slope=0.1, intercept=1.5) Exceptions ---------- A single exception is defined: StatisticsError is a subclass of ValueError. ) NormalDistStatisticsError correlation covariancefmeangeometric_mean harmonic_meanlinear_regressionmeanmedianmedian_grouped median_high median_lowmode multimodepstdev pvariance quantilesstdevvarianceNFraction)Decimal)countgroupbyrepeat) bisect_left bisect_right) hypotsqrtfabsexperftaulogfsumsumprod)reduce) itemgetter)Counter namedtuple defaultdict@c eZdZy)rN)__name__ __module__ __qualname__1/opt/alt/python312/lib64/python3.12/statistics.pyrrsr3rcnd}t}|j}i}|j}t|tD]9\}}||t t |D]\}} |dz }|| d|z|| <;d|vr|d} n td|jD} tt|t} | | |fS)a_sum(data) -> (type, sum, count) Return a high-precision sum of the given numeric data as a fraction, together with the type to be converted to and the count of items. Examples -------- >>> _sum([3, 2.25, 4.5, -0.5, 0.25]) (, Fraction(19, 2), 5) Some sources of round-off error will be avoided: # Built-in sum returns zero. >>> _sum([1e50, 1, -1e50] * 1000) (, Fraction(1000, 1), 3000) Fractions and Decimals are also supported: >>> from fractions import Fraction as F >>> _sum([F(2, 3), F(7, 5), F(1, 4), F(5, 6)]) (, Fraction(63, 20), 4) >>> from decimal import Decimal as D >>> data = [D("0.1375"), D("0.2108"), D("0.3061"), D("0.0419")] >>> _sum(data) (, Fraction(6963, 10000), 4) Mixed types are currently treated as an error, except that int is allowed. rNc3:K|]\}}t||ywNr.0dns r4 z_sum..s@/?tq!HQN/?) setaddgetrtypemap _exact_ratiosumitemsr(_coerceint) datartypes types_addpartials partials_gettypvaluesr<r;totalTs r4_sumrRs@ E EE IH<.s.s@,?DAq!Q,?r>c3@K|]\}}t|||zywr8rr9s r4r=z_ss..s"D/Ctq!(1ac"/Cs)rRr?r@r,rHrrBrCrDrrErFr(rG)rIrVrQssdrrJrK sx_partials sxx_partialsrNrOr<sxsxxr;s ` @r4_ssr^sQ }>> _exact_ratio(0.25) (1, 4) x is expected to be an int, Fraction, Decimal or float. Nzcan't convert type 'z' to numerator/denominator) as_integer_ratiora OverflowError ValueError numerator denominatorrBr/ri)rUrks r4rDrDs<!!##   : &4y Q]]++ $T!W%5%5$66PQns 222A.A<ct||ur|St|tr|jdk7rt} ||S#t $r9t|t r'||j||jz cYSwxYw)z&Convert value to given numeric type T.r6)rBrgrHrqrhrirrp)valuerQs r4_convertrtMsz E{a !Se//14 x  a !U__%%*;*;(<< <  s>>B>Bc#BK|D]}|dkr t||yw)z7Iterate over values, failing if any are less than zero.rN)r)rOerrmsgrUs r4 _fail_negrw_s'  q5!&) )sFaverager6)keyreversetiesstartreturncT|dk7rtd|| t||}tt|t |}|dz }dgt |z}t |tdD]:\}} t| } t | } || dzdz z} | D] \} }| ||< || z }<|S)a Rank order a dataset. The lowest value has rank 1. Ties are averaged so that equal values receive the same rank: >>> data = [31, 56, 31, 25, 75, 18] >>> _rank(data) [3.5, 5.0, 3.5, 2.0, 6.0, 1.0] The operation is idempotent: >>> _rank([3.5, 5.0, 3.5, 2.0, 6.0, 1.0]) [3.5, 5.0, 3.5, 2.0, 6.0, 1.0] It is possible to rank the data in reverse order so that the highest value has rank 1. Also, a key-function can extract the field to be ranked: >>> goals = [('eagles', 45), ('bears', 48), ('lions', 44)] >>> _rank(goals, key=itemgetter(1), reverse=True) [2.0, 1.0, 3.0] Ranks are conventionally numbered starting from one; however, setting *start* to zero allows the ranks to be used as array indices: >>> prize = ['Gold', 'Silver', 'Bronze', 'Certificate'] >>> scores = [8.1, 7.3, 9.4, 8.3] >>> [prize[int(i)] for i in _rank(scores, start=0, reverse=True)] ['Bronze', 'Certificate', 'Gold', 'Silver'] rxzUnknown tie resolution method: )rzr6r)ry) rorCsortedziprlenrr)list)rIryrzr{r|val_posiresult_ggroupsizerankrsorig_poss r4_rankrgsJ y:4(CDD 3~Suw'9G  AS3w< FZ]31Q5zD1H>!$OE8#F8  % T  4 Mr3r<mcNtj||z}|||z|z|k7zS)zFSquare root of n/m, rounded to the nearest integer using round-to-odd.)rbisqrt)r<ras r4_integer_sqrt_of_frac_rtors- 16A !A r3r_sqrt_bit_widthc|j|jz tz dz}|dk\rt||d|zz|z}d}||z St|d|zz|}d| z}||z S)z1Square root of n/m as a float, correctly rounded.rrr6) bit_lengthrr)r<rqrprqs r4_float_sqrt_of_fracrs !,,. (? :q@AAv-aa!e<A   { "".a26k1= A2g { ""r3c|dkr|s tdS| | }}t|t|z j}|j\}}|j}|j\}}d|z||zdzz|||z||zzdzzkDr|S|j }|j\} } d|z|| zdzz||| z| |zzdzzkr|S|S)z3Square root of n/m as a Decimal, correctly rounded.rz0.0r)rr rm next_plus next_minus) r<rrootnrdrplusnpdpminusnmdms r4_decimal_sqrt_of_fracrs  Av5> !rA21 AJ # ) ) +D  " " $FB >> D  " " $FB1u2zABB 222 OO E  # # %FB1u2zABB 222 Kr3c^t|\}}}|dkr tdt||z |S)aReturn the sample arithmetic mean of data. >>> mean([1, 2, 3, 4, 4]) 2.8 >>> from fractions import Fraction as F >>> mean([F(3, 7), F(1, 21), F(5, 3), F(1, 3)]) Fraction(13, 21) >>> from decimal import Decimal as D >>> mean([D("0.5"), D("0.75"), D("0.625"), D("0.375")]) Decimal('0.5625') If ``data`` is empty, StatisticsError will be raised. r6z%mean requires at least one data point)rRrrt)rIrQrPr<s r4r r s7 t*KAua1uEFF EAIq !!r3c\|) t|t|}s td|z St |t t fs t |} t||}t|}|s td||z S#t$rdfd}||}YwxYw#t$r tdwxYw)zConvert data to floats and compute the arithmetic mean. This runs faster than the mean() function and it always returns a float. If the input dataset is empty, it raises a StatisticsError. >>> fmean([3.5, 4.0, 5.25]) 4.25 rc3@Kt|dD] \}| yw)Nr6r|) enumerate)iterablerUr<s r4rzfmean..counts!%ha8DAqG9sz&fmean requires at least one data pointz(data and weights must be the same lengthzsum of weights must be non-zero) rrir&r isinstancertupler'ro)rIweightsrrPnumdenr<s @r4rrs D AT !"JK Kqy ge} -w-JdG$ w-C ?@@ 9+ A ;D  JHIIJs A8 B8BBB+cz tttt|S#t$r t ddwxYw)aYConvert data to floats and compute the geometric mean. Raises a StatisticsError if the input dataset is empty, if it contains a zero, or if it contains a negative value. No special efforts are made to achieve exact results. (However, this may change in the future.) >>> round(geometric_mean([54, 24, 36]), 9) 36.0 zGgeometric mean requires a non-empty dataset containing positive numbersN)r"rrCr%ror)rIs r4rrsEG5S$()) G<=BF GGs!$:cxt||ur t|}d}t|}|dkr td|dk(rD|B|d}t |t j tfr|dkr t||Std|td|}|}nQt||ur t|}t||k7r tdtdt||D\}}} t||}tdt||D\}}} |dkr td t||z |S#t$rYywxYw) aReturn the harmonic mean of data. The harmonic mean is the reciprocal of the arithmetic mean of the reciprocals of the data. It can be used for averaging ratios or rates, for example speeds. Suppose a car travels 40 km/hr for 5 km and then speeds-up to 60 km/hr for another 5 km. What is the average speed? >>> harmonic_mean([40, 60]) 48.0 Suppose a car travels 40 km/hr for 5 km, and when traffic clears, speeds-up to 60 km/hr for the remaining 30 km of the journey. What is the average speed? >>> harmonic_mean([40, 60], weights=[5, 30]) 56.0 If ``data`` is empty, or any element is less than zero, ``harmonic_mean`` will raise ``StatisticsError``. z.harmonic mean does not support negative valuesr6z.harmonic_mean requires at least one data pointrzunsupported typez*Number of weights does not match data sizec3 K|]}|ywr8r2)r:ws r4r=z harmonic_mean..Ns G,Fq,Fs c34K|]\}}|r||z ndyw)rNr2)r:rrUs r4r=z harmonic_mean..Qs"P=OTQq1uq0=OszWeighted sum must be positive)iterrrrrnumbersRealrrirrRrwrZeroDivisionErrorrt) rIrrvr<rU sum_weightsrrQrPrs r4rr!sM. DzTDz =F D A1uNOO aGO G a',,0 11u%f--H./ /A, =G #7mG w<1 !"NO O GIgv,F GG;v&PS$=OPP5% z=>> K%' ++ s",D-- D98D9ct|}t|}|dk(r td|dzdk(r||dzS|dz}||dz ||zdz S)aBReturn the median (middle value) of numeric data. When the number of data points is odd, return the middle data point. When the number of data points is even, the median is interpolated by taking the average of the two middle values: >>> median([1, 3, 5]) 3 >>> median([1, 3, 5, 7]) 4.0 rno median for empty datarr6rrr)rIr<rs r4r r Ysg $>> median_low([1, 3, 5]) 3 >>> median_low([1, 3, 5, 7]) 3 rrrr6rrIr<s r4rrqsU $>> median_high([1, 3, 5]) 3 >>> median_high([1, 3, 5, 7]) 5 rrrrrs r4r r s7 $>> demographics = Counter({ ... 25: 172, # 20 to 30 years old ... 35: 484, # 30 to 40 years old ... 45: 387, # 40 to 50 years old ... 55: 22, # 50 to 60 years old ... 65: 6, # 60 to 70 years old ... }) The 50th percentile (median) is the 536th person out of the 1071 member cohort. That person is in the 30 to 40 year old age group. The regular median() function would assume that everyone in the tricenarian age group was exactly 35 years old. A more tenable assumption is that the 484 members of that age group are evenly distributed between 30 and 40. For that, we use median_grouped(). >>> data = list(demographics.elements()) >>> median(data) 35 >>> round(median_grouped(data, interval=10), 1) 37.5 The caller is responsible for making sure the data points are separated by exact multiples of *interval*. This is essential for getting a correct result. The function does not check this precondition. Inputs may be any numeric type that can be coerced to a float during the interpolation step. rr)loz$Value cannot be converted to a floatr-)rrrrrrhrori) rIintervalr<rUrjLcffs r4r r sV $@@As A==Bctt|jd} |ddS#t$r t ddwxYw)axReturn the most common data point from discrete or nominal data. ``mode`` assumes discrete data, and returns a single value. This is the standard treatment of the mode as commonly taught in schools: >>> mode([1, 1, 2, 3, 3, 3, 3, 4]) 3 This also works with nominal (non-numeric) data: >>> mode(["red", "blue", "blue", "red", "green", "red", "red"]) 'red' If there are multiple modes with same frequency, return the first one encountered: >>> mode(['red', 'red', 'green', 'blue', 'blue']) 'red' If *data* is empty, ``mode``, raises StatisticsError. r6rzno mode for empty dataN)r*r most_common IndexErrorr)rIpairss r4rrsP. DJ  + +A .EBQx{ B67TABs -Actt|}|sgSt|j}|j Dcgc] \}}||k(s |c}}Scc}}w)a.Return a list of the most frequently occurring values. Will return more than one result if there are multiple modes or an empty list if *data* is empty. >>> multimode('aabbbbbbbbcc') ['b'] >>> multimode('aabbbbccddddeeffffgg') ['b', 'd', 'f'] >>> multimode('') [] )r*rmaxrOrF)rIcountsmaxcountrsrs r4rrsST$Z F  6==?#H&,lln JnleU8IEn JJ Js  AAr exclusive)r<methodc(|dkr tdt|}t|}|dkr td|dk(rW|dz }g}td|D]?}t ||z|\}}||||z z||dz|zz|z } |j | A|S|dk(rn|dz}g}td|D]V}||z|z}|dkrdn||dz kDr|dz n|}||z||zz }||dz ||z z|||zz|z } |j | X|St d|)aDivide *data* into *n* continuous intervals with equal probability. Returns a list of (n - 1) cut points separating the intervals. Set *n* to 4 for quartiles (the default). Set *n* to 10 for deciles. Set *n* to 100 for percentiles which gives the 99 cuts points that separate *data* in to 100 equal sized groups. The *data* can be any iterable containing sample. The cut points are linearly interpolated between data points. If *method* is set to *inclusive*, *data* is treated as population data. The minimum value is treated as the 0th percentile and the maximum value is treated as the 100th percentile. r6zn must be at least 1rz"must have at least two data points inclusiverUnknown method: )rrrrangedivmodappendro) rIr<rldrrrrdelta interpolateds r4rr9sa  1u455 $>> data = [2.75, 1.75, 1.25, 0.25, 0.5, 1.25, 3.5] >>> variance(data) 1.3720238095238095 If you have already calculated the mean of your data, you can pass it as the optional second argument ``xbar`` to avoid recalculating it: >>> m = mean(data) >>> variance(data, m) 1.3720238095238095 This function does not check that ``xbar`` is actually the mean of ``data``. Giving arbitrary values for ``xbar`` may lead to invalid or impossible results. Decimals and Fractions are supported: >>> from decimal import Decimal as D >>> variance([D("27.5"), D("30.25"), D("30.25"), D("34.5"), D("41.75")]) Decimal('31.01875') >>> from fractions import Fraction as F >>> variance([F(1, 6), F(1, 2), F(5, 3)]) Fraction(67, 108) rz*variance requires at least two data pointsr6r^rrt)rIxbarrQssrVr<s r4rrjs@LdD/KAr1a1uJKK B!a%L! $$r3cbt||\}}}}|dkr tdt||z |S)a,Return the population variance of ``data``. data should be a sequence or iterable of Real-valued numbers, with at least one value. The optional argument mu, if given, should be the mean of the data. If it is missing or None, the mean is automatically calculated. Use this function to calculate the variance from the entire population. To estimate the variance from a sample, the ``variance`` function is usually a better choice. Examples: >>> data = [0.0, 0.25, 0.25, 1.25, 1.5, 1.75, 2.75, 3.25] >>> pvariance(data) 1.25 If you have already calculated the mean of the data, you can pass it as the optional second argument to avoid recalculating it: >>> mu = mean(data) >>> pvariance(data, mu) 1.25 Decimals and Fractions are supported: >>> from decimal import Decimal as D >>> pvariance([D("27.5"), D("30.25"), D("30.25"), D("34.5"), D("41.75")]) Decimal('24.815') >>> from fractions import Fraction as F >>> pvariance([F(1, 4), F(5, 4), F(1, 2)]) Fraction(13, 72) r6z*pvariance requires at least one data pointr)rImurQrrVr<s r4rrs<FdB-KAr1a1uJKK BFA r3ct||\}}}}|dkr td||dz z }t|tr t |j |j St|j |j S)zReturn the square root of the sample variance. See ``variance`` for arguments and other details. >>> stdev([1.5, 2.5, 2.5, 2.75, 3.25, 4.75]) 1.0810874155219827 r'stdev requires at least two data pointsr6r^rrgrrrprqr)rIrrQrrVr<msss r4rrskdD/KAr1a1uGHH A,C!W$S]]COODD s}}coo >>r3ct||\}}}}|dkr td||z }t|tr t |j |j St|j |j S)zReturn the square root of the population variance. See ``pvariance`` for arguments and other details. >>> pstdev([1.5, 2.5, 2.5, 2.75, 3.25, 4.75]) 0.986893273527251 r6z'pstdev requires at least one data pointr)rIrrQrrVr<rs r4rrsgdB-KAr1a1uGHH q&C!W$S]]COODD s}}coo >>r3c t|\}}}}|dkr td||dz z } t|t|j|j fS#t $r%t|t|t|z fcYSwxYw)zFIn one pass, compute the mean and sample standard deviation as floats.rrr6)r^rrhrrprqra)rIrQrrr<rs r4 _mean_stdevrsYNAr41uGHH A,C4T{/ sOOO 4T{E$K%)3334s*A+BBct|}t||k7r td|dkr tdt||z t||z tfd|Dfd|D}||dz z S)apCovariance Return the sample covariance of two inputs *x* and *y*. Covariance is a measure of the joint variability of two inputs. >>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> y = [1, 2, 3, 1, 2, 3, 1, 2, 3] >>> covariance(x, y) 0.75 >>> z = [9, 8, 7, 6, 5, 4, 3, 2, 1] >>> covariance(x, z) -7.5 >>> covariance(z, x) -7.5 zDcovariance requires that both inputs have same number of data pointsrz,covariance requires at least two data pointsc3(K|] }|z  ywr8r2)r:xirs r4r=zcovariance..s)q29qc3(K|] }|z  ywr8r2r:yiybars r4r=zcovariance..s+B"BIrr6)rrr&r')rUyr<sxyrrs @@r4rrsx" AA 1v{dee1uLMM 7Q;D 7Q;D )q)+B+B CC !a%=r3linear)rct|}t||k7r td|dkr td|dvrtd||dk(r#|dz dz }t|| }t|| }n@t ||z }t ||z }|Dcgc]}||z  }}|Dcgc]}||z  }}t ||} t ||} t ||} | t | | zz Scc}wcc}w#t$r td wxYw) alPearson's correlation coefficient Return the Pearson's correlation coefficient for two inputs. Pearson's correlation coefficient *r* takes values between -1 and +1. It measures the strength and direction of a linear relationship. >>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> y = [9, 8, 7, 6, 5, 4, 3, 2, 1] >>> correlation(x, x) 1.0 >>> correlation(x, y) -1.0 If *method* is "ranked", computes Spearman's rank correlation coefficient for two inputs. The data is replaced by ranks. Ties are averaged so that equal values receive the same rank. The resulting coefficient measures the strength of a monotonic relationship. Spearman's rank correlation coefficient is appropriate for ordinal data or for continuous data that doesn't meet the linear proportion requirement for Pearson's correlation coefficient. zEcorrelation requires that both inputs have same number of data pointsrz-correlation requires at least two data points>rrankedrrr6rrz&at least one of the inputs is constant)rrrorr&r'r r) rUrrr<r|rrrrrr]syys r4rrs+. AA 1v{eff1uMNN ))+F:677 Q"  !5 ! !5 !Aw{Aw{!" #2R$Y #!" #2R$Y # !Q-C !Q-C !Q-CHT#)_$$ $ # HFGGHs C%! C*C//DLinearRegressionslope intercept) proportionalc t|}t||k7r td|dkr td|s9t||z }t||z |Dcgc]}||z  }} fd|D}t||dz}t||} ||z }|rdn |zz } t || Scc}w#t$r tdwxYw)aSlope and intercept for simple linear regression. Return the slope and intercept of simple linear regression parameters estimated using ordinary least squares. Simple linear regression describes relationship between an independent variable *x* and a dependent variable *y* in terms of a linear function: y = slope * x + intercept + noise where *slope* and *intercept* are the regression parameters that are estimated, and noise represents the variability of the data that was not explained by the linear regression (it is equal to the difference between predicted and actual values of the dependent variable). The parameters are returned as a named tuple. >>> x = [1, 2, 3, 4, 5] >>> noise = NormalDist().samples(5, seed=42) >>> y = [3 * x[i] + 2 + noise[i] for i in range(5)] >>> linear_regression(x, y) #doctest: +ELLIPSIS LinearRegression(slope=3.09078914170..., intercept=1.75684970486...) If *proportional* is true, the independent variable *x* and the dependent variable *y* are assumed to be directly proportional. The data is fit to a line passing through the origin. Since the *intercept* will always be 0.0, the underlying linear function simplifies to: y = slope * x + noise >>> y = [3 * x[i] + noise[i] for i in range(5)] >>> linear_regression(x, y, proportional=True) #doctest: +ELLIPSIS LinearRegression(slope=3.02447542484..., intercept=0.0) zKlinear regression requires that both inputs have same number of data pointsrz3linear regression requires at least two data pointsc3(K|] }|z  ywr8r2rs r4r=z$linear_regression..us #2R$Yrz x is constantr)rrr&r'rr) rUrrr<rrrr]rrrs @r4r r FsL AA 1v{kll1uSTT Aw{Aw{!" #2R$Y # # # !Q-# C !Q-C/c $ )l@gN@g"]Ξ@gnC`@gu @giK~j@gv|E@gd|1@gfRr@gu.2@g~y@gn8(E@?rg@g?g鬷ZaI?ggElD?g7\?guSS?g=. @gj%b@gHw@gjRe?g9dh? >g('߿A?g~z ?g@3?gɅ3?g3fRx?gIFl@gt>g*Yn>gESB\T?gN;A+?gUR1?gEF?gPn@g&>@gigtcI,\>gŝI?g*F2v?gC4?gO1?)r!r r%)prsigmarrrrrUs r4_normal_dist_inv_cdfrs CA Aw% q1u 0140145601456115661 156 6 1 1 56 6 1 1 56 6115661140145601456115661 156 6 1 1 56 6 1 1 56 6 #IQY #X37A c!fW ACx G1A51256712567226772 267 7 2 2 67 7 2 2 67 7222A51256712567226772 267 7 2 2 67 7 2 2 67 7 G1A51256712567226772 267 7 2 2 67 7 2 2 67 7223Q61256712567226772 267 7 2 2 67 7 2 2 67 7 c A3w B U r3)rceZdZdZdddZd!dZedZddd Zd Z d Z d Z d"d Z dZ dZedZedZedZedZedZdZdZdZdZdZdZeZdZeZdZdZdZ dZ!d Z"y)#rz(Normal distribution of a random variablez(Arithmetic mean of a normal distributionz+Standard deviation of a normal distribution_mu_sigmacd|dkr tdt||_t||_y)zDNormalDist where mu is the mean and sigma is the standard deviation.rzsigma must be non-negativeN)rrhr r )selfrrs r4__init__zNormalDist.__init__s+ 3;!">? ?9El r3c|t|S)z5Make a normal distribution instance from sample data.)r)clsrIs r4 from_sampleszNormalDist.from_samplessK%&&r3N)seedc|tjntj|j}|j|j}}t d|Dcgc] }||| c}Scc}w)z=Generate *n* samples for a given mean and standard deviation.N)randomgaussRandomr r r)r r<rrrrrs r4sampleszNormalDist.samplessV $  &--2E2K2KHHdkkE*0q/:/Qb% /:::sA+c|j|jz}|s td||jz }t||zd|zz t t |zz S)z4Probability density function. P(x <= X < x+dx) / dxz$pdf() not defined when sigma is zerog)r rr r"r r$)r rUrdiffs r4pdfzNormalDist.pdfsV;;,!"HI I488|4$;$/23d3>6JJJr3c|js tdddt||jz |jtzz zzS)z,Cumulative distribution function. P(X <= x)z$cdf() not defined when sigma is zerorr)r rr#r _SQRT2r rUs r4cdfzNormalDist.cdfs@{{!"HI IcCTXX$++2F GHHIIr3cn|dks|dk\r tdt||j|jS)aSInverse cumulative distribution function. x : P(X <= x) = p Finds the value of the random variable such that the probability of the variable being less than or equal to that value equals the given probability. This function is also called the percent point function or quantile function. rrz$p must be in the range 0.0 < p < 1.0)rrr r )r rs r4inv_cdfzNormalDist.inv_cdfs4 8qCx!"HI I#Atxx==r3cdtd|Dcgc]}|j||z c}Scc}w)anDivide into *n* continuous intervals with equal probability. Returns a list of (n - 1) cut points separating the intervals. Set *n* to 4 for quartiles (the default). Set *n* to 10 for deciles. Set *n* to 100 for percentiles which gives the 99 cuts points that separate the normal distribution in to 100 equal sized groups. r6)rr)r r<rs r4rzNormalDist.quantiles s/.31a[9[ QU#[999s-c t|ts td||}}|j|jf|j|jfkr||}}|j |j }}|r|s t d||z }t|j|jz }|s%dt|d|jztzz z S|j|z|j|zz }|j|jzt||z|t||z zzz} || z|z } || z |z } dt|j| |j| z t|j| |j| z zz S)aCompute the overlapping coefficient (OVL) between two normal distributions. Measures the agreement between two normal probability distributions. Returns a value between 0.0 and 1.0 giving the overlapping area in the two underlying probability density functions. >>> N1 = NormalDist(2.4, 1.6) >>> N2 = NormalDist(3.2, 2.0) >>> N1.overlap(N2) 0.8035050657330205 z$Expected another NormalDist instancez(overlap() not defined when sigma is zerorr-) rrrir r rrr!r#rr r%r) r otherXYX_varY_vardvrrbx1x2s r4overlapzNormalDist.overlaps_ %,BC CU1 HHaee !%%0 0aqAzz1::uE!"LM M U] !%%!%%- R3>F#:;<< < EEEMAEEEM ) HHqxx $rBwc%%-6H1H'H"I I!er\!er\d1559quuRy01DrQUU2Y9N4OOPPr3ch|js td||jz |jz S)zCompute the Standard Score. (x - mean) / stdev Describes *x* in terms of the number of standard deviations above or below the mean of the normal distribution. z'zscore() not defined when sigma is zero)r rr rs r4zscorezNormalDist.zscore9s.{{!"KL LDHH  ++r3c|jS)z+Arithmetic mean of the normal distribution.r r s r4r zNormalDist.meanD xxr3c|jS)z,Return the median of the normal distributionr/r0s r4r zNormalDist.medianIr1r3c|jS)zReturn the mode of the normal distribution The mode is the value x where which the probability density function (pdf) takes its maximum value. r/r0s r4rzNormalDist.modeNs xxr3c|jS)z.Standard deviation of the normal distribution.r r0s r4rzNormalDist.stdevWs{{r3c4|j|jzS)z!Square of the standard deviation.r5r0s r4rzNormalDist.variance\s{{T[[((r3ct|trAt|j|jzt|j|jSt|j|z|jS)ajAdd a constant or another NormalDist instance. If *other* is a constant, translate mu by the constant, leaving sigma unchanged. If *other* is a NormalDist, add both the means and the variances. Mathematically, this works only if the two distributions are independent or if they are jointly normally distributed. rrr rr r)r*s r4__add__zNormalDist.__add__aO b* %bffrvvouRYY /JK K"&&2+ryy11r3ct|trAt|j|jz t|j|jSt|j|z |jS)asSubtract a constant or another NormalDist instance. If *other* is a constant, translate by the constant mu, leaving sigma unchanged. If *other* is a NormalDist, subtract the means and add the variances. Mathematically, this works only if the two distributions are independent or if they are jointly normally distributed. r8r9s r4__sub__zNormalDist.__sub__or;r3c`t|j|z|jt|zS)zMultiply both mu and sigma by a constant. Used for rescaling, perhaps to change measurement units. Sigma is scaled with the absolute value of the constant. rr r r!r9s r4__mul__zNormalDist.__mul__}& "&&2+ryy48';< : QD ,)) 2 2==-.HH; -P%&r3rr8)znegative value)r)OrZ__all__rbrrsys fractionsrdecimalr itertoolsrrrbisectrrrr r!r"r#r$r%r&r' functoolsr(operatorr) collectionsr*r+r,rrorrRr^rdrGrDrtrwrrhrrHr float_infomant_digr__annotations__rrr rrrr rr r rrrrrrrrrrrr r _statistics ImportErrorrr2r3r4rnshT .  ,,,EEE88 c j 3l&R 4>+\$IQ34PU;3l3>>222Q66 #3 #3 #5 #SSW<",!HG&5,p+0 ,&E+PB<Kr;(4b)%X&R?$?$ 4(8$,-H`02HI057>zGV 0 Z&Z&   s1EE  E