diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml
index a5d95b3d..4dee0d47 100644
--- a/.github/workflows/test.yml
+++ b/.github/workflows/test.yml
@@ -5,9 +5,9 @@ name: Run Tests
 on:
   push:
-    branches: [ "master" ]
+    branches: [ "main", "master" ]
   pull_request:
-    branches: [ "master" ]
+    branches: [ "main", "master" ]
 
 jobs:
   build:
diff --git a/CHANGES.rst b/CHANGES.rst
index d40aa093..69962df0 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -1,6 +1,40 @@
 Change Log
 ===================
 
+Unreleased 4.x
+--------------
+
+Changes:
+
+- Includes the `main` branch in continuous integration automation.
+- [**BREAKING**] Previously, when tallying the uncertainty for a `UFloat` object, the
+  contributions from other `UFloat` objects with `std_dev == 0` were excluded. Now this
+  special casing for `std_dev == 0` has been removed so that the contributions from all
+  contributing `UFloat` objects are included. This changes the behavior in certain
+  corner cases where `UFloat` `f` is derived from `UFloat` `x`, `x` has `x.s == 0`, but
+  the derivative of `f` with respect to `x` is `NaN`. For example, previously
+  `(-1)**ufloat(1, 0)` gave `-1.0+/-0`. The justification for this was that the second
+  `UFloat`, with a `std_dev` of `0`, should be treated like a regular float. Now the
+  same calculation returns `-1.0+/-nan`: the `UFloat` in the second argument
+  of the power operator is treated as a degenerate `UFloat`.
+
+Removes:
+
+- [**BREAKING**] Removes certain deprecated `umath` functions and
+  `AffineScalarFunc`/`UFloat` methods. The following `umath` functions are removed:
+  `ceil`, `copysign`, `fabs`, `factorial`, `floor`, `fmod`, `frexp`, `ldexp`, `modf`,
+  `trunc`. The following `AffineScalarFunc`/`UFloat` methods are removed:
+  `__floordiv__`, `__mod__`, `__abs__`, `__trunc__`, `__lt__`, `__le__`, `__gt__`,
+  `__ge__`, `__bool__`. 
+- [**BREAKING**] Previously it was possible for a `UFloat` object to compare equal to a + `float` object if the `UFloat` `standard_deviation` was zero and the `UFloat` + `nominal_value` was equal to the `float`. Now, when an equality comparison is made + between a `UFloat` object and another object, if the object is not a `UFloat` then + the equality comparison is deferred to this other object. For the specific case of + `float` this means that the equality comparison always returns `False`. +- [**BREAKING**] The `uncertainties` package is generally dropping formal support for + edge cases involving `UFloat` objects with `std_dev == 0`. + Unreleased ---------- diff --git a/doc/index.rst b/doc/index.rst index 496c4d49..920de4b9 100644 --- a/doc/index.rst +++ b/doc/index.rst @@ -27,10 +27,8 @@ quick taste of how to use :mod:`uncertainties`: The :mod:`uncertainties` library calculates uncertainties using linear `error propagation theory`_ by automatically :ref:`calculating derivatives ` and analytically propagating these to the results. Correlations -between variables are automatically handled. This library can also yield the -derivatives of any expression with respect to the variables that have uncertain -values. For other approaches, see soerp_ (using higher-order terms) and mcerp_ -(using a Monte-Carlo approach). +between variables are automatically handled. For other approaches, see soerp_ (using +higher-order terms) and mcerp_ (using a Monte-Carlo approach). The `source code`_ for the uncertainties package is licensed under the `Revised BSD License`_. This documentation is licensed under the `CC-SA-3 License`_. diff --git a/doc/tech_guide.rst b/doc/tech_guide.rst index 326e0ad5..8a261c9a 100644 --- a/doc/tech_guide.rst +++ b/doc/tech_guide.rst @@ -27,7 +27,7 @@ user-supplied function. .. autofunction:: ufloat_fromstr -.. autoclass:: Variable +.. autoclass:: UFloat .. autofunction:: wrap @@ -93,107 +93,47 @@ are completely uncorrelated. .. 
_comparison_operators:
 
-Comparison operators
---------------------
+Equality Comparison
+-------------------
 
-.. warning::
-   Support for comparing variables with uncertainties is deprecated and will be
-   removed in Uncertainties 4.0. The behavior of ``bool`` will also be changed
-   to always return ``True`` for ``UFloat`` objects.
+Numbers with uncertainty, :class:`UFloat` objects, model random variables.
+There are a number of senses of equality for two random variables.
+The stronger senses of equality define two random variables to be equal if the two
+random variables always produce the same result given a random sample from the sample
+space.
+For :class:`UFloat`, this is the case if two :class:`UFloat` objects have equal
+nominal values and standard deviations and are perfectly correlated.
+We can test for these conditions by taking the difference of two :class:`UFloat` objects
+and looking at the nominal value and standard deviation of the result.
+If both the nominal value and standard deviation of the difference are zero, then the
+two :class:`UFloat` objects have the same nominal value and standard deviation, and are
+perfectly correlated.
+In this case we say the two :class:`UFloat` objects are equal.
 
-Comparison operations (>, ==, etc.) on numbers with uncertainties have
-a **pragmatic semantics**, in this package: numbers with uncertainties
-can be used wherever Python numbers are used, most of the time with a
-result identical to the one that would be obtained with their nominal
-value only. This allows code that runs with pure numbers to also work
-with numbers with uncertainties.
-
-.. index:: boolean value
-
-The **boolean value** (``bool(x)``, ``if x …``) of a number with
-uncertainty :data:`x` is defined as the result of ``x != 0``, as usual.
-
-However, since the objects defined in this module represent
-probability distributions and not pure numbers, comparison operators
-are interpreted in a specific way. 
-
-The result of a comparison operation is defined so as to be
-essentially consistent with the requirement that uncertainties be
-small: the **value of a comparison operation** is True only if the
-operation yields True for all *infinitesimal* variations of its random
-variables around their nominal values, *except*, possibly, for an
-*infinitely small number* of cases.
-
-Example:
-
->>> x = ufloat(3.14, 0.01)
->>> x == x
+>>> x = ufloat(1, 0.1)
+>>> a = 2 * x
+>>> b = x + x
+>>> print(a - b)
+0.0+/-0
+>>> print(a == b)
 True
 
-because a sample from the probability distribution of :data:`x` is always
-equal to itself. However:
+It might be the case that two random variables have the same marginal probability
+distribution but are uncorrelated.
+A weaker sense of equality between random variables may consider two such random
+variables to be equal.
+This is equivalent to two :class:`UFloat` objects having equal nominal values and
+standard deviations but being uncorrelated.
+The :mod:`uncertainties` package, however, keeps to the stronger sense of random
+variable equality, so two such :class:`UFloat` objects do not compare as equal.
 
->>> y = ufloat(3.14, 0.01)
->>> x == y
+>>> x = ufloat(1, 0.1)
+>>> y = ufloat(1, 0.1)
+>>> print(x - y)
+0.00+/-0.14
+>>> print(x == y)
 False
 
-since :data:`x` and :data:`y` are independent random variables that
-*almost* always give a different value (put differently,
-:data:`x`-:data:`y` is not equal to 0, as it can take many different
-values). Note that this is different
-from the result of ``z = 3.14; t = 3.14; print(z == t)``, because
-:data:`x` and :data:`y` are *random variables*, not pure numbers. 
- -Similarly, - ->>> x = ufloat(3.14, 0.01) ->>> y = ufloat(3.00, 0.01) ->>> x > y -True - -because :data:`x` is supposed to have a probability distribution largely -contained in the 3.14±~0.01 interval, while :data:`y` is supposed to be -well in the 3.00±~0.01 one: random samples of :data:`x` and :data:`y` will -most of the time be such that the sample from :data:`x` is larger than the -sample from :data:`y`. Therefore, it is natural to consider that for all -practical purposes, ``x > y``. - -Since comparison operations are subject to the same constraints as -other operations, as required by the :ref:`linear approximation -` method, their result should be essentially *constant* -over the regions of highest probability of their variables (this is -the equivalent of the linearity of a real function, for boolean -values). Thus, it is not meaningful to compare the following two -independent variables, whose probability distributions overlap: - ->>> x = ufloat(3, 0.01) ->>> y = ufloat(3.0001, 0.01) - -In fact the function (x, y) → (x > y) is not even continuous over the -region where x and y are concentrated, which violates the assumption -of approximate linearity made in this package on operations involving -numbers with uncertainties. Comparing such numbers therefore returns -a boolean result whose meaning is undefined. - -However, values with largely overlapping probability distributions can -sometimes be compared unambiguously: - ->>> x = ufloat(3, 1) ->>> x -3.0+/-1.0 ->>> y = x + 0.0002 ->>> y -3.0002+/-1.0 ->>> y > x -True - -In fact, correlations guarantee that :data:`y` is always larger than -:data:`x`: ``y-x`` correctly satisfies the assumption of linearity, -since it is a constant "random" function (with value 0.0002, even -though :data:`y` and :data:`x` are random). Thus, it is indeed true -that :data:`y` > :data:`x`. - - .. index:: linear propagation of uncertainties .. 
_linear_method: @@ -257,16 +197,6 @@ This indicates that **the derivative required by linear error propagation theory is not defined** (a Monte-Carlo calculation of the resulting random variable is more adapted to this specific case). -However, even in this case where the derivative at the nominal value -is infinite, the :mod:`uncertainties` package **correctly handles -perfectly precise numbers**: - ->>> umath.sqrt(ufloat(0, 0)) -0.0+/-0 - -is thus the correct result, despite the fact that the derivative of -the square root is not defined in zero. - .. _math_def_num_uncert: Mathematical definition of numbers with uncertainties diff --git a/doc/user_guide.rst b/doc/user_guide.rst index 0c555a23..6b49173c 100644 --- a/doc/user_guide.rst +++ b/doc/user_guide.rst @@ -5,28 +5,31 @@ User Guide ========== - -Basic usage -=========== - -Basic mathematical operations involving numbers with uncertainties requires -importing the :func:`ufloat` function which creates a :class:`Variable`: -number with both a nominal value and an uncertainty. - - >>> from uncertainties import ufloat - >>> x = ufloat(2.7, 0.01) # a Variable with a value 2.7+/-0.01 - -The :mod:`uncertainties` module contains sub-modules for :ref:`advanced -mathematical functions `, and :doc:`arrays and -matrices `, which can be accessed as well. +The :mod:`uncertainties` package is built around the :class:`UFloat` class. +:class:`UFloat` objects model statistical random variables. +A :class:`UFloat` object has a nominal value which models the mean of a random variable +and an uncertainty which can be used to calculate the standard deviation of the random +variable and correlations between one random variable and another. +It is possible to construct new random variables by applying mathematical operations to +and between existing random variables. +Likewise, it is possible to apply mathematical operations such as arithmetic or +trigonometric functions to :class:`UFloat` objects to get new :class:`UFloat` objects. 
+The nominal values pass through the mathematical operations as if they were regular
+:class:`float` objects, but the uncertainties propagate according to the approximate
+rules of linear error propagation based on local derivatives of the mathematical
+operations.
+
+In addition to the :class:`UFloat` class, the :mod:`uncertainties` package also provides
+sub-modules for :ref:`advanced mathematical functions ` and
+:doc:`array and matrix ` operations.
 
 .. index::
-    pair: number with uncertainty; creation
+    single: number with uncertainty
 
-Creating Variables: numbers with uncertainties
-================================================
+Creating UFloat objects: numbers with uncertainties
+===================================================
 
-To create a number with uncertainties or *Variable*, use the :func:`ufloat`
+To create a number with uncertainties or :class:`UFloat`, use the :func:`ufloat`
 function, which takes a *nominal value* (which can be interpreted as the most
 likely value, or the mean or central value of the distribution of values), a
 *standard error* (the standard deviation or :math:`1-\sigma` uncertainty), and
@@ -39,8 +42,8 @@ an optional *tag*:
   pair: nominal value; scalar
   pair: uncertainty; scalar
 
-You can access the nominal value and standard deviation for any Variable with
-the `nominal_value` and `std_dev` attributes:
+You can access the nominal value and standard deviation for any :class:`UFloat` object
+with the `nominal_value` and `std_dev` attributes:
 
 >>> print(x.nominal_value, x.std_dev)
 2.7 0.01
@@ -52,9 +55,9 @@ abbreviated as `n` and `std_dev` as `s`:
 >>> print(x.n, x.s)
 2.7 0.01
 
-uncertainties Variables can also be created from one of many string
-representations. The following forms will all create Variables with the same
-value:
+uncertainties :class:`UFloat` objects can also be created from one of many string
+representations. 
The following forms will all create :class:`UFloat` objects with
+the same values:
 
 >>> from uncertainties import ufloat_fromstr
 >>> x = ufloat(0.2, 0.01)
@@ -68,12 +71,13 @@ value:
 
 More details on the :func:`ufloat` and :func:`ufloat_fromstr` can be found in
 :ref:`api_funcs`.
 
-Basic math with uncertain Variables
-=========================================
+Basic math with uncertain UFloat objects
+========================================
 
-Uncertainties variables created in :func:`ufloat` or :func:`ufloat_fromstr` can
-be used in basic mathematical calculations (``+``, ``-``, ``*``, ``/``, ``**``)
-as with other Python numbers and variables.
+Uncertainties :class:`UFloat` objects created using :func:`ufloat` or
+:func:`ufloat_fromstr` can be used in basic mathematical calculations
+(``+``, ``-``, ``*``, ``/``, ``**``)
+as with other Python numbers.
 
 >>> t = ufloat(0.2, 0.01)
 >>> double = 2.0*t
@@ -83,44 +87,45 @@ as with other Python numbers and variables.
 >>> print(square)
 0.040+/-0.004
 
-When adding two Variables, the uncertainty in the result is the quadrature sum
-(square-root of the sum of squares) of the uncertainties of the two Variables:
+When adding two uncorrelated :class:`UFloat` objects, the standard deviation in the
+resulting :class:`UFloat` object is the quadrature sum (square-root of the sum of
+squares) of the standard deviations of the two input :class:`UFloat` objects.
 
 >>> x = ufloat(20, 4)
 >>> y = ufloat(12, 3)
 >>> print(x+y)
 32+/-5
 
-We can check that error propagation when adding two independent variables
-(using the abbreviation `.s` for the standard error):
+Note that calls to :func:`ufloat` create :class:`UFloat` objects which have no
+correlation to any previously created :class:`UFloat` objects.
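The quadrature-sum rules documented in the hunk above can be sanity-checked with plain Python, independently of the package itself. The helper names `add_std` and `mul_std` below are illustrative, not part of the `uncertainties` API:

```python
import math

# Standalone check of the linear error propagation rules described in the
# docs above; these helpers are illustrative, not part of the package API.
def add_std(sx, sy):
    """Std dev of the sum of two uncorrelated quantities: quadrature sum."""
    return math.sqrt(sx**2 + sy**2)

def mul_std(nx, sx, ny, sy):
    """Std dev of a product: relative uncertainties add in quadrature."""
    return abs(nx * ny) * math.sqrt((sx / nx) ** 2 + (sy / ny) ** 2)

print(add_std(4, 3))          # 5.0, matching the 32+/-5 doctest
print(mul_std(20, 4, 12, 3))  # ~76.8, matching the (2.4+/-0.8)e+02 doctest
```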
+We can check the correctness of the error propagation for adding two uncorrelated
+:class:`UFloat` objects:
 
 >>> from math import sqrt
 >>> (x+y).s == sqrt(x.s**2 + y.s**2)
 True
 
-Multiplying two Variables will properly propagate those
-uncertainties too:
+We can also multiply two :class:`UFloat` objects:
 
 >>> print(x*y)
 (2.4+/-0.8)e+02
 >>> (x*y).s == (x*y).n * sqrt((x.s/x.n)**2 + (y.s/y.n)**2 )
 True
 
-But note that adding a Variable to itself does not add its uncertainties in
-quadrature, but are simply scaled:
+However, consider the behavior when we add a :class:`UFloat` object to itself:
 
 >>> print(x+x)
 40+/-8
 >>> print(3*x + 10)
 70+/-12
 
-It is important to understand that calculations done with Variable know about
-the correlation between the Variables. Variables created with :func:`ufloat`
-(and :func:`ufloat_fromstr`) are completely uncorrelated with each other, but
-are known to be completely correlated with themselves. This means that
-
+In this case the resulting standard deviation is not the quadrature sum of the
+standard deviations of the inputs.
+This is because the :class:`UFloat` ``x`` is correlated with itself, so the error
+propagation must be handled accordingly.
+This begins to demonstrate that calculations performed using :class:`UFloat` objects are
+aware of correlations between :class:`UFloat` objects, as in these two simple examples:
 >>> x = ufloat(5, 0.5)
 >>> y = ufloat(5, 0.5)
@@ -129,20 +134,15 @@ are known to be completely correlated with themselves. This means that
 >>> x - x
 0.0+/-0
 
-For two *different* Variables, uncorrelated uncertainties will be propagated.
-But when doing a calculation with a single Variable, the uncertainties are
-correlated, and calculations will reflect that.
-
-
 .. index:: mathematical operation; on a scalar, umath
 
 .. 
_advanced math operations:
 
-Mathematical operations with uncertain Variables
+Mathematical operations with uncertain UFloat objects
 =====================================================
 
-Besides being able to apply basic mathematical operations to uncertainties
-Variables, this package provides generalized versions of 40 of the the
+Besides being able to apply basic arithmetic operations to uncertainties
+:class:`UFloat` objects, this package provides generalized versions of 40 of the
 functions from the standard :mod:`math` *module*. These mathematical functions
 are found in the :mod:`uncertainties.umath` module:
@@ -179,15 +179,13 @@ Comparison operators (``==``, ``!=``, ``>``, ``<``, ``>=``, and ``<=``) for Vari
 uncertainties are somewhat complicated, and need special attention. As we
 hinted at above, and will explore in more detail below and in the
 :ref:`Technical Guide `, this relates to the correlation
-between Variables.
-
-
+between :class:`UFloat` objects.
 
 Equality and inequality comparisons
 ------------------------------------
 
-If we compare the equality of two Variables with the same nominal value and
-uncertainty, we see
+If we compare the equality of two :class:`UFloat` objects with the same nominal value
+and standard deviation, we see
 
 >>> x = ufloat(5, 0.5)
 >>> y = ufloat(5, 0.5)
@@ -196,60 +194,26 @@ True
 >>> x == y
 False
 
-The difference here is that although the two Python objects have the same
-nominal value and uncertainty, these are independent, uncorrelated values. It
-is not exactly true that the difference is based on identity, note that
+The difference here is that although the two :class:`UFloat` objects have the same
+nominal value and standard deviation, they model two *uncorrelated* random variables.
+This can be demonstrated with the :meth:`UFloat.covariance` method. 
->>> x == (1.0*x) -True ->>> x is (1.0*x) -False +>>> print(x.covariance(x)) +0.25 +>>> print(x.covariance(y)) +0 In order for the result of two calculations with uncertainties to be considered equal, the :mod:`uncertainties` package does not test whether the nominal value -and the uncertainty have the same value. Instead it checks whether the -difference of the two calculations has a nominal value of 0 *and* an -uncertainty of 0. +and the standard deviation have the same value. Instead it checks whether the +difference of the two calculations has a nominal value of 0 *and* a standard deviation +of 0. >>> (x -x) 0.0+/-0 >>> (x -y) 0.0+/-0.7071067811865476 - -Comparisons of magnitude ------------------------------------- - -The concept of comparing the magnitude of values with uncertainties is a bit -complicated. That is, a Variable with a value of 25 +/- 10 might be greater -than a Variable with a value of 24 +/- 8 most of the time, but *sometimes* it -might be less than it. The :mod:`uncertainties` package takes the simple -approach of comparing nominal values. That is - ->>> a = ufloat(25, 10) ->>> b = ufloat(24, 8) ->>> a > b -True - -Note that combining this comparison and the above discussion of `==` and `!=` -can lead to a result that maybe somewhat surprising: - - ->>> a = ufloat(25, 10) ->>> b = ufloat(25, 8) ->>> a >= b -False ->>> a > b -False ->>> a == b -False ->>> a.nominal_value >= b.nominal_value -True - -That is, since `a` is neither greater than `b` (nominal value only) nor equal to -`b`, it cannot be greater than or equal to `b`. - - .. index:: pair: testing (scalar); NaN @@ -262,10 +226,10 @@ Variable. As is always the case, care must be exercised when handling NaN values. While :func:`math.isnan` and :func:`numpy.isnan` will raise `TypeError` -exceptions for uncertainties Variables (because an uncertainties Variable is -not a float), the function :func:`umath.isnan` will return whether the nominal -value of a Variable is NaN. 
Similarly, :func:`umath.isinf` will return whether
-the nominal value of a Variable is infinite.
+exceptions for uncertainties :class:`UFloat` objects (because an uncertainties
+:class:`UFloat` object is not a float), the function :func:`umath.isnan` will return
+whether the nominal value of a :class:`UFloat` object is NaN. Similarly,
+:func:`umath.isinf` will return whether the nominal value of a :class:`UFloat` object
+is infinite.
 
 To check whether the uncertainty is NaN or Inf, use one of :func:`math.isnan`,
 :func:`math.isinf`, :func:`numpy.isnan`, or :func:`numpy.isinf` on the
@@ -336,8 +300,8 @@ For ``x < 0`` the ``y`` derivative is always complex valued so a NaN value is us
 Automatic correlations
 ======================
 
-Correlations between variables are **automatically handled** whatever
-the number of variables involved, and whatever the complexity of the
+Correlations between :class:`UFloat` objects are **automatically handled** whatever
+the number of :class:`UFloat` objects involved, and whatever the complexity of the
 calculation. For example, when :data:`x` is the number with uncertainty defined
 above,
@@ -365,58 +329,128 @@ transparently.
 
-Access to the individual sources of uncertainty
-===============================================
-
-The various contributions to an uncertainty can be obtained through the
-:func:`error_components` method, which maps the **independent variables
-a quantity depends on** to their **contribution to the total
-uncertainty**. According to :ref:`linear error propagation theory
-` (which is the method followed by :mod:`uncertainties`),
-the sum of the squares of these contributions is the squared
-uncertainty.
+UAtoms: How Uncertainty and Correlations are Tracked
+====================================================
+
+The basic, indivisible unit of uncertainty in the :mod:`uncertainties` package is the
+:class:`UAtom`.
+A :class:`UAtom` models a random variable with zero mean and unity standard deviation. 
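The bookkeeping this new section describes can be sketched in a few lines of plain Python. This is a toy model for illustration only, not the package's real :class:`UAtom`/:class:`UCombo` classes: an uncertainty is represented as a dict mapping independent, unit-variance "atoms" to float weights.

```python
import math

# Toy model of the UAtom/UCombo bookkeeping (illustrative only): an
# uncertainty is a dict mapping independent, unit-variance "atoms" to weights.
def std_dev(ucombo):
    # standard deviation: square root of the sum of squared atom weights
    return math.sqrt(sum(w**2 for w in ucombo.values()))

def covariance(a, b):
    # covariance: sum of products of weights over the shared atoms
    return sum(a[k] * b[k] for k in a.keys() & b.keys())

x = {"atom_x": 0.1}                  # like ufloat(1, 0.1)
y = {"atom_y": 0.3}                  # like ufloat(2, 0.3)
z = {"atom_x": 0.2, "atom_y": 0.3}   # like z = x * y (weights from derivatives)

print(std_dev(z))       # ~0.3606, matching z.std_dev in the doctests below
print(covariance(z, x)) # ~0.02, nonzero because z and x share atom_x
```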
+Every :class:`UAtom` is unique and uncorrelated with every other :class:`UAtom`.
+The uncertainty of a :class:`UFloat` object is a :class:`UCombo` object which models a
+:class:`float`-weighted linear combination of :class:`UAtom` objects.
+A :class:`UFloat` object can be thought of as the sum of a fixed nominal value and a
+zero-mean :class:`UCombo` object.
+
+The standard deviation of a single :class:`UFloat` object is calculated by taking the
+square root of the sum of squares of the weights for all the :class:`UAtom` objects
+that make up the corresponding :attr:`uncertainty` attribute for that :class:`UFloat`
+object.
+The covariance between two :class:`UFloat` objects is calculated by taking the sum of
+the products of the weights of the :class:`UAtom` objects shared between the two
+:class:`UFloat` :attr:`uncertainty` attributes.
+
+Every time a new :class:`UFloat` is instantiated via the :func:`ufloat` function, a
+single new independent :class:`UAtom` is also instantiated (and given the optional tag
+passed into :func:`ufloat`) and paired with the new :class:`UFloat`.
+When :class:`UFloat` objects are combined together using mathematical operations, the
+resulting :class:`UFloat` objects inherit dependence on the :class:`UAtom` objects
+on which the input :class:`UFloat` objects depend in accordance with
+:ref:`linear error propagation theory `.
+In this way, the correlation between :class:`UFloat` objects can be tracked.
+
+We can access the :class:`UAtom` objects on which a given :class:`UFloat` depends,
+and their corresponding weights, using the :attr:`UFloat.error_components`
+attribute:
+
+
+.. testsetup:: uuid
+
+   from uncertainties import ufloat
+   from unittest.mock import patch
+   import uuid
+   import random
+
+   class FakeUUID4:
+       def __init__(self):
+           self.seed = 0
+           self.rnd = random.Random()
+
+       def __call__(self):
+           self.rnd.seed(self.seed)
+           fake_uuid = uuid.UUID(int=self.rnd.getrandbits(128), version=4)
+           self.seed += 1
+           return fake_uuid
+
+   fake_uuid4 = FakeUUID4()
+
+   p = patch('uncertainties.ucombo.uuid.uuid4', fake_uuid4)
+   p.start()
+
+.. doctest:: uuid
+
+   >>> x = ufloat(1, 0.1)
+   >>> y = ufloat(2, 0.3)
+   >>> z = x * y
+   >>> print(x.error_components)
+   {UAtom(e3e70682-c209-4cac-a29f-6fbed82c07cd): 0.1}
+   >>> print(y.error_components)
+   {UAtom(cd613e30-d8f1-4adf-91b7-584a2265b1f5): 0.3}
+   >>> print(z.error_components)
+   {UAtom(cd613e30-d8f1-4adf-91b7-584a2265b1f5): 0.3, UAtom(e3e70682-c209-4cac-a29f-6fbed82c07cd): 0.2}
+
+The standard deviation of each :class:`UFloat` is given by the square root of the sum
+of the squares of the weights for all the :class:`UAtom` objects on which that
+:class:`UFloat` depends:
+
+.. doctest:: uuid
+
+   >>> print(x.std_dev)
+   0.1
+   >>> print(y.std_dev)
+   0.3
+   >>> print(z.std_dev)
+   0.36055512754639896
+
+The :func:`ufloat` function accepts a ``tag`` argument.
+If a string is passed in as the ``tag``, then this tag gets added to the new
+:class:`UAtom` object that is instantiated together with the new :class:`UFloat`.
+Note that :class:`UFloat` objects do not carry tags, only the underlying :class:`UAtom`
+objects do.
+The tags on :class:`UAtom` objects can be used to keep track of the statistical
+relationships in a more human-readable way:
+
+.. doctest:: uuid
+
+   >>> x = ufloat(1, 0.1, tag='x')
+   >>> y = ufloat(2, 0.3, tag='y')
+   >>> z = x * y
+   >>>
+   >>> for uatom, weight in z.error_components.items():
+   ...     if uatom.tag is not None:
+   ...         label = uatom.tag
+   ...     else:
+   ...         label = uatom.uuid
+   ...     print(f"{label}: {weight}")
+   y: 0.3
+   x: 0.2
+
+
+.. testcleanup:: uuid
+
+   p.stop()
+
+The tags *do not have to be distinct*. For instance, *multiple* :class:`UFloat` objects
+can be tagged as ``"systematic"``, and their contribution to the total uncertainty of
+:data:`result` can simply be obtained as:
 
-The individual contributions to the uncertainty are more easily usable
-when the variables are **tagged**:
-
->>> u = ufloat(1, 0.1, "u variable")  # Tag
->>> v = ufloat(10, 0.1, "v variable")
->>> sum_value = u+2*v
->>> sum_value
-21.0+/-0.223606797749979
->>> for (var, error) in sum_value.error_components().items():
-...     print("{}: {}".format(var.tag, error))
-...
-v variable: 0.2
-u variable: 0.1
-
-The variance (i.e. squared uncertainty) of the result
-(:data:`sum_value`) is the quadratic sum of these independent
-uncertainties, as it should be (``0.1**2 + 0.2**2``).
-
-The tags *do not have to be distinct*. For instance, *multiple* random
-variables can be tagged as ``"systematic"``, and their contribution to
-the total uncertainty of :data:`result` can simply be obtained as:
-
->>> import math
->>> x = ufloat(132, 0.02, "statistical")
->>> y = ufloat(2.1, 0.05, "systematic")
->>> z = ufloat(12, 0.1, "systematic")
->>> result = x**y / z
+>>> import math
+>>> x = ufloat(132, 0.02, tag="statistical")
+>>> y = ufloat(2.1, 0.05, tag="systematic")
+>>> z = ufloat(12, 0.1, tag="systematic")
+>>> result = x**y / z
 >>> syst_error = math.sqrt(sum(  # Error from *all* systematic errors
 ...     error**2
-...     for (var, error) in result.error_components().items()
-...     if var.tag == "systematic"))
->>> print(format(syst_error, ".3f"))
-577.984
+...     for (uatom, error) in result.error_components.items()
+...     if uatom.tag == "systematic"))
 
 The remaining contribution to the uncertainty is:
 
 >>> other_error = math.sqrt(result.std_dev**2 - syst_error**2)
 
-The variance of :data:`result` is in fact simply the quadratic sum of
-these two errors, since the variables from
-:func:`result.error_components` are independent.
+The variance of :data:`result` is in fact simply the quadratic sum of these two errors,
+since the :class:`UAtom` objects from :attr:`UFloat.error_components` are independent.
 
 .. 
index:: comparison operators diff --git a/tests/helpers.py b/tests/helpers.py index fe1ae628..413438d0 100644 --- a/tests/helpers.py +++ b/tests/helpers.py @@ -1,5 +1,14 @@ from math import isclose, isnan +from uncertainties.core import UFloat + + +def get_single_uatom(num_with_uncertainty: UFloat): + error_components = num_with_uncertainty.error_components + if len(error_components) > 1: + raise ValueError("UFloat has more than one error component.") + return next(iter(error_components.keys())) + def nan_close(first, second, *, rel_tol=1e-9, abs_tol=0.0): if isnan(first): diff --git a/tests/test_formatting.py b/tests/test_formatting.py index 28c9834e..90fcb60d 100644 --- a/tests/test_formatting.py +++ b/tests/test_formatting.py @@ -50,10 +50,6 @@ def test_repr(): x = ufloat(3.14159265358979, 0) assert repr(x) == "3.14159265358979+/-0" - # Tagging: - x = ufloat(3, 1, "length") - assert repr(x) == "< length = 3.0+/-1.0 >" - # The way NaN is formatted with F, E and G depends on the version of Python (NAN for # Python 2.5+ at least): diff --git a/tests/test_power.py b/tests/test_power.py index 82d00e96..1651131b 100644 --- a/tests/test_power.py +++ b/tests/test_power.py @@ -6,7 +6,7 @@ from uncertainties.ops import pow_deriv_0, pow_deriv_1 from uncertainties.umath_core import pow as umath_pow -from helpers import nan_close +from helpers import nan_close, get_single_uatom pow_deriv_cases = [ @@ -69,36 +69,38 @@ def test_pow_deriv_0(x, y, x_deriv_expected, y_deriv_expected): power_derivative_cases, ) def test_power_derivatives(first_ufloat, second_ufloat, first_der, second_der): - result = pow(first_ufloat, second_ufloat) - first_der_result = result.derivatives[first_ufloat] - second_der_result = result.derivatives[second_ufloat] - assert nan_close(first_der_result, first_der) - assert nan_close(second_der_result, second_der) - - result = umath_pow(first_ufloat, second_ufloat) - first_der_result = result.derivatives[first_ufloat] - second_der_result = 
result.derivatives[second_ufloat] - assert nan_close(first_der_result, first_der) - assert nan_close(second_der_result, second_der) + for op in [pow, umath_pow]: + result = op(first_ufloat, second_ufloat) + + if first_ufloat.s != 0: + first_uatom = get_single_uatom(first_ufloat) + try: + first_der_result = result.error_components[first_uatom] / first_ufloat.s + except KeyError: + first_der_result = 0 + assert nan_close(first_der_result, first_der) + + if second_ufloat.s != 0: + second_uatom = get_single_uatom(second_ufloat) + try: + second_der_result = ( + result.error_components[second_uatom] / second_ufloat.s + ) + except KeyError: + second_der_result = 0 + assert nan_close(second_der_result, second_der) zero = ufloat(0, 0) one = ufloat(1, 0) p = ufloat(0.3, 0.01) -power_float_result_cases = [ +power_zero_std_dev_result_cases = [ (0, p, 0), - (zero, p, 0), - (float("nan"), zero, 1), - (one, float("nan"), 1), (p, 0, 1), (zero, 0, 1), (-p, 0, 1), - (-10.3, zero, 1), - (0, zero, 1), (0.3, zero, 1), - (-p, zero, 1), - (zero, zero, 1), (p, zero, 1), (one, -3, 1), (one, -3.1, 1), @@ -116,11 +118,13 @@ def test_power_derivatives(first_ufloat, second_ufloat, first_der, second_der): @pytest.mark.parametrize( "first_ufloat, second_ufloat, result_float", - power_float_result_cases, + power_zero_std_dev_result_cases, ) -def test_power_float_result_cases(first_ufloat, second_ufloat, result_float): +def test_power_zero_std_dev_result_cases(first_ufloat, second_ufloat, result_float): for op in [pow, umath_pow]: - assert op(first_ufloat, second_ufloat) == result_float + result = op(first_ufloat, second_ufloat) + assert result.n == result_float + assert result.s == 0 power_reference_cases = [ diff --git a/tests/test_umath.py b/tests/test_umath.py index 521c105a..dd00dd78 100644 --- a/tests/test_umath.py +++ b/tests/test_umath.py @@ -1,15 +1,15 @@ import json -import inspect import math from math import isnan from pathlib import Path import pytest +from helpers import 
get_single_uatom from uncertainties import ufloat import uncertainties.core as uncert_core -import uncertainties.umath_core as umath_core from uncertainties.ops import partial_derivative +import uncertainties.umath_core as umath_core from helpers import nan_close, ufloat_nan_close ############################################################################### @@ -44,7 +44,13 @@ def test_umath_function_derivatives(func_name, ufloat_tuples): result = func(*ufloat_arg_list) for arg_num, arg in enumerate(ufloat_arg_list): - ufloat_deriv_value = result.derivatives[arg] + uatom = get_single_uatom(arg) + orig_weight = arg.error_components[uatom] + try: + result_weight = result.error_components[uatom] + except KeyError: + result_weight = 0 + ufloat_deriv_value = result_weight / orig_weight numerical_deriv_func = partial_derivative(func, arg_num) numerical_deriv_value = numerical_deriv_func(*float_arg_list) assert math.isclose( @@ -200,13 +206,6 @@ def test_math_module(): # Regular operations are chosen to be unchanged: assert isinstance(umath_core.sin(3), float) - # factorial() must not be "damaged" by the umath_core module, so as - # to help make it a drop-in replacement for math (even though - # factorial() does not work on numbers with uncertainties - # because it is restricted to integers, as for - # math.factorial()): - assert umath_core.factorial(4) == 24 - # fsum is special because it does not take a fixed number of # variables: assert umath_core.fsum([x, x]).nominal_value == -3 @@ -264,24 +263,10 @@ def test_hypot(): """ x = ufloat(0, 1) y = ufloat(0, 2) + x_uatom = get_single_uatom(x) + y_uatom = get_single_uatom(y) # Derivatives that cannot be calculated simply return NaN, with no # exception being raised, normally: result = umath_core.hypot(x, y) - assert isnan(result.derivatives[x]) - assert isnan(result.derivatives[y]) - - -@pytest.mark.parametrize("function_name", umath_core.deprecated_functions) -def test_deprecated_function(function_name): - num_args = 
len(inspect.signature(getattr(math, function_name)).parameters) - args = [ufloat(1, 0.1)] - if num_args == 1: - if function_name == "factorial": - args[0] = 6 - else: - if function_name == "ldexp": - args.append(3) - else: - args.append(ufloat(-12, 2.4)) - with pytest.warns(FutureWarning, match="will be removed"): - getattr(umath_core, function_name)(*args) + assert isnan(result.error_components[x_uatom]) + assert isnan(result.error_components[y_uatom]) diff --git a/tests/test_uncertainties.py b/tests/test_uncertainties.py index a4cc3276..921d30f6 100644 --- a/tests/test_uncertainties.py +++ b/tests/test_uncertainties.py @@ -1,21 +1,21 @@ import copy import json -import inspect import math +import gc from pathlib import Path -import random # noqa +import pickle +import random import pytest import uncertainties.core as uncert_core from uncertainties.core import ( ufloat, - AffineScalarFunc, ufloat_fromstr, - deprecated_methods, ) from uncertainties import ( umath, + UFloat, correlated_values, correlated_values_norm, correlation_matrix, @@ -24,6 +24,7 @@ from helpers import ( nan_close, ufloat_nan_close, + get_single_uatom, ) @@ -33,29 +34,47 @@ np = None -def test_ufloat_function_construction(): - """ - Tests the various means of constructing a constant number with - uncertainty *without a string* (see test_ufloat_fromstr(), for this). 
- """ +def test_UFloat_class_construction(): + """Test creating UFloat directly.""" + x = UFloat(3, 0.14) + assert x.nominal_value == 3 + assert x.std_dev == 0.14 + assert get_single_uatom(x).tag is None + + x = UFloat(3, 0.14, "pi") + assert x.nominal_value == 3 + assert x.std_dev == 0.14 + assert get_single_uatom(x).tag == "pi" + + x = UFloat(3, 0.14, tag="pi") + assert x.nominal_value == 3 + assert x.std_dev == 0.14 + assert get_single_uatom(x).tag == "pi" - ## Simple construction: + with pytest.raises(uncert_core.NegativeStdDev): + _ = UFloat(3, -0.1) + + with pytest.raises(TypeError): + UFloat(1) + + +def test_ufloat_function_construction(): + """Test creating UFloat via ufloat() function.""" x = ufloat(3, 0.14) assert x.nominal_value == 3 assert x.std_dev == 0.14 - assert x.tag is None + assert get_single_uatom(x).tag is None - # ... with tag as positional argument: x = ufloat(3, 0.14, "pi") assert x.nominal_value == 3 assert x.std_dev == 0.14 - assert x.tag == "pi" + assert get_single_uatom(x).tag == "pi" # ... 
with tag keyword: x = ufloat(3, 0.14, tag="pi") assert x.nominal_value == 3 assert x.std_dev == 0.14 - assert x.tag == "pi" + assert get_single_uatom(x).tag == "pi" with pytest.raises(uncert_core.NegativeStdDev): _ = ufloat(3, -0.1) @@ -107,9 +126,6 @@ def test_ufloat_function_construction(): ("3.4(nan)e10", 3.4e10, float("nan")), # NaN value: ("nan+/-3.14e2", float("nan"), 314), - # "Double-floats" - ("(-3.1415 +/- 1e-4)e+200", -3.1415e200, 1e196), - ("(-3.1415e-10 +/- 1e-4)e+200", -3.1415e190, 1e196), # Special float representation: ("-3(0.)", -3, 0), ] @@ -120,23 +136,30 @@ def test_ufloat_fromstr(input_str, nominal_value, std_dev): num = ufloat_fromstr(input_str) assert nan_close(num.nominal_value, nominal_value) assert nan_close(num.std_dev, std_dev) - assert num.tag is None + if std_dev != 0: + assert get_single_uatom(num).tag is None + else: + assert num.uncertainty.expanded == {} # With a tag as positional argument: num = ufloat_fromstr(input_str, "test variable") assert nan_close(num.nominal_value, nominal_value) assert nan_close(num.std_dev, std_dev) - assert num.tag == "test variable" + if std_dev != 0: + assert get_single_uatom(num).tag == "test variable" + else: + assert num.uncertainty.expanded == {} # With a tag as keyword argument: num = ufloat_fromstr(input_str, tag="test variable") assert nan_close(num.nominal_value, nominal_value) assert nan_close(num.std_dev, std_dev) - assert num.tag == "test variable" + if std_dev != 0: + assert get_single_uatom(num).tag == "test variable" + else: + assert num.uncertainty.expanded == {} -############################################################################### - ufloat_method_cases_json_path = Path( Path(__file__).parent, "cases", @@ -167,12 +190,15 @@ def test_ufloat_method_derivativs(func_name, ufloat_tuples): extract the numerical partial derivative. 
""" bound_func = getattr(ufloat_arg_list[0], func_name) - unbound_func = getattr(AffineScalarFunc, func_name) + unbound_func = getattr(UFloat, func_name) result = bound_func(*ufloat_arg_list[1:]) for arg_num, arg in enumerate(ufloat_arg_list): - ufloat_deriv_value = result.derivatives[arg] + uatom = get_single_uatom(arg) + orig_weight = arg.error_components[uatom] + result_weight = result.error_components[uatom] + ufloat_deriv_value = result_weight / orig_weight numerical_deriv_func = partial_derivative(unbound_func, arg_num) numerical_deriv_value = numerical_deriv_func(*float_arg_list) assert math.isclose( @@ -191,153 +217,84 @@ def test_calculate_zero_equality(): def test_copy(): - "Standard copy module integration" - import gc - + """Standard copy module integration.""" x = ufloat(3, 0.1) assert x == x + x_uatom = get_single_uatom(x) + y = copy.copy(x) - assert x != y - assert not (x == y) - assert y in y.derivatives.keys() # y must not copy the dependence on x + assert y == x + assert get_single_uatom(y) == x_uatom z = copy.deepcopy(x) - assert x != z + assert z == x + assert get_single_uatom(z) == x_uatom - # Copy tests on expressions: t = x + 2 * z - # t depends on x: - assert x in t.derivatives + assert x_uatom in t.error_components - # The relationship between the copy of an expression and the - # original variables should be preserved: t_copy = copy.copy(t) - # Shallow copy: the variables on which t depends are not copied: - assert x in t_copy.derivatives + assert x_uatom in t_copy.error_components assert uncert_core.covariance_matrix([t, z]) == uncert_core.covariance_matrix( [t_copy, z] ) - # However, the relationship between a deep copy and the original - # variables should be broken, since the deep copy created new, - # independent variables: t_deepcopy = copy.deepcopy(t) - assert x not in t_deepcopy.derivatives - assert uncert_core.covariance_matrix([t, z]) != uncert_core.covariance_matrix( + assert x_uatom in t_deepcopy.error_components + assert 
uncert_core.covariance_matrix([t, z]) == uncert_core.covariance_matrix( [t_deepcopy, z] ) - # Test of implementations with weak references: - - # Weak references: destroying a variable should never destroy the - # integrity of its copies (which would happen if the copy keeps a - # weak reference to the original, in its derivatives member: the - # weak reference to the original would become invalid): del x - gc.collect() - - assert y in list(y.derivatives.keys()) + assert x_uatom in y.error_components -## Classes for the pickling tests (put at the module level, so that -## they can be unpickled): +""" +Classes to test pickling with different types of __slots__ inheritance +""" -# Subclass without slots: -class NewVariable_dict(uncert_core.Variable): +class UFloatDict(UFloat): pass -# Subclass with slots defined by a tuple: -class NewVariable_slots_tuple(uncert_core.Variable): +class UFloatSlotsTuple(UFloat): __slots__ = ("new_attr",) -# Subclass with slots defined by a string: -class NewVariable_slots_str(uncert_core.Variable): +class UFloatSlotsStr(UFloat): __slots__ = "new_attr" def test_pickling(): - "Standard pickle module integration." 
- - import pickle - - x = ufloat(2, 0.1) - + """Standard pickle module integration.""" + x = UFloat(2, 0.1) x_unpickled = pickle.loads(pickle.dumps(x)) - assert x != x_unpickled # Pickling creates copies + assert x_unpickled == x - ## Tests with correlations and AffineScalarFunc objects: f = 2 * x - assert isinstance(f, AffineScalarFunc) - (f_unpickled, x_unpickled2) = pickle.loads(pickle.dumps((f, x))) - # Correlations must be preserved: - assert f_unpickled - x_unpickled2 - x_unpickled2 == 0 + assert isinstance(f, UFloat) - ## Tests with subclasses: + f_unpickled, x_unpickled2 = pickle.loads(pickle.dumps((f, x))) + assert f_unpickled == 2 * x_unpickled2 - for subclass in (NewVariable_dict, NewVariable_slots_tuple, NewVariable_slots_str): + for subclass in (UFloatDict, UFloatSlotsTuple, UFloatSlotsStr): x = subclass(3, 0.14) # Pickling test with possibly uninitialized slots: - pickle.loads(pickle.dumps(x)) + assert pickle.loads(pickle.dumps(x)) == x # Unpickling test: x.new_attr = "New attr value" x_unpickled = pickle.loads(pickle.dumps(x)) - # Must exist (from the slots of the parent class): - x_unpickled.nominal_value - x_unpickled.new_attr # Must exist - - ## - - # Corner case test: when an attribute is present both in __slots__ - # and in __dict__, it is first looked up from the slots - # (references: - # http://docs.python.org/2/reference/datamodel.html#invoking-descriptors, - # http://stackoverflow.com/a/15139208/42973). 
As a consequence, - # the pickling process must pickle the correct value (i.e., not - # the value from __dict__): - x = NewVariable_dict(3, 0.14) - x._nominal_value = "in slots" - # Corner case: __dict__ key which is also a slot name (it is - # shadowed by the corresponding slot, so this is very unusual, - # though): - x.__dict__["_nominal_value"] = "in dict" - # Additional __dict__ attribute: - x.dict_attr = "dict attribute" + assert x_unpickled == x + assert x_unpickled.new_attr == "New attr value" - x_unpickled = pickle.loads(pickle.dumps(x)) - # We make sure that the data is still there and untouched: - assert x_unpickled._nominal_value == "in slots" - assert x_unpickled.__dict__ == x.__dict__ - - ## - - # Corner case that should have no impact on the code but which is - # not prevented by the documentation: case of constant linear - # terms (the potential gotcha is that if the linear_combo - # attribute is empty, __getstate__()'s result could be false, and - # so __setstate__() would not be called and the original empty - # linear combination would not be set in linear_combo. - x = uncert_core.LinearCombination({}) - assert pickle.loads(pickle.dumps(x)).linear_combo == {} - - -def test_int_div(): - "Integer division" - # We perform all operations on floats, because derivatives can - # otherwise be meaningless: - x = ufloat(3.9, 2) // 2 - assert x.nominal_value == 1.0 - # All errors are supposed to be small, so the ufloat() - # in x violates the assumption. 
Therefore, the following is - # correct: - assert x.std_dev == 0.0 + x = uncert_core.UCombo(()) + assert pickle.loads(pickle.dumps(x)).ucombo_tuple == () def test_comparison_ops(): @@ -345,28 +302,12 @@ def test_comparison_ops(): # Operations on quantities equivalent to Python numbers must still # be correct: - a = ufloat(-3, 0) b = ufloat(10, 0) c = ufloat(10, 0) - assert a < b - assert a < 3 - assert 3 < b # This is first given to int.__lt__() assert b == c x = ufloat(3, 0.1) - # One constraint is that usual Python code for inequality testing - # still work in a reasonable way (for instance, it is generally - # desirable that functions defined by different formulas on - # different intervals can still do "if 0 < x < 1:...". This - # supposes again that errors are "small" (as for the estimate of - # the standard error). - assert x > 1 - - # The limit case is not obvious: - assert not (x >= 3) - assert not (x < 3) - assert x == x # Comparaison between Variable and AffineScalarFunc: assert x == x + 0 @@ -384,7 +325,7 @@ def test_comparison_ops(): # Comparison to other types should work: assert x is not None # Not comparable - assert x - x == 0 # Comparable, even though the types are different + assert x - x != 0 # Equality comparison with float is always False assert x != [1, 2] #################### @@ -417,7 +358,7 @@ def random_float(var): return (random.random() - 0.5) * min(var.std_dev, 1e-5) + var.nominal_value # All operations are tested: - for op in ["__%s__" % name for name in ("ne", "eq", "lt", "le", "gt", "ge")]: + for op in ["__%s__" % name for name in ("ne", "eq")]: try: float_func = getattr(float, op) except AttributeError: # Python 2.3's floats don't have __ne__ @@ -474,67 +415,58 @@ def random_float(var): def test_logic(): - "Boolean logic: __nonzero__, bool." - + "bool defers to object.__bool__ and always returns True." 
x = ufloat(3, 0) y = ufloat(0, 0) z = ufloat(0, 0.1) t = ufloat(-1, 2) assert bool(x) - assert not bool(y) + assert bool(y) assert bool(z) - assert bool(t) # Only infinitseimal neighborhood are used + assert bool(t) def test_basic_access_to_data(): "Access to data from Variable and AffineScalarFunc objects." x = ufloat(3.14, 0.01, "x var") - assert x.tag == "x var" + assert get_single_uatom(x).tag == "x var" assert x.nominal_value == 3.14 assert x.std_dev == 0.01 # Case of AffineScalarFunc objects: y = x + 0 - assert type(y) == AffineScalarFunc + assert type(y) == UFloat assert y.nominal_value == 3.14 assert y.std_dev == 0.01 # Details on the sources of error: a = ufloat(-1, 0.001) y = 2 * x + 3 * x + 2 + a - error_sources = y.error_components() - assert len(error_sources) == 2 # 'a' and 'x' - assert error_sources[x] == 0.05 - assert error_sources[a] == 0.001 + error_sources = y.error_components + assert len(error_sources) == 2 + # 'a' and 'x' + assert y.covariance(x) == 0.05 * 0.01 + assert y.covariance(a) == 0.001 * 0.001 - # Derivative values should be available: - assert y.derivatives[x] == 5 - - # Modification of the standard deviation of variables: - x.std_dev = 1 - assert y.error_components()[x] == 5 # New error contribution! 
+ with pytest.raises(AttributeError): + # std_dev cannot be modified + x.std_dev = 1 # Calculated values with uncertainties should not have a settable # standard deviation: y = 2 * x - try: + with pytest.raises(AttributeError): y.std_dev = 1 - except AttributeError: - pass - else: - raise Exception("std_dev should not be settable for calculated results") # Calculation of deviations in units of the standard deviations: assert 10 / x.std_dev == x.std_score(10 + x.nominal_value) # "In units of the standard deviation" is not always meaningful: - x.std_dev = 0 - try: + x = ufloat(1, 0) + with pytest.raises(ValueError): x.std_score(1) - except ValueError: - pass # Normal behavior def test_correlations(): @@ -957,6 +889,9 @@ def f(x, y, *args, **kwargs): z = ufloat(100, 0.111) t = ufloat(0.1, 0.1111) + z_uatom = get_single_uatom(z) + t_uatom = get_single_uatom(t) + assert ufloat_nan_close( f_wrapped(x, y, z, t=t), f_auto_unc(x, y, z, t=t), tolerance=1e-5 ) @@ -975,8 +910,8 @@ def f(x, y, *args, **kwargs): # to try to confuse the code: assert ( - f_wrapped2(x, y, z, t=t).derivatives[y] - == f_auto_unc(x, y, z, t=t).derivatives[y] + f_wrapped2(x, y, z, t=t).error_components[z_uatom] + == f_auto_unc(x, y, z, t=t).error_components[z_uatom] ) # Derivatives supplied through the keyword-parameter dictionary of @@ -992,12 +927,12 @@ def f(x, y, *args, **kwargs): # The derivatives should be exactly the same, because they are # obtained with the exact same analytic formula: assert ( - f_wrapped3(x, y, z, t=t).derivatives[z] - == f_auto_unc(x, y, z, t=t).derivatives[z] + f_wrapped3(x, y, z, t=t).error_components[z_uatom] + == f_auto_unc(x, y, z, t=t).error_components[z_uatom] ) assert ( - f_wrapped3(x, y, z, t=t).derivatives[t] - == f_auto_unc(x, y, z, t=t).derivatives[t] + f_wrapped3(x, y, z, t=t).error_components[t_uatom] + == f_auto_unc(x, y, z, t=t).error_components[t_uatom] ) ######################################## @@ -1106,20 +1041,6 @@ def test_numpy_comparison(): assert 
len(numpy.array([x, x, x]) == x) == 3 assert numpy.all(x == numpy.array([x, x, x])) - # Inequalities: - assert len(x < numpy.arange(10)) == 10 - assert len(numpy.arange(10) > x) == 10 - assert len(x <= numpy.arange(10)) == 10 - assert len(numpy.arange(10) >= x) == 10 - assert len(x > numpy.arange(10)) == 10 - assert len(numpy.arange(10) < x) == 10 - assert len(x >= numpy.arange(10)) == 10 - assert len(numpy.arange(10) <= x) == 10 - - # More detailed test, that shows that the comparisons are - # meaningful (x >= 0, but not x <= 1): - assert numpy.all((x >= numpy.arange(3)) == [True, False, False]) - def test_correlated_values(): """ Correlated variables. @@ -1276,7 +1197,7 @@ def test_correlated_values_correlation_mat(): list(zip(nominal_values, std_devs)), corr_mat ) - # uarrays_close() is used instead of numbers_close() because + # uarrays_close() is used instead of nan_close() because # it compares uncertainties too: # Test of individual variables: @@ -1336,23 +1257,6 @@ def test_no_numpy(): _ = correlation_matrix([x, y, z]) -@pytest.mark.parametrize("method_name", deprecated_methods) -def test_deprecated_method(method_name): - x = ufloat(1, 0.1) - y = ufloat(-12, 2.4) - num_args = len(inspect.signature(getattr(float, method_name)).parameters) - with pytest.warns(FutureWarning, match="will be removed"): - if num_args == 1: - getattr(x, method_name)() - else: - getattr(x, method_name)(y) - - -def test_deprecated_bool(): - with pytest.warns(FutureWarning, match="is deprecated.*will defer to"): - bool(ufloat(0, 0)) - - def test_zero_std_dev_warn(): with pytest.warns(UserWarning, match="std_dev==0.*unexpected results"): ufloat(1, 0) diff --git a/tests/test_unumpy.py b/tests/test_unumpy.py index 701137ff..8a78bfc6 100644 --- a/tests/test_unumpy.py +++ b/tests/test_unumpy.py @@ -9,7 +9,7 @@ import uncertainties.core as uncert_core from uncertainties import ufloat, unumpy from uncertainties.unumpy import core -from helpers import nan_close, uarrays_close +from helpers 
import get_single_uatom, nan_close, uarrays_close def test_numpy(): @@ -31,19 +31,6 @@ def test_numpy(): # Operations with arrays work (they are first handled by NumPy, # then by this module): prod1 * prod2 # This should be calculable - assert not (prod1 - prod2).any() # All elements must be 0 - - # Comparisons work too: - - # Usual behavior: - assert len(arr[arr > 1.5]) == 1 - # Comparisons with Variable objects: - assert len(arr[arr > ufloat(1.5, 0.1)]) == 1 - - assert len(prod1[prod1 < prod1 * prod2]) == 2 - - # The following can be calculated (special NumPy abs() function): - numpy.abs(arr + ufloat(-1, 0.1)) # The following does not completely work, because NumPy does not # implement numpy.exp on an array of general objects, apparently: @@ -81,29 +68,13 @@ def test_matrix(): # Test of the nominal_value attribute: assert numpy.all(m_nominal_values == m.nominal_values) - assert type(m[0, 0]) == uncert_core.Variable + assert type(m[0, 0]) == uncert_core.UFloat # Test of scalar multiplication, both sides: 3 * m m * 3 -def derivatives_close(x, y): - """ - Returns True iff the AffineScalarFunc objects x and y have - derivatives that are close to each other (they must depend - on the same variables). 
- """ - - # x and y must depend on the same variables: - if set(x.derivatives) != set(y.derivatives): - return False # Not the same variables - - return all( - nan_close(x.derivatives[var], y.derivatives[var]) for var in x.derivatives - ) - - def test_inverse(): "Tests of the matrix inverse" @@ -139,6 +110,7 @@ def test_inverse(): # Checks of the covariances between elements: x = ufloat(10, 1) + x_uatom = get_single_uatom(x) m = unumpy.matrix([[x, x], [0, 3 + 2 * x]]) m_inverse = m.I @@ -152,18 +124,16 @@ def test_inverse(): assert uarrays_close(m_double_inverse, m).all() - # Partial test: - assert derivatives_close(m_double_inverse[0, 0], m[0, 0]) - assert derivatives_close(m_double_inverse[1, 1], m[1, 1]) - #################### # Tests of covariances during the inversion: # There are correlations if both the next two derivatives are # not zero: - assert m_inverse[0, 0].derivatives[x] - assert m_inverse[0, 1].derivatives[x] + assert x_uatom in m_inverse[0, 0].error_components + assert m_inverse[0, 0].error_components[x_uatom] != 0 + assert x_uatom in m_inverse[0, 1].error_components + assert m_inverse[0, 1].error_components[x_uatom] != 0 # Correlations between m and m_inverse should create a perfect # inversion: @@ -179,7 +149,7 @@ def test_wrap_array_func(): # Function that works with numbers with uncertainties in mat (if # mat is an uncertainties.unumpy.matrix): def f_unc(mat, *args, **kwargs): - return mat.I + args[0] * kwargs["factor"] + return numpy.matrix(mat).I + args[0] * kwargs["factor"] # Test with optional arguments and keyword arguments: def f(mat, *args, **kwargs): @@ -189,7 +159,7 @@ def f(mat, *args, **kwargs): return f_unc(mat, *args, **kwargs) # Wrapped function: - f_wrapped = core.wrap_array_func(f) + f_wrapped = core.to_uarray_func(f) ########## # Full rank rectangular matrix: @@ -207,7 +177,7 @@ def test_pseudo_inverse(): "Tests of the pseudo-inverse" # Numerical version of the pseudo-inverse: - pinv_num = 
core.wrap_array_func(numpy.linalg.pinv) + pinv_num = core.to_uarray_func(numpy.linalg.pinv) ########## # Full rank rectangular matrix: diff --git a/uncertainties/core.py b/uncertainties/core.py index 0b2f4018..dacd360f 100644 --- a/uncertainties/core.py +++ b/uncertainties/core.py @@ -15,13 +15,11 @@ from __future__ import division # Many analytical derivatives depend on this from builtins import str, zip, range, object +from math import isnan, sqrt # Optimization: no attribute look-up +from typing import Optional, Union import functools -from math import sqrt, isfinite # Optimization: no attribute look-up from warnings import warn -import copy -import collections - from uncertainties.formatting import format_ufloat from uncertainties.parsing import str_to_number_with_uncert from . import ops @@ -32,6 +30,7 @@ modified_operators, modified_ops_with_reflection, ) +from uncertainties.ucombo import UCombo, UAtom # Attributes that are always exported (some other attributes are # exported only if the NumPy module is available...): @@ -50,7 +49,6 @@ # Variable subclass), but possibly manipulated by external code # ['derivatives()' method, etc.]. 
"UFloat", - "Variable", # Wrapper for allowing non-pure-Python function to handle # quantitities with uncertainties: "wrap", @@ -189,23 +187,26 @@ def correlated_values_norm(values_with_std_dev, correlation_mat, tags=None): # Creation of new, independent variables: - # We use the fact that the eigenvectors in 'transform' are - # special: 'transform' is unitary: its inverse is its transpose: - - variables = tuple( - # The variables represent "pure" uncertainties: - Variable(0, sqrt(variance), tag) - for (variance, tag) in zip(variances, tags) + ind_vars = tuple( + UCombo(((UAtom(tag), sqrt(variance)),)) + for variance, tag in zip(variances, tags) ) # The coordinates of each new uncertainty as a function of the # new variables must include the variable scale (standard deviation): transform *= std_devs[:, numpy.newaxis] + corr_vars = [] + for sub_coords in transform: + corr_var = sum( + (ind_var * coord for ind_var, coord in zip(ind_vars, sub_coords)), + UCombo(()), + ) + corr_vars.append(corr_var) + # Representation of the initial correlated values: values_funcs = tuple( - AffineScalarFunc(value, LinearCombination(dict(zip(variables, coords)))) - for (coords, value) in zip(transform, nominal_values) + UFloat(value, corr_var) for (corr_var, value) in zip(corr_vars, nominal_values) ) return values_funcs @@ -234,310 +235,118 @@ def correlation_matrix(nums_with_uncert): ######################################## -class LinearCombination(object): +class UFloat(object): """ - Linear combination of Variable differentials. - - The linear_combo attribute can change formally, but its value - always remains the same. Typically, the linear combination can - thus be expanded. - - The expanded form of linear_combo is a mapping from Variables to - the coefficient of their differential. + UFloat objects represent random variables. + + UFloat objects are represented by a float nominal value which gives + the mean of the abstract random variable and a UCombo uncertainty. 
+ The UCombo uncertainty is a linear combination of UAtoms, where each + UAtom is like a random variable with zero mean and unit variance. + + The variance and standard deviation of the UFloat random variable + can be calculated using the uncertainty UCombo. Also, if two UFloat + objects share uncertainty dependence on any shared UAtoms then those + two UFloats may exhibit correlations which can be calculated. + + Finally, UFloat objects can be passed through simple or complicated + mathematical operations such as addition or trigonometric + functions. The result of these operations will be new UFloat + objects whose uncertainty is calculated according to the rules of + linear error propagation. """ - # ! Invariant: linear_combo is represented internally exactly as - # the linear_combo argument to __init__(): - __slots__ = "linear_combo" + __slots__ = ("_nominal_value", "_uncertainty") - def __init__(self, linear_combo): - """ - linear_combo can be modified by the object, during its - lifetime. This allows the object to change its internal - representation over time (for instance by expanding the linear - combination and replacing the original expression with the - expanded one). - - linear_combo -- if linear_combo is a dict, then it represents - an expanded linear combination and must map Variables to the - coefficient of their differential. Otherwise, it should be a - list of (coefficient, LinearCombination) pairs (that - represents a linear combination expression). + def __init__( + self, + nominal_value: float, + uncertainty: Union[UCombo, Union[float, int]], + tag: Optional[str] = None, + ): """ + nominal_value -- (float) mean value of the random variable. - self.linear_combo = linear_combo - - def __bool__(self): - """ - Return True only if the linear combination is non-empty, i.e. if - the linear combination contains any term. - """ - return bool(self.linear_combo) + uncertainty -- Uncertainty of the random variable.
This can either be a UCombo + object which represents a linear combination of UAtoms or a simple non-negative + float. In the latter case the non-negative float uncertainty will be created to + use a single term UCombo with a new UAtom using that float as the weight for the + single new UAtom. - def expanded(self): + tag -- (string) optional tag for the new UAtom used if the uncertainty is a + float such that a new UCombo and UAtom is generated. """ - Return True if and only if the linear combination is expanded. - """ - return isinstance(self.linear_combo, dict) - - def expand(self): - """ - Expand the linear combination. - - The expansion is a collections.defaultdict(float). - - This should only be called if the linear combination is not - yet expanded. - """ - - # The derivatives are built progressively by expanding each - # term of the linear combination until there is no linear - # combination to be expanded. - - # Final derivatives, constructed progressively: - derivatives = collections.defaultdict(float) - - while self.linear_combo: # The list of terms is emptied progressively - # One of the terms is expanded or, if no expansion is - # needed, simply added to the existing derivatives. 
- # - # Optimization note: since Python's operations are - # left-associative, a long sum of Variables can be built - # such that the last term is essentially a Variable (and - # not a NestedLinearCombination): popping from the - # remaining terms allows this term to be quickly put in - # the final result, which limits the number of terms - # remaining (and whose size can temporarily grow): - (main_factor, main_expr) = self.linear_combo.pop() - - # print "MAINS", main_factor, main_expr - - if main_expr.expanded(): - for var, factor in main_expr.linear_combo.items(): - derivatives[var] += main_factor * factor - - else: # Non-expanded form - for factor, expr in main_expr.linear_combo: - # The main_factor is applied to expr: - self.linear_combo.append((main_factor * factor, expr)) - - # print "DERIV", derivatives - - self.linear_combo = derivatives - - def __getstate__(self): - # Not false, otherwise __setstate__() will not be called: - return (self.linear_combo,) - - def __setstate__(self, state): - (self.linear_combo,) = state - - -class AffineScalarFunc(object): - """ - Affine functions that support basic mathematical operations - (addition, etc.). Such functions can for instance be used for - representing the local (linear) behavior of any function. - - This class can also be used to represent constants. - - The variables of affine scalar functions are Variable objects. - - AffineScalarFunc objects include facilities for calculating the - 'error' on the function, from the uncertainties on its variables. - - Main attributes and methods: - - - nominal_value, std_dev: value at the origin / nominal value, and - standard deviation. The standard deviation can be NaN or infinity. - - - n, s: abbreviations for nominal_value and std_dev. - - - error_components(): error_components()[x] is the error due to - Variable x. - - - derivatives: derivatives[x] is the (value of the) derivative - with respect to Variable x. 
This attribute is a Derivatives - dictionary whose keys are the Variable objects on which the - function depends. The values are the numerical values of the - derivatives. - - All the Variable objects on which the function depends are in - 'derivatives'. - - - std_score(x): position of number x with respect to the - nominal value, in units of the standard deviation. - """ - - # To save memory in large arrays: - __slots__ = ("_nominal_value", "_linear_part") - - # !! Fix for mean() in NumPy 1.8.0: - class dtype(object): - type = staticmethod(lambda value: value) - - #! The code could be modified in order to accommodate for non-float - # nominal values. This could for instance be done through - # the operator module: instead of delegating operations to - # float.__*__ operations, they could be delegated to - # operator.__*__ functions (while taking care of properly handling - # reverse operations: __radd__, etc.). - - def __init__(self, nominal_value, linear_part): - """ - nominal_value -- value of the function when the linear part is - zero. - - linear_part -- LinearCombination that describes the linear - part of the AffineScalarFunc. - """ - - # ! A technical consistency requirement is that the - # linear_part can be nested inside a NestedLinearCombination - # (because this is how functions on AffineScalarFunc calculate - # their result: by constructing nested expressions for them). - - # Defines the value at the origin: - - # Only float-like values are handled. One reason is that it - # does not make sense for a scalar function to be affine to - # not yield float values. Another reason is that it would not - # make sense to have a complex nominal value, here (it would - # not be handled correctly at all): converting to float should - # be possible. 
- self._nominal_value = float(nominal_value) + if isinstance(uncertainty, (float, int)): + if not isnan(uncertainty) and uncertainty < 0: + raise NegativeStdDev("The standard deviation cannot be negative") + uncertainty = UCombo(((UAtom(tag=tag), float(uncertainty)),)) + self._uncertainty = uncertainty - # In order to have a linear execution time for long sums, the - # _linear_part is generally left as is (otherwise, each - # successive term would expand to a linearly growing sum of - # terms: efficiently handling such terms [so, without copies] - # is not obvious, when the algorithm should work for all - # functions beyond sums). - if not isinstance(linear_part, LinearCombination): - linear_part = LinearCombination(linear_part) - self._linear_part = linear_part - - # The following prevents the 'nominal_value' attribute from being - # modified by the user: @property def nominal_value(self): - "Nominal value of the random number." + """Nominal value of the random number.""" return self._nominal_value - # Abbreviation (for formulas, etc.): - n = nominal_value - - ############################################################ - - # Making derivatives a property gives the user a clean syntax, - # which is consistent with derivatives becoming a dictionary. @property - def derivatives(self): - """ - Return a mapping from each Variable object on which the function - (self) depends to the value of the derivative with respect to - that variable. - - This mapping should not be modified. + def n(self): + """Abbreviation for nominal_value""" + return self.nominal_value - Derivative values are always floats. - - This mapping is cached, for subsequent calls. 
- """ - - if not self._linear_part.expanded(): - self._linear_part.expand() - # Attempts to get the contribution of a variable that the - # function does not depend on raise a KeyError: - self._linear_part.linear_combo.default_factory = None - - return self._linear_part.linear_combo + @property + def uncertainty(self): + return self._uncertainty - ######################################## + @property + def u(self): + """Abbreviation for uncertainty.""" + return self.uncertainty - # Uncertainties handling: + def covariance(self, other): + return self.uncertainty.covariance(other.uncertainty) + @property def error_components(self): """ - Individual components of the standard deviation of the affine - function (in absolute value), returned as a dictionary with - Variable objects as keys. The returned variables are the - independent variables that the affine function depends on. - - This method assumes that the derivatives contained in the - object take scalar values (and are not a tuple, like what - math.frexp() returns, for instance). - """ - - # Calculation of the variance: - error_components = {} + The uncertainty is stored as a float-weighted linear combination of + UAtom objects. Each UAtom is unique and independent of all other UAtoms. + Each UAtom has a standard deviation of unity. - for variable, derivative in self.derivatives.items(): - # print "TYPE", type(variable), type(derivative) - - # Individual standard error due to variable: - - # 0 is returned even for a NaN derivative (in this case no - # multiplication by the derivative is performed): an exact - # variable obviously leads to no uncertainty in the - # functions that depend on it. - if variable._std_dev == 0: - # !!! Shouldn't the errors always be floats, as a - # convention of this module? 
- error_components[variable] = 0 - else: - error_components[variable] = abs(derivative * variable._std_dev) - - return error_components + This method returns a dictionary mapping each UAtom with which the + AffineScalarFunc is correlated to its corresponding float weight. + """ + return self.uncertainty.expanded @property def std_dev(self): - """ - Standard deviation of the affine function. + """Standard deviation of the affine function.""" + return self.uncertainty.std_dev - This method assumes that the function returns scalar results. - - This returned standard deviation depends on the current - standard deviations [std_dev] of the variables (Variable - objects) involved. - """ - #! It would be possible to not allow the user to update the - # std dev of Variable objects, in which case AffineScalarFunc - # objects could have a pre-calculated or, better, cached - # std_dev value (in fact, many intermediate AffineScalarFunc do - # not need to have their std_dev calculated: only the final - # AffineScalarFunc returned to the user does). - return float(sqrt(sum(delta**2 for delta in self.error_components().values()))) + @property + def s(self): + """Abbreviation for std_dev.""" + return self.std_dev - # Abbreviation (for formulas, etc.): - s = std_dev + def __eq__(self, other): + if not isinstance(other, type(self)): + return NotImplemented + return self.n == other.n and self.u == other.u def __repr__(self): - # Not putting spaces around "+/-" helps with arrays of - # Variable, as each value with an uncertainty is a - # block of signs (otherwise, the standard deviation can be - # mistaken for another element of the array). 
- - std_dev = self.std_dev # Optimization, since std_dev is calculated - - # A zero standard deviation is printed because otherwise, - # ufloat_fromstr() does not correctly parse back the value - # ("1.23" is interpreted as "1.23(1)"): - - if std_dev: - std_dev_str = repr(std_dev) - else: + if self.std_dev == 0: std_dev_str = "0" + else: + std_dev_str = repr(self.std_dev) - return "%r+/-%s" % (self.nominal_value, std_dev_str) + num_repr = f"{self.nominal_value}+/-{std_dev_str}" + return num_repr def __str__(self): - # An empty format string and str() usually return the same - # string - # (http://docs.python.org/2/library/string.html#format-specification-mini-language): return self.format("") + def __hash__(self): + return hash((self.nominal_value, self.uncertainty)) + @set_doc(format_ufloat.__doc__) def __format__(self, format_spec): return format_ufloat(self, format_spec) @@ -561,108 +370,15 @@ def std_score(self, value): Raises a ValueError exception if the standard deviation is zero. """ try: - # The ._nominal_value is a float: there is no integer division, - # here: return (value - self._nominal_value) / self.std_dev except ZeroDivisionError: raise ValueError("The standard deviation is zero:" " undefined result") - def __deepcopy__(self, memo): - """ - Hook for the standard copy module. - - The returned AffineScalarFunc is a completely fresh copy, - which is fully independent of any variable defined so far. - New variables are specially created for the returned - AffineScalarFunc object. - """ - return AffineScalarFunc(self._nominal_value, copy.deepcopy(self._linear_part)) - - def __getstate__(self): - """ - Hook for the pickle module. - - The slot attributes of the parent classes are returned, as - well as those of the __dict__ attribute of the object (if - any). - """ - - # In general (case where this class is subclassed), data - # attribute are stored in two places: possibly in __dict_, and - # in slots. 
Data from both locations is returned by this - # method. - - all_attrs = {} - - # Support for subclasses that do not use __slots__ (except - # through inheritance): instances have a __dict__ - # attribute. The keys in this __dict__ are shadowed by the - # slot attribute names (reference: - # http://stackoverflow.com/questions/15139067/attribute-access-in-python-first-slots-then-dict/15139208#15139208). - # The method below not only preserves this behavior, but also - # saves the full contents of __dict__. This is robust: - # unpickling gives back the original __dict__ even if __dict__ - # contains keys that are shadowed by slot names: - - try: - all_attrs["__dict__"] = self.__dict__ - except AttributeError: - pass - - # All the slot attributes are gathered. - - # Classes that do not define __slots__ have the __slots__ of - # one of their parents (the first parent with their own - # __slots__ in MRO). This is why the slot names are first - # gathered (with repetitions removed, in general), and their - # values obtained later. - - all_slots = set() - - for cls in type(self).mro(): - # In the diamond inheritance pattern, some parent classes - # may not have __slots__: - slot_names = getattr(cls, "__slots__", ()) - - # Slot names can be given in various forms (string, - # sequence, iterable): - if isinstance(slot_names, str): - all_slots.add(slot_names) # Single name - else: - all_slots.update(slot_names) - - # The slot values are stored: - for name in all_slots: - try: - # !! It might happen that '__dict__' is itself a slot - # name. In this case, its value is saved - # again. Alternatively, the loop could be done on - # all_slots - {'__dict__'}: - all_attrs[name] = getattr(self, name) - except AttributeError: - pass # Undefined slot attribute - - return all_attrs - - def __setstate__(self, data_dict): - """ - Hook for the pickle module. 
- """ - for name, value in data_dict.items(): - # Contrary to the default __setstate__(), this does not - # necessarily save to the instance dictionary (because the - # instance might contain slots): - setattr(self, name, value) - -ops.add_arithmetic_ops(AffineScalarFunc) -ops.add_comparative_ops(AffineScalarFunc) -to_affine_scalar = AffineScalarFunc._to_affine_scalar +ops.add_arithmetic_ops(UFloat) -# Nicer name, for users: isinstance(ufloat(...), UFloat) is -# True. Also: isinstance(..., UFloat) is the test for "is this a -# number with uncertainties from the uncertainties package?": -UFloat = AffineScalarFunc +# Legacy alias for UFloat +AffineScalarFunc = UFloat def wrap(f, derivatives_args=None, derivatives_kwargs=None): @@ -723,130 +439,6 @@ class NegativeStdDev(Exception): pass -class Variable(AffineScalarFunc): - """ - Representation of a float-like scalar Variable with its uncertainty. - - Variables are independent from each other, but correlations between them - are handled through the AffineScalarFunc class. - """ - - # To save memory in large arrays: - __slots__ = ("_std_dev", "tag") - - def __init__(self, value, std_dev, tag=None): - """ - The nominal value and the standard deviation of the variable - are set. - - The value is converted to float. - - The standard deviation std_dev can be NaN. It should normally - be a float or an integer. - - 'tag' is a tag that the user can associate to the variable. This - is useful for tracing variables. - - The meaning of the nominal value is described in the main - module documentation. - """ - - #! The value, std_dev, and tag are assumed by __copy__() not to - # be copied. Either this should be guaranteed here, or __copy__ - # should be updated. - - # Only float-like values are handled. One reason is that the - # division operator on integers would not produce a - # differentiable functions: for instance, Variable(3, 0.1)/2 - # has a nominal value of 3/2 = 1, but a "shifted" value - # of 3.1/2 = 1.55. 
- value = float(value) - - # If the variable changes by dx, then the value of the affine - # function that gives its value changes by 1*dx: - - # ! Memory cycles are created. However, they are garbage - # collected, if possible. Using a weakref.WeakKeyDictionary - # takes much more memory. Thus, this implementation chooses - # more cycles and a smaller memory footprint instead of no - # cycles and a larger memory footprint. - super(Variable, self).__init__(value, LinearCombination({self: 1.0})) - - self.std_dev = std_dev # Assignment through a Python property - - self.tag = tag - - @property - def std_dev(self): - return self._std_dev - - # Standard deviations can be modified (this is a feature). - # AffineScalarFunc objects that depend on the Variable have their - # std_dev automatically modified (recalculated with the new - # std_dev of their Variables): - @std_dev.setter - def std_dev(self, std_dev): - # We force the error to be float-like. Since it is considered - # as a standard deviation, it must be either positive or NaN: - # (Note: if NaN < 0 is False, there is no need to test - # separately for NaN. But this is not guaranteed, even if it - # should work on most platforms.) - if std_dev < 0 and isfinite(std_dev): - raise NegativeStdDev("The standard deviation cannot be negative") - - self._std_dev = float(std_dev) - - # The following method is overridden so that we can represent the tag: - def __repr__(self): - num_repr = super(Variable, self).__repr__() - - if self.tag is None: - return num_repr - else: - return "< %s = %s >" % (self.tag, num_repr) - - def __hash__(self): - # All Variable objects are by definition independent - # variables, so they never compare equal; therefore, their - # id() are allowed to differ - # (http://docs.python.org/reference/datamodel.html#object.__hash__): - return id(self) - - def __copy__(self): - """ - Hook for the standard copy module. - """ - - # !!!!!! 
The comment below might not be valid anymore now that - # Variables do not contain derivatives anymore. - - # This copy implicitly takes care of the reference of the - # variable to itself (in self.derivatives): the new Variable - # object points to itself, not to the original Variable. - - # Reference: http://www.doughellmann.com/PyMOTW/copy/index.html - - #! The following assumes that the arguments to Variable are - # *not* copied upon construction, since __copy__ is not supposed - # to copy "inside" information: - return Variable(self.nominal_value, self.std_dev, self.tag) - - def __deepcopy__(self, memo): - """ - Hook for the standard copy module. - - A new variable is created. - """ - - # This deep copy implicitly takes care of the reference of the - # variable to itself (in self.derivatives): the new Variable - # object points to itself, not to the original Variable. - - # Reference: http://www.doughellmann.com/PyMOTW/copy/index.html - - return self.__copy__() - - ############################################################################### # Utilities @@ -884,6 +476,13 @@ def std_dev(x): return 0.0 +def uncertainty(x): + if isinstance(x, UFloat): + return x.uncertainty + else: + return UCombo(()) + + def covariance_matrix(nums_with_uncert): """ Return a matrix that contains the covariances between the given @@ -905,19 +504,19 @@ def covariance_matrix(nums_with_uncert): covariance_matrix = [] for i1, expr1 in enumerate(nums_with_uncert, 1): - derivatives1 = expr1.derivatives # Optimization - vars1 = set(derivatives1) # !! 
Python 2.7+: viewkeys() would work
+        error_components_1 = expr1.error_components
+        uatoms_1 = set(error_components_1)
 
         coefs_expr1 = []
 
         for expr2 in nums_with_uncert[:i1]:
-            derivatives2 = expr2.derivatives  # Optimization
+            error_components_2 = expr2.error_components
+            uatoms_2 = set(error_components_2)
             coefs_expr1.append(
                 sum(
                     (
-                        (derivatives1[var] * derivatives2[var] * var._std_dev**2)
-                        # var is a variable common to both numbers with
-                        # uncertainties:
-                        for var in vars1.intersection(derivatives2)
+                        (error_components_1[uatom] * error_components_2[uatom])
+                        # uatom is common to both numbers with uncertainties:
+                        for uatom in uatoms_1.intersection(uatoms_2)
                     ),
                     # The result is always a float (sum() with no terms
                     # returns an integer):
@@ -988,7 +587,7 @@ def ufloat_fromstr(representation, tag=None):
     return ufloat(nom, std, tag)
 
 
-def ufloat(nominal_value, std_dev=None, tag=None):
+def ufloat(nominal_value, std_dev, tag=None):
     """
     Create an uncertainties Variable
@@ -1025,7 +624,7 @@ def ufloat(nominal_value, std_dev=None, tag=None):
         "Using UFloat objects with std_dev==0 may give unexpected results.",
         stacklevel=2,
     )
-    return Variable(nominal_value, std_dev, tag=tag)
+    return UFloat(nominal_value, std_dev, tag=tag)
 
 
 # Deprecated UFloat methods
@@ -1038,26 +637,3 @@ def wrapped(*args, **kwargs):
         return func(*args, **kwargs)
 
     return wrapped
-
-
-deprecated_methods = [
-    "__floordiv__",
-    "__mod__",
-    "__abs__",
-    "__trunc__",
-    "__lt__",
-    "__gt__",
-    "__le__",
-    "__ge__",
-]
-
-for method_name in deprecated_methods:
-    message = (
-        f"AffineScalarFunc.{method_name}() is deprecated. It will be removed in a future "
-        f"release."
- ) - setattr( - AffineScalarFunc, - method_name, - deprecation_wrapper(getattr(AffineScalarFunc, method_name), message), - ) diff --git a/uncertainties/ops.py b/uncertainties/ops.py index 5df483a5..f750eb40 100644 --- a/uncertainties/ops.py +++ b/uncertainties/ops.py @@ -5,7 +5,8 @@ import itertools from inspect import getfullargspec import numbers -from warnings import warn + +from uncertainties.ucombo import UCombo # Some types known to not depend on Variable objects are put in # CONSTANT_TYPES. The most common types can be put in front, as this @@ -122,10 +123,8 @@ def get_ops_with_reflection(): # AffineScalarFunc._nominal_value numbers, it is applied on # floats, and is therefore the "usual" mathematical division. "div": ("1/y", "-x/y**2"), - "floordiv": ("0.", "0."), # Non exact: there is a discontinuity # The derivative wrt the 2nd arguments is something like (..., x//y), # but it is calculated numerically, for convenience: - "mod": ("1.", "partial_derivative(float.__mod__, 1)(x, y)"), "mul": ("y", "x"), "sub": ("1.", "-1."), "truediv": ("1/y", "-x/y**2"), @@ -227,10 +226,8 @@ def _simple_add_deriv(x): # Single-argument operators that should be adapted from floats to # AffineScalarFunc objects, associated to their derivative: simple_numerical_operators_derivatives = { - "abs": _simple_add_deriv, "neg": lambda x: -1.0, "pos": lambda x: 1.0, - "trunc": lambda x: 0.0, } for op, derivative in iter(simple_numerical_operators_derivatives.items()): @@ -521,17 +518,13 @@ def f_with_affine_output(*args, **kwargs): # defined by (coefficient, argument) pairs, where 'argument' # is an AffineScalarFunc (for all AffineScalarFunc found as # argument of f): - linear_part = [] + uncertainty = UCombo(()) for pos in pos_w_uncert: - linear_part.append( - ( - # Coefficient: - derivatives_args_index[pos](*args_values, **kwargs), - # Linear part of the AffineScalarFunc expression: - args[pos]._linear_part, - ) - ) + arg_uncertainty = args[pos].uncertainty + if arg_uncertainty: + 
derivative_val = derivatives_args_index[pos](*args_values, **kwargs) + uncertainty += derivative_val * arg_uncertainty for name in names_w_uncert: # Optimization: caching of the automatic numerical @@ -539,24 +532,19 @@ def f_with_affine_output(*args, **kwargs): # discovered. This gives a speedup when the original # function is called repeatedly with the same keyword # arguments: - derivative = derivatives_all_kwargs.setdefault( + if not kwargs_uncert_values[name].uncertainty: + continue + derivative_func = derivatives_all_kwargs.setdefault( name, # Derivative never needed before: partial_derivative(f, name), ) - - linear_part.append( - ( - # Coefficient: - derivative(*args_values, **kwargs), - # Linear part of the AffineScalarFunc expression: - kwargs_uncert_values[name]._linear_part, - ) - ) + derivative_val = derivative_func(*args_values, **kwargs) + uncertainty += derivative_val * kwargs_uncert_values[name].uncertainty # The function now returns the necessary linear approximation # to the function: - return cls(f_nominal_value, linear_part) + return cls(f_nominal_value, uncertainty) f_with_affine_output = set_doc( """\ @@ -651,218 +639,3 @@ def partial_derivative_of_f(*args, **kwargs): return (shifted_f_plus - shifted_f_minus) / 2 / step return partial_derivative_of_f - - -######################################## - -# Definition of boolean operators, that assume that self and -# y_with_uncert are AffineScalarFunc. - -# The fact that uncertainties must be small is used, here: the -# comparison functions are supposed to be constant for most values of -# the random variables. - -# Even though uncertainties are supposed to be small, comparisons -# between 3+/-0.1 and 3.0 are handled correctly (even though x == 3.0 is -# not a constant function in the 3+/-0.1 interval). The comparison -# between x and x is handled too, when x has an uncertainty. 
In fact, -# as explained in the main documentation, it is possible to give a -# useful meaning to the comparison operators, in these cases. - - -def eq_on_aff_funcs(self, y_with_uncert): - """ - __eq__ operator, assuming that both self and y_with_uncert are - AffineScalarFunc objects. - """ - difference = self - y_with_uncert - # Only an exact zero difference means that self and y are - # equal numerically: - return not (difference._nominal_value or difference.std_dev) - - -def ne_on_aff_funcs(self, y_with_uncert): - """ - __ne__ operator, assuming that both self and y_with_uncert are - AffineScalarFunc objects. - """ - - return not eq_on_aff_funcs(self, y_with_uncert) - - -def gt_on_aff_funcs(self, y_with_uncert): - """ - __gt__ operator, assuming that both self and y_with_uncert are - AffineScalarFunc objects. - """ - return self._nominal_value > y_with_uncert._nominal_value - - -def ge_on_aff_funcs(self, y_with_uncert): - """ - __ge__ operator, assuming that both self and y_with_uncert are - AffineScalarFunc objects. - """ - - return gt_on_aff_funcs(self, y_with_uncert) or eq_on_aff_funcs(self, y_with_uncert) - - -def lt_on_aff_funcs(self, y_with_uncert): - """ - __lt__ operator, assuming that both self and y_with_uncert are - AffineScalarFunc objects. - """ - return self._nominal_value < y_with_uncert._nominal_value - - -def le_on_aff_funcs(self, y_with_uncert): - """ - __le__ operator, assuming that both self and y_with_uncert are - AffineScalarFunc objects. - """ - - return lt_on_aff_funcs(self, y_with_uncert) or eq_on_aff_funcs(self, y_with_uncert) - - -def add_comparative_ops(cls): - def to_affine_scalar(x): - """ - Transforms x into a constant affine scalar function - (AffineScalarFunc), unless it is already an AffineScalarFunc (in - which case x is returned unchanged). - - Raises an exception unless x belongs to some specific classes of - objects that are known not to depend on AffineScalarFunc objects - (which then cannot be considered as constants). 
- """ - - if isinstance(x, cls): - return x - - if isinstance(x, CONSTANT_TYPES): - # No variable => no derivative: - return cls(x, {}) - - # Case of lists, etc. - raise NotUpcast( - "%s cannot be converted to a number with" " uncertainty" % type(x) - ) - - cls._to_affine_scalar = to_affine_scalar - - def force_aff_func_args(func): - """ - Takes an operator op(x, y) and wraps it. - - The constructed operator returns func(x, to_affine_scalar(y)) if y - can be upcast with to_affine_scalar(); otherwise, it returns - NotImplemented. - - Thus, func() is only called on two AffineScalarFunc objects, if - its first argument is an AffineScalarFunc. - """ - - def op_on_upcast_args(x, y): - """ - Return %s(self, to_affine_scalar(y)) if y can be upcast - through to_affine_scalar. Otherwise returns NotImplemented. - """ % func.__name__ - - try: - y_with_uncert = to_affine_scalar(y) - except NotUpcast: - # This module does not know how to handle the comparison: - # (example: y is a NumPy array, in which case the NumPy - # array will decide that func() should be applied - # element-wise between x and all the elements of y): - return NotImplemented - else: - return func(x, y_with_uncert) - - return op_on_upcast_args - - ### Operators: operators applied to AffineScalarFunc and/or - ### float-like objects only are supported. This is why methods - ### from float are used for implementing these operators. - - # Operators with no reflection: - - ######################################## - - # __nonzero__() is supposed to return a boolean value (it is used - # by bool()). It is for instance used for converting the result - # of comparison operators to a boolean, in sorted(). If we want - # to be able to sort AffineScalarFunc objects, __nonzero__ cannot - # return a AffineScalarFunc object. Since boolean results (such - # as the result of bool()) don't have a very meaningful - # uncertainty unless it is zero, this behavior is fine. - - def __bool__(self): - """ - Equivalent to self != 0. 
- """ - #! This might not be relevant for AffineScalarFunc objects - # that contain values in a linear space which does not convert - # the float 0 into the null vector (see the __eq__ function: - # __nonzero__ works fine if subtracting the 0 float from a - # vector of the linear space works as if 0 were the null - # vector of that space): - msg = ( - f"{self.__class__.__name__}.__bool__() is deprecated. In future releases " - f"it will defer to object.__bool__() and always return True." - ) - warn(msg, FutureWarning, stacklevel=2) - return self != 0.0 # Uses the AffineScalarFunc.__ne__ function - - cls.__bool__ = __bool__ - ######################################## - - ## Logical operators: warning: the resulting value cannot always - ## be differentiated. - - # The boolean operations are not differentiable everywhere, but - # almost... - - # (1) I can rely on the assumption that the user only has "small" - # errors on variables, as this is used in the calculation of the - # standard deviation (which performs linear approximations): - - # (2) However, this assumption is not relevant for some - # operations, and does not have to hold, in some cases. This - # comes from the fact that logical operations (e.g. __eq__(x,y)) - # are not differentiable for many usual cases. For instance, it - # is desirable to have x == x for x = n+/-e, whatever the size of e. - # Furthermore, n+/-e != n+/-e', if e != e', whatever the size of e or - # e'. - - # (3) The result of logical operators does not have to be a - # function with derivatives, as these derivatives are either 0 or - # don't exist (i.e., the user should probably not rely on - # derivatives for his code). - - # !! In Python 2.7+, it may be possible to use functools.total_ordering. - - # __eq__ is used in "if data in [None, ()]", for instance. It is - # therefore important to be able to handle this case too, which is - # taken care of when force_aff_func_args(eq_on_aff_funcs) - # returns NotImplemented. 
-    cls.__eq__ = force_aff_func_args(eq_on_aff_funcs)
-
-    cls.__ne__ = force_aff_func_args(ne_on_aff_funcs)
-    cls.__gt__ = force_aff_func_args(gt_on_aff_funcs)
-
-    # __ge__ is not the opposite of __lt__ because these operators do
-    # not always yield a boolean (for instance, 0 <= numpy.arange(10)
-    # yields an array).
-    cls.__ge__ = force_aff_func_args(ge_on_aff_funcs)
-
-    cls.__lt__ = force_aff_func_args(lt_on_aff_funcs)
-    cls.__le__ = force_aff_func_args(le_on_aff_funcs)
-
-
-# Mathematical operations with local approximations (affine scalar
-# functions)
-
-
-class NotUpcast(Exception):
-    "Raised when an object cannot be converted to a number with uncertainty"
diff --git a/uncertainties/ucombo.py b/uncertainties/ucombo.py
new file mode 100644
index 00000000..4ce99188
--- /dev/null
+++ b/uncertainties/ucombo.py
@@ -0,0 +1,136 @@
+from __future__ import annotations
+
+from collections import defaultdict
+from math import sqrt
+from typing import Optional, Tuple, Union
+import uuid
+
+
+class UAtom:
+    __slots__ = ["uuid", "tag", "_hash"]
+
+    def __init__(self: UAtom, tag: Optional[str] = None):
+        self.tag = tag
+        self.uuid: uuid.UUID = uuid.uuid4()
+        self._hash = hash(self.uuid)
+
+    def __eq__(self: UAtom, other: object) -> bool:
+        return isinstance(other, UAtom) and self.uuid == other.uuid
+
+    def __hash__(self: UAtom) -> int:
+        return self._hash
+
+    def __lt__(self: UAtom, other: UAtom) -> bool:
+        return self.uuid < other.uuid
+
+    def __str__(self: UAtom) -> str:
+        uuid_abbrev = f"{str(self.uuid)[0:2]}..{str(self.uuid)[-2:]}"
+        if self.tag is not None:
+            label = f"{self.tag}, {uuid_abbrev}"
+        else:
+            label = uuid_abbrev
+        return f"{self.__class__.__name__}({label})"
+
+    def __repr__(self: UAtom) -> str:
+        return f"{self.__class__.__name__}({self.uuid})"
+
+
+class UCombo:
+    __slots__ = ["ucombo_tuple", "is_expanded", "_std_dev", "_expanded_dict", "_hash"]
+
+    def __init__(
+        self: UCombo,
+        ucombo_tuple: Tuple[Tuple[Union[UAtom, UCombo], float], ...],
+    ):
+        self.ucombo_tuple = ucombo_tuple
+        
self.is_expanded = False + self._std_dev = None + self._expanded_dict = None + self._hash = None + + @property + def expanded(self: UCombo) -> dict[UAtom, float]: + if self._expanded_dict is None: + term_list = list(self.ucombo_tuple) + self._expanded_dict = defaultdict(float) + while term_list: + term, weight = term_list.pop() + if isinstance(term, UAtom): + self._expanded_dict[term] += weight + elif term.is_expanded: + for sub_term, sub_weight in term.expanded.items(): + self._expanded_dict[sub_term] += weight * sub_weight + else: + for sub_term, sub_weight in term.ucombo_tuple: + term_list.append((sub_term, weight * sub_weight)) + self._expanded_dict = dict(self._expanded_dict) + self._expanded_dict = { + uatom: weight + for uatom, weight in self._expanded_dict.items() + if weight != 0 + } + self.is_expanded = True + return self._expanded_dict + + @property + def std_dev(self: UCombo) -> float: + if self._std_dev is None: + self._std_dev = sqrt(sum(weight**2 for weight in self.expanded.values())) + return self._std_dev + + def covariance(self: UCombo, other: UCombo): + # TODO: pull out to module function and cache + self_uatoms = set(self.expanded.keys()) + other_uatoms = set(other.expanded.keys()) + shared_uatoms = self_uatoms.intersection(other_uatoms) + covariance = 0 + for uatom in shared_uatoms: + covariance += self.expanded[uatom] * other.expanded[uatom] + return covariance + + def __add__(self: UCombo, other) -> UCombo: + if not isinstance(other, UCombo): + return NotImplemented + return UCombo(self.ucombo_tuple + other.ucombo_tuple) + + def __radd__(self: UCombo, other: UCombo) -> UCombo: + return self.__add__(other) + + def __mul__(self: UCombo, scalar: Union[float, int]) -> UCombo: + if not isinstance(scalar, (float, int)): + return NotImplemented + return UCombo(((self, float(scalar)),)) + + def __rmul__(self: UCombo, scalar: Union[float, int]) -> UCombo: + return self.__mul__(scalar) + + def __iter__(self: UCombo): + return iter(self.ucombo_tuple) + 
+
+    def __bool__(self: UCombo) -> bool:
+        return bool(self.ucombo_tuple)
+
+    def __str__(self: UCombo) -> str:
+        ret_str = ""
+        first = True
+        for term, weight in self:
+            if not first:
+                ret_str += " + "
+            else:
+                first = False
+
+            if isinstance(term, UAtom):
+                ret_str += f"{weight}×{str(term)}"
+            else:
+                ret_str += f"{weight}×({str(term)})"
+        return ret_str
+
+    def __hash__(self: UCombo) -> int:
+        if self._hash is None:
+            self._hash = hash(
+                tuple(sorted((key, value) for key, value in self.expanded.items()))
+            )
+        return self._hash
+
+    def __eq__(self: UCombo, other: object) -> bool:
+        return isinstance(other, UCombo) and self.expanded == other.expanded
diff --git a/uncertainties/umath_core.py b/uncertainties/umath_core.py
index f07a0b60..2328e174 100644
--- a/uncertainties/umath_core.py
+++ b/uncertainties/umath_core.py
@@ -19,7 +19,6 @@
 
 # Local modules
 import uncertainties.core as uncert_core
-from uncertainties.core import to_affine_scalar, AffineScalarFunc, LinearCombination
 
 
###############################################################################
@@ -58,7 +57,7 @@
 # Functions with numerical derivatives:
 #
 # !! Python2.7+: {..., ...}
-num_deriv_funcs = set(["fmod", "gamma", "lgamma"])
+num_deriv_funcs = set(["gamma", "lgamma"])
 
 # Functions are by definition locally constant (on real
 # numbers): their value does not depend on the uncertainty (because
@@ -70,7 +69,7 @@
 # comparisons (==, >, etc.).
 #
 # !! Python 2.7+: {..., ...}
-locally_cst_funcs = set(["ceil", "floor", "isinf", "isnan", "trunc"])
+locally_cst_funcs = set(["isinf", "isnan"])
 
 # Functions that do not belong in many_scalars_to_scalar_funcs, but
 # that have a version that handles uncertainties. 
These functions are @@ -192,7 +191,6 @@ def _deriv_pow_1(x, y): lambda y, x: -y / (x**2 + y**2), ], # Correct for x == 0 "atanh": [lambda x: 1 / (1 - x**2)], - "copysign": [_deriv_copysign, lambda x, y: 0], "cos": [lambda x: -math.sin(x)], "cosh": [math.sinh], "degrees": [lambda x: math.degrees(1)], @@ -200,7 +198,6 @@ def _deriv_pow_1(x, y): "erfc": [lambda x: -math.exp(-(x**2)) * erf_coef], "exp": [math.exp], "expm1": [math.exp], - "fabs": [_deriv_fabs], "hypot": [lambda x, y: x / math.hypot(x, y), lambda x, y: y / math.hypot(x, y)], "log": [log_der0, lambda x, y: -math.log(x, y) / y / math.log(y)], "log10": [lambda x: 1 / x / math.log(10)], @@ -291,10 +288,6 @@ def wrapped_func(*args, **kwargs): # Only for Python 2.6+: -# For drop-in compatibility with the math module: -factorial = math.factorial -non_std_wrapped_funcs.append("factorial") - # We wrap math.fsum @@ -323,136 +316,6 @@ def wrapped_fsum(): fsum = wrapped_fsum() non_std_wrapped_funcs.append("fsum") -########## - -# Some functions that either return multiple arguments (modf, frexp) -# or take some non-float arguments (which should not be converted to -# numbers with uncertainty). - -# ! The arguments have the same names as in the math module -# documentation, so that the docstrings are consistent with them. - - -@uncert_core.set_doc(math.modf.__doc__) -def modf(x): - """ - Version of modf that works for numbers with uncertainty, and also - for regular numbers. - """ - - # The code below is inspired by uncert_core.wrap(). It is - # simpler because only 1 argument is given, and there is no - # delegation to other functions involved (as for __mul__, etc.). 
-
-    aff_func = to_affine_scalar(x)  # Uniform treatment of all numbers
-
-    (frac_part, int_part) = math.modf(aff_func.nominal_value)
-
-    if aff_func._linear_part:  # If not a constant
-        # The derivative of the fractional part is simply 1: the
-        # linear part of modf(x)[0] is the linear part of x:
-        return (AffineScalarFunc(frac_part, aff_func._linear_part), int_part)
-    else:
-        # This function was not called with an AffineScalarFunc
-        # argument: there is no need to return numbers with uncertainties:
-        return (frac_part, int_part)
-
-
-many_scalars_to_scalar_funcs.append("modf")
-
-
-@uncert_core.set_doc(math.ldexp.__doc__)
-def ldexp(x, i):
-    # Another approach would be to add an additional argument to
-    # uncert_core.wrap() so that some arguments are automatically
-    # considered as constants.
-
-    aff_func = to_affine_scalar(x)  # y must be an integer, for math.ldexp
-
-    if aff_func._linear_part:
-        return AffineScalarFunc(
-            math.ldexp(aff_func.nominal_value, i),
-            LinearCombination([(2**i, aff_func._linear_part)]),
-        )
-    else:
-        # This function was not called with an AffineScalarFunc
-        # argument: there is no need to return numbers with uncertainties:
-
-        # aff_func.nominal_value is not passed instead of x, because
-        # we do not have to care about the type of the return value of
-        # math.ldexp, this way (aff_func.nominal_value might be the
-        # value of x coerced to a difference type [int->float, for
-        # instance]):
-        return math.ldexp(x, i)
-
-
-many_scalars_to_scalar_funcs.append("ldexp")
-
-
-@uncert_core.set_doc(math.frexp.__doc__)
-def frexp(x):
-    """
-    Version of frexp that works for numbers with uncertainty, and also
-    for regular numbers.
-    """
-
-    # The code below is inspired by uncert_core.wrap().  It is
-    # simpler because only 1 argument is given, and there is no
-    # delegation to other functions involved (as for __mul__, etc.).
-
-    aff_func = to_affine_scalar(x)
-
-    if aff_func._linear_part:
-        (mantissa, exponent) = math.frexp(aff_func.nominal_value)
-        return (
-            AffineScalarFunc(
-                mantissa,
-                # With frexp(x) = (m, e), x = m*2**e, so m = x*2**-e
-                # and therefore dm/dx = 2**-e (as e in an integer that
-                # does not vary when x changes):
-                LinearCombination([2**-exponent, aff_func._linear_part]),
-            ),
-            # The exponent is an integer and is supposed to be
-            # continuous (errors must be small):
-            exponent,
-        )
-    else:
-        # This function was not called with an AffineScalarFunc
-        # argument: there is no need to return numbers with uncertainties:
-        return math.frexp(x)
-
-
-non_std_wrapped_funcs.append("frexp")
-
-# Deprecated functions
-
-deprecated_functions = [
-    "ceil",
-    "copysign",
-    "fabs",
-    "factorial",
-    "floor",
-    "fmod",
-    "frexp",
-    "ldexp",
-    "modf",
-    "trunc",
-]
-
-for function_name in deprecated_functions:
-    message = (
-        f"umath.{function_name}() is deprecated. It will be removed in a future "
-        f"release."
-    )
-    setattr(
-        this_module,
-        function_name,
-        uncert_core.deprecation_wrapper(
-            getattr(this_module, function_name),
-            message,
-        ),
-    )
-
 ###############################################################################
 
 # Exported functions:
diff --git a/uncertainties/unumpy/core.py b/uncertainties/unumpy/core.py
index 714f1729..c3ceefe2 100644
--- a/uncertainties/unumpy/core.py
+++ b/uncertainties/unumpy/core.py
@@ -10,11 +10,11 @@
 # user.
 
 # Standard modules:
-from builtins import next
 from builtins import zip
-from builtins import range
+from functools import wraps
 import sys
-import inspect
+from typing import Callable, Union
+from numbers import Real
 from warnings import warn
 
 # 3rd-party modules:
@@ -23,6 +23,9 @@
 # Local modules:
 import uncertainties.umath_core as umath_core
 import uncertainties.core as uncert_core
+from uncertainties import UFloat
+from uncertainties.ucombo import UCombo
+
 
 __all__ = [
     # Factory functions:
@@ -35,6 +38,182 @@
     "matrix",
 ]
 
+
+def inject_to_args_kwargs(param, injected_arg, *args, **kwargs):
+    if isinstance(param, int):
+        new_kwargs = kwargs
+        new_args = []
+        for idx, arg in enumerate(args):
+            if idx == param:
+                new_args.append(injected_arg)
+            else:
+                new_args.append(arg)
+    elif isinstance(param, str):
+        new_args = args
+        new_kwargs = kwargs
+        new_kwargs[param] = injected_arg
+    else:
+        raise TypeError(f"{param} must be an int or str, not {type(param)}.")
+    return new_args, new_kwargs
+
+
+def get_args_kwargs_list(*args, **kwargs):
+    args_kwargs_list = []
+    for idx, arg in enumerate(args):
+        args_kwargs_list.append((idx, arg))
+    for key, arg in kwargs.items():
+        args_kwargs_list.append((key, arg))
+    return args_kwargs_list
+
+
+SQRT_EPS = numpy.sqrt(sys.float_info.epsilon)
+
+
+def array_numerical_partial_derivative(
+    f: Callable[..., Real],
+    target_param: Union[str, int],
+    array_multi_index: tuple = None,
+    *args,
+    **kwargs,
+) -> float:
+    """
+    Numerically calculate the partial derivative of a function f with respect to the
+    target_param (string name or position number of the float parameter to f to be
+    varied) holding all other arguments, *args and **kwargs, constant.
+    """
+    if isinstance(target_param, int):
+        x = args[target_param]
+    else:
+        x = kwargs[target_param]
+
+    if array_multi_index is None:
+        # Numerical Recipes 3rd Edition, eq. 5.7.5.  Fall back to SQRT_EPS
+        # itself when x == 0 so that the step dx never vanishes:
+        dx = abs(x) * SQRT_EPS if x != 0 else SQRT_EPS
+        x_lower = x - dx
+        x_upper = x + dx
+    else:
+        dx = numpy.mean(numpy.abs(x)) * SQRT_EPS
+        x_lower = numpy.copy(x)
+        x_upper = numpy.copy(x)
+        x_lower[array_multi_index] -= dx
+        x_upper[array_multi_index] += dx
+
+    lower_args, lower_kwargs = inject_to_args_kwargs(
+        target_param,
+        x_lower,
+        *args,
+        **kwargs,
+    )
+    upper_args, upper_kwargs = inject_to_args_kwargs(
+        target_param,
+        x_upper,
+        *args,
+        **kwargs,
+    )
+
+    lower_y = f(*lower_args, **lower_kwargs)
+    upper_y = f(*upper_args, **upper_kwargs)
+
+    derivative = (upper_y - lower_y) / (2 * dx)
+    return derivative
+
+
+def to_uarray_func(func, derivs=None):
+    if derivs is None:
+        derivs = {}
+
+    @wraps(func)
+    def wrapped(*args, **kwargs):
+        return_uarray = False
+        return_matrix = False
+
+        float_args = []
+        for arg in args:
+            if isinstance(arg, UFloat):
+                float_args.append(arg.nominal_value)
+                return_uarray = True
+            elif isinstance(arg, numpy.ndarray):
+                if isinstance(arg, numpy.matrix):
+                    return_matrix = True
+                if isinstance(arg.flat[0], UFloat):
+                    float_args.append(to_nominal_values(arg))
+                    return_uarray = True
+                else:
+                    float_args.append(arg)
+            else:
+                float_args.append(arg)
+
+        float_kwargs = {}
+        for key, arg in kwargs.items():
+            if isinstance(arg, UFloat):
+                float_kwargs[key] = arg.nominal_value
+                return_uarray = True
+            elif isinstance(arg, numpy.ndarray):
+                if isinstance(arg, numpy.matrix):
+                    return_matrix = True
+                if isinstance(arg.flat[0], UFloat):
+                    float_kwargs[key] = to_nominal_values(arg)
+                    return_uarray = True
+                else:
+                    float_kwargs[key] = arg
+            else:
+                float_kwargs[key] = arg
+
+        new_nominal_array = func(*float_args, **float_kwargs)
+        if not return_uarray:
+            return new_nominal_array
+
+        args_kwargs_list = get_args_kwargs_list(*args, **kwargs)
+
+        ucombo_array = numpy.full(new_nominal_array.shape, UCombo(()))
+
+        for label, arg in args_kwargs_list:
+            if isinstance(arg, UFloat):
+                if label in derivs and derivs[label] is not None:
+                    deriv_func = derivs[label]
+                    deriv_arr = deriv_func(
+                        label,
+                        None,
+                        *float_args,
+                        **float_kwargs,
+                    )
+                else:
+                    deriv_arr = array_numerical_partial_derivative(
+                        func, label, None, *float_args, **float_kwargs
+                    )
+                ucombo_array += deriv_arr * arg.uncertainty
+            elif isinstance(arg, numpy.ndarray):
+                if isinstance(arg.flat[0], UFloat):
+                    it = numpy.nditer(arg, flags=["multi_index", "refs_ok"])
+                    for sub_arg in it:
+                        u_num = sub_arg.item()
+                        if isinstance(u_num, UFloat):
+                            multi_index = it.multi_index
+                            if label in derivs and derivs[label] is not None:
+                                deriv_func = derivs[label]
+                                deriv_arr = deriv_func(
+                                    label,
+                                    multi_index,
+                                    *float_args,
+                                    **float_kwargs,
+                                )
+                            else:
+                                deriv_arr = array_numerical_partial_derivative(
+                                    func,
+                                    label,
+                                    multi_index,
+                                    *float_args,
+                                    **float_kwargs,
+                                )
+                            ucombo_array += numpy.array(deriv_arr) * u_num.uncertainty
+
+        u_array = numpy.vectorize(UFloat)(new_nominal_array, ucombo_array)
+        if return_matrix:
+            u_array = numpy.matrix(u_array)
+        return u_array
+
+    return wrapped
+
+
 ###############################################################################
 
 # Utilities:
@@ -71,6 +250,11 @@
     ),
 )
 
+to_uncertainties = numpy.vectorize(
+    uncert_core.uncertainty,
+    otypes=[object],
+)
+
 
 def unumpy_to_numpy_matrix(arr):
     """
@@ -122,170 +306,6 @@ class from this module) are passed through untouched (because a
     return unumpy_to_numpy_matrix(to_std_devs(arr))
 
 
-###############################################################################
-
-
-def derivative(u, var):
-    """
-    Return the derivative of u along var, if u is an
-    uncert_core.AffineScalarFunc instance, and if var is one of the
-    variables on which it depends.  Otherwise, return 0.
-    """
-    if isinstance(u, uncert_core.AffineScalarFunc):
-        try:
-            return u.derivatives[var]
-        except KeyError:
-            return 0.0
-    else:
-        return 0.0
-
-
-def wrap_array_func(func):
-    # !!! This function is not used in the code, except in the tests.
-    #
-    # !!! The implementation seems superficially similar to
-    # uncertainties.core.wrap(): is there code/logic duplication
-    # (which should be removed)?
-    """
-    Return a version of the function func() that works even when
-    func() is given a NumPy array that contains numbers with
-    uncertainties, as first argument.
-
-    This wrapper is similar to uncertainties.core.wrap(), except that
-    it handles an array argument instead of float arguments, and that
-    the result can be an array.
-
-    However, the returned function is more restricted: the array
-    argument cannot be given as a keyword argument with the name in
-    the original function (it is not a drop-in replacement).
-
-    func -- function whose first argument is a single NumPy array,
-    and which returns a NumPy array.
-    """
-
-    @uncert_core.set_doc(
-        """\
-    Version of %s(...) that works even when its first argument is a NumPy
-    array that contains numbers with uncertainties.
-
-    Warning: elements of the first argument array that are not
-    AffineScalarFunc objects must not depend on uncert_core.Variable
-    objects in any way.  Otherwise, the dependence of the result in
-    uncert_core.Variable objects will be incorrect.
-
-    Original documentation:
-    %s"""
-        % (func.__name__, func.__doc__)
-    )
-    def wrapped_func(arr, *args, **kwargs):
-        # Nominal value:
-        arr_nominal_value = nominal_values(arr)
-        func_nominal_value = func(arr_nominal_value, *args, **kwargs)
-
-        # The algorithm consists in numerically calculating the derivatives
-        # of func:
-
-        # Variables on which the array depends are collected:
-        variables = set()
-        for element in arr.flat:
-            # floats, etc. might be present
-            if isinstance(element, uncert_core.AffineScalarFunc):
-                # !!!! The following forces an evaluation of the
-                # derivatives!?  Isn't this very slow, when
-                # working with a large number of arrays?
-                #
-                # !! set() is only needed for Python 2 compatibility:
-                variables |= set(element.derivatives.keys())
-
-        # If the matrix has no variables, then the function value can be
-        # directly returned:
-        if not variables:
-            return func_nominal_value
-
-        # Calculation of the derivatives of each element with respect
-        # to the variables.  Each element must be independent of the
-        # others.  The derivatives have the same shape as the output
-        # array (which might differ from the shape of the input array,
-        # in the case of the pseudo-inverse).
-        derivatives = numpy.vectorize(lambda _: {})(func_nominal_value)
-        for var in variables:
-            # A basic assumption of this package is that the user
-            # guarantees that uncertainties cover a zone where
-            # evaluated functions are linear enough.  Thus, numerical
-            # estimates of the derivative should be good over the
-            # standard deviation interval.  This is true for the
-            # common case of a non-zero standard deviation of var.  If
-            # the standard deviation of var is zero, then var has no
-            # impact on the uncertainty of the function func being
-            # calculated: an incorrect derivative has no impact.  One
-            # scenario can give incorrect results, however, but it
-            # should be extremely uncommon: the user defines a
-            # variable x with 0 standard deviation, sets y = func(x)
-            # through this routine, changes the standard deviation of
-            # x, and prints y; in this case, the uncertainty on y
-            # might be incorrect, because this program had no idea of
-            # the scale on which func() is linear, when it calculated
-            # the numerical derivative.
-
-            # The standard deviation might be numerically too small
-            # for the evaluation of the derivative, though: we set the
-            # minimum variable shift.
-
-            shift_var = max(var._std_dev / 1e5, 1e-8 * abs(var._nominal_value))
-            # An exceptional case is that of var being exactly zero.
-            # In this case, an arbitrary shift is used for the
-            # numerical calculation of the derivative.  The resulting
-            # derivative value might be quite incorrect, but this does
-            # not matter as long as the uncertainty of var remains 0,
-            # since it is, in this case, a constant.
-            if not shift_var:
-                shift_var = 1e-8
-
-            # Shift of all the elements of arr when var changes by shift_var:
-            shift_arr = array_derivative(arr, var) * shift_var
-
-            # Origin value of array arr when var is shifted by shift_var:
-            shifted_arr_values = arr_nominal_value + shift_arr
-            func_shifted = func(shifted_arr_values, *args, **kwargs)
-            numerical_deriv = (func_shifted - func_nominal_value) / shift_var
-
-            # Update of the list of variables and associated
-            # derivatives, for each element:
-            for derivative_dict, derivative_value in zip(
-                derivatives.flat, numerical_deriv.flat
-            ):
-                if derivative_value:
-                    derivative_dict[var] = derivative_value
-
-        # numbers with uncertainties are built from the result:
-        return numpy.vectorize(uncert_core.AffineScalarFunc)(
-            func_nominal_value,
-            numpy.vectorize(uncert_core.LinearCombination)(derivatives),
-        )
-
-    wrapped_func = uncert_core.set_doc(
-        """\
-    Version of %s(...) that works even when its first argument is a NumPy
-    array that contains numbers with uncertainties.
-
-    Warning: elements of the first argument array that are not
-    AffineScalarFunc objects must not depend on uncert_core.Variable
-    objects in any way.  Otherwise, the dependence of the result in
-    uncert_core.Variable objects will be incorrect.
-
-    Original documentation:
-    %s"""
-        % (func.__name__, func.__doc__)
-    )(wrapped_func)
-
-    # It is easier to work with wrapped_func, which represents a
-    # wrapped version of 'func', when it bears the same name as
-    # 'func' (the name is used by repr(wrapped_func)).
-    wrapped_func.__name__ = func.__name__
-
-    return wrapped_func
-
-
 ###############################################################################
 
 # Arrays
@@ -312,7 +332,7 @@ def uarray(nominal_values, std_devs=None):
         # ! Looking up uncert_core.Variable beforehand through
         # '_Variable = uncert_core.Variable' does not result in a
         # significant speed up:
-        lambda v, s: uncert_core.Variable(v, s),
+        lambda v, s: uncert_core.UFloat(v, s),
         otypes=[object],
     )(nominal_values, std_devs)
 
@@ -320,192 +340,26 @@
 
 ###############################################################################
 
 
-def array_derivative(array_like, var):
-    """
-    Return the derivative of the given array with respect to the
-    given variable.
-
-    The returned derivative is a NumPy ndarray of the same shape as
-    array_like, that contains floats.
-
-    array_like -- array-like object (list, etc.) that contains
-    scalars or numbers with uncertainties.
-
-    var -- Variable object.
-    """
-    return numpy.vectorize(
-        lambda u: derivative(u, var),
-        # The type is set because an
-        # integer derivative should not
-        # set the output type of the
-        # array:
-        otypes=[float],
-    )(array_like)
-
-
-def func_with_deriv_to_uncert_func(func_with_derivatives):
-    # This function is used for instance for the calculation of the
-    # inverse and pseudo-inverse of a matrix with uncertainties.
-    """
-    Return a function that can be applied to array-like objects that
-    contain numbers with uncertainties (lists, lists of lists, NumPy
-    arrays, etc.).
-
-    func_with_derivatives -- defines a function that takes an
-    array-like object containing scalars and returns an array.  Both
-    the value and the derivatives of this function with respect to
-    multiple scalar parameters are calculated by this
-    func_with_derivatives() argument.
-
-    func_with_derivatives(arr, input_type, derivatives, *args,
-    **kwargs) must return an iterator.  The first element returned by
-    this iterator is the value of the function at the n-dimensional
-    array-like 'arr' (with the correct type).  The following elements
-    are arrays that represent the derivative of the function for each
-    derivative array from the iterator 'derivatives'.
-
-    func_with_derivatives() takes the following arguments:
-
-    arr -- NumPy ndarray of scalars where the function must be
-    evaluated.
-
-    input_type -- data type of the input array-like object.  This
-    type is used for determining the type that the function should
-    return.
-
-    derivatives -- iterator that returns the derivatives of the
-    argument of the function with respect to multiple scalar
-    variables.  func_with_derivatives() returns the derivatives of
-    the defined function with respect to these variables.
-
-    args -- additional arguments that define the result (example:
-    for the pseudo-inverse numpy.linalg.pinv: numerical cutoff).
-
-    Examples of func_with_derivatives: inv_with_derivatives().
-    """
-
-    def wrapped_func(array_like, *args, **kwargs):
-        """
-        array_like -- n-dimensional array-like object that contains
-        numbers with uncertainties (list, NumPy ndarray or matrix,
-        etc.).
-
-        args -- additional arguments that are passed directly to
-        func_with_derivatives.
-        """
-
-        # The calculation below is not lazy, contrary to the linear
-        # error propagation done in AffineScalarFunc.  Making it lazy
-        # in the same way would be quite a specific task: basically
-        # this would amount to generalizing scalar coefficients in
-        # core.LinearCombination to more general matrix
-        # multiplications, and to replace Variable differentials by
-        # full matrices of coefficients.  This does not look very
-        # efficient, as matrices are quite big, and since caching the
-        # result of a few matrix functions that are not typically
-        # stringed one after the other (unlike a big sum of numbers)
-        # should not be needed.
-
-        # So that .flat works even if array_like is a list:
-        array_version = numpy.asanyarray(array_like)
-
-        # Variables on which the array depends are collected:
-        variables = set()
-        for element in array_version.flat:
-            # floats, etc. might be present
-            if isinstance(element, uncert_core.AffineScalarFunc):
-                # !!! set() is only needed for Python 2 compatibility:
-                variables |= set(element.derivatives.keys())
-
-        array_nominal = nominal_values(array_version)
-        # Function value, then derivatives at array_nominal (the
-        # derivatives are with respect to the variables contained in
-        # array_like):
-        func_then_derivs = func_with_derivatives(
-            array_nominal,
-            type(array_like),
-            (array_derivative(array_version, var) for var in variables),
-            *args,
-            **kwargs,
-        )
-
-        func_nominal_value = next(func_then_derivs)
-
-        if not variables:
-            return func_nominal_value
-
-        # The result is built progressively, with the contribution of
-        # each variable added in turn:
-
-        # Calculation of the derivatives of the result with respect to
-        # the variables.
-        derivatives = numpy.array(
-            [{} for _ in range(func_nominal_value.size)], dtype=object
-        ).reshape(func_nominal_value.shape)
-
-        # Memory-efficient approach.  A memory-hungry approach would
-        # be to calculate the matrix derivatives will respect to all
-        # variables and then combine them into a matrix of
-        # AffineScalarFunc objects.  The approach followed here is to
-        # progressively build the matrix of derivatives, by
-        # progressively adding the derivatives with respect to
-        # successive variables.
-        for var, deriv_wrt_var in zip(variables, func_then_derivs):
-            # Update of the list of variables and associated
-            # derivatives, for each element:
-            for derivative_dict, derivative_value in zip(
-                derivatives.flat, deriv_wrt_var.flat
-            ):
-                if derivative_value:
-                    derivative_dict[var] = derivative_value
-
-        # An array of numbers with uncertainties is built from the
-        # result:
-        result = numpy.vectorize(uncert_core.AffineScalarFunc)(
-            func_nominal_value,
-            numpy.vectorize(uncert_core.LinearCombination)(derivatives),
-        )
-
-        # NumPy matrices that contain numbers with uncertainties are
-        # better as unumpy matrices:
-        if isinstance(result, numpy.matrix):
-            result = result.view(matrix)
-
-        return result
-
-    return wrapped_func
-
-
 ########## Matrix inverse
 
 
-def inv_with_derivatives(arr, input_type, derivatives):
-    """
-    Defines the matrix inverse and its derivatives.
-
-    See the definition of func_with_deriv_to_uncert_func() for its
-    detailed semantics.
-    """
-
-    inverse = numpy.linalg.inv(arr)
-    # The inverse of a numpy.matrix is a numpy.matrix.  It is assumed
-    # that numpy.linalg.inv is such that other types yield
-    # numpy.ndarrays:
-    if issubclass(input_type, numpy.matrix):
-        inverse = inverse.view(numpy.matrix)
-    yield inverse
+def inv_deriv(label, multi_index, *args, **kwargs):
+    if isinstance(label, int):
+        arr = args[label]
+    elif isinstance(label, str):
+        arr = kwargs[label]
+    else:
+        raise ValueError(f"label must be an int or str, not {type(label)}.")
 
-    # It is mathematically convenient to work with matrices:
-    inverse_mat = numpy.asmatrix(inverse)
+    deriv_arr = numpy.zeros_like(arr)
+    deriv_arr[multi_index] = 1
 
-    # Successive derivatives of the inverse:
-    for derivative in derivatives:
-        derivative_mat = numpy.asmatrix(derivative)
-        yield -inverse_mat * derivative_mat * inverse_mat
+    inv_arr = numpy.linalg.inv(arr)
+    return -inv_arr @ deriv_arr @ inv_arr
 
 
-inv = func_with_deriv_to_uncert_func(inv_with_derivatives)
-inv.__doc__ = """
+inv = to_uarray_func(numpy.linalg.inv, derivs={0: inv_deriv, "a": inv_deriv})
+inv.__doc__ = """\
 Version of numpy.linalg.inv that works with array-like objects that
 contain numbers with uncertainties.
@@ -518,75 +372,51 @@ def inv_with_derivatives(arr, input_type, derivatives):
 
 ########## Matrix pseudo-inverse
 
 
-def pinv_with_derivatives(arr, input_type, derivatives, rcond):
+def pinv_deriv(label, multi_index, *args, **kwargs):
+    if isinstance(label, int):
+        A = args[label]
+    elif isinstance(label, str):
+        A = kwargs[label]
+    else:
+        raise ValueError(f"label must be an int or str, not {type(label)}.")
+
     """
-    Defines the matrix pseudo-inverse and its derivatives.
+    Analytical calculation of the derivative of the pseudo-inverse of matrix A with
+    respect to an element of matrix A.  This calculation is done according to
+    formula (4.12) from
 
-    Works with real or complex matrices.
+    G. H. Golub, V. Pereyra, "The Differentiation of Pseudo-Inverses and Nonlinear
+    Least Squares Problems Whose Variables Separate", SIAM Journal on Numerical
+    Analysis, Vol. 10, No. 2 (Apr., 1973), pp. 413-432
 
-    See the definition of func_with_deriv_to_uncert_func() for its
-    detailed semantics.
+    See also
+    http://mathoverflow.net/questions/25778/analytical-formula-for-numerical-derivative-of-the-matrix-pseudo-inverse
     """
 
-    inverse = numpy.linalg.pinv(arr, rcond)
-    # The pseudo-inverse of a numpy.matrix is a numpy.matrix.  It is
-    # assumed that numpy.linalg.pinv is such that other types yield
-    # numpy.ndarrays:
-    if issubclass(input_type, numpy.matrix):
-        inverse = inverse.view(numpy.matrix)
-    yield inverse
-
-    # It is mathematically convenient to work with matrices:
-    inverse_mat = numpy.asmatrix(inverse)
-
-    # Formula (4.12) from The Differentiation of Pseudo-Inverses and
-    # Nonlinear Least Squares Problems Whose Variables
-    # Separate.  Author(s): G. H. Golub and V. Pereyra.  Source: SIAM
-    # Journal on Numerical Analysis, Vol. 10, No. 2 (Apr., 1973),
-    # pp. 413-432
-
-    # See also
-    # http://mathoverflow.net/questions/25778/analytical-formula-for-numerical-derivative-of-the-matrix-pseudo-inverse
-
-    # Shortcuts.  All the following factors should be numpy.matrix objects:
-    PA = arr * inverse_mat
-    AP = inverse_mat * arr
-    factor21 = inverse_mat * inverse_mat.H
-    factor22 = numpy.eye(arr.shape[0]) - PA
-    factor31 = numpy.eye(arr.shape[1]) - AP
-    factor32 = inverse_mat.H * inverse_mat
-
-    # Successive derivatives of the inverse:
-    for derivative in derivatives:
-        derivative_mat = numpy.asmatrix(derivative)
-        term1 = -inverse_mat * derivative_mat * inverse_mat
-        derivative_mat_H = derivative_mat.H
-        term2 = factor21 * derivative_mat_H * factor22
-        term3 = factor31 * derivative_mat_H * factor32
-        yield term1 + term2 + term3
-
-
-# Default rcond argument for the generalization of numpy.linalg.pinv:
-#
-# Most common modern case first:
-try:
-    pinv_default = inspect.signature(numpy.linalg.pinv).parameters["rcond"].default
-except AttributeError:  # No inspect.signature() before Python 3.3
-    try:
-        # In numpy 1.17+, pinv is wrapped using a decorator which unfortunately
-        # results in the metadata (argument defaults) being lost.  However, we
-        # can still get at the original function using the __wrapped__
-        # attribute (which is what inspect.signature() does).
-        pinv_default = numpy.linalg.pinv.__wrapped__.__defaults__[0]
-    except AttributeError:  # Function not wrapped in NumPy < 1.17
-        pinv_default = numpy.linalg.pinv.__defaults__[0]  # Python 1, 2.6+:
-
-pinv_with_uncert = func_with_deriv_to_uncert_func(pinv_with_derivatives)
-
-
-def pinv(array_like, rcond=pinv_default):
-    return pinv_with_uncert(array_like, rcond)
+    Aprime = numpy.zeros_like(A)
+    Aprime[multi_index] = 1
+
+    Aplus = numpy.linalg.pinv(*args, **kwargs)
+
+    n, m = A.shape[-2:]
+    eye_n = numpy.eye(n)
+    eye_m = numpy.eye(m)
+
+    ndim = len(A.shape)
+    trans_axes = list(range(ndim))
+    trans_axes[0], trans_axes[1] = trans_axes[1], trans_axes[0]
+
+    AplusT = numpy.transpose(Aplus, axes=trans_axes)
+    AprimeT = numpy.transpose(Aprime, axes=trans_axes)
+
+    return (
+        -Aplus @ Aprime @ Aplus
+        + Aplus @ AplusT @ AprimeT @ (eye_n - A @ Aplus)
+        + (eye_m - Aplus @ A) @ AprimeT @ AplusT @ Aplus
+    )
+
+
+pinv = to_uarray_func(numpy.linalg.pinv, derivs={0: pinv_deriv, "a": pinv_deriv})
 
 pinv = uncert_core.set_doc(
     """