
Release v0.37 #901


Draft · wants to merge 23 commits into base: main
Commits (23)
8135113
Bump minor version to 0.37.0
mhauru Apr 24, 2025
299e17b
Accumulators, stage 1 (#885)
mhauru May 2, 2025
326d7ed
Replace PriorExtractorContext with PriorDistributionAccumulator (#907)
mhauru May 8, 2025
d4ef1f2
Implement values_as_in_model using an accumulator (#908)
mhauru May 8, 2025
adfdda0
Merge remote-tracking branch 'origin/main' into breaking
penelopeysm Jun 2, 2025
d043103
Bump DynamicPPL versions
penelopeysm Jun 2, 2025
d9545c6
Fix merge (1)
penelopeysm Jun 2, 2025
a243bbb
Add benchmark Pkg source
penelopeysm Jun 2, 2025
e2272a5
[no ci] Don't need to dev again
penelopeysm Jun 2, 2025
3cb47cd
Disable use_closure for ReverseDiff
penelopeysm Jun 2, 2025
2d11ad7
Revert "Disable use_closure for ReverseDiff"
penelopeysm Jun 2, 2025
0445092
Fix LogDensityAt struct
penelopeysm Jun 2, 2025
ff7f8a2
Try not duplicating
penelopeysm Jun 2, 2025
80db9e2
Update comment pointing to closure benchmarks
penelopeysm Jun 2, 2025
bec523a
Merge remote-tracking branch 'origin/main' into breaking
penelopeysm Jun 19, 2025
3af63d5
Remove `context` from model evaluation (use `model.context` instead) …
penelopeysm Jun 19, 2025
8b67e96
Mark function as Const for Enzyme tests (#957)
penelopeysm Jun 19, 2025
1882f72
Move submodel code to submodel.jl; remove `@submodel` (#959)
penelopeysm Jun 26, 2025
7f20709
Fix missing field tests for 1.12 (#961)
penelopeysm Jun 26, 2025
f20e86c
Remove 3-argument `{_,}evaluate!!`; clean up submodel code (#960)
penelopeysm Jul 3, 2025
57a53e1
Merge branch 'main' into breaking
penelopeysm Jul 8, 2025
a0289db
Improve API for AD testing (#964)
penelopeysm Jul 8, 2025
2074657
Merge remote-tracking branch 'origin/main' into breaking
penelopeysm Jul 9, 2025
70 changes: 70 additions & 0 deletions HISTORY.md
@@ -1,5 +1,75 @@
# DynamicPPL Changelog

## 0.37.0

**Breaking changes**

### Submodel macro

The `@submodel` macro is fully removed; please use `to_submodel` instead.
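
A minimal before-and-after sketch (the model names here are hypothetical):

```julia
using DynamicPPL, Distributions

@model function inner()
    x ~ Normal()
    return x
end

@model function outer()
    # Before (0.36): @submodel x = inner()
    # After (0.37): the submodel's return value is bound to the left-hand side of `~`.
    x ~ to_submodel(inner())
    y ~ Normal(x, 1)
end
```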

### `DynamicPPL.TestUtils.AD.run_ad`

The three keyword arguments `test`, `reference_backend`, and `expected_value_and_grad` have been merged into a single `test` keyword argument.
Please see the API documentation for more details.
(The old `test=true` and `test=false` values are still valid, and you only need to adjust the invocation if you were explicitly passing the `reference_backend` or `expected_value_and_grad` arguments.)

There is now also an `rng` keyword argument to help seed parameter generation.

Finally, instead of specifying `value_atol` and `grad_atol`, you can now specify `atol` and `rtol` which are used for both value and gradient.
Their semantics are the same as in Julia's `isapprox`; two values are equal if they satisfy either `atol` or `rtol`.
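
As an illustrative sketch of the new interface (the model, backends, and tolerances below are hypothetical; see the API documentation for the authoritative signature):

```julia
using ADTypes: AutoForwardDiff, AutoReverseDiff
using Distributions
using DynamicPPL
using DynamicPPL.TestUtils.AD: run_ad, WithBackend
using ForwardDiff, ReverseDiff  # load the actual AD backends
using StableRNGs: StableRNG

@model demo() = x ~ Normal()

# Previously: run_ad(model, adtype; reference_backend=AutoForwardDiff(), value_atol=1e-6, grad_atol=1e-6)
# Now: a single `test` setting, shared `atol`/`rtol`, and an `rng` for parameter generation.
result = run_ad(
    demo(), AutoReverseDiff();
    test=WithBackend(AutoForwardDiff()),
    rng=StableRNG(23),
    atol=1e-6,
    rtol=1e-6,
)
```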

### Accumulators

This release overhauls how VarInfo objects track quantities such as the log joint probability. The new approach uses what we call accumulators: objects carried by the VarInfo that may update their state at each `tilde_assume!!` and `tilde_observe!!` call, based on the value of the variable in question. Accumulators replace both the variables that were previously hard-coded in the `VarInfo` object (`logp` and `num_produce`) and some contexts. This brings with it a number of breaking changes:

- `PriorContext` and `LikelihoodContext` no longer exist. By default, a `VarInfo` tracks both the log prior and the log likelihood separately, and they can be accessed with `getlogprior` and `getloglikelihood`. If you want to execute a model while only accumulating one of the two (to save clock cycles), you can do so by creating a `VarInfo` that only has one accumulator in it, e.g. `varinfo = setaccs!!(varinfo, (LogPriorAccumulator(),))`.
- `MiniBatchContext` does not exist anymore. It can be replaced by creating and using a custom accumulator in place of the default `LogLikelihoodAccumulator`. We may introduce such an accumulator in DynamicPPL in the future, but for now you'll need to write it yourself.
- `tilde_observe` and `observe` have been removed. `tilde_observe!!` still exists; any contexts that need to change observation behaviour should now modify it instead. We may further rework the call stack under `tilde_observe!!` in the near future.
- `tilde_assume` no longer returns the log density of the current assumption as its second return value. We may further rework the `tilde_assume!!` call stack as well.
- For literal observation statements like `0.0 ~ Normal(blahblah)` we used to call `tilde_observe!!` without the `vn` argument. This method no longer exists; rather, we now call `tilde_observe!!` with `vn` set to `nothing`.
- `set/reset/increment_num_produce!` have become `set/reset/increment_num_produce!!` (note the second exclamation mark). They are no longer guaranteed to modify the `VarInfo` in place, and one should always use the return value.
- `@addlogprob!` now _always_ adds to the log likelihood. Previously it added to the log probability that the execution context specified, e.g. the log prior when using `PriorContext`.
- `getlogp` now returns a `NamedTuple` with keys `logprior` and `loglikelihood`. If you want the log joint probability, which is what `getlogp` used to return, use `getlogjoint`.
- Correspondingly, `setlogp!!` and `acclogp!!` should now be called with a `NamedTuple` with keys `logprior` and `loglikelihood`. The `acclogp!!` method with a single scalar value has been deprecated and falls back on `accloglikelihood!!`, and the single-scalar version of `setlogp!!` has been removed. Corresponding setter/accumulator functions exist for the log prior as well (see the sketch after this list).
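
A short sketch of the new accessors (hypothetical model; assumes a `VarInfo` carrying the default accumulators):

```julia
using DynamicPPL, Distributions

@model function demo()
    x ~ Normal()        # contributes to the log prior
    0.0 ~ Normal(x, 1)  # literal observation; contributes to the log likelihood
end

vi = VarInfo(demo())

getlogprior(vi)       # log prior only
getloglikelihood(vi)  # log likelihood only
getlogjoint(vi)       # the log joint, i.e. what `getlogp` used to return
getlogp(vi)           # NamedTuple with keys `logprior` and `loglikelihood`

# To evaluate while tracking only the log prior, keep a single accumulator:
vi = setaccs!!(vi, (LogPriorAccumulator(),))
```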

### Evaluation contexts

Historically, evaluating a DynamicPPL model has required three arguments: a model, some kind of VarInfo, and a context.
It's less known, though, that since DynamicPPL 0.14.0 the _model_ itself actually contains a context as well.
This version therefore excises the context argument, and instead uses `model.context` as the evaluation context.

The upshot of this is that many functions that previously took a context argument now no longer do.
There were very few such functions where the context argument was actually used (most of them simply took `DefaultContext()` as the default value).

`evaluate!!(model, varinfo, ext_context)` is removed, and broadly speaking you should replace calls to that with `new_model = contextualize(model, ext_context); evaluate!!(new_model, varinfo)`.
If the 'external context' `ext_context` is a parent context, then you should wrap `model.context` appropriately to ensure that its information content is not lost.
If, on the other hand, `ext_context` is a `DefaultContext`, then you can just drop the argument entirely.

**To aid with this process, `contextualize` is now exported from DynamicPPL.**
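
A minimal sketch of the migration, assuming `ext_context` is a context you previously passed as the third argument:

```julia
# Before (0.36):
#   retval, varinfo = DynamicPPL.evaluate!!(model, varinfo, ext_context)

# After (0.37): attach the context to the model, then evaluate.
model_with_ctx = contextualize(model, ext_context)
retval, varinfo = DynamicPPL.evaluate!!(model_with_ctx, varinfo)
```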

The main situation where one _did_ want to specify an additional evaluation context was when that context was a `SamplingContext`.
Doing this would allow you to run the model and sample fresh values, instead of just using the values that existed in the VarInfo object.
Thus, this release also introduces the **unexported** function `evaluate_and_sample!!`.
Essentially, `evaluate_and_sample!!(rng, model, varinfo, sampler)` is a drop-in replacement for `evaluate!!(model, varinfo, SamplingContext(rng, sampler))`.
**Do note that this is an internal method**, and its name or semantics are liable to change in the future without warning.

There are many methods that no longer take a context argument; too many to list exhaustively here. However, here are the more user-facing ones:

- `LogDensityFunction` no longer has a context field (or type parameter)
- `DynamicPPL.TestUtils.AD.run_ad` no longer uses a context (and the returned `ADResult` object no longer has a context field)
- `VarInfo(rng, model, sampler)` and other VarInfo constructors / functions that create VarInfos from a model (e.g. `typed_varinfo`)
- `(::Model)(args...)`: specifically, this now only takes `rng` and `varinfo` arguments (with both being optional)
- If you are using the `__context__` special variable inside a model, you will now have to use `__model__.context` instead (see the sketch after this list)
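
For the last point, an illustrative sketch (the model is hypothetical):

```julia
using DynamicPPL, Distributions

@model function demo()
    x ~ Normal()
    # Before (0.36): ctx = __context__
    # After (0.37):
    ctx = __model__.context
    return ctx
end
```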

And a couple of more internal changes:

- Just like `evaluate!!`, the other functions `_evaluate!!`, `evaluate_threadsafe!!`, and `evaluate_threadunsafe!!` now no longer accept context arguments
- `evaluate!!` no longer takes rng and sampler (if you used this, you should use `evaluate_and_sample!!` instead, or construct your own `SamplingContext`)
- The model evaluation function, `model.f` for some `model::Model`, no longer takes a context as an argument
- The internal representation and API dealing with submodels (i.e., `ReturnedModelWrapper`, `Sampleable`, `should_auto_prefix`, `is_rhs_model`) has been simplified. If you need to check whether something is a submodel, just use `x isa DynamicPPL.Submodel`. Note that the public API, i.e. `to_submodel`, remains completely untouched.

## 0.36.15

Bumped minimum Julia version to 1.10.8 to avoid potential crashes with `Core.Compiler.widenconst` (which Mooncake uses).
4 changes: 3 additions & 1 deletion Project.toml
@@ -1,6 +1,6 @@
name = "DynamicPPL"
uuid = "366bfd00-2699-11ea-058f-f148b4cae6d8"
version = "0.36.15"
version = "0.37.0"

[deps]
ADTypes = "47edcb42-4c32-4615-8424-f2b9edc5f35b"
@@ -21,6 +21,7 @@ LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
LogDensityProblems = "6fdf6af0-433a-55f7-b3ed-c6c6e0b8df7c"
MacroTools = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
OrderedCollections = "bac558e1-5e72-5ebc-8fee-abe8a469f55d"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
Requires = "ae029012-a4dd-5104-9daa-d747884805df"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
@@ -68,6 +69,7 @@ MCMCChains = "6, 7"
MacroTools = "0.5.6"
Mooncake = "0.4.95"
OrderedCollections = "1"
Printf = "1.10"
Random = "1.6"
Requires = "1"
Statistics = "1"
5 changes: 4 additions & 1 deletion benchmarks/Project.toml
@@ -15,11 +15,14 @@ PrettyTables = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
ReverseDiff = "37e2e3b7-166d-5795-8a7a-e32c996b4267"
StableRNGs = "860ef19b-820b-49d6-a774-d7a799459cd3"

[sources]
DynamicPPL = {path = "../"}

[compat]
ADTypes = "1.14.0"
BenchmarkTools = "1.6.0"
Distributions = "0.25.117"
DynamicPPL = "0.36"
DynamicPPL = "0.37"
ForwardDiff = "0.10.38, 1"
LogDensityProblems = "2.1.2"
Mooncake = "0.4"
3 changes: 1 addition & 2 deletions benchmarks/benchmarks.jl
@@ -1,6 +1,4 @@
using Pkg
# To ensure we benchmark the local version of DynamicPPL, dev the folder above.
Pkg.develop(; path=joinpath(@__DIR__, ".."))

using DynamicPPLBenchmarks: Models, make_suite, model_dimension
using BenchmarkTools: @benchmark, median, run
@@ -100,4 +98,5 @@ PrettyTables.pretty_table(
header=header,
tf=PrettyTables.tf_markdown,
formatters=ft_printf("%.1f", [6, 7]),
crop=:none, # Always print the whole table, even if it doesn't fit in the terminal.
)
3 changes: 1 addition & 2 deletions benchmarks/src/DynamicPPLBenchmarks.jl
@@ -81,13 +81,12 @@ function make_suite(model, varinfo_choice::Symbol, adbackend::Symbol, islinked::
end

adbackend = to_backend(adbackend)
context = DynamicPPL.DefaultContext()

if islinked
vi = DynamicPPL.link(vi, model)
end

f = DynamicPPL.LogDensityFunction(model, vi, context; adtype=adbackend)
f = DynamicPPL.LogDensityFunction(model, vi; adtype=adbackend)
# The parameters at which we evaluate f.
θ = vi[:]

2 changes: 1 addition & 1 deletion docs/Project.toml
@@ -20,7 +20,7 @@ DataStructures = "0.18"
Distributions = "0.25"
Documenter = "1"
DocumenterMermaid = "0.1, 0.2"
DynamicPPL = "0.36"
DynamicPPL = "0.37"
FillArrays = "0.13, 1"
ForwardDiff = "0.10, 1"
JET = "0.9, 0.10"
83 changes: 58 additions & 25 deletions docs/src/api.md
@@ -36,6 +36,12 @@ getargnames
getmissings
```

The context of a model can be set using [`contextualize`](@ref):

```@docs
contextualize
```

## Evaluation

With [`rand`](@ref) one can draw samples from the prior distribution of a [`Model`](@ref).
@@ -140,27 +146,15 @@ to_submodel

Note that a [`to_submodel`](@ref) is only sampleable; one cannot compute `logpdf` for its realizations.

In the past, one would instead embed sub-models using [`@submodel`](@ref), which has been deprecated since the introduction of [`to_submodel(model)`](@ref)

```@docs
@submodel
```

In the context of including models within models, it's also useful to prefix the variables in sub-models to avoid variable names clashing:

```@docs
DynamicPPL.prefix
```

Under the hood, [`to_submodel`](@ref) makes use of the following method to indicate that the model it's wrapping is a model over its return-values rather than something else

```@docs
returned(::Model)
```

## Utilities

It is possible to manually increase (or decrease) the accumulated log density from within a model function.
It is possible to manually increase (or decrease) the accumulated log likelihood or prior from within a model function.
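
For example, a hypothetical model that adds a handwritten term (in 0.37, `@addlogprob!` adds to the log likelihood):

```julia
using DynamicPPL, Distributions

@model function demo(x)
    μ ~ Normal()
    @addlogprob! sum(logpdf.(Normal(μ, 1.0), x))
end
```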

```@docs
@addlogprob!
@@ -212,6 +206,21 @@ To test and/or benchmark the performance of an AD backend on a model, DynamicPPL

```@docs
DynamicPPL.TestUtils.AD.run_ad
```

The default test setting is to compare against ForwardDiff.
You can have more fine-grained control over how to test the AD backend using the following types:

```@docs
DynamicPPL.TestUtils.AD.AbstractADCorrectnessTestSetting
DynamicPPL.TestUtils.AD.WithBackend
DynamicPPL.TestUtils.AD.WithExpectedResult
DynamicPPL.TestUtils.AD.NoTest
```

These are returned / thrown by the `run_ad` function:

```@docs
DynamicPPL.TestUtils.AD.ADResult
DynamicPPL.TestUtils.AD.ADIncorrectException
```
@@ -329,9 +338,9 @@ The following functions were used for sequential Monte Carlo methods.

```@docs
get_num_produce
set_num_produce!
increment_num_produce!
reset_num_produce!
set_num_produce!!
increment_num_produce!!
reset_num_produce!!
setorder!
set_retained_vns_del!
```
@@ -346,6 +355,22 @@ Base.empty!
SimpleVarInfo
```

### Accumulators

The subtypes of [`AbstractVarInfo`](@ref) store the cumulative log prior and log likelihood, and sometimes other quantities that change during execution, in what are called accumulators.

```@docs
AbstractAccumulator
```

DynamicPPL provides the following default accumulators.

```@docs
LogPriorAccumulator
LogLikelihoodAccumulator
NumProduceAccumulator
```

### Common API

#### Accumulation of log-probabilities
@@ -354,6 +379,13 @@ SimpleVarInfo
getlogp
setlogp!!
acclogp!!
getlogjoint
getlogprior
setlogprior!!
acclogprior!!
getloglikelihood
setloglikelihood!!
accloglikelihood!!
resetlogp!!
```

@@ -416,21 +448,26 @@ DynamicPPL.varname_and_value_leaves

### Evaluation Contexts

Internally, both sampling and evaluation of log densities are performed with [`AbstractPPL.evaluate!!`](@ref).
Internally, model evaluation is performed with [`AbstractPPL.evaluate!!`](@ref).

```@docs
AbstractPPL.evaluate!!
```

The behaviour of a model execution can be changed with evaluation contexts that are passed as additional argument to the model function.
This method mutates the `varinfo` used for execution.
By default, it does not perform any actual sampling: it only evaluates the model using the values of the variables that are already in the `varinfo`.
To perform sampling, you can either wrap `model.context` in a `SamplingContext`, or use this convenience method:

```@docs
DynamicPPL.evaluate_and_sample!!
```
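
A rough sketch of the difference (names are illustrative; `SampleFromPrior` is assumed as the sampler):

```julia
using Random

vi = VarInfo(model)

# Evaluate the model at the values already stored in `vi`:
_, vi = AbstractPPL.evaluate!!(model, vi)

# Sample fresh values into `vi` instead (internal method, subject to change):
_, vi = DynamicPPL.evaluate_and_sample!!(Random.default_rng(), model, vi, SampleFromPrior())
```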

The behaviour of a model execution can be changed with evaluation contexts, which are a field of the model.
Contexts are subtypes of `AbstractPPL.AbstractContext`.

```@docs
SamplingContext
DefaultContext
LikelihoodContext
PriorContext
MiniBatchContext
PrefixContext
ConditionContext
```
@@ -477,7 +514,3 @@ DynamicPPL.Experimental.is_suitable_varinfo
```@docs
tilde_assume
```

```@docs
tilde_observe
```
1 change: 0 additions & 1 deletion ext/DynamicPPLForwardDiffExt.jl
@@ -11,7 +11,6 @@ function DynamicPPL.tweak_adtype(
ad::ADTypes.AutoForwardDiff{chunk_size},
::DynamicPPL.Model,
vi::DynamicPPL.AbstractVarInfo,
::DynamicPPL.AbstractContext,
) where {chunk_size}
params = vi[:]

22 changes: 11 additions & 11 deletions ext/DynamicPPLJETExt.jl
@@ -4,15 +4,10 @@ using DynamicPPL: DynamicPPL
using JET: JET

function DynamicPPL.Experimental.is_suitable_varinfo(
model::DynamicPPL.Model,
context::DynamicPPL.AbstractContext,
varinfo::DynamicPPL.AbstractVarInfo;
only_ddpl::Bool=true,
model::DynamicPPL.Model, varinfo::DynamicPPL.AbstractVarInfo; only_ddpl::Bool=true
)
# Let's make sure that both evaluation and sampling doesn't result in type errors.
f, argtypes = DynamicPPL.DebugUtils.gen_evaluator_call_with_types(
model, varinfo, context
)
f, argtypes = DynamicPPL.DebugUtils.gen_evaluator_call_with_types(model, varinfo)
# If specified, we only check errors originating somewhere in the DynamicPPL.jl.
# This way we don't just fall back to untyped if the user's code is the issue.
result = if only_ddpl
@@ -24,14 +19,19 @@ end
end

function DynamicPPL.Experimental._determine_varinfo_jet(
model::DynamicPPL.Model, context::DynamicPPL.AbstractContext; only_ddpl::Bool=true
model::DynamicPPL.Model; only_ddpl::Bool=true
)
# Use SamplingContext to test type stability.
sampling_model = DynamicPPL.contextualize(
model, DynamicPPL.SamplingContext(model.context)
)

# First we try with the typed varinfo.
varinfo = DynamicPPL.typed_varinfo(model, context)
varinfo = DynamicPPL.typed_varinfo(sampling_model)

# Let's make sure that both evaluation and sampling doesn't result in type errors.
issuccess, result = DynamicPPL.Experimental.is_suitable_varinfo(
model, context, varinfo; only_ddpl
sampling_model, varinfo; only_ddpl
)

if !issuccess
@@ -46,7 +46,7 @@ else
else
# Warn the user that we can't use the type stable one.
@warn "Model seems incompatible with typed varinfo. Falling back to untyped varinfo."
DynamicPPL.untyped_varinfo(model, context)
DynamicPPL.untyped_varinfo(sampling_model)
end
end
