Turing.jl newsletter #2498

penelopeysm opened this issue Feb 28, 2025 · 6 comments

penelopeysm commented Feb 28, 2025

Hello Turing.jl users!

We (the Turing.jl team) are starting a fortnightly series of updates on what we've been up to and what's in the works. We hope that this will provide you (our users) with some insight into the direction of the Turing ecosystem, and we'd also love for you to chip in with your thoughts if you have any.

You can keep up with this newsletter through any of the following methods:

  1. Subscribing to this GitHub issue
  2. Joining us at #turing on the Julia Slack workspace
  3. Our website: https://turinglang.org/news/

We might also post in other places, such as Discourse; this is still in the works.

Turing.jl Newsletter 1 — 28 February 2025

Welcome to the inaugural issue of the Turing.jl newsletter!

New Turing behaviour, especially .~

Recently we have been focused on reworking a number of internal data structures in DynamicPPL.jl (the package that allows you to define models). We haven't released this yet, but you might be interested to see the changelog on GitHub.
The main user-facing change is the simplification of the broadcasted tilde .~, which we previously posted about on Slack. We also fixed a bug where the prefixes of nested submodels were applied in the wrong order.
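
For illustration, here's a minimal sketch of the simplified syntax as we understand it (the model name is ours; see the changelog for the exact semantics):

using Turing

@model function demo_broadcast()
    x = Vector{Float64}(undef, 3)
    # Under the simplified semantics, the right-hand side of .~ is a single
    # univariate distribution, broadcast over the elements of x (iid draws):
    x .~ Normal()
end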

DifferentiationInterface migration

From a developer perspective, we have now fully switched over to DifferentiationInterface.jl for automatic differentiation of models. This work wouldn't have been possible without @gdalle's work on DI itself, as well as his help with integrating it into DynamicPPL. It also paves the way for a long-standing goal of Turing: exposing a set of AD testing utilities that will allow AD package developers to test against a fixed set of models, letting us formalise the idea of Turing being 'compatible' with a given AD package.
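
At the user level, you still select a backend via ADTypes as before. A minimal sketch (model and backend choice illustrative):

using Turing, ADTypes

@model function gdemo()
    x ~ Normal()
end

# Any backend supported through DifferentiationInterface can be requested:
chain = sample(gdemo(), NUTS(; adtype = AutoForwardDiff()), 1000)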

The plan for submodels

We have been discussing for a while how best to fully implement submodels (i.e. to be able to treat submodels like distributions, in the sense that we can sample from them and also condition models on values obtained from them). There is a proposal written up on GitHub which goes into more depth about what we'd like to see and the underlying syntax. If this is a Turing feature that you use, do feel free to let us know what you think.

Turing.jl is now published (again!)

We recently published a new paper with a high-level overview of Turing.jl's features and implementation. Check it out!
Fjelde, T. E., Xu, K., Widmann, D., Tarek, M., Pfiffer, C., Trapp, M., Axen, S. D., Sun, X., Hauru, M., Yong, P., Tebbutt, W., Ghahramani, Z., & Ge, H. (2024). Turing.jl: A General-Purpose Probabilistic Programming Language. ACM Transactions on Probabilistic Machine Learning, 1(1). https://dl.acm.org/doi/10.1145/3711897

We have also published in the conference proceedings of the workshop on Languages for Inference (LAFI), which was held as part of POPL 2025:
Tim Hargreaves, Qing Li, Charles Knipp, Frederic Wantiez, Simon J. Godsill, Hong Ge. State Space Model Programming in Turing.jl. The Languages for Inference (LAFI) workshop, 2025. (link)

Looking for Google Summer of Code students

We are keen to take students for GSoC in 2025! If you are interested in working on a Python/R interface to JuliaBUGS, or making some improvements to TuringPosteriorDB, do get in touch.

Turing.jl Newsletter 2 — 14 March 2025

DynamicPPL benchmarking

DynamicPPL.jl now has a set of benchmarks that are run on GitHub Actions! We measure how long it takes to evaluate a small selection of models and also to run AD on them. If you think that there are specific models / features that we should add to the benchmarks, please feel free to create an issue and let us know.
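
If you want to do something similar locally, here is a rough sketch of timing a model's log-density evaluation (the model is illustrative; our CI benchmarks use their own harness):

using DynamicPPL, Distributions, BenchmarkTools
using LogDensityProblems

@model function demo()
    x ~ Normal()
    y ~ Normal(x, 1)
end
model = demo() | (y = 0.5,)

# LogDensityFunction exposes the model through the LogDensityProblems API:
ldf = DynamicPPL.LogDensityFunction(model)
θ = [0.0]
@btime LogDensityProblems.logdensity($ldf, $θ)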

Separately, we are planning to merge the benchmarking utilities in TuringBenchmarking.jl into DynamicPPL itself. There might be a little bit of API shake-up as part of this, but it's for the better as it'll allow the benchmarking code to more easily stay in sync with DynamicPPL — allowing us to catch performance regressions in PRs.

SSMProblems

The SSMProblems.jl and GeneralisedFilters.jl packages have now been merged into a single repository: https://github.com/TuringLang/SSMProblems.jl. This won't affect you if you are using the packages from the Julia General registry, but if you're looking to develop off the main branch you may have to use a different URL, or specify a subdirectory in Pkg.add.
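
Concretely, a development install from the merged repository might look like this (the subdirectory names here are assumptions; check the repository layout):

using Pkg
Pkg.add(url = "https://github.com/TuringLang/SSMProblems.jl", subdir = "SSMProblems")
Pkg.add(url = "https://github.com/TuringLang/SSMProblems.jl", subdir = "GeneralisedFilters")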

Smaller bits

Other code changes that have been merged:

  • Some old code in AdvancedHMC.jl has been cleaned up quite a bit. See the 0.7.0 release for more information.
  • Turing's Gibbs sampler now supports warmup steps properly. We're still thinking about how to encode the scenario where different sub-samplers have different numbers of warmup steps; if you have any ideas, do get in touch on that PR.
  • We are going to formally remove support for Zygote as an AD backend, since we don't test it thoroughly in Turing's test suite. You can of course still use Zygote yourself by passing ADTypes.AutoZygote() (see the sketch below), although we can't guarantee that we will fix any bugs that arise.
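
Here is what that looks like in practice, as a sketch (model name is ours):

using Turing, ADTypes, Zygote

@model function zdemo()
    x ~ Normal()
end

# Zygote can still be requested explicitly, at your own risk:
chain = sample(zdemo(), NUTS(; adtype = AutoZygote()), 1000)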

Turing.jl Newsletter 3 — 28 March 2025

Turing v0.37

We've now released v0.37 of Turing. This includes a lot of new functionality from DynamicPPL 0.35, including the new, simplified .~. It also cleans up the list of exported identifiers; most notably, if you were using identifiers from DynamicPPL, you will now also need to load DynamicPPL itself (via import or using).
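
In other words, code that previously relied on Turing re-exporting DynamicPPL names needs an explicit load. A sketch (whether a particular identifier needs qualifying depends on the release notes):

using Turing
import DynamicPPL

@model function f()
    x ~ Normal()
end

# Illustrative: qualify DynamicPPL identifiers explicitly.
vi = DynamicPPL.VarInfo(f())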

More generally, it's likely that from now on our releases will involve larger changes because we are aggregating more changes into a single minor version. We are, however, also committed to providing thorough release notes that will help users and library authors upgrade more easily! Release notes will be available on GitHub, and you can see the notes for Turing 0.37 and DynamicPPL 0.35 here. If you have any trouble upgrading, just drop us a note.

AD backend testing

Right now we test a series of DynamicPPL models with several AD backends. It's rather ad-hoc and we are currently drafting a more formal interface for testing AD backends with Turing models. It's still early days but if you are an AD package developer and want to know what this means for integration with Turing, get in touch (easiest way: ping me on Slack) 🙂

Unified interface for optimisation algorithms

There's an ongoing discussion about unifying the interface for MAP/MLE point estimates and variational inference (and potentially even MCMC). If you use more than one of these methods and have thoughts on what you'd like from an interface, we'd be very happy to hear from you!

Turing.jl Newsletter 4 — 11 April 2025

Have you used Turing.jl?

Given that you're reading this, we hope so! We're currently putting together a list of papers and other outputs (e.g. tutorials, presentations, ...) which make use of Turing.jl. We'd love to have more examples; if you have any, please do get in touch (feel free to message me and I can forward it). Thank you!

State of the AD

Over the last few weeks we've been putting together a little project that tabulates the performance of different AD backends on a variety of Turing.jl models, and we're now quite excited to share it: https://turinglang.org/ADTests/. This will hopefully help answer the perennial question of whether you should stick with good old ForwardDiff or try something else. Do note that (at the time of writing) the table is still in an alpha stage, and there are a lot of details that have yet to be ironed out 🙂 However, suggestions are always welcome!

JuliaBUGS.jl

The BUGS (Bayesian inference Using Gibbs Sampling) language provides a declarative way to specify complex Bayesian statistical models. For years, implementations like WinBUGS, OpenBUGS, and JAGS have been widely used tools for researchers applying these models. JuliaBUGS.jl is a modern implementation of the BUGS language, aiming for full backwards compatibility with standard BUGS models, while also offering improved interoperability with the Julia ecosystem. (For details and examples of BUGS syntax, check out the JuliaBUGS documentation.)

A recent experimental update introduces significant performance improvements in JuliaBUGS: instead of relying solely on the previous graph-based approach, JuliaBUGS can now directly generate Julia code to compute the model's log-density. This code generation technique can yield >10x speedups compared to the graph-based method. Currently, this provides the most benefit for models with linear or hierarchical structures; support for state space models is planned for a future update.

To use it, run this after compiling your model:

JuliaBUGS.set_evaluation_mode(your_model, JuliaBUGS.UseGeneratedLogDensityFunction())
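
For context, here's a rough end-to-end sketch; the model and data are made up for illustration, so check the JuliaBUGS documentation for the exact API:

using JuliaBUGS

model_def = @bugs begin
    for i in 1:N
        mu[i] = alpha + beta * x[i]
        y[i] ~ dnorm(mu[i], tau)
    end
    alpha ~ dnorm(0, 1.0E-4)
    beta ~ dnorm(0, 1.0E-4)
    tau ~ dgamma(0.001, 0.001)
end

data = (N = 5, x = [1.0, 2.0, 3.0, 4.0, 5.0], y = [2.2, 3.9, 6.1, 8.0, 9.9])
model = compile(model_def, data)

# Switch the compiled model to the experimental code-generated evaluation mode:
JuliaBUGS.set_evaluation_mode(model, JuliaBUGS.UseGeneratedLogDensityFunction())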

We would love for you to test out this new functionality! If you have any feedback, please do feel free to open a GitHub issue or discussion.

Even more advanced HMC

Lastly, we have a paper of our own to share on Hamiltonian Monte Carlo methods!

We will be looking to integrate these methods into Turing.jl in the future.

Turing.jl Newsletter 5 — 25 April 2025

DynamicPPL 0.36

A new minor version of DynamicPPL brings with it a few changes, especially to the behaviour of submodels. These have not yet percolated up to Turing.jl, but they soon will. Firstly, the prefixing behaviour has changed: consider these models

@model function inner()
    x ~ Normal()
end
@model function outer()
    a = [0.0]
    a[1] ~ to_submodel(inner())
end

If you ran this model, you would find that the single random variable was called a[1].x. However, this wasn't the x field of the first element of a: it was a variable whose name was literally just Symbol("a[1].x"). DynamicPPL 0.36 changes this behaviour so that the variable is correctly recognised as the x field of the first element of a. This means that if you are trying to, for example, condition on the variable, you can do:

outer() | (@varname(a[1].x) => 1.0)

On the topic of conditioning, you can now also correctly condition or fix variables in a model before using it as a submodel, as this example demonstrates:

@model function inner()
    x ~ Normal()
end
@model function outer()
    a ~ to_submodel(inner() | (@varname(x) => 1))
end

Previously, if you wanted to do this, you would have to condition on @varname(a.x), meaning that you'd need to know the prefix before conditioning it. The current system allows for more modular construction of nested models.

For more complete details, please see the release notes.

TuringBenchmarking.jl

DynamicPPL 0.36 also brings new functionality that can be used for testing and benchmarking automatic differentiation on Turing models. This is what powers the ADTests table, which we shared last time round. (Psst — there are more examples now than before!)

For more information, see the docstring of DynamicPPL.TestUtils.AD.run_ad in the DynamicPPL docs.
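
As a taste, here is a minimal sketch (the model is ours; the exact signature and return value are described in the docstring):

using DynamicPPL, Distributions, ADTypes

@model function addemo()
    x ~ Normal()
end

# Check a backend's gradients against a reference on this model:
result = DynamicPPL.TestUtils.AD.run_ad(addemo(), AutoForwardDiff())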

As a result, the AD benchmarking functionality in TuringBenchmarking.jl is not really needed anymore. If you are using that package, we recommend switching over to the functionality built directly into DynamicPPL.

AdvancedHMC compatibility with ComponentArrays

AdvancedHMC had a fairly long-standing issue where it couldn't always be used with ComponentArrays as the position / momentum. This has now been fixed; you can take a look at the test suite to see examples of how they can be used together.
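
A minimal sketch, adapted from the AdvancedHMC README but using a ComponentArray as the position (the toy target and names are ours):

using AdvancedHMC, ForwardDiff, ComponentArrays
using LogDensityProblems

# A standard-normal log-density implementing the LogDensityProblems interface:
struct Gauss end
LogDensityProblems.logdensity(::Gauss, θ) = -sum(abs2, θ) / 2
LogDensityProblems.dimension(::Gauss) = 3
LogDensityProblems.capabilities(::Type{Gauss}) = LogDensityProblems.LogDensityOrder{0}()

# Named components instead of a plain vector:
initial_θ = ComponentArray(a = 0.0, b = zeros(2))

metric = DiagEuclideanMetric(3)
hamiltonian = Hamiltonian(metric, Gauss(), ForwardDiff)
integrator = Leapfrog(find_good_stepsize(hamiltonian, initial_θ))
kernel = HMCKernel(Trajectory{MultinomialTS}(integrator, GeneralisedNoUTurn()))
adaptor = StanHMCAdaptor(MassMatrixAdaptor(metric), StepSizeAdaptor(0.8, integrator))

samples, stats = sample(hamiltonian, kernel, initial_θ, 1_000, adaptor, 500)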

Turing.jl Newsletter 6 — 9 May 2025

Turing v0.38 has just been released and incorporates the changes from DynamicPPL which were mentioned in the last newsletter. It also contains a fix for the Gibbs sampler, so that you can now specify arbitrary VarNames for each sampler (previously, you could only specify single-symbol VarNames). For example, you can now specify the a.x and b.x VarNames here:

@model function inner()
    x ~ Normal()
end
@model function outer()
    a ~ to_submodel(inner())
    b ~ to_submodel(inner())
end
sample(outer(), Gibbs(@varname(a.x) => MH(), @varname(b.x) => MH()), 100)

It is theoretically possible that this will be slow for VarNames that involve indexing (e.g. x[1]), although we don't have an example of this yet. If you find anything you think should be faster, let us know.

One other minor point: on ADTests you can now hover over a model name to see its definition.
