
Commit 978017b

Author: AoifeHughes
Add FAQ section to improve user guidance
1 parent f6b5684 commit 978017b

File tree

2 files changed: +74 −0 lines changed

_quarto.yml

Lines changed: 2 additions & 0 deletions

```diff
@@ -24,6 +24,8 @@ website:
         text: Get Started
       - href: tutorials/coin-flipping/
         text: Tutorials
+      - href: faq/
+        text: FAQ
       - href: https://turinglang.org/library/
         text: Libraries
       - href: https://turinglang.org/news/
```

faq/index.qmd

Lines changed: 72 additions & 0 deletions

@@ -0,0 +1,72 @@
---
title: "Frequently Asked Questions"
description: "Common questions and answers about using Turing.jl"
---

## Why is this variable being treated as random instead of observed?

This is a common source of confusion. In Turing.jl, you can only condition on or manipulate expressions that explicitly appear on the left-hand side (LHS) of a `~` statement.

For example, if your model contains:

```julia
x ~ filldist(Normal(), 2)
```

you cannot directly condition on `x[2]` using `condition(model, @varname(x[2]) => 1.0)`, because `x[2]` never appears on the LHS of a `~` statement; only `x` as a whole does.
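If you do need to condition on individual components, one workaround (a sketch, not taken from the linked pages; the model name is illustrative) is to give each component its own `~` statement, so that each `x[i]` becomes a separately addressable variable name:

```julia
using Turing

# Sketch: each x[i] appears on the LHS of its own `~` statement,
# so @varname(x[2]) is a valid key for conditioning.
@model function demo()
    x = Vector{Float64}(undef, 2)
    for i in 1:2
        x[i] ~ Normal()
    end
    return x
end

# Conditioning on a single component now works:
conditioned = condition(demo(), @varname(x[2]) => 1.0)
```

The key point is unchanged: only expressions that appear on the LHS of `~` can be addressed with `@varname`.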

To understand more about how Turing determines whether a variable is treated as random or observed, see:

- [Compiler Design Overview](../developers/compiler/design-overview/) - explains the heuristics Turing uses
- [DynamicPPL Transformations](../developers/transforms/dynamicppl/) - details about variable transformations and the `@varname` macro
- [Core Functionality](../core-functionality/) - basic explanation of the `~` notation and conditioning

## How do I implement a sampler for a Turing.jl model?

We have comprehensive guides on implementing custom samplers:

- [Implementing Samplers Tutorial](../developers/inference/implementing-samplers/) - step-by-step guide on implementing samplers in the AbstractMCMC framework
- [AbstractMCMC-Turing Interface](../developers/inference/abstractmcmc-turing/) - how to integrate your sampler with Turing
- [AbstractMCMC Interface](../developers/inference/abstractmcmc-interface/) - the underlying interface documentation

## Can I use parallelism / threads in my model?

Yes! Turing.jl supports both multithreaded and distributed sampling. See the [Core Functionality guide](../core-functionality/#sampling-multiple-chains) for detailed examples showing:

- Multithreaded sampling using `MCMCThreads()`
- Distributed sampling using `MCMCDistributed()`
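As a minimal sketch (the model and settings here are illustrative, not taken from the linked guide), multithreaded sampling looks like this:

```julia
using Turing

# A toy coin-flip model; `y` is a vector of observed Bool outcomes.
@model function coinflip(y)
    p ~ Beta(1, 1)
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)
    end
end

model = coinflip(rand(Bool, 100))

# Four chains, one per thread (start Julia with e.g. `--threads 4`):
chains = sample(model, NUTS(), MCMCThreads(), 1000, 4)

# Distributed alternative (requires `addprocs` and `@everywhere using Turing`):
# chains = sample(model, NUTS(), MCMCDistributed(), 1000, 4)
```

With either sampling mode the result is a single `Chains` object combining all chains.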

## How do I check the type stability of my Turing model?

Type stability is crucial for performance. Check out:

- [Performance Tips](../usage/performance-tips/) - includes specific advice on type stability
- [Automatic Differentiation](../usage/automatic-differentiation/) - contains benchmarking utilities using `DynamicPPL.TestUtils.AD`
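One common pattern from the performance tips, sketched here: avoid abstractly typed containers inside the model, since `Vector(undef, n)` creates a `Vector{Any}` and poisons type inference downstream.

```julia
using Turing

# Potentially type-unstable: `x` is a Vector{Any}.
@model function unstable(n)
    x = Vector(undef, n)
    for i in 1:n
        x[i] ~ Normal()
    end
end

# Type-stable alternative: pass the element type as a model type
# parameter so AD backends can substitute their own number types.
@model function stable(n, ::Type{T}=Float64) where {T}
    x = Vector{T}(undef, n)
    for i in 1:n
        x[i] ~ Normal()
    end
end
```

The `::Type{T}=Float64` argument matters because gradient-based samplers evaluate the model with tracked or dual number types, not plain `Float64`.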

## How do I debug my Turing model?

For debugging both statistical and syntactical issues:

- [Troubleshooting Guide](../usage/troubleshooting/) - common errors and their solutions
- For more advanced debugging, DynamicPPL provides `DynamicPPL.DebugUtils` for inspecting model internals
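As a sketch of the latter (exact names and availability depend on your DynamicPPL version, so treat this as an assumption to verify against its docs):

```julia
using Turing, DynamicPPL

@model function demo()
    x ~ Normal()
    y ~ Normal(x, 1)
end

# Hypothetical usage: runs the model once and reports problems such as
# repeated variable names or NaN/Inf log-probability contributions.
DynamicPPL.check_model(demo())
```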

## What are the main differences between Turing, BUGS, and Stan?

While there are many syntactic differences, key advantages of Turing include:

- **Julia ecosystem**: Full access to Julia's profiling and debugging tools
- **Parallel computing**: Much easier to use distributed and parallel computing inside models
- **Flexibility**: Can use arbitrary Julia code within models
- **Extensibility**: Easy to implement custom distributions and samplers
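To illustrate the flexibility point, here is a hypothetical model (names are illustrative) mixing ordinary Julia control flow and helper functions with `~` statements:

```julia
using Turing

halve(v) = v / 2  # an ordinary Julia helper function

@model function flexible(y)
    σ ~ truncated(Normal(0, 1); lower=0)
    μ ~ Normal()
    for i in eachindex(y)
        # Plain Julia branching inside the model body:
        scale = iseven(i) ? σ : halve(σ)
        y[i] ~ Normal(μ, scale)
    end
end

chain = sample(flexible(randn(10)), NUTS(), 1000)
```

There is no separate modelling language: the model body is just a Julia function, so anything callable from Julia is callable from the model.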

## Which automatic differentiation backend should I use?

The choice of AD backend can significantly impact performance. See:

- [Automatic Differentiation Guide](../usage/automatic-differentiation/) - comprehensive comparison of ForwardDiff, Mooncake, ReverseDiff, and other backends
- [Performance Tips](../usage/performance-tips/#choose-your-ad-backend) - quick guide on choosing backends
- [AD Backend Benchmarks](https://turinglang.org/ADTests/) - performance comparisons across various models

For more specific recommendations, check out the [DifferentiationInterface.jl tutorial](https://juliadiff.org/DifferentiationInterface.jl/DifferentiationInterfaceTest/stable/tutorial/).
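As a sketch of switching backends (the model is illustrative; `AutoForwardDiff` and `AutoReverseDiff` come from ADTypes.jl and are re-exported by Turing):

```julia
using Turing
import ReverseDiff  # the backend package must be loaded before use

@model function demo(y)
    μ ~ Normal()
    y ~ Normal(μ, 1)
end

# Gradient-based samplers accept an `adtype` keyword:
chain_fd = sample(demo(1.0), NUTS(; adtype=AutoForwardDiff()), 1000)
chain_rd = sample(demo(1.0), NUTS(; adtype=AutoReverseDiff(; compile=true)), 1000)
```

As a rough rule of thumb, forward mode tends to suit models with few parameters and reverse mode models with many; the benchmarks linked above are the better guide for any specific model.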

## I changed one line of my model and now it's so much slower; why?

Small changes can have big performance impacts. Common culprits include:

- Type instability introduced by the change
- Switching from vectorized to scalar operations (or vice versa)
- Inadvertently causing AD backend incompatibilities
- Breaking assumptions that allowed compiler optimizations

See our [Performance Tips](../usage/performance-tips/) and [Troubleshooting Guide](../usage/troubleshooting/) for debugging performance regressions.
