Skip to content

Commit e8dbfb5

Browse files
authored
Minor updates to tutorial_odegen and ml_classical_shadows (#1429)
**Title:** **Summary:** - The odegen demo is very sensitive to the initial guess. Updating the random seed so that the plot looks nicer. - The classical shadows demo is not compatible with latest jax, and is using an old interface of sklearn **Relevant references:** **Possible Drawbacks:** **Related GitHub Issues:**
1 parent a255e60 commit e8dbfb5

File tree

4 files changed

+12
-9
lines changed

4 files changed

+12
-9
lines changed

demonstrations/ml_classical_shadows.metadata.json

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@
66
}
77
],
88
"dateOfPublication": "2022-05-02T00:00:00+00:00",
9-
"dateOfLastModification": "2025-01-10T00:00:00+00:00",
9+
"dateOfLastModification": "2025-07-10T00:00:00+00:00",
1010
"categories": [
1111
"Quantum Machine Learning"
1212
],

demonstrations/ml_classical_shadows.py

Lines changed: 6 additions & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -27,8 +27,10 @@
2727
properties from the learned classical shadows. So let's get started!
2828
2929
.. note::
30-
This demo is compatible with the latest version of PennyLane and ``neural-tangents==0.6.5``.
30+
This demo is compatible with the latest version of PennyLane and ``neural-tangents``.
3131
The latter is required for building the kernel for the infinite network used in training.
32+
As of July 10th, 2025, the latest version of ``neural-tangents`` (v0.6.5) is only compatible
33+
with ``jax<0.6.0``.
3234
3335
3436
Building the 2D Heisenberg model
@@ -648,7 +650,7 @@ def build_dataset(num_points, Nr, Nc, T=500):
648650
# from the ``sklearn`` library.
649651
#
650652

651-
from sklearn.metrics import mean_squared_error
653+
from sklearn.metrics import root_mean_squared_error
652654

653655
def fit_predict_data(cij, kernel, opt="linear"):
654656

@@ -685,8 +687,8 @@ def fit_predict_data(cij, kernel, opt="linear"):
685687
best_model = model(hyperparam).fit(X_train, y_train)
686688
best_pred = best_model.predict(X_test)
687689
best_cv_score = cv_score
688-
best_test_score = mean_squared_error(
689-
best_model.predict(X_test).ravel(), y_test_clean.ravel(), squared=False
690+
best_test_score = root_mean_squared_error(
691+
best_model.predict(X_test).ravel(), y_test_clean.ravel()
690692
)
691693

692694
return (

demonstrations/tutorial_odegen.metadata.json

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@
66
}
77
],
88
"dateOfPublication": "2023-12-12T00:00:00+00:00",
9-
"dateOfLastModification": "2025-01-28T00:00:00+00:00",
9+
"dateOfLastModification": "2025-07-08T00:00:00+00:00",
1010
"categories": [
1111
"Optimization",
1212
"Quantum Computing",

demonstrations/tutorial_odegen.py

Lines changed: 4 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -296,7 +296,7 @@ def partial_step(grad_circuit, opt_state, theta):
296296

297297
return thetas, energy
298298

299-
key = jax.random.PRNGKey(0)
299+
key = jax.random.PRNGKey(888)
300300
theta0 = jax.random.normal(key, shape=(n_param_batch, tbins * 2))
301301

302302
thetaf_odegen, energy_odegen = run_opt(value_and_grad_jax, theta0)
@@ -313,8 +313,9 @@ def partial_step(grad_circuit, opt_state, theta):
313313

314314
##############################################################################
315315
# We see that with analytic gradients (ODEgen), we can reach the ground state energy within 100 epochs, whereas with SPS gradients we cannot find the path
316-
# towards the minimum due to the stochasticity of the gradient estimates. Note that both optimizations start from the same (random) initial point.
317-
# This picture solidifies when repeating this procedure for multiple runs from different random initializations, as was demonstrated in [#Kottmann]_.
316+
# towards the minimum due to the stochasticity of the gradient estimates. Note that the convergence of the optimization is sensitive to the initial guess.
317+
# In this demonstration, both optimizations start from the same (random) initial point. This picture solidifies when repeating this procedure for multiple
318+
# runs from different random initializations, as was demonstrated in [#Kottmann]_.
318319
#
319320
# We also want to make sure that this is a fair comparison in terms of quantum resources. In the case of ODEgen, we maximally have :math:`\mathcal{R}_\text{ODEgen} = 2 (4^n - 1) = 30` expectation values.
320321
# For SPS we have :math:`2 N_g N_s = 32` (due to :math:`N_g = 2` and :math:`N_s=8` time samples per gradient that we chose in ``num_split_times`` above). Thus, overall, we require fewer

0 commit comments

Comments
 (0)