diff --git a/Recipes/Along-slope-velocities.ipynb b/Recipes/Along-slope-velocities.ipynb
index 6856e6b4..e9ef7c2b 100644
--- a/Recipes/Along-slope-velocities.ipynb
+++ b/Recipes/Along-slope-velocities.ipynb
@@ -5543,7 +5543,7 @@
    "metadata": {},
    "source": [
     "### Map of along-slope velocity with bathymetry contours. \n",
-    "#### On a Large ARE Instance, this should take ~45 seconds"
+    "**On a Large ARE Instance, this should take ~45 seconds**"
    ]
   },
   {
diff --git a/Recipes/Geostrophic_Velocities_from_Sea_Level.ipynb b/Recipes/Geostrophic_Velocities_from_Sea_Level.ipynb
index 420e68b7..8ec0fcfd 100644
--- a/Recipes/Geostrophic_Velocities_from_Sea_Level.ipynb
+++ b/Recipes/Geostrophic_Velocities_from_Sea_Level.ipynb
@@ -1678,10 +1678,10 @@
    "id": "d4520ed1-f69e-460e-8a7c-0abb0589387f",
    "metadata": {},
    "source": [
-    "\\begin{eqnarray}\n",
+    "$$\n",
     " u_{g,s} = -\\frac{g}{f}\\frac{\\partial \\eta}{\\partial y} \\quad \\textrm{and} \\quad\n",
     " v_{g,s} = \\frac{g}{f}\\frac{\\partial \\eta}{\\partial x}\n",
-    "\\end{eqnarray}"
+    "$$"
    ]
   },
   {
diff --git a/Recipes/Nearest_Neighbour_Distance.ipynb b/Recipes/Nearest_Neighbour_Distance.ipynb
index 2fb7ecee..0f82cfe0 100644
--- a/Recipes/Nearest_Neighbour_Distance.ipynb
+++ b/Recipes/Nearest_Neighbour_Distance.ipynb
@@ -1432,7 +1432,7 @@
    "id": "ce17902e-7eae-4a71-aa8a-94703a1bb4ad",
    "metadata": {},
    "source": [
-    "The sea ice outputs need some processing before we can start our calculations. You can check this [example](IcePlottingExample.ipynb) for a guide on how to load and plot sea ice data. \n",
+    "The sea ice outputs need some processing before we can start our calculations. You can check this [example](Sea_Ice_Coordinates.ipynb) for a guide on how to load and plot sea ice data. \n",
     " \n",
     "We will follow these processing steps:\n",
     "1. Correct time dimension values by subtracting 12 hours,\n",
diff --git a/Tutorials/Make_Your_Own_Intake_Datastore.ipynb b/Tutorials/Make_Your_Own_Intake_Datastore.ipynb
index 6af85ebc..b265ef30 100644
--- a/Tutorials/Make_Your_Own_Intake_Datastore.ipynb
+++ b/Tutorials/Make_Your_Own_Intake_Datastore.ipynb
@@ -89,8 +89,7 @@
    "id": "bc200ca9-5cba-4413-a550-bd0a4f1b54bd",
    "metadata": {},
    "source": [
-    "# Building the datastore\n",
-    "___"
+    "## Building the datastore"
    ]
   },
   {
@@ -233,7 +232,7 @@
     "tags": []
    },
    "source": [
-    "# Using your datastore"
+    "## Using your datastore"
    ]
   },
   {
@@ -448,7 +447,6 @@
    "metadata": {},
    "source": [
     "# 2. The convenience method: `use_datastore`\n",
-    "___\n",
     "\n",
     "\n",
     "With the `access-nri-intake` v1.1.1 release, it is now possible to build and load datastores, all in a single step.\n",
diff --git a/Tutorials/intake_to_dask_efficiently_chunking.ipynb b/Tutorials/intake_to_dask_efficiently_chunking.ipynb
index 5b32ab4d..3804bb42 100644
--- a/Tutorials/intake_to_dask_efficiently_chunking.ipynb
+++ b/Tutorials/intake_to_dask_efficiently_chunking.ipynb
@@ -4446,7 +4446,7 @@
    "source": [
     "## So even with optimised chunks that are about the right size, we still didn't really improve things a great deal.\n",
     "\n",
-    "#### Sometimes, getting the chunks right can be more of an art than a science.\n",
+    "**Sometimes, getting the chunks right can be more of an art than a science.**\n",
     "\n",
     "- We tried to follow the 300MiB chunk rule of thumb above, and slowed down loading our dataset by 50% - so the warnings about degrading performance were right. This is because the chunks we chose weren't integer multiples of the disk chunks. However, without `validate_chunkspec`, we would have had no (easy) way of knowing this!\n",
     "- If we wanted to throw away a large fraction of a dimension - for example, if we were only interested in data in the Southern Ocean, we could instead have tried to split our chunks up on latitude. That way, when we select a subset of data, we can throw away a lot of chunks - without having to extract a subset of their data first.\n",
@@ -10398,12 +10398,12 @@
     "___\n",
     "# Part 2: Combining coordinates\n",
     "\n",
-    "### Unfortunately, that didn't seem to help much - it might have even made things a bit slower. \n",
+    "**Unfortunately, that didn't seem to help much - it might have even made things a bit slower.**\n",
     "- So what is the issue?\n",
     "\n",
     "It turns our that xarray is checking that all our coordinates are consistent. Doing that with the 2D arrays `(ni,nj)` can be really quite slow. Fortunately, we have options to turn these checks off too, if we are confident we don't need them. In this instance, they come from a consistent model grid, so we know we can get rid of them.\n",
     "\n",
-    "#### We don't use `xarray_open_kwargs` for this: we use `xarray_combine_by_kwargs`\n",
+    "**We don't use** `xarray_open_kwargs` **for this: we use** `xarray_combine_by_kwargs`\n",
     "\n",
     "Lets see if we can beat four minutes...\n",
     "___\n",
@@ -11293,7 +11293,7 @@
    "id": "940268d8-4f00-41e9-b3cd-ab041ef186b5",
    "metadata": {},
    "source": [
-    "#### So this actually slowed things down pretty substantially - that's not ideal!\n",
+    "**So this actually slowed things down pretty substantially - that's not ideal!**\n",
     "\n",
     "Step 2: Let's set the `compat` flag to `override`. This skips a bunch of checks that slow things down a bunch.\n",
     "Note however: if we don't set `'datavars' : 'minimal'` and `'coords' : 'minimal'`, this can throw an error.\n"
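The `compat`, `data_vars` and `coords` options referenced in the last hunk are standard xarray combine keywords. A minimal sketch of how they fit together in a plain `xarray.open_mfdataset` call, with placeholder file paths (the tutorial notebook instead opens the data through the intake datastore and passes these options via its combine kwargs):

```python
import xarray as xr

# Placeholder paths; the tutorial gets its files from the intake datastore instead.
paths = ["ocean_month_000.nc", "ocean_month_001.nc"]

ds = xr.open_mfdataset(
    paths,
    combine="by_coords",
    compat="override",    # skip consistency checks; take variables/coords from the first dataset
    data_vars="minimal",  # only concatenate variables that contain the concatenation dimension
    coords="minimal",     # likewise for coordinates, e.g. 2D (ni, nj) lat/lon arrays
    parallel=True,        # open the files in parallel with dask
)
```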