24 changes: 12 additions & 12 deletions README.md
@@ -32,15 +32,15 @@ data using remote clusters.
`conda install -c conda-forge mlforecast`

For more detailed instructions you can refer to the [installation
page](https://nixtla.github.io/mlforecast/docs/getting-started/install.html).
page](https://nixtlaverse.nixtla.io/mlforecast/docs/getting-started/install).

## Quick Start

**Get Started with this [quick
guide](https://nixtla.github.io/mlforecast/docs/getting-started/quick_start_local.html).**
guide](https://nixtlaverse.nixtla.io/mlforecast/docs/getting-started/quick_start_local).**

**Follow this [end-to-end
walkthrough](https://nixtla.github.io/mlforecast/docs/getting-started/end_to_end_walkthrough.html)
walkthrough](https://nixtlaverse.nixtla.io/mlforecast/docs/getting-started/end_to_end_walkthrough)
for best practices.**

### Videos
@@ -61,7 +61,7 @@ for best practices.**
Current Python alternatives for machine learning models are slow,
inaccurate and don’t scale well. So we created a library that can be
used to forecast in production environments.
[`MLForecast`](https://Nixtla.github.io/mlforecast/forecast.html#mlforecast)
[`MLForecast`](https://nixtlaverse.nixtla.io/mlforecast/forecast#class-mlforecast)
includes efficient feature engineering to train any machine learning
model (with `fit` and `predict` methods such as
[`sklearn`](https://scikit-learn.org/stable/)) to fit millions of time
@@ -83,36 +83,36 @@ Missing something? Please open an issue or write us in
## Examples and Guides

📚 [End to End
Walkthrough](https://nixtla.github.io/mlforecast/docs/getting-started/end_to_end_walkthrough.html):
Walkthrough](https://nixtlaverse.nixtla.io/mlforecast/docs/getting-started/end_to_end_walkthrough):
model training, evaluation and selection for multiple time series.

🔎 [Probabilistic
Forecasting](https://nixtla.github.io/mlforecast/docs/how-to-guides/prediction_intervals.html):
Forecasting](https://nixtlaverse.nixtla.io/mlforecast/docs/tutorials/prediction_intervals_in_forecasting_models):
use Conformal Prediction to produce prediction intervals.

👩‍🔬 [Cross
Validation](https://nixtla.github.io/mlforecast/docs/how-to-guides/cross_validation.html):
Validation](https://nixtlaverse.nixtla.io/mlforecast/docs/how-to-guides/cross_validation):
robust evaluation of model performance.

🔌 [Predict Demand
Peaks](https://nixtla.github.io/mlforecast/docs/tutorials/electricity_peak_forecasting.html):
Peaks](https://nixtlaverse.nixtla.io/mlforecast/docs/tutorials/electricity_peak_forecasting):
electricity load forecasting for detecting daily peaks and reducing
electric bills.

📈 [Transfer
Learning](https://nixtla.github.io/mlforecast/docs/how-to-guides/transfer_learning.html):
Learning](https://nixtlaverse.nixtla.io/mlforecast/docs/how-to-guides/transfer_learning):
pretrain a model using a set of time series and then predict another one
using that pretrained model.

🌡️ [Distributed
Training](https://nixtla.github.io/mlforecast/docs/getting-started/quick_start_distributed.html):
Training](https://nixtlaverse.nixtla.io/mlforecast/docs/getting-started/quick_start_distributed):
use a Dask, Ray or Spark cluster to train models at scale.

## How to use

The following provides a very basic overview, for a more detailed
description see the
[documentation](https://nixtla.github.io/mlforecast/).
[documentation](https://nixtlaverse.nixtla.io/mlforecast/).

### Data setup
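
For context, the library expects series in "long" format, with one row per series/timestamp pair and the `unique_id`, `ds` and `y` columns used throughout these notebooks. A minimal sketch of such a frame (the values are illustrative):

```python
import pandas as pd

# Long-format input: one row per (series, timestamp) pair.
df = pd.DataFrame({
    'unique_id': ['H1'] * 4 + ['H2'] * 4,               # series identifier
    'ds': [1, 2, 3, 4] * 2,                              # timestamp (integer or datetime)
    'y': [10.0, 12.0, 13.0, 12.5, 5.0, 5.5, 6.0, 5.8],  # target values
})
```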

@@ -165,7 +165,7 @@ models = [
### Forecast object

Now instantiate an
[`MLForecast`](https://Nixtla.github.io/mlforecast/forecast.html#mlforecast)
[`MLForecast`](https://nixtlaverse.nixtla.io/mlforecast/forecast#class-mlforecast)
object with the models and the features that you want to use. The
features can be lags, transformations on the lags and date features. You
can also define transformations to apply to the target before fitting,
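
As a hedged illustration of that constructor and the subsequent fit/predict calls (the lag choices, date features, frequency and horizon here are arbitrary; `Differences` is the target transform used elsewhere in this PR, and the `h` argument follows the library's documented interface):

```python
import lightgbm as lgb
from mlforecast import MLForecast
from mlforecast.target_transforms import Differences

fcst = MLForecast(
    models=[lgb.LGBMRegressor(verbosity=-1)],  # any regressor with fit/predict works
    freq='D',                                  # frequency of the series timestamps
    lags=[1, 7, 14],                           # raw lags of the target used as features
    date_features=['dayofweek', 'month'],      # calendar features derived from ds
    target_transforms=[Differences([1])],      # applied before fitting, restored when predicting
)
fcst.fit(df)                 # df is the long-format frame from the data setup step
preds = fcst.predict(h=14)   # one forecast row per series and future timestamp
```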
26 changes: 13 additions & 13 deletions nbs/docs/how-to-guides/cross_validation.ipynb
@@ -17,7 +17,7 @@
"\n",
"## Prerequesites\n",
"\n",
"This tutorial assumes basic familiarity with `MLForecast`. For a minimal example visit the [Quick Start](quick_start_local.html) \n",
"This tutorial assumes basic familiarity with `MLForecast`. For a minimal example visit the [Quick Start](https://nixtlaverse.nixtla.io/mlforecast/docs/getting-started/quick_start_local)\n",
":::"
]
},
@@ -36,7 +36,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"[MLForecast](https://nixtla.github.io/mlforecast/) has an implementation of time series cross-validation that is fast and easy to use. This implementation makes cross-validation a efficient operation, which makes it less time-consuming. In this notebook, we'll use it on a subset of the [M4 Competition](https://www.sciencedirect.com/science/article/pii/S0169207019301128) hourly dataset. "
"[MLForecast](https://nixtlaverse.nixtla.io/mlforecast/) has an implementation of time series cross-validation that is fast and easy to use. This implementation makes cross-validation a efficient operation, which makes it less time-consuming. In this notebook, we'll use it on a subset of the [M4 Competition](https://www.sciencedirect.com/science/article/pii/S0169207019301128) hourly dataset. "
]
},
{
@@ -72,7 +72,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We assume that you have `MLForecast` already installed. If not, check this guide for instructions on [how to install MLForecast](../getting-started/install.html)."
"We assume that you have `MLForecast` already installed. If not, check this guide for instructions on [how to install MLForecast](https://nixtlaverse.nixtla.io/mlforecast/docs/getting-started/install)"
]
},
{
@@ -88,11 +88,11 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd \n",
"import pandas as pd\n",
"\n",
"from utilsforecast.plotting import plot_series\n",
"\n",
"from mlforecast import MLForecast # required to instantiate MLForecast object and use cross-validation method "
"from mlforecast import MLForecast # required to instantiate MLForecast object and use cross-validation method"
]
},
{
@@ -190,8 +190,8 @@
}
],
"source": [
"Y_df = pd.read_csv('https://datasets-nixtla.s3.amazonaws.com/m4-hourly.csv') # load the data \n",
"Y_df.head() "
"Y_df = pd.read_csv('https://datasets-nixtla.s3.amazonaws.com/m4-hourly.csv') # load the data\n",
"Y_df.head()"
]
},
{
@@ -252,9 +252,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"For this example, we'll use LightGBM. We first need to import it and then we need to instantiate a new [MLForecast](../../forecast.html#mlforecast) object. \n",
"For this example, we'll use LightGBM. We first need to import it and then we need to instantiate a new [MLForecast](https://nixtlaverse.nixtla.io/mlforecast/forecast#class-mlforecast) object. \n",
"\n",
"In this example, we are only using `differences` and `lags` to produce features. See [the full documentation](https://nixtla.github.io/mlforecast) to see all available features.\n",
"In this example, we are only using `differences` and `lags` to produce features. See [the full documentation](https://nixtlaverse.nixtla.io/mlforecast) to see all available features.\n",
"\n",
"Any settings are passed into the constructor. Then you call its `fit` method and pass in the historical data frame `df`. "
]
@@ -278,8 +278,8 @@
"models = [lgb.LGBMRegressor(verbosity=-1)]\n",
"\n",
"mlf = MLForecast(\n",
" models=models, \n",
" freq=1,# our series have integer timestamps, so we'll just add 1 in every timeste, \n",
" models=models,\n",
" freq=1,# our series have integer timestamps, so we'll just add 1 in every timeste,\n",
" target_transforms=[Differences([24])],\n",
" lags=range(1, 25)\n",
")"
@@ -296,7 +296,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the `MLForecast` object has been instantiated, we can use the [cross_validation method](../../forecast.html#mlforecast.cross_validation)."
"Once the `MLForecast` object has been instantiated, we can use the [cross_validation method](https://nixtlaverse.nixtla.io/mlforecast/forecast#method-cross-validation)"
]
},
{
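
As a hedged illustration of that call (the argument names follow the library's documented interface; the window count and horizon are arbitrary choices, and `mlf`/`Y_df` come from the cells above):

```python
# Rolling-origin cross-validation: 4 folds, each forecasting 24 steps ahead.
cv_df = mlf.cross_validation(
    df=Y_df,      # long-format historical data loaded above
    n_windows=4,  # number of validation folds
    h=24,         # forecast horizon per fold
)
# The result has one row per series, fold cutoff and forecasted timestamp,
# with the actual value and one column per model.
cv_df.head()
```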
@@ -504,7 +504,7 @@
"outputs": [],
"source": [
"from utilsforecast.evaluation import evaluate\n",
"from utilsforecast.losses import rmse "
"from utilsforecast.losses import rmse"
]
},
{
48 changes: 24 additions & 24 deletions nbs/docs/how-to-guides/prediction_intervals.ipynb
@@ -19,7 +19,7 @@
"\n",
"## Prerequesites\n",
"\n",
"This tutorial assumes basic familiarity with MLForecast. For a minimal example visit the [Quick Start](https://nixtla.github.io/mlforecast/docs/quick_start_local.html) \n",
"This tutorial assumes basic familiarity with MLForecast. For a minimal example visit the [Quick Start](https://nixtlaverse.nixtla.io/mlforecast/docs/getting-started/quick_start_local)\n",
":::"
]
},
@@ -323,7 +323,7 @@
"metadata": {},
"outputs": [],
"source": [
"n_series = 8 \n",
"n_series = 8\n",
"uids = train['unique_id'].unique()[:n_series] # select first n_series of the dataset\n",
"train = train.query('unique_id in @uids')\n",
"test = test.query('unique_id in @uids')"
@@ -415,7 +415,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Create a list of models and instantiation parameters \n",
"# Create a list of models and instantiation parameters\n",
"models = [\n",
" KNeighborsRegressor(),\n",
" Lasso(),\n",
@@ -765,11 +765,11 @@
"outputs": [],
"source": [
"fig = plot_series(\n",
" train, \n",
" test, \n",
" plot_random=False, \n",
" models=['KNeighborsRegressor'], \n",
" level=levels, \n",
" train,\n",
" test,\n",
" plot_random=False,\n",
" models=['KNeighborsRegressor'],\n",
" level=levels,\n",
" max_insample_length=48\n",
")"
]
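
For context on the `level=levels` argument used in these plots: the intervals come from conformal prediction, which is configured when fitting. A rough sketch (the `PredictionIntervals` import path and arguments, and the `fcst`/`train`/`levels` names, are assumptions based on the library's documented interface and the surrounding notebook):

```python
from mlforecast.utils import PredictionIntervals

levels = [50, 80, 95]  # nominal coverage percentages

# Calibrate conformal intervals with cross-validation windows at fit time...
fcst.fit(train, prediction_intervals=PredictionIntervals(n_windows=10, h=48))

# ...then request them at prediction time; each model gets lower/upper
# interval columns for every requested level.
forecasts = fcst.predict(h=48, level=levels)
```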
@@ -809,11 +809,11 @@
"outputs": [],
"source": [
"fig = plot_series(\n",
" train, \n",
" test, \n",
" plot_random=False, \n",
" train,\n",
" test,\n",
" plot_random=False,\n",
" models=['Lasso'],\n",
" level=levels, \n",
" level=levels,\n",
" max_insample_length=48\n",
")"
]
@@ -853,11 +853,11 @@
"outputs": [],
"source": [
"fig = plot_series(\n",
" train, \n",
" test, \n",
" plot_random=False, \n",
" train,\n",
" test,\n",
" plot_random=False,\n",
" models=['LinearRegression'],\n",
" level=levels, \n",
" level=levels,\n",
" max_insample_length=48\n",
")"
]
@@ -897,11 +897,11 @@
"outputs": [],
"source": [
"fig = plot_series(\n",
" train, \n",
" test, \n",
" plot_random=False, \n",
" train,\n",
" test,\n",
" plot_random=False,\n",
" models=['MLPRegressor'],\n",
" level=levels, \n",
" level=levels,\n",
" max_insample_length=48\n",
")"
]
@@ -941,11 +941,11 @@
"outputs": [],
"source": [
"fig = plot_series(\n",
" train, \n",
" test, \n",
" plot_random=False, \n",
" train,\n",
" test,\n",
" plot_random=False,\n",
" models=['Ridge'],\n",
" level=levels, \n",
" level=levels,\n",
" max_insample_length=48\n",
")"
]
6 changes: 3 additions & 3 deletions nbs/docs/how-to-guides/transfer_learning.ipynb
@@ -153,7 +153,7 @@
"- `differences`: Differences to take of the target before computing the features. These are restored at the forecasting step.\n",
"- `lags`: Lags of the target to use as features.\n",
"\n",
"In this example, we are only using `differences` and `lags` to produce features. See [the full documentation](https://nixtla.github.io/mlforecast/forecast.html) to see all available features.\n",
"In this example, we are only using `differences` and `lags` to produce features. See [the full documentation](https://nixtlaverse.nixtla.io/mlforecast/forecast) to see all available features.\n",
"\n",
"Any settings are passed into the constructor. Then you call its `fit` method and pass in the historical data frame `Y_df_M3`. "
]
@@ -165,7 +165,7 @@
"outputs": [],
"source": [
"fcst = MLForecast(\n",
" models=models, \n",
" models=models,\n",
" lags=range(1, 13),\n",
" freq='MS',\n",
" target_transforms=[Differences([1, 12])],\n",
@@ -190,7 +190,7 @@
"source": [
"Y_df = pd.read_csv('https://datasets-nixtla.s3.amazonaws.com/air-passengers.csv', parse_dates=['ds'])\n",
"\n",
"# We define the train df. \n",
"# We define the train df.\n",
"Y_train_df = Y_df[Y_df.ds<='1959-12-31'] # 132 train\n",
"Y_test_df = Y_df[Y_df.ds>'1959-12-31'] # 12 test"
]
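
The transfer-learning step itself is not visible in this hunk; roughly, the models fitted on the M3 series are reused to forecast the new series by passing it at prediction time. A sketch (the `new_df` argument name is an assumption based on the library's documented interface; `fcst`, `Y_df_M3` and `Y_train_df` come from the cells above):

```python
# Pretrain on the M3 monthly dataset prepared earlier.
fcst.fit(Y_df_M3)

# Forecast 12 months of Air Passengers: features are computed from Y_train_df
# instead of the series the models were trained on.
Y_hat_df = fcst.predict(h=12, new_df=Y_train_df)
```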