---
Repository: https://github.com/chirindaopensource/between_transatlantic_monetary_disturbances
Owner: 2025 Craig Chirinda (Open Source Projects)
This repository contains an independent, professional-grade Python implementation of the research methodology from the 2025 paper entitled "In-between Transatlantic (Monetary) Disturbances" by:
- Santiago Camara
- Jeanne Aublin
The project provides a complete, end-to-end computational framework for identifying source-dependent monetary policy shocks and analyzing their international spillovers. It delivers a modular, auditable, and extensible pipeline that replicates the paper's entire workflow: from rigorous high-frequency data processing and validation, through the sophisticated rotational decomposition for shock identification, to the estimation of BVAR and Local Projection models and a comprehensive suite of robustness checks.
- Introduction
- Theoretical Background
- Features
- Methodology Implemented
- Core Components (Notebook Structure)
- Key Callable: execute_full_study_pipeline
- Prerequisites
- Installation
- Input Data Structure
- Usage
- Output Structure
- Project Structure
- Customization
- Contributing
- Recommended Extensions
- License
- Citation
- Acknowledgments
This project provides a Python implementation of the methodologies presented in the 2025 paper "In-between Transatlantic (Monetary) Disturbances." The core of this repository is the Jupyter notebook `between_transatlantic_monetary_disturbances_draft.ipynb`, which contains a comprehensive suite of functions to replicate the paper's findings, from initial data validation to the final generation and analysis of impulse response functions and robustness tests.
The paper addresses a key question in international macroeconomics: How do monetary policy shocks from major economic blocs (the U.S. and the Euro Area) propagate to a smaller, open economy (Canada), and do the transmission channels differ? This codebase operationalizes the paper's advanced approach, allowing users to:
- Rigorously validate and cleanse high-frequency financial data and low-frequency macroeconomic data.
- Identify "pure" monetary policy shocks, purged of central bank information effects, using a high-frequency identification strategy with sign restrictions.
- Estimate the dynamic effects of these shocks using both Bayesian Vector Autoregressions (BVAR) and Local Projections (LP).
- Conduct a full suite of robustness checks to validate the stability of the findings across different identification schemes, sample periods, and model specifications.
- Systematically investigate specific transmission channels (e.g., trade, financial) by running augmented models.
The implemented methods are grounded in modern time-series econometrics and international finance.
1. High-Frequency Identification with Sign Restrictions:
Standard event studies can be confounded by the "information effect," whereby a central bank's policy action reveals private information about the economic outlook. To address this, the paper uses the methodology of Jarociński & Karadi (2020): raw high-frequency surprises in interest rates and equity prices are decomposed into a pure monetary policy shock and a central bank information shock. The decomposition is achieved by finding all rotations of an initial Cholesky decomposition that satisfy a set of theoretical sign restrictions on the impulse responses (e.g., a contractionary MP shock must raise rates and lower equity prices, while an information shock moves both in the same direction).
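The rotational search can be illustrated with a minimal sketch. This is a simplified two-variable version with simulated surprises, not the repository's vectorized implementation; all numbers and the grid density are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated raw surprises: columns = [interest-rate surprise, equity surprise]
surprises = rng.standard_normal((500, 2)) @ np.array([[1.0, -0.6], [0.0, 0.8]])

# Lower-triangular Cholesky factor of the surprise covariance matrix
C = np.linalg.cholesky(np.cov(surprises, rowvar=False))

def givens(theta: float) -> np.ndarray:
    """2-D rotation matrix; post-multiplying C by it spans all factorizations."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

admissible = []
for theta in np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False):
    A = C @ givens(theta)  # candidate impact matrix
    # Sign restrictions: a contractionary MP shock (col 0) raises rates and
    # lowers equity; a positive information shock (col 1) raises both.
    if A[0, 0] > 0 and A[1, 0] < 0 and A[0, 1] > 0 and A[1, 1] > 0:
        admissible.append(A)

# Recover structural shocks under, e.g., one admissible rotation
A_med = admissible[len(admissible) // 2]
shocks = surprises @ np.linalg.inv(A_med).T
```

Every admissible rotation implies a different shock series; the robustness suite integrates over this set rather than committing to a single point in it.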
2. Bayesian Vector Autoregression (BVAR):
The primary workhorse model is a VAR-X, where the identified shocks are treated as exogenous variables. For a vector of endogenous variables $Y_t$, the model takes the standard form

$$ Y_t = c + \sum_{p=1}^{P} A_p Y_{t-p} + \Gamma s_t + u_t, \qquad u_t \sim \mathcal{N}(0, \Sigma), $$

where $s_t$ stacks the identified U.S. and Euro Area shocks. The model is estimated by Bayesian methods with a Normal-Wishart prior via Gibbs sampling.
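The mechanics of a Bayesian VAR draw can be sketched as follows. This is a deliberately simplified version (a diffuse prior on a simulated VAR(1)); the repository's sampler uses a full Normal-Wishart prior with convergence diagnostics, and all dimensions here are illustrative:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)

# Simulate a stationary 3-variable VAR(1) as stand-in data
n, T = 3, 400
A_true = 0.5 * np.eye(n)
y = np.zeros((T, n))
for t in range(1, T):
    y[t] = y[t - 1] @ A_true.T + rng.standard_normal(n)

# OLS quantities for Y = X B + U, with X = [1, y_{t-1}]
Y = y[1:]
X = np.hstack([np.ones((T - 1, 1)), y[:-1]])
k = X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y
S = (Y - X @ B_hat).T @ (Y - X @ B_hat)

# One posterior draw (diffuse prior):
#   Sigma | data         ~ InvWishart(S, T - 1 - k)
#   vec(B) | Sigma, data ~ N(vec(B_hat), Sigma ⊗ (X'X)^{-1})
Sigma_draw = invwishart.rvs(df=T - 1 - k, scale=S, random_state=rng)
chol = np.linalg.cholesky(np.kron(Sigma_draw, XtX_inv))
B_draw = (B_hat.flatten(order="F") + chol @ rng.standard_normal(n * k)).reshape(k, n, order="F")
```

Repeating the draw step many times yields a posterior sample of coefficient matrices, from which impulse responses and credible bands are computed.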
3. Local Projections (LP):
As a robustness check, the impulse responses are also estimated using the Local Projection method of Jordà (2005). This involves running a separate regression for each forecast horizon $h$:

$$ y_{k, t+h} = \beta_h^{Shock} s_t^{Shock} + \text{controls}_t + \epsilon_{t+h} $$

The sequence of estimated coefficients $\{\hat{\beta}_h^{Shock}\}_{h=0}^{H}$ forms the impulse response function. This method is robust to misspecification but less efficient than a VAR. Inference requires HAC (Newey-West) standard errors.
The provided Jupyter notebook (`between_transatlantic_monetary_disturbances_draft.ipynb`) implements the full research pipeline, including:
- Modular, Multi-Phase Architecture: The entire pipeline is broken down into 17 distinct, modular tasks, from data validation to final robustness checks.
- Configuration-Driven Design: All methodological and computational parameters are managed in an external `config.yaml` file, allowing for easy customization without code changes.
- Professional-Grade Data Pipeline: A comprehensive validation, quality assessment, and cleansing suite for both high-frequency and low-frequency data, including robust handling of timezones and DST.
- High-Fidelity Shock Identification: A precise, vectorized implementation of the rotational-angle decomposition method.
- Robust BVAR Estimation: A complete Gibbs sampler for a BVAR with a Normal-Wishart prior, including intra-run convergence diagnostics.
- Complete Local Projections Estimator: A full implementation of the LP method with HAC-robust standard errors.
- Advanced Robustness Toolkit:
- A framework for testing alternative identification schemes (Poor Man's Sign Restriction).
- A parallelized framework for quantifying identification uncertainty by integrating over all admissible rotations.
- A framework for testing sensitivity to alternative sample periods (pre-GFC, pre-COVID).
- A framework for testing sensitivity to estimation choices (prior hyperparameters, lag length).
The core analytical steps directly implement the methodology from the paper:
- Data Validation & Preprocessing (Tasks 1-3): Ingests and rigorously validates all raw data and the `config.yaml` file, performs a deep data quality audit, and produces clean, analysis-ready data streams.
- Shock Identification (Tasks 4-6): Defines event windows, extracts high-frequency prices, calculates raw surprises, and performs the rotational decomposition to identify structural shocks.
- Model Preparation (Tasks 7-8): Aggregates the identified shocks to a monthly frequency and assembles the final, transformed dataset for econometric modeling.
- Estimation (Tasks 9-11): Sets up and estimates the baseline BVAR via Gibbs sampling and the Local Projections model via OLS with HAC errors.
- Results & Validation (Tasks 12-14): Calculates impulse response functions from the BVAR posterior and runs a full suite of in-sample and out-of-sample validation tests.
- Robustness Analysis (Tasks 16-17): Orchestrates the entire suite of robustness checks on the identification and estimation methods.
The `between_transatlantic_monetary_disturbances_draft.ipynb` notebook is structured as a logical pipeline with modular orchestrator functions for each of the major tasks. All functions are self-contained, fully documented with type hints and docstrings, and designed for professional-grade execution.
The central function in this project is `execute_full_study_pipeline`. It orchestrates the entire analytical workflow, providing a single entry point for running the baseline study and all associated robustness checks.
```python
def execute_full_study_pipeline(
    equity_tick_df: pd.DataFrame,
    rate_tick_df: pd.DataFrame,
    macro_df: pd.DataFrame,
    announcement_df: pd.DataFrame,
    target_market: str,
    study_config: Dict[str, Any],
    run_identification_robustness: bool = True,
    run_estimation_robustness: bool = True
) -> Dict[str, Any]:
    """
    Executes the entire research study, including the main analysis and all
    robustness checks.
    """
    # ... (implementation is in the notebook)
```
- Python 3.9+
- Core dependencies: `pandas`, `numpy`, `scipy`, `statsmodels`, `pyyaml`, `tqdm`, `joblib`, `pandas_market_calendars`.
1. Clone the repository:

   ```sh
   git clone https://github.com/chirindaopensource/between_transatlantic_monetary_disturbances.git
   cd between_transatlantic_monetary_disturbances
   ```

2. Create and activate a virtual environment (recommended):

   ```sh
   python -m venv venv
   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
   ```

3. Install Python dependencies:

   ```sh
   pip install pandas numpy scipy statsmodels pyyaml tqdm joblib pandas_market_calendars
   ```
The pipeline requires four `pandas.DataFrame`s and a configuration file as input. Mock data generation functions are provided in the main notebook to create valid examples for testing.

- `equity_tick_df` / `rate_tick_df`: Must contain columns `['timestamp_micros_utc', 'price', 'volume', 'type']`.
- `macro_df`: A long-format DataFrame with columns `['date', 'source_series_id', 'country', 'variable_name', 'value_raw']`.
- `announcement_df`: Must contain columns `['event_id', 'central_bank', 'announcement_date_local', 'announcement_time_local', 'local_timezone']`.
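For instance, minimal frames with the required columns can be built as follows; every value below is an illustrative placeholder, not real market data:

```python
import pandas as pd

# Illustrative tick data (the same schema applies to rate_tick_df)
equity_tick_df = pd.DataFrame({
    "timestamp_micros_utc": [1704045600000000, 1704045601000000],
    "price": [21950.5, 21951.0],
    "volume": [12, 4],
    "type": ["trade", "trade"],
})

# Long-format macro data, one observation per row
macro_df = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-31"]),
    "source_series_id": ["CAN_CPI_TOTAL"],
    "country": ["CAN"],
    "variable_name": ["cpi"],
    "value_raw": [158.3],
})

# Central bank announcement metadata in local time plus an IANA timezone
announcement_df = pd.DataFrame({
    "event_id": ["FOMC_2024_01"],
    "central_bank": ["FED"],
    "announcement_date_local": ["2024-01-31"],
    "announcement_time_local": ["14:00"],
    "local_timezone": ["America/New_York"],
})
```

The mock data generators in the notebook produce larger, internally consistent versions of these frames for end-to-end testing.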
The `between_transatlantic_monetary_disturbances_draft.ipynb` notebook provides a complete, step-by-step guide. The core workflow is:

1. Prepare Inputs: Load your four raw `pandas.DataFrame`s. Ensure the `config.yaml` file is present in the same directory.

2. Execute Pipeline: Call the grand orchestrator function.

   ```python
   # This single call runs the entire project.
   final_results = execute_full_study_pipeline(
       equity_tick_df=my_equity_data,
       rate_tick_df=my_rate_data,
       macro_df=my_macro_data,
       announcement_df=my_announcement_data,
       target_market='CAN',
       study_config=my_config_dict,
       run_identification_robustness=False,  # Set to True for the full analysis
       run_estimation_robustness=False
   )
   ```

3. Inspect Outputs: The returned `final_results` dictionary contains all generated artifacts, including intermediate data, final IRFs, and validation reports.
The `execute_full_study_pipeline` function returns a single, comprehensive dictionary containing all generated artifacts, structured by analytical phase. Key outputs include:

- `benchmark_run`: The results of the main analysis.
  - `phase_2_identification['structural_shocks']`: The identified monthly shock series.
  - `phase_3_model_prep['analysis_ready_df']`: The final dataset used for estimation.
  - `phase_5_results['bvar_irfs']`: The final impulse response functions from the BVAR.
  - `phase_5_results['model_validation_reports']`: The full suite of diagnostic reports.
- `identification_robustness_suite`: (If run) Contains the results of the PMSR, rotational uncertainty, and sub-sample analyses.
- `estimation_robustness_suite`: (If run) Contains the results of the prior, lag, and specification sensitivity analyses.
```
between_transatlantic_monetary_disturbances/
│
├── between_transatlantic_monetary_disturbances_draft.ipynb  # Main implementation notebook
├── config.yaml                                              # Master configuration file
├── requirements.txt                                         # Python package dependencies
├── LICENSE                                                  # MIT license file
└── README.md                                                # This documentation file
```
The pipeline is highly customizable via the `config.yaml` file. Users can modify all methodological parameters, such as BVAR lags, prior hyperparameters, MCMC settings, and window definitions, without altering the core Python code.
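For example, parameters can be loaded and overridden programmatically with `pyyaml`. The keys below are hypothetical stand-ins; consult the shipped `config.yaml` for the actual schema:

```python
import yaml

# A hypothetical excerpt mirroring the nested structure of config.yaml
cfg_text = """
bvar:
  lags: 12
  mcmc:
    draws: 2000
    burn_in: 500
local_projections:
  max_horizon: 24
"""

study_config = yaml.safe_load(cfg_text)
study_config["bvar"]["lags"] = 6  # override in memory; no file edits needed
```

The resulting dictionary can then be passed as the `study_config` argument of `execute_full_study_pipeline`, making sensitivity sweeps a matter of looping over in-memory overrides.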
Contributions are welcome. Please fork the repository, create a feature branch, and submit a pull request with a clear description of your changes. Adherence to PEP 8, type hinting, and comprehensive docstrings is required.
Future extensions could include:
- Visualization Module: Creating a function that takes the final IRF results and automatically generates publication-quality plots that replicate the figures in the paper.
- Automated Reporting: Building a module that uses the generated results and validation reports to automatically create a full PDF or HTML summary report of the findings.
- Alternative Priors: Implementing other BVAR prior structures, such as the Independent Normal-Wishart prior or stochastic volatility.
- Structural VAR Identification: Adding modules for other SVAR identification schemes, such as Cholesky or long-run restrictions, for comparison.
This project is licensed under the MIT License. See the `LICENSE` file for details.
If you use this code or the methodology in your research, please cite the original paper:
```bibtex
@article{camara2025inbetween,
  title={{In-between Transatlantic (Monetary) Disturbances}},
  author={Camara, Santiago and Aublin, Jeanne},
  journal={arXiv preprint arXiv:2509.13578},
  year={2025}
}
```
For the implementation itself, you may cite this repository:
Chirinda, C. (2025). A High-Resolution Framework for Analyzing Transatlantic Monetary Spillovers.
GitHub repository: https://github.com/chirindaopensource/between_transatlantic_monetary_disturbances
- Credit to Santiago Camara and Jeanne Aublin for their foundational research, which forms the entire basis for this computational replication.
- This project is built upon the exceptional tools provided by the open-source community. Sincere thanks to the developers of the scientific Python ecosystem, including Pandas, NumPy, SciPy, Statsmodels, and Joblib, whose work makes complex computational analysis accessible and robust.
---
This README was generated based on the structure and content of `between_transatlantic_monetary_disturbances_draft.ipynb` and follows best practices for research software documentation.