
CFA Scenarios Azure HPC Acceleration

Overview | Quick Start | Project Admins | Fine Text and Disclaimers

Overview

Important

This repository is now deprecated. Please look around, but we advise against building anything on top of this code.

This repository is responsible for creating, visualizing, launching, and standardizing DynODE experiments. An experiment is the broadest categorization of an effort or goal; e.g., fitting a particular time period in a specific way is an experiment.

When a user wants to launch an experiment, the individual run is called a job. Jobs are broken down into a series of tasks, which represent the smallest chunk of work handled by an individual Azure VM.

Quick Start

After installing scenarios-hpc-azure into your poetry environment, you should have access to the scripts listed in the [tool.poetry.scripts] section of pyproject.toml.

Currently, the two supported scripts are create_experiment and launch_experiment.

These scripts aid you in creating and launching your experiment and are used as command-line tools. Use the -h flag to get a brief description of each script's expected input parameters.
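For example, to print each script's expected parameters from within your poetry environment:

poetry run create_experiment -h
poetry run launch_experiment -h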

Technical Details

Disclaimer: This library is not a catch-all way to parallelize DynODE projects onto any HPC environment. On the backend, this project relies on a library called cfa-azure, which acts as a wrapper around the base Azure SDK. As a result of this dependency, we do not offer any guarantees beyond those offered by cfa-azure regarding its ability to work out of the box with other Azure implementations. Every high-performance computing system is different; even from one Azure setup to the next, authentication protocols and systems may differ. This repository is public in the interest of transparency about how our system is orchestrated, but it may not be immediately useful to others. With all that said, let us describe what it means for a DynODE model to be an experiment and what launching an experiment looks like.

File structure

A DynODE experiment is the broadest categorization of an effort or goal. If you wish to fit a mechanistic compartmental model built with DynODE to a specific time period, you may call that an experiment. An experiment must be nested entirely in an exp/ folder, where the name of the experiment matches the name of its directory within exp/. Let's create an example experiment called fitting_covid; our experiment-specific files would then live inside the repository under exp/fitting_covid. Here is an example directory structure.

exp/
├─ fitting_covid/
│  ├─ postprocessing_scripts/
│  │  ├─ combine_state_outputs.py
│  ├─ states/
│  │  ├─ CA/
│  │  │  ├─ config_global.json (readonly)
│  │  │  ├─ config_inference.json (readonly)
│  │  ├─ NY/
│  │  │  ├─ config_global.json (readonly)
│  │  │  ├─ config_inference.json (readonly)
│  │  ├─ TX/
│  │  │  ├─ config_global.json (readonly)
│  │  │  ├─ config_inference.json (readonly)
│  ├─ template_configs/
│  │  ├─ config_global.json
│  │  ├─ config_inference.json
│  ├─ misc_experiment_utils.py
│  ├─ run_task.py
secrets/
├─ azure_authentication_config.toml
Dockerfile
poetry.lock
pyproject.toml

The actual modeling, reading, writing, and processing are all kicked off by the run_task.py script.

Each directory within states/ will be launched as a single Azure task, all under the same Azure job.

Configuration files and directories within states/ are programmatically generated by the create_experiment script, using the files found within exp/fitting_covid/template_configs as a base. All JSON files within states/ are read-only to avoid undocumented one-off changes to individual states, which can be exceedingly hard to track down. Any state-specific changes should be written in code within a committed experiment_creator script (see the sketch below).
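As a rough illustration, such a script might look like the following sketch. The override table and the POP_SIZE key are hypothetical, not part of this package; the point is that per-state deviations from the templates live in versioned code.

import json
import stat
from pathlib import Path

TEMPLATE_DIR = Path("exp/fitting_covid/template_configs")
STATES_DIR = Path("exp/fitting_covid/states")

# State-specific changes live here, in committed code, rather than as
# undocumented hand edits to the generated JSON files.
STATE_OVERRIDES = {
    "CA": {"POP_SIZE": 39_000_000},
    "NY": {"POP_SIZE": 19_500_000},
    "TX": {"POP_SIZE": 30_000_000},
}

for state, overrides in STATE_OVERRIDES.items():
    state_dir = STATES_DIR / state
    state_dir.mkdir(parents=True, exist_ok=True)
    for template in TEMPLATE_DIR.glob("*.json"):
        config = {**json.loads(template.read_text()), **overrides}
        target = state_dir / template.name
        if target.exists():  # re-runs must first clear the read-only bit
            target.chmod(stat.S_IREAD | stat.S_IWRITE)
        target.write_text(json.dumps(config, indent=2))
        # Mark the generated config read-only to discourage one-off edits.
        target.chmod(stat.S_IREAD | stat.S_IRGRP | stat.S_IROTH)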

Files within postprocessing_scripts/ run after every state has finished modeling. Their responsibilities may include visualization, data collation, or writing to other databases.
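For instance, a collation script in the spirit of combine_state_outputs.py might look like the following sketch, assuming each task writes a hypothetical posteriors.csv and that pandas is available:

from pathlib import Path

import pandas as pd

STATES_DIR = Path("exp/fitting_covid/states")

frames = []
for state_dir in sorted(p for p in STATES_DIR.iterdir() if p.is_dir()):
    output_file = state_dir / "posteriors.csv"  # hypothetical task output
    if not output_file.exists():
        continue  # skip states whose task produced no output
    df = pd.read_csv(output_file)
    df["state"] = state_dir.name  # tag each row with its state of origin
    frames.append(df)

# Collate every state's output into one table for plotting or for
# writing to a downstream database.
if frames:
    pd.concat(frames, ignore_index=True).to_csv(
        "combined_posteriors.csv", index=False
    )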

The azure_authentication_config.toml within the secrets/ directory provides all the information necessary to authenticate a user to the Azure system, whether by managed identity, service principal, or user identity. The implementation details of this system are internal to cfa-azure.

Lastly, the Dockerfile, poetry.lock, and pyproject.toml containerize the experiment and manage its Python version and dependencies. This package is often included as a developer dependency within pyproject.toml: it is needed to launch the job, but unless jobs launch other jobs (which they should not, except in very special circumstances), this code is not needed within an Azure VM.

Execution

graph LR
    subgraph create_experiment["(1) create_experiment.py"]
        copy["Copy template files"]
    end

    create_experiment --> launch_experiment

    subgraph launch_experiment["(2) launch_experiment.py"]
    direction TB
        containerize["Containerize<br>Python"] --> upload_image
        upload_image["Upload image<br>to ACR"] --> upload_experiment
        upload_experiment["Upload experiment<br>to blob storage"] --> spin
        spin["Start Azure<br>Batch Pool"]
    end

    launch_experiment --> run_task

    subgraph run_task["(3) run_task.py"]
        execute["Execute job<br>for the config state"]
    end

The intended goal of the create_experiment script is to provide an easy way for users to programmatically copy their template configuration files to each of the states they hope to model. The launch_experiment script performs a number of important tasks needed to launch onto Azure. First, it containerizes the Python version, dependencies, and any other files the user wants in their Docker image. Then it uploads that image to the Azure Container Registry (ACR), where the VMs can access it. Next, it uploads the experiment directory to a particular location in blob storage. Files are uploaded rather than baked into the image to avoid rebuilding and re-uploading the image every time a small change is made to run_task.py.

Experiments are uploaded to a location in blob storage based on the experiment name and the ID of the job being launched. In this case, if the user wants to call their job fitting_job_1, the exp/fitting_covid directory would be uploaded to input_blob/exp/fitting_covid/fitting_job_1. A user may thus later return to the input_blob/exp/fitting_covid directory and find all files relevant to any of their past jobs.

Once the launch_experiment script has uploaded the image to the ACR and the exp/fitting_covid directory to the input blob storage, it sends instructions to Azure Batch to spin up a pool of VMs with a specified number of CPUs (default 4) and to create one task per state within exp/fitting_covid/states/, passing that state's name to run_task.py via command-line arguments. From there, your run_task.py script takes over within the Azure VM and does the work for that state.
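A minimal run_task.py entry point might look like the sketch below. The --state flag name is an assumption for illustration; the package only guarantees that the state's name arrives via command-line arguments.

import argparse
import json
from pathlib import Path


def main() -> None:
    parser = argparse.ArgumentParser(description="Model a single state.")
    parser.add_argument("--state", required=True, help="state to model, e.g. CA")
    args = parser.parse_args()

    # Each task reads the read-only configs generated for its state.
    state_dir = Path("exp/fitting_covid/states") / args.state
    global_config = json.loads((state_dir / "config_global.json").read_text())
    inference_config = json.loads((state_dir / "config_inference.json").read_text())

    # ... build and fit the DynODE model for this state here, then write
    # its outputs somewhere the postprocessing scripts can find them ...
    print(
        f"fitting {args.state}: {len(global_config)} global and "
        f"{len(inference_config)} inference parameters loaded"
    )


if __name__ == "__main__":
    main()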

Project Admins

Thomas Hladish, Lead Data Scientist, utx5@cdc.gov, CDC/IOD/ORR/CFA

Ariel Shurygin, Data Scientist, uva5@cdc.gov, CDC/IOD/ORR/CFA

Ed Baskerville, Data Scientist, ah20@cdc.gov, CDC/IOD/ORR/CFA (Contract)

General Disclaimer

This repository was created for use by CDC programs to collaborate on public health related projects in support of the CDC mission. GitHub is not hosted by the CDC, but is a third party website used by CDC and its partners to share information and collaborate on software. CDC use of GitHub does not imply an endorsement of any one particular service, product, or enterprise.

Public Domain Standard Notice

This repository constitutes a work of the United States Government and is not subject to domestic copyright protection under 17 USC § 105. This repository is in the public domain within the United States, and copyright and related rights in the work worldwide are waived through the CC0 1.0 Universal public domain dedication. All contributions to this repository will be released under the CC0 dedication. By submitting a pull request you are agreeing to comply with this waiver of copyright interest.

License Standard Notice

This repository is licensed under ASL v2 or later.

The source code in this repository is free: you can redistribute it and/or modify it under the terms of the Apache Software License version 2, or (at your option) any later version.

The source code in this repository is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the Apache Software License for more details.

You should have received a copy of the Apache Software License along with this program. If not, see http://www.apache.org/licenses/LICENSE-2.0.html

Source code forked from other open source projects will inherit those projects' licenses.

Privacy Standard Notice

This repository contains only non-sensitive, publicly available data and information. All material and community participation is covered by the Disclaimer and Code of Conduct. For more information about CDC's privacy policy, please visit http://www.cdc.gov/other/privacy.html.

Contributing Standard Notice

Anyone is encouraged to contribute to the repository by forking and submitting a pull request. (If you are new to GitHub, you might start with a basic tutorial.) By contributing to this project, you grant a world-wide, royalty-free, perpetual, irrevocable, non-exclusive, transferable license to all users under the terms of the Apache Software License v2 or later.

All comments, messages, pull requests, and other submissions received through CDC including this GitHub page may be subject to applicable federal law, including but not limited to the Federal Records Act, and may be archived. Learn more at http://www.cdc.gov/other/privacy.html.

Records Management Standard Notice

This repository is not a source of government records but is a copy to increase collaboration and collaborative potential. All government records will be published through the CDC web site.

Additional Standard Notices

Please refer to CDC's Template Repository for more information about contributing to this repository, public domain notices and disclaimers, and code of conduct.
