You can modify this README file with all the information that your team considers relevant for a technical audience that would like to understand your project or run it in the future.
Note that this file is written in Markdown. A reference is available here: https://www.markdownguide.org/basic-syntax/
Include the name, logo, and images referring to your project.
[Project ] is an interactive web dashboard to....
The problem detected was...
The proposed solution is valuable because...
Tested on Python 3.12.7 with the following packages:
- Jupyter v1.1.1
- Streamlit v1.46.1
- Seaborn v0.13.2
- Plotly v6.2.0
- Scikit-Learn v1.7.0
- shap v0.48.0
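For reference, a `requirements.txt` that pins these versions might look like the sketch below; the file shipped in the repository is the authoritative list.

```
jupyter==1.1.1
streamlit==1.46.1
seaborn==0.13.2
plotly==6.2.0
scikit-learn==1.7.0
shap==0.48.0
```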
Run the commands below in a terminal to configure the project and install the package dependencies for the first time.
If you are using a Mac, you may need to install Xcode first. Check the official Streamlit documentation here.
- Create the environment with `python -m venv env`
- Activate the virtual environment for Python:
  - If using Mac or Linux, type the command: `source env/bin/activate`
  - If using Windows:
    - First, set the Default Terminal Profile to CMD Terminal
    - Then, type in the CMD terminal: `.\env\Scripts\activate.bat`
- Make sure that your terminal is in the environment (`env`), not in the global Python installation
- Install the required packages with `pip install -r ./requirements.txt`
- Check that everything is OK by running `streamlit hello`
- Stop the demo app by pressing Ctrl+C in the terminal
To run the dashboard, execute the following command:

> `streamlit run Dashboard.py`

If the command above fails, use:

> `python -m streamlit run Dashboard.py`
The application expects a pre-trained model file inside the folder `assets/`. The first time that you execute the application, it will show an error saying that such a file does not exist. Therefore, you need to execute the notebook inside the folder `jupyter-notebook/` to create the pre-trained model.
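For illustration only, the final cell of such a notebook might train an estimator and export it with joblib; the estimator type, the feature name, and the file name `assets/model.pkl` below are assumptions, since the actual choices live in the project's notebook:

```python
# Hypothetical final notebook cell: train and export the model artifact.
# The real notebook defines the actual features, estimator, and file name.
from pathlib import Path

import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data; the notebook would build this from the project's dataset.
X = pd.DataFrame({"example_feature": [0.1, 0.4, 0.8, 1.2]})
y = [0, 0, 1, 1]

model = RandomForestClassifier(random_state=42).fit(X, y)

# Save the fitted estimator where the Streamlit app expects to find it.
Path("assets").mkdir(exist_ok=True)
joblib.dump(model, Path("assets") / "model.pkl")
```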
This logic resembles the expected pipeline: the Jupyter notebooks are used to iterate on the data modeling until a satisfactory trained model is produced, and the Streamlit scripts are only in charge of rendering the user-facing interface that generates predictions for new data. In practice, the data science pipeline is completely independent from the web dashboard, and the two are connected only through the pre-trained model.
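As a minimal sketch of the dashboard side of this handoff (again, the artifact path `assets/model.pkl` and the feature name are illustrative assumptions, not the project's actual names), `Dashboard.py` might load the exported model once and reuse it for every prediction:

```python
# Hypothetical loading/prediction logic for Dashboard.py; file names and
# features are placeholders, not the project's real ones.
from pathlib import Path

import joblib
import pandas as pd
import streamlit as st

MODEL_PATH = Path("assets") / "model.pkl"  # assumed artifact location

@st.cache_resource  # load the pre-trained model once per server process
def load_model():
    if not MODEL_PATH.exists():
        st.error(
            f"{MODEL_PATH} not found. Run the notebook in jupyter-notebook/ "
            "to train and export the model first."
        )
        st.stop()
    return joblib.load(MODEL_PATH)

model = load_model()

# Collect user input and generate a prediction for new data.
value = st.number_input("Example feature", value=0.0)  # hypothetical feature
if st.button("Predict"):
    prediction = model.predict(pd.DataFrame({"example_feature": [value]}))
    st.write("Prediction:", prediction[0])
```

Caching the loader with `st.cache_resource` avoids re-reading the model file from disk on every rerun of the script, and failing early with a clear message reproduces the "file does not exist" behaviour described above.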
Add the project's authors, contact information, and links to their websites or portfolios.