
Commit 298265e

Merge pull request #53 from omics-datascience/2.1.4
2.1.4
2 parents: a2f4f9e + 1584e4f

26 files changed: +178649 −733 lines

.github/workflows/prod-env-wf.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -2,7 +2,7 @@ name: Prod enviroment workflow to build and push docker image
 on:
   push:
     branches:
-      - "prod"
+      - "main"
 jobs:
   docker-modulector:
     runs-on: ubuntu-latest
```

.github/workflows/prod-pr-wf.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -2,7 +2,7 @@ name: Check if version exist on docker registry
 on:
   pull_request:
     branches:
-      - "prod"
+      - "main"
 jobs:
   version-check-repo:
     runs-on: ubuntu-latest
```

.gitignore

Lines changed: 1 addition & 0 deletions

```diff
@@ -149,3 +149,4 @@ modulector/files/EPIC-8v2-0_A1.csv
 modulector/files/mirDIP_Unidirectional_search_v.5.txt
 *.sql.gz
 modulector/files/tmp_db.csv
+docker-compose.mauri_dev.yml
```

CONTRIBUTING.md

Lines changed: 4 additions & 4 deletions

```diff
@@ -47,19 +47,19 @@ The entire contributing process consists in the following steps:
 
 ## Workflow
 
-We use gitlab environment git workflow. The default branch is `dev` and the publishing branch is `prod`. The working branches are created from `dev` and must respect the following steps and actions:
+We use gitlab environment git workflow. The default branch is `dev` and the publishing branch is `main`. The working branches are created from `dev` and must respect the following steps and actions:
 
 1. A new branch is created from `dev`.
 1. After finish working with it, a PR to `dev` must be created.
 1. Automatic Action/Workflow for PR is executed.
 1. The new branch is merged to `dev`.
 1. Automatic Action/Workflow for _Push_ events into `dev` is executed.
-1. When is ready to publish a new version of `dev`, a PR to `prod` is created.
+1. When is ready to publish a new version of `dev`, a PR to `main` is created.
 1. These Action/Workflow are executed:
    1. PR.
    1. Version checker (to avoid overwrite an existing image on Docker Hub repository).
-1. `dev` is merged into `prod`.
-1. Automatic Action/Workflow for _Push_ events into `prod` is executed to build a new Docker image for Modulector and publish it.
+1. `dev` is merged into `main`.
+1. Automatic Action/Workflow for _Push_ events into `main` is executed to build a new Docker image for Modulector and publish it.
 
 
 [**More information**](https://docs.google.com/presentation/d/1c1PXM89HLXJyF-zHAEpW_bcxb0iE_Fv2XEpEXYV2Tj4/edit?usp=sharing)
```
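The "Version checker" step in the CONTRIBUTING.md workflow above exists to avoid overwriting an existing image tag on Docker Hub. A minimal sketch of such a check, assuming the public Docker Hub tags API; the function names and the idea of calling this from the workflow are illustrative, not taken from the repository:

```python
import json
import urllib.request


def tag_exists(existing_tags: list[str], version: str) -> bool:
    """Return True if `version` is already published (the PR check should fail)."""
    return version in existing_tags


def fetch_tags(namespace: str, repository: str) -> list[str]:
    """Fetch published tag names from the public Docker Hub API.
    The namespace/repository values are supplied by the caller."""
    url = f"https://hub.docker.com/v2/repositories/{namespace}/{repository}/tags"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [entry["name"] for entry in data.get("results", [])]


# The guard itself is a pure comparison, e.g.:
print(tag_exists(["2.1.2", "2.1.3"], "2.1.4"))  # → False: safe to publish
```

Keeping the comparison separate from the network call makes the guard trivially testable without hitting Docker Hub.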

DEPLOYING.md

Lines changed: 34 additions & 46 deletions

````diff
@@ -2,27 +2,27 @@
 
 Below are the steps to perform a production deploy.
 
-
 ## Requirements
 
 1. The entire deploy was configured to be simple from the tool Docker Compose. So you need to install:
    - [Docker](https://docs.docker.com/desktop/#download-and-install)
    - [Docker Compose](https://docs.docker.com/compose/install/)
 
-
 ## Instructions
 
 1. Create MongoDB Docker volumes:
+
    ```bash
    docker volume create --name=modulector_postgres_data
    ```
+
 1. Make a copy of `docker-compose_dist.yml` with the name `docker-compose.yml`.
 1. Set the environment variables that are empty with data. They are listed below by category:
    - Django:
        - `DJANGO_SETTINGS_MODULE`: indicates the `settings.py` file to read. In production, we set in `docker-compose_dist.yml` the value `ModulectorBackend.settings_prod` which contains several production properties.
        - `ALLOWED_HOSTS`: list of allowed host separated by commas. Default `['web', '.localhost', '127.0.0.1', '[::1]']`.
        - `ENABLE_SECURITY`: set the string `true` to enable Django's security mechanisms. In addition to this parameter, to have a secure site you must configure the HTTPS server, for more information on the latter see the section [Enable SSL/HTTPS](#enable-sslhttps). Default `false`.
-       - `CSRF_TRUSTED_ORIGINS`: in Django >= 4.x, it's mandatory to define this in production when you are using Daphne through NGINX. The value is a single host or list of hosts separated by commas. 'http://', 'https://' prefixes are mandatory. Examples of values: 'http://127.0.0.1', 'http://127.0.0.1,https://127.0.0.1:8000', etc. You can read more [here][csrf-trusted-issue].
+       - `CSRF_TRUSTED_ORIGINS`: in Django >= 4.x, it's mandatory to define this in production when you are using Daphne through NGINX. The value is a single host or list of hosts separated by commas. 'http://', 'https://' prefixes are mandatory. Examples of values: '<http://127.0.0.1>', '<http://127.0.0.1,https://127.0.0.1:8000>', etc. You can read more [here][csrf-trusted-issue].
        - `SECRET_KEY`: Django's secret key. If not specified, one is generated with [generate-secret-key application](https://github.com/MickaelBergem/django-generate-secret-key) automatically.
        - `MEDIA_ROOT`: absolute path where will be stored the uploaded files. By default `<project root>/uploads`.
        - `MEDIA_URL`: URL of the `MEDIA_ROOT` folder. By default `<url>/media/`.
@@ -34,7 +34,7 @@ Below are the steps to perform a production deploy.
        - `POSTGRES_PORT` : Database server listen port. By default, the docker image uses `5432`.
        - `POSTGRES_DB` : Database name to be used. By default, the docker image uses `modulector`.
    - Health-checks and alerts:
-       - `HEALTH_URL` : indicates the url that will be requested on Docker health-checks. By default, it is http://localhost:8000/drugs/. The healthcheck makes a GET request on it. Any HTTP code value greater or equals than 400 is considered an error.
+       - `HEALTH_URL` : indicates the url that will be requested on Docker health-checks. By default, it is <http://localhost:8000/drugs/>. The healthcheck makes a GET request on it. Any HTTP code value greater or equals than 400 is considered an error.
        - `HEALTH_ALERT_URL` : if you want to receive an alert when health-checks failed, you can set this variable to a webhook endpoint that will receive a POST request and a JSON body with the field **content** that contains the fail message.
 1. Go back to the project's root folder and run the following commands:
    - Docker Compose:
@@ -49,62 +49,57 @@ Below are the steps to perform a production deploy.
 1. Run: `python3 manage.py createsuperuser`
 1. Exit the container: `exit`
 
-
 ### Start delays
 
 Due to the database restoration in the first start, the container `db_modulector` may take a while to be up a ready. We can follow the status of the startup process in the logs by doing: `docker compose logs --follow`.
 Sometimes this delay makes django server throws database connection errors. If it is still down and not automatically fixed when database is finally up, we can restart the services by doing: `docker compose up -d`.
 
-
 ## Enable SSL/HTTPS
 
 To enable HTTPS, follow the steps below:
 
 1. Set the `ENABLE_SECURITY` parameter to `true` as explained in the [Instructions](#instructions) section.
-1. Copy the file `config/nginx/multiomics_intermediate_safe_dist.conf` and paste it into `config/nginx/conf.d/` with the name `multiomics_intermediate.conf`.
-1. Get the `.crt` and `.pem` files for both the certificate and the private key and put them in the `config/nginx/certificates` folder.
-1. Edit the `multiomics_intermediate.conf` file that we pasted in point 2. Uncomment the lines where both `.crt` and `.pem` files must be specified.
-1. Edit the `docker-compose.yml` file so that the `nginx` service exposes both port 8000 and 443. Also, you need to add `certificates` folder to `volumes` section. It should look something like this:
-
-   ```yaml
-   ...
-   nginx:
-       image: nginx:1.23.3
-       ports:
-           - 80:8000
-           - 443:443
-       # ...
-       volumes:
-           ...
-           - ./config/nginx/certificates:/etc/nginx/certificates
-           ...
-   ```
+2. Copy the file `config/nginx/multiomics_intermediate_safe_dist.conf` and paste it into `config/nginx/conf.d/` with the name `multiomics_intermediate.conf`.
+3. Get the `.crt` and `.pem` files for both the certificate and the private key and put them in the `config/nginx/certificates` folder.
+4. Edit the `multiomics_intermediate.conf` file that we pasted in point 2. Uncomment the lines where both `.crt` and `.pem` files must be specified.
+5. Edit the `docker-compose.yml` file so that the `nginx` service exposes both port 8000 and 443. Also, you need to add `certificates` folder to `volumes` section. It should look something like this:
+
+   ```yaml
+   ...
+   nginx:
+       image: nginx:1.23.3
+       ports:
+           - 80:8000
+           - 443:443
+       # ...
+       volumes:
+           ...
+           - ./config/nginx/certificates:/etc/nginx/certificates
+           ...
+   ```
 
 6. Redo the deployment with Docker.
 
-
 ## Perform security checks
 
 Django provides in its official documentation a configuration checklist that must be present in the production file `settings_prod.py`. To verify that everything is fulfilled, you could execute the following command **once the server is up (this is because several environment variables are required that are set in the `docker-compose.yml`)**.
 
-```
+```bash
 docker container exec modulector_backend python3 manage.py check --deploy --settings ModulectorBackend.settings_prod
 ```
 
 Otherwise, you could set all the mandatory variables found in `settings_prod.py` and run directly without the need to pick up any service:
 
-```
+```bash
 python3 manage.py check --deploy --settings ModulectorBackend.settings_prod
 ```
 
-
 ## Restart/stop the services
 
 If the configuration of the `docker-compose.yml` file has been changed, you can apply the changes without stopping the services, just running the `docker compose restart` command.
 
 If you want to stop all services, you can execute the command `docker compose down`.
 
-
 ## See container status
 
 To check the different services' status you can run:
@@ -113,7 +108,6 @@ To check the different services' status you can run:
 
 Where *\<service's name\>* could be `nginx_modulector`, `web_modulector` or `db_modulector`.
 
-
 ## Creating Dumps and Restoring from Dumps
 
 ### Export
@@ -124,23 +118,20 @@ In order to create a database dump you can execute the following command:
 
 That command will create a compressed file with the database dump inside.
 
-
 ### Import
 
 You can use set Modulector DB in two ways.
 
-
 ### Importing an existing database dump (recommended)
 
-1. Start up a PostgreSQL service. You can use the same service listed in the `docker-compose.dev.yml` file. Run `docker compose -f docker-compose.dev.yml up -d db_modulector` to start the DB service.
+1. Start up a PostgreSQL service. You can use the same service listed in the `docker-compose.dev.yml` file. Run `docker compose -f docker-compose.dev.yml up -d db_modulector` to start the DB service.
 1. **Optional but recommended (you can omit these steps if it's the first time you are deploying Modulector)**: due to major changes, it's probably that an import thrown several errors when importing. To prevent that you could do the following steps before doing the importation:
    1. Drop all the tables from the DB: `docker exec -i [name of the DB container] psql postgres -U postgres -c "DROP DATABASE modulector;"`
   1. Create an empty database: `docker exec -i [name of the DB container] psql postgres -U postgres -c "CREATE DATABASE modulector;"`
 1. Download `.sql.gz` from [Modulector releases pages](https://github.com/omics-datascience/modulector/releases) or use your own export file.
 1. Restore the database: `zcat modulector.sql.gz | docker exec -i [name of the DB container] psql modulector -U modulector`. This command will restore the database using a compressed dump as source, **keep in mind that could take several minutes to finish the process**.
   - **NOTE**: in case you are working on Windows, the command must be executed from [Git Bash][git-bash] or WSL.
 
-
 ### Regenerating the data manually
 
 1. Download the files for the mirDIP database (version 5.2) and the Illumina 'Infinium MethylationEPIC 2.0' array. The files can be freely downloaded from their respective web pages.
@@ -152,38 +143,35 @@
 **For the EPIC Methylation array**:
 - Go to the [Illumina product files web page](https://support.illumina.com/downloads/infinium-methylationepic-v2-0-product-files.html) and download the ZIP file called "*Infinium MethylationEPIC v2.0 Product Files (ZIP Format)*".
 - Unzip the file.
-- Within the unzipped files you will find one called "*EPIC-8v2-0_A1.csv*". Move this file to the directory **"modulector/files/"**.
+  - Within the unzipped files you will find one called "*EPIC-8v2-0_A1.csv*". Move this file to the directory **"modulector/files/"**.
 - **NOTE:** the total weight of both files is about 5 GB.
 
 **For the mirBase database**: this database is embedded as it weighs only a few MBs. Its data is processed in Django migrations during the execution of the `python3 manage.py migrate` command. So, you don't have to do manual steps to incorporate mirBase data inside Modulector.
-1. Start up a PostgreSQL service. You can use the same service listed in the _docker-compose.dev.yml_ file.
+1. Start up a PostgreSQL service. You can use the same service listed in the *docker-compose.dev.yml* file.
 1. Run `python3 manage.py migrate` to apply all the migrations (**NOTE:** this can take a long time to finish).
 
-
 ## Update databases
 
 Modulector currently works with the mirDIP (version 5.2) and miRBase (version 22.1) databases for miRNA data, and with information from the Illumina 'Infinium MethylationEPIC 2.0' array for information about methylation sites.
 If new versions are released for these databases, and you want to update them, follow these steps:
 
-- For **mirDIP** and **Illumina EPIC array** you must follow the same steps described in the [Regenerating the data manually](#regenerating-the-data-manually) section, replacing the named files with the most recent versions that have been published on their sites .
-- For **miRBase**, follow the instructions below:
-    1. Go to the [_Download_ section on the website][mirbase-download-page].
-    1. Download the files named _hairpin.fa_ and _mature.fa_ from the latest version of the database.
-    1. Replace the files inside the _modulector/files/_ directory with the ones downloaded in the previous step.
-    1. Start up a PostgreSQL service. You can use the same service listed in the _docker-compose.dev.yml_ file.
+- For **mirDIP** and **Illumina EPIC array** you must follow the same steps described in the [Regenerating the data manually](#regenerating-the-data-manually) section, replacing the named files with the most recent versions that have been published on their sites .
+- For **miRBase**, follow the instructions below:
+    1. Go to the [*Download* section on the website][mirbase-download-page].
+    1. Download the files named *hairpin.fa* and *mature.fa* from the latest version of the database.
+    1. Replace the files inside the *modulector/files/* directory with the ones downloaded in the previous step.
+    1. Start up a PostgreSQL service. You can use the same service listed in the *docker-compose.dev.yml* file.
     1. Run the command `python3 manage.py migrate` to apply all the migrations (**NOTE:** this can take a long time to finish).
 
 **Note:** These updates will work correctly as long as they maintain the format of the data in the source files.
 
-
 ## Configure your API key
 
 When we notify user about updates of pubmeds they are subscribed to we interact with a ncbi api that uses an API_KEY, by default, we left a random API_KEY pre-configured in our settings file, you should replace it with your own.
 
-
 ## Cron job configuration
-For cron jobs we use the following [library](https://github.com/kraiz/django-crontab). In our settings file we configured our cron jobs inside the `CRONJOBS = []`
 
+For cron jobs we use the following [library](https://github.com/kraiz/django-crontab). In our settings file we configured our cron jobs inside the `CRONJOBS = []`
 
 [mirbase-download-page]: https://www.mirbase.org/ftp.shtml
 [csrf-trusted-issue]: https://docs.djangoproject.com/en/4.2/ref/csrf/
````
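The health-check rules documented in the DEPLOYING.md changes above (any HTTP status greater or equal to 400 is a failure; the `HEALTH_ALERT_URL` webhook receives a POST with a JSON body whose `content` field carries the fail message) can be sketched as follows. The helper names are illustrative, not part of Modulector's code:

```python
import json
import urllib.request


def is_healthy(status_code: int) -> bool:
    """Per DEPLOYING.md, any HTTP code >= 400 counts as a failed health-check."""
    return status_code < 400


def build_alert_body(fail_message: str) -> bytes:
    """JSON body for HEALTH_ALERT_URL: a single `content` field with the message."""
    return json.dumps({"content": fail_message}).encode()


def send_alert(alert_url: str, fail_message: str) -> None:
    """POST the alert to the configured webhook (illustrative helper)."""
    req = urllib.request.Request(
        alert_url,
        data=build_alert_body(fail_message),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)


print(is_healthy(200), is_healthy(400))  # → True False
```

Because the `content` field is the whole contract, any webhook consumer (e.g. a chat integration) only needs to read that one key from the POSTed JSON.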

Dockerfile

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,4 +1,4 @@
-FROM python:3.12-slim-bullseye
+FROM python:3.12-slim-bookworm
 
 # Default value for deploying with modulector DB image
 ENV POSTGRES_USERNAME "modulector"
```

ModulectorBackend/settings.py

Lines changed: 4 additions & 1 deletion

```diff
@@ -13,7 +13,7 @@
 import os
 
 # Modulector version
-VERSION: str = '2.1.3'
+VERSION: str = '2.1.4'
 
 # Default primary key field type
 # https://docs.djangoproject.com/en/4.0/ref/settings/#default-auto-field
@@ -163,6 +163,9 @@
 MEDIA_ROOT = os.getenv('MEDIA_ROOT', os.path.join(BASE_DIR, ''))
 MEDIA_URL = os.getenv('MEDIA_URL', '/media/')
 
+# Test runner
+TEST_RUNNER = 'modulector.tests.runner.DjangoTestSuiteRunner'
+
 # Email Server
 EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
 # This email configuration is what postfix uses, for production, use your own
```
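The new `TEST_RUNNER` setting above is a dotted path that Django resolves to a class at test time. A minimal, stdlib-only sketch of that resolution mechanism (the `import_string` name mirrors Django's utility; the demo uses a stdlib path because `modulector.tests.runner` is only importable inside the project):

```python
from importlib import import_module


def import_string(dotted_path: str):
    """Resolve a dotted path like 'pkg.mod.Class' to the named attribute,
    mirroring how Django loads settings such as TEST_RUNNER."""
    module_path, _, attr_name = dotted_path.rpartition(".")
    module = import_module(module_path)
    return getattr(module, attr_name)


# Demonstrate with a standard-library path:
runner_cls = import_string("collections.OrderedDict")
print(runner_cls.__name__)  # → OrderedDict
```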

0 commit comments
