docs: running in production should change aes key (#12132)
* running in production should change aes key
* running in production should change aes key
* add instructions on how to generate AES key
* adjust celery text
docs/content/en/open_source/installation/running-in-production.md (16 additions & 4 deletions)
@@ -28,6 +28,18 @@ With a separate database, the minimum recommendations to run DefectDojo are:
 a different disk than your OS's for potential performance
 improvements.
 
+### Security
+Verify the `nginx` configuration and other run-time aspects such as security headers to comply with your compliance requirements.
+Change the AES256 encryption key `&91a*agLqesc*0DJ+2*bAbsUZfR*4nLw` in `docker-compose.yml` to something unique for your instance.
+This encryption key is used to encrypt API keys and other credentials stored in DefectDojo to connect to external tools such as SonarQube. A key can be generated in various ways, for example using a password manager or `openssl`:
+
+```
+openssl rand -base64 32
+```
+```
+DD_CREDENTIAL_AES_256_KEY: "${DD_CREDENTIAL_AES_256_KEY:-<PUT THE GENERATED KEY HERE>}"
+```
+
 ## File Backup
 
 In both cases (dedicated DB or containerized), if you are self-hosting, it is recommended that you implement periodic backups of your data.
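The key-generation step above can be sketched end to end. A minimal sketch, assuming a POSIX shell with `openssl` and the coreutils `base64` decoder available; the variable name `DD_CREDENTIAL_AES_256_KEY` is taken from the diff above:

```shell
# Generate a unique 256-bit key and export it so docker compose can
# substitute it into DD_CREDENTIAL_AES_256_KEY at startup.
DD_CREDENTIAL_AES_256_KEY="$(openssl rand -base64 32)"
export DD_CREDENTIAL_AES_256_KEY

# Sanity check: a 32-byte key encodes to 44 base64 characters and
# decodes back to exactly 32 bytes.
printf '%s\n' "$DD_CREDENTIAL_AES_256_KEY" | base64 -d | wc -c
```

Exporting the key rather than hard-coding it in `docker-compose.yml` keeps the secret out of version control.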
@@ -55,7 +67,7 @@ concurrent connections.
 
 ### Celery worker
 
-By default, a single mono-process celery worker is spawned. When storing a large amount of findings, leveraging async functions (like deduplication), or both. Eventually, it is important to adjust these parameters to prevent resource starvation.
+By default, a single mono-process celery worker is spawned. When storing a large amount of findings or running large imports, it might be helpful to adjust these parameters to prevent resource starvation.
 
 The following variables can be changed to increase worker performance, while keeping a single celery container.
 
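As an illustration of the kind of adjustment meant above, a hypothetical sketch only: the exact `DD_CELERY_WORKER_*` variable names and their supported values should be taken from the variable list in the documentation for your DefectDojo version, not from this snippet.

```shell
# Hypothetical example: raise worker parallelism before starting the stack.
# Variable names below are illustrative assumptions, not a verified API.
export DD_CELERY_WORKER_CONCURRENCY=4          # more worker processes
export DD_CELERY_WORKER_PREFETCH_MULTIPLIER=1  # fairer distribution of long tasks
docker compose up -d
```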
@@ -80,8 +92,8 @@ and see what is in effect.
 
 <span style="background-color:rgba(242, 86, 29, 0.3)">This experimental feature has been deprecated as of DefectDojo 2.44.0 (March release). Please exercise caution if using this feature with an older version of DefectDojo, as results may be inconsistent.</span>
 
-Import and Re-Import can also be configured to handle uploads asynchronously to aid in
-processing especially large scans. It works by batching Findings and Endpoints by a
+Import and Re-Import can also be configured to handle uploads asynchronously to aid in
+processing especially large scans. It works by batching Findings and Endpoints by a
 configurable amount. Each batch will be processed in separate celery tasks.
 
 The following variables impact async imports.
@@ -90,7 +102,7 @@ The following variables impact async imports.
 - `DD_ASYNC_FINDING_IMPORT_CHUNK_SIZE` defaults to 100
 
 When using asynchronous imports with dynamic scanners, Endpoints will continue to "trickle" in
-even after the import has returned a successful response. This is because processing continues
+even after the import has returned a successful response. This is because processing continues
 to occur after the Findings have already been imported.
 
 To determine if an import has been fully completed, please see the progress bar in the appropriate test.
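To make the batching concrete, an illustrative calculation only (not DefectDojo's actual code): with the default chunk size of 100, an import of 250 findings is split into three celery batches.

```shell
# Illustrative only: how many celery batches a scan produces with the
# default DD_ASYNC_FINDING_IMPORT_CHUNK_SIZE of 100.
FINDINGS=250
CHUNK_SIZE=100
# Ceiling division: two full batches of 100 plus one batch of 50.
BATCHES=$(( (FINDINGS + CHUNK_SIZE - 1) / CHUNK_SIZE ))
echo "$BATCHES"   # 3
```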