Project Setup
This page covers how to configure and run the application in different environments. Broadly, setup can be divided into development and production. However, these are only two presets; this page also covers the general ideas and overall principles behind configuring this application.
- Delete any existing Docker containers and volumes in case Docker has been used on this machine before.
- Run either prod or dev mode from the Docker folder of the project:
  - Prod: `docker-compose --profile prod build --no-cache` followed by `docker-compose --profile prod up`
  - Dev: `docker-compose --profile dev build --no-cache` followed by `docker-compose --profile dev up`
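As an illustration, profiles like these are typically declared in `docker-compose.yml`; the service names and build contexts below are assumptions for the sketch, not taken from the project:

```yaml
services:
  fennec:
    build: ..                 # assumed build context; adjust to the project layout
    profiles: [prod, dev]     # started by both presets
  traffic-replay:
    build: ./traffic-replay
    profiles: [dev]           # only started with `--profile dev`
```

Services without a `profiles` key always start; services with one start only when a matching `--profile` flag is passed.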
- Create a copy of `.env.example` named `.env`
- Configure your password in the `.env` file (Docker folder)
- Make sure to set the same password in the correct `appsettings.{Environment}.json` file (Fennec)
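As a hypothetical illustration only, the relevant part of `appsettings.{Environment}.json` might look like the sketch below; the section and key names are assumptions, so match them against the actual file in the Fennec project:

```json
{
  "Elasticsearch": {
    "Uri": "https://localhost:9200",
    "Username": "elastic",
    "Password": "<same password as in the .env file>"
  }
}
```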
- Run the Elasticsearch container and enter in its CLI: `bin/elasticsearch-create-enrollment-token -s kibana`
- Start Kibana
- Open http://localhost:5601/ and paste the token you received from the previous step
- You will receive a 6-digit code from Kibana in the container (Docker Kibana log)
- Log in with `elastic` as the username and the password you configured in the `.env` file (Docker folder)
- Start TAPAS
To fine-tune the parser and catch potential problems early, the project includes the traffic-replay service. It is a Docker container that continuously sends specified data, read from a .pcap file, to the host. Specify which file to replay by setting the `TRAFFIC_REPLAY_FILE` variable in the `.env` file. All available files can be found in the `./traffic-replay/captures` directory. These are copied at startup, so to add new files you must recreate the container using `docker-compose build --no-cache traffic-replay`. Once you have specified the file to replay, use either `docker-compose up traffic-replay` or `docker-compose --profile dev up` to replay the file.

Once the service has started, it copies the content of every packet in the specified .pcap file into the `/tmp/payloads.txt` file. A Python script then loops through this file, parses each line from hex to binary, and sends its value to `host.docker.internal:2055`, which resolves to the host.
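The replay loop described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the project's actual script; the file path and target address come from the text, while the function names and details are assumptions:

```python
import socket

PAYLOAD_FILE = "/tmp/payloads.txt"        # file written by the traffic-replay service
TARGET = ("host.docker.internal", 2055)   # the host, as seen from inside the container


def parse_line(line: str) -> bytes:
    """Convert one hex-encoded payload line into raw bytes."""
    return bytes.fromhex(line.strip())


def replay(path: str = PAYLOAD_FILE, target: tuple[str, int] = TARGET) -> None:
    """Send each payload in the file to the target as a UDP datagram."""
    # Flow-export traffic on port 2055 is UDP, so a datagram socket is used.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        with open(path) as f:
            for line in f:
                if line.strip():          # skip blank lines
                    sock.sendto(parse_line(line), target)
```

In a sketch like this, one datagram is sent per line, so each line of `/tmp/payloads.txt` is assumed to hold the hex dump of exactly one packet payload.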