
mimecast-logscale-integration

This integration periodically fetches logs from the Mimecast API 2.0 and ingests them into LogScale.

Features

1. Easy Integration Setup

Description: A user-friendly setup process with minimal dependencies.

2. Pull Logs from Mimecast and Send to LogScale

Description: Retrieve logs from Mimecast and ingest them into LogScale.

  • Supported Log Types:
    • "dlp-logs"
    • "audit-events"
    • "ttp-ip-logs"
    • "ttp-ap-logs"
    • "ttp-url-logs"
    • "threat-intel-logs-malware-customer"
    • "threat-intel-logs-malware-grid"
    • "awareness-training-user-data"
    • "awareness-training-performance-details"
    • "awareness-training-watchlist-details"
    • "siem-av-logs"
    • "siem-delivery-logs"
    • "siem-internal-email-protect-logs"
    • "siem-journal-logs"
    • "siem-process-logs"
    • "siem-receipt-logs"
    • "siem-spam-logs"

3. View Integration Logs

Description: Monitor the integration's own logs.

Getting Started

Prerequisites

  • Docker (28.0.4)
  • Docker swarm (Single-node Swarm)
  • A Linux machine with Docker is recommended. If you are using a Windows machine, you will need to set up WSL2 along with Docker.

Installation Steps

  1. Clone the Repository:

    git clone https://github.com/mimecast/logscale-integration.git
    cd logscale-integration
  2. Create/Update Integration Configuration Variables:

    • Create or update the integrations_configuration.yaml file in the config folder, which is at the same level as the docker-compose file. Optionally, add log_level at the outermost level of the configuration.
    • Keep the following points in mind while configuring:
      • the YAML file has a valid structure
      • all required keys are present
      • no extra keys are provided
      • no empty or null values for any key
      • a valid log level is provided if present in the configuration (possible log levels are "DEBUG", "INFO", "WARNING", "ERROR" and "CRITICAL")
      • no duplicate integrations
      • valid Mimecast and LogScale URLs
      • a valid cron schedule (refer to crontab for the standard cron syntax) and valid log types (the possible log types are listed below)
        • "dlp-logs"
        • "audit-events"
        • "ttp-ip-logs"
        • "ttp-ap-logs"
        • "ttp-url-logs"
        • "threat-intel-logs-malware-customer"
        • "threat-intel-logs-malware-grid"
        • "awareness-training-user-data"
        • "awareness-training-performance-details"
        • "awareness-training-watchlist-details"
        • "siem-av-logs"
        • "siem-delivery-logs"
        • "siem-internal-email-protect-logs"
        • "siem-journal-logs"
        • "siem-process-logs"
        • "siem-receipt-logs"
        • "siem-spam-logs"
      • no duplicate log types within a single integration
    • Note: It is recommended to use a 0 */12 * * * cron (every 12 hours) for the Awareness Training log types and a 0 0 * * * cron (every day at 00:00, i.e. every 24 hours) for the SIEM log types.
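    • Optionally, you can run a quick local syntax check on the file before deploying. This only verifies YAML syntax, not the rules above, and assumes Python 3 with the PyYAML package is available on the host (it is not required by the integration itself):
      python3 -c 'import yaml; yaml.safe_load(open("config/integrations_configuration.yaml")); print("YAML syntax OK")'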
  3. Create Docker Secrets JSON File:

    • Refer to the integrations_secrets_example.json file and create a new file integrations_secrets.json with your secret values. Ensure you use the exact same keys (mimecast_client_id, mimecast_client_secret, logscale_api_token) as specified in the example JSON.
    • Note: The following table lists the Mimecast apps and the log types associated with them. Make sure the necessary apps are installed in the Mimecast account associated with the mimecast_client_id and mimecast_client_secret to run the configured log types in the integration.
    App name                           | Log types included
    Security Events                    | dlp-logs, ttp-ap-logs, ttp-ip-logs, ttp-url-logs
    Audit Events                       | audit-events
    Threat Management                  | threat-intel-logs-malware-grid, threat-intel-logs-malware-customer
    Awareness Training                 | awareness-training-performance-details, awareness-training-watchlist-details, awareness-training-user-data
    Threats, Security Events, and Data | siem-av-logs, siem-delivery-logs, siem-internal-email-protect-logs, siem-journal-logs, siem-process-logs, siem-receipt-logs, siem-spam-logs
    • Note: logscale_api_token is the ingest token in LogScale. To view the dashboards in the mimecast/email-security package, you must create the token with the mimecast-emailsecurity parser associated with that package.
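    • Optionally, verify that the secrets file is valid JSON before creating the Docker secret. This check uses only the Python 3 standard library, assuming Python 3 is installed on the host:
      python3 -m json.tool integrations_secrets.json > /dev/null && echo "JSON syntax OK"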
  4. Update the Docker Compose File Based on the Configured Integrations:

    • Update the docker-compose.yml file to reflect the integrations configured in integrations_configuration.yaml.
    • The following keys may need to be changed in the docker-compose file (refer to the table below for the recommended specifications):
      • in services > mimecast-logscale > deploy > resources > limits > memory: the maximum memory the container can use
      • in services > mimecast-logscale > deploy > resources > limits > cpus: the maximum number of CPUs the container can use
      • in services > mimecast-logscale > deploy > resources > reservations > memory: the memory reserved for the container
      • in services > mimecast-logscale > deploy > resources > reservations > cpus: the number of CPUs reserved for the container
    • Refer to the table below for CPU and memory specifications based on the number of integrations and the expected event volume:
    Integration count | Max events per log type | Limit CPUs | Reservation CPUs | Limit memory | Reservation memory
    1                 | 1000                    | 1          | 1                | 500MB        | 250MB
    1                 | 10000                   | 1          | 1                | 1.25GB       | 750MB
    2                 | 1000                    | 2          | 2                | 750MB        | 500MB
    2                 | 10000                   | 2          | 2                | 2GB          | 1.25GB
    3                 | 1000                    | 3          | 3                | 1GB          | 750MB
    3                 | 10000                   | 3          | 3                | 2.5GB        | 1.75GB
  5. Server Certificate Configuration (For On-Premise LogScale):

    • If you are using LogScale in an on-premise setup with SSL enabled, you must add the server certificate for each LogScale instance. Follow these steps to ensure proper configuration:
    1. Prepare the ssl_certs Folder:
      • Ensure that the ssl_certs folder exists at the same level as your deployment script. This folder is required even if there are no .crt files to add initially.
      • If the folder does not exist, create it manually, or let the deployment script create it automatically when you run the script in the next step.
    2. Add Server Certificates:
      • Place all the on-premise server’s .crt file(s) into the ssl_certs folder.
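      • For example, a minimal sketch (the certificate path below is a placeholder for your own on-premise server certificate):
        mkdir -p ssl_certs
        cp /path/to/your-logscale-server.crt ssl_certs/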
  6. Use Deployment Helper Script:

    • Replace grep -oP with grep -o at lines 101 and 102 in deploy_integration_helper.sh if the script is not compatible with your OS/architecture.
    • Ensure the deploy_integration_helper.sh file has execute permissions:
    chmod +x deploy_integration_helper.sh
    • You can then use the script to automate the creation of Docker swarm secrets and stack deployment:
    ./deploy_integration_helper.sh
    • The script will prompt you to remove the integration secrets file for enhanced security; you can either remove the file or keep it.
    • Note: If you use this script, you can skip the next three steps (7, 8 and 9).
  7. Create Docker Swarm Secret with the Newly Created JSON File:

    • Make sure swarm mode is enabled. If not, run:
    docker swarm init
    • To create the swarm secret:
    docker secret create integrations_secrets integrations_secrets.json
  8. [Optional] Remove the secrets JSON file created in step 3 to enhance security:

    rm integrations_secrets.json
  9. Start the Application:

    docker stack deploy -c <docker-compose-file path> integration_stack
    • Note: To remove the stack deployment, run: docker stack rm <stack_name>
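    • Once the stack is running, you can optionally confirm that the CPU and memory settings from step 4 were applied. This assumes the service is named mimecast-logscale in the compose file and the stack name integration_stack used above:
      docker service inspect integration_stack_mimecast-logscale --format '{{ json .Spec.TaskTemplate.Resources }}'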
  10. To Redeploy the Integration: Rerun the deployment script; you will be prompted to remove the existing stack and create a new one:

    ./deploy_integration_helper.sh

    Note: You do not need to redeploy if only the cron schedule or log types change in the configuration.

Configuration

  1. The configuration for the Mimecast LogScale integration is managed through the integrations_configuration.yaml file in the config folder. Below is an example with a single integration; you can customize all the values as needed:

    integrations:
      - name: "Integration1"
        source:
          base_url: "https://api.services.mimecast.com"
          log_groups:
            - cron_schedule: "*/2 * * * *"
              log_types:
                - "dlp-logs"
                - "audit-events"
                - "ttp-ip-logs"
                - "ttp-ap-logs"
                - "ttp-url-logs"
            - cron_schedule: "*/2 * * * *"
              log_types:
                - "threat-intel-logs-malware-customer"
                - "threat-intel-logs-malware-grid"
            - cron_schedule: "0 */12 * * *" # Every 12hr
              log_types:
                - "awareness-training-user-data"
                - "awareness-training-performance-details"
                - "awareness-training-watchlist-details"
            - cron_schedule: "0 0 * * *" # Every day at 00:00:00(24hr)
              log_types:
                - "siem-av-logs"
                - "siem-delivery-logs"
                - "siem-internal-email-protect-logs"
                - "siem-journal-logs"
                - "siem-process-logs"
                - "siem-receipt-logs"
                - "siem-spam-logs"
        destination:
          base_url: "https://cloud.community.humio.com"
          repo_name: "Temp"

    Below is an example with multiple integrations; you can customize all the values as needed:

    log_level: "ERROR"
    integrations:
      - name: "Integration1"
        source:
          base_url: "https://api.services.mimecast.com"
          log_groups:
            - cron_schedule: "*/2 * * * *"
              log_types:
                - "dlp-logs"
                - "audit-events"
                - "ttp-ip-logs"
                - "ttp-ap-logs"
                - "ttp-url-logs"
            - cron_schedule: "*/2 * * * *"
              log_types:
                - "threat-intel-logs-malware-customer"
                - "threat-intel-logs-malware-grid"
            - cron_schedule: "0 */12 * * *" # Every 12hr
              log_types:
                - "awareness-training-user-data"
                - "awareness-training-performance-details"
                - "awareness-training-watchlist-details"
            - cron_schedule: "0 0 * * *" # Every day at 00:00:00(24hr)
              log_types:
                - "siem-av-logs"
                - "siem-delivery-logs"
                - "siem-internal-email-protect-logs"
                - "siem-journal-logs"
                - "siem-process-logs"
                - "siem-receipt-logs"
                - "siem-spam-logs"
        destination:
          base_url: "https://cloud.community.humio.com"
          repo_name: "Temp"
      - name: "Integration2"
        source:
          base_url: "https://api.services.mimecast.com"
          log_groups:
            - cron_schedule: "*/2 * * * *"
              log_types:
                - "threat-intel-logs-malware-customer"
                - "threat-intel-logs-malware-grid"
            - cron_schedule: "*/3 * * * *"
              log_types:
                - "awareness-training-user-data"
                - "awareness-training-performance-details"
                - "awareness-training-watchlist-details"
        destination:
          base_url: "https://cloud.community.humio.com"
          repo_name: "Temp"
  2. Integration secrets JSON file. Below is an example for a single integration (add actual values for all the keys):

    {
      "Integration1": {
        "mimecast_client_id": "",
        "mimecast_client_secret": "",
        "logscale_api_token": ""
      }
    }

    Below is an example for multiple integrations (add actual values for all the keys):

    {
      "Integration1": {
        "mimecast_client_id": "",
        "mimecast_client_secret": "",
        "logscale_api_token": ""
      },
      "Integration2": {
        "mimecast_client_id": "",
        "mimecast_client_secret": "",
        "logscale_api_token": ""
      }
    }

Troubleshooting

  1. To check the logs for the integration:

    • To list the Docker volumes:
      docker volume ls
    • To get the path of the logs from the volume:
      docker volume inspect mimecast-logscale-logs
      This prints the mountpoint for the logs.
    • To see the logs:
      cd <mountpoint_path>
      This directory contains a subdirectory for each configured integration, along with other log files such as app.log and per-integration process logs.
    • Note: app.log uses log level INFO, while the other files use the log level from the configuration; if no level is provided, INFO is used by default.
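    • For example, to jump straight to the log directory and tail the main log file (a sketch assuming the default volume name mimecast-logscale-logs shown above; accessing the Docker volumes directory may require root privileges):
      MOUNTPOINT=$(docker volume inspect mimecast-logscale-logs --format '{{ .Mountpoint }}')
      cd "$MOUNTPOINT" && tail -n 50 app.log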
  2. To debug issues while deploying the Docker stack

    There are a few commands to check the status of the Docker setup:

    • To check the status of the stack:
      docker stack ls
    • To check the services running:
      docker service ls
    • To check the status of the service:
      docker service ps <service_id>
    • To check the running containers:
      docker ps
    • To check the logs of a container:
      docker logs <container_id>
    • To check the logs of a service:
      docker service logs <service_id>
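    • For example, to follow the integration service logs live (assuming the stack name integration_stack and the service name mimecast-logscale used in the installation steps):
      docker service logs -f integration_stack_mimecast-logscale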
  3. We recommend using a Linux machine. However, if you are using a Windows machine, you will need to set up WSL2 on Windows:

    1. Enable WSL2 and then restart computer:
      • wsl --install
        wsl --set-default-version 2
    2. Create the docker group, add your user to the docker group, and restart your WSL2 distribution:
      • sudo groupadd docker
        sudo usermod -aG docker $USER
        wsl --shutdown
    3. In the WSL2 terminal, start the Docker daemon (if it is not already running). Run the following command to start the Docker service:
      • sudo dockerd > /dev/null 2>&1 &
    4. Initialize Docker Swarm. In PowerShell, run the following command to initialize Docker Swarm:
      • docker swarm init
      • If you encounter an error related to the system's IP address, proceed with the following steps:
    5. Determine the system IP address. Run the following command in PowerShell to display your network configuration:
      • ipconfig
      • Identify the appropriate IP address from the output (probably under your Ethernet or Wi-Fi adapter).
    6. Initialize Docker Swarm with a specific IP address. Use the identified IP address to initialize Docker Swarm:
      • docker swarm init --advertise-addr <your-ip-address>
      • Replace <your-ip-address> with the actual IP address of your machine.
    7. If you want to deploy the integration using deploy_integration_helper.sh, open Bash as Administrator.
