# Serverless Heimdall Pusher (AWS)

This lambda function is meant to allow you to listen to an S3 bucket for HDF results and push them to a Heimdall server.

## Table of Contents
- [How Does This Lambda Work?](#how-does-this-lambda-work)
- [How Can I Deploy This Lambda with Terraform?](#how-can-i-deploy-this-lambda-with-terraform)
- [What Format Do JSON Files Need to Be in for the Function to Process Results?](#what-format-do-json-files-need-to-be-in-for-the-function-to-process-results)

## How Does This Lambda Work?

The lambda function is triggered when new files hit an S3 bucket that you specify under the `unprocessed/*` folder. The lambda then takes several steps to process the results (see the trigger sketch after this list):
1. Fetch the new file from S3
2. Form a valid API request for a [Heimdall server](https://github.com/mitre/heimdall2) and tag the result with `HeimdallPusher`
3. Send the API request to the configured Heimdall server
4. Save the HDF to the same S3 bucket under `hdf/*`
5. Save the original file to the same S3 bucket under `processed/*`
6. Delete the unprocessed version of the file from the S3 bucket

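The trigger itself is the kind of thing an S3 event notification scoped to the `unprocessed/` prefix provides. Purely as an illustration (the Terraform module below may wire this up differently), a notification along these lines would invoke the function whenever a new object lands under that prefix; the bucket name and function ARN are placeholders:

```bash
# Illustration only: send s3:ObjectCreated events under unprocessed/ to the pusher function.
# The function must also allow S3 to invoke it (see `aws lambda add-permission`).
aws s3api put-bucket-notification-configuration \
  --bucket bucket_name \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:serverless-heimdall-pusher-lambda",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {"Key": {"FilterRules": [{"Name": "prefix", "Value": "unprocessed/"}]}}
    }]
  }'
```
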
## How Can I Deploy This Lambda with Terraform?

Before deploying with Terraform, you will need to pull the Docker image to your deployment machine:
```bash
docker pull ghcr.io/mitre/serverless-heimdall-pusher-lambda:<version>
```
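
AWS Lambda runs container images out of Amazon ECR, so depending on how your Terraform sources the image you may also need to retag the pulled image and push it into an ECR repository in your own account. A rough sketch, with placeholder account ID, region, and repository name:

```bash
# Placeholders -- adjust for your environment.
ACCOUNT=123456789012
REGION=us-east-1
REPO=serverless-heimdall-pusher-lambda
VERSION="<version>"   # the image tag you pulled above

# Create the repository if it does not exist yet
aws ecr create-repository --repository-name "$REPO" --region "$REGION" || true

# Authenticate Docker to ECR, then retag and push the image
aws ecr get-login-password --region "$REGION" | \
  docker login --username AWS --password-stdin "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com"
docker tag "ghcr.io/mitre/serverless-heimdall-pusher-lambda:$VERSION" \
  "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:$VERSION"
docker push "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:$VERSION"
```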

```hcl
##
# Heimdall Pusher Lambda function
#
# https://github.com/mitre/serverless-heimdall-pusher-lambda
#
module "serverless-heimdall-pusher-lambda" {
  source            = "github.com/mitre/serverless-heimdall-pusher-lambda"
  heimdall_url      = "https://target-heimdall.com"
  heimdall_user     = ""
  heimdall_password = ""
  results_bucket_id = "bucket_name"
  subnet_ids        = ["subnet-00000000000000000"]
  security_groups   = ["sg-00000000000000000"]
  lambda_role_arn   = aws_iam_role.InSpecRole.arn
  lambda_name       = "serverless-inspec-lambda"
}
```
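
With the module block added to your configuration, deployment follows the usual Terraform workflow:

```bash
terraform init    # download the module and providers
terraform plan    # review the resources that will be created
terraform apply   # create the lambda and its supporting resources
```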

## What Format Do JSON Files Need to Be in for the Function to Process Results?

New files added to the S3 bucket under `unprocessed/*` that are in the format below will trigger the lambda and be processed properly.

```javascript
{
  "data": {}, // This is where the HDF results go
  "eval_tags": "ServerlessInspec,RHEL7" // These are any tags that should be assigned in Heimdall
}
```
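
As a quick example, if you already have an HDF results file (here a hypothetical `results.json`), you can wrap it in this envelope with `jq` and drop it under `unprocessed/` to kick off the pusher; the bucket name and tags are placeholders:

```bash
# Wrap an existing HDF file in the expected envelope, then upload it to trigger the lambda.
jq '{data: ., eval_tags: "ServerlessInspec,RHEL7"}' results.json > wrapped.json
aws s3 cp wrapped.json s3://bucket_name/unprocessed/wrapped.json
```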

### NOTICE