This guide explains how to deploy the ARM Compatibility Analyzer as an AWS Lambda function using Terraform. The deployed Lambda function can be invoked directly or through the included `analyzer.py` client module, and is also integrated with the CostNorm MCP server for automated ARM64 migration workflows.
The ARM Compatibility Analyzer consists of:
- Lambda Function (`src/`): Core analysis logic deployed to AWS Lambda
- Client Module (`analyzer.py`): Python client for invoking the Lambda function
- Supporting Tools: Lambda search and architecture change tools
- MCP Integration: Used by CostNorm MCP server for automated workflows
Before you begin, make sure you have the following:

- AWS Account & Credentials: You need AWS credentials with permissions to create Lambda functions, IAM roles, and CloudWatch Log Groups.
  - Configure your AWS CLI: `aws configure --profile costnorm`
  - Or set the AWS environment variables: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`
- Terraform: Download and install Terraform (version >= 1.0.0).
  - Download from: https://www.terraform.io/downloads
  - Verify installation: `terraform -v`
- Docker: Docker must be installed and running on the machine where you execute `terraform apply`. This is required for building the Python dependency layer.
  - Download from: https://www.docker.com/get-started
  - Verify installation: `docker --version`
- Python & Dependencies (for Local Development): Python 3.8+ and pip installed.
  - Install the dependencies listed in `requirements.txt` if you plan to run or test locally: `pip install -r requirements.txt`
For local development and testing, you can use a `.env` file to configure environment variables:

1. Create a `.env` file in the `src/` directory with the necessary variables (see `src/.env.sample` for a template):

   ```bash
   # GitHub API Access
   GITHUB_TOKEN=your_github_token_here

   # DockerHub Access (for Docker image inspection)
   DOCKERHUB_USERNAME=your_dockerhub_username
   DOCKERHUB_PASSWORD=your_dockerhub_password_or_token

   # Analyzer Configuration (set to True/False)
   ENABLE_TERRAFORM_ANALYZER=True
   ENABLE_DOCKER_ANALYZER=True
   ENABLE_DEPENDENCY_ANALYZER=True
   ```
2. Run the analyzer locally (from the `src/` directory):

   ```bash
   cd src
   # Make sure dependencies are installed locally if needed for testing
   pip install -r ../requirements.txt
   python lambda_function.py
   ```
The code in `src/config.py` automatically detects whether it's running in Lambda or locally and loads the `.env` file if it's not in a Lambda environment.
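For reference, a minimal sketch of how this kind of detection can work, assuming the `python-dotenv` package is used for local runs (the actual logic in `src/config.py` may differ):

```python
import os
from pathlib import Path

# Lambda always sets AWS_LAMBDA_FUNCTION_NAME, so its absence implies a local run.
IS_LAMBDA = bool(os.environ.get("AWS_LAMBDA_FUNCTION_NAME"))

if not IS_LAMBDA:
    from dotenv import load_dotenv  # python-dotenv, only needed for local runs

    env_path = Path(__file__).parent / ".env"
    if env_path.exists():
        load_dotenv(env_path)

# Values come from Lambda environment variables in AWS, or from .env locally.
GITHUB_TOKEN = os.environ.get("GITHUB_TOKEN", "")
ENABLE_DOCKER_ANALYZER = os.environ.get("ENABLE_DOCKER_ANALYZER", "True") == "True"
```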
To configure the Terraform deployment:

1. Navigate to the Terraform directory:

   ```bash
   cd terraform
   ```

2. Create your variable definitions file by copying the example:

   ```bash
   cp terraform.auto.tfvars.example terraform.auto.tfvars
   ```

3. Edit `terraform.auto.tfvars` and fill in your values:

   ```hcl
   # --- Required Credentials ---
   dockerhub_username = "YOUR_DOCKERHUB_USERNAME"
   dockerhub_password = "YOUR_DOCKERHUB_PASSWORD_OR_PAT"
   github_token       = "YOUR_GITHUB_TOKEN"

   # --- Analyzer Configuration ---
   enable_terraform_analyzer  = "True"
   enable_docker_analyzer     = "True"
   enable_dependency_analyzer = "True"

   # --- Optional Overrides ---
   aws_region = "ap-northeast-2" # Default region
   log_level  = "INFO"           # Or "DEBUG"

   # Customize function names if needed
   # lambda_function_name = "custom-arm-analyzer"
   # lambda_timeout       = 300
   # lambda_memory_size   = 1024
   ```
Important: Do not commit this file with sensitive credentials to version control.
From within the `terraform/` directory, run the following commands:

```bash
# Initialize Terraform (downloads providers like aws and archive)
terraform init

# Preview the changes Terraform will make
terraform plan

# Apply the changes to deploy to AWS
terraform apply
```

Confirm the apply operation when prompted.
What `terraform apply` does:

- Creates a zip file from the `src/` directory, excluding development files like `.env`, `__pycache__/`, etc.
- Uses Docker to build the Python dependencies into a Lambda layer for the ARM64 architecture
- Creates/updates the IAM role, Lambda layer, Lambda function, and CloudWatch Log Group in your AWS account
- Injects variables from `terraform.auto.tfvars` into the Lambda function's environment
Terraform will output information about the created resources upon successful completion.
If you plan to use the `analyzer.py` client module, update the Lambda function name:

1. Open `analyzer.py`.

2. Update the `ARM_ANALYSIS_LAMBDA_FUNCTION_NAME` variable with the Lambda function name from the Terraform output:

   ```python
   ARM_ANALYSIS_LAMBDA_FUNCTION_NAME = "arm-compatibility-analyzer"  # Update if you customized the name
   ```

3. Ensure you have the correct AWS profile configured:

   ```python
   boto3_session = boto3.Session(profile_name='costnorm', region_name='ap-northeast-2')
   ```
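For reference, a minimal sketch of how such an invocation can be built on that session with boto3; the actual `_invoke_arm_analysis_lambda` implementation in `analyzer.py` may differ:

```python
import json
import boto3

boto3_session = boto3.Session(profile_name='costnorm', region_name='ap-northeast-2')
lambda_client = boto3_session.client('lambda')

def invoke_arm_analysis(github_url: str) -> dict:
    """Synchronously invoke the analyzer Lambda and return its parsed JSON response."""
    response = lambda_client.invoke(
        FunctionName="arm-compatibility-analyzer",  # match ARM_ANALYSIS_LAMBDA_FUNCTION_NAME
        InvocationType="RequestResponse",
        Payload=json.dumps({"github_url": github_url}),
    )
    return json.loads(response["Payload"].read())

# Example:
# result = invoke_arm_analysis("https://github.com/username/repo-to-analyze")
```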
The Lambda function expects a JSON payload with a `github_url` parameter:

```bash
# Using AWS CLI
aws lambda invoke \
  --function-name arm-compatibility-analyzer \
  --payload '{"github_url":"https://github.com/username/repo-to-analyze"}' \
  response.json

# View the result
cat response.json
```

You can also invoke the function from Python using the client module:

```python
from analyzer import _invoke_arm_analysis_lambda

# Analyze a repository
result = await _invoke_arm_analysis_lambda("https://github.com/username/repo")
print(result)
```

The Lambda function is automatically integrated with the CostNorm MCP server through the `analyze_repo_arm_compatibility` tool. The MCP server can:
- Analyze repository ARM compatibility
- Search for existing Lambda functions
- Automatically migrate compatible functions to ARM64
The deployment also includes additional Lambda functions for ARM migration workflows:
```bash
# Search for Lambda functions by name
aws lambda invoke \
  --function-name lambda_search_tool \
  --payload '{"query":"my-function","only_x86":true}' \
  search_result.json

# Change function architecture to ARM64
aws lambda invoke \
  --function-name lambda_architecture_change_tool \
  --payload '{"function_name":"my-function","target_arch":"arm64"}' \
  change_result.json
```
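As a rough illustration, the two tools can also be chained from Python to find x86_64 functions and request their migration. The payload keys come from the commands above, but the response keys parsed below (`functions`, `function_name`) are assumptions, so adapt them to the tools' real output:

```python
import json
import boto3

session = boto3.Session(profile_name="costnorm", region_name="ap-northeast-2")
lambda_client = session.client("lambda")

def call_tool(function_name: str, payload: dict) -> dict:
    """Invoke a helper Lambda tool and return its parsed JSON response."""
    response = lambda_client.invoke(
        FunctionName=function_name,
        InvocationType="RequestResponse",
        Payload=json.dumps(payload),
    )
    return json.loads(response["Payload"].read())

# Find x86_64 functions matching a name fragment ...
search_result = call_tool("lambda_search_tool", {"query": "my-function", "only_x86": True})

# ... and request an ARM64 migration for each one.
# NOTE: the "functions"/"function_name" keys below are assumed, not taken from the tools' docs.
for fn in search_result.get("functions", []):
    call_tool(
        "lambda_architecture_change_tool",
        {"function_name": fn["function_name"], "target_arch": "arm64"},
    )
```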
The ARM compatibility analyzer returns a structured JSON response:

```json
{
"repository": "owner/repo",
"github_url": "https://github.com/owner/repo",
"default_branch": "main",
"analysis_details": {
"dependencies": {
"results": [...],
"recommendations": [...],
"reasoning": [...]
},
"docker_analysis": {
"results": [...],
"recommendations": [...],
"reasoning": [...]
},
"instance_types": {
"results": [...],
"recommendations": [...],
"reasoning": [...]
}
},
"overall_compatibility": "compatible|incompatible|unknown",
"recommendations": [...],
"context": {
"analysis_summary": {...},
"reasoning": [...],
"enabled_analyzers": [...],
"statistics": {...}
}
}
```
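A short sketch of consuming this response on the client side, assuming you have the JSON document shown above (for example, saved to `response.json` by the CLI invocation earlier):

```python
import json

def summarize_analysis(result: dict) -> None:
    """Print a brief summary of an ARM compatibility analysis response."""
    print(f"{result.get('repository', '?')}: {result.get('overall_compatibility', 'unknown')}")

    for recommendation in result.get("recommendations", []):
        print(f"  - {recommendation}")

    # Per-analyzer details live under analysis_details.
    for name, detail in result.get("analysis_details", {}).items():
        print(f"  [{name}] {len(detail.get('results', []))} findings")

# Example with a response saved by `aws lambda invoke`:
# with open("response.json") as f:
#     summarize_analysis(json.load(f))
```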
To remove all resources created by Terraform:

```bash
# From within the terraform/ directory
terraform destroy
```

Confirm the destroy operation when prompted.
If you run into problems:

- Terraform Errors: Read the output carefully. Common issues include missing credentials, Docker not running, or AWS permissions errors. Run `terraform init` if you add new providers.
- Lambda Execution Errors: Check the AWS CloudWatch logs for the function (e.g., `/aws/lambda/arm-compatibility-analyzer`). The log group name is available in the Terraform outputs.
- Configuration: Verify the environment variables in the Lambda function settings via the AWS Console (Terraform should set these from `terraform.auto.tfvars`).
- Dependencies: Ensure `requirements.txt` is correct. Check the logs from the `local-exec` layer build step during `terraform apply` for Docker errors.
- Permissions: Verify the IAM role has the necessary permissions (`AWSLambdaBasicExecutionRole` is attached by default).
- Client Connection: If using `analyzer.py`, ensure your AWS profile (`costnorm`) is configured correctly and has permission to invoke the Lambda function.
Security considerations:

- Store sensitive credentials (GitHub token, Docker Hub credentials) securely
- Consider using AWS Secrets Manager for production deployments instead of environment variables
- The Lambda function runs with minimal IAM permissions for security
- All network communication uses HTTPS
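If you do move credentials to Secrets Manager, the function can fetch them at startup instead of reading environment variables. A minimal sketch, where the secret name and its JSON layout are assumptions:

```python
import json
import boto3

# Hypothetical secret name; create it yourself and grant the Lambda role secretsmanager:GetSecretValue.
SECRET_NAME = "arm-analyzer/credentials"

def load_credentials() -> dict:
    """Fetch GitHub/DockerHub credentials from AWS Secrets Manager."""
    client = boto3.client("secretsmanager", region_name="ap-northeast-2")
    secret = client.get_secret_value(SecretId=SECRET_NAME)
    # Secret stored as JSON, e.g. {"GITHUB_TOKEN": "...", "DOCKERHUB_USERNAME": "...", ...}
    return json.loads(secret["SecretString"])
```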
When adding new analyzers or modifying existing ones:
- Follow the `BaseAnalyzer` interface in `src/analyzers/base_analyzer.py` (a rough sketch follows after this list)
- Add new analyzer configurations to `src/config.py`
- Update `requirements.txt` if new dependencies are needed
- Test locally before deploying
- Update this README with any new configuration options
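As an illustration only, a new analyzer could look roughly like the sketch below. The real `BaseAnalyzer` interface in `src/analyzers/base_analyzer.py` defines the actual method names and signatures, so treat the class shape here as a placeholder:

```python
# Hypothetical sketch; align the class and method names with src/analyzers/base_analyzer.py.
from analyzers.base_analyzer import BaseAnalyzer  # actual import path may differ


class MakefileAnalyzer(BaseAnalyzer):
    """Example analyzer that flags x86-specific compiler flags in Makefiles."""

    name = "makefile"

    def analyze(self, repo_files: dict) -> dict:
        results, recommendations = [], []
        for path, content in repo_files.items():
            if path.endswith("Makefile") and "-march=x86-64" in content:
                results.append({"file": path, "issue": "x86-specific -march flag"})
                recommendations.append(f"Replace -march=x86-64 in {path} with a portable setting")
        # Mirror the results/recommendations/reasoning shape used in the response above.
        return {
            "results": results,
            "recommendations": recommendations,
            "reasoning": ["Checked Makefiles for architecture-specific flags"],
        }
```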