
Easy Model Deployer: Easy Foundation Model Hosting on AWS

English | 简体中文

Documentation · Changelog

Introduction

Easy Model Deployer is a lightweight tool designed to simplify deploying open-source LLMs (see Supported Models) and custom models on AWS. It provides an OpenAI-compatible Completions API and a LangChain interface, and is built for developers who need reliable, scalable model serving without complex environment setup.

Key Features

  • One-click deployment of models to AWS (Amazon SageMaker, Amazon ECS, Amazon EC2)
  • Diverse model types (LLMs, VLMs, Embeddings, Vision, etc.)
  • Multiple inference engines (vLLM, TGI, LMDeploy, etc.)
  • Different instance types (CPU/GPU/AWS Inferentia)
  • Convenient integration (OpenAI-compatible API, LangChain client, etc.; see the sketch after this list)
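
Once a model is deployed behind the OpenAI-compatible endpoint, it can also be consumed through LangChain. The snippet below is a minimal sketch, assuming the langchain-openai package is installed; the base URL, API key, and ModelId are placeholders (assumptions, not values defined by this project), so replace them with the details reported for your deployment.

from langchain_openai import ChatOpenAI

# Minimal sketch: all values below are placeholders (assumptions), not outputs of this project.
llm = ChatOpenAI(
    base_url="http://<your-emd-endpoint>/v1",  # assumed OpenAI-compatible base URL of your deployment
    api_key="EMPTY",                           # assumed; replace if your deployment requires a key
    model="DeepSeek-R1-Distill-Qwen-1.5B",     # a ModelId, e.g. as shown by `emd status`
)

print(llm.invoke("Say hello in one sentence.").content)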

Supported Models

Easy Model Deployer supports a wide range of models, including:

  • LLMs: Qwen, Llama, DeepSeek, GLM, InternLM, Baichuan, and more
  • Vision-Language Models: Qwen-VL, InternVL, Gemma3-Vision, and more
  • Embedding Models: BGE, Jina, BERT-based models
  • Reranking Models: BGE-Reranker, Jina-Reranker
  • ASR Models: Whisper variants
  • Custom Models: Support for custom Docker images

For the complete list of supported models and deployment configurations, see Supported Models.

🔧 Get Started

Installation

Install Easy Model Deployer from PyPI; Python 3.9 or above is required:

pip install easy-model-deployer

Bootstrap

Prepare the essential resources required for model deployment.

For more information, please refer to Architecture.

emd bootstrap

💡 Tip After upgrading EMD via pip, run this command again to update the environment.

Deploy Models

Deploy models through an interactive CLI or with a single command.

emd deploy

💡 Tip To view all available parameters, run emd deploy --help. When you see the message "Waiting for model: ...", it means the deployment task has started and you can stop the terminal output by pressing Ctrl+C.

Show Status

Check the status of the model deployment task.

emd status

💡 Tip EMD allows you to launch multiple deployment tasks simultaneously.

Invocation

Invoke the deployed model for testing from the CLI.

emd invoke <ModelId>

💡 Tip You can find the ModelId in the output of emd status. For example: emd invoke DeepSeek-R1-Distill-Qwen-1.5B

💡 Tip The OpenAI-compatible API is supported only for the Amazon ECS and Amazon EC2 deployment types.
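
Beyond the CLI, a model deployed on Amazon ECS or Amazon EC2 can be called with the standard OpenAI Python client, since the endpoint is OpenAI-compatible. The following is a minimal sketch; the base URL, API key, and request parameters are assumptions to adapt to your own deployment (see emd status and the documentation for the actual endpoint details).

from openai import OpenAI

# Minimal sketch: base URL and API key are placeholders (assumptions); use your deployment's values.
client = OpenAI(
    base_url="http://<your-emd-endpoint>/v1",  # assumed OpenAI-compatible base URL
    api_key="EMPTY",                           # assumed; replace if your deployment requires a key
)

response = client.chat.completions.create(
    model="DeepSeek-R1-Distill-Qwen-1.5B",     # a ModelId from `emd status`
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)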

List Supported Models

Quickly see which models are supported; this command outputs all information relevant to deployment. (See Supported Models for more information.)

emd list-supported-models

Delete Model

Delete the deployed model.

emd destroy <ModelId>

💡 Tip You can find the ModelId in the output of emd status. For example: emd destroy DeepSeek-R1-Distill-Qwen-1.5B

📖 Documentation

For advanced configurations and detailed guides, visit our documentation site.

🤝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.