A Model Context Protocol (MCP) server that enables AI assistants to execute Rust cargo commands safely with asynchronous operation support.
This MCP server lets an AI run cargo commands in the background. Multiple commands can be queued and executed while the AI continues to think, which completes tasks faster, with fewer interruptions, and may consume fewer premium AI tokens.
See the full product specification and roadmap: docs/product-specification.md
Most tool-calling AIs take advantage of this. All LLMs are equal, but some are more equal than others. We continue to iterate, and help is welcome.
An example session: the AI launches long-running cargo commands in the background and keeps working while they run.

> Now let me run the full test suite to verify that all tests pass with my fixes:
>
> *async_cargo_mcp (MCP Server)*
>
> Perfect! The test is running in the background. While it's running, let me update the `agent-plan.md` to mark this task as substantially complete and document the final findings: `agent-plan.md +3 -3`
>
> I'll run clippy fix, build, nextest, and fmt across the workspace to ensure the changes compile and pass checks.
>
> *async_cargo_mcp (MCP Server)* ×5 ⬇️ `agent-plan.md +4 -5`
>
> I will now proceed with the plan. First, I'll run the tests to find the slowest ones.
>
> *async_cargo_mcp (MCP Server)*
>
> I've kicked off the process to find the 10 slowest tests. While that's running, I'll update our plan to reflect the current status. `agent-plan.md +6 -6`
>
> Excellent. I'm now executing the plan.
>
> First, I'm...
As you can see, (1) the developer, (2) the AI, and (3) the Rust tooling can easily be coordinated to all work productively and concurrently without losing the storyline.
The server features a pre-warmed shell pool that starts commands roughly 10x faster than unpooled spawning, cutting startup latency from 50-200ms to just 5-20ms (MacBook Pro M1) and delivering rapid responses while allowing commands to be stacked. For example, `test` and `nextest` (after the initial compile) do not hold the cargo filesystem lock while they run, so both the AI and other spawned cargo commands such as `clippy` can do useful work while they complete. More information is available via `--help`.
- `build` - Compile the current package
- `run` - Build and execute the binary
- `test` - Run the test suite
- `check` - Check for compile errors without building
- `clean` - Remove build artifacts
- `doc` - Build documentation
- `add` - Add dependencies to Cargo.toml (updates `Cargo.toml`, so synchronous)
- `remove` - Remove dependencies from Cargo.toml (synchronous)
- `update` - Update dependencies to latest compatible versions (synchronous)
- `fetch` - Download dependencies without building
- `install` - Install a Rust binary
- `search` - Search for packages on crates.io
- `tree` - Display dependency tree (synchronous)
- `version` - Show cargo version information (synchronous)
- `rustc` - Compile with custom rustc options
- `metadata` - Output package metadata as JSON (synchronous)
- `clippy` - Enhanced linting and code quality checks
- `nextest` - Faster test execution
- `fmt` - Code formatting with rustfmt
- `audit` - Security vulnerability scanning
- `upgrade` - Upgrade dependencies to latest versions (synchronous)
- `bump_version` - Bump package version (patch, minor, major) (synchronous)
- `bench` - Run benchmarks
- `status` - Query running operations status (non-blocking, returns JSON)
- `wait` - Wait for async operations to complete (synchronous; deprecated - results are pushed automatically)
- `cargo_lock_remediation` - Safely handle `target/.cargo-lock`, with options to delete it and optionally run `cargo clean` (synchronous; used as a fallback when elicitation isn't available)
- Asynchronous execution with real-time progress updates
- Automatic result push - Operation results pushed to AI when complete (no manual wait required)
- Safe operations with proper working directory isolation
- Type-safe parameters with JSON schema validation
- Operation monitoring with timeout and cancellation support
- Comprehensive error handling and detailed logging
- Concurrency metrics for optimizing AI task parallelism
- Status queries - Non-blocking visibility into running operations
```bash
git clone https://github.com/paulirotta/async_cargo_mcp.git
cd async_cargo_mcp
cargo build --release
```
Enable MCP in VSCode settings:
```json
{
  "chat.mcp.enabled": true
}
```
Add the server configuration using Ctrl/Cmd+Shift+P → "MCP: Add Server":
```json
{
  "servers": {
    "async_cargo_mcp": {
      "type": "stdio",
      "cwd": "${workspaceFolder}",
      "command": "cargo",
      "args": ["run", "--release", "--bin", "async_cargo_mcp"]
    }
  },
  "inputs": []
}
```
- Edit the JSON to taste. Also see the optional `Rust_Beast_Mode.chatmode.md`, which includes instructions to help your AI with tool use.
- Restart VSCode to activate the server.
These are good reasons to use `--synchronous` to have cargo commands run blocking the AI and terminal:

- you prefer less "waiting" chatter in your AI dialogue
- you prefer to see the `cargo` command execute in your terminal
- you accept that the terminal is blocked while `cargo` commands execute
- your AI does not actually think or act on the next steps while waiting for cargo operations to complete
- you prefer to keep it simple and take your time to stop and drink coffee
To enable it, add `--synchronous` to the server args:

```json
"args": ["run", "--release", "--bin", "async_cargo_mcp", "--", "--synchronous"]
```
Other command-line arguments are less common; see `--help`.
The server automatically manages pre-warmed shell pools for optimal performance. You can customize the behavior using command-line arguments:
```bash
# Configure shell pool size (default: 2 shells per directory)
cargo run --release -- --shell-pool-size 4

# Set maximum total shells across all pools (default: 20)
cargo run --release -- --max-shells 50

# Disable shell pools entirely (fallback to direct command spawning)
cargo run --release -- --disable-shell-pools

# Disable specific tools (comma-separated list or repeat flag)
cargo run --release -- --disable add,remove,update,upgrade

# Force synchronous execution mode (disables async callbacks for all operations)
cargo run --release -- --synchronous

# Combine options as needed
cargo run --release -- --shell-pool-size 3 --max-shells 30 --synchronous
```
- 10x Performance: Command startup reduced from 50-200ms to 5-20ms
- Automatic Management: Background health monitoring and cleanup
- Transparent Operation: Same API and behavior as before, just faster
- Resource Efficient: Idle shells are automatically cleaned up after 30 minutes
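Conceptually, the pool keeps a small number of warm shells per working directory and enforces a global cap, reusing an idle shell when one is available and spawning a new one (the slow path) otherwise. The sketch below is a simplified, synchronous model of that idea, not the server's actual implementation; the `WarmPool` type and its methods are hypothetical:

```rust
use std::collections::{HashMap, VecDeque};

/// Toy model of a pre-warmed shell pool keyed by working directory.
/// Illustrative only; the real server manages live shell processes.
struct WarmPool {
    per_dir: usize,                       // warm shells per directory (cf. --shell-pool-size)
    max_total: usize,                     // global cap across all pools (cf. --max-shells)
    total: usize,                         // shells currently alive
    idle: HashMap<String, VecDeque<u32>>, // dir -> idle shell ids
    next_id: u32,
}

impl WarmPool {
    fn new(per_dir: usize, max_total: usize) -> Self {
        Self { per_dir, max_total, total: 0, idle: HashMap::new(), next_id: 0 }
    }

    /// Take a warm shell if one is idle; otherwise "spawn" a new one (slow path).
    fn checkout(&mut self, dir: &str) -> u32 {
        if let Some(id) = self.idle.get_mut(dir).and_then(|q| q.pop_front()) {
            return id; // fast path: reuse, ~5-20ms in the real server
        }
        self.total += 1; // slow path: direct spawn, ~50-200ms
        self.next_id += 1;
        self.next_id
    }

    /// Return a shell; keep it warm only while under the per-dir and global caps.
    fn checkin(&mut self, dir: &str, id: u32) {
        let q = self.idle.entry(dir.to_string()).or_default();
        if q.len() < self.per_dir && self.total <= self.max_total {
            q.push_back(id);
        } else {
            self.total -= 1; // drop (terminate) the surplus shell
        }
    }
}

fn main() {
    let mut pool = WarmPool::new(2, 20);
    let a = pool.checkout("/proj"); // no idle shell yet: spawned
    pool.checkin("/proj", a);       // kept warm
    let b = pool.checkout("/proj"); // reused: fast path
    assert_eq!(a, b);
}
```

The real pool adds what a synchronous sketch cannot show: background health checks and the 30-minute idle cleanup mentioned above.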
For production use, build with optimizations enabled:
```bash
cargo build --release
./target/release/async_cargo_mcp --shell-pool-size 3 --max-shells 25
```
Commands support both synchronous and asynchronous execution. For long-running operations, enable async notifications:
```json
{
  "working_directory": "/path/to/project",
  "enable_async_notification": true
}
```
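The asynchronous flow can be modeled as a background worker that pushes its result over a channel while the caller stays free to do other work and poll for status. This is a simplified illustration of the pattern, not the server's code; `spawn_operation` is a hypothetical stand-in:

```rust
use std::sync::mpsc::{self, Receiver};
use std::thread;
use std::time::Duration;

/// Run a stand-in "cargo" operation in the background; its result is
/// pushed on the returned channel when it completes (illustrative only).
fn spawn_operation(name: &str) -> Receiver<String> {
    let (tx, rx) = mpsc::channel();
    let name = name.to_string();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50)); // stand-in for the real work
        let _ = tx.send(format!("{name}: success"));
    });
    rx
}

fn main() {
    let rx = spawn_operation("build");

    // `status`-style non-blocking check: keep working if not done yet.
    if rx.try_recv().is_err() {
        println!("build still running; doing other work...");
    }

    // `wait`-style blocking receive, used only when the result is required.
    let result = rx.recv().expect("worker dropped without sending");
    assert!(result.contains("success"));
}
```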
The `bump_version` tool (requires `cargo install cargo-edit`) safely bumps package versions:
```json
{
  "working_directory": "/path/to/project",
  "bump_type": "patch"
}
```
Supported bump types:

- `"patch"` - 1.2.3 → 1.2.4
- `"minor"` - 1.2.3 → 1.3.0
- `"major"` - 1.2.3 → 2.0.0
Use `"dry_run": true` to preview changes without modifying Cargo.toml.
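The bump rules above amount to simple semver arithmetic. As an illustration only (the real tool delegates to `cargo-edit`), a minimal sketch of the mapping; `bump` is a hypothetical helper:

```rust
/// Map a "major.minor.patch" version to its bumped form, per the rules above.
/// Returns None for malformed versions or unknown bump types.
fn bump(version: &str, bump_type: &str) -> Option<String> {
    let mut parts = version.split('.');
    let major: u64 = parts.next()?.parse().ok()?;
    let minor: u64 = parts.next()?.parse().ok()?;
    let patch: u64 = parts.next()?.parse().ok()?;
    let (ma, mi, pa) = match bump_type {
        "patch" => (major, minor, patch + 1),
        "minor" => (major, minor + 1, 0), // resets patch
        "major" => (major + 1, 0, 0),     // resets minor and patch
        _ => return None,
    };
    Some(format!("{ma}.{mi}.{pa}"))
}

fn main() {
    assert_eq!(bump("1.2.3", "patch").as_deref(), Some("1.2.4"));
    assert_eq!(bump("1.2.3", "minor").as_deref(), Some("1.3.0"));
    assert_eq!(bump("1.2.3", "major").as_deref(), Some("2.0.0"));
}
```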
When async is enabled, prefer `status` to check progress. Use `wait` only if blocked and you need results to proceed:
- `wait` with `operation_ids` waits for specific operations by ID.
Notes about `wait` semantics:
- Available only in async mode (default). It’s not offered in synchronous mode.
- Only `operation_ids` are accepted; unknown fields are rejected. Configure timeouts via the server CLI (e.g., `--timeout 30`).
- On timeout, you’ll get a clear message including how long it waited. If a `target/.cargo-lock` issue is detected, the server suggests remediation using the `cargo_lock_remediation` tool.
### Execution Modes
- **Async Mode (default)**: Operations can run in the background with notifications when `enable_async_notification: true`. `wait` is available but discouraged; prefer `status`.
- **Synchronous Mode**: Use `--synchronous` to run all operations synchronously. `wait` is not offered in this mode.
### Selectively Disabling Tools
Operators can hide or block specific tools from being used with the `--disable <tool>` flag. You can pass a comma-separated list (preferred for multiple) or repeat the flag. This is useful for:
- Hardening production environments (e.g. disable `upgrade`, `audit`, or mutation-causing commands)
- Restricting heavy operations (`bench`, `nextest`) in resource-constrained contexts
- Enforcing a narrow AI action surface during experimentation
Examples:
```bash
cargo run --release -- --disable build,test,clippy
# Equivalent using repeated flags
cargo run --release -- --disable build --disable test --disable clippy
```
If a disabled tool is invoked by a client that cached an older schema, the server returns an error with the marker `tool_disabled`.
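The effect of `--disable` can be pictured as a set difference over the advertised tool list: disabled names are simply filtered out before the list is exposed to clients. A minimal sketch (illustrative only; `visible_tools` is a hypothetical name):

```rust
use std::collections::HashSet;

/// Filter the advertised tool list against a set of disabled tool names.
fn visible_tools<'a>(all: &[&'a str], disabled: &HashSet<&str>) -> Vec<&'a str> {
    all.iter().copied().filter(|t| !disabled.contains(t)).collect()
}

fn main() {
    let all = ["build", "test", "clippy", "fmt"];
    let disabled: HashSet<&str> = ["test", "clippy"].into_iter().collect();
    // Only tools absent from the disabled set remain visible.
    assert_eq!(visible_tools(&all, &disabled), vec!["build", "fmt"]);
}
```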
Licensed under either Apache License 2.0 or MIT License.
Built with Anthropic's official Rust MCP SDK.