vCon Server is a powerful conversation processing and storage system that enables advanced analysis and management of conversation data. It provides a flexible pipeline for processing, storing, and analyzing conversations through various modules and integrations. The system includes secure API endpoints for both internal use and external partner integration, allowing third-party systems to securely submit conversation data with scoped access controls.
- Docker and Docker Compose
- Git
- Python 3.12 or higher (for local development)
- Poetry (for local development)
For a quick start using the automated installation script:
# Download the installation script
curl -O https://raw.githubusercontent.com/vcon-dev/vcon-server/main/scripts/install_conserver.sh
chmod +x install_conserver.sh
# Run the installation script
sudo ./install_conserver.sh --domain your-domain.com --email your-email@example.com
- Clone the repository:
git clone https://github.com/vcon-dev/vcon-server.git
cd vcon-server
- Create and configure the environment file:
cp .env.example .env
# Edit .env with your configuration
- Create the Docker network:
docker network create conserver
- Build and start the services:
docker compose build
docker compose up -d
The repository includes an automated installation script that handles the complete setup process. The script:
- Installs required dependencies
- Sets up Docker and Docker Compose
- Configures the environment
- Deploys the services
- Sets up monitoring
To use the automated installation:
./scripts/install_conserver.sh --domain your-domain.com --email your-email@example.com [--token YOUR_API_TOKEN]
Options:
- --domain: Your domain name (required)
- --email: Email for DNS registration (required)
- --token: API token (optional; a random token is generated if not provided)
Create a .env file in the root directory with the following variables:
REDIS_URL=redis://redis
CONSERVER_API_TOKEN=your_api_token
CONSERVER_CONFIG_FILE=./config.yml
GROQ_API_KEY=your_groq_api_key
DNS_HOST=your-domain.com
DNS_REGISTRATION_EMAIL=your-email@example.com
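Before starting the services, it can be useful to confirm that the required variables are actually set. The helper below is an illustrative sketch (the function name and the list of required variables are assumptions, not part of vcon-server):

```python
import os

# Variables the server cannot start without (illustrative subset)
REQUIRED_VARS = ["REDIS_URL", "CONSERVER_API_TOKEN", "CONSERVER_CONFIG_FILE"]

def check_required_env(env=None):
    """Return the names of required variables that are missing or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: only REDIS_URL is set, so the other two are reported missing
missing = check_required_env({"REDIS_URL": "redis://redis"})
if missing:
    print(f"Missing required environment variables: {', '.join(missing)}")
```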
The config.yml file defines the processing pipeline, storage options, chain configurations, and external API access. Here's an example configuration:
# External API access configuration
# Configure API keys for external partners to submit vCons to specific ingress lists
ingress_auth:
  # Single API key for an ingress list
  customer_data: "customer-api-key-12345"
  # Multiple API keys for the same ingress list (different clients)
  support_calls:
    - "support-api-key-67890"
    - "support-client-2-key"
    - "support-vendor-key-xyz"
  # Multiple API keys for sales leads
  sales_leads:
    - "sales-api-key-abcdef"
    - "sales-partner-key-123"
links:
  webhook_store_call_log:
    module: links.webhook
    options:
      webhook-urls:
        - https://example.com/conserver
  deepgram_link:
    module: links.deepgram_link
    options:
      DEEPGRAM_KEY: your_deepgram_key
      minimum_duration: 30
      api:
        model: "nova-2"
        smart_format: true
        detect_language: true
  summarize:
    module: links.analyze
    options:
      OPENAI_API_KEY: your_openai_key
      prompt: "Summarize this transcript..."
      analysis_type: summary
      model: 'gpt-4'
storages:
  postgres:
    module: storage.postgres
    options:
      user: postgres
      password: your_password
      host: your_host
      port: "5432"
      database: postgres
  s3:
    module: storage.s3
    options:
      aws_access_key_id: your_key
      aws_secret_access_key: your_secret
      aws_bucket: your_bucket
chains:
  main_chain:
    links:
      - deepgram_link
      - summarize
      - webhook_store_call_log
    storages:
      - postgres
      - s3
    enabled: 1
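Conceptually, a chain applies its links to a vCon in order and then hands the final result to each configured storage. The sketch below illustrates that flow only; it is a simplified model, not the actual vcon-server implementation, and the toy link and storage are invented for the example:

```python
def run_chain(vcon, links, storages):
    """Simplified chain execution: each link transforms the vCon in order,
    then every configured storage receives the final result."""
    for link in links:
        vcon = link(vcon)
        if vcon is None:  # a link may drop a vCon from the pipeline
            return None
    for store in storages:
        store(vcon)
    return vcon

# Toy link and storage for illustration
def add_summary(vcon):
    vcon["summary"] = "placeholder summary"
    return vcon

saved = []
result = run_chain({"uuid": "123"}, links=[add_summary], storages=[saved.append])
```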
The vCon server supports dynamic installation of modules from PyPI or GitHub repositories. This applies to both link modules and general imports, so external packages can be used without pre-installing them, which makes deployment more flexible.
For general module imports that need to be available globally, use the imports section:
imports:
  # PyPI package with a different module name
  custom_utility:
    module: custom_utils
    pip_name: custom-utils-package
  # GitHub repository
  github_helper:
    module: github_helper
    pip_name: git+https://github.com/username/helper-repo.git
  # Module name matches the pip package name
  requests_import:
    module: requests
    # pip_name not needed since it matches the module name
  # Legacy format (string value) - still supported
  legacy_module: some.legacy.module
For modules where the pip package name matches the module name:
links:
  requests_link:
    module: requests
    # Will automatically install "requests" from PyPI if not found
    options:
      timeout: 30
For modules where the pip package name differs from the module name:
links:
  custom_link:
    module: my_module
    pip_name: custom-package-name
    options:
      api_key: secret
Install directly from GitHub repositories:
links:
  github_link:
    module: github_module
    pip_name: git+https://github.com/username/repo.git@main
    options:
      debug: true
For private repositories, use a personal access token:
links:
  private_link:
    module: private_module
    pip_name: git+https://token:your_github_token@github.com/username/private-repo.git
    options:
      config_param: value
The system will automatically detect missing modules and install them during processing. Modules are cached after installation for performance.
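A simplified sketch of how such lazy installation can work (this is an illustration of the behavior described above, not the actual vcon-server code): attempt the import, fall back to pip on failure, and cache the loaded module:

```python
import importlib
import subprocess
import sys

_module_cache = {}

def load_module(module_name, pip_name=None):
    """Import a module, installing it via pip on first failure.
    Illustrative sketch of dynamic installation with caching."""
    if module_name in _module_cache:
        return _module_cache[module_name]
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        # Install the package (pip_name may differ from the module name)
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", pip_name or module_name]
        )
        module = importlib.import_module(module_name)
    _module_cache[module_name] = module
    return module
```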
The vCon server supports sophisticated version management for dynamically installed modules (both imports and links). This allows you to control exactly which versions of external packages are used and when they should be updated.
Install a specific version of a package:
# For imports
imports:
  my_import:
    module: my_module
    pip_name: my-package==1.2.3
# For links
links:
  my_link:
    module: my_module
    pip_name: my-package==1.2.3
    options:
      config: value
Use version constraints to allow compatible updates:
links:
  flexible_link:
    module: flexible_module
    pip_name: flexible-package>=1.0.0,<2.0.0
    options:
      setting: value
Install from specific Git tags, branches, or commits:
links:
  # Install from a specific tag
  git_tag_link:
    module: git_module
    pip_name: git+https://github.com/username/repo.git@v1.2.3
  # Install from a specific branch
  git_branch_link:
    module: git_module
    pip_name: git+https://github.com/username/repo.git@develop
  # Install from a specific commit
  git_commit_link:
    module: git_module
    pip_name: git+https://github.com/username/repo.git@abc123def456
Include pre-release versions:
links:
  prerelease_link:
    module: beta_module
    pip_name: beta-package --pre
    options:
      experimental: true
To install a new version of an already-installed link, rebuild the Docker container:
links:
  upgraded_link:
    module: my_module
    pip_name: my-package==2.0.0  # Updated from 1.0.0
    options:
      new_feature: enabled
Recommended approach for version updates:
- Update the version in your configuration file
- Rebuild the Docker container to ensure clean installation
- This approach ensures consistent, reproducible deployments
For all deployments, the recommended approach is to rebuild containers:
- Update your configuration file with the new version:
# For imports
imports:
  my_import:
    module: my_module
    pip_name: my-package==2.0.0  # Updated from 1.0.0
# For links
links:
  my_link:
    module: my_module
    pip_name: my-package==2.0.0  # Updated from 1.0.0
- Rebuild and deploy the container:
docker compose build
docker compose up -d
This ensures clean, reproducible deployments without version conflicts.
links:
  dev_link:
    module: dev_module
    pip_name: git+https://github.com/username/repo.git@develop
    # Rebuild the container frequently to get the latest changes
links:
  staging_link:
    module: staging_module
    pip_name: staging-package>=1.0.0,<2.0.0
    # Use version ranges for compatibility testing
links:
  prod_link:
    module: prod_module
    pip_name: prod-package==1.2.3
    # Exact version pinning for stability
If you're experiencing import issues after a version update:
- Ensure you've rebuilt the container:
docker compose build
- Clear any cached images:
docker system prune
- Restart with fresh containers:
docker compose up -d
Verify the installed version inside the container:
pip list | grep package-name
pip show package-name
If you encounter dependency conflicts:
- Use virtual environments
- Check compatibility with pip check
- Consider using dependency resolution tools like pip-tools
Monitor link versions in your logs:
# Links log their versions during import
logger.info("Imported %s version %s", module_name, module.__version__)
Consider implementing version reporting endpoints for operational visibility.
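One way to gather such version information is with Python's importlib.metadata. The helper below is an illustrative sketch for a reporting endpoint, not an existing vcon-server feature:

```python
from importlib import metadata

def report_versions(package_names):
    """Map each package name to its installed version, or a marker
    when the package is not installed. Illustrative helper only."""
    versions = {}
    for name in package_names:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = "not installed"
    return versions
```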
The vCon Server provides RESTful API endpoints for managing conversation data. All endpoints require authentication using API keys.
API authentication is handled through the x-conserver-api-token header:
curl -H "x-conserver-api-token: YOUR_API_TOKEN" \
-X POST \
"https://your-domain.com/api/endpoint"
For internal use with full system access:
POST /vcon?ingress_list=my_ingress
Content-Type: application/json
x-conserver-api-token: YOUR_MAIN_API_TOKEN
{
  "uuid": "123e4567-e89b-12d3-a456-426614174000",
  "vcon": "0.0.1",
  "created_at": "2024-01-15T10:30:00Z",
  "parties": [...]
}
For external partners and third-party systems with limited access:
POST /vcon/external-ingress?ingress_list=partner_data
Content-Type: application/json
x-conserver-api-token: PARTNER_SPECIFIC_API_TOKEN
{
  "uuid": "123e4567-e89b-12d3-a456-426614174000",
  "vcon": "0.0.1",
  "created_at": "2024-01-15T10:30:00Z",
  "parties": [...]
}
The /vcon/external-ingress endpoint is specifically designed for external partners and third-party systems to securely submit vCons with limited API access.
- Scoped Access: Each API key grants access only to predefined ingress list(s)
- Isolation: No access to other API endpoints or system resources
- Multi-Key Support: Multiple API keys can be configured for the same ingress list
- Configuration-Based: API keys are managed through the ingress_auth section in config.yml
Configure external API access in your config.yml:
ingress_auth:
  # Single API key for customer data ingress
  customer_data: "customer-api-key-12345"
  # Multiple API keys for support calls (different clients)
  support_calls:
    - "support-api-key-67890"
    - "support-client-2-key"
    - "support-vendor-key-xyz"
  # Multiple partners for sales leads
  sales_leads:
    - "sales-api-key-abcdef"
    - "sales-partner-key-123"
Single Partner Access:
curl -X POST "https://your-domain.com/vcon/external-ingress?ingress_list=customer_data" \
  -H "Content-Type: application/json" \
  -H "x-conserver-api-token: customer-api-key-12345" \
  -d '{
    "uuid": "123e4567-e89b-12d3-a456-426614174000",
    "vcon": "0.0.1",
    "created_at": "2024-01-15T10:30:00Z",
    "parties": []
  }'
Multiple Partner Access:
# Partner 1 using their key
curl -X POST "https://your-domain.com/vcon/external-ingress?ingress_list=support_calls" \
  -H "x-conserver-api-token: support-api-key-67890" \
  -d @vcon_data.json
# Partner 2 using their key
curl -X POST "https://your-domain.com/vcon/external-ingress?ingress_list=support_calls" \
  -H "x-conserver-api-token: support-client-2-key" \
  -d @vcon_data.json
Success (HTTP 204 No Content):
HTTP/1.1 204 No Content
Authentication Error (HTTP 403 Forbidden):
{
  "detail": "Invalid API Key for ingress list 'customer_data'"
}
Validation Error (HTTP 422 Unprocessable Entity):
{
  "detail": [
    {
      "loc": ["body", "uuid"],
      "msg": "field required",
      "type": "value_error.missing"
    }
  ]
}
- Generate Strong API Keys: Use cryptographically secure random strings
- Rotate Keys Regularly: Update API keys periodically for security
- Monitor Usage: Track API usage per partner for billing and monitoring
- Rate Limiting: Consider implementing rate limiting for external partners
- Logging: Monitor external submissions for security and compliance
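For the first point, Python's secrets module is a straightforward way to generate cryptographically secure keys. The prefix scheme below is just an illustrative convention, not a vcon-server requirement:

```python
import secrets

def generate_api_key(prefix="partner", nbytes=32):
    """Generate a URL-safe, cryptographically secure API key.
    The prefix and length are illustrative choices."""
    return f"{prefix}-{secrets.token_urlsafe(nbytes)}"

# Example: a key suitable for the support_calls ingress list
key = generate_api_key("support")
print(key)
```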
Python Integration:
import requests

def submit_vcon_to_partner_ingress(vcon_data, ingress_list, api_key, base_url):
    """Submit a vCon to the external ingress endpoint."""
    url = f"{base_url}/vcon/external-ingress"
    headers = {
        "Content-Type": "application/json",
        "x-conserver-api-token": api_key,
    }
    params = {"ingress_list": ingress_list}
    response = requests.post(url, json=vcon_data, headers=headers, params=params)
    if response.status_code == 204:
        return {"success": True}
    return {"success": False, "error": response.json()}

# Usage
result = submit_vcon_to_partner_ingress(
    vcon_data=my_vcon,
    ingress_list="customer_data",
    api_key="customer-api-key-12345",
    base_url="https://your-domain.com",
)
Node.js Integration:
async function submitVconToIngress(vconData, ingressList, apiKey, baseUrl) {
  const response = await fetch(`${baseUrl}/vcon/external-ingress?ingress_list=${ingressList}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-conserver-api-token': apiKey
    },
    body: JSON.stringify(vconData)
  });
  if (response.status === 204) {
    return { success: true };
  } else {
    const error = await response.json();
    return { success: false, error };
  }
}
The system is containerized using Docker and can be deployed using Docker Compose:
# Build the containers
docker compose build
# Start the services
docker compose up -d
# Scale the conserver service
docker compose up --scale conserver=4 -d
The system is designed to scale horizontally. The conserver service can be scaled to handle increased load:
docker compose up --scale conserver=4 -d
storages:
  postgres:
    module: storage.postgres
    options:
      user: postgres
      password: your_password
      host: your_host
      port: "5432"
      database: postgres
storages:
  s3:
    module: storage.s3
    options:
      aws_access_key_id: your_key
      aws_secret_access_key: your_secret
      aws_bucket: your_bucket
storages:
  elasticsearch:
    module: storage.elasticsearch
    options:
      cloud_id: "your_cloud_id"
      api_key: "your_api_key"
      index: vcon_index
For semantic search capabilities:
storages:
  milvus:
    module: storage.milvus
    options:
      host: "localhost"
      port: "19530"
      collection_name: "vcons"
      embedding_model: "text-embedding-3-small"
      embedding_dim: 1536
      api_key: "your-openai-api-key"
      organization: "your-org-id"
      create_collection_if_missing: true
The system includes built-in monitoring through Datadog. Configure monitoring by setting the following environment variables:
DD_API_KEY=your_datadog_api_key
DD_SITE=datadoghq.com
View logs using:
docker compose logs -f
Common issues and solutions:
- Redis Connection Issues:
  - Check if the Redis container is running: docker ps | grep redis
  - Verify the Redis URL in the .env file
  - Check Redis logs: docker compose logs redis
- Service Scaling Issues:
- Ensure sufficient system resources
- Check network connectivity between containers
- Verify Redis connection for all instances
- Storage Module Issues:
- Verify credentials and connection strings
- Check storage service availability
- Review storage module logs
For additional help, check the logs:
docker compose logs -f [service_name]
This project is licensed under the terms specified in the LICENSE file.
- Install as a non-root user: Create a dedicated user (e.g., vcon) for running the application and Docker containers.
- Clone repositories to /opt: Place vcon-admin and vcon-server in /opt for system-wide, non-root access.
- Use persistent Docker volumes: Map Redis and other stateful service data to /opt/vcon-data for durability.
- Follow the updated install script: Use scripts/install_conserver.sh, which implements these best practices.
Recommended directory layout:
/opt/vcon-admin
/opt/vcon-server
/opt/vcon-data/redis
Example volume mapping for Redis data:
volumes:
  - /opt/vcon-data/redis:/data
The install script creates the vcon user and sets permissions for all necessary directories.