SceneIntruderMCP is an AI-driven interactive storytelling platform that combines traditional text analysis with large language models to deliver an immersive role-playing and story creation experience.
- Multi-dimensional Parsing: Automatically extract scenes, characters, items, and plot elements
- Bilingual Support: Intelligent recognition and processing of both Chinese and English content
- Deep Analysis: Professional-grade text type identification based on literary theory
- Emotional Intelligence: 8-dimensional emotional analysis (emotion, action, expression, tone, etc.)
- Character Consistency: Maintain long-term memory and personality traits
- Dynamic Interaction: Intelligently triggered automatic dialogues between characters
- Non-linear Narrative: Support complex story branching and timeline management
- Intelligent Choice Generation: AI dynamically creates 4 types of choices based on context (Action/Dialogue/Investigation/Strategy)
- Story Rewind: Complete timeline rollback and state management
- User Customization: Custom items and skills system
- Creativity Control: 3-level creativity control (Strict/Balanced/Expansive)
- Progress Tracking: Real-time story completion and statistical analysis
- OpenAI GPT: GPT-3.5/4/4o series
- Anthropic Claude: Claude-3/3.5 series
- DeepSeek: Chinese-optimized models
- Google Gemini: Gemini-2.0 series
- Grok: xAI's Grok models
- Mistral: Mistral series models
- Qwen: Alibaba Cloud Qwen series
- GitHub Models: Via GitHub Models platform
- OpenRouter: Open source model aggregation platform
- GLM: Zhipu AI's GLM series
SceneIntruderMCP/
├── cmd/
│   └── server/              # Application entry point
│       └── main.go
├── internal/
│   ├── api/                 # HTTP API routes and handlers
│   ├── app/                 # Application core logic
│   ├── config/              # Configuration management
│   ├── di/                  # Dependency injection
│   ├── llm/                 # LLM provider abstraction layer
│   │   └── providers/       # Various LLM provider implementations
│   ├── models/              # Data model definitions
│   ├── services/            # Business logic services
│   └── storage/             # Storage abstraction layer
├── static/
│   ├── css/                 # Style files
│   ├── js/                  # Frontend JavaScript
│   └── images/              # Static images
├── web/
│   └── templates/           # HTML templates
├── data/                    # Data storage directory
│   ├── scenes/              # Scene data
│   ├── stories/             # Story data
│   ├── users/               # User data
│   └── exports/             # Export files
└── logs/                    # Application logs
- Backend: Go 1.21+, Gin Web Framework
- AI Integration: Multi-LLM provider support with a unified abstraction interface
- Storage: File system-based JSON storage with database extension support
- Frontend: Vanilla JavaScript + HTML/CSS, responsive design
- Deployment: Containerization support, cloud-native architecture
- Go 1.21 or higher
- At least one LLM API key (OpenAI/Claude/DeepSeek, etc.)
- 2GB+ available memory
- Operating System: Windows/Linux/macOS
- Clone the Project
git clone https://github.com/Corphon/SceneIntruderMCP.git
cd SceneIntruderMCP
- Install Dependencies
go mod download
- Configure Environment
# Copy configuration template
cp data/config.json.example data/config.json
# Edit configuration file and add API keys
nano data/config.json
- Start Service
# Development mode
go run cmd/server/main.go
# Production mode
go build -o sceneintruder cmd/server/main.go
./sceneintruder
- Access Application
Open browser: http://localhost:8080
{
"llm": {
"default_provider": "openai",
"providers": {
"openai": {
"api_key": "your-openai-api-key",
"base_url": "https://api.openai.com/v1",
"default_model": "gpt-4"
},
"anthropic": {
"api_key": "your-claude-api-key",
"default_model": "claude-3-5-sonnet-20241022"
},
"deepseek": {
"api_key": "your-deepseek-api-key",
"default_model": "deepseek-chat"
}
}
},
"server": {
"port": 8080,
"debug": false
},
"storage": {
"data_path": "./data"
}
}
- Upload Text: Upload novels, scripts, stories, and other text formats
- AI Analysis: System automatically extracts characters, scenes, items, and other elements
- Scene Generation: Create interactive scene environments
- Select Character: Choose interaction targets from analyzed characters
- Natural Dialogue: Engage in natural language conversations with AI characters
- Emotional Feedback: Observe character emotions, actions, and expression changes
- Dynamic Choices: AI generates 4 types of choices based on current situation
- Story Development: Advance non-linear story plots based on choices
- Branch Management: Support story rewind and multi-branch exploration
- Interaction Records: Export complete dialogue history
- Story Documents: Generate structured story documents
- Statistical Analysis: Character interaction and story progress statistics
GET /api/scenes # Get scene list
POST /api/scenes # Create scene
GET /api/scenes/{id} # Get scene details
GET /api/scenes/{id}/characters # Get scene characters
GET /api/scenes/{id}/conversations # Get scene conversations
GET /api/scenes/{id}/aggregate # Get scene aggregate data
GET /api/scenes/{id}/story # Get story data
POST /api/scenes/{id}/story/choice # Make story choice
POST /api/scenes/{id}/story/advance # Advance story
POST /api/scenes/{id}/story/rewind # Rewind story to a specific node
GET /api/scenes/{id}/story/branches # Get story branches
GET /api/scenes/{id}/export/scene # Export scene data
GET /api/scenes/{id}/export/interactions # Export interactions
GET /api/scenes/{id}/export/story # Export story document
POST /api/analyze # Analyze text content
GET /api/progress/{taskID} # Get analysis progress
POST /api/cancel/{taskID} # Cancel analysis task
POST /api/upload # Upload file
POST /api/chat # Basic chat with characters
POST /api/chat/emotion # Chat with emotion analysis
POST /api/interactions/trigger # Trigger character interactions
POST /api/interactions/simulate # Simulate character dialogue
POST /api/interactions/aggregate # Aggregate interaction processing
GET /api/interactions/{scene_id} # Get interaction history
GET /api/interactions/{scene_id}/{character1_id}/{character2_id} # Get specific character interactions
GET /api/settings # Get system settings
POST /api/settings # Update system settings
POST /api/settings/test-connection # Test connection
GET /api/llm/status # Get LLM service status
GET /api/llm/models # Get available models
PUT /api/llm/config # Update LLM configuration
# User Profile
GET /api/users/{user_id} # Get user profile
PUT /api/users/{user_id} # Update user profile
GET /api/users/{user_id}/preferences # Get user preferences
PUT /api/users/{user_id}/preferences # Update user preferences
# User Items Management
GET /api/users/{user_id}/items # Get user items
POST /api/users/{user_id}/items # Add user item
GET /api/users/{user_id}/items/{item_id} # Get specific item
PUT /api/users/{user_id}/items/{item_id} # Update user item
DELETE /api/users/{user_id}/items/{item_id} # Delete user item
# User Skills Management
GET /api/users/{user_id}/skills # Get user skills
POST /api/users/{user_id}/skills # Add user skill
GET /api/users/{user_id}/skills/{skill_id} # Get specific skill
PUT /api/users/{user_id}/skills/{skill_id} # Update user skill
DELETE /api/users/{user_id}/skills/{skill_id} # Delete user skill
WS /ws/scene/{id} # Scene WebSocket connection
WS /ws/user/status # User status WebSocket connection
GET /api/ws/status # Get WebSocket connection status
// 1. Get story data
const storyData = await fetch('/api/scenes/scene123/story').then(res => res.json());
// 2. Make a choice
const choiceResult = await fetch('/api/scenes/scene123/story/choice', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
node_id: 'node_1',
choice_id: 'choice_a'
})
});
// 3. Export story
const storyExport = await fetch('/api/scenes/scene123/export/story?format=markdown').then(res => res.json());
// 1. Basic chat
const chatResponse = await fetch('/api/chat', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
scene_id: 'scene123',
character_id: 'char456',
message: 'Hello, how are you?'
})
});
// 2. Trigger character interaction
const interaction = await fetch('/api/interactions/trigger', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
scene_id: 'scene123',
character_ids: ['char1', 'char2'],
topic: 'Discussing the mysterious artifact'
})
});
// 1. Add custom item
const newItem = await fetch('/api/users/user123/items', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
name: 'Magic Sword',
description: 'A legendary sword with mystical powers',
type: 'weapon',
properties: { attack: 50, magic: 30 }
})
});
// 2. Add skill
const newSkill = await fetch('/api/users/user123/skills', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
name: 'Fireball',
description: 'Cast a powerful fireball spell',
type: 'magic',
level: 3
})
});
// Connect to scene WebSocket
const sceneWs = new WebSocket(`ws://localhost:8080/ws/scene/scene123?user_id=user456`);
sceneWs.onmessage = (event) => {
const data = JSON.parse(event.data);
console.log('Scene update:', data);
};
// Send character interaction
sceneWs.send(JSON.stringify({
type: 'character_interaction',
character_id: 'char123',
message: 'Hello everyone!'
}));
// Connect to user status WebSocket
const statusWs = new WebSocket(`ws://localhost:8080/ws/user/status?user_id=user456`);
statusWs.onmessage = (event) => {
const data = JSON.parse(event.data);
if (data.type === 'heartbeat') {
console.log('Connection alive');
}
};
{
"success": true,
"data": {
// Response data
},
"timestamp": "2024-01-01T12:00:00Z"
}
{
"success": false,
"error": "Error message description",
"code": "ERROR_CODE",
"timestamp": "2024-01-01T12:00:00Z"
}
{
"file_path": "/exports/story_20240101_120000.md",
"content": "# Story Export\n\n...",
"format": "markdown",
"size": 1024,
"timestamp": "2024-01-01T12:00:00Z"
}
Currently, the API uses session-based authentication for user management. For production deployment, consider implementing:
- JWT Authentication: Token-based authentication for API access
- Rate Limiting: API call frequency limits (a sketch follows this list)
- Input Validation: Strict parameter validation and sanitization
- HTTPS Only: Force HTTPS for all production traffic
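For the rate-limiting item, here is a minimal sketch of per-IP throttling as Gin middleware using golang.org/x/time/rate. It is not part of the current codebase; the package name, limits, and error body are placeholders chosen for illustration.

```go
package middleware

import (
	"net/http"
	"sync"

	"github.com/gin-gonic/gin"
	"golang.org/x/time/rate"
)

// RateLimit allows roughly rps requests per second per client IP,
// with the given burst size, and rejects the rest with HTTP 429.
func RateLimit(rps float64, burst int) gin.HandlerFunc {
	var (
		mu       sync.Mutex
		limiters = make(map[string]*rate.Limiter)
	)
	return func(c *gin.Context) {
		ip := c.ClientIP()

		mu.Lock()
		lim, ok := limiters[ip]
		if !ok {
			lim = rate.NewLimiter(rate.Limit(rps), burst)
			limiters[ip] = lim
		}
		mu.Unlock()

		if !lim.Allow() {
			// Error body loosely follows the API's error response format.
			c.AbortWithStatusJSON(http.StatusTooManyRequests, gin.H{
				"success": false,
				"error":   "rate limit exceeded",
				"code":    "RATE_LIMITED",
			})
			return
		}
		c.Next()
	}
}
```

Attaching it with `router.Use(RateLimit(5, 10))` before the /api routes would cap each client at about 5 requests per second; a production version would also evict idle IP entries from the map.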
For detailed API documentation, see: API Documentation
# Run all tests
go test ./...
# Run tests with coverage report
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
# Run specific package tests
go test ./internal/services/...
- Implement Interface: Create a new provider in internal/llm/providers/ (see the sketch after this list)
- Register Provider: Register it in the package's init() function
- Add Configuration: Update the configuration file template
- Write Tests: Add corresponding unit tests
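As a rough illustration of the first two steps, here is a sketch of what a new provider might look like. The Provider interface and the registration hook shown here are assumptions made for illustration; check internal/llm and the existing providers for the actual interface and registry signatures.

```go
// Hypothetical sketch -- the real interface in internal/llm may differ.
package providers

import (
	"context"
	"errors"
)

// Provider approximates the shape of the abstraction layer's interface.
type Provider interface {
	Name() string
	ChatCompletion(ctx context.Context, prompt string) (string, error)
}

// MyProvider is a placeholder implementation for a new LLM backend.
type MyProvider struct {
	apiKey string
	model  string
}

func NewMyProvider(apiKey, model string) (*MyProvider, error) {
	if apiKey == "" {
		return nil, errors.New("myprovider: missing API key")
	}
	return &MyProvider{apiKey: apiKey, model: model}, nil
}

func (p *MyProvider) Name() string { return "myprovider" }

func (p *MyProvider) ChatCompletion(ctx context.Context, prompt string) (string, error) {
	// Call the vendor's HTTP API here and map its response into the
	// plain string the service layer expects.
	return "", errors.New("myprovider: not implemented")
}

func init() {
	// Registration would happen here, e.g. via a registry function
	// exposed by the llm package (name assumed):
	// llm.RegisterProvider("myprovider", ...)
}
```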
- models/: Data models defining core entities in the system
- services/: Business logic layer handling core functionality
- api/: HTTP handlers exposing RESTful APIs
- llm/: LLM abstraction layer supporting multiple AI providers
- Concurrent Processing: Support multiple simultaneous users
- Caching Mechanism: Intelligent caching of LLM responses (sketched below)
- Memory Optimization: Load on demand, prevent memory leaks
- File Compression: Automatic compression of historical data
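To illustrate the caching item, here is a minimal sketch of an in-memory TTL cache keyed by a hash of provider, model, and prompt. The actual caching in the services layer may work differently; the package and type names are placeholders.

```go
package cache

import (
	"crypto/sha256"
	"encoding/hex"
	"sync"
	"time"
)

type entry struct {
	response  string
	expiresAt time.Time
}

// ResponseCache stores LLM responses keyed by a hash of the request.
type ResponseCache struct {
	mu  sync.RWMutex
	ttl time.Duration
	m   map[string]entry
}

func New(ttl time.Duration) *ResponseCache {
	return &ResponseCache{ttl: ttl, m: make(map[string]entry)}
}

func key(provider, model, prompt string) string {
	sum := sha256.Sum256([]byte(provider + "|" + model + "|" + prompt))
	return hex.EncodeToString(sum[:])
}

// Get returns a cached response if it exists and has not expired.
func (c *ResponseCache) Get(provider, model, prompt string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	e, ok := c.m[key(provider, model, prompt)]
	if !ok || time.Now().After(e.expiresAt) {
		return "", false
	}
	return e.response, true
}

// Set stores a response with the configured TTL.
func (c *ResponseCache) Set(provider, model, prompt, response string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key(provider, model, prompt)] = entry{
		response:  response,
		expiresAt: time.Now().Add(c.ttl),
	}
}
```

A real implementation would also bound the map's size or persist entries alongside the JSON storage.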
- API Usage Statistics: Request count and token consumption
- Response Time: AI model response speed monitoring
- Error Rate: System and API error tracking
- Resource Usage: CPU and memory usage monitoring
- API Keys: Secure storage with environment variable support
- User Data: Local storage with complete privacy control
- Access Control: User session and permission management support
- Data Backup: Automatic backup of important data
- HTTPS Support: HTTPS recommended for production environments
- CORS Configuration: Secure cross-origin resource sharing configuration
- Input Validation: Strict user input validation and sanitization
We welcome all forms of contributions!
- Bug Reports: Use GitHub Issues to report problems
- Feature Suggestions: Propose ideas and suggestions for new features
- Code Contributions: Submit Pull Requests
- Documentation Improvements: Help improve documentation and examples
- Fork the project repository
- Create feature branch:
git checkout -b feature/amazing-feature
- Commit changes:
git commit -m 'Add amazing feature'
- Push branch:
git push origin feature/amazing-feature
- Create Pull Request
- Follow official Go coding style
- Add necessary comments and documentation
- Write unit tests covering new features (see the skeleton below)
- Ensure all tests pass
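As an example of the expected test style, here is a table-driven skeleton for the hypothetical NewMyProvider constructor from the provider sketch above; adapt it to the actual code you contribute.

```go
package providers

import "testing"

func TestNewMyProvider(t *testing.T) {
	cases := []struct {
		name    string
		apiKey  string
		wantErr bool
	}{
		{name: "missing key", apiKey: "", wantErr: true},
		{name: "valid key", apiKey: "sk-test", wantErr: false},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			_, err := NewMyProvider(tc.apiKey, "default-model")
			if (err != nil) != tc.wantErr {
				t.Fatalf("NewMyProvider() error = %v, wantErr %v", err, tc.wantErr)
			}
		})
	}
}
```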
This project is licensed under the MIT License - see the LICENSE file for details
- Go - High-performance programming language
- Gin - Lightweight web framework
- OpenAI - GPT series models
- Anthropic - Claude series models
Thanks to all developers and users who have contributed to this project!
- Project Homepage: GitHub Repository
- Issue Reports: GitHub Issues
- Feature Requests: GitHub Discussions
- Email Contact: project@sceneintruder.dev
🌟 If this project helps you, please consider giving it a Star! 🌟
Made with ❤️ by SceneIntruderMCP Team