This project implements a Facebook-like application using a microservices architecture. The system consists of four primary microservices, each responsible for a specific domain area, all connected through an Nginx load balancer acting as an API gateway.
                     ┌──────────────────────┐
   Clients ─────────▶│  API Gateway (Nginx) │
                     └──────────┬───────────┘
                                │
         ┌──────────────┬───────┴──────┬──────────────┐
         ▼              ▼              ▼              ▼
   ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐
   │  User App  │ │ Message App│ │ Search App │ │  Wall App  │
   │ (Auth/User │ │ (Messages) │ │  (Search)  │ │  (Posts &  │
   │  Profiles) │ │            │ │            │ │  Comments) │
   └─────┬──────┘ └─────┬──────┘ └─────┬──────┘ └─────┬──────┘
         ▼              ▼              ▼              ▼
   ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐
   │ PostgreSQL │ │  MongoDB   │ │ PostgreSQL │ │  MongoDB   │
   │ (User data)│ │ (Messages) │ │  (Search   │ │  (Posts &  │
   │            │ │            │ │  indexes)  │ │  Comments) │
   └────────────┘ └────────────┘ └────────────┘ └────────────┘
               ┌─────────────┐   ┌─────────────┐
               │  RabbitMQ   │   │    Redis    │
               │ (Messaging) │   │  (Caching)  │
               └─────────────┘   └─────────────┘
User App (Port 8081):
- Manages user authentication and profiles
- Handles user registration, login, and profile management
- Uses PostgreSQL for storing user data
- Implements JWT-based authentication and authorization
- Exposes RESTful APIs for user management
- Caches frequent user data in Redis
- Publishes user events to RabbitMQ
Message App (Port 8083):
- Handles user messaging functionality
- Manages conversations and message history
- Uses MongoDB to store message data
- Implements real-time messaging features
- Provides APIs for creating conversations and sending messages
- Caches recent conversations and messages in Redis
- Consumes user events from RabbitMQ for user presence
Search App (Port 8084):
- Provides search capabilities across the platform
- Creates and manages search indexes
- Uses PostgreSQL to store search indexes and configurations
- Implements advanced search algorithms and filtering
- Provides APIs for searching users, posts, and messages
- Caches popular search results in Redis
- Consumes events from RabbitMQ to update search indexes
Wall App (Port 8082):
- Manages posts, comments, and reactions
- Handles timeline features
- Uses MongoDB to store posts and comments
- Implements feed generation algorithms
- Provides APIs for creating and retrieving posts
- Caches trending posts and user feeds in Redis
- Publishes and consumes post-related events via RabbitMQ
Nginx (Port 80):
- Acts as an API gateway and load balancer
- Routes requests to the appropriate microservices based on path
- Provides a unified entry point for all client requests
- Handles cross-cutting concerns like CORS and rate limiting
- Performs health checks on microservices
- Offers performance optimizations like response caching and compression
- Provides load balancing strategies like round-robin and least connections
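The round-robin strategy mentioned above can be illustrated with a minimal selector in Java (a sketch of the idea only, not Nginx's implementation; the upstream names are hypothetical):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin upstream selector, mimicking Nginx's default strategy.
class RoundRobinBalancer {
    private final List<String> upstreams;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinBalancer(List<String> upstreams) {
        this.upstreams = upstreams;
    }

    // Each call returns the next upstream in rotation, wrapping around.
    String pick() {
        int i = Math.floorMod(next.getAndIncrement(), upstreams.size());
        return upstreams.get(i);
    }
}
```

A least-connections strategy would instead track in-flight requests per upstream and pick the minimum; the rotation counter here is the whole state round-robin needs.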
RabbitMQ:
- Message broker for inter-service communication
- Enables event-driven architecture
- Provides reliable message delivery between services
- Implements various exchange types (direct, topic, fanout)
- Supports message acknowledgment and delivery guarantees
- Enables asynchronous processing patterns
- Configured with high availability and fault tolerance
- Features management interface for monitoring and administration
Redis:
- Caching layer for improved performance
- Reduces database load for frequently accessed data
- Used by all services for caching
- Implements different caching strategies (LRU, TTL-based)
- Provides pub/sub capabilities for real-time features
- Supports data structures for specialized use cases
- Configured with persistence for data durability
- Uses separate database numbers for service isolation
Databases:
PostgreSQL: Relational database for structured data
- User service: Stores user profiles, credentials, relationships
- Search service: Stores search indexes, configurations, and metadata
- Configured with connection pooling for efficient resource utilization
- Implements optimized query patterns and indexing strategies
- Setup with replication for high availability (production)
MongoDB: NoSQL database for document-oriented data
- Message service: Stores conversations, messages, attachments
- Wall service: Stores posts, comments, reactions, and media
- Optimized for high write throughput and flexible schema
- Implements sharding for horizontal scaling (production)
- Uses replica sets for redundancy and failover
The Nginx configuration uses advanced routing techniques to direct traffic to the appropriate microservice:
# Example of API routing in Nginx
location /api/users/ {
proxy_pass http://user_service/api/users/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
The gateway also implements:
- Load balancing: Distributes requests across multiple instances of the same service
- Health monitoring: Regular checks to ensure services are operational
- Fault tolerance: Automatic failover to healthy instances
- Request/response transformation: Modifies headers and content as needed
- SSL termination: Handles HTTPS traffic (in production)
The authentication process uses JWT (JSON Web Tokens) and follows these steps:
- Client sends credentials to /api/auth/login
- User App validates credentials and generates a JWT
- JWT is returned to the client
- Client includes JWT in the Authorization header for subsequent requests
- API Gateway validates token structure and forwards to appropriate service
- Individual services perform fine-grained authorization
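The signing step at the heart of this flow can be sketched with nothing but the JDK: an HMAC-SHA256 signature over the token's `header.payload` part, base64url-encoded. This only illustrates the mechanism; the services themselves would use a JWT library, and the secret below is a placeholder.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of JWT-style signing: HMAC-SHA256 over header.payload,
// appended as a base64url signature segment.
class TokenSigner {
    private final SecretKeySpec key;

    TokenSigner(String secret) {
        this.key = new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256");
    }

    String sign(String headerAndPayload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        byte[] sig = mac.doFinal(headerAndPayload.getBytes(StandardCharsets.UTF_8));
        return headerAndPayload + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }

    // Validation recomputes the signature and compares; production code
    // would use a constant-time comparison.
    boolean verify(String token) throws Exception {
        int dot = token.lastIndexOf('.');
        if (dot < 0) return false;
        return sign(token.substring(0, dot)).equals(token);
    }
}
```

Any party holding the shared secret can verify a token offline, which is why the gateway can reject malformed tokens without calling the User App.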
User Database:
- users: Stores user credentials and profile information
- relationships: Manages connections between users
- user_preferences: Stores user settings and preferences
- auth_tokens: Manages authentication and refresh tokens
Search Database:
- search_indexes: Stores indexed content for quick retrieval
- search_configurations: Contains search algorithm settings
- search_history: Logs user search patterns for recommendations
Message Database:
- conversations: Stores metadata about conversations
- messages: Contains actual message content and metadata
- message_attachments: References to media included in messages
Wall Database:
- posts: Contains user posts and associated metadata
- comments: Stores comments linked to posts
- reactions: Records user reactions (likes, etc.) to content
- media: Stores references to images and videos
Each microservice implements a specialized caching strategy:
User App:
- Caches user profiles with a 15-minute TTL
- Implements cache-aside pattern for frequently accessed data
- Uses Redis database 0 with isolation
Message App:
- Caches recent conversations with sliding window expiration
- Implements write-through caching for new messages
- Uses Redis database 1 with specialized data structures
Search App:
- Caches popular search results with frequency-based eviction
- Implements memoization for complex search operations
- Uses Redis database 2 with bloom filters for optimization
Wall App:
- Caches trending posts using LRU eviction strategy
- Implements feed caching with selective invalidation
- Uses Redis database 3 with sorted sets for ranking
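The cache-aside pattern used by the User App can be sketched with a TTL-bearing map standing in for Redis (an illustration of the pattern, not the service's actual code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Cache-aside with a TTL, as applied to user profiles (15-minute TTL
// per the description). A plain map stands in for Redis here.
class TtlCache<K, V> {
    private record Entry<V>(V value, long expiresAt) {}
    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Look in the cache first; on a miss or expired entry, load from
    // the backing source and repopulate the cache.
    V get(K key, Function<K, V> loader) {
        Entry<V> e = store.get(key);
        long now = System.currentTimeMillis();
        if (e != null && e.expiresAt() > now) return e.value();
        V v = loader.apply(key);
        store.put(key, new Entry<>(v, now + ttlMillis));
        return v;
    }
}
```

The loader runs only on a miss, which is what keeps database load low for hot keys.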
The system uses various RabbitMQ exchange types for different communication patterns:
Direct exchanges for command patterns:
- Authentication events
- User management operations
Topic exchanges for publish/subscribe patterns:
- Content creation events (posts, comments)
- User activity streams
Fanout exchanges for broadcast patterns:
- System-wide notifications
- Cache invalidation events
Example message flow:
[User App] --user.created--> [RabbitMQ] --user.created--> [Search App, Message App]
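Topic exchanges route by matching routing keys such as `user.created` against binding patterns, where `*` matches exactly one dot-separated word and `#` matches zero or more. A sketch of that matching rule (the semantics RabbitMQ documents, not its implementation):

```java
// Topic-exchange routing-key matching: '*' matches exactly one
// dot-separated word, '#' matches zero or more words.
class TopicMatcher {
    static boolean matches(String pattern, String routingKey) {
        return match(pattern.split("\\."), 0, routingKey.split("\\."), 0);
    }

    private static boolean match(String[] p, int pi, String[] k, int ki) {
        if (pi == p.length) return ki == k.length;
        if (p[pi].equals("#")) {
            // '#' may consume zero or more remaining words.
            for (int skip = ki; skip <= k.length; skip++) {
                if (match(p, pi + 1, k, skip)) return true;
            }
            return false;
        }
        if (ki == k.length) return false;
        return (p[pi].equals("*") || p[pi].equals(k[ki])) && match(p, pi + 1, k, ki + 1);
    }
}
```

So a Search App queue bound with `user.#` receives `user.created` as well as deeper keys like `user.profile.updated`, while a `user.*` binding receives only single-word suffixes.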
Each microservice implements the following resilience patterns:
- Circuit Breaker: Prevents cascading failures when dependent services are down
- Retry with Backoff: Automatically retries failed operations with exponential backoff
- Bulkhead: Isolates failures to protect critical system components
- Fallback: Provides degraded functionality when primary operations fail
- Timeout: Sets maximum duration for external calls to prevent blocking
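The retry-with-backoff pattern above is provided by Resilience4j's `@Retry` in the services themselves; written out by hand, the mechanism looks roughly like this (a sketch, with delays as parameters):

```java
import java.util.concurrent.Callable;

// Retry with exponential backoff: re-attempt a failing operation,
// doubling the wait between attempts, and rethrow the last failure.
class RetryWithBackoff {
    static <T> T call(Callable<T> op, int maxAttempts, long initialDelayMs) throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2;  // exponential backoff between attempts
                }
            }
        }
        throw last;
    }
}
```

Adding random jitter to the delay is a common refinement that avoids synchronized retry storms across instances.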
The system implements multiple layers of security:
Transport Layer:
- TLS/SSL encryption for all external communication
- Secure internal communication in production environments
Authentication Layer:
- JWT-based authentication with refresh token rotation
- Password hashing with bcrypt and configurable work factors
- OAuth2 integration (planned for production)
Authorization Layer:
- Role-based access control (RBAC)
- Fine-grained permission system
- Resource ownership validation
API Security:
- Input validation and sanitization
- CSRF protection
- Rate limiting and throttling
- Content Security Policy implementation
Backend: Java with Spring Boot 3.x
- Spring Security for authentication and authorization
- Spring Data JPA/MongoDB for data access
- Spring AMQP for messaging
- Spring Cache for caching abstraction
API: RESTful services
- OpenAPI/Swagger for documentation
- JSON/HTTP for communication
- HATEOAS for resource navigation
Database:
- PostgreSQL 16 for relational data
- MongoDB 6 for document data
- Database migration tools (Flyway/Liquibase)
Caching: Redis 7
- Lettuce client with connection pooling
- Multi-database configuration
- Advanced data structures
Messaging: RabbitMQ 3
- Exchange and queue configuration
- Message acknowledgement
- Dead letter queues
API Gateway: Nginx 1.25
- Load balancing configuration
- SSL termination
- Path-based routing
Containerization: Docker
- Multi-stage builds
- Custom images
- Volume management
Container Orchestration: Docker Compose
- Service dependency management
- Health checks
- Network configuration
The application can be deployed to a k8s cluster, providing enhanced scalability, resilience, and automated management capabilities compared to Docker Compose.
The k8s deployment mirrors the Docker Compose architecture but takes advantage of k8s native features like:
- Declarative configuration
- Self-healing capabilities
- Advanced scaling options
- Robust service discovery
- Built-in health monitoring
- Resource management
- A k8s cluster (local or cloud-based)
- kubectl command-line tool installed and configured
- Container registry access (Docker Hub, GCR, ECR, etc.)
- Helm (optional, for more advanced deployments)
Before deploying to k8s, build and push your container images to a registry:
# Build all microservice images
./build_all.sh
# Tag images for your registry
docker tag facebook/user_app:latest your-registry/facebook/user_app:latest
docker tag facebook/message_app:latest your-registry/facebook/message_app:latest
docker tag facebook/search_app:latest your-registry/facebook/search_app:latest
docker tag facebook/wall_app:latest your-registry/facebook/wall_app:latest
# Push images to your registry
docker push your-registry/facebook/user_app:latest
docker push your-registry/facebook/message_app:latest
docker push your-registry/facebook/search_app:latest
docker push your-registry/facebook/wall_app:latest
For local development with Minikube, you can load images directly:
# Build images
./build_all.sh
# Load images into Minikube
minikube image load facebook/user_app:latest
minikube image load facebook/message_app:latest
minikube image load facebook/search_app:latest
minikube image load facebook/wall_app:latest
The application is deployed using a series of configuration files in the k8s/ directory. Apply them in order:
# Create namespace
kubectl apply -f k8s/00-namespace.yaml
# Create persistent volume claims
kubectl apply -f k8s/01-storage.yaml
# Deploy databases
kubectl apply -f k8s/02-postgres.yaml
kubectl apply -f k8s/03-mongodb.yaml
# Deploy infrastructure services
kubectl apply -f k8s/04-redis.yaml
kubectl apply -f k8s/05-rabbitmq.yaml
# Deploy microservices
kubectl apply -f k8s/06-user-app.yaml
kubectl apply -f k8s/07-message-app.yaml
kubectl apply -f k8s/08-wall-app.yaml
kubectl apply -f k8s/09-search-app.yaml
# Deploy API gateway
kubectl apply -f k8s/10-nginx-gateway.yaml
For convenience, you can apply all configurations at once:
kubectl apply -f k8s/
Check the status of all deployed resources:
# Check pods
kubectl get pods -n facebook-app
# Check services
kubectl get services -n facebook-app
# Check persistent volume claims
kubectl get pvc -n facebook-app
# Check deployments
kubectl get deployments -n facebook-app
The application is exposed through the Nginx API Gateway service:
# Get external IP/port
kubectl get service nginx-gateway -n facebook-app
For Minikube:
minikube service nginx-gateway -n facebook-app
For cloud providers, the LoadBalancer service will provide an external IP address that you can access directly.
One advantage of k8s is the ability to easily scale individual components:
# Scale the user-app to 3 replicas
kubectl scale deployment user-app -n facebook-app --replicas=3
# Scale the wall-app to 2 replicas
kubectl scale deployment wall-app -n facebook-app --replicas=2
Monitor the application and view logs:
# View logs for a specific pod
kubectl logs -f <pod-name> -n facebook-app
# View events
kubectl get events -n facebook-app
# Describe a resource for detailed information
kubectl describe pod <pod-name> -n facebook-app
The k8s deployment consists of the following configuration files:
- 00-namespace.yaml: Creates a dedicated namespace for the application
- 01-storage.yaml: Defines persistent volume claims for databases and stateful services
- 02-postgres.yaml: Sets up PostgreSQL databases for user and search services
- 03-mongodb.yaml: Configures MongoDB databases for message and wall services
- 04-redis.yaml: Deploys Redis with optimized performance configurations
- 05-rabbitmq.yaml: Implements RabbitMQ message broker with management interface
- 06-user-app.yaml: Deploys the user authentication and profile management service
- 07-message-app.yaml: Deploys the messaging service
- 08-wall-app.yaml: Deploys the posts and comments service
- 09-search-app.yaml: Deploys the search functionality service
- 10-nginx-gateway.yaml: Sets up the API gateway for routing requests
Each microservice configuration includes:
- ConfigMaps for environment variables and application settings
- Secrets for sensitive information
- Deployments with resource limits and health checks
- Services for inter-service communication
- Persistent volume claims for stateful components
The k8s configurations include resource requests and limits to ensure optimal performance:
- Microservices: 256Mi-512Mi memory, 200m-500m CPU
- Databases: 256Mi-512Mi memory, 250m-500m CPU
- Redis: 128Mi-512Mi memory, 100m-300m CPU
- RabbitMQ: 256Mi-512Mi memory, 200m-500m CPU
- Nginx Gateway: 128Mi-256Mi memory, 100m-200m CPU
To remove the entire application from your k8s cluster:
kubectl delete namespace facebook-app
Or remove individual components:
kubectl delete -f k8s/10-nginx-gateway.yaml
kubectl delete -f k8s/09-search-app.yaml
kubectl delete -f k8s/08-wall-app.yaml
# etc.
- Scaling: k8s allows scaling individual services independently
- Resilience: k8s automatically restarts failed containers and provides self-healing
- Resource Management: k8s offers more granular control over CPU and memory allocation
- Service Discovery: k8s provides DNS-based service discovery out of the box
- Rolling Updates: k8s supports zero-downtime updates and rollbacks
- Health Checks: k8s has built-in health checking and readiness probes
- Networking: k8s provides more advanced networking options and policies
After deploying the basic application, consider implementing these advanced features:
- Horizontal Pod Autoscaling:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-app-hpa
  namespace: facebook-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
- Network Policies for enhanced security:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-access-policy
  namespace: facebook-app
spec:
  podSelector:
    matchLabels:
      app: postgres-user
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: user-app
    ports:
    - protocol: TCP
      port: 5432
- ConfigMap and Secret Management using external tools like HashiCorp Vault or Sealed Secrets
- Metrics Collection with Prometheus and Grafana:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: facebook-app-monitor
  namespace: facebook-app
spec:
  selector:
    matchLabels:
      app: user-app
  endpoints:
  - port: web
    path: /actuator/prometheus
    interval: 15s
- Docker and Docker Compose installed
- Java Development Kit (JDK) 17 or later (for local development)
- Maven (for local development)
- Git (for version control)
- Postman or similar tool (for API testing)
The entire application can be launched using Docker Compose:
# Build and start all services
docker-compose up -d
# Check service status
docker-compose ps
# View logs
docker-compose logs -f
# View logs for a specific service
docker-compose logs -f user_app
# Stop all services
docker-compose down
When starting the system for the first time:
- The databases will initialize with default schemas
- Test users will be created automatically (in development mode)
- Sample content will be generated (in development mode)
Default admin credentials:
- Username: admin@facebook-app.local
- Password: adminPassword123!
The repository includes several testing scripts:
# Test all authentication functionalities
./run_all_auth_tests.sh
# Test message application features
./test_message_app.sh
# Test user management features
./test_user_app.sh
These scripts use cURL to make API calls and validate responses.
Once the system is running, you can access the services through the Nginx gateway:
Authentication:
- User Registration:
POST http://localhost/api/auth/register
{ "email": "user@facebook.com", "password": "securePassword123!", "firstName": "John", "lastName": "Doe" }
- User Login:
POST http://localhost/api/auth/login
{ "email": "user@facebook.com", "password": "securePassword123!" }
Response:
{ "token": "eyJhbGciOiJIUzI1NiJ9...", "refreshToken": "eyJhbGciOiJIUzI1NiJ9...", "expiresIn": 86400 }
User Management:
- Get User Profile:
GET http://localhost/api/users/{userId}
Headers: Authorization: Bearer {token}
- Update User Profile:
PUT http://localhost/api/users/{userId}
Headers: Authorization: Bearer {token}
{ "firstName": "Updated", "lastName": "Name", "bio": "My updated profile bio" }
Messages:
- Get Conversations:
GET http://localhost/api/messages/conversations
Headers: Authorization: Bearer {token}
- Send Message:
POST http://localhost/api/messages/send
Headers: Authorization: Bearer {token}
{ "recipientId": "user123", "content": "Hello, how are you?", "attachments": [] }
- Get Messages:
GET http://localhost/api/messages/{conversationId}
Headers: Authorization: Bearer {token}
Search:
- Search Users:
GET http://localhost/api/search/users?query={query}
Headers: Authorization: Bearer {token}
- Search Posts:
GET http://localhost/api/search/posts?query={query}
Headers: Authorization: Bearer {token}
Wall:
- Create Post:
POST http://localhost/api/posts
Headers: Authorization: Bearer {token}
{ "content": "This is my new post!", "visibility": "PUBLIC", "mediaUrls": [] }
- Get Posts:
GET http://localhost/api/posts
Headers: Authorization: Bearer {token}
- Add Comment:
POST http://localhost/api/comments
Headers: Authorization: Bearer {token}
{ "postId": "post123", "content": "Great post!" }
Health checks are available for monitoring the status of each service:
- Nginx Gateway: http://localhost/health
- User Service: http://localhost/actuator/health/user
- Message Service: http://localhost/actuator/health/messages
- Search Service: http://localhost/actuator/health/search
- Wall Service: http://localhost/actuator/health/wall
The services expose detailed health metrics through Spring Boot Actuator:
- Service Info: http://localhost/actuator/info/{service}
- Service Metrics: http://localhost/actuator/metrics/{service}
- Health Components: http://localhost/actuator/health/{service}/components
For local development, you can run each service separately:
# Navigate to service directory
cd user_app
# Run the service with development profile
./mvnw spring-boot:run -Dspring-boot.run.profiles=dev
# Run with remote debugging enabled
./mvnw spring-boot:run -Dspring-boot.run.jvmArguments="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
You can use in-memory databases for development:
# Run with in-memory database
./mvnw spring-boot:run -Dspring-boot.run.profiles=dev,inmemory
During development, you can start only the infrastructure services:
# Start only infrastructure services (databases, message broker, cache)
docker-compose up -d postgres-user postgres-search mongodb-message mongodb-wall rabbitmq redis
Each service has its own Dockerfile. To build an individual service:
# Navigate to service directory
cd user_app
# Build the image
docker build -t facebook/user_app .
# Build with a specific tag
docker build -t facebook/user_app:v1.2.3 .
# Build with build arguments
docker build --build-arg JAR_FILE=target/user_app-0.0.1-SNAPSHOT.jar -t facebook/user_app .
The Dockerfiles use multi-stage builds to minimize image size:
# Build stage
FROM maven:3.8.3-openjdk-17-slim AS build
WORKDIR /app
COPY . .
RUN mvn clean package -DskipTests
# Run stage
FROM openjdk:17-slim
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
Each service includes comprehensive testing:
# Run all tests
./mvnw test
# Run specific test class
./mvnw test -Dtest=UserServiceTest
# Generate test coverage report
./mvnw test jacoco:report
# Run integration tests only
./mvnw verify -P integration-tests
The project uses various testing approaches:
- Unit Tests: Test individual components in isolation
- Integration Tests: Test component interactions with real or mocked dependencies
- API Tests: Test REST endpoints with MockMvc
- End-to-End Tests: Test complete flows across services
The project uses several code quality tools:
# Run static code analysis
./mvnw sonar:sonar
# Check code formatting
./mvnw spotless:check
# Apply code formatting
./mvnw spotless:apply
user_app/: User authentication and management service
- src/main/java/com/facebook/user_app/controller: REST controllers
- src/main/java/com/facebook/user_app/service: Business logic
- src/main/java/com/facebook/user_app/repository: Data access
- src/main/java/com/facebook/user_app/model: Domain entities
- src/main/java/com/facebook/user_app/config: Configuration classes
- src/main/java/com/facebook/user_app/security: Authentication and authorization
message_app/: Messaging service
- src/main/java/com/facebook/message_app/controller: REST controllers
- src/main/java/com/facebook/message_app/service: Business logic
- src/main/java/com/facebook/message_app/repository: Data access
- src/main/java/com/facebook/message_app/model: Domain entities
- src/main/java/com/facebook/message_app/config: Configuration classes
search_app/: Search functionality service
- src/main/java/com/facebook/search_app/controller: REST controllers
- src/main/java/com/facebook/search_app/service: Business logic
- src/main/java/com/facebook/search_app/repository: Data access
- src/main/java/com/facebook/search_app/model: Domain entities
- src/main/java/com/facebook/search_app/config: Configuration classes
- src/main/java/com/facebook/search_app/indexing: Search indexing logic
wall_app/: Posts and comments service
- src/main/java/com/facebook/wall_app/controller: REST controllers
- src/main/java/com/facebook/wall_app/service: Business logic
- src/main/java/com/facebook/wall_app/repository: Data access
- src/main/java/com/facebook/wall_app/model: Domain entities
- src/main/java/com/facebook/wall_app/config: Configuration classes
shared/: Shared libraries and utilities
- src/main/java/com/facebook/shared/dto: Data transfer objects
- src/main/java/com/facebook/shared/exception: Common exceptions
- src/main/java/com/facebook/shared/util: Utility classes
- src/main/java/com/facebook/shared/event: Event classes for messaging
nginx/: API gateway and load balancer configuration
- conf.d/default.conf: Main Nginx configuration file
- Dockerfile: Nginx container build definition
Root-level files:
- docker-compose.yml: Container orchestration configuration
- pom.xml: Parent Maven configuration file
- README.md: Project documentation
- run-facebook-app.sh: Convenience script for management
- build_all.sh: Script to build all services
- run_all_auth_tests.sh: Script to test authentication flows
- test_message_app.sh: Script to test messaging features
- test_user_app.sh: Script to test user management
The application follows key microservices principles:
- Single Responsibility: Each service handles a specific business domain
- Autonomous Services: Services can be developed, deployed, and scaled independently
- Decentralized Data Management: Each service has its own database
- Infrastructure Automation: Containerization and orchestration
- Design for Failure: Resilience patterns and fault tolerance
- Evolutionary Design: Services can evolve independently
The Nginx gateway implements:
- Routing: Directs requests to appropriate services
- Aggregation: Combines responses from multiple services (for complex operations)
- Protocol Translation: Converts between protocols if needed
- Offloading Cross-cutting Concerns: Handles authentication, logging, etc.
The RabbitMQ messaging system enables:
- Loose Coupling: Services communicate without direct dependencies
- Asynchronous Processing: Non-blocking operations for better scalability
- Event Sourcing: Recording all state changes as a sequence of events
- CQRS: Separation of read and write operations for complex domains
The Redis caching layer implements:
- Cache-Aside: Services look for data in cache first, then database
- Write-Through: Updates cache when database is updated
- Time-to-Live: Automatic expiration of cached data
- Distributed Caching: Shared cache across service instances
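The write-through pattern listed above differs from cache-aside in that every write updates the store and the cache together, so subsequent reads never see stale data. A sketch with maps standing in for Redis and the database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;

// Write-through caching: persist to the backing store first, then
// refresh the cache, keeping both in step on every write.
class WriteThroughCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final BiConsumer<K, V> databaseWriter;

    WriteThroughCache(BiConsumer<K, V> databaseWriter) {
        this.databaseWriter = databaseWriter;
    }

    void put(K key, V value) {
        databaseWriter.accept(key, value); // persist first
        cache.put(key, value);             // then refresh the cache
    }

    V get(K key) {
        return cache.get(key);
    }
}
```

The Message App's caching of new messages follows this shape; the trade-off is write latency (two writes per update) in exchange for always-fresh reads.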
Each service owns its data with:
- Private Tables: Each service has exclusive access to its database
- API Encapsulation: Data is only accessed through service APIs
- Polyglot Persistence: Different database types for different needs
- Database Transactions: ACID guarantees within service boundaries
The system implements multiple resilience strategies:
Load Balancing: Nginx distributes traffic across service instances
- Round-robin distribution
- Least connections algorithm
- IP hash for session persistence
- Health-check based routing
Service Discovery: Services can find each other dynamically
- DNS-based service discovery in Docker
- Health-aware service resolution
- Automatic service registration
Health Monitoring: Automatic health checks detect service issues
- Readiness probes: Determine if service can receive traffic
- Liveness probes: Detect hung or deadlocked services
- Custom health indicators: Application-specific health criteria
Horizontal Scaling: Additional service instances can be added for scaling
- Stateless service design
- Session externalization
- Concurrent request handling
- Data partitioning strategies
The application uses Resilience4j for circuit breaking:
@CircuitBreaker(name = "userService", fallbackMethod = "getUserProfileFallback")
public UserProfile getUserProfile(String userId) {
// Remote service call that might fail
}
public UserProfile getUserProfileFallback(String userId, Exception ex) {
// Fallback implementation
}
Services implement thread isolation:
@Bulkhead(name = "messagingOperations")
public void sendMessage(Message message) {
// Message sending logic
}
Failed operations are automatically retried:
@Retry(name = "searchIndex")
public SearchResult performSearch(String query) {
// Search operation that might fail temporarily
}
The system uses JWT-based authentication:
- Token Generation: Creates signed JWTs upon successful login
- Token Validation: Verifies token signature and expiration
- Claims Extraction: Retrieves user identity and roles from tokens
- Refresh Mechanism: Allows obtaining new access tokens
Role-based access control is implemented:
@PreAuthorize("hasRole('ADMIN') or @userSecurity.isOwner(authentication, #userId)")
public UserProfile updateUserProfile(String userId, UserProfileUpdate update) {
// Update user profile
}
Sensitive data is protected:
- Password Hashing: Uses bcrypt with configurable work factor
- Field Encryption: Encrypts sensitive fields in the database
- Data Masking: Hides sensitive data in logs and responses
API usage is limited to prevent abuse:
# Rate limiting configuration in Nginx
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=5r/s;
location /api/ {
limit_req zone=api_limit burst=10 nodelay;
proxy_pass http://backend;
}
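Nginx's limit_req implements a leaky-bucket algorithm; the closely related token-bucket limiter below sketches the same idea in application code (an illustration, not what Nginx runs): requests consume tokens that refill at a fixed rate, and the bucket capacity bounds the burst.

```java
// Token-bucket rate limiter: tokens refill continuously at ratePerSec,
// each request consumes one, and capacity caps the allowed burst.
class TokenBucket {
    private final double ratePerSec;
    private final double capacity;
    private double tokens;
    private long lastRefillNanos;

    TokenBucket(double ratePerSec, double capacity) {
        this.ratePerSec = ratePerSec;
        this.capacity = capacity;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Refill proportionally to elapsed time, capped at capacity.
        tokens = Math.min(capacity, tokens + (now - lastRefillNanos) / 1e9 * ratePerSec);
        lastRefillNanos = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

The Nginx configuration above maps onto the same parameters: rate=5r/s is the refill rate and burst=10 the bucket size.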
For production environments, consider:
- Blue-Green Deployment: Maintain two identical environments for zero-downtime updates
- Canary Releases: Gradually roll out changes to a subset of users
- Feature Toggles: Enable/disable features without redeploying
Implement comprehensive monitoring:
- Centralized Logging: Aggregate logs with ELK stack or similar
- Metrics Collection: Gather performance metrics with Prometheus
- Distributed Tracing: Track requests across services with Jaeger
- Alerting: Set up alerts for abnormal conditions
Plan for growth with:
- Database Sharding: Horizontally partition data for better performance
- Caching Tiers: Implement multi-level caching for frequently accessed data
- CDN Integration: Serve static assets from content delivery networks
- Autoscaling: Dynamically adjust resources based on load
Additional security for production:
- API Gateway Security: Implement WAF functionality
- Network Segmentation: Isolate services in separate security groups
- Secrets Management: Use a vault for sensitive configuration
- Regular Security Audits: Conduct penetration testing and code reviews
Implement data protection:
- Regular Backups: Schedule database backups with retention policies
- Point-in-Time Recovery: Enable transaction logs for fine-grained recovery
- Multi-Region Deployment: Replicate services across geographic regions
- Disaster Recovery Plan: Document procedures for various failure scenarios
Service Mesh Integration
- Implement Istio or Linkerd for enhanced service-to-service communication
- Add mutual TLS between services
- Improve traffic management capabilities
Advanced Observability
- Implement distributed tracing with Jaeger or Zipkin
- Add business metrics collection
- Create comprehensive dashboards with Grafana
CI/CD Pipeline Enhancement
- Implement GitOps workflow with ArgoCD
- Add automatic canary analysis
- Integrate security scanning in the CI pipeline
Security Enhancements
- Implement OAuth2.0 and OpenID Connect
- Add two-factor authentication
- Enhance API security with JWT scope validation
k8s Deployment
- Migrate from Docker Compose to k8s
- Implement horizontal pod autoscaling
- Add custom resource definitions for application-specific resources
Enhanced Social Features
- Friend recommendations system
- Advanced privacy controls
- Group functionality
Multimedia Enhancements
- Video streaming capabilities
- Image processing and filters
- Rich media embedding
AI-Powered Features
- Content recommendation engine
- Automated content moderation
- Smart reply suggestions
Mobile Integration
- Native mobile API endpoints
- Push notification services
- Offline capabilities
Analytics Platform
- User behavior analytics
- Content performance metrics
- A/B testing framework
-
Services failing to start:
- Check the logs with `docker-compose logs <service_name>`
- Verify that environment variables are set correctly
- Ensure volume mounts are properly configured
- Check for port conflicts with other applications
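Port conflicts can be spotted before starting the stack. A minimal sketch, assuming the host ports listed for the services above (bash's `/dev/tcp` is used to avoid extra tooling; under a shell without `/dev/tcp`, ports simply report as free):

```shell
# Return success if something is already listening on the given local port.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

# Ports follow the service descriptions above; adjust to your setup.
for port in 80 8081 8083 8084; do
  if port_in_use "$port"; then
    echo "port $port: already in use"
  else
    echo "port $port: free"
  fi
done
```

Run this before `docker-compose up -d`; any port reported as in use will cause the corresponding container to fail to bind.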
Database connection errors:
- Ensure database containers are running and healthy
- Verify connection strings in application properties
- Check database credentials
- Confirm that database initialization scripts ran correctly
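The container-health part of these checks can be combined into one pass. A sketch with hypothetical container names; list yours with `docker ps` and substitute:

```shell
# Report whether each named database container is up, per `docker inspect`.
# Container names below are assumptions; match them to your compose project.
db_status() {
  for c in "$@"; do
    state=$(docker inspect -f '{{.State.Status}}' "$c" 2>/dev/null || echo missing)
    echo "$c: $state"
  done
}

# Example: db_status postgres_user mongo_messages postgres_search mongo_wall
```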
Network connectivity issues:
- Verify that all services are on the same Docker network
- Check firewall settings and network policies
- Ensure service names are correctly referenced in application configs
- Test connectivity between containers with ping or curl
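The container-to-container test can be scripted. A minimal sketch, assuming the compose service names shown in the architecture diagram (`user_app`, `message_app`, etc.):

```shell
# Probe connectivity from one container to another over the compose network.
# Container names are assumptions; match them to your docker-compose.yml.
check_link() {
  from=$1; to=$2
  if docker exec "$from" ping -c 1 -W 2 "$to" >/dev/null 2>&1; then
    echo "$from -> $to: ok"
  else
    echo "$from -> $to: unreachable"
  fi
}

# Example: check_link user_app message_app
```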
Authentication failures:
- Check JWT secret configuration across services
- Verify token expiration settings
- Ensure clocks are synchronized across services
- Confirm that roles and permissions are correctly assigned
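When chasing expiration or claim issues, it helps to look inside a token. A small helper sketch (inspection only, no signature verification; standard `base64`, `cut`, and `tr` are assumed available):

```shell
# Decode the payload (middle segment) of a JWT without verifying it,
# so you can eyeball the exp/iat claims when debugging auth failures.
jwt_payload() {
  p=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # Restore the base64 padding that the JWT encoding strips
  while [ $(( ${#p} % 4 )) -ne 0 ]; do p="$p="; done
  printf '%s' "$p" | base64 -d
}

# Example: jwt_payload "$TOKEN"   # prints the raw JSON claims
```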
For debugging, you can:
- Access individual service logs: `docker-compose logs -f <service_name>`
- Check container health: `docker ps` (look for the "health" status)
- Access service metrics: `http://localhost/actuator/metrics/<service>` (when available)
- Use remote debugging: `docker-compose -f docker-compose.yml -f docker-compose.debug.yml up -d <service_name>`
- Inspect a container's environment: `docker exec -it <container_name> /bin/sh`
If the API gateway is not routing correctly:
- Check the Nginx configuration: `docker exec -it nginx-gateway nginx -t`
- Verify upstream service health: `curl http://user_app:8081/actuator/health`
- Test direct service access (bypassing Nginx): `curl http://localhost:8081/api/users`
If services cannot connect to Redis:
- Check the Redis logs: `docker-compose logs redis`
- Verify Redis is running: `docker exec -it redis redis-cli ping` (should return `PONG`)
- Test the connection from a service container: `docker exec -it user_app sh -c "wget -O- redis:6379"`
If events are not being processed:
- Check the RabbitMQ management UI at `http://localhost:15672`
- Verify that exchanges and queues are created
- Check for messages stuck in queues
- Review consumer acknowledgements
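The queue check can also be done from the command line via RabbitMQ's management HTTP API. A rough sketch, assuming the broker's default `guest`/`guest` credentials; the crude `tr`/`grep` parsing just avoids a `jq` dependency:

```shell
# List queue names and message counts from the RabbitMQ management API.
# Override RABBITMQ_USER / RABBITMQ_PASS if you changed the defaults.
list_queues() {
  curl -s -u "${RABBITMQ_USER:-guest}:${RABBITMQ_PASS:-guest}" \
    "http://${1:-localhost:15672}/api/queues" |
    tr ',' '\n' | grep -E '"(name|messages)":'
}

# Example: list_queues    # against the local management UI
```

A queue whose `messages` count keeps growing usually points at a consumer that is down or failing to acknowledge.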
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Set up your local environment:

```shell
# Install dependencies
./mvnw clean install -DskipTests

# Set up pre-commit hooks
cp hooks/pre-commit .git/hooks/
chmod +x .git/hooks/pre-commit
```

The project follows these conventions:
- Java Code Style: Google Java Style Guide
- REST API Design: REST API design best practices
- Commit Messages: Conventional Commits format
- Documentation: Javadoc for public APIs, README updates for features
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request with:
  - A clear description of the changes
  - A reference to the related issue
  - Screenshots or screen recordings (when applicable)
  - Updated documentation
- Address review comments
- Merge after approval
Contributors are expected to adhere to the project's Code of Conduct:
- Respectful communication
- Constructive feedback
- Inclusive language
- Collaborative problem solving
This project is licensed under the MIT License - see the LICENSE file for details.
This project uses GitHub Actions for Continuous Integration and Continuous Deployment.
Runs on every push to the main branch and on pull requests:
- Builds and tests all microservices
- Runs JavaScript tests
- Uploads test reports as artifacts
Builds and publishes Docker images:
- Triggered by pushes to the main branch and tags
- Builds Docker images for all services
- Pushes to GitHub Container Registry
Deploys to Kubernetes:
- Runs after successful Docker builds
- Can be triggered manually
- Supports staging and production environments
Monitors deployed services:
- Runs on a schedule (hourly)
- Checks Kubernetes pods status
- Tests service health endpoints
- Sends notifications on failures
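The pod-status part of that scheduled check can be sketched in a few lines; this assumes `kubectl` access and treats any phase other than Running or Completed as unhealthy (the namespace name is an assumption):

```shell
# Print the names of pods that are not in a healthy phase.
unhealthy_pods() {
  ns=${1:-default}
  kubectl get pods -n "$ns" --no-headers 2>/dev/null |
    awk '$3 != "Running" && $3 != "Completed" { print $1 }'
}

# Example: unhealthy_pods facebook-app   # namespace is an assumption
```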
To use these workflows, set up the following secrets in your GitHub repository:
- `JWT_SECRET`: Secret key for JWT token generation
- `KUBECONFIG`: Kubernetes configuration file content
- `SLACK_WEBHOOK_URL`: URL for Slack notifications
- `POSTGRES_USER_PASSWORD`: Password for the user database
- `POSTGRES_SEARCH_PASSWORD`: Password for the search database
- `RABBITMQ_USERNAME`: RabbitMQ username
- `RABBITMQ_PASSWORD`: RabbitMQ password
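These secrets can be registered in bulk with the GitHub CLI. A sketch, assuming `gh` is authenticated against the repository and the values are exported in your environment:

```shell
# Push the required workflow secrets from environment variables to the repo.
set_repo_secrets() {
  for name in JWT_SECRET KUBECONFIG SLACK_WEBHOOK_URL \
              POSTGRES_USER_PASSWORD POSTGRES_SEARCH_PASSWORD \
              RABBITMQ_USERNAME RABBITMQ_PASSWORD; do
    eval "val=\${$name:-}"
    if [ -n "$val" ]; then
      gh secret set "$name" --body "$val"
    else
      echo "warning: $name is not set, skipping" >&2
    fi
  done
}

# Example: export JWT_SECRET=$(openssl rand -base64 32); set_repo_secrets
```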
The project includes a comprehensive build script (`build.sh`) that automates the build and deployment process:

```shell
# Basic Maven build
./build.sh

# Skip tests during the build
./build.sh --skip-tests

# Build Maven artifacts and Docker images
./build.sh --skip-tests --docker

# Build and deploy with Docker Compose
./build.sh --skip-tests --deploy

# Build and deploy to Kubernetes
./build.sh --skip-tests --k8s
```
For local development, the easiest way to get the entire system running is using Docker Compose:

```shell
# Start all services
docker-compose up -d

# Check service status
docker-compose ps

# View logs
docker-compose logs -f

# Stop all services
docker-compose down
```
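After `docker-compose up -d`, the services take a little while to pass their health checks. A small wait-loop sketch (the `/actuator/health` path follows the Spring Boot actuator convention used elsewhere in this document):

```shell
# Poll a health endpoint until it responds, or give up after N tries.
wait_for_health() {
  url=${1:-http://localhost/actuator/health}
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -sf "$url" >/dev/null 2>&1; then
      echo "healthy: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up waiting for $url" >&2
  return 1
}

# Example: docker-compose up -d && wait_for_health
```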
For production or staging environments, the application can be deployed to Kubernetes:

```shell
# Interactive setup
./k8s-setup.sh

# Automated setup for CI/CD environments
./k8s-setup-auto.sh --non-interactive
```
This project uses GitHub Actions for Continuous Integration and Continuous Deployment. See CI/CD Documentation for details.
The CI/CD pipeline:
- Builds all microservices
- Runs automated tests
- Builds Docker images
- Deploys to Kubernetes
- Validates deployment health