Facebook App - Microservices Architecture


Project Overview

This project implements a Facebook-like application using a microservices architecture. The system consists of four primary microservices, each responsible for a specific domain area, all connected through an Nginx load balancer acting as an API gateway.

Architecture

┌────────────┐      ┌──────────────────────────────────────────────────────────────┐
│  Clients   │─────▶│                     API Gateway (Nginx)                      │
└────────────┘      └──────┬───────────────┬────────────────┬───────────────┬──────┘
                           │               │                │               │
                    ┌──────▼─────┐  ┌──────▼──────┐  ┌──────▼──────┐  ┌─────▼──────┐
                    │  User App  │  │ Message App │  │ Search App  │  │  Wall App  │
                    │ (Auth/User │  │ (Messages)  │  │  (Search)   │  │ (Posts &   │
                    │  Profiles) │  │             │  │             │  │  Comments) │
                    └──────┬─────┘  └──────┬──────┘  └──────┬──────┘  └─────┬──────┘
                           │               │                │               │
                    ┌──────▼─────┐  ┌──────▼──────┐  ┌──────▼──────┐  ┌─────▼──────┐
                    │ PostgreSQL │  │   MongoDB   │  │ PostgreSQL  │  │  MongoDB   │
                    │ (User data)│  │ (Messages)  │  │  (Search    │  │ (Posts &   │
                    │            │  │             │  │   indexes)  │  │  Comments) │
                    └────────────┘  └─────────────┘  └─────────────┘  └────────────┘

                            ┌─────────────┐     ┌─────────────┐
                            │   RabbitMQ  │     │    Redis    │
                            │ (Messaging) │     │  (Caching)  │
                            └─────────────┘     └─────────────┘

Microservices

  1. User App (Port 8081):

    • Manages user authentication and profiles
    • Handles user registration, login, and profile management
    • Uses PostgreSQL for storing user data
    • Implements JWT-based authentication and authorization
    • Exposes RESTful APIs for user management
    • Caches frequent user data in Redis
    • Publishes user events to RabbitMQ
  2. Message App (Port 8083):

    • Handles user messaging functionality
    • Manages conversations and message history
    • Uses MongoDB to store message data
    • Implements real-time messaging features
    • Provides APIs for creating conversations and sending messages
    • Caches recent conversations and messages in Redis
    • Consumes user events from RabbitMQ for user presence
  3. Search App (Port 8084):

    • Provides search capabilities across the platform
    • Creates and manages search indexes
    • Uses PostgreSQL to store search indexes and configurations
    • Implements advanced search algorithms and filtering
    • Provides APIs for searching users, posts, and messages
    • Caches popular search results in Redis
    • Consumes events from RabbitMQ to update search indexes
  4. Wall App (Port 8082):

    • Manages posts, comments, and reactions
    • Handles timeline features
    • Uses MongoDB to store posts and comments
    • Implements feed generation algorithms
    • Provides APIs for creating and retrieving posts
    • Caches trending posts and user feeds in Redis
    • Publishes and consumes post-related events via RabbitMQ
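
To make the REST surface described above concrete, here is a minimal, purely illustrative controller sketch for the User App. The class, service, and endpoint handler names are assumptions for illustration and are not taken from the repository code.

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Illustrative sketch only; UserService, UserProfile, and UserProfileUpdate are assumed domain types
@RestController
@RequestMapping("/api/users")
public class UserController {

    private final UserService userService;

    public UserController(UserService userService) {
        this.userService = userService;
    }

    // GET /api/users/{userId} - returns the requested profile
    @GetMapping("/{userId}")
    public ResponseEntity<UserProfile> getProfile(@PathVariable String userId) {
        return ResponseEntity.ok(userService.getProfile(userId));
    }

    // PUT /api/users/{userId} - applies the fields supplied in the request body
    @PutMapping("/{userId}")
    public ResponseEntity<UserProfile> updateProfile(@PathVariable String userId,
                                                     @RequestBody UserProfileUpdate update) {
        return ResponseEntity.ok(userService.updateProfile(userId, update));
    }
}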

Supporting Services

  1. Nginx (Port 80):

    • Acts as an API gateway and load balancer
    • Routes requests to the appropriate microservices based on path
    • Provides a unified entry point for all client requests
    • Handles cross-cutting concerns like CORS and rate limiting
    • Performs health checks on microservices
    • Offers performance optimizations like response caching and compression
    • Provides load balancing strategies like round-robin and least connections
  2. RabbitMQ:

    • Message broker for inter-service communication
    • Enables event-driven architecture
    • Provides reliable message delivery between services
    • Implements various exchange types (direct, topic, fanout)
    • Supports message acknowledgment and delivery guarantees
    • Enables asynchronous processing patterns
    • Configured with high availability and fault tolerance
    • Features management interface for monitoring and administration
  3. Redis:

    • Caching layer for improved performance
    • Reduces database load for frequently accessed data
    • Used by all services for caching
    • Implements different caching strategies (LRU, TTL-based)
    • Provides pub/sub capabilities for real-time features
    • Supports data structures for specialized use cases
    • Configured with persistence for data durability
    • Uses separate database numbers for service isolation
  4. Databases:

    • PostgreSQL: Relational database for structured data

      • User service: Stores user profiles, credentials, relationships
      • Search service: Stores search indexes, configurations, and metadata
      • Configured with connection pooling for efficient resource utilization
      • Implements optimized query patterns and indexing strategies
      • Set up with replication for high availability (production)
    • MongoDB: NoSQL database for document-oriented data

      • Message service: Stores conversations, messages, attachments
      • Wall service: Stores posts, comments, reactions, and media
      • Optimized for high write throughput and flexible schema
      • Implements sharding for horizontal scaling (production)
      • Uses replica sets for redundancy and failover
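
As one hedged illustration of the per-service Redis database isolation mentioned above, a service's Spring configuration might select its logical database with Lettuce roughly as follows. The host name, port, and database index are assumptions, not values read from the repository.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
public class RedisConfig {

    // All services point at the same Redis host but use different logical databases;
    // index 1 here is an assumed value (e.g. the Message App) and differs per service.
    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        RedisStandaloneConfiguration redis = new RedisStandaloneConfiguration("redis", 6379);
        redis.setDatabase(1);
        return new LettuceConnectionFactory(redis);
    }
}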

Detailed Technical Implementation

API Gateway Implementation

The Nginx configuration uses advanced routing techniques to direct traffic to the appropriate microservice:

# Example of API routing in Nginx
location /api/users/ {
    proxy_pass http://user_service/api/users/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

The gateway also implements:

  • Load balancing: Distributes requests across multiple instances of the same service
  • Health monitoring: Regular checks to ensure services are operational
  • Fault tolerance: Automatic failover to healthy instances
  • Request/response transformation: Modifies headers and content as needed
  • SSL termination: Handles HTTPS traffic (in production)

Authentication Flow

The authentication process uses JWT (JSON Web Tokens) and follows these steps:

  1. Client sends credentials to /api/auth/login
  2. User App validates credentials and generates a JWT
  3. JWT is returned to the client
  4. Client includes JWT in the Authorization header for subsequent requests
  5. API Gateway validates token structure and forwards to appropriate service
  6. Individual services perform fine-grained authorization
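
A minimal sketch of steps 2 and 5-6, assuming the jjwt 0.11.x API and an HMAC secret shared across services through configuration; the class name and the 86400-second lifetime (matching the expiresIn value shown later) are illustrative.

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.security.Keys;

import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.util.Date;

public class JwtTokenService {

    private final SecretKey key;

    public JwtTokenService(String secret) {
        // Every service that validates tokens must be configured with the same secret
        this.key = Keys.hmacShaKeyFor(secret.getBytes(StandardCharsets.UTF_8));
    }

    // Step 2: issue a signed token once the credentials have been verified
    public String generateToken(String userId) {
        Instant now = Instant.now();
        return Jwts.builder()
                .setSubject(userId)
                .setIssuedAt(Date.from(now))
                .setExpiration(Date.from(now.plusSeconds(86400)))
                .signWith(key)
                .compact();
    }

    // Steps 5-6: verify signature and expiration, then extract the user identity
    public String validateAndGetUserId(String token) {
        Claims claims = Jwts.parserBuilder()
                .setSigningKey(key)
                .build()
                .parseClaimsJws(token)
                .getBody();
        return claims.getSubject();
    }
}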

Database Schema Overview

PostgreSQL Schemas:

User Database:

  • users: Stores user credentials and profile information
  • relationships: Manages connections between users
  • user_preferences: Stores user settings and preferences
  • auth_tokens: Manages authentication and refresh tokens

Search Database:

  • search_indexes: Stores indexed content for quick retrieval
  • search_configurations: Contains search algorithm settings
  • search_history: Logs user search patterns for recommendations

MongoDB Collections:

Message Database:

  • conversations: Stores metadata about conversations
  • messages: Contains actual message content and metadata
  • message_attachments: References to media included in messages

Wall Database:

  • posts: Contains user posts and associated metadata
  • comments: Stores comments linked to posts
  • reactions: Records user reactions (likes, etc.) to content
  • media: Stores references to images and videos
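
As a hedged sketch of how the Wall App might map the posts collection with Spring Data MongoDB (field names follow the API examples later in this document; they are assumptions, not the actual schema):

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;

import java.time.Instant;
import java.util.List;

@Document(collection = "posts")
public class Post {

    @Id
    private String id;

    // Indexed so a user's timeline can be assembled quickly
    @Indexed
    private String authorId;

    private String content;
    private String visibility;      // e.g. PUBLIC
    private List<String> mediaUrls;
    private Instant createdAt;

    // Getters and setters omitted for brevity
}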

Caching Strategy

Each microservice implements a specialized caching strategy:

  1. User App:

    • Caches user profiles with a 15-minute TTL
    • Implements cache-aside pattern for frequently accessed data
    • Uses Redis database 0 for isolation from other services
  2. Message App:

    • Caches recent conversations with sliding window expiration
    • Implements write-through caching for new messages
    • Uses Redis database 1 with specialized data structures
  3. Search App:

    • Caches popular search results with frequency-based eviction
    • Implements memoization for complex search operations
    • Uses Redis database 2 with bloom filters for optimization
  4. Wall App:

    • Caches trending posts using LRU eviction strategy
    • Implements feed caching with selective invalidation
    • Uses Redis database 3 with sorted sets for ranking
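
For example, the User App's 15-minute profile cache described above could be expressed with Spring's Redis cache abstraction roughly as follows; the cache name, service class, and placeholder database methods are illustrative assumptions.

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.stereotype.Service;

import java.time.Duration;

@Configuration
@EnableCaching
class CacheConfig {

    // Entries in the Redis-backed caches expire after 15 minutes
    @Bean
    RedisCacheConfiguration cacheConfiguration() {
        return RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(15));
    }
}

@Service
class UserProfileService {

    // Cache-aside: look in Redis first, fall back to PostgreSQL on a miss
    @Cacheable(value = "userProfiles", key = "#userId")
    public UserProfile getProfile(String userId) {
        return loadFromDatabase(userId);
    }

    // Invalidate the cached entry whenever the profile changes
    @CacheEvict(value = "userProfiles", key = "#userId")
    public void updateProfile(String userId, UserProfileUpdate update) {
        saveToDatabase(userId, update);
    }

    private UserProfile loadFromDatabase(String userId) { return null; /* placeholder for the JPA lookup */ }
    private void saveToDatabase(String userId, UserProfileUpdate update) { /* placeholder for the JPA update */ }
}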

Event-Driven Communication

The system uses various RabbitMQ exchange types for different communication patterns:

  1. Direct exchanges for command patterns:

    • Authentication events
    • User management operations
  2. Topic exchanges for publish/subscribe patterns:

    • Content creation events (posts, comments)
    • User activity streams
  3. Fanout exchanges for broadcast patterns:

    • System-wide notifications
    • Cache invalidation events

Example message flow:

[User App] --user.created--> [RabbitMQ] --user.created--> [Search App, Message App]
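
A hedged Spring AMQP sketch of the flow above: the User App publishes to a topic exchange and the Search App consumes the event. The exchange name, queue name, and event record are assumptions, and the payload is made Serializable here so it can travel with the default message converter.

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

import java.io.Serializable;

record UserCreatedEvent(String userId, String email) implements Serializable {}

@Component
class UserEventPublisher {

    private final RabbitTemplate rabbitTemplate;

    UserEventPublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Publish to a topic exchange with the "user.created" routing key
    void publishUserCreated(UserCreatedEvent event) {
        rabbitTemplate.convertAndSend("user.events", "user.created", event);
    }
}

@Component
class UserCreatedConsumer {

    // The Search App binds its own queue to the same exchange and updates its indexes
    @RabbitListener(queues = "search.user-created")
    void onUserCreated(UserCreatedEvent event) {
        // update the search indexes for the new user
    }
}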

Resilience Patterns

Each microservice implements the following resilience patterns:

  1. Circuit Breaker: Prevents cascading failures when dependent services are down
  2. Retry with Backoff: Automatically retries failed operations with exponential backoff
  3. Bulkhead: Isolates failures to protect critical system components
  4. Fallback: Provides degraded functionality when primary operations fail
  5. Timeout: Sets maximum duration for external calls to prevent blocking
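
The retry-with-backoff pattern, for instance, can be configured programmatically with Resilience4j roughly as sketched below; the attempt count and delays are illustrative rather than the project's actual settings, and annotation-based examples of the other patterns appear later in this document.

import io.github.resilience4j.core.IntervalFunction;
import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryConfig;
import io.github.resilience4j.retry.RetryRegistry;

import java.util.function.Supplier;

public class RetryExample {

    public static void main(String[] args) {
        // Up to 3 attempts, waiting roughly 500 ms, then 1 s, before giving up
        RetryConfig config = RetryConfig.custom()
                .maxAttempts(3)
                .intervalFunction(IntervalFunction.ofExponentialBackoff(500, 2.0))
                .build();

        Retry retry = RetryRegistry.of(config).retry("searchIndex");

        // Wrap the flaky call; the decorated supplier retries transparently
        Supplier<String> decorated = Retry.decorateSupplier(retry, RetryExample::callSearchService);
        System.out.println(decorated.get());
    }

    private static String callSearchService() {
        // Placeholder for a remote call that may fail temporarily
        return "ok";
    }
}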

Security Implementation

The system implements multiple layers of security:

  1. Transport Layer:

    • TLS/SSL encryption for all external communication
    • Secure internal communication in production environments
  2. Authentication Layer:

    • JWT-based authentication with refresh token rotation
    • Password hashing with bcrypt and configurable work factors
    • OAuth2 integration (planned for production)
  3. Authorization Layer:

    • Role-based access control (RBAC)
    • Fine-grained permission system
    • Resource ownership validation
  4. API Security:

    • Input validation and sanitization
    • CSRF protection
    • Rate limiting and throttling
    • Content Security Policy implementation
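
A hedged sketch of how these layers might be wired together in Spring Security 6; the JwtAuthFilter bean and the publicly reachable paths are assumptions for illustration.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter;

@Configuration
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http, JwtAuthFilter jwtAuthFilter) throws Exception {
        http
            // Stateless JWT authentication: no server-side sessions, no CSRF tokens required
            .csrf(csrf -> csrf.disable())
            .sessionManagement(sm -> sm.sessionCreationPolicy(SessionCreationPolicy.STATELESS))
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/api/auth/**", "/actuator/health/**").permitAll()
                .anyRequest().authenticated())
            // Validate the bearer token before the standard authentication filter runs
            .addFilterBefore(jwtAuthFilter, UsernamePasswordAuthenticationFilter.class);
        return http.build();
    }

    // bcrypt with an explicit work factor, as described above
    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder(12);
    }
}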

Technology Stack

  • Backend: Java with Spring Boot 3.x

    • Spring Security for authentication and authorization
    • Spring Data JPA/MongoDB for data access
    • Spring AMQP for messaging
    • Spring Cache for caching abstraction
  • API: RESTful services

    • OpenAPI/Swagger for documentation
    • JSON/HTTP for communication
    • HATEOAS for resource navigation
  • Database:

    • PostgreSQL 16 for relational data
    • MongoDB 6 for document data
    • Database migration tools (Flyway/Liquibase)
  • Caching: Redis 7

    • Lettuce client with connection pooling
    • Multi-database configuration
    • Advanced data structures
  • Messaging: RabbitMQ 3

    • Exchange and queue configuration
    • Message acknowledgement
    • Dead letter queues
  • API Gateway: Nginx 1.25

    • Load balancing configuration
    • SSL termination
    • Path-based routing
  • Containerization: Docker

    • Multi-stage builds
    • Custom images
    • Volume management
  • Container Orchestration: Docker Compose

    • Service dependency management
    • Health checks
    • Network configuration

k8s Deployment

The application can be deployed to a k8s cluster, providing enhanced scalability, resilience, and automated management capabilities compared to Docker Compose.

k8s Architecture

The k8s deployment mirrors the Docker Compose architecture but takes advantage of k8s native features like:

  • Declarative configuration
  • Self-healing capabilities
  • Advanced scaling options
  • Robust service discovery
  • Built-in health monitoring
  • Resource management

Prerequisites for k8s Deployment

  • A k8s cluster (local or cloud-based)
  • kubectl command-line tool installed and configured
  • Container registry access (Docker Hub, GCR, ECR, etc.)
  • Helm (optional, for more advanced deployments)

Building and Pushing Docker Images

Before deploying to k8s, build and push your container images to a registry:

# Build all microservice images
./build_all.sh

# Tag images for your registry
docker tag facebook/user_app:latest your-registry/facebook/user_app:latest
docker tag facebook/message_app:latest your-registry/facebook/message_app:latest
docker tag facebook/search_app:latest your-registry/facebook/search_app:latest
docker tag facebook/wall_app:latest your-registry/facebook/wall_app:latest

# Push images to your registry
docker push your-registry/facebook/user_app:latest
docker push your-registry/facebook/message_app:latest
docker push your-registry/facebook/search_app:latest
docker push your-registry/facebook/wall_app:latest

For local development with Minikube, you can load images directly:

# Build images
./build_all.sh

# Load images into Minikube
minikube image load facebook/user_app:latest
minikube image load facebook/message_app:latest
minikube image load facebook/search_app:latest
minikube image load facebook/wall_app:latest

Applying k8s Configurations

The application is deployed using a series of configuration files in the /k8s directory. Apply them in order:

# Create namespace
kubectl apply -f k8s/00-namespace.yaml

# Create persistent volume claims
kubectl apply -f k8s/01-storage.yaml

# Deploy databases
kubectl apply -f k8s/02-postgres.yaml
kubectl apply -f k8s/03-mongodb.yaml

# Deploy infrastructure services
kubectl apply -f k8s/04-redis.yaml
kubectl apply -f k8s/05-rabbitmq.yaml

# Deploy microservices
kubectl apply -f k8s/06-user-app.yaml
kubectl apply -f k8s/07-message-app.yaml
kubectl apply -f k8s/08-wall-app.yaml
kubectl apply -f k8s/09-search-app.yaml

# Deploy API gateway
kubectl apply -f k8s/10-nginx-gateway.yaml

For convenience, you can apply all configurations at once:

kubectl apply -f k8s/

Verifying Deployment

Check the status of all deployed resources:

# Check pods
kubectl get pods -n facebook-app

# Check services
kubectl get services -n facebook-app

# Check persistent volume claims
kubectl get pvc -n facebook-app

# Check deployments
kubectl get deployments -n facebook-app

Accessing the Application

The application is exposed through the Nginx API Gateway service:

# Get external IP/port
kubectl get service nginx-gateway -n facebook-app

For Minikube:

minikube service nginx-gateway -n facebook-app

For cloud providers, the LoadBalancer service will provide an external IP address that you can access directly.

Scaling Services

One advantage of k8s is the ability to easily scale individual components:

# Scale the user-app to 3 replicas
kubectl scale deployment user-app -n facebook-app --replicas=3

# Scale the wall-app to 2 replicas
kubectl scale deployment wall-app -n facebook-app --replicas=2

Monitoring and Logs

Monitor the application and view logs:

# View logs for a specific pod
kubectl logs -f <pod-name> -n facebook-app

# View events
kubectl get events -n facebook-app

# Describe a resource for detailed information
kubectl describe pod <pod-name> -n facebook-app

k8s Configuration Details

The k8s deployment consists of the following configuration files:

  1. 00-namespace.yaml: Creates a dedicated namespace for the application
  2. 01-storage.yaml: Defines persistent volume claims for databases and stateful services
  3. 02-postgres.yaml: Sets up PostgreSQL databases for user and search services
  4. 03-mongodb.yaml: Configures MongoDB databases for message and wall services
  5. 04-redis.yaml: Deploys Redis with optimized performance configurations
  6. 05-rabbitmq.yaml: Implements RabbitMQ message broker with management interface
  7. 06-user-app.yaml: Deploys the user authentication and profile management service
  8. 07-message-app.yaml: Deploys the messaging service
  9. 08-wall-app.yaml: Deploys the posts and comments service
  10. 09-search-app.yaml: Deploys the search functionality service
  11. 10-nginx-gateway.yaml: Sets up the API gateway for routing requests

Each microservice configuration includes:

  • ConfigMaps for environment variables and application settings
  • Secrets for sensitive information
  • Deployments with resource limits and health checks
  • Services for inter-service communication
  • Persistent volume claims for stateful components

k8s Resource Management

The k8s configurations include resource requests and limits to ensure optimal performance:

  • Microservices: 256Mi-512Mi memory, 200m-500m CPU
  • Databases: 256Mi-512Mi memory, 250m-500m CPU
  • Redis: 128Mi-512Mi memory, 100m-300m CPU
  • RabbitMQ: 256Mi-512Mi memory, 200m-500m CPU
  • Nginx Gateway: 128Mi-256Mi memory, 100m-200m CPU

Cleaning Up

To remove the entire application from your k8s cluster:

kubectl delete namespace facebook-app

Or remove individual components:

kubectl delete -f k8s/10-nginx-gateway.yaml
kubectl delete -f k8s/09-search-app.yaml
kubectl delete -f k8s/08-wall-app.yaml
# etc.

Differences between Docker Compose and k8s Deployments

  1. Scaling: k8s allows each service to be scaled independently
  2. Resilience: k8s automatically restarts failed containers and provides self-healing
  3. Resource Management: k8s offers more granular control over CPU and memory allocation
  4. Service Discovery: k8s provides DNS-based service discovery out of the box
  5. Rolling Updates: k8s supports zero-downtime updates and rollbacks
  6. Health Checks: k8s has built-in health checking and readiness probes
  7. Networking: k8s provides more advanced networking options and policies

Advanced k8s Features

After deploying the basic application, consider implementing these advanced features:

  1. Horizontal Pod Autoscaling:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-app-hpa
  namespace: facebook-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  2. Network Policies for enhanced security:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-access-policy
  namespace: facebook-app
spec:
  podSelector:
    matchLabels:
      app: postgres-user
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: user-app
    ports:
    - protocol: TCP
      port: 5432
  3. ConfigMap and Secret Management using external tools like HashiCorp Vault or Sealed Secrets

  4. Metrics Collection with Prometheus and Grafana:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: facebook-app-monitor
  namespace: facebook-app
spec:
  selector:
    matchLabels:
      app: user-app
  endpoints:
  - port: web
    path: /actuator/prometheus
    interval: 15s

Getting Started

Prerequisites

  • Docker and Docker Compose installed
  • Java Development Kit (JDK) 17 or later (for local development)
  • Maven (for local development)
  • Git (for version control)
  • Postman or similar tool (for API testing)

Running the Application

The entire application can be launched using Docker Compose:

# Build and start all services
docker-compose up -d

# Check service status
docker-compose ps

# View logs
docker-compose logs -f

# View logs for a specific service
docker-compose logs -f user_app

# Stop all services
docker-compose down

First-time Setup

When starting the system for the first time:

  1. The databases will initialize with default schemas
  2. Test users will be created automatically (in development mode)
  3. Sample content will be generated (in development mode)

Default admin credentials:

Testing with Provided Scripts

The repository includes several testing scripts:

# Test all authentication functionalities
./run_all_auth_tests.sh

# Test message application features
./test_message_app.sh

# Test user management features
./test_user_app.sh

These scripts use cURL to make API calls and validate responses.

Using the API

Once the system is running, you can access the services through the Nginx gateway:

  • Authentication:

    • User Registration: POST http://localhost/api/auth/register
      {
        "email": "user@facebook.com",
        "password": "securePassword123!",
        "firstName": "John",
        "lastName": "Doe"
      }
    • User Login: POST http://localhost/api/auth/login
      {
        "email": "user@facebook.com",
        "password": "securePassword123!"
      }
      Response:
      {
        "token": "eyJhbGciOiJIUzI1NiJ9...",
        "refreshToken": "eyJhbGciOiJIUzI1NiJ9...",
        "expiresIn": 86400
      }
  • User Management:

    • Get User Profile: GET http://localhost/api/users/{userId} Headers: Authorization: Bearer {token}

    • Update User Profile: PUT http://localhost/api/users/{userId} Headers: Authorization: Bearer {token}

      {
        "firstName": "Updated",
        "lastName": "Name",
        "bio": "My updated profile bio"
      }
  • Messages:

    • Get Conversations: GET http://localhost/api/messages/conversations Headers: Authorization: Bearer {token}

    • Send Message: POST http://localhost/api/messages/send Headers: Authorization: Bearer {token}

      {
        "recipientId": "user123",
        "content": "Hello, how are you?",
        "attachments": []
      }
    • Get Messages: GET http://localhost/api/messages/{conversationId} Headers: Authorization: Bearer {token}

  • Search:

    • Search Users: GET http://localhost/api/search/users?query={query} Headers: Authorization: Bearer {token}

    • Search Posts: GET http://localhost/api/search/posts?query={query} Headers: Authorization: Bearer {token}

  • Wall:

    • Create Post: POST http://localhost/api/posts Headers: Authorization: Bearer {token}

      {
        "content": "This is my new post!",
        "visibility": "PUBLIC",
        "mediaUrls": []
      }
    • Get Posts: GET http://localhost/api/posts Headers: Authorization: Bearer {token}

    • Add Comment: POST http://localhost/api/comments Headers: Authorization: Bearer {token}

      {
        "postId": "post123",
        "content": "Great post!"
      }

Health Checks and Monitoring

Health checks are available for monitoring the status of each service:

  • Nginx Gateway: http://localhost/health
  • Individual Services:
    • User Service: http://localhost/actuator/health/user
    • Message Service: http://localhost/actuator/health/messages
    • Search Service: http://localhost/actuator/health/search
    • Wall Service: http://localhost/actuator/health/wall

The services expose detailed health metrics through Spring Boot Actuator:

  • Service Info: http://localhost/actuator/info/{service}
  • Service Metrics: http://localhost/actuator/metrics/{service}
  • Health Components: http://localhost/actuator/health/{service}/components
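
Beyond the built-in indicators, each service can contribute its own health component. The sketch below is a hedged example of a broker connectivity check; the component name and the use of RabbitTemplate are assumptions.

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class MessageBrokerHealthIndicator implements HealthIndicator {

    private final RabbitTemplate rabbitTemplate;

    public MessageBrokerHealthIndicator(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    @Override
    public Health health() {
        try {
            // A lightweight broker round-trip; any exception marks this component DOWN
            rabbitTemplate.execute(channel -> channel.isOpen());
            return Health.up().withDetail("broker", "reachable").build();
        } catch (Exception e) {
            return Health.down(e).build();
        }
    }
}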

Development Workflow

Local Development

For local development, you can run each service separately:

# Navigate to service directory
cd user_app

# Run the service with development profile
./mvnw spring-boot:run -Dspring-boot.run.profiles=dev

# Run with remote debugging enabled
./mvnw spring-boot:run -Dspring-boot.run.jvmArguments="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"

You can use in-memory databases for development:

# Run with in-memory database
./mvnw spring-boot:run -Dspring-boot.run.profiles=dev,inmemory

Running Dependent Services

During development, you can start only the infrastructure services:

# Start only infrastructure services (databases, message broker, cache)
docker-compose up -d postgres-user postgres-search mongodb-message mongodb-wall rabbitmq redis

Building Docker Images

Each service has its own Dockerfile. To build an individual service:

# Navigate to service directory
cd user_app

# Build the image
docker build -t facebook/user_app .

# Build with a specific tag
docker build -t facebook/user_app:v1.2.3 .

# Build with build arguments
docker build --build-arg JAR_FILE=target/user_app-0.0.1-SNAPSHOT.jar -t facebook/user_app .

The Dockerfiles use multi-stage builds to minimize image size:

# Build stage
FROM maven:3.8.3-openjdk-17-slim AS build
WORKDIR /app
COPY . .
RUN mvn clean package -DskipTests

# Run stage
FROM openjdk:17-slim
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]

Testing

Each service includes comprehensive testing:

# Run all tests
./mvnw test

# Run specific test class
./mvnw test -Dtest=UserServiceTest

# Generate test coverage report
./mvnw test jacoco:report

# Run integration tests only
./mvnw verify -P integration-tests

The project uses various testing approaches:

  1. Unit Tests: Test individual components in isolation
  2. Integration Tests: Test component interactions with real or mocked dependencies
  3. API Tests: Test REST endpoints with MockMvc
  4. End-to-End Tests: Test complete flows across services
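
As a hedged illustration of the API-test level, a MockMvc test against the User App might look roughly like this; the test class name and the expectations about which endpoints are public are assumptions.

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@SpringBootTest
@AutoConfigureMockMvc
class UserApiTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void unauthenticatedRequestsAreRejected() throws Exception {
        // Without a bearer token, a protected endpoint should return 401
        mockMvc.perform(get("/api/users/123"))
                .andExpect(status().isUnauthorized());
    }

    @Test
    void healthEndpointIsPubliclyReachable() throws Exception {
        mockMvc.perform(get("/actuator/health"))
                .andExpect(status().isOk())
                .andExpect(jsonPath("$.status").value("UP"));
    }
}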

Code Quality Tools

The project uses several code quality tools:

# Run static code analysis
./mvnw sonar:sonar

# Check code formatting
./mvnw spotless:check

# Apply code formatting
./mvnw spotless:apply

Project Structure

Module Overview

  • user_app/: User authentication and management service

    • src/main/java/com/facebook/user_app/controller: REST controllers
    • src/main/java/com/facebook/user_app/service: Business logic
    • src/main/java/com/facebook/user_app/repository: Data access
    • src/main/java/com/facebook/user_app/model: Domain entities
    • src/main/java/com/facebook/user_app/config: Configuration classes
    • src/main/java/com/facebook/user_app/security: Authentication and authorization
  • message_app/: Messaging service

    • src/main/java/com/facebook/message_app/controller: REST controllers
    • src/main/java/com/facebook/message_app/service: Business logic
    • src/main/java/com/facebook/message_app/repository: Data access
    • src/main/java/com/facebook/message_app/model: Domain entities
    • src/main/java/com/facebook/message_app/config: Configuration classes
  • search_app/: Search functionality service

    • src/main/java/com/facebook/search_app/controller: REST controllers
    • src/main/java/com/facebook/search_app/service: Business logic
    • src/main/java/com/facebook/search_app/repository: Data access
    • src/main/java/com/facebook/search_app/model: Domain entities
    • src/main/java/com/facebook/search_app/config: Configuration classes
    • src/main/java/com/facebook/search_app/indexing: Search indexing logic
  • wall_app/: Posts and comments service

    • src/main/java/com/facebook/wall_app/controller: REST controllers
    • src/main/java/com/facebook/wall_app/service: Business logic
    • src/main/java/com/facebook/wall_app/repository: Data access
    • src/main/java/com/facebook/wall_app/model: Domain entities
    • src/main/java/com/facebook/wall_app/config: Configuration classes
  • shared/: Shared libraries and utilities

    • src/main/java/com/facebook/shared/dto: Data transfer objects
    • src/main/java/com/facebook/shared/exception: Common exceptions
    • src/main/java/com/facebook/shared/util: Utility classes
    • src/main/java/com/facebook/shared/event: Event classes for messaging
  • nginx/: API gateway and load balancer configuration

    • conf.d/default.conf: Main Nginx configuration file
    • Dockerfile: Nginx container build definition
  • docker-compose.yml: Container orchestration configuration

Key Files

  • pom.xml: Parent Maven configuration file
  • README.md: Project documentation
  • run-facebook-app.sh: Convenience script for management
  • build_all.sh: Script to build all services
  • run_all_auth_tests.sh: Script to test authentication flows
  • test_message_app.sh: Script to test messaging features
  • test_user_app.sh: Script to test user management

Architectural Patterns

Microservices Architecture

The application follows key microservices principles:

  • Single Responsibility: Each service handles a specific business domain
  • Autonomous Services: Services can be developed, deployed, and scaled independently
  • Decentralized Data Management: Each service has its own database
  • Infrastructure Automation: Containerization and orchestration
  • Design for Failure: Resilience patterns and fault tolerance
  • Evolutionary Design: Services can evolve independently

API Gateway Pattern

The Nginx gateway implements:

  • Routing: Directs requests to appropriate services
  • Aggregation: Combines responses from multiple services (for complex operations)
  • Protocol Translation: Converts between protocols if needed
  • Offloading Cross-cutting Concerns: Handles authentication, logging, etc.

Event-Driven Architecture

The RabbitMQ messaging system enables:

  • Loose Coupling: Services communicate without direct dependencies
  • Asynchronous Processing: Non-blocking operations for better scalability
  • Event Sourcing: Recording all state changes as a sequence of events
  • CQRS: Separation of read and write operations for complex domains

Caching Strategies

The Redis caching layer implements:

  • Cache-Aside: Services look for data in cache first, then database
  • Write-Through: Updates cache when database is updated
  • Time-to-Live: Automatic expiration of cached data
  • Distributed Caching: Shared cache across service instances

Database Per Service

Each service owns its data with:

  • Private Tables: Each service has exclusive access to its database
  • API Encapsulation: Data is only accessed through service APIs
  • Polyglot Persistence: Different database types for different needs
  • Database Transactions: ACID guarantees within service boundaries

Fault Tolerance and Scalability

Resilience Strategies

The system implements multiple resilience strategies:

  • Load Balancing: Nginx distributes traffic across service instances

    • Round-robin distribution
    • Least connections algorithm
    • IP hash for session persistence
    • Health-check based routing
  • Service Discovery: Services can find each other dynamically

    • DNS-based service discovery in Docker
    • Health-aware service resolution
    • Automatic service registration
  • Health Monitoring: Automatic health checks detect service issues

    • Readiness probes: Determine if service can receive traffic
    • Liveness probes: Detect hung or deadlocked services
    • Custom health indicators: Application-specific health criteria
  • Horizontal Scaling: Additional service instances can be added for scaling

    • Stateless service design
    • Session externalization
    • Concurrent request handling
    • Data partitioning strategies

Circuit Breaker Implementation

The application uses Resilience4j for circuit breaking:

@CircuitBreaker(name = "userService", fallbackMethod = "getUserProfileFallback")
public UserProfile getUserProfile(String userId) {
    // Remote service call that might fail
}

public UserProfile getUserProfileFallback(String userId, Exception ex) {
    // Fallback implementation
}

Bulkhead Pattern

Services implement thread isolation:

@Bulkhead(name = "messagingOperations")
public void sendMessage(Message message) {
    // Message sending logic
}

Retry Mechanism

Failed operations are automatically retried:

@Retry(name = "searchIndex")
public SearchResult performSearch(String query) {
    // Search operation that might fail temporarily
}

Security Features

Authentication

The system uses JWT-based authentication:

  • Token Generation: Creates signed JWTs upon successful login
  • Token Validation: Verifies token signature and expiration
  • Claims Extraction: Retrieves user identity and roles from tokens
  • Refresh Mechanism: Allows obtaining new access tokens

Authorization

Role-based access control is implemented:

@PreAuthorize("hasRole('ADMIN') or @userSecurity.isOwner(authentication, #userId)")
public UserProfile updateUserProfile(String userId, UserProfileUpdate update) {
    // Update user profile
}

Data Protection

Sensitive data is protected:

  • Password Hashing: Uses bcrypt with configurable work factor
  • Field Encryption: Encrypts sensitive fields in the database
  • Data Masking: Hides sensitive data in logs and responses
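
Field encryption can be approached with a JPA attribute converter. The sketch below is a hedged illustration assuming an AES-256 key supplied as a Base64-encoded environment variable (FIELD_ENCRYPTION_KEY is an invented name); a production setup would add key rotation and proper key management.

import jakarta.persistence.AttributeConverter;
import jakarta.persistence.Converter;

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.security.SecureRandom;
import java.util.Base64;

@Converter
public class EncryptedStringConverter implements AttributeConverter<String, String> {

    // Assumed to be a Base64-encoded 256-bit key injected via the environment
    private static final SecretKeySpec KEY =
            new SecretKeySpec(Base64.getDecoder().decode(System.getenv("FIELD_ENCRYPTION_KEY")), "AES");
    private static final SecureRandom RANDOM = new SecureRandom();

    @Override
    public String convertToDatabaseColumn(String plaintext) {
        try {
            byte[] iv = new byte[12];
            RANDOM.nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, KEY, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal(plaintext.getBytes());
            // Store the IV together with the ciphertext so the value can be decrypted later
            return Base64.getEncoder().encodeToString(
                    ByteBuffer.allocate(iv.length + ciphertext.length).put(iv).put(ciphertext).array());
        } catch (Exception e) {
            throw new IllegalStateException("Field encryption failed", e);
        }
    }

    @Override
    public String convertToEntityAttribute(String stored) {
        try {
            byte[] bytes = Base64.getDecoder().decode(stored);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, KEY, new GCMParameterSpec(128, bytes, 0, 12));
            return new String(cipher.doFinal(bytes, 12, bytes.length - 12));
        } catch (Exception e) {
            throw new IllegalStateException("Field decryption failed", e);
        }
    }
}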

Rate Limiting

API usage is limited to prevent abuse:

# Rate limiting configuration in Nginx
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=5r/s;

location /api/ {
    limit_req zone=api_limit burst=10 nodelay;
    proxy_pass http://backend;
}

Production Considerations

Deployment Strategies

For production environments, consider:

  • Blue-Green Deployment: Maintain two identical environments for zero-downtime updates
  • Canary Releases: Gradually roll out changes to a subset of users
  • Feature Toggles: Enable/disable features without redeploying

Monitoring and Logging

Implement comprehensive monitoring:

  • Centralized Logging: Aggregate logs with ELK stack or similar
  • Metrics Collection: Gather performance metrics with Prometheus
  • Distributed Tracing: Track requests across services with Jaeger
  • Alerting: Set up alerts for abnormal conditions
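
Business-level metrics can be recorded with Micrometer and scraped by Prometheus through the actuator endpoint; the sketch below is illustrative and the metric names are assumptions.

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.stereotype.Component;

import java.util.function.Supplier;

@Component
public class WallMetrics {

    private final Counter postsCreated;
    private final Timer feedGeneration;

    public WallMetrics(MeterRegistry registry) {
        // Exposed to Prometheus as wall_posts_created_total and wall_feed_generation_seconds
        this.postsCreated = Counter.builder("wall.posts.created")
                .description("Number of posts created")
                .register(registry);
        this.feedGeneration = Timer.builder("wall.feed.generation")
                .description("Time spent assembling user feeds")
                .register(registry);
    }

    public void recordPostCreated() {
        postsCreated.increment();
    }

    public <T> T timeFeedGeneration(Supplier<T> feedSupplier) {
        return feedGeneration.record(feedSupplier);
    }
}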

Scalability Planning

Plan for growth with:

  • Database Sharding: Horizontally partition data for better performance
  • Caching Tiers: Implement multi-level caching for frequently accessed data
  • CDN Integration: Serve static assets from content delivery networks
  • Autoscaling: Dynamically adjust resources based on load

Security Hardening

Additional security for production:

  • API Gateway Security: Implement WAF functionality
  • Network Segmentation: Isolate services in separate security groups
  • Secrets Management: Use a vault for sensitive configuration
  • Regular Security Audits: Conduct penetration testing and code reviews

Backup and Disaster Recovery

Implement data protection:

  • Regular Backups: Schedule database backups with retention policies
  • Point-in-Time Recovery: Enable transaction logs for fine-grained recovery
  • Multi-Region Deployment: Replicate services across geographic regions
  • Disaster Recovery Plan: Document procedures for various failure scenarios

Future Enhancements

Planned Technical Improvements

  1. Service Mesh Integration

    • Implement Istio or Linkerd for enhanced service-to-service communication
    • Add mutual TLS between services
    • Improve traffic management capabilities
  2. Advanced Observability

    • Implement distributed tracing with Jaeger or Zipkin
    • Add business metrics collection
    • Create comprehensive dashboards with Grafana
  3. CI/CD Pipeline Enhancement

    • Implement GitOps workflow with ArgoCD
    • Add automatic canary analysis
    • Integrate security scanning in the CI pipeline
  4. Security Enhancements

    • Implement OAuth2.0 and OpenID Connect
    • Add two-factor authentication
    • Enhance API security with JWT scope validation
  5. k8s Deployment

    • Migrate from Docker Compose to k8s
    • Implement horizontal pod autoscaling
    • Add custom resource definitions for application-specific resources

Planned Feature Enhancements

  1. Enhanced Social Features

    • Friend recommendations system
    • Advanced privacy controls
    • Group functionality
  2. Multimedia Enhancements

    • Video streaming capabilities
    • Image processing and filters
    • Rich media embedding
  3. AI-Powered Features

    • Content recommendation engine
    • Automated content moderation
    • Smart reply suggestions
  4. Mobile Integration

    • Native mobile API endpoints
    • Push notification services
    • Offline capabilities
  5. Analytics Platform

    • User behavior analytics
    • Content performance metrics
    • A/B testing framework

Troubleshooting

Common Issues

  1. Services failing to start:

    • Check the logs with docker-compose logs <service_name>
    • Verify that environment variables are set correctly
    • Ensure volume mounts are properly configured
    • Check for port conflicts with other applications
  2. Database connection errors:

    • Ensure database containers are running and healthy
    • Verify connection strings in application properties
    • Check database credentials
    • Confirm that database initialization scripts ran correctly
  3. Network connectivity issues:

    • Verify that all services are on the same Docker network
    • Check firewall settings and network policies
    • Ensure service names are correctly referenced in application configs
    • Test connectivity between containers with ping or curl
  4. Authentication failures:

    • Check JWT secret configuration across services
    • Verify token expiration settings
    • Ensure clocks are synchronized across services
    • Confirm that roles and permissions are correctly assigned

Debugging

For debugging, you can:

  1. Access individual service logs: docker-compose logs -f <service_name>
  2. Check container health: docker ps (look for "health" status)
  3. Access service metrics: http://localhost/actuator/metrics/<service> (when available)
  4. Use remote debugging:
    docker-compose -f docker-compose.yml -f docker-compose.debug.yml up -d <service_name>
  5. Inspect container environment:
    docker exec -it <container_name> /bin/sh

Resolving Specific Issues

Nginx Gateway Issues

If the API gateway is not routing correctly:

  1. Check Nginx configuration: docker exec -it nginx-gateway nginx -t
  2. Verify upstream service health: curl http://user_app:8081/actuator/health
  3. Test direct service access (bypass Nginx): curl http://localhost:8081/api/users

Redis Connection Problems

If services cannot connect to Redis:

  1. Check Redis logs: docker-compose logs redis
  2. Verify Redis is running: docker exec -it redis redis-cli ping
  3. Test connection from service container:
    docker exec -it user_app /bin/sh
    nc -z redis 6379 && echo "Redis reachable"   # assumes a netcat binary is available in the image

RabbitMQ Message Delivery Issues

If events are not being processed:

  1. Check RabbitMQ management UI: http://localhost:15672
  2. Verify exchanges and queues are created
  3. Check for messages stuck in queues
  4. Review consumer acknowledgements

Contributing

Development Setup

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Setup local environment:
    # Install dependencies
    ./mvnw clean install -DskipTests
    
    # Setup pre-commit hooks
    cp hooks/pre-commit .git/hooks/
    chmod +x .git/hooks/pre-commit

Coding Standards

The project follows these conventions:

  • Java Code Style: Google Java Style Guide
  • REST API Design: REST API Design Best Practices
  • Commit Messages: Conventional Commits format
  • Documentation: Javadoc for public APIs, README updates for features

Pull Request Process

  1. Commit your changes (git commit -m 'Add some amazing feature')
  2. Push to the branch (git push origin feature/amazing-feature)
  3. Open a Pull Request with:
    • Clear description of changes
    • Reference to related issue
    • Screenshots or examples (when applicable)
    • Updated documentation
  4. Address review comments
  5. Merge after approval

Code of Conduct

Contributors are expected to adhere to the project's Code of Conduct:

  • Respectful communication
  • Constructive feedback
  • Inclusive language
  • Collaborative problem solving

License

This project is licensed under the MIT License - see the LICENSE file for details.

CI/CD with GitHub Actions

This project uses GitHub Actions for Continuous Integration and Continuous Deployment.

CI/CD Workflows

CI Pipeline

Runs on every push to the main branch and pull requests:

  • Builds and tests all microservices
  • Runs JavaScript tests
  • Uploads test reports as artifacts

Docker Build

Builds and publishes Docker images:

  • Triggered by pushes to the main branch and tags
  • Builds Docker images for all services
  • Pushes to GitHub Container Registry

Kubernetes Deployment

Deploys to Kubernetes:

  • Runs after successful Docker builds
  • Can be triggered manually
  • Supports staging and production environments

Health Check

Monitors deployed services:

  • Runs on a schedule (hourly)
  • Checks Kubernetes pods status
  • Tests service health endpoints
  • Sends notifications on failures

Setting Up GitHub Actions

To use these workflows, set up the following secrets in your GitHub repository:

  • JWT_SECRET: Secret key for JWT token generation
  • KUBECONFIG: Kubernetes configuration file content
  • SLACK_WEBHOOK_URL: URL for Slack notifications
  • POSTGRES_USER_PASSWORD: Password for user database
  • POSTGRES_SEARCH_PASSWORD: Password for search database
  • RABBITMQ_USERNAME: RabbitMQ username
  • RABBITMQ_PASSWORD: RabbitMQ password

Build and Deployment

Local Build and Deployment

The project includes a comprehensive build script (build.sh) that automates the build and deployment process:

# Basic Maven build
./build.sh

# Skip tests during build
./build.sh --skip-tests

# Build Maven artifacts and Docker images
./build.sh --skip-tests --docker

# Build and deploy with Docker Compose
./build.sh --skip-tests --deploy

# Build and deploy to Kubernetes
./build.sh --skip-tests --k8s

Docker Compose Deployment

For local development, the easiest way to get the entire system running is using Docker Compose:

# Start all services
docker-compose up -d

# Check service status
docker-compose ps

# View logs
docker-compose logs -f

# Stop all services
docker-compose down

Kubernetes Deployment

For production or staging environments, the application can be deployed to Kubernetes:

# Interactive setup
./k8s-setup.sh

# Automated setup for CI/CD environments
./k8s-setup-auto.sh --non-interactive

CI/CD Pipeline

This project uses GitHub Actions for Continuous Integration and Continuous Deployment. See CI/CD Documentation for details.

The CI/CD pipeline:

  • Builds all microservices
  • Runs automated tests
  • Builds Docker images
  • Deploys to Kubernetes
  • Validates deployment health
