Major Assignment 1: Parallel and Distributed Systems
By Group A:
- 13522020 Aurelius Justin Philo Fanjaya
- 13522090 Fedrianz Dharma
- 13522094 Andhika Tantyo Anugrah
Sekolah Teknik Elektro dan Informatika
Institut Teknologi Bandung
Semester VI, Academic Year 2024/2025
This project implements the Raft consensus algorithm in Java using gRPC for communication. The implementation provides a distributed key-value store with the following features:
- Heartbeat: Periodic messages from the leader for node health monitoring (see the timer sketch after this list)
- Leader Election: Automatic failover to a new leader when the current leader fails
- Log Replication: Replication of client commands to every node's log
- Membership Change: Mechanism to add and remove nodes from the cluster
- gRPC Communication: Remote Procedure Call protocol for inter-node communication
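Raft's liveness rests on two timers: a fixed heartbeat interval on the leader and a randomized election timeout on followers. The sketch below shows one way to wire them up in Java; the class and method names, and the 100 ms / 150-300 ms values, are illustrative assumptions, not the project's actual configuration.

```java
import java.util.Random;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical timer wiring for heartbeats and election timeouts.
public class RaftTimers {
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
    private final Random random = new Random();
    private ScheduledFuture<?> electionTask;

    // Leader: broadcast (possibly empty) AppendEntries on a fixed cadence.
    void startHeartbeat(Runnable sendHeartbeats) {
        scheduler.scheduleAtFixedRate(sendHeartbeats, 0, 100, TimeUnit.MILLISECONDS);
    }

    // Follower: every received heartbeat resets this timer; if it fires,
    // the node becomes a candidate. Randomizing the timeout makes it
    // unlikely that two followers start competing elections at once.
    synchronized void resetElectionTimeout(Runnable startElection) {
        if (electionTask != null) {
            electionTask.cancel(false);
        }
        long timeout = 150 + random.nextInt(150); // 150-300 ms
        electionTask = scheduler.schedule(startElection, timeout, TimeUnit.MILLISECONDS);
    }
}
```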
The client supports the following commands (their semantics are sketched in the code after this list):
- ping: Check the connection with the server (returns "PONG")
- get <key>: Get the value for a key (returns an empty string if the key doesn't exist)
- set <key> <value>: Set the value for a key (overwrites if it exists)
- strln <key>: Get the length of the value for a key
- del <key>: Delete the entry for a key (returns the deleted value)
- append <key> <value>: Append to the value of a key (creates the key if it doesn't exist)
- log: Display the leader's log entries
- quit: Exit the client
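As a rough illustration of the semantics above, here is a minimal in-memory sketch. The method names are assumptions; in the actual project, commands only reach the store after they are committed through the Raft log.

```java
import java.util.concurrent.ConcurrentHashMap;

// Minimal key-value semantics matching the command list above.
public class KVStoreSketch {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    String get(String key)               { return store.getOrDefault(key, ""); }
    String set(String key, String value) { store.put(key, value); return "OK"; }
    int strln(String key)                { return get(key).length(); }
    String del(String key)               { String old = store.remove(key); return old == null ? "" : old; }
    String append(String key, String v)  { return set(key, get(key) + v); }
}
```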
The project uses Maven for dependency management. The required dependencies are managed automatically through pom.xml (illustrative coordinates follow the list):
- gRPC Java (1.72.0)
- Protocol Buffers (3.25.1)
- Gson (2.13.1)
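For reference, the dependency block might look roughly like the snippet below. The artifact IDs are the usual ones for these libraries; the project's actual pom.xml may structure them differently (e.g., via a BOM or a protobuf build plugin).

```xml
<!-- Illustrative coordinates matching the versions listed above. -->
<dependencies>
  <dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-netty-shaded</artifactId>
    <version>1.72.0</version>
  </dependency>
  <dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-protobuf</artifactId>
    <version>1.72.0</version>
  </dependency>
  <dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-stub</artifactId>
    <version>1.72.0</version>
  </dependency>
  <dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>3.25.1</version>
  </dependency>
  <dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.13.1</version>
  </dependency>
</dependencies>
```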
Build the project:
mvn clean compile
Start the first server (running alone, it wins the initial election and becomes leader):
java -cp target/classes Main server localhost 8080
Start additional servers (join existing cluster):
java -cp target/classes Main server localhost 8081 localhost 8080
java -cp target/classes Main server localhost 8082 localhost 8080
Start the client with known server addresses:
java -cp target/classes Main client localhost:8080 localhost:8081 localhost:8082
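Based on the invocations above, Main presumably dispatches on its first argument. The sketch below is a hypothetical reading of that command-line interface, not the project's actual parsing code.

```java
// Hypothetical argument routing matching the commands shown above.
public class MainSketch {
    public static void main(String[] args) {
        switch (args[0]) {
            case "server" -> {
                String host = args[1];
                int port = Integer.parseInt(args[2]);
                if (args.length >= 5) {
                    // A contact pair means: start as a follower and ask the
                    // existing cluster to add this node via applyMembership.
                    String contactHost = args[3];
                    int contactPort = Integer.parseInt(args[4]);
                } // else: first node, wins the initial single-node election
            }
            case "client" -> {
                // Remaining args are host:port addresses the client may try.
            }
        }
    }
}
```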
Once the client is running, you can use these commands:
> ping
PONG
> set key1 value1
OK
> get key1
"value1"
> append key1 value2
OK
> get key1
"value1value2"
> strln key1
12
> del key1
"value1value2"
> get key1
""
> log
Leader Log:
[0] Term:1 Command:set key1 value1
[1] Term:1 Command:append key1 value2
[2] Term:1 Command:del key1
> quit
The implementation consists of four main components:
- RaftNode: Implements the Raft consensus algorithm (its per-node state is sketched after this list)
- KVStore: In-memory key-value storage application
- RaftServer: gRPC server wrapper
- RaftClient: gRPC client with leader discovery and redirection
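For orientation, the per-node state that RaftNode has to track looks roughly like the sketch below. The field names follow the Raft paper, not necessarily the project's code.

```java
import java.util.ArrayList;
import java.util.List;

// Per-node Raft state, following the Raft paper.
public class RaftNodeState {
    enum Role { FOLLOWER, CANDIDATE, LEADER }

    Role role = Role.FOLLOWER;

    // Persistent state (survives restarts in full Raft).
    int currentTerm = 0;     // latest term this node has seen
    String votedFor = null;  // candidate voted for in currentTerm, if any
    List<LogEntry> log = new ArrayList<>();

    // Volatile state.
    int commitIndex = 0;     // highest log index known to be committed
    int lastApplied = 0;     // highest log index applied to the KVStore

    record LogEntry(int term, String command) {}
}
```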
The gRPC service exposes the following RPCs (core message fields are sketched after this list):
- execute: Execute application commands (client interface)
- requestLog: Get leader's log entries (client interface)
- applyMembership: Add new nodes to cluster (client interface)
- appendEntries: Log replication (inter-node)
- requestVote: Leader election (inter-node)
- heartbeat: Health monitoring (inter-node)
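The actual message layout lives in the project's .proto file. As a sketch, the two core inter-node RPCs carry the fields prescribed by the Raft paper, shown here as plain Java records:

```java
import java.util.List;

// Raft-paper field layout for the core RPCs; the real wire format is
// generated from the project's .proto definitions.
record AppendEntriesRequest(
        int term,             // leader's term
        String leaderId,      // lets followers redirect clients
        int prevLogIndex,     // index of the entry preceding the new ones
        int prevLogTerm,      // term of that entry
        List<String> entries, // commands to replicate (empty = heartbeat)
        int leaderCommit) {}  // leader's commitIndex

record AppendEntriesResponse(int term, boolean success) {}

record RequestVoteRequest(int term, String candidateId,
                          int lastLogIndex, int lastLogTerm) {}

record RequestVoteResponse(int term, boolean voteGranted) {}
```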
The implementation ensures:
- Consistency: All nodes agree on the same log
- Partition Tolerance: System continues to operate despite network partitions
- Majority Requirement: Commands are committed only once they are replicated to a majority of nodes, ⌊n/2⌋ + 1 (see the commit-rule sketch after this list)
- Leader Redirection: Non-leader nodes redirect clients to the current leader (a hypothetical client retry loop is sketched further below)
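The commit rule fits in a few lines. This sketch mirrors the leader's per-follower matchIndex bookkeeping from the Raft paper; the names are the paper's, not necessarily the project's.

```java
public class CommitRule {
    // An entry at index n is committed once a majority of the cluster,
    // counting the leader itself, has replicated it. followerMatchIndex
    // mirrors the leader's per-follower matchIndex bookkeeping.
    static boolean isCommitted(int n, int[] followerMatchIndex, int clusterSize) {
        int replicas = 1; // the leader always holds its own entry
        for (int match : followerMatchIndex) {
            if (match >= n) replicas++;
        }
        return replicas >= clusterSize / 2 + 1; // floor(n/2) + 1
    }
}
```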
Failure scenarios are handled as follows:
- Leader Failure: Automatic leader election when heartbeats are missed
- Node Failure: Failed nodes are detected but not automatically removed from cluster
- Network Partitions: Minority partitions cannot commit new entries
- Split Brain Prevention: Only majority partitions can elect leaders
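Client-side redirection might look like the loop below. Note that the "REDIRECT" reply format and the send helper are purely hypothetical stand-ins for whatever RaftClient actually does over gRPC.

```java
import java.util.List;

// Hypothetical retry loop: try each known server; if a follower
// answers with a redirect, retry once against the named leader.
public class RedirectSketch {
    String execute(List<String> addresses, String command) {
        for (String addr : addresses) {
            String reply = send(addr, command); // stand-in for the gRPC call
            if (reply != null && reply.startsWith("REDIRECT ")) {
                reply = send(reply.substring("REDIRECT ".length()), command);
            }
            if (reply != null) return reply;    // unreachable servers return null
        }
        throw new IllegalStateException("no reachable leader");
    }

    private String send(String address, String command) {
        // In the real client this would be a gRPC execute() call.
        return null;
    }
}
```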
You can test the system by:
- Starting multiple servers
- Using the client to perform operations
- Stopping the leader server to test election
- Checking log consistency across nodes