A high-performance, reliable network connection pool management system for Go applications.
- Features
- Installation
- Quick Start
- Usage
- Security Features
- Connection Modes
- Connection Keep-Alive
- Dynamic Adjustment
- Advanced Usage
- Performance Considerations
- Troubleshooting
- License
- Thread-safe connection management with mutex protection
- Support for both client and server connection pools
- Dynamic capacity adjustment based on usage patterns
- Automatic connection health monitoring
- Connection keep-alive management for maintaining active connections
- Multiple TLS security modes (none, self-signed, verified)
- Connection identification and tracking
- Single and multi-connection modes for different use cases
- Graceful error handling and recovery
- Configurable connection creation intervals
- Auto-reconnection with exponential backoff
- Connection activity validation
go get github.com/NodePassProject/pool
Here's a minimal example to get you started:
package main
import (
"net"
"time"
"github.com/NodePassProject/pool"
)
func main() {
// Create a client pool
dialer := func() (net.Conn, error) {
return net.Dial("tcp", "example.com:8080")
}
clientPool := pool.NewClientPool(
5, 20, // min/max capacity
500*time.Millisecond, 5*time.Second, // min/max intervals
30*time.Second, // keep-alive period
"0", // TLS mode
false, // isSingle mode
"example.com", // hostname
dialer,
)
// Start the pool manager
go clientPool.ClientManager()
// Use the pool
conn := clientPool.ClientGet("connection-id")
if conn != nil {
// Use connection...
defer conn.Close()
}
// Clean up
defer clientPool.Close()
}
package main
import (
"net"
"time"
"github.com/NodePassProject/pool"
)
func main() {
// Create a dialer function
dialer := func() (net.Conn, error) {
return net.Dial("tcp", "example.com:8080")
}
// Create a new client pool with:
// - Minimum capacity: 5 connections
// - Maximum capacity: 20 connections
// - Minimum interval: 500ms between connection attempts
// - Maximum interval: 5s between connection attempts
// - Keep-alive period: 30s for connection health monitoring
// - TLS mode: "2" (verified certificates)
// - Single mode: false (multi-connection mode)
// - Hostname for certificate verification: "example.com"
clientPool := pool.NewClientPool(
5, 20,
500*time.Millisecond, 5*time.Second,
30*time.Second,
"2",
false,
"example.com",
dialer,
)
// Start the client manager (usually in a goroutine)
go clientPool.ClientManager()
// Get a connection by ID (usually received from the server)
conn := clientPool.ClientGet("connection-id")
// Use the connection...
// When finished with the pool
clientPool.Close()
}
package main
import (
"crypto/tls"
"net"
"time"
"github.com/NodePassProject/pool"
)
func main() {
// Create a listener
listener, err := net.Listen("tcp", ":8080")
if err != nil {
panic(err)
}
// Optional: Create a TLS config
tlsConfig := &tls.Config{
// Configure TLS settings
MinVersion: tls.VersionTLS13,
}
// Create a new server pool with:
// - Maximum capacity: 20 connections
// - Restricted client IP (optional, "" for any IP, "192.168.1.10" to only allow that specific IP)
// - TLS config (optional, nil for no TLS)
// - The listener created above
// - Keep-alive period: 30s for connection health monitoring
serverPool := pool.NewServerPool(20, "192.168.1.10", tlsConfig, listener, 30*time.Second)
// Start the server manager (usually in a goroutine)
go serverPool.ServerManager()
// Get a new connection from the pool (blocks until available)
id, conn := serverPool.ServerGet()
// Use the connection...
// When finished with the pool
serverPool.Close()
}
When you finish using a connection, you can return it to the pool using the Put
method. This helps avoid connection leaks and maximizes reuse:
// After using the connection
pool.Put(id, conn)
- id is the connection ID (for multi-connection mode only).
- conn is the net.Conn object you want to return.

If the pool is full or the connection is already present, Put will close the connection automatically.

Best Practice: Always call Put (or Close if not reusing) after you are done with a connection to prevent resource leaks.
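A typical server-side cycle is get, use, then return (a minimal sketch; serverPool is assumed to have been created with NewServerPool as shown earlier, and error handling is abbreviated):

// Get a connection, use it, then hand it back to the pool
id, conn := serverPool.ServerGet()
if _, err := conn.Write([]byte("ping")); err != nil {
    // Broken connection: close it and record the failure instead of returning it
    conn.Close()
    serverPool.AddError()
} else {
    // Healthy connection: return it to the pool for reuse
    serverPool.Put(id, conn)
}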
// Check if the pool is ready
if clientPool.Ready() {
// The pool is initialized and ready for use
}
// Get current active connection count
activeConnections := clientPool.Active()
// Get current capacity setting
capacity := clientPool.Capacity()
// Get current connection creation interval
interval := clientPool.Interval()
// Manually flush all connections (rarely needed)
clientPool.Flush()
// Record an error (increases internal error counter)
clientPool.AddError()
// Get the current error count
errorCount := clientPool.ErrorCount()
// Reset the error count to zero
clientPool.ResetError()
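These methods can be combined into a simple periodic health log (a sketch using the standard log and time packages; the 30-second interval is arbitrary):

// Periodically log pool health so capacity and error trends are visible
go func() {
    ticker := time.NewTicker(30 * time.Second)
    defer ticker.Stop()
    for range ticker.C {
        log.Printf("pool: active=%v capacity=%v interval=%v errors=%v",
            clientPool.Active(), clientPool.Capacity(),
            clientPool.Interval(), clientPool.ErrorCount())
    }
}()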
The NewServerPool
function allows you to restrict incoming connections to a specific client IP address. The function signature is:
func NewServerPool(
maxCap int,
clientIP string,
tlsConfig *tls.Config,
listener net.Listener,
keepAlive time.Duration,
) *Pool
- maxCap: Maximum pool capacity.
- clientIP: Restrict allowed client IP ("" for any).
- tlsConfig: TLS configuration (can be nil).
- listener: TCP listener.
- keepAlive: Keep-alive period.
When the clientIP
parameter is set:
- All connections from other IP addresses will be immediately closed.
- This provides an additional layer of security beyond network firewalls.
- Particularly useful for internal services or dedicated client-server applications.
To allow connections from any IP address, use an empty string:
// Create a server pool that accepts connections from any IP
serverPool := pool.NewServerPool(20, "", tlsConfig, listener, 30*time.Second)
Mode | Description | Security Level | Use Case |
---|---|---|---|
"0" | No TLS (plain TCP) | None | Internal networks, maximum performance |
"1" | Self-signed certificates | Medium | Development, testing environments |
"2" | Verified certificates | High | Production, public networks |
// No TLS - maximum performance
clientPool := pool.NewClientPool(5, 20, minIvl, maxIvl, keepAlive, "0", false, "example.com", dialer)
// Self-signed TLS - development/testing
clientPool := pool.NewClientPool(5, 20, minIvl, maxIvl, keepAlive, "1", false, "example.com", dialer)
// Verified TLS - production
clientPool := pool.NewClientPool(5, 20, minIvl, maxIvl, keepAlive, "2", false, "example.com", dialer)
Implementation Details (from pool.go):
- Connection ID Generation:
  - In multi-connection mode, the server generates an 8-byte ID and sends it to the client after the TLS handshake.
  - In single-connection mode, the client generates its own ID using random bytes.
- Put Method:
  - Prevents duplicate connections in the pool.
  - If the pool is full or the connection is already present, the connection is closed automatically.
- Flush/Close:
  - Flush closes all connections and resets the pool.
  - Close cancels the context and flushes the pool.
- Dynamic Adjustment:
  - adjustInterval and adjustCapacity are used internally for pool optimization based on usage and success rate.
- isActive:
  - Checks if a connection is alive by sending an empty write with a short deadline.
- Error Handling:
  - AddError and ErrorCount are thread-safe and use mutex protection.
The pool supports two connection modes through the isSingle
parameter:
In multi-connection mode (isSingle = false), the pool manages multiple connections with server-generated IDs:
// Multi-connection mode - server generates connection IDs
clientPool := pool.NewClientPool(
5, 20,
500*time.Millisecond, 5*time.Second,
30*time.Second,
"2",
false, // Multi-connection mode
"example.com",
dialer,
)
// Get connection by server-provided ID
conn := clientPool.ClientGet("server-provided-id")
Features:
- Server generates unique 8-byte connection IDs
- Client reads ID from connection after TLS handshake
- Ideal for load balancing and connection tracking
- Better for complex distributed systems
In single-connection mode (isSingle = true), the pool generates its own IDs and manages connections independently:
// Single-connection mode - client generates connection IDs
clientPool := pool.NewClientPool(
5, 20,
500*time.Millisecond, 5*time.Second,
30*time.Second,
"0",
true, // Single-connection mode
"example.com",
dialer,
)
// Get any available connection (no specific ID needed)
conn := clientPool.ClientGet("")
Features:
- Client generates its own connection IDs
- No dependency on server-side ID generation
- Simpler connection management
- Better for simple client-server applications
Aspect | Multi-Connection (false) | Single-Connection (true) |
---|---|---|
ID Generation | Server-side | Client-side |
Connection Tracking | Server-controlled | Client-controlled |
Complexity | Higher | Lower |
Use Case | Distributed systems | Simple applications |
Load Balancing | Advanced | Basic |
The pool implements TCP keep-alive functionality to maintain connection health and detect broken connections:
- Automatic Keep-Alive: All connections automatically enable TCP keep-alive
- Configurable Period: Set custom keep-alive periods for both client and server pools
- Connection Health: Helps detect and remove dead connections from the pool
- Network Efficiency: Reduces unnecessary connection overhead
// Client pool with 30-second keep-alive
clientPool := pool.NewClientPool(
5, 20,
500*time.Millisecond, 5*time.Second,
30*time.Second, // Keep-alive period
"2", // TLS mode
false, // isSingle mode
"example.com", // hostname
dialer,
)
// Server pool with 60-second keep-alive
serverPool := pool.NewServerPool(
20, // Maximum capacity
"192.168.1.10", // Allowed client IP
tlsConfig,
listener,
60*time.Second, // Keep-alive period
)
Period Range | Use Case | Pros | Cons |
---|---|---|---|
15-30s | High-frequency apps, real-time systems | Quick dead connection detection | Higher network overhead |
30-60s | General purpose applications | Balanced performance/overhead | Standard detection time |
60-120s | Low-frequency, batch processing | Minimal network overhead | Slower dead connection detection |
Recommendations:
- Web applications: 30-60 seconds
- Real-time systems: 15-30 seconds
- Batch processing: 60-120 seconds
- Behind NAT/Firewall: Use shorter periods (15-30s)
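The pool enables keep-alive for you using the period you pass in, so a plain dialer is normally sufficient. For reference, doing the same thing manually on a raw TCP connection with the standard library looks roughly like this (an illustrative sketch, not the pool's internal code):

// keepAliveDialer enables TCP keep-alive on each dialed connection
keepAliveDialer := func() (net.Conn, error) {
    conn, err := net.Dial("tcp", "example.com:8080")
    if err != nil {
        return nil, err
    }
    if tcpConn, ok := conn.(*net.TCPConn); ok {
        tcpConn.SetKeepAlive(true)                   // turn on keep-alive probes
        tcpConn.SetKeepAlivePeriod(30 * time.Second) // probe interval
    }
    return conn, nil
}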
The pool automatically adjusts:
- Connection creation intervals, based on the idle connection count (via the adjustInterval method):
  - Decreases the interval when fewer than 20% of connections are idle (demand is high)
  - Increases the interval when more than 80% of connections are idle (demand is low)
- Connection capacity, based on the connection creation success rate (via the adjustCapacity method):
  - Decreases capacity when the success rate is low (< 20%)
  - Increases capacity when the success rate is high (> 80%)
These adjustments ensure optimal resource usage:
// Check current capacity and interval settings
currentCapacity := clientPool.Capacity()
currentInterval := clientPool.Interval()
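The internal logic is roughly equivalent to the sketch below (illustrative only; the 20%/80% thresholds come from the description above, while the step size, clamping, and function name are assumptions, not the library's actual code):

// adjustIntervalSketch mirrors the documented heuristic: when few connections
// are idle, create new ones faster; when most are idle, slow creation down.
func adjustIntervalSketch(idle, capacity int, interval, minIvl, maxIvl time.Duration) time.Duration {
    if capacity == 0 {
        return interval
    }
    idleRatio := float64(idle) / float64(capacity)
    switch {
    case idleRatio < 0.2 && interval > minIvl:
        interval -= 100 * time.Millisecond // high demand: speed up connection creation
    case idleRatio > 0.8 && interval < maxIvl:
        interval += 100 * time.Millisecond // low demand: slow down connection creation
    }
    return interval
}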
package main
import (
"log"
"net"
"time"
"github.com/NodePassProject/pool"
"github.com/NodePassProject/logs"
)
func main() {
logger := logs.NewLogger(logs.Info, true)
// Declare the pool first so the dialer closure can reference it
var clientPool *pool.Pool
clientPool = pool.NewClientPool(
5, 20,
500*time.Millisecond, 5*time.Second,
30*time.Second,
"2",
false,
"example.com",
func() (net.Conn, error) {
conn, err := net.Dial("tcp", "example.com:8080")
if err != nil {
// Log the error
logger.Error("Connection failed: %v", err)
// Record the error in the pool
clientPool.AddError()
}
return conn, err
},
)
go clientPool.ClientManager()
// Your application logic...
}
package main
import (
"context"
"net"
"time"
"github.com/NodePassProject/pool"
)
func main() {
// Create a context that can be cancelled
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
clientPool := pool.NewClientPool(
5, 20,
500*time.Millisecond, 5*time.Second,
30*time.Second,
"2",
false,
"example.com",
func() (net.Conn, error) {
// Use context-aware dialer
dialer := net.Dialer{Timeout: 5 * time.Second}
return dialer.DialContext(ctx, "tcp", "example.com:8080")
},
)
go clientPool.ClientManager()
// When needed to stop the pool:
// cancel()
// clientPool.Close()
}
package main
import (
"net"
"sync/atomic"
"time"
"github.com/NodePassProject/pool"
)
func main() {
// Create pools for different servers
serverAddresses := []string{
"server1.example.com:8080",
"server2.example.com:8080",
"server3.example.com:8080",
}
pools := make([]*pool.Pool, len(serverAddresses))
for i, addr := range serverAddresses {
serverAddr := addr // Create local copy for closure
host, _, _ := net.SplitHostPort(serverAddr) // Extract hostname for certificate verification
pools[i] = pool.NewClientPool(
5, 20,
500*time.Millisecond, 5*time.Second,
30*time.Second,
"2",
false,
host,
func() (net.Conn, error) {
return net.Dial("tcp", serverAddr)
},
)
go pools[i].ClientManager()
}
// Simple round-robin load balancer
var counter int32 = 0
getNextPool := func() *pool.Pool {
next := atomic.AddInt32(&counter, 1) % int32(len(pools))
return pools[next]
}
// Usage
conn := getNextPool().ClientGet("connection-id") // ID provided by the server in multi-connection mode
// Use connection...
// When done with all pools
for _, p := range pools {
p.Close()
}
}
Pool Size | Pros | Cons | Best For |
---|---|---|---|
Too Small (< 5) | Low resource usage | Connection contention, delays | Low-traffic applications |
Optimal (5-50) | Balanced performance | Requires monitoring | Most applications |
Too Large (> 100) | No contention | Resource waste, server overload | High-traffic, many clients |
Sizing Guidelines:
- Start with minCap = baseline_load and maxCap = peak_load × 1.5 (see the worked example below)
- Monitor connection usage with pool.Active() and pool.Capacity()
- Adjust based on observed patterns
Aspect | No TLS | Self-signed TLS | Verified TLS |
---|---|---|---|
Handshake Time | ~1ms | ~10-50ms | ~50-100ms |
Memory Usage | Low | Medium | High |
CPU Overhead | Minimal | Medium | High |
Throughput | Maximum | ~80% of max | ~60% of max |
The isActive
method performs lightweight connection health checks:
- Cost: ~1ms per validation
- Frequency: On connection retrieval
- Trade-off: Reliability vs. slight performance overhead
For ultra-high-throughput systems, consider implementing custom validation strategies.
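For example, if your application holds connections outside the pool for longer periods, one option is a throttled probe that re-checks them only occasionally (a hypothetical sketch layered on top of the pool, not a built-in feature; the 5-second grace window and 50ms deadline are arbitrary, and the usual net, sync, and time imports are assumed):

// lastValidated records when each connection last passed a probe, so the
// ~1ms check can be skipped for connections used very recently.
// Note: entries are never evicted in this simple sketch.
var lastValidated sync.Map // map[net.Conn]time.Time

func getValidated(clientPool *pool.Pool, id string) net.Conn {
    conn := clientPool.ClientGet(id)
    if conn == nil {
        return nil
    }
    if t, ok := lastValidated.Load(conn); ok && time.Since(t.(time.Time)) < 5*time.Second {
        return conn // probed recently enough; skip the check
    }
    // Cheap liveness probe: empty write with a short deadline
    // (mirrors the isActive approach described above)
    conn.SetWriteDeadline(time.Now().Add(50 * time.Millisecond))
    if _, err := conn.Write(nil); err != nil {
        conn.Close()
        return nil
    }
    conn.SetWriteDeadline(time.Time{}) // clear the deadline
    lastValidated.Store(conn, time.Now())
    return conn
}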
Symptoms: Connections fail to establish
Solutions:
- Check network connectivity to target host
- Verify server address and port are correct
- Increase connection timeout in dialer:
dialer := func() (net.Conn, error) {
d := net.Dialer{Timeout: 10 * time.Second}
return d.Dial("tcp", "example.com:8080")
}
Symptoms: TLS connections fail with certificate errors
Solutions:
- Verify certificate validity and expiration
- Check hostname matches certificate Common Name
- For testing, temporarily use TLS mode "1":
pool := pool.NewClientPool(5, 20, minIvl, maxIvl, keepAlive, "1", false, hostname, dialer)
Symptoms: ServerGet()
blocks indefinitely
Solutions:
- Increase maximum capacity
- Reduce connection hold time in application code
- Check for connection leaks (ensure connections are properly closed)
- Monitor with pool.Active() and pool.ErrorCount()
Symptoms: Frequent connection failures
Solutions:
- Implement exponential backoff in the dialer (see the sketch after this list)
- Monitor server-side issues
- Track errors with pool.AddError() and pool.ErrorCount()
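A dialer with exponential backoff might look like this (a sketch; the attempt count and delays are arbitrary):

// backoffDialer retries a failed dial with exponentially growing delays
// before giving up, so brief outages do not flood the pool's error counter
backoffDialer := func() (net.Conn, error) {
    var lastErr error
    delay := 200 * time.Millisecond
    for attempt := 0; attempt < 5; attempt++ {
        conn, err := net.Dial("tcp", "example.com:8080")
        if err == nil {
            return conn, nil
        }
        lastErr = err
        time.Sleep(delay)
        delay *= 2 // 200ms, 400ms, 800ms, 1.6s, 3.2s
    }
    return nil, lastErr
}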
- Network connectivity: Can you ping/telnet to the target?
- Port availability: Is the target port open and listening?
- Certificate validity: For TLS, are certificates valid and not expired?
- Pool capacity: Is maxCap sufficient for your load?
- Connection leaks: Are you properly closing connections?
- Error monitoring: Are you tracking pool.ErrorCount()?
Add logging at key points for better debugging:
dialer := func() (net.Conn, error) {
log.Printf("Attempting connection to %s", address)
conn, err := net.Dial("tcp", address)
if err != nil {
log.Printf("Connection failed: %v", err)
clientPool.AddError() // Track the error in the pool
} else {
log.Printf("Connection established successfully")
}
return conn, err
}
Copyright (c) 2025, NodePassProject. Licensed under the BSD 3-Clause License. See the LICENSE file for details.