It seems that there is no write-queue limiting in this library.
Once the connection is created and we start logging to the logstash socket, if logstash slows down to a trickle the pending writes back up to such a degree that you run out of heap.
Two possible solutions:
- A write queue for the socket, with an async callback fired when the socket has flushed some of the queue, to resume sending. The queue is limited to some size x, and once it is full further logs are dropped. This would also help with batching (an optimisation this library really needs); see the sketch after this list.
- A write buffer ring (the LMAX Disruptor in multi-threaded languages is awesome): a size-limited array used as a ring buffer that log messages are written into.
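A minimal sketch of the write-queue option, assuming an already-connected net.Socket named socket pointed at a logstash TCP input; MAX_QUEUE, enqueue and flush are hypothetical names just for illustration:

const net = require('net');

// assumed endpoint for a logstash tcp input; adjust to your setup
const socket = net.connect({ host: 'localhost', port: 5000 });

const MAX_QUEUE = 1000;          // hard cap on pending lines; beyond this, logs are dropped
const queue = [];
let waitingForDrain = false;

const flush = () => {
  while (queue.length > 0) {
    const ok = socket.write(queue.shift());
    if (!ok) {
      // the stream buffer is full: stop and resume when the socket drains
      waitingForDrain = true;
      socket.once('drain', () => {
        waitingForDrain = false;
        flush();
      });
      return;
    }
  }
};

const enqueue = (line) => {
  if (queue.length >= MAX_QUEUE) return;   // queue full: drop the log line
  queue.push(line);
  if (!waitingForDrain) flush();
};

socket.write() returning false plus the 'drain' event is Node's built-in back-pressure signal, so no polling is needed.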
Simplified, untested pseudo-code for the ring-buffer option. Note that older entries just get stomped on, so if you run out of space only the most recent logs end up getting sent. You could make it drop new entries instead, but that is slightly more complex.
const BUFFER_SIZE = 1024;                          // size-limited ring
const buffer = new Array(BUFFER_SIZE).fill(null);
let writeIndex = 0;
let readIndex = 0;

const write = (x) => {
  // could drop new entries here instead if the slot is non-null (slightly slower and probably not necessary)
  buffer[writeIndex % buffer.length] = x;
  writeIndex++;
};

const nextBuffer = () => {
  const x = buffer[readIndex % buffer.length];
  buffer[readIndex % buffer.length] = null;
  readIndex++;
  return x;
};

// called when the socket signals it can accept more data again
const fullSocket = () => {
  let x;
  while ((x = nextBuffer()) != null) {
    // or do some batching here
    socket.write(x);                               // socket: an already-connected net.Socket
  }
};
I'm unsure whether consecutive writes will end up in the same TCP buffer or be sent as separate packets; I'm not too familiar with Node.js sockets.
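For what it's worth, writable streams in Node do offer a batching hook: cork() holds subsequent write() calls in memory and uncork() flushes them together (typically as a single writev), although the OS still decides the actual TCP packet boundaries. A hedged variant of the flush loop above, reusing the nextBuffer helper; flushBatched is a hypothetical name:

const flushBatched = () => {
  socket.cork();                     // hold writes so they can be flushed together
  let x;
  while ((x = nextBuffer()) != null) {
    socket.write(x);
  }
  // uncork on the next tick, as the Node docs recommend, so the queued writes coalesce
  process.nextTick(() => socket.uncork());
};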
I might have to switch to another library because of this, or try to create a pull request if I have the time.
Also, have you thought about integration with winston logging?
The logstash-winston logger seems a bit badly done, to be honest.
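For reference, winston 3 exposes a Transport base class (the winston-transport package) that makes a custom transport fairly small; LogstashTransport and the socket option below are hypothetical names, just a sketch of the shape such an integration could take:

const Transport = require('winston-transport');

class LogstashTransport extends Transport {
  constructor(opts) {
    super(opts);
    // assumed: an already-connected net.Socket to logstash is passed in
    this.socket = opts.socket;
  }

  log(info, callback) {
    setImmediate(() => this.emit('logged', info));
    // newline-delimited JSON suits logstash's tcp input with a json_lines codec
    this.socket.write(JSON.stringify(info) + '\n');
    callback();
  }
}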