Always manually send heartbeats when processing message batches in ruby-kafka

I have found it useful to always manually send heartbeats when processing message batches with ruby-kafka. Not doing so tends to cause instability in the consumer group at the worst possible time: when there is a performance degradation within the consumer loop. Consumer group instability means frequent consumer group rebalances. Frequent consumer group rebalances […]
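As a rough sketch of the idea (not code from the post): ruby-kafka's Kafka::Consumer exposes trigger_heartbeat, which sends a heartbeat to the group coordinator when one is due, so you can call it from inside a long-running batch loop. The broker address, topic, group name, and process_message helper below are illustrative assumptions.

```ruby
require "kafka"

kafka = Kafka.new(["kafka1:9092"], client_id: "my-app")
consumer = kafka.consumer(group_id: "my-consumer-group")
consumer.subscribe("my-topic")

consumer.each_batch do |batch|
  batch.messages.each do |message|
    process_message(message)    # hypothetical, potentially slow work
    consumer.trigger_heartbeat  # keep the group membership alive mid-batch
  end
end
```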

Sidekiq Parameter Object Pattern

In my previous post I explained that the signature of a Sidekiq job should be treated as an interface between the Sidekiq client and the Sidekiq server. Therefore, you should pay attention to backward compatibility whenever changes are made to that interface: adding/removing arguments, changing the class name, etc. The solutions I proposed to the […]
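To make the idea concrete, here is a minimal, hypothetical sketch of a parameter object that keeps the job signature down to a single hash; the class names and fields are my own illustration, not necessarily the exact shape described in the post.

```ruby
require "sidekiq"

# Hypothetical parameter object: the only thing crossing the client/server
# boundary is a plain hash with string keys, which is easier to keep
# backward compatible than a growing list of positional arguments.
class SendInvoiceParams
  attr_reader :invoice_id, :locale

  def initialize(invoice_id:, locale: "en")
    @invoice_id = invoice_id
    @locale = locale
  end

  def to_h
    { "invoice_id" => invoice_id, "locale" => locale }
  end

  def self.from_h(hash)
    new(invoice_id: hash.fetch("invoice_id"), locale: hash.fetch("locale", "en"))
  end
end

class SendInvoiceJob
  include Sidekiq::Worker

  def perform(params_hash)
    params = SendInvoiceParams.from_h(params_hash)
    # ... send the invoice using params.invoice_id and params.locale ...
  end
end

# Client side: enqueue the serialized parameter object.
SendInvoiceJob.perform_async(SendInvoiceParams.new(invoice_id: 42).to_h)
```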

An overview of Redis::Distributed – Redis client-side partitioning in Ruby

Redis::Distributed is the Ruby implementation of Redis client-side partitioning. Partitioning (also known as sharding) is the process of taking the dataset that would originally be held in a single Redis server and splitting it over multiple Redis servers. Partitioning allows you to distribute writes/reads over a set of nodes – horizontal scaling. […]
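As a brief illustration (assuming a redis gem version where Redis::Distributed ships with it), the client takes a list of node URLs and routes each key to a node in the ring; the node addresses and keys below are made up for the example.

```ruby
require "redis"
require "redis/distributed"

# Each key is hashed to one of these nodes by the client.
redis = Redis::Distributed.new([
  "redis://127.0.0.1:6380",
  "redis://127.0.0.1:6381",
  "redis://127.0.0.1:6382"
])

redis.set("user:1:name", "Ada")   # stored on whichever node the key maps to
redis.set("user:2:name", "Alan")  # possibly stored on a different node
redis.get("user:1:name")          # the client resolves the same node again
```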

How to implement backpressure and load shedding?

Backpressure and load shedding are methods you can use to mitigate queue overload. These methods kick in automatically, so your system needs enough instrumentation to know when a queue is overloaded. For example, we can have a queue object which responds to the overloaded? message (i.e. queue.overloaded?). The definition of queue […]
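A minimal sketch of how such a queue wrapper might look; the class, the depth limit, and the two producer-facing methods are assumptions for illustration, not the implementation from the post.

```ruby
# Hypothetical queue wrapper exposing overloaded? plus two producer-side
# strategies: load shedding (drop the message) and backpressure (slow down).
class InstrumentedQueue
  MAX_DEPTH = 10_000 # illustrative limit

  def initialize(store)
    @store = store # anything responding to #size and #push
  end

  def overloaded?
    @store.size >= MAX_DEPTH
  end

  # Load shedding: drop the message when the queue is overloaded.
  def push_or_shed(message)
    return :shed if overloaded?

    @store.push(message)
    :enqueued
  end

  # Backpressure: block the producer until the queue drains below the limit.
  def push_with_backpressure(message, wait: 0.1)
    sleep(wait) while overloaded?
    @store.push(message)
  end
end
```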

Why do you need backpressure/load shedding for queues?

Backpressure (slowing down producers) and load shedding (dropping messages) are two of the methods you can use to mitigate queue overload. They are reaction mechanisms your producers and/or consumers take automatically during queue overload. These methods are useful because they enforce limits on your queues. A limit is […]

Tracking queue metrics with the ruby-kafka gem

Kafka may be used as a queue to send messages between different systems, so the relevant metrics should be collected. The ruby-kafka gem has excellent support for tracking the most important queue metrics. It has out-of-the-box instrumentation for Statsd and Datadog. It also has instrumentation hooks which rely on ActiveSupport::Notifications. […]
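For illustration, a hedged sketch of subscribing to one of those hooks: the event name follows ruby-kafka's "<event>.<component>.kafka" naming convention and the payload key is my best recollection of the gem's documentation, so treat both as assumptions; MyMetrics is a hypothetical reporter, not part of any gem.

```ruby
require "active_support/notifications"

# Subscribe to batch-processing events emitted by the ruby-kafka consumer.
ActiveSupport::Notifications.subscribe("process_batch.consumer.kafka") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)

  # MyMetrics is a stand-in for your Statsd/Datadog client.
  MyMetrics.timing("kafka.consumer.process_batch.duration", event.duration)
  MyMetrics.gauge("kafka.consumer.process_batch.messages",
                  event.payload[:message_count])
end

# Alternatively, enable the reporters that ship with the gem:
# require "kafka/statsd"
# require "kafka/datadog"
```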

Tracking queue metrics with Sidekiq

Sidekiq is a critical component of many Rails applications, so its metrics should be appropriately collected. For example, imagine you have a Sidekiq job which sends password reset e-mails to customers. If that job’s queue has a latency of 1 hour, you want to know about it because it has a significant impact on a […]
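As a quick illustration of the kind of metric involved, Sidekiq's API exposes per-queue latency (the age of the oldest pending job, in seconds); the reporting loop and MyMetrics below are hypothetical additions for the example, and the queue name is made up.

```ruby
require "sidekiq/api"

queue = Sidekiq::Queue.new("mailers")
queue.latency # seconds the oldest job has been waiting (e.g. 3600.0 = 1 hour)
queue.size    # number of jobs currently enqueued

# Report every queue's latency and size to a metrics backend.
Sidekiq::Queue.all.each do |q|
  MyMetrics.gauge("sidekiq.queue.latency", q.latency, tags: { queue: q.name })
  MyMetrics.gauge("sidekiq.queue.size", q.size, tags: { queue: q.name })
end
```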

Dealing with queue overload

When dealing with queue overload you effectively have two levers: increasing the consumption rate or reducing the production rate. How do you increase the consumption rate? One way is to add consumers, which means adding boxes/containers: for example, adding Sidekiq workers or adding Kafka consumers to a consumer group (see the sketch below). This is a common resolution because it does not […]
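A hedged sketch of the Kafka side of this: with ruby-kafka, every process that starts a consumer with the same group_id joins the same consumer group, so adding consumers amounts to starting more such processes (useful up to the topic's partition count). Broker address, group, and topic names are illustrative.

```ruby
require "kafka"

# Starting another copy of this process adds one more consumer to the
# "billing-consumers" group; Kafka rebalances partitions across members.
kafka = Kafka.new(["kafka1:9092"], client_id: "billing-worker")
consumer = kafka.consumer(group_id: "billing-consumers")
consumer.subscribe("billing-events")

consumer.each_message do |message|
  # ... process the message ...
end
```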