NATS Weekly #20

Week of March 28 - April 3, 2022

🗞 Announcements, writings, and projects

A short list of announcements, blog posts, project updates, and other news.

⚡Releases

Official releases from NATS repos and others in the ecosystem.

⚙️ Projects

Open source and commercial projects (or products) that use or extend NATS.

📖 Articles

NATS.io is mentioned in an aside, but the impact should not be overlooked.

Consul doesn't give us the load (in concurrent requests) on all the services. For a long time, we abused Consul for this, too: we tracked load in Consul KV. Never do this! Today, we use a messaging system to gossip load across our fleet.

Specifically, we use NATS, a simple "brokered" asynchronous messaging system, the pretentious way to say "almost exactly like IRC, but for programs instead of people".

Unlike Consul, [core] NATS is neither consistent nor reliable. That's what we like about it. We can get our heads around it. That's a big deal: it's easy to get billed for a lot of complexity by systems that solve problems 90% similar to yours. It seems like a win, but that 10% is murder. So our service discovery will likely never involve an event-streaming platform like Kafka.

👆 I inserted the "[core]" bit since "neither consistent nor reliable" does not apply to JetStream.

💬 Discussions

GitHub Discussions from various NATS repositories.

💡 Recently asked questions

Questions sourced from Slack, Twitter, or individuals. Responses and examples are in my own words, unless otherwise noted.

How do you choose between "file" storage and "memory" storage for a stream?

For a single-replica, memory-based stream, the question to ask yourself is whether it's OK to lose that stream if the NATS node dies or is restarted. Since the stream exists only in memory, the data will be lost.

That said, one could argue that even for a file-based stream, if the disk or drive on the node fails, the stream data would be lost as well. Reduced probability of data loss is one of the primary reasons for replication, along with high availability. A replication factor of 3 is recommended for most streams, with a maximum of 5 (more on this in a bit). This loosely translates to "there will be N copies of every message in a given stream, on separate NATS nodes in a cluster."

The number of stream replicas can be defined in the stream configuration:

js.AddStream(&nats.StreamConfig{
    Name:     "ORDERS",
    Replicas: 1,
})

Start out with one replica, just because.

As of nats-server v2.7.3, the stream replicas can be dynamically increased or decreased by issuing a stream config update. For example:

js.UpdateStream(&nats.StreamConfig{
    Name:     "ORDERS",
    Replicas: 3,
})

This stream is important, let's bump it to 3 replicas!

I briefly touched on what happens to a replicated message in a previous post. The short of it is that NATS has a tailored Raft implementation where each stream has its own replication routine and can evolve independently.

Since the Raft algorithm guarantees consistency among at least a quorum of replicas at any given time, we know that messages published to the stream that receive a successful acknowledgement are safe. We also know that with three replicas we can tolerate one node being offline while remaining available, and with five replicas, two nodes being offline.

With this in mind, let's return to the original question. Removing everything else, there are two fundamental differences between a memory-based stream and a file-based one. The fairly obvious one is that with a memory-based stream, all messages must fit in memory. A file-based stream can grow larger than the memory available.

The second difference is latency. A memory-based stream takes the disk out of the path and therefore does not incur the additional milliseconds of latency on reads and writes. This latency difference may or may not matter to your application (yes, everyone likes fast things, but you need to weigh the trade-offs).

Assuming replication is set to three or five, an in-memory stream can tolerate partial node failure without data loss. For example, if a single node goes offline while the other two remain consistent, that node can come back online and recover to a fully consistent state. This means that, with enough operational care, a replicated, in-memory stream could avoid data loss for a long period of time.

However, if there are circumstances where more than the majority of nodes go offline, an in-memory stream may not be able to recover to a fully consistent state. This is fundamentally where a file-based stream has an advantage since data has been synced to disk and can handle arbitrary process restarts.

My default choice is file-based streams since they are safer and easier to reason about. However, I don't have extremely latency-sensitive workloads. If you do, confirm by benchmarking that a file-based stream cannot meet your latency requirements, and only then go with a memory-based stream. Alternatively, memory-based streams can be used when they are short-lived and/or the SLA for them in your application tolerates data loss due to node faults or restarts.
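
For reference, the storage backend is declared on the stream config itself. Here is a minimal sketch with the Go client (the stream name and subjects are illustrative):

js.AddStream(&nats.StreamConfig{
    Name:     "ORDERS",
    Subjects: []string{"orders.>"},
    // Swap in nats.MemoryStorage if benchmarking shows file storage
    // cannot meet the latency requirements.
    Storage:  nats.FileStorage,
    Replicas: 3,
})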

What other replication options are available?

The above question calls out the synchronous replication that occurs due to the nature of the Raft consensus protocol, but NATS also has two options for asynchronous replication depending on the use case. These are called mirrors and sources.

Mirrors are streams configured to mirror another stream. Read the docs for the full description and constraints, but the primary use case of a mirror is to support async replication of a stream to another stream with different properties. A couple of use cases (see the sketch after this list) include:

  • A primary memory-based stream (single replica or replicated), used to meet latency requirements, that is mirrored to a file-based stream for archival purposes.

  • A primary stream that is mirrored to another geographic region, acting as a local read-only replica.
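
As a sketch with the Go client (stream names are illustrative), the archival case might look like this, assuming the primary ORDERS stream already exists:

// File-based archive of the primary stream. A mirror defines no subjects
// of its own; it replicates messages from the origin stream asynchronously.
js.AddStream(&nats.StreamConfig{
    Name:    "ORDERS-ARCHIVE",
    Storage: nats.FileStorage,
    Mirror:  &nats.StreamSource{Name: "ORDERS"},
})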

Sources are streams configured to source messages from one or more other streams as well as accept messages on its own set of bound subjects. This isn't a dedicated replication strategy per se, but it does still result in another copy of messages from the source streams.

This kind of stream essentially does a fan-in from multiple source streams to an aggregated stream. Consumers can then be created on this aggregated stream rather than a client needing to create independent consumers per stream for cross-stream consumption.
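
As a sketch (stream names are illustrative), an aggregate that fans in two regional streams might look like:

// Aggregated stream sourcing from two regional streams. Consumers can
// then be created on ORDERS-ALL instead of one per source stream.
js.AddStream(&nats.StreamConfig{
    Name:    "ORDERS-ALL",
    Storage: nats.FileStorage,
    Sources: []*nats.StreamSource{
        {Name: "ORDERS-EU"},
        {Name: "ORDERS-US"},
    },
})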

As noted in the docs, one trade-off is that ordering of messages across streams is not guaranteed. That is, a message that is older (by timestamp) in one stream could arrive after a newer message from another stream.

A more subtle trade-off is that consumers on this aggregated stream ack messages in this stream, not in the originals. If more control over the concurrency of consumption is needed, consumers should be created against the original streams, and the client would need to manage separate subscriptions and ack-ing of messages in those respective streams.

How can you identify/differentiate ephemeral consumers?

The context of this question is that names cannot be explicitly set on ephemeral consumers; however, for monitoring purposes, an observer cannot correlate an ephemeral consumer to its origin (where it was created in the system).

A great tip from Julien Viard de Gilbert on Slack was to use the consumer description field, which can contain client-defined info in lieu of a meaningful ephemeral consumer name.
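
As a sketch with the Go client, the description can be set as a subscribe option when the ephemeral consumer is created (the subject and description text are illustrative):

// Without a Durable option, js.Subscribe creates an ephemeral push consumer.
// The Description option labels it so it can be correlated in monitoring.
js.Subscribe("orders.>", func(msg *nats.Msg) {
    msg.Ack() // handle and acknowledge the message
}, nats.Description("order-dashboard: live feed widget"))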