NATS Weekly #9

Week of January 10 - 16, 2022

🗞 Announcements, writings, and projects

A short list of announcements, blog posts, project updates, and other news.

⚡Releases

💁‍♀️ FYI

💡 Recently asked questions

Questions sourced from Slack, Twitter, or individuals. Responses and examples are in my own words, unless otherwise noted.

Is it possible to allow a specific account to access a specific stream from every account that exports it?

Given two accounts, A and B, which each export a stream under the subject space foo.*:

Acc A -> exports stream to subject foo.*

Acc B -> exports stream to subject foo.*

The question is whether an account C can be defined that implicitly imports these streams from every account that exports the same subject space.

Acc C -> can consume foo.* from A and B (and any future account)

NATS supports account exports and imports. These can be defined in server configuration or with nsc when using JWT-based auth. Although the term stream is used in defining these exports and imports, it refers to a subject space rather than the name of a stream.

In this example, foo.* would be exported and then account C would need to explicitly import from both accounts. If new accounts are added, the import for C would also need to be updated.

accounts: {
  A: {
    exports: [
      {stream: "foo.*"}
    ]
  }
  B: {
    exports: [
      {stream: "foo.*"}
    ]
  }
  C: {
    imports: [
      {stream: {account: A, subject: "foo.*"}}
      {stream: {account: B, subject: "foo.*"}}
    ]
  }
}

Optionally, an export can also define the list of accounts that are allowed to import it (rather than allowing any account). This is done by specifying accounts in the export.

accounts: {
  A: {
    exports: [
      {stream: "foo.*", accounts: [C]}
    ]
  }
  ...
}

This was a long way of saying no, implicit importing of stream exports is not currently supported. However, there may be a way to monitor for new accounts or account changes and then react to those changes to update imports dynamically.

I have not taken the time to come up with an example for this, but if anyone has done this or would like to see this, please let me know!

Can I (or should I) put an L4 load balancer in front of NATS?

This question comes up quite often since it is so common, and often necessary, to put HTTP servers behind a load balancer. This is desirable for load balancing requests as well as TLS termination.

NATS was designed from the ground up to run on its own. It scales well and is resilient to slow consumers in order to maintain high availability. The nodes in a cluster gossip among themselves to optimize communication. NATS also has a variety of security features and supports TLS natively.

This is all to say that adding more layers in front of NATS is not recommended. Specifically, an L4 layer that terminates TLS does not work at all, since the initial connection handshake between a NATS client and the server is unencrypted (Postgres behaves the same way). If an L4 proxy is necessary, simply make it a passthrough TCP stream and configure NATS itself with the TLS certificates.
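As a minimal sketch using the Go client (assuming TLS is configured on the server itself), the connection is made directly with TLS rather than relying on a terminating proxy. The URL and certificate paths below are placeholders.

package main

import (
    "log"

    "github.com/nats-io/nats.go"
)

func main() {
    // Connect directly to a NATS server that terminates its own TLS.
    nc, err := nats.Connect(
        "tls://nats.example.com:4222",
        // Verify the server certificate against a custom CA (path is a placeholder).
        nats.RootCAs("./ca.pem"),
        // Present a client certificate if the server requires mutual TLS.
        nats.ClientCert("./client-cert.pem", "./client-key.pem"),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer nc.Drain()
}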

Regarding load balancers, NATS clients support taking multiple URLs (e.g. nats://example:4222,nats://example:4223,nats://example:4224) when establishing a connection. In addition, the server will inform the client of any other nodes that are added while it is connected. This increases the connection resilience between clients and the cluster: if any node in the cluster goes offline, the clients will automatically reconnect to another one in the pool.
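As a sketch with the Go client (hostnames and ports are placeholders), passing the full list of seed URLs along with a couple of reconnect options is typically all that is needed:

package main

import (
    "log"

    "github.com/nats-io/nats.go"
)

func main() {
    // Provide multiple seed URLs; the client also discovers additional
    // cluster nodes from the servers and reconnects automatically.
    nc, err := nats.Connect(
        "nats://example:4222,nats://example:4223,nats://example:4224",
        // Keep trying to reconnect indefinitely.
        nats.MaxReconnects(-1),
        // Log connection events for visibility.
        nats.DisconnectErrHandler(func(_ *nats.Conn, err error) {
            log.Printf("disconnected: %v", err)
        }),
        nats.ReconnectHandler(func(c *nats.Conn) {
            log.Printf("reconnected to %s", c.ConnectedUrl())
        }),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer nc.Drain()
}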

Since NATS was designed to run on bare metal without any dependency on other infrastructure, this should be the default starting place. When choosing how to deploy it, make minimal choices based on your environment and team's operational preferences (e.g. an all-Kubernetes shop). But when it comes to load balancers (and service meshes), Derek says it best.

"Using load balancers and service mesh tech in front of @nats_io makes about as much sense as training wheels on a car."

— Derek Collison (@derekcollison), January 9, 2022

What do msg.{Ack,Nak,Term,InProgress} all mean?

By default, when a subscription on a JetStream consumer receives a message, it must acknowledge (ack) that it successfully processed the message. Clients can auto-ack (usually the default) or opt for manual control (my personal preference), since there are more options than just an ack.

Why does a message need to be ack'ed? While core NATS provides at-most-once delivery guarantees, JetStream provides at-least-once. This means that NATS will redeliver a message until it is acknowledged (in some way) or until max_deliver attempts have been made (if defined).

Using the Go client as an example, there are four methods on a nats.Msg (a sketch of their use follows the list):

  • Ack - successful processing, no redeliver

  • Nak - explicit unsuccessful processing, redeliver

  • Term - explicit unsuccessful processing, no redeliver

  • InProgress - processing in progress, reset the ack wait time
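
Below is a minimal sketch of a manual-ack subscription with the Go JetStream API. The subject, durable name, and process function are placeholders, and the stream binding is assumed to already exist.

package main

import (
    "log"

    "github.com/nats-io/nats.go"
)

// process stands in for real application logic.
func process(m *nats.Msg) error { return nil }

func main() {
    nc, err := nats.Connect(nats.DefaultURL)
    if err != nil {
        log.Fatal(err)
    }
    js, err := nc.JetStream()
    if err != nil {
        log.Fatal(err)
    }

    _, err = js.Subscribe("orders.*", func(m *nats.Msg) {
        if err := process(m); err != nil {
            // Explicit failure: ask JetStream to redeliver later.
            m.Nak()
            // Or, if the message can never be processed (e.g. malformed),
            // terminate it so it is not redelivered:
            // m.Term()
            return
        }
        // For long-running work, m.InProgress() can be called periodically
        // to reset the AckWait timer before the final ack.
        m.Ack()
    }, nats.Durable("orders-worker"), nats.ManualAck())
    if err != nil {
        log.Fatal(err)
    }

    select {} // keep the subscriber running
}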

If not acknowledged within the AckWait window (default is 30 seconds), redelivery will occur. The precise time when redelivery will occur is not defined. In addition, redelivered messages will be interleaved with new messages arriving at the same time.

If partial or total ordering is required, it is usually wise to create a consumer that isolates the subject space that needs this ordering and set MaxAckPending to one.
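
As a sketch with the Go client, such a consumer could be created like the following. The stream name (ORDERS), filter subject, and durable name are placeholders; MaxDeliver is included only to show capping redelivery attempts.

package main

import (
    "log"
    "time"

    "github.com/nats-io/nats.go"
)

func main() {
    nc, err := nats.Connect(nats.DefaultURL)
    if err != nil {
        log.Fatal(err)
    }
    defer nc.Drain()

    js, err := nc.JetStream()
    if err != nil {
        log.Fatal(err)
    }

    // Durable consumer pinned to the subject subset that needs ordering,
    // with only one unacknowledged message allowed in flight at a time.
    _, err = js.AddConsumer("ORDERS", &nats.ConsumerConfig{
        Durable:       "orders-eu-ordered",
        FilterSubject: "orders.eu.>",
        AckPolicy:     nats.AckExplicitPolicy,
        AckWait:       30 * time.Second, // default redelivery window
        MaxDeliver:    5,                // cap redelivery attempts (optional)
        MaxAckPending: 1,                // strictly one message at a time
    })
    if err != nil {
        log.Fatal(err)
    }
}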