
NATS Weekly #7


Week of December 27, 2021 - January 2, 2022

🗞 Announcements, writings, and projects

A short list of announcements, blog posts, project updates, and other news.

⚡Releases

📖 Articles

🎬 Media

🎙Discussions

  • Are you a NATS end-user? - Survey/discussion by the NATS team hosted on GitHub Discussions. This is from June 2021, but I just saw it, so why not recycle it for the end of the year!
  • Add section on minimum system requirements - A common question that arises is "what are the resource requirements for NATS?" This is a GitHub issue drafting the addition to the docs. If you have specific questions, please leave comments!

💡 Recently asked questions

Questions sourced from Slack, Twitter, or individuals. Responses and examples are in my own words, unless otherwise noted.

What is the current best approach for creating a large number of accounts or users programmatically?

NATS supports several ways to manage accounts, users, and permissions depending on your needs. However, at scale or when decentralization is necessary, the decentralized JWT-based method is required.

The recommended and default approach is to use the nsc command-line tool, which handles nkey and JWT generation and management as well as pushing/syncing with the server. In other words, it does a lot of the boilerplate required to make this auth method work.
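As a rough sketch, the bootstrap steps with nsc might look like the following (the operator, account, and user names are hypothetical, and this assumes a server is reachable for the push):

```shell
# Create an operator and generate a signing key for it.
nsc add operator my-operator
nsc edit operator --sk generate

# Create an account and a user under it.
nsc add account my-account
nsc add user --account my-account my-user

# Push the account JWTs to a running server.
nsc push --all
```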

For managing a large number of accounts or users, manually using nsc could be tedious or untenable. Fortunately, nsc relies on two Go libraries for creating nkeys (NATS keys) and JWTs, which ultimately get pushed to the server. Although the Go libraries are the reference implementation, there are additional nkeys implementations in progress for JavaScript/TypeScript, Python, and Ruby.

Matthias Hanel, an engineer with Synadia and core NATS contributor, suggested the following in Slack:

What you do is essentially use nsc to create your operator and one-off accounts. Add signing keys to the operator and distribute the signing key seed to a program using the jwt library to create accounts. These created accounts also can use signing keys to then create users. Our in-depth guide has examples: https://docs.nats.io/using-nats/developer/tutorials/jwt#automated-sign-up-services-jwt-and-nkey-libraries

So the strategy is to use nsc to bootstrap the setup and create signing keys, which a program then uses to create subsequent accounts or users.
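For example, a minimal sketch of minting an account JWT with the Go nkeys and jwt libraries. This assumes the operator signing key seed has been distributed to the service out of band; the seed value and account name here are placeholders, and running it requires the github.com/nats-io modules:

```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/jwt/v2"
	"github.com/nats-io/nkeys"
)

func main() {
	// Placeholder: the seed of an *operator signing key* created with nsc
	// and distributed to this service out of band.
	operatorSigningSeed := []byte("SO...")

	osk, err := nkeys.FromSeed(operatorSigningSeed)
	if err != nil {
		log.Fatal(err)
	}

	// Generate a brand new keypair for the account being provisioned.
	akp, err := nkeys.CreateAccount()
	if err != nil {
		log.Fatal(err)
	}
	apub, _ := akp.PublicKey()

	// Build the account claims and sign them with the operator signing key.
	ac := jwt.NewAccountClaims(apub)
	ac.Name = "provisioned-account"
	token, err := ac.Encode(osk)
	if err != nil {
		log.Fatal(err)
	}

	// The token can now be pushed to the server's account resolver.
	fmt.Println(token)
}
```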

If you don't want to drop down to these libraries programmatically, another option is a small service that shells out to nsc and can be called programmatically.

Where does NATS store its application state?

NATS has two types of application state that it manages, depending on how the server is configured.

When using core NATS (without JetStream enabled) with one of the config-based authentication methods, there is no state. You can freely deploy new nodes, move nodes, etc. without worrying about any state.

If the decentralized JWT authentication method is used, servers store the JWTs as files on disk. However, since these JWTs are pushed by a human operator using nsc (or programmatically; see the previous question), they are only copies on the NATS server and can always be re-pushed. Likewise, if new nodes are added to the cluster, the JWTs are automatically replicated to them.
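For reference, a server using the full JWT resolver is configured with a directory where pushed account JWTs are written as files, roughly like this (a hedged sketch; the paths are hypothetical):

```
operator: /etc/nats/operator.jwt

resolver: {
    type: full
    # Account JWTs pushed via nsc are stored here as files.
    dir: "/var/lib/nats/jwt"
}
```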

It is worth noting that this puts the onus on the human operator to manage these files, but from NATS' standpoint it is a very good design choice for supporting evolving topologies.

The optional second type of state NATS has to manage is JetStream related. If your deployment has JetStream-enabled nodes and streams have been created, NATS has a storage subsystem for writing the stream data on disk (for file-based streams). This includes consumer state as well. For in-memory streams, all state is held in memory.

From a fault tolerance and disaster recovery standpoint, there are a few things to note. When creating a stream, the number of replicas can be defined, such as three. Each replica is placed on a different server in the cluster, meaning the data is written to that server's local disk. It is important to ensure that each server runs on a separate host and/or has guarantees around its attached storage (in a cloud or container environment).

Since JetStream uses a consensus protocol for writes, there are guarantees around the consistency and integrity of the replicas.
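Creating a replicated, file-based stream with the nats CLI might look like the following (the stream and subject names are hypothetical):

```shell
nats stream add ORDERS \
  --subjects "orders.>" \
  --storage file \
  --replicas 3 \
  --defaults
```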

For disaster recovery, NATS supports backups of streams using the nats CLI, so a regular backup cadence for disaster situations is easily achieved. Another, more native, setup is to host a second cluster (in a different region) that effectively mirrors all streams from the primary.
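A sketch of both approaches with the nats CLI (stream names and paths are hypothetical, and the exact restore invocation may vary by CLI version):

```shell
# Back up a stream (data and config) to a local directory.
nats stream backup ORDERS ./backups/orders

# Restore it later, e.g. into a recovered cluster.
nats stream restore ./backups/orders

# In a second cluster, create a stream that mirrors the primary.
nats stream add ORDERS_MIRROR --mirror ORDERS --defaults
```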

How can I use serverless functions as NATS consumers?

In a previous issue, I went fairly in-depth on this topic, specifically asking the question "are people using NATS with serverless offerings?"

A person in Slack posed the idea of having a Lambda function consume from a JetStream stream. Of course, Lambda does not integrate natively with NATS, so native triggering when a message is pushed is not an option.

In the previous issue, I mentioned you could have a bridge that effectively consumes from the stream and re-publishes to SQS or EventBridge, which can natively trigger Lambdas. Since this would require hosting a long-running process as the consumer to do the forwarding, the strategy would only make sense if the cost, scalability, and platform advantages of forwarding and using Lambda outweigh hosting the consumers yourself 🤷🏻‍♂️.

It could certainly make sense for some use cases, but one would need to ensure the forwarding and ack'ing are done properly between the NATS stream and, say, SQS.
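A rough sketch of such a bridge with the Go client, where the key point is that a message is only ack'ed after it has been forwarded successfully (forwardToSQS is a hypothetical stand-in for an AWS SDK publish; the subject and durable name are made up, and this requires the github.com/nats-io/nats.go module and a running server):

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

// forwardToSQS is a hypothetical stand-in for publishing to SQS
// or EventBridge via the AWS SDK.
func forwardToSQS(data []byte) error {
	return nil
}

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, _ := nc.JetStream()

	// Durable pull consumer so the bridge resumes where it left off.
	sub, err := js.PullSubscribe("events.>", "sqs-bridge")
	if err != nil {
		log.Fatal(err)
	}

	for {
		msgs, err := sub.Fetch(10, nats.MaxWait(2*time.Second))
		if err != nil {
			continue // timeout with no messages; keep polling
		}
		for _, msg := range msgs {
			if err := forwardToSQS(msg.Data); err != nil {
				continue // do not ack; the message will be redelivered
			}
			msg.Ack()
		}
	}
}
```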

Another option, which Scott Fauerbach posed, is to have a Lambda function that gets woken up on some interval or via an SQS message indicating that new messages are in the NATS stream. When initially created, the function would create a consumer against a stream and subscribe to consume messages. Optionally, the SQS message could include metadata such as the stream or sequence to start consuming from.

Using a durable consumer, the Lambda would wake up and simply continue consuming where it left off. After some timeout of not receiving new messages, it would go back to sleep until the next interval or invocation.
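The wake-up behavior could be sketched with the Go client roughly as follows (subject and durable name are hypothetical; in practice this logic would live inside the Lambda handler, and it requires the github.com/nats-io/nats.go module and a running server):

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

// consumeUntilIdle drains available messages from a durable pull
// consumer and returns once no new messages arrive within the timeout,
// letting the function "go back to sleep" until its next invocation.
func consumeUntilIdle(js nats.JetStreamContext) error {
	sub, err := js.PullSubscribe("events.>", "lambda-durable")
	if err != nil {
		return err
	}
	for {
		msgs, err := sub.Fetch(10, nats.MaxWait(3*time.Second))
		if err != nil {
			return nil // idle: no messages within the timeout
		}
		for _, msg := range msgs {
			// ... process msg.Data ...
			msg.Ack()
		}
	}
}

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, _ := nc.JetStream()
	if err := consumeUntilIdle(js); err != nil {
		log.Fatal(err)
	}
}
```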


If you would like more in-depth information with examples to any of these questions, please reach out on Slack or Twitter!