
NATS Weekly #37


Week of July 25 - 31, 2022

🗞 Announcements, writings, and projects

A short list of announcements, blog posts, project updates, and other news.

💁 News

News or announcements that don't fit into the other categories.

I joined Synadia! 🎉 After years of being a NATS user, inspired by the vision behind the technology, and having increased enjoyment (and involvement) with the NATS developer community, I am delighted for this opportunity to focus on developer relations full-time.

Have you heard of Synadia? Founded by the creator of NATS, Synadia employs the core maintainers of the NATS server, many of the client libraries, the CLI, various Kubernetes Helm charts, the Prometheus exporter and Grafana dashboards, and many other tools in the ecosystem.

On top of the incredible amount of Open Source contributions, Synadia offers a managed, secured, and optimized global deployment of NATS called NGS. You can get started with a free account to connect. No need to manage any clusters (or superclusters). Deploy your application anywhere in the world, connect to NGS, and you are done 😉.

If you are an enterprise that needs to run NATS on your own infrastructure, cloud VPCs, devices, etc. check out NATS Enterprise to take advantage of architectural review and dedicated support from the experts.

If you want to learn more about either of these offerings or what value they bring, please reach out!

Finally, I am thankful for all of the support and excitement from folks who have subscribed to and follow this newsletter! I mentioned this on Twitter, but it is worth reiterating here: the weekly newsletter will continue. I personally find it useful to aggregate this information to keep a pulse on the NATS ecosystem. I am working with the Synadia team to determine ways of improving it and where its new home will be. ✨

⚡ Releases

Official releases from NATS repos and others in the ecosystem.

🎬 Media

Audio or video recordings about or referencing NATS.

📖 Articles

Blog posts, tutorials, or any other text-based content about NATS.

💬 Discussions

GitHub Discussions from various NATS repositories.

🧑‍🎓 Examples

New or updated examples on NATS by Example.

  • Limits-based stream - Node and Deno
  • Pull Consumer - Go
  • Pull Consumer - Applying Limits - Go

💡 Recently asked questions

Questions sourced from Slack, Twitter, or individuals. Responses and examples are in my own words, unless otherwise noted.

How can I view server/system info in a NATS server or cluster?

NATS has the notion of a system account, which is reserved for operations and monitoring of servers in a cluster or supercluster. When using the NATS CLI against a fresh, zero-config server, you may try to run the following command and get this message:

$ nats server info
nats: error: no results received, ensure the account used has system privileges and appropriate permissions, try --help

This adds a bit of friction for those getting started, but it is a trade-off with the security-first approach NATS takes when introducing new capabilities.

Fortunately, getting this working requires only a few lines of configuration.

accounts: {
  $SYS: {
    users: [{user: sys, password: pass}]
  }
}

After setting the user and password (or saving a context), we can now get the server info.

$ nats --user sys --password pass server info
Server information for NB2MEK5VEFZGMLMCV35G557VTY3NJVOUIAJC2H42I5PXRLYWDZ4CYRKY

Process Details:

         Version: 2.8.4
      Git Commit: 66524ed
      Go Version: go1.17.10
      Start Time: 2022-08-01 18:55:25.918663239 +0000 UTC
          Uptime: 0s

Connection Details:

   Auth Required: true
    TLS Required: false
            Host: 0.0.0.0:4222
     Client URLs: 

Limits:

        Max Conn: 65536
        Max Subs: 0
     Max Payload: 1.0 MiB
     TLS Timeout: 2s
  Write Deadline: 10s

Statistics:

       CPU Cores: 2 0.00%
          Memory: 11 MiB
     Connections: 1
   Subscriptions: 37
            Msgs: 1 in 0 out
           Bytes: 2 B in 0 B out
  Slow Consumers: 0

Check out the full example here.

What is this stream RePublish feature again?

A few months ago, I touched on a newly released republish feature for streams. This past week, the architecture decision record, ADR-28, was written up highlighting more details and semantics.

To reiterate the core functionality: if a stream is configured to republish, each message appended to the stream is immediately republished to another subject. The target subject is mapped from the source using the standard subject-mapping semantics. For example, given a subject mapping of foo.* to bar.*, a message received on foo.1 will be republished to bar.1.
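As a sketch, a stream's republish configuration expressed in the JetStream JSON config looks roughly like the fragment below. The stream name and subjects are made up for illustration; the field names follow ADR-28, which remains the authoritative schema.

```json
{
  "name": "EVENTS",
  "subjects": ["foo.*"],
  "republish": {
    "src": "foo.*",
    "dest": "bar.*"
  }
}
```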

In the primary use case, this target subject is not bound to another stream; instead, the receiver uses a core NATS subscription. This provides an at-most-once delivery guarantee, since the republished message does not traverse a stream.

You may be thinking: the original message is published to a stream, but then republished to a subject not bound to a stream, so any messages republished on this mapped subject will be dropped if the receiver is offline… 🤔

This is where the headers come in. This feature was explicitly designed to support broadcasting persisted messages out to millions of subscribers (not currently something JetStream can do with consumers on a single stream). The trade-off is scaling the distribution of these messages at the cost of some of them not being received if a subscriber is unavailable.

The headers provide information about the message with respect to the stream. For example, Nats-Sequence indicates the sequence of the message in the stream. If a subscriber receiving these messages crashes and comes back online, it can use this header to detect gaps. The process can then issue ad-hoc get requests or perform partial consumption against the stream to fill in those gaps.
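The gap-detection step itself reduces to simple arithmetic over successive Nats-Sequence values. Here is a minimal sketch; gapBetween is a hypothetical helper, not part of any NATS client library.

```go
package main

import "fmt"

// gapBetween returns the inclusive range of stream sequences that were
// missed between the last observed Nats-Sequence header value and the
// one just received. ok is false when there is no gap.
func gapBetween(lastSeen, received uint64) (from, to uint64, ok bool) {
	if received <= lastSeen+1 {
		return 0, 0, false
	}
	return lastSeen + 1, received - 1, true
}

func main() {
	// Suppose the last message we saw carried Nats-Sequence 41 and,
	// after a restart, the next one carries 45: sequences 42-44 were missed.
	if from, to, ok := gapBetween(41, 45); ok {
		fmt.Printf("backfill sequences %d-%d from the stream\n", from, to)
	}
}
```

The returned range is exactly what the subscriber would then fetch from the stream with direct get requests or a bounded consumer.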

For the case when a subscription comes online after republishing starts, it can look up the current start sequence of the stream and consume up to the sequence at which it started receiving. Again, these cases should be few and short-lived in comparison to full-fledged consumers.
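That catch-up window can be sketched the same way: given the stream's first available sequence (obtained from stream info) and the first republished sequence actually observed, the subscriber knows exactly which range to backfill. catchUpRange is a hypothetical helper for illustration only.

```go
package main

import "fmt"

// catchUpRange returns the inclusive range of stream sequences a late
// subscriber should fetch: from the stream's first available sequence up
// to just before the first sequence it observed over the republish
// subject. ok is false when there is nothing to backfill.
func catchUpRange(streamFirstSeq, firstObserved uint64) (from, to uint64, ok bool) {
	if firstObserved <= streamFirstSeq {
		return 0, 0, false
	}
	return streamFirstSeq, firstObserved - 1, true
}

func main() {
	// The stream retains sequences starting at 1, and the first
	// republished message we observed carried Nats-Sequence 38.
	if from, to, ok := catchUpRange(1, 38); ok {
		fmt.Printf("consume sequences %d-%d before processing live messages\n", from, to)
	}
}
```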

What use cases would take advantage of this feature? Beyond what the problem statement defines, it could be a solution for a certain scale of devices that all need to receive messages from the stream. There may be other considerations like filtering and/or permissions, but hopefully this re-introduction provides a bit more insight into this new feature!


If you would like more in-depth information with examples to any of these questions, please reach out on Slack or Twitter!