Attending Rust Nation UK 2026 in London

February 2026 · 8 min read
rustnation 2026

I think this is my favourite Rust conference so far 😁

I don't know if it's because I'm in London, or whether it's down to multiple factors. A big city like London is able to attract huge names in the Rust community, such as Jon Gjengset, who was there with Helsing, a company specialising in defence and aiming to be Europe-first.

I also think that attending a conference is ideally about the whole experience: the people you meet and connect with, the things you learn, the merch you collect, the quality of the food, the impression you get from the venue and the organisers, as well as the energy of the city around you and the things you do when you're not at the conference. Needless to say, in a city like London I always meet wonderful people at the events I attend, such as language events, and I always have incredible food, which makes me excited for my next visit to London ❤️


Conference day

Rust Nation UK is a 1-day conference with 1 day of workshops the day before and some post-conf activities the day after.

"What if the way you code could change the planet?" by Lisa Crossman

This was the first keynote talk on conference day. Lisa is a microbiologist and a Rust enthusiast. She is passionate about analysing genomics data more efficiently and in an environmentally conscious way. She has a sequencing consultancy aimed at achieving the latter.

She starts off by stating the challenge of our current times: the computation of big data, notably the size of genomics data, and the need for smarter approaches.

Knowing that the go-to language for researchers and ML engineers is Python, she argues for languages like Rust, which are not only faster to run but also consume around 70% less energy. Python's interpreted nature and dynamic typing make it challenging in terms of energy consumption.

She has created a community called MicroBioRust to advocate for using Rust in genetic research. Together with her team, they have benchmarked BioPython and MicroBioRust with the latter being faster.

One challenge she mentions is trying to convince researchers in the field to switch to Rust for the reasons listed above. But she makes a distinction between two groups. An occasional bioinformatician cares about seeing clear errors, doesn't want to worry about setting up or learning new tools and, most importantly, aims for the long-term reproducibility of their research papers.

A high-throughput bioinformatician, on the other hand, cares about concurrency, memory efficiency, big data and cheaper cloud costs. And it is this group that looks to optimise its processes.

I found this talk to be both relevant and engaging. It is always interesting to see Rust being used in all sorts of domains.

"From fRusTrait-ion to Async Mastery: Become a Rust Trait Hero in One Hour" by Lawrence Freeman

This tutorial session covered the Send and Sync marker traits. A marker trait is a trait with no methods or associated items, as seen in their definitions in the Rust source code here and here. They simply mark types with certain properties to help the compiler enforce safety rules for concurrency: Send indicates that a type can be safely transferred between threads, while Sync indicates that a type can be safely referenced or shared between threads.
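As a minimal sketch of my own (not code from the talk) showing the compiler enforcing these traits: an Arc is Send, so it can be moved into a spawned thread, while an Rc is not.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc<i32> is Send, so it can be moved into another thread.
    let shared = Arc::new(42);
    let handle = thread::spawn(move || *shared + 1);
    assert_eq!(handle.join().unwrap(), 43);

    // Rc<i32> is NOT Send; uncommenting the spawn below fails to compile:
    // "`Rc<i32>` cannot be sent between threads safely".
    let local = Rc::new(42);
    // thread::spawn(move || *local);
    assert_eq!(*local, 42); // Rc works fine within a single thread
}
```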

The speaker described the borrow checker as a Tory 😂 to convey that it is strict, enforcing rigid rules about ownership and borrowing that prevent unsafe memory access and keep order and discipline in the code.

The speaker outlined the 3 types of concurrency: futures (async concurrency), multiple threads on a single core, and true parallelism across multiple cores.

One thing to understand is the difference between an atomic variable and a regular variable, e.g. AtomicI32 vs i32. Atomic variables are types that support atomic operations: they allow safe, lock-free concurrent access to data and prevent data races. Regular types like i32 are not thread-safe and may cause data races when shared between threads.
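A small sketch of my own illustrating the difference, with a few threads incrementing a shared counter; with a plain i32 behind a shared reference this would not even compile without a lock or unsafe code:

```rust
use std::sync::atomic::{AtomicI32, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // A shared counter: AtomicI32 allows lock-free concurrent updates.
    let counter = Arc::new(AtomicI32::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    // fetch_add is a single atomic read-modify-write.
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // Every increment is accounted for: 4 threads x 1000.
    assert_eq!(counter.load(Ordering::Relaxed), 4_000);
}
```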

Something else is cache line size. It is the amount of data (in bytes) that a CPU reads from or writes to memory at once. Knowing the cache line size can be important when a program performs concurrency to help threads avoid using the same cache line.

To find out the cache line size on your machine (Mac) you can use:

  $ sysctl hw.cachelinesize

When I run that from the terminal I get hw.cachelinesize: 128.
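To illustrate why the cache line size matters, here is a sketch of my own (not from the tutorial) of padding a counter to a full cache line so that two counters updated by different threads don't share a line, a problem known as false sharing. The alignment of 128 matches the value reported above; 64 is common on x86.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Force each counter onto its own 128-byte cache line. Without this,
// two adjacent AtomicU64s (8 bytes each) would share a line, and two
// threads updating them would keep invalidating each other's cache.
#[repr(align(128))]
struct PaddedCounter(AtomicU64);

fn main() {
    let counters = [
        PaddedCounter(AtomicU64::new(0)),
        PaddedCounter(AtomicU64::new(0)),
    ];
    counters[0].0.fetch_add(1, Ordering::Relaxed);
    counters[1].0.fetch_add(1, Ordering::Relaxed);

    // The alignment rounds each struct up to a full cache line.
    assert_eq!(std::mem::align_of::<PaddedCounter>(), 128);
    assert_eq!(counters[0].0.load(Ordering::Relaxed), 1);
}
```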

The first part of the tutorial demonstrates the different ordering modes like Relaxed and Release and the behaviour of atomic variables across different architectures and scenarios.

Part 2 helps deepen the understanding of thread-safe shared ownership of data using atomic types.

"use<'lifetimes> for<'what>" by Ethan Brierley

Lifetimes are an important concept to understand when writing Rust code. The speaker aimed to explain the topic from a somewhat mathematical point of view. But essentially, every reference has a lifetime, which is metadata the compiler uses to ensure safety. A variable is "alive" as long as it will still be used later on in the code.

'static is the longest possible lifetime in Rust and it means that the variable is available for the entire duration of the program.
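For instance, string literals are baked into the binary itself, so references to them are valid for the whole program and have the 'static lifetime:

```rust
// A string literal lives in the binary for the entire program,
// so it can be returned as &'static str from any function.
fn motto() -> &'static str {
    "Rust Nation UK"
}

fn main() {
    let m: &'static str = motto();
    assert_eq!(m, "Rust Nation UK");
}
```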

where is used to specify complex lifetimes or trait bounds for generic types making the constraint clearer and easier to read.

  fn print_ref<'a, T>(val: &'a T)
  where
      T: std::fmt::Display,
  {
      println!("{}", val);
  }

for is used for higher-ranked trait bounds, which allow specifying that a trait or function implementation holds for any lifetime.

  fn takes_fn<F>(f: F)
  where
      F: for<'a> Fn(&'a str),
  {
      f("hello");
  }

The speaker advised using a single lifetime parameter in a function signature when possible. There was also a mention of Polonius, the next-generation borrow checker being integrated into rustc.
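As an illustration of that advice (my own example): when the inputs and the output are all related, a single lifetime parameter is enough, and the signature stays readable.

```rust
// One lifetime parameter 'a ties both inputs and the output together;
// introducing a second lifetime here would add noise without benefit.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let a = String::from("conference");
    let b = String::from("talk");
    assert_eq!(longest(&a, &b), "conference");
}
```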

"rk8s: A Lightweight Rust-based Alternative to Kubernetes" by Zhouxuan Tang

This talk was one of my favourites. A team at Nanjing University in China has decided to rewrite Kubernetes in Rust with smarter design decisions.

rk8s is a lightweight, Rust-based alternative to Kubernetes designed for container orchestration. It aims to manage distributed nodes more efficiently and runs on Youki, a container runtime written in Rust.

Some of the architectural differences between k8s and rk8s come down to structure: the control plane in k8s is organised into different components such as the controller, node agent and state store, whereas in rk8s the components run as independent processes, and QUIC is used for communication between them instead of gRPC.

Kubernetes follows 3 interface standards:

- CRI (Container Runtime Interface): defines how k8s interacts with the container runtime to create and manage pods.
- CNI (Container Network Interface): defines how pods acquire IP addresses to communicate with each other and the outside world.
- CSI (Container Storage Interface): defines how persistent data is stored and mounted to pods.

rk8s makes use of various open-source tools to achieve smarter design decisions, such as Youki, which replaces some layers of complexity in the CRI standard with simple function calls into the container runtime, and RustFS, a high-performance distributed object storage system built in Rust.

The speaker concluded by mentioning some of their roadmap goals, such as Rkforge, a Docker-like, Rust-based tool for building container images. The team aims to build tools that are compatible with mainstream clients.

"Rust Adoption At Scale with Ubuntu" by Jon Seager

This is a well-suited keynote for this moment in time, with all the hype around rewriting elements of Linux in Rust.

Jon Seager is a technical leader at Canonical, the company that builds and maintains Ubuntu.

Jon mentions that they have replaced some core system utilities with Rust-based ones in the Ubuntu 29.10 interim release. They hope to be bolder and replace coreutils, findutils and diffutils.

But why is Canonical advocating for Rust? He states that they want to be leaders in this widespread adoption, to attract talent, to offer a better default UX and to build more resilient software, knowing that Rust by nature encourages many safe practices. The Rust toolchain is included in the Ubuntu distribution, ready for Rust developers.

He then concludes by mentioning 3 future projects they are working on: Anbox, Mir and Dqlite.


Post conference event

"Rust in the age of AI" panel discussion

Even though I categorise myself as a traditional software developer, I still thought this would be an important discussion to listen to. Mainly because I am interested in learning more about Rust and how it can be more widely adopted in domains seemingly dominated by Python. Also the panelists are well-known members of the Rust community.

The discussion was divided into two parts: the first about inference and whether Rust can be used for it, and the second about using LLMs to generate Rust code. Below are some notes I jotted down from each panelist.

Part 1: Rust in inference

Stephan Eckes: Rust can be used for inference while Python can still be used for training. Python is used mostly by researchers with little programming background. Not only is it difficult to convert this group to Rust, but also Python provides better dependency and versioning management when compared to languages like Rust and C. In addition to that, Python data modules like numpy can be loaded and integrated into Rust code.

David Haig: Training should be done in Python because it is mostly done by scientists, but Python is not a good fit for inference. He mentions the example of vLLM, an LLM inference engine written in Python with half a million lines of code. Most of this code is just doing plumbing and "pretending to be a better language", which you get out of the box in Rust.

Stu Harris: The MLOps infrastructure used to do training can be Rust-based. Rust might not be a good fit for research, though. Why? Because it is challenging to prototype in Rust, i.e. writing something quick to see roughly what it does, then throwing it away or repurposing it. The priority is doing the research, and obstacles should not be put in the way of that. He believes in using multiple languages for multiple purposes.

There was a question about Mojo and its place in AI software development. The panelists mentioned that it is mostly a programming language for GPUs rather than a general-purpose language. It is used for faster training or inference in the cloud, and could be used to optimise the hardware running the models, similar to how assembly languages are used.

For wider Rust adoption, Jon Gjengset mentions that the community needs to pay attention to tooling.

Part 2: LLMs generating Rust code

This is an interesting topic. I myself was resistant to the idea of using LLMs to generate code. I think every language has beautiful features and digging deep into those features to learn more about how the language works under the hood can be a rewarding part of being a programmer. However about a year ago I worked at a startup and realised that time is precious for some companies. They need to iterate and ship fast to have a chance of succeeding. I learned that using LLMs to write repetitive code such as tests can save time. The key though is having background knowledge in coding and knowing what the LLM is generating because most of the time it returns complex code that might be a bottleneck for maintenance in the future. I find that I go back and forth with an LLM to get it to write good enough code.

The panelists here mentioned the following thoughts: maybe use LLMs for things like interfaces, asking them to find bugs in a piece of code, bouncing ideas off of them, or generating documentation. They shouldn't be blindly trusted, and a human is needed to check the code they generate.

I also find that, depending on the task at hand, if there is enough training data online and you describe exactly what you are looking for, the LLM might give a reasonable output. But this might not be the case if the problem you are trying to solve is niche.

