
Cognitive Surrender

Cognitive offloading is delegating to the AI and still owning the answer. Cognitive surrender is when the AI's output quietly becomes your output and there is nothing left to check. For software engineers the line between the two moves under your feet most days, and most of us are crossing it without noticing.
bogorad: a different take: https://skiplabs.io/blog/codegen_as_compiler

76 - Rust Crosses the Chasm


Rust has been the language of choice for early adopters for years. This week the evidence suggests it is becoming the language of choice for pragmatists too.

In this issue: Ubuntu naming Rust the language for new foundational system software, a production post-mortem that achieved a 78× speedup by redesigning data layout, and SurrealDB 3.0 shipping as a purpose-built store for AI agent memory.

Let's dive in.

Ubuntu Names Rust Its Language for New Foundational Work

Niko Matsakis wrote a post-Rust Nation UK reflection framing what it means that Ubuntu is now using Rust. His framing is worth taking seriously because he uses it to make a point about what mainstream adoption actually requires, and where Rust still has work to do.

The concrete Ubuntu details: Canonical sponsors the Trifecta Tech Foundation's development of sudo-rs and ntpd-rs, and supports the uutils coreutils project. Ubuntu has named Rust, alongside Python, C/C++, and Go, as a primary language for new development, and specifically as the language of choice for new foundational efforts replacing C and C++.

Niko frames this through Geoffrey Moore's technology adoption lifecycle. Rust has crossed the chasm in some domains, most visibly in data plane infrastructure within companies like Amazon, but it remains nascent in others, including safety-critical software. Ubuntu represents a mainstream adopter, a pragmatic organization choosing Rust not because it is exciting but because it is the best available option for the problem. That is a different and more durable kind of adoption than enthusiast-driven use.

The harder point he makes: "we need to make Rust the best option not just in terms of what it could be but in terms of what it actually is." Pragmatists do not bet on potential. They need tooling, documentation, hiring pools, and library coverage that works today. That is a challenge the Rust community still needs to meet deliberately.

The Ubuntu signal matters because Canonical is a risk-averse organization with a long institutional memory. When they standardize on a language for system-level infrastructure, they expect to live with that choice for a decade. Rust is now that choice for new work.

Takeaways:

  • Ubuntu replacing C and C++ with Rust for new foundational tooling, including sudo-rs, ntpd-rs, and uutils coreutils, is mainstream adoption in the most literal sense
  • Niko's "crossing the chasm" framing is useful: Rust has crossed in some domains but not others, and treating it as either universally adopted or universally nascent misses the picture
  • Mainstream pragmatists need Rust to be excellent today, not eventually; the community's response to that pressure will determine how far adoption continues to spread

78× Faster: What the Matrix SDK Taught Us About Lock Contention

The Matrix Rust SDK's room list was freezing. Not briefly: up to five minutes to render a list of rooms. The team at mnt.io wrote a detailed post-mortem that is one of the better performance debugging write-ups I have read this year.

The root causes turned out to be two things working together. First, memory pressure: the sorting algorithm was cloning LatestEventValue objects, each 144 bytes, repeatedly during comparisons. Second, lock contention: the sort called into protected data structures 322,042 times, acquiring a lock on each call. The profiler showed 743MB of total allocations and the same number of lock acquisitions just to sort a list.

The fix was data-oriented design, applied surgically. The team introduced a 64-byte RoomListItem struct that caches all the fields the sorters actually need, populated once before the sort runs. Instead of reaching through a lock 322,000 times, the sort operates entirely on a flat array of small structs that fits in L1 cache; the RoomListItem carried everything needed, so the protected data was never touched during the sort itself, and no locks were acquired at all.

The numbers: 53ms down to 0.676ms. Throughput from 18,800 elements per second to 1.4 million. That is a 78× improvement from two structural changes: eliminating clones during comparison and moving lock access out of the hot path.

What makes this write-up worth reading beyond the benchmarks is the diagnosis process. The authors used profiling to identify that the bottleneck was not the algorithm but the data layout, then applied a well-understood technique from game development, data-oriented design, to a Rust application context. The pattern is not specific to the Matrix SDK. Any Rust code that reads from protected data inside a loop is a candidate for this approach.
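To make the pattern concrete, here is a minimal, hypothetical sketch of the precompute-then-sort approach (the `Room` and `SortKey` types are illustrative stand-ins, not the SDK's actual `RoomListItem`): snapshot the sort keys under a single lock acquisition, then sort a flat array with no locking or cloning inside the comparator.

```rust
use std::sync::{Arc, Mutex};

// Illustrative domain type living behind a lock (stands in for the
// SDK's protected room data; field names are made up).
struct Room {
    name: String,
    recency: u64,
}

// Small, flat sort key precomputed once before the hot path.
struct SortKey {
    recency: u64,
    index: usize, // position in the protected Vec
}

// Returns room indices ordered by recency, newest first.
fn sorted_indices(rooms: &Arc<Mutex<Vec<Room>>>) -> Vec<usize> {
    // Exactly one lock acquisition: copy out only the fields the
    // comparator needs, then release the lock.
    let mut keys: Vec<SortKey> = {
        let guard = rooms.lock().unwrap();
        guard
            .iter()
            .enumerate()
            .map(|(index, r)| SortKey { recency: r.recency, index })
            .collect()
    }; // guard dropped here; the sort below never touches the lock

    // The comparator reads only the flat, cache-friendly structs.
    keys.sort_by(|a, b| b.recency.cmp(&a.recency));
    keys.into_iter().map(|k| k.index).collect()
}

fn main() {
    let rooms = Arc::new(Mutex::new(vec![
        Room { name: "alice".into(), recency: 10 },
        Room { name: "bob".into(), recency: 30 },
        Room { name: "carol".into(), recency: 20 },
    ]));
    assert_eq!(sorted_indices(&rooms), vec![1, 2, 0]);
    println!("sorted: {:?}", sorted_indices(&rooms));
}
```

The same shape applies to any code that reads protected data inside a comparison loop: hoist the reads into a one-shot snapshot and keep the hot path lock-free.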

Takeaways:

  • Lock acquisitions inside sort comparisons compound badly: 322K acquisitions for a single sort pass is a design problem, not a tuning problem, and no amount of lock optimization fixes it
  • Data-oriented design in Rust means precomputing a cache-friendly struct before the hot path runs, not restructuring your entire domain model; a 64-byte struct that lives in L1 cache is often enough
  • The 78× speedup came from two changes: fewer allocations and zero locks in the hot path; profiling told them which to fix first

SurrealDB 3.0: A Database Built for AI Agent Memory

SurrealDB 3.0, released February 17, is the most substantial release the project has shipped. The headline positioning is "AI agent memory," and the features behind it are worth examining rather than dismissing as marketing.

The architecture changes in 3.0 are real improvements. Separation of values from expressions eliminates redundant computation. ID-based storage reduced catalog keys from 80 to 42 bytes. Synced writes are now the default for durability. Over 150 bug fixes shipped. These are the kinds of changes that make a database trustworthy for production, not just interesting in demos.

The AI-specific features center on a few additions. File support via buckets and file pointers lets you store images, audio, and documents alongside structured data, which matters for multimodal agents. Enhanced vector search with compound indexes, prefix and range scans, and concurrent index builds improves retrieval quality for agent memory lookups. The most interesting addition is Surrealism: a WebAssembly extension system that lets you define custom business logic and integrate AI models, including local LLMs and remote APIs, directly in the database layer.

The DEFINE API statement is also notable. It lets you define custom endpoints in SurrealQL, effectively embedding middleware and rate limiting logic in the database without an external service. For agent-first architectures where you want the data and the logic close together, this reduces the number of moving parts.

SurrealDB is written in Rust, which is not the headline of this story but is the reason it can ship a WebAssembly extension system without a separate runtime. The same binary handles query execution, WASM evaluation, and vector search. That kind of integration is hard to achieve in languages that lean on managed runtimes.

Takeaways:

  • The AI agent memory framing is real: file storage, vector search with concurrent indexes, and WASM-based custom logic cover the three things agent memory needs (retrieval, storage, and computation)
  • DEFINE API is an interesting pattern: embedding endpoint logic in the database reduces the surface area of agent architectures that would otherwise require an API layer between the DB and the agent
  • 150+ bug fixes and architectural durability improvements make this a serious production release, not just a feature drop

Snippets

  • Rust debugging survey 2026: The compiler team wants to know how you debug Rust. Submissions are open until March 13, and if debugging Rust frustrates you, this is your chance to say so with data.

  • FOSDEM 2026: Rust Devroom in Review. A recap of the talks and community projects presented at the Rust devroom at FOSDEM 2026.

  • crates.io malicious crate policy update: The crates.io team updated how it communicates malicious crate removals. Individual blog posts per removal are being replaced by RustSec advisories, keeping the registry transparent without the noise.


We are thrilled to have you as part of our growing community of Rust enthusiasts! If you found value in this newsletter, don't keep it to yourself — share it with your network and let's grow the Rust community together.

👉 Take Action Now:

  • Share: Forward this email to share this newsletter with your colleagues and friends.

  • Engage: Have thoughts or questions? Reply to this email.

  • Subscribe: Not a subscriber yet? Click here to never miss an update from Rust Trends.

Cheers,
Bob Peters

Want to sponsor Rust Trends? We reach thousands of Rust developers biweekly. Get in touch!


Seeking Purity


The concept of purity — historically a guiding principle in social and moral contexts — is also found in passionate, technical discussions. By that I mean that purity in technology translates into adherence to a set of strict principles, whether it be functional programming, test-driven development, serverless architectures, or, in the case of Rust, memory safety.

Memory Safety

Rust positions itself as a champion of memory safety, treating it as a non-negotiable foundation of good software engineering. I love Rust: it's probably my favorite language. It probably won't surprise you that I have no problem with it upholding memory safety as a defining feature.

Rust aims to achieve the goal of memory safety via safe abstractions, a compile-time borrow checker, and a type system in service of those safe abstractions. It comes as no surprise that the Rust community is also pretty active in codifying a new way to reason about pointers. In many ways, Rust pioneered completely new technical approaches, and it is widely heralded as an amazing innovation.

However, as with many movements rooted in purity, what starts as a technical pursuit can evolve into something more ideological. Similar to how moral purity in political and cultural discourse can become charged, so has the discourse around Rust, which has been dominated by the pursuit of memory safety. Particularly within the core Rust community itself, discussion has moved beyond technical merits into something akin to ideological warfare. The fundamental question of “Is this code memory safe?” has shifted to “Was it made memory safe in the correct way?”. This distinction matters because it introduces a purity test that values methodology over outcomes. Safe C code, for example, is often dismissed as impossible, not necessarily because it is impossible, but because it lacks the strict guarantees that Rust's borrow checker enforces. Similarly, using Rust’s unsafe blocks is increasingly frowned upon, despite their intended purpose of enabling low-level optimizations when necessary.

This ideological rigidity creates significant friction when Rust interfaces with other ecosystems (or gets introduced there), particularly those that do not share its uncompromising stance. For instance, the role of Rust in the Linux kernel has been a hot topic. The Linux kernel operates under an entirely different set of priorities: while memory safety is important, there is insufficient support for adopting Rust in general. The kernel is an old project, and it aims to remain maintainable for a long time into the future. For it to even consider a rather young programming language should be seen as a tremendous success for Rust, and also for how open Linus is to the idea.

Yet that introduction is balanced against performance, maintainability, and decades of accumulated engineering expertise. Many of the kernel developers, who have found their own strategies to write safe C for decades, are not accepting the strongly implied premise that their work is inherently flawed simply because it does not adhere to Rust's strict purity rules.

Tensions rose when a kernel developer advocating for Rust's inclusion took to social media to push for changes in the Linux kernel development process. The public shaming tactic failed, leading the developer to conclude:

“If shaming on social media does not work, then tell me what does, because I'm out of ideas.”

It's not just the kernel where Rust's memory safety runs up against the complexities of the real world. Very similar feelings creep up in the gaming industry, where people love to do wild stuff with pointers. You do not need large disagreements to see the purist approach create friction: a recent post of mine, for instance, triggered some discussions about the trade-offs between adding dependencies and moving unsafe code into centralized crates.

I really appreciate that Rust code does not crash as much. That part of Rust, among many others, makes it very enjoyable to work with. Yet I am entirely unconvinced that memory safety should trump everything, at least at this point in time.

What people want in the Rust in Linux situation is for the project leader to come in to declare support for Rust's call for memory safety above all. To make the detractors go away.

Python's Migration Lesson

Hearing this call and discussion brings back memories. I have lived through a purity-driven shift in a community before. The move from Python 2 to Python 3 started out very much the same way. There was an almost religious movement in the community to move to Python 3 in a ratcheting motion. The idea that you could maintain code bases that support both 2 and 3 was initially very loudly rejected. I took a lot of flak at the time (and for years after) for advocating for a more pragmatic migration, which burned me out a lot. That feedback came both in person and online, and it largely pushed me away from Python for a while. Not getting behind the Python 3 train was seen as sabotaging the entire project. However, a decade later, I feel somewhat vindicated that it was worth being pragmatic about that migration.

At the root of that discourse was an idealistic view of how Unicode could work in the language, and the belief that you can move an entire ecosystem at once. Both greatly clashed with the lived realities in many projects and companies.

I am a happy user of Python 3 today. This migration also taught me the important lesson not to be too stuck on a particular idea. It would have been very easy to pick one of the two sides of that debate. Be stuck on Python 2 (at the risk of forking), or go all in on Python 3 no questions asked. It was the path in between that was quite painful to advocate for, but it was ultimately the right path. I wrote about my lessons from that migration in 2016 and I think most of this still rings true. That was motivated by people, even years later, still reaching out to me who had not moved to Python 3, hoping for me to embrace their path. Yet Python 3 has changed! Python 3 is a much better language than it was when it first released. It is a great language because it's used by people solving real, messy problems and because over time it found answers for what to do if you need to have both Python 2 and 3 code in the wild. While the world of Python 2 is largely gone, we are still in a world where Unicode and bytes mix in certain contexts.

The Messy Process

Fully committing to a single worldview can be easier because you stop questioning everything — you can just go with the flow. Yet truths often reside on both sides. Allowing yourself to walk the careful middle path enables you to learn from multiple perspectives. You will face doubts and open yourself up to vulnerability and uncertainty. The payoff, however, is the ability to question deeply held beliefs and push into the unknown territory where new things can be found. You can arrive at a solution that isn't a complete rejection of any side. There is genuine value in what Rust offers—just as there was real value in what Python 3 set out to accomplish. But the Python 3 of today isn't the Python 3 of those early, ideological debates; it was shaped by a messy, slow, often contentious, yet ultimately productive transition process.

I am absolutely sure that 30 years from now we are going to primarily program in memory-safe languages (or the machines will do it for us), even in the environments where C and C++ prevail today. That glimpse of a future I can visualize clearly. The path to there, however? That's a different story altogether. It will be hard, it will be impure. Maybe the solution will not even involve Rust at all — who knows.

We also have to accept that not everyone is ready for change at the same pace. Forcing adoption when people aren't prepared only causes the pendulum to swing back hard. It's tempting to look for a single authority to declare “the one true way,” but that won't smooth out the inevitable complications. Indeed, those messy, incremental challenges are part of how real progress happens. In the long run, these hard-won refinements tend to produce solutions that benefit all sides—if we’re patient enough to let them take root. The painful and messy transition is here to stay, and that's exactly why, in the end, it works.


From Paxos to BFT


This is a sequel to the Notes on Paxos post. Similarly, the primary goal here is for me to understand in detail why the BFT consensus algorithm works. This might, or might not, be useful for other people! The Paxos article is a prerequisite, best to read that now, and return to this article tomorrow :)

Note also that while Paxos was more or less a direct translation of Lamport’s lecture, this post is a mish-mash of the original BFT paper by Liskov and Castro, my own thinking, and a cursory glance at this formalization. As such, the probability that there are no mistakes here is quite low.

What is BFT?

BFT stands for Byzantine Fault Tolerant consensus. Similarly to Paxos, we imagine a distributed system of computers communicating over a faulty network which can arbitrarily reorder, delay, and drop messages. And we want the computers to agree on some specific choice of value among the set of possibilities, such that any two computers pick the same value. Unlike Paxos though, we also assume that the computers themselves might be faulty or malicious. So we add a new condition to our list of bad things: besides being reordered, duplicated, delayed, and dropped, a fake message can be manufactured out of thin air.

Of course, if absolutely arbitrary messages can be forged, then no consensus is possible — each machine lives in its own solipsistic world which might be completely unlike the world of every other machine. So there’s one restriction — messages are cryptographically signed by their senders, and it is assumed that it is impossible for a faulty node to impersonate a non-faulty one.

Can we still achieve consensus? Yes, as long as for every f faulty, malicious nodes, we have at least 2f + 1 honest ones.

Similarly to the Paxos post, we will capture this intuition into a precise mathematical statement about trajectories of state machines.

Paxos Revisited

Our plan is to start with vanilla Paxos, and then patch it to allow byzantine behavior. Here’s what we’ve arrived at last time:

Paxos
Sets:
  𝔹       -- Numbered set of ballots (for example, ℕ)
  𝕍       -- Arbitrary set of values
  𝔸       -- Finite set of acceptors
  ℚ ∈ 2^𝔸 -- Set of quorums

  -- Sets of messages for each of the four subphases
  Msgs1a ≡ {type: {"1a"}, bal: 𝔹}

  Msgs1b ≡ {type: {"1b"}, bal: 𝔹, acc: 𝔸,
            vote: {bal: 𝔹, val: 𝕍} ∪ {null}}

  Msgs2a ≡ {type: {"2a"}, bal: 𝔹, val: 𝕍}

  Msgs2b ≡ {type: {"2b"}, bal: 𝔹, val: 𝕍, acc: 𝔸}

Assume:
  ∀ q1, q2 ∈ ℚ: q1 ∩ q2 ≠ {}

Vars:
  -- Set of all messages sent so far
  msgs ∈ 2^(Msgs1a ∪ Msgs1b ∪ Msgs2a ∪ Msgs2b)

  -- Function that maps acceptors to ballot numbers or -1
  -- maxBal :: 𝔸 -> 𝔹 ∪ {-1}
  maxBal ∈ (𝔹 ∪ {-1})^𝔸

  -- Function that maps acceptors to their last vote
  -- lastVote :: 𝔸 -> {bal: 𝔹, val: 𝕍} ∪ {null}
  lastVote ∈ ({bal: 𝔹, val: 𝕍} ∪ {null})^𝔸

Send(m) ≡ msgs' = msgs ∪ {m}

Safe(b, v) ≡
  ∃ q ∈ ℚ:
  let
    qmsgs  ≡ {m ∈ msgs: m.type = "1b" ∧ m.bal = b ∧ m.acc ∈ q}
    qvotes ≡ {m ∈ qmsgs: m.vote ≠ null}
  in
      ∀ a ∈ q: ∃ m ∈ qmsgs: m.acc = a
    ∧ (  qvotes = {}
       ∨ ∃ m ∈ qvotes:
             m.vote.val = v
           ∧ ∀ m1 ∈ qvotes: m1.vote.bal <= m.vote.bal)

Phase1a(b) ≡
    maxBal' = maxBal
  ∧ lastVote' = lastVote
  ∧ Send({type: "1a", bal: b})

Phase1b(a) ≡
  ∃ m ∈ msgs:
      m.type = "1a" ∧ maxBal(a) < m.bal
    ∧ maxBal' = λ a1 ∈ 𝔸: if a = a1
                            then m.bal - 1
                            else maxBal(a1)
    ∧ lastVote' = lastVote
    ∧ Send({type: "1b", bal: m.bal, acc: a, vote: lastVote(a)})

Phase2a(b, v) ≡
   ¬∃ m ∈ msgs: m.type = "2a" ∧ m.bal = b
  ∧ Safe(b, v)
  ∧ maxBal' = maxBal
  ∧ lastVote' = lastVote
  ∧ Send({type: "2a", bal: b, val: v})

Phase2b(a) ≡
  ∃ m ∈ msgs:
      m.type = "2a" ∧ maxBal(a) < m.bal
    ∧ maxBal' = λ a1 ∈ 𝔸: if a = a1 then m.bal else maxBal(a1)
    ∧ lastVote' = λ a1 ∈ 𝔸: if a = a1
                              then {bal: m.bal, val: m.val}
                              else lastVote(a1)
    ∧ Send({type: "2b", bal: m.bal, val: m.val, acc: a})

Init ≡
    msgs = {}
  ∧ maxBal   = λ a ∈ 𝔸: -1
  ∧ lastVote = λ a ∈ 𝔸: null

Next ≡
    ∃ b ∈ 𝔹:
        Phase1a(b) ∨ ∃ v ∈ 𝕍: Phase2a(b, v)
  ∨ ∃ a ∈ 𝔸:
        Phase1b(a) ∨ Phase2b(a)

chosen ≡
  {v ∈ 𝕍: ∃ q ∈ ℚ, b ∈ 𝔹: AllVotedFor(q, b, v)}

AllVotedFor(q, b, v) ≡
  ∀ a ∈ q: (a, b, v) ∈ votes

votes ≡
  let
    msgs2b ≡ {m ∈ msgs: m.type = "2b"}
  in
    {(m.acc, m.bal, m.val): m ∈ msgs2b}

Our general idea is to add some “evil” acceptors 𝔼 to the mix and allow them to send arbitrary messages, while at the same time making sure that the subset of “good” acceptors continues to run Paxos. What makes this complex is that we don’t know which acceptors are good and which are bad. So this is our setup:

Sets:
  𝔹       -- Numbered set of ballots (for example, ℕ)
  𝕍       -- Arbitrary set of values
  𝔸       -- Finite set of good acceptors
  𝔼       -- Finite set of evil acceptors
  𝔸𝔼 ≡ 𝔸 ∪ 𝔼 -- All acceptors
  ℚ ∈ 2^𝔸𝔼 -- Set of quorums

  Msgs1a ≡ {type: {"1a"}, bal: 𝔹}

  Msgs1b ≡ {type: {"1b"}, bal: 𝔹, acc: 𝔸𝔼,
            vote: {bal: 𝔹, val: 𝕍} ∪ {null}}

  Msgs2a ≡ {type: {"2a"}, bal: 𝔹, val: 𝕍}

  Msgs2b ≡ {type: {"2b"}, bal: 𝔹, val: 𝕍, acc: 𝔸𝔼}

Assume:
  𝔼 ∩ 𝔸 = {}
  ∀ q1, q2 ∈ ℚ: q1 ∩ q2 ∩ 𝔸 ≠ {}

If previously the quorum condition was “any two quorums have an acceptor in common”, it is now “any two quorums have a good acceptor in common”. An alternative way to say that is “a byzantine quorum is a super-set of a normal quorum”, which corresponds to the intuition where we are running normal Paxos, and there are just some extra evil guys whom we try to ignore. For Paxos, we allowed f faulty out of 2f + 1 total nodes with quorums of size f + 1. For Byzantine Paxos, we’ll have f byzantine out of 3f + 1 nodes with quorums of size 2f + 1. As I’ve said, if we forget about the byzantine folks, we get exactly the f + 1 out of 2f + 1 picture of normal Paxos.
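The arithmetic behind that claim can be checked mechanically. The tiny sketch below verifies that with quorums of size 2f + 1 drawn from 3f + 1 nodes, any two quorums must overlap in at least f + 1 nodes, so the overlap always contains at least one honest acceptor:

```rust
// Worked check of the quorum arithmetic: f byzantine nodes out of
// n = 3f + 1 total, quorums of size 2f + 1.
fn main() {
    for f in 0..10u32 {
        let n = 3 * f + 1;
        let quorum = 2 * f + 1;
        // Minimal possible overlap of two quorums drawn from n nodes
        // (by inclusion-exclusion: |q1| + |q2| - n).
        let overlap = 2 * quorum - n; // = f + 1
        assert_eq!(overlap, f + 1);
        // At most f overlap members can be byzantine, so at least one
        // member of every pairwise intersection is honest.
        assert!(overlap > f);
    }
    println!("every pairwise quorum intersection contains an honest node");
}
```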

The next step is to determine behavior for byzantine nodes. They can send any message, as long as they are the author:

Byzantine(a) ≡
      ∃ b ∈ 𝔹:             Send({type: "1a", bal: b})
    ∨ ∃ b ∈ 𝔹, v ∈ 𝕍:      Send({type: "2a", bal: b, val: v})
    ∨ ∃ b1, b2 ∈ 𝔹, v ∈ 𝕍: Send({type: "1b", bal: b1, acc: a,
                                  vote: {bal: b2, val: v}})
    ∨ ∃ b ∈ 𝔹, v ∈ 𝕍:      Send({type: "2b", bal: b, val: v, acc: a})
  ∧ maxBal' = maxBal
  ∧ lastVote' = lastVote

That is, a byzantine acceptor can send any 1a or 2a message at any time, while for 1b and 2b the author should match.

What breaks? The most obvious thing is Phase2b, that is, voting. In Paxos, as soon as an acceptor receives a 2a message, it votes for it. The correctness of Paxos hinges on the Safe check before we send 2a message, but a Byzantine node can send an arbitrary 2a.

The solution here is natural: rather than blindly trust 2a messages, acceptors would themselves double-check the safety condition, and reject the message if it doesn’t hold:

Phase2b(a) ≡
  ∃ m ∈ msgs:
      m.type = "2a" ∧ maxBal(a) < m.bal
    ∧ Safe(m.bal, m.val)
    ∧ maxBal' = λ a1 ∈ 𝔸: if a = a1 then m.bal else maxBal(a1)
    ∧ lastVote' = λ a1 ∈ 𝔸: if a = a1
                              then {bal: m.bal, val: m.val}
                              else lastVote(a1)
    ∧ Send({type: "2b", bal: m.bal, val: m.val, acc: a})

Implementation-wise, this means that, when a coordinator sends a 2a, it also wants to include the 1b messages proving the safety of that 2a. But in the spec we can just assume that all messages are broadcast, for simplicity. Ideally, for correct modeling you also want to model how each acceptor learns new messages, to make sure that negative reasoning about a certain message not being sent doesn’t creep in, but we’ll avoid that here.

However, just re-checking safety doesn’t fully solve the problem. It might be the case that several values are safe at a particular ballot (indeed, in the first ballot any value is safe), and it is exactly the job of the coordinator / 2a message to pick one value to break the tie. And in our case a byzantine coordinator can send two 2a messages for different valid values.

And here we’ll make the single non-trivial modification to the algorithm. Like the Safe condition is at the heart of Paxos, the Confirmed condition is the heart here.

So basically we expect a good coordinator to send just one 2a message, but a bad one can send many. And we want to somehow distinguish the two cases. One way to do that is to broadcast ACKs for 2a among acceptors. If I received a 2a message, checked that the value therein is safe, and also know that everyone else received this same 2a message, I can safely vote for the value.

So we introduce a new message type, 2ac, which confirms a valid 2a message:

Msgs2ac ≡ {type: {"2ac"}, bal: 𝔹, val: 𝕍, acc: 𝔸}

Naturally, evil acceptors can confirm whatever:

Byzantine(a) ≡
      ∃ b ∈ 𝔹:             Send({type: "1a", bal: b})
    ∨ ∃ b1, b2 ∈ 𝔹, v ∈ 𝕍: Send({type: "1b", bal: b1, acc: a,
                                 vote: {bal: b2, val: v}})
    ∨ ∃ b ∈ 𝔹, v ∈ 𝕍:      Send({type: "2a", bal: b, val: v})
    ∨ ∃ b ∈ 𝔹, v ∈ 𝕍:      Send({type: "2ac", bal: b, val: v, acc: a})
    ∨ ∃ b ∈ 𝔹, v ∈ 𝕍:      Send({type: "2b", bal: b, val: v, acc: a})
  ∧ maxBal' = maxBal
  ∧ lastVote' = lastVote

But if we get a quorum of confirmations, we can be sure that no other value will be confirmed in a given ballot, because each good acceptor confirms at most a single message per ballot (and we need a bit of state for that as well):

Confirmed(b, v) ≡
  ∃ q ∈ ℚ: ∀ a ∈ q: {type: "2ac", bal: b, val: v, acc: a} ∈ msgs
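As a sketch of what this check looks like operationally (the types and names here are illustrative, not from the spec's notation), Confirmed is just a scan for some quorum whose every member has a matching 2ac in the message set:

```rust
use std::collections::HashSet;

// Illustrative record mirroring the spec's 2ac messages.
#[derive(Hash, PartialEq, Eq)]
struct Msg2ac {
    bal: u64,
    val: u64,
    acc: u32,
}

// Confirmed(b, v): some quorum has every member's 2ac for (b, v) in msgs.
fn confirmed(msgs: &HashSet<Msg2ac>, quorums: &[Vec<u32>], bal: u64, val: u64) -> bool {
    quorums.iter().any(|q| {
        q.iter()
            .all(|&acc| msgs.contains(&Msg2ac { bal, val, acc }))
    })
}

fn main() {
    let mut msgs = HashSet::new();
    // Acceptors 0, 1, 2 confirm value 42 in ballot 1.
    for acc in [0u32, 1, 2] {
        msgs.insert(Msg2ac { bal: 1, val: 42, acc });
    }
    let quorums = vec![vec![0u32, 1, 2], vec![1, 2, 3]];
    assert!(confirmed(&msgs, &quorums, 1, 42)); // quorum {0,1,2} all sent 2ac
    assert!(!confirmed(&msgs, &quorums, 1, 7)); // no quorum for value 7
    println!("ok");
}
```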

Putting everything so far together, we get

Not Yet BFT Paxos
Sets:
  𝔹          -- Numbered set of ballots (for example, ℕ)
  𝕍          -- Arbitrary set of values
  𝔸          -- Finite set of acceptors
  𝔼          -- Finite set of evil acceptors
  𝔸𝔼 ≡ 𝔸 ∪ 𝔼 -- Set of all acceptors
  ℚ ∈ 2^𝔸𝔼   -- Set of quorums

  Msgs1a ≡ {type: {"1a"}, bal: 𝔹}

  Msgs1b  ≡ {type: {"1b"}, bal: 𝔹, acc: 𝔸,
             vote: {bal: 𝔹, val: 𝕍} ∪ {null}}

  Msgs2a  ≡ {type: {"2a"}, bal: 𝔹, val: 𝕍}
  Msgs2ac ≡ {type: {"2ac"}, bal: 𝔹, val: 𝕍, acc: 𝔸}

  Msgs2b  ≡ {type: {"2b"}, bal: 𝔹, val: 𝕍, acc: 𝔸}

Assume:
  𝔼 ∩ 𝔸 = {}
  ∀ q1, q2 ∈ ℚ: q1 ∩ q2 ∩ 𝔸 ≠ {}

Vars:
  -- Set of all messages sent so far
  msgs ∈ 2^(Msgs1a ∪ Msgs1b ∪ Msgs2a ∪ Msgs2ac ∪ Msgs2b)

  -- Function that maps acceptors to ballot numbers or -1
  -- maxBal :: 𝔸 -> 𝔹 ∪ {-1}
  maxBal ∈ (𝔹 ∪ {-1})^𝔸

  -- Function that maps acceptors to their last vote
  -- lastVote :: 𝔸 -> {bal: 𝔹, val: 𝕍} ∪ {null}
  lastVote ∈ ({bal: 𝔹, val: 𝕍} ∪ {null})^𝔸

  -- Function which maps acceptors to values they confirmed as safe
  -- confirm :: (𝔸, 𝔹) -> 𝕍 ∪ {null}
  confirm ∈ (𝕍 ∪ {null})^(𝔸 × 𝔹)

Send(m) ≡ msgs' = msgs ∪ {m}

Confirmed(b, v) ≡
  ∃ q ∈ ℚ: ∀ a ∈ q: {type: "2ac", bal: b, val: v, acc: a} ∈ msgs

Safe(b, v) ≡
  ∃ q ∈ ℚ:
  let
    qmsgs  ≡ {m ∈ msgs: m.type = "1b" ∧ m.bal = b ∧ m.acc ∈ q}
    qvotes ≡ {m ∈ qmsgs: m.vote ≠ null}
  in
      ∀ a ∈ q: ∃ m ∈ qmsgs: m.acc = a
    ∧ (  qvotes = {}
       ∨ ∃ m ∈ qvotes:
             m.vote.val = v
           ∧ ∀ m1 ∈ qvotes: m1.vote.bal <= m.vote.bal)

Byzantine(a) ≡
      ∃ b ∈ 𝔹:             Send({type: "1a", bal: b})
    ∨ ∃ b1, b2 ∈ 𝔹, v ∈ 𝕍: Send({type: "1b", bal: b1, acc: a,
                                 vote: {bal: b2, val: v}})
    ∨ ∃ b ∈ 𝔹, v ∈ 𝕍:      Send({type: "2a", bal: b, val: v})
    ∨ ∃ b ∈ 𝔹, v ∈ 𝕍:      Send({type: "2ac", bal: b, val: v, acc: a})
    ∨ ∃ b ∈ 𝔹, v ∈ 𝕍:      Send({type: "2b", bal: b, val: v, acc: a})
  ∧ maxBal' = maxBal
  ∧ lastVote' = lastVote
  ∧ confirm' = confirm

Phase1b(a) ≡
  ∃ m ∈ msgs:
      m.type = "1a" ∧ maxBal(a) < m.bal
    ∧ maxBal' = λ a1 ∈ 𝔸: if a = a1
                            then m.bal - 1
                            else maxBal(a1)
    ∧ lastVote' = lastVote
    ∧ confirm' = confirm
    ∧ Send({type: "1b", bal: m.bal, acc: a, vote: lastVote(a)})

Phase2ac(a) ≡
  ∃ m ∈ msgs:
      m.type = "2a"
    ∧ confirm(a, m.bal) = null
    ∧ Safe(m.bal, m.val)
    ∧ maxBal' = maxBal
    ∧ lastVote' = lastVote
    ∧ confirm' = λ a1 ∈ 𝔸, b1 ∈ 𝔹:
                 if a = a1 ∧ b1 = m.bal then m.val else confirm(a1, b1)
    ∧ Send({type: "2ac", bal: m.bal, val: m.val, acc: a})

Phase2b(a) ≡
  ∃ b ∈ 𝔹, v ∈ 𝕍:
      maxBal(a) < b
    ∧ Confirmed(b, v)
    ∧ maxBal' = λ a1 ∈ 𝔸: if a = a1 then b else maxBal(a1)
    ∧ lastVote' = λ a1 ∈ 𝔸: if a = a1
                              then {bal: b, val: v}
                              else lastVote(a1)
    ∧ confirm' = confirm
    ∧ Send({type: "2b", bal: b, val: v, acc: a})

Init ≡
    msgs = {}
  ∧ maxBal   = λ a ∈ 𝔸: -1
  ∧ lastVote = λ a ∈ 𝔸: null
  ∧ confirm = λ a ∈ 𝔸, b ∈ 𝔹: null

Next ≡
    ∃ a ∈ 𝔸:
        Phase1b(a) ∨ Phase2ac(a) ∨ Phase2b(a)
  ∨ ∃ a ∈ 𝔼:
        Byzantine(a)

chosen ≡
  {v ∈ 𝕍: ∃ q ∈ ℚ, b ∈ 𝔹: AllVotedFor(q, b, v)}

AllVotedFor(q, b, v) ≡
  ∀ a ∈ q: (a, b, v) ∈ votes

votes ≡
  let
    msgs2b ≡ {m ∈ msgs: m.type = "2b"}
  in
    {(m.acc, m.bal, m.val): m ∈ msgs2b}

In the above, I’ve also removed phases 1a and 2a: since byzantine acceptors are allowed to send arbitrary 1a and 2a messages anyway, the explicit phases are subsumed (we’ll need explicit 1a/2a for liveness, but we won’t discuss that here).

The most important conceptual addition is Phase2ac — if an acceptor receives a new 2a message for some ballot with a safe value, it sends out the confirmation, provided that it hadn’t done so already. In Phase2b we can then vote for confirmed values: confirmation by a quorum guarantees both that the value is safe at this ballot, and that this is the single value that can be voted for in this ballot (two different values can’t be confirmed in the same ballot, because any two quorums have an honest acceptor in common). This almost works, but there’s still a problem. Can you spot it?

The problem is in the Safe condition. Recall that the goal of the Safe condition is to pick a value v for ballot b such that, if any earlier ballot b1 concludes, the value chosen in b1 would necessarily be v. The way Safe works for ballot b in normal Paxos is that the coordinator asks a certain quorum to abstain from further voting in ballots earlier than b, collects existing votes, and uses those votes to pick a safe value. Specifically, it looks at the vote for the highest-numbered ballot in the set and declares the value from it safe (it is safe: it was safe at that ballot, and for every ballot between it and b there's a quorum which abstained from voting).
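To make that procedure concrete, here is a minimal Rust sketch of the vote-picking step in normal Paxos (the types and names are mine, for illustration only): take the value of the highest-numbered vote reported by the quorum, or any value if nobody has voted yet.

```rust
// Hypothetical illustration of how a Paxos coordinator picks a safe value
// from the 1b messages of a quorum. Not the post's spec, just its idea.
#[derive(Clone, Copy)]
struct Vote {
    bal: u64,
    val: &'static str,
}

// The value of the highest-numbered vote is safe; if the quorum reported
// no votes at all, any value is safe.
fn pick_safe(votes: &[Vote], any: &'static str) -> &'static str {
    votes
        .iter()
        .max_by_key(|v| v.bal)
        .map(|v| v.val)
        .unwrap_or(any)
}

fn main() {
    let votes = [Vote { bal: 1, val: "a" }, Vote { bal: 3, val: "b" }];
    assert_eq!(pick_safe(&votes, "x"), "b"); // highest-numbered vote wins
    assert_eq!(pick_safe(&[], "x"), "x"); // no votes: any value is safe
    println!("picked: {}", pick_safe(&votes, "x"));
}
```

Note how everything hinges on that single highest vote being truthful, which is exactly the vulnerability discussed next.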

This procedure puts a lot of trust in that highest vote, which makes it vulnerable. An evil acceptor can simply claim that it voted in some high ballot and force the choice of an arbitrary value. So we need some independent confirmation that the vote was cast for a safe value. And we can re-use 2ac messages for this:

Safe(b, v) ≡
  ∃ q ∈ ℚ:
  let
    qmsgs  ≡ {m ∈ msgs: m.type = "1b" ∧ m.bal = b ∧ m.acc ∈ q}
    qvotes ≡ {m ∈ qmsgs: m.vote ≠ null}
  in
      ∀ a ∈ q: ∃ m ∈ qmsgs: m.acc = a
   ∧ (  qvotes = {}
       ∨ ∃ m ∈ qvotes:
             m.vote.val = v
           ∧ ∀ m1 ∈ qvotes: m1.vote.bal <= m.vote.bal
           ∧ Confirmed(m.vote.bal, v))

And … that's it, really. Now we can sketch a proof that this indeed achieves BFT consensus, because it actually models normal Paxos among non-byzantine acceptors.

Phase1a messages of Paxos are modeled by Phase1a messages of BFT Paxos, as they don't have any preconditions; the same goes for Phase1b. A Phase2a message of Paxos is emitted when a value becomes confirmed in BFT Paxos. This is correct modeling, because BFT's Safe condition models the normal Paxos Safe condition (this is a bit inexact, I think; to make it exact, we would want to separate "this value is safe" from "we are voting for this value" in original Paxos as well). Finally, Phase2b also displays a direct correspondence.

As a final pop-quiz, I claim that the Confirmed(m.vote.bal, v) condition in Safe above can be relaxed. As stated, Confirmed needs a byzantine quorum of confirmations, which guarantees both that the value is safe and that it is the single confirmed value, which is a bit more than we need here. Do you see what would be enough?

The final specification contains this relaxation:

BFT Paxos
Sets:
  𝔹          -- Numbered set of ballots (for example, ℕ)
  𝕍          -- Arbitrary set of values
  𝔸          -- Finite set of acceptors
  𝔼          -- Finite set of evil acceptors
  𝔸𝔼 ≡ 𝔸 ∪ 𝔼 -- Set of all acceptors
  ℚ ∈ 2^𝔸𝔼   -- Set of quorums
  𝕎ℚ ∈ 2^𝔸𝔼  -- Set of weak quorums

  Msgs1a ≡ {type: {"1a"}, bal: 𝔹}

  Msgs1b  ≡ {type: {"1b"}, bal: 𝔹, acc: 𝔸𝔼,
             vote: {bal: 𝔹, val: 𝕍} ∪ {null}}

  Msgs2a  ≡ {type: {"2a"}, bal: 𝔹, val: 𝕍}
  Msgs2ac ≡ {type: {"2ac"}, bal: 𝔹, val: 𝕍, acc: 𝔸𝔼}

  Msgs2b  ≡ {type: {"2b"}, bal: 𝔹, val: 𝕍, acc: 𝔸𝔼}

Assume:
  𝔼 ∩ 𝔸 = {}
  ∀ q1, q2 ∈ ℚ: q1 ∩ q2 ∩ 𝔸 ≠ {}
  ∀ q ∈ 𝕎ℚ: q ∩ 𝔸 ≠ {}

Vars:
  -- Set of all messages sent so far
  msgs ∈ 2^(Msgs1a ∪ Msgs1b ∪ Msgs2a ∪ Msgs2ac ∪ Msgs2b)

  -- Function that maps acceptors to ballot numbers or -1
  -- maxBal :: 𝔸 -> 𝔹 ∪ {-1}
  maxBal ∈ (𝔹 ∪ {-1})^𝔸

  -- Function that maps acceptors to their last vote
  -- lastVote :: 𝔸 -> {bal: 𝔹, val: 𝕍} ∪ {null}
  lastVote ∈ ({bal: 𝔹, val: 𝕍} ∪ {null})^𝔸

  -- Function which maps acceptors to values they confirmed as safe
  -- confirm :: (𝔸, 𝔹) -> 𝕍 ∪ {null}
  confirm ∈ (𝕍 ∪ {null})^(𝔸 × 𝔹)

Send(m) ≡ msgs' = msgs ∪ {m}

Safe(b, v) ≡
  ∃ q ∈ ℚ:
  let
    qmsgs  ≡ {m ∈ msgs: m.type = "1b" ∧ m.bal = b ∧ m.acc ∈ q}
    qvotes ≡ {m ∈ qmsgs: m.vote ≠ null}
  in
      ∀ a ∈ q: ∃ m ∈ qmsgs: m.acc = a
    ∧ (  qvotes = {}
       ∨ ∃ m ∈ qvotes:
             m.vote.val = v
           ∧ ∀ m1 ∈ qvotes: m1.vote.bal <= m.vote.bal
           ∧ ConfirmedWeak(m.vote.bal, v))

Confirmed(b, v) ≡
  ∃ q ∈ ℚ: ∀ a ∈ q: {type: "2ac", bal: b, val: v, acc: a} ∈ msgs

ConfirmedWeak(b, v) ≡
  ∃ q ∈ 𝕎ℚ: ∀ a ∈ q: {type: "2ac", bal: b, val: v, acc: a} ∈ msgs

Byzantine(a) ≡
      ∃ b ∈ 𝔹:             Send({type: "1a", bal: b})
    ∨ ∃ b1, b2 ∈ 𝔹, v ∈ 𝕍: Send({type: "1b", bal: b1, acc: a,
                                 vote: {bal: b2, val: v}})
    ∨ ∃ b ∈ 𝔹, v ∈ 𝕍:      Send({type: "2a", bal: b, val: v})
    ∨ ∃ b ∈ 𝔹, v ∈ 𝕍:      Send({type: "2ac", bal: b, val: v, acc: a})
    ∨ ∃ b ∈ 𝔹, v ∈ 𝕍:      Send({type: "2b", bal: b, val: v, acc: a})
  ∧ maxBal' = maxBal
  ∧ lastVote' = lastVote
  ∧ confirm' = confirm

Phase1b(a) ≡
  ∃ m ∈ msgs:
      m.type = "1a" ∧ maxBal(a) < m.bal
    ∧ maxBal' = λ a1 ∈ 𝔸: if a = a1
                            then m.bal - 1
                            else maxBal(a1)
    ∧ lastVote' = lastVote
    ∧ confirm' = confirm
    ∧ Send({type: "1b", bal: m.bal, acc: a, vote: lastVote(a)})

Phase2ac(a) ≡
  ∃ m ∈ msgs:
      m.type = "2a"
    ∧ confirm(a, m.bal) = null
    ∧ Safe(m.bal, m.val)
    ∧ maxBal' = maxBal
    ∧ lastVote' = lastVote
    ∧ confirm' = λ a1 ∈ 𝔸, b1 ∈ 𝔹:
                 if a = a1 ∧ b1 = m.bal then m.val else confirm(a1, b1)
    ∧ Send({type: "2ac", bal: m.bal, val: m.val, acc: a})

Phase2b(a) ≡
  ∃ b ∈ 𝔹, v ∈ 𝕍:
      Confirmed(b, v)
    ∧ maxBal' = λ a1 ∈ 𝔸: if a = a1 then b else maxBal(a1)
    ∧ lastVote' = λ a1 ∈ 𝔸: if a = a1
                              then {bal: b, val: v}
                              else lastVote(a1)
    ∧ confirm' = confirm
    ∧ Send({type: "2b", bal: b, val: v, acc: a})

Init ≡
    msgs = {}
  ∧ maxBal   = λ a ∈ 𝔸: -1
  ∧ lastVote = λ a ∈ 𝔸: null
  ∧ confirm = λ a ∈ 𝔸, b ∈ 𝔹: null

Next ≡
    ∃ b ∈ 𝔹:
        Phase1a(b) ∨ ∃ v ∈ 𝕍: Phase2a(b, v)
  ∨ ∃ a ∈ 𝔸:
        Phase1b(a) ∨ Phase2ac(a) ∨ Phase2b(a)
  ∨ ∃ a ∈ 𝔼:
        Byzantine(a)

chosen ≡
  {v ∈ 𝕍: ∃ q ∈ ℚ, b ∈ 𝔹: AllVotedFor(q, b, v)}

AllVotedFor(q, b, v) ≡
  ∀ a ∈ q: (a, b, v) ∈ votes

votes ≡
  let
    msgs2b ≡ {m ∈ msgs: m.type = "2b"}
  in
    {(m.acc, m.bal, m.val): m ∈ msgs2b}
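To see why a weak quorum is enough for the relaxation, it helps to instantiate the abstract quorum assumptions with concrete numbers. Below is a minimal Rust check using the usual BFT parameters (my own choice; the post leaves ℚ and 𝕎ℚ abstract): n = 3f + 1 acceptors with at most f byzantine, quorums of size 2f + 1, weak quorums of size f + 1.

```rust
fn main() {
    // Standard BFT instantiation (my parameters, not stated in the post):
    // n = 3f + 1 acceptors, at most f of them byzantine,
    // quorums of size 2f + 1, weak quorums of size f + 1.
    for f in 1usize..=10 {
        let n = 3 * f + 1;
        let quorum = 2 * f + 1;
        let weak = f + 1;

        // Any two quorums overlap in at least 2*quorum - n = f + 1
        // acceptors, so the overlap always contains an honest one.
        assert!(2 * quorum - n > f);

        // A weak quorum outnumbers the byzantine set, so it contains at
        // least one honest acceptor. That is all ConfirmedWeak needs:
        // one honest confirmation shows the value is safe, although it
        // no longer pins down a unique confirmed value.
        assert!(weak > f);
    }
    println!("quorum assumptions hold for f = 1..=10");
}
```

This matches the spec's two assumptions: full quorums pairwise intersect in an honest acceptor, while weak quorums merely contain one.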

TLA+ specs for this post are available here: https://github.com/matklad/paxosnotes.


Announcing Rust 1.49.0


The Rust team is happy to announce a new version of Rust, 1.49.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.49.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.49.0 on GitHub.

What's in 1.49.0 stable

For this release, we have some new targets and an improvement to the test framework. See the detailed release notes to learn about other changes not covered by this post.

64-bit ARM Linux reaches Tier 1

The Rust compiler supports a wide variety of targets, but the Rust Team can't provide the same level of support for all of them. To clearly mark how supported each target is, we use a tiering system:

  • Tier 3 targets are technically supported by the compiler, but we don't check whether their code builds or passes the tests, and we don't provide any prebuilt binaries as part of our releases.
  • Tier 2 targets are guaranteed to build and we provide prebuilt binaries, but we don't execute the test suite on those platforms: the produced binaries might not work or might have bugs.
  • Tier 1 targets provide the highest support guarantee, and we run the full suite on those platforms for every change merged in the compiler. Prebuilt binaries are also available.

Rust 1.49.0 promotes the aarch64-unknown-linux-gnu target to Tier 1 support, bringing our highest guarantees to users of 64-bit ARM systems running Linux! We expect this change to benefit workloads spanning from embedded to desktops and servers.

This is an important milestone for the project, since it's the first time a non-x86 target has reached Tier 1 support: we hope this will pave the way for more targets to reach our highest tier in the future.

Note that Android is not affected by this change as it uses a different Tier 2 target.

64-bit ARM macOS and Windows reach Tier 2

Rust 1.49.0 also features two targets reaching Tier 2 support:

  • The aarch64-apple-darwin target brings support for Rust on Apple M1 systems.
  • The aarch64-pc-windows-msvc target brings support for Rust on 64-bit ARM devices running Windows on ARM.

Developers can expect both of those targets to have prebuilt binaries installable with rustup from now on! The Rust Team is not running the test suite on those platforms though, so there might be bugs or instabilities.

Test framework captures output in threads

Rust's built-in testing framework doesn't have a ton of features, but that doesn't mean it can't be improved! Consider a test that looks like this:

#[test]
fn thready_pass() {
    println!("fee");
    std::thread::spawn(|| {
        println!("fie");
        println!("foe");
    })
    .join()
    .unwrap();
    println!("fum");
}

Here's what running this test looks like before Rust 1.49.0:

❯ cargo +1.48.0 test
   Compiling threadtest v0.1.0 (C:\threadtest)
    Finished test [unoptimized + debuginfo] target(s) in 0.38s
     Running target\debug\deps\threadtest-02f42ffd9836cae5.exe

running 1 test
fie
foe
test thready_pass ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

   Doc-tests threadtest

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

You can see that the output from the thread is printed, intermixed with the output of the test framework itself. Wouldn't it be nice if every println! worked like the one that prints "fum"? Well, that's the behavior in Rust 1.49.0:

❯ cargo test
   Compiling threadtest v0.1.0 (C:\threadtest)
    Finished test [unoptimized + debuginfo] target(s) in 0.52s
     Running target\debug\deps\threadtest-40aabfaa345584be.exe

running 1 test
test thready_pass ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests threadtest

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

But don't worry; if the test fails, you'll still see all of the output. By adding a panic! to the end of the test, we can see what failure looks like:

❯ cargo test
   Compiling threadtest v0.1.0 (C:\threadtest)
    Finished test [unoptimized + debuginfo] target(s) in 0.52s
     Running target\debug\deps\threadtest-40aabfaa345584be.exe

running 1 test
test thready_pass ... FAILED

failures:

---- thready_pass stdout ----
fee
fie
foe
fum
thread 'thready_pass' panicked at 'explicit panic', src\lib.rs:11:5

Specifically, the test runner makes sure to capture the output, and saves it in case the test fails.

Library changes

In Rust 1.49.0, there are three new stable functions:

  • slice::select_nth_unstable
  • slice::select_nth_unstable_by
  • slice::select_nth_unstable_by_key

And two functions were made const:

  • Poll::is_ready
  • Poll::is_pending

See the detailed release notes to learn about other changes.
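As a quick illustration, slice::select_nth_unstable (stabilized in this release) reorders a slice so that the element at a given index lands in its final sorted position, with smaller elements before it and larger ones after; the sample values below are my own:

```rust
fn main() {
    let mut v = [5, 1, 4, 2, 3];
    // Reorder so that index 2 holds the element it would hold if sorted.
    let (lesser, median, greater) = v.select_nth_unstable(2);
    assert_eq!(*median, 3);
    assert!(lesser.iter().all(|&x| x < 3));
    assert!(greater.iter().all(|&x| x > 3));
    println!("median = {}", median);
}
```

Unlike a full sort, this runs in expected linear time, which makes it handy for medians and other order statistics.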

Other changes

There are other changes in the Rust 1.49.0 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.49.0

Many people came together to create Rust 1.49.0. We couldn't have done it without all of you. Thanks!


Rust Survey 2020 Results


Greetings Rustaceans!

Another year has passed, and with it comes another annual Rust survey analysis! The survey was conducted in the second half of September 2020 over a two-week period. We’d like to thank everyone who participated in this year’s survey with a special shout-out to those who helped translate non-English responses.

Without further ado, let’s dive into the analysis!

Survey Audience

The survey was available in 14 different languages and had a record 8,323 total responses.

Here's the distribution of languages across the responses:

  • English: 75.0%
  • Simplified Chinese: 5.4%
  • Russian: 5.3%
  • German: 4.0%
  • French: 2.7%
  • Japanese: 2.2%
  • Korean: 1.2%
  • Traditional Chinese: 1.1%
  • Spanish: 1.0%
  • Portuguese: 0.7%
  • Italian: 0.6%
  • Swedish: 0.5%
  • Vietnamese: 0.1%
  • Polish: 0.1%

83.0% of respondents said they used Rust (an all time high) while 7% said they had used Rust in the past but no longer do. When asked why they had stopped using Rust, the largest group (35%) said they just hadn’t learned it yet (presumably from lack of time), followed by those whose company was not using Rust (34%) and those who said switching to Rust would “slow them down” compared to their current language of choice (19%).

Stability

While Rust itself has always had a strong stability guarantee, stability often means more than just ensuring users’ code doesn’t break when compiled with a new version of the compiler. Rust in 2020 has largely been about cleaning up and stabilizing features and initiatives that were already under way. While this work is not nearly completed, respondents have noted that the stability of Rust in general has been improving.

First, we'd like to give a shout-out to the rust-analyzer and IntelliJ Rust plugin projects, which both enjoy relatively happy user bases. Nearly three-quarters of all respondents noted at least some improvement in the IDE story, and users of rust-analyzer and IntelliJ were especially happy, with 47% of rust-analyzer users and 40% of IntelliJ users noting "a lot of improvement".

In addition to improvements in the IDE experience, the number of users who are relying on a nightly compiler at least part of the time continues to drop - down to 28% compared with last year’s 30.5% with only 8.7% of respondents saying they use nightly exclusively. When asked why people are using nightly the largest reason was to use the Rocket web framework which has announced it will work on the stable version of Rust in its next release. The next largest reason for nightly was const generics, but with a minimal version of const generics reaching stable, we should see less of a reliance on nightly for this feature.

Which versions of Rust do you use?

It’s worth noting that a decent percentage of users who use nightly do so out of habit because “nightly is stable enough”. When asked what broke people’s code most often, by far the largest answer was the introduction of new warnings to a code base where warnings break the build (which is not part of Rust’s stability guarantee though Rust is designed so that adding new warnings never breaks your dependencies). Since we rely on nightly testing to catch regressions, this is a very good sign: nightly is stable enough to be useful while still allowing for continual changes. A shout-out to the Rust infrastructure, compiler, and libs teams for doing such a good job of ensuring that what lands in the nightly compiler is already fairly stable!

Who’s using Rust?

Rust continues to make inroads as a language used for production with roughly 40% of respondents that work in software noting that they use Rust at their day job. Additionally, the future of Rust on the job is bright with nearly half of those who knew saying that their employer planned to hire Rust developers in the next year.

Do you use Rust at work?

The largest change in who is using Rust seems to be among students, with a much larger percentage (~15% vs ~11% last year) of respondents answering that they don't use Rust at work because they're students or software hobbyists and therefore don't have a job in software.

Additionally, the use of Rust at respondents' workplaces seems to be getting bigger with 44% of respondents saying that the amount of Rust at work was 10,000 lines of code or more compared to 34% last year.

Size of Rust code bases at work

Improving Rust

While Rust usage seems to be growing at a healthy pace, the results of the survey made it clear that there is still work to be done to make Rust a more appropriate tool for many people’s workflows.

C++ Interop

Interestingly, C++ was the most requested language for better interop with Rust, with C and Python in second and third place. Improved C++ interop was especially often mentioned as a way to improve Rust usage specifically at work. In fact, for users who work on large codebases (100,000 lines of code or larger), C++ interop and, unsurprisingly, compile times were the most cited ways to improve their Rust experience.

If you want better language interop, with which language?

Improved Learnability

When asked how to improve adoption of Rust, many cited making Rust easier to learn with 15.8% of respondents saying they would use Rust more if it were “less intimidating, easier to learn, or less complicated”. Additionally when directly asked how people think we can improve adoption of Rust, the largest category of feedback was documentation and training.

When we asked respondents to rate their expertise in Rust, there was a clear peak at 7 out of 10. It’s hard to say how this compares across languages but it seems notable that relatively few are willing to claim full expertise. However, when compared with last year, the Rust community does seem to be gaining expertise in the language.

How would you rate your expertise in Rust?

We also asked about the difficulty of specific topics. The most difficult topic to learn according to survey results is somewhat unsurprisingly lifetime management with 61.4% of respondents saying that the use of lifetimes is either tricky or very difficult.

Percent of respondents rating each topic as tricky or very difficult.

It does seem that having C++ knowledge helps: 20.2% of respondents with at least some C++ experience found lifetimes "very difficult", compared with 22.2% of those without C++ knowledge. Overall, systems programming knowledge (defined as at least some experience in C and C++) tends to make for more confident Rust users: those with systems programming experience rated their Rust expertise as 5.5 out of 10 on average, while those with experience in statically typed garbage-collected languages like Java or C# rated themselves 4.9 out of 10, and those with only experience in dynamically typed languages like Ruby or JavaScript rated themselves 4.8 out of 10.

Unsurprisingly, the more often people use Rust, the more they feel they are experts in the language with 56.3% of those who use Rust daily ranking themselves as 7 or more out of 10 on how much of an expert they are on Rust compared with 22% of those who use Rust monthly.

How would you rate your expertise in Rust? (Daily Rust users)

Compile Times

One continuing topic of importance to the Rust community and the Rust team is improving compile times. Progress has already been made with 50.5% of respondents saying they felt compile times have improved. This improvement was particularly pronounced with respondents with large codebases (10,000 lines of code or more) where 62.6% citing improvement and only 2.9% saying they have gotten worse. Improving compile times is likely to be the source of significant effort in 2021, so stay tuned!

Library Support

In general, respondents seemed pleased with the growing library support in the Rust ecosystem with 65.9% of respondents saying they had seen at least some improvement and only 4.9% saying they hadn't seen any improvement. When asked what type of library support was missing most, GUI programming was the overwhelming answer with only 26.9% of respondents noting that this was an area of improvement in the last year.

Additional topics for improvement include maturing the async programming story, more libraries for specific tasks not already covered by the crates.io ecosystem, and more "blessed" libraries for common tasks.

Community

Ways that the Rust community could improve varied but were highlighted by two popular points. First, improving the state of the Rust community for those who do not wish to or cannot participate in English. There does not seem to be a particular language that is especially underserved with Russian, Mandarin, Japanese, Portuguese, Spanish and French coming up frequently.

Additionally, many said that having large corporate sponsors in the Rust community will make it easier for them to make the case for using Rust at work.

Another interesting find was that Europe was by far the most favored place for holding a Rust conference, with every part of Europe (West, East, North, South, and Central) having more than 14% of respondents interested in attending a conference there, and Western Europe getting the highest share (26.3%). The only other region in the same ballpark was the United States, with 21.6% of respondents saying they'd be interested in a conference located there.

Getting Excited for Rust’s Future

Generally, respondents seemed to have a positive picture not only for how Rust has improved over the last year but for the year to come. In particular, many noted their excitement for new features to the language such as const generics and generic associated types (GATs) as well as the 2021 edition, improvements to async, the Bevy game engine, more adoption of Rust by companies, WebAssembly and more!

Here’s to an exciting 2021! 🎉🦀
