Rust Forge

Welcome to the Rust Forge! Rust Forge serves as a repository of supplementary documentation useful for members of The Rust Programming Language. If you find any mistakes, typos, or want to add to the Rust Forge, feel free to file an issue or PR on the Rust Forge GitHub.

Help Wanted

Want to contribute to Rust, but don’t know where to start? Here’s a list of rust-lang projects with issues marked as needing help or as good first issues.

Repository | Description
rust | The Rust Language & Compiler
cargo | The Rust package manager
crates.io | Source code for crates.io
www.rust-lang.org | The Rust website

Current Release Versions

Channel | Version | Will be stable on | Will branch from master on
Stable
Beta
Nightly
Nightly +1

See the release process documentation for details on what happens in the days leading up to a release.

No Tools Breakage Week

To ensure the beta release includes all the tools, no tool breakages are allowed in the week before the beta cutoff (except for nightly-only tools).

Beta Cut | No Breakage Week

External Links

  • Bibliography of research papers and other projects that influenced Rust.
  • Rust Pontoon is a translation management system used to localize the Rust website.

Platforms

Rust uses a number of different platforms for organizing work and internal communications between teams. This is not intended to be an exhaustive list; rather, it documents the policies for a select few platforms used by the teams.

Twitter

The Rust project has a number of official Twitter accounts, credentials for which are currently maintained by the infrastructure team.

Twitter Guidelines

The project runs the Twitter account @rustlang. The account is handled by a small team of volunteers.

The account will mostly tweet links to the Rust blog and Rust Insiders blog. Additionally it will retweet:

  • links to blog posts about Rust, retweeting the original author if possible
  • questions about Rust, so all followers can help
  • Meetup or conference announcements
  • announcements of new Rust projects
  • anything else relevant

We will not retweet:

  • content that bashes other programming languages/projects or is otherwise unconstructive in its discussion of language/tech choice
  • Personal announcements (“Today I start my job at $COMPANY writing Rust”)
  • Learning Rust updates (“Today I started to learn Rust”)

The Direct Messages are open to everyone. If someone wants something retweeted, they should send the tweet via DM. The vast majority of these requests should be honored, subject to the rules above. If an author asks via DM or tweet that something not be retweeted, that request will be honored.

Additionally account handlers may look through the #rustlang hashtag for noteworthy content.

The account will only follow a small number of Project-owned/related Twitter accounts. At the time of writing (February 2022) this is only @cratesiostatus and @rust_foundation.

Access

Currently access to all four accounts is granted together via a 1Password vault; we don’t split this into more fine-grained access. Some automation uses API keys of the status accounts to automatically tweet about upcoming events on crates.io.

Access is limited to a small set of folks in the Twitter marker team; this isn’t automated (changes should ping infra admins to provision access).

People with access to 1password should:

  • Never change the password or take other administrative action (this is only to be done by infra admins)
  • Exclusively use the project-hosted instance to keep a copy of the password (don’t save it to any other password database, including in your browser)
  • Never share the password with others (even if they’re in the list)
    • All access should always go through regular channels to ensure we’re not accidentally leaking the password by passing it through insecure channels (e.g., email)
  • Be aware that the password may change regularly (requiring re-authorization)

If you believe you should have access, please file a PR against the team repository requesting it and note in the description that you’ve read this policy.

Discord

Rust’s Discord is currently used by a variety of teams such as Community, Ops, and Documentation, as well as their working groups. It is also maintained as a communication tool for Domain Working Groups, and provides a space for general discussion among Rust users, contributors, and beginners.

Where to go for help with using Discord

Discord’s support center provides documentation about its user interface and account settings.

Getting started

  1. Understand community standards
    Discord, like all official Rust spaces, is governed by the Code of Conduct. Before joining the conversation there, you can prepare by reading the Code of Conduct and Moderation Guidelines. It is also useful to read Discord’s Community Guidelines.

  2. Access channels
    To access the Rust Discord, visit https://discord.gg/rust-lang. If you do not already have a Discord account, you can register for one as part of the process of gaining access. Your first action should be agreeing to our Code of Conduct by following the instructions in #welcome.

  3. Configure notifications
    It is a good idea to disable notifications for channels that are not relevant to you, so that you will not be overwhelmed with messages. Select the expansion arrow next to the server name banner (titled “The Rust Programming Language”) and select Notifications from the dropdown. Then follow the configuration instructions provided on the Discord Support site.

Appropriate conversation

Discussions should be related to the channel purpose. On team channels, conversation should be related to team business. All channels are expected to be used for purposes related to the Rust project. Discussions of (for example) wildlife or sightseeing are not appropriate.

Channels

The following channels are relevant to newcomers to the Rust project:

  • welcome - Where you agree to the CoC.
  • rust-usage - This is a channel where you can get help with specific questions about using the language. The Rust Users Forum is also a good resource for such questions.
  • beginners - Here, you can meet people who began using Rust relatively recently.
  • contribute - Interested in contributing to the Rust project? In addition to joining this channel, you can subscribe to the This Week In Rust newsletter, where many opportunities are regularly posted. It may also help to find out more about specific teams.

Channels outside of General are for contributors to Rust.

Messages

Discord conversation takes place when people are available, so you should not generally expect that your messages will receive a response quickly unless a meeting is taking place. Depending on how your notifications are configured, you will see a red circle on top of the Discord icon in your system tray when new messages are received. If you wish to communicate with a specific individual, right-click on their user icon and select “Message” in the dropdown menu.

Read-only view

Set up a Discord account (as described in Getting Started, above) in order to access Discord. There is not currently a read-only archive view available.

Email

While most of Rust’s discussion happens on other platforms, email is eternal and we occasionally need a way to approach individuals or groups privately. Our email is hosted through Mailgun (provided by Mozilla). We create and edit the mailing lists for teams through the rust-lang/team repository. Our email domain is rust-lang.org, e.g. ferris@rust-lang.org.
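
As a rough illustration of how a team’s list is declared (the field names below are assumptions; copy an existing team file in the rust-lang/team repository rather than this sketch), an entry in a team’s TOML file might look like:

[[lists]]
# Hypothetical address for illustration; real entries live in the team repo.
address = "example-team@rust-lang.org"
# Assumed field: include every member of the team on the list.
include-team-members = true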

Sending a public broadcast

If your team needs to reach everyone in the Rust organisation, you can send an email to all@. It is recommended that you only use this mailing list when you know that you need to contact every member, such as for organising a members’ event like the All Hands, or for security alerts.

Keeping responses private

When sending a message to all@, do not put all@ in the To field, as that would mean any replies to your broadcast are also sent to everyone. Instead, put your team’s email address in the To field and place all@ in the Bcc field. Then any replies will be sent to just your team.
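
For example, with a hypothetical team address, the headers of a broadcast would look like this:

To: example-team@rust-lang.org
Bcc: all@rust-lang.org
Subject: Reminder about the upcoming All Hands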

GitHub

GitHub is where the Rust project hosts all of its code, as well as large parts of its discussions.

Organisations

  • rust-lang — The Rust project organisation.
  • rust-embedded — The Embedded Working Group organisation.
  • rustwasm — The WebAssembly Working Group organisation.
  • rust-cli — The Command Line Application Working Group organisation.
  • rust-secure-code — The Secure Code Working Group organisation.
  • rust-gamedev — The Game Development Working Group organisation.

rust-lang organization policy

The following is the policy for management of the rust-lang organization.

Access

All access to the rust-lang GitHub organization is managed via the team repository.[1] Teams that want to assign access levels or create new repositories should open a pull request to that repository to request the change.

The Infrastructure Team is responsible for overall administration of the rust-lang GitHub organization. Selected members of the Infrastructure Team may be organization owners if their work requires it.

All GitHub accounts used to interact with the rust-lang GitHub organization (owner or non-owner) must have 2FA enabled. This is enforced by GitHub.

Bot accounts controlled by the Infrastructure Team (such as the triagebot) can be granted any level of access required for them to work at the discretion of the Infrastructure Team.

[1] See Team Maintenance for policy on how the team repo is managed.

Zulip

Rust’s Zulip is used by a number of teams, notably the compiler, language, and library teams, along with their working groups.

Zulip can be an unintuitive platform to get started with. To get started, take a look at the getting started guide. For more detail, examine the Zulip user documentation!

Where to go for help with using Zulip

If you’re testing a feature, or want to get help, the #zulip stream is the place to go. Like elsewhere, the best thing to do is to create a new topic for each question.

Getting started

It is recommended to first look at the official getting started guide. Like Rust itself, Zulip is a bit special, and reading the documentation before digging in can be really helpful.

You’ll definitely want to configure the streams that you’re subscribed to when getting started; the default set is quite limited, and there are many groups that exist beyond it. Subscribing to a stream is very low cost – it is similar to being “in” an IRC channel, except that logs are available for all streams, regardless of subscription status.

It’s not necessary to introduce yourself, but feel free to say hello in the #new members stream.

User groups

User groups can be pinged by anyone with the @<group> notation, same as pinging another user. Groups can be created by anyone, and anyone can join a group.

Users should feel free to join (or leave) groups on their own. Furthermore, users should feel free to create groups as needed, though it is currently expected that this is somewhat rare. You should name your group similar to how you would name a stream for the same purpose, though groups can be more fine-grained (or less). For example, @T-compiler/meeting currently does not have a dedicated stream.

Appropriate conversation

In most streams, you should try to keep conversations related to team business. The #general stream is a bit broader, but even there, discussions should be closely related to Rust (though may not relate to projects of any particular team). All channels are expected to be used for discussions related to the Rust project, though; discussions of (for example) wildlife or sightseeing are not appropriate.

Streams

These are similar to “channels” on other platforms (i.e., there should not be too many). On the other hand, you can choose which streams you subscribe to, so there can be more streams than there would be channels on other platforms. Read Zulip’s documentation for more details.

Streams are appropriate for any official Rust group. Working groups, project groups, and teams are all examples of official groups. These should ideally also be represented in the team repository.

Default streams

This section is still under debate, and it is not yet clear which direction we will go. It is non-normative, and should not be used yet for modifications to the Zulip instance.

The default set of streams is chosen to allow incoming people to be able to have at least one place to go that can then, if necessary, direct them to a more specific location.

Currently that means that every top-level group present on Zulip is by default visible. Specifically, no stream that contains a / will be enabled by default.

Currently this set is:

  • general
  • t-lang
  • t-compiler
  • t-libs
  • project-ffi-unwind
  • project-inline-asm
  • project-safe-transmute
  • rust-survey-2019
  • wg-async-foundations
  • wg-database
  • wg-formal-methods
  • wg-secure-code
  • wg-traits
  • zulip

An alternative, minimalistic, approach is to use:

  • general
  • zulip
  • announce
  • new members

as the default set, which would push people into customizing their default set when starting out.

Stream naming

A stream should be named in the form #t-{team}/{group name}. For example, #t-compiler/wg-parallel-rustc. More levels of nesting are fine, e.g., a working group might want “subgroups” as well, though you may want to omit the team name in such a case – keeping the stream name short is good for usability, to avoid confusion between different streams which share the same prefix.

If no top-level team exists, or the group spans multiple teams (e.g., project-ffi-unwind), then the top level team should be omitted.

Streams should be clearly communicated as being for a specific purpose. That purpose can be broad, but it should likely include a group of some kind (even if that group is transient, e.g., people who are having trouble with the rust build system, or people working on the compiler). Furthermore, we do not currently intend for this Zulip to be a general place for community projects not affiliated with the Rust organization; if they wish to use Zulip, it is free for open source.

When a new stream is created, you should announce it in #announce. This is generally done automatically by Zulip.

Topics

A topic is attached to every message within a given stream (these are the subdivisions within streams). Topics are generally transient, and live for as long as there is active discussion on a topic. Thinking of topics like email subjects is helpful.

New conversation in a given stream should almost always start in a new topic, not a preexisting one. Unlike (for example) GitHub issues, you should not attempt to search for a past topic on the same subject. Do not spend too long on the name of the topic, either, beyond trying to make it short. Topics should generally be no longer than 20 characters (loosely two to three words), to make sure the full topic name is visible to users.

You should eagerly fork new discussion topics into fresh topics. Note that this can be done with the tail of another topic (if you accidentally diverge into another area of discussion).

To fork from an existing topic, see Zulip’s documentation here.

Messages

Zulip is a unique platform which combines synchronous and asynchronous communication in one location. You should not generally expect that your messages will receive a response quickly, and unlike (for example) Discord, there is likely not much reason to “re-ping” on a particular issue every few hours, since your message is isolated to a specific topic and is unlikely to vanish into history.

Linkifiers

Our Zulip supports a lot of helpful linkifiers, and we’re generally happy to add more on request. See the documentation for the format. Propose one in #zulip!

Generally, github-org/repo#123 works for linking to an issue or PR; the below list gives a few more “special cased” repositories.

Don’t forget that standard Markdown syntax for links also works.

We support linking to issues on repositories inside the rust-lang GitHub organisation without requiring the rust-lang/ prefix. For example:

  • rust-lang/rfcs with RFC#3434 or rfc#3434
  • rust-lang/async-book with async-book#2334
  • rust-lang/cargo with cargo#2334

rust-lang/rust issues can be linked without needing any prefix:

  • rust-lang/rust with #4545 or rust#4545

We currently support linking to commits on these repositories:

Read-only view

Our Zulip instance has the web-public streams beta feature enabled, and we use it for all public streams. Please let us or the Zulip developers know if there are any problems with this. The previous solution to the web-public view was the zulip archive, which now redirects to the web public view.

Zulip Moderation

Zulip, like all official Rust spaces, is governed by the Code of Conduct. If you have concerns, please feel free to escalate to the moderation team.

However, though the moderation team is the top-level body here, it is not the only place where you can seek help with moderation within Zulip.

One method for reaching the Zulip administrators privately is to email zulip-admin.239bd484c0347d2d43214d8581f3e125.show-sender@streams.zulipchat.com. See this page for details on how this works.

You can also ping the @mods group on Zulip; note that this will be public.

It is not currently possible for normal users to take moderation actions themselves (e.g., muting another user). However, each individual stream, including private streams, can be muted.

For admins/moderators

Some common actions for moderators are listed on this page.

Notably, new admins/moderators should add themselves to the mods group on Zulip. (Note that this is something that any user can do!)

Rust Blog Guidelines

Context

The Rust project maintains two blogs. The “main blog” (blog.rust-lang.org) and a “team blog” (blog.rust-lang.org/inside-rust). This document provides the guidelines for what it takes to write a post for each of those blogs, as well as how to propose a post and to choose which blog is most appropriate.

How to select the right blog: audience

So you want to write a Rust blog post, and you’d like to know which blog you should post it on. Ultimately, there are three options:

  • The main Rust blog
    • Suitable when your audience is “all Rust users or potential users”
    • Written from an “official position”, even if signed by an individual
  • The team Rust blog
    • Suitable when your audience is “all Rust contributors or potential contributors”
    • Written from an “official position”, even if signed by an individual
  • Your own personal blog
    • Everything else

There are two key questions to answer in deciding which of these seems right:

  • Are you speaking in an “official capacity” or as a “private citizen”?
  • Who is the audience for your post?

In general, if you are speaking as a “private citizen”, then you are probably best off writing on your own personal blog.

If, however, you are writing in an official capacity, then one of the Rust blogs would be a good fit. Note that this doesn’t mean you can’t write as an individual. Plenty of the posts on Rust’s blog are signed by individuals, and, in fact, that is the preferred option. However, those posts are typically documenting the official position of a team — a good example is Aaron Turon’s classic post on Rust’s language ergonomics initiative. Sometimes, the posts are describing an exciting project, but again in a way that represents the project as a whole (e.g., Manish Goregaokar’s report on Fearless Concurrency in Firefox Quantum).

To decide between the main blog and the team blog, the question to ask yourself is who is the audience for your post. Posts on the main blog should be targeting all Rust users or potential users — they tend to be lighter on technical detail, and written without requiring as much context. Posts on the team blog can assume a lot more context and familiarity with Rust.

Writing for the Main Rust blog

The Leadership Council ultimately decides what to post on the main Rust blog.

Post proposals describing exciting developments from within the Rust org are welcome, as well as posts that describe exciting applications of Rust. We do not generally do “promotional cross-posting” with other projects, however.

If you would like to propose a blog post for the main blog, please reach out to a Leadership Council member. Please do not just open a PR against the main Rust blog adding a post without first discussing it with a Leadership Council member.

Release note blog posts

One special case is the regular release note posts that accompany every Rust release. These are managed by the release team and go on the main blog.

The blog posts are published on the same day as the release by the release team member running that release. Releases always happen on Thursdays.

Before publishing a release post, it goes through a drafting process:

  1. The milestone (e.g. for 1.39.0) for the release is consulted.
  2. PRs that we think are sufficiently important are included, and some items are headlined. The writing of a blog post typically happens through a hackmd document.
  3. Headlined items are sometimes written by different people, and we try to peer-review each subsection.
  4. The blog post draft is submitted as a PR on the blog repo for final review a few days before the release.

Team Rust blogs

Teams can generally decide for themselves what to write on the team Rust blog.

Typical subjects for team Rust blog posts include:

  • New initiatives and calls for participation
  • Updates and status reports from ongoing work
  • Design notes

To propose a blog post for the team blog of a particular team, reach out to the team lead or some other team representative.

Calendars

Many Rust teams and working groups have regular meetings, and it can quickly become challenging to manage all the calendar events.

That’s why we have automation available for generating both one-time and recurring calendar events. It can be found in the calendar repository, which also contains a guide for its usage.

You can use it to create and update calendar invites declaratively using a TOML file, and the tool will then generate .ics files from them, which can be imported into various calendar tools.
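
Purely as a hypothetical sketch of that declarative style (none of the field names below are taken from the actual tool; consult the guide in the calendar repository for the real schema), an event definition might look roughly like:

[[events]]
# Hypothetical fields for illustration only; see the calendar repo's guide.
name = "Example working group sync"
start = 2024-01-04T17:00:00+00:00   # assumed datetime format
duration-minutes = 30
recurring = "weekly"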

Triagebot

Triagebot (AKA rustbot) is a general-purpose bot used for a wide variety of tasks in the rust-lang organization, usually involving sending commands via GitHub or Zulip comments. The following pages explain the available features.

Commands are usually issued by writing comments starting with the text @rustbot. The commands that are available depend on which repository you are using. Each repository has a triagebot.toml where you can see which features are enabled.

For example, the following comment:

@rustbot label A-diagnostics A-macros

will set the given labels on a GitHub issue or pull request, even for people who don’t have direct permissions to do that in the GitHub UI.

GitHub commands

Commands on GitHub issues or pull requests are usually issued by writing @rustbot followed by the command anywhere in the comment. @rustbot will ignore commands in markdown code blocks, inline code spans, or blockquotes. Multiple rustbot commands can be entered in a single comment.

Triagebot also handles comment edits. If you don’t modify the text of the command, then triagebot will ignore the edit. However, if you modify an existing command, or add new ones, then those commands will be processed.

Configuration

Individual GitHub repositories can configure triagebot features via a file called triagebot.toml in the root of the default branch. The following pages explain the syntax needed for each feature.

For example, the rust-lang/rust configuration file is at https://github.com/rust-lang/rust/blob/master/triagebot.toml.

When first adding triagebot.toml to a new repository, you will need to enable permissions for the bot to operate. This can be done by posting a PR to the rust-lang/team database to add bots = ["rustbot"] to the repository in the repos/rust-lang directory.
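
For illustration, a new repository’s file under repos/rust-lang/ in the team repository might look roughly like the following; apart from bots = ["rustbot"], the field names are assumptions based on existing entries, so copy a real file from that directory rather than this sketch:

org = "rust-lang"
name = "my-new-repo"                  # hypothetical repository name
description = "An example repository"
bots = ["rustbot"]                    # lets triagebot operate on the repo

# Assumed syntax for granting team access; check an existing entry.
[access.teams]
infra = "write"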

Common command summary

The following are some common commands you may see on rust-lang/rust.

Command | Description | Docs
@rustbot claim | Assigns an issue to yourself. | Issue Assignment
@rustbot release-assignment | Removes your assignment to an issue. | Issue Assignment
@rustbot assign @octocat | Assigns an issue to a specific user. | Issue Assignment
@rustbot ready | Indicates a PR is ready for review. | Shortcuts
@rustbot author | Indicates a PR is waiting on the author. | Shortcuts
@rustbot blocked | Indicates a PR is blocked on something. | Shortcuts
@rustbot label A-diagnostics A-macros | Adds two labels to an issue or PR. | Labeling
@rustbot label -P-high | Removes a label from an issue or PR. | Labeling
@rustbot ping windows | Posts a comment pinging the Windows ping group. | Pinging
@rustbot prioritize | Requests prioritization from the Prioritization WG. | Prioritization
r? @octocat | Assigns a PR to a user. | PR Assignment
r? libs | Assigns to a random person in the libs review group. | PR Assignment
r? rust-lang/cargo | Assigns a random person from the cargo team. | PR Assignment

The following are some common commands you may see on Zulip:

Command | Description | Docs
@triagebot read | Waits for people to read a document in a meeting. | Zulip Meeting Management
@triagebot end-topic | Checks if everyone is done discussing a topic in a meeting. | Zulip Meeting Management
@triagebot end-meeting | Checks if everyone is ready to finish a meeting. | Zulip Meeting Management

Implementation

The source code for triagebot can be found at https://github.com/rust-lang/triagebot. If you are interested in extending triagebot, the documentation there should provide some guidance on how to get started.

Agenda Generator

The lang team uses the agenda generator to assist with meeting agendas.

Usage

The agenda generator can be viewed at https://triage.rust-lang.org/agenda.

Configuration

This feature has no configuration.

Implementation

See src/agenda.rs.

Issue Assignment

The issue assignment commands allow any user to assign themselves to a GitHub issue.

Usage

Issue assignment is done by entering one of these commands in a GitHub comment:

  • @rustbot claim — Assigns the issue to yourself.
  • @rustbot release-assignment — Removes the current assignee. Only the current assignee or a team member can release an assignment.
  • @rustbot assign @user — Assigns a specific user. Only team members can assign other users.

Due to GitHub restrictions, not all users can be directly assigned to an issue. Only users with write permission to the repo, or rust-lang organization members can be directly assigned. If triagebot is unable to directly assign the user, it will instead assign @rustbot and edit the top-level comment with a message that the issue has been claimed.

Configuration

Issue assignment is enabled on a repository by the existence of the [assign] table in triagebot.toml:

[assign]

Implementation

See parser/src/command/assign.rs and src/handlers/assign.rs.

PR Assignment

Triagebot handles automatic and manual assignment of GitHub PRs. It also handles welcoming new users when they post a PR.

Rust contributors can track and manage their own work queue using the Zulipchat integration. See Tracking PR assignment.

Usage

Automatic assignment of new PRs is handled by the configuration in the triagebot.toml, described below.

Manual assignment can be done by posting a comment on the PR with the text:

  • r? @octocat — Assigns a specific user.
  • r? octocat — The @ is optional.
  • r? libs — Chooses a random person from the libs ad-hoc group defined in triagebot.toml. For example, for the rust-lang/rust repository, see triagebot.toml for a list of ad-hoc group names.
  • r? rust-lang/libs — The rust-lang/ org name prefix is optional.
  • r? rustdoc — Chooses a random person from the rustdoc team. See the teams database for a list of team names.
  • r? rust-lang/rustdoc — The org name prefix is optional. It is strongly recommended that you do not use @, as that will subscribe and notify the entire team to the PR.

When choosing a user from a team, triagebot only looks at direct team members (it ignores subteams).

When looking up a name, triagebot will first look at ad-hoc groups, then rust-lang teams, and if it doesn’t match either of those it assumes it is a GitHub user.

PRs can only be assigned to users with write permissions to the repo, any rust-lang org members with read permissions, or anyone who has commented on the PR.

Ghost

Using r? ghost in the initial PR top-level comment when opening a PR will disable triagebot’s auto-assignment. ghost is GitHub’s placeholder account for deleted accounts. It is used here for convenience. This is typically used for rollups or experiments where you don’t want any assignments or noise.

Configuration

PR assignment is enabled on the repository by having an [assign.owners] table in triagebot.toml:

# These are ad-hoc groups that can be referenced in `r?` and the `owners` table below.
# The values may contain GitHub usernames, other groups, or rust-lang teams.
# The `@` is optional.
# Group names should be lowercase.
[assign.adhoc_groups]
libs = ["@joshtriplett", "@Mark-Simulacrum", "@kenntytm", "@m-ou-se", "@thomcc"]
# Can reference other groups.
compiler = ["compiler-team", "compiler-team-contributors"]
compiler-team = ["cjgillot", "estebank"]
compiler-team-contributors = ["compiler-errors", "jackh726"]
# Groups can also reference rust-lang teams.
libs-api = ["rust-lang/libs-api"]
# This is a special group that will be used if none of the `owners` entries matches.
fallback = ["@Mark-Simulacrum"]

# This specifies users, groups, or teams to assign for different paths.
# Triagebot will pick one person to assign.
# Paths are gitignore-style matches.
[assign.owners]
# Examples of assigning individuals.
"Cargo.lock" = ["@Mark-Simulacrum"]
"/library/std/src/sys/windows" = ["@ChrisDenton"]
# Example of assigning to a group.
"/library/std" = ["libs"]
# Supports gitignore patterns.
"*.js" = ["@octocat"]
# If you want to match all files, `*` should be sufficient.
"*" = ["@octocat"]
# Can use teams from the rust-lang teams database.
"/src/tools/cargo" = ["@rust-lang/cargo"]

If the owners map is configured, then triagebot will automatically select a reviewer based on which files were modified in the PR. The existence of the owners table also enables the ability for users to post a comment with r? name to set the assignment to a specific user.

Vacation

If a reviewer wants to temporarily prevent themselves from being assigned (automatically or manually) they can add themselves to the special assign.users_on_vacation group.

[assign]
users_on_vacation = ["jyn514", "ChrisDenton"]

Additional new PR trigger options

Triagebot will also post a welcome message to the user. Its behavior depends on a few factors:

  • PR authors who have not previously made any commits will get a more detailed welcome message.
  • PR authors who have made commits will get an abbreviated message.
  • If the initial PR comment has an r? command, then no welcome will be posted.

There are several options in triagebot.toml for controlling its behavior on new PRs:

[assign]
# If set, posts a warning message if the PR is opened against a non-default
# branch (usually main or master).
warn_non_default_branch = true
# If set, the welcome message to new contributors will include this link to
# a contributing guide.
contributing_url = "https://rustc-dev-guide.rust-lang.org/contributing.html"

Additionally, triagebot will post a comment with a warning if the PR modifies any submodules.

Exceptions to default branch warning

Some PRs legitimately target a branch other than the repository’s default one (for example, beta backports). In these cases it is possible to add exceptions based on the PR title: a PR whose title matches an exception will only trigger the warning if it targets a branch other than the one specified in that exception.

[assign]
warn_non_default_branch.enable = true

[[assign.warn_non_default_branch.exceptions]]
title = "[beta" # title contains "[beta" in it
branch = "beta"

Implementation

See parser/src/command/assign.rs and src/handlers/assign.rs.

Tracking PR assignment

If you contribute in some capacity to the Rust compiler development, you might also be assigned pull requests to be reviewed.

You can check your current review assignment in two ways:

  • by visiting this GitHub URL
  • by interacting with the triagebot on the Zulip chat in a DM (Direct Message) thread. You can open a direct message session with the triagebot by clicking on this link (requires Zulip login).

This chapter will describe how to interact with the triagebot on Zulip.

Configuration

Tracking PR assignment is enabled on a repository by having a [pr-tracking] table in triagebot.toml. No additional configuration is needed.

Usage

Open a Direct Message session with the triagebot and send a message with one of these commands:

  • work — Will emit an error and show the available commands
  • work show — Will show your GitHub username and a list of pull requests assigned to you for review (on the rust-lang/rust repository)

Implementation

See src/handlers/pr_tracking.rs.

Autolabels

Auto labels will automatically apply labels to GitHub issues and PRs based on the [autolabel] configuration in triagebot.toml.

Usage

Auto labels have no manual control. See labeling for manually changing labels.

Configuration

Triggered by labels

Labels can be added when another label is added. The trigger_labels config option specifies which labels will cause this to trigger.

# Automatically applies the `I-prioritize` label whenever one of the labels
# listed below is added to an issue (unless the issue already has one of the
# labels listed in `exclude_labels`).
[autolabel."I-prioritize"]
trigger_labels = [
    "regression-untriaged",
    "regression-from-stable-to-stable",
    "regression-from-stable-to-beta",
    "regression-from-stable-to-nightly",
    "I-unsound",
]
exclude_labels = [
    "P-*",
    "T-infra",
    "T-release",
    "requires-nightly",
]

Exclude labels support shell-like * glob patterns.

Triggered by files

Labels can be added based on which files are modified in a PR. The trigger_files config option specifies which files will cause the label to be added. Paths are matched with starts_with.

# Adds the `T-compiler` label to any PR that touches `compiler` or
# `tests/ui` unless it already has a `T-*` label.
[autolabel."T-compiler"]
trigger_files = [
    "compiler",
    "tests/ui",
]
exclude_labels = [
    "T-*",
]

Triggered by new PRs

Labels can be added to any PR when it is opened. Set the new_pr = true config option to enable this. For example:

[autolabel."S-waiting-on-review"]
new_pr = true

Triggered by new issues

Labels can be added to any issue when it is opened. Set the new_issue = true config option to enable this. For example:

[autolabel."new-issue"]
new_issue = true

Implementation

See src/handlers/autolabel.rs.

Close

The close command can be used to close a GitHub issue or pull request.

Usage

To close an issue or pull request, any rust-lang team member may enter the command:

@rustbot close

This will immediately close the issue or PR.

Configuration

This feature is enabled on a repository by having a [close] table in triagebot.toml:

[close]

Implementation

See src/handlers/close.rs and parser/src/command/close.rs.

Documentation Updates

Triagebot automatically generates a PR to rust-lang/rust every two weeks that updates all of the book submodules. This PR requires manual approval. These updates are currently managed by @ehuss.

Usage

There are no settings or manual controls for this feature.

Implementation

See src/handlers/docs_update.rs.

GitHub Releases

Triagebot can be used to automatically create releases on GitHub when a tag is pushed, using the relevant section of the changelog as the release body. No artifacts are uploaded when doing this.

Usage

Any time you push a git tag, or update the contents of the changelog, triagebot will synchronize all tags with the releases. That is, any tag that doesn’t have a release will create a new release. Additionally, the text of all the releases will be synchronized with the text in the changelog.

Tags that don’t have entries in the changelog will not create a release.

Configuration

To enable automatically creating GitHub Releases, add this to the triagebot.toml at the root of your repository:

[github-releases]
format = "rustc"
project-name = "Rust"
changelog-path = "RELEASES.md"
changelog-branch = "master"

The format defines which format the changelog file adheres to, and it’s used to properly extract the relevant section from it. You can add another format by changing triagebot’s src/changelogs/. The currently supported formats are:

  • rustc: follows the custom style of rustc’s RELEASES.md.

The project-name defines what the title of the release should be. The final title will be {project-name} {tag}.

The changelog-path and changelog-branch keys define where triagebot should look when searching for the changelog.

Implementation

See src/handlers/github_releases.rs and src/changelogs/.

Glacier

Triagebot can be used to automatically generate PRs on https://github.com/rust-lang/glacier/ that contain code snippets that cause an ICE (Internal Compiler Error).

Usage

Enter the code you want to post on the Rust Playground. Click the “Share” button and then copy the link for “Direct link to the gist”. Then post a comment on a GitHub issue with that link as:

@rustbot glacier "https://gist.github.com/rust-play/3d9134282f880c93bfe65e7db6b0680f"

Note that the link must be in double quotes.

Configuration

This feature is enabled on a repository by having a [glacier] table in triagebot.toml:

[glacier]

Implementation

See parser/src/command/glacier.rs and src/handlers/glacier.rs.

Issue Transfer

The transfer command allows you to transfer a GitHub issue from one repository to another.

Usage

To transfer an issue to another repository, enter a comment with the form:

@rustbot transfer <repository-name>

It is recommended to also include a comment explaining why you are transferring. For example:

Transferring to rust-lang/cargo since this is an issue with how cargo
implements diagnostic reports.

@rustbot transfer cargo

IMPORTANT: There will be no visual indication that the issue is being transferred. Due to GitHub API limitations, you will not see any activity. You must reload the page to view the issue in its new location. It may take a few moments for GitHub to transfer all the data.

WARNING: Transferring is a partially destructive command. For example, labels and milestones that don’t exist in the target repository will be removed from the issue.

The transfer command is limited to team members of the rust-lang org, and transfers can only happen to repositories in the rust-lang org. Also, the destination repository must have triagebot enabled on it.

Configuration

The source repository must have an empty transfer table to enable transfers from that repository. Issues can be transferred to any repository in the rust-lang org (that has triagebot enabled).

[transfer]

Implementation

See parser/src/command/transfer.rs and src/handlers/transfer.rs.

Labeling

You can apply GitHub labels to an issue or PR by posting a comment. Labeling of issues can be very helpful for searching, tying issues together, and indicating information in a formal way, such as the status.

The Triage WG helps with labeling issues. If you are interested in helping triage issues, see the Triage WG procedure.

Usage

The general form of the comment should be @rustbot label followed by a space-separated list of labels to add or remove. You can remove labels by prefixing them with the - character. Some examples:

  • @rustbot label A-diagnostics A-macros
  • @rustbot label +T-lang -T-compiler — Removes T-compiler and adds T-lang.

The syntax for the command is somewhat flexible, supporting a few different forms to suit your pleasure. Some examples of variants you can use:

  • @rustbot label: +T-lang, -T-compiler
  • @rustbot label: +T-lang and -T-compiler
  • @rustbot modify labels to +T-lang and -T-compiler
  • @rustbot modify labels: +T-lang and -T-compiler
  • @rustbot modify labels to +T-lang -T-compiler

The command can be terminated with a ., ;, or the end of the line.

Formally the grammar is:

Command → @rustbot modify? label-word to? :? label-list (; | .)?

label-word →
      label
   | labels

label-list →
      label-delta
   | label-delta and label-list
   | label-delta , label-list
   | label-delta , and label-list

label-delta →
      + label
   | - label
   | label

label → [^.,:!?;\n() ]+

Permissions

All labels can be assigned by rust-lang organization team members (and wg-triage, wg-prioritization, and wg-async). Users not on a team can only assign labels that are explicitly authorized in triagebot.toml. It is encouraged for maintainers to allow the majority of labels to be applied by anyone. An example of one that would be restricted is beta-accepted, since accepting a backport to beta is usually only done by a team member.

Configuration

Labeling support is enabled on a repo by having a [relabel] table in triagebot.toml:

[relabel]

Allowing unauthenticated users to set specific labels is done by listing those labels in the allow-unauthenticated list:

[relabel]
# any label is allowed to be set by team members (anyone on a team in rust-lang/team)
# but these can be set by anyone in the world
allow-unauthenticated = [
    "C-*", # any C- prefixed label will be allowed for anyone, independent of authorization with rust-lang/team
    "!C-bug", # but not C-bug (order does not matter)
]

Implementation

See parser/src/command/relabel.rs and src/handlers/relabel.rs.

Major Changes

Triagebot helps with automated processing of Major Change Proposals.

Usage

The process starts when the appropriate label is set on an issue. For example, the rust-lang/compiler-team repo has a major change template which will automatically set the major-change label. Triagebot will detect this and create a new Zulip topic for hosting discussion, and post a comment to the issue with a link to the Zulip stream.

If a team member writes a comment on the GitHub issue with @rustbot second (or @rustbot seconded), then triagebot will set the appropriate label, and post a comment to Zulip.

If a team member adds the major-change-accepted label, then triagebot will post a comment to Zulip to let people know that it has been accepted.

Configuration

This feature is enabled by the [major-change] table in triagebot.toml:

[major-change]
# Issues that have this label will start the MCP process.
# Defaults to "major-change".
enabling_label = "major-change"

# Label to apply once an MCP is seconded.
second_label = "final-comment-period"

# Label to apply when an MCP is created.
# Typically this is used to track what needs to be discussed at a meeting.
meeting_label = "to-announce"

# When this label is added to an issue, that triggers acceptance of the proposal
# which sends an update to Zulip.
# Defaults to "major-change-accepted".
accept_label = "major-change-accepted"

# Optional extra text that is included in the GitHub comment when the issue is opened.
open_extra_text = "cc @rust-lang/compiler @rust-lang/compiler-contributors"

# The Zulip stream to automatically create topics about MCPs in
# Can be found by looking for the first number in URLs, e.g.
# https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler
zulip_stream = 233931

# A Zulip group or username to tag in the Zulip message when a
# proposal has been seconded.
zulip_ping = "T-compiler"

Implementation

See src/handlers/major_change.rs.

Mentions

Triagebot can leave a comment on PRs that touch certain files. This can be useful to alert people who want to review any change to those files, or to provide an informational message to the author.

Usage

Mentions are triggered automatically when a PR is opened (or new changes are pushed) based on the configuration in triagebot.toml of the repo.

Configuration

To enable mentions, add entries to the [mentions] table in triagebot.toml. Each key in the table should be a path in the repo. Triagebot will check for modifications to any file that starts with the given path. For example, library/std would match anything under the library/std directory like library/std/src/process.rs.

There are two optional values that can be specified in the table:

  • cc — A list of strings of users to ping. They should start with @ like @ehuss or @rust-lang/clippy. If this is not specified, nobody will be pinged.
  • message — This is the message that will be included in the comment. If this is not specified, the comment will say Some changes occurred in {path}.

Example:

[mentions."src/tools/cargo"]
cc = ["@ehuss"]

[mentions."src/rustdoc-json-types"]
message = """
rustdoc-json-types is a **public** (although nightly-only) API.
If possible, consider changing `src/librustdoc/json/conversions.rs`;
otherwise, make sure you bump the `FORMAT_VERSION` constant.
"""

Implementation

See parser/src/mentions.rs and src/handlers/mentions.rs.

Merge Conflicts

The merge-conflicts feature detects if a Pull Request has a merge conflict, and will post a comment asking the author to resolve the conflicts.

Usage

This is triggered automatically when a commit is made to a branch that causes existing, open PRs to have a merge conflict. The bot will post a comment to the PR that roughly looks like this:

☔ The latest upstream changes (possibly #152) made this pull request unmergeable. Please resolve the merge conflicts.

Note that it may take a minute or so for the comments to be posted.

Configuration

This feature is enabled on a repository by having a [merge-conflicts] table in triagebot.toml:

[merge-conflicts]

There are several optional keys that you can include:

  • remove — A list of labels to remove from the PR when a conflict is detected.
  • add — A list of labels to add to the PR when a conflict is detected.
  • unless — A list of labels that, if already present on the PR, will prevent triagebot from adding or removing labels.

Example

[merge-conflicts]
remove = ['S-waiting-on-bors']
add = ['S-waiting-on-author']
unless = ['S-blocked', 'S-waiting-on-crater', 'S-waiting-on-team', 'S-waiting-on-review']

Implementation

See src/handlers/merge_conflicts.rs.

No Merge Policy

The no-merge policy informs users if they have merge commits in their pull request. Some repositories prefer to only use a rebase-oriented workflow.

Usage

This is triggered automatically if a PR has merge commits. Triagebot will post a comment on the PR if it detects merge commits. The comment will explain the no-merge policy, and how the user can avoid merge commits.

Configuration

This feature is enabled on a repository by having a [no-merges] table in triagebot.toml:

[no-merges]

There are three optional values that can be specified in the table:

  • exclude_titles — A list of strings of title segments to exclude. PRs with titles containing these substrings will not be checked for merge commits. Case sensitive.

  • labels — A list of strings of label names to add. These labels will be set on the PR when merge commits are detected.

  • message — Override the default message posted for merge commits. The message will always be followed up with “The following commits are merge commits:” and then a list of the merge commits.

Default message

There are merge commits (commits with multiple parents) in your changes. We have a no merge policy so these commits will need to be removed for this pull request to be merged.

You can start a rebase with the following commands:

$ # rebase
$ git pull --rebase https://github.com/rust-lang/rust.git master
$ git push --force-with-lease

Example

[no-merges]
# PRs with titles containing the following substrings will be skipped
exclude_titles = ["rollup", "sync"]
# Add the following labels to PRs with merge commits
labels = ["has-merge-commits", "S-waiting-on-author"]
# Post the following warning message as a comment on PRs with merge commits
message = """
This repository does not allow merge commits.
Your PR cannot be merged until it is rebased.
"""

Implementation

See src/handlers/no_merges.rs.

Nominate

The nominate commands are used for nominating issues for backporting.

Usage

There are multiple commands that can be issued in a GitHub comment to handle nomination:

  • @rustbot beta-nominate <team> — Adds the beta-nominated and the given team’s label. This indicates that the issue is nominated for beta backport, and the team should decide whether to accept or reject it.
  • @rustbot nominate <team> — Adds the I-nominated and the given team’s label. This is used to nominate an issue for the team to discuss.
  • @rustbot beta-accept — Adds the beta-accepted label. This indicates that it has been approved for beta backport, and someone (usually the release team) will take care of applying the backport.
    • @rustbot beta-approve — An alias for beta-accept.

Only rust-lang team members may use the nominate commands.

Only teams that are listed in the configuration can be nominated.

If you need to nominate multiple teams, add each one in a separate command. This is to encourage descriptions of what to do targeted at each team, rather than a general summary.
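
For example, assuming the repository’s configuration defines a compiler team (as in the example below), nominating an issue for a beta backport decision could look like:

This fix is a small and self-contained revert, so it seems like a good
candidate for a beta backport.

@rustbot beta-nominate compiler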

Configuration

This feature is enabled on a repository by having a [nominate] table in triagebot.toml. The nominate.teams table lists the team names, and the associated labels that should be used for that team.

[nominate.teams]
compiler = "T-compiler"
release = "T-release"
core = "T-core"
infra = "T-infra"

Implementation

See src/handlers/nominate.rs and parser/src/command/nominate.rs.

Note

The note command can be used to update the top comment of a GitHub issue with a summary.

Usage

A summary note can be added to a GitHub issue by writing a comment with the command:

@rustbot note summary-title

The word after note is then added as a link to the top comment of the GitHub issue:

<!-- TRIAGEBOT_SUMMARY_START -->

### Summary Notes

- ["summary-title" by @username](link-to-comment)

Generated by triagebot, see [help](https://github.com/rust-lang/triagebot/wiki/Note) for how to add more
<!-- TRIAGEBOT_SUMMARY_END -->

with a link to the comment where you posted the note command.

The title word can be a sequence of characters matching the regular expression [^.,:!?;\n() ]+. Or it can be a quoted string like "this is a title".

Additional notes will get appended to the list:

<!-- TRIAGEBOT_SUMMARY_START -->

### Summary Notes

- ["first-note" by @username](link-to-comment)
- ["second-note" by @username](link-to-comment)
- ["summary-title" by @username](link-to-comment)

<!-- TRIAGEBOT_SUMMARY_END -->

This summary section should not be edited by hand.

Removing an existing summary

Notes can be removed by writing a comment with @rustbot note remove summary-title, where summary-title is the word used when the note was created. Triagebot will remove the entry from the summary list.

Configuration

This feature is enabled by having a [note] table in triagebot.toml:

[note]

Implementation

See parser/src/command/note.rs and src/handlers/note.rs.

Notifications

The notifications system helps a user keep track of GitHub notifications.

Usage

Each registered team member has a notifications page at:

https://triage.rust-lang.org/notifications?user=<github-username>

Whenever you are mentioned on GitHub with a direct mention (@user) or via a team mention (@rust-lang/libs) anywhere in the rust-lang organization, this will add an entry to the notifications list.

The notifications list can also be edited via Zulip by private-messaging triagebot. Any Rust organization member can edit their notifications page, or pages of other Rust organization team members. To do so, the editor must have a zulip-id listed in their people/username.toml file in the team repository. The bot will tell you which ID to use when talking to it for the first time; please r? @Mark-Simulacrum on PRs adding Zulip IDs.

The following commands are supported:

  • acknowledge <url> (or short form ack <url>)
  • acknowledge <idx> (or short form ack <idx>)

These both acknowledge (and remove) a notification from the list.

  • acknowledge all or acknowledge * (or short form ack all or ack *)

This acknowledges and removes all notifications.

  • add <url> <description... (multiple words)>

This adds a new notification to the list.

  • move <from> <to>

This moves the notification at index <from> to index <to>.

  • meta <idx> <metadata...>

This adds some text as a sub-bullet to the notification at index <idx>. If the metadata is empty, the text is removed.

  • as <github username> <command...>

This executes any of the above commands as if you were the other GitHub user.
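
For example, a DM session with triagebot might look like this (the PR link, indices, and descriptions are hypothetical):

add https://github.com/rust-lang/rust/pull/12345 follow up on review comments
move 3 1
meta 1 waiting on CI
ack 2
ack https://github.com/rust-lang/rust/pull/12345

Each line is a separate message sent to the bot.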

Configuration

There is no configuration for this feature.

Implementation

See src/handlers/notification.rs, src/notification_listing.rs, and src/db/notifications.rs.

Pinging

Triagebot can be used to “ping” teams of people that do not have corresponding GitHub teams. This is useful because sometimes we want to keep groups of people that we can notify but we don’t want to add all the members in those groups to the GitHub org, as that would imply that they are members of the Rust team (for example, GitHub would decorate their names with “member” and so forth). The compiler team uses this feature to reach the notification groups.

When a team is pinged, we will both post a message to the issue and add a label. The message will include a cc line that @-mentions all members of the team.

Usage

On repositories with a ping group configured, any Rust team member (and wg-triage, wg-prioritization, and wg-async) can write a GitHub comment such as:

@rustbot ping windows

which would cause triagebot to post a comment notifying the members of the windows ping group.

Teams that can be pinged

To be pinged, teams have to be created in the Rust team repository. Frequently those teams will be marked as marker-team, meaning that they do not appear on the website. The Icebreakers LLVM team is an example.

Additionally, the team needs to be configured in the repository’s triagebot.toml file.

Configuration

To enable the team (e.g. TeamName) to be pinged, you have to add a section to the triagebot.toml file at the root of a repository, like so:

[ping.TeamName]
message = """\
Put your message here. It will be added as a Github comment,
so it can include Markdown and other markup.
"""
label = "help wanted"

This configuration would post the given message and also add the label help wanted to the issue.

You can also define aliases that add additional labels referring to the same target team. Aliases can be useful for adding mnemonic labels or accommodating slight misspellings (such as "llvms" instead of "llvm"); see the following example:

[ping.cleanup-crew]
alias = ["cleanup", "cleanups", "shrink", "reduce", "bisect"]
message = """\
message content...
"""

This will allow the command @rustbot ping cleanup-crew to be understood in all the aliased variants, e.g.:

@rustbot ping cleanup
@rustbot ping shrink
...

Check out the rust-lang/rust configuration for up-to-date examples.

Implementation

See parser/src/command/ping.rs and src/handlers/ping.rs.

Requesting Prioritization

Users can request an issue to be prioritized by the Prioritization WG.

Usage

On repositories configured for prioritization, any user can post a comment with:

@rustbot prioritize

which will add the I-prioritize label to the issue to notify the Prioritization WG that the issue needs prioritization.

Configuration

This feature is enabled on a repository by the [prioritize] table in triagebot.toml:

[prioritize]
# Name of the label used for requesting prioritization on issues
label = "I-prioritize"

Implementation

See parser/src/command/prioritize.rs and src/handlers/prioritize.rs.

Review Changes Requested

This feature will automatically adjust the labels on a pull request when a reviewer sends a review with changes requested.

Usage

When creating a pull request review, click the “Request Changes” option when finishing the review. This will automatically remove the review labels, and add a new label to indicate that the PR is waiting on the author.

Configuration

This feature is enabled on a repository by having a [review-submitted] table in triagebot.toml:

[review-submitted]
# These labels are removed when a review is submitted.
review_labels = ["S-waiting-on-review"]
# This label is added when a review is submitted.
reviewed_label = "S-waiting-on-author"

Implementation

See src/handlers/review_submitted.rs.

Review Requested

This feature will automatically adjust the labels on a pull request when the PR author requests a review from an assignee.

Usage

In the list of reviewers, click the “Re-request review” button near an assignee’s name. This will automatically remove the “waiting on the author” labels, and add new labels to indicate that the PR is waiting on the review.

Configuration

This feature is enabled on a repository by having a [review-requested] table in triagebot.toml:

[review-requested]
# These labels are removed when the PR author requests a review from an assignee.
remove_labels = ["S-waiting-on-author"]
# These labels are added when the PR author requests a review from an assignee.
add_labels = ["S-waiting-on-review"]

Implementation

See src/handlers/review_requested.rs.

Rustc Commit Tracking

Triagebot keeps a database of commits to the rust-lang/rust repository. This is useful since the GitHub API for fetching this information can be slow. For example, this is used by the rustc-perf system.

Usage

The top-level bors merge commits can be fetched from https://triage.rust-lang.org/bors-commit-list.

Configuration

This feature has no configuration; commits are processed automatically.

Implementation

See src/db/rustc_commits.rs and src/handlers/rustc_commits.rs.

Shortcuts

Shortcuts are simple commands for performing common tasks.

Usage

Shortcut commands can be issued by writing a GitHub comment as indicated below.

ready

@rustbot ready

This indicates that a PR is ready for review. This assigns the S-waiting-on-review label on the pull request and removes both S-waiting-on-author and S-blocked if present.

@rustbot review or @rustbot reviewer are aliases for ready.

author

@rustbot author

This indicates that a PR is waiting on the author. This assigns the S-waiting-on-author label on the pull request and removes both S-waiting-on-review and S-blocked if present.

blocked

@rustbot blocked

This indicates that a PR is blocked on something. This assigns the S-blocked label on the pull request and removes both S-waiting-on-author and S-waiting-on-review if present.

Configuration

This feature is enabled on a repository by having a [shortcut] table in triagebot.toml:

[shortcut]

Implementation

See parser/src/command/shortcut.rs and src/handlers/shortcut.rs.

Triagebot Dashboard

The triage dashboard is used to assist with triaging open pull requests.

Usage

The triage dashboard for repositories can be found at https://triage.rust-lang.org/triage.

Any rust-lang repository can be viewed with the form https://triage.rust-lang.org/triage/<owner>/<repo>.
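
For example, substituting the rust-lang/rust repository into that pattern gives:

https://triage.rust-lang.org/triage/rust-lang/rust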

Configuration

This feature has no configuration.

Implementation

See src/triage.rs.

Zulip Meeting Management

Triagebot can respond to some commands in Zulip to assist with running a meeting.

Usage

Enter a message in Zulip addressed to @triagebot with a command listed below.

Document reading

@triagebot read

This command will cause triagebot to post a comment to poll when everyone is finished reading some document and ready to start discussing it. The message looks something like:

Click on the :book: when you start reading (and leave it clicked).
Click on the :checkered_flag: when you finish reading.

Users can then click the emoji reaction buttons to indicate that they are currently reading, and then again when they are finished.

End topic

@triagebot end-topic

This command will cause triagebot to post a comment to poll if everyone in the meeting is ready to move on to the next topic. The message looks something like:

Does anyone have something to add on the current topic?
React with :working_on_it: if you have something to say.
React with :all_good: if not.

Users can then click the emoji reaction buttons to indicate if they are ready or not.

@triagebot await is an alias for end-topic.

End meeting

@triagebot end-meeting

This command will cause triagebot to post a comment to poll if everyone is ready to end the meeting. The message looks something like:

Does anyone have something to bring up?
React with :working_on_it: if you have something to say.
React with :all_good: if you're ready to end the meeting.

Users can then click the emoji reaction buttons to indicate if they are ready to end or not.

Configuration

This feature has no configuration; it is available to all team members. Note that your Zulip ID needs to be configured in the teams database.

Implementation

See src/zulip.rs.

Zulip Notifications

Triagebot can send messages to Zulip based on various triggers like issue labels.

Usage

Zulip notifications are automated based on the configuration described below. They can be triggered based on the addition or removal of labels, or when an issue is closed or reopened.

For example, the rust-lang/rust repository is configured to automatically post a message whenever an issue is tagged with the A-edition-2021 label to the “Edition 2021” stream, which looks something like:

triagebot

Issue #109298 “ICE Subslice unexpected because it isn't captured --edition=2021” has been added.

Configuration

This feature is enabled on a repository by having a [notify-zulip] table in triagebot.toml:

# Triggers a Zulip notification based on the given label name.
[notify-zulip."label-name"]
# The Zulip stream to post to.
# Can be found by looking for the first number in URLs, e.g. https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler
zulip_stream = 245100 # #t-compiler/wg-prioritization/alerts

# The Zulip topic to post to.
# {number} is replaced with the issue/PR number.
# {title} is replaced with the issue/PR title.
topic = "#{number} {title}"

# The message to post when the label is added.
# Supports {number} and {title} substitution.
message_on_add = "Issue #{number} \"{title}\" has been added."

# The message to post when the label is removed.
# Supports {number} and {title} substitution.
message_on_remove = "Issue #{number}'s nomination has been removed. Thanks all for participating!"

# The message to post when the issue/PR is closed and it has the label.
# Supports {number} and {title} substitution.
message_on_close = "Issue #{number} has been closed. Thanks for participating!"

# The message to post when the issue/PR is reopened and it has the label.
# Supports {number} and {title} substitution.
message_on_reopen = "Issue #{number} has been reopened. Pinging @*T-types*."

# The Zulip notification will not be posted unless the issue/PR has all of these labels.
# Please replace the `{team}` placeholder with the appropriate team to be notified for the nomination
# (ex. `I-compiler-nominated`, `I-lang-nominated`, ...)
required_labels = ["I-{team}-nominated"]

Implementation

See src/handlers/notify_zulip.rs.

GitHub Actions created PR open/closer

This automation triggers an automatic close & reopen on PRs opened by the github-actions user, i.e., from a GitHub Actions job. This enables CI to run on those PRs without needing a manual poke from a human.

Configuration

This feature is enabled on a repository by having a [bot-pull-requests] table in triagebot.toml:

[bot-pull-requests]

Implementation

See src/handlers/bot_pull_requests.rs.

Community

This section documents the processes of the community team and related projects.

  • The Community team GitHub repository contains information about how the community team organizes.
  • The RustBridge website contains information on hosting your own local RustBridge event.
  • Rustlings is a project with small exercises designed to get newcomers used to reading and writing Rust.

State of Rust Survey FAQ

In this FAQ we try to answer common questions about the Annual State of the Rust Language Community Survey. If you think a question is missing or have a concern about this document, please do not hesitate to contact the Rust Community Team or open an issue with the Community Team.

Why is this survey important for the Rust project?

Rust is an Open Source project. As such, we want to hear both from people inside and outside our ecosystem about the language, how it is perceived, and how we can make the language more accessible and our community more welcoming. This feedback gives our community the opportunity to participate in shaping the future of the project. We want to focus on the requirements of the language’s current and potential users to offer a compelling tool for them to solve real-world problems in a safe, efficient, and modern way.

What are the goals of the survey?

  • To understand the community’s main development priorities and needs
  • To categorize the population of users of the language
  • To focus our efforts on events and conferences to drive more impact
  • To identify potential new contributors to the community goals

How much time will it take to answer the survey?

On average, it should take 10 to 15 minutes.

What kind of questions are included in the survey?

It includes some basic questions about how respondents use Rust, their opinion of the ecosystem’s tools and libraries, some basic questions regarding the respondents’ employer or organization and their intention to use Rust, technical background and demographic questions, and some feedback related to the Rust project’s community activities and general priorities.

How will we use the data from the survey responses?

The answers from the survey will be anonymized, aggregated, and summarized. A high level writeup will be posted to https://blog.rust-lang.org.

How is personally identifiable information handled?

Nearly every question in the survey is optional. You are welcome to share as much or as little information as you are comfortable with. The Rust Foundation will have access to the raw datasets from the survey. All answers are anonymized prior to being shared with the rest of the teams and prior to the publication of the results.

Why is the survey collecting contact information?

The survey optionally collects contact information if you expressed interest in any of the following:

  • future conferences or meetups in your area
  • helping to organize a Rust event, meetup, or conference
  • talking to a Rust team about using Rust inside your company
  • Rust training
  • a Rust team contacting you about your survey responses

If you would like to be contacted about any of this, or any other concerns, but you don’t want to associate your email with your survey responses, you can instead email the Rust Community Team at community-team@rust-lang.org, and we will connect you to the right people.

Where and when is the survey results report published?

We expect to publish results from the survey within a month or two of the survey’s completion. The survey results will be posted to the project’s blog.

Where can I see the previous survey reports?

Compiler

This section documents the Rust compiler itself, its APIs, and how to contribute and provide bug fixes for the compiler.

  • The Rustc Dev Guide documents how the compiler works, as well as providing information to help new contributors get involved in development.
  • Rustc’s internal documentation.
  • The Compiler team website is the home for all of the compiler team’s planning.
  • oli-obk’s FIXME page lists all of the FIXME comments in the Rust compiler.

Cross Compilation

This subsection documents cross compiling your code from one platform to another.

Windows

  1. Acquire LLD somehow. Either your distro provides it or you have to build it from source.
  2. You’ll need an lld-link wrapper, which is just lld using the link flavor so it accepts the same flags as link.exe. You may either have a binary called lld-link, or you may have to write some sort of script to wrap lld.
  3. If you want to be able to cross compile C/C++ as well, you will need to obtain clang-cl, which is clang pretending to be cl.
  4. You’ll need libraries from an existing MSVC installation on Windows to link your Rust code against. You’ll need the VC++ libraries from either VS 2015 or VS 2017, and the system libraries from either the Windows 8.1 or Windows 10 SDK. Here are some approximate paths which may vary depending on the exact version you have installed. Copy them over to your non-Windows machine.
    • VS 2015: C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\lib
    • VS 2017: C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.10.24728\lib
    • Windows 10 SDK: C:\Program Files (x86)\Windows Kits\10\Lib\10.0.14393.0
    • Windows 8.1 SDK: C:\Program Files (x86)\Windows Kits\8.1\Lib\winv6.3
  5. If you want to cross compile C/C++ you’ll also need headers. Replace lib in the above paths with include to get the appropriate headers.
  6. Set your LIB and INCLUDE environment variables to semicolon separated lists of all the relevant directories for the correct architecture.
  7. In your .cargo/config add [target.x86_64-pc-windows-msvc] linker = "lld-link", or whatever your lld-pretending-to-be-link.exe binary is called; see the sketch after this list.
  8. For cross compiling C/C++, you’ll need to get the gcc crate working correctly. I never tested it for cross compiling, so I have no idea whether it will even do anything sane.
  9. Install the appropriate target using rustup and pass --target=x86_64-pc-windows-msvc while building. Hopefully it works. If it doesn’t, well… I don’t know.
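
To make steps 6 and 7 a bit more concrete, here is a minimal sketch of the .cargo/config entry, assuming the wrapper from step 2 is a binary called lld-link (the name is an assumption; use whatever your wrapper is actually called):

# .cargo/config: sketch for step 7; `linker` must name your lld-link wrapper.
[target.x86_64-pc-windows-msvc]
linker = "lld-link"

# Before invoking cargo, also export LIB and INCLUDE (step 6) as
# semicolon-separated lists of the directories copied in steps 4 and 5.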

Cross-team Collaboration

If you are a member of another team and would like to raise an issue with the compiler team..

..for discussion

Write a comment on a GitHub issue describing the reason for the nomination (i.e. what decision needs to be made or what opinion is sought, which parts are relevant to the compiler team, etc.) and add the I-compiler-nominated label to the issue (you can include @rustbot label +I-compiler-nominated in your comment to do this).

Once nominated, the issue will be discussed in an upcoming triage meeting. The compiler team doesn’t always get through all nominated issues each week, so it can take more than one meeting for your issue to be discussed.

Once discussed, a member of the team will comment on the issue with the conclusion of the discussion and a link to the relevant Zulip chat.

..to be fixed

If there is an existing working relationship between a member of the requesting team and a contributor to the compiler, then the first option that a team has for requesting tasks be completed is to ping that contributor and ask if they can complete the task. It is recommended that pings take place in public Zulip channels so that..

  • ..other contributors that have free time have the opportunity to offer their help.
  • ..other compiler team members/leadership can ensure that requests being made are reasonable (see the rest of this section for the types of issues that the compiler team commits to prioritizing on behalf of other teams).

It is worth considering the available bandwidth of the contributor that the request is being made of, and whether their areas of expertise in the compiler are relevant.

When there is no appropriate contact in the compiler team to reach out to directly, write a comment on a GitHub issue (or create an issue) describing the task that needs to be completed. Teams should nominate issues for the compiler team when the issues..

  • ..are not already tracked by/part of an existing initiative or working group and..
  • ..are blocking/impeding the work of the other team (e.g. a feature or bug preventing the stabilization of something otherwise complete), but..
  • ..aren’t absolutely mission-critical - a soundness bug or otherwise critical issue will be prioritized by the prioritization working group and addressed through the compiler team’s other processes for these bugs. If the issue lacks a prioritization label, you can add the I-prioritize label and it will be enqueued for prioritization.

A detailed description of the feature being requested or the bug to be fixed is helpful wherever possible (so that the compiler contributor does not need to make a guess as to a solution that would solve the problem for the requesting team). If a member of the requesting team isn’t explicitly listed as the point-of-contact for the issue, then the author of the comment will be assumed to be the point-of-contact.

Add the I-compiler-nominated label to the issue (you can use @rustbot label +I-compiler-nominated to do this).

Once nominated, the issue will be discussed in an upcoming triage meeting. The compiler team doesn’t always get through all nominated issues each week, so it can take more than one meeting for your issue to be discussed. In the compiler team’s discussion, the issue may..

  • ..be accepted, in which case it will be assigned to a contributor and the nomination label removed. Once assigned, a member of the team will work on the issue. If no work is completed after a reasonable time, then re-nominate the issue and the compiler team will find someone else to complete the work.
  • ..or not accepted (e.g. due to insufficient bandwidth, other critical/high-priority bugs, being unable to find an appropriate contributor, or the issue lacking feasibility). In this case, the compiler team will reply to the nomination with an explanation and will remove the nomination label.

Review Policy

It is the purpose of code reviews to:

  • Reduce the risk of introducing bugs and usability and performance regressions.
  • Keep our code maintainable: readable, documented and well-tested.
  • Ensure that changes are made with the big picture and appropriate context in mind. This is particularly relevant for changes that seem harmless in isolation but are problematic or undesirable in the larger context.

Reviewing accomplishes this by bringing in another set of eyes to look at a proposed change from a different perspective, which increases the chance of catching mistakes early and uncovering potential blind spots in the reasoning behind the change.

Basic Reviewing Requirements

There are a number of requirements that need to be met in order for reviewing to be effective:

  • Reviewers must have a sufficient understanding of the code under review.
    • This is important to help spot non-obvious, unintentional side effects of a given change.
  • Pull request authors must provide:
    1. A concise high-level description of the change and the rationale behind it, unless the change is extremely trivial, like a typo fix.
    2. For the rationale to be even more useful, authors are encouraged to list potential points of contention, compromises that needed to be made, alternative approaches that have been considered, relevant documentation, discussions and context, etc.
    • Reviewing code is difficult, and reviewers only have a limited amount of time to do it. Jump-starting the review process by not making the reviewer puzzle together the intention and context of a pull request will not only speed things up but also improve the quality of the review.
  • Reviewers must have a good idea on whether they are the right person to approve the change.
    • Knowledge of the code under review is an obvious but not the only criterion for answering this question.
    • Procedure wise, the reviewer also needs to decide:
      • Can the reviewer make the decision alone?
      • Does the PR need to go through the Major Change Process, or does it need a T-compiler Final Comment Period (FCP) sign-off, or does it need a full Request For Comments (RFC)?
      • Does the PR need reviews and/or sign-off from other teams, particularly T-lang?
      • Can the changes break stable code or begin accepting new code that we do not intend to accept? If the PR carries risks, are they sufficiently justified? Do the changes need ecosystem impact evaluation through crater runs?
      • Will the PR introduce significant perf changes? If there might be a perf regression, is it justified? Does the PR need a perf run?
      • Can the reviewer perform the review sufficiently thoroughly and in a timely fashion?
      • Is the reviewer impartial enough to provide a sufficiently unbiased perspective? E.g. due to co-authorship (sufficiently significant changes to the PR made by the reviewer) or other conflicts-of-interest?

Reviewing Checklist

The following list of questions will help reviewers and PR authors alike to bring PRs into good shape and meet the above criteria:

Checklist for PR authors and reviewers

  • Does the PR message have
    • A concise high-level description of the changes? (what is being changed)
    • A clear rationale for doing them? (why is it being changed)
    • If non-trivial and if suitable, how the bug is fixed or how the change is implemented?
    • A list of potential points of contention? Alternatives? Trade-offs? Risks?
    • Links to relevant issues, RFCs, MCPs, etc?
  • Does the PR need a regression test? Does the PR have sufficient test coverage?
  • Does the change need to be covered by a major change proposal? Is it already covered? If there is already an MCP open, was it already accepted, or is the PR blocked on that?
  • Does the PR need a perf run?
  • Does the PR need reviews and/or sign-offs from other teams? E.g. T-lang for lint expansions if the ecosystem impact is large or language changes.
  • Does the PR affect other teams in a non-trivial way? Should the affected teams get a heads-up? E.g. changes to rustfmt or rust-analyzer or subtrees.
  • Would someone trying to understand the PR in a year’s time be able to quickly reconstruct what’s going on?
  • Is the new code properly documented? Is existing documentation still up-to-date?
  • Do the changes in the PR need updates to the Reference or the Edition Guide?
  • Does the PR introduce a regression of any of the following:
    • Error message quality
    • Maintainability (e.g. complex code, no documentation, unsafe)
    • Any specific target platforms
    • Downstream tooling (e.g. linkers, debuggers)
    • Compile times
    • Memory usage
    • Targets (e.g. baselines, target features, calling conventions, etc.)

Checklist for reviewers

  • Am I the right person to review this PR:
    • Do the changes in this PR fall under the T-compiler purview?
    • Do I understand the code well enough?
    • Would I be able to spot non-obvious side effects?
    • Would I be able to fix a bug introduced by this PR?
    • Can I do the review in a timely fashion?
    • Do I feel pressure to quickly approve the PR for some reason?
    • Am I impartial enough?
  • Before merging:
    • Are the PR title and description still accurate?
    • Is the commit history clean enough? We do not need 16 “fix typo” commits in the PR history.
    • Does the PR correctly close the relevant issues (and avoid closing issues it shouldn’t)?
  • Do I need to roll reviewers from other relevant teams?

Guidance for dealing with common situations

In most cases common sense is enough for deciding how to apply this policy. However, sometimes there are gray areas where it is not immediately clear how to proceed. This section lists a few common cases together with guidance on how to deal with them.

I don’t think I am a good fit for reviewing - what now?

It is completely normal that you get (randomly) assigned a PR (via rustbot or otherwise) but don’t feel comfortable reviewing it. Here is what you can do, depending on the concrete case:

  • If the change seems really big or contentious, consider asking for an MCP (see below). The reviewer does not and should not be expected to just stomach a large and/or significant PR.
  • If you know just the right person for the review, assign them via r? @<github-name>. It’s polite to leave a comment asking them if they can take over – but you don’t have to make sure beforehand that they can actually do it.
  • If the change is not too complicated and you don’t expect that another randomly rolled compiler reviewer will also have trouble with the PR, you can reroll a random compiler reviewer with r? compiler.
  • If the change is complicated, or you expect randomly rolling another compiler reviewer will just lead to multiple rerolls, you should open a thread in #t-compiler/private to ask the rest of the team – for someone who might be able to review it, or even if the team is comfortable with accepting the change at all.
  • If the change is intended for another team, roll a reviewer from the relevant team.

You can also always ask for help on the #t-compiler Zulip stream for finding a reviewer. That being said, you are always welcome to do an initial review (to the extent you are comfortable with) and then pass the PR on to the final reviewer. This way the PR author will get helpful feedback sooner, subsequent reviewers will have less work to do, and you might also improve your own understanding of diverse areas in the compiler.

It is unclear if something constitutes a major change

Deciding if something is a “major change” requiring an MCP is not always straightforward. The official guidelines are here. When in doubt, err on the side of treating something as a major change. You can also nominate the PR for discussion in the compiler team’s triage meeting by tagging it I-compiler-nominated. If you nominate a PR, please make sure to state a concrete question for the compiler team to discuss and include useful context where suitable.

Example triage meeting nomination message

Nominating for T-compiler triage meeting to determine if we want to make
`-Z unstable-options` also accept value(s), e.g. `-Z unstable-options=values` to
guard *unstable values* of *stable flags* like `-C opt-level=4`.

Relevant discussion: <https://i.am.a.link>.
See also #123456.

@rustbot label +I-compiler-nominated

Discussion or rationale is too opaque

Sometimes there are PRs that seem to be the result of some prior discussion, with no description or rationale. They usually have a title like “Change X” and the only content of the PR message is “r? @xyz”. Even though the change might make sense and may even have been suggested by a compiler team member, this is not good form.

Contributors may stumble across the PR several years later during a bisection, only to find it has absolutely zero context, because the change was discussed offline or elsewhere and that information is not available to future contributors. This is not good for maintainability. Including relevant context will very often help the PR author themselves in the future!

The PR message should give a self-contained description of what is being changed, why it is being changed and anything else that might be of interest.

Try to put yourself in the shoes of someone who, a few years down the road, needs to fix a bug related to the code touched by the PR and needs to reconstruct the rationale for the way things are.

Reviewer and PR author report to the same entity / work for the same employer

There is no rule that prevents two employees of the same company from reviewing each other’s PRs. We assume compiler team reviewers to act in good faith, and vest trust in team members to do so.

The concerns in such a case are no different than for any other two reviewers. We expect the mechanisms and principles we articulated above to be respected by ALL reviewers, whatever their employer. Does the PR concisely describe the changes that are being made? Does it give a clear, transparent rationale for why the changes make sense so that contributors down the line can follow the reasoning and reconstruct what’s going on? Have points of contention been discussed and cleared up? Then you are in the clear.

If you are in doubt about whether something is contentious, give a heads-up to @rust-lang/compiler and ask for another opinion. If the proposed change is large and/or potentially has a big impact, you can discuss it in a #t-compiler Zulip topic and/or create a Major Change Proposal.

Reviewing and Mentoring

In the course of mentoring someone through a PR, it often happens that the reviewer ends up effectively co-writing the changes. This can be a tricky case because the reviewer is effectively approving their own changes. There are a number of considerations to take into account when deciding how to proceed:

  • If the general direction of the changes has already been approved as part of an MCP and the concrete advice given during mentoring was only concerned with resolving minor technical issues, then no further review is required.
  • Similarly, if any contentious decisions have visibly been discussed and resolved on the PR with other compiler team members and the rest of the changes don’t deviate from the general direction that has been agreed upon then no further review is required either.
  • If the PR was opened as a response to a concrete suggestion by the reviewer (and the changes are not entirely trivial) then it is advisable that the final review is done by someone else. However, the initial reviewer/mentor is encouraged to help bring the PR into good shape before handing it off.

In general, it is advisable to ask for a second opinion from someone knowledgeable in the field in such cases, just to increase the chance of uncovering oversights and blind spots a mentor might have.

Nobody understands the code that’s being changed

Sometimes there is a bug in some code that nobody understands anymore. The original authors are unavailable and it is hard to gauge the implications of a proposed fix. In such a case it is a good idea for reviewers to nominate the PR with the I-compiler-nominated label (if they intend to stay the main reviewer) or to assign a compiler team lead to the issue and add the S-waiting-on-team label.

In both cases, the PR will be brought up in the weekly triage meeting. It is especially valuable to gather and document as much information as possible about the issue, such as a description of the problem being fixed, points of unclarity, potential risks, alternatives that have been considered, et cetera. It is also a good idea to open a tracking issue documenting the lack of understanding of the area and the specific questions, concerns, and bugs; the tracking issue can be resolved once compiler team members regain a better understanding.

Reviewers should ask PR authors to add this kind of information as comments in the code and/or to the PR message (which will become part of the git commit history).

PR makes a change to support use of rustc internals for external projects

This will need to be determined on a case-by-case basis.

In general, we should allow changes that make things public, clean things up, or make them more general, as long as the owners of the relevant part of the compiler agree (so just assign the PR to them).

As a concrete example: if someone is using the MIR interpreter and wants to make something public, it is likely not a problem, but there are some functions that are module- or crate-private on purpose, as they uphold invariants within the MIR interpreter. So basically, just assign such PRs to the relevant people (usually they get pinged anyway due to having told highfive that they want to get pinged on changes to these parts).

Require a doc comment on such APIs identifying which external consumers the API concerns, and for what kinds of purpose.

If this is possibly contentious, ask for an MCP.

Note that this can non-obviously bind supposedly-internal compiler APIs to external consumers. Convey to external consumers (that are not rust-lang projects) that we can offer the convenience only so long as it does not impose a significant maintenance burden on the compiler (e.g. by getting in the way of refactorings), and that no hard stability guarantees are promised.

The PR is very large and complicated

Reviewers are not expected to stomach PRs that are very large and complicated. Bring the PR to the attention of the team (through zulip threads and/or nominate for compiler triage meeting), and the team can decide if:

  • The team can find suitable reviewers who can aid the PR author to break up the large change into smaller logical PRs that are possible to review on their own, but also in the context of the larger change.
  • The team does not have the bandwidth, or team members are not ready, willing, or able to accept the large change as-is. In such cases, the team should make a decision to postpone or close the PR, and clearly communicate the decision to the PR author together with the reasoning. It is very frustrating if a PR stalls for many months only to be rejected anyway.

Technical Aspects of Reviewing

Every PR that lands in the compiler and its associated crates must be reviewed by at least one person who is knowledgeable with the code in question.

When a PR is opened, you can request a reviewer by including r? @username in the PR description. If you don’t do so, rustbot will automatically assign someone from the pool of reviewer candidates, determined by the files affected.

It is common to leave a r? @username comment at some later point to request review from someone else. This will also reassign the PR.

It is possible to request reviews from multiple reviewers, for example

Rolling both a T-compiler and T-bootstrap reviewer as this PR contains both
compiler and bootstrap changes.

r? compiler
r? bootstrap

bors

We never merge PRs directly. Instead, we use bors. A qualified reviewer with bors privileges (e.g., a compiler team member) will leave a comment like @bors r+. This indicates that they approve the PR.

People with bors privileges may also leave a @bors r=username command. This indicates that the PR was already approved by @username. This is commonly done after rebasing.

Finally, in some cases, PRs can be “delegated” by writing @bors delegate+ or @bors delegate=username. This will allow the PR author or the delegated user to approve the PR by issuing @bors commands like the ones above (but this privilege is limited to the single PR).
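
For example, a reviewer who is happy with a change apart from minor follow-up edits might comment:

@bors delegate+

after which the PR author can approve the final version of that PR themselves with an ordinary @bors r+ comment.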

Reverts

If a merged PR is found to have caused a meaningful unanticipated regression, the best policy is to revert it quickly and re-land it later once a fix and regression test are added.

A “meaningful regression” in this case is up to the judgment of the person approving the revert.

Some criteria to consider if a revert is warranted:

  • A bug in a stable or otherwise important feature that causes code to stop compiling, changes runtime behavior, or triggers a (warn-by-default or higher) lint incorrectly in real-world code. Especially if the bug is reachable without any unstable feature gates.
  • If a bug or change (incl. ICEs) is particularly easy to hit.
  • If a bug or change significantly degrades contributor experience.
  • If a test is flaky and unreliable.

When these criteria are in doubt, and especially if real-world code is affected, revert the PR. This has three benefits:

  1. It allows bleeding edge users (esp. nightly or beta) to continue to use and report bugs on HEAD with a higher degree of certainty about where new bugs are introduced.
  2. It takes pressure off the original PR author and the team; no one should feel pressured to fix it immediately.
  3. It might prevent the significant bug or regression from reaching another nightly/beta/stable build.

Before being reverted, a PR should be shown to cause a regression with a fairly high degree of certainty (e.g. bisection on commits, or bisection on nightlies with one or more compiler team members pointing to this PR, or it’s simply obvious to everyone involved). Only revert with lower certainty if the issue is particularly critical or urgent to fix.

Creating reverts

The easiest method for creating a revert is to use the “Revert” button on GitHub. This appears next to the “bors merged commit abcd” message on a pull request, and creates a new pull request.

Location of the “Revert” button

Alternatively, a revert commit can be created using the git CLI and then uploaded as a pull request:

$ git revert -m 1 62d5bee

Don’t rely only on the default commit title and message created by git. Instead, title the revert commit meaningfully, and link to the relevant PR that introduced the regression. Link to the specific PR that is being fully or partially reverted. Link to relevant issues and discussions. Retain the commit hash being reverted.

Example revert commit title and message

Revert #131669 due to ICEs

Revert <https://github.com/rust-lang/rust/pull/131669> due to ICE
reports:

- <https://github.com/rust-lang/rust/issues/134059> (real-world)
- <https://github.com/rust-lang/rust/issues/134060> (fuzzing)

The changes can be re-landed with those cases addressed.

This reverts commit 703bb982303ecab02fec593899639b4c3faecddd, reversing
changes made to f415c07494b98e4559e4b13a9c5f867b0e6b2444.

It’s polite to tag the author and reviewer of the original PR so they know what’s going on. You can use the following message template for the revert PR description:

Reverts rust-lang/rust#123456
cc @author @reviewer

This revert is based on the following report of a regression caused by this PR:
<link to issue or comment(s)>

In accordance with the compiler team [revert policy], PRs that cause meaningful
regressions should be reverted and re-landed once the regression has been fixed
(and a regression test has been added, where appropriate).
[revert policy]: https://forge.rust-lang.org/compiler/reviews.html#reverts

Fear not! Regressions happen. Please rest assured that this does not
represent a negative judgment of your contribution or ability to contribute
positively to Rust in the future. We simply want to prioritize keeping existing
use cases working, and keep the compiler more stable for everyone.

r? compiler

Please include a temporary regression test in a separate commit to check that the regression is actually addressed by the revert commit. When the change is re-landed, this temporary regression test can be adapted or removed as suitable once improved test coverage is in place.

If you have r+ privileges, you can self-approve a revert if it is clean and unlikely to cause new regressions on its own; make sure the revert is not a case of “the cure is worse than the poison”. If the revert is non-trivial, please wait for a review from the original reviewer or from another compiler reviewer via r? compiler. You can ask in #t-compiler if the matter is more urgent.

Generally speaking, reverts should have elevated priority and should match the rollup status of the PR they are reverting. If a non-rollup PR is shown to have no impact on performance, its revert can be marked rollup=always. The revert author can coordinate with contributors authoring rollups to reschedule rollups or to interleave the revert PR between rollups if suitable.

Forward fixes

Often it is tempting to address a regression by posting a follow-up PR that, rather than reverting the regressing PR, instead augments the original in small ways without reverting its changes overall. However, if real-world users have reported being affected, this practice is strongly discouraged unless one of the following is true:

  • A high-confidence fix is already in the bors queue.
  • The regression has made it to a release branch (beta or stable) and a backport is needed. Often the “smallest possible change” is desired for a backport (so that the fix doesn’t introduce new regressions). The offending PR may or may not still be reverted on the main branch; this is left to the discretion of someone who can r+ it.

While it can feel like a significant step backward to have your PR reverted, in most cases it is much easier to reland the PR once a fix can be confirmed. Allowing a revert to land takes pressure off of you and your reviewers to act quickly and gives you time to address the issue fully. It also is an opportunity to take a step back and reassess the test coverage.

Rollups

All reviewers are strongly encouraged to explicitly mark a PR as to whether or not it should be part of a rollup. This is usually done either when approving a PR with @bors r+ $ROLLUP_STATUS or with @bors $ROLLUP_STATUS where $ROLLUP_STATUS is substituted with one of the following:

  • rollup=always: These PRs are very unlikely to break tests or have performance implications. Example scenarios:
    • Changes are limited to documentation, comments, etc. that are highly unlikely to fail a build.
    • Changes cannot have performance implications.
    • Your PR is not landing possibly-breaking or behavior altering changes.
      • Feature stabilization without other changes is likely fine to rollup, though.
    • When in doubt do not use this option!
  • rollup=maybe: This is the default if @bors r+ does not specify any rollup status at all. Use this if you have some doubt that the change won’t break tests. This can be used if you aren’t sure if it should be one of the other categories. Since this is the default, there is usually no need to explicitly specify this, unless you are un-marking the rollup level from a previous command.
  • rollup=iffy: Use this for mildly risky PRs (more risky than “maybe”). Example scenarios:
    • The PR is large and non-additive (note: adding 2000 lines of completely new tests is fine to rollup).
    • Has platform-specific changes that are not checked by the normal PR checks.
    • May be affected by MIR migrate mode.
  • rollup=never: This should never be included in a rollup (please include a comment explaining why you have chosen this). Example scenarios:
    • May have performance implications.
    • May cause unclear regressions (we would likely want to bisect to this PR specifically, as it would be hard to identify as the cause from a rollup).
    • Has a high chance of failure.
    • Is otherwise dangerous to rollup.
      • Messes too much with:
      • LLVM or code generation
      • bootstrap or the build system
      • build-manifest
  • rollup: this is equivalent to rollup=always
  • rollup-: this is equivalent to rollup=maybe
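
For example, approving a documentation-only PR and marking it as safe for rollups might look like:

@bors r+ rollup=always

whereas a PR with possible performance implications would instead be approved with @bors r+ rollup=never, together with a short comment explaining that choice.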

Priority

Reviewers are encouraged to set one of the rollup statuses listed above instead of setting priority. Bors automatically sorts based on the rollup status (never is the highest priority, always is the lowest), and also by PR age. If you do change the priority, please use your best judgment to balance fairness and urgency with other PRs.

The following is some guidance for setting priorities:

  • 1-5
    • P-high issue fixes
    • Toolstate fixes
    • Reverts containing the above
    • Beta-nominated PRs
    • Submodule/Subtree updates
  • 5+
    • P-critical issue fixes
  • 10+
    • Bitrot-prone PRs (particularly very large ones that touch many files)
    • Urgent PRs (e.g. urgent reverts)
    • Beta backports
  • 20+
    • High priority that needs to jump ahead of any rollups
    • Fixes or changes something that has a high risk of being re-broken by another PR in the queue.
  • 1000
    • Absolutely critical fixes
    • Release promotions
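
If you do adjust a priority manually, it is done with a bors comment; assuming the usual @bors p=<number> syntax, bumping a beta backport might look like:

@bors p=10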

Expectations for r+

bors privileges are binary: the bot doesn’t know which code you are familiar with and which you are not. They must therefore be used with discretion. Do not r+ code that you do not know well; you can definitely review such code, but try to hand off the final r+ to someone else.

Similarly, never issue an r=username command unless that person has done the review, the code has not changed substantially since the review was done, and that person has explicitly indicated that another contributor can r= on their behalf.

Rebasing is fine and often necessary, but changes in functionality typically require re-review. It is very helpful for the reviewer if the PR author can produce a brief summary of what has changed since last review, in addition to responding to individual review comments.

Please refer to bors documentation for bot usage.

Social aspects of reviewing

First and foremost, PR authors and compiler reviewers alike are expected to uphold the Code of Conduct. Simply speaking, a reviewer is expected to be respectful to the PR author, even if the reviewer disagrees with the changes.

Reviewers are encouraged to consider matters from the perspective of the PR author too. It can be very frustrating if a change is stuck for months due to procedural reasons or reviewer bandwidth without any resolution (including a resolution that the compiler might not be ready to accept such a change at the present time, while thanking the PR author for the contribution anyway) and accrues constant merge conflicts.

If some discussions are getting heated, ask the moderation team to step in.

So you want to add a new (stable) option to rustc

So you want to add a new command-line flag to rustc. What is the procedure?

Is this a perma-unstable option?

The first question to ask yourself is:

  • Is this a “perma-unstable” option meant only for debugging rustc (e.g., -Ztreat-err-as-bug)?

If so, you can just add it in a PR, no check-off is required beyond ordinary review.

Other options

If this option is meant to be used by end-users or to be exposed on the stable channel, however, it represents a “public commitment” on the part of rustc that we will have to maintain, and hence there are a few more details to take care of.

There are two main things to take care of, and they can proceed in either order, but both must be completed:

  • Proposal and check-off
  • Implementation and documentation

Finally, some options begin as unstable and only get stabilized over time, in which case you will also need:

  • Tracking issue and stabilization

Proposal and check-off

The “proposal” part describes the motivation and design of the new option you wish to add. It doesn’t necessarily have to be very long. It takes the form of a Major Change Proposal.

The proposal should include the following:

  • Motivation: what is this flag used for?
  • Design: What input does the flag take and what is its observable effect?
  • Implementation notes: You don’t have to talk about the implementation normally, but if there are any key things to note (e.g., it was very invasive to implement), you might note them here.
  • Precedent, links, and related material: Are similar flags available on other compilers/linkers/tools, like clang or lld?
  • Alternatives, concerns, and key decisions: Were there any alternatives considered? If so, why did you pick this design?

Note that it is fine if you don’t have any implementation notes, precedent, or alternatives to discuss.

Also, one good approach to writing the MCP is basically to write the documentation you will have to write anyway to explain to users how the option works, and then add any additional notes on alternatives and so forth that are required.

Once you’ve written up the proposal, you can open an MCP issue. Note that since this MCP proposes a permanent change, a full compiler-team FCP is required, not just a “second”. The FCP can be started by a team member with @rfcbot fcp merge.

Implementation, documentation

Naturally your new option will also have to be implemented. You can implement the option and open up a PR. Often, this implementation work actually happens before the MCP is created, and that’s fine – we’ll just ask you to open an MCP with the write-up.

See the Command-line Arguments chapter in the rustc dev guide for guidelines on how to name and define a new argument.

A few notes that are sometimes overlooked:

  • Many options begin as “unstable” options, either because they use -Z or because they require -Zunstable-options to use.
  • You should document the option. Often this documentation can just be copied from the MCP text. Where you add this documentation depends on whether the option is available on stable Rust: unstable options are documented in the Unstable Book, while stable options are documented in the Rustc Book.

Stabilization and tracking issue

Typically options begin as unstable, meaning that they are either used with -Z or require -Zunstable-options.

Once the implementation lands, we should create a tracking issue that links to the MCP and where stabilization can be proposed.

Stabilization generally proceeds when the option has seen a bit of use and the implementation seems to be working as expected for its intended purpose.

Remember that when stabilization occurs, documentation should be moved from the Unstable Book to the Rustc Book.

Major Change Proposals

Introduced in RFC 2904, a “major change proposal” is a lightweight form of RFC that the compiler team uses for architectural changes that are not end-user facing. (It can also be used for small user-facing changes like adding new compiler flags, though in that case we also require an rfcbot fcp to get full approval from the team.) Larger changes or modifications to the Rust language itself require a full RFC (the latter fall under the lang team’s purview).

Motivation

As the compiler grows in complexity, it becomes harder and harder to track what’s going on. We don’t currently have a clear channel for people to signal their intention to make “major changes” that may impact other developers in a lightweight way (and potentially receive feedback).

Our goal is to create a channel for signaling intentions that lies somewhere between opening a PR (and perhaps cc’ing others on that PR) and creating a compiler team design meeting proposal or RFC.

Goals

Our goals with the MCP are as follows:

  • Encourage people making a major change to write at least a few paragraphs about what they plan to do.
  • Ensure that folks in the compiler team are aware that the change is happening and are given a chance to respond.
  • Ensure that every proposal has a “second”, meaning some expert from the team who thinks it’s a good idea.
  • Ensure that major changes have an assigned and willing reviewer.
  • Avoid the phenomenon of large, sweeping PRs landing “out of nowhere” onto someone’s review queue.
  • Avoid the phenomenon of PRs living in limbo because it’s not clear what level of approval is required for them to land.

Major Change Proposals

If you would like to make a major change to the compiler, the process is as follows:

  • Open a tracking issue on the rust-lang/compiler-team repo using the major change template.
    • A Zulip topic in the stream #t-compiler/major changes will automatically be created for you by a bot.
    • If concerns are raised, you may want to modify the proposal to address those concerns.
    • Alternatively, you can submit a design meeting proposal to have a longer, focused discussion.
  • To be accepted, a major change proposal needs three things:
    • One or more reviewers, who commit to reviewing the work. This can be the person making the proposal, if they intend to mentor others.
    • A second, a member of the compiler team or a contributor who approves of the idea, but is not the one originating the proposal.
    • A final comment period (a 10 day wait to give people time to comment).
      • The FCP can be skipped if the change is easily reversed and/or further objections are considered unlikely. This often happens if there has been a lot of prior discussion, for example.
  • Once the FCP completes, if there are no outstanding concerns, PRs can start to land.
    • If those PRs make outward-facing changes that affect stable code, then either the MCP or the PR(s) must be approved with a rfcbot fcp merge comment.

Conditional acceptance

Some major change proposals will be conditionally accepted. This indicates that we’d like to see the work land, but we’d like to re-evaluate the decision of whether to commit to the design after we’ve had time to gain experience. We should try to be clear about the things we’d like to evaluate, and ideally a timeline.

Deferred or not accepted

Some proposals will not be accepted. Some of the possible reasons:

  • You may be asked to do some prototyping or experimentation before a final decision is reached
  • The idea might be reasonable, but there may not be bandwidth to do the reviewing, or there may just be too many other things going on.
  • The idea may be good, but it may be judged that the resulting code would be too complex to maintain, and not worth the benefits.
  • There may be flaws in the idea, or it may not provide sufficient benefit.

What happens if someone opens a PR that seems like a major change without doing this process?

The PR should be closed or marked as blocked, with a request to create a major change proposal first.

If the PR description already contains suitable text that could serve as an MCP, then simply copy and paste that into an MCP issue. Using an issue consistently helps to ensure that the tooling and process works smoothly.

Can I work on code experimentally before an MCP is accepted?

Of course! You are free to work on PRs or write code. But those PRs should be marked as experimental and they should not land, nor should anyone be expected to review them (unless folks want to).

What constitutes a major change?

The rough intuition is “something that would require updates to the rustc-dev-guide or the rustc book”. In other words:

  • Something that alters the architecture of some part(s) of the compiler, since this is what the rustc-dev-guide aims to document.
  • A simple change that affects a lot of people, such as altering the names of very common types or changing coding conventions.
  • Adding a compiler flag or other public facing changes, which should be documented (ultimately) in the rustc book. This is only appropriate for “minor” tweaks, however, and not major things that may impact a lot of users. (Also, public facing changes will require a full FCP before landing on stable, but an MCP can be a good way to propose the idea.)

Note that, in some cases, the change may be deemed too big and a full FCP or RFC may be required to move forward. This could occur with significant public facing change or with sufficiently large changes to the architecture. The compiler team leads can make this call.

Note that whether something is a major change proposal is not necessarily related to the number of lines of code that are affected. Renaming a method can affect a large number of lines, and even require edits to the rustc-dev-guide, but it may not be a major change. At the same time, changing names that are very broadly used could constitute a major change (for example, renaming from the tcx context in the compiler to something else would be a major change).

Public-facing changes require rfcbot fcp

The MCP “seconding” process is only meant to be used to get agreement on the technical architecture we plan to use. It is not sufficient to stabilize new features or make public-facing changes like adding a -C flag. For that, an rfcbot fcp is required (or perhaps an RFC, if the change is large enough).

For landing compiler flags in particular, a good approach is to start with an MCP introducing a -Z flag and then “stabilize” the flag by moving it to -C in a PR later (which would require rfcbot fcp).

Major change proposals are not sufficient for language changes or changes that affect cargo.

Steps to open an MCP

  • Open a tracking issue on the rust-lang/compiler-team repo using the major change template.
  • Create a Zulip topic in the stream #t-compiler/major changes:
    • The topic should be named something like “modify the whiz-bang component compiler-team#123”, which describes the change and links to the tracking issue.
    • The stream will be used for people to ask questions or propose changes.

What kinds of comments should go on the tracking issue in compiler-team repo?

Please direct technical conversation to the Zulip stream.

The compiler-team repo issues are intended to be low traffic and used for procedural purposes. Note that to “second” a design or offer to review, you should be someone who is familiar with the code, typically but not necessarily a compiler team member or contributor.

  • Announcing that you “second” or approve of the design.
  • Announcing that you would be able to review or mentor the work.
  • Noting a concern that you don’t want to be overlooked.
  • Announcing that the proposal will be entering FCP or is accepted.

How does one register as a reviewer, register approval, or raise an objection?

These types of procedural comments can be left on the issue (it’s also good to leave a message in Zulip). See the previous section. To facilitate machine-parsable scanning of the concerns, please use the following syntax to formally register a concern:

@rfcbot concern reason-for-concern

<long description of the concern>

And the following syntax to lift a concern when resolved:

@rfcbot resolve reason-for-concern

Who decides whether a concern is unresolved?

Usually the experts in the given area will reach a consensus here. But if there is some need for a “tie breaker” vote or judgment call, the compiler-team leads make the final call.

What are some examples of major changes from the past?

Here are some examples of changes that were made in the past that would warrant the major change process:

  • overhauling the way we encode crate metadata
  • merging the gcx, tcx arenas
  • renaming a widely used, core abstraction, such as the Ty type
  • introducing cargo pipelining
  • adding a new -C flag that exposes some minor variant

What are some examples of things that are too big for the major change process?

Here are some examples of changes that are too big for the major change process, or which at least would require auxiliary design meetings or a more fleshed out design before they can proceed:

  • introducing incremental or the query system
  • introducing MIR or some new IR
  • introducing parallel execution
  • adding ThinLTO support

What are some examples of things that are too small for the major change process?

Here are some examples of things that don’t merit any MCP:

  • adding new information into metadata
  • fixing an ICE or tweaking diagnostics
  • renaming “less widely used” methods

When should Major Change Proposals be closed?

Major Change Proposals can be closed:

  • by the author, if they have lost interest in pursuing it.
  • by a team lead or expert, if there are strong objections from key members of the team that don’t look likely to be overcome.
  • by folks doing triage, if there have been three months of inactivity. In this case, people should feel free to re-open the issue if they would like to “rejuvenate” it.

Membership

This section discusses membership in the compiler team. There are currently two levels of membership: compiler team contributors and full members, both described below.

The path to membership

People who are looking to contribute to the compiler typically start in one of two ways. They may tackle “one off” issues, or they may get involved in some kind of existing working group. They don’t know much about the compiler yet and have no particular privileges. They are assigned to issues using the triagebot and (typically) work with a mentor or mentoring instructions.

Compiler team contributors

Once a working group participant has been contributing regularly for some time, they can be promoted to the level of a compiler team contributor (see the section on how decisions are made below). This title indicates that they are someone who contributes regularly.

It is hard to define the precise conditions when such a promotion is appropriate. Being promoted to contributor is not just a function of checking various boxes. But the general sense is that someone is ready when they have demonstrated three things:

  • “Staying power” – the person should be contributing on a regular basis in some way. This might for example mean that they have completed a few projects.
  • “Independence and familiarity” – they should be acting somewhat independently when taking on tasks, at least within the scope of the working group. They should plausibly be able to mentor others on simple PRs.
  • “Cordiality” – contributors will be members of the organization and are held to a higher standard with respect to the Code of Conduct. They should not only obey the letter of the CoC but also its spirit.

Being promoted to contributor implies a number of privileges:

  • Contributors have r+ (approve a pull request) privileges and can do reviews (they are expected to use those powers appropriately, as discussed previously). They also have access to control perf/rustc-timer and other similar bots. See the documentation for bors and r+ here.

    Tip: some baseline rules around bors permissions are: don’t do a try build unless you have done a check for malicious code first and don’t r+ unless you are reasonably confident that you can effectively review the code in question.

  • Contributors are members of the organization so they can modify labels and be assigned to issues.

  • Contributors are a member of the rust-lang/compiler team on GitHub, so that they receive pings when people are looking to address the team as a whole.

  • Contributors are listed on the rust-lang.org web page.

It also implies some obligations (in some cases, optional obligations):

  • Contributors will be asked if they wish to be added to the reviewer rotation.
  • Contributors are held to a higher standard than ordinary folk when it comes to the Code of Conduct.

What it means to be a compiler contributor

Once you become a compiler team contributor, a number of things will happen:

  • You will gain access to a private Zulip stream, where internal discussions happen or ideas in a very early draft state are shared. Come and say hello to your new team members!

  • You will be subscribed and gain write access to a number of GitHub repositories. Check this GitHub page to see which repositories you now have access to. Some of them are pretty quiet or obsolete, so don’t worry about all of them.

    Tip: GitHub automatically adds you as a subscriber to every repo you get write permission to. You can disable this in the settings (here).

  • You will also be subscribed to the all@rust-lang.org mailing list. See this file to check how subscriptions to mailing lists work. It’s a very low-volume mailing list (maybe a few emails per year); it’s a way to communicate things to all contributors. We will not send you spam from this address.

Full members

As a contributor gains in experience, they may be asked to become a compiler team member. This implies that they are not only a regular contributor, but are actively helping to shape the direction of the team or some part of the compiler (or multiple parts).

  • Compiler team members are the ones who select when people should be promoted to compiler team contributor or to the level of member.
  • Compiler team members are consulted on FCP decisions (which, in the compiler team, are relatively rare).
  • There will be a distinct GitHub team containing only the compiler team members, but the name of this team is “to be determined”.
  • Working groups must always include at least one compiler team member as a lead (though groups may have other leads who are not yet full members).

How promotion decisions are made

Promotion decisions (from participant to contributor, and from contributor to member) are made by having an active team member send an e-mail to the alias compiler-private@rust-lang.org. This e-mail should include:

  • the name of the person to be promoted
  • a draft of the public announcement that will be made

Compiler-team members should send e-mail giving their explicit assent, or with objections. Objections should always be resolved before the decision is made final. E-mails can also include edits or additions for the public announcement.

To make the final decision:

  • All objections must be resolved.
  • There should be a “sufficient number” (see below) of explicit e-mails in favor of addition (including the team lead).
  • The nominator (or some member of the team) should reach out to the person in question and check that they wish to join.

We do not require all team members to send e-mail, as historically these decisions are not particularly controversial. For promotion to a contributor, the only requirement is that the compiler team lead agrees. For promotion to a full member, more explicit mails in favor are recommended.

Once we have decided to promote, then the announcement can be posted to internals, and the person added to the team repository.

Not just code

It is worth emphasizing that becoming a contributor or member of the compiler team does not necessarily imply writing PRs. There are a wide variety of tasks that need to be done to support the compiler and which should make one eligible for membership. Such tasks would include organizing meetings, participating in meetings, bisecting and triaging issues, writing documentation, and working on the rustc-dev-guide. The most important criterion for elevation to contributor, in particular, is regular and consistent participation. The most important criterion for elevation to member is actively shaping the direction of the team or compiler.

Alumni status

If at any time a current contributor or member wishes to take a break from participating, they can opt to put themselves into alumni status. When in alumni status, they will be removed from GitHub aliases and the like, so that they need not be bothered with pings and messages. They will also not have r+ privileges. Alumni members will however still remain members of the GitHub org overall.

People in alumni status can ask to return to “active” status at any time. This request would ordinarily be granted automatically barring extraordinary circumstances.

People in alumni status are still members of the team at the level they previously attained and they may publicly indicate that, though they should indicate the time period for which they were active as well.

Changing back to contributor

If desired, a team member may also ask to move back to contributor status. This would indicate a continued desire to be involved in rustc, but that they do not wish to be involved in some of the weightier decisions, such as who to add to the team. Like full alumni, people who were once full team members but who went back to contributor status may ask to return to full team member status. This request would ordinarily be granted automatically barring extraordinary circumstances.

Automatic alumni status after 6 months of inactivity

If a contributor or a member has been inactive in the compiler for 6 months, then we will ask them if they would like to go to alumni status. If they respond yes or do not respond, they can be placed on alumni status. If they would prefer to remain active, that is also fine, but they will get asked again periodically if they continue to be inactive.

Prioritization

This section documents the processes of the prioritization WG.

Prioritization WG - Procedure

This document details the procedure the WG-prioritization follows to fill the agenda for the weekly meeting of T-compiler. The working group focuses mainly on triaging T-compiler regressions, identifying possibly critical (and thus potential release blocker) issues and building the agenda for the weekly T-compiler meeting summarizing the main points to be discussed.

General issues review process

  • Check the status of the issue
  • Try moving it forward if possible (e.g. stimulate further comments from the issue author / reviewer)
  • Ask for more info if it’s needed
  • Is there an MCVE for the issue already?
  • Check if it’s a regression and label it accordingly (regression-* labels)
  • Figure out the area the issue belongs to and label it accordingly (A-* labels)
  • Ping notify groups or relevant teams (see the command sketch after this list)
  • Assign if possible
  • Nominate the issue if it’s unclear and needs to be discussed
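
Several of these steps can be done with triagebot commands left as comments on the issue. A minimal sketch, using the @rustbot syntax shown elsewhere in this chapter; the label and group names below are purely illustrative and should be adapted to the issue at hand:

@rustbot label +regression-from-stable-to-nightly +A-diagnostics
@rustbot ping windows
@rustbot claim

(@rustbot claim assigns the issue to whoever posts the comment.)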

Generating the T-compiler meeting’s agenda

The T-compiler agenda is generated from a template (available on HackMD or GitHub). We suggest working through the following steps in this order:

Prepare agenda content

1. Add T-compiler labels where appropriate

2. Assign a priority label to issues where needed

Regressions labeled with I-prioritize signal that a priority assessment is pending. When this label is added to an issue, the triagebot automatically creates a notification for @WG-prioritization members on the Zulip stream.

To assign a priority, we replace the I-prioritize label with one of P-critical, P-high, P-medium or P-low and add a succinct comment linking the Zulip discussion where the issue prioritization occurred. An example of a template for the comment:

WG-prioritization assigning priority (Zulip discussion).

@rustbot label -I-prioritize +P-XXX

Ideally, we want all T-compiler issues with an I-prioritize label to have a priority assigned, or at least to strive for that goal: sometimes different factors block an issue from being assigned a priority label, either because the report or the context is unclear or because the issue cannot be reproduced and an MCVE would help. Don’t hesitate to ask the issue reporter for clarification, or ping the ICEbreaker team when an ICE (“Internal Compiler Error”) needs a reduction (add a comment on the issue with @rustbot ping icebreakers-cleanup-crew).

Keep an eye also on regressions (stable, beta and nightly); ideally they should have an assignee.

3. Accept MCPs

An MCP is a Major Change Proposal, in other words a change to the Rust compiler that needs a bit more thought and discussion within the compiler team than a pull request. The life cycle of an MCP is described in the documentation. The relevant part for WG-Prioritization is keeping an eye on them and accepting all MCPs that have been in final-comment-period for 10 or more days.

To accept an MCP, remove the final-comment-period label, add the major-change-accepted label, and close the issue. A notification to the relevant Zulip topic (in this stream) will be automatically sent by the triagebot.
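
As a sketch, the label changes can be made with the same @rustbot label syntax used elsewhere in this document, in a comment on the MCP issue:

@rustbot label -final-comment-period +major-change-accepted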

Generate the meeting’s agenda

Run the triagebot CLI to generate the agenda. You need to clone https://github.com/rust-lang/triagebot (there is no official prepackaged release for this tool) and export a GITHUB_TOKEN environment variable, plus optionally a GOOGLE_API_KEY to access the public Google calendar (if this env var is not set, meetings have to be copied and pasted manually from here).

To generate the meeting’s agenda, run:

$ cargo run --bin prioritization-agenda
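
Putting it all together, a typical invocation might look like this (the token values are placeholders you must supply yourself):

$ git clone https://github.com/rust-lang/triagebot
$ cd triagebot
$ export GITHUB_TOKEN=<your GitHub personal access token>
$ export GOOGLE_API_KEY=<optional, only needed to read the public calendar>
$ cargo run --bin prioritization-agenda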

Copy the content of the generated agenda on HackMD. This will be our starting point.

Add performance logs

Paste the markdown file of this week’s performance triage log into the agenda and clean it up a little bit by removing emojis (to make the text readable when pasted on Zulip).

Announce the meeting on Zulip

About two hours before the scheduled meeting, create a new topic on the Zulip stream #t-compiler/meetings titled “[weekly] YYYY-MM-DD” using the following message template:

Hi @*T-compiler/meeting*; the triage meeting will happen tomorrow in about 2 hours.
*WG-prioritization* has done pre-triage in #**t-compiler/wg-prioritization/alerts**
@*WG-prioritization* has prepared the [meeting agenda](link_to_hackmd_agenda)

Working group checkins for today:
- @**WG-foo** by @**person1**
- @**WG-bar** by @**person2**

The working group check-in rotation is generated by a script at this page (TODO: the script is outdated and could probably be merged into the triagebot CLI code).

Check-ins about the progress of working groups are not mandatory, but we rotate through all of them to be sure we don’t miss important progress.

Add details to the Agenda

1. Summarize stable/beta nominations

These are pull requests that the compiler team might want to backport to a release channel. For example, a fix for a stable-to-beta regression might be backported to the beta release channel. A particularly annoying stable-to-stable regression fix might warrant a point release (i.e. releasing a 1.67.1 after a 1.67.0).

Follow the General issues review process.

2. Summarize PRs waiting on team

These are pull requests waiting on a discussion / decision from T-compiler (sometimes more than one team).

Try to follow the General issues review process. Explicitly nominate any issue that can be quickly resolved in a triage meeting.

3. Fill up the “Oldest PRs waiting for review”

This is probably the least automatable part of the agenda (and likely the least fun). The triagebot will emit a list of 50 pull requests ordered by least recent update. The idea is to mention the assigned reviewers during the meeting, ensuring that they stay on top of their queue. We usually try to keep the number of these mentions to around 5 for each meeting.

There are two human factors here to keep in consideration:

  • Pull request reviewers are volunteers; we respect and appreciate their work. We don’t want to remind them too often that there is a pile of pull requests waiting on them. Therefore we usually wait 2 or 3 weeks before reminding them about those pull requests. It seems like a long time to wait, but let’s not forget what contributors accomplish in the meantime! In any case, we are trying to find ways to improve on these metrics.
  • Contributors taking the time to submit a pull request deserve our appreciation equally, so we try not to make them wait too long for a review, or they will lose context about their work (or the motivation to drive the contribution to completion).

Striking a balance between these two diverging forces requires some empathy and “tribal knowledge” that comes with practice. Other factors can block a pull request’s progress:

  • The review is shared with another team (i.e. Team 1 says “OK”, now waiting on Team 2)
  • The alternating labels S-waiting-on-review and S-waiting-on-author that track the life cycle of a pull request are not promptly applied. A pull request that is ready to be reviewed but is not labeled S-waiting-on-review is idling for no purpose.

4. Add some context to P-critical and P-high regressions without an assignee

Try to follow the General issues review process.

5. Summarize I-compiler-nominated issues

Issues labeled with I-compiler-nominated are generally nominated so that the compiler team dedicates a slice of the meeting specifically to them (generally towards the end). After the discussion, add a comment on GitHub linking the Zulip message where the discussion started (so everyone can read it). T-compiler sometimes writes a summary of the discussion on the issue itself.

Try to follow the General issues review process:

  • Check if an issue needs a discussion and add the label I-compiler-nominated
  • When added to the agenda, add some context:
    • Who the assignee is
    • Is this an issue or a pull request: if it’s an issue, does it have a pull request that fixes it?
    • Why was it nominated
    • Other important details

6. Final review before the meeting

Re-run the triagebot CLI script and update the agenda on HackMD with new data (if any). This is useful when there are last-second changes affecting the agenda content.

Follow-ups after meeting

The meeting is over! Time to cleanup a little bit.

  • Lock the agenda file on HackMD assigning write permissions to Owners. Download the markdown file and commit it to this repository.

  • Remove the to-announce label from MCPs, unless this label was added exactly during the meeting (and therefore will be seen during the following meeting).

  • Remove the to-announce label from FCPs in the rust, compiler-team and forge repos, with the same disclaimer as above.

  • Accept or decline beta-nominated and stable-nominated backports, according to what was decided during the meeting. For more info check the T-release backporting docs.

    • To accept a backport, add a {beta,stable}-accepted label and keep the {beta,stable}-nominated label. Other automated procedures will process these pull requests, so it’s important to leave both labels. Add a comment on GitHub linking the Zulip discussion. See the label sketch after this list.
    • To decline a backport, simply remove the {beta,stable}-nominated label. Add a comment on GitHub explaining why the backport was declined and link the Zulip discussion.
  • Remove the I-compiler-nominated label from issues that were discussed. Sometimes not all nominated issues are discussed (because of time constraints); in this case the I-compiler-nominated label will stick around until the next meeting.

  • Create a new agenda stub for the following week using our template and post the link on Zulip, so it’s available for people if they want to add content during the week.
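
For the backport bookkeeping mentioned above, a minimal sketch of the label changes using @rustbot, shown for a beta backport (the stable variant is analogous). To accept, keep the nominated label and add the accepted one:

@rustbot label +beta-accepted

To decline, simply drop the nominated label:

@rustbot label -beta-nominated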

Priority levels

As the compiler team’s resources are limited, the prioritization working group’s main goal is to identify the most relevant issues to work on, so that the compiler team can focus on what matters the most.

Words used in this document:

issue refers to bugs and feature requests that are nominated for prioritization by adding the I-prioritize label, as described below.

This document will define what each label means, and what strategy for each label will be used.

Labels

Labeling an issue as I-prioritize starts the prioritization process, which will end by removing the I-prioritize label and adding one of the 4 labels we will discuss below:

  • P-critical
  • P-high
  • P-medium
  • P-low

Each of these labels defines a strategy the team will adopt regarding:

  • The amount of focus a given issue will receive
  • How members of the community can get involved

P-critical

A P-critical issue is a potential blocker issue.

The Working Group will keep track of these issues and will remind the compiler team on a weekly basis during the triage meeting.

Examples of things we typically judge to be “critical” bugs:

  • Regressions where code that used to compile no longer does
    • Mitigating conditions that may lower priority:
      • If the code should never have compiled in the first place (but if the regression affects a large number of crates, this may indicate that we need a warning period)
      • If the code in question is theoretical and considered unlikely to exist in the wild, or if it only exists in small, unmaintained packages that are not widely used
    • If a regression has been in stable for a release or two (either because we are still awaiting a fix, or because the bug had lain dormant, i.e. undetected), we typically lower the priority as well, because by that time, if users have not raised a ruckus about the regression, that is a sign that it is inherently not a critical issue, e.g. an issue that would have been P-critical but ended up being P-high
  • Regressions where code still compiles but does something different than it used to do (dynamic semantics have changed)
    • Mitigating conditions that may lower priority:
      • If the code uses a feature that is explicitly not specified (e.g. the std::vec::Vec docs state that the order in which it drops its elements is subject to change)
  • Feature-gated features accessible without a feature gate
    • Mitigating conditions that may lower priority:
      • If the pattern is VERY unlikely
  • Soundness holes with real-world implications
    • Mitigating conditions that may lower priority:
      • Soundness holes that are difficult to trigger
      • Soundness holes that will not affect stable, e.g. if the hole makes use of a gated unstable feature.
  • Diagnostic regressions where the diagnostic is very common and the situation very confusing
  • ICEs for common scenarios or code patterns
    • Mitigating conditions that may lower priority:
      • If the code that triggers the ICE also triggers compilation errors, and those errors are emitted before the ICE
      • If the code in question makes use of unstable features, particularly if the ICE requires a feature gate

A P-critical issue will receive the most attention. It must be assigned to one or several people as soon as possible, and the rest of the team should do their best to help them out if/when applicable.

P-high

P-high issues are issues that need attention from the compiler team, but not to the point that they need to be discussed at every meeting. They can be P-critical issues that have a mitigating condition as defined above, or important issues that aren’t deemed blockers.

Because there are too many P-high issues to fit in every compiler meeting, they should instead be handled asynchronously by the Prioritization WG, in order to help them move forward. They can still occasionally be brought up at meetings when it is deemed necessary.

The effectiveness of the Prioritization WG will be a direct consequence of our ability to draw the line between P-critical and P-high issues. There shouldn’t be so many P-critical issues that compiler meetings become unmanageable, but critical issues shouldn’t get lost in the list of P-high issues.

P-high issues are issues the teams will mostly work on. We want to make sure they’re assigned, and keep an eye on them.

P-medium and P-low

P-medium refers to issues that aren’t a priority for the team and that will be resolved in the long run, e.g. issues that will be fixed after a specific feature has landed. They are issues we would mentor someone interested in fixing. They will remain in this state until someone complains, a community member fixes them, or they get fixed by accident.

P-low refers to issues that the compiler team doesn’t plan to resolve, but that are still worth fixing.

Notification groups

The compiler team has a number of notification groups that we use to ping people and draw their attention to issues. Notification groups are set up so that anyone can join them if they want.

Creating a notification group

If you’d like to create a notification group, here are the steps. First, you want to get approval from the compiler team:

  • Propose the group by preparing a Major Change Proposal. If your group is not analogous to some existing group, it is probably a good idea to ping the compiler team leads beforehand or as part of the MCP.
  • The MCP should specify what GitHub label will be associated with the notification group. Often this is an existing label, such as O-Windows.

Once the MCP is accepted, here are the steps to actually create the group. In some cases we include an example PR from some other group.

Compiler-team Triage Meeting

What is it?

The triage meeting is a weekly meeting where we go over the open issues, look at regressions, consider beta backports, and handle other such business. At the tail end of the meeting, we also do brief check-ins with active working groups to get an idea of what they’ve been working on.

When and where is it?

See the compiler team meeting calendar for the canonical date and time. The meetings take place in the #t-compiler stream on the rust-lang Zulip.

Where can I learn more?

The meeting procedure is documented in rust-lang/rust#54818.

The working group check-in schedule is available on the compiler-team website.

Compiler-team Steering Meeting

What is it?

The “steering meeting” is a weekly meeting dedicated to planning and high-level discussion. The meeting operates on a repeating schedule:

  • Week 1: Planning
  • Week 2: Technical or non-technical discussion
  • Week 3: Technical or non-technical discussion
  • Week 4: Non-technical discussion

The first meeting of the 4-week cycle is used for planning. The primary purpose of this meeting is to select the topics for the next three meetings. The topics are selected from a set of topic proposals, which must be uploaded and available for perusal before the meeting starts. The planning meeting is also an opportunity to check on the “overall balance” of our priorities.

The remaining meetings are used for design or general discussion. Weeks 2 and 3 can be used for technical or non-technical discussion; it is also possible to use both weeks to discuss the same topic, if that topic is complex. Week 4 is reserved for non-technical topics, so as to ensure that we are keeping an eye on the overall health and functioning of the team.

Where do proposals come from?

The team accepts proposals via an open submission process, which is documented on its own page.

Announcing the schedule

After each planning meeting, the topics for the next three weeks are added to the compiler-team meeting calendar and a blog post is posted to the Inside Rust blog.

When and where is it?

See the compiler team meeting calendar for the canonical date and time. The meetings take place in the #t-compiler stream on the rust-lang Zulip.

Submitting a proposal

If you would like to submit a proposal to the steering meeting for group discussion, read on! This page has all the details.

TL;DR

In short, all you have to do is open a proposal issue on the compiler-team repository (see the Adding a proposal section below).

You don’t have to have a lot of details to start: just a few sentences is enough. But, especially for technical design discussions, we will typically expect that some form of more detailed overview be made available by the time the meeting takes place.

Examples of good candidates for discussing at the steering meeting

Here are some examples of possible technical topics that would be suitable for the steering meeting:

  • A working group has an idea to refactor the HIR to make some part of their job easier. They have sketched out a proposal and would like feedback.
  • Someone has encountered a problem that is really hard to solve with the existing data structures. They would like feedback on a good solution to their problem.
  • Someone has done major refactoring work on a PR and they would like to be able to explain the work they did and request review.

Steering meetings are also a good place to discuss other kinds of proposals:

  • A proposal to move some part of the compiler into an out-of-tree crate.
  • A proposal to start a new working group.

Note that a steering meeting is not required to create a new working group or an out-of-tree crate, but it can be useful if the proposal is complex or controversial, and you would like a dedicated time to talk out the plans in more detail.

Criteria for selection

When deciding the topics for upcoming meetings, we must balance a number of things:

  • We don’t want to spend time on design work unless there are known people who will implement it and support it; this includes not only the “main coder” but also a suitable reviewer.
  • We don’t want to take on “too many” tasks at once, even if there are people to implement them.
  • We also don’t want to have active projects that will be “stepping on each others’ toes”, changing the same set of code in deep ways.

Meetings are not mandatory

It is perfectly acceptable to choose not to schedule a particular slot. This could happen if (e.g.) there are no proposals available or if nothing seems important enough to discuss at this moment. Note that, to keep the “time expectations” under control, we should generally stick to the same 4-week cycle and simply opt to skip meetings, rather than (e.g.) planning things at the last minute.

Adding a proposal

Proposals can be added by opening an issue on the compiler-team repository. There is an issue template for meeting proposals that gives directions. The basic idea is that you open an issue with a few sentences describing what you would like to talk about.

Some details that might be useful to include:

  • how complex of a topic you think this is
  • people in the compiler team that you think should be present for the meeting

Expectations for the meeting

By the time the meeting takes place, we generally would prefer to have a more detailed write-up or proposal. You can find a template for such a proposal here. This should be created in the form of a hackmd document – usually we will then update this document with the minutes and consensus from the meeting. The final notes are then stored in the minutes directory of the compiler-team repository.

Expectations for a non-technical proposal

The requirements for non-technical proposals are somewhat looser. A few sentences or paragraphs may well suffice, if they are enough to understand the aims of the discussion.

Frequently asked questions

What happens if there are not enough proposals? As noted above, meetings are not mandatory. If there aren’t enough proposals in some particular iteration, then we can just opt to not discuss anything.

How to run the planning meeting

Week of the meeting

  • Announce the meeting in the triage meeting
  • Skim over the list of proposals and ping people who have open proposals to get their availability over the next few weeks

Day of the meeting

  • Create a design meeting YYYY.MM.DD topic
    • Ping @t-compiler/meeting, ideally 1h or so before the meeting actually starts, to remind people
  • At the time of the meeting, return to the topic
    • Ping @t-compiler/meeting to let people know the meeting is starting
  • We typically begin with a 5min announcement period
  • Visit the compiler-team repository to get a list of proposed meetings

To actually make the final selection, we recommend

  • First, try to identify topics that are clear non-candidates
    • for example, sometimes more investigative work (e.g., data gathering) is needed
    • try to identify people to do those tasks
    • other issues may be out of date, or clear non-starters, and they can be closed
  • Next tackle technical design meetings, then non-technical
    • Typical ratio is 2 technical, 1 non-technical, but this is not set in stone
    • It’s ok to have fewer than 3 meetings

Announce the meetings

For each scheduled meeting, create a calendar event:

  • invite key participants to the meeting
  • set the location to #t-compiler, Zulip
  • include a link to the design meeting issue in the event

In the relevant issues, add the meeting-scheduled label and add a message like:

In today's [planning meeting], we decided to schedule this meeting for **DATE**.

[Calendar event]

[planning meeting]: XXX link to Zulip topic
[Calendar event]: XXX link to calendar event

You can get the link to the calendar event by clicking on the event in google calendar and selecting “publish”.

Publish a blog post

Add a blog post to the Inside Rust blog using the template found on the compiler-team repository.

How to run the design meeting

Week of the meeting

  • Announce the meeting in the triage meeting
  • Skim over the list of proposals and ping people who have open proposals to get their availability over the next few weeks
  • Make sure that a write-up is available and nag the meeting person otherwise

Day of the meeting

  • Create a design meeting YYYY.MM.DD topic
    • Ping @t-compiler/meeting, ideally 1h or so before the meeting actually starts, to remind people
    • Include a link to the design meeting write-up
  • At the time of the meeting, return to the topic
    • Ping @t-compiler/meeting to let people know the meeting is starting
    • Include a link to the design meeting write-up
  • We typically begin with a 5min announcement period

To guide the meeting, create a shared hackmd document everyone can view (or adapt an existing one, if there is a write-up). Use this to help structure the meeting, document consensus, and take live notes. Try to ensure that the meeting ends with some sort of consensus statement, even if that consensus is just “here are the problems, here is a space of solutions and their pros/cons, but we don’t have consensus on which solution to take”.

After the meeting

crates.io

This section documents the processes of the crates.io team.

Crate removal procedure

If we get a DMCA takedown notice, here’s what needs to happen:

Before removing the crates, get in touch with legal support, and ask an opinion from them on the received request and whether we have to comply with it.

Remove relevant version(s) and/or entire crates from crates.io

  • Remove it from the database:

    heroku run -a crates-io -- target/release/crates-admin delete-crate [crate-name]
    

    or

    heroku run -a crates-io -- target/release/crates-admin delete-version [crate-name] [version-number]
    
  • Remove the crate or version from the index. To remove an entire crate, remove the entire crate file. For a version, remove the line corresponding to the relevant version. See the sketch after this list.

  • Remove the crate archive(s) and readme file(s) from S3.

  • Invalidate the CloudFront cache:

    aws cloudfront create-invalidation --distribution-id EJED5RT0WA7HA --paths '/*'
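
A possible sketch of the index edit mentioned above, assuming a local checkout of the index and a hypothetical crate foobar at version 1.2.3 (the index stores one JSON line per published version, in a path derived from the crate name):

git clone https://github.com/rust-lang/crates.io-index
cd crates.io-index
# To remove the whole crate, delete the file fo/ob/foobar entirely.
# To remove a single version, delete the line whose "vers" field matches:
sed -i '/"vers":"1.2.3"/d' fo/ob/foobar
git commit -am 'Remove foobar 1.2.3'
git push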
    

Remove entire crates from docs.rs

The docs.rs application supports deleting all the documentation ever published of a crate, by running a CLI command. The people who currently have permissions to access the server and run it are:

You can find the documentation on how to run the command here.

Database maintenance

There are times when Heroku needs to perform maintenance on our database instances, for example to apply system updates or upgrade to a newer database server.

We must not let Heroku run maintenance during the maintenance window, to avoid disrupting production users (move the maintenance window if necessary). This page contains the instructions on how to perform the maintenance with the minimum amount of disruption.

Primary database

Performing maintenance on the primary database requires us to temporarily put the application in read-only mode. Heroku performs maintenance by creating a hidden database follower and switching over to it, so we need to prevent writes on the primary to let the follower catch up.

Maintenance should take less than 5 minutes of read-only time, but we should still announce it ahead of time on our status page. This is a sample message we can use:

The crates.io team will perform a database maintenance on YYYY-MM-DD from hh:mm to hh:mm UTC.

We expect this to take less than 5 minutes to complete. During maintenance, crates.io will only be available in read-only mode: downloading crates and visiting the website will still work, but logging in, publishing crates, yanking crates, or changing owners will not work.

Primary database checklist

1 hour before the maintenance

  1. Go into the Heroku Scheduler and disable the job enqueueing the downloads count updater. You can “disable” it by changing its schedule not to run during the maintenance window. The job uses a lot of database resources, and we should not run it during maintenance.

5 minutes before the maintenance

  1. Scale the background worker to 0 instances:

    heroku ps:scale -a crates-io background_worker=0
    

At the start of the maintenance

  1. Update the status page with this message:

    Scheduled maintenance on our database is starting.

    We expect this to take less than 5 minutes to complete. During maintenance, crates.io will only be available in read-only mode: downloading crates and visiting the website will still work, but logging in, publishing crates, yanking crates, or changing owners will not work.

  2. Configure the application to be in read-only mode without the follower:

    heroku config:set -a crates-io READ_ONLY_MODE=1 DB_OFFLINE=follower
    

    The follower is removed because, while Heroku tries to prevent connections to the primary database from failing during maintenance, we observed that the same does not apply to the follower database, and there could be brief periods when the follower is not available.

  3. Wait for the application to be redeployed with the new configuration:

    heroku ps:wait -a crates-io
    
  4. Run the database maintenance:

    heroku pg:maintenance:run --force -a crates-io
    
  5. Wait for the maintenance to finish:

    heroku pg:wait -a crates-io
    
  6. Confirm all the databases are online:

    heroku pg:info -a crates-io
    
  7. Confirm the primary database fully recovered (should output false):

    echo "SELECT pg_is_in_recovery();" | heroku pg:psql -a crates-io DATABASE
    
  8. Switch off read-only mode:

    heroku config:unset -a crates-io READ_ONLY_MODE
    

    WARNING: the Heroku Dashboard’s UI is misleading when removing an environment variable. A red badge with a “-” (minus) in it means the variable was successfully removed; it doesn’t mean removing the variable failed. Failures are indicated with a red badge with an “x” (cross) in it.

  9. Wait for the application to be redeployed with the new configuration:

    heroku ps:wait -a crates-io
    
  10. Update the status page and mark the maintenance as completed with this message:

    Scheduled maintenance finished successfully.

    The message is posted now rather than at the end because this is the point at which production users are no longer impacted by the maintenance.

  11. Scale the background worker up again:

    heroku ps:scale -a crates-io background_worker=1
    
  12. Confirm the follower database is available:

    echo "SELECT 1;" | heroku pg:psql -a crates-io READ_ONLY_REPLICA
    
  13. Enable connections to the follower:

    heroku config:unset -a crates-io DB_OFFLINE
    
  14. Re-enable the background job disabled during step 1.

Follower database

Performing maintenance on the follower database doesn’t require any external communication nor putting the application in read-only mode, as we can just redirect all of the follower’s traffic to the primary database. It shouldn’t be done during peak traffic periods though, as we’ll increase the primary database load by doing this.

Follower database checklist

At the start of the maintenance

  1. Configure the application to operate without the follower:

    heroku config:set -a crates-io DB_OFFLINE=follower
    
  2. Wait for the application to be redeployed with the new configuration:

    heroku ps:wait -a crates-io
    
  3. Start the database maintenance:

    heroku pg:maintenance:run --force -a crates-io READ_ONLY_REPLICA
    
  4. Wait for the maintenance to finish:

    heroku pg:wait -a crates-io READ_ONLY_REPLICA
    
  5. Confirm the follower database is ready:

    heroku pg:info -a crates-io
    
  6. Confirm the follower database is responding to queries:

    echo "SELECT 1;" | heroku pg:psql -a crates-io READ_ONLY_REPLICA
    
  7. Enable connections to the follower:

    heroku config:unset -a crates-io DB_OFFLINE
    
  8. Wait for the application to be redeployed with the new configuration.

    heroku ps:wait -a crates-io
    

docs.rs

docs.rs is a website that hosts documentation for crates published to crates.io.

Add a dependency to the build environment

Rustwide internally uses rust-lang/crates-build-env as the build environment for the crate. If you want to add a system package for crates to link to, this is the place you’re looking for.

Preconditions

Docker and docker-compose must be installed. For example, on Debian or Ubuntu:

sudo apt-get install docker.io docker-compose

Getting started

First, clone the crates-build-env and the docs.rs repos:

git clone https://github.com/rust-lang/crates-build-env
git clone https://github.com/rust-lang/docs.rs

Set the path to the directory of your crate. This must be an absolute path, not a relative path! On platforms with coreutils, you can instead use $(realpath ../relative/path) (relative to the docs.rs directory).

YOUR_CRATE=/path/to/your/crate
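
For example, with coreutils available and a crate checked out next to the docs.rs directory, this might look like (the path is a placeholder):

YOUR_CRATE=$(realpath ../my-crate)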

Add package

Next, add the package to crates-build-env/linux/packages.txt in the correct alphabetical order. This should be the name of a package in the Ubuntu 20.04 Repositories. See the package home page for a full list/search bar, or use apt search locally.

Building the image

Now build the image. This will take a very long time, probably 10-20 minutes.

cd crates-build-env/linux
docker build --tag build-env .

Testing the image

Use the image to build your crate.

cd ../../docs.rs
cp .env.sample .env
docker-compose build
# avoid docker-compose creating the volume if it doesn't exist
if [ -e "$YOUR_CRATE" ]; then
  docker-compose run -e DOCSRS_DOCKER_IMAGE=build-env \
                     -e RUST_BACKTRACE=1 \
                     -v "$YOUR_CRATE":/opt/rustwide/workdir \
    web build crate --local /opt/rustwide/workdir
else
  echo "$YOUR_CRATE does not exist";
fi

Making multiple changes

If your build fails even after your changes, it will be annoying to rebuild the image from scratch just to add a single package. Instead, you can make changes directly to the Dockerfile so that the existing packages are cached. Be sure to move these new packages from the Dockerfile to packages.txt once you are sure they work.

On line 7 of the Dockerfile, add this line: RUN apt-get install -y your_second_package. Rerun the build and start the container; it should take much less time now:

cd ../crates-build-env/linux
docker build --tag build-env .
cd ../../docs.rs
docker-compose run -e DOCSRS_DOCKER_IMAGE=build-env \
                     -e RUST_BACKTRACE=1 \
                     -v "$YOUR_CRATE":/opt/rustwide/workdir \
    web build crate --local /opt/rustwide/workdir

Run the lint script

Before you make a PR, run the shell script lint.sh and make sure it passes. It ensures packages.txt is in order and will tell you exactly what changes you need to make if not.

cd ../crates-build-env
./lint.sh

Make a pull request

Once you are sure your package builds, you can make a pull request to get it adopted upstream for docs.rs and crater. Go to https://github.com/rust-lang/crates-build-env and click ‘Fork’ in the top right. Locally, add your fork as a remote in git and push your changes:

git remote add personal https://github.com/<your_username_here>/crates-build-env
git add -u
git commit -m 'add packages necessary for <your_package_here> to compile'
git push personal

Back on GitHub, make a pull request:

  1. Go to https://github.com/rust-lang/crates-build-env/compare
  2. Click ‘compare across forks’
  3. Click ‘head repository’ -> <your_username>/crates-build-env
  4. Click ‘Create pull request’
  5. Add a description of what packages you added and what crate they fixed
  6. Click ‘Create pull request’ again in the bottom right.

Hopefully your changes will be merged quickly! After that you can either publish a point release (rebuilds your docs immediately) or request for a member of the docs.rs team to schedule a new build (may take a while depending on their schedules).

Self hosting a docs.rs instance

These are instructions for deploying the server in a production environment. For instructions on developing locally without docker-compose, see Developing without docker-compose.

Here is a breakdown of what it takes to turn a regular server into its own version of docs.rs.

Beware: This process is rather rough! Attempts at cleaning it up, automating setup components, etc, would be greatly appreciated!

Requirements

The commands and package names on this page will assume an Ubuntu server running systemd, but hopefully the explanatory text should give enough information to adapt to other systems. Note that docs.rs depends on the host being x86_64-unknown-linux-gnu.

Docs.rs has a few basic requirements:

  • Rust (preferably via rustup)
  • Git
  • CMake, GCC, G++, and pkg-config (to build dependencies for crates and docs.rs itself)
  • OpenSSL, zlib, curl, and libmagic (to link against)
  • PostgreSQL
  • LXC tools (doc builds run inside an LXC container)
$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain nightly
$ source $HOME/.cargo/env
# apt install build-essential git curl cmake gcc g++ pkg-config libmagic-dev libssl-dev zlib1g-dev postgresql lxc-utils

The cratesfyi user

To help things out later on, we can create a new unprivileged user that will run the server process. This user will own all the files required by the docs.rs process. This user will need to be able to run lxc-attach through sudo to be able to run docs builds, so give it a sudoers file at the same time:

# adduser --disabled-login --disabled-password --gecos "" cratesfyi
# echo 'cratesfyi  ALL=(ALL) NOPASSWD: /usr/bin/lxc-attach' > /etc/sudoers.d/cratesfyi

(The name cratesfyi is a historical one: Before the site was called “docs.rs”, it was called “crates.fyi” instead. If you want to update the name of the user, feel free! Just be aware that the name cratesfyi will be used throughout this document.)

The “prefix” directory

In addition to the LXC container, docs.rs also stores several related files in a “prefix” directory. This directory can be stored anywhere, but the cratesfyi user needs to be able to access it:

# mkdir /cratesfyi-prefix
# chown cratesfyi:cratesfyi /cratesfyi-prefix

Now we can set up some required folders. To make sure they all have proper ownership, run all of the commands as cratesfyi:

$ sudo -u cratesfyi mkdir -vp /cratesfyi-prefix/documentations /cratesfyi-prefix/public_html /cratesfyi-prefix/sources
$ sudo -u cratesfyi git clone https://github.com/rust-lang/crates.io-index.git /cratesfyi-prefix/crates.io-index
$ sudo -u cratesfyi git --git-dir=/cratesfyi-prefix/crates.io-index/.git branch crates-index-diff_last-seen

(That last command is used to set up the crates-index-diff crate, so we can start monitoring new crate releases.)

LXC container

To help contain what crates’ build scripts can access, documentation builds run inside an LXC container. To create one inside the prefix directory:

# LANG=C lxc-create -n cratesfyi-container -P /cratesfyi-prefix -t download -- --dist ubuntu --release bionic --arch amd64
# ln -s /cratesfyi-prefix/cratesfyi-container /var/lib/lxc
# chmod 755 /cratesfyi-prefix/cratesfyi-container
# chmod 755 /var/lib/lxc

(To make deployment simpler, it’s important that the OS the container is using is the same as the host! In this case, the host is assumed to be running 64-bit Ubuntu 18.04. If you make the container use a different release or distribution, you’ll need to build docs.rs separately inside the container when deploying.)

You’ll also need to configure networking for the container. The following is a sample /etc/default/lxc-net that enables NAT networking for the container:

USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
LXC_DHCP_CONFILE=""
LXC_DOMAIN=""

In addition, you’ll need to set the container’s configuration to use this. Add the following lines to /cratesfyi-prefix/cratesfyi-container/config:

lxc.net.0.type = veth
lxc.net.0.link = lxcbr0

Now you can reload the LXC network configuration, start up the container, and set it up to auto-start when the host boots:

# systemctl restart lxc-net
# systemctl enable lxc@cratesfyi-container.service
# systemctl start lxc@cratesfyi-container.service

Now we need to do some setup inside this container. You can either copy all these commands so that each one attaches on its own, or you can run lxc-console -n cratesfyi-container to open a root shell inside the container and skip the lxc-attach prefix.

# lxc-attach -n cratesfyi-container -- apt update
# lxc-attach -n cratesfyi-container -- apt upgrade
# lxc-attach -n cratesfyi-container -- apt install curl ca-certificates binutils gcc libc6-dev libmagic1 pkg-config build-essential

Inside the container, we also need to set up a cratesfyi user, and install Rust for it. In addition to the base Rust installation, we also need to install all the default targets so that we can build docs for all the Tier 1 platforms. The Rust compiler installed inside the container is the one that builds all the docs, so if you want to use a new Rustdoc feature, this is the compiler to update.

lxc-attach -n cratesfyi-container -- adduser --disabled-login --disabled-password --gecos "" cratesfyi
lxc-attach -n cratesfyi-container -- su - cratesfyi -c "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain nightly"
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add i686-apple-darwin'
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add i686-pc-windows-msvc'
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add i686-unknown-linux-gnu'
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add x86_64-apple-darwin'
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add x86_64-pc-windows-msvc'

Now that we have Rust installed inside the container, we can use a trick to give the cratesfyi user on the host the same Rust compiler as the container. By symlinking the following directories into its user directory, we don’t need to track a third toolchain.

for directory in .cargo .rustup .multirust; do
  [[ -h /home/cratesfyi/$directory ]] || \
    sudo -u cratesfyi ln -vs /var/lib/lxc/cratesfyi-container/rootfs/home/cratesfyi/$directory /home/cratesfyi/
done

Environment for the cratesfyi user

To ensure that the docs.rs server is configured properly, we need to set a few environment variables. The primary ones are going into a separate environment file, so we can load them into the systemd service that will manage the server.

Write the following into /home/cratesfyi/.cratesfyi.env. If you have a GitHub access token that the site can use to collect repository information, add it here, but otherwise leave it blank. The variables need to exist, but they can be blank to skip that collection.

CRATESFYI_PREFIX=/cratesfyi-prefix
CRATESFYI_DATABASE_URL=postgresql://cratesfyi:password@localhost
CRATESFYI_CONTAINER_NAME=cratesfyi-container
CRATESFYI_GITHUB_USERNAME=
CRATESFYI_GITHUB_ACCESSTOKEN=
RUST_LOG=cratesfyi

Now add the following to /home/cratesfyi/.profile:

export $(cat $HOME/.cratesfyi.env | xargs -d '\n')
export PATH="$HOME/.cargo/bin:$PATH"
export PATH="$PATH:$HOME/docs.rs/target/release"

Docs.rs build

Now we can actually clone and build the docs.rs source! The location of it doesn’t matter much, but again, we want it to be owned by cratesfyi so it can build and run the final executable. In addition, we copy the built cratesfyi binary into the container so that it can be used to arrange builds on the inside.

sudo -u cratesfyi git clone https://github.com/rust-lang-nursery/docs.rs.git ~cratesfyi/docs.rs
sudo su - cratesfyi -c 'cd ~/docs.rs && cargo build --release'
cp -v /home/cratesfyi/docs.rs/target/release/cratesfyi /var/lib/lxc/cratesfyi-container/rootfs/usr/local/bin

PostgreSQL

Now that we have the repository built, we can use it to set up the database. Docs.rs uses a Postgres database to store information about crates and their documentation. To set one up, we first need to ask Postgres to create the database, and then run the docs.rs command to create the initial tables and content:

sudo -u postgres sh -c "psql -c \"CREATE USER cratesfyi WITH PASSWORD 'password';\""
sudo -u postgres sh -c "psql -c \"CREATE DATABASE cratesfyi OWNER cratesfyi;\""
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- database init"
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build add-essential-files"
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build crate rand 0.5.5"
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- database update-search-index"
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- database update-release-activity"

Server configuration

We’re almost there! At this point, we’ve got all the pieces in place to run the site. Now we can set up a systemd service that will run the daemon that will collect crate information, orchestrate builds, and serve the website. The following systemd service file can be placed in /etc/systemd/system/cratesfyi.service:

[Unit]
Description=Cratesfyi daemon
After=network.target postgresql.service

[Service]
User=cratesfyi
Group=cratesfyi
Type=forking
PIDFile=/cratesfyi-prefix/cratesfyi.pid
EnvironmentFile=/home/cratesfyi/.cratesfyi.env
ExecStart=/home/cratesfyi/docs.rs/target/release/cratesfyi daemon
WorkingDirectory=/home/cratesfyi/docs.rs

[Install]
WantedBy=multi-user.target

Enabling and running that will serve the website on http://localhost:3000, so if you want to route public traffic to it, you’ll need to set up something like nginx to proxy the connections to it.
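
As a minimal sketch, enabling and starting the service follows the usual systemd flow (run as root; daemon-reload is only needed after adding or editing the unit file):

# systemctl daemon-reload
# systemctl enable cratesfyi.service
# systemctl start cratesfyi.service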

Updating Rust

If you want to update the Rust compiler used to build crates (and the Rustdoc that comes with it), you need to make sure you don’t interrupt any existing crate builds. The daemon waits for 60 seconds between checking for new crates, so you need to make sure you catch it during that window. Since we hooked the daemon into systemd, the logs will be available in its journal. Running journalctl -efu cratesfyi (it may need to be run as root if nothing appears) will show the latest log output and show new entries as they appear. You’re looking for a message like “Finished building new crates, going back to sleep” or “Queue is empty, going back to sleep”, which indicates that the crate-building thread is waiting.

To prevent the queue from building more crates, run the following:

sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build lock"

This will create a lock file in the prefix directory that will prevent more crates from being built. At this point, you can update the rustc inside the container and add the rustdoc static files to the database:

lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup update'
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build add-essential-files"

Once this is done, you can unlock the queue to allow crates to build again:

sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build unlock"

And we’re done! New crates will start being built with the new rustc. If you want to rebuild any existing docs with the new rustdoc, you need to manually build them - there’s no automated way to rebuild failed docs or docs from a certain rust version yet.

Updating docs.rs

To update the code for docs.rs itself, you can follow a similar approach. First, watch the logs so you can stop the daemon from building more crates. (You can replace the lock command with a systemctl stop cratesfyi if you don’t mind the web server being down while you build.)

# journalctl -efu cratesfyi
(wait for build daemon to sleep)
$ sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build lock"

Once the daemon has stopped, you can start updating the code and rebuilding:

$ sudo su - cratesfyi -c "cd ~/docs.rs && git pull"
$ sudo su - cratesfyi -c "cd ~/docs.rs && cargo build --release"

Now that we have a shiny new build, we need to make sure the service is using it:

# cp -v /home/cratesfyi/docs.rs/target/release/cratesfyi /var/lib/lxc/cratesfyi-container/rootfs/usr/local/bin
# systemctl restart cratesfyi

Next, we can unlock the builder so it can start checking new crates:

$ sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build unlock"

And we’re done! Changes to the site or the build behavior should be visible now.

Common maintenance procedures

Temporarily remove a crate from the queue

It might happen that a crate fails to build repeatedly due to a docs.rs bug, clogging up the queue and preventing other crates from building. In this case it’s possible to temporarily remove the crate from the queue until the docs.rs bug is fixed. To do that, log into the machine and open a PostgreSQL shell with:

$ psql

Then you can run this SQL query to remove the crate:

UPDATE queue SET attempt = 100 WHERE name = '<CRATE_NAME>';

To add the crate back to the queue, run this query in the PostgreSQL shell:

UPDATE queue SET attempt = 0 WHERE name = '<CRATE_NAME>';

Pinning a version of nightly

Sometimes the latest nightly might be broken, causing doc builds to fail. In those cases it’s possible to tell docs.rs to stop updating to the latest nightly and instead pin a specific release. To do that you need to edit the /home/cratesfyi/.docs-rs-env file, adding or changing this environment variable:

CRATESFYI_TOOLCHAIN=nightly-YYYY-MM-DD

Once the file has been changed, docs.rs needs to be restarted:

systemctl restart docs.rs

To return to the latest nightly, simply remove the environment variable and restart docs.rs again.

Rebuild a specific crate

If a bug was recently fixed, you may want to rebuild a crate so that it builds with the latest version. From the docs.rs machine:

cratesfyi queue add <crate> <version>

This will add the crate with a lower priority than new crates by default; you can change the priority with the -p option.
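
For example, to queue a rebuild ahead of the default, something like the following should work. The priority value here is an assumption: going by the de-prioritization example later in this chapter (where a larger number lowers priority), smaller values are built sooner, so 0 should roughly match newly published crates.

cratesfyi queue add <crate> <version> -p 0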

Raise the limits for a specific crate

Occasionally crate authors will ask for their crate’s build limits to be raised. You can raise them from the docs.rs machine with psql.

Raising a memory limit to 8 GB:

# memory is measured in bytes
cratesfyi=> INSERT INTO sandbox_overrides (crate_name, max_memory_bytes)
  VALUES ('crate name', 8589934592);

Raising a timeout to 15 minutes:

cratesfyi=> INSERT INTO sandbox_overrides (crate_name, timeout_seconds)
  VALUES ('crate name', 900);

Raising limits for multiple crates at once:

cratesfyi=> INSERT INTO sandbox_overrides (crate_name, max_memory_bytes)
  VALUES ('stm32f4', 8589934592), ('stm32h7', 8589934592), ('stm32g4', 8589934592);

Set a group of crates to be automatically de-prioritized

When many crates from the same project are published at once, they take up a lot of space in the queue. You can de-prioritize groups of crates at once like this:

cratesfyi=> INSERT INTO crate_priorities (pattern, priority)
  VALUES ('group-%', 1);

The pattern should be a LIKE pattern as documented on https://www.postgresql.org/docs/current/functions-matching.html.

Note that this only sets the default priority for crates with that name. If there are crates already in the queue, you’ll have to update those manually:

cratesfyi=> UPDATE queue SET priority = 1 WHERE name LIKE 'group-%';
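
To double-check what is currently queued for such a group and at what priority, a query along these lines should work; the column names are taken from the queries above, so verify against the actual schema before relying on anything else:

cratesfyi=> SELECT name, priority, attempt FROM queue WHERE name LIKE 'group-%';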

Adding all crates that failed after a date back into the queue

After an outage you might want to add all the failed builds back to the queue. To do that, log into the machine and open a PostgreSQL shell with:

psql

Then you can run this SQL query to add back to the queue all the crates that failed after YYYY-MM-DD HH:MM:SS:

UPDATE queue SET attempt = 0 WHERE attempt >= 5 AND build_time > 'YYYY-MM-DD HH:MM:SS';

Removing a crate from the website

Sometimes it might be necessary to remove all the content related to a crate from docs.rs (for example after receiving a DMCA takedown notice). To do that, log into the server and run:

cratesfyi database delete-crate CRATE_NAME

The command will remove all the data from the database, and then remove the files from S3.

Blacklisting crates

Occasionally it might be necessary to prevent a crate from being built on docs.rs, for example if we can’t legally host its content. To add a crate to the blacklist, preventing new builds for it, you can run:

cratesfyi database blacklist add <CRATE_NAME>

Other operations (such as list and remove) are also supported.
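
For instance, invocations along these lines should cover the operations mentioned above; the exact subcommand names are inferred from that description, so check the command’s help output if they differ:

cratesfyi database blacklist list
cratesfyi database blacklist remove <CRATE_NAME>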

Warning: blacklisting a crate doesn’t remove existing content from the website; it just prevents new versions from being built!

Governance

Leadership Council

The Leadership Council is a representative group of the teams within the Rust Project, tasked with coordinating between teams and ensuring the successful operation of the Rust Project.

The policies governing the Leadership Council are specified in the Leadership Council chapter.

Moderation

The Moderation team is responsible for dealing with violations of the Rust Code of Conduct.

The policies governing the Moderation team are specified in the Moderation chapter.

Leadership Council

This document defines the authority1 and policies of the Rust Leadership Council (“Council”) to ensure successful operation of the Rust Project.

This document serves as a living document defining the current accepted set of policies governing the Council. It started from the text of RFC 3392, which established the Council, and may be updated via the RFC process.

The Council delegates much of this authority to teams (which includes subteams, working groups, etc.2) who autonomously make decisions concerning their purviews. However, the Council retains some decision-making authority, outlined and delimited by this document.

The Council maintains a separate home site at https://github.com/rust-lang/leadership-council where they document their internal processes, and coordinate their work.

The Council is composed of representatives delegated to the Council from each top-level team.

The Council is charged with the success of the Rust Project as a whole. The Council identifies work that needs to be done but does not yet have a clear owner, creates new teams to accomplish this work, holds existing teams accountable for the work in their purview, and coordinates and adjusts the organizational structure of Project teams.

Motivation

The Rust project consists of hundreds of globally distributed people, organized into teams with various purviews. However, a great deal of work falls outside the purview of any established team, and still needs to get done.

The Council focuses on identifying and prioritizing work outside of team purviews. The Council primarily delegates that work, rather than doing that work itself. The Council can also serve as a coordination, organization, and accountability body between teams, such as for cross-team efforts, roadmaps, and the long-term success of the Project.

Duties, expectations, and constraints on the Council

At a high level, the Council is only in charge of the following duties:

  • Identifying, prioritizing, and tracking work that goes undone due to lack of clear ownership (and not due to the owners’ explicit de-prioritization, placement in a backlog, etc.).
  • Delegating this work, potentially establishing new (and possibly temporary) teams to own this work.
  • Making decisions on urgent matters that do not have a clear owner.
    • This should only be done in exceptional circumstances where the decision cannot be delegated either to existing teams or to newly created ones.
  • Coordinating Project-wide changes to teams, structures, or processes.
  • Ensuring top-level teams are accountable to their purviews, to other teams, and to the Project.
  • Ensuring where possible that teams have the people and resources they need to accomplish their work.
  • Establishing the official position, opinion, or will of the Rust Project as a whole.
    • This helps reduce the need for Project-wide coordination, especially when a long public polling and consensus-building process is not practical - for example, when communicating with third parties who require some understanding of what the Rust Project as a whole “wants”.

In addition to these duties, the Council has additional expectations and constraints, to help determine if the Council is functioning properly:

  • Delegate work: The Council should not take on work beyond what this document explicitly assigns to it; it must delegate to existing or new teams distinct from the Council. Such teams may include Council representatives, but such membership is not part of the duties of a Council representative.
  • Ensure the Project runs smoothly in the long term: The Council should ensure that non-urgent Project management work is prioritized and completed with enough regularity that the Project does not accumulate organizational debt.
  • Be Accountable: As the Council wields broad power, the Council and Council representatives must be accountable for their actions. They should listen to others’ feedback, and actively reflect on whether they continue to meet the duties and expectations of the position they hold.
  • Be representational: Council representatives should not only represent the breadth of Project concerns but also the diversity of the Rust community in as many aspects as possible (demographics, technical background, etc).
  • Share burden: All Council representatives must share the burden of Council duties.
  • Respect others’ purviews: The Council must respect the purviews delegated to teams. The Council should consult with and work together with teams on solutions to issues, and should almost never make decisions that go against the wishes of any given team.
  • Act in good faith: Council representatives should make decisions in the best interest of the Rust Project as a whole even if those decisions come into conflict with their individual teams, their employers, or other outside interests.
  • Be transparent: While not all decisions (or all aspects of a decision) can be made public, the Council should be as open and transparent about their decision-making as possible. The Council should also ensure the organizational structure of the Project is clear and transparent.
  • Respect privacy: The Council must never compromise personal or confidential information for the sake of transparency, including adjacent information that could unintentionally disclose privileged information.
  • Foster a healthy working environment: The Council representatives should all feel satisfied with the amount and nature of their contribution. They should not feel that their presence on the Council is merely out of obligation but rather because they are actively participating in a meaningful way.
  • Evolve: The Council is expected to evolve over time to meet the evolving needs of teams, the Project, and the community.

Council representatives, moderation team members, and other Project members serve as examples for those around them and the broader community. All of these roles represent positions of responsibility and leadership; their actions carry weight and can exert great force within the community, and should be wielded with due care. People choosing to serve in these roles should thus recognize that those around them will hold them to a correspondingly high standard.

Structure of the Council

The Council consists of a set of team representatives, each representing one top-level team and its subteams.

Each top-level team designates exactly one representative, by a process of their choice.

Any member of the top-level team or a member of any of their subteams is eligible to be the representative. Teams should provide members of their subteams with an opportunity for input and feedback on potential candidates.

Each representative represents at most one top-level team, even if they’re also a member of other teams. The primary responsibility of representing any Rust team falls to the representative of the top-level team they fall under.3

All teams in the Rust Project must ultimately fall under at least one top-level team. The Launching Pad team serves as a temporary home for teams that do not currently have a parent team. This ensures that all teams have representation on the Council.

Top-level teams

The Council establishes top-level teams via public policy decisions. In general, top-level teams should meet the following criteria:

  • Have a purview that is foundational to the Rust Project
  • Be the ultimate decision-makers on all aspects of that purview
  • Have a purview that is not a subset of another team’s purview (that is, it must not be a subteam or similar governance structure)
  • Have an open-ended purview that’s expected to continue indefinitely
  • Be a currently active part of the Rust Project

There must be between 4 and 9 top-level teams (inclusive), preferably between 5 and 8. This number balances the desire for a diverse and relatively shallow structure while still being practical for productive conversation and consent.4

When the Council creates a new top-level team, that team then designates a Council representative.5 When creating a new top-level team, the Council must provide justification for why it should not be a subteam or other governance structure.

The set of top-level teams is:

  • Compiler
  • Dev tools
  • Infrastructure
  • Language
  • Launching Pad
  • Library
  • Moderation

The Launching Pad top-level team

The Launching Pad team temporarily accepts subteams that otherwise do not have a top-level team to slot underneath. This ensures that all teams have representation on the Council, while more permanent parent teams are found or established.

The Launching Pad team is an umbrella team: it has no direct members, only subteam representatives.

The Council should work to find or create a more appropriate parent for each subteam of the Launching Pad, and subsequently move those subteams to their new parent team.

In some cases, an appropriate parent team may exist but not yet be ready to accept subteams; the Launching Pad can serve as an interim home in such cases.

The Launching Pad also serves as a default home for subteams of a team that’s removed or reorganized away, if that removal or reorganization does not explicitly place those subteams somewhere else in the organization.

The Council must review subteam membership in the Launching Pad every 6 months to ensure that proper progress is being made on finding all subteams new parent teams. As with other top-level teams, the Launching Pad team can be retired (and have its representation within the Council removed) if the Council finds it to be no longer necessary. The process for retiring the Launching Pad team is the same as with other top-level teams. Alternatively, the Council is free to give the Launching Pad team its own purview.

Removing top-level teams

Any decision to remove a team’s top-level designation (or otherwise affect eligibility for the Council) requires the consent of all Council representatives, with the exception of the representative of the top-level team being removed. Despite this caveat, the representative of the team under consideration must be invited to Council deliberations concerning the team’s removal, and the Council should only remove a team over their objections in extreme cases.

The Council cannot remove the moderation team. The Council cannot change the moderation team’s purview without the agreement of the moderation team.

Alternates and forgoing representation

A representative may end their term early if necessary, such as due to changes in their availability or circumstances. The respective top-level team must then begin selecting a new representative. The role of representative is a volunteer position. No one is obligated to fill that role, and no team is permitted to make serving as a representative a necessary obligation of membership in a team. However, a representative is obligated to fulfill the duties of the position of representative, or resign that position.

A top-level team may decide to temporarily relinquish their representation, such as if the team is temporarily understaffed and they have no willing representative. However, if the team does not designate a Council representative, they forgo their right to actively participate in decision-making at a Project-wide level. All Council procedures including decision-making should not be blocked due to this omission. The Council is still obligated to consider new information and objections from all Project members. However, the Council is not obligated to block decisions to specially consider or collate a non-represented team’s feedback.

Sending a representative to the Council is considered a duty of a top-level team, and not being able to regularly do so means the team is not fulfilling its duties. However, a Council representative does not relinquish their role in cases of short absence due to temporary illness, vacation, etc.

A top-level team can designate an alternate representative to serve in the event their primary representative is unavailable. This alternate assumes the full role of Council representative until the return of the primary representative. Alternate representatives do not regularly attend meetings when the primary representative is present (to avoid doubling the number of attendees).

If a team’s representative and any alternates fail to participate in any Council proceedings for 3 consecutive weeks, the team’s representative ceases to count towards the decision-making quorum requirements of the Council until the team can provide a representative able to participate. The Council must notify the team of this before it takes effect. If a team wishes to ensure the Council does not make decisions without their input or without an ability for objections to be made on their behalf, they should ensure they have an alternate representative available.

A top-level team may change their representative before the end of their term, if necessary. However, as maintaining continuity incurs overhead, teams should avoid changing their representatives more than necessary. Teams have the primary responsibility for briefing their representative and alternates on team-specific issues or positions they wish to handle on an ongoing basis. The Council and team share the responsibilities of maintaining continuity for ongoing issues within the Council, and of providing context to alternates and other new representatives.

For private matters, the Council should exercise discretion on informing alternates, to avoid spreading private information unnecessarily; the Council can brief alternates if they need to step in.

Term limits

Council representatives’ terms are one year in length. Each representative has a soft limit of three consecutive full terms for any given representative delegation (the delegation from a particular top-level team). A representative may exceed this soft limit if and only if the Council receives explicit confirmation from the respective team that they are unable to produce a different team member as a representative (for example, due to lack of a willing alternative candidate, or due to team members having blocking objections to any other candidate).

Beyond this, there is no hard limit on the number of terms a representative can serve for other top-level teams or non-consecutive terms for a single top-level team. Teams should strive for a balance between continuity of experience and rotating representatives to provide multiple people with such experience.6

Half of the representative appointments shall happen at the end of March while half shall happen at the end of September. This avoids changing all Council representatives at the same time. For the initial Council, and anytime the set of top-level teams is changed, the Council and top-level teams should work together to keep term end-dates roughly evenly divided between March and September. However, each term should last for a minimum of 6 months (temporary imbalance is acceptable to avoid excessively short terms).

If the Council and top-level teams cannot agree on appropriate term end-date changes, representatives are randomly assigned to one or the other end date (at least 6 months out) to maintain balance.

Limits on representatives from a single company/entity

Council representatives must not disproportionately come from any one company, legal entity, or closely related set of legal entities, to avoid impropriety or the appearance of impropriety. If the Council has 5 or fewer representatives, no more than 1 representative may have any given affiliation; if the Council has 6 or more representatives, no more than 2 representatives may have any given affiliation.

Closely related legal entities include branches/divisions/subsidiaries of the same entity, entities connected through substantial ownership interests, or similar. The Council may make a judgment call in unusual cases, taking care to avoid conflicts of interest in that decision.

A Council representative is affiliated with a company or other legal entity if they derive a substantive fraction of their income from that entity (such as from an employer, client, or major sponsor). Representatives must promptly disclose changes in their affiliations.

If this constraint does not hold, whether by a representative changing affiliation, top-level teams appointing new representatives, or the Council size changing, restore the constraint as follows:

  • Representatives with the same affiliation may first attempt to resolve the issue amongst themselves, such that a representative voluntarily steps down and their team appoints someone else.
    • This must be a decision by the representative, not their affiliated entity; it is considered improper for the affiliated entity to influence this decision.
    • Representatives have equal standing in such a discussion; factors such as seniority in the Project or the Council must not be used to pressure people.
  • If the representatives with that affiliation cannot agree, one such representative is removed at random. (If the constraint still does not hold, the remaining representatives may again attempt to resolve the issue amongst themselves before repeating this.) This is likely to produce suboptimal results; a voluntary solution will typically be preferable.
  • While a team should immediately begin the process of selecting a successor, the team’s existing representative may continue to serve up to 3 months of their remaining term.
  • The existing representative should coordinate the transition with the incoming representative but it is the team’s choice which one is an actual representative during the up to 3 month window. There is only ever one representative from the top-level team.

Candidate criteria

The following are criteria for deciding ideal candidates. These are similar to but not the same as the criteria for an effective team lead or co-lead. While a team lead might also make a good Council representative, serving as a team lead and serving as a Council representative both require a substantial time investment, which likely motivates dividing those roles among different people. The criteria are not hard requirements but can be used for determining who is best positioned to be a team’s representative. In short, the representative should have:

  • sufficient time and energy to dedicate to the needs of the Council.
  • an interest in helping with the topics of Project operations and Project governance.
  • broad awareness of the needs of the Project outside of their teams or areas of active contribution.
  • a keen sense of the needs of their team.
  • the temperament and ability to represent and center the needs of others above any personal agenda.
  • ability and willingness to represent all viewpoints from their team, not just a subset, and not just those they agree with.

While some teams may not currently have an abundance of candidates who fit these criteria, the Council should actively foster such skills within the larger Project, as these are helpful not only for Council membership but across the entire Project.

Credentials

The Council does not have privileged access to administrative credentials for the project. This access solely resides with the infrastructure team7. The infrastructure team’s responsibilities include ensuring teams have the tools and access needed to do their work effectively, while balancing against security and maintainability of our infrastructure. The Council can help coordinate which teams should have access through policy.

Relationship to the Rust Foundation

The Council is responsible for establishing the process for selecting Project directors. The Project directors are the mechanism by which the Rust Project’s interests are reflected on the Rust Foundation board.

The Council delegates a purview to the Project directors to represent the Project’s interests on the Foundation Board and to make certain decisions on Foundation-related matters. The exact boundaries of that purview are not yet specified.

The Council’s decision-making process

The Council makes decisions of two different types: operational decisions and policy decisions. Certain considerations may be placed on a given decision depending on its classification. However, by default, the Council uses a consent decision-making process for all decisions regardless of classification.

Operational vs policy decisions

Operational decisions are made on a daily basis by the Council to carry out their aims, including regular actions taking place outside of meetings (based on established policy). Policy decisions provide general reusable patterns or frameworks, meant to frame, guide, and support operations. In particular, policy decisions can provide partial automation for operational decisions or other aspects of operations. The Council defaults to the consent decision-making process for all decisions unless otherwise specified.

It is not defined precisely which decisions are operational versus policy; rather, they fall somewhere along a continuum. The purpose of this distinction is not to direct or constrain the Council’s decision-making procedures. Instead, this distinction provides guidance to the Council, and clarifies how the Council intends to record, review, and refine its decisions over time. For the purposes of any requirements or guidance associated with the operational/policy classification, anything not labeled as either operational or policy in this or future policy defaults to policy.

Repetition and exceptions

Policy decisions often systematically address what might otherwise require repeated operational decisions. The Council should strive to recognize when repeated operational decisions indicate the need for a policy decision, or a policy change. In particular, the Council should avoid allowing repeated operational decisions to constitute de facto policy.

Exceptions to existing policy cannot be made via an operational decision unless such exceptions are explicitly allowed in said policy. Avoiding ad-hoc exceptions helps avoid “normalization of deviance”.

Consent means that no representative’s requirements (and thus those of the top-level team and subteams they represent) can be disregarded. The Council hears all relevant input and sets a good foundation for working together equitably with all voices weighted equally.

The Council uses consent decision-making where instead of being asked “do you agree?”, representatives are asked “do you object?”. This eliminates “pocket vetoes” where people have fully reviewed a proposal but decide against approving it without giving clear feedback as to the reason. Concerns, feedback, preferences, and other less critical forms of feedback do not prevent making a decision, but should still be considered for incorporation earlier in drafting and discussion. Objections, representing an unmet requirement or need, must be considered and resolved to proceed with a decision.

Approval criteria

The consent decision-making process has the following approval criteria:

  • Posting the proposal in one of the Council’s designated communication spaces (a meeting or a specific channel).
  • Having confirmation that at least N-2 Council representatives (where N is the total number of Council representatives) have fully reviewed the final proposal and give their consent.
  • Having no outstanding explicit objections from any Council representative.
  • Providing a minimum 10 days for feedback.

The approval criteria provide a quorum mechanism, as well as sufficient time for representatives to have seen the proposal. Allowing for two non-signoffs is an acknowledgement of the volunteer nature of the Project, based on experience balancing the speed of decisions with the amount of confirmation needed for consent and non-objection; this assumes that those representatives have had time to object if they wished to do so. (This is modeled after the process used today for approval of RFCs.)

The decision-making process can end at any time if the representative proposing it decides to retract their proposal. Another representative can always adopt a proposal to keep it alive.

If conflicts of interest result in the Council being unable to meet the N-2 quorum for a decision, the Council cannot make that decision unless it follows the process documented in the “Conflicts of interest” section for how a decision may proceed with conflicts documented. In such a case, the Council should consider appropriate processes and policies to avoid future recurrences of a similar conflict.

Modifying and tuning the decision-making process

Using the public policy process, the Council can establish different decision-making processes for classes of decisions.

When deciding on which decision-making process to adopt for a particular class of decision, the Council balances the need for quick decisions with the importance of confidence in full alignment. Consent decision-making processes fall on the following spectrum:

  • Consensus decision making (prioritizes confidence in full alignment at the expense of quick decision making): team members must review and prefer the proposal over all others, any team members may raise a blocking objection
  • Consent decision making (default for the Council, balances quick decisions and confidence in alignment): team members must review and may raise a blocking objection
  • One second and no objections (prioritizes quick decision making at the expense of confidence in alignment): one team member must review and support, any team member may raise a blocking objection

Any policy that defines decision-making processes must at a minimum address where the proposal may be posted, quorum requirements, number of reviews required, and minimum time delay for feedback. A lack of objections is part of the approval criteria for all decision-making processes.

If conflicts of interest prevent more than a third of the Council from participating in a decision, the Council cannot make that decision unless it follows the process documented in the “Conflicts of interest” section for how a decision may proceed with conflicts documented. (This is true regardless of any other quorum requirements for the decision-making process in use.) In such a case, the Council should consider appropriate processes and policies to avoid future recurrences of a similar conflict.

The Council may also delegate subsets of its own decision-making purviews via a public policy decision, to teams, other governance structures, or roles created and filled by the Council, such as operational lead, meeting facilitator, or scribe/secretary.

Note that the Council may delegate the drafting of a proposal without necessarily delegating the decision to approve that proposal. This may be necessary in cases of Project-wide policy that intersects the purviews of many teams, or falls outside the purview of any team. This may also help when bootstrapping a new team incrementally.

Agenda and backlog

The Council’s agenda and backlog are the primary interface through which the Council tracks and gives progress updates on issues raised by Project members throughout the Project.

To aid in the fairness and effectiveness of the agenda and backlog, the Council must:

  • Use a tool that allows Project members to submit requests to the Council and to receive updates on those requests.
  • Use a transparent and inclusive process for deciding on the priorities and goals for the upcoming period. This must involve regular check-ins and feedback from all representatives.
  • Strive to maintain a balance between long-term strategic goals and short-term needs in the backlog and on the agenda.
  • Be flexible and adaptable and be willing to adjust the backlog and agenda as needed in response to changing circumstances or priorities.
  • Regularly review and update the backlog to ensure that it accurately reflects the current priorities and goals of the Council.
  • Follow a clear and consistent process for moving items from the backlog to the agenda, such as delegating responsibility to roles (e.g. meeting facilitator and scribe), and consenting to the agenda at the start of meetings. Any agenda items rejected during the consent process must have their objections documented in the published meeting minutes of the Council.

Deadlock resolution

In some situations the Council might need to make a decision urgently and not feel it can construct a proposal in that time that everyone will consent to. In such cases, if everyone agrees that a timely decision they disagree with would be a better outcome than no timely decision at all, the Council may use an alternative decision-making method to attempt to resolve the deadlock. The alternative process is informal, and the Council members must still re-affirm their consent to the outcome through the existing decision-making process. Council members may still raise objections at any time.

For example, the Council can consent to a vote, then once the vote is complete all of the Council members would consent to whatever decision the vote arrived at. The Council should strive to document the perceived advantages and disadvantages for choosing a particular alternative decision-making model.

There is, by design, no mandatory mechanism for deadlock resolution. If the representatives do not all consent to making a decision even if they don’t prefer the outcome of that decision, or if any representative feels it is still possible to produce a proposal that will garner the Council’s consent, they may always maintain their objections.

If a representative withdraws an objection, or consents to a decision they do not fully agree with (whether as a result of an alternative decision-making process or otherwise), the Council should schedule an evaluation or consider shortening the time until an already scheduled evaluation, and should establish a means of measuring/evaluating the concerns voiced. The results of this review are intended to determine whether the Council should consider changing its prior decision.

Feedback and evaluation

All policy decisions should have an evaluation date as part of the policy. Initial evaluation periods should be shorter in duration than subsequent evaluation periods. The length of evaluation periods should be adjusted based on the needs of the situation. Policies that seem to be working well and require few changes should be extended so less time is spent on unnecessary reviews. Policies that have been recently adjusted or called into question should have shortened evaluation periods to ensure they’re iterating towards stability more quickly. The Council should establish standardized periods for classes of policy to use as defaults when determining periods for new policy. For instance, roles could have an evaluation date of 3 months initially then 1 year thereafter, while general policy could default to 6 months initially and 2 years thereafter.

  • New policy decisions can always modify or replace existing policies.
  • Policy decisions must be published in a central location, with version history.
  • Modifications to the active policy docs should include or link to relevant context for the policy decision, rather than expecting people to find that context later.

Transparency and oversight for decision making

Decisions made by the Council will necessarily require varying levels of transparency and oversight based on the kind of decision being made. This section gives guidance on how the Council will seek oversight for its decisions, and what qualifies decisions to be made in private or in public.

This RFC places certain decisions into each category. All decisions not specifically enumerated must use the public policy process. The Council may evolve the categorization through the public policy process.

Decisions made by the Council fall into one of three categories, based on the level of oversight possible and necessary:

  • Decisions that the Council may make internally
  • Decisions that the Council must necessarily make privately
  • Decisions that the Council must make via public proposal

Decisions that the Council may make internally

Some types of operational decisions can be made internally by the Council, with the provision that the Council has a mechanism for community feedback on the decision after it has been made.

Adding a new decision to the list of decisions the Council can make internally requires a public policy decision. Any decisions that impact the structure, decision-makers, or oversight of the Council itself should not be added to this list.

The Council should also strive to avoid establishing de facto unwritten policy via repeated internal decisions in an effort to avoid public proposal. See “Repetition and exceptions” for more details.

This list exhaustively enumerates the set of decisions that the Council may make internally:

  • Deciding to start a process that itself will play out in public (e.g. “let’s start developing and posting the survey”, “let’s draft an RFC for this future public decision”).
  • Expressing and communicating an official position statement of the Rust Project.
  • Expressing and communicating the position of the Rust Project directly to another entity, such as the Rust Foundation.
  • Communicating via Rust Project communication resources (via the blog or all@).
  • Making most operational decisions about the Council’s own internal processes, including how the Council coordinates, the platforms it uses to communicate, where and when it meets, templates used for making and recording decisions (subject to requirements elsewhere in this document).
  • Appointing officers or temporary roles within the Council, for purposes such as leading/facilitating meetings, recording and publishing minutes, obtaining and collating feedback from various parties, etc.8 Note that any such roles (titles, duties, and current holders) must be publicly disclosed and documented.
  • Inviting specific attendees other than Council representatives to specific Council meetings or discussions, or holding a meeting open to the broader community. (In particular, the Council is encouraged to invite stakeholders of a particular decision to meetings or discussions where said decision is to be discussed.)
  • Making decisions requested by one or more teams that would be within the normal purviews of those teams to make without a public proposal. (Note that teams can ask for Council input without requesting a Council decision.)
  • Making one-off judgment calls in areas where the purviews of teams overlap or are ambiguous (though changing the purviews of those teams must be a public policy decision).
  • Any decision that this document or future Council policy specifies as an operational decision.

See the accountability section for details on the feedback mechanism for Council decisions.

Decisions that the Council must necessarily make privately

Some decisions necessarily involve private details of individuals or other entities, and making these details public would have a negative impact both on those individuals or entities (e.g. safety) and on the Project (eroding trust).

This additional constraint should be considered an exceptional case. This does not permit making decisions that would require a public proposal per the next section. However, this does permit decisions that the Council makes internally to be kept private, without full information provided for public oversight.

The Council may also decline to make a decision privately, such as if the Council considers the matter outside their purview (and chooses to defer to another team) or believes the matter should be handled publicly. However, even in such a case, the Council still cannot publicly reveal information shared with it in confidence (since otherwise the Council would not be trusted to receive such information). Obvious exceptions exist for imminent threats to safety.

Private decisions must not establish policy. The Council should also strive to avoid establishing de facto unwritten policy via repeated private decisions in an effort to avoid public proposal. See “Repetition and exceptions” for more details.

This list exhaustively enumerates the set of decisions that the Council may make either partly or entirely in private:

  • Determining relationships with new industry / Open Source initiatives that require confidentiality before launching.
  • Discussing the personal aspects of a dispute between teams that involves some interpersonal dynamics/conflicts.
  • Participating in contract negotiations on behalf of the Project with third parties (e.g. accepting resources provided to the Project).
  • Decisions touching on Project-relevant controversial aspects of politics, personal safety, or other topics in which people may not be safe speaking freely in public.
  • Discussing whether and why a team or individual needs help and support, which may touch on personal matters.
  • Any decision that this document or future Council policy specifies as a private decision.

The Council may pull in members of other teams for private discussions leading to either a private or public decision, unless doing so would more broadly expose private information disclosed to the Council without permission. When possible, the Council should attempt to pull in people or teams affected by a decision. This also provides additional oversight.

Some matters may not be fit for full public disclosure while still being fine to share in smaller, more trusted circles (such as with all Project members, with team leads, or with involved/affected parties). The Council should strive to share information with the largest appropriate audiences for that information.

The Council may decide to withhold new decisions or aspects of decisions when it’s unclear whether the information is sensitive. However, as time progresses and it becomes clearer who the appropriate audience is or that the appropriate audience has expanded, the council should revisit its information-sharing decisions.

The Council should always loop in the moderation team for matters involving interpersonal conflict/dispute, both because such matters are the purview of the moderation team, and to again provide additional oversight.

The council should evaluate which portions of a decision or its related discussions necessarily need to be private, and should consider whether it can feasibly make non-sensitive portions public, rather than keeping an entire matter private just because one portion of it needs to be. This may include the existence of the discussion, or the general topic, if those details are not themselves sensitive.

Private matters may potentially be able to become public, or partially public, at a later date if they’re no longer sensitive. However, some matters may potentially never be able to become public, which means they will never become subject to broader review and oversight. Thus, the Council must exercise caution and prudence before making a private decision.

The Council should make every effort to not make private decisions. The Council should have appropriate additional processes in place to encourage representatives to collectively review such decisions and consider their necessity.

Decisions that the Council must make via public proposal

Decisions in this category require the Council to publicly seek feedback from the broader Rust Project in advance of the decision being made. Such decisions are proposed and decided via the appropriate public decision process, currently the RFC process (though the Council may adopt a different public proposal process in the future). The public decision process must require the consent of representatives (either affirmatively or via non-objection), must allow for blocking objections by Council representatives, must provide reasonable time for public evaluation and discussion, and must provide a clear path for public feedback to the Council.

Following the existing RFC process, public proposals must have a minimum time-delay for feedback before the decision takes effect. Any representative may request that the feedback period for a particular decision is extended to at most 20 days total. The Council may make an internal operational decision to extend the feedback period beyond 20 days. The time-delay for feedback starts only when the necessary threshold for approval is otherwise met, including there not being any raised objections. If objections are raised and resolved during the time-delay, the waiting period starts again.

The Council is expected to evolve over time to meet the evolving needs of the teams, the Rust Project, and the community. Such evolutionary changes may be small or large in scope and require corresponding amounts of oversight. Changes that materially impact the shape of the Council would need to be part of a public decision process.

As an exception to the above, modifications or removals of a single top-level team (other than the moderation team) may occur with the unanimous agreement of the Council absent the representative delegated by that top-level team.

The Council is permitted to have private discussions even on something that ultimately ends up as a public proposal or a publicly disclosed internal decision. The Council may wish to do this if the discussions are sensitive to allow decision participants to speak more frankly and freely. Additionally, in some cases, private information that can’t be disclosed may impact an otherwise public decision/proposal; the Council should strive to be as transparent and non-misleading as possible and avoid having opaque decisions where all rationale is private.

Note that all decisions fall into this category unless explicitly designated (via this document or future public proposals) to fall into another category, so this list (unlike those in the other two categories) is intentionally vague/broad: it is intended to give guidance on what likely should belong in this category without necessarily being prescriptive.

  • Any decision that has the effect of modifying the list of decision-makers on the Council or the decision-making process of the Council. For instance:
    • Changing this list (or this document in general).
    • Modifying the publication and approval process used for the Council’s public proposals. Such a proposal must use the existing established process, not the proposed process.
    • Adding, modifying, or removing policies affecting eligibility for Council representatives.
    • Adding, modifying, or removing one or more top-level teams. This includes:
      • modifying the purview of a top-level team to such an extent that it meaningfully becomes a different team.
      • reorganizing the Project such that top-level teams move underneath other teams.
    • Adding other types of Council representatives other than those delegated by top-level teams.
    • Adding, modifying, or removing policies regarding Council quorums or the locations in which binding decisions can be made.
  • Any policy decision, as opposed to a one-off operational decision. (See the decision-making section for details on policy decisions versus operational decisions.) This includes any decision that binds the decisions of other parts of the Project (e.g. other teams or individuals), effectively serving as an exception to the normal purviews of all teams. Some examples of policy decisions:
    • Modifying or extending existing policies, including those previously made via RFC.
    • A legal/licensing policy affecting Rust Project software or other work of the Rust Project.
    • A change to the Code of Conduct.
    • A policy affecting eligibility for membership in the Rust Project or any team thereof.
    • A change to how the moderation team moderates Council representatives or the Council as a whole. Such decisions must be made jointly with the moderation team.
    • An agreement with another project or organization that makes any ongoing commitments on behalf of the Rust Project. (One-off commitments involving teams that have agreed to those commitments are fine.)
    • Creating or substantially modifying legal structures (e.g. additional Foundations, changing relationship with the Rust Foundation, partnering with other legal entities).
    • Making policy decisions requested by one or more teams that would be within the normal purviews of those teams. (Note that teams can ask for Council input without requesting a Council decision.)
    • Deciding that a class of future decisions always belongs within the Council, rather than being delegated to any other team.
  • Any decision that this document or future Council policy specifies as a public policy decision.

Conflicts of interest

A Council representative must not take part in or influence a decision in which they have a conflict of interest.

Potential sources of conflicts of interest include, but are not limited to:

  • Personal: a decision about themselves
  • Financial: a decision with any substantive financial impact on the representative
  • Employment or equivalent: a decision involves another person at the same company, or would benefit/harm that company disproportionately more than others
  • Professional or other affiliation: a decision involves an organization the representative is associated with, such as an industry/professional/standards/governmental organization
  • Familial/Friendship: a decision about a person the representative cannot be expected to be impartial about, including a conflict of interest of another type through that person (such as a family member’s business)

Council representatives must promptly disclose conflicts of interest and recuse themselves from affected decisions. Council representatives must also proactively disclose likely sources of potential conflict annually to other representatives and to the moderation team.

Note that conflicts of interest can arise even if a proposal does not name a specific entity. Council representatives cannot, for instance, use their position to tailor requirements in a proposal to disproportionately benefit their employer.

A proposal favored widely across the Rust community does not automatically represent a conflict of interest for a representative merely because that representative’s employer or equivalent also favors the general area of that proposal, as long as the proposal does not favor any particular entities. For example, a proposal to improve the security of a particular Rust component is not a conflict of interest for representatives just because their employers generally care about Rust security; however, a proposal to engage specific developers or security experts, or one’s compensation being predicated on such a proposal, might still raise a conflict.

The Council may not waive a conflict of interest if one applies, even if the Council considers it minor. However, the Council may evaluate whether a conflict exists at all. Council representatives must raise potential conflicts so that the Council can make such a determination.

The Council may request specific information from a recused representative, and the recused representative may provide that information upon request.

Where possible and practical, the Council should separate decisions to reduce the scope of a conflict of interest. For instance, the Council could separate a decision to arrange access to a class of hardware (without setting specific requirements or selecting vendors) from the decision of which exact hardware to purchase and where to purchase it, if doing so made a conflict of interest only apply to the latter decision.

A representative simultaneously considering the interests of the Rust Project and the interests of any Project team is not necessarily a conflict of interest. In particular, representatives are expected to regularly take part in decisions involving their teams, as delegates from those teams.

In the unlikely event that a proposed decision produces a conflict of interest with enough representatives that the remainder cannot meet a previously established quorum requirement, and the decision must still be made, then either top-level teams must provide alternate representatives for the purposes of the specific decision, or (for public decisions only) the Council may elect to proceed with the decision while publicly documenting all conflicts of interest. (Note that proceeding with a public decision, even with conflicts documented, does not actually eliminate the conflicts or prevent them from influencing the decision; it only allows the public to judge whether the conflicts might have influenced the decision. Eliminating the conflicts entirely is always preferable.) In such a case, the Council should consider appropriate processes and policies to avoid future recurrences of a similar conflict.

Determining and changing team purviews

The Council can move an area or activity between the purviews of top-level teams either already existing or newly created (other than the moderation team). Though the purview of a given top-level team may be further sub-divided by that team, the Council only moves or adjusts top-level purviews. If a sub-divided purview is moved, the Council will work with the involved teams to coordinate the appropriate next steps. This mechanism should be used when the Council believes the existing team’s purview is too broad, such that it is not feasible to expect the team to fulfill the full purview under the current structure. However, this should not happen when a team only currently lacks resources to perform part of its duties.

The Council also must approve expansions of a top-level team’s purview, and must be notified of reductions in a top-level team’s purview. This most often happens when a team self-determines that they wish to expand or reduce their purview. This could also happen as part of top-level teams agreeing to adjust purviews between themselves. Council awareness of changes to a purview is necessary, in part, to ensure that the purview can be re-assigned elsewhere or intentionally left unassigned by the Council.

However, teams (individually or jointly) may further delegate their purviews to subteams without approval from the Council. Top-level teams remain accountable for the full purviews assigned to them, even if they delegate (in other words, teams are responsible for ensuring the delegation is successful).

The Council should favor working with teams on alternative strategies prior to shifting purviews between teams, as this is a relatively heavyweight step. It’s also worth noting that one of the use cases for this mechanism is shifting a purview previously delegated to a team that functionally no longer exists (for instance, because no one on the team has time), potentially on a relatively temporary basis until people arrive with the time and ability to re-create that team. This section intentionally does not put constraints on the Council for exactly how (or whether) this consultation should happen.

Mechanisms for oversight and accountability

The following are various mechanisms that the Council uses to keep itself and others accountable.

Ensuring the Council is accountable

The Council must publicly ensure that the wider Project and community’s expectations of the Council are consistently being met. This should be done both by adjusting the policies, procedures, and outcomes of the Council as well as education of the Project and community when their expectations are not aligned with the reality.

To achieve this, in addition to rotating representatives and adopting a “public by default” orientation, the Council must regularly (at least on a quarterly basis) provide some sort of widely available public communication on their activities as well as an evaluation of how well the Council is functioning using the list of duties, expectations, and constraints as the criteria for this evaluation.

Each year, the Council must solicit feedback on whether the Council is serving its purpose effectively from all willing and able Project members and openly discuss this feedback in a forum that allows and encourages active participation from all Project members. To do so, the Council and other Project members consult the high-level duties, expectations, and constraints listed in this document and any subsequent revisions thereof to determine if the Council is meeting its duties and obligations.

In addition, it is every representative’s individual responsibility to watch for, call out, and refuse to go along with failures to follow this document, other Council policies and procedures, or any other aspects of Council accountability. Representatives should strive to actively avoid “diffusion of responsibility”, the phenomenon in which a group of people collectively fail to do something because each individual member (consciously or subconsciously) believes that someone else will do so. The Council may also wish to designate a specific role with the responsibility of handling and monitoring procedural matters, and in particular raising procedural points of order, though others can and should still do so as well.

If any part of the above process comes to the conclusion that the Council is not meeting its obligations, then a plan for how the Council will change to better be able to meet their obligations must be presented as soon as possible. This may require an RFC changing charter or similar, a rotation of representatives, or other substantive changes. Any plan should have concrete measures for how the Council and/or Rust governance as a whole will evolve in light of the previous year’s experience.

Ensuring Council representatives are accountable

Council representatives should participate in regular feedback with each other and with their respective top-level team (the nature of which is outside the scope of this document) to reflect on how well they are fulfilling their duties as representatives. The goal of the feedback session is to help representatives understand how they can better serve the Project. This feedback must be shared with all representatives, all members of the representative’s top-level team, and with the moderation team. This feedback should ask for both what representatives have done well and what they could have done better.

Separately, representatives should also be open to private feedback from their teams and fellow representatives at any time, and should regularly engage in self-reflection about their role and efficacy on the Council.

Artifacts from these feedback processes must never be made public to ensure a safe and open process. The Council should also reflect on and adjust the feedback process if the results do not lead to positive change.

If other members of the Council feel that a Council representative is not collaborating well with the rest of the Council, they should talk to that representative, and if necessary to that representative’s team. Council representatives should bring in moderation/mediation resources as needed to facilitate those conversations. Moderation can help resolve the issue, and/or determine if the issue is actionable and motivates some level of escalation.

While it is out of scope for this document to specify how individual teams ensure their representatives are held accountable, we encourage teams to use the above mechanisms as inspiration for their own policies and procedures.

Ensuring teams are accountable

Teams regularly coordinate and cooperate with each other, and have conversations about their needs; under normal circumstances the Council must respect the autonomy of individual teams.

However, the Council serves as a means for teams to jointly hold each other accountable, to one another and to the Project as a whole. The Council can:

  • Ask a team to reconsider a decision that failed to take other teams or the Project as a whole into consideration.
  • Encourage teams to establish processes that more regularly take other teams into consideration.
  • Ensure a shared understanding of teams’ purviews.
  • Ensure teams are willing and able to fulfill those purviews.
  • Establish new teams that split a team’s purview up into more manageable chunks.

The accountability process must not be punitive, and the process must be done with the active collaboration of the teams in question.

In extreme circumstances where teams are willfully choosing to not act in good faith with regards to the wider Project, the Council has the authority to change a team’s purview, move some subset of a team’s purview to another team, or remove a team entirely. This is done through the Council’s regular decision making process. (This does not apply to the moderation team; see the next section for accountability between the Council and moderation team.)

Footnotes

1

The term ‘authority’ here refers to the powers and responsibilities the Council has to ensure the success of the Rust Project. This document lays out the limits of these powers, so that the Council will delegate the authority it has to teams responsible for the concerns of the Project. These concerns may include - but are not limited to - product vision, day-to-day procedures, engineering decisions, mentoring, and marketing.

2

Throughout this document, “teams” includes subteams, working groups, project groups, initiatives, and all other forms of official collaboration structures within the Project. “Subteams” includes all forms of collaboration structures that report up through a team.

3

Subteams or individuals that fall under multiple top-level teams should not get disproportionate representation by having multiple representatives speaking for them on the Council. Whenever a “diamond” structure like this exists anywhere in the organization, the teams involved in that structure should strive to avoid ambiguity or diffusion of responsibility, and ensure people and teams know what paths they should use to raise issues and provide feedback.

4

This also effectively constrains the number of Council representatives to the same range. Note that this constraint is independently important.

5

The Council consists only of the representatives provided to it by top-level teams, and cannot appoint new ad hoc members to itself. However, if the Council identifies a gap in the project, it can create a new top-level team. In particular, the Council can bootstrap the creation of a team to address a problem for which the Project doesn’t currently have coordinated/organized expertise and for which the Council doesn’t know the right solution structure to charter a team solving it. In that case, the Council could bring together a team whose purview is to explore the solution-space for that problem, determine the right solution, and to return to the Council with a proposal and charter. That team would then provide a representative to the Council, who can work with the Council on aspects of that problem and solution.

6

Being a Council representative is ultimately a position of service to the respective team and to the Project as a whole. While we hope that the position is fulfilling and engaging to whomever fills it, we also hope that it is not viewed as a position of status to vie for.

7

In practice the infrastructure team as a whole does not have access to all credentials and internally strives to meet the principle of least privilege.

8

The Council is not required to assign such roles exclusively to Council representatives; the Council may appoint any willing Project member. Such roles do not constitute membership in the Council for purposes such as decision-making.

Moderation, disagreements, and conflicts

This section describes the roles of the Leadership Council and the moderation team in helping resolve disagreements and conflicts, as well as the interactions between those teams.

Disagreements and conflicts fall on a spectrum of interpersonal interaction. Disagreements are more factual and/or technical misalignments, while conflicts are more social or relational roadblocks to collaboration. Many interactions might display aspects of both disagreement and conflict. The Council can help with aspects of disagreement, while aspects of conflict are the purview of the moderation team.

This document does not specify moderation policy in general, only the portion of it necessary to specify interactions with the Council and the checks and balances between the Council and the moderation team. General moderation policy is out of scope for this document.

Much of the work of the Rust Project involves collaboration with other people, all of whom care deeply about their work. It’s normal for people to disagree, and to feel strongly about that disagreement. Disagreement can also be a powerful tool for surfacing and addressing issues, and ideally, people who disagree can collaboratively and (mostly) amicably explore those disagreements without escalating into interpersonal conflicts.

Situations where disagreements and conflicts arise may be complex. Disagreements can escalate into conflicts, and conflicts can de-escalate into disagreements. If the distinction between a disagreement and a conflict is not clear in the situation, or if participants disagree, assume the situation is a conflict.

In the event of a conflict, involved parties should reach out to the moderation team to help resolve the conflict as soon as possible. Time is a critical resource in attempting to resolve a conflict before it gets worse or causes more harm.

Disagreements among teams

Where possible, teams should attempt to resolve disagreements on their own, with assistance from the Council as needed. The Council can make judgment calls to settle disagreements, but teams need to maintain good working relationships with each other to avoid persistent disagreements or escalations into conflicts.

Potential resolution paths for disagreements between teams could include selecting a previously discussed option, devising a new option, deciding whose purview the decision falls in, or deciding that the decision is outside the purviews of both teams and leaving it to the Council to find a new home for that work.

Conflicts involving teams or Project members

Conflicts involving teams or Project members should be brought to the moderation team as soon as possible. The Council can help mitigate the impact of those conflicts on pending/urgent decisions, but the moderation team is responsible for helping with conflicts and interpersonal issues, across teams or otherwise.

Individuals or teams may also voluntarily engage in other processes to address conflicts or interpersonal issues, such as non-binding external mediation. Individuals or teams should keep the moderation team in the loop when doing so, and should seek guidance from the moderation team regarding appropriate resources or approaches for doing so. Individuals or teams must not use resources that would produce a conflict of interest.

Contingent moderators

The moderation team must at all times maintain a publicly documented list of “contingent moderators”, who must be approved by both the moderation team and the Council via internal consent decision. The moderation team and the contingent moderation team should each consist of at least three members. The contingent moderators must be:

  • Not part of the current moderation team or the Leadership Council.
  • Widely trusted by Rust Project members as jointly determined by the Council and moderation team; this will often mean they’re already part of the Project in some capacity.
  • Qualified to do moderation work and audits as jointly determined by the Council and moderation team. More detailed criteria and guidelines will be established by moderation policy, which is out of scope for this document.
  • Willing to serve as contingent moderators: willing to do audits, and willing to do interim moderation work if the moderation team dissolves or becomes unavailable, until they can appoint new full moderators. (The contingent moderators are not expected to be willing to do moderation work long-term.)
  • Willing to stay familiar with moderation policy and procedure to the standards expected of a moderation team member (including any associated training). Contingent moderators should receive the same opportunities for training as the moderation team where possible.

The need for contingent moderators arises in a high-tension situation, and the Project and Council must be prepared to trust them to step into that situation. Choosing people known and trusted by the rest of the Project helps lower tensions in that situation.

Moderation is a high-burnout activity, and individual moderators or the moderation team as a whole may wish to step away from that work. Note that one or more individual moderators may always choose to step down, in which case the moderation team should identify and bring in new moderators to fill any gaps or shortfalls; if the moderation team asks a contingent moderator to become a full moderator, the team should then appoint a new contingent moderator. An individual moderator who stepped down may be selected as a contingent moderator. If the moderation team as a whole becomes simultaneously unavailable (as determined jointly by the Council and contingent moderators via internal consent decision), or chooses to step down simultaneously, the contingent moderators become the interim moderation team and must promptly appoint new contingent moderators and start seeking new full moderators.

As the contingent moderator role does not have any regular required activities outside of exceptional situations, those appointed to that role must have regular check-ins with the moderation team, to reconfirm that they’re still willing to serve in that role, and to avoid a circumstance in which the contingent moderators are abruptly needed and turn out to be unavailable.

Moderation team policies and procedures

The moderation team has a duty to have robust policies and procedures in place. The Council provides oversight and assistance to ensure that the moderation team has those policies and procedures and that they are sufficiently robust.

The Council may provide feedback to the moderation team and the moderation team is required to consider all feedback received. If the Council feels the moderation team has not followed moderation policies and procedures, the Council may require an audit by the contingent moderators. However, the Council may not overrule a moderation decision or policy.

Audits

If any Council member believes a moderation decision (or series of decisions) has not followed the moderation team’s policies and procedures, they should promptly inform the moderation team. The Council and moderation team should then engage with each other, discuss and understand these concerns, and work to address them.

One of the mechanisms this document provides for checking the moderation team’s actions in a privacy-preserving manner is an audit mechanism. In any case where any Council member believes moderation team actions have not followed documented policies or procedures, the Council member may decide to initiate the audit process. (In particular, they might do this in response to a report from a community member involved in a moderation situation.) This happens in addition to the above engagement and conversation; it is not a replacement for direct communication between the Council and the moderation team.

In an audit, the contingent moderation team works with the moderation team to establish whether the moderation team followed documented policies and procedures. This mechanism necessarily involves the contingent moderation team using their own judgment to evaluate moderation policy, specific evidence or communications, and corresponding moderation actions or proposed actions. However, this mechanism is not intended to second-guess the actions themselves; the audit mechanism focuses on establishing whether the moderation team is acting according to its established policy and procedures, as well as highlighting unintended negative consequences of the policies and procedures themselves.

The contingent moderators also reach out to the Council to find out any additional context they might need.

Moderation processes and audits both take time, and must be performed with diligence. However, the Council, contingent moderators, and moderation team should all aim to communicate their concerns and expectations to each other in a reasonably timely fashion and maintain open lines of communication.

Contingent moderators must not take part in decisions or audits for which they have a conflict of interest. Contingent moderators must not have access to private information provided to moderation before the contingent moderator was publicly listed as part of the contingent moderation team; this gives people speaking with the moderation team the opportunity to evaluate potential concerns or conflicts of interest.

The discussions with the Council and the contingent moderation team may discover that the moderation team had to make an exception in policy for a particular case, either because of an unexpected condition not covered by the policies or because of contextual information that couldn’t be incorporated into the policies. This is an expected scenario that merits additional scrutiny by the contingent moderation team of the rationale for making an exception and of the process for deciding that an exception was necessary, but is not inherently a violation of moderation team responsibilities.

As the audit process and the Council/moderation discussions proceed, the moderation team may decide to alter moderation policies and/or change the outcome of specific moderation decisions or proposed decisions. This is solely a decision for the moderation team to make.

The contingent moderation team must report the results of the audit to the moderation team and the Council for their review. This must not include any details that may reveal private information, either directly or indirectly. Together with the discussions with the moderation team, this should aim to address the concerns of the Council.

Last-resort accountability

The Leadership Council and moderation team each have substantial power within the Rust Project. This document provides many tools by which they can work out conflicts. This section outlines the last-resort mechanisms by which those teams can hold each other accountable. This section is written in the hopes that it will never be needed, and that teams will make every possible effort to resolve conflicts without reaching this point.

If the Council believes there is a systemic problem with the moderation team (whether based on an audit report from the contingent moderation team or otherwise), and the Council and moderation team cannot voluntarily come to agreement on how to address the situation, then as a last resort, the Council (by unanimous decision) may simultaneously dissolve itself and the moderation team. The top-level teams must then appoint new representatives to the Council, and the contingent moderation team becomes the new interim moderation team.

Conversely, if the moderation team believes the Council has a systemic problem, and the Council and moderation team cannot voluntarily come to agreement on how to address the situation, then as a last resort, the moderation team (by unanimous decision) may simultaneously dissolve itself and the Council. This process can only be enacted if there are at least three moderation team members. The top-level teams must then appoint new representatives to the Council, and the contingent moderation team becomes the new interim moderation team.

The moderation team’s representative is recused from the decision to dissolve the Council and moderation team to avoid conflicts of interest, though that representative must still step down as well.

The removed representatives and moderators may not serve on either the Council or the moderation team for at least one year.

By default, the new Council and interim moderation team will take responsibility for clearly communicating the transition.

This mechanism is an absolute last resort. It will almost certainly produce suboptimal outcomes, to say the least. If situations escalate to this outcome, many things have gone horribly wrong, and those cleaning up the aftermath should endeavor to prevent it from ever happening again. The indication (by either the moderation team or the Council) that the situation might escalate to this point should be considered a strong signal to come to the table and find a way to do “Something Else which is Not That” to avoid the situation.

Moderation actions involving Project members

The moderation team, in the course of doing moderation work, necessarily requires the ability to take action not just against members of the Rust community but also against members of the Rust Project. Those actions may span the ladder of escalation all the way from a conversation to removal from the Project. This puts the moderation team in a position of power and trust. This document seeks to provide appropriate accountability and cross-checks for the moderation team, as well as for the Council.

If the moderation team plans to enact externally visible sanctions against any member of the Rust Project (anything that would create a conspicuous absence, such as removal from a role, or exclusion from participation in a Project space for more than a week), then any party may request that an audit take place by reaching out to either the Council or contingent moderators, and that audit will be automatically granted.

Until June 2024, audits are automatically performed even without a request, to ensure the process is functional. After that time, the Council and moderation team will jointly review and decide whether to renew this provision.

When the moderation team sends a warning to a Project member, or sends a notification of moderation action regarding a Project member, that message will mention the option of requesting an audit.

Conflicts regarding Project members should be brought to the moderation team as soon as possible.

Conflicts involving Council representatives

Conflicts involving Council representatives, or alternates, follow the same process as conflicts involving Project members. The moderation team has the same ability to moderate representatives or alternates as any other member of the Project, including the required audit by the contingent moderators for any externally visible sanction. This remains subject to the same accountability mechanisms as for other decisions of the moderation team.

In addition to the range of moderation actions already available, the moderation team may take the following additional actions for representatives or alternates as a near-last resort, as a lesser step on the ladder of escalation than removing a member from the Project entirely. These actions are not generally specific to the Council, and apply to other Rust teams as well.

  • The moderation team may decide to remove a representative from the Council. The top-level team represented by that representative should delegate a new representative to serve the remainder of the term, starting immediately.
  • The moderation team may decide to prevent a Project member from becoming a Council representative.
  • The moderation team and Council (excluding the affected parties) may jointly decide (as a private operational consent decision) to apply other sanctions limiting the representative’s involvement in the Council. (In this scenario, representatives are not excluded if they have a conflict of interest, as the entire Council will have to cooperate to make the sanctions effective. If the conflicts of interest thus prevent applying these partial sanctions, the moderation team always has the option of full sanctions such as removal.)

All of these also trigger a required audit. The Council must also be notified of any moderation actions involving representatives or alternates, or actions directly preventing people from becoming representatives.

Conflicts involving moderation team members

Conflicts involving a member of the moderation team will be handled by the remaining members of the moderation team (minus any with a conflict of interest), together with the contingent moderation team to provide additional oversight. Any member of the moderation or contingent moderation team should confer with the Council if there is a more systemic issue within the moderation team. The contingent moderators must audit this decision and must provide an audit report to the Council and moderation team.

Policies

These chapters contain policies covering the Rust project and its members.

Rust crate ownership policy

Introduction

This document covers the policy for crates published by the Rust project. This was initially adopted via RFC 3119.

Categories

Rust crates published by the Rust project fall into one of the following categories:

  • Intentional artifacts: These are crates which are intentionally released by some team (usually libs), are actively maintained, are intended to be used by external users, and intentionally have an air of officialness. Example: libc
  • Internal use: These are crates which are used by some “internal client”, like rustc, crates.io, docs.rs, etc. Their primary purpose is not to be used by external users, though the teams that maintain them (typically the teams of their internal client) may wish for the crate to have wider adoption. The line can be blurry between these and “intentional artifacts” and ultimately depends on the goals of the team. Example: conduit, measureme. There are two subcategories based on whether they are intended to ever show up as a transitive dependency:
    • Transitively intentional: These are dependencies of intentional artifact libraries, and will show up in users’ dependency trees, even if they are not intended to be directly used. The Rust Project still needs to handle security issues in these crates as if they are “intentional artifacts”.
    • Not transitively intentional: These are dependencies of shipped binaries, CI tooling, the stdlib, or are otherwise not expected to show up in users’ dependency trees. The Rust Project may need to handle security issues in these crates internally, but does not necessarily need to message the wider public about security issues in these crates. If a security issue in one of these crates affects a published binary (or crates.io, etc), that will still need to be handled as a bug in the binary or website.
  • Experiment: This was an experiment by a team, intended to be picked up by users to better inform API design (or whatever), without a long-term commitment to maintainership. Example: failure
  • Deprecated: This used to be an “intentional artifact” (or experiment/internal use) but isn’t anymore. Example: rustc-serialize
  • Placeholder: Not a functional crate, used for holding on to the name of an official tool, etc. Example: rustup
  • Expatriated: This may have been an “intentional artifact”, and still is intended to be used by external users, but is no longer intended to be official. In such cases the crate is no longer owned/managed by the Rust project. Example: rand

Policy

Every crate in the organization must be owned by at least one team on crates.io. Teams should use rust-lang/foo teams for this. Non-expatriated crates may not have personal accounts as owners; if a crate needs additional owners that are not part of teams, the team should create a project group. Note that this does not forbid non-team (or project group) users from having maintainer access to the repository; it simply forbids them from publishing.
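
As an illustration of the ownership mechanics (not part of the policy itself), a crates.io team owner backed by a rust-lang GitHub team can be added with cargo’s owner subcommand; the team and crate names below are placeholders:

# Add a rust-lang GitHub team as a crates.io owner (team and crate names are illustrative)
cargo owner --add github:rust-lang:some-team some-crate
# List the current owners to confirm the change
cargo owner --list some-crate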

Currently it is not possible for a crate to be owned by only a team; the rust-lang-owner account (or a similar account to be decided by the infra team) can be used as a stopgap in such cases. We should try to phase this account out as much as possible, in order to make sure it is clear who is responsible for each crate. For crates being auto-published, a rust-lang/publish-bots team (or individual bot accounts) can be used to allow bot accounts to publish crates.

Each crate in the organization, and any future crates in the organization, must be assigned to one of the categories above. If you’re not sure what the category should be when registering a crate, or do not wish to make a decision just yet, pick “Experiment”.

Each published crate must contain a README. At a minimum, this README must mention the primary owning team. Based on their categories, crates are also required to include the following information in their READMEs and documentation roots:

Intentional artifact

“Intentional artifact” crates can choose their commitments but should be clear about what they are in their messaging. If and when a team has a charter, the crate should also be mentioned in the charter as an intentional artifact. Deprecating an intentional artifact should not be taken lightly and will require an RFC.

An example of such messaging would be text like:

This crate is maintained by The Rust [team] Team for use by the wider ecosystem. This crate is post-1.0 and follows semver compatibility for its APIs.

Security issues in these crates should be handled with the appropriate weight and careful messaging by the Security Response WG, and should be reported according to the project’s security policy.

Internal use

“Internal use” crates should contain the following text near the top of the readme/documentation:

This crate is maintained by [team], primarily for use by [rust project(s)] and not intended for external use (except as a transitive dependency). This crate may make major changes to its APIs or be deprecated without warning.

The “except as a transitive dependency” text should be included if the crate is a dependency of an intentional-artifact library (“transitively intentional”).

Security issues in transitively intentional libraries should be handled as if they were intentional artifacts.

Experiment

“Experiment” crates should mention they are experiments. Experiment crates may be intended to be used in a scoped sort of way; if they are intended to be used, they should be clear about what they guarantee.

An example of such messaging would be text like:

This crate is maintained by [team] as a part of an experiment around [thingy]. We encourage people to try to use this crate in their projects and provide feedback through [method], but do not guarantee long term maintenance.

or, for experiments that are not intended to be used at all:

This crate is maintained by [team] and is an internal experiment. We do not guarantee stability or long term maintenance, use at your own risk.

Ideally, experimental crates that are published for feedback purposes will have a document to link to that lists out the purpose, rough duration, and processes of the experiment.

Deprecated

“Deprecated” crates should contain the following text near the top of the readme/documentation:

This crate is deprecated and not intended to be used.

Placeholder

“Placeholder” crates should contain the following text in their published readme/documentation:

This crate is a functionally empty crate that exists to reserve the crate name of [tool]. It should not be used.

In general it is better to have an empty placeholder crate published instead of reserving the crate via yanking, so that there is a readme that helps people understand why the crate is unavailable.

Expatriated

It’s unclear if any action should be taken on these beyond removing any semblance of officialness (including rust-lang/foo team owners). We currently have only one such crate (rand).

These should by and large not be considered to be “team managed” crates; this category is included in this policy for completeness, to be able to talk about expatriation as an end state.

Transitions and new crates

Teams should feel free to create new crates in any of these categories; however, “Intentional Artifact” crates must be accompanied by an RFC. As we move towards having team charters, this can transition to being a charter change (which may require an RFC or use its own process). Teams should notify council@rust-lang.org when they’ve created such crates so that the Leadership Council may track these crates and ensure this policy is applied.

From time to time a team’s plan for a crate may change: experiments may conclude, crates may need to be deprecated, or the team may decide to release something for wider usage.

In general, teams should notify council@rust-lang.org when such a transition is being made.

Any transition away from “Intentional Artifact” requires an RFC.

Any transition to “Intentional Artifact” should ideally be accompanied by an RFC, and an update to the team charter if there is one.

Expatriation should essentially never occur anymore, but if it is really necessary it also requires an RFC and Leadership Council approval. If a team wishes to stop working on a crate, they should deprecate it and encourage the community to fork it or build their own thing. The repository may be transferred out; however, the crates.io name is kept by the Rust project, and the new group of maintainers will need to pick a new crate name.

If “transitively intentional” crates are being deprecated, care should be taken to ensure security issues will still be handled.

Transitions between the other types can be made at will since they explicitly and clearly state their lack of a strong stability/maintenance guarantee.

Applying this to existing crates

An audit should be performed on all existing potentially “official” crates, collecting them in a list and roughly determining what their team and category should be.

Once we have this list, we can approach teams with lists of crates and request that they verify that the categorization is accurate. In the case of some crates this might take some time as the team may need to work out what their intentions are with a particular crate.

Then, working with the teams, we make these changes to their documentation. We also make sure all crates have the appropriate rust-lang/teamname GitHub team as an owner, and remove personal accounts from the owners list.

For crates that are in direct use by a lot of the wider community, if we end up categorizing them as anything other than “intentional artifact”, there should be an attempt to announce this “change” to the community. While there was no formal commitment made in case of these crates, the vague sense of officialness may have made people believe there was, and we should at least try to rectify this so that people are not continually misled. Whether or not this needs to be done, and how, can be figured out by the individual teams.

A large part of this work can be parallelized, and it does not need to occur all at once.

Infrastructure

This section documents Rust’s infrastructure, and how it is maintained.

  • rust-toolstate records build and test status of external tools bundled with the Rust repository.

Other Rust Installation Methods

Which installer should you use?

Rust runs on many platforms, and there are many ways to install Rust. If you want to install Rust in the most straightforward, recommended way, then follow the instructions on the main installation page.

That page describes installation via rustup, a tool that manages multiple Rust toolchains in a consistent way across all platforms Rust supports. Why might one not want to install using those instructions?

  • Offline installation. rustup downloads components from the internet on demand. If you need to install Rust without access to the internet, rustup is not suitable.
  • Preference for the system package manager. On Linux in particular, but also on macOS with Homebrew, MacPorts or pkgsrc, and Windows with Chocolatey or Scoop, developers sometimes prefer to install Rust with their platform’s package manager.
  • Preference against curl | sh. On Unix, we usually install rustup by running a shell script via curl. Some have concerns about the security of this arrangement and would prefer to download and run the installer themselves.
  • Validating signatures. Although rustup performs its downloads over HTTPS, the only way to verify the signatures of Rust installers today is to do so manually with the standalone installers.
  • GUI installation and integration with “Add/Remove Programs” on Windows. rustup runs in the console and does not register its installation like typical Windows programs. If you prefer a more typical GUI installation on Windows there are standalone .msi installers. In the future rustup will also have a GUI installer on Windows.

Rust’s platform support is defined in three tiers, which correspond closely with the installation methods available: in general, the Rust project provides binary builds for all tier 1 and tier 2 platforms, and they are all installable via rustup. Some tier 2 platforms, though, have only the standard library available, not the compiler itself; that is, they are cross-compilation targets only: Rust code can run on those platforms, but the compiler does not. Such targets can be installed with the rustup target add command.
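
For example, installing and using one of these std-only targets from an existing toolchain might look like the following sketch (the target name is only an example):

# Install the standard library for a cross-compilation target
rustup target add aarch64-unknown-linux-musl
# Build a project for that target; a suitable cross-linker may also need to be configured
cargo build --target aarch64-unknown-linux-musl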

Other ways to install rustup

The way to install rustup differs by platform:

  • On Unix, run curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh in your shell. This downloads and runs rustup-init.sh, which in turn downloads and runs the correct version of the rustup-init executable for your platform.
  • On Windows, download and run rustup-init.exe.

rustup-init can be configured interactively, and all options can additionally be controlled by command-line arguments, which can be passed through the shell script. Pass --help to rustup-init as follows to display the arguments rustup-init accepts:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --help
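
For example, a fully non-interactive installation that customizes a few common options might look like the following sketch (the option values shown are only examples):

# Install without prompts, using a minimal profile and the stable toolchain,
# and without modifying PATH in shell startup files
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --profile minimal --default-toolchain stable --no-modify-path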

If you prefer not to use the shell script, you may directly download rustup-init for the platform of your choice.

Standalone installers

The official Rust standalone installers contain a single release of Rust, and are suitable for offline installation. They come in three forms: tarballs (extension .tar.xz) that work in any Unix-like environment, Windows installers (.msi), and Mac installers (.pkg). These installers come with rustc, cargo, rustdoc, the standard library, and the standard documentation, but do not provide access to additional cross-targets like rustup does.

The most common reasons to use these are:

  • Offline installation
  • Preferring a more platform-integrated, graphical installer on Windows
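
As a rough sketch of offline installation from the tarball form on a Unix-like system: the archive unpacks into a directory containing an install.sh script. The file name below is only an example; the exact name depends on the release and target:

# Unpack a standalone installer tarball (file name is illustrative)
tar xJf rust-1.83.0-x86_64-unknown-linux-gnu.tar.xz
cd rust-1.83.0-x86_64-unknown-linux-gnu
# The bundled install script installs to /usr/local by default;
# pass --prefix to choose another location (see ./install.sh --help)
./install.sh --prefix="$HOME/.local"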

Each of these binaries is signed with GPG by the Rust build infrastructure, using the Rust signing key, which is available on keybase.io. In the tables below, the .asc files are the signatures.
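
A minimal sketch of verifying one of these signatures with GPG, assuming the Rust signing key has already been obtained (for example from keybase.io) and saved as rust-key.asc; the file names and URLs below are only examples:

# Import the Rust signing key (file name is illustrative)
gpg --import rust-key.asc
# Download an installer and its detached .asc signature, then verify them
wget https://static.rust-lang.org/dist/rust-1.83.0-x86_64-unknown-linux-gnu.tar.xz
wget https://static.rust-lang.org/dist/rust-1.83.0-x86_64-unknown-linux-gnu.tar.xz.asc
gpg --verify rust-1.83.0-x86_64-unknown-linux-gnu.tar.xz.asc rust-1.83.0-x86_64-unknown-linux-gnu.tar.xz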

Past releases can be found in the archive.

| platform | stable (1.83.0) | beta | nightly |
|----------|-----------------|------|---------|
| aarch64-apple-darwin | pkg, pkg.asc, tar.xz, tar.xz.asc | pkg, pkg.asc, tar.xz, tar.xz.asc | pkg, pkg.asc, tar.xz, tar.xz.asc |
| aarch64-pc-windows-msvc | msi, msi.asc, tar.xz, tar.xz.asc | msi, msi.asc, tar.xz, tar.xz.asc | msi, msi.asc, tar.xz, tar.xz.asc |
| aarch64-unknown-linux-gnu | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| aarch64-unknown-linux-musl | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| arm-unknown-linux-gnueabi | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| arm-unknown-linux-gnueabihf | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| armv7-unknown-linux-gnueabihf | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| i686-pc-windows-gnu | msi, msi.asc, tar.xz, tar.xz.asc | msi, msi.asc, tar.xz, tar.xz.asc | msi, msi.asc, tar.xz, tar.xz.asc |
| i686-pc-windows-msvc | msi, msi.asc, tar.xz, tar.xz.asc | msi, msi.asc, tar.xz, tar.xz.asc | msi, msi.asc, tar.xz, tar.xz.asc |
| i686-unknown-linux-gnu | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| loongarch64-unknown-linux-gnu | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| loongarch64-unknown-linux-musl | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| powerpc-unknown-linux-gnu | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| powerpc64-unknown-linux-gnu | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| powerpc64le-unknown-linux-gnu | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| powerpc64le-unknown-linux-musl | | | tar.xz, tar.xz.asc |
| riscv64gc-unknown-linux-gnu | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| s390x-unknown-linux-gnu | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| x86_64-apple-darwin | pkg, pkg.asc, tar.xz, tar.xz.asc | pkg, pkg.asc, tar.xz, tar.xz.asc | pkg, pkg.asc, tar.xz, tar.xz.asc |
| x86_64-pc-windows-gnu | msi, msi.asc, tar.xz, tar.xz.asc | msi, msi.asc, tar.xz, tar.xz.asc | msi, msi.asc, tar.xz, tar.xz.asc |
| x86_64-pc-windows-msvc | msi, msi.asc, tar.xz, tar.xz.asc | msi, msi.asc, tar.xz, tar.xz.asc | msi, msi.asc, tar.xz, tar.xz.asc |
| x86_64-unknown-freebsd | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| x86_64-unknown-illumos | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| x86_64-unknown-linux-gnu | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| x86_64-unknown-linux-musl | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |
| x86_64-unknown-netbsd | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc | tar.xz, tar.xz.asc |

Source code

If you want to build the Rust toolchain from source code, you can use the following links to download source code tarballs.

| Channel | Archives + Signatures |
|---------|-----------------------|
| stable (1.83.0) | tar.xz, tar.xz.asc |
| beta | tar.xz, tar.xz.asc |
| nightly | tar.xz, tar.xz.asc |

If you want to make sure that the published source tarball matches what is in the rust git repository, you can use the following script as a template:

Script for reproducing source tarball contents
#!/bin/bash

set -e

# You can use either a commit SHA or a stable release version (e.g. 1.XY.Z)
TAG=a8cfc83801301c2b4f0fd030192e268eeb15d473
# TAG=1.77.1

# Clone Rust toolchain repository from GitHub
git clone https://github.com/rust-lang/rust
cd rust
git reset --hard ${TAG}

cat >config.toml << EOF
[rust]
# Use for a commit SHA
channel = "nightly"

# Use for a stable release
# channel = "stable"

[dist]
compression-formats = ["xz"]
compression-profile = "fast"
EOF

# Build the source tarball from git into build/dist/
./x dist rustc-src

# Download source tarball for a commit SHA
wget https://ci-artifacts.rust-lang.org/rustc-builds/${TAG}/rustc-nightly-src.tar.xz

# Download a source tarball for a stable release
# wget https://static.rust-lang.org/dist/rustc-${TAG}-src.tar.xz

# Decompress the tarballs and check if they're the same
xz --decompress rustc-*-src.tar.xz
xz --decompress build/dist/rustc-*-src.tar.xz
diff rustc-*-src.tar build/dist/rustc-*-src.tar

Archive of Rust Stable Standalone Installers

Note: The Rust project only supports the latest stable release with security patches. Generally speaking, these archives should not be used without some additional mechanism for applying such patches.
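
Where network access is available, a specific older release can usually be obtained through rustup instead of these archives; a sketch (the version number is only an example):

# Install a specific past release and use it for a single build
rustup toolchain install 1.80.1
cargo +1.80.1 build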

The official Rust standalone installers contain a single release of Rust, and are suitable for offline installation. They come in three forms: tarballs (extension .tar.xz) that work in any Unix-like environment, Windows installers (.msi), and Mac installers (.pkg). These installers come with rustc, cargo, rustdoc, the standard library, and the standard documentation, but do not provide access to additional cross-targets like rustup does.

The most common reasons to use these are:

  • Offline installation
  • Preferring a more platform-integrated, graphical installer on Windows

Each of these binaries is signed with GPG by the Rust build infrastructure, using the Rust signing key, which is available on keybase.io. In the tables below, the .asc files are the signatures.

Stable (1.82.0)

| platform | stable (1.82.0) |
|----------|-----------------|
| aarch64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| aarch64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| aarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| aarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabi | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| armv7-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| i686-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| powerpc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64le-unknown-linux-gnu | tar.gz, tar.gz.asc |
| riscv64gc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| s390x-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-unknown-freebsd | tar.gz, tar.gz.asc |
| x86_64-unknown-illumos | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-musl | tar.gz, tar.gz.asc |
| x86_64-unknown-netbsd | tar.gz, tar.gz.asc |

Stable (1.81.0)

| platform | stable (1.81.0) |
|----------|-----------------|
| aarch64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| aarch64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| aarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| aarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabi | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| armv7-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| i686-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| powerpc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64le-unknown-linux-gnu | tar.gz, tar.gz.asc |
| riscv64gc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| s390x-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-unknown-freebsd | tar.gz, tar.gz.asc |
| x86_64-unknown-illumos | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-musl | tar.gz, tar.gz.asc |
| x86_64-unknown-netbsd | tar.gz, tar.gz.asc |

Stable (1.80.1)

| platform | stable (1.80.1) |
|----------|-----------------|
| aarch64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| aarch64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| aarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| aarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabi | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| armv7-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| i686-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64le-unknown-linux-gnu | tar.gz, tar.gz.asc |
| riscv64gc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| s390x-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-unknown-freebsd | tar.gz, tar.gz.asc |
| x86_64-unknown-illumos | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-musl | tar.gz, tar.gz.asc |
| x86_64-unknown-netbsd | tar.gz, tar.gz.asc |

Stable (1.80.0)

| platform | stable (1.80.0) |
|----------|-----------------|
| aarch64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| aarch64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| aarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| aarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabi | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| armv7-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| i686-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64le-unknown-linux-gnu | tar.gz, tar.gz.asc |
| riscv64gc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| s390x-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-unknown-freebsd | tar.gz, tar.gz.asc |
| x86_64-unknown-illumos | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-musl | tar.gz, tar.gz.asc |
| x86_64-unknown-netbsd | tar.gz, tar.gz.asc |

Stable (1.79.0)

| platform | stable (1.79.0) |
|----------|-----------------|
| aarch64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| aarch64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| aarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| aarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabi | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| armv7-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| i686-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64le-unknown-linux-gnu | tar.gz, tar.gz.asc |
| riscv64gc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| s390x-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-unknown-freebsd | tar.gz, tar.gz.asc |
| x86_64-unknown-illumos | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-musl | tar.gz, tar.gz.asc |
| x86_64-unknown-netbsd | tar.gz, tar.gz.asc |

Stable (1.78.0)

| platform | stable (1.78.0) |
|----------|-----------------|
| aarch64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| aarch64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| aarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| aarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabi | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| armv7-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| i686-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64le-unknown-linux-gnu | tar.gz, tar.gz.asc |
| riscv64gc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| s390x-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-unknown-freebsd | tar.gz, tar.gz.asc |
| x86_64-unknown-illumos | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-musl | tar.gz, tar.gz.asc |
| x86_64-unknown-netbsd | tar.gz, tar.gz.asc |

Stable (1.77.2)

| platform | stable (1.77.2) |
|----------|-----------------|
| aarch64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| aarch64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| aarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| aarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabi | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| armv7-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| i686-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64le-unknown-linux-gnu | tar.gz, tar.gz.asc |
| riscv64gc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| s390x-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-unknown-freebsd | tar.gz, tar.gz.asc |
| x86_64-unknown-illumos | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-musl | tar.gz, tar.gz.asc |
| x86_64-unknown-netbsd | tar.gz, tar.gz.asc |

Stable (1.77.1)

| platform | stable (1.77.1) |
|----------|-----------------|
| aarch64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| aarch64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| aarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| aarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabi | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| armv7-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| i686-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64le-unknown-linux-gnu | tar.gz, tar.gz.asc |
| riscv64gc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| s390x-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-unknown-freebsd | tar.gz, tar.gz.asc |
| x86_64-unknown-illumos | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-musl | tar.gz, tar.gz.asc |
| x86_64-unknown-netbsd | tar.gz, tar.gz.asc |

Stable (1.77.0)

| platform | stable (1.77.0) |
|----------|-----------------|
| aarch64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| aarch64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| aarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| aarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabi | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| armv7-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| i686-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64le-unknown-linux-gnu | tar.gz, tar.gz.asc |
| riscv64gc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| s390x-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-unknown-freebsd | tar.gz, tar.gz.asc |
| x86_64-unknown-illumos | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-musl | tar.gz, tar.gz.asc |
| x86_64-unknown-netbsd | tar.gz, tar.gz.asc |

Stable (1.76.0)

| platform | stable (1.76.0) |
|----------|-----------------|
| aarch64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| aarch64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| aarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| aarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabi | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| armv7-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| i686-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64le-unknown-linux-gnu | tar.gz, tar.gz.asc |
| riscv64gc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| s390x-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-unknown-freebsd | tar.gz, tar.gz.asc |
| x86_64-unknown-illumos | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-musl | tar.gz, tar.gz.asc |
| x86_64-unknown-netbsd | tar.gz, tar.gz.asc |

Stable (1.75.0)

| platform | stable (1.75.0) |
|----------|-----------------|
| aarch64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| aarch64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| aarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| aarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabi | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| armv7-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| i686-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64le-unknown-linux-gnu | tar.gz, tar.gz.asc |
| riscv64gc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| s390x-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-unknown-freebsd | tar.gz, tar.gz.asc |
| x86_64-unknown-illumos | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-musl | tar.gz, tar.gz.asc |
| x86_64-unknown-netbsd | tar.gz, tar.gz.asc |

Stable (1.74.1)

| platform | stable (1.74.1) |
|----------|-----------------|
| aarch64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| aarch64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| aarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| aarch64-unknown-linux-musl | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabi | tar.gz, tar.gz.asc |
| arm-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| armv7-unknown-linux-gnueabihf | tar.gz, tar.gz.asc |
| i686-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| i686-unknown-linux-gnu | tar.gz, tar.gz.asc |
| loongarch64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| powerpc64le-unknown-linux-gnu | tar.gz, tar.gz.asc |
| riscv64gc-unknown-linux-gnu | tar.gz, tar.gz.asc |
| s390x-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-apple-darwin | pkg, pkg.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-gnu | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-pc-windows-msvc | msi, msi.asc, tar.gz, tar.gz.asc |
| x86_64-unknown-freebsd | tar.gz, tar.gz.asc |
| x86_64-unknown-illumos | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-gnu | tar.gz, tar.gz.asc |
| x86_64-unknown-linux-musl | tar.gz, tar.gz.asc |
| x86_64-unknown-netbsd | tar.gz, tar.gz.asc |

Stable (1.74.0)

The stable (1.74.0) standalone installers cover the same targets and artifact types as the stable (1.76.0) table above.

Stable (1.73.0)

The stable (1.73.0) standalone installers cover the same targets and artifact types as the stable (1.76.0) table above.

Stable (1.72.1)

The stable (1.72.1) standalone installers cover the same targets and artifact types as the stable (1.76.0) table above.

Stable (1.72.0)

The stable (1.72.0) standalone installers cover the same targets and artifact types as the stable (1.76.0) table above.

Stable (1.71.1)

The stable (1.71.1) standalone installers cover the same targets and artifact types as the stable (1.76.0) table above, plus tar.gz and tar.gz.asc downloads for mips-unknown-linux-gnu, mips64-unknown-linux-gnuabi64, mips64el-unknown-linux-gnuabi64, and mipsel-unknown-linux-gnu.

Stable (1.71.0)

The stable (1.71.0) standalone installers cover the same targets and artifact types as the stable (1.71.1) entry above.

Stable (1.70.0)

Standalone installer downloads for stable (1.70.0), by platform:
aarch64-apple-darwin: pkg, pkg.asc, tar.gz, tar.gz.asc
aarch64-pc-windows-msvc: msi, msi.asc, tar.gz, tar.gz.asc
aarch64-unknown-linux-gnu: tar.gz, tar.gz.asc
aarch64-unknown-linux-musl: tar.gz, tar.gz.asc
arm-unknown-linux-gnueabi: tar.gz, tar.gz.asc
arm-unknown-linux-gnueabihf: tar.gz, tar.gz.asc
armv7-unknown-linux-gnueabihf: tar.gz, tar.gz.asc
i686-pc-windows-gnu: msi, msi.asc, tar.gz, tar.gz.asc
i686-pc-windows-msvc: msi, msi.asc, tar.gz, tar.gz.asc
i686-unknown-linux-gnu: tar.gz, tar.gz.asc
mips-unknown-linux-gnu: tar.gz, tar.gz.asc
mips64-unknown-linux-gnuabi64: tar.gz, tar.gz.asc
mips64el-unknown-linux-gnuabi64: tar.gz, tar.gz.asc
mipsel-unknown-linux-gnu: tar.gz, tar.gz.asc
powerpc-unknown-linux-gnu: tar.gz, tar.gz.asc
powerpc64-unknown-linux-gnu: tar.gz, tar.gz.asc
powerpc64le-unknown-linux-gnu: tar.gz, tar.gz.asc
riscv64gc-unknown-linux-gnu: tar.gz, tar.gz.asc
s390x-unknown-linux-gnu: tar.gz, tar.gz.asc
x86_64-apple-darwin: pkg, pkg.asc, tar.gz, tar.gz.asc
x86_64-pc-windows-gnu: msi, msi.asc, tar.gz, tar.gz.asc
x86_64-pc-windows-msvc: msi, msi.asc, tar.gz, tar.gz.asc
x86_64-unknown-freebsd: tar.gz, tar.gz.asc
x86_64-unknown-illumos: tar.gz, tar.gz.asc
x86_64-unknown-linux-gnu: tar.gz, tar.gz.asc
x86_64-unknown-linux-musl: tar.gz, tar.gz.asc
x86_64-unknown-netbsd: tar.gz, tar.gz.asc

Stable (1.69.0)

The stable (1.69.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.68.2)

The stable (1.68.2) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.68.1)

The stable (1.68.1) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.68.0)

The stable (1.68.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.67.1)

The stable (1.67.1) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.67.0)

The stable (1.67.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.66.1)

The stable (1.66.1) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.66.0)

The stable (1.66.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.65.0)

The stable (1.65.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.64.0)

The stable (1.64.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.63.0)

The stable (1.63.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.62.1)

The stable (1.62.1) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.62.0)

The stable (1.62.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.61.0)

The stable (1.61.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.60.0)

The stable (1.60.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.59.0)

The stable (1.59.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.58.1)

The stable (1.58.1) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.58.0)

The stable (1.58.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.57.0)

The stable (1.57.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.56.1)

The stable (1.56.1) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.56.0)

The stable (1.56.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.55.0)

The stable (1.55.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.54.0)

The stable (1.54.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.53.0)

The stable (1.53.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.52.1)

The stable (1.52.1) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.52.0)

The stable (1.52.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.51.0)

The stable (1.51.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.50.0)

The stable (1.50.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.49.0)

The stable (1.49.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above.

Stable (1.48.0)

The stable (1.48.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above, except that aarch64-apple-darwin and aarch64-pc-windows-msvc are not offered.

Stable (1.47.0)

The stable (1.47.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above, except that aarch64-apple-darwin, aarch64-pc-windows-msvc, and aarch64-unknown-linux-musl are not offered.

Stable (1.46.0)

The stable (1.46.0) standalone installers cover the same targets and artifact types as the stable (1.70.0) table above, except that aarch64-apple-darwin, aarch64-pc-windows-msvc, aarch64-unknown-linux-musl, riscv64gc-unknown-linux-gnu, and x86_64-unknown-illumos are not offered.

Stable (1.45.2)

The stable (1.45.2) standalone installers cover the same targets and artifact types as the stable (1.46.0) entry above.

Stable (1.45.1)

The stable (1.45.1) standalone installers cover the same targets and artifact types as the stable (1.46.0) entry above.

Stable (1.45.0)

The stable (1.45.0) standalone installers cover the same targets and artifact types as the stable (1.46.0) entry above.

Stable (1.44.1)

The stable (1.44.1) standalone installers cover the same targets and artifact types as the stable (1.46.0) entry above.

Stable (1.44.0)

The stable (1.44.0) standalone installers cover the same targets and artifact types as the stable (1.46.0) entry above.

Stable (1.43.1)

The stable (1.43.1) standalone installers cover the same targets and artifact types as the stable (1.46.0) entry above.

Stable (1.43.0)

The stable (1.43.0) standalone installers cover the same targets and artifact types as the stable (1.46.0) entry above.

Stable (1.42.0)

The stable (1.42.0) standalone installers cover the same targets and artifact types as the stable (1.46.0) entry above.

Stable (1.41.1)

The stable (1.41.1) standalone installers cover the same targets and artifact types as the stable (1.46.0) entry above, plus pkg, pkg.asc, tar.gz, and tar.gz.asc downloads for i686-apple-darwin.

Stable (1.41.0)

platformstable (1.41.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-linux-musltar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.40.0)

platformstable (1.40.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-linux-musltar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.39.0)

platformstable (1.39.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-linux-musltar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.38.0)

platformstable (1.38.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-linux-musltar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.37.0)

platformstable (1.37.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-linux-musltar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.36.0)

platformstable (1.36.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-linux-musltar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.35.0)

platformstable (1.35.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-linux-musltar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.34.2)

platformstable (1.34.2)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.34.1)

platformstable (1.34.1)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.34.0)

platformstable (1.34.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.33.0)

platformstable (1.33.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.32.0)

platformstable (1.32.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.31.1)

platformstable (1.31.1)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.31.0)

platformstable (1.31.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.30.1)

platformstable (1.30.1)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.30.0)

platformstable (1.30.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.29.2)

platformstable (1.29.2)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.29.1)

platformstable (1.29.1)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.29.0)

platformstable (1.29.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.28.0)

platformstable (1.28.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.27.2)

platformstable (1.27.2)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.27.1)

platformstable (1.27.1)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.27.0)

platformstable (1.27.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.26.2)

platformstable (1.26.2)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.26.1)

platformstable (1.26.1)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.26.0)

platformstable (1.26.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.25.0)

platformstable (1.25.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.24.1)

platformstable (1.24.1)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.24.0)

platformstable (1.24.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.23.0)

platformstable (1.23.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.22.1)

platformstable (1.22.1)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.22.0)

platformstable (1.22.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.21.0)

platformstable (1.21.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.20.0)

platformstable (1.20.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.19.0)

platformstable (1.19.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.18.0)

platformstable (1.18.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.17.0)

platformstable (1.17.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.16.0)

platformstable (1.16.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.15.1)

platformstable (1.15.1)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.15.0)

platformstable (1.15.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.14.0)

platformstable (1.14.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
mips-unknown-linux-gnutar.gz
tar.gz.asc
mips64-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mips64el-unknown-linux-gnuabi64tar.gz
tar.gz.asc
mipsel-unknown-linux-gnutar.gz
tar.gz.asc
powerpc-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64-unknown-linux-gnutar.gz
tar.gz.asc
powerpc64le-unknown-linux-gnutar.gz
tar.gz.asc
s390x-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.13.0)

platformstable (1.13.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.12.1)

platformstable (1.12.1)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.12.0)

platformstable (1.12.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.11.0)

platformstable (1.11.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.10.0)

platformstable (1.10.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.9.0)

platformstable (1.9.0)
aarch64-unknown-linux-gnutar.gz
tar.gz.asc
arm-unknown-linux-gnueabitar.gz
tar.gz.asc
arm-unknown-linux-gnueabihftar.gz
tar.gz.asc
armv7-unknown-linux-gnueabihftar.gz
tar.gz.asc
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-freebsdtar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-unknown-netbsdtar.gz
tar.gz.asc

Stable (1.8.0)

platformstable (1.8.0)
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc

Stable (1.7.0)

platformstable (1.7.0)
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc

Stable (1.6.0)

platformstable (1.6.0)
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc

Stable (1.5.0)

platformstable (1.5.0)
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc

Stable (1.4.0)

platformstable (1.4.0)
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc

Stable (1.3.0)

platformstable (1.3.0)
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc

Stable (1.2.0)

platformstable (1.2.0)
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-msvcmsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc

Stable (1.1.0)

platformstable (1.1.0)
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc

Stable (1.0.0)

platformstable (1.0.0)
i686-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
i686-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
i686-unknown-linux-gnutar.gz
tar.gz.asc
x86_64-apple-darwinpkg
pkg.asc
tar.gz
tar.gz.asc
x86_64-pc-windows-gnumsi
msi.asc
tar.gz
tar.gz.asc
x86_64-unknown-linux-gnutar.gz
tar.gz.asc

The Rust Release Channel Layout

NOTE This document should be considered incomplete and descriptive rather than normative. Do not rely on anything described herein to be fully correct or a definition of how things should be done.

A lot of the content herein is derived from a posting made to the Rust internals forum by Brian Anderson back in 2016.

Rust releases are deployed onto static.rust-lang.org, where they are served via HTTPS. There are several parts to a release channel (stable, beta, nightly), but they all key off a manifest file and go from there.

Channel manifests

There is a top level directory /dist/ which contains the channel manifests. The manifests are named channel-rust-[channelname].toml. Each channel manifest is accompanied by a .sha256 file containing a checksum of the manifest, which can be used to check the integrity of the downloaded data. In addition, each channel's manifest is accompanied by a .asc file, a detached GPG signature which can be used to check not only the integrity but also the authenticity of the channel manifest.

In addition to the stable, beta, and nightly channels, there is also a manifest for each release which will be called channel-rust-x.yy.z.toml with its associated .sha256 and .asc files.

To support date-based channels, there is an archive folder for each day (labelled YYYY-MM-DD) which contains copies of the requisite channel files on that day. So, for example, if you installed nightly-2019-02-16 then the channel file would be https://static.rust-lang.org/dist/2019-02-16/channel-rust-nightly.toml.
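As a worked example, the following sketch downloads the stable channel manifest together with its companion files and checks both integrity and authenticity. The exact companion file names (the manifest name with .sha256 / .asc appended) and the use of curl, sha256sum and gpg are assumptions here; any equivalent tooling works, and sha256sum --check relies on the .sha256 file being in the usual "<hash>  <filename>" format.

# Fetch the manifest and its checksum / signature companion files.
curl -O https://static.rust-lang.org/dist/channel-rust-stable.toml
curl -O https://static.rust-lang.org/dist/channel-rust-stable.toml.sha256
curl -O https://static.rust-lang.org/dist/channel-rust-stable.toml.asc
# Integrity: the .sha256 file is a checksum of the manifest.
sha256sum --check channel-rust-stable.toml.sha256
# Authenticity: the .asc file is a detached GPG signature (requires the Rust
# release signing key to be imported first).
gpg --verify channel-rust-stable.toml.asc channel-rust-stable.toml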

Content of channel manifests

Channel manifests are TOML files. These are known as v2 manifests. The v1 manifests are simply lists of the files associated with a release and are not generated for every channel all of the time. Currently it is recommended to work only with the v2 manifests, and these are the topic of this section.

The top level of the .toml file consists of two important key/value pairs. Firstly the manifest-version which is, at this time, "2", and secondly the date of the manifest (date) whose value is of the form "YYYY-MM-DD".
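For example, the top of a channel manifest might look like this (the date shown here is illustrative):

manifest-version = "2"
date = "2019-05-23"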

There are then a number of top level sections (tables) which are:

  • pkg - This contains the bulk of the manifest and lists the packages which are part of the release. Typically this will be things like rust, rustc, cargo etc. The rust package is semi-special and currently is used to specify the subset of other packages which will be installed by default.

    Within packages are components and extensions. Currently, components are installed by default by rustup, while extensions are optional components which are available via rustup component add and friends.

  • renames - This contains a set of package renames which can be used to determine the correct package to fetch when the user enters an alias for it.

    Typically renames are used when a package leaves its preview state and is considered to be release quality. For example, the actual package for rustfmt is called rustfmt-preview but since its release there has been a renames.rustfmt table whose to field is rustfmt-preview. When the user runs rustup component add rustfmt the name is automatically translated to rustfmt-preview and when the user runs rustup component list then rustfmt-preview is automatically renamed back to rustfmt for display to the user.

  • profiles - This is part of the future setup for deciding the default component set to install. Instead of choosing the components of pkg.rust, rustup will honor one of the entries in the profiles table. Usually this will be the default entry, which essentially (though not exactly) boils down to ["rustc", "cargo", "rust-std", "rust-docs", "rustfmt", "clippy"]. A sketch of the renames and profiles tables appears after this list.

    Other profiles include minimal (["rustc", "cargo", "rust-std"]) and complete which adds in additional things such as a copy of the standard library source (rust-src), miri, lldb, llvm-tools, and rust-analysis.
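To make the shape of these tables concrete, here is a rough sketch of renames and profiles entries based on the description above; it is illustrative only, and the exact component names in a real manifest may differ (for example by carrying -preview suffixes):

[renames.rustfmt]
to = "rustfmt-preview"

[profiles]
minimal = ["rustc", "cargo", "rust-std"]
default = ["rustc", "cargo", "rust-std", "rust-docs", "rustfmt", "clippy"]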

Package entries in the channel manifest

As stated above, packages list their components and extensions (mostly just the rust package) and they can provide per-target tarball and sha256 data.

For example, a package might be:

[pkg.cargo.target.powerpc64-unknown-linux-gnu]
available = true
url = "https://static.rust-lang.org/dist/2019-05-23/cargo-0.36.0-powerpc64-unknown-linux-gnu.tar.gz"
hash = "279f3a84f40e3547a8532c64643f38068accb91c21f04cd16e46579c893f5a06"
xz_url = "https://static.rust-lang.org/dist/2019-05-23/cargo-0.36.0-powerpc64-unknown-linux-gnu.tar.xz"
xz_hash = "cf93b387508f4aea4e64f8b4887d70cc07a00906b981dc0c143e92e918682e4a"

Here you can see that this is for the cargo package, and for the powerpc64-unknown-linux-gnu target. The url/hash combo is for a .tar.gz, and the xz_url/xz_hash pair is for the same tarball compressed with xz. Either pair of url and hash may be present, or both; it is only useful for neither to be present when available is set to false, indicating that this particular combination of package and target is unavailable in this channel at this time.

In addition, there will be a single entry providing the version for a package in the form:

[pkg.cargo]
version = "0.36.0 (6f3e9c367 2019-04-04)"

Here version will be effectively the $tool --version output, minus the tool’s name.
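So, for the entry above, running the tool itself would print something like the following (the tool's name followed by the version string recorded in the manifest):

$ cargo --version
cargo 0.36.0 (6f3e9c367 2019-04-04)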

Targets

Targets are the same triples you might use when building something with cargo build --target=$target, and you can add them to your installation using rustup target add $target. When you do that, what rustup actually does is find the rust-std package for the target in question and install it. This is essentially an imaginary rustup component add rust-std.$target.

If the rust-std package for a target is not marked available = true, then that target cannot be installed via rustup. This can happen for lower-tier targets from time to time.

Since components and extensions are target-specific in the pkg tables, rust-std for every target is listed as an extension of every rust target’s entry (see the sketch below). This allows cross-compilation, because any rust-std can be installed on any build host.
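
A hypothetical excerpt (the triples are chosen purely for illustration) of how one host’s rust entry lists rust-std for another target as an extension:

[[pkg.rust.target.x86_64-unknown-linux-gnu.extensions]]
pkg = "rust-std"
target = "powerpc64-unknown-linux-gnu"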

Service Infrastructure

Most services in the Rust Infrastructure are deployed via rust-central-station. Questions about infrastructure, including current status, should go to the #t-infra Zulip stream.

Our stability guarantees: many of our services rely on publicly-accessible storage and APIs, but not all of these are intended for public consumption. At the moment, only the resources behind static.rust-lang.org are considered stable, meaning that those resources will not change without (at least) prior notice. If you are relying on other parts of the Rust project infrastructure for your own work, please let the infrastructure team know.

Rust Log Analyzer

The Rust Log Analyzer analyzes CI build logs to extract error messages and posts them to the pull request. It is run by TimNN.

Homu / bors

Homu is a bot which manages pull requests. It is often referred to as “bors” due to the name of its bot user account. Approved pull requests are placed in a queue from which tests are run.

Documentation on homu commands can be found here.

Please contact Alex Crichton if something goes wrong with the bot.

rfcbot

rfcbot is a bot (bot user account) which helps manage async decision making on issues and PRs (typically RFCs). Team members can view any pending requests for review on the FCP dashboard.

Documentation on rfcbot commands can be found in the rfcbot repository.

rustbot

rustbot is a bot (bot user account) that assists with managing issues and PRs, allowing users to label and assign without GitHub permissions. See triagebot for more information.

DXR

DXR is a cross-referenced source index for Rust, allowing the Rust source tree to be navigated and searched with ease. It is generated by rust-dxr.

perf / rust-timer

perf offers information about the performance of rustc over time, and a bot for on-demand benchmarking.

It is split into a data collector and a web frontend + bot. The raw performance data is available here and can be browsed on the perf website.

One-off performance runs can be done by addressing the rust-timer bot (bot user account). You can trigger the necessary try-build and queue a perf run by saying

@bors try @rust-timer queue

(Technically, the requirement is that the queue command finishes executing prior to the try build completing successfully.)

See the documentation for further bot commands.

Rust Playground

Rust Playground allows you to experiment with Rust before you install it locally, or in any other case where you might not have the compiler available. The Rust playground can be accessed here.

Crater

Crater is a tool to run experiments across the whole Rust ecosystem. Its primary purpose is to detect regressions in the Rust compiler, and it does this by building a large number of crates, running their test suites, and comparing the results between two versions of the Rust compiler.

Crater comes with a bot to trigger experiments.

docs.rs

docs.rs builds and serves the rustdoc documentation for all crates on crates.io. Issues may be filed on the docs.rs repository. See the #docs-rs channel on Discord for discussion or urgent issues.

Toolstate

The state of tools included with Rust is tracked on the toolstate page. When each PR is merged via CI, the status of each tool is recorded in a JSON file and stored in the toolstate repo. For further information, see the toolstate system documentation.

Rustup components history

The rustup components history tracks the status of every rustup component for every platform over time. See the repository for more information.

CI Timing Tracker

The CI Timing Tracker tracks and compares how long CI jobs take over time. It is run by Alex Crichton.

Highfive (retired)

Highfive is a bot (bot user account) which was previously used to welcome newcomers and assign reviewers. This service has been replaced with rustbot.

Team Maintenance

The roster of the Rust teams is always in flux. From time to time, new people are added, but also people sometimes opt into “alumni status”, meaning that they are not currently an active part of the decision-making process. Unfortunately, whenever a new person is added or someone goes into alumni status, there are a number of disparate places that need to be updated.

Team repo

Membership of teams is primarily driven by the config files in the rust-lang/team repo. See the README of that repository for the systems integrated with it.

Rules for changes to team repo

Pull requests to the repository are merged by the team-repo-admins, who use these rules to merge PRs:

people and teams directories

If a change is related to an individual and does not expand permissions, then only the individual’s approval is required. If the change has already been made outside of the team repo (e.g., a GitHub username change) then it is considered implicitly approved. This non-exhaustively includes (see the sketch after this list):

  • Changing team membership to alumni or full removal
  • Changing email address
  • Adding zulip-id
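
As a rough illustration, an individual’s file in the people directory looks something like the sketch below; the field names and values here are placeholders and not exhaustive, so check existing files in the repo for the exact schema:

name = "Jane Doe"
github = "janedoe"
github-id = 123456
zulip-id = 654321
email = "jane@example.com"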

If a change will grant additional permissions, then a team lead needs to approve the change. Any team lead in the “parent team” chain may do so. This includes:

  • Adding new subteams under an existing team
  • Changing other metadata (website descriptions, Zulip groups, etc.)

repo directory

The repo directory differs slightly from the other directories in that it is not only used to manage access to repositories, but also configures them and manages their automation.

The following changes must be approved and merged by a team-repo-admin:

  • Changing access to repositories owned by their team
    • For repositories, ownership is not currently formally tracked. Until that is added, the team-repo-admins are expected to exercise their understanding of which team owns the repository, asking for clarification when in doubt and codifying the answer in a comment in the relevant repository.

On the other hand, changes to the configuration or automation of repositories may be approved and merged by infra-admins.

Source code changes

The team repository additionally contains code to transform and validate the user-edited TOML files. This code is owned by the Infrastructure Team, whose approval should be sought for changes.

Who belongs to team-repo-admins?

This group of people is nominated & approved by the Leadership Council, but is not selected through any formal criteria. Eventually, we hope that the need for this group to exist will diminish as additional automation is added to enforce the above policies.

Note also that the infra-admins team maintains “root” credentials to Rust infrastructure, including the team repo, in order to make changes if needed to keep infrastructure operational. Those rights should only be exercised when required though, with team-repo-admins being the first point of contact for changes. (There may be overlap between the two teams).

Extra steps for changes

Full team membership

To make someone a full team member, the following places need to be modified:

  • the team repo
  • if the member is going to join the review rotation, they will need to be added to the [assign.owners] section of triagebot.toml in the repos where they will be reviewing (see the sketch after this list)
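
As a rough sketch (the path and usernames below are placeholders), an [assign.owners] entry in triagebot.toml maps a path prefix to the list of reviewers for files under it:

[assign.owners]
"/docs" = ["@new-team-member", "@existing-reviewer"]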

Team member departure

Remove the team member from any and all places:

  • team repo
  • triagebot.toml files of all repos they were involved in
  • 1password

Handling of tools embedded in the rustc repo (“toolstate”)

The Rust repository contains several external git submodules (e.g. the Book, the Reference). The toolstate system is used to allow these submodules to be in a broken state, except for beta releases.

This is necessary because the documentation is tested both on the rust-lang/rust CI and on the CI of the documentation repo. If a change to rustc breaks the documentation, it is not possible to update the documentation ahead of time, since the not-yet-merged version of rustc that breaks it does not exist yet. We usually require CI to be in a passing state in both repos.

The toolstate system solves this problem by temporarily allowing the documentation to be in a “failing” state on rust-lang/rust. When the tests start failing, the maintainers of the submodule will be notified. They will then be responsible for getting it fixed.

The three possible states of a “tool” are: test-pass, test-fail, build-fail.

This page gives a rough overview of how the toolstate system works, and what the rules are for when which tools are (not) allowed to break.

Note: Historically, the toolstate system was used for managing tools that were closely coupled with the compiler (like rustfmt or miri). However, those have since been transitioned to use git subtrees instead, so that those tools must always pass their tests, and any failures must be resolved within the PR that breaks them.

This document uses the term “tool”, but as of this writing, the only thing tracked is external documentation.

Toolstate Rules

  • For all tools, if a PR changes that tool (i.e., it changes the commit used by the submodule), the tool has to be in test-pass after this PR or else CI will fail.

  • For all tools except for “nightly only” tools, the following extra rules are applied:

    • If a PR lands on the beta or stable branch, the tool has to be test-pass.
    • If a PR lands on master in the week before the beta is cut, and that PR regresses the tool (i.e., it makes the state "worse"), CI fails. This is to help make sure all these tools become test-pass so that a beta can be cut. (See the Forge index for when the next beta cutoff is happening.)

    At the time of writing, the following tools are “nightly only”: embedded-book.

Updating the toolstate repository

Updating the toolstate repository happens in two steps: when CI runs on the auto branch (where bors moves a PR to test if it is good for integration), the “tool” runners for the individual platforms (at the time of writing, Linux and Windows) each submit a JSON file to the repository recording the state of each tool for the commit they are testing. Later, if that commit actually entirely passed CI and bors moves it to the master branch, the “current tool status” in the toolstate repository is updated appropriately.

These scripts also automatically ping some people and create issues when tools break.

For further details, see the comments in the involved files: checktools.sh, publish_toolstate.py as well as the other files mentioned there.

Updating tools

Tools can be updated by updating the submodule to the proper commit.

Run git submodule update --remote path/to/submodule, add the updates, make sure the tests pass, commit, and send a pull request. The path is from the root of the rust repository, so for example, the reference is src/doc/reference.

While not required, subup may assist you with this.

Adding a tool

NOTE: We are trying to switch away from submodules and toolstate over time. Consider adding a subtree instead of a submodule: #70651

To add a new tool to be tracked, the following steps must be taken:

  1. Create a PR to rust-lang/rust that adds the submodule along with any necessary build system / bootstrap updates. Be careful that the tests properly support ./x.py --no-fail-fast to avoid issues like this.
  2. Include changes to checktools.sh:
    • Build the tool at the top. This is the step that actually generates the JSON status for the tool. When save-toolstates is set in config.toml, the rust build system will write a JSON file with the status of each test (see the config.toml sketch after this list).
    • Add the tool to status_check with whether it should be a beta blocker or not.
  3. Update publish_toolstate.py to add the tool. This includes a list of people to ping if the tool is broken, and its source repo. (Note: At the time of this writing, these users must have permissions to be assignable on rust-lang/rust GitHub.)
  4. Submit a PR to the toolstate repository to manually add the tool to the latest.json file.
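
A minimal sketch of the relevant config.toml setting (the path shown is only an example):

[rust]
# Record the pass/fail status of each tool's tests in a JSON file.
save-toolstates = "/tmp/toolstate/toolstates.json"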

Policies of the infrastructure team

This section documents the policies created by the infrastructure team.

Policy on broken nightlies

Sometimes the nightlies released automatically by our CI end up being broken for some people or even everyone. This policy defines what the infra team’s response will be in those cases.

Which nightly will be rolled back

A nightly can only be rolled back in the following cases:

  • If it contains destructive code, for example if the included compiler deletes all the user’s files.
  • If an infra problem caused it to be broken for a large percentage of users on any Tier 1 platform. Issues affecting only lower-tier platforms are not worth a rollback, since we don’t guarantee working builds for those platforms anyway.

A nightly will not be rolled back if it’s broken by a critical compiler bug: those bugs are supposed to be caught by CI, and nightly can have compiler regressions anyway. There are no exceptions, even if big projects are broken because of this.

What are we going to fix

Once any member of the infra team decides to roll back a nightly under this policy, we will roll back to the most recent working nightly. The rollback has to fix installing the nightly with rustup:

$ rustup toolchain install nightly

It’s not required to roll back other things like the documentation or the manually downloadable artifacts. After the nightly is rolled back, we have to announce the rollback on the @rustlang Twitter account and on the status page.

Infrastructure guidelines

This section contains the guidelines written by the infrastructure team for other teams who want to use the project’s infrastructure.

Rust Infrastructure hosting for static websites

The Rust Infrastructure team provides hosting for static websites, available to all Rust teams. This document explains the requirements a website needs to meet and how to set one up.

Requirements for hosting websites

  • The website must be managed by a Rust team, or be officially affiliated with the project.
    The infrastructure team has finite resources and we can’t offer hosting for community projects.
  • The website’s content and build tooling must be hosted on a GitHub repository in either the rust-lang or rust-lang-nursery organizations.
    The infrastructure team must be able to rebuild the website content at any time (for example if we need to switch hosting), and having it hosted on a GitHub repository inside infra-managed organizations is the best way for us to ensure that. Even though we’d prefer for all the repositories to be public it’s not a requirement.
  • The website must be built and deployed with a CI service.
    We have custom tooling built around hosting static websites on our infra, and at the moment they work with Travis CI and Azure Pipelines. If you need different CI services ask us in advance and we’ll adapt the tooling to your provider of choice.
  • The website must reach an A+ grade on the Mozilla Observatory.
    Browsers have multiple security features toggleable only through HTTP response headers, and those features enhance users’ privacy and prevent exploits from working. An A+ grade on the Observatory indicates all the important headers are correctly set.
  • The website must be hosted on platforms vetted by the infra team.
    We recommend either GitHub Pages or Amazon S3 (in the rust-lang AWS account) as the hosting and CloudFront as the CDN, but if you need other platforms that’s good as long as we consider them secure and reliable.

Static websites configuration

To avoid limitations of some hosting providers we have set up CloudFront to enable additional custom behaviors. These behaviors are configured through a file named website_config.json at the root of the generated website content.

Adding custom headers

One of the requirements for having a static website hosted by the infrastructure team is to reach an A+ grade on the Mozilla Observatory, and that requires custom headers to be set. To set up custom headers you need to add a headers section to website_config.json. This example content includes all the headers needed to reach grade B on the Observatory (to reach grade A+ a Content Security Policy is required):

{
    "headers": {
        "Strict-Transport-Security": "max-age=63072000",
        "X-Content-Type-Options": "nosniff",
        "X-Frame-Options": "DENY",
        "X-XSS-Protection": "1; mode=block",
        "Referrer-Policy": "no-referrer, strict-origin-when-cross-origin"
    }
}

Fixing GitHub Pages redirects

GitHub Pages behaves weirdly when it sits behind CloudFront and needs to issue redirects: since it doesn’t know the real domain name, it will use http://org-name.github.io/repo-name as the base of the redirect instead of the correct protocol and domain. To prevent this behavior the github_pages_origin key needs to be added to website_config.json with the origin base URL as the value (excluding the protocol):

{
    "github_pages_origin": "org-name.github.io/repo-name"
}

Deployment guide

These deployment steps are meant to be executed by a member of the infrastructure team, since they require access to our AWS account.

Configuring AWS

Create a CloudFront web distribution and set the following properties:

  • Origin Domain Name: rust-lang.github.io/repo-name
  • Origin Protocol Policy: HTTPS Only
  • Viewer Protocol Policy: Redirect HTTP to HTTPS
  • Lambda Function Association:
    • Viewer Response: arn:aws:lambda:us-east-1:890664054962:function:static-websites:4
  • Alternate Domain Names: your-subdomain-name.rust-lang.org
  • SSL Certificate: Custom SSL Certificate
    • You will need to request the certificate for that subdomain name through ACM (please use the DNS challenge to validate the certificate)
  • Comment: your-subdomain-name.rust-lang.org

Wait until the distribution is propagated and take note of its .cloudfront.net domain name.

Head over to the domain’s Route 53 hosted zone and create a new record set:

  • Name: your-subdomain-name
  • Type: CNAME
  • Value: the .cloudfront.net domain name you saw earlier

Create an AWS IAM user to allow the CI provider used to deploy website changes to perform whitelisted automatic actions. Use ci--ORG-NAME--REPO-NAME (for example ci--rust-lang--rust) as the user name, allow programmatic access to it and add it to the ci-static-websites IAM group. Then take note of the access key id and the secret access key since you’ll need those later.

Adding deploy keys

To deploy websites we don’t use GitHub tokens (since they don’t have granular access scoping) but a deploy key with write access, unique to each repository. To set up the deploy key you need to be an administrator on the repository; clone the simpleinfra repository and run this command:

$ cargo run --bin setup-deploy-keys rust-lang/repo-name

The command requires the GITHUB_TOKEN (you can generate one here) and the TRAVIS_TOKEN (you can see yours here) to be present. It will generate a brand new key, upload it to GitHub and configure Travis CI to use it if the repo is active there.

Configuring Travis CI

To actually deploy the website, this snippet needs to be added to your .travis.yml (please replace the contents of RUSTINFRA_DEPLOY_DIR and RUSTINFRA_CLOUDFRONT_DISTRIBUTION):

env:
  RUSTINFRA_DEPLOY_DIR: path/to/be/deployed
  RUSTINFRA_CLOUDFRONT_DISTRIBUTION: ABCDEFGHIJKLMN
import:
  - rust-lang/simpleinfra:travis-configs/static-websites.yml

You will also need to set the contents of the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables on the Travis CI web UI with the credentials of the IAM user you created earlier. The secret access key must be hidden from the build log, while the access key id should be publicly visible.

Configuring Azure Pipelines

To actually deploy the website, this snippet needs to be added at the top of your pipeline’s YAML file:

resources:
  repositories:
    - repository: rustinfra
      type: github
      name: rust-lang/simpleinfra
      endpoint: rust-lang

Then you can add this step when you want to execute the deploy (please replace the contents of deploy_dir and cloudfront_distribution):

- template: azure-configs/static-websites.yml@rustinfra
  parameters:
    deploy_dir: path/to/output
    # Optional, only needed if GitHub pages is behind CloudFront
    cloudfront_distribution: AAAAAAAAAAAAAA

You will also need to set the following environment variables in the pipeline:

  • GITHUB_DEPLOY_KEY: value outputted when adding the deploy key earlier (secret)
  • AWS_ACCESS_KEY_ID: access key ID of the IAM user allowed to invalidate CloudFront (public)
  • AWS_SECRET_ACCESS_KEY: access key of the IAM user allowed to invalidate CloudFront (secret)

Infrastructure team documentation

This section contains the documentation about the services hosted and managed by the Rust Infrastructure Team. Most of the linked resources and instructions are only available to infra team members though.

AWS access for team members

Selected members of the Rust Team have access to the AWS account of the project. This includes both members of the Infrastructure Team and members of teams with services hosted on AWS.

This document explains how to access our AWS account, and how to interact with it. If you’re an infrastructure team member and you need to set up or revoke access for another person, read the “AWS access management” page.

Setting up your user after receiving the credentials

The first thing you need to do after receiving your credentials is to change the password and enable 2-factor authentication: until you do these things, access will be restricted automatically to just the permissions needed to configure 2FA.

Sign into the console with the temporary credentials given to you by the infrastructure team member who created the user. You’ll be prompted to change the temporary password: change it and log in again. Then, go to the “My Security Credentials” page, located in the dropdown at the top:

Location of the “My Security Credentials” page

Scroll down and click the “Assign MFA device” button. Choose “Virtual MFA device” (which is classic TOTP) and configure it with your authenticator app. Once you’re done, log out of the console and log in again to gain access to the resources you’re authorized to use.

Do not choose “U2F security key”, even if you own one: due to limitations of the AWS API, that would prevent you from using the CLI, restricting your access to the console alone.

Using the AWS console

The AWS console provides a visual interface to most of the resources in our AWS account.

Sign into the console.

Using the AWS CLI

The AWS CLI allows you to interact with our AWS account from a terminal or a script. To set it up the first time, follow Amazon’s documentation to install it and configure your credentials. The CLI doesn’t use your console password to authenticate: you’ll need to create an access key from the “My Security Credentials” page on the console.

2-factor authentication

To ensure the security of our AWS account, 2-factor authentication is required to interact with the CLI. The Infrastructure Team developed a script that eases the authentication process by creating a temporary session validated with 2FA for the current shell. The session expires in 12 hours, and it’s valid for an unlimited number of invocations.

To use the script, clone the rust-lang/simpleinfra repository in a directory. Then, every time you need to use the AWS CLI run this command in your shell:

eval $(~/PATH/TO/SIMPLEINFRA/aws-creds.py)

That command will prompt you for your 2FA code, and it will set a few environment variables in the current shell with the temporary credentials. You’ll need to run the command again after 12 hours, or if you want the credentials on another shell.

Plaintext credentials

By default, AWS CLI stores your credentials (including the secret key) in the ~/.aws/credentials file, without any kind of encryption. While the danger of having plaintext credentials stored in your home directory is partially mitigated by the 2FA requirement, it’d be best not to store them anyway.

If you use a password manager with a CLI interface, an approach you can take to avoid the problem is to store your credentials in the password manager, and configure the CLI to call your password manager to fetch the credentials when needed.

AWS access management

This document explains how to setup and manage AWS access for Rust team members. If you’re a team member and you need to access AWS with your existing credentials, or you have received your credentials for the first time, check out the “AWS access for team members” page.

Granting access

To grant access to a person, go to team-members-access/_users.tf in the Terraform configuration and add the new user to it, specifying which teams they should be a member of. The user will be created as soon as you apply the configuration.

By default, there will be no credentials attached to the user. To allow the user to log in, go to the IAM console, open the security credentials page of the user you just created, and enable a console password. Let AWS generate a random one, and require the password to be changed on first login.

Finally, communicate to the user that they can log in with the generated password, and point them to the “AWS access for team members” page to learn how to enable 2FA and gain access to their account.

Revoking access

To revoke access from a person, log into the IAM console, open the security credentials page of the user you want to delete, and:

  • Disable console access by clicking “Manage” on the console password
  • Disable 2-factor authentication by clicking “Manage” on the assigned MFA device
  • Remove all the access keys, including inactive ones, by clicking the “x”.

Once all access has been removed from the console, go to team-members-access/_users.tf in the Terraform configuration, remove the user, and apply the configuration.

Selection of AWS Regions

The Rust project has deployed a lot of resources on AWS, and most of them are in us-west-1. As we grow our footprint and expand to more international locations, we are reconsidering which regions we want to use.

Please note that this is mainly for new resources that we are deploying, such as new AWS accounts. Existing resources might get migrated, but this is a significant effort that might not be worth it given our limited time.

Selection Criteria

We have two criteria that we use to make this decision:

  • Price - Pricing differs between regions, and we can reduce our costs by deploying to cheaper regions.
  • Location - We want to host our services close to most of our users. But given that Rust is used globally, we won’t be able to satisfy everyone.

Price

Looking at the current distribution of our bill, outbound traffic is by far the most expensive item. This severely limits the price savings we might enjoy by switching to a cheaper region.

Even if we assume that we will be able to significantly reduce our outbound traffic cost on AWS (e.g. by moving to Fastly), the difference between regions is not massive.

Locations

Because most of our traffic comes from the US, we want to run most of our infrastructure here. The following regions are interesting to us:

  • us-east-1 or us-east-2 (cheaper)
  • us-west-1 (already in use)

Services that we want to distribute more globally, such as the dev desktops, should also be deployed to Europe. Here, the following regions seem the most reasonable:

  • eu-west-1 (cheaper)
  • eu-central-1 (more central location)

Decision

We decided to use the following regions for new resources:

  • us-east-2 - Given that most of our infrastructure is hosted in the US, we want to use a cheaper region here to benefit at least a little bit.
  • eu-central-1 - Since we’re not deploying that many resources to Europe, we want to optimize for location here.

When deploying new resources, they should be deployed to us-east-2 by default. Only resources that need to be geographically distributed should be deployed to eu-central-1.

Bastion server

Logging into servers through the bastion

To improve the security of our infrastructure it’s not possible to connect directly to a production server with SSH. Instead, all connections must come from a small server called the “bastion”, which only allows connections from a few whitelisted networks and logs any connection attempt.

To log into a server through the bastion, use one of the following methods:

  • Use SSH’s -J flag:

    ssh -J <username>@bastion.infra.rust-lang.org <username>@servername.infra.rust-lang.org
    
  • Configure your SSH client to always jump through the bastion when connecting to a host:

    • Add this snippet to your SSH configuration file (usually located in ~/.ssh/config):

      Host servername.infra.rust-lang.org
          ProxyJump <username>@bastion.infra.rust-lang.org
      
    • Use SSH:

      ssh <username>@servername.infra.rust-lang.org
      

Please remember the bastion server only allows connections from a small list of IP addresses. Infra team members with AWS access can change the whitelist, but it’s good practice to either have your own bastion server or a static IP address.

The SSH keys authorized to log into each account are stored in the simpleinfra repository. Additionally, people with sensitive 1password access can use the master key stored in the vault to log into every account, provided their connection comes from any whitelisted IP.

Common maintenance procedures

Adding a new user to the bastion server

To add a new user to the bastion you need to add their public key to a file named <username>.pub in ansible/roles/common/files/ssh-keys, and change the Ansible playbook, adding the user to the list of unprivileged users. Please leave a comment clarifying which servers the user will have access to.

Once that’s done apply the playbook and add a new whitelisted IP address.

Editing a whitelisted IP

For privacy reasons, all the static IP addresses of team members with access to the bastion are stored on AWS SSM Parameter Store instead of public git repositories. When running the following commands, replace USERNAME and IP_ADDRESS with the proper values.

Adding a whitelisted IP

  1. Run:
    aws ssm put-parameter --type String --name "/prod/bastion/allowed-ips/USERNAME" --value "IP_ADDRESS/32" --region us-west-1
    
  2. Add the username to the list in terraform/bastion/firewall.tf (local variable allowed_users).
  3. Apply the Terraform configuration.

Updating a whitelisted IP

  1. Run:

    aws ssm put-parameter --overwrite --type String --name "/prod/bastion/allowed-ips/USERNAME" --value "IP_ADDRESS/32" --region us-west-1
    
  2. Apply the Terraform configuration.

Removing a whitelisted IP

  1. Run:

    aws ssm delete-parameter --name "/prod/bastion/allowed-ips/USERNAME" --region us-west-1
    
  2. Remove the username from the list in terraform/bastion/firewall.tf (local variable allowed_users).

  3. Apply the Terraform configuration.

Bors

The infrastructure team manages an instance of Homu called “Bors”, to be used by repositories inside the rust-lang organization. The instance is available at bors.rust-lang.org, and is backed by the @bors GitHub account.

The service is configured with Terraform, and it’s automatically deployed from the rust-lang/homu repository onto our ECS cluster.

Maintenance procedures

Fixing inconsistencies in the queue

Homu is quite buggy, and it might happen that the queue doesn’t reflect the actual state in the repositories. This can be fixed by pressing the “Synchronize” button in the queue page. Note that the synchronization process itself is a bit buggy, and it might happen that PRs which were approved but failed are re-approved again on their own.

Adding a new repository to bors

There are multiple steps needed to add a repository to our Bors instance:

  1. The @bors GitHub account needs to be granted write access to the repository.

  2. Each CI provider needs to have a single GitHub Check Run to gate on. This is not provided by default on GitHub Actions, but it can be simulated with these two jobs, which will generate a bors build finished check:

    end-success:
      name: bors build finished
      if: success()
      runs-on: ubuntu-latest
      needs: [ALL, OTHER, JOBS]
      steps:
        - name: Mark the job as successful
          run: exit 0
    
    end-failure:
      name: bors build finished
      if: "!success()"
      runs-on: ubuntu-latest
      needs: [ALL, OTHER, JOBS]
      steps:
        - name: Mark the job as a failure
          run: exit 1
    

    Make sure to replace [ALL, OTHER, JOBS] with a list of all the jobs you want to gate on.

    These jobs need to run on specific branches (auto and try) so it’s necessary to add those branches to the list of branches tested by the CI provider. For GitHub Actions that looks like this:

    on:
      push:
        branches: [
          auto,   # Added for bors
          try     # Added for bors
        ]
    
  3. Add the repository name to the bors permissions array in the team repository, and grant the bors.REPOSITORY.review permission to the right teams or people. You can see an example of adding bors permissions to a team here.

  4. Add the repository to the repositories map in the Terraform configuration file. This will create a webhook and inject its secret key in the bors execution environment.

  5. Add the repository to the Bors configuration, taking inspiration from other repositories. Note that the environment variables used in that config will be set automatically as long as you completed step 3 above.

  6. Give it a test by commenting @bors ping in any PR. If you get a response back, you can then try to approve the PR with @bors r+.

Content Delivery Networks

Users of the Rust programming language interact with the project’s infrastructure in various ways. They access the project’s website and documentation, query the crates index, and download Rust releases and crates. These resources are hosted by the Rust project and served through a Content Delivery Network (CDN).

This document outlines why we use CDNs, for what, and how we have set them up.

Objectives

We have three goals for our use of CDNs in our infrastructure:

  1. Reduce costs of outbound traffic through cheaper pricing and caching
  2. Reduce load on origin servers to save compute resources
  3. Provide a way to rewrite legacy URLs for some resources

Reducing Costs

As an open source project, we have to be very mindful of our infrastructure costs. Outbound traffic is by far one of the most expensive items on our monthly bills, and one that will continue to increase as Rust gets more popular.

Cloud providers typically charge different rates for outbound traffic based on the service. For example, serving data straight from Amazon S3 is more expensive than serving the same data through an Amazon CloudFront distribution. This is why we now use a CDN by default, even for services that can’t make use of other features of a CDN such as caching.

Infrastructure

Most of the project’s resources are hosted on AWS. Static content is stored in Amazon S3, while dynamic content is loaded from a server. Both types of content are served through Amazon CloudFront, the Content Delivery Network of AWS.

When a user accesses a resource, e.g. when they are trying to download a crate, they will access it through the CDN. Different distributions map domain names to a configuration and a backend (called the origin). For example, downloading a crate from static.crates.io goes through a distribution that fetches the crate from an S3 bucket and then caches it for future requests.

                             ┌──► S3 (static content)
                             │
User ───────► CloudFront ────┤
                             │
                             └──► Server (dynamic content)

Distributions

There are many distributions, all of which are configured in the rust-lang/simpleinfra repository. However, their usage is very unevenly distributed. The following distributions are the most important ones for the project, both in terms of traffic and criticality for the ecosystem.

Rust Releases

Whenever a user installs or updates Rust, pre-compiled binaries are downloaded from static.rust-lang.org. The same is true when Rust is installed in a CI/CD pipeline, which is why this distribution has by far the highest traffic volume.

Rust binaries are static and are stored in Amazon S3, from where they are served by the CloudFront distribution.

The distribution for static.rust-lang.org has a custom router that runs in an AWS Lambda function. The router provides a way to list files for a release and rewrites the legacy URL for rustup.sh.

The cache for Rust releases is invalidated nightly.

Crates

Similar to Rust releases, crates are served as static content from static.crates.io. While still being the second-largest distribution in our infrastructure, it is much smaller than the one for releases.

Crates are static and stored in Amazon S3, and served through a CloudFront distribution.

Crater agents

Service configuration

Crater agents are servers with our standard configuration running a Docker container hosting the agent. A timer checks for updates every 5 minutes, and if a newer Docker image is present the container will automatically be updated and restarted. This service is managed with Ansible.

Common maintenance procedures

Starting and stopping the agent

The agent is managed by the container-crater-agent.service systemd unit. That means it’s possible to start, stop and restart it with the usual systemctl commands:

systemctl stop container-crater-agent.service
systemctl start container-crater-agent.service
systemctl restart container-crater-agent.service

Inspecting the logs of the agent

Logs of the agents are forwarded and collected by journald. To see them you can use journalctl:

journalctl -u container-crater-agent.service

Manually updating the container image

The container is updated automatically every 5 minutes (provided a newer image is present). If you need to update it sooner you can manually start the updater service by running this command:

systemctl start docker-images-update.service

Dev Desktops

The dev desktops provide maintainers and contributors to the Rust Project with free access to high-powered cloud compute. They are part of the Cloud Compute Program by the Rust Foundation.

Machine           Architecture  Perf enabled  Location
dev-desktop-eu-1  aarch64       Yes           Germany
dev-desktop-eu-2  amd64         No            Netherlands
dev-desktop-us-1  aarch64       Yes           N. Virginia, US
dev-desktop-us-2  amd64         No            Washington, US

How to apply to the program

At this time, access to the program and the compute instances is limited to maintainers and core contributors of the Rust Project. While the program is under development, it is limited to certain teams. If you are in one of these teams, you should automatically have access.

If you feel like your work on the Rust project would be significantly improved by access to a powerful build machine, reach out to infra@rust-lang.org with the following information:

  • Your GitHub handle
  • A short description of how you would use and benefit from the dev desktops

How to connect to a dev desktop

Each user has their own account on the dev desktops. The account is named after the user’s GitHub handle, with gh- as a prefix. For example, a user with the GitHub handle user will have a user account with the name gh-user on the dev desktop.

Users can connect to the dev desktop with SSH. The dev desktops use public key authentication, and automatically fetch the user’s public keys from GitHub.

You can connect to the instance with the following command:

ssh <your-username>@<name>.infra.rust-lang.org

Replace <name> with the machine name from the table at the top of the page. For example, connect to dev-desktop-eu-1 using the hostname dev-desktop-eu-1.infra.rust-lang.org.

If you don’t have a public key on GitHub, read the following guides that explain how to create an SSH key and add it to your GitHub account. It might take a few minutes after the key has been added before the dev desktops get updated.

To make the command easier, you can configure an alias in your ~/.ssh/config like so:

Host rustvm
  User <your-username>
  HostName <name>.infra.rust-lang.org

Then you can connect with ssh rustvm.

How to set up your account

When connecting to the machine for the first time, there are a few things you might want to do.

First, check that your Git username and email are configured correctly.

git config -l --global

You can configure your username and email address with:

git config --global user.name "Your name"
git config --global user.email "your-email"

How to customize your shell

You can set your default shell on the dev desktops by adding yourself to a configuration file in the rust-lang/simpleinfra repository. Open ansible/roles/dev-desktop/defaults/main.yml, look for the variable vars_user_config, and add yourself to the list.

vars_user_config:
  - username: gh-jdno
    shell: /usr/bin/zsh
  - username: gh-WaffleLapkin
    shell: /usr/bin/fish

Open a pull request and request a review from @rust-lang/infra (or ping us in #t-infra on Zulip).

After the pull request is merged, an infrastructure admin has to deploy the new configuration to the dev desktops. Only after that will your default shell be changed.

How to install a Rust toolchain

The dev desktops don’t have Rust pre-installed, but instead make it easy to install a specific toolchain from a local repository or worktree.

First, you want to run the following command to install rustup:

/usr/local/bin/init.sh

If you don’t want or need to work with your own version of Rust, you can skip the next section and start working.

If you haven’t done so yet, open the rust-lang/rust repository on GitHub and create a fork in your personal account. Then connect to the dev desktop and run the following script:

/usr/local/bin/setup_rust.sh

The script will clone your personal fork to the dev desktop, check out the latest version from rust-lang/rust, and compile it. Once that’s done, it will link the stages so that you can work with them locally.

The directory contains more scripts to manage worktrees and Rust versions. Run help.sh to get a list and a short description of them.

How to interact with GitHub

The dev desktops are designed to work with repositories on GitHub that belong to your user account. A GitHub App is used to protect your credentials and give you granular control over the repositories that the dev desktops can access.

First, go to https://github.com/apps/rust-cloud-vms to give the app access to your repositories. It’s recommended to only grant access to the repositories that you want to use on the dev desktop, e.g. your fork of rust-lang/rust.

Then connect to the dev desktop and clone the repository that you want to work on with HTTPS. From there, you can work with the repository like you would normally do.

Under the hood, the GitHub App acts as a credentials helper for Git and generates temporary access tokens that are scoped to the permissions that you have granted the application. If you get an error, review the permissions and ensure that the app is allowed to access your repository.

How to set up remote development in Visual Studio Code

Most modern code editors provide support for remote development via SSH. This can be used to write code locally, but execute it inside the dev desktop. While the configuration will differ slightly, the following example for Visual Studio Code should be applicable to other editors as well.

Setting up remote development with VS Code is pretty straightforward, and is described in detail in VS Code’s documentation: Remote Development using SSH. In summary:

  1. SSH into the dev desktop and clone the repository that you want to work on to a local folder
  2. Then open VS Code on your machine and install the Remote Development Extension Pack
  3. Open the command palette and search for “Remote-SSH: Connect to host”
  4. Enter your username and the instance name (<your-username>@<instance>)
  5. Select the path for the cloned repository from step 1
  6. Install any extensions that you want to run on the server (e.g. rust-analyzer)
  7. Use VS Code to run or debug the code remotely

How to request new packages and tools

If you need a specific tool or package that is not installed on the dev desktops, you can request its installation by opening a pull request against the rust-lang/simpleinfra repository. The Ansible role dev-desktop contains a tasks file called dependencies.yml that lists the system packages that are installed on the machines. Add the required package to this list, open a pull request, and ping @rust-lang/infra for a review.

It helps to check https://packages.ubuntu.com/ first and make sure that the package is available for both arm64 and amd64 architectures. The dev desktops are currently running Ubuntu 22.04 LTS.

Available disk space

Every user has a quota of how much disk space they can use. Currently, the quota is set to 150GB.

Exceeding the quota will result in the error Disk quota exceeded (os error 122).

The remarks on disk space section of the rustc dev guide contains some tips on how to clean your disk space.

How to give feedback and report issues

If you experience any problems with the dev desktops, or have feedback and suggestions, get in touch with the infrastructure team:

#t-infra on Zulip

We might ask you to create an issue in the rust-lang/simpleinfra repository.

GitHub App for pushing to GitHub from the dev desktops

These instructions are for server-side setup and debugging of the dev-desktop github app. The user only needs to be directed to the app installation URL and everything should just work for them.

We’re using the PyGithub library for all GitHub operations. You can find the docs at https://pygithub.readthedocs.io/en/latest/introduction.html

How to setup an App

  1. Go to https://github.com/settings/apps
  2. Click “New GitHub App”
  3. Fill out the metadata (name and URL)
  4. Disable the webhook checkbox
  5. Set “Contents - Repository contents, commits, branches, downloads, releases, and merges.” to read/write
  6. Set “Workflows - Update GitHub Action workflow files.” to read/write
  7. Set the app to “enable on any account”
  8. Create the App
  9. Go to https://github.com/settings/apps/{your_app_name_here} and copy the App ID into app_id.txt (same folder as gen_temp_access_token.py)

How to generate a .pem file for your App

  1. Go to https://github.com/settings/apps/{your_app_name_here}#private-key and generate a private key
  2. A download starts; save the file somewhere private.
  3. Copy the .pem file into the same folder as gen_temp_access_token.py and name it dev-desktop.private-key.pem

How to install the app for a user

  1. Direct the user to https://github.com/settings/apps/{your_app_name_here}/installations
  2. Let them install it on the org/user they want, restricting it to the repositories they want to use

How to generate a temporary access token for a specific user

  1. Invoke gen_temp_access_token.py <github_username> <github_repository_name>

Integration into git command line

We’re using Git credential helpers. To debug a credential helper, have it in userspace and invoke it with:

git -c credential.helper=/path/to/helper -c credential.UseHttpPath=true push origin branch

Note that this does not work for remotes that are registered with SSH URLs. You must use HTTPS!

The first command line argument is get, store or remove. In our case, we just abort (exit(0)) for everything but get, as we regenerate credentials on every invocation anyway.

The actual arguments are passed via stdin and usually look like

protocol=https
host=github.com
path=your_repo.git

Discord moderation bot

The bot is hosted on the rust-ecs-prod ECS cluster, on the project’s AWS account, with the discord-mods-bot service name. Its container image is stored in a ECR repository with the same name, and its data is stored in the shared RDS PostgreSQL instance.

Automatic deploys are setup from the rust-lang/discord-mods-bot GitHub repository.

The Discord bot account is rustbot#4299. pietroalbini, Mark-Simulacrum, alexcrichton and aidanhs have access to the developer portal.

Common maintenance procedures

Instructions on how to manage ECS services are available here.

Domain names and DNS

All the DNS records of the domains owned by the Rust Infrastructure team are hosted on AWS Route 53, and can be tweaked by members of the team. This document contains instructions for them on how to make changes.

Changing DNS records of a domain managed with Terraform

Warning: not all domain names are yet managed with Terraform. In the console, if a zone’s comment doesn’t start with [terraform] you’ll need to make changes manually from the UI. Work is underway to migrate every domain to Terraform though.

Warning: terraform/services/dns only contains the definition of DNS records pointing to resources managed outside of Terraform. When Terraform manages a resource it will automatically add the required records on its own. See the service’s documentation to learn where its Terraform configuration lives.

DNS records are managed in the terraform/services/dns directory of our Terraform configuration. A file named after the domain name, ending in .tf, exists for each managed domain, and it contains some basic information plus its records.

The configuration supports adding A, CNAME, MX and TXT records. Inside the module definition contained in the domain’s file, each record type has its own map: the map keys are the names of the records, while the values are a list of record values.

For example, to add a pages.rust-lang.org CNAME pointing to rust-lang.github.io you’ll need to add this to terraform/services/dns/rust-lang.org.tf:

module "rust_lang_org" {
  # ...

  CNAME = {
    "pages.rust-lang.org." = ["rust-lang.github.io"],
    # ...
  }
}

Once you have made all the changes you can apply them with:

terraform apply

Managing DNS for a new domain with Terraform

Setting up Terraform to manage the DNS records of a new domain name involves a few steps. First of all you need to decide the identifier used inside Terraform for that domain. By convention, the identifier is the domain name itself with . and - replaced with _. For example rust-lang.org becomes rust_lang_org.

Then you can create a file in terraform/services/dns named after the domain name, ending in .tf, with this content (take care of replacing the placeholders):

module "<IDENTIFIER>" {
  source = "./domain"

  domain = "<DOMAIN-NAME>"
  comment = "<COMMENT-FOR-THE-DOMAIN>"
  ttl = 300
}

Finally you need to output the ID of the Route53 zone, allowing other parts of our Terraform configuration to add records. Add this snippet to terraform/services/dns/outputs.tf:

# ...

output "zone_<IDENTIFIER>" {
  value = module.<IDENTIFIER>.zone_id
}

Once you’re done you can apply the changes with:

terraform init
terraform apply

Adding subdomain redirects

Our Terraform configuration supports creating redirects from an arbitrary number of subdomains we control to a URL. Redirects are created with these pieces of infrastructure:

  • An S3 bucket for each set of redirects, named rust-http-redirect-<HASH>. The bucket has website hosting enabled, configured to redirect all the incoming requests to the chosen URL. This allows implementing redirects without an underlying server.

  • An ACM certificate (plus the DNS records to validate it) for each set of redirects, with all the sources as alternate names. This is used to enable HTTPS redirects.

  • A CloudFront distribution for each set of redirects to support HTTPS requests, using the previously generated ACM certificate and forwarding requests to the S3 bucket.

  • Route53 records for each redirect in the related zones: CNAMEs for subdomains, and ALIASes for apex domains.

All the redirects are defined in terraform/redirects.tf, with a module for each destination URL. Either create a new module if you need to redirect to a new URL, or add a new subdomain to an existing module. See an example module here (take care of replacing the placeholders):

module "redirect_<IDENTIFIER>" {
  source = "./modules/subdomain-redirect"
  providers = {
    aws       = "aws"
    aws.east1 = "aws.east1"
  }

  to = "<DESTINATION-URL>"
  from = {
    "<SUBDOMAIN-1>" = module.dns.zone_<DOMAIN-1-IDENTIFIER>,
    "<SUBDOMAIN-2>" = module.dns.zone_<DOMAIN-2-IDENTIFIER>,
  }
}

Once you have made all the changes you can apply the configuration with:

terraform init
terraform apply

Note that each change is going to take around 15 minutes to deploy, as CloudFront distribution changes are really slow to propagate. Also, it’s normal to see a bunch of resources being recreated when a domain is added or removed from an existing redirect, as the ACM certificate will need to be regenerated.

Transferring domain names to Rust

These are the steps a member of the infrastructure team needs to take to transfer a domain name to the Rust project’s registrar:

  1. Ask inside the infrastructure team if this is a domain name the project wants to own. In some more complicated cases this will need to be escalated to the Leadership Council.

  2. If the domain name doesn’t already use AWS Route 53 as its nameserver, ask the current owner of the domain for a list of all the DNS records that will need to be migrated. Then, add all the records to a new hosted zone on Route 53 before the transfer of the domain. See the section below on transferring DNS for more information on this step.

  3. Ask the current owner to unlock the domain name for transfer, and get the transfer code from them. The transfer code is key to transferring the domain, so avoid receiving it on public communication platforms.

  4. Go to the Transfer Domain section of AWS Route 53 and enter the domain name. If it doesn’t give an error (which should detail which steps are missing) enter the transfer code you received earlier, and choose to use an existing Route 53 hosted zone (it should auto-complete the right one). Until the Rust Foundation is up, use Pietro’s details as the domain contacts. Finally review everything and complete the transfer process.

  5. Tell the current owner to wait for an email from their registrar, which will ask to click on a link to confirm the domain name transfer.

  6. The transfer process will take a while. Once admin@rust-lang.org receives an email saying the domain has been transferred, you’re done! 🎉🎉🎉

Transferring DNS

Most domain names use their registrar as the DNS server, but that means that once the domain is transferred away the old registrar also stops serving DNS traffic. Because of that we need to ensure all the DNS records are correctly copied over to AWS Route 53 before actually starting the transfer process.

Explicitly ask the current domain owner for all the A, AAAA, CNAME, TXT and MX records. Everything except the MX records needs to be copied to the Terraform DNS configuration (create a new file for the domain name, and take inspiration from the other domain names).

If you notice some of the records are referring to HTTP redirect services provided by the current registrar, those will have to wait until the domain has been transferred. Once the transfer has occurred, add a new domain redirect on Terraform. This has to be done after the transfer to be able to request the TLS certificate for the HTTPS redirect.

If the domain has MX records, those will need to be migrated to Mailgun. Go to Mailgun and add the domain name there. Ensure it’s in the US region, it uses shared IPs, and it has a 1024-bit DKIM key (2048-bit keys do not fit into a single AWS Route 53 record). Then copy all the records except the CNAME tracking one over to the Terraform DNS configuration, and wait for the domain to be transferred. Once the transfer happens, go back to Mailgun and verify the DNS settings for the domain. Finally, add the domain to the team repository’s config.toml and create the mailing lists you need through the usual process.

docs.rs

ECS services management

Some applications running on the project’s infrastructure are hosted in ECS clusters on our AWS account. This document explains the common maintenance procedures one should follow when operating them. Most of the actions explained here require AWS access.

Note: our ECS cluster is located in the Northern California (us-west-1) AWS region. Make sure it’s the selected region when interacting with the AWS console.

Inspecting the logs

Logs for applications hosted on ECS are stored in CloudWatch Logs, and can be inspected in the AWS Console. Open the console, go to CloudWatch Logs and select the log group called /ecs/<service-name>. There are two ways to inspect the logs:

  • If you need to look at the application as a whole, you can get an aggregated view by clicking the “View all log events” button (or, on the classic interface, “Search Log Group”).

  • If you need to debug a specific instance of a container, separate log streams for each running task are available. The streams are named after the container name and the task ID.

Logs are periodically purged (retention varies based on the specific application).

Restarting an application

To restart an application, you can force a new deployment without actually pushing any new code beforehand. To do so, run this command:

aws ecs update-service --cluster rust-ecs-prod --service <service-name> --force-new-deployment

Rolling back a deployment

To rollback a bad deployment you can run the aws-rollback.py script (stored in the simpleinfra repository) with your AWS credentials present in the shell. The script requires the name of the ECR container image repository as its first and only argument:

./aws-rollback.py <image-repository-name>

The script will show the list of images available in the repository and ask for the number of the image to roll back to. Once that’s entered, the script will point the latest tag to the image you chose, and if an ECS service with the same name as the repository exists, that service will be restarted too.

Deploying application changes

Each application stores its Docker images in an ECR repository in our AWS account. You can deploy changes both manually and automatically (with GitHub Actions).

For production applications it’s recommended to set up automatic deployments.

Manual deployments

To manually deploy a local build, you first need to tag your built image with its ECR name:

docker tag <image-tag> 890664054962.dkr.ecr.us-west-1.amazonaws.com/<repository-name>:latest

Then you can authenticate with ECR and push it:

$(aws ecr get-login --no-include-email --region us-west-1)
docker push 890664054962.dkr.ecr.us-west-1.amazonaws.com/<repository-name>:latest

Finally, you need to force a new deployment of the ECS service with:

aws ecs update-service --cluster rust-ecs-prod --service <service-name> --force-new-deployment

Automatic deployments with GitHub Actions

The infrastructure team prepared an action for GitHub Actions that automates deployments from CI. To use it, ask a team member to set up AWS credentials in your repository, and then add this snippet to your workflow:

- name: Build the Docker image
  run: docker build -t deploy-image .

- name: Deploy to production
  uses: rust-lang/simpleinfra/github-actions/upload-docker-image@master
  with:
    image: deploy-image
    repository: <ecr-repository-name>
    region: us-west-1
    redeploy_ecs_cluster: rust-ecs-prod
    redeploy_ecs_service: <service-name>
    aws_access_key_id: "${{ secrets.AWS_ACCESS_KEY_ID }}"
    aws_secret_access_key: "${{ secrets.AWS_SECRET_ACCESS_KEY }}"
  if: github.ref == 'refs/heads/<deploy-branch>'

Be sure to replace <ecr-repository-name>, <service-name> and <deploy-branch> with the correct values for your workflow. Once the workflow changes are merged in the branch you chose for deploys, any future commits pushed there will be deployed to the ECS cluster.

Monitoring

Service configuration

Our monitoring service is composed of three parts: Prometheus to scrape, collect and monitor metrics, Alertmanager to dispatch the alerts generated by Prometheus, and Grafana to display the metrics. All the parts are configured through Ansible.

The metrics are not backed up, as Prometheus purges them after 7 days anyway, but the Grafana dashboards are stored in a PostgreSQL database, which is backed up with restic in the rust-backups bucket (monitoring subdirectory). The password to decrypt the backups is in 1password.

Common maintenance procedures

Scrape a new metrics source

Prometheus works by periodically scraping a list of HTTP endpoints for metrics, written in its custom format. In our configuration the list is located in the prometheus_scrape section of the ansible/playbooks/monitoring.yml file in the simpleinfra repository.

To add a new metrics source, add your endpoint to an existing job or, if the metrics you’re scraping are not related to any other job, a new one. The endpoint must be reachable from the monitoring instance. You can read the Prometheus documentation to find all the available options.

Create a new alert

Alerts are generated by Prometheus every time a custom rule defined in its configuration evaluates to true. In our configuration the list of rules is located in the prometheus_rule_groups section of the ansible/playbooks/monitoring.yml file in the simpleinfra repository.

To add a new alert you need to create an alerting rule either in an existing group or a new one. The full list of options is available in the Prometheus documentation.

Add permissions to a user

There are two steps needed to grant a user access to our Grafana instance.

First of all, to enable the user to log into the instance with their GitHub account they need to be a member of a team authorized to log in. The list of teams is defined in the grafana_github_teams section of the ansible/playbooks/monitoring.yml file in the simpleinfra repository, and it contains a list of GitHub team IDs. To fetch an ID you can run this command:

curl -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/orgs/<ORG>/teams/<NAME> | jq .id

Once the user is a member of a team authorized to log in they will automatically be added to the main Grafana organization with “viewer” permissions. For infrastructure team members that needs to be changed to “admin” (in the “Configuration” -> “Users” page); otherwise leave it as “viewer”.

By default a viewer only has access to the unrestricted dashboards. To grant access to other dashboards you’ll need to add them to a team (in the “Configuration” -> “Teams” page). It’s also possible to grant admin privileges to the whole Grafana instance in the “Server Admin” -> “Users” -> “<username>” page. Do not grant those permissions except to trusted infra team members.

Additional resources

rust-bots

Common maintenance procedures

Adding a new domain

First, run sudo vim /etc/nginx/nginx.conf and edit the nginx configuration to add the domain.

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name <domain>.infra.rust-lang.org; # Edit <domain> to match here

    location /.well-known/acme-challenge {
        root /home/ssl-renew/challenges;
    }

    location / {
        # configure the domain here
    }
}

Then run sudo -i -u ssl-renew vim renew.sh. Add a --domains line to the script with the domain you’re adding.

Then, run the script: sudo -i -u ssl-renew ./renew.sh

How the Rust CI works

Continuous integration (CI) workflows on the rust-lang/rust repository ensure that the master branch is always in a valid state.

The CI infrastructure is described in detail in the rustc-dev-guide.

Sentry

The infrastructure team manages a Sentry organization on sentry.io for the Rust Team to use. The instance is generously sponsored by Sentry, and this document explains how to use it.

Log into the instance

Every member of the rust-lang GitHub organization can authenticate in our Sentry instance, using their GitHub credentials. Visit the authentication page, click the “Single Sign-On” tab and enter the rust-lang Organization ID. You’ll then be prompted to log in with your GitHub account!

If this is the first time signing into our Sentry organization, you might have to request access to the teams you’re on. Once you request access, a member of the infrastructure team will approve it.

Request a new project

If you’re a member of a Rust Team and you want to use Sentry for a project your team manages, you need to follow these steps:

  1. If the project is public facing (i.e. people outside the team are supposed to access it) you need to contact the Leadership Council to request support in amending the privacy policy, adding a note that your service also uses Sentry, similar to the existing notes.

  2. Once the privacy policy is sorted out (whenever needed), you can contact the infrastructure team to create a new project in the Sentry interface and potentially a new Sentry team.

  3. Finally, you can integrate the Sentry SDK with your project.

Creating a new project

This section documents how the infrastructure team can actually create new projects when requested. You need to either have a personal Sentry account with “Owner” permissions, or access to the Sensitive 1Password vault (where the admin credentials are stored).

To create a project, authenticate in Sentry and visit the create new project page. Pick the technology stack the team is using, a relevant name and the team responsible for it (you can create new teams by clicking the “+” icon). Finally, if you created a new team, add the relevant people to it.

Language

This section documents meta processes by the language team.

  • The language team has communications channels on Discord as well as Zulip.

RFC Merge Procedure

Once an RFC has been accepted (i.e., the final comment period is complete, and no major issues were raised), it must be merged. Right now this is a manual process, though just about anyone can do it (if you’re not a subteam member, you’ll have to open a PR rather than merging the RFC manually). Here is the complete set of steps to merge an RFC – in some cases, not all the steps will be applicable.

Step 1: Open tracking issue

Open a tracking issue over on rust-lang/rust. Here is a template for the issue text. You’ll have to adjust the various places labeled XXX with some suitable content (e.g., the name of the RFC, or the most appropriate team).

This is a tracking issue for the RFC "XXX" (rust-lang/rfcs#NNN).

**Steps:**

- [ ] Implement the RFC (cc @rust-lang/XXX -- can anyone write up mentoring
      instructions?)
- [ ] Adjust documentation ([see instructions on rustc-dev-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])

[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs

**Unresolved questions:**

XXX --- list all the "unresolved questions" found in the RFC to ensure they are
not forgotten

Add the following labels to the issue:

  • B-rfc-approved
  • C-tracking-issue
  • the appropriate T-XXX label

(If you don’t have permissions to do so, leave a note cc’ing the appropriate team and asking them to do so.)

Step 2: Merge the RFC PR itself

In your local git checkout:

  • Merge the RFC PR into master in your fork
  • Add a commit that moves the file name from 0000- to its RFC number
  • Edit the new file to include links to the RFC PR and the tracking issue you just created in the header
  • Open a PR or push directly to the master branch on rust-lang/rfcs, as appropriate

Step 3: Leave a comment

Leave a final comment on the PR directing everyone to the tracking issue. Something like this, but feel free to add your own personal flavor (and change the team):

**Huzzah!** The @rust-lang/lang team has decided **to accept** this RFC.

To track further discussion, subscribe to the tracking issue here:
rust-lang/rust#41517

That’s it, you’re done!

Triage meeting procedure

This page documents how to run a lang team triage meeting, should you have the misfortune of being forced to do so.

Attending a meeting

If you would just like to attend a lang-team triage meeting, all you have to do is join the zoom call (the URL is attached to the calendar invite below).

Scheduling

Note that the scheduling for all meetings is recorded in the team calendar, links to which can be found on the rust-lang/lang-team repository.

Pre-triage

To start, we have a pre-triage meeting which occurs before the main meeting. This is not recorded. It is boring.

To execute this meeting you:

  • Open the Current Meeting Dropbox Paper document
  • Skim down the action items and look to see if there are any you know have been handled
    • they can be checked off and removed
  • Skip down to the Triage section
  • For each Triage section, click on the link and populate it with what you find
    • typically it is best to copy-and-paste the title of the issue, so that links remain intact
  • For each item, click in and try to add a few notes as to the main topic
    • look for things where there isn’t much discussion needed, or just reminders
    • these can be handled quickly in the meeting, or perhaps not at all
    • items that require more discussion will need time allotted for them

Main meeting

  • Ping the team on discord @lang-team
  • Begin the recording on Zoom, if you have access
    • If nobody has access to the recording, oh well, we don’t do it every week
  • Discuss item by item and take some notes on what was said
    • Add specific actions to the action items section above
    • If a consensus arises, make sure to create an action item to document it!
    • The goal should be that we leave some comment on every issue

After meeting

  • Export the meeting file to markdown
    • you will need to clean up “check boxes” – Niko usually searches and replaces ^(\s*)[ ] with \1* [ ] or something like that to insert a * before them, which makes them valid markdown
  • Upload video to youtube if applicable and get the URL
  • Add the file to the minutes directory of rust-lang/lang-team repository with a file name like YYYY-MM-DD.md

Libs

This section documents meta processes by the Libs team.

Where to find us

The rust-lang/libs-team GitHub repository is the home of the Libs team. It has details on current project groups, upcoming meetings, and the status of tracking issues.

The Libs team hangs out primarily in the rust-lang Zulip these days in the #t-libs stream.

You can also find out more details about Zulip and how the Rust community uses it.

Maintaining the standard library

Everything I wish I knew before somebody gave me r+

This document is an effort to capture some of the context needed to develop and maintain the Rust standard library. Its goal is to help members of the Libs team share the process and experience they bring to working on the standard library so other members can benefit. It’ll probably accumulate a lot of trivia that might also be interesting to members of the wider Rust community.

This document doesn’t attempt to discuss best practices or good style. For that, see the API Guidelines.

Contributing

If you spot anything that is outdated, under specified, missing, or just plain incorrect then feel free to open up a PR on the rust-lang/rust-forge repository!

Terms

  • Libs. That’s us! The team responsible for development and maintenance of the standard library (among other things).
  • Pull request (PR). A regular GitHub pull request against rust-lang/rust.
  • Request for Comment (RFC). A formal document created in rust-lang/rfcs that introduces new features.
  • Tracking Issue. A regular issue on GitHub that’s tagged with C-tracking-issue.
  • Final Comment Period (FCP). Coordinated by rfcbot that gives relevant teams a chance to review RFCs and PRs.

If you’re ever unsure…

Maintaining the standard library can feel like a daunting responsibility! Through automated reviewer assignment via triagebot, you’ll find yourself dropped into a lot of new contexts.

Ping the @rust-lang/libs team on GitHub anytime. We’re all here to help!

If you don’t think you’re the best person to review a PR then use triagebot to assign it to somebody else.

Finding reviews waiting for your input

Please remember to regularly check https://rfcbot.rs/. Click on any occurrence of your nickname to go to a page like https://rfcbot.rs/fcp/SimonSapin that only shows the reviews that are waiting for your input.

Reviewing PRs

As a member of the Libs team you’ll find yourself assigned to PRs that need reviewing, and your input requested on issues in the Rust project.

When is an RFC needed?

New unstable features don’t need an RFC before they can be merged. If the feature is small, and the design space is straightforward, stabilizing it usually only requires the feature to go through FCP. Sometimes however, you may ask for an RFC before stabilizing.

Is there any unsafe?

Unsafe code blocks in the standard library need a comment explaining why they’re ok. There’s a tidy lint that checks this. The unsafe code also needs to actually be ok.
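
As a minimal sketch (a hypothetical helper function, not something from the actual standard library), a documented unsafe block might look like this:

/// Returns a reference to the first element of a non-empty slice without
/// bounds checking.
///
/// # Safety
///
/// The caller must guarantee that `v` is not empty.
pub unsafe fn first_unchecked<T>(v: &[T]) -> &T {
    // SAFETY: the caller guarantees `v` is non-empty, so index 0 is in bounds.
    unsafe { v.get_unchecked(0) }
}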

The rules around what’s sound and what’s not can be subtle. See the Unsafe Code Guidelines WG for current thinking, and consider pinging @rust-lang/libs, @rust-lang/lang, and/or somebody from the WG if you’re in any doubt. We love debating the soundness of unsafe code, and the more eyes on it the better!

Is that #[inline] right?

Inlining is a trade-off between potential execution speed, compile time and code size. There’s some discussion about it in this PR to the hashbrown crate. From the thread:

#[inline] is very different than simply just an inline hint. As I mentioned before, there’s no equivalent in C++ for what #[inline] does. In debug mode rustc basically ignores #[inline], pretending you didn’t even write it. In release mode the compiler will, by default, codegen an #[inline] function into every single referencing codegen unit, and then it will also add inlinehint. This means that if you have 16 CGUs and they all reference an item, every single one is getting the entire item’s implementation inlined into it.

You can add #[inline]:

  • To public, small, non-generic functions.

You shouldn’t need #[inline]:

  • On methods that have any generics in scope.
  • On methods on traits that don’t have a default implementation.

#[inline] can always be introduced later, so if you’re in doubt it can simply be left off.
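
As a rough sketch (hypothetical type and methods, not real standard library items), the guidelines above translate to something like:

pub struct Millis(u64);

impl Millis {
    // Small, public, non-generic accessor: a reasonable `#[inline]` candidate.
    #[inline]
    pub fn get(&self) -> u64 {
        self.0
    }

    // Generic method: no `#[inline]` needed, since generic functions are
    // already instantiated (and therefore inlinable) in the crates using them.
    pub fn map<F: FnOnce(u64) -> u64>(&self, f: F) -> Millis {
        Millis(f(self.0))
    }
}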

What about #[inline(always)]?

You should just about never need #[inline(always)]. It may be beneficial for private helper methods that are used in a limited number of places or for trivial operators. A micro benchmark should justify the attribute.

Is there any potential breakage?

Breaking changes should be avoided when possible. RFC 1105 lays the foundations for what constitutes a breaking change. Breakage may be deemed acceptable or not based on its actual impact, which can be approximated with a crater run.

There are strategies for mitigating breakage depending on the impact.

For changes where the value is high and the impact is high too:

  • Using compiler lints to try to phase out broken behavior.

If the impact isn’t too high:

  • Looping in maintainers of broken crates and submitting PRs to fix them.

Is behavior changed?

Breaking changes aren’t just limited to compilation failures. Behavioral changes to stable functions generally can’t be accepted. See the home_dir issue for an example.

Are there new impls for stable traits?

A lot of PRs to the standard library are adding new impls for already stable traits, which can break consumers in many weird and wonderful ways. The following sections give some examples of breakage from new trait impls that may not be obvious just from the change made to the standard library.

Inference breaks when a second generic impl is introduced

Rust will use the fact that there’s only a single impl for a generic trait during inference. This breaks once a second impl makes the type of that generic ambiguous. Say we have:

// in `std`
impl From<&str> for Arc<str> { .. }

// in an external `lib`
let b = Arc::from("a");

then we add:

impl From<&str> for Arc<str> { .. }
+ impl From<&str> for Arc<String> { .. }

then

let b = Arc::from("a");

will no longer compile, because we’ve previously been relying on inference to figure out the T in Arc<T>.

This kind of breakage can be ok, but a crater run should estimate the scope.

Deref coercion breaks when a new impl is introduced

Rust will use deref coercion to find a valid trait impl if the arguments don’t type check directly. This only seems to occur if there’s a single impl so introducing a new one may break consumers relying on deref coercion. Say we have:

// in `std`
impl Add<&str> for String { .. }

impl Deref for String { type Target = str; .. }

// in an external `lib`
let a = String::from("a");
let b = String::from("b");

let c = a + &b;

then we add:

impl Add<&str> for String { .. }
+ impl Add<char> for String { .. }

then

let c = a + &b;

will no longer compile, because we won’t attempt to use deref to coerce the &String into &str.

This kind of breakage can be ok, but a crater run should estimate the scope.

Could an implementation use existing functionality?

Types like String are implemented in terms of Vec<u8> and can use methods on str through deref coercion. Vec<T> can use methods on [T] through deref coercion. When possible, methods on a wrapping type like String should defer to methods that already exist on their underlying storage or deref target.
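
As an illustration (a hypothetical wrapper type, not something in the standard library), deferring to the deref target looks like this:

use std::ops::Deref;

struct Label {
    buf: String,
}

impl Deref for Label {
    type Target = str;
    fn deref(&self) -> &str {
        &self.buf
    }
}

impl Label {
    // Defers to the existing `str::trim` through the underlying storage
    // rather than re-implementing the logic on the wrapper.
    fn trimmed(&self) -> &str {
        self.buf.trim()
    }
}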

Are there #[fundamental] items involved?

Blanket trait impls can’t be added to #[fundamental] types because they have different coherence rules. See RFC 1023 for details. That includes:

  • &T
  • &mut T
  • Box<T>
  • Pin<T>

Is specialization involved?

Specialization is currently unstable. You can track its progress here.

We try to avoid leaning on specialization too heavily, limiting its use to optimizing specific implementations. These specialized optimizations use a private trait to find the correct implementation, rather than specializing the public method itself. Any use of specialization that changes how methods are dispatched for external callers should be carefully considered.

As an example of how to use specialization in the standard library, consider the case of creating an Rc<[T]> from a &[T]:

impl<T: Clone> From<&[T]> for Rc<[T]> {
    #[inline]
    fn from(v: &[T]) -> Rc<[T]> {
        unsafe { Self::from_iter_exact(v.iter().cloned(), v.len()) }
    }
}

It would be nice to have an optimized implementation for the case where T: Copy:

impl<T: Copy> From<&[T]> for Rc<[T]> {
    #[inline]
    fn from(v: &[T]) -> Rc<[T]> {
        unsafe { Self::copy_from_slice(v) }
    }
}

Unfortunately we couldn’t have both of these impls normally, because they’d overlap. This is where private specialization can be used to choose the right implementation internally. In this case, we use a trait called RcFromSlice that switches the implementation:

impl<T: Clone> From<&[T]> for Rc<[T]> {
    #[inline]
    fn from(v: &[T]) -> Rc<[T]> {
        <Self as RcFromSlice<T>>::from_slice(v)
    }
}

/// Specialization trait used for `From<&[T]>`.
trait RcFromSlice<T> {
    fn from_slice(slice: &[T]) -> Self;
}

impl<T: Clone> RcFromSlice<T> for Rc<[T]> {
    #[inline]
    default fn from_slice(v: &[T]) -> Self {
        unsafe { Self::from_iter_exact(v.iter().cloned(), v.len()) }
    }
}

impl<T: Copy> RcFromSlice<T> for Rc<[T]> {
    #[inline]
    fn from_slice(v: &[T]) -> Self {
        unsafe { Self::copy_from_slice(v) }
    }
}

Only specialization using the min_specialization feature should be used. The full specialization feature is known to be unsound.

Are there public enums?

Public enums should have a #[non_exhaustive] attribute if there’s any possibility of new variants being introduced, so that they can be added without causing breakage.
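
For example (hypothetical enum, shown only to illustrate the attribute):

// `#[non_exhaustive]` lets new variants be added later without breaking
// downstream code: crates other than the defining one must include a
// wildcard `_` arm when matching on this enum.
#[non_exhaustive]
pub enum CompressionLevel {
    Fast,
    Best,
}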

Does this change drop order?

Changes to collection internals may affect the order their items are dropped in. This has been accepted in the past, but should be noted.
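
As a small sketch of why consumers can notice this (hypothetical type), a change to a collection’s internals can reorder output like the following:

struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    // Currently prints "first" then "second"; the ordering is an internal
    // detail, but consumers may still observe and depend on it.
    let _v = vec![Noisy("first"), Noisy("second")];
}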

Is there a manual Drop implementation?

A generic Type<T> that manually implements Drop should consider whether a #[may_dangle] attribute is appropriate on T. The Nomicon has some details on what #[may_dangle] is all about.

If a generic Type<T> has a manual drop implementation that may also involve dropping T then dropck needs to know about it. If Type<T>’s ownership of T is expressed through types that don’t drop T themselves such as ManuallyDrop<T>, *mut T, or MaybeUninit<T> then Type<T> also needs a PhantomData<T> field to tell dropck that T may be dropped. Types in the standard library that use the internal Unique<T> pointer type don’t need a PhantomData<T> marker field. That’s taken care of for them by Unique<T>.

As a real-world example of where this can go wrong, consider an OptionCell<T> that looks something like this:

struct OptionCell<T> {
    is_init: bool,
    value: MaybeUninit<T>,
}

impl<T> Drop for OptionCell<T> {
    fn drop(&mut self) {
        if self.is_init {
            // Safety: `value` is guaranteed to be fully initialized when `is_init` is true.
            // Safety: The cell is being dropped, so it can't be accessed again.
            unsafe { self.value.assume_init_drop() };
        }
    }
}

Adding a #[may_dangle] attribute to this OptionCell<T> that didn’t have a PhantomData<T> marker field opened up a soundness hole for T’s that didn’t strictly outlive the OptionCell<T>, and so could be accessed after being dropped in their own Drop implementations. The correct application of #[may_dangle] also required a PhantomData<T> field:

struct OptionCell<T> {
    is_init: bool,
    value: MaybeUninit<T>,
+   _marker: PhantomData<T>,
}

- impl<T> Drop for OptionCell<T> {
+ unsafe impl<#[may_dangle] T> Drop for OptionCell<T> {

How could mem break assumptions?

mem::replace and mem::swap

Any Sized value behind a &mut reference can be replaced with a new one using mem::replace or mem::swap, so code shouldn’t assume any reachable mutable references can’t have their internals changed by replacing.
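
A minimal sketch of what this means in practice (hypothetical function):

use std::mem;

// Even though we only hold `&mut String`, the entire value behind the
// reference can be swapped out, so earlier observations about its contents
// may no longer hold.
fn steal(s: &mut String) -> String {
    mem::replace(s, String::new())
}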

mem::forget

Rust doesn’t guarantee destructors will run when a value is leaked (which can be done with mem::forget), so code should avoid relying on them for maintaining safety. Remember, everyone poops.

It’s ok not to run a destructor when a value is leaked because its storage isn’t deallocated or repurposed. If the storage is initialized and is being deallocated or repurposed then destructors need to be run first, because memory may be pinned. Having said that, there can still be exceptions for skipping destructors when deallocating if you can guarantee there’s never pinning involved.
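
A minimal sketch of a skipped destructor (hypothetical guard type):

use std::mem;

struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        println!("cleanup");
    }
}

fn main() {
    let g = Guard;
    // "cleanup" is never printed: leaking is safe, so safety-critical
    // invariants must not depend on a destructor having run.
    mem::forget(g);
}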

How is performance impacted?

Changes to hot code might impact performance in consumers, for better or for worse. Appropriate benchmarks should give an idea of how performance characteristics change. For changes that affect rustc itself, you can also do a rust-timer run.

Is the commit log tidy?

PRs shouldn’t have merge commits in them. If they become out of date with master then they need to be rebased.

Merging PRs

PRs to rust-lang/rust aren’t merged manually using GitHub’s UI or by pushing remote branches. Everything goes through bors.

When to rollup

For Libs PRs, rolling up is usually fine, in particular if it’s only a new unstable addition or if it only touches docs.

See the rollup guidelines for more details on when to rollup. The idea is to try to collect a number of PRs together and merge them all at once, rather than individually. This can get things merged faster, but might not be appropriate for some PRs that are likely to conflict, or have performance characteristics that would be obscured in a rollup.

When there are new public items

If the feature is new, then a tracking issue should be opened for it. Have a look at some previous tracking issues to get an idea of what needs to go in there. The issue field on #[unstable] attributes should be updated with the tracking issue number.
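
For example (hypothetical feature name and tracking issue number), the attribute on the new item would look like:

// The `issue` field points at the tracking issue that was just opened.
#[unstable(feature = "example_widget", issue = "123456")]
pub fn example_widget() {}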

Unstable features can be merged as normal through bors once they look ready.

When there are new trait impls

There’s no way to make a trait impl for a stable trait unstable, so any PRs that add new impls for already stable traits must go through a FCP before merging. If the trait itself is unstable though, then the impl needs to be unstable too.

When a feature is being stabilized

Features can be stabilized in a PR that replaces #[unstable] attributes with #[stable] ones. The feature needs to have an accepted RFC before stabilizing. They also need to go through a FCP before merging.

You can find the right version to use in the #[stable] attribute by checking the Forge.
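
The stabilization PR then swaps the attribute to something like (hypothetical feature name and version):

// Before: #[unstable(feature = "example_widget", issue = "123456")]
#[stable(feature = "example_widget", since = "1.99.0")]
pub fn example_widget() {}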

When a const function is being stabilized

Const functions can be stabilized in a PR that replaces #[rustc_const_unstable] attributes with #[rustc_const_stable] ones. The Constant Evaluation WG should be pinged for input on whether or not the const-ness is something we want to commit to. If it is an intrinsic being exposed that is const-stabilized then @rust-lang/lang should also be included in the FCP.
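
Sketch of the corresponding attribute swap (hypothetical feature names and versions):

// Before: #[rustc_const_unstable(feature = "const_example_widget", issue = "123456")]
#[stable(feature = "example_widget", since = "1.90.0")]
#[rustc_const_stable(feature = "const_example_widget", since = "1.99.0")]
pub const fn example_widget() -> u32 {
    0
}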

Check whether the function internally depends on other unstable const functions through #[allow_internal_unstable] attributes and consider how the function could be implemented if its internally unstable calls were removed. See the Stability attributes page for more details on #[allow_internal_unstable].

Where unsafe and const are involved, e.g., for operations which are “unconst”, the const safety argument for the usage should also be documented. That is, a const fn has additional determinism restrictions (e.g. run-time/compile-time results must correspond and the function’s output only depends on its inputs…) that must be preserved, and those restrictions should be argued for when unsafe is used.

When a feature is being deprecated

To try to reduce noise in the docs from deprecated items, they should be moved to the bottom of the module or impl block so they’re rendered at the bottom of the docs page. The docs should then be cut down to focus on why the item is deprecated rather than how you might use it.

Release

This section documents the process around creating a new release of the compiler, tools, as well information on The Rust Programming Language’s platform support.

  • The Homu/Bors page provides links to the pull request testing queues for the rust-lang GitHub organisation, as well as providing an overview of the bot’s syntax you can use to interact with it.
  • Rustup Component History documents when a component was last available (if it was available) for a specific platform on nightly.
  • PR Tracking provides visualisations of pull requests made to the rust-lang/rust repository.
  • kennytm’s rustup-toolchain-install-master is a utility to install the latest generated artifacts from CI into rustup.

Backporting

There’s a steady trickle of patches that need to be ported to the beta and stable branches. Only a few people are even aware of the process, but this is actually something anybody can do.

Beta backporting in rust-lang/rust

Backports of PRs to the beta branch are usually only done to fix regressions. Getting a PR backported to the beta branch involves the following process:

  1. Add the label beta-nominated to the PR to be backported. This marks the PR as in the state that it needs attention from the appropriate team to decide if it should be backported. Anybody with triage access is free to add this label.

  2. If the team thinks it should be backported, then they should add the beta-accepted label. Otherwise they should remove the nominated label.

  3. Occasionally someone will make a beta rollup PR. This is often done by the release team, but it can be done by anyone. The process here is:

    1. Create a local branch off the beta branch.

    2. Cherry-pick all of the PRs that have both beta-nominated and beta-accepted labels. It is usually preferred to not include PRs that have not been merged in case there are any last minute changes, or it fails when running the full CI tests.

    3. Run ./x.py run replace-version-placeholder and if there were any changes, put them into a new commit.

    4. (Recommended) Run some tests locally. It is not uncommon that the backports may not apply cleanly, or the UI tests need to be re-blessed if there are differences in the output.

    5. Open a PR against the beta branch with a title that starts with [beta] (so reviewers can see its specialness).

    6. List all of the PRs being backported in the PR description. Here’s an example.

    7. Go through all of the PRs being backported and:

      • Change the milestone to the correct value for the beta release.
      • Remove the beta-nominated label. This indicates that the backport has been completed.

      If there are a lot of PRs, this can be done quickly by opening the nominated + accepted query, checking all the PRs being backported, and using the “Milestones” and “Label” drop-downs to modify multiple PRs in bulk.

      This last step can be done before or after the beta PR has been merged, though it can be easy to forget if you wait for it to be merged.

  4. A reviewer (typically from the release team) needs to verify that the backport looks correct and that it’s submitted to the beta branch. They will then approve with @bors r+ rollup=never (to avoid it being rolled up on accident). If the author of the PR has r+ rights, and has not made significant changes while backporting, they can also self-approve the PR.

In summary, there are three states that a PR can go through:

  1. beta-nominated: Needs the team’s attention.
  2. beta-nominated + beta-accepted: Waiting to be backported.
  3. beta-accepted: Backport complete.

Stable backporting in rust-lang/rust

Backports to the stable branch work exactly the same as beta ones; the labels just have slightly different names: stable-nominated identifies a PR to be discussed for a backport and stable-accepted marks a PR accepted for backport. Declined stable nominations will have the stable-nominated label removed.

The release team (T-release) will decide on a case-by-case basis if a stable backport warrants a point (.patch) release (e.g. releasing a 1.50.1 between 1.50 and 1.51).

Beta Backporting in rust-lang/cargo

The procedure for backporting fixes to Cargo is similar to, but slightly more involved than, the rust-lang/rust repo’s procedure. Currently there aren’t backport tags in the Cargo repository, but you’ll initiate the backport process by commenting on an associated PR, requesting a backport. Once a Cargo team member has approved the backport to happen you’re good to start sending PRs!

  • First you’ll send a PR to the rust-1.21.0 branch of Cargo (replace 1.21 with the current rustc beta version number). Like with rust-lang/rust you’ll prefix the title of your PR with [beta] and ensure it’s flagged as going to beta.

  • Next a Cargo reviewer will @bors: r+ the PR and put it into the queue. Eventually bors will automatically merge the PR (when tests are passing) to the appropriate Cargo branch.

  • Finally you’ll send a PR to the rust-lang/rust repository’s beta branch, updating the Cargo submodule. The Cargo submodule should be updated to the tip of the rust-1.21.0 branch (the branch your Cargo PR was merged to). As before, ensure you’ve got [beta] in the PR title.

After that’s all said and done the Cargo change is ready to get scheduled onto the beta release!

Preparing Release Notes

The release notes for the next release should be compiled at the beginning of the beta cycle, 6 weeks ahead of the release.

Clone the relnotes utility. This program pulls all pull requests made against rust-lang/rust and rust-lang/cargo within the latest release cycle, categorises them into their respective sections where possible, and prints the resulting markdown document to stdout.

Only pull requests that impact stable users of Rust should be included. Generally, more exciting items go toward the top of sections. Most items are simply links to the PR that landed them; some that need more explanation have additional, unlinked text; anything supported by an RFC has an additional RFC link. Reuse the PR titles or write descriptions as needed for clarity.

Try to keep the language of the document independent of en-US or en-UK; when it can’t be avoided, defer to en-US grammar and syntax.

The Rust Release Process

Here’s how Rust is currently released:

A note about the start-release.py script

Steps of the release process that require interacting with our production environment are executed through the start-release.py script. The script requires you to install programs and configure your local environment, and it will guide you through the setup.

The first time you run the script (or when the prerequisites change), you will need to invoke the script multiple times until everything is set up correctly.

start-release.py will always start a CI job in the background. To know when it finishes, you have to watch the logs. When the build finishes, a line like this will appear in the logs:

Phase complete: UPLOAD_ARTIFACTS State: SUCCEEDED

Bump the stable version number (Friday the week before)

Open a PR bumping the version number in src/version. r+ rollup=never this PR (self-approve it).

Mark it as rollup=never, because if it lands in a rollup as not the first PR then other pull requests in that rollup will be incorrectly associated with the prior release.

This is effectively when the beta branch forks – when beta is promoted, it will be based off of the PR that landed just before this version number bump PR.

Promote branches (Monday)

Both promotions should happen on Monday. You can open both PRs at the same time, but prioritize landing the stable promotion first (to maximize the pre-release testing period).

Updating the base of the beta and stable branches

Run this command from the rust-lang/release-team repository1:

./scripts/start-release.py update-rust-branches

Remember that start-release.py starts a job in the background, and the script will exit before the branches are updated. Watch the logs to see when the background job finishes before proceeding.

stable PR

Send a PR to rust-lang/rust targeting the new stable branch making the following changes:

  • Update release notes to the latest available copy:

    • If the release notes PR was merged:

      git checkout origin/master -- RELEASES.md
      
    • Otherwise, manually copy RELEASES.md from the pending release notes PR

  • Update src/ci/channel to stable

Self-approve the PR with r+ rollup=never p=1000.

Note that we need to merge this PR as soon as possible, to maximise the pre-release testing time. If another PR is being tested by bors, and CI is not going to finish soon (use your judgement here), you can “yield” priority to the stable release PR by going into that PR and typing this comment:

@bors retry
Yield priority to the stable release.

beta PR

Send a PR to rust-lang/rust targeting the new beta branch with these changes:

  • Run this command and create a separate commit with just its output:

    ./x.py run replace-version-placeholder
    
  • Update src/ci/channel to beta

Self-approve the PR with r+ rollup=never p=10.

Publish the pre-release on the dev-static environment

After the stable PR is merged you’ll need to start the pre-release. Run this command from the rust-lang/release-team repository1:

./scripts/start-release.py publish-rust-dev-stable YYYY-MM-DD

You need to replace YYYY-MM-DD with the date of the release (Thursday).

Master bootstrap update (Tuesday)

This step can only be done after the new beta has been released. The release process for the beta happens automatically at 00:00 UTC every day, so if the beta PR landed after that you will have to wait another day. You can check whether beta has been released by installing it with rustup.

Send a PR to the master branch to:

  • Cherry pick the commit that ran replace-version-placeholder from the now merged beta branch PR. Do not re-run the tool as there might have been other stabilizations on master which were not included in the branched beta, so may not be attributed to the current release.

  • Run this to update the bootstrap compiler to the beta you created yesterday:

    ./x.py run src/tools/bump-stage0
    
  • Remove references to the bootstrap and not(bootstrap) conditional compilation attributes. You can find all of them by installing ripgrep and running this command:

    rg '#!?\[.*\(bootstrap' -t rust -t toml
    

    The general guidelines (both for #[] and #![]) are:

    • Remove any item annotated with #[cfg(bootstrap)].
    • Remove any #[cfg(not(bootstrap))] attribute while keeping the item.
    • Remove any #[cfg_attr(bootstrap, $attr)] attribute while keeping the item.
    • Replace any #[cfg_attr(not(bootstrap), doc="$doc")] with $doc in the relevant documentation block (or in a new documentation block).
    • Replace any #[cfg_attr(not(bootstrap), $attr)] with #[$attr].

    Note that if a PR adds cfg(bootstrap) and is merged between the beta PR and the master bootstrap update, the rg invocation will show them even though they won’t have to be removed. The easiest way to handle this is to change them anyway and let CI show you the failure.

  • Ensure there are no new warnings or Clippy lints affecting the codebase:

    ./x clippy ci
    

Release day (Thursday)

Decide on a time to do the release. You are fully in charge of deciding when the release happens; pick the time that works best for you. The only constraint is that the release process must start and finish within the release day (in UTC).

Let the Social Media coordinator (currently Mara) know of the time, so that she can be ready to post the release on the project’s social media channels.

As of September 2024 a release takes between 75 and 90 minutes to complete, so start the release process early enough to hit the time you planned.

To start the release, run this command in the rust-lang/release-team repository1:

./scripts/start-release.py publish-rust-prod-stable

The command will start a background job to invoke promote-release targeting the production environment, and it will show the instructions to follow its logs.

When the release process completes, merge the blog post PR and inform Mara to announce the release on social media. Finally, bask in your success 🎉

Beta stage0 update (Friday)

Send a PR to the beta branch updating the stage0 to the stable release you published:

./x run src/tools/bump-stage0

Appendix: Rebuilding stable pre-releases

If something goes wrong and we need to rebuild the stable artifacts, merge the PR on the stable branch of the rust-lang/rust repository. Once the commit is merged, authenticate with AWS and run this command in the rust-lang/release-team repository:

./scripts/start-release.py publish-rust-dev-stable-rebuild

You’ll also want to update the previously published pre-release announcement on the blog and internals with the new information.

1

Publishing releases requires authentication, and only authorized members of the release team can do it. The command will prompt you on how to set up your environment and how to authenticate with AWS the first time you execute it.

Rollup Procedure

Background

The Rust project has a policy that every pull request must be tested after merge before it can be pushed to master. As PR volume increases this can scale poorly, especially given the long (~3.5hr) current CI duration for Rust.

Enter rollups! Changes that are small, not performance sensitive, or not platform dependent are marked with the rollup command to bors (@bors r+ rollup to approve a PR and mark it as a rollup, @bors rollup to mark a previously approved PR, @bors rollup- to un-mark it as a rollup). ‘Performing a Rollup’ then means collecting these changes into one PR and merging them all at once. The rollup command accepts four values: always, maybe, iffy, and never. See the Rollups section of the review policies for guidance on what these different statuses mean.

You can see the list of rollup PRs on Rust’s Homu queue; they are listed at the bottom of the ‘approved’ queue with a priority of ‘rollup’, meaning they will not be merged by themselves until everything in front of them in the queue has been merged.

Making a Rollup

  1. Using the interface on the Homu queue, select pull requests and then use the “rollup” button to make a rollup pull request. (The text about fairness can be ignored.) Important note: consider for inclusion PRs marked as rollup=always, rollup=maybe and rollup=iffy, based on the review policies of the Rollups section. Be extra careful when deciding what to include, in particular for rollup=maybe and rollup=iffy PRs. We should try as much as possible to avoid risking regressions (bugs or perf). Also consider that contributors often forget to tag things with rollup=never when they should have done so, so when PRs are not explicitly tagged with a rollup status, be extra careful.

  2. Run the following command in the pull request thread:

    @bors r+ rollup=never p=5
    
  3. If the rollup fails, use the logs rust-log-analyzer provides to bisect the failure to a specific PR and do @bors r-. If the PR is still running, you need to do @bors r- retry. Otherwise, your rollup succeeded; proceed to the next rollup (every now and then let rollup=never and toolstate PRs progress).

  4. Recreate the rollup without the offending PR, starting again from step 1. There’s a link in the rollup PR’s body to automatically prefill the rollup UI with the existing PRs (minus any PRs that have been r-’d).

Selecting Pull Requests

The queue is sorted by rollup status. In general, a good rollup includes one or two iffy PRs (if available), a bunch of maybe (unmarked) PRs, and a large pile of always PRs. A rollup should never include rollup=never PRs.

The actual size of the rollup can vary based on experience; people new to making rollups might start by including 1 iffy, 4 maybes, and 5 always PRs, but more experienced people might even make a rollup of 1-2 iffy, 8 maybe, and 10 always PRs! Massive rollups are rarely needed, but as your intuition grows you’ll get better at judging risk when including PRs in a rollup.

Don’t hesitate to downgrade the rollup status of a PR! If your intuition tells you that a rollup=always PR has some chance of failing, mark it rollup=maybe or rollup=iffy. A lot of the unmarked maybe PRs are categorized as such because the reviewer may not have considered rollupability, so it’s always worth picking them with a critical eye. Similarly, if a PR causes your rollup to fail, it’s worth considering changing its rollup status.

Generally, PRs that touch CI configuration or the bootstrapping process are probably iffy and should be handled with care. On the other hand, PRs that just edit docs are usually rollup=always.

Avoid having too many PRs with large diffs or submodule changes in the same rollup. Also avoid having PRs you suspect will have large perf impacts, and mark them as rollup=never.

It’s tempting to avoid including iffy PRs at all since ideally you want your rollup to succeed. However, it’s worth remembering that the job of the PR queue is to test PRs, not to land them. As such, a rollup that fails because of an iffy PR is a good thing, since that PR would have to be tested at some point anyway and it would have taken up the same amount of time to test if it never got included in a rollup. One way to look at rollups when it comes to iffy PRs is that a rollup is a way for a bunch of other PRs to piggyback on the CI cycle that the iffy PR needs anyway. If rollups avoid iffy PRs entirely what ends up happening is that these PRs tend to languish in the queue for a long time, which isn’t good.

Similarly, make sure to leave some spare CI cycles so that never PRs also get a chance! If you’re the only person making rollups it’s worth letting them run during times you’re not paying attention to the queue, but these days there are rollup authors in multiple time zones, so it’s often best to just keep an eye on the relative size of the queue and put aside a couple CI cycles for never PRs, especially if they pile up.

Try to be fair with rollups: Rollups are a way for things to jump the queue. For rollup=maybe PRs, try to include the oldest one (at the top of the section) so that newer PRs aren’t jumping the queue over older PRs entirely. You don’t have to include every PR older than PRs included in your rollup, but try to include the oldest. Similar to the perspective around iffy, it’s useful to look at a rollup as a way for other PRs to piggyback on the CI cycle of the oldest PR in queue.

Failed rollups

If the rollup has failed, run the @bors retry command if the failure was spurious (e.g. due to a network problem or a timeout). If it wasn’t spurious, find the offending PR and throw it out by copying a link to the rust-log-analyzer comment, and writing Failed in <link_to_comment>, @bors r-. Hopefully, the author or reviewer will give feedback to get the PR fixed or confirm that it’s not at fault. The failed rollup PR can be closed.

Once you’ve removed the offending PR, re-create your rollup without it (see step 1). Sometimes, however, it is hard to find the offending PR. If so, use your intuition to avoid the PRs that you suspect are the problem and recreate the rollup. Another strategy is to raise the priority of the PRs you suspect, mark them as rollup=never (or iffy) and let bors test them standalone to dismiss or confirm your hypothesis.

If a rollup continues to fail you can run the @bors rollup=never command to never rollup the PR in question.

Triage Procedure

Pull Request Triage

Status Tags

  • S-waiting-on-author - Author needs to make changes to address reviewer comments, or merge conflicts/test failures are present. This also covers more obscure cases, like a PR being blocked on another (usually with the S-blocked label in addition), or waiting for a crater run – it is the author’s responsibility to push the PR forward.

    Also used for work-in-progress PRs, sometimes the PR will also be marked as draft in GitHub.

  • S-waiting-on-review - Review is incomplete

  • S-waiting-on-team - A T- label is marked, and team has been CC’d for feedback.

  • S-waiting-on-bors - Currently approved, waiting to merge. Managed by bors.

  • S-waiting-on-crater - Waiting to see what impact the PR will have on the ecosystem

  • S-waiting-on-bikeshed - Waiting on the consensus over a minor detail

  • S-waiting-on-perf - Waiting on the results of a perf run

  • S-waiting-on-ACP - Waiting on API change proposal (ACP)

  • S-blocked - Waiting for another PR to be merged or for discussion to be resolved

  • S-inactive - Hasn’t had activity in a while

  • S-experimental - An experimental PR that shouldn’t be triaged. S-waiting-on-author used to be used for this, but S-experimental communicates that the PR is an experiment to test out some changes.

Also: PRs with no status tags. This is useful to find PRs where rustbot conked out and didn’t assign a reviewer and thus didn’t assign S-waiting-on-review. These PRs can get lost otherwise. (Note that you should likely not triage PRs that have r? @ghost since that means the author does not need a review yet.)

Procedure

We primarily triage three status labels: S-waiting-on-review, S-waiting-on-author, and (once in a while) S-blocked. Here is the procedure for each:

S-waiting-on-review

Click this link to see all PRs with the S-waiting-on-review label. Only triage PRs that were last updated 15 days or more ago (give or take a day).

For each PR:

  1. If the PR has new conflicts, CI failed, or a new review has been made then change the label to S-waiting-on-author and ping the author.

  2. Add the PR to your report.

S-waiting-on-author

Click this link to see all PRs with the S-waiting-on-author label. Only triage PRs that were last updated 15 days or more ago (give or take a day).

For each PR:

  1. If the author did what the PR was waiting on them for then update the label to S-waiting-on-review.

    Otherwise, if the author still needs to do something, then ping the author if they are not a member of a Rust team (does not include working groups — only teams like T-compiler, T-lang, T-rustdoc, etc.).

  2. Add the PR to your report.

S-blocked

You only need to check S-blocked PRs occasionally (e.g., once a month). Click this link to see all PRs with the S-blocked label.

For each PR:

  1. If it is still blocked then leave it as-is.

    Otherwise, if it is no longer blocked, then remove S-blocked (and add a status label like S-waiting-on-review if appropriate).

  2. Add the PR to your report.

Triage Report

You should record information about each PR you triage in a report. The report is just a small document that looks like:

S-waiting-on-review

#12345 20 days - still waiting on review - author: ferris, assignee: bors

[…]

Your report can look different, just make sure you include this information for each PR:

  1. The PR number (e.g., #12345). No need to manually add a link; the Rust Zulip will autolink PR (and issue) numbers.

  2. Number of days since last activity. “Activity” means:

    • author, reviewer, or team member commented or reviewed; or
    • bors commented about merge conflicts; or
    • PR was pushed to;
    • etc.
  3. Author, reviewer, and who or what (person, team, other PR, etc.) the PR is waiting on.

  4. Current status and what the most recent activity was (e.g., merge conflicts, reviewer commented).

Once you are done triaging PRs, post your report in the topic for the current week’s triage in the #t-release/triage Zulip stream. The topic should have a name like YYYY-MM-DD to YYYY-MM-DD. Note that this uses a Monday-to-Sunday week.

If a topic does not exist, you can generate its title with the following bash one-liner (requires GNU date):

echo "$(date -I --date="$([ "z$(date +%a)" = "zMon" ] && echo 'today' || echo 'last monday')") to $(date -I --date="$([ "z$(date +%a)" = "zSun" ] && echo 'today' || echo 'next sunday')")"

Avoiding duplicate work

Since triaging is sometimes done by looking at oldest issues first, re-applying one of the S-* labels will update an issue/PR’s last-modified timestamp, signaling to other triagers that it has already been taken care of.

Issue triaging

This page is about the rust-lang/rust repository. Other repositories may have different processes.

Tracking issues (label C-tracking-issue) don’t fit into this procedure and are treated differently.

Motivation

The rust-lang/rust repository has thousands of issues and hundreds of people working on it. It is impossible for everyone to check and solve every issue. The goals of triaging are connecting issues to the relevant people, and helping them be more effective at fixing the issue.

In practice, it is unrealistic for all issues to be solved quickly and found by the right people. Through the application of labels we make the issue tracker more searchable for future reference, so that people in the future have an easier time finding related issues or issues they are interested in working on.

Triaging can be done by everyone, no matter your permissions. We encourage everyone to help here, as triaging work is highly parallelizable and easy to get started with.

Initial triaging

When an issue is opened, it gets the needs-triage label. This ensures that every issue gets an initial look and that no issue is ignored, or that when it is ignored, it is at least visibly ignored by still having the label.

needs-triage is an initial checkpoint. The effort needed to get an issue past the label should be small.

To do the initial triage and remove the needs-triage label, the following conditions should be fulfilled/considered. It’s okay if not all of these are always considered; treat it as a guideline, not a hard checklist. It is also not exhaustive.

  • The issue should make sense, that is, it should present a problem.
    • For example, if an issue is a question about Rust in general, the issue should be closed and the user redirected to URLO/Discord. You can of course answer the question too :) (but make sure to mention that the user should go to URLO/Discord next time).
  • Add appropriate labels (Labels)
    • Specifically, T-* and C-* are the most relevant
  • If the issue contains no reproduction but needs one (when in doubt, it needs one), ask for one and add the S-needs-repro label
  • The issue tracker is the wrong place for some kinds of feature requests. Suggest to the author where they can get support instead.
  • If the issue could benefit from bisecting the regression (when in doubt, it can), add E-needs-bisection (or do the bisection yourself)
  • Does this issue require nightly? Add requires-nightly.
  • Is the issue a regression? Apply the regression-untriaged label (or figure out what regression it is exactly)
  • If you happen to know people who this issue is relevant to, ping them.
    • For example, write cc @ThatPerson if ThatPerson has been working a lot on the feature in question recently.
  • Does this issue require incomplete or internal features? Add requires-{incomplete,internal}-features.

For applying and removing labels, unprivileged users can use @rustbot to add or remove the labels allowed by the triagebot.toml configuration. For example, @rustbot label +T-compiler +C-bug +A-linkage +O-macos -needs-triage.

To see a list of all labels, check out the “labels” page next to the search bar in the issue tracker.

ICE Triage

For issues that have both I-ICE and needs-triage:

  • Check that the issue is actually an ICE, and not more accurately described by I-crash or I-hang.
  • If it was reported against an older version of Rust (such as the latest stable), ask for or check whether it still reproduces on the latest nightly.
  • Check for duplicates, but don’t close as duplicate unless you’re sure they represent the same underlying issue. Prefer simply linking to the issue as possibly related/duplicate.
  • If it does not have a reproduction, comment asking for one and add S-needs-repro. If there isn’t one for around a month, the issue should generally be closed.
  • If the reproduction is not minimal, add E-needs-mcve or create a Minimal, Complete, and Verifiable Example yourself.
  • Add A-* labels based on the code that causes the issue (check backtraces!) and the nature of the repro (e.g., if the repro is a weird trait impl or the backtrace points to rustc_trait_selection, add A-traits)
  • Add T-*, WG-*, PG-*, F-*, requires-*, and regression-* labels as appropriate.

Further triaging

For issues that have been through the initial triaging step (that is, don’t have the needs-triage label anymore), there are usually still things that can be improved. There are often many more labels that could be applied (using rustbot again if you don’t have privileges).

Additionally, old (there is no clear definition of old yet, but something on the order of months) S-needs-repro issues can be closed if there is no way to make progress without a reproduction. This requires privileges, but if you don’t have them, you can just link the issue on Zulip (for example in t-release/triage or general) and someone with privileges can close it for you.

Another useful thing to do is to go through E-needs-mcve and E-needs-bisection issues, creating minimizations or bisecting the issue (using cargo-bisect-rustc). When you provide one, you can also remove the label using rustbot (@rustbot label -E-needs-bisection).
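
When bisecting, a typical cargo-bisect-rustc invocation, run from a minimal reproduction crate, looks roughly like the following (the version bounds are placeholders; check the tool’s documentation for the full option set):

# bisect a regression that appeared somewhere between two releases
cargo bisect-rustc --start=1.70.0 --end=1.72.0

# for ICEs, --regress=ice looks for a compiler crash rather than a normal build failure
cargo bisect-rustc --start=1.70.0 --end=1.72.0 --regress=ice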

Labels

There are many different labels that can be applied to issues.

  • needs-triage: Signals that an issue is new and needs initial triage
  • T-*: Specifies the team or teams that this issue is relevant to. For example T-compiler, T-types or T-libs.
  • WG-*: Specifies the working groups that this issue is relevant to, for example WG-debugging.
  • PG-*: Specifies the project groups that this issue is relevant to, for example the PG-exploit-mitigations.
  • C-*: Specifies the category of the issue, for example a bug, a tracking issue, or a discussion
    • A-diagnostics issues usually don’t have any C-* label.
    • C-optimization for missed compiler optimizations.
    • C-defective-hardware for hardware bugs that are beyond our control.
    • C-external-bug for bugs in software we don’t directly control that affect us but are worth tracking from our side.
  • O-*: For target-specific issues, specifies the compile target (see note 1) or compile target family (most notably the platform, i.e., the architecture or operating system). For example O-macos, O-aarch64, O-windows, O-windows-msvc.
  • A-*: The areas that the issue is relevant to, for example A-linkage, A-patterns, A-diagnostics.
  • L-*: When the issue concerns a specific lint.
    • L-false-positive if the lint fires on a case that it should not have fired on.
    • L-false-negative if the lint misses a case where it should have fired.
  • F-*: When the issue concerns a specific (usually unstable, usually language) feature.
  • -Z*: When the issue concerns a specific unstable -Z compiler flag.
  • requires-nightly: This issue is not relevant to the stable compiler
  • requires-{incomplete,internal}-features: This issue requires an incomplete or internal feature. The latter often means that the issue should be closed in accordance with compiler MCP 620.
  • regression-*: Labels for tracking issues that are regressions.
  • D-*: Labels for diagnostic issues.
    • D-diagnostic-infra: This issue is about the diagnostics infrastructure itself.
  • I-*: Different labels about the nature (see note 2) of a bug. For example ICE, slow code, heavy code (binary size), crashes, unsoundness. There are also some other I-* labels that don’t really fit into this. For triaging, focus on I-ICE, I-crash, I-hang, I-slow, I-heavy, I-compiletime and I-unsound.
  • P-*: Priority labels. Applied using the compiler prioritization procedure.
  • S-*: The status of an issue, for example S-needs-repro.
  • E-*: Calls for participation (see note 3), for example to minimize an issue.
    • E-mentor: A mentor is available to help with the issue, which makes for good first issues.
    • E-needs-mcve: This issue has a reproduction, but it is not minimal; it should be minimized.
    • E-needs-bisection: This issue needs a bisection, for example using cargo-bisect-rustc.
    • E-needs-investigation: This issue needs further investigation to determine root causes and the nature of the issue.
    • E-needs-design: This issue will require some substantial design effort (exploration, prototyping, discussions, etc.).
    • E-needs-test: The issue has been fixed, but no test has been added for it. After someone adds a test, it can be closed.
    • E-{easy,medium,hard}: Someone has estimated how hard the issue is to fix. This can help with finding good first issues, but is bound to be inaccurate.

See also section Issue Triage in the Rust Compiler Development Guide.

Notes:

  1. The O in O-* labels originally stood for operating system (OS).
  2. The I in I-* labels originally stood for importance. This makes the most sense for the I-*-nominated labels. For most I-* labels, however, it makes sense to interpret the I as issue (kind).
  3. The E in E-* labels stands for experience.

Triaging Crater Runs

Running crater

We run crater regularly; this documents the procedure for triaging a beta run, and it may also be applicable to non-release team runs (e.g., PR crater runs) with minor modifications.

First, file a new issue titled “Crater runs for 1.x” (example)

A crater run for beta should be started as soon as we have beta out. Use the following craterbot invocations.

$BETA_VERSION is e.g. 1.40.0-1; increment the trailing 1 if this is not the first beta crater run. You can also use the auto-incremented counter from the beta rustc --version.

$STABLE is e.g. 1.39.0 (the current stable release). $BETA is beta-YYYY-MM-DD; get the date by looking at https://static.rust-lang.org/manifests.txt and taking the date of the most recent channel-rust-beta.toml.

@craterbot run name=beta-$BETA_VERSION start=$STABLE end=$BETA mode=build-and-test cap-lints=warn p=10
@craterbot run name=beta-rustdoc-$BETA_VERSION start=$STABLE end=$BETA mode=rustdoc cap-lints=warn p=5
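
For example, with illustrative values filled in (stable 1.39.0, a hypothetical beta manifest dated 2019-11-04, first beta run), the build-and-test invocation would look like:

@craterbot run name=beta-1.40.0-1 start=1.39.0 end=beta-2019-11-04 mode=build-and-test cap-lints=warn p=10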

Once the runs complete, you want to triage them.

Triaging

These steps should generally be done for the normal rustc run, and then followed up by a triage of the rustdoc run. Ignore failures in rustdoc that look to be rooted in rustc (i.e., duplicate failures).

There will usually be quite a few regressions – there are a couple of tools that can help reduce the amount of work you need to do. It’s mostly a matter of personal preference which is more helpful.

  • https://github.com/Mark-Simulacrum/crater-generate-report/
    • This groups regressions by ‘root’ by parsing the logs to look for the compilation failed messages printed by Cargo
  • https://github.com/Centril/crater-cat-errors
    • This groups regressions by the “error” message, also by parsing logs

If you’ve written a tool, feel free to add it here! We’re still figuring out what the best UI for this is.

Regardless of the tool you’ve run, you ultimately need to read through a bunch of logs and try to quickly determine if they’re genuine failures or spurious. Most of the time, a compiler failure is genuine, and test failures are mostly spurious, but this usually requires some level of guessing.

Once you’ve determined that something is a genuine failure, add it to a list somewhere (local file, HackMD, whatever) with the error “category.” Mostly, you’re trying to group things such that the regressions in a single group are all caused by the same set of commits, and different groups have different causes.

Once this is done, and you have all the regressions triaged into their separate groups, you want to file a new issue for each group. It should have the regression-from-stable-to-beta and T-compiler labels by default, possibly T-libs if it’s a standard library regression, but that’s relatively rare. If you happen to think you know the PR that caused the failure, cc the PR author in a separate comment and link to the PR; otherwise the compiler team will triage the issue soon.
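
If rustbot is available on the issue (and these labels are in the set triagebot.toml allows unprivileged users to apply), the default labels can be added by including a command like this in the issue body or a comment:

@rustbot label +regression-from-stable-to-beta +T-compiler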

Leave a comment on the original issue with the crater runs linking to all the issues you just opened, ideally with the issue titles as well.

You’re done!

Re-running rustc on a crate

For the crates which we’re not sure about, you can try running crater locally, or build the crate directly (cratesio-curl can be helpful). Be careful – regardless of what you do, you are running arbitrary code locally. It’s also fine to file issues for the crates you’re not sure about and let the triage process naturally categorize the error, though it’s not good to do this for all the crates. Once you’ve triaged a crater run a couple times you get a pretty good sense of what is spurious and what isn’t, too.

You can run crater on just a single crate by doing something like this (at least, as of now). Note that this will download several gigabytes (on first use) and requires Docker to be running.

git clone https://github.com/rust-lang/crater
cd crater
cargo run -- prepare-local
CRATES="crates-io-crate-0.4.0,owner/repository-name" # Edit this.
cargo run -- define-ex --crate-select=list:$CRATES --cap-lints=forbid 1.38.0 beta # Edit the stable version.
cargo run -- run-graph --threads 4
cargo run -- gen-report work/ex/default/
# view report for this crate

It’s also possible to re-queue a subset of crates onto the official builders; for that, take a look at: https://gist.github.com/ecstatic-morse/be799bfa4d3b3d6e163fa61a9c30706f

Determining the root cause of the regression

It’s not always apparent why a crate stopped building. This isn’t generally something done as part of crater triage – but can be a good followup. Here, cargo-bisect-rustc and Felix’s minimization guide are excellent tools to apply.

Edition Releases

This document gives an overview of how to manage an Edition release. This assumes that you are familiar with what Editions are (see the Edition Guide).

Edition project group

RFC 3501 established that the Leadership Council is responsible for forming a project group who takes responsibility for managing the Edition.

Project group leads

It is recommended that the project group have 2-3 leads, to make it easier to coordinate and quickly make decisions and actions that will be needed within a relatively tight timeline.

The other consideration is commitment and time availability. Unlike most of the work we do in Rust, Editions have a fixed timeline. This requires a different sort of commitment. Ordinarily our system is pretty tolerant of people coming and going or experiencing unexpected delays. There is less room for that with the Edition. Leads should be able to commit to meeting regularly and following up on action items in a timely fashion. (Of course, things happen, people take vacations, whatever; we’ll deal and make it work. But you get the idea.)

Another thing to consider is that the time requirements vary significantly from month to month. There may be some months where there is nothing to do, and a few where there is a fairly high time requirement (10+ hours in a week). This is also highly variable based on how many changes are in an Edition.

Project group members

Additional members of the Edition Project Group can help with shorter-term action items, or to help with specific aspects of the process (such as writing documentation, implementing migration lints, fixing bugs, authoring progress updates and blog posts, etc.).

Phases

Running an Edition involves many steps, coordinated across the project.

  1. Preparation phase. This is the time approximately 1-3 years before the Edition ships that involves all the preparation work. The sooner these tasks can be performed, the better.
    1. Preliminary support for the next edition should be added to tools. It might be nice to make this automated in the future. Examples:
    2. Teams start their proposals and implementation work.
  2. The Leadership Council sets up a Project Group to run the edition (approximately 1 year before the final release).
  3. Final deadline phase. This is the period starting about one year before the Edition release. This starts the series of final deadlines for anything to be added to the Edition. See Sample timeline below for the set of deadlines during the course of the year.

Feature phases

Each feature goes through a series of phases. This process can start at any time. The process can take a highly variable length of time, sometimes completing very quickly and sometimes taking many years.

  1. Individuals and teams propose an Edition change. The exact process will vary by team, but a common way to start is to post a Pre-RFC on IRLO.
  2. An RFC should be posted with the proposed Edition change.
  3. The team accepts the RFC. This indicates that the team wants the idea in principle, but does not guarantee that it will make it in time for a specific Edition.
  4. Either the RFC or the team should put together a migration plan that defines how migrations will be handled from the previous edition. It’s OK for some kinds of breakage to require people to make manual edits to the code, but that has to be rare, and ideally it should be noisy (i.e., people will get compilation errors, not surprising semantics at runtime). It’s up to the edition leads to make the call on what is “rare enough”.
  5. Implement the feature and migration support.
  6. Informal testing should happen on nightly by people most interested in the feature. Issues should be identified and fixed during this time.
  7. The team responsible for the feature should make a final call if the feature is ready for the next edition by the feature cutoff date. This should be done in conjunction with the Edition Project Group.
  8. Document the change in places such as the Edition Guide and the Reference.

Sample timeline

The following is a sample set of deadlines of the Edition.

This is shown as milestones relative to the release. Previous Edition releases have hit stable late in the year (October), but it is highly encouraged for future Editions to release earlier in the year, such as June.

These dates are not set in stone (for example, Rust releases on a 6 week cadence, so the exact release date shifts around), and the Edition Project Group should adjust these as desired.

  • 1-3 years before the Edition release date
    • Teams should be planning and implementing their Edition changes.
  • T-11 months
    • Leadership Council ensures the Edition Project Group is formed and ready.
    • Blog post announcing the Edition schedule.
    • Edition Project Group should start coordinating with teams for their list of changes, and set up a tracking tool to track the changes.
    • Tools should have preliminary support for the next edition (ideally this should be done soon after the previous Edition).
    • Public blog post calling for the final list of features, and to communicate the final deadlines. Example
  • T-10 months
    • Last chance for Pre-RFC proposals.
  • T-9 months
    • Last chance for RFC approvals.
  • T-8 months
    • Final list of Edition changes is complete, all RFCs approved.
    • Public blog post informing what is included in the Edition. Example
  • T-7 months
  • T-6 months
  • T-5 months
    • All features and migrations implemented on nightly. All feature gates should be removed.
  • T-4 months
    • Crater test all migrations (see Crater migration test below).
    • Edition Project Group should be tracking all edition issues to ensure they get resolved in time.
  • T-3 months
    • Public blog post calling for final testing on nightly. Example
  • T-2 months
    • Most issues have been fixed.
    • Documentation finished (the Edition Guide, the Reference, etc.).
    • The Edition Project Group should make a final go/no-go decision on stabilizing the Edition, or if it has to be delayed to the next release.
    • Edition is stabilized on nightly for all tools (rustc, cargo, etc.).
  • T-1 months
    • Edition reaches beta, last chance for any backports.
    • Work with Release team to prepare release announcement. Example
  • T-0 months
    • Edition is released on stable.

Crater migration test

Crater does not directly support testing migration lints. To perform a crater run, a modified version of cargo must be used which will perform the steps necessary to migrate a crate. #87190 contains an example of what this looks like. This roughly performs the following steps:

  1. Crater runs cargo check using the previous master build.
  2. Crater runs cargo check using the modified cargo. This modified cargo check will perform the following steps instead of doing a normal check.
  3. Copies the package to a temp directory (since the source directory is read-only in crater).
  4. Checks if the package’s edition is older than the current edition. If so, skip it, since we only want to test migration of the current edition.
  5. Runs cargo fix --edition --allow-no-vcs --allow-dirty.
  6. Modifies Cargo.toml to set the new edition.
  7. Runs cargo check with a special environment variable so that the real cargo check can run.

The modified cargo also allows setting the new edition without cargo-features being used.

If the final cargo fix or cargo check steps fail, and the check succeeded on the previous master build, then that signals a regression where the migration failed.
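
The per-crate steps above can be summarized as a rough, illustrative shell sketch; the paths, the edition numbers, and the final environment-variable gate are assumptions for illustration, not the actual patch from #87190:

# illustrative sketch: migrate one crate from the 2018 edition to 2021, then re-check it
set -e
cp -r "$CRATE_SRC" "$WORK_DIR" && cd "$WORK_DIR"   # the crater source directory is read-only

# only test migration of crates that are on the current (here: 2018) edition
grep -q 'edition = "2018"' Cargo.toml || exit 0

cargo fix --edition --allow-no-vcs --allow-dirty

# set the new edition in Cargo.toml, then run the real check
sed -i 's/edition = "2018"/edition = "2021"/' Cargo.toml
EDITION_MIGRATION_DONE=1 cargo check   # hypothetical variable standing in for the real gate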

The Edition Project Group is then responsible for analyzing the report, filing issues for any problems, and following up with teams to get those fixed.

This process may need to be repeated several times as problems are fixed and need to be re-tested.

Beware that the process of running this and analyzing the reports may take a long time, depending on how many changes are in the Edition. The 2021 Edition took about a month, which involved analyzing hundreds of regressions, determining root causes, and re-running crater after fixes had been implemented.

Blog posts and announcements

It is highly encouraged for the Edition Project Group and the involved teams to communicate with everyone early and frequently. Major milestones should be announced on the Rust blog. Inside Rust blog posts should be made regularly (such as monthly) with updates about the overall progress and timelines.

Partly in public messaging, but ideally everywhere, reiterate what editions are and how they work (no code breaks! no ecosystem split!). There is always confusion; last time, some reporters even reached out to the Rust Foundation for clarification. The Edition Project Group should coordinate with the Foundation team on public messaging, as they will get questions.

Tracking tools

The Edition Project Group should decide which tools they want to use to track the progress of the edition. Individual teams will also likely want to choose the tools that are best for them. In the past, we have used a mix of different tools, such as GitHub Projects, GitHub issue labels, Google Sheets, HackMd, etc. Use whatever you are comfortable with, just keep in mind that it should be publicly accessible.

Examples:

Examples of individual team tracking:

Implementation notes

Individual teams and projects have their own resources for how to implement Edition changes. The following are some links with additional information if you are looking into how Edition changes are implemented.

  • How migrations work if you need a refresher on how the underlying system works.
  • The rustc-dev-guide has an Editions chapter which contains information on how to implement edition-specific changes in rustc.
  • The Rust Style Guide has information on Rust style editions, which define styling changes across editions.

Historical context

  • RFC 2052 started the Edition system, and kicked off the 2018 Edition.
  • RFC 3085 set the plan for the 2021 Edition, as well as clarifying and changing the meaning of an Edition.
  • RFC 3501 kicked off the 2024 Edition, as well as formalizing the 3-year cadence and establishing the process for managing future Editions.

Archive

This section is for content that has become outdated, but that we want to keep available to be read for historical/archival reasons.

Friends of the Tree

The Rust Team likes to occasionally recognize people who have made outstanding contributions to The Rust Project, its ecosystem, and its community. These people are ‘Friends of the Tree’, archived here for eternal glory.

2016-02-26 @mitaa

This week we would like to nominate @mitaa as Friend of the Tree. Recently @mitaa has sent a wave of fixes to rustdoc (yes those are all separate links) with even more on the way! Rustdoc has historically been a tool in need of some love, and the extra help in fixing bugs is especially appreciated. Thanks @mitaa!

2016-02-12 Jeffrey Seyfried (@jseyfried)

This week’s friend of the tree is Jeffrey Seyfried (@jseyfried)!

Jeffrey Seyfried (@jseyfried) has made some awesome contributions to name resolution. He has fixed a ton of bugs, reported previously unknown edge cases, and done some big refactorings, all of which have helped improve a complex and somewhat neglected part of the compiler.

2015-12-04 Vadim Petrochenkov @petrochenkov

This week we’d like to nominate @petrochenkov for Friend of the Tree. Vadim has been doing some absolutely amazing compiler work recently such as fixing privacy bugs, fixing hygiene bugs, fixing pattern bugs, paving the way and implementing #[deprecated], fixing and closing many privacy holes, refactoring and improving the HIR, and reviving the old type ascription PR. The list of outstanding bugs and projects in the compiler is growing ever smaller now; thanks @petrochenkov!

2015-11-16 Peter Atashian (WindowsBunny, retep998)

In his own words, WindowsBunny is “a hopping encyclopedia of all the issues windows users might run into and how to solve them.” One of the heroes that make Rust work on Windows, he actively pushes the frontiers of what Rust can do on the platform. He is also notably the maintainer of the winapi family of crates, a comprehensive set of bindings to the Windows system APIs. You da bunny, WindowsBunny. Also, a friend of the tree.

Source.

2015-10-31 Marcus Klaas

Today @nrc would like to nominate @marcusklaas as Friend of the Tree:

Marcus is one of the primary authors of rustfmt. He has been involved since the early days and is now the top contributor. He has fixed innumerable bugs, implemented new features, reviewed a tonne of PRs, and contributed to the design of the project. Rustfmt would not be the software it is today without his hard work; he is indeed a Friend Of The Tree.

2015-10-16 Ryan Prichard

nmatsakis would also like to declare Ryan Prichard a Friend of the Tree. Over the last few months, Ryan has been comparing the Rust compiler’s parsing behavior with that of the rust-grammar project, which aims to create a LALR(1) grammar for parsing Rust. Ryan has found a number of inconsistencies and bugs between the two. This kind of work is useful for two reasons: it finds bugs, obviously, which are often hard to uncover any other way. Second, it helps pave the way for a true Rust reference grammar outside of the compiler source itself. So Ryan Prichard, thanks!

2015-10-02 Vikrant Chaudhary

Vikrant Chaudhary (nasa42) is an individual who believes in the Rust community. Since June he has been contributing to This Week in Rust, coordinating its publication on urlo, and stirring up contributions. He recently rolled out an overhaul to the site’s design that brings it more inline with the main website. Today Vikrant is the main editor on the weekly newsletter, assisted by llogiq and other contributors. Thanks for keeping TWiR running, Vikrant, you friend of the tree.

Source.

2015-07-24 Tshepang Lekhonkhobe

@Gankra has nominated @tshepang for Friend of the Tree this week:

Over the last year Tshepang has landed over 100 improvements to our documentation. Tshepang saw where documentation was not, and said “No. This will not do.”

We should all endeavor to care about docs as much as Tshepang.

Source.

2015-05-19 Chris Morgan

I’d like to nominate Chris Morgan (@chris-morgan) for Friend of the Tree today. Chris recently redesigned the play.rust-lang.org site for the 1.0 release, giving the site a more modern and rustic feel to it. Chris has been contributing to Rust for quite some time now, his first contribution dating back to July 2013 and also being one of the early pioneers in the space of HTTP libraries written in Rust. Chris truly is a friend of the tree!

2015-03-24 Andrew Gallant (BurntSushi)

BurntSushi is an individual who practically needs no introduction. He’s written many of the world’s most popular crates, including docopt.rs, regex, quickcheck, cbor, and byteorder. Don’t forget his CSV swiss-army-knife, xsv, built on rust-csv. Feedback from his early work on libraries helped informed the evolution of Rust during a critical time in its development, and BurntSushi continues to churn out the kind of Rust gems that can only come from someone who is a skilled friendofthetree.

2015-03-03 Manish Goregaokar (Manishearth)

Manish started working on Servo as part of the GSoC program in 2014, where he implemented XMLHttpRequest. Since then he’s become an integral part of the Servo team while finishing his university studies and organizing Rust community events. In 2015 he took an interest in bors’ queue and started making rollup PRs to accelerate the integration process. Nursing the PR queue is the kind of time-consuming labor that creates friends of the tree like Manish, the rollup friend of the tree.

2015-02-17 Toby Scrace

Today I would like to nominate Toby Scrace as Friend of the Tree. Toby emailed me over the weekend about a login vulnerability on crates.io where you could log in to whomever the previously logged in user was regardless of whether the GitHub authentication was successful or not. I very much appreciate Toby emailing me privately ahead of time, and I definitely feel that Toby has earned becoming Friend of the Tree.

2015-02-10 Jonathan Reem (reem)

Jonathan Reem has been making an impact on Rust since May 2014. His primary contribution has been as the main author of the prominent Iron web framework, though he has also created several other popular projects including the testing framework stainless. His practical experience with these projects has led to several improvements in upstream rust, most notably his complete rewrite of the TaskPool type. Reem is doing everything he can to advance the Rust cause.

2015-01-20 Barosl Lee (barosl)

Today I would like to nominate Barosl Lee (@barosl) for Friend of the Tree. Barosl has recently rewritten our bors cron job in a new project called homu. Homu has a number of benefits including:

  • Zero “down time” between testing different PRs (compared to 30+ minutes for bors!)
  • A new rollup button to create separate rollup PRs from other PRs.
  • Multiple repositories are supported (Cargo and Rust are on the same page)

Homu was recently deployed for rust-lang/rust thanks to a number of issues being closed out by Barosl, and it’s been working fantastically so far! Barosl has also been super responsive to any new issues cropping up. Barosl truly is a Friend of the Tree!

2015-01-13 Kang Seonghoon (lifthrasiir, Yurume)

Seonghoon has been an active member of the Rust community since early 2013, and although he has made a number of valuable contributions to Rust itself, his greatest work has been in developing key libraries out of tree. rust-encoding, one of the most popular crates in Cargo, performs character encoding, and rust-chrono date / time handling, both of which fill critical holes in the functionality of the standard library. rust-strconv is a prototype of efficient numerical string conversions that is a candidate for future inclusion in the standard library. He maintains a blog where he discusses his work.

2015-01-06 Jorge Aparicio (japaric)

I nominate Jorge Aparicio (japaric) for Friend of the Tree (for the second time, no less!). japaric has done tremendous work porting the codebase to use the new language features that are now available. First, he converted APIs in the standard library to take full advantage of DST after it landed. Next, he converted APIs to use unboxed closures. Then, he converted a large portion of the libraries to use associated types. Finally, he removed boxed closures from the compiler entirely. He has also worked to roll out RFCs changing the overloaded operators and comparison traits, including both their definitions and their impact on the standard library. And this list excludes a number of smaller changes, like deprecating older syntax. The alpha release would not be where it is without him; Japaric is simply one of the best friends the tree has ever had.

2014-12-30 Kevin Ballard (kballard, Eridius)

This is a belated recognition of Kevin Ballard (aka @kballard, aka Eridius) as a friend of the tree. Kevin put a lot of work into Unicode issues in Rust, especially as related to platform-specific constraints. He wrote the current path module in part to accommodate these constraints, and participated in the recent redesign of the module. He has also been a dedicated and watchful reviewer. Thanks, Kevin, for your contributions!

2014-12-16 Gábor Lehel (glaebhoerl)

Gabor’s major contributions to Rust have been in the area of language design. In the last year he has produced a number of very high quality RFCs, and though many of them have not yet been accepted, his ideas are often thought-provoking and have had a strong influence on the direction of the language. His trait-based exception handling RFC was particularly innovative, as was the one for future-proofing checked arithmetic. Gabor is an exceedingly clever Friend of the Tree.

2014-11-11 Brian Koropoff (unwound)

In the last few weeks, he has fixed many, many tricky ICEs all over the compiler, but particularly in the area of unboxed closures and the borrow checker. He has also completely rewritten how unboxed closures interact with monomorphization and had a huge impact on making them usable. Brian Koropoff is truly a Friend of the Tree.

2014-10-07 Alexis Beingessner (Gankra)

Alexis Beingessner (aka @Gankra) began contributing to Rust in July, and has already had a major impact on several library-related areas. Her main focus has been collections. She completely rewrote BTree, providing a vastly more complete and efficient implementation. She proposed and implemented the new Entry API. She’s written extensive new documentation for the collections crate. She pitched in on collections reform.

And she added collapse-all to rustdoc!

Alexis is, without a doubt, a FOTT.

2014-09-02 Jorge Aparicio (japaric)

Jorge has made several high-impact contributions to the wider Rust community. He is the primary author of rustbyexample.com, and last week published “eulermark”, a comparison of language performance on project Euler problems, which happily showed Rust performing quite well. As part of his benchmarking work he has ported the ‘criterion’ benchmarking framework to Rust.

2014-07-29 Björn Steinbrink (dotdash, doener)

Contributing since April 2013. Björn has done many optimizations for Rust, including removing allocation bloat in iterators, fmt, and managed boxes; optimizing fail!; adding strategic inlining in the libraries; speeding up data structures in the compiler; eliminating quadratic blowup in translation, and other IR bloat problems.

He’s really done an amazing number of optimizations to Rust.

Most recently he earned huge kudos by teaching LLVM about the lifetime of variables, allowing Rust to make much more efficient use of the stack.

Björn is a total FOTT.

2014-07-22 Jonas Hietala (treeman)

Jonas Hietala, aka @treeman, has been contributing a large amount of documentation examples recently for modules such as hashmap, treemap, priority_queue, collections, bigint, and vec. He has also additionally been fixing UI bugs in the compiler such as those related to format!

Jonas continues to add new examples/documentation every day, making documentation more approachable and understandable for all newcomers. Jonas truly is a friend of the tree!

2014-07-08 Sven Nilson (bvssvni, long_void)

Sven Nilson has done a great deal of work to build up the Rust crate ecosystem, starting with the well-regarded rust-empty project that provides boilerplate build infrastructure and - crucially - integrates well with other tools like Cargo.

His Piston project is one of the most promising Rust projects, and it’s one that integrates a number of crates, stressing Rust’s tooling at just the right time: when we need to start learning how to support large-scale external projects.

Sven is a friend of the tree.

2014-06-24 Jakub Wieczorek (jakub-)

jakub-, otherwise known as Jakub Wieczorek, has recently been working very hard to improve and fix lots of match-related functionality, a place where very few dare to venture! Most of this code appears to be untouched for quite some time now, and it’s receiving some well-deserved love now.

Jakub has fixed 10 bugs this month alone, many of which have been long-standing problems in the compiler. He has also been very responsive in fixing bugs as well as triaging issues that come up from fun match assertions.

Jakub truly is a friend of the tree!

2014-04-22 klutzy

klutzy has been doing an amazing amount of Windows work for years now. He picks up issues that affect our quality on Windows and picks them off 1 by 1. It’s tedious and doesn’t get a ton of thanks, but is hugely appreciated by us. As part of the Korean community, he has also done a lot of work for the local community there. He is a friend of the tree. Thank you!

  • Rust on Windows crusader
  • Fixed issues with x86 C ABI struct arguments
  • Fixed multiple issues with non-US locales

2014-03-18 Clark Gaebel (cgaebel)

This week’s friend of the tree is Clark Gaebel. He just landed a huge first contribution to Rust. He dove in and made our hashmaps significantly faster by implementing Robin Hood hashing. He is an excellent friend of the tree.

2014-02-25 Erick Tryzelaar (erickt)

  • Contributing since May 2011
  • Wrote the serialization crate
  • Organizes the bay area Rust meetups
  • Just rewrote the Hash trait

2014-02-11 Flavio Percoco (FlaPer87)

  • Contributing since September
  • Does issue triage
  • Organizing community events in Italy
  • Optimized the ‘pow’ function
  • Recently been fixing lots of small but important bugs

2014-01-27 - Jeff Olson (olsonjefferey)

  • Contributing since February 2012
  • Did the original libuv integration
  • Implemented our second attempt at I/O, first using libuv
  • Ported parts of the C++ runtime to Rust
  • Implemented file I/O for the newest runtime
  • Last week published an article about file I/O on the Safari books blog

2014-01-21 - Steven Fackler (sfackler)

  • Contributing since last May
  • CMU grad
  • Lots of library improvements, Base64, Bitv, I/O
  • Rustdoc improvements
  • Mut/RefCell
  • std::io::util
  • external module loading

2014-01-14 - Eduard Burtescu (eddyb)

  • Contributing since October
  • Working on the compiler, including trans
  • Reduced rustc memory usage
  • Optimized vector operations
  • Helping refactor the compiler to eliminate use of deprecated features
  • Cleaned up ancient code in the compiler
  • Removed our long-standing incorrect use of the environment argument to pass the self param

2014-01-07 - Vadim Chugunov (vadimcn)

  • Contributing since June
  • Fixed numerous bugs on Windows
  • Fixing broken tests
  • Improved compatibility with newer mingw versions
  • Eliminated our runtime C++ dependency by implementing unwinding through libunwind

Rust Release history

This is an archive of Rust release artifacts from 0.1–1.7.0. Each release is signed with the Rust GPG signing key (older key, even older key).

1.7.0

1.6.0

1.5.0

1.4.0

1.3.0

1.2.0

1.1.0

1.0.0

1.0.0-beta

1.0.0-alpha.2

1.0.0-alpha

Rust 0.x

In addition to the short-form release announcement included in the mailing list, each 0.x release has a longer explanation in the release notes.

0.12.0

0.11.0

0.10

0.9

0.8

0.7

0.6

0.5

0.4

0.3.1

This was an OS X bugfix release.

0.3

0.2

0.1