Web framework performance test for Rust

In this article, we will compare the performance of the three most popular backend frameworks for Rust: Axum, Actix, and Rocket.

Testing methodology

Using each of the frameworks, we will write a simple web service with three endpoints:

POST /test/simple

Accepts a parameter in JSON, formats it, and returns the result in JSON

POST /test/timed

Accepts a parameter in JSON, sleeps for 20 ms, formats it as in the previous method, and returns the result in JSON

POST /test/bcrypt

Accepts a parameter in JSON, hashes it using the bcrypt algorithm with cost=10, and returns the result in JSON

The first endpoint measures the framework's net overhead and represents an endpoint with the simplest possible business logic; the second represents an endpoint that makes a light query to a database or another service; the third represents heavy business logic. All endpoints accept and return a JSON object with a single string payload field.

The code for all three frameworks is written using examples from the official sites; all performance-related settings are left at their defaults.

Axum

The framework was first announced on July 30, 2021. It is the youngest framework under review and at the same time the most popular one. It is developed by the tokio team, authors of the most popular asynchronous runtime for Rust (which Actix and Rocket also use under the hood).

One of the advantages of the framework is the ability to describe endpoints without using macros, which makes the code and compiler messages more readable and understandable. It also improves the quality of syntax highlighting and hints in the IDE. Along with this advantage, the authors claim the following:

  • Declarative parsing of request parameters using extractors

  • A simple and predictable error handling model

  • Response generation with minimal boilerplate

  • The ability to use the tower and tower-http ecosystem of middleware, services, and utilities

The quality of the documentation is high – you will have no problems following the beginner's guide.

The main function of the application is the usual asynchronous main function from tokio; you can perform asynchronous initialization.

GitHub: https://github.com/tokio-rs/axum

Documentation: https://docs.rs/axum/latest/axum/

Number of downloads at crates.io: 23 million

Code

main.rs

use std::str::FromStr;
use std::time::Duration;

use axum::Json;
use axum::response::IntoResponse;
use tokio::time::sleep;

#[derive(Debug, serde::Serialize, serde::Deserialize)]
struct Data {
    payload: String
}

async fn simple_endpoint(Json(param): Json<Data>) -> impl IntoResponse {
    Json(Data {
        payload: format!("Hello, {}", param.payload)
    })
}

async fn timed_endpoint(Json(param): Json<Data>) -> impl IntoResponse {
    sleep(Duration::from_millis(20)).await;
    Json(Data {
        payload: format!("Hello, {}", param.payload)
    })
}

async fn bcrypt_endpoint(Json(param): Json<Data>) -> impl IntoResponse {
    Json(Data {
        payload: bcrypt::hash(&param.payload, 10).unwrap()
    })
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    env_logger::init_from_env(env_logger::Env::default().default_filter_or("info"));
    let router = axum::Router::new()
        .route("/test/simple", axum::routing::post(simple_endpoint))
        .route("/test/timed", axum::routing::post(timed_endpoint))
        .route("/test/bcrypt", axum::routing::post(bcrypt_endpoint));
    let address = "0.0.0.0";
    let port = 3000;
    log::info!("Listening on http://{}:{}/", address, port);
    axum::Server::bind(
        &std::net::SocketAddr::new(
            std::net::IpAddr::from_str(address).unwrap(),
            port
        )
    ).serve(router.into_make_service()).await?;
    Ok(())
}

Cargo.toml

[package]
name = "rust_web_benchmark"
version = "0.1.0"
edition = "2021"

[dependencies]
log = "0.4.20"
env_logger = "0.10.0"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
axum = "0.6.20"
serde = { version = "1.0.189", features = ["derive"] }
bcrypt = "0.15.0"

Actix

The first release on GitHub dates to October 31, 2017.

Key advantages stated by the developers:

  • Type safety

  • Rich in features (HTTP/2, logging, etc.)

  • Extensibility

  • Extreme performance

Macros are used to describe endpoints. The main function of the application is compatible with the regular main function in tokio, so you can perform asynchronous initialization.

The quality of the documentation for beginners is quite good, so even with no experience with Actix you’ll be able to write test code without difficulty at a comparable speed.

Official website: https://actix.rs/

Number of downloads at crates.io: 5.8 million

Code

main.rs

use std::time::Duration;

use actix_web::{post, App, HttpResponse, HttpServer, Responder};
use actix_web::web::Json;
use tokio::time::sleep;

#[derive(Debug, serde::Serialize, serde::Deserialize)]
struct Data {
    payload: String
}

#[post("/test/simple")]
async fn simple_endpoint(Json(param): Json<Data>) -> impl Responder {
    HttpResponse::Ok().json(Data {
        payload: format!("Hello, {}", param.payload)
    })
}

#[post("/test/timed")]
async fn timed_endpoint(Json(param): Json<Data>) -> impl Responder {
    sleep(Duration::from_millis(20)).await;
    HttpResponse::Ok().json(Data {
        payload: format!("Hello, {}", param.payload)
    })
}

#[post("/test/bcrypt")]
async fn bcrypt_endpoint(Json(param): Json<Data>) -> impl Responder {
    HttpResponse::Ok().json(Data {
        payload: bcrypt::hash(&param.payload, 10).unwrap()
    })
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    env_logger::init_from_env(env_logger::Env::default().default_filter_or("info"));
    let address = "0.0.0.0";
    let port = 3000;
    log::info!("Listening on http://{}:{}/", address, port);
    HttpServer::new(|| {
        App::new()
            .service(simple_endpoint)
            .service(timed_endpoint)
            .service(bcrypt_endpoint)
    })
        .bind((address, port))?
        .run()
        .await
}

Cargo.toml

[package]
name = "rust_web_benchmark"
version = "0.1.0"
edition = "2021"

[dependencies]
log = "0.4.20"
env_logger = "0.10.0"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
actix-web = "4"
serde = { version = "1.0.189", features = ["derive"] }
bcrypt = "0.15.0"

Rocket

First released in 2016, it is the oldest of the frameworks under review; until version 0.5 it used its own asynchrony implementation, but starting from version 0.5 it switched to tokio.

Key advantages stated by the developers:

  • Type safety

  • Freedom from boilerplate code

  • Simple, intuitive API

  • Extensibility

Macros are used extensively to define handlers; a special rocket::launch macro defines the application's main function, which must return the built framework instance.

Although version 0.5 claims to support the stable branch of Rust, the project could not be built with it: the pear dependency requires nightly, so this is the only test built with the nightly compiler.

It is also worth noting that the documentation is confusing because of the major API changes in version 0.5: a Google search often turns up examples for version 0.4 that don't work in version 0.5. Be prepared to spend extra time fixing compilation errors after copying examples from the documentation. This would probably stop being a problem once you know the framework well, but for a beginner it is definitely a significant disadvantage.

Official website: https://rocket.rs/

Number of downloads at crates.io: 3.7 million

Code

main.rs

#[macro_use] extern crate rocket;

use std::time::Duration;

use rocket::serde::json::Json;
use tokio::time::sleep;

#[derive(Debug, serde::Serialize, serde::Deserialize)]
struct Data {
    payload: String
}

#[post("/test/simple", data = "<param>")]
async fn simple_endpoint(param: Json<Data>) -> Json<Data> {
    Json(Data {
        payload: format!("Hello, {}", param.into_inner().payload)
    })
}

#[post("/test/timed", data = "<param>")]
async fn timed_endpoint(param: Json<Data>) -> Json<Data> {
    sleep(Duration::from_millis(20)).await;
    Json(Data {
        payload: format!("Hello, {}", param.into_inner().payload)
    })
}

#[post("/test/bcrypt", data = "<param>")]
async fn bcrypt_endpoint(param: Json<Data>) -> Json<Data> {
    Json(Data {
        payload: bcrypt::hash(&param.into_inner().payload, 10).unwrap()
    })
}

#[launch]
fn rocket() -> _ {
    rocket::build()
        .configure(rocket::Config::figment()
            .merge(("address", "0.0.0.0"))
            .merge(("port", 3000))
        )
        .mount("/", routes![
            simple_endpoint,
            timed_endpoint,
            bcrypt_endpoint
        ])
}

Cargo.toml

[package]
name = "rust_web_benchmark"
version = "0.1.0"
edition = "2021"

[dependencies]
tokio = "1"
rocket = { version = "0.5.0-rc.3", features = ["json"] }
serde = { version = "1.0.189", features = ["derive"] }
bcrypt = "0.15.0"

Benchmark

As a benchmark, let's write a simple application that spawns N parallel tasks, each of which sends M requests to the specified URL. The duration of each successful (200 OK) request is measured in microseconds; failed requests are simply counted. For the test result, the arithmetic mean and the median are calculated, as well as requests per second (the number of successful requests divided by the total time between the start of the first task and the end of the last one).

The benchmark uses tokio and the reqwest library.

Code

main.rs

use reqwest::StatusCode;

static REQ_PAYLOAD: &str = "{\n\t\"payload\": \"world\"\n}\n";

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let args = std::env::args().collect::<Vec<_>>();
    if args.len() != 4 {
        println!("Usage: {} url thread-count request-count", args[0]);
        return Ok(());
    }
    let url = args[1].clone();
    let thread_count: usize = args[2].parse().unwrap();
    let request_count: usize = args[3].parse().unwrap();
    let client = reqwest::Client::new();
    let start = std::time::Instant::now();
    let handles = (0..thread_count).map(|_| {
        let url = url.clone();
        let client = client.clone();
        tokio::spawn(async move {
            let mut error_count = 0;
            let mut results = Vec::new();
            for _ in 0..request_count {
                let start = std::time::Instant::now();
                let res = client
                    .post(&url)
                    .header("Content-Type", "application/json")
                    .body(REQ_PAYLOAD)
                    .send()
                    .await
                    .unwrap();
                if res.status() == StatusCode::OK {
                    res.text().await.unwrap();
                    results.push(start.elapsed().as_micros());
                } else {
                    error_count += 1;
                }
            }
            (results, error_count)
        })
    }).collect::<Vec<_>>();
    let mut results = Vec::new();
    let mut error_count = 0;
    for handle in handles {
        let (out, err_count) = handle.await.unwrap();
        results.extend(out.into_iter());
        error_count += err_count;
    }
    let elapsed = start.elapsed();
    let rps = results.len() as f64 / elapsed.as_secs_f64();
    results.sort();
    println!(
        "average={}us, median={}us, errors={}, total={}, rps={}",
        results.iter().sum::<u128>() / results.len() as u128,
        results[results.len() / 2],
        error_count,
        results.len(),
        rps
    );
    Ok(())
}

Cargo.toml

[package]
name = "bench"
version = "0.1.0"
edition = "2021"

[dependencies]
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
reqwest = "0.11.22"

Results

Each service was launched in its own Docker container (for easy tracking of resource consumption), then the benchmark was run against the same endpoint of each service in turn. After that, all containers were restarted and the process was repeated for the next endpoint, and so on. The full test was repeated three times with three different container orderings (to eliminate any advantage from possible throttling after the first test, or conversely from the CPU exiting power-saving mode after the first test), and the results were averaged.

For the simple and timed endpoints, 100 tasks of 100 requests each were used. For the bcrypt endpoint, 10 tasks of 50 requests each were used.

Test   | Metric       | Axum    | Actix   | Rocket
-------|--------------|---------|---------|--------
Simple | Average (ms) |  7.727  |  7.239* | 12.971
       | Median (ms)  |  3.698  |  3.1*   |  9.097
       | RPS          | 12010   | 12483*  |  7419
Timed  | Average (ms) | 25.922  | 25.764* | 26.402
       | Median (ms)  | 22.379  | 21.906* | 22.659
       | RPS          |  3799*  |  3789   |  3696
Bcrypt | Average (ms) |   493*  |   505   |   501
       | Median (ms)  |   474*  |   486   |   503
       | RPS          |    93*  |    86   |    91

(* marks the best result for each metric)

As you can see, Axum and Actix are toe to toe in terms of performance, with Actix slightly ahead. Rocket is the clear underdog. Keep in mind, however, that the test is synthetic: in real applications the entire difference in performance will be eroded by business logic, database queries, calls to external services, and so on (in fact, this can already be observed in the timed and bcrypt tests, where the gap between all three frameworks becomes almost invisible).

RAM consumption

                      | Axum     | Actix   | Rocket
----------------------|----------|---------|---------
After launch          | 0.75 MiB | 1.3 MiB | 0.97 MiB
During test (maximum) | 71 MiB   | 71 MiB  | 102 MiB
After test            | 0.91 MiB | 2.4 MiB | 1.8 MiB

In terms of RAM consumption, Axum is the clear winner. Actix consumes a comparable amount under load, but when idle, especially after the first load, it consumes the most. Rocket is average when idle, but under load it consumes about 40% more than the other two.

Conclusion

The favorite of this review is Axum: the largest community, good documentation, many examples, high performance, and the most economical RAM consumption, which is especially important when developing microservices. Its lag behind Actix in performance is insignificant and may be within the margin of error of the testing methodology; even if it is real, Axum is the youngest framework, so the gap will most likely disappear as it matures and updates are released. The ability to describe endpoints without macros is very convenient.

Second place goes to Actix, with documentation no worse than Axum's, slightly higher performance in some scenarios, and good memory consumption under load. However, its reliance on macros and its high memory consumption when idle are notable disadvantages.

There are no obvious advantages to Rocket at the moment. It may have been an outstanding, pioneering framework when it was released in 2016, but now it loses to newer frameworks in both memory consumption and performance, still has problems with the stable branch of Rust, and has confusing documentation due to the breaking changes between versions 0.4 and 0.5.

Benchmark source code on GitHub

#framework #axum #actix #rocket #benchmark #IT
