Project Name | Description | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
---|---|---|---|---|---|---|---|---|
Grpc Rs | The gRPC library for Rust built on the C Core library and futures | 1,687 | 22 days ago | 43 | June 28, 2022 | 84 | apache-2.0 | Rust |
Grpc | An Elixir implementation of gRPC | 1,230 | 2 days ago | 8 | March 21, 2020 | 38 | apache-2.0 | Elixir |
Grpc_bench | Various gRPC benchmarks | 728 | 2 months ago | | | 22 | mit | Shell |
Rpc Benchmark | Java RPC benchmark, inspired by https://www.techempower.com/benchmarks/ | 507 | 6 months ago | | | 60 | apache-2.0 | Java |
Flatsharp | Fast, idiomatic C# implementation of Flatbuffers | 414 | 8 hours ago | 52 | April 26, 2022 | 7 | apache-2.0 | C# |
Rpcx Benchmark | Benchmark code for rpcx, gRPC, Dubbo and Motan | 170 | 8 months ago | | June 02, 2021 | 2 | apache-2.0 | Java |
Jmeter Grpc Request | JMeter gRPC Request load test plugin for gRPC | 149 | 23 days ago | | | 15 | apache-2.0 | Java |
Ptg | 💥 Performance testing tool (Go); it is also a GUI gRPC client | 113 | a year ago | | | 1 | | Go |
Benchmark Grpc Protobuf Vs Http Json | Benchmarks comparing gRPC+Protobuf vs JSON+HTTP in Go | 97 | 4 years ago | | | | | Go |
Test Infra | Repo for gRPC testing infrastructure support code | 71 | 13 days ago | 31 | June 27, 2022 | 3 | apache-2.0 | Go |

One repo to finally have a clear, objective gRPC benchmark with code for everyone to verify and improve.
Contributions are most welcome! Feel free to use discussions if you have questions/issues or ideas. There is also a category where you are encouraged to submit your own benchmark results!
See the Nexthink blog post for a deeper overview of the project and recent results.
The goal of this benchmark is to compare the performance and resource usage of various gRPC libraries across different programming languages and technologies. To achieve that, a minimal protobuf contract is used, both to avoid polluting the results with other concerns (e.g. the performance of hash maps) and to keep the implementations simple.
That being said, the service implementations should NOT take advantage of that: the code should stay generic and maintainable. What does generic mean? One should be able to easily adapt the existing code to some fundamental use cases (e.g. having a thread-safe hash map on the server side to provide values to clients given some key, performing blocking I/O, or retrieving a network resource).
Keep in mind the following guidelines:
Although in the end the results are sorted by the number of requests served, one should go beyond that and look at the resource usage - perhaps one implementation is slightly better in terms of raw speed but uses three times more CPU to achieve it. Maybe it's better to take the first one if you're running on a Raspberry Pi and want to get the most out of it; maybe it's better to use the latter on a big server with 32 CPUs because it scales. It all depends on your use case. This benchmark is created to help people make an informed decision (and get ecstatic when their favourite technology proves to be really good, without any doubts).
We try to provide some metrics to make this decision easier, such as the CPU and memory usage of the service as reported by `docker stats`.

You need Linux or MacOS with Docker. Keep in mind that the results on MacOS may not be that reliable; Docker for Mac runs on a VM.
To build the benchmark images, use `./build.sh [BENCH1] [BENCH2] ...`. You need them to run the benchmarks.

To run the benchmarks, use `./bench.sh [BENCH1] [BENCH2] ...`. They will be run sequentially.

To clean up the benchmark images, use `./clean.sh [BENCH1] [BENCH2] ...`.
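For example, a typical session might look like the sketch below (the benchmark names are only illustrative; use the implementation directories that actually exist in the repository):

```sh
# Build the images for the implementations you want to compare
# (names are illustrative - pick directories that exist in the repo).
./build.sh rust_tonic_mt go_grpc

# Run the selected benchmarks sequentially; stats are printed after each run.
./bench.sh rust_tonic_mt go_grpc

# Remove the benchmark images once you are done.
./clean.sh rust_tonic_mt go_grpc
```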
The benchmark can be configured through the following environment variables:
Name | Description | Default value |
---|---|---|
GRPC_BENCHMARK_DURATION | Duration of the benchmark. | 20s |
GRPC_BENCHMARK_WARMUP | Duration of the warmup. Stats won't be collected. | 5s |
GRPC_REQUEST_SCENARIO | Scenario (from scenarios/) containing the protobuf and the data to be sent in the client request. | complex_proto |
GRPC_SERVER_CPUS | Maximum number of cpus used by the server. | 1 |
GRPC_SERVER_RAM | Maximum memory used by the server. | 512m |
GRPC_CLIENT_CONNECTIONS | Number of connections to use. | 50 |
GRPC_CLIENT_CONCURRENCY | Number of requests to run concurrently. It can't be smaller than the number of connections. | 1000 |
GRPC_CLIENT_QPS | Rate limit, in queries per second (QPS). | 0 (unlimited) |
GRPC_CLIENT_CPUS | Maximum number of cpus used by the client. | 1 |
GRPC_IMAGE_NAME | Name of the Docker image built by ./build.sh. | 'grpc_bench' |
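As a sketch of how these knobs combine (the benchmark name below is again illustrative, and the values are just an example), a constrained run could look like:

```sh
# Give both the server and the client 2 CPUs, run for 300 seconds after the
# warmup, and cap the client at 5000 queries per second (0 means unlimited).
GRPC_SERVER_CPUS=2 \
GRPC_CLIENT_CPUS=2 \
GRPC_BENCHMARK_DURATION=300s \
GRPC_CLIENT_QPS=5000 \
./bench.sh rust_tonic_mt
```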
- `GRPC_BENCHMARK_DURATION` should not be too small. Some implementations need a warm-up before achieving their optimal performance, and most real-life gRPC services are expected to be long-running processes. From what we measured, 300s should be enough.
- `GRPC_SERVER_CPUS` + `GRPC_CLIENT_CPUS` should not exceed the total number of cores on the machine. The reason for this is that you don't want the `ghz` client to steal precious CPU cycles from the service under test. Keep in mind that setting `GRPC_CLIENT_CPUS` too low may not saturate the service for some of the more performant implementations. Also keep in mind that limiting `GRPC_SERVER_CPUS` to 1 will severely hamper the performance of some technologies - is running a service on 1 CPU your use case? It may be, but keep in mind that an eventual load balancer also incurs some cost.
- `GRPC_REQUEST_SCENARIO` is a parameter to both `build.sh` and `bench.sh`. The images must be rebuilt each time you intend to use a scenario with a different `helloworld.proto` from the one run previously (see the sketch after this list).

Other parameters will depend on your use case. Choose wisely.
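A minimal sketch of that rebuild-and-run flow, using the default complex_proto scenario from scenarios/ and an illustrative benchmark name:

```sh
# Build the images against the chosen scenario's helloworld.proto,
# then benchmark with the same scenario (benchmark name is illustrative).
GRPC_REQUEST_SCENARIO=complex_proto ./build.sh rust_tonic_mt
GRPC_REQUEST_SCENARIO=complex_proto ./bench.sh rust_tonic_mt
```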
You can find our old sample results in the Wiki. Be sure to run the benchmarks yourself if you have sufficient hardware, especially for multi-core scenarios. New results will be posted to discussions and you are encouraged to publish yours as well!