Project Name | Description | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language
---|---|---|---|---|---|---|---|---
Hyperfine | A command-line benchmarking tool | 17,039 | 18 days ago | 24 | June 03, 2023 | 43 | apache-2.0 | Rust
Colorette | 🌈 Easily set your terminal text color & styles | 1,549 | a month ago | 37 | April 16, 2023 | 6 | mit | JavaScript
Cometd | The CometD project, a scalable comet (server push) implementation for web messaging. | 554 | 21 hours ago | 60 | June 23, 2022 | 56 | apache-2.0 | Java
Code Minimap | 🛰 A high performance code minimap render. | 550 | 7 days ago | 17 | January 01, 2022 | 12 | apache-2.0 | Rust
Simpletable | Simple tables in terminal with Go | 303 | 2 years ago | 1 | April 02, 2021 | 2 | mit | Go
Snap Python | SNAP Python code, SWIG related files | 245 | 2 years ago | | | 62 | other | C++
Spring Cloud Gateway Bench | Simple benchmark comparing zuul and spring cloud gateway | 220 | 5 years ago | | | 5 | apache-2.0 | Shell
Vtebench | Generate benchmarks for terminal emulators | 216 | a year ago | 4 | September 19, 2020 | | apache-2.0 | Rust
Typin | Declarative framework for interactive CLI applications | 182 | 2 years ago | 20 | December 19, 2021 | 28 | other | C#
Terminal Codelearn | Super fast multi user pseudo bash Terminal in Node.js & SockJS | 80 | 3 years ago | | | 1 | | JavaScript
A tool for benchmarking terminal emulator PTY read performance.
This benchmark is not sufficient to get a general understanding of the performance of a terminal emulator. It lacks support for critical factors like frame rate or latency. The only factor this benchmark stresses is the speed at which a terminal reads from the PTY. If you do not understand what this means, please do not jump to any conclusions from the results of this benchmark.
vtebench accepts benchmarks as executables and uses their stdout as the benchmark payload. By default, benchmarks are read from the `./benchmarks` directory, which already contains a good selection of benchmarks. A benchmark in vtebench is defined as a directory with a `benchmark` and an optional `setup` executable.
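As a sketch of that layout, a minimal benchmark could look like the following; the directory name and script body are hypothetical, not part of the repository:

```sh
#!/bin/sh
# ./benchmarks/hello-world/benchmark (hypothetical example)
# Everything this script writes to stdout becomes the benchmark payload.
printf 'Hello, world!\n'
```

Since vtebench runs benchmarks as executables, the script needs the executable bit set (`chmod +x`).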
To just run all the default benchmarks in the repository, you can run the following after setting up a Rust toolchain:

```sh
cargo run --release
```
vtebench contains a script for automatically plotting results using `gnuplot`. To do this, you first need to output the benchmark results in the `.dat` format:

```sh
cargo run --release -- --dat results.dat
```
After having generated the `.dat` file, you can then pass it to a script in the `./gnuplot` directory to generate the SVG plot:

```sh
./gnuplot/summary.sh results.dat output.svg
```
You can combine any number of results by passing them to the gnuplot script:

```sh
./gnuplot/summary.sh *.dat output.svg
```
You can also plot detailed results using `detailed.sh`:

```sh
./gnuplot/detailed.sh *.dat output/
```
If you have found benchmarks that might provide insightful information, or show significant differences between different terminals and versions, you can send a pull request to add them to the default benchmark collection.
To do so, you just need to create a new directory in the `./benchmarks` directory and add a `benchmark` and an optional `setup` executable. The stdout of the `benchmark` will automatically be repeated to fill a reasonable minimum sample size, so make sure to take that into account and move everything that should only be done once into `setup`.
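For instance, a sketch of that split might look like the following pair of scripts; the directory name, payload path, and payload generation are hypothetical and only illustrate keeping one-time work out of the repeated payload:

```sh
#!/bin/sh
# ./benchmarks/random-text/setup (hypothetical example)
# One-time work: generate the payload once so it is not redone
# every time the benchmark payload is repeated.
head -c 1048576 /dev/urandom | base64 > /tmp/vtebench-payload
```

```sh
#!/bin/sh
# ./benchmarks/random-text/benchmark (hypothetical example)
# Only emits the payload; vtebench repeats this stdout to fill
# the minimum sample size.
cat /tmp/vtebench-payload
```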