

An elegantly simple and lightweight Rust library for making iterator pipelines concurrent and amazingly fast, hence the name "ppipe" (parallel pipe).


Put this under [dependencies] in your Cargo.toml:

[dependencies]
ppipe = "0.2.0"

Then add this to your root module:

extern crate ppipe;

And add this to whatever module you want to be able to use ppipe:

use ppipe::*;

All iterators in scope should now have the ppipe method!


If you followed the above steps, every iterator should have the ppipe method. It takes an Option<usize> parameter that controls back-pressure: ppipe(Some(1000)) means the concurrent receiver holds no more than 1000 values, and the sender blocks whenever the receiver's buffer is full, for example over the course of a for loop. ppipe(None) means no back-pressure, so the buffer is unbounded.


extern crate ppipe;
use ppipe::*;

// ...

for item in excel_sheet.into_iter()
    .ppipe(None)  // create a thread for the above work
    .ppipe(None)  // create another thread for the above work
{
    // ... process item ...
}

How It Works Internally

The significance of this little library is hard to put into words, so please refer to the example in the examples folder of this repository, which I will reference in this explanation.

The point of this simple example is to demonstrate that WITHOUT ppipe, every iteration moves the corresponding item along the pipeline and then executes, in this case, the for loop's body. Items are never pre-loaded into a buffer to wait for the iteration variable to take ownership; each one is pulled through the entire pipeline on demand. This is almost never ideal: why limit yourself to serial processing when Rust's powerful concurrency is at hand?

This is what ppipe changes. WITH ppipe, all previous iterator adaptors run regardless of which iteration the for loop is on, including any earlier ppipe adaptors busy doing their own work. Every processed item is put into a buffer that can be read while it is still being added to; if the buffer is empty, iteration simply blocks until an item is available, or ends once no more items are being produced. Items can therefore be added to the buffer while you are iterating over earlier items in it, which reduces bottlenecks and GREATLY increases throughput!
