Project Name | Stars | Most Recent Commit | License | Language | Description
---|---|---|---|---|---
Yjs | 11,876 | 18 hours ago | other | JavaScript | Shared data types for building collaborative software
Hypermerge | 1,262 | 4 months ago | mit | TypeScript | Build p2p collaborative applications without any server infrastructure in Node.js
Ocsystem | 732 | 3 years ago | lgpl-3.0 | PHP | 🚀 ONLYCHAIN blockchain underlying system (OCSystem)
Peer Pad | 596 | 4 years ago | mit | JavaScript | 📝 Online editor providing collaborative editing in really real-time using CRDTs and IPFS
Octobase | 375 | 17 hours ago | agpl-3.0 | Rust | 🐙 OctoBase is the open-source database behind AFFiNE: local-first, yet collaborative. A lightweight, scalable data engine written in Rust
Firestr | 347 | 17 days ago | gpl-3.0 | C++ | The Grass Computing Platform
Seeks | 211 | 8 years ago | agpl-3.0 | C++ | Seeks is a decentralized p2p websearch and collaborative tool
Codeinterview Backend | 207 | 4 months ago | apache-2.0 | Python | Backend code repository for CodeInterview
Quiet | 158 | 2 days ago | gpl-3.0 | C | A private, p2p alternative to Slack and Discord built on Tor & IPFS
Mute | 102 | a month ago | agpl-3.0 | TypeScript | A scalable collaborative document editor with CRDT, P2P and E2EE
**Warning:** Hypermerge is deprecated. This library is no longer maintained and uses an ancient and slow version of Automerge. We strongly recommend you adopt automerge/automerge-repo instead.
Hypermerge is a Node.js library for building p2p collaborative applications without any server infrastructure. It combines Automerge, a CRDT, with hypercore, a distributed append-only log.
This project provides a way to build applications whose data sets are conflict-free and offline-first (thanks to CRDTs) and serverless (thanks to hypercore/DAT).
While the DAT community has done a lot of work to secure its tool set, no effort has been made within Hypermerge itself to address security and privacy concerns. Because of the secure nature of the tools it is built upon, a properly audited and secure version of this library would be possible in the future.
Hypermerge stores an Automerge document as a set of separate Hypercores, one per actor ID. Each actor ID in the document is the discovery key of a Hypercore, which allows recursive lookup.
This strategy works for a small number of documents, but it is very slow to synchronize a large collection because of the vast number of Hypercores required. It also doesn't work well with the new Automerge file format, which consolidates sequential changes during storage: in Hypermerge, each change gets its own Hypercore entry with all the attendant metadata. To put this in context, a single keystroke in a text field results in hundreds of bytes of data if you use Hypermerge.
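As a rough illustration of the storage layout described above (a minimal sketch, not Hypermerge's actual internals; all names and types here are illustrative):

```ts
// One append-only feed (a Hypercore) per Automerge actor; the actor ID doubles
// as that feed's discovery key, so feeds can be found and replicated by ID.
type ActorId = string           // also the Hypercore discovery key
type EncodedChange = Uint8Array // one Automerge change per feed entry

interface DocStorage {
  // synchronizing a document means synchronizing every feed in this map
  feeds: Map<ActorId, EncodedChange[]>
}

// "Recursive lookup": changes may reference actors whose feeds we don't have
// yet; those feeds must then be discovered and replicated by their actor ID.
function actorsToFetch(doc: DocStorage, referencedActors: ActorId[]): ActorId[] {
  return referencedActors.filter((id) => !doc.feeds.has(id))
}
```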
There are several example repos in the `/examples` directory, including a very simple two-repo code demo and a simple CLI-based chat application.
The best demonstration of Hypermerge is PushPin, which shows Hypermerge in "full flight", including the split between the fast, simple front-end and the more expensive, slower back-end.
The base object you make with hypermerge is a Repo. A repo is responsible for managing documents and replicating to peers.
```ts
import { Repo } from 'hypermerge'
import Hyperswarm from 'hyperswarm'

const path = '.data'
const repo = new Repo({ path })
repo.setSwarm(Hyperswarm())

const url = repo.create({ hello: 'world' })

repo.doc<any>(url, (doc) => {
  console.log(doc) // { hello: "world" }
})

// this is an automerge change function - see automerge for more info
// basically you get to treat the state as a plain old javascript object
// operations you make will be added to an internal append-only log and
// replicated to peers
repo.change(url, (state: any) => {
  state.foo = 'bar'
})

repo.doc<any>(url, (doc) => {
  console.log(doc) // { hello: "world", foo: "bar" }
})

// to watch a document that changes over time ...
const handle = repo.watch(url, (doc: any) => {
  console.log(doc)
  if (doc.foo === 'bar') {
    handle.close()
  }
})
```
NOTE: If you're familiar with Automerge: the `change` function in Hypermerge is asynchronous, while the `Automerge.change` function is synchronous. What this means is that although `Automerge.change` returns an object representing the new state of your document, `repo.change` (or `handle.change`) does NOT. So:
```ts
// ok in Automerge!
doc1 = Automerge.change(doc1, 'msg', (doc) => {
  doc.foo = 'bar'
})

// NOT ok in Hypermerge!
doc1 = repo.change(url1, (doc) => {
  doc.foo = 'bar'
})
```
Instead, you should expect to get document state updates via `repo.watch` (or `handle.subscribe`), as shown in the example above.
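For example, a minimal sketch of the correct pattern, using only the calls shown above:

```ts
// Fire the change; no new document state is returned.
repo.change(url1, (doc: any) => {
  doc.foo = 'bar'
})

// Pick up the resulting state asynchronously instead.
const handle = repo.watch(url1, (doc: any) => {
  console.log(doc.foo) // "bar" once the change has been applied
  handle.close()
})
```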
```ts
// repoA and repoB are two Repo instances (typically on different machines)
// joined to the same swarm; MyDoc is an application-defined document interface.
const docUrl = repoA.create({ numbers: [2, 3, 4] })

// this will block until the state has replicated to machine B
repoA.watch<MyDoc>(docUrl, (state) => {
  console.log('RepoA', state)
  // { numbers: [2,3,4] }
  // { numbers: [2,3,4,5], foo: "bar" }
  // { numbers: [2,3,4,5], foo: "bar" } // (local changes repeat)
  // { numbers: [1,2,3,4,5], foo: "bar", bar: "foo" }
})

repoB.watch<MyDoc>(docUrl, (state) => {
  console.log('RepoB', state)
  // { numbers: [1,2,3,4,5], foo: "bar", bar: "foo" }
})

repoA.change<MyDoc>(docUrl, (state) => {
  state.numbers.push(5)
  state.foo = 'bar'
})

repoB.change<MyDoc>(docUrl, (state) => {
  state.numbers.unshift(1)
  state.bar = 'foo'
})
```
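The example above assumes two repos that are already set up and connected. A minimal sketch of that setup, reusing only the API from the first example (the paths and the `MyDoc` shape are illustrative):

```ts
import { Repo } from 'hypermerge'
import Hyperswarm from 'hyperswarm'

// Illustrative document shape used in the example above.
interface MyDoc {
  numbers: number[]
  foo?: string
  bar?: string
}

// Two independent repos, normally on different machines, each joined to the
// swarm so they can discover each other and replicate documents.
const repoA = new Repo({ path: '.data-a' })
repoA.setSwarm(Hyperswarm())

const repoB = new Repo({ path: '.data-b' })
repoB.setSwarm(Hyperswarm())
```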
Hypermerge supports a special kind of core called a hyperfile. Hyperfiles are immutable, write-once hypercores that store binary data.
Here's a simple example of reading and writing files.
```ts
import fs from 'fs'

// Write a hyperfile
const readStream = fs.createReadStream('image.png')
const { url } = await repo.files.write(readStream, 'image/png')

// Read the hyperfile back out to a copy on disk
const writeStream = fs.createWriteStream('image-copy.png')
const hyperfileStream = await repo.files.read(url)
hyperfileStream.pipe(writeStream)
```
Note that hyperfiles are well suited to being served as a custom protocol in Electron applications. This lets you refer to them throughout your application as though they were regular files for images and other assets, without any special treatment. Here's how to register that protocol:
```ts
import { protocol } from 'electron'

protocol.registerStreamProtocol(
  'hyperfile',
  async (request, callback) => {
    try {
      const stream = await repo.files.read(request.url)
      callback(stream)
    } catch (e) {
      console.log(e)
    }
  },
  (error) => {
    if (error) {
      console.log('Failed to register protocol')
    }
  }
)
```
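Once the protocol is registered, a hyperfile URL can, in principle, be used anywhere a regular URL would be. A hypothetical renderer-side sketch (assuming `url` is a hyperfile URL returned by `repo.files.write`, and that the scheme is allowed to load in your renderer):

```ts
// Hypothetical usage: point an <img> at the hyperfile URL and let the
// registered 'hyperfile' stream protocol handler serve the bytes.
const img = document.createElement('img')
img.src = url // hyperfile URL returned by repo.files.write
document.body.appendChild(img)
```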
Both Hypermerge and Automerge support separating the front-end (where materialized documents live and changes are made) from the back-end (which handles CRDT computation, networking, and compression). This is useful for maintaining application performance: expensive computation moves off the render thread to somewhere it won't block user input.
Communication between the front-end and back-end is done entirely with plain JavaScript objects and can be serialized/deserialized through JSON if required.
```ts
import { RepoBackend, RepoFrontend } from 'hypermerge'
import Hyperswarm from 'hyperswarm'

// establish a back-end (HYPERMERGE_PATH is wherever you want data stored)
const back = new RepoBackend({ path: HYPERMERGE_PATH, port: 0 })
const swarm = Hyperswarm({ /* your config here */ })
back.setSwarm(swarm)

// elsewhere, create a front-end (you'll probably want to do this in different threads)
const front = new RepoFrontend()

// the `subscribe` method sends a message stream, the `receive` method receives it
// for demonstration here we simply serialize to JSON and parse it back in the same process
// note that front-end and back-end must each subscribe to each other's streams
back.subscribe((msg) => front.receive(JSON.parse(JSON.stringify(msg))))
front.subscribe((msg) => back.receive(JSON.parse(JSON.stringify(msg))))
```
Note: each back-end only supports a single front-end today.
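As a sketch of how that wiring might look across a real thread boundary (this is not from the Hypermerge docs; file names and paths are illustrative), the back-end and its networking could run in a Node worker thread while the front-end stays on the main thread:

```ts
// Main thread: front-end only (illustrative sketch).
import { Worker } from 'worker_threads'
import { RepoFrontend } from 'hypermerge'

const front = new RepoFrontend()
// './backend-worker.js' is the compiled worker script shown below.
const worker = new Worker('./backend-worker.js')

// Forward front-end messages to the worker, and worker messages back again.
front.subscribe((msg) => worker.postMessage(msg))
worker.on('message', (msg) => front.receive(msg))
```

```ts
// Worker thread (backend-worker): back-end plus networking (illustrative sketch).
import { parentPort } from 'worker_threads'
import { RepoBackend } from 'hypermerge'
import Hyperswarm from 'hyperswarm'

const back = new RepoBackend({ path: '.data', port: 0 })
back.setSwarm(Hyperswarm())

back.subscribe((msg) => parentPort!.postMessage(msg))
parentPort!.on('message', (msg) => back.receive(msg))
```

The same wiring would apply in an Electron app, with messages crossing IPC between the renderer and a background process instead of a worker thread.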