Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
---|---|---|---|---|---|---|---|---|---|---|
Awesome Architecture | 8,359 | | | | 2 years ago | | | 7 | | |
A technology map for architects, to help you become an architect sooner. | | | | | | | | | | |
Share_ppt | 4,639 | | | | 5 months ago | | | 5 | | |
🚗 Technical talks that I have personally given... | | | | | | | | | | |
Tidis | 1,398 | | | | 6 months ago | 4 | January 29, 2021 | | mit | Go |
Distributed transactional NoSQL database, Redis protocol compatible, using TiKV as the backend | | | | | | | | | | |
Summitdb | 1,327 | | | | a year ago | | | 12 | other | Go |
In-memory NoSQL database with ACID transactions, Raft consensus, and Redis API | | | | | | | | | | |
Icefiredb | 974 | | | | 2 days ago | | | 4 | mit | Go |
IceFireDB is a database built for web3.0. It strives to fill the gap between web2 and web3.0 with a friendly database experience, making web3 application data storage more convenient and making it easier for web2 applications to achieve decentralization and data immutability. | | | | | | | | | | |
Javaok | 725 | | | | 3 years ago | | | | | |
A must-read! Java back-end development: the roadmap and key technical points. | | | | | | | | | | |
Uhaha | 537 | | 1 | | 4 months ago | 44 | June 11, 2022 | 2 | mit | Go |
High Availability Raft Framework for Go | | | | | | | | | | |
Finn | 531 | | 1 | 2 | 2 years ago | 3 | November 03, 2020 | | mit | Go |
Fast Raft framework using the Redis protocol for Go | | | | | | | | | | |
Redisraft | 520 | | | | 3 days ago | | | 53 | other | C |
A Redis module that makes it possible to create a consistent Raft cluster from multiple Redis instances | | | | | | | | | | |
Elasticell | 418 | | | | 3 years ago | | | 1 | apache-2.0 | Go |
Elastic Key-Value Storage With Strong Consistency and Reliability | | | | | | | | | | |
This project has been archived. Please check out Uhaha for a fitter, happier, more productive Raft framework.
Finn is a fast and simple framework for building Raft implementations in Go. It uses Redcon for the network transport and Hashicorp Raft for consensus. There is also the option to use LevelDB, BoltDB, or FastLog for log persistence.
To start using Finn, install Go and run `go get`:
$ go get -u github.com/tidwall/finn
This will retrieve the library.
Here's an example of a Redis clone that accepts the GET, SET, DEL, and KEYS commands.
You can run a full-featured version of this example from a terminal:
go run example/clone.go
package main

import (
	"encoding/json"
	"io"
	"io/ioutil"
	"log"
	"sort"
	"strings"
	"sync"

	"github.com/tidwall/finn"
	"github.com/tidwall/match"
	"github.com/tidwall/redcon"
)

func main() {
	// Open a Finn node: data directory, client/cluster address,
	// join address (empty for a new cluster), the state machine,
	// and options (nil for defaults).
	n, err := finn.Open("data", ":7481", "", NewClone(), nil)
	if err != nil {
		log.Fatal(err)
	}
	defer n.Close()
	select {}
}

// Clone is the replicated state machine: a key-value map
// guarded by a read-write mutex.
type Clone struct {
	mu   sync.RWMutex
	keys map[string][]byte
}

func NewClone() *Clone {
	return &Clone{keys: make(map[string][]byte)}
}

func (kvm *Clone) Command(m finn.Applier, conn redcon.Conn, cmd redcon.Command) (interface{}, error) {
	switch strings.ToLower(string(cmd.Args[0])) {
	default:
		return nil, finn.ErrUnknownCommand
	case "set":
		if len(cmd.Args) != 3 {
			return nil, finn.ErrWrongNumberOfArguments
		}
		// SET is a write: the mutate func runs on every node,
		// the respond func only for the originating connection.
		return m.Apply(conn, cmd,
			func() (interface{}, error) {
				kvm.mu.Lock()
				kvm.keys[string(cmd.Args[1])] = cmd.Args[2]
				kvm.mu.Unlock()
				return nil, nil
			},
			func(v interface{}) (interface{}, error) {
				conn.WriteString("OK")
				return nil, nil
			},
		)
	case "get":
		if len(cmd.Args) != 2 {
			return nil, finn.ErrWrongNumberOfArguments
		}
		// GET is read-only: pass nil for the mutate func.
		return m.Apply(conn, cmd, nil,
			func(interface{}) (interface{}, error) {
				kvm.mu.RLock()
				val, ok := kvm.keys[string(cmd.Args[1])]
				kvm.mu.RUnlock()
				if !ok {
					conn.WriteNull()
				} else {
					conn.WriteBulk(val)
				}
				return nil, nil
			},
		)
	case "del":
		if len(cmd.Args) < 2 {
			return nil, finn.ErrWrongNumberOfArguments
		}
		return m.Apply(conn, cmd,
			func() (interface{}, error) {
				// Delete each key and count the removals; the
				// count is passed through to the respond func.
				var n int
				kvm.mu.Lock()
				for i := 1; i < len(cmd.Args); i++ {
					key := string(cmd.Args[i])
					if _, ok := kvm.keys[key]; ok {
						delete(kvm.keys, key)
						n++
					}
				}
				kvm.mu.Unlock()
				return n, nil
			},
			func(v interface{}) (interface{}, error) {
				n := v.(int)
				conn.WriteInt(n)
				return nil, nil
			},
		)
	case "keys":
		if len(cmd.Args) != 2 {
			return nil, finn.ErrWrongNumberOfArguments
		}
		pattern := string(cmd.Args[1])
		return m.Apply(conn, cmd, nil,
			func(v interface{}) (interface{}, error) {
				// Collect and sort the keys matching the glob pattern.
				var keys []string
				kvm.mu.RLock()
				for key := range kvm.keys {
					if match.Match(key, pattern) {
						keys = append(keys, key)
					}
				}
				kvm.mu.RUnlock()
				sort.Strings(keys)
				conn.WriteArray(len(keys))
				for _, key := range keys {
					conn.WriteBulkString(key)
				}
				return nil, nil
			},
		)
	}
}

// Restore rebuilds the state machine from a snapshot.
func (kvm *Clone) Restore(rd io.Reader) error {
	kvm.mu.Lock()
	defer kvm.mu.Unlock()
	data, err := ioutil.ReadAll(rd)
	if err != nil {
		return err
	}
	var keys map[string][]byte
	if err := json.Unmarshal(data, &keys); err != nil {
		return err
	}
	kvm.keys = keys
	return nil
}

// Snapshot writes the current state machine to a snapshot.
func (kvm *Clone) Snapshot(wr io.Writer) error {
	kvm.mu.RLock()
	defer kvm.mu.RUnlock()
	data, err := json.Marshal(kvm.keys)
	if err != nil {
		return err
	}
	if _, err := wr.Write(data); err != nil {
		return err
	}
	return nil
}
Every `Command()` call provides an `Applier` type, which is responsible for handling all read and write operations. In the example above you will see one `m.Apply(conn, cmd, ...)` for each command.
The signature for the `Apply()` function is:
func Apply(
	conn redcon.Conn,
	cmd redcon.Command,
	mutate func() (interface{}, error),
	respond func(interface{}) (interface{}, error),
) (interface{}, error)
- `conn` is the client connection making the call. It's possible that this value may be `nil` for commands that are being replicated on follower nodes.
- `cmd` is the command to process.
- `mutate` is the function that handles modifying the node's data. Passing `nil` indicates that the operation is read-only. The `interface{}` return value will be passed to the `respond` func. Returning an error will cancel the operation, and the error will be returned to the client.
- `respond` is used for responding to the client connection. It's also used for read-only operations. The `interface{}` param is what was passed from the `mutate` function and may be `nil`. Returning an error will cancel the operation, and the error will be returned to the client.

Please note that the `Apply()` call is required for modifying or accessing data that is shared on all of the nodes. You can forgo the call altogether for operations that are unique to the node, as shown below.
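For example, a command that touches no shared state can be answered directly from `Command()` without ever calling `Apply()`. A minimal sketch, assuming a hypothetical `ping` case added to the `Clone.Command()` switch above:

case "ping":
	// Answered locally and never replicated through the Raft log,
	// because PING reads and writes no shared state.
	conn.WriteString("PONG")
	return nil, nil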
All Raft commands are stored in one big log file that will continue to grow. The log is stored on disk, in memory, or both. At some point the server will run out of memory or disk space. Snapshots allow truncating the log so that it does not take up all of the server's resources.
The two functions `Snapshot` and `Restore` are used to create a snapshot and to restore one, respectively.
The `Snapshot()` function passes a writer that you can write your snapshot to. Return `nil` to indicate that you are done writing. Returning an error will cancel the snapshot. If you want to disable snapshots altogether:
func (kvm *Clone) Snapshot(wr io.Writer) error {
	return finn.ErrDisabled
}
The `Restore()` function passes a reader that you can use to restore your snapshot from.
Please note that the Raft cluster stays active during a snapshot operation. In the example above we use a read-lock that forces the cluster to delay all writes until the snapshot is complete, which may not be ideal for your scenario.
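One way to shorten that write stall, sketched here under the assumption that a point-in-time copy of the map is acceptable, is to clone the keys while holding the lock and serialize the copy after releasing it:

func (kvm *Clone) Snapshot(wr io.Writer) error {
	// Copy the map under a short read-lock. The value slices are
	// shared, which is safe here because this state machine always
	// replaces values and never mutates them in place.
	kvm.mu.RLock()
	snapshot := make(map[string][]byte, len(kvm.keys))
	for key, val := range kvm.keys {
		snapshot[key] = val
	}
	kvm.mu.RUnlock()
	// Marshal and write with the lock released, so cluster writes
	// are not blocked while the snapshot is persisted.
	data, err := json.Marshal(snapshot)
	if err != nil {
		return err
	}
	_, err = wr.Write(data)
	return err
}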
There's a command line Redis clone that supports all of Finn's features. Print the help options:
go run example/clone.go -h
First start a single-member cluster:
go run example/clone.go
This will start the clone listening on port 7481 for client and server-to-server communication.
Next, let's set a single key, and then retrieve it:
$ redis-cli -p 7481 SET mykey "my value"
OK
$ redis-cli -p 7481 GET mykey
"my value"
Adding members:
go run example/clone.go -p 7482 -dir data2 -join :7481
go run example/clone.go -p 7483 -dir data3 -join :7481
That's it. Now if node1 goes down, node2 and node3 will continue to operate.
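To check that the new members are participating, you can write a key through the first node and then query it through another. The reply to the final `GET` below depends on the configured consistency level (described below): the follower may answer with the value itself or redirect the client to the leader.

$ redis-cli -p 7481 SET anotherkey "another value"
OK
$ redis-cli -p 7483 GET anotherkey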
Finn also provides a handful of built-in commands for monitoring and managing the cluster.
The `Options.Durability` field has the following options:

- `Low` - fsync is managed by the operating system, less safe
- `Medium` - fsync every second, fast and safer
- `High` - fsync after every write, very durable, slower

The `Options.Consistency` field has the following options:

- `Low` - all nodes accept reads, small risk of stale data
- `Medium` - only the leader accepts reads, itty-bitty risk of stale data during a leadership change
- `High` - only the leader accepts reads, the raft log index is incremented to guarantee no stale data

For example, setting the following options:
opts := finn.Options{
	Consistency: finn.High,
	Durability:  finn.High,
}
n, err := finn.Open("data", ":7481", "", NewClone(), &opts)
Provides the highest level of durability and consistency.
Finn supports the following log databases: FastLog, LevelDB, and BoltDB.
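The log backend is selected through the same `Options` value. A minimal sketch, assuming the backend is chosen via a `Backend` field with package constants named after the stores (the field and constant names here are assumptions, not confirmed API):

opts := finn.Options{
	// Assumed field/constant names; finn.FastLog and finn.Bolt analogous.
	Backend: finn.LevelDB,
}
n, err := finn.Open("data", ":7481", "", NewClone(), &opts)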
Josh Baker @tidwall
Finn source code is available under the MIT License.