CosId: Universal, flexible, high-performance distributed ID generator

Introduction

CosId aims to provide a universal, flexible and high-performance distributed ID generator.

  • CosIdGenerator : stand-alone TPS performance of 15,570,085 ops/s, three times that of UUID.randomUUID(), globally trend-increasing and time-based.
  • SnowflakeId : stand-alone TPS performance of 4,096,000 ops/s (JMH Benchmark). It mainly solves the two major problems of SnowflakeId, machine number allocation and clock moving backwards, and provides a more friendly and flexible experience.
  • SegmentId : fetches a segment (Step) of IDs at a time to reduce the network I/O request frequency to the IdSegment distributor and improve performance.
    • IdSegmentDistributor:
      • RedisIdSegmentDistributor: IdSegment distributor based on Redis.
      • JdbcIdSegmentDistributor: JDBC-based IdSegment distributor that supports various relational databases.
      • ZookeeperIdSegmentDistributor: IdSegment distributor based on ZooKeeper.
  • SegmentChainId (recommended): a lock-free enhancement of SegmentId. PrefetchWorker maintains a safe distance so that SegmentChainId achieves TPS performance close to that of AtomicLong: 127,439,148+ ops/s (JMH Benchmark).
    • PrefetchWorker maintains a safe distance (safeDistance) and supports dynamic expansion and contraction of safeDistance based on hunger status.

SnowflakeId

(Diagram: Snowflake bit allocation)

SnowflakeId is a distributed ID algorithm that partitions a Long (64-bit) into bit fields to generate IDs. The general bit allocation scheme is: timestamp (41-bit) + machineId (10-bit) + sequence (12-bit) = 63-bit (a composition sketch follows the list below).

  • 41-bit timestamp = (1L<<41)/(1000*60*60*24*365): approximately 69 years of timestamps can be stored, i.e. the usable absolute time is EPOCH + 69 years. Generally, EPOCH should be customized to the product's launch time. The available time can also be extended by compressing the other fields to allocate more bits to the timestamp.
  • 10-bit machineId = (1L<<10) = 1024: up to 1024 instances of the same service can be deployed (the Kubernetes notion of replicas is used here; there is no master/slave distinction). Usually far fewer are needed, so the width is redefined according to the deployment scale.
  • 12-bit sequence = (1L<<12) * 1000 = 4,096,000: a single machine can generate about 4.096 million IDs per second, and a cluster of the same service can generate 4,096,000 * 1024 = 4,194,304,000 ≈ 4.19 billion IDs per second (TPS).
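
A minimal sketch of how the three fields are composed into one 63-bit value (illustrative only, not CosId's internal implementation; the epoch constant matches the sample configuration later in this document):

// Illustrative composition of a Snowflake-style ID: timestamp(41) | machineId(10) | sequence(12).
public final class SnowflakeLayoutSketch {
    static final long EPOCH = 1577203200000L; // example custom epoch (also used in the sample configuration below)
    static final int MACHINE_BIT = 10;
    static final int SEQUENCE_BIT = 12;

    static long compose(long timestampMs, long machineId, long sequence) {
        return ((timestampMs - EPOCH) << (MACHINE_BIT + SEQUENCE_BIT))
                | (machineId << SEQUENCE_BIT)
                | sequence;
    }

    public static void main(String[] args) {
        System.out.println(compose(System.currentTimeMillis(), 1, 0));
    }
}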

From the design of SnowflakeId it can be seen that:

  • 👍 The first 41 bits are a timestamp, so SnowflakeId is monotonically increasing locally and, subject to global clock synchronization, trend-increasing globally.
  • 👍 SnowflakeId has no strong dependency on any third-party middleware, and its performance is very high.
  • 👍 The bit allocation scheme can be flexibly configured according to the needs of the business system to achieve the optimal effect.
  • 👎 It relies strongly on the local clock; a potential clock-moved-backwards problem will cause ID duplication.
  • 👎 The machineId needs to be set manually. Assigning machineId by hand during actual deployment is very inefficient.

CosId-SnowflakeId

CosId-SnowflakeId mainly solves the two major problems of SnowflakeId, machine number allocation and clock moving backwards, and provides a more friendly and flexible experience.

MachineIdDistributor

Currently CosId provides the following three MachineId distributors.

ManualMachineIdDistributor

cosid:
  snowflake:
    machine:
      distributor:
        type: manual
        manual:
          machine-id: 0

Distributes the MachineId manually.

StatefulSetMachineIdDistributor

cosid:
  snowflake:
    machine:
      distributor:
        type: stateful_set

Uses the stable identity (the ordinal in the pod name) provided by a Kubernetes StatefulSet as the machine number.
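
A Kubernetes StatefulSet gives each pod a stable hostname that ends with its ordinal index (e.g. cosid-host-2), so the ordinal can serve as the machine number. A minimal sketch of that idea (illustrative; CosId's actual extraction logic may differ):

// Illustrative only: derive a machine id from a StatefulSet-style hostname such as "cosid-host-2".
static int machineIdFromHostname(String hostname) {
    int idx = hostname.lastIndexOf('-');
    if (idx < 0) {
        throw new IllegalArgumentException("not a StatefulSet-style hostname: " + hostname);
    }
    return Integer.parseInt(hostname.substring(idx + 1)); // the trailing ordinal, e.g. 2
}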

RedisMachineIdDistributor

(Diagrams: RedisMachineIdDistributor, MachineId safe guard)

cosid:
  snowflake:
    machine:
      distributor:
        type: redis

Uses Redis as the distribution store for the machine number.

ClockBackwardsSynchronizer

cosid:
  snowflake:
    clock-backwards:
      spin-threshold: 10
      broken-threshold: 2000

The default clock-backwards synchronizer, DefaultClockBackwardsSynchronizer, uses an active-wait synchronization strategy. spinThreshold (default 10 milliseconds) sets the spin-wait threshold: while the backwards gap is within spinThreshold the thread spins, when the gap is greater than spinThreshold the thread sleeps while waiting for the clock to catch up, and if the gap exceeds brokenThreshold (default 2 seconds) a ClockTooManyBackwardsException is thrown directly.
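
A minimal sketch of this behavior (illustrative only, not the actual DefaultClockBackwardsSynchronizer source; the real implementation throws ClockTooManyBackwardsException, a plain IllegalStateException is used here as a stand-in):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.LockSupport;

public final class ClockBackwardsSketch {
    private final long spinThreshold = 10;      // ms, default spin-threshold
    private final long brokenThreshold = 2000;  // ms, default broken-threshold

    public void sync(long lastTimestamp) {
        long backwards = lastTimestamp - System.currentTimeMillis();
        if (backwards <= 0) {
            return; // clock has already caught up
        }
        if (backwards > brokenThreshold) {
            // beyond the broken threshold: give up and fail fast
            throw new IllegalStateException("Clock moved backwards too much: " + backwards + " ms");
        }
        while (System.currentTimeMillis() < lastTimestamp) {
            if (lastTimestamp - System.currentTimeMillis() > spinThreshold) {
                // large gap: yield the CPU with a short park instead of busy-spinning
                LockSupport.parkNanos(TimeUnit.MILLISECONDS.toNanos(1));
            }
            // small gap: busy-spin until the clock catches up
        }
    }
}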

MachineStateStorage

public class MachineState {
    public static final MachineState NOT_FOUND = of(-1, -1);
    private final int machineId;
    private final long lastTimeStamp;
    
    public MachineState(int machineId, long lastTimeStamp) {
        this.machineId = machineId;
        this.lastTimeStamp = lastTimeStamp;
    }
    
    public int getMachineId() {
        return machineId;
    }
    
    public long getLastTimeStamp() {
        return lastTimeStamp;
    }
    
    public static MachineState of(int machineId, long lastStamp) {
        return new MachineState(machineId, lastStamp);
    }
}
cosid:
  snowflake:
    machine:
      state-storage:
        local:
          state-location: ./cosid-machine-state/

The default LocalMachineStateStorage stores the machine number and the most recent timestamp in a local file, which serves as a MachineState cache.

ClockSyncSnowflakeId

cosid:
  snowflake:
    share:
      clock-sync: true

The default SnowflakeId throws a ClockBackwardsException directly when the clock moves backwards, while ClockSyncSnowflakeId uses the ClockBackwardsSynchronizer to actively wait for clock synchronization and regenerate the ID, providing a more user-friendly experience.

SafeJavaScriptSnowflakeId

SnowflakeId snowflakeId=SafeJavaScriptSnowflakeId.ofMillisecond(1);

JavaScript's Number.MAX_SAFE_INTEGER has only 53 bits. If a 63-bit SnowflakeId is returned to the front end directly, the value overflows. Usually the SnowflakeId can be converted to a String, or the SnowflakeId bit allocation can be customized to shorten the ID so that it does not overflow when handed to the front end.
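
For example, the simplest workaround is to serialize the generated ID as a string before returning it to the front end (sketch):

long id = snowflakeId.generate();
String idForFrontend = String.valueOf(id); // a string survives the JSON -> JavaScript Number limit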

SnowflakeFriendlyId (Can parse SnowflakeId into a more readable SnowflakeIdState)

cosid:
  snowflake:
    share:
      friendly: true
public class SnowflakeIdState {
    
    private final long id;
    
    private final int machineId;
    
    private final long sequence;
    
    private final LocalDateTime timestamp;
    /**
     * {@link #timestamp}-{@link #machineId}-{@link #sequence}
     */
    private final String friendlyId;
}
public interface SnowflakeFriendlyId extends SnowflakeId {
    
    SnowflakeIdState friendlyId(long id);
    
    SnowflakeIdState ofFriendlyId(String friendlyId);
    
    default SnowflakeIdState friendlyId() {
        long id = generate();
        return friendlyId(id);
    }
}
SnowflakeFriendlyId snowflakeFriendlyId = new DefaultSnowflakeFriendlyId(snowflakeId);
SnowflakeIdState idState = snowflakeFriendlyId.friendlyId();
idState.getFriendlyId(); // 20210623131730192-1-0

SegmentId

(Diagram: SegmentId)

RedisIdSegmentDistributor

cosid:
  segment:
    enabled: true
    distributor:
      type: redis

JdbcIdSegmentDistributor

Initialize the cosid table

create table if not exists cosid
(
    name            varchar(100) not null comment '{namespace}.{name}',
    last_max_id     bigint       not null default 0,
    last_fetch_time bigint       not null,
    constraint cosid_pk
        primary key (name)
) engine = InnoDB;
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/test_db
    username: root
    password: root
cosid:
  segment:
    enabled: true
    distributor:
      type: jdbc
      jdbc:
        enable-auto-init-cosid-table: false
        enable-auto-init-id-segment: true

After enabling enable-auto-init-id-segment: true, the application will try to create the IdSegment record at startup to avoid manual creation, similar to executing the following initialization SQL script. There is no need to worry about misoperation, because name is the primary key. (A sketch of how the distributor advances last_max_id follows the script.)

insert into cosid
    (name, last_max_id, last_fetch_time)
    value
    ('namespace.name', 0, unix_timestamp());
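
For reference, the distributor's job is to atomically advance last_max_id by step and hand the caller the new upper bound; the caller then serves IDs from (last_max_id - step, last_max_id] locally. A minimal JDBC sketch of that contract (illustrative only; the actual JdbcIdSegmentDistributor implementation may differ):

// Illustrative only: advance last_max_id by `step` in one transaction (MySQL syntax, matching the
// table definition above) and return the new segment's upper bound.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public final class SegmentDistributorSketch {

    public static long nextMaxId(DataSource dataSource, String name, long step) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            try {
                try (PreparedStatement update = conn.prepareStatement(
                        "update cosid set last_max_id = last_max_id + ?, last_fetch_time = unix_timestamp() where name = ?")) {
                    update.setLong(1, step);
                    update.setString(2, name);
                    update.executeUpdate();
                }
                long maxId;
                try (PreparedStatement query = conn.prepareStatement(
                        "select last_max_id from cosid where name = ?")) {
                    query.setString(1, name);
                    try (ResultSet rs = query.executeQuery()) {
                        rs.next();
                        maxId = rs.getLong(1);
                    }
                }
                conn.commit();
                // IDs in (maxId - step, maxId] now belong exclusively to this caller.
                return maxId;
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }
}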

SegmentChainId

(Diagram: SegmentChainId)

cosid:
  segment:
    enabled: true
    mode: chain
    chain:
      safe-distance: 5
      prefetch-worker:
        core-pool-size: 2
        prefetch-period: 1s
    distributor:
      type: redis
    share:
      offset: 0
      step: 100
    provider:
      bizC:
        offset: 10000
        step: 100
      bizD:
        offset: 10000
        step: 100

IdGeneratorProvider

cosid:
  snowflake:
    provider:
      bizA:
        #      timestamp-bit:
        sequence-bit: 12
      bizB:
        #      timestamp-bit:
        sequence-bit: 12
IdGenerator idGenerator=idGeneratorProvider.get("bizA");

In actual use we generally do not use the same IdGenerator for all business services; different businesses use different IdGenerators. IdGeneratorProvider exists to solve this problem: it is the container of IdGenerators, and the corresponding IdGenerator can be obtained from it by business name.
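
For example, with the configuration above, each business obtains its own generator by name (a sketch; in a Spring Boot application the IdGeneratorProvider instance is injected by the starter):

IdGenerator bizAIdGenerator = idGeneratorProvider.get("bizA");
IdGenerator bizBIdGenerator = idGeneratorProvider.get("bizB");
long bizAId = bizAIdGenerator.generate(); // IDs for business A
long bizBId = bizBIdGenerator.generate(); // IDs for business B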

CosIdPlugin (MyBatis Plugin)

Kotlin DSL

    implementation("me.ahoo.cosid:cosid-mybatis:${cosidVersion}")

@Target({ElementType.FIELD})
@Documented
@Retention(RetentionPolicy.RUNTIME)
public @interface CosId {
    String value() default IdGeneratorProvider.SHARE;
    
    boolean friendlyId() default false;
}
public class LongIdEntity {
    
    @CosId(value = "safeJs")
    private Long id;
    
    public Long getId() {
        return id;
    }
    
    public void setId(Long id) {
        this.id = id;
    }
}

public class FriendlyIdEntity {
    
    @CosId(friendlyId = true)
    private String id;
    
    public String getId() {
        return id;
    }
    
    public void setId(String id) {
        this.id = id;
    }
}

@Mapper
public interface OrderRepository {
    @Insert("insert into t_table (id) value (#{id});")
    void insert(LongIdEntity order);
    
    @Insert({
        "<script>",
        "insert into t_friendly_table (id)",
        "VALUES" +
            "<foreach item='item' collection='list' open='' separator=',' close=''>" +
            "(#{item.id})" +
            "</foreach>",
        "</script>"})
    void insertList(List<FriendlyIdEntity> list);
}
LongIdEntity entity = new LongIdEntity();
entityRepository.insert(entity);
/**
 * {
 *   "id": 208796080181248
 * }
 */
return entity;

ShardingSphere Plugin

cosid-shardingsphere

CosIdKeyGenerateAlgorithm (Distributed-Id)

spring:
  shardingsphere:
    rules:
      sharding:
        key-generators:
          cosid:
            type: COSID
            props:
              id-name: __share__

Interval-based time range sharding algorithm

CosIdIntervalShardingAlgorithm

  • Ease of use: supports multiple data types (Long / LocalDateTime / DATE / String / SnowflakeId). The official implementation first converts the value to a string and then to LocalDateTime, so the conversion success rate is affected by the time-formatting pattern.
  • Performance: compared to org.apache.shardingsphere.sharding.algorithm.sharding.datetime.IntervalShardingAlgorithm, the performance is about 1200~4000 times higher. A worked suffix example follows the configuration below.
(Benchmark charts: Throughput Of IntervalShardingAlgorithm - PreciseShardingValue / RangeShardingValue)
  • CosIdIntervalShardingAlgorithm
    • type: COSID_INTERVAL
spring:
  shardingsphere:
    rules:
      sharding:
        sharding-algorithms:
          alg-name:
            type: COSID_INTERVAL
            props:
              logic-name-prefix: logic-name-prefix
              id-name: cosid-name
              datetime-lower: 2021-12-08 22:00:00
              datetime-upper: 2022-12-01 00:00:00
              sharding-suffix-pattern: yyyyMM
              datetime-interval-unit: MONTHS
              datetime-interval-amount: 1
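
With the properties above (sharding-suffix-pattern: yyyyMM, datetime-interval-unit: MONTHS, datetime-interval-amount: 1), a sharding value is routed to the physical table whose suffix is its month. A quick way to check which table a value maps to (illustrative, not the algorithm's internal code; t_log_ is a hypothetical logic-name-prefix):

// Illustrative only: which physical table a create-time value maps to under the configuration above.
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public final class IntervalSuffixSketch {
    public static void main(String[] args) {
        DateTimeFormatter suffix = DateTimeFormatter.ofPattern("yyyyMM"); // sharding-suffix-pattern
        LocalDateTime shardingValue = LocalDateTime.of(2021, 12, 14, 22, 0); // hypothetical create time
        System.out.println("t_log_" + shardingValue.format(suffix)); // -> t_log_202112
    }
}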

CosIdModShardingAlgorithm

(Diagram: CosIdModShardingAlgorithm)

  • Performance: compared to org.apache.shardingsphere.sharding.algorithm.sharding.datetime.IntervalShardingAlgorithm, the performance is about 1200~4000 times higher, with higher stability and no serious performance degradation. A routing example follows the configuration below.
(Benchmark charts: Throughput Of ModShardingAlgorithm - PreciseShardingValue / RangeShardingValue)
spring:
  shardingsphere:
    rules:
      sharding:
        sharding-algorithms:
          alg-name:
            type: COSID_MOD
            props:
              mod: 4
              logic-name-prefix: t_table_
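
With mod: 4 and logic-name-prefix: t_table_, an ID is routed to the physical table t_table_{id % 4}; for example (illustrative sketch, hypothetical ID value):

long id = 10L; // hypothetical ID value
String physicalTable = "t_table_" + (id % 4); // -> t_table_2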

Examples

jdbc/proxy/redis-cosid/redis/shardingsphere/zookeeper


Installation

Gradle

Kotlin DSL

    val cosidVersion = "1.14.5"
    implementation("me.ahoo.cosid:cosid-spring-boot-starter:${cosidVersion}")

Maven

<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>
    <!-- placeholder coordinates for this demo project -->
    <groupId>demo</groupId>
    <artifactId>demo</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <properties>
        <cosid.version>1.14.5</cosid.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>me.ahoo.cosid</groupId>
            <artifactId>cosid-spring-boot-starter</artifactId>
            <version>${cosid.version}</version>
        </dependency>
    </dependencies>

</project>

application.yaml

spring:
  shardingsphere:
    datasource:
      names: ds0,ds1
      ds0:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbcUrl: jdbc:mysql://localhost:3306/cosid_db_0
        username: root
        password: root
      ds1:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbcUrl: jdbc:mysql://localhost:3306/cosid_db_1
        username: root
        password: root
    props:
      sql-show: true
    rules:
      sharding:
        binding-tables:
          - t_order,t_order_item
        tables:
          cosid:
            actual-data-nodes: ds0.cosid
          t_table:
            actual-data-nodes: ds0.t_table_$->{0..1}
            table-strategy:
              standard:
                sharding-column: id
                sharding-algorithm-name: table-inline
          t_date_log:
            actual-data-nodes: ds0.t_date_log_202112
            key-generate-strategy:
              column: id
              key-generator-name: snowflake
            table-strategy:
              standard:
                sharding-column: create_time
                sharding-algorithm-name: data-log-interval
        sharding-algorithms:
          table-inline:
            type: COSID_MOD
            props:
              mod: 2
              logic-name-prefix: t_table_
          data-log-interval:
            type: COSID_INTERVAL
            props:
              logic-name-prefix: t_date_log_
              datetime-lower: 2021-12-08 22:00:00
              datetime-upper: 2022-12-01 00:00:00
              sharding-suffix-pattern: yyyyMM
              datetime-interval-unit: MONTHS
              datetime-interval-amount: 1
        key-generators:
          snowflake:
            type: COSID
            props:
              id-name: snowflake


cosid:
  namespace: ${spring.application.name}
  machine:
    enabled: true
    #      stable: true
    #      machine-bit: 10
    #      instance-id: ${HOSTNAME}
    distributor:
      type: redis
    #        manual:
    #          machine-id: 0
  snowflake:
    enabled: true
    #    epoch: 1577203200000
    clock-backwards:
      spin-threshold: 10
      broken-threshold: 2000
    share:
      clock-sync: true
      friendly: true
    provider:
      order_item:
        #        timestamp-bit:
        sequence-bit: 12
      snowflake:
        sequence-bit: 12
      safeJs:
        machine-bit: 3
        sequence-bit: 9
  segment:
    enabled: true
    mode: chain
    chain:
      safe-distance: 5
      prefetch-worker:
        core-pool-size: 2
        prefetch-period: 1s
    distributor:
      type: redis
    share:
      offset: 0
      step: 100
    provider:
      order:
        offset: 10000
        step: 100
      longId:
        offset: 10000
        step: 100

JMH-Benchmark

  • Development machine: MacBook Pro (M1).
  • All benchmarks are run on the development machine.
  • Redis is also deployed on the development machine.

SnowflakeId

gradle cosid-core:jmh
# or
java -jar cosid-core/build/libs/cosid-core-1.14.5-jmh.jar -bm thrpt -wi 1 -rf json -f 1
Benchmark                                                    Mode  Cnt        Score   Error  Units
SnowflakeIdBenchmark.millisecondSnowflakeId_friendlyId      thrpt       4020311.665          ops/s
SnowflakeIdBenchmark.millisecondSnowflakeId_generate        thrpt       4095403.859          ops/s
SnowflakeIdBenchmark.safeJsMillisecondSnowflakeId_generate  thrpt        511654.048          ops/s
SnowflakeIdBenchmark.safeJsSecondSnowflakeId_generate       thrpt        539818.563          ops/s
SnowflakeIdBenchmark.secondSnowflakeId_generate             thrpt       4206843.941          ops/s

Throughput (ops/s) of SegmentChainId

(Chart: Throughput of SegmentChainId)

Percentile-Sample (P9999=0.208 us/op) of SegmentChainId

In statistics, a percentile (or a centile) is a score below which a given percentage of scores in its frequency distribution falls (exclusive definition) or a score at or below which a given percentage falls (inclusive definition). For example, the 50th percentile (the median) is the score below which (exclusive) or at or below which (inclusive) 50% of the scores in the distribution may be found.

(Chart: Percentile sample of SegmentChainId)
