RPC Overview

This section provides an overview of RPC frameworks relevant to Kumo systems. The focus is on scene-driven selection rather than deep protocol analysis. Each framework is compared along three dimensions: ecosystem and usage scenario, performance, and integration and operational complexity.

The goal is to guide quick selection based on practical business or backend needs. High QPS claims should be treated cautiously, as typical server CPU load constraints and single-thread limitations make extreme values unrealistic.


Ecosystem & Usage Scenarios

| Framework | Layer | Strengths | Typical Usage / Scenario |
| --- | --- | --- | --- |
| gRPC | Business | Multi-language ecosystem, widely adopted | Business layer, pipeline orchestration, multi-language clients |
| httplib | Business | Header-only, lightweight | Quick prototyping, temporary services, validation |
| brpc | Backend | Mature, high throughput, Raft-compatible | Backend services, Raft consensus, high reliability |
| krpc | Backend | Kumo-enhanced brpc, better ops | Internal backend preferred choice |
| ACL | Backend | High-quality C++ library | High-performance backend services, IO-intensive tasks |
| Thrift | Legacy | Moderate performance | Legacy interop, declining ecosystem |
| Seastar | Extreme IO | NUMA-aware, very high throughput | Extreme IO services, dedicated ops required |

The business layer favors gRPC for multi-language integration and pipeline control; the backend layer favors brpc/krpc for throughput and operational reliability. Extreme IO frameworks like Seastar require specialized environments.


Performance Metrics

| Framework | Typical QPS per Server | CPU Load | Notes |
| --- | --- | --- | --- |
| gRPC | 3k–10k | 30–50% | Suitable for orchestrating pipelines; practical business throughput |
| httplib | <1k | <20% | Lightweight testing or prototyping only |
| brpc | 10k–30k | 40–70% | Required for Raft; operational expertise needed |
| krpc | 10k–30k | 40–70% | Optimized brpc with better internal ops |
| ACL | 10k–30k | 40–70% | High-quality backend C++ services |
| Thrift | 5k–15k | 40–60% | Legacy support only |
| Seastar | 50k–100k+ (pure IO) | 50–70% | Extreme IO; dedicated ops required |

Sustained CPU load above 70% is risky because it leaves no headroom for traffic spikes. Most claimed "million QPS" values are theoretical. The business layer rarely exceeds 10k QPS per server, while backend systems may target 30k QPS. Extreme IO scenarios require dedicated operational expertise.


Integration & Operational Complexity

| Framework | Language Support | Integration Difficulty | Ops Notes |
| --- | --- | --- | --- |
| gRPC | Multi-language | Medium | Multiple C++ libraries; kmpkg simplifies integration |
| httplib | C++ | Very Low | Header-only, trivial integration |
| brpc | C++ | Medium | Requires ops expertise; Raft-compatible |
| krpc | C++ | Medium | Kumo-enhanced brpc; easier internal ops |
| ACL | C++ | Low-Medium | High-quality library; not ideal for multi-language business layer |
| Thrift | Multi-language | Medium | Ecosystem declining; mainly for legacy support |
| Seastar | C++ | High | Complex ops; NUMA-aware; dedicated environment; high cost |

Business layer frameworks prioritize ecosystem integration over raw throughput. Backend frameworks must handle high CPU and throughput reliably. Extreme IO frameworks like Seastar require professional operations for protocol stack management.


Summary

The RPC ecosystem is diverse. Selection should be scene-driven:

  • Business layer: Use gRPC for multi-language clients and pipeline orchestration. Lightweight alternatives like httplib are only for rapid prototyping.
  • Backend layer: brpc or krpc provides high throughput, mature operations, and Raft compatibility. ACL can be used for high-performance C++ backend tasks.
  • Extreme IO: Seastar is suitable only for specialized, high-throughput IO services and requires dedicated operational expertise.

Because of ecosystem limitations, implementation choices are relatively fixed. For special requirements, custom implementations may be necessary.