memcachex is a high-performance Memcached client for Go, built for environments where latency predictability, allocation behavior, and execution control are paramount. It uses a custom event-loop-driven I/O engine with async-first APIs, giving explicit control over request lifecycles and backpressure while keeping allocations low, so behavior under load stays transparent and measurable.
Key Features
- Async-first API: callback-based design with no goroutine spawned per request.
- Event-loop–driven network engine: explicit scheduling and predictable execution.
- Explicit request and buffer pooling: fewer allocations and less garbage-collection pressure.
- Optional OS-thread pinning: tighter latency control in specialized scenarios.
- Allocation-aware hot paths: minimal steady-state allocations.
- Synchronous APIs built on the same async engine: consistent behavior with no duplicated code paths.
- Bounded internal queues: backpressure is applied early instead of hidden in internal buffering (see the sketch after this list).
- Predictable behavior under strain: no latency spikes from unmonitored buffering.
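For example, because internal queues are bounded, an asynchronous submission can fail outright rather than buffering without limit, and the caller decides whether to retry, degrade, or shed load. The sketch below assumes that GetAsync (shown in the API examples later) returns a non-nil error when the queue stays full after the configured enqueue retries, and that the client type is named Client; both are assumptions to verify against the README.

// Sketch: surface backpressure to the caller instead of hiding it in buffers.
// How enqueue failures are reported, and the *memcachex.Client type name,
// are assumptions for illustration.
func fetchAsync(cl *memcachex.Client, key []byte, onValue func(v any, err error)) error {
    if err := cl.GetAsync(key, onValue); err != nil {
        // The request was never enqueued (bounded queue full after the
        // configured retries): report backpressure to the caller instead
        // of queueing the work somewhere else.
        return fmt.Errorf("cache enqueue rejected: %w", err)
    }
    return nil
}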
Client Creation and Configuration
To create a memcachex client, the following code can be used to establish a connection:
cl, err := memcachex.NewClient(
    memcachex.WithAddr("localhost:11211"),
)
if err != nil {
    panic(err)
}
Clients can be configured through functional options or a ClientOptions struct. Here’s an example using functional options:
client, err := memcachex.NewClient(
    memcachex.WithAddr("127.0.0.1:11211"),
    memcachex.WithNumEventLoops(1),
    memcachex.WithNumEventLoopSockets(2),
    memcachex.WithRingSize(8192),
    memcachex.WithNumEnqueueRetries(2),
    memcachex.WithLockOSThread(false),
)
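The post only shows the functional-option form; a rough sketch of the equivalent ClientOptions-based configuration might look like the following. Both the field names and the constructor that accepts the struct are assumptions mirroring the options above, so check the README for the actual definitions.

// Sketch only: field and constructor names are assumptions inferred from the
// functional options above; the README documents the real ClientOptions type.
opts := memcachex.ClientOptions{
    Addr:                "127.0.0.1:11211",
    NumEventLoops:       1,
    NumEventLoopSockets: 2,
    RingSize:            8192,
    NumEnqueueRetries:   2,
    LockOSThread:        false,
}
client, err := memcachex.NewClientFromOptions(opts) // hypothetical constructor
if err != nil {
    panic(err)
}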
A full overview of the configuration options is in the README. The defaults are tuned for typical usage; most applications should not need to change them unless they operate under specific conditions.
API Usage Examples
Synchronous APIs
Use the synchronous APIs for straightforward operations (a complete runnable sketch follows these snippets):
- Get a value:
val, err := cl.Get([]byte("key"))
- Set a value:
err := cl.Set(&proto.Item{
    Key:        []byte("key"),
    Value:      []byte("value"),
    Expiration: 10,
})
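Put together, a minimal synchronous round trip looks roughly like the sketch below. The import paths are assumptions (the README has the real module path), and how a cache miss is reported, via the error or via a nil result, is not specified here.

package main

import (
    "fmt"
    "log"

    "github.com/example/memcachex"       // import paths are assumptions;
    "github.com/example/memcachex/proto" // see the README for the real module path
)

func main() {
    cl, err := memcachex.NewClient(
        memcachex.WithAddr("127.0.0.1:11211"),
    )
    if err != nil {
        log.Fatal(err)
    }

    // Store a value with a 10-second expiration.
    if err := cl.Set(&proto.Item{
        Key:        []byte("greeting"),
        Value:      []byte("hello"),
        Expiration: 10,
    }); err != nil {
        log.Fatal(err)
    }

    // Read it back. How a miss is reported (error vs. nil result) is an
    // assumption to verify against the library's documentation.
    val, err := cl.Get([]byte("greeting"))
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("got: %+v\n", val)
}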
Asynchronous APIs
For non-blocking requests, use the asynchronous API (a fan-out sketch follows this snippet):
- Async Get:
err := cl.GetAsync([]byte("key"), func(v any, err error) {
    if err != nil {
        return
    }
    val := v.(*proto.Value)
    fmt.Println(string(val.Value))
})
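Because the callback-based API does not spawn a goroutine per request, waiting for a fan-out of lookups is left to the caller. A minimal sketch using a WaitGroup follows; it assumes the sync and log packages are imported, that cl is the client created earlier, and that the callback fires exactly once for every request that was successfully enqueued.

// Fan out several async gets and wait for every callback to complete.
keys := [][]byte{[]byte("a"), []byte("b"), []byte("c")}

var wg sync.WaitGroup
for _, key := range keys {
    wg.Add(1)
    err := cl.GetAsync(key, func(v any, err error) {
        defer wg.Done()
        if err != nil {
            log.Printf("get failed: %v", err)
            return
        }
        val := v.(*proto.Value)
        log.Printf("got %q", val.Value)
    })
    if err != nil {
        // Assumed: the callback does not run if GetAsync itself fails,
        // so release the WaitGroup slot here.
        wg.Done()
        log.Printf("enqueue failed: %v", err)
    }
}
wg.Wait()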
Target Audience
memcachex is ideal for developers and systems that prioritize:
- High request rates
- Low tail latency
- Explicit control over asynchronous processes
- Predictable performance instead of abstract convenience
This library is experimental, and its APIs are expected to change as optimization work continues. The design prioritizes stability, efficiency, and performance, making it suitable for demanding infrastructure-grade workloads.