Create a shared RAM pool from nearby devices effortlessly.
Pitch

MemCloud transforms multiple machines into a cohesive, ephemeral storage solution by pooling their available RAM. Written in Rust, it offers seamless integration for macOS and Linux users, enabling fast data access with millisecond latency. Discover the ease of distributed storage with no complicated setup required.

Description

MemCloud is an innovative distributed in-memory data store crafted in Rust that enables macOS and Linux machines on a local network to collaborate by pooling their RAM into a shared, ephemeral storage cloud. This project offers a unique approach to utilizing idle memory across multiple devices, thereby creating a high-speed, efficient data storage solution for local networks.

Key Features

  • Distributed RAM Pooling: Aggregate unused RAM from multiple devices within a Local Area Network (LAN).
  • Zero-Config Discovery: Effortlessly connect devices using automatic peer discovery via mDNS without the need for manual IP configurations.
  • Millisecond Latency: Store and retrieve data across devices in under 10 milliseconds.
  • Multi-Device Support: Compatible with macOS and major Linux distributions, including Ubuntu.
  • Offline Functionality: Operates fully over the LAN without requiring an internet connection, making it ideal for local applications.
  • Command-Line Interface and SDKs: Offers a robust command-line interface along with Rust and TypeScript/JavaScript SDKs.
  • Daemon Mode: Can function as a background service for continuous operation.
  • Key-Value Store: Supports standard set(key, value) and get(key) operations alongside basic block storage.
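
The set/get surface described above can be sketched as follows. This is a minimal, illustrative stand-in: the `MemCloudClient` name and method signatures are assumptions for the sketch, and a plain `HashMap` stands in for the daemon-backed pool, so only the shape of the API is shown, not MemCloud's actual client.

```rust
use std::collections::HashMap;

// Hypothetical client sketch: a HashMap stands in for the RAM pool
// that the real daemon would manage across peers.
struct MemCloudClient {
    store: HashMap<String, Vec<u8>>,
}

impl MemCloudClient {
    fn connect() -> Self {
        // The real client would dial the local daemon's RPC socket here.
        MemCloudClient { store: HashMap::new() }
    }

    // Store a value under a key.
    fn set(&mut self, key: &str, value: &[u8]) {
        self.store.insert(key.to_string(), value.to_vec());
    }

    // Retrieve a value previously stored under a key, if present.
    fn get(&self, key: &str) -> Option<&[u8]> {
        self.store.get(key).map(|v| v.as_slice())
    }
}

fn main() {
    let mut client = MemCloudClient::connect();
    client.set("session:42", b"user-data");
    assert_eq!(client.get("session:42"), Some(&b"user-data"[..]));
    assert_eq!(client.get("missing"), None);
    println!("ok");
}
```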

Architecture Overview

Applications reach MemCloud through the CLI or an SDK, which talk to a local daemon over an RPC API (Unix socket or TCP). Inside the daemon, a block manager keeps data in a local RAM cache and hands overflow to a peer manager, which discovers remote daemons via mDNS and exchanges blocks with them over a binary RPC protocol on TCP.

```mermaid
flowchart TD
    subgraph AppLayer[Application Layer]
        CLI["MemCLI (optional)"]
        SDK["JS / Python / Rust SDK"]
    end
    subgraph LocalDaemon["MemCloud Daemon (Local)"]
        RPC["Local RPC API<br/>(Unix Socket / TCP)"]
        BlockMgr["Block Manager<br/>(Store/Load/Free)"]
        PeerMgr["Peer Manager<br/>(Connections & Routing)"]
        RAM[("Local RAM Cache")]
        Discovery["mDNS Discovery"]
    end
    subgraph RemoteDevice["Remote Device(s)"]
        RemoteDaemon["Remote MemCloud Daemon"]
        RemoteRAM[("Remote RAM Storage")]
    end

    CLI --> RPC
    SDK --> RPC
    RPC --> BlockMgr
    BlockMgr --> RAM
    BlockMgr --> PeerMgr
    PeerMgr --> Discovery
    Discovery --> RemoteDaemon
    PeerMgr <-->|TCP / Binary RPC| RemoteDaemon
    RemoteDaemon --> RemoteRAM
```
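
The "TCP / Binary RPC" link between the peer manager and remote daemons typically relies on message framing. The sketch below shows one common approach, length-prefixed frames with a 4-byte big-endian header; this is an assumption for illustration, not MemCloud's documented wire format.

```rust
// Illustrative length-prefixed framing for a binary RPC link.
// The 4-byte big-endian header is an assumed convention, not
// necessarily what MemCloud's peer protocol uses.
fn encode_frame(payload: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(4 + payload.len());
    frame.extend_from_slice(&(payload.len() as u32).to_be_bytes());
    frame.extend_from_slice(payload);
    frame
}

// Returns (payload, remaining bytes) once a full frame has arrived,
// or None if more data is needed.
fn decode_frame(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    if buf.len() < 4 {
        return None;
    }
    let len = u32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
    if buf.len() < 4 + len {
        return None;
    }
    Some((&buf[4..4 + len], &buf[4 + len..]))
}

fn main() {
    let frame = encode_frame(b"STORE block-7");
    let (payload, rest) = decode_frame(&frame).unwrap();
    assert_eq!(payload, b"STORE block-7");
    assert!(rest.is_empty());
    println!("frame round-trips");
}
```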

Use Cases

Efficient Data Storage for Large Datasets

MemCloud excels when an application must process large data streams without exhausting local memory. During log archiving, for example, it can offload stream data to connected peers rather than overwhelming the local machine's resources.
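
A spillover policy like the one described might look like this sketch: keep blocks in local RAM until a soft limit is hit, then hand the overflow to a peer. `Peer` and `SpilloverStore` are hypothetical names, and the in-process `Vec` stands in for the real daemon's network path to a remote device.

```rust
// Hypothetical spillover policy: local RAM first, peers for overflow.
struct Peer {
    held: Vec<Vec<u8>>, // stands in for a remote daemon's RAM storage
}

struct SpilloverStore {
    local: Vec<Vec<u8>>,
    local_limit_bytes: usize,
    local_used: usize,
    peer: Peer,
}

impl SpilloverStore {
    fn store(&mut self, block: Vec<u8>) {
        if self.local_used + block.len() <= self.local_limit_bytes {
            // Block fits within the local soft limit: keep it in RAM.
            self.local_used += block.len();
            self.local.push(block);
        } else {
            // The real daemon would ship this over TCP to a discovered peer.
            self.peer.held.push(block);
        }
    }
}

fn main() {
    let mut store = SpilloverStore {
        local: Vec::new(),
        local_limit_bytes: 8,
        local_used: 0,
        peer: Peer { held: Vec::new() },
    };
    store.store(vec![0u8; 6]); // fits locally
    store.store(vec![0u8; 6]); // would exceed the limit, goes to the peer
    assert_eq!(store.local.len(), 1);
    assert_eq!(store.peer.held.len(), 1);
}
```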

Distributed Caching for Development

MemCloud turns pooled local memory into a collaborative caching layer, improving performance for development and machine-learning workloads.

Performance Comparison

MemCloud operates on a unique P2P architecture, which contrasts with traditional client-server models like Redis and Memcached. While those platforms serve as strong standalone instances, MemCloud's design focuses on utilizing idle resources across all connected machines for distributed local caching.

Performance Benchmarks

Leveraging Rust's efficiency and asynchronous capabilities through the Tokio runtime, MemCloud shows promising performance metrics, making it a compelling choice for local caching needs:

| System    | SET (ops/sec) | GET (ops/sec) |
|-----------|---------------|---------------|
| MemCloud  | 25,545        | 16,704        |
| Redis     | ~28,000*      | ~30,000*      |
| Memcached | ~35,000*      | ~40,000*      |

(Benchmark conducted on a MacBook Air M1 with 1 KB payloads and 10k operations.)
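
For reference, an ops/sec measurement of the kind behind this table has roughly the following shape. A `HashMap` stands in for the MemCloud client here, so the numbers this prints only illustrate the method, not MemCloud's performance; the payload size and operation count mirror the stated benchmark setup.

```rust
use std::collections::HashMap;
use std::time::Instant;

// Time `ops` invocations of an operation and return the rate.
fn ops_per_sec(ops: usize, mut op: impl FnMut(usize)) -> f64 {
    let start = Instant::now();
    for i in 0..ops {
        op(i);
    }
    ops as f64 / start.elapsed().as_secs_f64()
}

fn main() {
    let payload = vec![0u8; 1024]; // 1 KB payload, as in the benchmark
    let mut store: HashMap<String, Vec<u8>> = HashMap::new();

    let set_rate = ops_per_sec(10_000, |i| {
        store.insert(format!("key-{i}"), payload.clone());
    });
    let get_rate = ops_per_sec(10_000, |i| {
        let _ = store.get(&format!("key-{i}"));
    });

    println!("SET: {set_rate:.0} ops/sec, GET: {get_rate:.0} ops/sec");
}
```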

MemCloud presents a powerful solution for anyone looking to leverage the power of distributed in-memory data storage in an efficient, user-friendly manner. For complete implementation details, reference the repository's README file.
