Every Other Token
Transform LLM interactions with real-time token manipulation.
Pitch

Every Other Token is a tool that lets researchers intercept OpenAI's streaming API and manipulate token responses in real time. This approach supports studies of token dependencies, interpretability, and creative AI collaboration, opening new avenues for research into LLM interaction.

Description

Every Other Token (Rust)

A real-time LLM stream interceptor for token-level interaction research, built in Rust.

What is this?

Every Other Token intercepts OpenAI's streaming API responses and applies transformations to alternating tokens in real time. Instead of waiting for complete responses, it intervenes at the token level, creating a new paradigm for LLM interaction and analysis.

How it works

  1. Even tokens (0, 2, 4, 6...): Passed through unchanged
  2. Odd tokens (1, 3, 5, 7...): Transformed using the selected method
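
The alternation itself can be sketched as a simple function over a token sequence (an illustrative sketch, not the crate's actual streaming code):

// A minimal sketch of the even/odd alternation (illustrative only)
fn process_tokens<F>(tokens: &[&str], transform: F) -> Vec<String>
where
    F: Fn(&str) -> String,
{
    tokens
        .iter()
        .enumerate()
        .map(|(i, tok)| {
            if i % 2 == 0 {
                tok.to_string()   // even token: passed through unchanged
            } else {
                transform(tok)    // odd token: transformed
            }
        })
        .collect()
}

fn main() {
    let tokens = ["Once", " upon", " a", " time"];
    // With the reverse transform: "Once" and " a" pass through, " upon" and " time" are reversed
    let out = process_tokens(&tokens, |t| t.chars().rev().collect());
    println!("{}", out.concat());
}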

Example Output

Screenshot showing the tool in action with the reverse transform.

Quick Start

Prerequisites

  • Rust 1.70.0 or higher
  • OpenAI API key

Installation

# Clone the repository
git clone https://github.com/yourusername/every-other-token-rust.git
cd every-other-token-rust

# Build the project
cargo build --release

# Set your OpenAI API key (on Windows cmd, use `set` instead of `export`)
export OPENAI_API_KEY=your-api-key-here

Basic Usage

# Simple example with reverse transform
cargo run -- "Tell me a story about a robot"

# With specific transformation
cargo run -- "Explain quantum physics" uppercase

# With different model
cargo run -- "Write a haiku" mock gpt-4

Available Transformations

Transform | Description | Example Input | Example Output
reverse | Reverses odd tokens | "hello world" | "hello dlrow"
uppercase | Converts odd tokens to uppercase | "hello world" | "hello WORLD"
mock | Alternating case (mocking text) | "hello world" | "hello WoRlD"
noise | Adds random characters to odd tokens | "hello world" | "hello world*"
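
Roughly, the odd-token transforms could be implemented like this (a hedged sketch; it assumes the rand crate for noise and may not match the repo's exact code):

// Illustrative sketches of the four transforms (the repo's implementations may differ)
use rand::Rng; // assumes the `rand` crate for the noise transform

fn reverse(token: &str) -> String {
    token.chars().rev().collect()
}

fn uppercase(token: &str) -> String {
    token.to_uppercase()
}

// Alternating case, e.g. "world" -> "WoRlD"
fn mock(token: &str) -> String {
    token
        .chars()
        .enumerate()
        .map(|(i, c)| if i % 2 == 0 { c.to_ascii_uppercase() } else { c.to_ascii_lowercase() })
        .collect()
}

// Appends a random character, e.g. "world" -> "world*"
fn noise(token: &str) -> String {
    let charset = b"*#@!%&";
    let extra = charset[rand::thread_rng().gen_range(0..charset.len())] as char;
    format!("{token}{extra}")
}

fn main() {
    println!("{} {} {} {}", reverse("world"), uppercase("world"), mock("world"), noise("world"));
}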

Why does this matter?

This tool opens up novel research possibilities:

  • Token Dependency Analysis: Study how LLMs handle disrupted token sequences
  • Interpretability Research: Understand token-level dependencies and causality
  • Creative AI Interaction: Build co-creative systems with human-AI token collaboration
  • Real-time LLM Steering: Develop new prompt engineering techniques
  • Stream Manipulation: Explore how semantic meaning degrades with token alterations

Research Applications

Token Dependency Studies

# Study how meaning degrades with token corruption
cargo run -- "Solve this math problem: 2+2=" reverse

Semantic Robustness Testing

# Test how well LLMs maintain coherence under disruption
cargo run -- "Continue this story logically..." noise

Creative Collaboration

# Use transformations to create unexpected creative outputs
cargo run -- "Write a poem about nature" mock

Command Line Usage

every-other-token [PROMPT] [TRANSFORM] [MODEL]

Arguments:

  • PROMPT: Your input prompt (required)
  • TRANSFORM: Transformation type - reverse, uppercase, mock, noise (default: reverse)
  • MODEL: OpenAI model to use (default: gpt-3.5-turbo)
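
A minimal sketch of how these positional arguments and their defaults might be handled (hypothetical; the project's actual parsing code may differ):

// Hypothetical sketch of positional-argument handling with the documented defaults
use std::env;

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();

    let prompt = args.get(0).expect("PROMPT is required");
    let transform = args.get(1).map(String::as_str).unwrap_or("reverse");
    let model = args.get(2).map(String::as_str).unwrap_or("gpt-3.5-turbo");

    println!("prompt={prompt} transform={transform} model={model}");
}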

Examples:

cargo run -- "Hello world"
cargo run -- "Hello world" uppercase
cargo run -- "Hello world" mock gpt-4

Building from Source

# Clone and build
git clone https://github.com/Mattbusel/Every-Other-Token.git
cd Every-Other-Token
cargo build --release

# Run tests
cargo test

# Run with debug info
RUST_LOG=debug cargo run -- "test prompt"

Performance

The Rust implementation offers significant performance improvements over the original Python version:

  • Memory Usage: ~90% reduction compared to Python
  • CPU Usage: ~75% reduction in overhead
  • Latency: ~60% faster per-token processing
  • Throughput: Handles 10,000+ tokens/second

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • OpenAI for the streaming API
  • The Rust community for excellent async ecosystem
  • Original Python implementation for inspiration

"Every token tells a story. Every other token tells a different one."
