Axiom is a high-performance, open-source C++ tensor library that brings the simplicity of the NumPy and PyTorch APIs to native code. With SIMD vectorization, BLAS acceleration, and Metal GPU support, it pairs HPC-grade performance with an intuitive interface that Python developers will find familiar.
Key Features
- Python-like API: Axiom simplifies tensor operations through familiar constructs such as operator overloading and method chaining, making it easy for developers accustomed to NumPy and PyTorch to get started.
- Exceptional Performance: Utilize the power of Accelerate, OpenBLAS, and specially optimized SIMD kernels to achieve high throughput.
- Comprehensive Vectorization: Take advantage of advanced vectorization techniques across various architectures with support for SSE, AVX, AVX-512, ARM NEON, RISC-V, and more.
- GPU Acceleration: Leverage full GPU support via Metal Performance Shaders (MPSGraph) for efficient tensor operations beyond just matrix multiplication.
- Safe Tensor Manipulation: NaN/Inf guards and shape assertions validate tensor operations and catch errors early (see the sketch after this list).
- Cross-platform Compatibility: Axiom is designed for seamless deployment across platforms with dynamically linked BLAS backends, ensuring consistent performance.
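The safety checks are easy to picture in use; below is a minimal sketch, assuming hypothetical helper names such as Tensor::randn, shape(), and has_nan() (the actual Axiom API may differ):
#include <cassert>
// Hypothetical sketch of the safety checks; names are illustrative only.
auto a = Tensor::randn({4, 8});               // assumed random-init factory
auto b = Tensor::randn({8, 2});
assert(a.shape()[1] == b.shape()[0]);         // shape assertion before combining tensors
if (a.has_nan()) { /* reject or sanitize */ } // assumed NaN/Inf guard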
Usage Examples
Get a glimpse of Axiom’s intuitive syntax:
// NumPy: x = np.where(x > 0, x, 0)
x = Tensor::where(x > 0, x, 0);
// NumPy: y = x.reshape(2, -1).T
auto y = x.reshape({2, -1}).T();
// PyTorch: z = F.softmax(scores, dim=-1)
auto z = scores.softmax(-1);
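Operator overloading and method chaining follow the same pattern; here is a brief, hedged sketch (Tensor::randn and the arithmetic operators are assumed names, not confirmed API):
// Hypothetical sketch of broadcasting arithmetic and method chaining.
auto a = Tensor::randn({32, 64});             // assumed random-init factory
auto b = Tensor::randn({64});
auto c = (a + b) * 0.5;                       // broadcasting via overloaded operators
auto d = c.reshape({64, 32}).T().softmax(-1); // chained calls, as in the examples above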
In benchmarks, Axiom has demonstrated over 3500 GFLOPS on an Apple M4 Pro, substantially outperforming libraries such as Eigen and PyTorch.
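As a rough illustration of how such a number is measured (matmul and Tensor::randn are assumed names here, not confirmed API), one can time a large square matrix product and divide its 2·N³ floating-point operations by the elapsed time:
#include <chrono>
// Hypothetical sketch: time an N x N matrix product and report GFLOPS.
const int N = 4096;
auto a = Tensor::randn({N, N});               // assumed random-init factory
auto b = Tensor::randn({N, N});
auto t0 = std::chrono::steady_clock::now();
auto c = a.matmul(b);                         // assumed matrix-multiply member
auto t1 = std::chrono::steady_clock::now();
double secs = std::chrono::duration<double>(t1 - t0).count();
double gflops = 2.0 * N * N * N / secs / 1e9; // GEMM performs ~2*N^3 FLOPs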
Advanced Functionality
Axiom also provides powerful tools for more complex tensor workflows, sketched briefly after the list below:
- Einops Integration: Use semantic patterns for tensor rearrangement and reduction, creating readable and expressive code.
- Full LAPACK Coverage: Access advanced linear algebra operations including singular value decomposition, eigenvalue computations, and matrix solvers.
- Efficient I/O: Save and load tensors in optimized formats like FlatBuffers and NumPy, facilitating swift data interchange between C++ and Python.
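A rough sketch of how these pieces might fit together (axiom::rearrange, save, and Tensor::load are assumed names for illustration, not confirmed API):
// Hypothetical sketch: einops-style rearrangement plus tensor serialization.
auto x = Tensor::randn({8, 3, 32, 32});             // assumed random-init factory
auto y = axiom::rearrange(x, "b c h w -> b h w c"); // assumed einops-style free function
y.save("activations.npy");                          // assumed NumPy-format writer
auto z = Tensor::load("activations.npy");           // assumed loader for the same file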
Axiom is built for developers who want the raw performance of C++ together with the ergonomics of a high-level tensor library. Explore Axiom’s Quick Start guide and Benchmarks section to see the library in action.