This project is a neural network implementation in C++ that uses SIMD (Single Instruction, Multiple Data) instructions to accelerate computation. Key features:
- Implementation of core neural network components
- SIMD optimization for faster computations (see the sketch after this list for the general idea)
- Modular and extensible code structure
- Current work in progress: implementing the attention mechanism
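As a rough illustration of the kind of SIMD acceleration this refers to (a sketch, not code from this repository), an AVX dot product processes eight floats per iteration instead of one:

```cpp
// Illustrative sketch only, not part of this repository: a dot product that
// handles 8 floats per iteration with AVX and falls back to scalar code for the tail.
#include <immintrin.h>
#include <cstddef>

float dot(const float* a, const float* b, std::size_t n) {
    __m256 acc = _mm256_setzero_ps();
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        acc = _mm256_add_ps(acc, _mm256_mul_ps(va, vb));  // acc += a[i..i+7] * b[i..i+7]
    }
    // horizontal sum of the 8 accumulator lanes
    alignas(32) float lanes[8];
    _mm256_store_ps(lanes, acc);
    float sum = lanes[0] + lanes[1] + lanes[2] + lanes[3]
              + lanes[4] + lanes[5] + lanes[6] + lanes[7];
    for (; i < n; ++i) sum += a[i] * b[i];                // scalar tail
    return sum;
}
```

Unaligned loads (`_mm256_loadu_ps`) keep the sketch simple; aligned loads and fused multiply-add (`_mm256_fmadd_ps`) are the typical follow-up optimizations.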
To build the project, you need a C++ compiler with SIMD support (e.g. GCC, Clang, or MSVC) and CMake. On Debian/Ubuntu, install the Eigen dependency first:

```sh
sudo apt update
sudo apt install libeigen3-dev
```

Then configure and build:

```sh
mkdir build
cd build
cmake ..
make
```

Requirements:
- C++17 or later
- CMake (for build configuration)
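If you are unsure whether your compiler flags actually enable SIMD, a small standalone check like the one below can help (illustrative, not part of the repository); compile it with the same flags you intend to use for the project, e.g. `g++ -O2 -mavx2 check.cpp`:

```cpp
// Standalone check that the toolchain exposes x86 SIMD (illustrative, not part of this project).
#include <cstdio>

int main() {
#if defined(__AVX2__)
    std::puts("AVX2 is enabled");
#elif defined(__SSE2__) || defined(_M_X64)
    std::puts("SSE2 is enabled");
#else
    std::puts("No x86 SIMD flags detected; try -mavx2 (GCC/Clang) or /arch:AVX2 (MSVC)");
#endif
    return 0;
}
```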
Implemented so far:
- Linear Layer
- Convolution Layer
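For orientation, here is a minimal sketch of what a fully connected (linear) layer forward pass can look like; the class name, data layout, and interface are illustrative assumptions, not the project's actual API:

```cpp
// Illustrative sketch, not the project's actual API: a linear layer computing
// y = W x + b with row-major weights, written so the inner loop vectorizes well.
#include <vector>
#include <cstddef>

struct Linear {
    std::size_t in_features, out_features;
    std::vector<float> weight;  // out_features x in_features, row-major
    std::vector<float> bias;    // out_features

    void forward(const float* x, float* y) const {
        for (std::size_t o = 0; o < out_features; ++o) {
            const float* w = weight.data() + o * in_features;
            float acc = bias[o];
            // contiguous inner loop: this is the part SIMD (intrinsics or
            // auto-vectorization) accelerates, as in the dot product above
            for (std::size_t i = 0; i < in_features; ++i)
                acc += w[i] * x[i];
            y[o] = acc;
        }
    }
};
```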
Planned work:
- Add an optimizer
- Add the most common learning-rate schedulers (step, linear, cosine, reduce on plateau)
- Complete the attention mechanism implementation (see the sketch after this list)
- Add more layers with SIMD acceleration
- Benchmark performance against non-SIMD implementations
- Set up an AlexNet model from scratch
- Set up a Variational Autoencoder (VAE) from scratch
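Since the attention mechanism is still in progress, the snippet below is only a reference sketch of single-head scaled dot-product attention, softmax(QK^T / sqrt(d)) V, in plain C++; it is not the repository's implementation, and the dot products and weighted sums are the natural places to apply SIMD later.

```cpp
// Sketch of scaled dot-product attention: out_i = sum_j softmax_j(<q_i, k_j> / sqrt(d)) * v_j.
// Plain scalar C++ for clarity; not the repository's implementation.
#include <vector>
#include <cmath>
#include <cstddef>

// q, k, v: seq_len x d, row-major; out: seq_len x d
void attention(const float* q, const float* k, const float* v,
               float* out, std::size_t seq_len, std::size_t d) {
    const float scale = 1.0f / std::sqrt(static_cast<float>(d));
    std::vector<float> scores(seq_len);
    for (std::size_t i = 0; i < seq_len; ++i) {
        // scores[j] = <q_i, k_j> / sqrt(d), tracking the max for a stable softmax
        float max_s = -1e30f;
        for (std::size_t j = 0; j < seq_len; ++j) {
            float s = 0.0f;
            for (std::size_t t = 0; t < d; ++t) s += q[i * d + t] * k[j * d + t];
            scores[j] = s * scale;
            if (scores[j] > max_s) max_s = scores[j];
        }
        // numerically stable softmax weights (unnormalized), plus their sum
        float sum = 0.0f;
        for (std::size_t j = 0; j < seq_len; ++j) {
            scores[j] = std::exp(scores[j] - max_s);
            sum += scores[j];
        }
        // weighted sum of value rows, normalized by the softmax denominator
        for (std::size_t t = 0; t < d; ++t) {
            float acc = 0.0f;
            for (std::size_t j = 0; j < seq_len; ++j) acc += scores[j] * v[j * d + t];
            out[i * d + t] = acc / sum;
        }
    }
}
```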
Contributions are welcome! Feel free to open issues or submit pull requests.
This project is licensed under the MIT License.
For any questions or discussions, feel free to reach out.