A Python library to easily build, train, and test your own CNN models.
Live Demo: PyCNN CIFAR-10 Model
This is a Convolutional Neural Network (CNN) library implemented entirely from scratch using only low-level libraries such as NumPy, PIL, SciPy, and Cython; no deep learning frameworks (e.g., TensorFlow or PyTorch) are used. It can train a CNN model on a local dataset folder or an external Hugging Face dataset, save and load models, use CUDA and the Adam optimizer for better performance, and switch the training backend between CPU and GPU.
- Multi-class classification
- Model save/load functionality (see the sketch after this list)
- RGB image preprocessing with customizable filters
- Real-time training visualization (accuracy & loss per epoch)
- CUDA acceleration for faster training and inference
- Adam optimizer for improved convergence
- Dynamic user-defined layers for customizable architectures
- Automatic backend conversion between CPU and GPU
- Hugging Face datasets support
- Dataset augmentation support
- PyTorch export to convert PyCNN models to PyTorch format
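As an example of the save/load and backend features above, here is a minimal inference-only sketch built from the calls documented in the Quick Start below. It assumes a model.bin previously produced by save_model; whether load_model can be called on a fresh instance without a prior init call is an assumption here.

from pycnn.pycnn import PyCNN

pycnn = PyCNN()
pycnn.cuda(False)                      # run on the CPU backend (pass True to use CUDA via CuPy)
pycnn.load_model("model.bin")          # assumed: a model previously saved with save_model
class_name, confidence = pycnn.predict("test_image.png")
print(f"Prediction: {class_name} ({confidence*100:.2f}%)")
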
- Python 3.8+
- NumPy
- Pillow (PIL)
- SciPy
- Matplotlib (for visualization)
Optional:
- CuPy (for CUDA support)
- datasets (for Hugging Face integration)
- PyTorch (for model export)
Note
No Compiler Required! PyCNN now automatically downloads pre-compiled binaries for Windows, Linux, and macOS. You no longer need to install GCC or build tools for regular usage.
If you want to build from source or contribute to the project, you'll need a C++ compiler:
Linux (Ubuntu/Debian):
sudo apt-get update
sudo apt-get install build-essential

Linux (Fedora/RHEL):
sudo dnf install gcc gcc-c++ make

macOS:
xcode-select --install

Windows:
- Install MinGW-w64 or Microsoft Visual C++ Build Tools
pip install git+https://github.com/77AXEL/PyCNN.git

Standard installation will automatically fetch pre-built binaries for your platform. If no matching binaries are found, it will attempt to build from source (requires a compiler).
Optional - CUDA Support:
pip install cupy-cuda118

See the CUDA Documentation for setup details.
For local datasets, organize your data as follows:
data/
├── train/
│   ├── class1/
│   │   ├── image1.png
│   │   └── image2.png
│   ├── class2/
│   │   ├── image1.png
│   │   └── image2.png
│   └── class.../
└── test/
    ├── class1/
    │   ├── image1.png
    │   └── image2.png
    ├── class2/
    │   ├── image1.png
    │   └── image2.png
    └── class.../
Each subfolder represents a class.
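With the folders laid out like this, the dataset can be loaded through the local-dataset call shown in the Quick Start below. A minimal sketch, assuming the data/ directory above and illustrative hyperparameters:

from pycnn.pycnn import PyCNN

pycnn = PyCNN()
pycnn.init(batch_size=32, layers=[128, 64], learning_rate=0.0001, epochs=50)
pycnn.dataset.local("data", max_image=1000)   # path to the folder structure shown above
pycnn.train_model()
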
from pycnn.pycnn import PyCNN, Evaluate
# Initialize model
pycnn = PyCNN()
pycnn.cuda(True) # Enable CUDA (requires CuPy)
# Configure network architecture
pycnn.init(
    batch_size=32,
    layers=[256, 128, 64],
    learning_rate=0.0001,
    epochs=100
)
# Use Adam optimizer
pycnn.adam()
# Load dataset from Hugging Face
pycnn.dataset.hf(
    "cifar10",
    max_image=1000,
    split="train",
    cached=True
)
# Or load local dataset
# pycnn.dataset.local("path/to/dataset", max_image=1000)
# Train with visualization and early stopping
pycnn.train_model(visualize=True, early_stop=10)
# Evaluate model
eval = Evaluate(pycnn)
eval.hf(dataset_name="cifar10", max_image=10)

# Save model
pycnn.save_model("model.bin")
# Load model
pycnn.load_model("model.bin")
# Export to PyTorch format
pycnn.torch("model.pth")

# Predict on a new image
class_name, confidence = pycnn.predict("test_image.png")
print(f"Prediction: {class_name} ({confidence*100:.2f}%)")from pycnn.pycnn import PyCNN
pycnn = PyCNN()
pycnn.init(epochs=50, layers=[64, 32])
pycnn.dataset.hf("cifar10", max_image=50)
pycnn.adam()
pycnn.train_model()
# Export to PyTorch
pycnn.torch("model.pth")

from pycnn.pycnn import PyCNNTorchModel
from PIL import Image
import numpy as np
import torch
# Load checkpoint
checkpoint = torch.load('model.pth', map_location='cpu')
model = PyCNNTorchModel(
    checkpoint['layers'],
    checkpoint['num_classes'],
    checkpoint['filters'],
    checkpoint['image_size']
)
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
# Predict
def predict(image_path):
    img = Image.open(image_path).convert("RGB")
    img = img.resize((checkpoint['image_size'], checkpoint['image_size']))
    img_array = np.array(img).astype(np.float32) / 255.0
    img_tensor = torch.from_numpy(img_array).permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        output = model(img_tensor)
        confidence, predicted_idx = torch.max(output, 1)
    predicted_class = checkpoint['classes'][predicted_idx.item()]
    print(f"Prediction: {predicted_class} ({confidence.item()*100:.2f}%)")
predict("example.png")- Image Preprocessing: Images are resized, normalized, and processed through custom convolution filters with ReLU activation and max-pooling
- Feature Extraction: Flattened feature maps are fed into fully connected layers
- Classification: Dense layers compute activations followed by softmax for multi-class classification
- Backpropagation: Gradients are computed and weights updated using Adam or SGD optimizer
- GPU Acceleration: CUDA support enables parallel processing for faster training
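To make the flow concrete, here is a small NumPy/SciPy sketch of the same steps for a single grayscale image: a hand-written edge filter, ReLU, 2x2 max-pooling, one fully connected layer with softmax, and a single Adam update. It only illustrates the general technique; the filter, layer sizes, and hyperparameters are invented for the example, and this is not PyCNN's actual implementation.

import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)

# Image preprocessing: one 8x8 grayscale "image" normalized to [0, 1]
image = rng.random((8, 8)).astype(np.float32)

# Convolution filter + ReLU + 2x2 max-pooling
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=np.float32)          # simple edge filter
feature_map = correlate2d(image, kernel, mode="valid")      # -> 6x6
feature_map = np.maximum(feature_map, 0.0)                  # ReLU
pooled = feature_map.reshape(3, 2, 3, 2).max(axis=(1, 3))   # -> 3x3

# Feature extraction: flatten and feed one fully connected layer
x = pooled.reshape(-1)                                      # 9 features
num_classes = 3
W = rng.normal(0, 0.1, (num_classes, x.size)).astype(np.float32)
b = np.zeros(num_classes, dtype=np.float32)
logits = W @ x + b

# Classification: softmax over the class scores
exp_scores = np.exp(logits - logits.max())
probs = exp_scores / exp_scores.sum()

# Backpropagation: cross-entropy gradient and a single Adam update on W
target = 1                                                  # true class index
grad_logits = probs.copy()
grad_logits[target] -= 1.0                                  # d(loss)/d(logits)
grad_W = np.outer(grad_logits, x)

beta1, beta2, lr, eps = 0.9, 0.999, 1e-4, 1e-8
m = (1 - beta1) * grad_W                                    # 1st moment (m0 = 0, t = 1)
v = (1 - beta2) * grad_W ** 2                               # 2nd moment (v0 = 0, t = 1)
m_hat = m / (1 - beta1)                                     # bias correction
v_hat = v / (1 - beta2)
W -= lr * m_hat / (np.sqrt(v_hat) + eps)                    # Adam step

print("class probabilities:", probs)
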
from pycnn.pycnn import PyCNN, Evaluate
pycnn = PyCNN()
pycnn.cuda(False)
pycnn.init(layers=[512, 256], epochs=500)
pycnn.dataset.hf("cifar10", max_image=100, cached=True)
pycnn.adam()
pycnn.train_model(early_stop=15)
eval = Evaluate(pycnn)
eval.hf(dataset_name="cifar10", max_image=10)

We welcome contributions, issues, and suggestions! Check the Discussions tab or see CONTRIBUTING.md.
Found a security issue? Please report it via the Issues tab.
Released under the MIT License.
See the PyCNN Documentation for detailed guides and API reference.

