This repository contains a complete set of image processing and computer vision exercises developed for the Computer Vision course at U-tad (Centro Universitario de Tecnología y Arte Digital).
All tasks are implemented in a single Jupyter notebook, supported by an img/ directory containing test images.
The work covers a wide range of transformations and filtering techniques, including contrast enhancement, denoising, Gaussian filtering, edge detection, feature extraction, and connected-components analysis. Both manual implementations and OpenCV-based methods are explored throughout the notebook.
- CLAHE using OpenCV
- Manual CLAHE implementation
- Single and multiple clipLimit experiments
- Manual multi-clipLimit contrast enhancement
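The manual CLAHE experiments above revolve around one core step: clipping the tile histogram at the clip limit, redistributing the excess, and equalizing from the clipped CDF. A minimal numpy sketch of that step for a single tile (the function name and defaults are illustrative, not the notebook's code):

```python
import numpy as np

def clipped_equalize(tile, clip_limit=40, n_bins=256):
    """Contrast-limited histogram equalization for one tile (CLAHE core).

    Bins are clipped at `clip_limit` (in pixel counts) and the excess is
    redistributed uniformly before building the intensity mapping.
    """
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, n_bins))
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // n_bins
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()          # first occupied bin maps to 0
    lut = np.round((cdf - cdf_min) / max(cdf.max() - cdf_min, 1) * (n_bins - 1))
    lut = np.clip(lut, 0, n_bins - 1).astype(np.uint8)
    return lut[tile]

rng = np.random.default_rng(0)
tile = rng.integers(100, 140, size=(8, 8), dtype=np.uint8)  # low-contrast tile
out = clipped_equalize(tile)  # values now stretched across 0..255
```

Full CLAHE additionally interpolates the mappings of neighbouring tiles; `cv2.createCLAHE(clipLimit=..., tileGridSize=...)` handles both steps in the OpenCV variant.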
- Application of linear and nonlinear denoising filters
- MSE and PSNR quantitative evaluation
- Median-filter section analysis
- Non-Local Means (NLMeans) denoising applied to grayscale images
- Quantitative comparison and section analysis
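The MSE/PSNR evaluation used for the quantitative comparisons can be sketched in a few lines of numpy (illustrative helper names, not the notebook's code):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images (computed in float64)."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * np.log10(max_val ** 2 / m)

clean = np.full((4, 4), 128, dtype=np.uint8)
noisy = clean.copy()
noisy[0, 0] = 138                 # a single 10-level error
# MSE = 100 / 16 = 6.25; PSNR = 10*log10(255^2 / 6.25) ≈ 40.17 dB
```

Casting to float64 before subtracting matters: differencing uint8 arrays directly wraps around and silently corrupts the error.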
- Construction of the Gaussian kernel
- Horizontal and vertical separable convolution
- Manual Gaussian filtering vs OpenCV implementation
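The kernel construction and separable (horizontal, then vertical) convolution above can be sketched as follows; a numpy-only illustration with hypothetical function names, mirroring what `cv2.GaussianBlur` does internally:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    """Sampled 1-D Gaussian, normalized so its taps sum to 1."""
    if radius is None:
        radius = int(3 * sigma)   # common truncation: +/- 3 sigma
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur_separable(img, sigma):
    """Blur by convolving every row, then every column, with the 1-D kernel.

    Separability turns one (2r+1)x(2r+1) 2-D convolution into two
    (2r+1)-tap 1-D passes, which is much cheaper for large kernels.
    """
    k = gaussian_kernel_1d(sigma)
    rows = np.apply_along_axis(np.convolve, 1, img.astype(np.float64), k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

flat = np.ones((11, 11))
blurred = gaussian_blur_separable(flat, 1.0)  # interior pixels stay 1.0
```

Note that `mode="same"` zero-pads, so values near the border are darkened; OpenCV instead reflects the border by default, which is one source of small differences in a manual-vs-OpenCV comparison.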
- Theoretical basis of the Kuwahara filter
- Qualitative interpretation
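The theoretical basis of the Kuwahara filter is a four-quadrant selection: around each pixel, compute the mean and variance of four overlapping square regions and output the mean of the least-variant one, which smooths textures while keeping edges sharp. A naive (unvectorized) numpy sketch of that rule:

```python
import numpy as np

def kuwahara(img, r=2):
    """Naive Kuwahara filter on a 2-D grayscale image.

    For each pixel, evaluate the four (r+1)x(r+1) quadrants that meet at
    the pixel and return the mean of the quadrant with minimum variance.
    """
    img = img.astype(np.float64)
    pad = np.pad(img, r, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            py, px = y + r, x + r  # coordinates in the padded image
            quads = (
                pad[py - r:py + 1, px - r:px + 1],  # top-left
                pad[py - r:py + 1, px:px + r + 1],  # top-right
                pad[py:py + r + 1, px - r:px + 1],  # bottom-left
                pad[py:py + r + 1, px:px + r + 1],  # bottom-right
            )
            variances = [q.var() for q in quads]
            out[y, x] = quads[int(np.argmin(variances))].mean()
    return out

step = np.zeros((8, 8))
step[:, 4:] = 100.0               # vertical step edge
filtered = kuwahara(step, r=2)    # the edge survives un-blurred
```

On the step image, pixels on either side of the edge always have at least one quadrant entirely inside a flat region (variance zero), so the edge is preserved exactly; this is the qualitative behaviour discussed in the notebook.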
- Laplacian of Gaussian (LoG)
- Canny edge detection
- Size reduction + Canny preprocessing
- Connected-components extraction
- Comparison with alternative approaches
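The connected-components extraction step can be sketched as a BFS flood fill over a binary mask (a from-scratch illustration; OpenCV's `cv2.connectedComponents` does the same labeling far faster):

```python
from collections import deque
import numpy as np

def label_components(mask):
    """4-connected component labeling of a boolean mask via BFS flood fill.

    Returns (labels, count): background pixels get label 0 and each
    component gets a label in 1..count.
    """
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=np.int32)
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                count += 1
                labels[sy, sx] = count
                queue = deque([(sy, sx)])
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count

mask = np.zeros((5, 5), dtype=bool)
mask[0:2, 0:2] = True             # blob 1
mask[3:5, 3:5] = True             # blob 2 (diagonal, so separate under 4-connectivity)
labels, n = label_components(mask)
```

In an edge-detection pipeline the mask would typically be the (dilated) Canny output, and small components can then be discarded by area.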
- SIFT keypoint detection
- Orientation assignment
- Descriptor construction
- Feature matching with BFMatcher
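The matching step with BFMatcher reduces to brute-force nearest-neighbour search over descriptors plus Lowe's ratio test. A numpy sketch of that logic (what `BFMatcher.knnMatch(k=2)` followed by the ratio filter computes; the helper name and toy descriptors are illustrative):

```python
import numpy as np

def match_ratio(desc1, desc2, ratio=0.75):
    """Brute-force L2 matching with Lowe's ratio test.

    A query descriptor is matched only if its nearest neighbour in
    `desc2` is clearly closer than the second nearest.
    """
    # Pairwise squared L2 distances between the two descriptor sets.
    d2 = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(-1)
    matches = []
    for i, row in enumerate(d2):
        j1, j2 = np.argsort(row)[:2]            # two nearest neighbours
        if row[j1] < ratio ** 2 * row[j2]:      # squared-distance ratio test
            matches.append((i, int(j1)))
    return matches

d1 = np.array([[0.0, 0.0], [5.0, 5.0]])                    # query descriptors
d2 = np.array([[0.1, 0.0], [10.0, 10.0], [5.0, 5.1]])      # train descriptors
pairs = match_ratio(d1, d2)      # descriptor 0 -> 0, descriptor 1 -> 2
```

Real SIFT descriptors are 128-dimensional, but the geometry of the test is identical; ambiguous keypoints (two similar candidates) are rejected rather than mismatched.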
- /img — test images
- computer_vision_transformations.ipynb — main notebook
- README.md
Note: all results, plots, explanations, and comparisons are contained inside the notebook.
Topics covered:

- Contrast enhancement (CLAHE)
- Noise reduction and smoothing filters
- Gaussian filtering (manual & OpenCV)
- Laplacian of Gaussian
- Canny edge detection
- Connected components
- SIFT: detection, description, and matching
- Quantitative measures: MSE, PSNR, entropy