Notes:
The Kannada MNIST dataset provided by Kaggle is required to run the code. You must place the dataset in the parent directory (one level above this repository).
The main file is runModels.py
The task for this project was to build three different machine learning models and train them on a Kaggle dataset: one shallow model and two neural networks, the first network having three layers in total.
For our shallow model, we used the Random Forest algorithm and achieved over 92% accuracy on the Kaggle hidden test set.
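For reference, a minimal sketch of what the Random Forest setup can look like, assuming the standard Kaggle train.csv layout (a "label" column followed by 784 pixel columns) and scikit-learn; the exact hyperparameters and file paths used in runModels.py may differ:

    # Sketch of the shallow model pipeline (assumption: scikit-learn + pandas,
    # with train.csv placed in the parent directory as described above).
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    train = pd.read_csv("../train.csv")  # label column + 784 pixel columns
    X, y = train.drop(columns=["label"]), train["label"]
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

    # Fit a Random Forest and report held-out validation accuracy
    rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
    rf.fit(X_train, y_train)
    print("Validation accuracy:", accuracy_score(y_val, rf.predict(X_val)))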
With our 3-layer neural network, we achieved an accuracy of over 95%.
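A minimal sketch of a 3-layer fully connected network, assuming Keras/TensorFlow; the layer sizes, activations, and training settings shown here are illustrative and not necessarily the ones used in the project:

    # Illustrative 3-layer network (assumption: Keras/TensorFlow);
    # the 10 outputs correspond to the 10 Kannada digit classes.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(784,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(X_train / 255.0, y_train, epochs=10, validation_split=0.1)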
With our final neural network, consisting of 16 layers, we achieved an accuracy of over 97%.
==================== Work distribution: ====================
Jacob Reiss : GitHub management, shallow learning model, runModels.py, Random Forest research, documentation
Dang Tran : 16-layer neural network, neural network research, documentation, presentation
Sandra Phan : visualizeData.py, evaluation metrics and research, documentation, presentation
Janie Leung : 3-layer neural network, neural network research, documentation, presentation