A platform where kids can train their own custom AI models for text and image classification using Google Cloud Platform's AutoML capabilities.
- Text classification model training and inference
- Image classification model training using labeled images
- User-friendly API for integrating with other applications
- Consistent error handling and request validation
The application follows a modular architecture:
- Services: Encapsulate all GCP API interactions
- Controllers: Handle HTTP requests and responses
- Routes: Define API endpoints and validation
- Middleware: Provide cross-cutting concerns (error handling, validation)
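The layers above might fit together roughly as in the sketch below. This is an illustration only: it assumes Express is the web framework, and the names do not mirror the project's actual source files.

```javascript
// Hypothetical sketch of the route -> controller -> service flow (assumes Express).
// Names are illustrative, not the project's real modules.
import express from 'express';

// Service layer: encapsulates the GCP API interaction
const textService = {
  async createClassifier(name) {
    // ...Vertex AI dataset/model creation would happen here...
    return { displayName: name };
  },
};

// Controller layer: translates HTTP requests into service calls
async function createTextClassifier(req, res, next) {
  try {
    const result = await textService.createClassifier(req.body.classifier_name);
    res.status(201).json(result);
  } catch (err) {
    next(err); // defer to the error-handling middleware
  }
}

// Routes and middleware wiring
const app = express();
app.use(express.json());
app.post('/classify/text/create', createTextClassifier);
app.use((err, req, res, next) => res.status(500).json({ error: err.message }));
app.listen(process.env.PORT || 2634);
```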
- Node.js 18.x or later
- Google Cloud Platform account with the following enabled:
  - Vertex AI API
  - Cloud Storage
- GCP Project with permissions to create:
  - Datasets
  - Training pipelines
  - Models and endpoints
Create a .env file in the root directory with the following variables:
# Required GCP Configuration
GCP_PROJECT_ID=your-gcp-project-id
GCP_REGION=us-central1
GCS_BUCKET_NAME=your-gcs-bucket-name
GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/credentials.json
# Server Configuration (Optional)
PORT=2634 # Default if not specified
SERVER_PORT=2634 # Alternative to PORT
# SSL Configuration (Optional)
SSL_KEY_PATH=/path/to/ssl/key.pem
SSL_CERT_PATH=/path/to/ssl/cert.pem
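For reference, a typical way to load these variables at startup looks like the sketch below. It assumes the dotenv package is used; the project's actual entry point may differ.

```javascript
// Sketch: load .env values at startup (assumes the dotenv package is installed).
import 'dotenv/config';

const config = {
  projectId: process.env.GCP_PROJECT_ID,
  region: process.env.GCP_REGION || 'us-central1',
  bucketName: process.env.GCS_BUCKET_NAME,
  port: Number(process.env.PORT || process.env.SERVER_PORT || 2634),
};

// Fail fast if a required value is missing
for (const [key, value] of Object.entries(config)) {
  if (value === undefined || value === '') {
    throw new Error(`Missing required configuration value: ${key}`);
  }
}

export default config;
```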
- Clone the repository
  git clone https://github.com/your-username/cognimates-training.git
  cd cognimates-training
- Install dependencies
  npm install
- Set up Google Cloud credentials
  - Create a service account with appropriate permissions in the Google Cloud Console
  - Download the JSON credentials file
  - Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of this file
- Create a Cloud Storage bucket to store training data
  - Set the GCS_BUCKET_NAME environment variable to this bucket name (a quick verification sketch follows the setup steps)
- Start the server
  npm start

Other available npm scripts:
- npm run dev - Start the server with Nodemon for automatic reloading
- npm run sass-build - Build the SCSS files
- npm run lint - Run ESLint on the source code
- npm test - Run the test suite
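To confirm that the credentials and bucket configured in the setup steps are reachable, a quick check like the one below can help. This is a minimal sketch assuming the @google-cloud/storage client package is installed; run it as an ES module (e.g. node check-gcp.mjs).

```javascript
// Sketch: verify GCP credentials and bucket access (assumes @google-cloud/storage).
import { Storage } from '@google-cloud/storage';

// The Storage client picks up GOOGLE_APPLICATION_CREDENTIALS automatically.
const storage = new Storage({ projectId: process.env.GCP_PROJECT_ID });
const [exists] = await storage.bucket(process.env.GCS_BUCKET_NAME).exists();
console.log(exists ? 'Bucket is reachable' : 'Bucket not found - check GCS_BUCKET_NAME');
```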
POST /classify/text/create
Content-Type: application/json
{
"classifier_name": "my-text-classifier"
}
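For example, with the built-in fetch in Node 18+ (the localhost host and default port are assumptions; adjust them to your deployment, and run the script as an ES module):

```javascript
// Sketch: create a text classifier (assumes the server is on localhost:2634).
const res = await fetch('http://localhost:2634/classify/text/create', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ classifier_name: 'my-text-classifier' }),
});
console.log(res.status, await res.json());
```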
POST /classify/text/{classifier_name}/train
Content-Type: application/json
{
"training_data": {
"positive": ["Great product", "Excellent service", "Highly recommend"],
"negative": ["Poor quality", "Not satisfied", "Would not recommend"]
}
}
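Again with Node's built-in fetch, a training request might look like the sketch below; the classifier name, host, and port are placeholders.

```javascript
// Sketch: submit labeled example phrases for training (host and classifier name assumed).
const res = await fetch('http://localhost:2634/classify/text/my-text-classifier/train', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    training_data: {
      positive: ['Great product', 'Excellent service', 'Highly recommend'],
      negative: ['Poor quality', 'Not satisfied', 'Would not recommend'],
    },
  }),
});
console.log(await res.json());
```

Training typically runs as a long-running operation; its status can be checked with the operations endpoint described below.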
POST /classify/text/{classifier_name}
Content-Type: application/json
{
"phrase": "This is an amazing product"
}
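A classification request follows the same pattern (classifier name, host, and port are placeholders):

```javascript
// Sketch: classify a phrase with a trained classifier (host and classifier name assumed).
const res = await fetch('http://localhost:2634/classify/text/my-text-classifier', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ phrase: 'This is an amazing product' }),
});
console.log(await res.json()); // response contains the prediction result
```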
POST /classify/image/{classifier_name}/train
Content-Type: multipart/form-data
Form field: "images" (ZIP file with folders named by label)
The ZIP file structure should be:
training_images.zip
├── label1/
│ ├── image1.jpg
│ └── image2.png
└── label2/
├── image3.jpeg
└── image4.jpg
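One way to send the ZIP archive from Node 18+ is with the built-in fetch, FormData, and Blob. This is a sketch; the classifier name, host, port, and file path are placeholders.

```javascript
// Sketch: upload a ZIP of labeled images for training (host, name, and path assumed).
import { readFile } from 'node:fs/promises';

const zip = await readFile('training_images.zip');
const form = new FormData();
form.append('images', new Blob([zip], { type: 'application/zip' }), 'training_images.zip');

const res = await fetch('http://localhost:2634/classify/image/my-image-classifier/train', {
  method: 'POST',
  body: form, // fetch sets the multipart/form-data boundary automatically
});
console.log(await res.json());
```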
Check the status of a long-running training operation:
GET /classify/text/operations/{operation_name}
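A polling call with Node's built-in fetch might look like this; the operation name below is a placeholder returned by a previous training request, and the host and port are assumptions.

```javascript
// Sketch: check the status of a training operation (operation name and host assumed).
const operationName = 'your-operation-name'; // returned by a training request
const res = await fetch(
  `http://localhost:2634/classify/text/operations/${encodeURIComponent(operationName)}`
);
console.log(await res.json()); // e.g. whether the operation is still running or finished
```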
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add some amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.