Hello world! I'm Parampal Singh, a passionate engineer continuously learning and growing in Machine Learning and Software Engineering. I'm on an exciting journey exploring the intersection of AI research and practical software development, building things like:
- LLMs from scratch using PyTorch
- High-performance APIs with FastAPI and Rust
- Cloud-native applications leveraging Kubernetes and Terraform
- Full-stack web applications with modern frameworks
My journey involves continuous learning and pushing the boundaries of what's possible with code. I'm particularly interested in:
- Building and fine-tuning large language models
- Systems programming with Rust
- Microservices architecture and cloud infrastructure
- Solving complex problems with elegant, efficient solutions
When I'm not coding, you'll find me exploring new technologies and tinkering. I'm always eager to connect with experienced professionals and am actively seeking mentorship to accelerate my growth in the ML/AI field.
Building a GPT-style large language model with PyTorch, following Sebastian Raschka's book "Build a Large Language Model (From Scratch)". Implements the transformer architecture, attention mechanisms, and fine-tuning capabilities.
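As a taste of the core building block, here's a minimal self-attention layer in PyTorch (an illustrative sketch with my own names and dimensions, not the book's exact code):

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Scaled dot-product self-attention over a batch of token embeddings."""

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.W_q = nn.Linear(d_in, d_out, bias=False)  # query projection
        self.W_k = nn.Linear(d_in, d_out, bias=False)  # key projection
        self.W_v = nn.Linear(d_in, d_out, bias=False)  # value projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_in)
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)
        weights = torch.softmax(scores, dim=-1)
        return weights @ v  # (batch, seq_len, d_out)

attn = SelfAttention(d_in=32, d_out=64)
print(attn(torch.randn(2, 8, 32)).shape)  # torch.Size([2, 8, 64])
```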
A production-ready API for text generation using Hugging Face Transformers. Includes Azure Infrastructure as Code with Terraform, Kubernetes deployment manifests, and an optimized Docker container.
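A minimal sketch of what such an endpoint can look like (the model name, route, and request schema here are illustrative assumptions, not the actual project code):

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
# Small model for illustration; the real service would load a larger one.
generator = pipeline("text-generation", model="gpt2")

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 50

@app.post("/generate")
def generate(req: GenerateRequest) -> dict:
    outputs = generator(req.prompt, max_new_tokens=req.max_new_tokens,
                        num_return_sequences=1)
    return {"text": outputs[0]["generated_text"]}

# Run locally with: uvicorn main:app --reload
```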
Exploring distributed computing and ML orchestration with Ray. Learning distributed training, hyperparameter tuning, and scalable ML workflows for production-grade machine learning systems.
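The basic pattern Ray enables is turning ordinary functions into parallel tasks; a toy sketch (not the project's code) looks like this:

```python
import ray

ray.init()  # starts a local Ray runtime; point this at a cluster for real workloads

@ray.remote
def square(x: int) -> int:
    return x * x

# Launch tasks in parallel; ray.get blocks until all results are ready.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```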
Parameter-Efficient Fine-Tuning (PEFT) techniques following the Hugging Face course. Exploring LoRA, AdaLoRA, and other efficient fine-tuning methods for large language models.
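For example, wrapping a model with a LoRA adapter via the peft library takes only a few lines (a sketch with assumed hyperparameters and GPT-2 as the base model):

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=32,              # scaling applied to the LoRA update
    lora_dropout=0.1,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```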
A production-grade email newsletter API built with Rust and Axum: a reimplementation of "Zero To Production In Rust" using the modern Axum web framework, featuring authentication, PostgreSQL integration, comprehensive testing, and deployment configurations.
An implementation of microservices architecture following the book "Bootstrapping Microservices with Docker, Kubernetes, and Terraform".
- I'm currently working on building and fine-tuning LLMs from scratch and exploring distributed computing with Ray
- I'm currently learning Rust, Parameter-Efficient Fine-Tuning (PEFT), distributed ML frameworks, and Kubernetes
- I'm looking to collaborate on open-source ML/AI projects or Rust ecosystem tools
- I'm actively seeking mentorship from experienced ML Engineers and AI researchers to accelerate my learning journey
- Ask me about my learning journey, LLMs, Rust, FastAPI, distributed computing, or anything tech-related
- Fun fact: I maintain custom Linux dotfiles and am passionate about terminal-based workflows with Neovim
