I'm a Software Engineer with 5+ years of experience building scalable, production-grade systems, primarily in the fintech and cloud-native domains. I specialize in Python, Java, React, SQL, and AWS. I have a strong foundation in full-stack development, workflow automation, and data engineering, with a focus on building systems that drive measurable business impact.
- End-to-End Expertise: I can deploy an MVP in hours using GenAI tools and have hands-on experience building data ingestion pipelines and ETL processes.
- AI & Research: Recently, I’ve trained vision transformers such as Swin and SegFormer, along with LaneATT, on event-based camera data for high-accuracy road lane detection, outperforming traditional methods.
- Robotics & Autonomous Systems: Currently, I'm a Research Assistant at the UTA Research Institute, developing real-time perception and motion tracking systems for autonomous platforms using sensor fusion and event-based vision.
- Passion: I enjoy solving complex problems, optimizing code, and building prototypes that just work. I'm driven by a desire to create smarter, more efficient tools that have a meaningful impact.
Here are a few of my projects:
Everleaf is an AI-native LaTeX editor built to simplify academic writing by automating formatting, citations, and content editing. It features real-time LaTeX compilation, PDF preview, and mobile-friendly editing. The frontend is built with React and Tailwind CSS, while the backend uses Node.js and Express. For AI capabilities, it integrates Meta’s Llama 3.1 via the Groq API for fast, context-aware language generation. Uploaded documents are processed with LlamaParse, and their embeddings are stored in Pinecone for vector-based retrieval. A RAG pipeline powers the research assistant, letting users ask questions about, or insert references from, their own uploaded papers. A key technical focus was enabling surgical editing of LaTeX: modifying targeted sections without breaking the document structure.
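To give a feel for the RAG flow described above, here is a minimal Python sketch of the retrieve-then-generate step. The actual backend is Node.js/Express, so the index name, the `embed_query` helper, and the Groq model id below are illustrative placeholders, not Everleaf's real identifiers.

```python
# Conceptual sketch only: Everleaf's backend is Node.js/Express, and the names
# below ("everleaf-papers", embed_query, the model id) are placeholders.
from pinecone import Pinecone
from groq import Groq

pc = Pinecone(api_key="PINECONE_API_KEY")
index = pc.Index("everleaf-papers")  # assumed index of LlamaParse chunk embeddings
llm = Groq(api_key="GROQ_API_KEY")


def embed_query(text: str) -> list[float]:
    """Placeholder: embed the question with the same model used for the stored chunks."""
    raise NotImplementedError


def ask(question: str) -> str:
    # 1. Vector search: pull the most relevant chunks of the user's uploaded papers.
    result = index.query(vector=embed_query(question), top_k=5, include_metadata=True)
    context = "\n\n".join(m.metadata["text"] for m in result.matches)

    # 2. Generation: have Llama 3.1 (served by Groq) answer from that context only.
    resp = llm.chat.completions.create(
        model="llama-3.1-8b-instant",  # assumed Groq model id
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```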
Vision Transformers have revolutionized the field of computer vision by applying the self-attention mechanism to image recognition tasks. This project aims to enhance ViT's performance by incorporating HOG (Histogram of Oriented Gradients) features, which are known for their ability to capture shape and appearance information through gradient distributions.
Thesis Link: Enhancing Vision Transformers with HOG Features
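One simple way to combine the two feature types, shown here as a sketch under my own assumptions rather than the exact method from the thesis, is to concatenate a HOG descriptor with the ViT's pooled embedding before classification:

```python
# Minimal sketch of HOG + ViT feature fusion; the backbone choice and
# fusion-by-concatenation are illustrative assumptions, not the thesis method.
import torch
import torch.nn as nn
import timm
from skimage.feature import hog


class HogViT(nn.Module):
    def __init__(self, num_classes: int, hog_dim: int):
        super().__init__()
        # ViT backbone with the classification head removed (returns pooled features).
        self.vit = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
        self.head = nn.Linear(self.vit.num_features + hog_dim, num_classes)

    def forward(self, images: torch.Tensor, hog_feats: torch.Tensor) -> torch.Tensor:
        vit_feats = self.vit(images)  # (B, 768) pooled ViT embedding
        return self.head(torch.cat([vit_feats, hog_feats], dim=1))


def hog_descriptor(gray_image) -> torch.Tensor:
    """HOG vector for one grayscale (H, W) array; captures gradient/shape structure."""
    feats = hog(gray_image, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    return torch.tensor(feats, dtype=torch.float32)
```

For 224×224 inputs with these HOG settings, `hog_dim` works out to 13 × 13 × 2 × 2 × 9 = 6084.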
A web-based portal for managing employee data with secure login, full CRUD operations, and a real-time analytics dashboard. Built with React, Node.js, and PostgreSQL, it improves operational efficiency and boosts user engagement by 25%.
- Languages: Python, Java, JavaScript (React), SQL
- Tech Stack: React, Node.js, Express.js, MongoDB, PostgreSQL
- Cloud/DevOps: AWS (Certified), Docker, Kubernetes, Terraform
- AI/Robotics: LLMs, Vision Transformers, Sensor Fusion, Event-based Cameras


