🌐 Browse My Profile | 📅 Book a Strategy Call
I love helping people build trust in their AI, making sure it is transparent, explainable, and backed by real evidence.
- Explainable AI (XAI) Strategy: Turning "black box" models into transparent systems with human-readable outputs.
- Scientific Validation: Designing peer-reviewed validation frameworks to prove your tool works.
- Algorithmic Auditing: Forensic testing of algorithms to ensure fair results across user populations.
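A flavor of the explainability work: one common way to make a "black box" model's output human-readable is occlusion-based attribution, i.e. replacing each feature with a baseline value and measuring how the score moves. The sketch below is a minimal, dependency-free illustration; `black_box`, the feature names, and the baseline are hypothetical stand-ins, not a real client model.

```python
def attribute(score_fn, instance, baseline):
    """Per-feature score deltas: how much each feature moved the output
    relative to a baseline (occlusion-based attribution)."""
    full = score_fn(instance)
    deltas = {}
    for key in instance:
        occluded = dict(instance)
        occluded[key] = baseline[key]  # "turn off" one feature at a time
        deltas[key] = full - score_fn(occluded)
    return deltas

# Stand-in "black box": in practice this would be any opaque model's
# predict function, queried without access to its internals.
def black_box(x):
    return 3 * x["income"] + 2 * x["tenure"] - x["debt"]

instance = {"income": 1, "tenure": 2, "debt": 3}
baseline = {"income": 0, "tenure": 0, "debt": 0}

print(attribute(black_box, instance, baseline))
# -> {'income': 3, 'tenure': 4, 'debt': -3}
```

The same occlusion loop works against any scoring function, which is what makes it useful for auditing models you can only query, not inspect.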
See pinned repositories below for code examples.
- Grounded AI Knowledge Base: A "Glass Box" Graph RAG system for verifiable API navigation (Jan 2025).
- Benchmarking Public Data: Statistical framework for detecting bias and score gaps (Aug 2024).
- Neuromuscular Adaptive Controller: Advanced EMG signal extraction toolbox (Jun 2018).
- Neuromuscular Simulation Optimizer: OpenSim plugin that grounds biomechanical predictions in real EMG data (Jun 2014).
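To give a flavor of the score-gap analysis behind the benchmarking repo above: a standard first step is a standardized mean difference (Cohen's d) between two groups' scores. The sketch below uses only the standard library; the group scores are illustrative toy data, not results from the actual framework.

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference between two samples,
    using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

# Hypothetical scores for two user populations
group_a = [78, 82, 85, 80, 79]
group_b = [70, 74, 72, 69, 75]

print(f"score gap (Cohen's d) = {cohens_d(group_a, group_b):.2f}")
```

An effect size like this is scale-free, so the same check applies whether the benchmark scores are percentages, latencies, or model confidences; a real audit would pair it with a significance test and per-subgroup breakdowns.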
- AI/ML: GraphRAG, LangChain, TensorFlow, PyTorch, Vertex AI.
- Data Science: Python (Polars/Pandas), R, SQL, Bayesian Statistics, Psychometrics.
- Deployment: FastAPI, Docker, Streamlit, Firebase, GCP.
Continuous Learning & Experiments (Click to Expand)
🤓 I believe in rigorous, continuous education. This archive contains coursework, forks, and early experiments.
- ResearchGPT: Evaluation of LLM workflows.
- LangChain Experiments: Testing agentic workflows.
- Data Science Toolbox: Archive of early R and data cleaning pipelines.
- TensorFlow: Neural network implementations and testing.
- IBM Watson Studio: Testing enterprise AI platform functionality.
- Coursera Archives: Completed coursework for Johns Hopkins Data Science specialization.