
# Hi, I’m Ishrak Iltut

**Product Builder at AWS Applied AI | Focused on Agentic and LLM Systems**

Python · AWS · LLM · AI Product Management · Open Source


I build reasoning-driven AI systems, focusing on orchestration, evaluation, governance, and explainability.
I believe the best AI PMs do more than write PRDs. They prototype, test, and iterate.


## 🚀 What I Build

| Area | Focus |
| --- | --- |
| 🧩 Agentic Systems | Multi-agent orchestration, planning, simulation |
| 📊 Evaluation and Reliability | LLM trust scoring, hallucination detection, reasoning metrics |
| 🧠 Governance and Feedback | Bias audits, HITL loops, model transparency |
| 🧰 PM Frameworks | Reusable templates and metrics guides for AI teams |
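The evaluation-and-reliability focus above can be sketched as a tiny mock trust scorer. The function name, signal choices, and weights here are purely illustrative assumptions, not code from any repository listed on this profile:

```python
# Minimal mock trust scorer: blends illustrative accuracy and coherence
# signals, then penalizes by hallucination rate. All names and weights
# are hypothetical examples.

def mock_trust_score(accuracy: float, coherence: float, hallucination_rate: float) -> float:
    """Weighted blend of quality signals, discounted by hallucination rate."""
    for v in (accuracy, coherence, hallucination_rate):
        if not 0.0 <= v <= 1.0:
            raise ValueError("all signals must be in [0, 1]")
    quality = 0.6 * accuracy + 0.4 * coherence
    return round(quality * (1.0 - hallucination_rate), 3)

print(mock_trust_score(0.9, 0.8, 0.1))  # → 0.774
```

In practice a scorer like this would be fed by model-graded or reference-based checks; the point is that a single bounded score makes reliability trackable as a product metric.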

## 💻 Featured Projects

| Repository | Description |
| --- | --- |
| `agentic-reasoning-lab` | Multi-agent orchestration and RAG demo with a reasoning loop. |
| `llm-evaluation-playground` | Mock scoring system for LLM accuracy, coherence, and hallucination. |
| `feedback-loop-simulator` | Simulated human-in-the-loop model feedback cycle. |
| `data-lineage-demo` | Tracks data flow and governance audit logs in AI systems. |
| `model-governance-kit` | Simulates bias, fairness, and compliance checks for models. |
| `agent-observability-demo` | Logs and traces multi-agent interactions in text. |
| `sandbox-orchestrator` | Modular orchestration of multiple reasoning agents. |
| `ai-simulation-framework` | Scenario simulation and decision-evaluation sandbox. |
| `ai-pm-templates` | PRD, prompt, and evaluation templates for AI PMs. |
| `ai-metrics-guide` | Reference for key reasoning and reliability metrics for PMs. |
| `hallucination-detection-lab` | Simple playground for detecting potential hallucinations in model outputs using mock factual checks. |
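The "mock factual checks" idea behind `hallucination-detection-lab` can be illustrated in a few lines: flag output sentences that don't match a small known-facts set. This is a hypothetical sketch in the spirit of that project, not its actual code, and the facts set is a toy stand-in for a real knowledge source:

```python
# Illustrative mock factual check: treat any sentence not found in a
# small known-facts set as a potential hallucination. Hypothetical
# example; real checks would use retrieval or claim verification.

KNOWN_FACTS = {
    "paris is the capital of france",
    "water boils at 100 c at sea level",
}

def flag_potential_hallucinations(output: str) -> list[str]:
    """Return sentences that match no known fact (possible hallucinations)."""
    sentences = [s.strip().lower() for s in output.split(".") if s.strip()]
    return [s for s in sentences if s not in KNOWN_FACTS]

claims = "Paris is the capital of France. The moon is made of cheese."
print(flag_potential_hallucinations(claims))  # → ['the moon is made of cheese']
```

Exact-match lookup is obviously too brittle for real outputs; the mock version just makes the detection loop cheap to demo and test.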

## ⚙️ Tech and Tools

Python · AWS · LangChain · Bedrock · Kendra · OpenAI · PyTorch · FAISS
GitHub · Markdown · AI Evaluation · Agent Orchestration


## 🌍 Connect

- 📫 [linkedin.com/in/iltutishrak](https://www.linkedin.com/in/iltutishrak)
- 🧰 [github.com/iltutishrak](https://github.com/iltutishrak)

