
Conversation

@jeremiaswerner jeremiaswerner commented Aug 6, 2025

This tutorial showcases batch inferencing with Serverless GPUs. The example uses an LLM to extract temperature and duration from cookbook recipes.
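For context, a minimal sketch of what such a batch-extraction loop might look like, assuming an OpenAI-compatible chat endpoint; the endpoint URL, model name, prompt, and sample recipes below are illustrative assumptions, not taken from the tutorial itself:

```python
# Hypothetical sketch: batch extraction of cooking temperature and duration
# from recipe texts via an OpenAI-compatible chat completions endpoint.
# ENDPOINT, MODEL, and the recipes list are placeholders for illustration.
import json
import requests

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # placeholder serving URL
MODEL = "meta-llama/Llama-3.1-8B-Instruct"              # placeholder model name

PROMPT = (
    "Extract the oven temperature and total cooking duration from the recipe "
    "below. Answer as JSON with keys 'temperature' and 'duration'.\n\n{recipe}"
)

recipes = [
    "Preheat the oven to 180C and bake the bread for 45 minutes.",
    "Roast the vegetables at 200C for about 30 minutes.",
]

for recipe in recipes:
    response = requests.post(
        ENDPOINT,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": PROMPT.format(recipe=recipe)}],
            "temperature": 0,  # deterministic output suits extraction tasks
        },
        timeout=60,
    )
    response.raise_for_status()
    answer = response.json()["choices"][0]["message"]["content"]
    print(json.dumps({"recipe": recipe, "extraction": answer}))
```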

@jeremiaswerner jeremiaswerner changed the title from "add the inferencing tutorial" to "tutorual: batch inferencing to extract temperature and duration of cookbooks using LLM" Aug 6, 2025
@jeremiaswerner jeremiaswerner changed the title from "tutorual: batch inferencing to extract temperature and duration of cookbooks using LLM" to "tutorial: batch inferencing to extract temperature and duration of cookbooks using LLM" Aug 6, 2025
@jeremiaswerner jeremiaswerner requested a review from reggeenr August 6, 2025 09:10
@jeremiaswerner jeremiaswerner self-assigned this Aug 6, 2025
@jeremiaswerner jeremiaswerner changed the title from "tutorial: batch inferencing to extract temperature and duration of cookbooks using LLM" to "tutorial: batch inferencing to extract temperature and duration of cookbook recipes using a LLM" Aug 6, 2025

@reggeenr reggeenr left a comment

Really cool! Thanks for contributing this sample :) LGTM

@jeremiaswerner jeremiaswerner merged commit d148bf1 into IBM:main Aug 7, 2025
1 of 2 checks passed