Running a local LLM on unsupported hardware #32
tanojericko started this conversation in Show and tell
Replies: 0 comments
Here's an iPhone 11 Pro running a local LLM. This is the model I tried: https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/blob/main/tinyllama-1.1b-chat-v1.0.Q8_0.gguf. To make it work, rename the file to "Llama-3.2-3B-Instruct-Q4_K_M"; this tricks Pythonista lab into thinking it is loading the model it expects.
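If you'd rather script the download and rename than do it by hand, here's a minimal Python sketch. The URL is the direct-download (`/resolve/`) form of the page linked above; the model folder path and keeping the `.gguf` extension are assumptions, since the exact location and filename the app checks may differ.

```python
# Minimal sketch: fetch TinyLlama and save it under the filename the app
# expects, so it loads TinyLlama in place of Llama-3.2-3B-Instruct-Q4_K_M.
import os
import urllib.request

# Direct-download form of the file linked above.
URL = ("https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF"
       "/resolve/main/tinyllama-1.1b-chat-v1.0.Q8_0.gguf")

# Assumption: the app reads models from ~/Documents/models; adjust to
# wherever your copy actually stores its model files.
models_dir = os.path.expanduser("~/Documents/models")
os.makedirs(models_dir, exist_ok=True)

# Assumption: the .gguf extension should be kept when renaming.
target = os.path.join(models_dir, "Llama-3.2-3B-Instruct-Q4_K_M.gguf")

urllib.request.urlretrieve(URL, target)
print(f"Saved {os.path.getsize(target) / 1e6:.0f} MB to {target}")
```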
491501606_23920465454225645_3689332641550298092_n.mp4