Conversation

@ishaan-jaff

This PR adds support for models from all of the providers mentioned above, using https://github.com/BerriAI/litellm/

Here's a sample of how it's used:

import os
from litellm import completion, acompletion

## set ENV variables
# ENV variables can be set in a .env file, too. Example in .env.example
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"role": "user", "content": "Hello, how are you?"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# llama2 call (hosted on replicate)
model_name = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"
response = completion(model=model_name, messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

# anthropic call
response = completion(model="claude-instant-1", messages=messages)

@ishaan-jaff
Author

@riverscuomo can you take a look at this PR when you get a chance?

Happy to add docs/tests if this initial commit looks good.

@riverscuomo
Owner

This is cool, thank you.
Let me ask a few questions to help me understand the purpose of the PR:

  • Are you thinking that we may end up using some of the other services, like Llama2?
  • Should I use litellm.completion for my existing Replicate calls too?

@ishaan-jaff
Copy link
Author

@riverscuomo

Are you thinking that we may end up using some of the other services, like Llama2?

This is your call: litellm supports all of the listed providers and models, so you can start using any service you'd like in the future.

Should I use litellm.completion for my existing Replicate calls too?

Yes, the docs on this are here: https://docs.litellm.ai/docs/providers/replicate
