I'm trying to stream a chat completion (returning tokens to the customer in a chat UI), and I need to go through a proxy. The problem is that the response is not streamed when the proxy is enabled: all chunks arrive only after the whole completion has finished, so there is no incremental "typing" effect.
```python
import asyncio
from typing import Optional

from httpx import AsyncClient
from openai import AsyncStream, AsyncOpenAI
from openai.types.chat import ChatCompletionChunk


async def get_openai_stream_agenerator() -> AsyncStream[ChatCompletionChunk]:
    client = AsyncOpenAI(
        http_client=AsyncClient(
            # When I comment out these two lines, streaming works fine.
            proxy="http://localhost:8080",  # mitmproxy with its basic configuration
            verify=False,
        )
    )
    messages = [
        {"role": "system", "content": "Return details about asking person"},
        {"role": "user", "content": "Iga Świątek"},
    ]
    response: AsyncStream[ChatCompletionChunk] = await client.chat.completions.create(
        model='gpt-4-0613',
        messages=messages,
        stream=True,
    )  # type: ignore
    return response


def get_delta_argument(chunk: ChatCompletionChunk) -> Optional[str]:
    if len(chunk.choices) > 0:
        return chunk.choices[0].delta.content
    else:
        return None


async def get_response_generator() -> None:
    async for it in await get_openai_stream_agenerator():
        value = get_delta_argument(it)
        if value:
            print(value, end="", flush=True)  # flush so tokens appear as they arrive
    print()


if __name__ == '__main__':
    asyncio.run(get_response_generator())
```

OS: macOS
Python version: 3.11.7
Library versions: openai 1.12.0, httpx 0.26.0
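
To narrow down where the buffering happens, here is a minimal sketch (not part of the original repro) that streams a plain httpx request through the same proxy, without the openai SDK. The httpbin.org endpoint is just a convenient example of a chunked response; the assumption is that if chunks also arrive in one burst here, the proxy is buffering, not the SDK:

```python
import asyncio

import httpx


async def probe_proxy_streaming() -> None:
    # Same proxy settings as in the repro above.
    async with httpx.AsyncClient(proxy="http://localhost:8080", verify=False) as client:
        # httpbin's /stream endpoint emits one JSON object per line, incrementally.
        async with client.stream("GET", "https://httpbin.org/stream/5") as response:
            async for chunk in response.aiter_text():
                print(chunk, end="", flush=True)


if __name__ == "__main__":
    asyncio.run(probe_proxy_streaming())
```

If this probe also arrives in one burst, mitmproxy is likely buffering the body so it can inspect it (its default behavior); if I read the mitmproxy docs correctly, its stream_large_bodies option (e.g. starting it with --set stream_large_bodies=1) makes it pass bodies through unbuffered, which should restore incremental delivery.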