From 7671cd8df73cd781224cecfb497be75eb307447a Mon Sep 17 00:00:00 2001
From: Bryant
Date: Wed, 20 Nov 2024 01:47:35 +1100
Subject: [PATCH 1/2] add latest realtime voice limitations

---
 fern/openai-realtime.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fern/openai-realtime.mdx b/fern/openai-realtime.mdx
index e82ac8bee..2e77f03c3 100644
--- a/fern/openai-realtime.mdx
+++ b/fern/openai-realtime.mdx
@@ -5,7 +5,7 @@ slug: openai-realtime
 ---
 
 <Note>
-  The Realtime API is currently in beta, and not recommended for production use by OpenAI. We're excited to have you try this new feature and welcome your [feedback](https://discord.com/invite/pUFNcf2WmH) as we continue to refine and improve the experience.
+  The Realtime API is currently in beta, and not recommended for production use by OpenAI. Advanced functionality is currently limited with the latest voices Ash, Ballad, Coral, Sage and Verse. We're excited to have you try this new feature and welcome your [feedback](https://discord.com/invite/pUFNcf2WmH) as we continue to refine and improve the experience.
 </Note>
 
 OpenAI’s Realtime API enables developers to use a native speech-to-speech model. Unlike other Vapi configurations which orchestrate a transcriber, model and voice API to simulate speech-to-speech, OpenAI’s Realtime API natively processes audio in and audio out.

From d4ec5af3e4f707234c7eaaeb0adce6b066985ae Mon Sep 17 00:00:00 2001
From: Bryant
Date: Wed, 20 Nov 2024 14:43:23 +1100
Subject: [PATCH 2/2] move the note into the bottom bullet points

---
 fern/openai-realtime.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fern/openai-realtime.mdx b/fern/openai-realtime.mdx
index 2e77f03c3..3e0e66218 100644
--- a/fern/openai-realtime.mdx
+++ b/fern/openai-realtime.mdx
@@ -5,12 +5,12 @@ slug: openai-realtime
 ---
 
 <Note>
-  The Realtime API is currently in beta, and not recommended for production use by OpenAI. Advanced functionality is currently limited with the latest voices Ash, Ballad, Coral, Sage and Verse. We're excited to have you try this new feature and welcome your [feedback](https://discord.com/invite/pUFNcf2WmH) as we continue to refine and improve the experience.
+  The Realtime API is currently in beta, and not recommended for production use by OpenAI. We're excited to have you try this new feature and welcome your [feedback](https://discord.com/invite/pUFNcf2WmH) as we continue to refine and improve the experience.
 </Note>
 
 OpenAI’s Realtime API enables developers to use a native speech-to-speech model. Unlike other Vapi configurations which orchestrate a transcriber, model and voice API to simulate speech-to-speech, OpenAI’s Realtime API natively processes audio in and audio out.
 
 To start using it with your Vapi assistants, select `gpt-4o-realtime-preview-2024-10-01` as your model.
 
 - Please note that only OpenAI voices may be selected while using this model. The voice selection will not act as a TTS (text-to-speech) model, but rather as the voice used within the speech-to-speech model.
-- Also note that we don’t currently support Knowledge Bases with the Realtime API.
+- Also note that we don’t currently support Knowledge Bases with the Realtime API. Furthermore, advanced functionality is currently limited with the latest voices Ash, Ballad, Coral, Sage and Verse.
 - Lastly, note that our Realtime integration still retains the rest of Vapi's orchestration layer such as the endpointing and interruption models to enable a reliable conversational flow.
\ No newline at end of file
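For context, the setup the updated docs describe (selecting `gpt-4o-realtime-preview-2024-10-01` as the assistant's model, with an OpenAI voice) might look roughly like the following when creating an assistant through the Vapi API. This is a minimal sketch, not part of the patch: the create-assistant endpoint, the `model`/`voice` field names, and the `alloy` voice ID are assumptions based on Vapi's general API shape and should be checked against the current API reference.

```typescript
// Hypothetical sketch of creating a Vapi assistant that uses the OpenAI Realtime model.
// Assumes POST https://api.vapi.ai/assistant and the model/voice field names from
// Vapi's API reference; verify names and values against the current docs.

const VAPI_API_KEY = process.env.VAPI_API_KEY; // private API key from the Vapi dashboard

async function createRealtimeAssistant() {
  const response = await fetch("https://api.vapi.ai/assistant", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "Realtime demo assistant",
      model: {
        provider: "openai",
        // The speech-to-speech Realtime model named in the docs above.
        model: "gpt-4o-realtime-preview-2024-10-01",
        messages: [
          { role: "system", content: "You are a helpful voice assistant." },
        ],
      },
      voice: {
        // Per the docs, only OpenAI voices can be used with this model; the voice is
        // applied inside the speech-to-speech model rather than as a separate TTS step.
        provider: "openai",
        voiceId: "alloy", // assumed example voice ID
      },
    }),
  });

  if (!response.ok) {
    throw new Error(`Vapi request failed: ${response.status} ${await response.text()}`);
  }
  return response.json();
}

createRealtimeAssistant().then((assistant) => console.log(assistant.id));
```

The same configuration should be reachable from the dashboard by picking the Realtime model and an OpenAI voice on the assistant; the API call above is just one way to express it.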