Now You Can Have Voice Chats With ChatGPT


On Monday, September 25, OpenAI announced a new feature that lets you have a voice chat with ChatGPT. Users will be able to speak aloud to ChatGPT and hear the chatbot talk back.

OpenAI shared a demo showing the new feature in action. In it, a user asks ChatGPT to create a story about “the super-duper sunflower hedgehog named Larry.” The chatbot then tells the story aloud in a human-sounding voice and answers follow-up questions, such as “What was his house like?” and “Who is his best friend?”

The new voice feature is similar to existing voice assistants such as Amazon’s Alexa and Apple’s Siri.

How Can You Enable the Voice and Image Features?

OpenAI is rolling out the new features to Plus and Enterprise users over the next two weeks. Voice will be available on iOS and Android, while images will be available on all platforms.

To enable the voice feature in the mobile app, go to Settings, tap New Features, and opt in to voice conversations. Then tap the headphone button in the top-right corner of the home screen and choose your preferred voice from five options.

The voice capability is powered by a new text-to-speech model that can generate human-like audio from text and just a few seconds of sample speech. The company says it collaborated with professional voice actors to create the five voices. It also uses Whisper, its open-source speech-recognition system, to transcribe spoken words into text.

Chat About the Images

You can now show ChatGPT one or more images and ask questions about them.

To get started, tap the photo button to capture or choose an image. If you’re on iOS or Android, tap the plus button first. You can also discuss multiple images or use the drawing tool in the app to direct the assistant’s attention to specific parts of an image.

Image understanding is powered by multimodal GPT-3.5 and GPT-4. These models apply their language-reasoning skills to a wide range of images, such as photographs, screenshots, and documents containing both text and images.

Who Will Win the AI Race?

Just last week, Microsoft presented its unified AI assistant and Google integrated Bard into its apps. Amazon, for its part, struck a deal with Anthropic. It is striking how these updates are all arriving at the same time. The race continues, and new features keep stacking up for users to enjoy and put to work in their daily lives.

Tell us your thoughts on the latest features from OpenAI, and whether you will be using them in your daily tasks.
