
Google’s AI just got ears


AI chatbots are already capable of “seeing” the world through images and video. Now, Google has announced speech-to-text capabilities as part of its latest update to Gemini Pro. With Gemini 1.5 Pro, the chatbot can “hear” audio files uploaded into its system and extract the text information from them.

The company has made this version of the LLM available as a public preview on its Vertex AI development platform, allowing more enterprise-focused users to experiment with the feature. When the model was first announced in February, it was offered only to a limited group of developers and enterprise customers in a more private rollout.


1. Breaking down + understanding a long video

I uploaded the entire NBA dunk contest from last night and asked which dunk had the highest score.

Gemini 1.5 was incredibly able to find the specific perfect 50 dunk and details from just its long context video understanding! pic.twitter.com/01iUfqfiAO

— Rowan Cheung (@rowancheung) February 18, 2024

Google shared the details of the update at its Cloud Next conference, currently taking place in Las Vegas. Having previously called the Gemini Ultra LLM that powers its Gemini Advanced chatbot the most powerful model in the Gemini family, Google now describes Gemini 1.5 Pro as its most capable generative model. The company added that this version is better at learning new tasks without additional fine-tuning of the model.

Gemini 1.5 Pro is multimodal in that it can transcribe many types of audio into text, including TV shows, movies, radio broadcasts, and conference call recordings. It is also multilingual, able to process audio in several different languages. The LLM may be able to create transcripts from videos as well, though the quality of those transcripts may be unreliable, as noted by TechCrunch.

When first announced, Google explained that Gemini 1.5 Pro used a token system to process raw data. A million tokens equate to approximately 700,000 words or 30,000 lines of code. In media form, it equals an hour of video or around 11 hours of audio.
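Those ratios make for simple back-of-the-envelope budgeting. As an illustrative sketch (the function names and the assumption of linear scaling are mine, not part of any Google API):

```python
# Rough token-budget arithmetic from the ratios Google quoted:
# 1,000,000 tokens ≈ 700,000 words ≈ 30,000 lines of code
# ≈ 1 hour of video ≈ 11 hours of audio.
TOKENS_PER_MILLION = 1_000_000


def words_to_tokens(words: int) -> int:
    """Estimate tokens for a text of the given word count."""
    return round(words * TOKENS_PER_MILLION / 700_000)


def audio_hours_to_tokens(hours: float) -> int:
    """Estimate tokens consumed by an audio recording of the given length."""
    return round(hours * TOKENS_PER_MILLION / 11)


def code_lines_to_tokens(lines: int) -> int:
    """Estimate tokens for a codebase of the given line count."""
    return round(lines * TOKENS_PER_MILLION / 30_000)
```

Under this estimate, a 30-minute podcast would consume roughly 45,000 tokens, leaving the vast majority of the million-token context window free.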

Some private preview demos of Gemini 1.5 Pro have shown how the LLM can find specific moments in a video. For example, AI enthusiast Rowan Cheung, who got early access, detailed how his demo found an exact action shot in a sports contest and summarized the event, as seen in the tweet embedded above.

Google noted that other early adopters, including United Wholesale Mortgage, TBS, and Replit, are pursuing more enterprise-focused use cases, such as mortgage underwriting, automating metadata tagging, and generating, explaining, and updating code.

Fionna Agomuoh
Fionna Agomuoh is a Computing Writer at Digital Trends. She covers a range of topics in the computing space, including…