
Gemini AI is making robots in the office far more useful

An Everyday Robot navigating through an office. (Image: Everyday Robot)

Lost in an unfamiliar office building, big-box store, or warehouse? Just ask the nearest robot for directions.

A team of Google researchers combined the powers of natural language processing and computer vision to develop a novel means of robotic navigation as part of a new study published Wednesday.


Essentially, the team set out to teach a robot (in this case, an Everyday Robot) how to navigate an indoor space using natural language prompts and visual inputs. Robotic navigation used to require researchers not only to map out the environment ahead of time but also to provide specific physical coordinates within the space to guide the machine. Recent advances in what's known as vision-language navigation (VLN) have enabled users to simply give robots natural language commands, like "go to the workbench." Google's researchers take that concept a step further by incorporating multimodal capabilities, so the robot can accept natural language and image instructions at the same time.

For example, a user in a warehouse could show the robot an item and ask, "What shelf does this go on?" Leveraging the power of Gemini 1.5 Pro, the AI interprets both the spoken question and the visual information to formulate not just a response but also a navigation path that leads the user to the correct spot on the warehouse floor. The robots were also tested with commands like "Take me to the conference room with the double doors," "Where can I borrow some hand sanitizer?" and "I want to store something out of sight from public eyes. Where should I go?"
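As a rough illustration of that kind of multimodal query (not the researchers' actual robot stack, which isn't public), here is a minimal sketch using Google's public google-generativeai Python SDK. The API key, model name, image file, and prompt are all assumptions for the example:

```python
# Hypothetical sketch: a camera frame plus a spoken-style question sent to
# Gemini 1.5 Pro through the public google-generativeai SDK. The key, image
# path, and prompt are illustrative stand-ins.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumption: a valid key is on hand
model = genai.GenerativeModel("gemini-1.5-pro")

frame = Image.open("robot_camera_frame.jpg")  # illustrative image file
response = model.generate_content([
    frame,
    "What shelf does this item go on? Reply with the shelf label.",
])
print(response.text)
```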

Or, in an Instagram Reel shared by the team, a researcher activates the system with an "OK robot" before asking to be led somewhere he can draw. The robot responds with "Give me a minute. Thinking with Gemini …" before setting off briskly through the 9,000-square-foot DeepMind office in search of a large wall-mounted whiteboard.

To be fair, these trailblazing robots were already familiar with the office space's layout. The team utilized a technique known as "Multimodal Instruction Navigation with demonstration Tours (MINT)." The team first manually guided the robot around the office, pointing out specific areas and features in natural language, though the same effect can be achieved simply by recording a video of the space on a smartphone. From there, the AI generates a topological graph and works to match what its cameras are seeing with the "goal frame" from the demonstration video.
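One way to picture that tour-to-graph step is the toy sketch below. The pixel-flattening "encoder," the subsampling rate, and the cosine-similarity matching are all stand-ins for whatever Google's pipeline actually does; a real system would use a learned visual model:

```python
# Toy sketch of the "demonstration tour -> topological graph" idea.
import numpy as np

def embed(frame: np.ndarray) -> np.ndarray:
    """Placeholder visual encoder; normalizes flattened pixels to unit length."""
    v = frame.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def build_tour_graph(tour_frames, step=30):
    """Subsample the tour video into nodes; consecutive nodes share an edge."""
    nodes = [embed(f) for f in tour_frames[::step]]
    edges = [(i, i + 1) for i in range(len(nodes) - 1)]
    return nodes, edges

def localize(camera_frame, nodes):
    """Match the live camera view to the closest tour node (the "goal frame")."""
    q = embed(camera_frame)
    sims = [float(q @ n) for n in nodes]
    return int(np.argmax(sims)), max(sims)

rng = np.random.default_rng(0)
tour = [rng.random((8, 8, 3)) for _ in range(90)]  # stand-in tour video
nodes, edges = build_tour_graph(tour)
idx, score = localize(tour[45], nodes)
print(f"matched tour node {idx} (similarity {score:.2f})")
```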

The team then employs a hierarchical Vision-Language-Action (VLA) navigation policy, "combining the environment understanding and common sense reasoning," to instruct the AI on how to translate user requests into navigational actions.
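The article doesn't spell out the low-level half of that policy, but the division of labor might look like the sketch below: the high level (Gemini, in the study) resolves a request to a goal node on the tour graph, and a simple planner then walks the graph edge by edge. The breadth-first search here is an assumption for illustration, not the published method:

```python
# Hedged sketch of the hierarchical split: a high-level reasoner picks a
# goal node on the tour graph; a low-level planner walks the graph. BFS is
# an illustrative stand-in for the actual (unpublished) navigation policy.
from collections import deque

def plan_path(edges, start, goal):
    """Shortest hop-path over the topological tour graph via BFS."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:  # walk predecessors back to the start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                frontier.append(nxt)
    return None  # goal unreachable

# Example: a five-node tour graph; pretend the high level has resolved
# "take me somewhere I can draw" to node 4 (the whiteboard).
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(plan_path(edges, start=0, goal=4))  # -> [0, 1, 2, 3, 4]
```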

The results were very successful, with the robots achieving "86 percent and 90 percent end-to-end success rates on previously infeasible navigation tasks involving complex reasoning and multimodal user instructions in a large real world environment," the researchers wrote.

However, they recognize there is still room for improvement, pointing out that the robot cannot (yet) autonomously perform its own demonstration tour, and noting that the AI's ungainly inference time (how long it takes to formulate a response) of 10 to 30 seconds makes interacting with the system a study in patience.

Andrew Tarantola