
I’ve seen the (distant) future of AI web search – here’s where it’s amazing, and where it struggles

The speed with which artificial intelligence (AI) has moved from theoretical promise to real-world, consumer-ready products is astonishing. For years, right up until OpenAI’s ChatGPT broke onto the scene a couple of months ago, companies from titans like Microsoft and Google down to myriad startups espoused the benefits of AI with little practical application to back it up. Everyone knew AI was a thing, but few actually used it.

Just a handful of weeks after announcing an investment in OpenAI, Microsoft launched a publicly-accessible beta version of its Bing search engine and Edge browser powered by the same technology that has made ChatGPT the talk of the town. ChatGPT itself has been a fun thing to play with, but launching something far more powerful and fully integrated into consumer products like Bing and Edge is an entirely new level of exposure for this tech. The significance of this step cannot be overstated.


ChatGPT felt like a toy; having the same AI power applied to a constantly-updated search database changes the game.

Microsoft was kind enough to provide me with complete access to the new AI “copilot” in Bing. It only takes a few minutes of real-world use to understand why Microsoft (and seemingly every other tech company) is excited about AI. Asking the new Bing open-ended questions about planning a vacation, setting up a week of meal plans, or researching a new TV purchase, and having the AI guide you to something useful, is powerful. Any time you have a question that would normally require pulling information from multiple sources, the new Bing immediately streamlines the process and saves you time.

Let AI do the work for you

Not everyone wants to show up to Google or Bing ready to roll up their sleeves and get into a multi-hour research session with lots of open tabs, bookmarks, and copious amounts of reading. Sometimes you just want to explore a bit, and have the information delivered to you — AI handles that beautifully. Ask one multifaceted question and it pulls the information from across the internet, aggregates it, and serves it to you in one text box. If it’s not quite right, you can ask follow-up questions contextually and have it generate more finely-tuned results.
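To picture what that looks like under the hood, here is a minimal Python sketch of a conversational, multi-source answer loop. It is purely illustrative: search_web() and summarize() are hypothetical stand-ins invented for this example, not Bing APIs, and the real system is vastly more sophisticated.

# Hypothetical sketch: each follow-up question is answered with the prior
# turns as context, and the response is aggregated from several sources.

def search_web(query):
    # Stand-in for a real search backend; returns a few source snippets.
    return [f"snippet about '{query}' from source {i}" for i in range(1, 4)]

def summarize(question, sources, history):
    # Stand-in for a generative model that fuses the sources and the
    # recent conversation into one text-box answer.
    context = " | ".join(history[-4:])
    return (f"Answer to '{question}' built from {len(sources)} sources "
            f"(context: {context or 'none'})")

def chat_session(questions):
    history = []
    for question in questions:
        sources = search_web(question)                  # pull info from across the web
        answer = summarize(question, sources, history)  # aggregate into one response
        history += [question, answer]                   # follow-ups stay contextual
        print(answer)

chat_session([
    "Plan a week of vegetarian dinners",
    "Swap Thursday for something faster to cook",       # follow-up refines the result
])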

[Image: Bing AI copilot search result. Information overload, perhaps, but this is a great response.]
[Image: Bing copilot search result.]

The biggest thing Microsoft brings to the table that was unobtainable through ChatGPT is recent information — and that differentiator truly changes the game. Whereas ChatGPT draws on a fixed data set and references a snapshot of the internet that is a couple of years old, Bing is pulling in data constantly and always has up-to-date information. It’s a search engine that’s crawling the entire internet, after all. I can get restaurant recommendations for a new place that opened last week, information about the new OnePlus 11 that launched yesterday, and up-to-date pricing on flights and hotels. This alone takes this experience from “fun toy” to “truly useful.”
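The difference is easy to see in a toy example. The following Python sketch is an assumption-laden illustration, not how Bing actually works: the cutoff date, documents, and function names are all made up to show why a live index can answer questions a fixed snapshot cannot.

from datetime import date

# Toy document index; in reality this is the continuously updated web crawl.
DOCUMENTS = [
    {"text": "OnePlus 11 launch coverage", "published": date(2023, 2, 7)},
    {"text": "Review of a two-year-old phone", "published": date(2020, 6, 1)},
]

SNAPSHOT_CUTOFF = date(2021, 9, 1)  # assumed training cutoff, for illustration only

def snapshot_model(query):
    # A fixed-snapshot model can only draw on what existed before its cutoff.
    visible = [d["text"] for d in DOCUMENTS if d["published"] <= SNAPSHOT_CUTOFF]
    return visible or ["(no information available)"]

def search_grounded_model(query):
    # A search-backed model retrieves whatever is in the index right now,
    # newest first, and grounds its answer on it.
    ranked = sorted(DOCUMENTS, key=lambda d: d["published"], reverse=True)
    return [d["text"] for d in ranked]

print(snapshot_model("OnePlus 11"))         # misses yesterday's launch
print(search_grounded_model("OnePlus 11"))  # sees it immediately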

Not only does that make the answers you get out of Bing’s AI copilot more relevant, but it also gives the system vastly more information to work with and build off of. When AI is constantly being served new data, it can (theoretically) improve at a faster rate. And that gets even better as it learns how people interact with it.

But at this point, conversational AI is far from applicable to every kind of search. Or frankly, most searches. And when it doesn’t work, it really falls flat.

This beats ChatGPT … but still has its weaknesses

Shortcomings we all experience when using ChatGPT aren’t so easily dismissed or forgiven when a company the size of Microsoft puts its name behind the technology and says it’s ready for everyone. And it took only a few minutes of testing the new AI-powered Bing (admittedly, an early beta version) to find similar shortcomings.

It’s counterintuitive, but the Bing copilot works better the less specific your questions are.

Interestingly, the more specific you are with your search, or the more certain you are about the answer you expect, the worse the copilot’s response gets. Counterintuitively, it serves up a more useful response when you give it less information to work with and leave the question open-ended for the AI to fill in the gaps.

When you ask a very specific question, you often get a very flowery answer with tons of unnecessary context and background. The answer you want is typically in there somewhere, within those few paragraphs and several bullet points, and in some cases it’s even set in bold, but it’s easy to see when the AI is trying to cover for its lack of “knowledge” by writing out a verbose response. That extra fluff is often not accurate, either, as the AI pulls from a wider set of sources that are more likely to be out of date or unrelated.

[Image: Bing copilot search result from the week before the Super Bowl, when we certainly do know who’s playing in the game.]

By Microsoft’s own admission, over half of web search queries are “navigational” or “informational” — meaning I have a specific question about how to get from A to B or a knowledge-base query that’s available in a structured data format. When I ask what hours a restaurant is open, or how many square miles the island of Manhattan is, I don’t need a history lesson attached to what is likely a couple-word answer. I just want the answer and to move on.

Knowing when to use AI and when to keep it simple

For this half (or more) of searches, the new AI power offers little added value right now. And in many cases, it’s actually a hindrance. Speaking with Jordi Ribas, VP of Bing, I got some much-needed context on how Microsoft is thinking about integrating AI with its “traditional” search product, and how it is managing the rollout of a powerful new technology to customers when it knows that technology isn’t applicable in every scenario and still has room for improvement.

Sometimes I just want a simple answer, and the AI wants to tell me a story instead.

This debate played out within the Bing team, Ribas said; a strong contingent of people felt that moving the entire Bing interface into a “chat” window was the only way to go, while others were more bearish and preferred that it be a smaller part of the experience. The result, in a very Microsoft approach, is somewhere in the middle.

Getting my hands on a dev build of Edge with the copilot built in, to use outside of a demo environment, I started to understand what they’re going for. The standard Bing interface starts off the same, aside from the fact that you now have a bigger text box that can take up to 1,000 characters. If the AI can generate a detailed response to your query, you get a panel on the right side with that response; it doesn’t take over the experience, it complements the standard search results on the left. If you want to dive in with more questions, you can switch to a full-screen, all-chat experience and have a conversation of sorts with the AI. It’s simple to move between the chat and the regular results, scrolling up and down interchangeably.

And in some cases, at least in this early version, you may not get any AI responses at all — just a standard set of blue links. Microsoft is honest about the fact that these AI responses are very computationally intensive and expensive to return (servers aren’t free!) compared to a “regular” search result, so when Bing is confident it can give a satisfactory response using the regular search model rather than the new “Prometheus” (AI) model, it’ll do that. Though in my experience, it doesn’t fall back to that standard model nearly often enough, as I noted above.
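As a rough mental model of that routing decision, here is a short, hypothetical Python sketch. The heuristics and names (looks_navigational, the confidence threshold, the stand-in answer functions) are assumptions made up for illustration; Microsoft has not published how the triage between the classic ranker and Prometheus actually works.

# Hypothetical sketch of cost-aware routing: the cheap classic search path
# handles navigational/informational queries it is confident about, and only
# the expensive generative model handles open-ended ones.

NAVIGATIONAL_HINTS = ("hours", "address", "login", "how many", "when is")

def looks_navigational(query):
    q = query.lower()
    return any(hint in q for hint in NAVIGATIONAL_HINTS)

def classic_search(query):
    return f"[blue links + instant answer for: {query}]"   # cheap path

def prometheus_answer(query):
    return f"[long, aggregated AI response for: {query}]"  # expensive path

def route(query, classic_confidence):
    # Stay on the classic model when the query is navigational or the
    # ranker is already confident it can satisfy it.
    if looks_navigational(query) or classic_confidence > 0.8:
        return classic_search(query)
    return prometheus_answer(query)

print(route("what hours is the restaurant open", classic_confidence=0.95))
print(route("plan a 5-day trip to Lisbon on a budget", classic_confidence=0.3))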

[Image: Bing copilot search result showing multiple sources, nicely aggregated and displayed with context.]

It’s clear that what Microsoft has been able to accomplish here is beyond just sprinkling a little AI on top of Bing. But it’s not that far beyond it, and it doesn’t take long to figure out that with this current level of technical sophistication, we don’t want AI in every aspect of our search experience.

The core issue with AI-powered Bing is that the queries that demo well aren’t typically what you’re doing most of the time you sit down at a search engine. For the majority of my searches, I won’t be using the AI at all, and at this point, the “intelligence” gets in my way just as often as it helps. That takes some of the edge off the amazement of this announcement.

But I am still incredibly excited about these developments in AI. Even if you recognize that you won’t be using the AI for every search query, the times you do use it can be an extraordinary experience.

This is the future of web search, there’s no doubt about it. It just isn’t necessarily the near future.

Andrew Martonik
Andrew Martonik is the Editor in Chief at Digital Trends, leading a diverse team of authoritative tech journalists.