
Google Removes Controversial ‘What People Suggest’ AI Health Feature After Safety Concerns

Synopsis: Google has removed its AI search feature “What People Suggest,” which organized crowdsourced health advice from online discussions. The company said the move is part of a broader simplification of its search page. The removal comes as Google faces growing scrutiny over the accuracy and safety of AI-generated health information in search results.

Google removes the What People Suggest AI health search feature amid safety concerns about crowdsourced medical advice.

Google has quietly removed a controversial artificial intelligence-powered search feature that surfaced crowdsourced health advice from internet users, a move that comes amid growing scrutiny over the reliability of AI-generated medical information.

The feature, known as “What People Suggest,” had been designed to aggregate experiences and advice shared by people discussing health conditions online. While Google had promoted it as an example of how AI could improve access to health insights, critics raised concerns about the accuracy and safety of medical guidance sourced from non-experts.

A Google spokesperson confirmed the removal, describing it as part of a broader effort to simplify the search page rather than a direct response to safety issues. However, the decision arrives as the company faces increased pressure over how its AI tools present medical information to billions of users worldwide.

A feature built on crowdsourced health experiences

“What People Suggest” was introduced as a way to surface perspectives from individuals who had experienced similar medical conditions. Using artificial intelligence, Google’s search system would analyze discussions across the internet and organize them into themes, offering users a summarized view of advice or experiences shared by others.

The concept aimed to make it easier for people researching symptoms or health conditions to hear from those who had faced similar situations. Rather than presenting formal medical guidance, the feature focused on personal experiences discussed in online forums and communities.

At a New York event called “The Check Up” last year, Google highlighted the feature as part of its broader push to integrate AI into health-related search results.

Karen DeSalvo, who was serving as Google’s chief health officer at the time, promoted the initiative during the event. She said that people often value hearing from others with similar lived experiences and that the feature would help users find those insights more easily.

The idea was to use AI to organize diverse perspectives from online discussions into clear themes, making them easier for users to understand while researching medical issues.

Removal comes amid scrutiny over AI health information

Despite Google’s framing of the feature as a way to share lived experiences, critics questioned whether AI-organized health advice from non-professionals could mislead users seeking medical guidance.

Concerns intensified earlier this year after a report from The Guardian highlighted potential risks associated with another Google feature known as AI Overviews.

These AI-generated summaries appear above traditional search results and are shown to roughly two billion users each month, making them one of the most widely viewed features on the internet.

The report warned that inaccurate or misleading medical information generated by AI summaries could put users at risk if they relied on it for health-related decisions.

At the time, Google responded by emphasizing that AI Overviews link to reputable sources and include reminders encouraging users to consult qualified professionals for medical advice.

However, the criticism highlighted the broader challenge facing technology companies as they integrate generative AI into products that millions of people rely on for sensitive information.

Google says the removal is part of a broader redesign

According to a Google spokesperson, the removal of “What People Suggest” was part of a “broader simplification” of the search page rather than a direct reaction to criticism over health misinformation.

The company did not indicate that safety concerns played a role in the decision.

Still, the timing of the change coincides with ongoing debate about how AI-generated information should be presented in areas such as medicine, where inaccurate guidance could have real-world consequences.

The feature’s disappearance also suggests that Google may be reassessing how it balances user-generated experiences with authoritative medical information within its search ecosystem.

AI Overviews remain a central part of Google Search

Although “What People Suggest” has been removed, Google continues to expand its use of artificial intelligence within its search platform.

AI Overviews remain a major component of the company’s strategy, providing summarized answers generated by AI models directly at the top of search results.

Following the criticism reported earlier this year, Google removed AI Overviews for some medical queries, though not for all of them.

The company had previously signaled its intention to expand AI-generated medical summaries, highlighting the technology as a tool to help people better understand complex health topics.

The debate around these features reflects a broader tension within the tech industry: AI systems can help organize vast amounts of information quickly, but they also raise questions about accuracy, accountability, and the potential consequences when users rely on automated summaries for health guidance.

Growing pressure on AI companies in sensitive domains

The removal of “What People Suggest” illustrates the increasing scrutiny facing companies deploying AI tools in sensitive areas such as healthcare.

Search engines have long served as a first stop for people seeking medical information, and the introduction of generative AI has amplified both the potential benefits and risks of those searches.

While AI systems can quickly synthesize information from across the web, they can also reproduce misleading claims or unverified advice when drawing from large volumes of online content.

For Google, which operates the world’s most widely used search engine, the challenge is particularly significant. Any change to how health information is displayed has the potential to influence millions of users researching symptoms, treatments, or diagnoses.

For now, the removal of “What People Suggest” suggests a cautious step as the company continues to refine how artificial intelligence fits into one of its most important products.

Khogendra Rupini

Khogendra Rupini is a full-stack developer and independent news writer, and the founder and CEO of Levoric Learn. His journalism is grounded in verified information and factual accuracy, with reporting informed by reputable sources and careful analysis rather than live or speculative updates. He covers technology, artificial intelligence, cybersecurity, and global affairs, producing clear, well-contextualized articles that emphasize credibility, precision, and public relevance.

Founder & CEO, Levoric Learn Editorial and Technology Analysis
