What To Know
- Nearly half of AI-generated news answers in a new 18-country European study contained at least one significant factual or sourcing error.
- Researchers say the problem is global, not regional or language-specific, and call for AI in news to be guided by transparency, accountability, and strong ethical standards.
Thailand AI News: A sweeping new European investigation has exposed major flaws in how artificial intelligence assistants handle news. Conducted across 18 countries by leading media researchers, the study found that popular AI tools are far less trustworthy than many users assume when it comes to delivering accurate, verifiable information.

A new European study finds AI assistants frequently misreport or fabricate details when asked about current news events.
Image Credit: AI-Generated
For this Thailand AI News report, we reviewed the study, in which experts analyzed more than 3,000 responses from major AI assistants, including ChatGPT, Microsoft Copilot, Google Gemini, and Perplexity AI, across multiple languages. The results were alarming. Nearly half of all AI-generated news answers contained at least one significant factual or sourcing error, and over 80 percent had some form of inaccuracy, ranging from incomplete information to misleading statements. These findings raise serious concerns about how heavily people now depend on AI to interpret global news events.
AI assistants often misquote or invent sources
The study’s most disturbing discovery was how often AI platforms mishandled sourcing. A large portion of responses were missing citations, attributed facts to the wrong outlets, or fabricated sources altogether. One of the systems tested showed especially severe weaknesses, with the majority of its responses containing unreliable or unverifiable claims. Accuracy problems were equally troubling: many assistants relied on outdated or contextually wrong data, for example quoting officials who no longer hold the positions attributed to them or describing past events as if they were current.
A threat to media trust and public understanding
Researchers behind the project warned that this widespread misinformation risk could erode public confidence in journalism. As more people replace traditional search engines or news sites with conversational AI tools, users may struggle to distinguish verified facts from confident-sounding fabrications. When information is presented smoothly but built on inaccuracies, audiences can become misled without realizing it. For younger users—many of whom already rely on AI assistants for quick news summaries—this trend could reshape how truth and credibility are perceived in the digital era.
Why human judgment still matters
The study emphasized that AI assistants tend to fail most on complex and fast-changing stories, where context, timelines, and nuance matter most. While the tools perform adequately on simple factual queries, they falter on political developments, scientific studies, and breaking events. Experts argue that this is not a regional or language-specific issue; it is a global one affecting every platform currently on the market.
The findings underline a crucial message for journalists, policymakers, and readers in Thailand and beyond: artificial intelligence can assist with information delivery, but it cannot yet replace human editorial scrutiny. Without human oversight, AI-driven misinformation could easily blur the line between fact and fiction, making it harder for societies to stay accurately informed. The call is clear—AI in news must be guided by transparency, accountability, and strong ethical standards if it is to serve, rather than mislead, the public.
For the latest on AI platform reliability, keep following Thailand AI News.