News

As millions of people engage with LLMs, experts are voicing growing concerns that AI tools could provide harmful mental health advice.
A study finds that popular AI chatbots like ChatGPT, Google's Gemini, and Anthropic's Claude are inconsistent in responding to suicide-related queries.
Three widely used artificial intelligence chatbots generally do a good job responding to very-high-risk and very-low-risk ...
In multiple videos shared on TikTok Live, the bots referred to the TikTok creator as "the oracle," prompting onlookers to ...
Testing has shown that the chatbot exhibits a “pattern of apparent distress” when asked to generate harmful content ...
Anthropic has given its chatbot, Claude, the ability to end conversations it deems harmful. You likely won't encounter the ...
It has now become common for AI companies to regularly adjust their chatbots' personalities based on user feedback. OpenAI and ...
Claude's ability to search and reference past conversations isn't identical to ChatGPT's broad memory feature that can ...
Anthropic has given Claude, its AI chatbot, the ability to end potentially harmful or dangerous conversations with users.
Anthropic empowers Claude AI to end conversations in cases of repeated abuse, prioritizing model welfare and responsible AI ...
On Thursday, Anthropic launched new learning modes for both Claude.ai, its chatbot, and Claude Code, its coding assistant, to help users learn as they use AI to complete tasks. The modes aim to go ...
This memory tool works on Claude’s web, desktop, and mobile apps. For now, it’s available to Claude Max, Team, and Enterprise ...