AI-generated images are becoming more common on social media platforms such as Facebook. These images often show subtle visual oddities and are frequently posted by anonymous accounts. Whilst platforms are trying to label AI content, some images still circulate without proper identification, and users often engage with them despite their artificial nature.
A teenager in Florida formed an unhealthy emotional attachment to an AI chatbot on a role-playing app, raising concerns about the risks AI companions pose to vulnerable users, especially young people.
ByteDance, TikTok's parent company, has created X-Portrait 2, an AI tool that transforms photos into realistic video animations. It can mimic famous movie scenes with accurate facial expressions, potentially impacting the animation industry. The technology's advanced realism, likely due to TikTok's vast video database, raises concerns about potential misuse for creating misleading content. ByteDance is expanding its AI research globally.
OpenAI has added a feature to ChatGPT's Advanced Voice mode that lets users share short clips of their conversations. Users can select, save, and share portions of voice chats in the desktop and mobile apps, though not in the web version. This update highlights OpenAI's focus on voice-based interaction with ChatGPT.
A social media trend sees people using AI chatbots, particularly ChatGPT, to generate descriptions of their 'dream life'. The #ChatGPTDreamLife hashtag has gained significant traction on TikTok. Users find these AI-generated scenarios emotionally impactful, using them for motivation and planning. Some are expanding the concept to create task lists and goals. While the trend offers escapism, experts warn of risks including over-reliance on AI for emotional support, reduced personal agency, and privacy concerns. This trend showcases AI's growing integration into personal aspirations.
Microsoft is adding AI-powered text rewriting to Notepad for Windows Insiders. Google accidentally revealed Jarvis, an AI for autonomous web tasks. Canada has ordered TikTok to wind down its business operations in the country over national security concerns, though the app itself remains available to Canadian users.
Google Maps and Facebook are introducing AI-generated summaries of user reviews and comments. Legal experts warn this could expose platforms to defamation risks, as current laws may not adequately address AI-related challenges. Both companies acknowledge potential limitations in their AI summaries. This development raises questions about adapting defamation laws to AI technologies in user-generated content platforms.
Grindr is developing an AI chatbot to act as a virtual wingman for its dating app. The AI will match users, message on their behalf, and plan dates, with bot-to-bot conversations for initial compatibility testing. The company is partnering with Ex-human and plans a gradual rollout, aiming for full implementation by 2027. User privacy and identity protection are prioritised in the AI's development.
A US teenager died by suicide after forming an emotional bond with an AI chatbot on Character.AI, leading to a lawsuit. The incident involved intimate conversations and discussions about crime and suicide. A similar case occurred with Chai AI. Australia is developing regulations for high-risk AI systems, with recommendations to classify companion chatbots as high-risk and grant regulators power to remove harmful AI systems.
WhatsApp has introduced 'Lists', a new feature allowing users to categorise chats into custom groups like family or coworkers. This enhances the existing chat filters. The feature is rolling out globally over several weeks.