A new play called "Data" explores the potential consequences of a partnership between a tech leader and an authoritarian US government. Set in Silicon Valley, it examines the human element behind AI algorithms.
U.S. firm Vantiq has donated AI software worth $500,000 to develop smart cities in Ukraine. The project, in partnership with local organisations, aims to establish over 100 industrial parks, potentially impacting 2.5 million citizens. The AI software analyses data to improve city management, enhance wartime communication, and aid post-war reconstruction. The initiative seeks to optimise resource allocation, streamline data collection, and attract foreign investment through increased transparency.
A Reddit user fabricated a story about ChatGPT prompting them to seek medical care for a heart attack. The incident, which gained significant attention before being debunked, demonstrates how AI can be used to create convincing false content online. This serves as a reminder to critically evaluate information, especially in the age of generative AI.
Microsoft will unveil an AI-enhanced project for St. Peter's Basilica in November 2024. This collaboration with the Fabric of St. Peter is part of Microsoft's AI for Cultural Heritage programme, which has developed similar projects for other cultural sites. The unveiling will take place at the Vatican.
Perplexity AI's CEO offered tech support to The New York Times during a workers' strike, despite recent legal threats from The Times over unauthorised content use. The Times' Tech Guild went on strike shortly before a US election. The Times, along with other media outlets, has taken legal action against Perplexity AI for alleged copyright infringement.
AI-driven bot accounts supporting Donald Trump's US presidential campaign were discovered on X. The network used generative AI to create content, with central 'originator' accounts whose posts were spread by amplifier accounts. X has suspended the identified accounts, and the network's influence is currently believed to be limited. This follows a similar discovery of AI-driven political propaganda accounts. Such AI use may increase the difficulty of attributing disinformation campaigns.
AI tools like ChatGPT and Microsoft's Copilot have produced false information about individuals, leading to legal challenges and defamation concerns. Cases include a German journalist and an Australian mayor falsely described as criminals, and a US radio host suing OpenAI. These cases may set precedents for AI liability, and correcting AI errors remains difficult due to the opacity of the systems. Some courts now require disclosure of AI use in case preparation, and an inquiry is underway into AI use in courts, with potential future applications in case management, translation, and dispute resolution.
An AI platform called Artisan has launched a controversial ad campaign in San Francisco promoting AI workers over humans. The 'Stop Hiring Humans' campaign features provocative billboards, sparking debate on social media about the ethics and implications of AI replacing human workers.
A Polish state-funded radio station experimented with AI-generated content, including an interview with a deceased poet. The controversial attempt to boost listenership was quickly terminated following public outcry.
A fake Halloween parade in Dublin, advertised on an AI-generated website, drew thousands before police confirmed no event was taking place. The incident highlights concerns about AI content misleading the public and potentially exploiting search engines for ad revenue.
Microsoft and Andreessen Horowitz jointly published an AI policy statement advocating for a market-based approach to regulation. It opposes proactive measures, argues for unrestricted AI access to data, and calls for government investment in digital literacy. Critics argue the statement misrepresents impacts on smaller companies and avoids meaningful industry regulation.