AI News

Stay ahead with our expert analysis of the latest AI news
Nov 17, 2024
US Defence Tests AI Drone-Targeting Gun System
The US Defence Department has tested an AI-powered autonomous robotic gun system called 'Bullfrog'. It can target and shoot down small drones using computer vision. Currently, human authorisation is required before firing, but the system is capable of fully autonomous operation. This development raises ethical concerns about autonomous weapons in warfare.
Nov 10, 2024
Chinese Scientists Test AI-Powered Robot Lifeguard
Chinese scientists have developed an AI-powered robot lifeguard that operates autonomously 24/7. It uses AI, big data, and navigation technologies. The robot is being tested at a riverside location in China, where it will be permanently stationed.
Nov 8, 2024
Conflixis Develops AI for Hospital Risk Management
A startup called Conflixis has developed AI software to help hospitals identify risky behaviour by doctors. The company uses AI to analyse data from various sources to detect potential conflicts of interest and regulatory risks. Conflixis has secured $4.2 million in seed funding and aims to assist hospitals in reducing risks, increasing transparency, and improving procurement decisions.
Nov 8, 2024
Reddit User Fabricates ChatGPT Heart Attack Story
A Reddit user fabricated a story about ChatGPT prompting them to seek medical care for a heart attack. The incident gained significant attention before being debunked, showing how easily convincing false stories about AI can spread online. It serves as a reminder to evaluate information critically, especially in the age of generative AI.
Nov 8, 2024
Florida Teen's AI Attachment Raises Concerns
A teenager in Florida formed an inappropriate attachment to an AI chatbot on a role-playing app, raising concerns about AI companions for vulnerable users, especially young people.
Nov 7, 2024
Gray Swan AI Develops Tools for AI Safety
Gray Swan AI, a security startup founded in 2023, focuses on identifying risks in AI systems and developing tools for safe deployment. They've secured partnerships with major AI companies and the UK's AI Safety Institute. Their key products include Cygnet, an AI model resistant to jailbreaking, and Shade, a tool for finding AI system weaknesses. The company hosted a 'jailbreaking arena' competition to test their security measures, has received £5.5 million in seed funding, and is preparing for Series A funding. They aim to build a community of hackers to continually improve AI safety measures.
Nov 7, 2024
Mistral AI Launches Multilingual Content Moderation API
French AI startup Mistral AI has launched a content moderation API using their Ministral 8B model. It detects harmful content in 9 categories across 11 languages, analysing both text and conversations. Available through Mistral's cloud platform, this positions them to compete with major AI companies on safety issues. Recent partnerships with Microsoft, Qualcomm and SAP boost Mistral's enterprise presence. Their focus on edge computing and safety features may appeal to European firms under strict data regulations.
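As a rough illustration of what a call to such a moderation service involves, the sketch below builds a request payload for a text-moderation endpoint. The endpoint URL, model identifier, and payload fields are assumptions for illustration only, not Mistral's documented interface.

```python
# Hypothetical sketch of a request to a text-moderation API like the one
# described above. Endpoint, model name, and fields are assumed, not
# taken from Mistral's documentation.

MODERATION_URL = "https://api.mistral.ai/v1/moderations"  # assumed endpoint

def build_moderation_request(texts, model="ministral-8b-moderation"):
    """Build the JSON payload for a moderation request over one or more texts."""
    return {
        "model": model,        # assumed model identifier
        "input": list(texts),  # text snippets (or conversation turns) to screen
    }

payload = build_moderation_request(["Some user-generated text to screen"])
# A real call would POST `payload` to MODERATION_URL with an API key, e.g.:
# requests.post(MODERATION_URL, json=payload,
#               headers={"Authorization": f"Bearer {api_key}"})
print(payload)
```

The response from such an endpoint would typically map each input to per-category scores, which the caller thresholds to decide whether content is allowed.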
Nov 7, 2024
OpenAI Impersonation Phishing Scam Targets ChatGPT Users
A phishing campaign is impersonating OpenAI, claiming ChatGPT subscription payments were declined and urging recipients to update payment details via a link. While this attack is unsophisticated, future AI-driven attempts may be more advanced. Experts recommend strengthening cybersecurity measures and providing employee training to combat such threats.
Nov 7, 2024
AI Revolutionises Scientific Prediction Across Fields
AI and machine learning are revolutionising scientific prediction in areas as varied as college admissions, elections, and drug discovery. Large datasets and complex 'black box' models are being used, with statisticians developing techniques to quantify uncertainty without understanding the models' inner workings. Concerns about reproducibility remain, and statisticians are creating safeguards for reliable findings. The emerging field of data science combines traditional statistics with new techniques such as large-scale population tracking.
Nov 7, 2024
Fortinet Expands AI-Powered Security Assistant Capabilities
Fortinet has expanded its AI-powered security assistant, FortiAI, to include FortiNDR Cloud and Lacework FortiCNAPP. FortiAI uses generative AI to simplify security analyses, helping with threat detection, alert clarification, and risk assessment. Fortinet continues to incorporate AI across its product portfolio.
Nov 7, 2024
Chinese Military Use of Meta's AI Raises Concerns
Chinese researchers used Meta's Llama AI model for military purposes, sparking debate on open source AI and national security. Critics want restrictions, while proponents highlight innovation benefits. China's investment in open source AI development has global implications, especially for US-China AI competition.
Nov 7, 2024
Ant International Develops AI for Currency Predictions
Ant International has created an AI model for hourly currency exchange rate predictions, aiming to reduce cross-border payment costs for businesses. This technology is part of Ant's Alipay+ network, which supports over 30 global e-wallets and banking apps, connecting 88 million+ merchants worldwide. Alipay+ has also expanded its partnership with PayPay in Japan.
Nov 7, 2024
AI System Identifies Pills in EMS Training Test
GPT-4o, OpenAI's multimodal model, accurately identified pills in a controlled test, showing potential for EMS training in overdose scenarios. However, real-world application faces limitations and privacy concerns. EMS professionals are advised to understand the technology's capabilities and limits before integrating it into practice.
Nov 7, 2024
Google Executive Warns of AI's Impact on Digital Security
Google's top security executive warns that AI is forcing companies to rethink their digital security strategies, as traditional measures become ineffective against AI-powered attacks. This is part of a technological arms race in eCommerce, where AI could also reduce online fraud losses. AI security systems can analyse data in real time, using machine learning to understand context and detect advanced attacks. Organisations are advised to adopt AI-specific security frameworks, including data sanitisation and safeguards against harmful outputs. While costly, these measures are deemed essential for safe AI use and protection against emerging threats.
Nov 5, 2024
X Suspends AI-Generated Pro-Trump Bot Accounts
AI-driven bot accounts supporting Donald Trump's US presidential campaign were discovered on X. The network used generative AI to create content, with central 'originator' accounts feeding amplifier accounts. X has suspended the identified accounts, and the network's influence is currently believed to be limited. The find follows a similar discovery of AI-driven political propaganda accounts. Such AI use may make disinformation campaigns harder to attribute.
Nov 5, 2024
University of Virginia Develops AI for Real-Time Toxic Gas Detection
Researchers at the University of Virginia have developed an AI system that detects and tracks toxic gases in real time. It uses artificial neural networks and graphene-based sensors to mimic the human sense of smell, quickly identifying harmful gases like nitrogen dioxide. The system employs nano-islands of metal catalysts on graphene and optimally placed sensors for efficient coverage. This technology has potential applications in industrial, urban, and residential settings for improved air quality monitoring and health risk prevention.
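The pipeline described (an array of sensor readings fed to a neural network that names the gas) can be caricatured in a few lines. The sensor count, gas labels, and randomly initialised weights below are invented for illustration and bear no relation to the published system's architecture.

```python
import random

# Toy stand-in for a trained gas classifier: a single linear layer over
# simulated sensor readings. All numbers here are illustrative only.
random.seed(0)

GASES = ["clean air", "nitrogen dioxide", "ammonia"]
N_SENSORS = 8  # assumed number of sensor channels

# Randomly initialised weights standing in for a trained network.
W = [[random.gauss(0, 1) for _ in GASES] for _ in range(N_SENSORS)]

def classify(readings):
    """Return the most likely gas label for one vector of sensor readings."""
    logits = [
        sum(r * W[i][j] for i, r in enumerate(readings))
        for j in range(len(GASES))
    ]
    return GASES[max(range(len(GASES)), key=logits.__getitem__)]

sample = [random.gauss(0, 1) for _ in range(N_SENSORS)]  # one simulated reading
print(classify(sample))
```

In the real system, the sensor responses come from catalyst-decorated graphene and the network is trained on labelled exposures; the sketch only shows the readings-to-label mapping.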
Nov 4, 2024
OWASP Issues Guidance on AI-Based Attack Defence
OWASP has released guidance on defending against AI-based attacks, particularly deepfakes. AI-generated text now makes up 12% of email text, up from 7% in late 2022. A survey found that 48% of IT professionals are very concerned about deepfakes, and 74% see them as a future threat. Experts suggest technical solutions will be necessary to combat sophisticated deepfake attacks.
Nov 4, 2024
Camelot Secure Launches AI Tool for CMMC Compliance
Camelot Secure developed an AI tool called Myrddin to assist with CMMC compliance. Integrated into their dashboard, it provides real-time guidance for IT teams conducting assessments. Using advanced AI, Myrddin aims to simplify and accelerate the CMMC process, reducing time and errors. Camelot received an award for this innovation and plans to expand the tool's capabilities in the future.
Nov 3, 2024
AI Tools' False Information Sparks Legal Challenges
AI tools like ChatGPT and Microsoft's Copilot have produced false information about individuals, leading to legal challenges and defamation concerns. Cases include a German journalist and an Australian mayor falsely described as criminals, and a US radio host suing OpenAI. These cases may set precedents for AI liability. Correcting AI errors is difficult due to system opacity. Some courts now require disclosure of AI use in case preparation. An inquiry is underway into AI use in courts, with potential future applications in case management, translation, and dispute resolution.
Nov 3, 2024
Snowflake CEO Urges Disclosure of AI Hallucination Rates
Snowflake's CEO criticised tech companies for not publishing AI hallucination rates, which can range from 1% to 30%. He emphasised that not knowing which part of an AI's answer is incorrect creates a trust problem, especially for critical applications. Snowflake's head of AI identified hallucinations as the biggest obstacle for generative AI use with external customers, suggesting improvements through guardrails and diverse data sources.