The US Defence Department has tested 'Bullfrog', an AI-powered autonomous robotic gun system that uses computer vision to target and shoot down small drones. A human operator must currently authorise each engagement, but the system is capable of fully autonomous operation, a development that raises ethical concerns about autonomous weapons in warfare.
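To illustrate the distinction between human-authorised and fully autonomous firing, here is a minimal sketch of a control loop with a human-in-the-loop gate. Bullfrog's actual software is not public; every name, threshold, and behaviour below is a hypothetical stand-in.

```python
import random
from dataclasses import dataclass

# Purely illustrative human-in-the-loop firing gate; none of these names
# come from the Bullfrog system, whose software is not public.

@dataclass
class Detection:
    track_id: int
    confidence: float  # detector's confidence that the track is a small drone

def detect_drones(frame: int) -> list[Detection]:
    """Stand-in for the computer-vision detector (a real system would run an
    object detector and tracker on the camera frame here)."""
    return [Detection(track_id=frame, confidence=random.random())]

def human_authorises(det: Detection) -> bool:
    """Stand-in for an operator console; blocks until a human approves."""
    return input(f"Engage track {det.track_id}? [y/N] ").strip().lower() == "y"

def engage(det: Detection) -> None:
    """Stand-in for slewing the gun onto the track and firing."""
    print(f"Engaging track {det.track_id}")

def control_loop(frames: range, autonomous: bool = False) -> None:
    for frame in frames:
        for det in detect_drones(frame):
            if det.confidence < 0.9:
                continue  # ignore low-confidence tracks
            # The ethical crux: setting autonomous=True removes the human gate.
            if autonomous or human_authorises(det):
                engage(det)

control_loop(range(10))  # current Bullfrog mode: a human approves every shot
```

The entire policy question reduces to a single branch: whether the `autonomous` flag is allowed to bypass the `human_authorises` gate.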
Northwestern University's Security & AI Lab (NSAIL) researches applications of AI to counterterrorism. The lab recently held a conference on AI & National Security and is working on projects such as a Drone Early Warning System and a Global Online Deep Fake Detection System, collaborating with other departments and international partners to address security challenges using AI.
AI systems such as deep learning and neural networks are grounded in mathematics. A neural network processes inputs through layers of artificial neurons, and training adjusts the weighted connections between those neurons to minimise the error between the network's outputs and the desired ones. Researchers are working to understand how trained networks reach their decisions and to correct their errors, with applications including autonomous satellites, spacecraft inspection, and non-invasive diagnosis of cardiovascular disease.
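To make that training process concrete, here is a minimal sketch of the idea: a tiny one-hidden-layer network whose connection weights are repeatedly nudged by gradient descent to shrink its prediction error. The task (learning XOR), the layer sizes, and every name in the code are illustrative assumptions, not details from the research described above.

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden connection weights
b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output connection weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # Forward pass: each layer weights its inputs, then applies a nonlinearity.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network outputs

    # The quantity training tries to minimise: the output error.
    err = out - y

    # Backward pass: gradients of the squared error w.r.t. each weight.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # "Adjusting the connections": step each weight against its gradient.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(out, 2))  # approaches [[0], [1], [1], [0]] as training succeeds
```

The essential step is the last four lines of the loop, where every connection weight is moved a small step in the direction that reduces the error; training a deep network is this same procedure at vastly larger scale.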
The AI-powered V-BAT reconnaissance drone has been tested in Ukraine. It can operate without GPS or communications links, has a range of 500 km, and can loiter for 10 hours. In testing it identified targets beyond the front line and relayed targeting data to artillery. V-BAT is also capable of swarm operations and may help counter Russia's electronic-warfare advantage.