AI-Drone Technology Accelerated by the War in Ukraine
Russian drone painted with a bird-like silhouette in an attempt to confuse the automated targeting mechanism of an AI-enabled hunter drone
When the Signal Dies, the Drone Decides
In modern warfare, losing a drone isn’t always about enemy fire. Often, the first thing to go is the signal.
Flying over hostile territory, drones face a gauntlet of invisible threats: GPS jamming that scrambles navigation, mobile data blackouts that sever control links, and terrain or buildings that swallow radio signals. Russia has turned these tactics into routine, shutting down entire regional networks to blind Ukrainian drones mid-mission. Even advanced military craft have been left circling helplessly when their connection to human pilots is cut.
It’s not just theory. In U.S. Army drills in Germany earlier this year, operators reported sudden video dropouts, GPS failures, and controls freezing in their hands. When your mission depends on a constant stream of data, the loss of that stream can be fatal.
Autonomous targeting systems, powered by artificial intelligence software, can seek out targets and destroy them automatically, without direct human intervention.
A modern-day arms race is playing out on the front line, as Ukraine and Russia both experiment with autonomy in drones.
Offline Onboard Intelligence
This is where artificial intelligence comes in.
Instead of relying on a pilot’s constant touch, AI-powered systems allow drones to decide their own path, adapt to unexpected obstacles, and, in some cases, finish the job without calling home.
Researchers at MIT have created adaptive control software that lets drones instantly correct for turbulence, wind, or sudden obstacles. Shield AI’s “Hivemind” system enables quadcopters to navigate and execute missions in GPS-denied, jammed environments. In Europe, companies like Quantum Systems and Stark are pushing toward autonomous targeting, with drones able to track and engage moving vehicles over long distances. Stark says its newest design can strike up to 100 kilometers from base, though still with human sign-off before firing.
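What does "instantly correcting" actually look like in software? The toy Python sketch below shows the core idea behind adaptive control: the flight loop keeps a running estimate of an unknown disturbance, such as a wind gust, and folds that estimate back into its motor commands. Everything here is invented for illustration; the gains, the 50 Hz loop rate, and the one-dimensional physics stand in for the far more elaborate models real flight controllers use.

```python
# Toy sketch of adaptive disturbance rejection: a one-dimensional
# position hold in which the controller continuously estimates an
# unknown wind push and cancels it. An illustration of the general
# technique only, not MIT's or Shield AI's actual software.

DT = 0.02          # control-loop period in seconds (50 Hz, an assumption)
KP, KD = 4.0, 2.5  # proportional/derivative gains, chosen for the demo
ADAPT_GAIN = 1.5   # how quickly the disturbance estimate is updated

def run_position_hold(steps: int = 500) -> None:
    pos, vel = 0.0, 0.0   # drone state along one axis (meters, m/s)
    wind = 0.0            # true (hidden) wind acceleration
    wind_estimate = 0.0   # controller's running estimate of the wind

    for step in range(steps):
        if step == 100:
            wind = 3.0    # sudden gust the controller was never told about

        # Feedback term drives position and velocity back to zero;
        # the adaptive term subtracts the current disturbance estimate.
        command = -KP * pos - KD * vel - wind_estimate

        # Simple physics update: commanded thrust plus the real wind.
        accel = command + wind
        vel += accel * DT
        pos += vel * DT

        # Adaptation law: nudge the estimate in proportion to the
        # tracking error, so a persistent push is gradually learned.
        wind_estimate += ADAPT_GAIN * (pos + vel) * DT

        if step % 100 == 0:
            print(f"t={step * DT:5.2f}s pos={pos:+.3f}m est_wind={wind_estimate:+.2f}")

if __name__ == "__main__":
    run_position_hold()
```

The point is the last update in the loop: the controller is never told about the gust. It learns the push from its own tracking error, which is what lets a drone shrug off turbulence without a pilot's touch.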
To Shoot or Not to Shoot: The Ethics of Choosing a Target
This autonomy solves one problem – connectivity. But it raises another: defining the target.
AI can identify patterns in heat signatures, shapes, and movement, but the battlefield rarely offers perfect clarity. A civilian truck can resemble a military transport on infrared. A worker carrying a metal pipe can be mistaken for a rifleman. A school can pass for a weapons storage facility.
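To see why this ambiguity is so stubborn, consider the simplest possible engagement policy: fire only when the onboard classifier is very confident. The Python sketch below is a hypothetical illustration, not any fielded system; the labels, scores, and the 0.95 threshold are all invented.

```python
# Hypothetical illustration of a confidence-gated engagement policy.
# The labels, scores, and 0.95 threshold are invented for this example;
# real detections would come from an onboard vision model.

ENGAGE_THRESHOLD = 0.95  # assumed policy: engage only on very high confidence

def engagement_decision(label: str, confidence: float) -> str:
    """Return the action for a single detection under the threshold policy."""
    if label != "military_vehicle":
        return "ignore"
    if confidence >= ENGAGE_THRESHOLD:
        return "engage"            # autonomous strike proceeds
    return "defer_to_operator"     # gray zone: request human sign-off

# Invented detections mirroring the look-alikes described above.
detections = [
    ("military_vehicle", 0.97),   # unmistakable thermal signature
    ("military_vehicle", 0.62),   # civilian truck resembling a transport
    ("military_vehicle", 0.94),   # just under the line: who decides?
]

for label, score in detections:
    print(f"{label} @ {score:.2f} -> {engagement_decision(label, score)}")
```

Raise the threshold and the drone misses real targets; lower it and the civilian truck gets hit. No choice of number makes the dilemma go away.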
In 2021, a UN report claimed that a Turkish-made Kargu-2 drone had attacked fighters in Libya the previous year without direct human command, possibly the first lethal autonomous strike.
And in Ukraine, drones have been used to locate and hit radar stations after losing their link to operators, relying entirely on onboard AI to finish the task.
Why Autonomy Has Become Essential in Drone Warfare
Until recently, many small and medium drones operated over civilian mobile networks – cheap, easy, and effective. But Russia's rolling shutdowns of 3G and 4G coverage in targeted regions (for example, the Rostov-on-Don area) have rendered that approach unreliable.
AI changes the equation. A drone that can navigate, adapt, and execute without a constant uplink isn’t stopped by a dead network or a jammed GPS. It keeps flying, keeps hunting, and, if its onboard brain decides the shot is clear, keeps fighting.
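In control-flow terms, that failover is simple to state, even if it is hard to build well. The sketch below, written against a hypothetical flight stack with invented mode names and a made-up five-second timeout, shows the logic that keeps a mission alive when the uplink dies.

```python
# Minimal sketch of link-loss failover, assuming a hypothetical flight
# stack with three modes. Real systems are far more involved; this only
# shows the control flow that lets a mission survive a dead network.

from enum import Enum, auto

class Mode(Enum):
    REMOTE_PILOTED = auto()   # operator link healthy
    AUTONOMOUS = auto()       # link lost: onboard navigation continues
    RETURN_TO_BASE = auto()   # mission cannot be completed safely

LINK_TIMEOUT_S = 5.0          # assumed threshold before declaring link loss

def next_mode(seconds_since_packet: float,
              target_confirmed_onboard: bool) -> Mode:
    """One step of the failover logic."""
    if seconds_since_packet < LINK_TIMEOUT_S:
        return Mode.REMOTE_PILOTED
    # Uplink is gone: keep flying on onboard AI if it still has a
    # confirmed objective, otherwise fall back to a safe return.
    if target_confirmed_onboard:
        return Mode.AUTONOMOUS
    return Mode.RETURN_TO_BASE

# Example: a network blackout, 8 seconds after the last packet arrived.
print(next_mode(8.0, target_confirmed_onboard=True))
# -> Mode.AUTONOMOUS: the drone keeps flying and hunting on its own.
```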
That makes AI not just a technological upgrade, but a necessity in contested skies. The question now is not whether drones can operate without direct human intervention, but whether we're ready for the ethical dilemmas combatants will face more and more as the technology improves.