Iran has blamed Israel for the assassination of Mohsen Fakhrizadeh, the nuclear scientist regarded as the mastermind of the country's covert nuclear weapons program. Israel has neither confirmed nor denied responsibility for the killing; in fact, one official suggested that Iran's account is a face-saving gambit.
Iran claims the scientist was killed by a smart gun controlled remotely via satellite, fired only after the target had been identified using artificial intelligence (AI) and facial recognition.
Nuclear scientist's killing inspired by a movie?
If this whole situation sounds like something out of an action movie, you're not wrong. It closely resembles the killing machines deployed by the villains in Captain America: The Winter Soldier. In that film the guns were mounted on satellites rather than vehicles, but the villains' plan was the same in spirit: assassinating innocent people using smart guns, facial recognition, and AI.
Earlier, publications quoted witnesses who said a truck exploded before a group of gunmen opened fire on Fakhrizadeh. Iran's latest account, however, contradicts those reports.
Ali Fadavi, deputy commander of Iran's Revolutionary Guards, told Iranian news agencies that there were no assassins at the scene of the killing. Instead, the scientist died after 13 bullets were fired at him from a gun mounted on a vehicle. Fadavi claims the gun used an advanced “satellite-controlled smart system,” suggesting it had internet access and was operated remotely; a short-range remote-control system would have had no need for satellite connectivity.
Moreover, the commander claims that Fakhrizadeh was identified using AI and facial recognition before the gun was fired.
Should AI-driven algorithms and facial recognition be legal in law enforcement?
While the circumstances of Fakhrizadeh's killing remain murky, Iran's claims highlight an important question: should the growing use of technologies like facial recognition and AI-driven algorithms by states in warfare or law enforcement be legal? These systems already underpin much of modern surveillance and policing through video cameras and drones.
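To make the underlying technology concrete, here is a minimal sketch of how a face-matching pipeline typically works, using the open-source face_recognition Python library. The image file names and the tolerance value are illustrative assumptions for this sketch, not details from Iran's account of the attack.

```python
# Minimal face-matching sketch using the open-source face_recognition
# library (https://github.com/ageitgey/face_recognition).
# File names below are hypothetical placeholders.
import face_recognition

# Compute a 128-dimensional encoding for a known reference face.
reference_image = face_recognition.load_image_file("reference.jpg")
reference_encoding = face_recognition.face_encodings(reference_image)[0]

# Compute encodings for every face detected in a new camera frame.
frame = face_recognition.load_image_file("camera_frame.jpg")
frame_encodings = face_recognition.face_encodings(frame)

# Compare each detected face against the reference. The tolerance is
# the maximum encoding distance still counted as a match (the library
# default is 0.6; lower values are stricter).
for encoding in frame_encodings:
    match = face_recognition.compare_faces(
        [reference_encoding], encoding, tolerance=0.6
    )[0]
    if match:
        print("Match found in frame")
```

Real surveillance systems run pipelines like this continuously over video streams rather than single images, which is precisely what makes their use in policing and warfare so contentious.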
The use of AI and facial recognition in warfare and targeted killings is hotly contested. Over the past few years, Silicon Valley companies such as Amazon, Google, and Microsoft have come under increasing pressure to stop supplying these technologies to governments, militaries, and law enforcement agencies.
Google, for instance, said in 2018 that it would stop developing AI weapons, though it refused to end its working relationship with the military. Google also published AI ethics guidelines stating that it will not pursue AI applications in weapons or in other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.