BAKU, Azerbaijan, March 14. Artificial intelligence (AI) can assist in processing information and analyzing targets, but the final decision on the use of force must remain with humans, said Vladimir Norov, former Secretary General of the Shanghai Cooperation Organization (SCO) and member of the Nizami Ganjavi International Center, at the 13th Global Baku Forum, "Bridging Divides in a World in Transition", Trend reports.
"Machines are incapable of making the moral and contextual decisions required by international humanitarian law," noted Norov.
According to him, weapons using AI must fully comply with international humanitarian law, including the principles of distinction, proportionality, and military necessity.
"In real combat situations, distinguishing between combatants and civilians often requires complex human judgment, which modern AI systems are unable to reliably reproduce," Norov emphasized.
He also stressed the need for strict operational safety measures. According to him, autonomous systems must have reliable mechanisms allowing an operator to intervene, cancel an action, or shut the system down entirely at any time.
"Emergency shutdown functions, safe standby mode, and other protective mechanisms that prevent unintended escalation or loss of control must be mandatory," Norov added.
