
The International Legal Realm of Cyber Security and AI: Age of Conflict

Harnessing AI in Conflicts: Embarking on New Voyages

Artificial Intelligence (“AI”) is a double-edged sword, capable of upgrading both offensive and defensive cyber capabilities, and it has been deployed in numerous conflicts of international intensity. The full scope of its usage may never be comprehended, owing to the clandestine nature of such disputes and scanty public disclosure. Nevertheless, recent years offer many examples of nations employing it, and of others succumbing to its severe consequences. When the Israel Defense Forces (IDF) began using killer AI systems in their military operations in 2021, they declared it the first AI war in history. [1]

AI is persistently driving dynamic evolutions in the realm of International Law. Applying international law to cyber security and AI raises several major issues, such as how states should conduct cyber operations in armed conflicts, or respond to malicious cyber activities conducted by other states. Indeed, some even question whether international law will be able to keep pace with the series of rapid evolutions in AI.


Voyages to the Ocean of Cyberspace: The Pirates in the Game

With several States sponsoring and executing intense research in the field of cyberspace, owing to the immense gains it could offer, the creation of sundry AI-based offensive devices should come as no surprise. The possession and use of Autonomous Weapons Systems (AWS) might actually instigate wars, since they minimise the human cost of waging them. One illustration is the substantial rise in the frequency of targeted operations in Iraq following the switch from human-piloted aircraft to unmanned aerial vehicles. Another instance of AI being employed offensively came when Azerbaijan deployed Turkish- and Israeli-made armed drones to gain a major advantage over Armenia in the fight for control of the Nagorno-Karabakh region.[2]

A 2021 UN report suggested that Turkish-manufactured Kargu-2 drones might have been used in the Libyan Civil War to hunt down enemy soldiers.[3]

From an ethical standpoint, AWS fall short regardless of their level of autonomy, as they cannot reliably distinguish friend from foe within the framework of present-day military ethics.

With safety risks present everywhere, connected devices widen the scope for cyber security breaches launched from distant locations, and because their code is opaque, securing them is highly complex. Since machine learning uses machines to train other machines, one may wonder what happens if malware infects, or an adversary manipulates, the training data. Hence, when AI goes to war with other AI, these ongoing cyber security challenges will add monumental risks to the future of the human ecosystem.

As nations individually and collectively hasten their efforts to gain a competitive edge in technology, further weaponization of AI is inevitable.


AI vs. AI

Just as nations have begun using AI technologies to strengthen their militaries, increasing the sophistication and range of their assaults, they have also been using AI to support their defence personnel and bolster their cyber security against digital crimes and in times of conflict.

In 2015, the ICRC stated that war is justifiable when not waging it would be the ethically worse choice. Incorporating AI, as a form of cyber security, into wars against atrocities committed by other States could therefore reap benefits by minimising civilian casualties. Upgrading defence capabilities, such as using AWS over human pilots in aerial combat, has clear advantages. [4]

Lethal Autonomous Weapons Systems (LAWS) can strengthen a military’s might, creating deterrents that avert conflicts. If war does break out, AI in the coming years could make fighting more efficient and targeted, removing human error and limiting loss of life. For instance, an Israeli remote-controlled machine gun that used AI to target and kill the scientist Mohsen Fakhrizadeh succeeded in its mission while his wife, seated inches away, walked away from the attack unharmed. Iranian investigators attributed the shooting’s pinpoint accuracy to the weapon’s advanced facial recognition capabilities. [5]

Even highly secure military systems can be prone to cyber threats that jeopardise a mission, but AI can assist in protecting programs, data, networks, and computers from unauthorised access. AI can also analyse the patterns of cyber attacks and form protective strategies to fight them, identifying the smallest behavioural signatures of malware long before it enters a network. It is hence critical for the military to have access to the most advanced and tailored AI cyber security solutions in order to stay safe amidst a constantly evolving landscape of AI-driven cyber security risks.


On the Quest for Harmony

The existing international law on cyberspace has never been clearly defined and has undergone several reformations over the past decades. Since the Budapest Convention, the world’s first international treaty focused on combating cybercrime, there have been numerous discussions on how international law applies to cyberspace.

On September 18, 2012, Harold Koh, Legal Advisor to the US Department of State, delivered a speech at the US Cyber Command Legal Conference stating his view that international law principles do apply in cyberspace, and that States are legally responsible for operations undertaken through “proxy actors” who act on the State’s orders or under its instructions or control.[6] In 2012, the UN Group of Governmental Experts (GGE) affirmed that State sovereignty, and the international norms that flow from it, apply to State conduct: a State has jurisdiction over cyber infrastructure located within its territory, must meet its international obligations regarding internationally wrongful acts attributable to it, and must not use proxies to commit such acts. Although the GGE recognised sovereignty as the foundation on which States’ rights relating to cyber operations rest, it did not specify how precisely sovereignty binds a State to take, or refrain from, specific actions.[7]

Later, through a three-year effort of 20 international law scholars under NATO, the Tallinn Manual 1.0 on International Law Applicable to Cyber Warfare, comprising 95 rules, was produced.[8] In 2016, Brian Egan, Legal Advisor to the U.S. Department of State, stressed the need for states to make known their views on how international law should apply in cyberspace. Any State carrying out a cyber attack must comply with International Humanitarian Law; to determine whether an operation constitutes an “attack”, its resulting kinetic or non-kinetic effects, nature, and scope must be considered. Countermeasures, directed only against the State committing an internationally wrongful act, must satisfy the principles of necessity and proportionality, and a prior demand that the offending State cease its act must be issued before countermeasures are launched.[9] Subsequently, in 2017, Tallinn Manual 2.0, with 154 rules, was produced by NATO, dealing with a much broader range of cyber operations, both in and out of armed conflict. [10]

The UK, Estonia, France, Israel, Canada, and other nations expressed their opinions over the following few years. However, a large number of States remain relatively silent, as some may wish to avoid becoming entangled in disputes among the States that have put forth their views. For the rest, the issue may be one of legal capacity, since several States lack personnel who can grasp the issues involved in applying international law to cyberspace.

The field of international law in the context of cyber security and AI during conflict is ever-transforming, and is only set to become more complicated as pending questions are debated and answered.


The Ultimate Speculation: Reaching the Shore

Over the years, episodes of cyber espionage, AI-powered attacks, and breaches of cyber security have been witnessed globally, intensifying the need for in-depth study and incorporation of AI in armaments. Rather than ending the usage of AI, nations have embarked on a journey of further cultivating it in line with their self-interests; hence, we must find common ground and strike the delicate balance by which we can secure our future. States and policymakers thus play a pivotal role in framing the rules governing AI in cyber security. They must ensure these rules offer clear-cut definitions of AI-powered cyber threats, moral guidelines to keep AI usage in check, and potent mechanisms to enforce compliance. States must sponsor research centres to conduct extensive study and keep pace with AI’s rapid evolution. Global collaboration is essential to produce new ideas and formulate fresh measures against cyber threats. The general public should be enlightened about the growing realm of AI and cyberspace, thereby gathering varied perspectives to mould fruitful cyber security measures.

To the question of whether we could end the growing deployment of AI in armed conflict, and whether States could refrain from utilising AI in response to malicious operations carried out against them, the answer is a resounding no. But to the question of whether International Law can keep a check on such attacks, while protecting innocent civilians as best it can in the days to come, the answer is affirmative.

Many States have put out recommendations and expressed their opinions on how international law should be applied to AI and cyber security in the context of wars. With technology progressing and more States needing to participate, it is high time to recognize the influence AI will have in the coming decades and to propose better remedies for the benefit of both our digital future and the future of humanity as a whole.


[1] Anna Ahronheim, Israel’s operation against Hamas was the world’s first AI war, The Jerusalem Post (May 27, 2021) (last visited October 17, 2023)

[2] Eado Hecht, Drones in the Nagorno-Karabakh War: Analysing the Data, Military Strategy Magazine (Vol. 7, Issue 4) (January, 2022) (last visited October 17, 2023)

[3] Hitoshi Nasu, The Kargu-2 Autonomous Attack Drone: Legal & Ethical Dimensions, Lieber Institute West Point (June 10, 2021) (last visited October 19, 2023)

[4] ICRC Position Paper, Artificial Intelligence and machine learning in armed conflict: A human-centred approach, International Review of the Red Cross (IRRC No. 913) (March, 2021) (last visited October 19, 2023)

[5] Stephen Farrell, Iranian nuclear scientist killed by one-ton automated gun in Israeli hit: Jewish Chronicle, Reuters (February 11, 2021) (last visited October 19, 2023)

[6] Chris Borgen, Harold Koh on International Law in Cyberspace, OpinioJuris (September 19, 2012) (last visited October 20, 2023)

[7] Lauren M. and others, International Law in Cyberspace, American Bar Association (January 27, 2023) (last visited October 19, 2023)

[8] Equipe IRIS-BH, Tallinn Manual and the use of force, Institute for Research on Internet and Society (June 30, 2016) (last visited October 19, 2023)

[9] Brian J Egan, Remarks on International Law and Stability in Cyberspace, U.S. Department of State (November 10, 2016) (last visited October 19, 2023)

[10] Eric Talbot Jensen, The Tallinn Manual 2.0: Highlights and Insights, Georgetown Law (February, 2017) (last visited October 19, 2023)

Author: Soumili Kundu

University and Year : Lloyd Law College, Greater Noida, Uttar Pradesh & 1st Year

Programme : B.A.LL.B. (2023-2028 batch)

