
Sunday, December 1, 2024

Position Paper 1

Context: I wrote this paper for a senior conference, my former high school's first Model UN conference. I ended up sweeping all four prizes (1st Best DR, PP, Speech + 2nd Best Delegate). This paper also won Best Position Paper, so it's extra special. :)


[Committee: General Assembly - 1 

Delegation: The Russian Federation 

Agenda: Preventing the Militarization of Artificial Intelligence for Malicious Use]


“Artificial intelligence will have a more profound impact on humanity than fire, electricity, and the internet.” – Sundar Pichai

 

In the modern era, we see the rise of many trends in the societies in which we live, work, and interact. Occasionally, however, we witness a trend that grows exponentially, the foremost today being AI. With the advent of such a technology, the international community maintains reservations about AI owing to its two-fold implications: on the one hand, it is a beneficial addition to the human race; on the other, it carries risks, particularly when enterprising actors misuse it to further their own gains. We are here to state our stances, expose the problem, and propose unique solutions to resolve our crises.

 

First of all, how do we define this agenda? It bifurcates into two parts: the militarization of AI, and its malicious use by non-state actors who commit crimes with a limited risk of detection. How, then, do we define the militarization of AI? It is every process by which AI acquires the power to carry out military missions, ranging from minor operations involving logistics, decision support, and command-and-control facilities to the use of lethal force.

 

Our stance is a cautious middle ground between the conventional responses to AI. The Russian Federation believes in developing lethal autonomous weapons systems (LAWS) and strongly opposes any effort to place international limits on their development; at the same time, we believe that humans must maintain partial control over such systems, and we are willing to continue discussions on the regulation of AI so that there are fewer cases of its malicious use.

 

We also maintain a strict no-first-use policy: weapons such as LAWS must never be used to incite a conflict. Unless placed under extremities such as an invasion of Russia, we abstain from using these weapons. To quote the Russian Security Council secretary:

 

“We believe that it is necessary to activate the powers of the global community, chiefly at the UN venue, as quickly as possible to develop a comprehensive regulatory framework that would prevent the use of the specified [new] technologies for undermining national and international security…"

 

Another line of interest for the delegation is the involvement of non-state actors in this issue, because we observe two major types of crimes committed by these actors.

 

The first is coordinated cyber-attacks using AI, which can worm into military devices to extract sensitive information and classified documents, allowing hackers to advance their interests by ransoming or shutting down essential infrastructure, or to generate money through such schemes to finance and propagate destructive ideologies. The second is deepfake videos that sway opinions and carry out misinformation campaigns, damaging our societies, because, as Winston Churchill said, “In war, truth is often the first casualty.” At this juncture, one statistic stands out: regarding the militarization of AI, the market value of AI in the military is currently estimated at US$9.2 billion and is projected to grow at roughly 33% per year to reach US$38.8 billion by 2028.
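As a quick sanity check on that projection, assuming the roughly 33% figure is a compound annual growth rate over 2023–2028 (the reading suggested by the market report cited in the bibliography), the numbers are internally consistent:

\[ 9.2 \times (1 + 0.333)^{5} \approx 9.2 \times 4.2 \approx 38.7 \ \text{billion US\$}, \]

which is in line with the US$38.8 billion figure cited for 2028.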

 

The question may arise as to why the Russian Federation has the right to advise the world on this matter. We assert that we have earned that right: in the past we were instrumental in derailing efforts to enact legislation banning such weapons, but events have forced us to reconsider many of those policies and to conclude that AI must be brought under control immediately. We have worked with the G20 and the United Nations, and we play central roles in DISEC, UNODA, and the UNSC on this vital issue. On top of this, we have signed treaties such as the Convention on Certain Conventional Weapons and created an ethical code binding on Russia's AI policies, alongside other actions that cement the platform from which we speak and challenge the world.

 

As a proponent of peace and security, the delegation would like to propose using artificial intelligence itself to solve these issues holistically, covering the malicious uses of AI, namely:


  1. Create a new committee, the “Artificial Intelligence Oriented Weapons Dissemination Committee,” to estimate lethality factors, preserve lives, and destroy lethal weapons before deployment.
  2. Enable AI to find security loopholes in our systems and rectify them, a promising approach to harnessing AI.
  3. Develop software that detects deepfakes accurately, which can resolve our misinformation crises.

 

The Russian Federation concludes by stating that we will face challenges in this endeavour, but we will overcome them together.

 

Bibliography:

 

1)    https://www.threatintelligence.com/blog/ai

2)    https://futureoflife.org/recent-news/how-to-prepare-for-malicious-ai/

3)    https://openai.com/research/preparing-for-malicious-uses-of-ai

4)    https://healthexec.com/topics/health-it/cybersecurity/4-recommendations-combat-malicious-use-ai

5)    https://aimagazine.com/ai-strategy/five-ways-ai-can-be-used-to-prevent-cyber-attacks

6)    https://techbeacon.com/security/how-ai-will-help-fight-against-malware

7)    https://unicri.it/sites/default/files/2020-11/AI%20MLC.pdf

8)    https://www.linkedin.com/pulse/use-ai-detecting-preventing-cybercrime-neil-sahota-%E8%90%A8%E5%86%A0%E5%86%9B-/

9)    https://www.forbes.com/sites/forbestechcouncil/2022/07/15/malicious-ai-isnt-a-distant-reality-anymore/?sh=4a0ad4ae1fd6

10)    https://www.reuters.com/technology/un-security-council-meets-first-time-ai-risks-2023-07-18/

11)    http://government.ru/en/search/?q=ai&dt.till=28.08.2023&dt.since=7.05.2012&sort=rel&type=

12)    https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-military-market-41793495.html#:~:text=%5B296%20Pages%20Report%5D%20The%20Artificial,33.3%25%20from%202023%20to%202028.

13)    https://www.cam.ac.uk/Malicious-AI-Report

14)    https://www.defenseone.com/ideas/2019/04/russian-military-finally-calling-ethics-artificial-intelligence/156553/

15)    https://www.statista.com/statistics/1235395/worldwide-ai-enabled-cyberattacks-companies/#:~:text=In%202021%2C%20around%2068%20percent,danger%20to%20companies'%20IT%20security.

16)    https://www.linkedin.com/pulse/dark-side-ai-how-can-used-malicious-purposes-enio-moraes/

17)    https://www.linkedin.com/pulse/dark-side-ai-how-can-we-prevent-from-being-used-the-research-world/

18)    https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/exploiting-ai-how-cybercriminals-misuse-abuse-ai-and-ml

19)    https://threws.com/the-dark-side-of-ai-how-can-we-prevent-ai-from-being-used-for-malicious-purposes/

20)    https://www.bbc.com/news/technology-57101248

21)    https://www.deepinstinct.com/blog/how-ai-can-be-used-for-malicious-purposes

22)    https://ieeexplore.ieee.org/abstract/document/9831441

