Consular and Public Diplomacies Blog

Digital Diplomacy 4.0: The Basics of Artificial Intelligence

6/7/2023

November 2022 may be remembered as a historic moment for humanity. OpenAI launched its ChatGPT artificial intelligence (AI) program, which is already changing our world. A leading expert, Andrew Ng, compares the coming AI-driven transformation of society to the arrival of electricity at the end of the 19th century (Jewell, 2019). Moreover, just a couple of weeks ago, the stock of Nvidia, a semiconductor company that produces chips for AI computing, jumped 25 percent in one day (Aratani, 2023), making it more valuable than Intel.
 
AI has been in use for some time without much hoopla. We have been using it in voice assistants such as Amazon's Alexa, Apple's Siri and HomePod, and Google Home, and AI powers music recommendations on Spotify and product recommendations on Amazon's webpage. However, in November 2022, ChatGPT propelled artificial intelligence into the limelight of the tech revolution. Why? That was the first question I had after participating in the Summit on Digital Diplomacy and Governance, organized by DiploFoundation, but I did not know the answer. Fortunately, I think I can answer it now: because ChatGPT is a generative AI that can produce new content, it is very easy to use, and it is widely accessible (it was the fastest platform to reach 100 million users).
 
Since then, I have been trying to understand the basics of AI, but I am still struggling. Right now, there is so much information about it that finding reliable resources is overwhelming. Many are riding the AI wave, taking advantage of its novelty and of most people's unfamiliarity with the technology. In this blog post, I share my understanding of AI's basics; another post will focus on its impact on diplomacy and other fields.
 
Warning: I have also been playing around with Google's Bard and Microsoft's Bing chat in the Edge browser, so you will find some of their outputs in this blog post.
 
So, what is Artificial Intelligence?
 
According to Bing.chat: 

Artificial Intelligence (AI) is a field of computer science that aims to create intelligent machines that work and learn like humans. AI is based on the idea that machines can be made to think and learn like humans. It involves the development of algorithms and computer programs that can perform tasks that would normally require human intelligence.
There are three types of AI: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). ANI is the most common type of AI and is designed to perform a specific task. AGI is designed to perform any intellectual task that a human can do. ASI is the hypothetical future development of AI where machines surpass human intelligence.
AI is used in many applications, such as natural language processing, speech recognition, image recognition, robotics, and autonomous vehicles. Machine learning is a subset of AI that involves the development of algorithms that can learn from data without being explicitly programmed (Bing.chat 2023).
The AI we use daily is mostly the first type, also known as Weak AI, while the second is usually defined as Strong AI. A significant milestone that we have not reached is when computer programs achieve technological singularity, which is “a point in time when humans lose control over their technological inventions and subsequent developments due to the rise of machine consciousness and, as a result, their superior intelligence” (Gaona, 2023).
 
From Frankenstein to 2001: A Space Odyssey, humans have been both attracted to and fearful of non-human entities that might eventually control or even destroy us. Ambitious computers and killer robots have populated our imagination for many years; however, as we will see, key experts are now warning about AI itself. But let's start by reviewing the development of artificial intelligence.
 
A bit of history of AI.
 
In 1955, John McCarthy coined the term artificial intelligence while organizing the 1956 Dartmouth Summer Research Project on Artificial Intelligence (McCarthy et al., 1955). However, Alan Turing is considered not only a critical WWII code-breaker but also an essential contributor to AI. He proposed the Turing Test, in which a computer passes if it can convince people it is human in conversation.
 
Remember Deep Blue? Arthur Samuel worked for IBM for many years and, in his free time, developed a program that could play checkers (Brooks, 2017), a forerunner of the game-playing systems that led to Deep Blue, which in 1997 beat the world chess champion (Yao, 2022). I still remember when the news broke worldwide and generated a lot of debate about computers and AI, much like what is happening now with ChatGPT.
 
In the early 2000s, artificial neural networks improved greatly thanks to falling storage costs and the arrival of new types of chips, significantly advancing AI. Artificial neural networks "are systems that are similar to the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would" (Kleinman & Vallance, 2023), exponentially expanding AI's capabilities. Furthermore, "transformer" models were added to the mix in recent years, substantially accelerating development. Generative AI jumped into the limelight in late 2022, starting a race between tech giants to launch more AI programs (Grant & Weise, 2023).
 
AI now covers different areas, such as computer vision, automation, robotics, and natural language processing. One of the most important subfields of AI is Machine Learning.
 
Machine Learning in AI.
 
A simple definition of Machine Learning (ML) is when a program “performs a function with the data given to it and gets progressively better over time” (Grieve, 2023).
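That definition can be made concrete with a toy sketch: a one-parameter model that "gets progressively better over time" as it processes data. The data, learning rate, and iteration count below are invented for illustration only.

```python
# A minimal sketch of "performs a function with the data given to it and
# gets progressively better over time": fit y = w * x by gradient descent
# and watch the prediction error shrink. Toy data, made up for illustration.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x

w = 0.0               # the model's single parameter, starting from a bad guess
learning_rate = 0.05

def mean_squared_error(w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

errors = []
for step in range(100):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad        # nudge w in the direction that reduces error
    errors.append(mean_squared_error(w))

print(round(w, 3))               # converges close to 2.0, the true slope
print(errors[0] > errors[-1])    # True: the model improved with experience
```

Real ML systems do the same thing with millions of parameters and vastly more data, which is why the quantity and quality of data matter so much.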
 
ML systems rely on data, enormous quantities of it, which has raised questions about data privacy and protection, copyright, and inherited biases in the training information that affect a program's overall performance and can replicate existing discriminatory practices. Recently, Japan announced that it "will not enforce copyrights in data used in AI training" (Technomancers.ai, 2023), significantly impacting AI's further development, as companies will not be able to protect their copyrighted information.
 
There are three categories of ML: supervised, unsupervised, and reinforcement learning. Machado (2016) simply explains the differences between the three, including the role of rewards in reinforcement learning. However, it is hard to grasp how exactly the machine is rewarded, and I must keep researching to truly understand the mechanics behind ML. Here is what Bard came up with:
How does reinforcement learning work?
 
Reinforcement learning (RL) is a type of machine learning in which an agent learns to behave in an environment by trial and error. The agent is not explicitly programmed with the rules of the environment, but instead learns by interacting with it and receiving rewards for taking actions that lead to desired outcomes.
 
The basic idea behind RL is that the agent will eventually learn to take actions that maximize its rewards. This is done by trial and error, with the agent gradually exploring different actions and observing the consequences of those actions. Over time, the agent will learn to associate certain actions with positive rewards and other actions with negative rewards. This allows the agent to develop a policy, which is a set of rules that tells the agent how to behave in different situations. 
 
Reinforcement learning algorithms are used to train agents to learn policies that maximize their rewards. There are many different reinforcement learning algorithms, each with its own strengths and weaknesses. Some of the most popular reinforcement learning algorithms include Q-learning, policy gradient, and deep Q-learning. (Bard, 2023)
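Bard's description can be sketched in a few lines of code. Below is a hypothetical five-cell corridor "environment" of my own invention, where a Q-learning agent earns a reward only by reaching the rightmost cell; a toy illustration of the trial-and-error loop, not a real RL library.

```python
import random

# Q-learning on a made-up 5-cell corridor: the agent starts at cell 0 and
# receives a reward of 1.0 only when it reaches cell 4. Through trial and
# error it learns a policy: which action to prefer in each cell.

random.seed(0)
N_STATES = 5
ACTIONS = [1, -1]                  # move right or left
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise take the best-known action
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Core update: move Q toward (reward + discounted best future value)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy: in every non-terminal cell, "move right" scores higher
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)   # [1, 1, 1, 1]: move right everywhere
```

The "reward" is simply a number fed into the update rule: the agent is "rewarded" only in the sense that actions followed by higher numbers get chosen more often.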
 
Besides, there is also deep learning, which "structures algorithms in layers to create an 'artificial neural network' that can learn and make intelligent decisions on its own" (Grieve, 2023). Here is Bard's response to the question "How does deep learning work?":
Deep learning is a type of machine learning that uses artificial neural networks to learn from data. Artificial neural networks are inspired by the human brain, and they are made up of layers of interconnected nodes. Each node performs a simple mathematical operation, and the network learns to perform complex tasks by adjusting the weights of the connections between the nodes.
 
Deep learning is a powerful tool that can be used to solve a wide variety of problems. Some of the most common applications of deep learning include:
  • Image recognition: Deep learning has been used to develop algorithms that can recognize objects in images. These algorithms are used in a wide variety of applications, such as self-driving cars, facial recognition software, and medical image analysis.
  • Natural language processing: Deep learning has been used to develop algorithms that can understand natural language. These algorithms are used in a wide variety of applications, such as machine translation, speech recognition, and text summarization. 
  • Speech recognition: Deep learning has been used to develop algorithms that can recognize human speech. These algorithms are used in a wide variety of applications, such as voice-activated assistants, dictation software, and call centers.
  • Machine translation: Deep learning has been used to develop algorithms that can translate text from one language to another. These algorithms are used in a wide variety of applications, such as online translation tools, travel apps, and business software.
  • Medical image analysis: Deep learning has been used to develop algorithms that can analyze medical images, such as X-rays, MRI scans, and CT scans. These algorithms are used to diagnose diseases, plan surgeries, and monitor patient progress. (Bard, 2023).
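To make "layers of interconnected nodes" tangible, here is a hand-wired sketch of a two-layer network computing XOR, a function famously beyond the reach of any single-layer network. The weights are set by hand for illustration; in real deep learning they would be learned from data.

```python
# A tiny two-layer neural network, wired by hand, that computes XOR.
# Each node takes a weighted sum of its inputs and applies a simple
# operation (ReLU); stacking two layers lets the network represent a
# function no single layer can.

def relu(x):
    return max(0.0, x)

def xor_network(x1, x2):
    # Hidden layer: two nodes, each a weighted sum passed through ReLU
    h1 = relu(1.0 * x1 + 1.0 * x2)          # fires if either input is on
    h2 = relu(1.0 * x1 + 1.0 * x2 - 1.0)    # fires only if both are on
    # Output layer: one node combining the hidden nodes
    return 1.0 * h1 - 2.0 * h2

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_network(a, b))    # 0.0, 1.0, 1.0, 0.0
```

Training replaces the hand-set weights with values found by gradient descent over many examples, which is what "learning from data" means in practice.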
 
Large Language Models and Generative AI. 
 
According to the Center for Humane Technology (2023), AI experienced a massive jump a few years ago with the creation of AI “transformers” in Large Language Models (LLMs). 
 
LLMs are a subset of deep learning; they require vast amounts of data and pre-training, and they can be fine-tuned for specific purposes (Google Cloud Tech, 2023a). Therefore, only top tech companies, such as Meta and Google, can afford to develop these types of models.
 
Generative AI, also known as GenAI, is also a subset of deep learning and is related to LLMs, although, as I understand it, the two are not precisely the same. Google Cloud Tech (2023b) defines GenAI as a "type of AI that creates new content based on what it has learned from existing content," which is a crucial advance over other AI programs. ChatGPT and Bard are generative AI platforms.
 
Using foundation models, GenAI can create all sorts of new outputs, from videos to music and text, using natural language prompts rather than computer programming. It can even write new code, which is one source of the fears surrounding AI.
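At toy scale, the generative idea ("create new content based on what it has learned from existing content") can be illustrated with a simple Markov chain over words. The tiny corpus below is my own invention, and real systems like ChatGPT use transformer networks at vastly larger scale, but the principle is similar: learn patterns from existing text, then produce new text from those patterns.

```python
import random

# Toy generative model: learn which word tends to follow which in a small
# made-up corpus, then walk those learned transitions to produce new text.

corpus = (
    "diplomacy adapts to new technology and new technology reshapes "
    "diplomacy because technology changes how states communicate"
).split()

# "Training": record every observed word-to-word transition
transitions = {}
for current, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(current, []).append(nxt)

# "Generation": start from a prompt word and sample successors
random.seed(4)
word = "diplomacy"
output = [word]
for _ in range(8):
    choices = transitions.get(word)
    if not choices:      # dead end: this word was never followed by anything
        break
    word = random.choice(choices)
    output.append(word)

print(" ".join(output))  # a generated word sequence grounded in the corpus
```

Every pair of adjacent words in the output was seen in the corpus, yet the sequence as a whole can be new; scaled up enormously, this "predict the next token" idea underlies LLM-based generative AI.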
 
Generative AI’s explosion in recent months has renewed the call for regulation before it is too late. So far, as with most digital technologies, guidelines and norms are limited compared to most other industries. In the next section, I discuss this difference.
 
Differences between AI and other industry security standards. 
 
Since the launch of ChatGPT, the regulation of AI has generated one of the most important debates of our era. One of the fathers of AI, Geoffrey Hinton, resigned from Google to be able to call for regulating AI (Taylor & Hern, 2023; Kleinman & Vallance, 2023). Even the CEO of OpenAI, Sam Altman, "implored lawmakers to regulate artificial intelligence" in a Senate hearing (Kang, 2023). Other tech figures, including Steve Wozniak and Elon Musk, signed "Pause Giant AI Experiments: An Open Letter," asking "AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4" because of their inherent risks.
 
However, it is fascinating to compare the approach that we have to AI and any other industry. In this regard, Quebec’s Artificial Intelligence Institute’s Chief Executive, Valérie Pisano, states that:
“The technology is put out there, and as the system interacts with humankind, its developers wait to see what happens and make adjustments based on that. As a collective, we would never accept this mindset in any other industrial field. There’s something about tech and social media where we’re like: ‘Yeah, sure, we’ll figure it out later’” (Taylor & Hern, 2023).
 
Imagine taking the same approach to aviation, nuclear power, or basic appliances. The world would be in deep chaos without safety regulations in those fields. Yet, for some reason, that is how we are treating AI today.
 
Some countries are working on regulations. For example, the United States published the Blueprint for an AI Bill of Rights, the European Union is debating an AI Act, the Council of Europe is negotiating an AI and human rights accord, and UNESCO issued the Recommendation on the Ethics of Artificial Intelligence. However, with few exceptions, there are no binding regulations, and the implications for the world's economy and politics are immense.
 
In The A.I. Dilemma (2023), Raskin and Harris convincingly explain the dangers of AI and recommend that tech companies slow down the public deployment of AI systems so that regulation can catch up and catastrophic outcomes can be avoided. They cite a survey of AI experts in which "50% of A.I. researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI" (2022 Expert Survey on Progress in AI, 2022).
 
Manor (2023) evaluates the recent developments in AI, from the excitement about new opportunities to doomsday scenarios and the creation of new companies. He argues that all this activity follows what he calls the tech moguls' "disruptor/innovator playbook," designed to ensure that governments and societies let tech companies regulate themselves. So far, it seems to be working. However, I hope it does not take a massive AI-related incident to force governments into rushed regulation.
 
In another post, I will discuss the impact of AI on diplomacy, so stay tuned.
 
AI resources 
 
Here are some interesting resources about AI:
 
Institutes and other organizations:
 
DiploFoundation AI Diary.
 
DiploFoundation HumAInism (AI at Diplo).
 
DigWatch Artificial Intelligence by Geneva Internet Platform.
 
DigWatch AI governmental initiatives by Geneva Internet Platform.
 
MILA, Quebec Artificial Intelligence Institute.
 
OECD AI Observatory.
 
Global Partnership for AI.
 
AI Now Institute. 
 
Tech Policy Press.
 
Courses: 
 
AI for Everyone Course (Coursera) 
 
Videos:
 
Center for Humane Technology (2023, March 9). The A.I. Dilemma. [Video]. YouTube. https://youtu.be/xoVJKj8lcNQ.
 
Google Cloud Tech (2023a, May 8). Introduction to large language models. [Video]. YouTube. https://youtu.be/zizonToFXDs.
 
Google Cloud Tech (2023b, May 8). Introduction to Generative AI. [Video]. YouTube. https://youtu.be/G2fqAlgmoPo.
 
DiploFoundation (2023, February 7). Will AI take over diplomatic reporting? (WebDebate #56). [Video]. YouTube. https://www.youtube.com/live/QuRX-2NQ0zQ?feature=share.
 
DiploFoundation (2023, March 7). What role can AI play in diplomatic negotiation? (WebDebate #57). [Video]. YouTube. https://www.youtube.com/live/qm_JwZBrflE?feature=share.
 
DiploFoundation (2023, April 4). How to Train Diplomats to Deal With AI and Data? (WebDebate #58). [Video]. YouTube. https://www.youtube.com/live/m5KS3VY929Q?feature=share.
 
DiploFoundation (2023, May 2). What Can We Learn About AI Ethics and Governance From Non-Western Thought? (WebDebate #59). [Video]. YouTube. https://www.youtube.com/live/wdzQ26HYEmA?feature=share.
 
REFERENCES
 
Aratani, L. (2023, May 30). Nvidia becomes first chipmaker valued at more than $1tn amid AI boom. The Guardian.
Brooks, R. (2017, August 28). [For&AI] Machine Learning Explained. Rodney Brooks Robots, AI, and other Stuff Blog.
Center for Humane Technology (2023, March 9). The A.I. Dilemma. [Video]. YouTube. https://youtu.be/xoVJKj8lcNQ.
Gaona, M. (2023, May 15). Entering the singularity: Has AI reached the point of no return? The Hill.
Google Cloud Tech (2023a, May 8). Introduction to large language models. [Video]. YouTube. https://youtu.be/zizonToFXDs.
Google Cloud Tech (2023b, May 8). Introduction to Generative AI. [Video]. YouTube. https://youtu.be/G2fqAlgmoPo.
Grant, N. & Weise, K. (2023, April 7). In A.I. Race, Microsoft and Google Choose Speed Over Caution. The New York Times.
Grieve, P. (2023, May 23). Deep learning vs. machine learning. What’s the difference? Zendesk Blog. 
Jewell, C. (2019, June). Artificial intelligence: the new electricity. WIPO Magazine.
Kleinman, Z. & Vallance, C. (2023, May 3). AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google. BBC.
Machado, G. (2016, October 6). ML basics: Supervised, unsupervised and reinforcement learning. Medium blog.
Manor, I. (2023, June 6). Shock and Awe: How AI is Sidestepping Regulation. Exploring Digital Diplomacy Blog (digdipblog). 
Marr, B. (2018, February 14). The Key Definitions of Artificial Intelligence (AI) that Explain its Importance. Forbes.
McCarthy, J., Minsky, M.L., Rochester, N. & Shannon, C.E. (1955, August 31). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.
Pause Giant AI Experiments: An Open Letter. (2023, March 22). Future of Life Institute.
Taylor, J. & Hern, A. (2023, May 2). ‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation. The Guardian. 
Technomancers.ai. (2023, June 1). Japan Goes All In: Copyright Doesn’t Apply to AI Training. Communications of the ACM. 
Yao, D. (2022, May 10). 25 years ago today: how Deep Blue vs. Kasparov changed AI forever. AI Business.
 
 
DISCLAIMER: All views expressed on this blog are that of the author and do not represent the opinions of any other authority, agency, organization, employer or company.
    Rodrigo Márquez Lartigue 

    Diplomat interested in the development of Consular and Public Diplomacies. 
