Note: This post was written in June 2022. A couple of months ago, I was lucky enough to attend the 2022 International Studies Association (ISA) Convention in Nashville, Tennessee. This time, the convention was hybrid: a two-day online section followed by a four-day in-person gathering. Participating in a convention allows you to join the global conversation about International Affairs. On this occasion, I had two papers accepted, one online about Mexico's gastrodiplomacy and one in-person on Mexico's consular diplomacy. In addition, I chaired a panel on diaspora diplomacy and served as the discussant for a poster presentation. Presenting a paper at a convention is a very different experience from just attending. In the former, you must prepare, polish, and finish your essay on time, and you must be able to summarize your findings in no more than ten minutes. For these reasons, showcasing your research is a far more engaging and rewarding experience.
Attending the convention also gives you a chance to meet great scholars. I had the opportunity to speak with Prof. Paul Sharp from the University of Minnesota Duluth. Besides receiving two book recommendations for my class, I also got a couple of the much-sought-after "drink tickets," which I knew nothing about because the previous convention was held online due to the pandemic. You can read about my experience in this blog post.

In ISA's Twitter feed, there were comments about the reduced number of participants. I don't think it was a bad thing, because it allowed attendees to interact more deeply. Of course, it was awkward to see panels with no audience or with fewer than half of the scheduled scholars. You just have to make the most of the situation. Being a practitioner among scholars sometimes felt a bit strange; however, a frank discussion between the two groups could add value to the work we all do. Having a set of practitioner panels and a combination of academic and practitioner roundtables would be interesting.

An added value of conventions is that you go to places you would most likely never visit on your own. This was the case with Nashville, also known as Music City and currently a hot spot for tech companies. Everywhere, there were construction sites. And don't get me started on the Gaylord Opryland Resort. One thing I missed from the online convention format was not having to wait 15 minutes for an expensive cup of coffee.

Attending the ISA Convention in Nashville allowed me to join the global conversation on subjects that will determine the fate of many countries and maybe even the planet.

DISCLAIMER: All views expressed on this blog are those of the author and do not represent the opinions of any other authority, agency, organization, employer, or company.
November 2022 may prove to be a historic moment for humanity. OpenAI launched its ChatGPT Artificial Intelligence (AI) program, which is already changing our world. A leading expert, Andrew Ng, equates the coming AI advances that will radically transform society to the arrival of electricity at the end of the 19th century (Jewell, 2019). And just a couple of weeks ago, the stock of Nvidia, a semiconductor company that produces chips for AI computing, skyrocketed 25 percent in one day (Aratani, 2023), making it more valuable than Intel.

AI has been in use for some time without much hoopla. We have been using it in voice assistants such as Amazon's Alexa, Apple's Siri and HomePod, and Google's Home. AI also powers music recommendations on Spotify and product recommendations on Amazon's webpage. However, in November 2022, ChatGPT propelled artificial intelligence into the limelight of the tech revolution. Why? It was the first question I had after participating in the Summit on Digital Diplomacy and Governance, organized by DiploFoundation, but I did not know the answer. Fortunately, now I think I can answer it: ChatGPT is a Generative AI that can produce new content, it is very easy to use, and it is widely accessible (it was the fastest platform to reach 100 million users).

Since then, I have been trying to understand the basics of AI, but I am still struggling. Right now, there is so much information about it that it is hard to find reliable resources. Many are riding the AI wave, taking advantage of its novelty and of the public's unfamiliarity with the technology. In this blog post, I share my understanding of AI's basics, while another post will focus on its impact on diplomacy and other fields. Warning: I have also been playing around with Google's Bard and Microsoft's Bing.chat on an Edge browser; therefore, you will find some of their outputs in this blog post.

So, what is Artificial Intelligence?
According to Bing.chat: "Artificial Intelligence (AI) is a field of computer science that aims to create intelligent machines that work and learn like humans. AI is based on the idea that machines can be made to think and learn like humans. It involves the development of algorithms and computer programs that can perform tasks that would normally require human intelligence." AI is often divided into two types: narrow systems built for specific tasks, and general systems that could match human intelligence across many domains. The AI we use daily is mostly the first type, also known as Weak AI, while the second is usually defined as Strong AI. A significant milestone that we have not reached is when computer programs achieve technological singularity, which is "a point in time when humans lose control over their technological inventions and subsequent developments due to the rise of machine consciousness and, as a result, their superior intelligence" (Gaona, 2023).

From Frankenstein to 2001: A Space Odyssey, humans have been both attracted to and fearful of non-human entities that might eventually control or even destroy us. Ambitious computers and killer robots have been part of our imagination for many years; however, as we will see, key experts are now warning about AI. But let's start by looking over the development of artificial intelligence.

A bit of history of AI. In 1955, John McCarthy coined the term artificial intelligence while organizing the 1956 Dartmouth Summer Research Project on Artificial Intelligence (McCarthy et al., 1955). However, Alan Turing is considered not only a critical WWII code-breaker but also an essential contributor to AI. He devised the Turing Test, in which a computer tries to pass as a human while conversing with people. Remember Deep Blue? Arthur Samuel worked for IBM for many years, and in his free time he developed a program that could play checkers (Brooks, 2017), a forerunner of game-playing systems such as Deep Blue, which in 1997 beat the chess world champion (Yao, 2022).
I still remember when the news broke worldwide and generated a lot of debate about computers and AI, much like what is happening now with ChatGPT. In the early 2000s, artificial neural networks improved greatly thanks to falling storage costs and the arrival of new types of chips, significantly advancing AI. Artificial neural networks "are systems that are similar to the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would" (Kleinman & Vallance, 2023), exponentially expanding AI's capabilities. Furthermore, "transformer" models were added to the mix in recent years, substantially accelerating AI's development. Generative AI jumped into the limelight in late 2022, starting a race between tech giants to launch more AI programs (Grant & Weise, 2023). AI now covers different areas, such as computer vision, automation, robotics, and natural language processing. One of the most important subfields of AI is Machine Learning.

Machine Learning in AI. A simple definition of Machine Learning (ML) is when a program "performs a function with the data given to it and gets progressively better over time" (Grieve, 2023). ML systems rely on data, enormous quantities of it, which has raised questions about data privacy and protection, copyright, and inherited biases in the training data that affect a program's overall performance and can replicate existing discriminatory practices. Recently, Japan announced that it "will not enforce copyrights in data used in AI training" (Technomancers.ai, 2023), a decision that may significantly affect AI's further development, as companies won't be able to keep their copyrighted material out of training data. There are three categories of ML: supervised, unsupervised, and reinforcement learning. Machado (2016) clearly explains the differences between these three categories, including the role of rewards in reinforcement learning.
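To make the "gets progressively better over time" idea concrete, here is a toy supervised-learning sketch. This is my own illustration, not from any of the sources cited: the program is given labelled examples (inputs paired with correct answers) and repeatedly adjusts two numbers to shrink its prediction error.

```python
# Toy supervised learning: fit the rule y = 2x + 1 from labelled
# examples using gradient descent. The program "gets progressively
# better over time" as its error on the training data shrinks.

def train(examples, steps=2000, lr=0.01):
    w, b = 0.0, 0.0  # start with an uninformed guess
    n = len(examples)
    for _ in range(steps):
        # Average gradient of the squared prediction error
        gw = sum(2 * (w * x + b - y) * x for x, y in examples) / n
        gb = sum(2 * (w * x + b - y) for x, y in examples) / n
        # Nudge the parameters in the direction that reduces the error
        w -= lr * gw
        b -= lr * gb
    return w, b

data = [(x, 2 * x + 1) for x in range(10)]  # labelled examples
w, b = train(data)
print(round(w, 2), round(b, 2))  # close to 2 and 1
```

The same pattern, with millions of parameters instead of two, is what large commercial systems do; only the scale and the model architecture differ.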
However, it is hard to understand how the machine is rewarded, and I must continue researching to truly grasp the mechanics behind ML. Here is what Bard came up with when asked: How does reinforcement learning work?

Besides these categories, there is also deep learning, which "structures algorithms in layers to create an 'artificial neural network' that can learn and make intelligent decisions on its own" (Grieve, 2023). Here is Bard's response to the question "How does deep learning work?": Deep learning is a type of machine learning that uses artificial neural networks to learn from data. Artificial neural networks are inspired by the human brain, and they are made up of layers of interconnected nodes. Each node performs a simple mathematical operation, and the network learns to perform complex tasks by adjusting the weights of the connections between the nodes.

Large Language Models and Generative AI. According to the Center for Humane Technology (2023), AI experienced a massive jump a few years ago with the introduction of "transformer" models into Large Language Models (LLMs). LLMs are a subset of Deep Learning that require vast amounts of data and pre-training and that can be fine-tuned for specific purposes (Google Cloud Tech, 2023a); therefore, only top tech companies, such as Meta and Google, can afford to develop this type of model. Generative AI, also known as GenAI, is also a subset of Deep Learning and is related to LLMs but, as I understand it, not precisely the same. Google Cloud Tech (2023b) defines GenAI as a "type of AI that creates new content based on what it has learned from existing content," which is a crucial advance over earlier AI programs. ChatGPT and Bard are Generative AI platforms. Using foundation models, GenAI can create all sorts of new outputs, from videos to music and text, using natural-language prompts rather than computer programming. It can even write new code, which is one source of the fears surrounding AI.
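On the question of how the machine is rewarded: in reinforcement learning, the program receives a numeric reward signal from its environment and updates its estimate of how good each action is in each situation. Here is a minimal sketch of my own (a made-up toy environment, not drawn from Machado or Bard) using the classic Q-learning update:

```python
import random

# Toy reinforcement learning: an agent on positions 0..4 learns to
# walk right to reach a reward at position 4. A numeric reward
# (+1 at the goal, 0 elsewhere) is the only feedback it gets.

random.seed(0)
GOAL = 4
ACTIONS = (-1, +1)  # step left or step right
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

for _ in range(500):  # training episodes
    s = 0
    while s != GOAL:
        # Explore sometimes; otherwise take the best-known action
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt = min(max(s + a, 0), GOAL)
        r = 1.0 if nxt == GOAL else 0.0
        # Q-learning update: shift the estimate toward the reward
        # plus the discounted value of the best next action
        best_next = 0.0 if nxt == GOAL else max(Q[(nxt, a2)] for a2 in ACTIONS)
        Q[(s, a)] += 0.5 * (r + 0.9 * best_next - Q[(s, a)])
        s = nxt

# The learned policy prefers stepping right in every position
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

The key point is that nobody labels the "correct" action; the agent discovers it by trial and error, guided only by the reward signal.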
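To illustrate "creates new content based on what it has learned from existing content," here is a deliberately tiny generative sketch of my own: it learns which word follows which in a short text, then samples new sequences from those statistics. Real LLMs use transformer neural networks trained on vast corpora, but the generate-from-learned-patterns idea is analogous.

```python
import random
from collections import defaultdict

# Toy generative model: learn word-to-next-word statistics (bigrams)
# from a tiny corpus, then generate new text by sampling from them.
corpus = "diplomacy shapes policy and policy shapes diplomacy and trust".split()

# Count which words were observed to follow which
table = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    table[cur].append(nxt)

def generate(start, length=6, seed=1):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = table.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("policy"))
```

The output is a sequence that never appears verbatim in the corpus, yet every word transition was learned from it; that, in miniature, is what "new content from existing content" means.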
Generative AI's explosion in recent months has renewed calls for regulation before it is too late. So far, as with most digital technologies, guidelines and norms are limited compared with those of most other industries. In the next section, I discuss this difference.

Differences between AI and other industries' safety standards. Since the launch of ChatGPT, the regulation of AI has become one of the most important debates of our era. One of the fathers of AI, Geoffrey Hinton, resigned from Google to be able to speak freely about the need to regulate AI (Taylor & Hern, 2023; Kleinman & Vallance, 2023). Even the CEO of OpenAI, Sam Altman, "implored lawmakers to regulate artificial intelligence" in a Senate hearing (Cang, 2023). Other tech figures, including Steve Wozniak and Elon Musk, signed Pause Giant AI Experiments: An Open Letter, which asks "AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4" due to their inherent risks.

However, it is fascinating to compare our approach to AI with that of any other industry. In this regard, the chief executive of Quebec's artificial intelligence institute, Valérie Pisano, states: "The technology is put out there, and as the system interacts with humankind, its developers wait to see what happens and make adjustments based on that. As a collective, we would never accept this mindset in any other industrial field. There's something about tech and social media where we're like: 'Yeah, sure, we'll figure it out later'" (Taylor & Hern, 2023). Imagine taking the same approach to aviation, nuclear power, or basic appliances. The world would be in deep chaos without safety regulations in those areas. Yet, for some reason, this is how we are treating AI today. Some countries are working on regulations.
For example, the United States published the Blueprint for an AI Bill of Rights, the European Union is debating an AI Act while the Council of Europe is negotiating an AI and human rights accord, and UNESCO issued the Recommendation on the Ethics of Artificial Intelligence. However, there are no binding regulations, with a few exceptions, and the implications for the economy and politics of the planet are immense.

In The A.I. Dilemma (2023), Raskin and Harris convincingly explain the dangers of AI, and they recommend that tech companies slow down the public deployment of AI systems so that regulation can catch up and catastrophic results can be avoided. They cite a survey of AI experts in which "50% of A.I. researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI" (2022 Expert Survey on Progress in AI, 2022). Manor (2023) evaluates the recent developments in AI, from excitement about new opportunities to doomsday scenarios and the creation of new companies. He argues that all these activities, which tech moguls call the "disruptor/innovator playbook," are meant to ensure that governments and societies allow tech companies to self-regulate. So far, it seems to be working. I only hope that no massive AI-related incident forces governments to rush into regulation.

In another post, I discuss the impact of AI on diplomacy (diplomacy-40-how-artificial-intelligence-is-changing-diplomacy.html), so stay tuned.
AI resources

Here are some interesting resources about AI:

Institutes and other organizations:
DiploFoundation AI Diary.
DiploFoundation HumAInism (AI at Diplo).
DigWatch Artificial Intelligence, by Geneva Internet Platform.
DigWatch AI governmental initiatives, by Geneva Internet Platform.
MILA, Quebec Artificial Intelligence Institute.
OECD AI Observatory.
Global Partnership for AI.
AI Now Institute.
Tech Policy Press.

Courses:
AI for Everyone (Coursera)

Videos:
Center for Humane Technology (2023, March 9). The A.I. Dilemma. [Video]. YouTube. https://youtu.be/xoVJKj8lcNQ
Google Cloud Tech (2023a, May 8). Introduction to large language models. [Video]. YouTube. https://youtu.be/zizonToFXDs
Google Cloud Tech (2023b, May 8). Introduction to Generative AI. [Video]. YouTube. https://youtu.be/G2fqAlgmoPo
DiploFoundation (2023, February 7). Will AI take over diplomatic reporting? (WebDebate #56). [Video]. YouTube. https://www.youtube.com/live/QuRX-2NQ0zQ?feature=share
DiploFoundation (2023, March 7). What role can AI play in diplomatic negotiation? (WebDebate #57). [Video]. YouTube. https://www.youtube.com/live/qm_JwZBrflE?feature=share
DiploFoundation (2023, April 4). How to Train Diplomats to Deal With AI and Data? (WebDebate #58). [Video]. YouTube. https://www.youtube.com/live/m5KS3VY929Q?feature=share
DiploFoundation (2023, May 2). What Can We Learn About AI Ethics and Governance From Non-Western Thought? (WebDebate #59). [Video]. YouTube. https://www.youtube.com/live/wdzQ26HYEmA?feature=share

REFERENCES

Aratani, L. (2023, May 30). Nvidia becomes first chipmaker valued at more than $1tn amid AI boom. The Guardian.
Brooks, R. (2017, August 28). [FoR&AI] Machine Learning Explained. Rodney Brooks: Robots, AI, and Other Stuff blog.
Center for Humane Technology (2023, March 9). The A.I. Dilemma. [Video]. YouTube. https://youtu.be/xoVJKj8lcNQ
Gaona, M. (2023, May 15). Entering the singularity: Has AI reached the point of no return? The Hill.
Google Cloud Tech (2023a, May 8). Introduction to large language models. [Video]. YouTube. https://youtu.be/zizonToFXDs
Google Cloud Tech (2023b, May 8). Introduction to Generative AI. [Video]. YouTube. https://youtu.be/G2fqAlgmoPo
Grant, N. & Weise, K. (2023, April 7). In A.I. Race, Microsoft and Google Choose Speed Over Caution. The New York Times.
Grieve, P. (2023, May 23). Deep learning vs. machine learning: What's the difference? Zendesk blog.
Jewell, C. (2019, June). Artificial intelligence: the new electricity. WIPO Magazine.
Kleinman, Z. & Vallance, C. (2023, May 3). AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google. BBC.
Machado, G. (2016, October 6). ML basics: Supervised, unsupervised and reinforcement learning. Medium blog.
Manor, I. (2023, June 6). Shock and Awe: How AI is Sidestepping Regulation. Exploring Digital Diplomacy Blog (digdipblog).
Marr, B. (2018, February 14). The Key Definitions of Artificial Intelligence (AI) That Explain Its Importance. Forbes.
McCarthy, J., Minsky, M.L., Rochester, N. & Shannon, C.E. (1955, August 31). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.
Pause Giant AI Experiments: An Open Letter. (2023, March 22).
Taylor, J. & Hern, A. (2023, May 2). 'Godfather of AI' Geoffrey Hinton quits Google and warns over dangers of misinformation. The Guardian.
Technomancers.ai. (2023, June 1). Japan Goes All In: Copyright Doesn't Apply to AI Training. Communications of the ACM.
Yao, D. (2022, May 10). 25 years ago today: How Deep Blue vs. Kasparov changed AI forever. AI Business.
Rodrigo Márquez Lartigue: Diplomat interested in the development of Consular and Public Diplomacies.