
Speech: What is Artificial Intelligence?
One of the foundational problems facing the field of artificial intelligence is the concept of consciousness. How do you know if something is conscious? Our own consciousness is clear to us. We know that we are conscious because of that age-old saying: “I think, therefore I am.” Now, I want you all to look around yourselves. Look around at each other. These are your fellow classmates, many of whom you have known for a long time. How do you know that they’re conscious? How can you be sure that they have the same sentience as you? You can’t be sure, but you have faith that every human has conscious thought. After all, you yourself are human.
Imagine you’re no longer in this classroom; place yourself in front of a computer. Envision yourself talking with someone through that computer. You see the messages flow in, and the person sounds just like the classmate sitting beside you, but you can’t be sure they’re human. This problem has begun to hang over the development of artificial intelligence: sentience is a tricky thing to confirm, especially when you can’t even be sure the other party is human. As Rockwell Anyoha writes, “Google engineer Blake Lemoine had been tasked with testing the company’s artificially intelligent chatbot LaMDA for bias. A month in, he came to the conclusion that it was sentient.” Lemoine had analyzed LaMDA and concluded from its own responses that it was sentient. His claims were unfounded: of course an artificial intelligence designed to present itself as human will argue that it is both sentient and capable of emoting, two critical components of human consciousness. In reality, LaMDA is comparable to the autofill on your phone, looking at data (whether relevant or not) and filling in the blanks.
This problem was one of the foundations for the development of artificial intelligence. Alan Turing devised a test to see whether a machine could perfectly mimic a human. IBM credits him in its exploration of the foundations of AI, explaining that the “‘father of computer science’ asks the following question: ‘Can machines think?’ From there, he offers a test, now famously known as the ‘Turing Test,’ where a human interrogator would try to distinguish between a computer and human text response.” However, Turing’s test has largely fallen by the wayside; it is no longer considered an accurate measure of an AI’s intelligence, nor of its ability to mimic human thought. Jason P. Dinh explains that “the machine passes if it consistently fools the interviewer into thinking it is human. Experts today agree that the Turing test is a poor test for intelligence. It assesses how well machines deceive people under superficial conditions.” That is its largest criticism.
The most common perception of artificial intelligence is of machines like LaMDA: robots designed to mimic humans, inching closer to sentience. These are known as “strong AI.” IBM also calls this “general AI,” describing it as “a theoretical form of AI where a machine would have an intelligence equal to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future.” While strong AI remains theoretical, the vast majority of artificial intelligence in use today is instead known as “weak AI.” Weak AI is a bit of a misnomer (having nothing to do with its strength), as it is specific, powerful, and widespread, designed to do one thing well. Another common name for it is narrow AI. Think of Siri, artificially generated art, or even self-driving cars. This is weak AI, and despite its name, it powers nearly all systems that use artificial intelligence.
Artificial intelligence is developing more and more by the day. I’m not going to stand up here and warn you that AI is going to bring about doomsday for us. But I’m also not going to lie to you. Experts on artificial intelligence, like Sam Harris, even go so far as to say that “there’s an increasing body of evidence that now suggests that beyond a certain intelligence threshold, AI could become intrinsically dangerous.” He argues that this is because AI comes up with “creative” ways of achieving the objectives it is programmed for. For example, if you ask an artificial intelligence to make you the richest person in the world, there are dozens of ways it could go about doing so. It may choose to help you become rich and surpass all other billionaires, or it might choose a more violent path and, as Harris explains, “kill everyone on planet Earth, turning you into the richest person in the world by default.” Many people don’t understand the potential negative consequences of improperly handled AI.
This is why the way we move forward matters. We must handle artificial intelligence responsibly without stifling its development. It has the capacity to extend human knowledge beyond anything thought possible, and while we have control over AI today, AI may soon have control over us. For all its apparent intelligence, AI is still stupid in important ways, and it needs human intervention to develop properly. Technology is ever-changing, and it’s up to us to ensure that we do what we must to grow alongside it.
Bibliography
Anyoha, Rockwell. “The History of Artificial Intelligence.” Science in the News, 23 Apr. 2020. Accessed 4 Oct. 2022.
Dinh, Jason P. “How Will We Know When Artificial Intelligence Is Sentient?” Discover Magazine, 5 July 2022. Accessed 4 Oct. 2022.
IBM Cloud Education. “What Is Artificial Intelligence (AI)?” IBM. Accessed 4 Oct. 2022.