My notes on Janelle Shane's TED talk, "The Danger of AI Is Weirder than You Think"
Though the singularity is said to be fast approaching, artificial intelligence's computing power is currently equivalent to that of an earthworm. Research scientist Janelle Shane urges programmers to bear that in mind when assigning tasks to AI.
With a series of examples that resembles an AI blooper reel, she illustrates the hilarious and sometimes tragic misadventures of programming AI. As algorithms' role in human lives grows, understanding AI's limitations is critical.
Artificial intelligence (AI) isn’t as smart as many assume.
When coders from a middle school fed more than 1,600 ice cream flavours into a self-learning algorithm to see what new tantalizing combinations it would generate, the algorithm invented flavours such as “pumpkin trash break” and “peanut butter slime.” No, AI is not trying to poison humans. Rather, it didn’t understand the problem assigned to it.
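The idea behind a flavour generator like this can be sketched in a few lines: a text model learns only which letters tend to follow which in the training names, then samples new strings from those statistics. It knows nothing about taste, which is why the output can be plausible-looking nonsense. A minimal, hypothetical sketch (tiny made-up flavour list, not the students' actual data or model):

```python
import random
from collections import defaultdict

# Hypothetical miniature training set; the real project used 1,600+ names.
FLAVOURS = ["vanilla", "chocolate", "strawberry", "pistachio", "caramel"]

def build_model(names):
    # Record, for each character, every character observed to follow it.
    # "^" marks the start of a name and "$" marks the end.
    model = defaultdict(list)
    for name in names:
        for current, following in zip("^" + name, name + "$"):
            model[current].append(following)
    return model

def generate(model, seed=1, max_len=12):
    # Walk the chain: repeatedly sample a plausible next character.
    rng = random.Random(seed)
    ch, out = "^", ""
    while len(out) < max_len:
        ch = rng.choice(model[ch])
        if ch == "$":
            break
        out += ch
    return out

model = build_model(FLAVOURS)
print(generate(model))  # a new string built from letter statistics alone
```

The generator only understands which letters co-occur, not what flavours mean, mirroring how the real algorithm produced "pumpkin trash break" without any intent behind it.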
“The danger of AI is not that it’s going to rebel against us; it’s that it’s going to do exactly what we ask it to do.”
AI is about as smart as an earthworm or perhaps a honeybee. So AI will attempt to execute human demands if it can, but it might not always complete tasks in ways that humans expect.
Algorithms rely on precise communication and data from coders to solve problems. In one research study, programmers asked an AI program to assemble, from a collection of parts, a robot that could get from point A to point B. Instead of assembling a walking robot, the AI constructed a tower from the parts at point A, then toppled it to reach point B. The AI completed the set task, but its method didn't employ the logic the programmers had in mind.
This demonstrates a key AI limitation: without more precise parameters, the AI will satisfy the letter of the goal rather than its intent.
“It’s really easy to accidentally give AI the wrong problem to solve, and often we don’t realize that until something has actually gone wrong.”
In the past, programmers wrote precise step-by-step code to instruct machines on how to execute discrete tasks; think of automated CNC machines. With AI, by contrast, programmers give the technology a goal and let it figure out how to reach the destination through trial and error.
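The contrast can be sketched with a toy problem (all names here are hypothetical, not from the talk): the traditional version spells out every step of the route, while the AI-style version is handed only a goal and a score, and blindly tweaks its state, keeping whatever scores better.

```python
import random

def explicit_route(start, goal):
    # Traditional programming: every step is spelled out in advance.
    return list(range(start, goal + 1))

def trial_and_error(goal, steps=5000, seed=0):
    # AI-style approach: only a goal and a score are given; the program
    # tries random tweaks and keeps any that bring it closer.
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        candidate = position + rng.choice([-1, 1])
        # Accept the tweak only if it improves the score (distance to goal).
        if abs(goal - candidate) < abs(goal - position):
            position = candidate
    return position

print(explicit_route(0, 3))   # every intermediate step is prescribed
print(trial_and_error(42))    # the path emerges; only the goal was given
```

Nothing in `trial_and_error` says *how* to travel, which is exactly why such systems can find unexpected shortcuts, like the tower-toppling robot above.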
One software engineer created a game-like interface and asked AI to design a robot that could negotiate an obstacle course. But the engineer learned that he had to specify that he wanted the AI to use legs and that he had to restrict the length of those legs. Otherwise, the AI designed legs so long that the robot could traverse the entire obstacle course with one giant step.
Wrongly designed AI can be disastrous.
Feeding imprecise or incomplete data to AI can have unforeseen, sometimes disastrous results. I discussed the importance of designing AI right in my previous post, Human Compatible.
A team of researchers taught an AI to identify pictures of a tench, a fish, and then asked which parts of the image it had used to make the identification. The AI responded by highlighting human fingers. As the tench is a trophy fish, many of the training pictures included the hands of anglers proudly holding their catch, so the AI never learned that the fingers weren't part of the fish.
“It is through the data that we often accidentally tell AI to do the wrong thing.”
In 2016, a Tesla operating on Autopilot was involved in a fatal accident when the controlling AI failed to brake for a truck that unexpectedly pulled out in front of the car. Tesla had designed the AI for highways, but the driver had activated Autopilot on city streets. The AI could recognize trucks only from behind, as viewed on a highway, not from the side. It likely perceived the truck as a road sign it could drive beneath.
Hopefully, Tesla will be able to improve its Autopilot algorithm.
Programmers must ensure that AI works in nondestructive ways.
Amazon scrapped an algorithm that sorted résumés based on those of existing employees when the company discovered that the AI unexpectedly discriminated against female candidates. From the examples it had received as input, the program had learned to weed out the résumés of applicants who attended women's colleges or belonged to women's groups, replicating the unconscious bias in the data: a systemic error that overlooks competent female candidates.
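The mechanism can be illustrated with a toy word scorer (the data below is invented for illustration, not Amazon's system): if past hiring decisions were biased, a model that simply counts which words appear in accepted versus rejected résumés will absorb that bias as if it were signal.

```python
from collections import Counter

# Hypothetical toy training set with a biased labelling pattern:
# label 1 = hired, label 0 = rejected.
TRAINING = [
    ("captain chess club", 1),
    ("lead engineer intern", 1),
    ("built compiler project", 1),
    ("captain women's chess club", 0),
    ("women's coding society lead", 0),
]

def learn_word_scores(examples):
    # Score each word by how often it appears in hired vs. rejected résumés.
    hired, rejected = Counter(), Counter()
    for text, label in examples:
        (hired if label else rejected).update(text.split())
    words = set(hired) | set(rejected)
    return {w: hired[w] - rejected[w] for w in words}

scores = learn_word_scores(TRAINING)
# The model has quietly learned that "women's" predicts rejection,
# reproducing the bias baked into its training data.
print(scores["women's"])  # negative score
print(scores["captain"])  # neutral: appears on both sides
```

No one told the model to penalize that word; the skewed examples did. That is the sense in which the data, not the code, taught the AI the wrong thing.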
When social media algorithms' only goal is to increase engagement, they tend to recommend content that propagates conspiracy theories and hate messages. Programmers must keep AI's limitations in mind: AI is certainly not the omniscient technology depicted in science fiction.
In conclusion, AI should be designed to benefit and enhance the human experience, but its algorithms must be coded carefully: for now, an AI's ability depends heavily on its programming, not on genuine self-directed understanding.
Read other posts on AI or Artificial Intelligence.