Copyright MIT Sloan Review

Human vs. AI — or — Human + AI

Ari Chanen, PhD

--

A recently published New Yorker article describes a 10-year effort by IBM to build an AI system, called Project Debater, that could rival humans at political debate.

IBM has an impressive history of putting forth and meeting grand challenges. Its Deep Blue system defeated chess grandmaster Garry Kasparov in a six-game match in 1997, the first time a computer had beaten a reigning world champion in match play. Then, in 2011, IBM's Watson defeated two of the best human Jeopardy! champions and again captured the public's imagination with an AI system that seemed to understand and effectively use humor, puns, analogies, and logic to find the right answers. Project Debater continues this impressive series of forays into human activities once thought off-limits to computers.

The New Yorker article reveals that in a February 2019 debate match between Project Debater and a human debating champion, the audience judged that the human had edged out the computer. Even so, the fact that an AI system held its own against a champion is very impressive to me. The author, Benjamin Wallace-Wells, made some interesting observations. First, Project Debater will always have a huge advantage over humans because it can search 400 million documents, from sources such as LexisNexis, in minutes to find relevant facts and supporting evidence with which to construct its arguments. Second, humans still have a strategic edge: they can come up with subtle arguments that go beyond the facts and evidence, framing issues in ways that appeal to an audience of human judges. The technology behind Project Debater will certainly be useful, if not for arguing on stage with humans, then for analyzing debates happening online and elsewhere. According to the article, the system's creator is now using it to analyze arguments against the COVID-19 vaccine.

I also learned a few interesting facts about political debate that show how computer science can shed light on distinctly human endeavors. Expert debaters learn that there are a limited number of abstract types of argumentation. For instance, an argument for banning some activity or substance has the same structure regardless of the domain; only the specifics change. Expert debaters are adept at quickly identifying which argument types might apply to a given question. The article reveals that a computer scientist on the IBM team estimated that there are only between 50 and 70 distinct types of arguments. That kind of knowledge is part of what made Project Debater so good, beyond just its identification of facts and evidence. Because the number of argument types is limited, debate becomes amenable to being treated like a game, with moves, tactics, and strategy.
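To make the idea concrete, here is a minimal sketch, in Python, of arguments as reusable templates that can be instantiated in any domain. The two templates and their wording are my own illustrative inventions, not IBM's actual taxonomy of argument types:

```python
# A toy illustration of the idea that debate arguments fall into a small
# number of reusable types. Each template is an abstract argument shape
# that can be filled in with domain-specific facts.

TEMPLATES = {
    "ban_black_market": "Banning {subject} will push it underground, "
                        "creating an unregulated black market.",
    "ban_protects_vulnerable": "Banning {subject} protects {group}, who "
                               "bear most of its harms.",
}

def instantiate(template_id: str, **specifics: str) -> str:
    """Fill an abstract argument template with domain-specific facts."""
    return TEMPLATES[template_id].format(**specifics)

# The same "move" works whether the motion concerns gambling or sugary drinks.
print(instantiate("ban_black_market", subject="online gambling"))
print(instantiate("ban_protects_vulnerable",
                  subject="sugary drinks", group="children"))
```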

This amazing work from IBM reminds me of a recurring theme in human vs. AI competitions. In complex games with huge search spaces, AI almost always outperforms humans at exploring the short-term, tactical options; humans, however, often prove to have a better grasp of long-term strategy. Chess is a great example of this dichotomy between human strategy and machine tactics. After losing to IBM's Deep Blue in 1997, Garry Kasparov started a new kind of chess competition in which humans and computers worked together as a team. So far, a top-class team of a human and an AI together can beat either one separately. The AI is superior at identifying a short list of the best next moves, while the human is better at picking the move that makes the most strategic sense.
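Here is a minimal sketch of that division of labor. The engine interface (propose_candidates) is a hypothetical stand-in for a real chess engine, and the canned moves and evaluations are invented purely for illustration:

```python
# Sketch of the human-plus-engine loop: the engine proposes a short list
# of tactically strong candidate moves; the human picks the one that
# best fits the long-term plan.

from dataclasses import dataclass

@dataclass
class Candidate:
    move: str            # move in algebraic notation, e.g. "Nf3"
    engine_eval: float   # engine's tactical evaluation, in pawns

def propose_candidates(position: str, k: int = 3) -> list[Candidate]:
    """Hypothetical engine call: return the k best moves for a position.

    A real implementation would hand the position to a chess engine and
    read back its analysis; here we return canned data.
    """
    return [
        Candidate("Nf3", +0.35),
        Candidate("d4", +0.32),
        Candidate("c4", +0.30),
    ][:k]

def human_pick(candidates: list[Candidate]) -> Candidate:
    """The human's role: choose among near-equal tactical options on
    strategic grounds. Here we simply prompt at the console."""
    for i, c in enumerate(candidates):
        print(f"{i}: {c.move} (engine eval {c.engine_eval:+.2f})")
    choice = int(input("Pick the move that fits your plan: "))
    return candidates[choice]

if __name__ == "__main__":
    position = "startpos"  # placeholder for a real position encoding
    best = human_pick(propose_candidates(position))
    print(f"Playing {best.move}")
```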

Another field in which AI systems are enhancing expert teams is radiology. With the rise of deep-learning-driven image analysis, some AI and medical experts predicted that human radiologists would soon be obsolete. However, in many studies, the combination of human knowledge and AI image analysis does better, on average, than either one alone. Although I don't know whether the analysis of radiological images maps exactly onto a game with moves, tactics, and strategy, I do believe that, compared with an AI system, human experts in this field can draw on a much richer body of knowledge about the patient and the broader medical context. AI systems, on the other hand, are trained on many more correctly labeled image/diagnosis pairs than a single radiologist could ever see in a lifetime of practice. It therefore makes sense to combine the AI's and the expert radiologist's opinions to get the best results.
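As a sketch of the simplest possible combination, one could average the two opinions. The weighting and the example numbers below are illustrative assumptions, not a method or data from any of the studies mentioned:

```python
# A toy sketch of one simple way to combine an AI model's output with a
# radiologist's judgment: a weighted average of the two probabilities.

def combined_risk(ai_prob: float, radiologist_prob: float,
                  ai_weight: float = 0.5) -> float:
    """Blend the AI's predicted probability of disease with the
    radiologist's estimate. ai_weight = 1.0 trusts the model alone;
    0.0 trusts the human alone."""
    return ai_weight * ai_prob + (1.0 - ai_weight) * radiologist_prob

# Example: the model is confident, but the radiologist is doubtful
# because of patient history the model never saw.
score = combined_risk(ai_prob=0.85, radiologist_prob=0.40)
print(f"combined probability: {score:.2f}")  # 0.62 with equal weights
```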

I predict that it may not be long before there are debate competitions that team humans up with AI. In the first 15 minutes after receiving the debate question, the computer will search and organize the relevant facts and evidence and then present a ranked set of the best arguments for the human to consider before the debate begins. As the debate continues, the AI and the human will work together to respond to each "debate move" from the opponent.
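A toy sketch of that prep phase might look like the following, with naive keyword-overlap scoring standing in for real search over millions of documents; the motion and evidence snippets are invented:

```python
# Given the debate motion, retrieve and rank snippets of evidence.
# Real systems would query a full-text index over a huge corpus; this
# toy version scores a handful of snippets by word overlap.

def score(motion: str, snippet: str) -> int:
    """Count how many words of the motion appear in the snippet."""
    motion_words = set(motion.lower().split())
    return sum(1 for w in snippet.lower().split() if w in motion_words)

def rank_evidence(motion: str, snippets: list[str], k: int = 3) -> list[str]:
    """Return the k snippets most relevant to the motion."""
    return sorted(snippets, key=lambda s: score(motion, s), reverse=True)[:k]

motion = "we should subsidize preschool"
corpus = [
    "studies show preschool attendance improves later test scores",
    "subsidies distort markets and raise costs",
    "weather patterns shifted in the last decade",
]
for s in rank_evidence(motion, corpus):
    print(s)
```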

I do think that, eventually, AI systems alone will be able to outperform humans in many areas. But as long as the AI we have is narrow AI, the current pattern of humans and computers combining their strengths to optimize outcomes seems likely to hold.

--