May 07, 2014
Rachel Greenberg writes marketing content for Automation GT in San Diego, CA.
This past January, Dr. Gary Marcus wrote an article for The New Yorker in which he argued that the recent smattering of news stories reporting massive leaps toward cognitive functionality in AI should be received with caution. He noted that dramatic stories portending major developments in AI tend to end in let-downs for readers. That is, the bigger the promise about AI, the bigger the disappointment when those promises inevitably go unfulfilled.
Dr. Marcus refers in passing to some interesting studies on artificial intelligence. These studies examined robots' abilities to learn, or to appear to learn, and to correct themselves after making mistakes. Though journalists greeted this news as evidence that major advancements in artificial intelligence are forthcoming, Dr. Marcus argues that, in reality, these studies only prove that some relatively common algorithms imitating the behavior of the brain are effective. These algorithms, however, have been in use for years in programs that received less attention because they were not directly related to AI.
The biggest takeaway of Dr. Marcus’s article is that if these studies teach us anything, it is that we can successfully imitate the brain in its individual capacities, but that stitching all of these capacities together into a single AI that behaves like a human brain is far in the future, if it is possible at all. Further, the media must be more aware of the impact that their reporting has on public perception of the usefulness and reality of AI research.
Dr. Marcus is wise to make these points, but in some ways he may be misreading the general public. There seems to be no end of interest in stories on the advancement of humanoid robots. Much of the tech and non-tech public alike is fascinated by the potential of real science to meet the expectations that science fiction has built for them. And whether or not these brain-like algorithms are familiar and non-revolutionary, the potential of science is amazing, and today technology changes at such an accelerated pace that it can actually be difficult for many people to adjust. Though Dr. Marcus is right to be realistic about the challenges inherent to this kind of project, namely mapping and recreating the human brain, he may be misguided in his pessimism, both toward the scientific community and the general public.