The Upside of Artificial Intelligence Development

By Stephen F. DeAngelis

The Upside of AI

In “Practical Artificial Intelligence Is Already Changing the World,” I promised to write a follow-on article that discussed why Kevin Kelly (@kevin2kelly), the founding executive editor of Wired magazine, and Irving Wladawsky-Berger, a former IBM employee and strategic advisor to Citigroup, are optimistic about the future of artificial intelligence (AI). In that article I noted that some pundits believe that AI poses a grave threat to humanity while other pundits believe that AI systems are going to be tools that humans can use to improve conditions around them. I also wrote that it would be foolish to predict which school of thought is correct this early in the game.

In the near-term, however, I predicted that those who believe that AI systems are tools to be used by humans are going to be proven correct. Irving Wladawsky-Berger is firmly in that camp and he believes that Kevin Kelly is as well. “What should we expect from this new generation of AI machines and applications?” asks Wladawsky-Berger. “Are they basically the next generation of sophisticated tools enhancing our human capabilities, as was previously the case with electricity, cars, airplanes, computers and the Internet? Or are they radically different from our previous tools because they embody something as fundamentally human as intelligence? Kevin Kelly — as am I — is firmly in the AI-as-a-tool camp.” [“The Future of AI: An Ubiquitous, Invisible, Smart Utility,” The Wall Street Journal, 21 November 2014]

Wladawsky-Berger bases his conclusion about Kevin Kelly’s beliefs about artificial intelligence on what Kelly wrote in an article in Wired magazine. [“The Three Breakthroughs That Have Finally Unleashed AI on the World,” Wired, 27 October 2014] In that article, Kelly writes about IBM’s Watson system, how it is transforming as it learns, and about all of the good things that cognitive computing systems can do now and will do in the future. He continues:

“Amid all this activity, a picture of our AI future is coming into view, and it is not the HAL 9000 — a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness — or a Singularitan rapture of superintelligence. The AI on the horizon looks more like Amazon Web Services — cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it’s here.”


Unlike the dire warnings that have filled news outlets over the past year, Kelly’s view of the future of AI is not only optimistic, it’s almost joyous. Wladawsky-Berger and Kelly are not alone in their optimism about AI’s future. Timothy B. Lee (@binarybits), senior editor at @voxdotcom, also believes that the upside of artificial intelligence will far outweigh the risks of developing it further. [“Will artificial intelligence destroy humanity? Here are 5 reasons not to worry.” Vox, 15 January 2015] Lee believes the naysayers “overestimate the likelihood that we’ll have computers as smart as human beings and exaggerate the danger that such computers would pose to the human race. In reality, the development of intelligent machines is likely to be a slow and gradual process, and computers with superhuman intelligence, if they ever exist, will need us at least as much as we need them.” Even though Kelly is optimistic about the future of AI, he doesn’t dismiss the cautions being raised about how it’s developed. He writes, “As AIs develop, we might have to engineer ways to prevent consciousness in them — our most premium AI services will be advertised as consciousness-free.” Kelly’s big concern about AI’s future is who will control the systems we use. He explains:

“Cloud-based AI will become an increasingly ingrained part of our everyday life. But it will come at a price. Cloud computing obeys the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster as it grows bigger. The bigger the network, the more attractive it is to new users, which makes it even bigger, and thus more attractive, and so on. A cloud that serves AI will obey the same law. The more people who use an AI, the smarter it gets. The smarter it gets, the more people use it. The more people that use it, the smarter it gets. Once a company enters this virtuous cycle, it tends to grow so big, so fast, that it overwhelms any upstart competitors. As a result, our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.”
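The compounding feedback loop Kelly describes can be made concrete with a toy simulation. This is purely illustrative: the growth rates and starting user counts below are hypothetical numbers chosen to show the dynamic, not measurements of any real service.

```python
def simulate(initial_users, rounds=10, learning_rate=0.1, attraction=0.05):
    """Toy model of Kelly's virtuous cycle: each round, 'smartness' grows
    with the size of the user base, and the user base grows with smartness."""
    users = float(initial_users)
    smartness = 1.0
    for _ in range(rounds):
        # More users -> more training signal -> smarter system
        smartness *= 1 + learning_rate * (users / 1_000_000)
        # Smarter system -> more attractive -> more users
        users *= 1 + attraction * smartness
    return users, smartness

# An incumbent starting with 10x the users of an upstart pulls further ahead,
# because its smartness (and therefore its draw) compounds faster.
big_users, _ = simulate(1_000_000)
small_users, _ = simulate(100_000)
print(big_users / small_users)  # ratio ends up larger than the initial 10x
```

Because the gap widens on every iteration, the model echoes Kelly's point that early scale advantages in cloud AI tend to entrench a small number of dominant providers.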

That concern aside, Kelly believes that AI will help make humans smarter and more effective. He notes, for example, that AI chess programs have helped make human chess players much better. He adds, “If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers.” In other words, Kelly sees AI as a tool that can help mankind get better, not a threat that is going to destroy it. He continues:

“Most of the commercial work completed by AI will be done by special-purpose, narrowly focused software brains that can, for example, translate any language into any other language, but do little else. Drive a car, but not converse. Or recall every pixel of every video on YouTube but not anticipate your work routines. In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdily autistic, supersmart specialists. In fact, this won’t really be intelligence, at least not as we’ve come to think of it. Indeed, intelligence may be a liability — especially if by ‘intelligence’ we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness.”

I agree with that assessment. Derrick Harris (@derrickharris), a senior writer at Gigaom, asserts that artificial intelligence (at least the narrow kind) is here, is real, and is getting better. [“Artificial intelligence is real now and it’s just getting started,” Gigaom, 9 January 2015] He explains:

“Artificial intelligence is already very real. Not conscious machines, omnipotent machines or even reasoning machines (yet), but statistical machines that automate and increasingly can outperform humans at certain pattern-recognition tasks. Computer vision, language understanding, anomaly detection and other fields have made immense advances in the past few years. All this work will be the stepping stones for future AI systems that, decades from now, might perform feats we’ve only imagined computers could perform.”

Will artificial intelligence be disruptive? Of course it will. It will change the employment landscape in major ways, displacing millions of workers who thought their jobs were safe. Will it create new jobs? Certainly. In fact, I suspect that entire new business sectors are going to be developed as a result of AI. The question will be whether enough new jobs can be created to replace those that have been taken over by smart machines. Will artificial general intelligence be developed? Maybe. And if AGI is developed, that’s where caution needs to be taken. Is humankind in danger? Not at the moment. Narrow uses of AI are going to help humans do a lot of amazing things in the years ahead. In fact, we will wonder how we ever lived without it.

Stephen F. DeAngelis is President and CEO of the cognitive computing firm Enterra Solutions.

This article was originally published on WIRED (January 2015)
