Does AI have to be bad?

“I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal,” says HAL in 2001: A Space Odyssey. In 1968, science-fiction author Arthur C. Clarke and filmmaker Stanley Kubrick predicted a future of autonomous computers serving humans. Half a century later, machine intelligence is becoming a reality, with the biggest names in tech developing their own AI: intelligent machines that will likely have unprecedented and unpredictable effects on the way we live.

We already live in a world full of (less intelligent) machines. From loan and mortgage applications to the music and film suggestions served up by Apple, Netflix, and Spotify, everyday services rely on some form of AI. Robots are on the network, too, with intelligent machines analyzing data for the benefit of whole industries. The AI market is expected to be worth around $23 billion by 2025.

The public is afraid. Over half (58%) of the UK population fears AI's impact on humanity, while 41% believe it will destroy us, according to a survey from the University of Sheffield. “We are likely to see robots integrated into society in the near future as shop assistants, receptionists, doctors, bartenders and also as carers for our elderly and children,” said Noel Sharkey, Emeritus Professor of AI and Robotics at the University of Sheffield.

Huge risks and impacts

AI will replace humans in employment, transform economic development and disrupt daily life. Employment will certainly be affected: a million robots are already set to replace workers at the giant Chinese electronics contractor Foxconn.

Work on self-driving cars has thrown up difficult ethical problems, such as what a smart car should do in an unavoidable accident when it knows every alternative will cause loss of life: which life is spared, and which is taken? Beyond this, a recent survey identified the potential for total financial meltdown as algorithms interact in unexpected ways.

Cognitive science professor Gary Marcus warns that computers may eventually compete with humans in battles for resources and self-preservation, while philosopher Nick Bostrom points out that machine intelligence won't automatically share human values such as altruism or benevolence.

With his own company already building AI, Tesla CEO Elon Musk has warned that without national and international oversight, “we're summoning the demon.” He has co-founded the nonprofit research company OpenAI to figure out how to manage AI.

It is hard to rationally ignore him. Even Professor Stephen Hawking said last year that AI could lead to the destruction of humanity. But does it have to be this way? Perhaps not, though proper leadership will be needed to manage the adoption of this powerful technology.

A UK government select committee report warns that the UK, among other governments, is “lacking leadership” in preparing for the impact of intelligent machines. The committee wants to establish a ‘Commission on Artificial Intelligence’ to “identify principles for governing the development and application of AI, and to foster public debate.”

Not everyone is alarmed. A Stanford University report on the social and economic implications of artificial intelligence concedes that AI will affect everyday life but sees no immediate threat, adding that “it is not too soon for social debate on how the economic fruits of AI technologies should be shared.”

Industry recognizes risks of unfettered development

In an apparent attempt to address some of these problems, Microsoft, Amazon, IBM, Facebook, Google, and others have launched a pan-industry ‘Partnership on AI’ group that will focus on helping people understand these technologies. The group “will conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness, and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology,” they said.

Google researcher Chris Olah recently blogged, “We believe it’s essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”

They aren't just worried about the big problems; they are also sweating the details of AI: What happens if a self-learning machine decides to put something wet inside a power socket? How do you prevent an intelligent machine from engaging in socially unacceptable actions without stifling its desire to learn?

The industry is advancing far more rapidly than the regulatory framework that should exist to govern it. In one example, smart car research is exposing unanswered questions about liability and legal culpability, and these questions have implications beyond connected vehicles and across the whole AI market.

While the productivity benefits could be immense, the profound impact of connected intelligence means good management of its introduction is essential. The truth is that AI is not inherently bad, but failing to address the fundamental challenges of its proliferation makes it far more likely to become so.

Now take a look at Real Times, the Orange Business magazine that brings you a range of intelligent insights for our changing world.

Jon Evans

Jon Evans is a highly experienced technology journalist and editor. He has been writing for a living since 1994. These days you might read his regular AppleHolic blog and opinion columns at Computerworld. Jon is also technology editor for men's interest magazine Calibre Quarterly, and news editor for MacFormat, the UK's biggest Mac magazine. He's really interested in the impact of technology on the creative spark at the heart of the human experience. In 2010 he won an American Society of Business Publication Editors (Azbee) Award for his work at Computerworld.