by Angela Guess
April Glaser recently wrote in Wired, “Researchers disagree on when artificial intelligence that displays something like human understanding might arrive. But the Obama administration isn’t waiting to find out. The White House says the government needs to start thinking about how to regulate and use the powerful technology while it is still dependent on humans. ‘The public should have an accurate mental model of what we mean when we say artificial intelligence,’ says Ryan Calo, who teaches law at the University of Washington. Calo spoke last week at the first of four workshops the White House is hosting this summer to examine how to address an increasingly AI-powered world.”
Glaser continues, “Although scholars and policymakers agree that Washington has a role to play here, it isn’t clear what the path to that policy looks like—even as pressing questions accumulate. They include deciding when and how Google’s self-driving cars take to American highways and examining how bias permeates algorithms. ‘One thing we know for sure is that AI is making policy challenges already, such as how to make sure the technology remains safe, controllable, and predictable, even as it gets much more complex and smarter,’ said Ed Felten, the deputy US chief of science and technology policy leading the White House’s summer of AI research. ‘Some of these issues will become more challenging over time as the technology progresses, so we’ll need to keep upping our game.’”
She goes on, “Although artificial intelligence already exceeds human capabilities in some areas—Google’s AlphaGo repeatedly beat the world’s best Go player—each system’s applications remain narrow, and reliant upon humans. ‘Intelligence and autonomy are two very different things,’ says Oren Etzioni, the director of the nonprofit Allen Institute for Artificial Intelligence and a speaker at Tuesday’s workshop. ‘In people, intelligence and autonomy go hand in hand, but in computers that’s not at all the case,’ he said.”
Photo credit: Flickr