And now for the first entry in what will no doubt be a long-running series: Elon Musk Is Wrong About AI. The number is not random, by the way. That is literally an exact count.
Reaching back a bit, because I was too busy being actually productive when he tweeted this the first time.
Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA wdn’t make flying safer. They’re there for good reason. https://t.co/6OXZC3UdMW
— Elon Musk (@elonmusk) November 26, 2017
In no particular order.
- Despite repeating this claim constantly, Musk has never explained how one should regulate AI, or even how to define AI for regulatory purposes. That’s because he can’t. The statement is too vague to act on.
- Right now, most systems that are loosely called “AI” are really very specific systems trained for narrow tasks, such as translating text between two languages or identifying what is in an image. I have yet to hear a convincing reason these would become a “public risk” in any meaningful way. His framing implies Terminator scenarios where none exist.
- Sloppy regulation would plunge a stake through AI research. If, for instance, AI researchers had to submit the kind of documentation aerospace companies submit to the FAA for a new aircraft every time they wanted to start a new project, there just wouldn’t be any more AI research (or everyone would break or ignore the rules). Bad regulation here could be very bad for scientific progress.
All these points really deserve greater discussion, but those are my overall thoughts.
Nope. No. Get away from the keyboard Elon Musk fanboys, I know what you’re about to say:
What about AI systems being used in warfare to launch drone strikes or bomb cities? Isn’t that the kind of AI that should be regulated? What about AI being used to diagnose patients going haywire and prescribing the wrong treatments? What about AI being used to underwrite loans; couldn’t it be susceptible to racial bias? What about self-driving cars? They could be deadly if they make mistakes.
Obviously, those are really important problems, and it is tremendously important that we get them right. And yes, most, if not all, of those situations probably require regulation of one sort or another. But guess what? That’s not what he’s saying! If that were what he was saying, he would say that we should regulate AI driving on roads, or specify what decisions AI can make in warfare. The distinction is that those are all specific actions that just happen to be performed by AI systems. In fact, all of those activities (warfare, medical practice, underwriting, and driving) are already heavily regulated in the United States.
No, what he is clearly trying to get across is that it is the AI systems themselves, their cognition and functioning, that are dangerous and need to be regulated. This is insane. We are nowhere near understanding how to create “out of control” AI, or even AI that can do anything other than the very narrow thing it was trained to do. Regulating AI now would be like Ancient Greece regulating gunpowder because it would someday be used in guns.
The most frustrating part of this nonsense is that Musk will always be just vague enough both to scare people into thinking robots are about to take over the world (they’re really not) and to allow apologists to claim he means something reasonable. Please, Mr. Musk: either choose your words with enough precision that they can be reasonably discussed, or stop talking.