Elon Musk is wrong about AI – Part 7934

And now for the first entry in what will no doubt be a long-running series: Elon Musk is Wrong About AI. The number is not random, by the way. That is literally an exact count.

I’m reaching back a bit here, because I was too busy actually being productive when he tweeted this the first time.

In no particular order.

  1. Despite saying this all the time, Musk has never explained how one should regulate AI, or even how we should define AI for the purposes of regulating it. That’s because he can’t. The statement is too vague to act on.
  2. Right now, most systems that are vaguely called “AI” are really very specific systems trained for narrow tasks, such as translating text between two languages or identifying what is in an image (see the sketch after this list). I have yet to hear a convincing reason these would become a “public risk” in any meaningful way. The warning implies Terminator scenarios where none really exist.
  3. Sloppy regulation would plunge a stake through AI research. If, for instance, AI researchers had to submit the kind of documentation aerospace companies submit to the FAA for new aircraft every time they wanted to start a new project, there just wouldn’t be any more AI research (or everyone would break/ignore the rules). Ill-conceived regulation here could be disastrous for scientific progress.
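
To make point 2 concrete, here is a minimal sketch of what one of these narrow systems actually looks like in practice. This is my own illustration, using scikit-learn’s toy digits dataset; it’s not any particular system Musk has pointed to, and any similar narrow classifier would make the same point.

```python
# A typical "AI" system: a model trained for exactly one narrow task.
# This one classifies 8x8 images of handwritten digits and nothing else.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 labeled 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)  # a plain linear classifier
model.fit(X_train, y_train)

# The "intelligence" here is a fixed mapping from 64 pixel values to a
# digit label. It cannot drive, diagnose, or plot a robot uprising.
print("digit accuracy:", model.score(X_test, y_test))
```

There is nothing in a system like this that could “get out of control”; the interesting regulatory questions only show up when you attach such a model to a consequential real-world action.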

All these points really deserve greater discussion, but those are my overall thoughts.

Nope. No. Get away from the keyboard, Elon Musk fanboys. I know what you’re about to say:

What about AI systems being used in warfare to launch drone strikes or bomb cities? Isn’t that the kind of AI that should be regulated? What about AI used to diagnose patients going haywire and prescribing the wrong treatments? What about AI used to underwrite loans; couldn’t those systems be susceptible to racial bias? What about self-driving cars? They could be deadly if they make mistakes.

Obviously, those are really important problems, and it is tremendously important that we get them right. And yes, most, if not all, of those situations probably require regulation of one sort or another. But guess what? That’s not what he’s saying! If that were what he was saying, he would say that we should regulate AI driving on roads, or specify what decisions AI can make in warfare. The distinction is that those are all specific actions that just happen to be performed by AI systems. In fact, all of those activities (warfare, medical practice, underwriting, and driving) are already heavily regulated in the United States.

No, what he is clearly trying to get across is that it is the AI systems themselves, their cognition and inner workings, that are dangerous and need to be regulated. This is insane. We are nowhere near understanding how to create “out of control” AI, or even AI that can do anything other than the very narrow thing it was trained to do. Regulating AI today would be like Ancient Greece regulating gunpowder because it might someday be used in guns.

The most frustrating part of this nonsense is that Musk is always just vague enough to both scare people into thinking robots are about to take over the world (they’re really not) and allow apologists to claim he means something reasonable. Please, Mr. Musk, either choose your words with enough precision to be reasonably discussed or stop talking.

10 Replies to “Elon Musk is wrong about AI – Part 7934”

  1. Hey Kenny! I’m delighted to see a dive into AI research and some effort to clear up the hype.

    Elon Musk may be a bit of a straw man for AI risk. He’s not an AI researcher, he’s a public figure with a lot of money who happened to get interested in the field (and invest a billion dollars in it). I agree with you that his PR efforts leave a lot to be desired. But he does have a point, and I don’t want to see that point overlooked.

    To your points above:
    1. Totally agree. “Regulate AI” is way too vague to be actionable. But this is Twitter, man. What do you expect?
    2. Elon Musk is the wrong source for a primer in this field. What scares me most is not artificial intelligence, but artificial superintelligence. If you or your readers want a rundown of why-are-we-even-talking-about-this, there’s a nice article over at Wait But Why (warning: long. Also, the author is definitely an Elon Musk fanboy). For a different take, see this intro to the field of AI Safety (Warning: also long, but not AS long). This isn’t just Musk, either; lots of smart people both inside and outside the field of AI research have expressed deep concerns.
    3. “Sloppy regulation would plunge a stake through AI research.” Again, totally agree. Regulation may actually be a terrible idea at this point, because it’s almost impossible to do unsloppily. Still, even if we question the policy choice, the danger is real. We need some form of enforceable agreement in the field, or fast-and-careless research could accidentally build Cthulhu.

    1. Hey Joe, thanks for the interest!

      So first, I will defend using Elon Musk as a foil. I don’t think he is a straw man for AI risk, because he is a real person who says what he says earnestly. He is, as you say, a very public figure, and an influential one at that. I would say he is an order of magnitude more influential than any AI researcher in the public discussion of AI. What he says about it really matters, so when he says something I don’t believe is correct, I think it’s important to talk about it. He isn’t a researcher, and I don’t plan to judge him by that standard, but he gets really basic things wrong and has had every opportunity to listen to AI researchers (like the ones he hired, for instance), yet says what he says anyway.

      1. That’s a pretty fair point. But he hasn’t been terribly specific anywhere, and as I discussed, I don’t think he intends to be.
      2. I haven’t read these articles, but I’ve heard this idea of “artificial super-intelligence.” It might be something I should address in another post, but I do want to mention that this is not really a concern of anyone working in research right now. The reasons for that probably require a more thorough analysis, though. I’ll just say that a lot of this stuff is very speculative and isn’t really based on the systems researchers are actually working on. Nobody works on the “general intelligence” problem directly, for instance.
      3. Like I said, the only thing we really need to worry about at this point is the actions AI systems can take in the real world. A malfunctioning robot can do real damage. There’s not much danger of losing control of cognition, because researchers don’t really work on AI cognition.

      1. I’ll keep an eye out for future posts! A few comments:
        “[superintelligence] is not really a concern of anyone working in research right now.”
        A lot of the folks from the third link are working on it. But I agree it’s not widely researched; I happen to think this needs to change.

        “a lot of this stuff is very speculative.”
        Definitely, but it has sound reasoning behind it. I do really, really recommend reading some of that reasoning.

        “Nobody works on the ‘general intelligence’ problem directly, for instance.”
        See the Machine Intelligence Research Institute (MIRI) for general intelligence research here in the US. They’ve published some papers on that exact problem. So have their counterparts overseas. See e.g. the Future of Humanity Institute.

        “There’s not much danger of losing control of cognition, because researchers don’t really work on AI cognition.”
        I would argue that this is exactly why there’s danger of losing control. We’re doing lots of research into designing intelligent systems; we’re not doing enough serious long-term research into designing them safely. It’s true that it’s early in the process to be talking about safety measures, but that doesn’t mean we shouldn’t.

        1. I’ll have to take a look at some of that stuff, but I’ll stand by what I said before. Hopefully I can dig into this in a future post.

          1. “I haven’t read these articles, but I’ve heard this idea of ‘artificial super-intelligence.’ It might be something I should address in another post, but I do want to mention that this is not really a concern of anyone working in research right now.”

            This post (https://www.lesserwrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence) makes a very convincing argument for why it is likely that there will be no obvious point in time when it is both clear that people should take Artificial Super Intelligence seriously and there is still enough time to do so successfully.
            That’s not to say “drop everything and focus on ASI”, but it’s something to think about.
            (the author of the post works at MIRI, mentioned above)

          2. I think this is a really big topic. I want to do a post that really engages with the arguments that people are making, but I suspect it will be a significant time commitment. I promise when I get some more free time I will go into this in much greater detail.
