Will Artificial Intelligence Kill Us All?

I really enjoyed this video, in which Sam Harris discusses the dangers of continuing to develop artificial intelligence.

Sam argues that artificially intelligent beings might decide to treat us in much the same way we treat ants.

I hope you’ll give it a watch and leave your thoughts below. I thought it would be nice to escape the topic of the election for just a little while.

 


18 Comments

  1. I would hope that any future hardware capable of squashing us like ants will comply with the Three Laws of Robotics. Failing that, I would require that it protect white, middle-aged males who identify as atheists.

    Whatever technology they build will still be designed to comply with a predetermined set of goals, and unlike humans, I would expect us to code “World Domination” as inappropriate. It’s kinda like how parents indoctrinate children into believing fiction, except we’re much more responsible… right?

    Every conflicting objective will have a weighted response, so when our autonomous car has to choose between running over your Grandma or the small child, it’s bye-bye, Grandma. It’s an unfortunate choice, but the technology isn’t really making a choice; we predetermined that decision when we set the parameters. But what if the car starts believing in intelligent design!?! If the car “learns” that crashed Grandmas require more cleaning up and that children would probably be better off with “god” anyway, then we’re back where we started.

    If this AI technology detected that humans were destroying the earth and that a cull was necessary to preserve the optimum population (again, that would be the white, middle-aged atheists), then that is not an unreasonable or particularly fearsome proposal, given that it is actually the most compassionate decision.

    But 20,000 years of intellectual work? Really? Conclusions don’t take that long, even with your average flawed think tank. I think it pays to remember that it isn’t always thought-processing speed that stifles our progress. Humans and our future machines will be limited in many ways not necessarily related to their processing power or intelligence.

  2. I think this may come. Hopefully not the type of danger portrayed in the Terminator series, or even in the comics. I think we are more intelligent than that, but what do I know? After all, I didn’t think we would be stupid enough to sit idly by and watch Trump become president.

  3. Artificial intelligence will do whatever it is designed to do just like any other designed and manufactured product.

    If it doesn’t, that means it has malfunctioned.

    If AI malfunctions kill people and break things, I can only imagine a huge federal bureaucracy being created to regulate the industry.

    The FDA (Food and Drug Administration) requires a 12-year period before pharmaceuticals are released to the market.

    AI may suffer the same fate.

  4. I think the most important thing about this talk is the idea that we (society) must establish some objective ethical standpoints in a society whose technology is advancing so rapidly. As we become more capable, we are forced to decide what is really important to us. As in many philosophical discussions, the conclusions we reach have to be brought to their most extreme consequences to truly test their validity. In the case of strong AI, these conclusions may in fact be carried out to their extremes, whereas thought experiments with humans rarely are.

  5. I recently heard an interview with another IT expert who suggested that the Terminator-type scenario was not as far-fetched as we once imagined. This expert noted that artificial intelligence was moving ahead very quickly, far more quickly than the general population appreciated. But the big risk, as this expert saw it, is the decimation of the job market as AI-equipped computers move into fields that were previously spared automation. The social changes from that decimation will be profound.

    As an aside, it is not trade agreements that decimated jobs in the ‘rust belt’; it is automation. Apparently America manufactures more now than ever before but does so with far fewer workers.

    The risk comes once we give machines the capacity to make decisions without human oversight.

    Whilst I suspect that machines deciding to kill humans is a low risk, it is not a zero risk, and it is one that should be borne in mind.

    I generally admire Sam Harris’s perspective on things, though I recall a talk he gave a few years back explaining why free will might be an illusion; it made my brain hurt.

    • Yes, Peter, I’ve also read that AI and automation are the true culprits when it comes to the shortfall in the job market. And it’s interesting that those who are directly affected cannot seem to see this. They prefer to blame “others” and thus elect someone like our current president-elect, who they believe is going to make things all better.
