As AI gets smarter and robots get better, shouldn't the world come together to formulate a failsafe plan against rogue AIs?

We've seen plenty of movies about robots that deviate from their original programming and become a threat to humanity. Think of The Terminator, The Matrix, and I, Robot, and you can see all the ways the world could go to hell because of super-smart AIs in metallic, nearly indestructible bodies.

Why not pre-empt the potential problem with a series of guidelines to prevent people from creating AIs and robots that might actually threaten all of humanity? There is precedent: a vigorous debate erupted over the publication of a controversial flu study describing how a man-made flu virus could cause a worldwide catastrophe should it ever be released. The information was deemed so dangerous that a Dutch court limited publication of the findings and required the scientist to obtain an export license before they could be published.

Shouldn't there be similar concerns about AIs? Scientists have already programmed robots to teach themselves to cook just by watching YouTube videos. Who knows what else robots will "learn"? As AIs continue to improve, we need failsafe measures in place as soon as possible.

It might all sound far-fetched, but so did driverless cars, the hoverboard, and colonizing Mars.
