Can you safely build something that may kill you?


“AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies,” OpenAI CEO Sam Altman once said. He was joking. Probably. Mostly. It’s a little hard to tell.

Altman’s company, OpenAI, is raising unfathomable amounts of money to build powerful, groundbreaking AI systems. “The risks could be extraordinary,” he wrote in a February blog post. “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.” His conclusion, nonetheless: OpenAI should press forward.

There’s a fundamental oddity on display whenever Altman talks about existential risks from AI, and it was particularly notable in his most recent blog post, “Governance of superintelligence,” which also lists OpenAI president Greg Brockman and chief scientist Ilya Sutskever as co-authors… (Vox)