Originally Posted by Lite:
I was summoned to say this is fake news.

This post made me consider bringing back the reputation system.

The situation described in the OP would only be possible if said AI is either untested (and shouldn't have been allowed out of the testing lab) or not actually an AI at all but a bunch of if-else cases for making coffee.
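
To illustrate the "bunch of if-else cases" point, here's a minimal sketch of a hypothetical rule-based coffee maker; the make_coffee function and its orders are made up for illustration. The machine can only ever do what is explicitly written into its branches, so nothing unexpected can come out of it - which is why the OP's scenario doesn't hold for this kind of system.

# A minimal sketch, not from the OP: a hypothetical "smart" coffee maker
# built purely from hard-coded rules. It can only ever do what its branches
# spell out - there is no learning involved, just if-else cases.
def make_coffee(order: str) -> str:
    order = order.strip().lower()
    if order == "espresso":
        return "Pulling a single shot."
    elif order == "latte":
        return "Pulling a shot and steaming milk."
    elif order == "americano":
        return "Pulling a shot and adding hot water."
    else:
        # Unknown input: the machine can only refuse, it cannot improvise.
        return "Order not recognized."

print(make_coffee("latte"))             # Pulling a shot and steaming milk.
print(make_coffee("world domination"))  # Order not recognized.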

Real self-learning AI is scary (primarily because it would make humanity face something it has never seen before), but its creation is most likely inevitable - even if that doesn't happen in the near future.
Obviously it can be a huge threat, especially if tech companies keep working with the military. A smart coffee maker is one thing; a smart machine that shoots rockets is another.

Then again, I'd question how close creating an AI would be to creating an AI capable of self-consciousness, with the latter obviously being the main threat. For example, pretty much any living species is capable of learning in some form, but only a few can (somewhat) qualify as self-conscious.
Getting back to the example from the previous paragraph, a rocket-shooting AI is obviously bad, but an AI that decides on its own that it wants to shoot rockets, while also building more rockets and replicating itself, is much worse.