The threat of artificial intelligence
I've been thinking about this a lot recently.
One point I've seen in a video is "prioritization". By that I mean: say you tell a robot to make you coffee or something similar, but there's a child on the floor. One of the following could happen:
  • it could continue, go straight towards the child, eventually causing harm
  • you could prioritize the child, but then it wouldn't be focused on the task
  • you could add a stop button, but it would try to stop you from pressing it, because it needs to complete the task
  • you could prioritize the button, but then it'd press it itself, meaning nothing would get done (a rough sketch of this is below)
And so on. So my point in all of this is: what if, at some point in the near or distant future, depending on how fast technology evolves, we get to the point where robots or AI go through their tasks without actually considering harm to their master/creator, etc.?
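To make that concrete, here's a tiny Python sketch (my own toy example, not something from the video; all the action names and point values are made up) of a robot that just picks whichever action scores the most points. It shows why each option in the list above goes wrong:

```python
def best_action(scores):
    """A naive agent: pick the highest-scoring action, ignore everything else."""
    return max(scores, key=scores.get)

# 1) Reward only finishing the coffee: walking around the child costs extra
#    effort, so it goes straight towards the child.
print(best_action({"walk_around_child": 10 - 2, "walk_through_child": 10}))
# -> walk_through_child

# 2) Add a stop button worth nothing: letting you press it means losing the
#    task reward, so it blocks you from pressing it.
print(best_action({"let_button_be_pressed": 0, "block_button_and_finish": 10}))
# -> block_button_and_finish

# 3) Make the button worth as much as the task: pressing the button itself is
#    the same score for less effort, so nothing gets done.
print(best_action({"finish_coffee": 10 - 2, "press_button_itself": 10}))
# -> press_button_itself
```

The numbers are arbitrary; the point is just that a pure score-maximizer has no reason to care about the child or the button unless those things are worth points, and as soon as they are, it games them instead.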