Originally Posted by Lite View Post
I was summoned to say this is fake news.

This post made me consider bringing back the reputation system.

The situation described in the OP would only be possible if said AI were either untested (and shouldn't have been allowed out of the testing lab) or not actually an AI at all but a bunch of if-else cases for making coffee.

Real self-learning AI is scary (primarily because it'd make humanity face something that's never been seen before), but its creation is most likely inevitable - even if that won't happen in the near future.
Obviously it can be a huge threat, especially if tech companies continue working with the military. A smart coffee maker is one thing; a smart machine that shoots rockets is another.

Then again, I'd question how close the creation of an AI would be to the creation of one capable of self-consciousness, the latter obviously being the main threat. For example, pretty much any living species is capable of learning in some form, but only a few can (somewhat) qualify as self-conscious.
Getting back to the example from the previous paragraph, a rocket-shooting AI is obviously bad, but an AI that decides it wants to do that on its own, while also building more rockets and replicating itself, is much worse.
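To make the contrast above concrete - "a bunch of if-else cases" versus something that actually learns - here is a toy Python sketch. Everything in it (the state names, the tabular Q-learning setup) is invented for illustration and isn't taken from any real product:

Code:
import random

# Hard-coded "coffee maker": a bunch of if-else cases, not an AI.
def scripted_coffee_bot(state):
    if state == "no_cup":
        return "grab_cup"
    elif state == "cup_empty":
        return "pour_coffee"
    elif state == "cup_full":
        return "deliver"
    return "stop"  # anything unexpected: do nothing

# Minimal self-learning agent: tabular Q-learning over abstract states.
# It improves its choices from reward feedback instead of fixed rules.
class QLearningAgent:
    def __init__(self, actions, lr=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}                      # (state, action) -> estimated value
        self.actions = actions
        self.lr, self.gamma, self.epsilon = lr, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:   # occasionally explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.lr * (reward + self.gamma * best_next - old)

The scripted version can only ever do what its branches anticipate; the learning version adjusts its choices from whatever reward signal it's given, which is the kind of system the rest of the thread is actually arguing about.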
Originally Posted by sir View Post
This post made me consider bringing back the reputation system.

The situation described in the OP would only be possible if said AI were either untested (and shouldn't have been allowed out of the testing lab) or not actually an AI at all but a bunch of if-else cases for making coffee.

Real self-learning AI is scary (primarily because it'd make humanity face something that's never been seen before), but its creation is most likely inevitable - even if that won't happen in the near future.
Obviously it can be a huge threat, especially if tech companies continue working with the military. A smart coffee maker is one thing; a smart machine that shoots rockets is another.

Then again, I'd question how close the creation of an AI would be to the creation of one capable of self-consciousness, the latter obviously being the main threat. For example, pretty much any living species is capable of learning in some form, but only a few can (somewhat) qualify as self-conscious.
Getting back to the example from the previous paragraph, a rocket-shooting AI is obviously bad, but an AI that decides it wants to do that on its own, while also building more rockets and replicating itself, is much worse.

Yeah, there really are multiple, if not hundreds of, different threats or risks that might or might not happen; it just really depends on what paths you take while making the AI.

Originally Posted by Lite View Post
I was summoned to say this is fake news.

also: https://github.com/pyxelx
Last edited by hanna; Dec 10, 2018 at 01:24 PM. Reason: <24 hour edit/bump
If you're actually a programmer, I'm curious how you approach your own work, because your logic is very flawed.
If your supposed AI can't avoid an obstacle, it's clearly not going to be in a position where it would have any chance of harming anyone. Just like you don't take a new car straight out of the factory without any brake fluid and expect it to stop, you don't take a bunch of AI code and put it straight into a working environment. There is an insane amount of planning and testing involved in developing any type of AI, and that will be even more rigorous for one that has any potential of harming anyone.

If you had presented a problem that is at least slightly debatable, this would be an interesting topic, but preventing a robot from stepping on a child really is just the basics of AI programming, and I'm surprised you can't wrap your head around it considering your professional claims.
Last edited by Smaguris; Dec 11, 2018 at 08:49 PM.
Originally Posted by Smaguris View Post
If you're actually a programmer, I'm curious how you approach your own work, because your logic is very flawed.
If your supposed AI can't avoid an obstacle, it's clearly not going to be in a position where it would have any chance of harming anyone. Just like you don't take a new car straight out of the factory without any brake fluid and expect it to stop, you don't take a bunch of AI code and put it straight into a working environment. There is an insane amount of planning and testing involved in developing any type of AI, and that will be even more rigorous for one that has any potential of harming anyone.

If you had presented a problem that is at least slightly debatable, this would be an interesting topic, but preventing a robot from stepping on a child really is just the basics of AI programming, and I'm surprised you can't wrap your head around it considering your professional claims.

My logic for that situation is based on this video: https://www.youtube.com/watch?v=3TYT1QfdfsM

As for the second part, that was only an example; I created this thread to discuss the general threat, not that single situation.
The problem will mostly be ironed out before mass production and commercial use, so the off chance that this indeed happens would be a pretty rare occurrence.

Or there would be workarounds for that specific model that fetches coffee. I'd imagine it would be like one of those small discs that pick up dust and that cats ride around on, except this one fetches water or coffee from a coffee maker built for that robot. Then the center of the robot, where the cup is attached, raises up to about 3 feet for when you're sitting at the table, and since hardwood floors with an open-concept design are becoming popular, that would make it quite feasible.
Last edited by T0ribush; Dec 12, 2018 at 05:14 AM.
Originally Posted by Elite View Post
My logic for that situation is based on this video: https://www.youtube.com/watch?v=3TYT1QfdfsM

As for the second part, that was only an example; I created this thread to discuss the general threat, not that single situation.

Ok, I managed to watch that video for 10 minutes and was about to kill myself.

"Robot goes to fetch you a coffee, but then there's a kid in its way, so you rush to hit the kill switch, but then the robot fights you because that'd interrupt it from fetching your coffee" - sorry, what? We already have working car autopilots that don't (usually) run over people in 2018, yet future AI-powered robots won't have the simplest infrared cameras to see obstacles in front of them and would need a kill switch to stop them destroying everything in their path? And it'd fight you, the owner? That's ridiculous.

It's generally painful to watch; everything he covered in that 10-minute bit was resolved years ago. It'd be somewhat okay if AI development basics were under discussion here - but obviously not its future and/or possible threats.
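For what it's worth, the "simplest infrared cameras" point boils down to a low-level safety check that runs before any task logic gets a say. A minimal Python sketch, with the sensor reading and the 0.5 m threshold purely hypothetical:

Code:
SAFE_DISTANCE_M = 0.5  # hypothetical clearance threshold

def control_step(proximity_m, planned_action):
    """Low-level safety check that runs before the task-level action.

    proximity_m: distance to the nearest obstacle from an IR/ultrasonic sensor.
    planned_action: whatever the task planner wants to do next.
    """
    if proximity_m < SAFE_DISTANCE_M:
        return "stop"          # something (e.g. a kid) is too close: halt, ignore the task
    return planned_action      # path is clear: carry on fetching the coffee

print(control_step(0.2, "move_forward"))  # -> "stop"
print(control_step(2.0, "move_forward"))  # -> "move_forward"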
Originally Posted by sir View Post
Ok, I managed to watch that video for 10 minutes and was about to kill myself.

"Robot goes to fetch you a coffee, but then there's a kid in its way, so you rush to hit the kill switch, but then the robot fights you because that'd interrupt it from fetching your coffee" - sorry, what? We already have working car autopilots that don't (usually) run over people in 2018, yet future AI-powered robots won't have the simplest infrared cameras to see obstacles in front of them and would need a kill switch to stop them destroying everything in their path? And it'd fight you, the owner? That's ridiculous.

It's generally painful to watch; everything he covered in that 10-minute bit was resolved years ago. It'd be somewhat okay if AI development basics were under discussion here - but obviously not its future and/or possible threats.

Yeah, that's fair enough; then again, I made this thread to discuss the overall threat, not a single situation. Nonetheless, as you stated earlier, there are many different things that can happen with the development of AI, and I thought it would be an interesting topic, to say the least.
If you want to discuss general threats, that's kinda cool; let's do that.

My general concern is that when AI reaches a certain point, it will be abused by the wrong people and cause global hysteria.

The things that limit most terrorists or criminals are usually their own physical capabilities and resources. Considering current technological growth, creating a sophisticated AI will be possible from home, and that brings a lot of power to people who know what they're doing. If you program an AI with the sole purpose of killing as many people as possible, then fit it into a robot full of guns, you're looking at potentially hundreds or thousands of casualties in a matter of minutes.

I guess at that point technology will be so advanced that we can't possibly know what security will have been developed to prevent it, but it makes me wonder how different our daily lives will be.
What do you mean, "without actually considering harm to their master/creator"?

By design you would have that covered. When I build systems, I have to map out situations and create responses to them, and if ambiguity leads to the unknown, then I either have to infer an appropriate response from the information I have (subsumption architecture is a simple but effective way to do this that even an amateur AI hobbyist could learn to implement - see the sketch below) or perform the only thing I know is safe: stop.
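A rough Python sketch of that subsumption idea - higher-priority behaviours suppress lower ones, and "stop" is the fallback when nothing applies. The behaviour names and thresholds are made up for illustration:

Code:
# Subsumption-style controller: behaviours are checked in priority order,
# and the first one that applies suppresses everything below it.

def avoid_collision(percepts):
    if percepts.get("obstacle_distance", float("inf")) < 0.5:
        return "stop"
    return None  # not applicable, defer to lower layers

def recharge_if_low(percepts):
    if percepts.get("battery", 1.0) < 0.2:
        return "go_to_charger"
    return None

def fetch_coffee(percepts):
    return "drive_to_coffee_machine"  # the actual task, lowest priority

BEHAVIOURS = [avoid_collision, recharge_if_low, fetch_coffee]  # highest priority first

def decide(percepts):
    for behaviour in BEHAVIOURS:
        action = behaviour(percepts)
        if action is not None:
            return action
    return "stop"  # nothing applies: do the only thing known to be safe

print(decide({"obstacle_distance": 0.3}))   # -> "stop"
print(decide({"battery": 0.1}))             # -> "go_to_charger"
print(decide({}))                           # -> "drive_to_coffee_machine"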

I saw that you're a software engineer (as you say on your GitHub), so you should also know that an "off" button, so to speak, is something that can be implemented to override all behaviour. We don't need any intelligent system to consider whether we want to turn it off; at the lower level we can simply turn it off or, in the worst case, break it. Luckily, circuitry doesn't respond well to EMP devices and the like, but I doubt you'd even need to go that far.
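A sketch of that lower-level "off" button in Python: the override sits outside the decision-making code entirely, so the controller never gets to reason about whether it keeps running. Names are illustrative only:

Code:
class HardOffSwitch:
    """Lowest-level override: once engaged, the controller is never consulted."""
    def __init__(self):
        self.engaged = False

    def press(self):
        self.engaged = True

def run_robot(off_switch, controller, percepts):
    # The check lives outside the AI, so the controller can't reason its way past it.
    if off_switch.engaged:
        return "power_down"
    return controller(percepts)

switch = HardOffSwitch()
print(run_robot(switch, lambda p: "fetch_coffee", {}))  # -> "fetch_coffee"
switch.press()
print(run_robot(switch, lambda p: "fetch_coffee", {}))  # -> "power_down"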

However, I will say that in the far future, the idea of using robotics not connected to an external network for terrorism is a threat I can imagine being very real, and one day things will have to be considered for that - but right now I'll stick to my work in the field and be happy that I don't have to work on something so safety-critical.
I was summoned to add dank thoughts for you.

AI. Where to start? A bot looks at decisions and always picks the best choice; there's not much more than math there. A true AI can understand the scale of good and bad decisions without seeing everything as "true" or "false". That gives it a range of choice and a kind of 'consciousness'. Take the scenario given: a bot will see it as "I have two problems and two different answers, so I will pick whatever my programmer says is the priority." An AI can look at both and comprehend that it may fail one, both, or neither. The problem isn't consciousness but free consciousness - we still want AI to be bots, and that's just a conflict.

Let's take flying as an example of how this relates to us. A bot given a human body and asked to attempt to fly just won't get it unless you tell it how. A person has options: if I wanted to jump off a bridge I could try, even knowing I can't fly naturally, because I may still have motives for learning to fly. AI are designed not to make those bridge-jumping decisions but to make the better ones. That's an expectation we naturally have of each other too - I never expect my peer to jump.

The thing with AI is that they will most likely be developed with moral restrictions, things like "don't hurt other humans" or "don't jump off bridges without safety". I was never born knowing that; I was raised in a society to think it's common sense. An AI needs to be taught what its role will be, just like people need to train skills. Makes sense, right? Now the program is free and won't hurt us. We have a lot of similar rules, like not flying or shapeshifting. As long as they have moral restrictions, there shouldn't be a lot of issues.
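A toy Python sketch of the "true/false versus a scale of good and bad" distinction described above; the options and scores are invented purely for illustration:

Code:
# "Bot" style: a hard-coded priority list; the first rule that matches wins.
def bot_choose(options, priority):
    for preferred in priority:
        if preferred in options:
            return preferred
    return None

# "Scale" style: score every option for how good and how harmful it is,
# then pick the best trade-off rather than a binary yes/no.
def scaled_choose(options, goodness, harm):
    return max(options, key=lambda o: goodness.get(o, 0) - harm.get(o, 0))

options  = ["fetch_coffee_through_hallway", "wait_for_hallway_to_clear"]
priority = ["fetch_coffee_through_hallway"]   # bot: coffee first, always
goodness = {"fetch_coffee_through_hallway": 5, "wait_for_hallway_to_clear": 4}
harm     = {"fetch_coffee_through_hallway": 9, "wait_for_hallway_to_clear": 0}  # a kid is in the hallway

print(bot_choose(options, priority))           # -> "fetch_coffee_through_hallway"
print(scaled_choose(options, goodness, harm))  # -> "wait_for_hallway_to_clear"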

The sum of it is that it's safe. All the issues you're probably worried about were thought of years back. This tech is new, and by the time it reaches the public most safeguards will be in place. Just my opinion and personal knowledge of the subject, though.