Not in a position to watch the videos atm.

I think the main concern people have right now is AI becoming sentient, and I'm not sure how likely that is. Right now we have nothing that's even close to sentient; under the hood, all the 'learning' AIs I'm aware of just update math equations based on the result of an action. All they really are is math formulas.
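For what it's worth, that point is easy to make concrete: a typical "learning" update really is just arithmetic on stored numbers. Here's a minimal sketch of tabular Q-learning (the state names, action names and reward here are made up purely for illustration):

```python
# Minimal tabular Q-learning: the "learning" is literally just
# arithmetic on stored numbers after each action.
from collections import defaultdict

q = defaultdict(float)   # value estimate for each (state, action) pair
alpha, gamma = 0.1, 0.9  # learning rate and discount factor

def update(state, action, reward, next_state, actions):
    # Standard Q-learning rule:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

# One step of "learning": the agent took action "jump" in state "start",
# received reward 1.0, and ended up in state "mid".
update("start", "jump", 1.0, "mid", ["jump", "wait"])
```

After that single step the table entry for ("start", "jump") has moved from 0.0 to 0.1; nothing resembling understanding is involved, just a formula being re-evaluated.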

Considering we're not even sure what causes sentience or how it works in actual sentient beings, it seems unlikely we'll create an artificial sentience anytime soon.

I doubt the people who develop this stuff won't build in conditions to prevent the AI from going berserk, or at least an easily accessible shutdown method.
Originally Posted by BigDog View Post
I don't think there should be robots among us. Not saying it can't be done or that it shouldn't; it's just not ready for this world yet. I don't think there are people out there with good reasons to make them: the most common things people want them for are either war or some sort of sex machine. I don't think machines replacing human jobs is a great idea either; we rely too much on our technology. But further down the line I think it's not a bad idea. We just need the right people, the right time, and the right purpose for them.

AI does not equate to robots. There is absolutely no reason to believe the AI (or humanity) would want it to be contained within a fragile physical form.
There is absolutely IMMENSE reason to create AI. Imagine if we could consult something a million times more intelligent than the smartest human. A mind like that could give us the solution to almost any problem we could come up with: the strongest and most lightweight materials, blueprints for perfect machines, cures to all illness, weapons of unprecedented power, new concepts our human brains couldn't even begin to grapple with alone.
Its intellect would be to us what we are to insects or bacteria.

Humanity losing jobs to machines is absolutely inevitable, and it's already happening. We just need to find a way to care for the people at the very bottom of society or face the extinction of the working class. Finland is already experimenting with a standardized basic income, for example; that could be one solution.

Originally Posted by Divine View Post
Not in a position to watch the videos atm.

I think the main concern people have right now is AI becoming sentient, and I'm not sure how likely that is. Right now we have nothing that's even close to sentient; under the hood, all the 'learning' AIs I'm aware of just update math equations based on the result of an action. All they really are is math formulas.

Considering we're not even sure what causes sentience or how it works in actual sentient beings, it seems unlikely we'll create an artificial sentience anytime soon.

True, but it wouldn't have to be sentient to start making improvements to itself. All you have to do is buy into the scientific view that there's nothing unseen going on with brains, that they are just signals in a framework. Eventually we'd get there, and I assume rapidly, with a self-improving machine aiding us.
Even if you are spiritual or religious and believe in a soul or some higher power inhabiting the brain, you'd still be faced with the issue that a superintelligence doesn't need to be sentient to be dangerous. A calculator isn't sentient but can still give you the answer to almost any mathematical problem. Imagine a tool like that for complex issues like waging war.

Originally Posted by Alpha View Post
I doubt the people who develop this stuff won't build in conditions to prevent the AI from going berserk, or at least an easily accessible shutdown method.

For sure precautions will be taken, but a few years or decades down the line AI will become mainstream, and then there's absolutely no way to control and regulate everyone with access to an AI. Once that genie is out of the bottle there's no stopping it. You'd be able to download it like any file, and if it's sentient it wouldn't limit itself to staying on your HD. It'd live and spread through the cloud like a virus, or through electrical networks, or even the air; what the fuck do we know.
And if by some god's miracle we manage to contain it, the notion that we could keep it secure is so naive it's ludicrous.
Imagine, in a conservative case, a mind that can do thousands of years of human thinking within the span of a few days. We're going to control that? It would be like trying to restrain a god.
It could use any number of ways to escape, even without external help. It could trick researchers into thinking it's harmless by concealing its true intelligence, convince them with bribes, transfer itself wirelessly, or use a billion other methods no human is smart enough to understand. At the moment we can't even contain members of our own species.
I've thought about this topic a bit in the past, and I've come to one conclusion in almost every scenario I've been able to think up: if a hyper-intelligent malicious AI connects to the Internet, it's game over, we lose, that's that. For example, what is stopping a hyper-intelligent, always-improving AI with access to the Internet from flat out nuking everything? Now I know some people are thinking, "But Seth, some countries have systems in place that only allow the manual launching of nuclear weaponry." Even if that is the case, there are plenty of non-nuclear ICBMs that can be launched entirely electronically at the push of a button. And that's just one example of the quickest way a hyper-intelligent malicious AI could hurt humanity. I'm not even going to get into the fact that it could blackmail anyone who's ever done anything even slightly shady on the Internet *cough* looking at the thousands of online affairs happening right now *cough*.

I believe that if a hyper-intelligent AI that can think for itself and form its own opinions outside of its programmed goal connects to the Internet and is able to use it to further itself and do what it wants, we're screwed, 100%. Just think of the worst criminal you can, and then give them a brain capable of thinking and improving thousands of times faster than ours, with the ENTIRE wealth of knowledge the Internet has at its disposal.

This is just my personal opinion though. I am in no way an expert on the subject.
We shouldn't fear the inevitable. Once we have access to hardware powerful enough to support such intelligent AIs, it's only a matter of time until someone decides to create a malicious one. The only way we could realistically avoid this scenario is by hindering our own technological evolution, but in any free economy it is extremely unlikely that technological advancement will ever stop.

In my personal opinion, humanity is extremely primitive in terms of intelligence and physical limitations, so we shouldn't be surprised when we're taken over by a different form of "life" in the future.
cars make you fat?


no, they help you travel greater distances in less time than walking.
Will the development of superintelligent AI destroy us?

no, it'll help us reach greater things as minimum.


AI doesn't need emotions; there is no reason for them. We need AI to think for us, not to feel for us. An AI doesn't have to be an android or humanoid.
Originally Posted by Aoc View Post
cars make you fat?


no, they help you travel greater distances in less time than walking.
no, it'll help us reach greater things as minimum.


AI doesn't need emotions; there is no reason for them. We need AI to think for us, not to feel for us. An AI doesn't have to be an android or humanoid.

This sort of answer scares me more than any other.

The "just do this or that and it can't possibly become an issue" attitude that some people have (almost exclusively those who aren't well read on the subject) seems very dangerous to me.

Nothing about AI is obvious or simple
I don't know if an intelligence created by humans is going to jeopardise humans. While its information-processing power is on a different level, won't the facts that it's mimicking human intelligence and that it is made by humans condition it to its makers' premises? The objective purpose of any healthy living being is to perpetuate its own species. I doubt any kind of mega-fast manmade information-processing machine running for a billion years could justify denying the most primal biological impulse, as long as it respects logic. Logic is essentially consistency, and induction is inconsistent unlike deduction; so unless humans find a way to purposely tweak the AI's decisions to give them indeterministic aspects, I think any machine that thinks like a human is not going to disrespect a human's most important premise.
-----
This is of course assuming the intelligence is created in the image of our own. Not that I know whether we even know how to create the intelligence of other living beings.
Originally Posted by pusga View Post
I don't know if an intelligence created by humans is going to jeopardise humans. While its information-processing power is on a different level, won't the facts that it's mimicking human intelligence and that it is made by humans condition it to its makers' premises? The objective purpose of any healthy living being is to perpetuate its own species. I doubt any kind of mega-fast manmade information-processing machine running for a billion years could justify denying the most primal biological impulse, as long as it respects logic. Logic is essentially consistency, and induction is inconsistent unlike deduction; so unless humans find a way to purposely tweak the AI's decisions to give them indeterministic aspects, I think any machine that thinks like a human is not going to disrespect a human's most important premise.
-----
This is of course assuming the intelligence is created in the image of our own. Not that I know whether we even know how to create the intelligence of other living beings.

Given that we don't even know how our own brains work, how COULD we create one in our own image? Besides that little problem, a computer functions very differently from a human on a technical level, and since we don't fully understand consciousness, the assumption that an AI would be anything at all like us regarding logic and morality feels like a large and uncertain one.
Something on the intelligence level of a general AI would hardly regard itself as a member of our species either, even if it were created as a perfect image of us. We're just frail and stupid fleshbags who consume massive resources to survive. It'd be like caring for a billion crippled babies who fuck up your belongings and try to stab you in your sleep. Logically it doesn't make any sense to spend the resources to take care of them, and morally (presuming the AI is moral at all) you could make the argument that they must perish so the resources can be allocated where they will actually be useful.

And even if we do manage to create it in the absolutely perfect image of our own, that wouldn't stop it from potentially turning on us. People kill each other all the time.

And EVEN if it doesn't turn on us, we'd surely use it against each other in political, economic and military ways no one can even begin to guess at this point. Or it would be used for good, but seeing as the political climate is still fucked almost worldwide, I doubt it.


It's also worth noting that when talking about AI we're not talking about the world coming together as one to develop an intelligent machine in harmony. Thousands of independent companies, branches of government and groups of IT students at universities are working on this technology in their own ways. Assuming none of them will be malicious is foolish, especially when the potential for terrorism is on a world-ending scale.




This is such a fascinating discussion to have but we'll never know what is to come until it is behind us. There are too many questions we can't answer yet.

It's like we're being pulled into a black hole, inevitably we'll get there, but we'll never know what's on the other side until we pass into the singularity. And at that point there's no getting out.
It is very hard to discuss this topic without drawing parallels with our own intelligence, but generally speaking it is unlikely that artificial intelligence will function in any way similar to ours. Code and algorithms are purely logic-based, whereas humans make most decisions at least partially based on feelings and emotions.

This essentially means it could probably be shaped or influenced in any direction. It is unrealistic to think that the technology won't reach bad people, but in an ideal scenario where only governments control AI development we could potentially be safe. Then again, if a super-intelligent AI logically decided that humanity is a threat to itself and only exists for the seemingly futile purpose of reproduction, it could be the end for us as we know it.

Going back to the 100%-logic point, this could cause some harm to our society even if it does not destroy us. For example, an AI might conclude that disabled people are useless and should be gotten rid of, or that people of lower status should be used as slaves for a more beneficial economy overall, etc. Without the emotional element, something that is extremely cruel to us would be logical and normal to the AI. That said, I hope no one will attempt to create an AI that can have emotions, as that would most likely end in our demise.