Thank you ImmortalPig, that was an excellent post. Now leave out the personal attacks and you'll be a very good user on this forum.
As I admitted in my post, I have no idea about AI, which is why I asked for clarification on your stance. You did a very good job.


Hah, I ninja'd TDCadmin. He is right though: you did not provide any sources. Seeing as your post is of decent quality, that is alright, I guess. I still encourage you to find sources for your claims. You posted some very technical claims, so please find some articles that support them.

edit: I deleted some useless posts due to poor conduct. Be productive or go away. The purpose of the thread is clear: provide sources and arguments to support your stance or your post will be deleted.
If no one is capable of doing that, this thread will be closed.
There is no such thing as “my point is so obvious that I don't even need a source”. If you think that is a reasonable response, you are also free to go away.
Last edited by Redundant; Feb 12, 2015 at 02:28 PM.
Originally Posted by TDCadmin View Post
You are not permitted to insult, ridicule or demean anyone in this thread. Treat other posters with respect. There’s a different, less vicious spirit to TDC threads compared to regular Discussion threads.

Originally Posted by ImmortalPig View Post
anyone with even a tertiary knowledge..
your statement shows a supreme lack of knowledge...
I find it hard to believe that anyone who has even a small amount of knowledge of computers...
So come on doomsdayers...

Watch it, buddy.

Also, I've yet to see any sources about how the threat of AI is overrated. In the OP you've got an open letter signed by the leading thinkers in the field, all pushing to develop ways to ensure Friendly AI because they realise the massive risks AI poses. How will you trivialise that?
Originally Posted by Redundant View Post
It is an interesting topic so don't ruin it by being personal. It's annoying and fruitless.
I don't want it to become a “hurr durr anything is possible” thread either, however. That is why I asked for sources for all claims.

Last edited by TDCadmin; Feb 12, 2015 at 11:52 AM.
Originally Posted by Redundant View Post
Thank you ImmortalPig, that was an excellent post. Now leave out the personal attacks and you'll be a very good user on this forum.
As I admitted in my post, I have no idea about AI, which is why I asked for clarification on your stance. You did a very good job.


Hah, I ninja'd TDCadmin. He is right though: you did not provide any sources. Seeing as your post is of decent quality, that is alright, I guess. I still encourage you to find sources for your claims. You posted some very technical claims, so please find some articles that support them.

To support what?

All I did was explain some very basic concepts that everyone should know BEFORE posting in this thread. If you want to know more about something, then say so instead of vaguely saying "your claims".
Originally Posted by TDCadmin View Post
Watch it, buddy.

Also, I've yet to see any sources about how the threat of AI is overrated. In the OP you've got an open letter signed by the leading thinkers in the field, all pushing to develop ways to ensure Friendly AI because they realise the massive risks AI poses. How will you trivialise that?

That's because there's yet to be any explanation of HOW AI could possibly be a threat.

How about you read the letter before you make claims about it. They are just saying that beneficial AI (as opposed to weaponized AI, for example) should be pursued. They say NOTHING about the "massive risks AI poses", and they certainly don't explain any of them.

I'm not sure why I am being asked to explain how AI isn't a risk, when no one is explaining how AI is a risk. Being asked to prove a negative is illogical. If you can't show that AI is a risk then there's no need for me to defend my position. I literally asked for some explanation or proof that AI could possibly be a threat, but all I get instead is you two saying "lol post some sources for you explaining what AI is".

Are you guys for real right now? In old Discussion you both would have been banned for shitposting like that. Let's have a discussion, post some content already. Name your specific concerns and I will respond. Ask for a citation on a specific statement and I will respond.

Originally Posted by TDCadmin View Post
Well, I'm addressing your claim that AI is overhyped. Source pls. I've provided many: I quoted Hawking talking about specific risks and Musk talking about the holistic risk. There are tonnes of articles and books written about the subject that detail the risks. In the OP I quoted a passage from Nick Bostrom's book Global Catastrophic Risks, and in his book he devotes a chapter to the risks of AI. You have yet to provide any sources that back up your claim that the threat of AI is overrated. You just keep saying 'It's obvious, anyone can see it's not a threat'. Hawking, Musk, Bostrom and many others don't see it the way you see it.

They are saying, as Musk said, that "AI safety is important". Why would AI safety be important if there were no risks?


"You are not permitted to insult, ridicule or demean anyone in this thread. Treat other posters with respect."

Last chance, respect the rules of this thread or get out.

~Ele

You have a lot of people saying "AI is scary" but no one saying why; this is what hype is. This whole argument from authority is kind of weak, but whatever, I'll humor you.

I'm guessing you are misrepresenting the rest of your sources as you did with the letter (which, again, does not explain or even mention any risks of AI), so let's have a look:

So what does Musk say about AI? "With artificial intelligence we are summoning the demon." Wow, what an abstract claim, and again, no evidence. "Tesla Motors CEO Elon Musk worries about a 'Terminator' scenario arising from research into artificial intelligence."? What does that even mean? He's scared of weaponized strong AI turning rogue en masse?

The linked Hawking article doesn't actually go into what Hawking actually said; it just talks about scary AI in a general sense. "In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets." Yes, that's dangerous, sure. But again, they would have to be self-programming strong AI; a normal AI that can discriminate targets is not a problem. And as usual, this isn't a problem of AI, it's a problem of all electronic weapons.

Can't comment on GCR, but I can comment on Superintelligence: Paths, Dangers, Strategies. Bostrom's book discusses the possibility of a single superintelligence attempting to take over the world. To call this "a risk of AI" is very misrepresentative.

So what does he say? Well, firstly he agrees that my simple precaution is sufficient: "If the AI has (perhaps for safety reasons) been confined to an isolated computer, it may use its social manipulation superpower to persuade the gatekeepers to let it gain access to an Internet port. Alternatively, the AI might use its hacking superpower to escape its confinement." Now, the second is obviously not possible, since anyone building a superintelligence is going to secure it with sufficient encryption that it can't simply hack its way out. Time-bounded protection (even 1024-bit is sufficient) is plenty.

As for the first scenario? That's the same risk as "what if the guards at the nuclear missile facility just let someone in?!" Social engineering is a very well studied topic, so let's not even bother going into it; suffice to say it's simple enough to prevent by not giving anyone the power to do it. It'd make a hell of a movie, but it's just not realistic.
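
To put rough numbers on that "time-bounded protection" point, here's a quick back-of-the-envelope sketch in Python (my own illustration, not anything from Bostrom's book; the 1e18 guesses-per-second attacker is an assumed, absurdly generous figure, far beyond any real hardware):

# Expected time to brute-force an n-bit key by searching half the keyspace.
# Assumption (mine, for illustration): an attacker making 1e18 guesses/sec.
GUESSES_PER_SECOND = 1e18
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def brute_force_years(key_bits):
    return 2 ** (key_bits - 1) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (128, 256, 1024):
    print("%4d-bit key: ~%.2e years" % (bits, brute_force_years(bits)))

# 128-bit key: ~5.39e+12 years (hundreds of times the age of the universe)
# 256-bit key: ~1.84e+51 years
# 1024-bit key: ~2.85e+282 years

Even a 128-bit key outlasts the universe at that rate; that is what makes the protection time-bounded for any horizon we care about.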


Am I to seriously think that Musk/Hawking/Bostrom are genuinely afraid of Skynet/WarGames scenarios? Again this discussion has turned into "anything is possible, you don't know the future, look at these works of fiction, they are all possible!" I am asking you because I want to be sure that what you are asserting is that Skynet/WarGames are the future and that I am expected to disprove them using citations.

If you are unwilling to make an argument yourself or contribute to the discussion, and are only going to say "no but look at what person X says" without making any kind of effort to discuss what is being posted, then please don't post. It may not be against the rules of the thread, but it is against the rules of the subforum, and you are required to abide by them.

Removed complaint about moderator because that belongs on the complaint board for complaints.
Last edited by Zelda; Feb 12, 2015 at 04:25 PM. Reason: Dang it pig.
<Faint> the rules have been stated quite clearly 3 times now from high staff
Originally Posted by ImmortalPig View Post
Honestly I can't even think of a way to make AI dangerous. So come on doomsdayers, make your argument instead of abstractly saying "AI is scary, it will wipe out humans".

As long as vital systems are kept isolated (e.g. nuclear launch controls, drone systems and other military networks, computer-driven hospital equipment, dam controls, etc.), the best an AI could do is analyze data found on the internet. Even if someone connects these systems to the net, it'd be quite hard to port-scan all the possible addresses, especially in IPv6 space (it'd take ~forever; rough numbers below).
It's also not 100% certain that a strong AI would be out to kill us in the first place.
So yes, the chances of this happening are slim.
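
To back that up with simple arithmetic (a sketch of my own; the probe rate of one million addresses per second is an assumption, roughly what a fast single-machine internet-wide scanner manages):

# Time to probe every address in each address space.
# Assumption (mine, for illustration): 1e6 probes/sec.
PROBES_PER_SECOND = 1e6
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

ipv4 = 2 ** 32   # ~4.3 billion addresses
ipv6 = 2 ** 128  # ~3.4e38 addresses

print("IPv4: %.1f hours" % (ipv4 / PROBES_PER_SECOND / 3600))
print("IPv6: %.2e years" % (ipv6 / PROBES_PER_SECOND / SECONDS_PER_YEAR))

# IPv4: 1.2 hours
# IPv6: 1.08e+25 years

So the full IPv4 space is actually coverable in about an hour at that rate; it's IPv6 where "~forever" genuinely holds.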

But what about a strong AI designed to control and coordinate military hardware? That'd be scary as hell, and like everything, every new technology comes first to the military.

I also don't feel the need to cite sources every time I discuss something; it seems like you are just bringing this up to counter ImmortalPig. If you don't think my statements have any background knowledge behind them, read up on the topic yourself. I'm right until proven wrong (that's how it should work, anyway) unless I'm telling you that the sky is made of kittens or something equally ridiculous.

Protonitron: You would still be obliged to check the sky beforehand, and if you have an opinion which is not justified, then it is your job to justify it, not everyone else's. You are either right or wrong; this has nothing to do with available evidence or citation, and just because nobody checks doesn't mean that it is true. You don't always have to cite sources in discussions, because some discussions don't require much knowledge or evidence to reach a conclusion; this discussion often does.
Last edited by Zelda; Feb 12, 2015 at 03:39 PM.
Again, you've given no sources about how the threat of AI is overrated.

Originally Posted by ImmortalPig View Post
You have a lot of people saying "AI is scary" but no one saying why; this is what hype is.

Bostrom's GCR goes into it and Hawking mentioned a few specifics (which you conveniently ignored).

Originally Posted by ImmortalPig View Post
I'm guessing you are misrepresenting the rest of your sources as you did with the letter (which, again, does not explain or even mention any risks of AI)

"They are saying, as Musk said "“that AI safety is important". Why would AI safety be important if there were no risks?"

Originally Posted by ImmortalPig View Post
So what does Musk say about AI? "With artificial intelligence we are summoning the demon." Wow, what an abstract claim, and again, no evidence.

He says it's our biggest existential threat. His concerns about the risks are echoed by the tonnes of scientists who signed the letter saying that AI safety is important.

Originally Posted by ImmortalPig View Post
The linked Hawking article doesn't actually go into what Hawking actually said [The linked article is written by Hawking. It's his actual words], it just talks about scary AI in a general sense.

"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

Originally Posted by ImmortalPig View Post
Can't comment on GCR, but blah blah blah

If you can't comment on the source I provide, then don't comment on a source I didn't provide.

~Ele
A strong AI controlling military hardware? Assuming whoever built it put in zero precautions against rogue AIs (e.g. an 'off' switch), then sure, it could kill some people, but ultimately the hardware could just be destroyed. It's not a scenario that could lead to doomsday.

A robotic army still has to be developed, designed and built; invariably, if a robotic army exists, then we would have the weaponry to destroy it even if it went rogue.
-----
Originally Posted by TDCadmin View Post
Again, you've given no sources about how the threat of AI is overrated.


Bostrom's GCR goes into it and Hawking mentioned a few specifics (which you conveniently ignored).


"They are saying, as Musk said "“that AI safety is important". Why would AI safety be important if there were no risks?"


He says it's our biggest existential threat. His concerns about the risks are echoed by the tonnes of scientists who signed the letter saying that AI safety is important.


"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."


If you can't comment on the source I provide, then don't comment on a source I didn't provide.

Just calm down and think for a second, mate. You are demanding that I provide sources that AI doomsday is overrated, whilst none of your sources are any more than someone saying "AI is scary". Just think about that for a second: you have not provided a single source that offers even a remotely viable way for a doomsday to occur, and yet you want me to disprove it. This is called "proving a negative" - you are the one making the assertion, so the burden of proof is on you. If you can't provide even a single legitimate source, then how can I argue against it?

If I said that "oven safety is important", does that imply a risk of global doomsday at the hands of ovens, or am I just concerned that someone might burn themselves?

Again, the letter is not about AI doomsday; it's just that they want priorities steered towards beneficial AI - as you would know if you had actually read the letter. Once more I will ask you to drop the appeal-to-authority fallacy, but I will again humor you.

One can imagine all kinds of things without sources. One can imagine humans colonizing the sun (of course, they'd have a moat around their colony to keep them cool). How about you disprove sun colonies - remember to provide sources!

In all seriousness, I didn't realize that Hawking actually wrote the article; I guess his superior intellect outsmarted me by quoting himself in the title and by listing himself and 3 others in the author list. Well, that and all his pop culture references. I understand it's not a serious article, but still, it's not what you expect. As for the content, well, it has been discussed above already, so there's no need to respond further.

Ok, I will make sure to only reference sources that you first provide, and naturally I will not require you to actually provide the text of something that you cite; I will merely accept that it supports your argument without reading it. This is a reasonable and fair rule, thank you TDC overlord - forgive me my crime of posting an unsanctioned citation.
Last edited by ImmortalPig; Feb 12, 2015 at 03:01 PM. Reason: <24 hour edit/bump
<Faint> the rules have been stated quite clearly 3 times now from high staff
Originally Posted by ynvaser View Post
I also don't feel the need to cite sources every time I discuss something; it seems like you are just bringing this up to counter ImmortalPig.

You don't understand what Pig's claiming. He's claiming that the threat of AI is overhyped and decrying all the top leaders in the field who say that the threats are real. He's at odds with a whole lot of experts on the subject. Am I going to ask for his source on that claim? Of course I am. It's an extraordinary position he's taking.

Originally Posted by ImmortalPig View Post
A strong AI controlling military hardware? Assuming whoever built it put in zero precautions against rogue AIs (e.g. an 'off' switch), then sure, it could kill some people, but ultimately the hardware could just be destroyed. It's not a scenario that could lead to doomsday.

Say the AI gains access to nukes. AI can do things in the blink of an eye, before you hit the off switch. Boom, nuclear holocaust. No need for a robot army.

Originally Posted by ImmortalPig View Post
Just calm down and think for a second, mate. You are demanding that I provide sources that AI doomsday is overrated, whilst none of your sources are any more than someone saying "AI is scary".

No. You're exaggerating. Doomsday =/= threats.

Originally Posted by ImmortalPig View Post
Just think about that for a second: you have not provided a single source that offers even a remotely viable way for a doomsday to occur, and yet you want me to disprove it. This is called "proving a negative" - you are the one making the assertion, so the burden of proof is on you. If you can't provide even a single legitimate source, then how can I argue against it?

Your argument is that the threat of AI is overhyped. You're saying people are overhyping the threats, which means you recognise that there are threats that are being hyped. If you truly don't understand what the threats are, then how can your argument be that the threats are overhyped?

Originally Posted by ImmortalPig View Post
If I said that "oven safety is important", does that imply a risk of global doomsday at the hands of ovens, or am I just concerned that someone might burn themselves?

Again with the exaggeration. You also proved my point. Yes, you're concerned about people burning themselves, because that's a potential threat. Yes, those leading researchers are concerned about AI safety because there are potential threats.

~Ele
Originally Posted by TDCadmin View Post
You don't understand what Pig's claiming. He's claiming that the threat of AI is overhyped and decrying all the top leaders in the field who say that the threats are real. He's at odds with a whole lot of experts on the subject. Am I going to ask for his source on that claim? Of course I am. It's an extraordinary position he's taking.

None of the guys you quoted are "top leaders in the field", and the open letter is not about doomsday AI...


Originally Posted by TDCadmin View Post
Say the AI gains access to nukes. AI can do things in the blink of an eye, before you hit the off switch. Boom, nuclear holocaust. No need for a robot army.

CITATION PLEASE.

Nukes do not launch instantly, nukes are not uncounterable, nukes are not connected to the internet, and nukes cannot be launched in the manner you suggest.

What's more, we weren't even talking about nukes; you suddenly replacing robots with nukes and then saying "omg nuclear holocaust" is not productive!

Originally Posted by TDCadmin View Post
No. You're exaggerating. Doomsday =/= threats.


Your argument is that the threat of AI is overhyped. You're saying people are overhyping the threats, which means you recognise that there are threats that are being hyped. If you truly don't understand what the threats are, then how can your argument be that the threats are overhyped?


Again with the exaggeration. You also proved my point. Yes, you're concerned about people burning themselves, because that's a potential threat. Yes, those leading researchers are concerned about AI safety because there are potential threats.

~Ele

I see you've dropped your argument from "Skynet/WarGames" to "threats exist, no matter how minor". That's good progress; now you just need to understand that yes, there are threats, but existing standard procedures easily defeat them, as Bostrom says in the book that you rejected because you didn't post it...


I think we have now come full circle and are in agreement that, as we have seen from your links, anyone concerned about AI doomsday is overhyping the risks and threats. Minor threats do exist, but the risks can easily be controlled.
<Faint> the rules have been stated quite clearly 3 times now from high staff
ok, as an Lmod who is also familiar with TDC's rules, I am going to start deleting TDCadmin's future posts. You are not an admin here, I am afraid; you are just Ele. Redundant and I are mods on this board, and if you think we should moderate differently, you are free to write a complaint. Sorry Ele, just use your own account and trust the staff to try to keep shit classy. Thanks.
Last edited by Zelda; Feb 12, 2015 at 04:27 PM.
Good morning sweet princess
Originally Posted by protonitron View Post
I feel like the rest of this thread is perhaps focusing only on strong, more humanoid AI, while you are still focusing on what we currently have. This seems to be an interpretive error.

I think my quote explains that I am clearing that up and explicitly saying that we are talking about strong AI.

Watson/NTM/etc. are not strong AI but are casually mixed into the discussion, and I am making the distinction.
Originally Posted by protonitron View Post
I'm not sure why you think Watson or the NTM are dangerous in any way... Both are practically identical.

They are not, and I just said they are practically identical; I'm not sure how you got that interpretation, honestly.


Originally Posted by protonitron View Post
So you seriously expected a moderator to heed your claims that those who disagree with you (me and Redundant included) should have their posts deleted because you think they are wrong about something and therefore don't know enough to participate? Otherwise I am pretty sure it counts as rhetoric.

No, I am categorically against deleting posts, for one thing. But we do have said rule, and it exists to stop this situation where people who don't know anything about AI quote a few famous people without any supporting evidence and assert it as fact.

Having to prove mundane things like "an isolated system can't take over the world" is far beyond unproductive.
Originally Posted by protonitron View Post
As I think I said earlier in this post, in previous posts I have only been referring to strong AI. Your misinterpretation of my point as a strawman is my fault for lack of clarification. Your point that humanity is decent at isolating dangerous stuff is one of your stronger arguments.

That quote was in reply to Redundant, not you, so perhaps you are logged into the wrong account. But if not, then please refer to his post ("You would also require arguments and sources why all AI in the future would definitely be on isolated systems exclusively."), which is most certainly a strawman.

Originally Posted by protonitron View Post
Some people are just stupid regardless of how well informed they are, so the rule you quoted does not apply. Anyway, as far as I could tell, a lot of people believe that isolating all strong AI would be difficult and at least uncertain. Your claims that AI is completely safe up to this point rely on people isolating it, which is uncertain; in other words, there is some risk of it being let loose.

Sure. Isolation is a security method, and of course it can fail if people don't do it. Is there a risk of people not doing it? Yes, of course.

The argument at this point is something like:
AI is scary -> isolate it and there's no problem -> but what if we don't -> well then that's a problem...
Is this a productive line of thought? To me, it definitely isn't. The answer has been given, and if the only counterargument is "but what if we don't", then there's not much point in further discussion.

Originally Posted by protonitron View Post
I believe a lot of people believe it will take over in the same way most humans would, given that sort of intelligence and a bolt to transfer information directly to computers, but to be honest, as I have said, we have no way of knowing how a human-like AI would act (human-like in mechanism but not in capability or processing ability).

I feel like this should have been the focus of your post rather than an afterthought.

Ok? People can believe whatever they want, but in the absence of a valid argument, I don't think I'm obliged to respond to beliefs.
<Faint> the rules have been stated quite clearly 3 times now from high staff