Originally Posted by ImmortalPig View Post
None of the guys you quoted are "top leaders in the field", and the open letter is not about doomsday AI...

Yeah, the letter's about preventing the threat of AI by improving AI safety.


Originally Posted by ImmortalPig View Post
Nukes do not launch instantly, nukes are not uncounterable, nukes are not connected to the internet, nukes cannot be launched in the manner you suggest.

What's more, we weren't even talking about nukes, you suddenly replacing robots with nukes then saying "omg nuclear holocaust" is not productive!

Hawkes (whose post has gone missing) talked about the threat of AI being connected to militaries. You said a robot army is necessary for this threat to become a reality. Uhm? Yeah, no. There are plenty of other ways for AI connected to militaries to screw with everyone besides creating a robot army. Nukes could be one way (if you think creatively, I'm sure you could think of others) - despite your expert assurance that they aren't.

Originally Posted by ImmortalPig View Post
I see you've dropped your argument from "skynet/war games" to "threats exist, no matter how minor".

Never once mentioned skynet/war games. Only ever mentioned that threats exist. I see you've dropped my point about the logical absurdity of claiming we're asking you to prove a negative.


Originally Posted by ImmortalPig View Post
That's good progress, now you just need to understand that yes, there are threats, but existing standard procedures easily defeat them, as Bostrom says in the book that you rejected because you didn't post it...

If existing standard procedures easily defeat the threats, then why did Musk invest $10m into developing ways to ensure AI safety? If the problem is as simple as you say it is, why are the leading experts and other great minds (greater than yours, I think it's fair to say) so concerned about it?


~Ele (@proto, I'll make posts my acc. from now on (just used this account in the first place to refer Pig to the conduct rule, then continued using it for the sake of thread clarity))
Last edited by TDCadmin; Feb 12, 2015 at 04:07 PM.
Sup guys, I'm a Junior IV software engineer at HP. My focus here is to automate tasks and replace human resources. In 2011 HP started a brand new product called Smart Planning, which works with an HP software package called Service Manager, and the name speaks for itself: it manages your service. Most clients, like Chrysler and American Airlines, use Service Manager only as a ticketing tool (the requester calls a service desk, and Service Manager is used to store and escalate that ticket), but the company as a whole does a lot more with Service Manager. We have a badge control system that stores your productivity data on your ID badge; the badge contains your full name, ID, e-mail, blood type, organization, some HR info, and your work hours. With the new Smart Planning system we now have a tool that manages this information and 'helps' us understand and improve it.

How it works: first of all, the Smart Planning platform is able to read all of your data. It knows when you arrived on site, when you left, how many hours you spent at your desk, and so on (bathrooms and common areas are 'not' tracked because of legal concerns). Right now this system cannot be called AI by our standards, because it only learns when data is input manually; it can read all your data and it knows where you are, but if you put a query like "what is an apple?" to it, the Smart Planning system won't be able to answer you properly.
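For what it's worth, the "how many hours at your desk" part is conceptually just arithmetic over badge events. Here is a tiny made-up illustration of that idea, nothing like the real Service Manager data model:

```python
from datetime import datetime

# Made-up badge swipe log, purely for illustration: (event, timestamp)
badge_events = [
    ("in",  datetime(2015, 2, 12, 8, 58)),
    ("out", datetime(2015, 2, 12, 12, 1)),
    ("in",  datetime(2015, 2, 12, 13, 2)),
    ("out", datetime(2015, 2, 12, 17, 30)),
]

def hours_on_site(events):
    """Pair each 'in' swipe with the next 'out' swipe and sum the time."""
    total = 0.0
    entered = None
    for kind, ts in events:
        if kind == "in":
            entered = ts
        elif kind == "out" and entered is not None:
            total += (ts - entered).total_seconds() / 3600
            entered = None
    return total

print(round(hours_on_site(badge_events), 2))  # 7.52 hours on site
```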

BUT the Seniors managed to create a mid-level algorithm that is able to research, and not only with Google: it researches employee inputs, reads pre-stored data and, wait for it... Big Data. In my honest opinion, Big Data is the shortcut for any AI to become powerful, and therefore smarter and potentially dangerous to humans, because Big Data is the ability to learn from data that was input naturally, like that thermostat that 'learns' what temperature you like, when you go to bed, and so on. But Big Data goes a little beyond that: it learns every possible path needed to reach a goal or an answer. Picture a garden: basic programming is laying down a fixed path through that garden; advanced data research is asking people what path they take and then building a path; Big Data is not building a path at all, but watching every step and storing that information to work out the best path and the features it needs. The new Smart Planning system with Big Data is nowhere near being implemented, but A. LOT. of capable people were hired only for this task, and the company is really looking forward to it with the new cloud storage technology.
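To make the garden analogy concrete, here is a rough, hypothetical sketch (my own toy example, not HP's actual code) of the difference between a hard-coded path and "learning" the best path from observed walks:

```python
from collections import Counter

# Hard-coded path: the programmer decides the route up front.
FIXED_PATH = ["gate", "fountain", "bench", "exit"]

# "Big Data" style: record every walk people actually take,
# then pick the route that was walked most often.
observed_walks = [
    ("gate", "lawn", "exit"),
    ("gate", "fountain", "exit"),
    ("gate", "lawn", "exit"),
]

def learned_route(walks):
    counts = Counter(walks)                   # how often each full route occurred
    best_route, _ = counts.most_common(1)[0]
    return list(best_route)

print(FIXED_PATH)                     # ['gate', 'fountain', 'bench', 'exit']
print(learned_route(observed_walks))  # ['gate', 'lawn', 'exit']
```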

That being said I would like to contribute with my opinion on the subject:

I think both sides have their valid points. AI is, at this point, not dangerous at all; actually, the most effective AI in use today is helping in healthcare (http://www.openclinical.org/aiinmedicine.html), and other systems are being developed to help disabled people (though they're not as advanced as the healthcare ones, at least from what I know). When it comes to development, we learn that you always need an OFF switch if your system/bot/algorithm is set to run automatically, and there are many ways to keep a system safe. In my city, for example, the power plant is managed by a system that detects power surges and replaces (somehow) the energy needed by that area with a more stable (yet pricier) supply for a short time. Sometimes in storms the ground itself can become ionized (or deionized) and the system needs to be shut down to avoid unneeded switchovers. The system already knows the places with the highest chance of a power surge and keeps an extra eye on them, but if something goes wrong, two buttons and it's back on manual again.
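Purely as an illustration of that "two buttons and it's back on manual" idea (a made-up toy example, nothing like the plant's real control software), an automatic loop like that just checks a manual-override flag before it does anything on its own:

```python
import random

manual_mode = False          # flipped to True by the operators' override buttons

def surge_detected():
    # Placeholder for the real sensor check.
    return random.random() < 0.05

def switch_to_backup_supply():
    print("Switching this area to the stabler (pricier) supply for a while")

def control_loop(cycles):
    for _ in range(cycles):
        if manual_mode:      # the OFF switch: automation stands down, humans take over
            print("Manual mode - automation idle")
        elif surge_detected():
            switch_to_backup_supply()

control_loop(100)
```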

On the other hand, not everyone builds everything following the exact standards. I have come across automated systems with no OFF switch, and algorithms with endlessly scheduled tasks and no confirmation of their data. These are the dangerous pieces, and they pretty much already exist; they're just not that smart yet.

The danger in AI is the person who built it.
I think a good thing to focus on would be whether it is worth the risk. In the meantime, I am going to try to dig up an article in The Economist from a few years ago that my dad got obsessed with.

My opinion is that truly lifelike AI could certainly be made unthreatening, as long as we had a suitable understanding of it beforehand. Whether humanity is capable of that sort of understanding is another matter; we would need to limit its processing speed. I would not advise that we try to discuss what sort of understanding or control humanity is capable of, because none of us are neurologists or computer scientists.
Good morning sweet princess
Originally Posted by TDCadmin View Post
Hawkes (whose post has gone missing) talked about the threat of AI being connected to militaries. You said a robot army is necessary for this threat to become a reality. Uhm? Yeah, no. There are plenty of other ways for AI connected to militaries to screw with everyone besides creating a robot army. Nukes could be one way (if you think creatively, I'm sure you could think of others) - despite your expert assurance that they aren't.

Internet-enabled nukes are a threat, regardless of AI.

Originally Posted by TDCadmin View Post
Never once mentioned skynet/war games. Only ever mentioned that threats exist. I see you've dropped my point about the logical absurdity of claiming we're asking you to prove a negative.

That was the argument of those you keep quoting.

Originally Posted by TDCadmin View Post
If existing standard procedures easily defeat the threats, then why did Musk invest $10m into developing ways to ensure AI safety? If the problem is as simple as you say it is, why are the leading experts and other great minds (greater than yours, I think it's fair to say) so concerned about it?

It's his money; he can do what he wants with it. You can read what FLI does here: http://futureoflife.org/static/data/...priorities.pdf
As you can see, it's all mundane things that are normal concerns; it's not about doomsday. As I've said before, doomsday concerns are overhyped, and now that we delve into what FLI actually does, we can see that Musk did not invest $10m to prevent skynet. Hype has struck once again.


I'm sensing a pattern here; you seem to only be able to produce arguments from authority...
<Faint> the rules have been stated quite clearly 3 times now from high staff
I feel like saying that human-like AI is scary can sort of be justified by the fact that we would be trying to make something that controls itself, which means we might struggle to control it. The mathematical power of computers is also scary; making something we can't control sounds risky enough without adding Sherlock Holmes-like memory and processing speed (the sociopathic Cumberbatch TV-series version, not the tweed-clad, pipe-smoking literary version).

Then again, such fears can't be proved even if they are understandable.
Good morning sweet princess
Originally Posted by ImmortalPig View Post
So what does he say, well firstly he agrees that my simple precaution is sufficient,"If the AI has (perhaps for safety reasons) been confined to an isolated computer, it may use its social manipulation superpower to persuade the gatekeepers to let it gain access to an Internet port. Alternatively, the AI might use its hacking superpower to escape its confinement."

Originally Posted by ImmortalPig View Post
As for the first scenario? That's the same risk as "what if the guards at the nuclear missile facility just let someone in?!"

Nitpicking again because I don't have any problems with anything you've said up to this point and/or controversial points have been addressed. I've already provided evidence that your statements may not be based in reality, reposting it here. If it's easy for a smart human to convince a slightly less smart human to "open the box" within an hour, how easy would it be for a malignant superintelligent AI to convince a human of any intelligence to hook it up to an internet connection for however long the guard is there? Even if the regular guards/gatekeepers don't have access to a text terminal, SOMEONE would have to talk to it at some point, otherwise there would be no reason to have the isolated AI system in the first place.
Last edited by hawkesnightmare; Feb 13, 2015 at 12:09 AM.
All it takes is one bad day to reduce the sanest man alive to lunacy. That's how far the world is from where I am. Just one bad day.
Originally Posted by hawkesnightmare View Post
Nitpicking again because I don't have any problems with anything you've said up to this point and/or controversial points have been addressed. I've already provided evidence that your statements may not be based in reality, reposting it here. If it's easy for a smart human to convince a slightly less smart human to "open the box" within an hour, how easy would it be for a malignant superintelligent AI to convince a human of any intelligence to hook it up to an internet connection for however long the guard is there? Even if the regular guards/gatekeepers don't have access to a text terminal, SOMEONE would have to talk to it at some point, otherwise there would be no reason to have the isolated AI system in the first place.

Sure, but you could do things like not having any network ports in the room, having the network card removed from the AI's box, and having it locked with the key held by a third party, etc. In the same way that one rogue guard can't let a prisoner out, no one should be able to let the AI out on their own.

Requiring the consensus of a large number of people greatly reduces the risk.
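A toy sketch of that "no single person can do it" idea (just my own illustration, with made-up names and a made-up approve/release check): letting the AI out of its box requires sign-off from some minimum number of independent keyholders.

```python
REQUIRED_APPROVALS = 3            # e.g. 3 of 5 keyholders must agree
keyholders = {"alice", "bob", "carol", "dave", "erin"}
approvals = set()

def approve(person):
    """Record a keyholder's sign-off; unknown names are ignored."""
    if person in keyholders:
        approvals.add(person)

def may_open_network_port():
    # No single rogue guard can do this alone.
    return len(approvals) >= REQUIRED_APPROVALS

approve("alice")
approve("bob")
print(may_open_network_port())    # False - still needs a third keyholder
```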
<Faint> the rules have been stated quite clearly 3 times now from high staff
I like how everyone just ignored this post. Finally, someone who has a remote idea of what he is talking about, and everyone ignores it.
That makes me sad.

Originally Posted by IIInsanEEE View Post
Sup guys, I'm a Junior IV software engineer at HP. My focus here is to automate tasks and replace human resources. In 2011 HP started a brand new product called Smart Planning, which works with an HP software package called Service Manager, and the name speaks for itself: it manages your service. Most clients, like Chrysler and American Airlines, use Service Manager only as a ticketing tool (the requester calls a service desk, and Service Manager is used to store and escalate that ticket), but the company as a whole does a lot more with Service Manager. We have a badge control system that stores your productivity data on your ID badge; the badge contains your full name, ID, e-mail, blood type, organization, some HR info, and your work hours. With the new Smart Planning system we now have a tool that manages this information and 'helps' us understand and improve it.

How it works: first of all, the Smart Planning platform is able to read all of your data. It knows when you arrived on site, when you left, how many hours you spent at your desk, and so on (bathrooms and common areas are 'not' tracked because of legal concerns). Right now this system cannot be called AI by our standards, because it only learns when data is input manually; it can read all your data and it knows where you are, but if you put a query like "what is an apple?" to it, the Smart Planning system won't be able to answer you properly.

BUT the Seniors managed to create a mid-level algorithm that is able to research, and not only with Google: it researches employee inputs, reads pre-stored data and, wait for it... Big Data. In my honest opinion, Big Data is the shortcut for any AI to become powerful, and therefore smarter and potentially dangerous to humans, because Big Data is the ability to learn from data that was input naturally, like that thermostat that 'learns' what temperature you like, when you go to bed, and so on. But Big Data goes a little beyond that: it learns every possible path needed to reach a goal or an answer. Picture a garden: basic programming is laying down a fixed path through that garden; advanced data research is asking people what path they take and then building a path; Big Data is not building a path at all, but watching every step and storing that information to work out the best path and the features it needs. The new Smart Planning system with Big Data is nowhere near being implemented, but A. LOT. of capable people were hired only for this task, and the company is really looking forward to it with the new cloud storage technology.

That being said I would like to contribute with my opinion on the subject:

I think both sides have their valid points. AI is, at this point, not dangerous at all; actually, the most effective AI in use today is helping in healthcare (http://www.openclinical.org/aiinmedicine.html), and other systems are being developed to help disabled people (though they're not as advanced as the healthcare ones, at least from what I know). When it comes to development, we learn that you always need an OFF switch if your system/bot/algorithm is set to run automatically, and there are many ways to keep a system safe. In my city, for example, the power plant is managed by a system that detects power surges and replaces (somehow) the energy needed by that area with a more stable (yet pricier) supply for a short time. Sometimes in storms the ground itself can become ionized (or deionized) and the system needs to be shut down to avoid unneeded switchovers. The system already knows the places with the highest chance of a power surge and keeps an extra eye on them, but if something goes wrong, two buttons and it's back on manual again.

On the other hand, not everyone builds everything following the exact standards. I have come across automated systems with no OFF switch, and algorithms with endlessly scheduled tasks and no confirmation of their data. These are the dangerous pieces, and they pretty much already exist; they're just not that smart yet.

The danger in AI is the person who built it.

Since this thread is going nowhere and the only person who has some actual points is being ignored, I'll close this thread.