Strong AI, not a problem

Since I started a thread on weak AI, I’ll also do one on strong AI, and tie both in with Stephen Hawking being an idiot. This should be mildly entertaining, and only take a moment…

Weak AI: non-sentient artificial intelligence that is focused on one narrow task.

As covered in the other thread here, weak AI already exists, as the legal entity known as the corporation. It has networked itself, and now controls the media, government, and military. It’s the biggest issue facing humanity. Stephen Hawking is an idiot because he is worried about it one day becoming an issue [facepalm], and because…

Strong AI: a machine’s intellectual capability is functionally equal to a human’s.

This does not exist, and it never will. The entire idea rests on the thinking that “a computer is like the human brain”, so once it becomes fast enough, blah blah. Except a computer is nothing like the human brain, and it never will be, no matter how fast it gets. Stephen Hawking is an idiot because he believes a young wolf will grow up to be a dolphin, despite zero evidence of this ever happening, or any logical reason why it would.

Well, I think you calling Stephen Hawking an idiot is pretty funny, but I’ll move beyond that, as I’m interested in the topic.

If a machine ever is powerful enough to simulate human thought effectively, it may not need to imitate the human brain at all. That gets into the question of what actually matters about such progress.

Are you a religious guy? You don’t seem like one. So you probably won’t defend human consciousness as some sort of inimitable evidence of the divine, I’m guessing. If a computer can simulate consciousness to a degree where sentience is attained, isn’t that the real threat?

Carry it beyond merely achieving consciousness. Suppose that a machine eclipses human logic and processes arguments in a way that escapes the grasp of humans, like comparing a dog’s understanding of the world versus a human’s now, only we’re the dogs. We don’t even have competitive game theories to model such outcomes yet, because we don’t understand the way in which a machine meta-logic might ultimately develop.

Then, of course, the machine won’t be like the human mind; it will be effectively better, if it can do everything in a way that is more efficient (or, more ominously, more dominant) than a human can. This is what Vernor Vinge popularly termed “the Singularity.” At that point, whether or not it successfully copies a human brain doesn’t fking matter.

The wolf/dolphin thing isn’t a great example, since we don’t know the limits and composition of consciousness like we know that one species can’t spontaneously change into another (at least, not usually in a single generation). If consciousness is nothing other than a critical mass at which information greater than the sum of its parts crystallizes into self-awareness, there is nothing that prevents a machine from doing this in theory.

Good post.

My gripe with strong AI is there’s zero reason to take any of it seriously in the first place. It doesn’t exist. It is nowhere close to existing.

In the industrial revolution they thought the brain was “like a machine”; they gave up on that. Now, in the information age, they think it is “like a computer”. But no, it is like the brain: biological intelligence, alive, altering its form, perhaps gaining “knowingness” by being connected to the unified field. These people don’t even know how the brain works, yet they assume it is (or can be imitated by) something that is “storing data”, “retrieving data”, etc.

Probably the baby wolf just grows into a bigger wolf, not a dolphin. I don’t see how a big wolf simulates a dolphin. Shrug. A big wolf might be a threat without being a dolphin. Mostly I taunt the “technological AI” crowd because they make up imaginary threats while we are already totally outgunned by an existing one. It’s good movie material, though.

Does it matter what label it gets? It doesn’t have to be human-level consciousness to reach the top of the food chain and turn humans into batteries. Or maybe it does. Who knows. Do self-awareness and the drive for self-preservation require a level of consciousness only attainable by biological intelligence?

The topic is interesting indeed.

One of the more interesting observations I’ve heard is that the Terminator scenario could start with some innocuous AI that isn’t quite human-level intelligent but is networked to a set of services that gives it enormous power. I think the example was an email program designed to eliminate spam that concludes that expunging all of humanity is the most effective way to stop spam from spreading. It then figures out, from a web search, how to synthesize a deadly virus from the right combination of ink-jet droplets, and gets printers around the world to spread it.

Basically, the challenge is that the people designing the AI will give it the moral imperatives of an investment banker, and we all know most investment bankers would be happy to kill off most of humanity if it meant a higher performance rating going into their bonus.

One of the more interesting articles on my reading list is about the need to develop a moral system for AI, so that it isn’t installed as an afterthought. A bit like Asimov’s three laws of robotics, but more nuanced. I’ll post it if I can find it.
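Roughly what I mean by “not an afterthought”, as a toy sketch (every rule name and action field below is made up purely for illustration): hard constraints sitting in the execution path itself, vetoing a planned action before it ever runs, rather than auditing it afterward.

```python
# Toy sketch of ethics "built in, not bolted on": a shield layer that
# vetoes any planned action violating a hard rule before it executes.
# All rule names and action fields here are hypothetical.

HARD_RULES = [
    ("do not harm humans", lambda a: not a.get("harms_human", False)),
    ("do not disobey the operator", lambda a: not a.get("disobeys_order", False)),
]

def shielded_execute(action, execute):
    """Run execute(action) only if every hard rule passes."""
    for name, passes in HARD_RULES:
        if not passes(action):
            raise PermissionError(f"action {action['name']!r} vetoed: {name}")
    return execute(action)

# Usage:
shielded_execute({"name": "send_report"}, lambda a: print("executing", a["name"]))
shielded_execute({"name": "expunge_humanity", "harms_human": True}, lambda a: None)  # raises
```

The nuance the article presumably gets into is everything this toy version ignores: real-world actions don’t arrive conveniently pre-labeled with “harms_human” flags.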

I know. wink

Yes, the label doesn’t matter; it doesn’t need to be human-level consciousness, nor technologically based, to turn humans into batteries. As demonstrated by the success of “weak AI”, or whatever we call it: the corporation.

@bchad please share that article when you’ve found it. I’m also particularly interested in how we would instill ethical behavior in machines, and whether they would ever be better at making justifiable judgment calls. How would a machine decide between saving two adults by sacrificing a toddler when a driverless car is about to get into an accident? Would we need to program human values into machines?
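Just to make that dilemma concrete, here’s a toy sketch (every maneuver name, field, and weight below is made up, which is exactly the problem) of a naive utilitarian rule a car could apply:

```python
# Toy illustration of a utilitarian crash-decision rule. Everything here
# -- maneuver names, casualty lists, harm weights -- is hypothetical.
# The "ethics" reduces to numbers some human picked in advance.

def expected_harm(option):
    """Sum of harm weights over everyone hurt by a candidate maneuver."""
    weights = {"adult": 1.0, "toddler": 1.5}  # who gets to set these?
    return sum(weights[person] for person in option["casualties"])

def choose_maneuver(options):
    """Pick the maneuver that minimizes total expected harm."""
    return min(options, key=expected_harm)

options = [
    {"name": "swerve", "casualties": ["adult", "adult"]},   # harm 2.0
    {"name": "stay_course", "casualties": ["toddler"]},     # harm 1.5
]
print(choose_maneuver(options)["name"])  # -> stay_course, under these weights
```

So whether the machine’s call is “justifiable” ends up being a question about the weights table, which is just human values written down.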

I can’t remember where the article was, but here are a number of items on the same topic that have been in my bookmarked list:

AI and Effective Altruism: https://intelligence.org/2015/08/28/ai-and-effective-altruism/

The Ethics of Artificial Intelligence, by Nick Bostrom (I like this article a lot, though it doesn’t get into exactly how to code specific principles): http://www.nickbostrom.com/ethics/artificial-intelligence.pdf

Ethical Robots: The Future Can Heed Us: http://kryten.mm.rpi.edu/FS605BringsjordS.pdf

Ray Kurzweil, ever the optimist: http://www.kurzweilai.net/machine-cognition-and-ai-ethics-at-aaai-2015

Not so much an article, but some organizations looking at robotic ethics:

http://www.ieee-ras.org/robot-ethics , https://en.wikipedia.org/wiki/Roboethics

… none of these seem to be exactly what I remember, although I might have conflated an article I was reading with the organizational goal of one of MIRI’s (Machine Intelligence Research Institute) divisions on their site…

Anyone watching Westworld? Apt…

Really enjoying it so far. They’ve packed a lot into two episodes.

Two thumbs up on WestWelt.

Oh, these are great. When constructing my paper I need to read through all the existing (incorrect) theories. There’s some great material in those links…

“Conclusion: Although current AI offers us few ethical issues…”

LOL, mmmk. Tech AI just serves legal-entity AI, which is programmed to maximize profits without restriction by ethics. Therefore any tech AI inherits the serious ethical issues of its parent, unless it actually develops judgment and realizes the parent is insane.

They’re throwing good money at this.

Found this browser window open on another machine. I think this was the article talking about coding machine ethics:

http://www.recode.net/2016/4/13/11644890/ethics-and-artificial-intelligence-the-moral-compass-of-a-machine

AI wasn’t the strongest, but he def had a mean game and the toughness of a C/PF.

Julian Assange is pretty smart

https://www.facebook.com/wikileaks/videos/1348789135156195/

Govt needs to shut down winkylinks and arrest Bradley Manning.

Wow, Assange again pushing far-right candidates? SHOCKER! Anyone acting like he isn’t pushing a narrative is blind. It always seems to be pro-Russian/pro-Putin/pro-Le Pen stuff. Interesting.

So the right, who cry nonstop about how HRC won’t take responsibility for her loss, are now deflecting blame for Le Pen losing onto FB & Goog? Irony at its finest, ladies and gentlemen.

It’s amazing the way your mind works.

ditto