Sam Hughes of qntm.org, after some frustration with transhumanists, argued that creating better-than-human AI to create better-than-better-than-human AI and so on is impossible. A big discussion ensued, and he ended up closing the post seemingly conceding defeat. i do not agree with his arguments, but i agree wholeheartedly with his conclusions. If that does not prove i am not qualified to talk about intelligence, then what does? ;-)

Nevertheless. AI. The subject seems pretty hot ATM. Here is what i think.

Basically, i hold “explosive AI” to be possible, in a theoretical kind of way, but for it to actually happen in our civilization’s time frame would call for tech breakthroughs on par with warp drives or time travel. To put it another way: common AI is already trivial in our present, but this technology can’t progress into the general-purpose 42 that transhumanists expect.

Artificial Intelligence is already a well-established field of science. There are undergrad courses on the topic (i guess). There are symposiums about it. Reams of papers. Not really hot news.

It turns out, AI is more like a big calculator than like HAL. Fuelled by SF, and maybe by bad SF, we were led to believe that “good AI” would be like finding another living intelligent species somewhere else in space. We wanted it to be basically like people, just made of metal. We didn’t just want AIs to have ideas, we wanted them to be self-aware. But although experiments with more practical AI, like using it to solve problems, did prove fruitful, the attempts at creating awareness have been abysmal.

At the core of the issue is the (rather astonishing, actually) distinction between calculation and thinking. Calculation is counting and multiplying. Thinking is understanding mathematics. Curiously, they are very different things. The curious part is that, not so long ago, both were considered part of humanity’s mojo, that whatever-it-is that would set us completely apart from all of existence. But now we can see that neither are we so damn smart (apes have been shown to do whatever it was we defined as “intelligent”, from language to tool use, even some rudimentary writing), nor is pure formal thought so powerful.
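
To make that split concrete, here is a minimal sketch (my own toy example in Python; the names are mine, not from any source) of calculation without understanding: grinding through instances of Goldbach's conjecture is pure mechanics, while understanding why the conjecture should hold, i.e. a proof, is a different activity altogether.

```python
# Calculation vs. thinking in miniature: checking Goldbach's conjecture
# ("every even number >= 4 is a sum of two primes") case by case is
# mechanical; no amount of case-checking amounts to understanding it.

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_holds(n):
    """True if the even number n is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# A machine rips through thousands of cases without ever having an
# idea about numbers:
print(all(goldbach_holds(n) for n in range(4, 10_000, 2)))  # True
```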

From that perspective, the transhumanist extrapolation from Moore’s law to Skynet does not look so secure. Basically, they want too much from technology. If technology changes the rules, they think, then ultra-tech should ultra-make everything extra-spiffy. Unhappily, transhumanists (in some weird platonic respin of things) hold reason itself to be able to procreate: their world-view grants reason a sort of independence from reasoning beings, so they can’t really appreciate the dynamics of intelligence development.

Further down the road, transhumanists think we can cheat the AI-building process by “copying the brain”. Reality check: how much do we know about inner-cell biochemistry? Some, but however much we know is a very small slice of the whole. How much do we know about the brain? Very little. If you really bought into that part of the argument, you should go check your sources.

An interesting take on the issue, as weird as it sounds, is actually Turing’s test: take some chatbots and see which of them can fool random people. As poor a test of achievement as this is, it is actually a great piece of argument. Turing wasn’t trying to convince anyone that computers are intelligent; he wanted to show that intelligence itself is something we know very little about, that everything we “know” about intelligence we have actually extrapolated from random chat.
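
As a toy illustration of how low the bar for “random chat” can be, here is a minimal ELIZA-style sketch, assuming nothing beyond the Python standard library (the rules and names are invented for the example):

```python
# A handful of canned patterns keeps "random chat" going for a
# surprisingly long time, which is why chat is such thin evidence
# of intelligence.
import random
import re

RULES = [
    (r"i feel (.*)",  ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i think (.*)", ["What makes you think {0}?", "Are you sure {0}?"]),
    (r".*\?$",        ["Why do you ask?", "What do you think?"]),
    (r".*",           ["Tell me more.", "Go on.", "Interesting. And then?"]),
]

def reply(line):
    line = line.lower().strip()
    for pattern, answers in RULES:
        match = re.match(pattern, line)
        if match:
            return random.choice(answers).format(*match.groups())

if __name__ == "__main__":
    while True:  # Ctrl-C to quit
        print(reply(input("> ")))
```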

If we take any form of symbol activity to be intelligence, then Babbage’s Difference Engine (of which he built a working portion; it was the Analytical Engine that he never built) was already more intelligent than a normal human being. It could calculate faster than i can. But that is not very interesting. It is actually mighty boring. What we want from AI is that kind of cheating capability that turns things upside down, that changes the rules, that achieves the impossible. We want it to be interesting.
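
For the record, the Difference Engine’s whole trick was the method of finite differences: tabulating polynomials using nothing but addition. A minimal sketch of my own, in Python:

```python
# The method of finite differences, the trick the Difference Engine
# mechanized: once the initial differences are seeded, every further
# value of a polynomial needs only additions.

def tabulate(seed, steps):
    """seed: [p(0), Dp(0), D2p(0), ...], the initial differences.
    Returns p(0), p(1), ..., p(steps) using additions only."""
    diffs = list(seed)
    values = [diffs[0]]
    for _ in range(steps):
        # Left to right, so each sum uses the previous step's difference.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
        values.append(diffs[0])
    return values

# p(x) = x^2 + x + 41 (Euler's prime-generating polynomial):
# p(0) = 41, first difference 2, second difference a constant 2.
print(tabulate([41, 2, 2], 9))
# -> [41, 43, 47, 53, 61, 71, 83, 97, 113, 131]
```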

Calculation-based AI is built upon rules and regularity, so it is not suited to breaking rules and exploiting external regularities. Maybe we could counter this by simply adding FLOPS to the engine; that is one current of thought, and it has been pursued. Generally speaking, what happens is that more FLOPS just make it more obvious how uncreative computers are, because they reach the “end” of their complexity faster. For example, Google Search, arguably one of the most important AI systems in operation, does not create anything, it does not make propositions, even if what it can do with calculation is simply mind-boggling. It keeps on being calculation: the interesting part is in the dataset.
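
To illustrate, here is a deliberately naive sketch of the core of text search, an inverted index (nothing like Google’s actual pipeline, of course; the corpus and names are made up). The machinery is plain bookkeeping and set intersection; anything that looks smart comes from the data being indexed.

```python
# A toy inverted index: search is bookkeeping plus set intersection.
from collections import defaultdict

docs = {
    1: "the difference engine tabulates polynomials",
    2: "chatbots fake intelligence with canned replies",
    3: "the analytical engine was never built",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query):
    """Return the ids of documents containing every query word."""
    hits = [index[w] for w in query.lower().split()]
    return set.intersection(*hits) if hits else set()

print(search("the engine"))  # {1, 3}
```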

Alternative approaches relying on more random components, like swarm AI or neural networks, again display good performance in specific domains but great difficulty rising in complexity.
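
A classic textbook illustration of that wall, in a minimal sketch of my own (no particular library assumed): a single perceptron learns the linearly separable AND effortlessly, and never learns XOR no matter how long it trains.

```python
# A lone perceptron learns AND (linearly separable) but cannot
# represent XOR at all; more training does not help.

def train(samples, epochs=50, lr=0.1):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out  # classic perceptron update rule
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [(x, x[0] & x[1]) for x in inputs]
XOR = [(x, x[0] ^ x[1]) for x in inputs]

for name, data in [("AND", AND), ("XOR", XOR)]:
    f = train(data)
    print(name, [f(*x) for x in inputs])
# AND settles on [0, 0, 0, 1]; XOR never reaches [0, 1, 1, 0].
```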

Basically, this is part one of my argument: AI has been tried, and it did not work like that. Sorry, guys, that’s just not how things work.

This does not really mean “never” as in the post title, just not in any foreseeable future; or, more precisely, so unlikely to happen as to render the issue unworthy of pursuit. Now, a truly dedicated transhumanist would have to ask: why can’t some crazy fella come along and change all the rules, simply find something, some algorithm or some device, that produces interesting-AI? Well, obviously, it can happen. It can also happen that i start farting unicorns. The burden of proof should fall on the guy making amazing claims, riiiight? Even then, let me try to give an in-abstract response to this claim, in part two of my argument.

Skynet can’t happen because we can’t create something with drive. What makes our intelligence interesting is not the intelligence itself, but the fact that we use it to achieve goals that, at the end of the day, do not come from intelligence. The easy part of intelligence is calculation. We did it. Wasn’t all that nasty. Creating pull is a completely different problem, and none of our computer technology is useful for the purpose: it is made to be regular and predictable, not the other way around. For that matter, all the rest of contemporary technology does not appear to have anything to say about the issue either. We simply have no clue at all about how to do it, and judging by the complexity of the human brain it is likely to be one of the most difficult things to accomplish, period.

And as for the general trend in which transhumanists place the whole “explosive AI” thing, that technology is self-propelling and therefore we are close to an exponential growth of complexity: old news. This process has actually been developing for the last 400 to 600 years. And a similar cycle has already come and gone for us apes, in the neolithic revolution. Judging from that precedent, what’s likely to happen is some more years of creativity followed by a longer cycle of linear growth, in which things will certainly work in very different ways, but not so differently as to make everything prior irrelevant.

Ah, and the whole “uploading” thing: Meh!
