Much of our understanding of intelligence seems to come not from intelligence itself, from how we live it and experience its works, but instead from rationalizing the activity a posteriori. For example, almost anyone who is really good at physics will not go at it by following rules and formulae, but will actually have a weird “feel” for the thing. Yet when asked about their own performance, they will rationalize it into a red-tape-y picture of abstract, clean, impersonal logic — a picture that is as far from the real thing as possible.
A certain Jan Snajder, commenting on Brent Yorgey’s post, almost nails it:
From my experience, getting to understand monads (or any abstract notion for that matter) is about developing an intuition about when and why they might be useful. This intuition has to be developed gradually on a series of examples. Only if I can see a pattern there, will I be willing to move to a more abstract level, otherwise I won’t bother. Burrito analogies are useful, but only as a sort of private post hoc explanations.
Sam Hughes of qntm.org, after some frustration with transhumanists, argued that creating better-than-human AI in order to create better-than-better-than-human AI, and so on, is impossible. A big discussion ensued, and he ended up closing the post seemingly conceding defeat. I do not agree with his arguments, but I agree wholeheartedly with his conclusions. If that does not prove I am not qualified to talk about intelligence, then what does? ;-)
Nevertheless. AI. The subject seems pretty hot at the moment. Here’s what I think.
Basically, I hold “explosive AI” to be possible, in a theoretical kind of way, but for it to actually happen in our civilization’s time frame would call for tech breakthroughs on par with warp drive or time travel. To put it another way: common AI is already trivial in our present, but this technology can’t progress into the general 42 that transhumanists expect.
Very intelligent people are people with a powerful and active frontal lobe — or, more properly, a strong prefrontal cortex. It gives them the capacity to decide (or, more precisely, to find paths) brilliantly. They usually do the things that work.
But there’s a wrinkle in that picture. Brains are not mystical, transcendental, independent decision makers. Brains are physical systems. If a brain chooses anything, that means a positive feedback device inside the brain has been activated. The person has a given pattern up there that tries to repeat itself, a brain process that enhances its own chances of happening again. This is basically indistinguishable from an addiction.
In fact, we should think about the brain as a complex interweave of addictions, and about beliefs as addictions. Chemically speaking, that’s what they are.
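That self-reinforcing dynamic can be sketched as a toy simulation. The model below is a Pólya-urn-style sketch of my own — an illustrative assumption, not anything from actual neuroscience: every time the “thinking” pattern fires, it adds weight to itself, so it becomes more likely to fire again.

```python
import random

def simulate(trials=1000, seed=42):
    """Toy positive-feedback model of a self-reinforcing habit.

    Two competing activities start with equal weight (an arbitrary
    assumption). Each time 'thinking' wins the draw, its weight grows,
    making the next win more likely -- the addiction-like loop.
    Returns the final share of weight held by 'thinking'.
    """
    random.seed(seed)
    think, other = 1, 1  # initial weights, chosen arbitrarily
    for _ in range(trials):
        if random.random() < think / (think + other):
            think += 1   # positive feedback: firing reinforces itself
        else:
            other += 1
    return think / (think + other)
```

Run it a few times with different seeds and the habit locks in at wildly different strengths — which is the point: the loop amplifies whatever happens early on, for better or worse.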
If a person is intelligent, she likely has some of those positive feedback things about using her noodles. She likes to think.
The reason is that, just as no one gets strong without using their muscles, you can’t get intelligent without practising thinking. A person does not begin to like thinking because she gets intelligent; she gets intelligent because she starts to like it.
If she has a positive feedback loop about using her brain, she will tend to put herself in situations where she has to. Not only will she do what makes her happy, she will also do what keeps her brain active and challenged. In the long run, this means she’ll get used to creating problems for herself! Intelligence, then, just like Truth or Science or Love or anything else, is a trade-off — and we should handle it with care.