The C programming language is built on the approach of using constructs that map easily onto almost any computer's native opcodes. Because of this, C constructs carry little ambiguity: the language is an odd sort of assembly-macro collection.

Code optimization is based on substituting faster machine operations for slower ones. The substitutions have to be made in such a way that the new version still works: both opcode sequences must solve our problem, which inside a computer means that both are logically equivalent.
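
To make "logically equivalent" concrete, here is a minimal C sketch (the function names are mine, just for illustration): both functions return the same value for every possible input, so a compiler is free to pick whichever one maps to the cheaper opcode.

/* Two logically equivalent ways to double an unsigned integer.
   An optimizer may substitute one for the other because they
   agree on every possible input. */
unsigned doubled_mul(unsigned x) { return x * 2; }   /* multiply opcode */
unsigned doubled_shl(unsigned x) { return x << 1; }  /* shift opcode */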

If a computer has a limited number of machine operations (and this is true by definition), then the number of non-simplifiable ways of doing something must also be finite. One can always substitute [A] by [NOT NOT A], but this is never a good idea in an optimization attempt.
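
In C terms, [NOT NOT A] can be written as double bitwise negation (a sketch, with names of my choosing): one of infinitely many inflated spellings of the identity, which is exactly why only the non-simplifiable sequences need to be counted.

/* ~~x is [NOT NOT x]: equivalent to x for any int, but strictly
   worse as code. Any sane optimizer reduces it back to x. */
int not_not(int x) { return ~~x; }
int plain(int x)   { return x; }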

If the logical meaning of a programming-language statement can be understood precisely, without ambiguity, optimization can proceed by comparing all of the logically equivalent constructs the instruction set allows. And if there is no clear, valid, smart way of figuring out the best way of doing something, just brute-force it: enumerate every possible way and measure the execution time of each.
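
A toy version of that brute-force idea, as a hedged sketch (the three candidates and the deliberately naive timing loop are my own choices, not a real superoptimizer):

#include <stdio.h>
#include <time.h>

/* Three logically equivalent candidates for doubling a value. */
static unsigned by_mul(unsigned x)   { return x * 2; }
static unsigned by_shift(unsigned x) { return x << 1; }
static unsigned by_add(unsigned x)   { return x + x; }

/* Brute force: run every candidate and report its execution time. */
int main(void) {
    unsigned (*candidates[])(unsigned) = { by_mul, by_shift, by_add };
    const char *names[] = { "x * 2", "x << 1", "x + x" };
    for (int c = 0; c < 3; c++) {
        volatile unsigned sink = 0;   /* keep the loop from being optimized away */
        clock_t t0 = clock();
        for (unsigned i = 0; i < 100000000u; i++)
            sink += candidates[c](i);
        printf("%-7s %.3fs\n", names[c], (double)(clock() - t0) / CLOCKS_PER_SEC);
    }
    return 0;
}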

According to this reasoning, any ambiguity-free language should automatically give best-case optimization. It should be a solved problem.

But it is not.

The trouble is that languages have grown unambiguous regarding opcodes, but not regarding the ultimate consequences of language constructs.

Example — if we have in C:

int i;
for (i = 0; i < length_of(b); i++) {   /* length_of(b): stand-in for b's element count */
    a = a + b[i];
}

it is ambiguous whether or not the counter variable i is actually needed. In fact, the language is perfectly precise about things that are irrelevant (in this case, it clearly requires a memory allocation corresponding to i), but what exactly the result of this particular idiom should be is ambiguous. Is it the same as “foreach b do a + b”?
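
For instance, a compiler that could prove i irrelevant would be free to produce the counter-free version below (a sketch under my own assumptions: b is an array of n ints, and a accumulates the sum):

/* Same reduction with the counter variable eliminated: a pointer
   walks the array instead of an index. Equivalent to the indexed
   loop whenever i is used for nothing but addressing b. */
int sum(const int *b, int n) {
    int a = 0;
    for (const int *p = b; p < b + n; p++)
        a = a + *p;
    return a;
}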

The optimizer has to resort to all kinds of heuristics to decide which optimizations to apply and which to skip. And with that, optimization becomes a difficult problem.

Higher-level languages, being less ambiguous about consequences, should provide better optimization. But exactly the opposite holds true at the moment.

This is so because high-level languages are more concerned with “fixing C” (being what C is, but easier) than with forming a consistent and useful consequence-to-construct mapping.

But come on, people, optimization is not work for humans! Comparing the execution cost of opcode sequences is boring, boring, boring.

I have never studied algorithm theory, so my argument probably has lots of holes, but WTF! Current programming languages are lame! Also, I suspect computer scientists will look at this and just say: “recursion!”, and I don’t think so, dear, but the margins are just too short.

