The C programming language is built around constructs that map easily onto almost any computer's native opcodes. Because of this, C constructs carry little ambiguity; the language is, in effect, an odd collection of assembly macros.
Code optimization is based on replacing slow machine operations with faster ones. The substitutions have to be made in such a way that the new version still works: both opcode sequences must solve our problem, which inside a computer means that both must be logically equivalent.
If a computer has a limited number of machine operations (and it does, by definition), then the number of non-simplifiable ways of doing something must also be finite. One can always replace [A] with [NOT NOT A], but that is never a good idea in an optimization attempt.
If the logical meaning of a programming language statement can be understood precisely, without ambiguity, optimization can proceed by comparing all the logically equivalent constructs available in the instruction set. If there is no clear, valid, smart way of figuring out the best way of doing something, just brute-force it: measure the execution time of every possible way.
By this reasoning, any ambiguity-free language should automatically yield best-case optimization. It should be a solved problem.
But it is not.
The trouble is that languages have grown unambiguous regarding opcodes, but not regarding the ultimate consequences of language constructs.
Example — if we have in C:
it is ambiguous whether the counter variable i is actually needed. In fact, the language is perfectly clear about the irrelevant details (it clearly requires storage corresponding to i), but what the result of this particular idiom should be is ambiguous. Is it the same as “foreach b do a + b”?
The optimizer has to resort to all kinds of heuristics to decide which optimizations to apply. And now optimization becomes a hard problem.
Higher-level languages, being less ambiguous about consequences, should provide better optimization. But exactly the opposite holds true at the moment.
This is so because high-level languages are more concerned with “fixing C” (being what C is, but easier) than with forming a consistent and useful consequence-to-construct mapping.
But come on, people, optimization is not work for humans! Comparing the execution cost of opcode sequences is boring, boring, boring.
I have never studied algorithm theory, so my argument probably has lots of holes, but WTF! Current programming languages are lame! Also, I suspect computer scientists will look at this and just say “recursion!”, and I don’t think so, dear, but the margins are just too short.