Computer programming these days is full of abstraction levels. We have high-level languages translated into bytecode, we have libraries depending on libraries depending on libraries, we have virtual machines, we have OSs that need diagrams just so you can understand how many “levels” they have. If you insist on mapping those levels and start from the bare metal… well…
Generally speaking, more levels means things get easier. That is, in fact, the whole point of “leveling”. But those levels also carry a subtle dilemma: each extra level increases the cost of incorrectness!
“So what?” you might say: incorrectness is never a good thing anyway, and it is hardly the only thing that adds cost to our current over-complicated computing landscape. Granted.
But one of the main benefits of computation, in the first place, is that it allows the “worse-is-better” approach. Let me restate this: automatic computation (as in IT and binary logic) allows imperfect and inelegant solutions to be applied to problems, without the inherent woes of incorrectness, through brute-force number crunching. Informatics is a set of formalist, arbitrary and meaningless conventions that allows us to achieve informal, meaningful effects.
(This might sound strongly counter-intuitive, even outright wrong! A number system can seem much more elegant, depending on who writes the description. But I really stand by this point of view. And, well, the margins are just too short.)
If we assume an instrumentalist stance towards computation — that is, take computation as primarily a way to achieve specific goals — then bits remain an approximation of the real, and not the other way around. The consequence is that the flexibility and abstractability of the code matter much more than its correctness. Or, in someone else’s words, “it is not our business to make programs; it is our business to design classes of computations that will display a desired behaviour.” Worse-is-better, in this view, becomes almost the essence of informatics!
And here lies the dilemma of “layering”. Further abstraction levels allow programmers to be less precise (less correct) in their code. They let programmers express their ideas with more freedom, and those ideas become more powerful and flexible. On the other hand, at each new level incorrectness collects its toll. One extra instruction in assembly costs something like 0.000000000000000001 seconds, but one extra instruction in a higher-level language costs much more, all the more so because it makes waste harder to avoid at each of the levels below it.
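To make that toll a little more concrete, here is a minimal, hypothetical sketch (the function names and the number of layers are invented for illustration) that wraps one trivial operation in successive do-nothing function-call layers and times both versions with Python’s `timeit`. Each layer adds no meaning, only a hop, and yet the hops are not free:

```python
import timeit

def add(a, b):
    # the "bare" operation: one level
    return a + b

def layer1(a, b):
    # each layer below does nothing but delegate downward
    return add(a, b)

def layer2(a, b):
    return layer1(a, b)

def layer3(a, b):
    # three empty levels stacked on top of the real work
    return layer2(a, b)

direct = timeit.timeit(lambda: add(2, 3), number=100_000)
layered = timeit.timeit(lambda: layer3(2, 3), number=100_000)

print(f"direct:  {direct:.4f}s")
print(f"layered: {layered:.4f}s")  # typically slower: extra call frames
```

Real abstraction layers, of course, earn their keep by adding meaning rather than merely delegating; the sketch only shows that even an empty level has a nonzero cost, which compounds as levels stack.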
In this way, layering both diminishes and increases the costs of abstraction, and the line between the two is a subtle one. Layering is definitely an important tool, but it is not one to be used blindly — by which I mean not one to be used in disregard of the exact role it plays in our system.