A guy from the USAF talking about Windows (via OSnews):
They’re still in the model that they want to give all the features enabled to clients, but I think we’ve reached a point where that model is one that is no longer effective. I’m of the opinion that all products ought to be configured with these locked-down configurations, and if the customer decides they want to undo them, then they can do that. They cannot continue fielding products where the cost that is being borne by the consumer in terms of having to maintain configurations and deal with attacks is so high.
This underlines a very important issue, for which i still don’t quite have the words, so let’s begin with this: the act of configuring a computer has a cost.
It’s more complicated than that, though. The guy’s words give us a first hint: features enabled vs. configuration costs. Configuration affects what a computer does; a computer with a different configuration has different capabilities. In fact, your average hacker-owned box will be significantly faster than even better-spec’d muggle-owned boxes.
FEATURE vs CONFIG…
The reason should be readily apparent: data is code, and code is data. Everything in a Turing machine is just numbers in a long line of number-containers. Everything in our computers is just a bit-mess in RAM; even the bits that reside on HDs or out on the interwebs are, in effect, RAM-elements, because that is what drivers are for: drivers make it seem as if everything were in RAM.
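To make the “everything is just numbers” claim concrete, here is a minimal sketch: the same four bytes read as text, as an integer, and as a float. Nothing about the bits themselves says which reading is “right”; meaning lives in whoever interprets them.

```python
import struct

# The same four bytes, interpreted three different ways.
raw = b"\x41\x42\x43\x44"

as_text = raw.decode("ascii")            # "ABCD"
as_int = struct.unpack("<I", raw)[0]     # 1145258561 (little-endian unsigned int)
as_float = struct.unpack("<f", raw)[0]   # some perfectly valid float

print(as_text, as_int, as_float)
```

The bytes never change; only the “codec” applied to them does.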
That is not to say the distinction makes no difference: it is important to control where things are stored, and the mental model of disks and networks is useful. But let’s look at this chimeric nature of bits.
Bits are stupidly flexible. They are so flexible, in fact, that without structure at another level they are useless.
Processors are created in such a way that when fed specific numeric codes they perform specific operations. It is as if you had different calculators, each one working in a particular way, and you could select which one you were using through a switch. Thus “code” bits are the ones you feed into the processor.
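The calculators-behind-a-switch picture can be sketched in a few lines. This is a toy, not any real instruction set: the opcodes and their meanings are invented for illustration.

```python
# A toy "processor": a numeric opcode selects which little calculator runs.
# Opcodes and mnemonics here are made up.
OPS = {
    0x01: lambda a, b: a + b,   # ADD
    0x02: lambda a, b: a - b,   # SUB
    0x03: lambda a, b: a * b,   # MUL
}

def step(opcode, a, b):
    """Feed a numeric code into the 'processor'; it behaves accordingly."""
    return OPS[opcode](a, b)

print(step(0x01, 2, 3))  # 5: the 0x01 bits acted as code
```

Flip the switch (change the opcode bits) and the very same operands get a different treatment.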
A bit that does not get fed into the processor is called “data”, but even then it affects how the processor behaves. Imagine a bit inside a JPEG file. Let’s say it means “red”. It will end up as the “argument” of a MOV instruction (which is, technically, “code”) that takes said red value and sends it to the VGA. If you think of the processor as a function that accepts arguments and spits out results, then it is immaterial whether each argument is “code” or “data”: both are part of the workings. In fact, “data” bits are more dangerous than “code” ones, because error conditions in code have well-explored consequences, even if sometimes undefined.
And now that i’ve already lost all possible prospective readers (computer-illiterate ones will think this is all too technical; computer-literate ones will find it mighty boringly commonplace), let me try to see what this means.
“Configuration” is a special kind of bit. It sits squarely in the “data” category, but it is data meant specifically to alter behaviour. It is a bit that supposedly tells the computer “what someone prefers”, but, as the computer doesn’t discriminate, to it the bit is either an order or a NULL pointer.
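The “order or NULL pointer” point can be shown with a minimal sketch. The key name and the behaviour it controls are invented for illustration; the mechanism is the point.

```python
# A configuration value is "just data", yet the program obeys it like an order.
# The key name and the behaviour it controls are hypothetical.
def render(config: dict) -> int:
    # The computer doesn't ask whether this is a preference or a command;
    # it simply uses it.
    depth = config["color_depth"]
    return 2 ** depth  # number of representable colors

print(render({"color_depth": 8}))  # 256: the "preference" acted as an order
# render({}) raises KeyError: the NULL-pointer side of the coin.
# Not a preference politely declined, but a crash.
```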
Example case: the developer behind HAL once had a gripe against Ubuntu for shipping custom fdi rules with the distro, which he felt undermined the upstream process. In a sense, he was right. This only shows how those rules sat in a sort of grey area between code and data.
A configuration, then, acts as a special-purpose, limited, and messy programming language. It tells your computer what to do, in ways that are unpredictable and unspecific.
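Read this way, even a humble key=value file is a tiny language being interpreted. A minimal sketch (the option names are made up) shows each line acting, in effect, as an assignment statement that mutates the program’s behaviour:

```python
# A config file read as the tiny programming language it secretly is:
# each line is an "instruction" executed against the program's settings.
# Option names here are invented.
def apply_config(text: str, settings: dict) -> dict:
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                          # comments, like any language
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip() # an assignment statement, in effect
    return settings

settings = apply_config("# theme\nfont = mono\nbell = off\n", {"bell": "on"})
print(settings)  # {'bell': 'off', 'font': 'mono'}
```

It has comments, statements, and side effects; what it lacks is a specification, which is exactly the messy part.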
Configurations feel like a hack. They are just messy. But if you stop to think about it, that is so by design. It is not that configurations have an evil nature; it is just that we created this fable of a division between code and data, and we want everything to conform to it.
And this should now reveal how little i actually have to say, for i do not intend to preach against this dualism.
Maybe we could think of the whole computer as a string of codecs. The processor is a bootstrap codec: it can map codec into codec. And the OS is an abstractor-codec: it maps conflicting bitstreams (both program-shaped and I/O-shaped) into VM behaviour, sort of like demuxing them.
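The string-of-codecs picture can be sketched as a chain of functions, each mapping one representation into the next. The layers here are invented stand-ins, not real driver or OS interfaces:

```python
# The machine as a stack of codecs: each layer maps one bitstream into another.
# All three layers are hypothetical stand-ins.
def hd_codec(raw: bytes) -> bytes:
    return raw                     # driver: disk blocks -> bytes "as if in RAM"

def charset_codec(data: bytes) -> str:
    return data.decode("utf-8")    # maps bytes -> text

def app_codec(text: str) -> list:
    return text.split(",")         # maps text -> application-level structure

def pipeline(raw: bytes) -> list:
    # codec mapped into codec, bottom to top
    return app_codec(charset_codec(hd_codec(raw)))

print(pipeline(b"a,b,c"))  # ['a', 'b', 'c']
```

Each layer only needs to honour the interface above and below it; no layer holds the “canonical” form of the data.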
If there is any “answer” to the issue at all, if we are not simply to retort “Mu”, it is that in a system of interlocking codecs the goal is not to find the “canonical” representation but to define a strong set of interfaces.
Don’t make a rule of “never turn that into an option” or even “avoid modes at all costs”. Instead, create a better options dialog. For example, the “wizards” craze of some years ago side-stepped the issue, even if it had problems of its own. And Photoshop’s “tools” are “modal” no matter how much you try to ignore the fact, yet they lack the annoyingness of most modal interfaces.
We’ve been making layers of bits for decades now. This has some advantages (it is almost the very thing that makes computers useful at all), but it is a trade-off. Maybe we still need to explore the dimensions of this trade-off, and its intricacies.