It does not take a genius to see this one: the problem with computers today lies in software. At least, it does not take a genius to see that if there is any problem with computers today, it is in software. After all, our capacity for squeezing transistors into micro-space just does not stop growing!
It might actually be harder to see that there are problems with computers today at all. Most of the avenues of development initially undertaken (and, with them, their inherent pitfalls) have come to fruition; they have been explored as far as they could be. But they have also come to the point where they are victims of their own strengths: they are hindered by their own ways of dealing with problems, limited by their very tools.
One of the most evident examples is the multi-core CPU bonanza currently pushed by both Intel and AMD. For a long time now, almost all of the problems in day-to-day computation have been problems of concurrency, like trying to use Word and MSN at the same time. Those problems are difficult to solve not because they need raw computational force, but because they need the computer to be in two places at the same time, which it cannot do, but which having two computers (or two processors) can. And having more than one processor is not that difficult: it has been cheaply feasible in hardware for years. But it was not happening because operating systems could not handle it. Even the Linux kernel took a long time to support multiple processors, and Windows and MacOS only recently took the leap.
I do not subscribe to the idea of a “Software Crisis”. I think the complexity of software is no greater than the complexity of the world in general, and i think the vulnerability of software to complexity (in the sense that complexity issues arise more commonly in software) is a consequence of software being, inherently, a complex way of dealing with complexity.
So, what exactly do i think this “problem in software” is? I don’t think i have one clear-cut answer, as of now. But i have some musings.
For one, i believe it is in general a trap to stop thinking of computers as calculation-executing machines, as machines to which you bring complicated mathematical problems and then wait for them to munch over the numbers. I think we need better ways to deal with computational programs as computations, instead of trying to make everything understandable and usable. I mean, i do not believe a star-voyage background and a tridimensional overlay of folders make it any easier to “use computers”, although i definitely believe a clear, powerful and consistent system for making and organizing backups does.
How much does the “presentation bias” affect the underlying algorithmic system?
(Obviously, i do not think that “user-friendliness” is a problem in itself. But i do think that hiding complexity from the user is what more than half of the HCI talk comes down to, and that is a complete mistake.)
Secondly, i believe the layering of computers has at some point become pointless. Each OS is described as 5 or 6 layers, each driver adds some 4 more, the processor itself has at least three (binary code, microcode, assembly code…), the windowing system some more, likewise the network, and on top of all this sit the GUI toolkits with their own layers. I do not see exactly where it all leads us. I do think this “layering thinking” is in general a good thing, as it furthers abstraction, but i also feel that too many layers may mean we are computing things we do not care to know about.
For example: if my graphics card has the built-in capability of displaying windows on a transparent cube, hell, why not? But why did this come to be in the first place? What is the purpose of 3D graphics, really? It is cool, but is it important?
Third, i believe computer languages have to grow beyond the current metaphors: object-oriented is just another way to say structured, which is just a way to say assembly with syntactic sugar. And syntactic sugar is not bad per se, but neither is it the solution to everything. After all, C is in many ways just another high-level assembler! Languages are one of the best tools for abstraction (and have been since the Greek alphabet introduced vowel signs), and new, more powerful computer languages are definitely a promising avenue of development.
(I know, i know, this all sounds like i have fallen in love with Haskell, but, really, languages still have some potential to explore…)
Basically, it all comes to this: we can have better ideas about computers.
That is to say, we can think about computers differently. I tried to address some of this in my ranting over my new Palm, but really this is all a much older wish: to try to imagine computers in new ways and with more power. I do not want a better Word+Excel+PowerPoint combo, i want things that would make this combo look like a blunt tool.
To a great degree, i think this translates to simpler tools. Simpler programs. Programs with fewer features. But at the same time, programs that make better tools for addressing the complexity of my dealings with computers. Programs that form better symbioses with each other. Like the UNIX philosophy of small tools that work well together, blown up to new dimensions. (And no, i do not mean a graphical GREP!!!!)
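For the record, the UNIX philosophy i have in mind is the classic one: tiny programs, each doing exactly one thing, composed through pipes into something none of them could do alone. A minimal sketch, assuming a POSIX shell with the standard tr, sort, uniq and head utilities (the input sentence is just made-up sample text):

```shell
# Find the three most frequent words in some text, using only small,
# single-purpose tools glued together by pipes.
printf 'the cat and the dog and the bird\n' |
  tr -s ' ' '\n' |   # split: one word per line
  sort |             # group identical words together
  uniq -c |          # count each group of repeated words
  sort -rn |         # order by count, most frequent first
  head -n 3          # keep only the top three
```

The point is not the word counting itself, but that none of these tools knows anything about the others; the pipe is the symbiosis. What i wish for is this kind of composability, applied to things far richer than streams of text lines.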