Software features that I would like to see become standard

From Nick Jenkins
Revision as of 08:43, 18 August 2010 by Nickj



Every bit of software that stores a user's work in memory before saving it should have an auto-save / auto-recover function, and it should be enabled by default, and it should work.

Software should never require the user to reboot

I have yet to hear a solid technical reason why this should be required. Next time you're using a computer and see a message to the effect that you must reboot it, ask yourself what the fundamental reason for this is.

To reboot is "to start in a known state" (the word comes from bootstrap, as in pulling oneself up by one's bootstraps). It follows therefore that a reboot is a sign that the software either forgot what state it was in (the software or hardware is buggy), or the software authors didn't make the effort to handle any state other than one particular standard known state – i.e. boot-up.

Configuring hardware drivers is often a time when numerous reboots are required. Why? As best as I can determine, the answer is that the operating system interface to the driver forces it to provide 'load driver' functionality, but does not force it to have 'unload driver' functionality. Well, it should! Dynamic loading & unloading of drivers is an idea whose time has come.

Installing applications is another favourite time to insist on a reboot. Again, the question to ask is: 'Why?' In this case it's usually to force new libraries or environment settings to be loaded. Yet again, it should be possible to allow these things to happen dynamically. Environment settings should have to use a unique nomenclature (such as including the full software name & version) so that different bits of software don't clobber each other. Furthermore, each bit of software should keep its libraries in its own subdirectories alongside the application, and not put any files into shared or operating system directories – doing this is just asking for trouble, and is something that should never have been allowed in the first place.

Another bad offender for forcing a reboot is a crash of either the operating system or applications. Applications crashing should not halt the entire OS – this is a design issue, and both NT & Linux handle this much better than 95 or 98. One reason operating systems crash is bad device drivers – this has been a real problem for me on NT, where a buggy Epson printer driver caused blue-screens-of-death. Again a design issue, because the drivers are trusted more than they should be, and allowed access to Ring Zero. Ring Zero is just like the old DOS TSRs, which could do just about anything they wanted, and if they died they took everything down with them in a flaming heap. The upside of this is that there is a speed benefit, but as far as I'm concerned it's nowhere near enough to justify the Pandora's box of lost stability that it opens.

The hardest situation of all to solve when avoiding a reboot is updating the version of software that's currently running – such as the kernel or loaded libraries. This is an extremely hard problem to solve correctly. Nonetheless, I am not convinced that it is insoluble. Thought about from a high-level point of view, it's a question of saving state, unloading whatever it is, and then loading the new version with the same state. It requires careful design of the state-saving format so that it can specify everything subsequent versions will need, especially since you don't yet know completely what future versions of the software will want to know! This is pretty tricky, so I will wait for the above problems, which are more readily solvable, to be rectified before expecting much progress on this front.

Here are some hints on this for Microsoft, who by design of their operating systems cause many of the above situations, and who are well behind Linux in the whole uptime field:

  • Changing a network configuration or IP address is not an acceptable reason to force a reboot.
  • Changing display settings isn't either.
  • Installing an application, even if it's big like Visual Studio, still isn't a valid reason.

Microsoft tells me that the number of scenarios that require a reboot went from 90-something in NT4 to 20-something in Windows 2000. This is good in that at least Microsoft are counting these scenarios and trying to minimise them, and this reduction is definitely a progression towards zero reboots. Nonetheless, it's still 20-something scenarios too many.

In the final analysis, there's only one totally irreproachable reason for software to force a reboot – namely that the hardware requires it. Of course you have to turn off a computer when adding or removing hardware inside the case – but this should hopefully happen less in the future with more devices available using PCMCIA or USB, both of which support hot-swapping.

I'm looking forward to the day when machines reach a much higher level of reliability and uptime – I see this as a quality issue more than anything else.

Added 15 Jul 2003 - a link to a Linux discussion on Thoughts On Bootless Kernel Upgrades (link broken now).