Wednesday, January 3, 2024

Microkernel operating systems: introduction and history

(this is a slightly edited transcript of a YouTube video)



Microkernel OSes have been around in some form since the late 60s. If implemented properly, they can have significant advantages over conventional OSes. However, they have been surrounded by controversy, including a famous early-90s Usenet debate between Linus Torvalds, creator of Linux, and Andrew Tanenbaum, creator of the MINIX and Amoeba microkernel OSes. Much of this controversy stems from several suboptimal implementations that significantly over-promised and under-delivered.

Minix 1.5 (1991); x86 PC (QEMU)

Microkernels vs. monolithic kernels

Basically, in a pure microkernel OS, the kernel - the lowest level and most privileged part of the OS - is reduced to providing only basic CPU scheduling, memory management, and communication between tasks. Although microkernels are usually relatively small, small size alone does not make a kernel a microkernel; what defines a microkernel is the reduction of the types of services the kernel provides to a minimum.

This is in contrast to mainstream OSes like Windows, Linux, and macOS, which are all based on more or less monolithic kernels, where, in addition to the functionality provided by a microkernel, higher-level subsystems like device drivers, filesystems, network protocols, and sometimes even windowing run inside the kernel. If any of these subsystems crashes on a monolithic kernel, the entire system is usually taken out; similarly, a security hole in a single subsystem may allow full access to everything in the kernel.



In a microkernel, on the other hand, these subsystems run as separate server tasks, usually with memory protection between them, on top of the kernel, and are accessed through some form of message passing. This kind of architecture significantly enhances security and reliability over monolithic kernels if the OS is properly designed. Since subsystems are protected from one another, crashes and security holes usually only affect one subsystem and maybe some of the processes with which it communicates.

Another major advantage of microkernel OSes is that they are easier to design for extensibility. Extending the functionality of typical monolithic kernels tends to be rather difficult since there are many special considerations involved in writing kernel modules as opposed to regular user programs. Extending a well-designed microkernel OS is normally just a matter of writing a normal user program that uses APIs to export an interface to other processes.
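To make that concrete, here is a minimal sketch in C of what such a user-level service could look like. The ipc_receive()/ipc_reply() calls and the message layout are hypothetical stand-ins for whatever message-passing primitives a given microkernel actually provides, not any real OS's API:

/* Hypothetical sketch of a user-level service on a microkernel OS.
 * ipc_receive()/ipc_reply() and struct msg are made-up stand-ins for
 * whatever message-passing primitives the kernel actually provides. */
#include <stdint.h>
#include <string.h>

struct msg {
    uint32_t op;          /* operation code defined by this service  */
    uint32_t status;      /* filled in by the server on reply        */
    char     data[256];   /* payload; interpretation depends on op   */
};

/* Assumed kernel primitives: block until a client sends, then reply. */
extern int ipc_receive(int channel, struct msg *m);
extern int ipc_reply(int channel, const struct msg *m);

int main(void)
{
    int chan = 1;                 /* assume the channel was set up elsewhere */
    struct msg m;

    for (;;) {
        if (ipc_receive(chan, &m) < 0)
            continue;

        switch (m.op) {
        case 1:                   /* e.g. "read the current temperature" */
            strcpy(m.data, "21.5C");
            m.status = 0;
            break;
        default:
            m.status = (uint32_t)-1;   /* unknown request */
            break;
        }
        ipc_reply(chan, &m);      /* unblock the client with the result */
    }
}

Clients would send a matching message and block until the reply arrives; from the programmer's point of view this is just an ordinary program, with none of the special considerations involved in writing a kernel module.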

The main disadvantage of microkernel systems is that a poorly designed one can be significantly slower than a monolithic kernel. A well-designed microkernel OS mostly avoids this; back in the early 90s, QNX, one of the most successful microkernels, actually outperformed System V/386, one of the leading conventional OSes of its time, on some benchmarks in one paper.

Microkernels can also make it easier to implement multiple personalities, each providing compatibility with a different existing OS within a single system, although this is much less relevant in the modern world, with only a few mainstream OSes and widespread use of virtualization.

Hybrid kernels

In between typical monolithic kernels and typical microkernels, there are also a few different forms of hybrid kernels, which are basically a way to retain some of the advantages of microkernels while avoiding the performance issues of certain microkernel designs.

One form of hybrid kernel is fairly similar to a typical monolithic kernel in that it includes most of the same higher-level subsystems, but instead of directly providing the environment that user programs see, it provides a generic mid-level interface; the user-visible environment is provided by a personality server, and multiple such servers can be supported to allow compatibility with different OSes. NT-based Windows, which includes all modern Windows versions, is by far the most common example of such a system.

Architecture of NT-based Windows; the "microkernel" in this case is just the base scheduling/IPC layer, rather than a true microkernel

Windows XP (2001); here the Unix programs run under the add-on Interix personality and the desktop runs under the built-in Win32 one


Another variant of hybrid kernel includes some higher-level subsystems in the kernel, but it also provides an extensibility layer that allows user-level servers to export the same kinds of interfaces that the kernel uses for its own subsystems. Plan 9 is a good example of such a system. Virtually everything except memory is accessed through a filesystem interface, with both kernel drivers and user-level servers exporting filesystems that are accessed the same way by client programs. 

Architecture of Plan 9

Plan 9 4th Edition (2004); showing the uniform filesystem interface
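As a concrete (if simplified) illustration of that uniform interface, the sketch below dials a TCP connection on Plan 9 purely by opening, reading, and writing files under /net; the same calls work whether /net is served by a kernel driver or by a user-level file server. Error handling is omitted and the address is just an example; Plan 9's C library also provides a dial() routine that wraps these steps.

/* Plan 9 C sketch: make a TCP connection by talking to the /net file
 * tree directly. Error handling omitted for brevity. */
#include <u.h>
#include <libc.h>

void
main(void)
{
    char dir[40], buf[64];
    int ctl, data, n;

    ctl = open("/net/tcp/clone", ORDWR);    /* allocate a new connection */
    n = read(ctl, dir, sizeof dir - 1);     /* read back its number, e.g. "4" */
    dir[n] = '\0';

    /* Ask the TCP stack (kernel driver or user-level server; the client
     * can't tell and doesn't care) to connect somewhere. */
    fprint(ctl, "connect 192.0.2.1!80");

    snprint(buf, sizeof buf, "/net/tcp/%s/data", dir);
    data = open(buf, ORDWR);                /* now it's just a byte stream */

    write(data, "GET / HTTP/1.0\r\n\r\n", 18);
    close(data);
    close(ctl);
    exits(nil);
}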

In addition to these kinds of hybrid kernels, there are also more or less typical monolithic kernels that include microkernel-like message passing. These are sometimes described as hybrids, but I don't consider them to be such, since they are otherwise conventional monolithic kernels. One somewhat well-known family of such systems is the BeOS and Haiku family.

Architecture of BeOS and Haiku; user mode servers are accessed through anonymous message passing, while kernel drivers are accessed through files/sockets

Haiku R1 Beta 4 (2022); x86 PC (QEMU)

Darwin, the basis for all of Apple's current OSes such as macOS and iOS, also used to be a very typical example of such a system, although more recent versions have moved towards something more like a true hybrid kernel with the addition of user-mode device driver support; even so, its extensibility is rather limited compared to something like Plan 9.

Architecture of Darwin-based OSes

Mac OS X 10.3 (2003); Power Mac G4 (QEMU)

Related to hybrid kernels are microkernel OSes like QNX and Amoeba that build certain server tasks into the kernel. However, in such systems these higher-level services are still accessed through message passing as if they were regular server tasks, whereas in hybrid kernels and typical monolithic kernels at least some such services are accessed through direct system calls rather than messages.


Architecture of QNX
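To show what "accessed through message passing as if they were regular server tasks" looks like from the client side, here is a rough sketch using QNX Neutrino's synchronous IPC calls (ConnectAttach/MsgSend) as I understand them. The pid/chid values and the request/reply structs are invented placeholders; real programs would normally locate servers via name_open() or just use the POSIX layer on top.

/* Rough QNX Neutrino-style client sketch: whether the "server" is a
 * separate process or built into the kernel, the client just sends a
 * message and blocks for the reply. Structs and IDs are invented. */
#include <sys/neutrino.h>   /* ConnectAttach, MsgSend */
#include <stdio.h>

struct request { int type; int arg; };
struct reply   { int status; int value; };

int main(void)
{
    struct request req = { 1, 42 };
    struct reply   rep;

    /* Placeholder pid/chid; a real program would discover these. */
    int coid = ConnectAttach(0, /*pid*/ 1, /*chid*/ 1, _NTO_SIDE_CHANNEL, 0);
    if (coid == -1)
        return 1;

    /* Blocks until the server replies, kernel-resident or not. */
    if (MsgSend(coid, &req, sizeof req, &rep, sizeof rep) == -1)
        return 1;

    printf("status=%d value=%d\n", rep.status, rep.value);
    return 0;
}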

History of microkernel OSes

Early atypical microkernel systems

Early examples of OSes broken down into a collection of small programs include the SCOPE and NOS family for Control Data's 6000 series, developed from the mid 60s to the mid 90s, and the THE OS for the Electrologica X8 from the mid 60s, although these aren't quite true microkernels.

NOS 2.8.7 (1997); Cyber 170 (DtCyber)

The earliest true microkernel OS may have been MTS, with the first version released in 1967.

MTS D6.0A (1988); IBM S/390 (Hercules)

Other early examples include the RC 4000 Monitor, with the first version released in 1969, and RSX-15, with the first version released in 1971. However, these OSes were still somewhat atypical. 

RC 4000 Monitor 3.0 (1993?); RC 4000 (RC 4000 Emulator)

XVM/RSX V1B (1976); PDP-15 (SIMH)

Early typical microkernels

One of the earliest typical microkernels was Thoth, the first version of which was completed in 1976. Thoth was a significant influence on QNX, which was one of the first widely successful typical microkernel systems, with the first commercial version released in 1982. 

QNX 1.2 (1983); IBM PC XT (PCE)

Due to its good realtime support, QNX became popular for high-end embedded systems, and it is still actively developed and widely used.

QNX Neutrino 6.1.0 (2001); x86 PC (QEMU)

It was also used on desktops and servers to some extent but was never particularly popular there, and recent versions basically dropped support for non-embedded systems entirely.

CTOS was another microkernel system from the early 80s that achieved some commercial success, though not to the same extent as QNX; it was discontinued in the late 90s.

CTOS III s1.3.5 (1998); x86 PC (VirtualBox)

CTOS III s1.3.5

Other well-known families of microkernel OSes originating in the 80s are Minix and the AmigaOS family, although Amiga-like OSes are rather atypical and have little to no memory protection.

Minix 3 (2017); x86 PC (QEMU)
AmigaOS 1.3.3 (1990); Amiga 2000 (FS-UAE)
AROS One (2023); x86 PC (QEMU)

 

Later microkernel systems

Later well-known examples of microkernel-based OSes include GNU/Hurd, EPOC32/Symbian, and Horizon, the OS used on the Nintendo 3DS and Switch.

Debian GNU/Hurd 2022-10-29; x86 PC (QEMU)

EPOC32 1.05 (1999); Psion Series 5mx (WindEmu)

Nintendo 3DS OS (Horizon) (201?)

Nintendo Switch OS (Horizon) (201?)

 

Mach and the failed push to make microkernels mainstream 

Despite the commercial success of some early microkernel OSes like QNX and CTOS, microkernel architecture didn't start to get widespread attention from industry and academia until the release of Mach in the mid-80s. Between the late 80s and the mid 90s, Mach was hyped as the next big thing, and it was thought that all mainstream OSes would eventually be based on Mach-like microkernels.

4.3BSD/Mach (1986); VAX-11/780 (SIMH)

Mach started out as a monolithic kernel with support for message passing, but was designed with eventual conversion to a microkernel in mind. Unfortunately, attempts to convert it were never particularly successful: when built as a microkernel, Mach had significantly worse performance than monolithic kernels, being almost 70% slower in the worst cases (e.g. see this paper comparing a Mach microkernel system with Ultrix, a conventional monolithic Unix, and this comparison of microkernel and monolithic builds of OSF/1).

Mach and similar kernels seem to have completely ignored the architecture of QNX and systems like it, even though QNX was already fairly successful and well-established at that point. The biggest problem was that Mach's implementation of message passing was much more complicated than that of QNX-type OSes. It turned out that the only way to get anything resembling decent performance out of Mach was to retain its original monolithic architecture, which is what most commercial Mach-based OSes have done, including DEC OSF/1 and the NeXTStep/Darwin lineage that leads to Apple's current OSes.

DEC OSF/1 2.0 (1992); DECstation 5000 (GXemul)
NeXTStep 3.3 (1995); NeXTStation (Previous)

As a result, even to this day many people assume that poor performance is inherent to microkernels, and that Linus was right about monolithic kernels being generally better for practical systems, despite this not really being the case.

Post-Mach research microkernels

Research microkernels did eventually move on from Mach-type systems, ending up more or less independently reinventing QNX's kernel with the L3 and L4 kernel family, but the damage had already been done when it comes to microkernel-based OSes for desktops and servers.

L3FREI 08.15.0 (1994); x86 PC (VirtualBox)

L3FREI 08.15.0

It's only relatively recently that interest in microkernels for general-purpose systems has started to pick back up with systems like Fuchsia and Genode, although in my opinion these systems still have several issues that limit their potential as practical general-purpose OSes.

Fuchsia alpha (2017); x86 PC (Fuchsia emulator)

Genode Sculpt 21.10 (2021); x86 PC (VirtualBox)

Issues remaining in post-Mach research microkernel systems

One of the biggest issues is excessive vertical modularization: for example, implementing device drivers as servers separate from the higher-level services built on top of them, such as disk filesystems and network protocols, even though the drivers are often handling the same data as those services, so splitting them up mostly just adds overhead. QNX and similar systems, on the other hand, tend to implement closely related lower- and higher-level services as plugins within the same process. Even on a microkernel with fast message passing, a round trip through several processes can still have significant overhead.

e.g. QNX implements its entire storage stack as a single disk server process, whereas Genode splits it into three separate servers
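Here is a hedged sketch of what "plugins within the same process" can look like: the disk server defines ordinary C interfaces that a block-driver plugin and a filesystem plugin implement, so a read request crosses function calls rather than process boundaries. None of these names come from a real OS.

/* Hypothetical in-process layering inside a single disk server. */
#include <stddef.h>
#include <stdint.h>

/* Interface a block-driver plugin exports to the rest of the server. */
struct blkdev_ops {
    int (*read_blocks)(void *dev, uint64_t lba, void *buf, size_t n);
    int (*write_blocks)(void *dev, uint64_t lba, const void *buf, size_t n);
};

/* Interface a filesystem plugin exports; it calls blkdev_ops internally. */
struct fs_ops {
    int (*open)(void *fs, const char *path);
    int (*read)(void *fs, int handle, void *buf, size_t len, uint64_t off);
};

/* The server wires the two together at mount time... */
struct mount {
    void *dev;  const struct blkdev_ops *blk;
    void *fs;   const struct fs_ops     *fsops;
};

/* ...so a client's read request, received as one message by the server,
 * is satisfied entirely inside this process. */
int handle_read(struct mount *m, int handle, void *buf, size_t len, uint64_t off)
{
    return m->fsops->read(m->fs, handle, buf, len, off);
}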

In the situations where vertical modularization does make sense, a well-designed OS can still allow layers to be split across multiple processes by letting instances of the same server, loaded with different layers, call each other.

e.g. a QNX-like OS could theoretically run separate disk servers to separate the encrypted and decrypted sides

Another issue is that even though the kernel message passing layer itself is lightweight in many of these systems, the user-level message transport layer on top is still more complex than necessary, with support for a wide range of structured data types in messages (e.g. compare Genode's transport layer, with its support for inheritance and dynamic marshalling, with that of QNX and its closed set of Unix-like functions that all use a per-function fixed structure). This can add overhead to services that only deal in bulk unstructured data, like disk filesystems and network stacks, and can also open room for security holes that would otherwise be entirely avoidable. Structured message support can just as easily be implemented in optional libraries used only by the services that actually need it.
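To illustrate the "per-function fixed structure" style, here is a hedged sketch of what a read request and its reply might look like on the wire. It is loosely modelled on the idea behind QNX-style I/O messages, but all of the names and fields are invented; anything genuinely structured would live in optional libraries rather than in the transport itself.

/* Hypothetical fixed-layout messages for a read operation: the transport
 * only ever copies flat structs plus a raw data buffer, so there is no
 * marshalling code in the path of bulk I/O. Names and fields are invented. */
#include <stdint.h>

enum { MSG_READ = 3 };            /* one opcode per function */

struct read_request {
    uint16_t type;                /* always MSG_READ                  */
    uint16_t flags;
    uint32_t handle;              /* previously opened object         */
    uint64_t offset;
    uint32_t nbytes;              /* how much to read                 */
};

struct read_reply {
    int32_t  status;              /* 0 on success, negative error code */
    uint32_t nbytes;              /* how many data bytes follow        */
    /* the data itself travels as a second, raw buffer rather than
       being encoded inside a structured message */
};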

Still another issue is that some of these systems have various limitations on Unix compatibility, which isn't necessarily a problem for research or embedded OSes, but which significantly limits the available application base for a general-purpose OS.

Conclusions

To conclude, I think that microkernel architecture still holds a lot of promise despite quite a few microkernel OSes failing to live up to the hype. QNX got the general architecture more or less right back in the early 80s. I said in my overview of the Unix-like family that I consider it the best balance between architectural purity and practicality of any OS family, and I think that a QNX-like OS is probably the optimum within the Unix-like family for a large range of use cases, since it combines the superior extensibility, stability, and security of microkernels with reasonable compatibility with existing Unix code and decent performance. This is why I'm writing my own QNX-like OS, although at the moment it's still extremely preliminary.
