A security kernel is “the hardware/software mechanism that implements a reference monitor”, which is responsible for “mediating every attempt by a subject to access an object”. Operating at the center of the system, the security kernel would force all system functions to behave according to their respective security policies, while users with different clearance levels are allowed to interact with the same system, i.e. share the hardware. We will look into the shortcomings of security kernels in terms of their development methodology and their usage by multi-level applications, and discuss how the identified problems could be addressed, as set out in the referenced paper.
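As a rough illustration of this mediation requirement, the sketch below shows a toy reference monitor in Python. The level lattice, the subjects and objects, and the check_access function are all hypothetical, and the no-read-up/no-write-down rules are assumed here purely for concreteness; they stand in for whichever policy a real kernel actually enforces.

```python
# Toy reference monitor: every access by a subject to an object must pass
# through check_access(); nothing else in the system may touch objects directly.
# The levels, subjects, objects and rules here are hypothetical illustrations.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

SUBJECTS = {"alice": "SECRET", "bob": "UNCLASSIFIED"}          # clearance levels
OBJECTS = {"ops_plan": "SECRET", "bulletin": "UNCLASSIFIED"}   # classifications

def check_access(subject: str, obj: str, mode: str) -> bool:
    """Mediate a single access attempt (mode is 'read' or 'write')."""
    clearance = LEVELS[SUBJECTS[subject]]
    classification = LEVELS[OBJECTS[obj]]
    if mode == "read":    # no read up: clearance must dominate the object
        return clearance >= classification
    if mode == "write":   # no write down: object must dominate the subject
        return classification >= clearance
    return False          # deny anything the policy does not recognize

assert check_access("alice", "bulletin", "read")        # read down: allowed
assert not check_access("bob", "ops_plan", "read")      # read up: denied
assert not check_access("alice", "bulletin", "write")   # write down: denied
```

The essential property is not the particular rules but the fact that every access attempt is funnelled through this single, small check.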
Overall, the current state of security kernels leaves something to be desired. The need for a security kernel, its potential to facilitate the secure operation of a multitude of applications, and the value of security mechanisms such as process isolation must all be acknowledged. However, the manifestation of the security kernel falls short of the expectations set by its definition.
Kernel development consists of four phases – a mathematical model, formal specifications, high-order language (HOL) code and the subsequent implementation – with a requirement for formal verification that the “representation of the system at that step corresponds to its representation at the previous step”. This latter requirement holds much significance. A recurring theme across these steps is that the formal verifications have only been achieved through certain compromises. For example, bridging the gap between the mathematical model and the formal specifications demanded a hierarchy of specifications for verification. Verification of the HOL code against the low-level formal specifications was carried out in alternative languages rather than the obvious candidates (GYPSY and EUCLID), owing to their lack of tool support at the time. Correspondence of the machine code to the HOL code lacks a well-formed verification process and is therefore established through testing and manual inspection.
The general criticism of kernels is that, while their practical use is important, kernel implementation efforts seem, by their very definition, overly ambitious. Kernel development efforts carry various levels of verification plans and are expected to be highly secure, owing to the “high scrutiny on protection mechanisms used” and the “use of formal methodologies in the design and development.” For example, Honeywell’s MULTICS system, used for multi-level classified information processing by the Air Force Data Services Center, has the rules of the mathematical model integrated into the operating system.
In terms of applications of security kernels, the necessity of trusted processes for functional multi-level applications should be highlighted. Most implementations of the guard application, which performs sanitization and downgrading of classified documents in military settings so that they become accessible at lower security levels, are aided by multiple “trusted processes”. Attempts to build secure database management systems in kernelized environments, where over-classification of data (caused by processing data of differing security levels at a single level) is otherwise unavoidable, have shown little success and were further complicated by their user-interface requirements. Secure message processing systems, with multiple terminals each handling information at its corresponding classification level, require an “unclassified process to signal processes of all security levels”, along with a trusted process for user-interface functionality.
In all of the above applications, the kernel by itself seems insufficient to support the required functionality. The option of building such applications into the kernel itself is presented as a solution, but it would not be practical for most situations where such applications are required. The complication of providing multi-level security for user interfaces is also a prominent problem, because integrating it into a kernel would severely damage the usability of any application. In contrast to the above applications, “Secure Network Front Ends” and “Secure Distributed Processing” applications demand the use of kernelized processors, making it evident that “trusted processes” should not, and cannot, be used to solve all problems.
One of the prominent limitations of the kernel methodology is that the security kernel is burdened with implementing a reference monitor, which is “an overly simplified view of the world”. The limitations can be summarized as follows.
The crux of the oversimplification of the model lies in the bridge between the modelling and the implementation effort. Axioms of the mathematical model were translated into a set of hypothetical building blocks to aid designers, but no kernel could exactly match these building blocks, with the result that verification demonstrated that the functionality was accurate rather than demonstrating correspondence to the model. The model also fails to account for integrity constraints, favoring protection against compromise, and efforts to retrofit integrity protection onto the model have not been successful.
Integrating additional protection mechanisms into kernels at lower levels of abstraction is done without correspondence to the mathematical model, which poses an evident risk of opening the system up to previously known attacks.
The author firmly states that the misuse of trusted processes is the most prominent limitation of security kernels. Trusted processes are attached to the kernel with permission to violate certain rules of the model, and while they are required to be verified to the same standard as the kernel itself, they are usually misused as a cure-all.
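To make that tension concrete, the hedged sketch below (hypothetical names throughout, reusing the toy levels from the earlier sketch) grants a guard-style process an exemption from the no-write-down rule so that it can release a sanitized document at a lower level. The single exemption line is precisely the behavior the model no longer covers, which is why the guard’s own code would need kernel-grade verification.

```python
# Hypothetical "trusted process" exemption layered on the toy monitor above.
# The guard may violate the no-write-down rule, so its own code (sanitize and
# downgrade) carries the same assurance burden as the kernel itself.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}
TRUSTED_PROCESSES = {"guard"}        # processes exempt from the write rule

def guarded_check_access(process: str, subject_level: str,
                         object_level: str, mode: str) -> bool:
    """Mediate an access, honoring the trusted-process exemption for writes."""
    if mode == "write" and process in TRUSTED_PROCESSES:
        return True                  # exemption: the model no longer constrains this
    if mode == "read":
        return LEVELS[subject_level] >= LEVELS[object_level]   # no read up
    if mode == "write":
        return LEVELS[object_level] >= LEVELS[subject_level]   # no write down
    return False

def sanitize(document: str) -> str:
    """Stand-in for the human or automated review that strips classified content."""
    return document.replace("SECRET-DETAIL", "[REDACTED]")

# A SECRET-level guard releasing a sanitized document to UNCLASSIFIED storage:
released = sanitize("summary ... SECRET-DETAIL ...")
assert guarded_check_access("guard", "SECRET", "UNCLASSIFIED", "write")
# An ordinary SECRET-level process attempting the same write is still denied:
assert not guarded_check_access("editor", "SECRET", "UNCLASSIFIED", "write")
```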
A recurring theme among these limitations is that extensions to the security kernel (such as additional protection mechanisms and trusted processes), which are nominally for policy enforcement but are likely driven by functionality, must be subjected to the same scrutiny as the kernel and must be well defined and formally verified if we are to rely on a security kernel derived from a well-defined (albeit oversimplified) model. This is a very important point: the concept of a verified security kernel could draw users into a false sense of security while the kernel itself has been “extended” in various ways.
Performance penalties are evident in process isolation and operating system emulators. Process isolation refers to a per-process virtual environment where each individual process serves as a protection boundary. Most operating systems do not implement complete process isolation, owing to the context-switching needs of multi-level applications and the general lack of a requirement for such strict boundaries. It can be argued that operating system emulators do not provide exactly the file and process management facilities that applications want, and that an application requiring high-level calls to the kernel should therefore be built into the kernel itself. However, whether this is viewed as a limitation depends entirely on the context. In practical use, virtual machines are quite useful for development-related tasks, and even in production systems virtualization has become extremely popular. It should be noted that this owes much to hardware and networking improvements that came after the heyday of kernel development.
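To keep the earlier notion of a per-process protection boundary concrete, here is a minimal, hypothetical sketch (TinyKernel, spawn, send and receive are invented names, not any real kernel’s API) in which each process owns a private memory map and all inter-process data flow is mediated by the kernel.

```python
# Minimal sketch of process isolation: each process owns a private memory map,
# and the only way to move data between processes is a kernel-mediated send().
# All names and structures are illustrative.

class Process:
    def __init__(self, name: str):
        self.name = name
        self.memory = {}          # private address space; no other process sees this

class TinyKernel:
    def __init__(self):
        self.processes = {}
        self.mailboxes = {}       # kernel-held message queues, one per process

    def spawn(self, name: str) -> Process:
        proc = Process(name)
        self.processes[name] = proc
        self.mailboxes[name] = []
        return proc

    def send(self, src: str, dst: str, data) -> None:
        # The kernel is the only component that crosses the isolation boundary;
        # a real kernel would also apply its security policy at this point.
        self.mailboxes[dst].append((src, data))

    def receive(self, name: str):
        return self.mailboxes[name].pop(0) if self.mailboxes[name] else None

kernel = TinyKernel()
a, b = kernel.spawn("a"), kernel.spawn("b")
a.memory["x"] = 42                          # private to process a
kernel.send("a", "b", a.memory["x"])        # explicit, mediated transfer
assert kernel.receive("b") == ("a", 42)
```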
Limitations of program verification must also be discussed. An interesting point raised by contemporary researchers was that hardware anomalies and low-level synchronization cannot be verified against an abstract model, which leads to the conclusion that program verification alone would not be sufficient and would have to be supplemented by testing. Among the further points made against program verification, two interesting notions were that, owing to the extent and complexity of the verification logic, it may not be well scrutinized, and that verified software will dissuade users from being concerned about failure. It should be pointed out that, given the effort and scrutiny that go into security kernel development itself, it is unlikely that the verification logic would be ignored simply because of its complexity. And while current verification tools are less than ideal, verification is quite important in “demonstrating the absence of a wide range of bugs”; testing, in contrast, can only assert the existence of bugs.
It can be concluded that kernels will most likely not be verifiable. In their stead, a verified, trusted delegate machine, supplemented by untrusted machines for computing power, can be recommended as a secure distributed system. The use of encryption and physical isolation to get the most out of security kernel applications can be recommended for highly sensitive military, government or corporate operations, though it remains unlikely in general practical use.
Existing security kernel categories, such as monolithic kernels (where services reside in the same memory area), microkernels (where user space communicates through a minimal kernel with segmented services), and hybrid kernels (a combination of the two, where some services are developed along with the kernel), are indicative of the compromise between performance, general-purpose use, and security. While macOS and Windows have adopted hybrid kernels, Unix derivatives favor module-loading monolithic kernels, the modularity of which is sometimes confused with microkernel behavior.
The techniques used to realize security kernel objectives, however, are quite prevalent among modern kernel implementations. To name a few: the “ring mechanism” for isolating security-relevant functionality in one or more protection domains, “process isolation”, “operating system emulators” and “trusted processes”. Trusted processes such as the update utility in operating systems play a major role in updating the kernel itself, which is required to ensure continued security and the patching of vulnerabilities. However, the issue of formal verification persists: in this instance, the update utility and the kernel should be able to mutually verify each other as trusted components.
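As a rough sketch of the ring mechanism, assuming the common convention that lower ring numbers are more privileged (the ring assignments and operation names below are invented for illustration), a call gate might permit an operation only when the caller’s ring is at least as privileged as the ring guarding that operation.

```python
# Toy protection-ring check: lower ring number means more privilege, following
# the common 0-3 convention. Ring assignments and operations are illustrative.

RING_OF_OPERATION = {
    "halt_machine":   0,   # kernel only
    "load_driver":    1,   # trusted system services
    "open_user_file": 3,   # ordinary applications
}

def call_gate(caller_ring: int, operation: str) -> bool:
    """Permit the operation only if the caller is privileged enough."""
    required = RING_OF_OPERATION[operation]
    return caller_ring <= required   # smaller number means more privilege

assert call_gate(0, "halt_machine")        # the kernel may halt the machine
assert not call_gate(3, "load_driver")     # an application may not load drivers
assert call_gate(3, "open_user_file")      # ordinary file access is allowed
```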
Operating system vendors use multiple methods to ensure that their code base is sound. These are largely limited to extensive testing and evaluation procedures (e.g. Common Criteria EAL4 evaluation in Microsoft’s case), but since these methodologies are standardized, they are likely to provide an ample amount of security for general-purpose systems.
It could be argued that development efforts focus on initial delivery and subsequent maintenance contracts for the sake of monetization, with security retrofitted as an afterthought. While this is likely to remain true of general software applications, the emergence of cloud computing and virtualization has vastly increased the necessity of verifiable security kernels. Concepts such as “driver isolation” have become a necessity for cloud service providers to ensure the security of their clients, as one malicious user could compromise a massive corporate user base if the security were inadequate. This reflects the two directions system development took after the 1976 Multics experiment. While applications are likely to favor functionality over security, as early general-purpose UNIX systems did, cloud service providers will invest heavily in the development of secure computer systems. This would also alleviate problems such as “vendor lock-in”: end users need not concern themselves with added hardware costs, only with the service cost of the cloud provider, while the provider remains capable of implementing a security infrastructure that best assures integrity and confidentiality.
Ames Jr, Stanley R. “Security kernels: A solution or a problem?.” In 1981 IEEE Symposium on Security and Privacy, pp. 141-141. IEEE, 1981.