One of the principles of good software architecture, the separation of responsibilities, improves system quality by assigning distinct responsibilities to distinct parts ("modules", "applications", "services"), which can then be studied, understood, and repaired separately. It is intuitive that human attention is more effective when it has to grasp the structure and behavior of small things one at a time rather than huge things all at once. When trying to understand a huge system, the chance of missing some detail or particularity is significantly higher. One class of detail that is easily missed is security.

In the first quarter of this century, the software industry has seen the rise of access control violations based on the leakage of secrets. Some of the most massive data leaks in recent history began with the improper exposure of access credentials. In the 2020 breach of US government systems, adversaries obtained access credentials to software build pipelines, through which they were able to insert malicious instructions directly into the source code of the victims' programs.

Access credentials are a type of secret: information whose access must be strictly controlled, because its exposure can cascade, with the adversary applying the secret to gain access to sensitive information and services. In everyday experience, the access credentials for our homes are the keys we use to unlock our doors. Anyone in possession of a copy of our keys can unlock our doors; we produce copies of our keys only for people we trust, who are authorized to enter; copies of our keys circulating freely in the hands of people we don't trust would be a serious security problem for our homes.

In this article, we will consider the "copiability" of an application's keys through a brief analysis of the memory hierarchy of a modern computer architecture. To keep copies of keys under strict control, we will be guided by the principle of separation of responsibilities, seeking to restrict access to keys to the tasks responsible for using them, and preventing access by tasks without that responsibility. This analysis is not exhaustive; it only begins to explore the problem.

In the abstract, a modern machine is composed of three parts: processor, memory, and devices, in particular storage devices. Our keys, their bits and bytes, will exist "online" in memory and "offline" on a storage device.

To operate with keys, that is, to sign or encrypt, their bits and bytes must be in memory, accessible to the processors. In principle, any processor can execute instructions that freely address the bits and bytes of the keys in memory.
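As a concrete illustration, the short Python sketch below signs a message with an HMAC key; the key value and message are invented for the example. The point is that the key's bytes must be resident in the process's memory for the operation to run at all:

```python
import hashlib
import hmac

# Hypothetical key material; in a real system it would be loaded from
# protected storage. To use it, its bytes must be resident in memory.
key = b"\x13" * 32

# Signing addresses the key's bytes directly in this process's memory.
signature = hmac.new(key, b"transfer order #42", hashlib.sha256).hexdigest()
print(signature)
```

Any instruction stream that can address those 32 bytes, legitimately or not, can copy the key.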

In simple systems, dedicated to a single task, there is no problem of controlling access to these bits and bytes, since the task with legitimate access is the only task running.

In complex systems, where different tasks execute simultaneously, the question of legitimate access arises: in principle, memory is shared by all processors, so instructions from task A can address any memory, including the keys of task B. In these systems, the guarantee that task B's keys will be accessed exclusively by task B depends on externally established trust in the appropriate behavior of all the other tasks on the same system.

This problem of free memory addressing is broader than the risk of improper access to secrets. A defect in task A, with free access to all memory, can cause not only the failure of task A but also the destabilization of the entire system. To prevent this class of defects, among other benefits, virtual memory mechanisms were invented, which effectively restrict each task to addressing only the memory dedicated to it. An operating system with virtual memory guarantees that task A cannot read or write the bits and bytes of other tasks. In these systems, the guarantee that task B's keys will be accessed exclusively by task B is ensured by the operating system, which makes it impossible for other tasks on the same system to directly address the bits and bytes in task B's memory.
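This isolation can be observed directly. In the sketch below (assuming a POSIX system, where the "fork" start method is available; names and values are invented), a second process tries to overwrite the first process's key and only ever touches its own copy of the page:

```python
import multiprocessing as mp

# Task B's key, resident in this process's (task B's) virtual memory.
key_of_b = [b"task-B-key"]

def task_a():
    # Task A runs in a separate address space: this write changes task A's
    # private copy of the page, not task B's memory.
    key_of_b[0] = b"overwritten by task A"

# "fork" assumes a POSIX system; each process gets its own address space.
ctx = mp.get_context("fork")
p = ctx.Process(target=task_a)
p.start()
p.join()

# Task B's key is untouched: virtual memory isolated the two tasks.
print(key_of_b[0])
```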

However, for debugging purposes, modern operating systems offer the administrator services that allow administrative processes to indirectly observe the memory of any process on the system. Processes running with administrator credentials can use these debugging services to observe the memory of any other process, including copying the bits and bytes of its memory. On these systems, the guarantee that task B's keys will be accessed exclusively by task B depends on externally established trust in the appropriate behavior of all tasks running with administrative credentials on the same system.
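On Linux, one such debugging service is the /proc/&lt;pid&gt;/mem interface, guarded by ptrace permissions. The sketch below (Linux-specific; the secret value is invented) reads a process's own memory through it, which is exactly the operation an administrator's debugger could perform against any other process:

```python
import ctypes

# A secret held in this process's memory.
secret = b"hunter2-key-material"
buf = ctypes.create_string_buffer(secret)
addr = ctypes.addressof(buf)

# Linux exposes each process's memory at /proc/<pid>/mem; a debugger with
# ptrace rights on the target can read it. Here we read our own memory,
# exactly as an administrator's debugger could read another process's.
with open("/proc/self/mem", "rb") as mem:
    mem.seek(addr)
    leaked = mem.read(len(secret))

print(leaked)
```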

A particular but important case of this problem is the human operator's access to the system via a terminal. Typical maintenance tasks, such as updating software, adjusting network interfaces, or even consulting system logs, in principle require a terminal with administrator credentials. A human operator at such a terminal can run any program with administrator credentials, including a debugger capable of observing and copying the virtual memory of any process on the system.

The above considerations deal with access to keys in "online" memory. The case of accessing keys on offline storage devices is similar, but with important differences.

The virtual memory mechanism, which makes it impossible for one process to address the memory of another, applies only to memory (which holds "online" instructions and data) and does not apply to storage devices. Modern operating systems segment storage devices with a file system whose files are addressable by all processes running on the same system. In principle, any process on a system can address any file in that system's storage.
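The sketch below illustrates this shared file namespace: one process writes a hypothetical key to a file, and a second process, knowing only the path, reads it without any special privilege (paths and values are invented for the example):

```python
import os
import subprocess
import sys
import tempfile

# "Task B" writes its key to a file it just created.
fd, path = tempfile.mkstemp()
os.write(fd, b"task-B-secret-key")
os.close(fd)

# "Task A" is a different process; knowing only the path, it reads the
# file freely, because the file system namespace is shared by default.
code = f"import sys; sys.stdout.write(open({path!r}, 'r').read())"
leaked = subprocess.run([sys.executable, "-c", code],
                        capture_output=True, text=True).stdout

print(leaked)
os.remove(path)
```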

The problem of free file addressing is similar to the problem of free memory addressing: there is not only a risk of exposing secrets but also a risk of destabilizing the system as a whole. To prevent this class of defects, among other benefits, containerization mechanisms were invented, which effectively restrict processes to addressing only the files dedicated to them. With containerization, the guarantee that the files holding task B's keys are accessed exclusively by the processes in container B is ensured by the operating system, which makes it impossible for processes in other containers to directly address the files in container B.

Typically, however, administrator credentials have unrestricted permission to access all files, whether global or containerized. Therefore, the above considerations about administrator credentials apply equally to the access control of "online" and "offline" keys.

In addition, machine operators, in particular operators with access to the storage device, can in principle copy the contents of the entire storage, including the keys. For an operator with access to a physical device, making this copy can be somewhat laborious. For a hypervisor operator with access to a virtual device, it can be as easy as exporting a backup.

Each hypervisor and operating system provides security mechanisms, such as access control lists, to mitigate these risks and reduce the exposure surface in each case. On the other hand, there are further exposure risks to consider, such as exposure through memory swapping, where keys held "online" in memory are written by the operating system to the swap area of a storage device.
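For the swapping risk specifically, POSIX systems offer mlock(2), which pins a page in RAM so it is never written to the swap area. The Python sketch below calls it through ctypes (Linux-oriented; the key material is invented, and the call may fail with EPERM or ENOMEM if the RLIMIT_MEMLOCK limit is exhausted):

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

# One page holding hypothetical key material.
buf = ctypes.create_string_buffer(b"hypothetical key material", 4096)

# mlock(2) pins the page in RAM so it is never written to the swap device.
# Returns 0 on success, -1 on failure (e.g. RLIMIT_MEMLOCK exhausted).
result = libc.mlock(ctypes.addressof(buf), 4096)
print(result)
```

Key management software commonly combines this with zeroing the buffer before freeing it, so stale key bytes do not linger in reusable memory.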

To completely isolate two applications, "online" and "offline", we can run their processes on different machines connected by a network. Two processes connected by a network have access only to explicitly and voluntarily shared information; such processes have no direct access to each other's memory or storage, regardless of their privileges on their respective operating systems.
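A minimal sketch of this arrangement, with all names and values invented: a "key service" holds an HMAC key and returns signatures over a socket, so clients obtain signatures without ever addressing the key's bits and bytes. Here socketpair() stands in for a real network link between two machines:

```python
import hashlib
import hmac
import socket
import threading

# Hypothetical key, held only by the key service; it never crosses the wire.
SERVICE_KEY = b"\x42" * 32

def key_service(conn):
    # The service receives a message and returns its signature; clients
    # get signatures without ever addressing the key's bytes.
    message = conn.recv(4096)
    conn.sendall(hmac.new(SERVICE_KEY, message, hashlib.sha256).digest())
    conn.close()

# socketpair() stands in for a real network link between two machines.
server_end, client_end = socket.socketpair()
threading.Thread(target=key_service, args=(server_end,)).start()

client_end.sendall(b"payment order #42")
signature = client_end.recv(4096)
client_end.close()

print(signature.hex())
```

The client's only interface to the key is the signing request; copying the key would require compromising the key service's machine itself.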

One of the main advantages of this option is that, as with database management systems, we can produce key management systems: systems dedicated to the custody of keys and the provision of key services. With a key management system, we can solve access control problems within a strict perimeter, relieving all other systems running applications of this responsibility. We thus obtain the benefit of the principle of separation of responsibilities in this particular case, separating the responsibility of key custody from application responsibilities such as signing or encryption.

Prodist has been supplying key management products for over twenty years, specializing in applications for the Brazilian payment system (e.g. PIX, SITRAF, SILOC, DDA, MES) and for the various services of the NUCLEA ecosystem (e.g. SLC, C3).

In 2024, Prodist expanded the scope of its cryptography solutions with the launch of the PRODIST SECURITY MODULE (PSM), offering key management, data encryption (e.g. AES, RSA), digital signatures (e.g. RSA, Ed25519, ECDSA with DREX curve support), and more, taking advantage of the most advanced capabilities of modern systems, such as containerization, high performance, scalability, and fault tolerance.