Secure enclaves & Intel® SGX
Secure enclaves provide an operating environment for code that is secure from interference by outside parties, including root users, malware, and the OS. This environment is built on three key pillars:
- Fully isolated execution
- Remote attestation
- Sealing
Intel Software Guard Extensions (SGX) is an instruction set extension that implements secure enclaves on x86.
Fully isolated execution
Normally, when you run a program on a computer, there's a large amount of code with strictly more privileges than your program: the operating system, the hypervisor, and the firmware. These lower layers of the system stack can inspect and interfere with the operation of your program. They can read secrets from memory, create copies of your program on other machines, and modify the way your program works. Privilege escalation vulnerabilities can even allow less-privileged users to do the same.
Programs running in a secure enclave are fully isolated from the rest of the system. The lower layers do not have access to the memory inside the enclave. This means that once the enclave is loaded, the code and data kept in the enclave are safe from modification. This isolation is enforced by hardware, so there is no chance of a software bug or misconfiguration circumventing it. What's more, you can verify that the isolation is correctly set up; see below for remote attestation.
As an enclave is launched, its memory contents are recorded. These initial memory contents are the identity of the enclave. The identity may be summarized by computing a cryptographic hash of the initial memory contents (this hash is called a measurement in Trusted Computing jargon). If you change the code of an enclave (and therefore its functionality), its identity changes. Note that changes that the enclave makes to its memory as it's running don't change its identity, because these changes are not part of the initial memory.
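The identity-as-hash idea can be illustrated with a simplified sketch. (The real SGX measurement, MRENCLAVE, is a SHA-256 digest built up incrementally as pages are added to the enclave, not a plain hash of a single buffer.)

```python
import hashlib

def measure(initial_memory: bytes) -> str:
    # Simplified stand-in for an enclave measurement: a cryptographic
    # hash of the initial memory contents (code plus initial data).
    return hashlib.sha256(initial_memory).hexdigest()

enclave_v1 = b"code: report(); data: x=0"
enclave_v2 = b"code: report_v2(); data: x=0"  # code changed -> new identity

# Changing the code changes the identity.
assert measure(enclave_v1) != measure(enclave_v2)

# The identity is fixed at launch; it is deterministic for the same
# initial memory, regardless of what the enclave writes later.
assert measure(enclave_v1) == measure(enclave_v1)
```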
This identity allows you to use code as a security principal: you can authorize programs to do certain things depending on their identity. For example, the reporting software may have read-only access to a database, whereas the accounting software has read-write access. This distinction is not based on different credentials configured for each application, but on the actual software itself, so the accounting software's authority can't be misused by the reporting software.
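A hypothetical access-control list keyed by enclave identity (rather than by credentials) might look like the following sketch; the enclave code strings and permission names are illustrative only.

```python
import hashlib

def measure(initial_memory: bytes) -> str:
    # Stand-in for the enclave measurement.
    return hashlib.sha256(initial_memory).hexdigest()

reporting_code = b"reporting enclave code"
accounting_code = b"accounting enclave code"

# Hypothetical ACL: permissions are granted to code identities,
# not to credentials that the code merely holds.
acl = {
    measure(reporting_code): {"read"},
    measure(accounting_code): {"read", "write"},
}

def authorize(enclave_code: bytes, action: str) -> bool:
    return action in acl.get(measure(enclave_code), set())

assert authorize(accounting_code, "write")
assert authorize(reporting_code, "read")
# The reporting software cannot exercise the accounting software's authority:
assert not authorize(reporting_code, "write")
```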
Remote attestation
An enclave can provide proof of its identity as well as its execution environment using remote attestation. An attestation report is a cryptographic proof, created by the CPU, that the hardware is indeed running a particular enclave, with SGX, on legitimate hardware. The proof is a digital signature, where the CPU owns the private key. The private key is used only for remote attestation and can't be misused. The public key is available from the CPU vendor and can be used to verify the signature.
Because the attestation report includes the enclave identity as well as information about the execution environment, it can be used to verify that a particular program is running properly in a secure environment. For example, an SGX emulator would not be able to produce a valid attestation report.
Additional data may be included in an attestation report which allows remote attestation to be part of a cryptographic protocol. For example, you could use remote attestation to authenticate the peers when establishing a secure channel.
When using remote attestation, ensure sufficient binding between any information being exchanged and the attestation report.
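One common binding pattern is to hash the exchanged data into the report's user-data field (in SGX, the 64-byte REPORTDATA field), so the CPU's signature covers it. The sketch below shows only that binding check; the report's signature verification is omitted, and the key bytes are placeholders.

```python
import hashlib

# The enclave binds its key-exchange public key to the report by hashing
# it into the report's user-data field before asking the CPU to sign.
channel_pubkey = b"ephemeral key-exchange public key bytes"
report_data = hashlib.sha256(channel_pubkey).digest()

def bound_to_report(received_pubkey: bytes, attested_report_data: bytes) -> bool:
    # The verifier checks (after validating the CPU's signature on the
    # report, not shown) that the key received over the channel is the
    # one the enclave attested to.
    return hashlib.sha256(received_pubkey).digest() == attested_report_data

assert bound_to_report(channel_pubkey, report_data)
# A man-in-the-middle substituting its own key is detected:
assert not bound_to_report(b"attacker's public key", report_data)
```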
Sealing
Enclaves don't have an intrinsic way to persistently store data. Instead, an enclave can obtain a key tied to its own identity and that of the CPU. This key is called a sealing key, and it can be used to encrypt data in such a way that it can only later be decrypted by the exact same enclave program running on the exact same CPU. If the program changes, the enclave identity is different, and the sealing key will be different. If the program runs on a different CPU, the CPU identity is different, and the sealing key will again be different.
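The key-to-identity binding can be sketched as a key derivation mixing a per-CPU secret with the enclave measurement. (The real SGX derivation, via the EGETKEY instruction, also mixes in security version numbers and a key policy; the key values below are placeholders.)

```python
import hashlib
import hmac

def sealing_key(cpu_root_key: bytes, enclave_measurement: bytes) -> bytes:
    # Sketch of sealing-key derivation: bind the key to both the CPU
    # (via its root secret) and the enclave (via its measurement).
    return hmac.new(cpu_root_key, enclave_measurement, hashlib.sha256).digest()

cpu_a, cpu_b = b"root secret of CPU A", b"root secret of CPU B"
enclave_1, enclave_2 = b"measurement of enclave 1", b"measurement of enclave 2"

# Same enclave on a different CPU -> different key.
assert sealing_key(cpu_a, enclave_1) != sealing_key(cpu_b, enclave_1)
# Different enclave on the same CPU -> different key.
assert sealing_key(cpu_a, enclave_1) != sealing_key(cpu_a, enclave_2)
# Same enclave on the same CPU -> same key, every time.
assert sealing_key(cpu_a, enclave_1) == sealing_key(cpu_a, enclave_1)
```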
The sealing key can be used to perform authenticated encryption on data. The encrypted data, the sealed blob, can then be handed to an untrusted service for persistent storage. Later, an enclave application can request the sealed blob from the service and decrypt it.
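A seal/unseal round trip might look like the following sketch. The cipher here is a toy hash-counter keystream with an encrypt-then-MAC tag, used purely so the example runs on the standard library; real sealing code should use a vetted AEAD such as AES-GCM.

```python
import hashlib
import hmac

KEY = hashlib.sha256(b"sealing key obtained from the CPU (stand-in)").digest()

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy keystream (hash in counter mode), for illustration only.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # Encrypt, then MAC nonce + ciphertext (authenticated encryption).
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unseal(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("sealed blob was tampered with")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

blob = seal(KEY, b"0123456789abcdef", b"database password")
# `blob` can be handed to an untrusted storage service; only a holder
# of the sealing key can decrypt it, and tampering is detected.
assert unseal(KEY, blob) == b"database password"
```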
When reading from an untrusted storage or database service, the service may perform a rollback attack by not returning the latest sealed blob. One possible defense against this attack is to use a trusted freshness tracking service.
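A freshness-tracking service could be sketched as follows: a trusted party (hypothetical here) remembers a digest of the latest sealed blob, so a stale blob replayed by the storage service is detected.

```python
import hashlib

class FreshnessService:
    # Hypothetical trusted service: tracks the hash of the latest
    # sealed blob for each enclave.
    def __init__(self) -> None:
        self.latest: dict[str, bytes] = {}

    def record(self, enclave_id: str, blob: bytes) -> None:
        self.latest[enclave_id] = hashlib.sha256(blob).digest()

    def is_fresh(self, enclave_id: str, blob: bytes) -> bool:
        return self.latest.get(enclave_id) == hashlib.sha256(blob).digest()

svc = FreshnessService()
old_blob, new_blob = b"sealed blob, version 1", b"sealed blob, version 2"
svc.record("enclave-1", old_blob)
svc.record("enclave-1", new_blob)

assert svc.is_fresh("enclave-1", new_blob)
# The storage service replaying the stale blob is detected:
assert not svc.is_fresh("enclave-1", old_blob)
```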
Also, if your enclave stores more than one sealed blob, the service may perform a partial rollback attack by returning different versions of different blobs. One possible defense against this attack is to track the integrity of all blobs as a whole, for example with a Merkle tree.
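A minimal Merkle-tree sketch over the set of sealed blobs: the enclave keeps (or freshness-tracks) only the root, and any mix of old and new blob versions changes the root.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blobs: list[bytes]) -> bytes:
    # Hash the leaves, then pairwise-hash each level up to a single root.
    level = [h(b) for b in blobs]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blobs = [b"sealed blob 0 v2", b"sealed blob 1 v2", b"sealed blob 2 v2"]
trusted_root = merkle_root(blobs)  # the enclave tracks only this value

# Partial rollback: the service returns a stale version of blob 1 only.
rolled_back = [b"sealed blob 0 v2", b"sealed blob 1 v1", b"sealed blob 2 v2"]
assert merkle_root(rolled_back) != trusted_root
assert merkle_root(blobs) == trusted_root
```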