Corda and SGX: a privacy update

June 22, 2017


In this technical update I want to share something a little bit special — a privacy upgrade to Corda based on Intel’s Software Guard Extensions technology (SGX). We first talked about this last year in our technical whitepaper (p51) and now is the time to provide the technical community with more information on where we’re heading.

What is SGX?

Modern Intel CPUs (Skylake and later) provide a new instruction set, SGX, that allows software to compute on private, encrypted data without revealing that data to the owner of the hardware. You can send someone data you don’t want them to see, they can run a computation on it, and they obtain only the result, not the inputs. In this way it resembles a hardware-implemented form of multi-party computation or zero-knowledge proof, with many of the same uses.

For the last year and a half we’ve been engaged in a research project exploring the integration of SGX with Corda. We’ve now progressed far enough that it’s time to talk about what we’re doing, and we think this work will resolve concerns about ledger privacy (like those recently raised by the Bank of Canada).

Why use SGX?

Right back to the earliest days of Bitcoin, one of the biggest concerns about blockchain based systems has always been privacy. The desire for business confidentiality exists at all levels of the formal financial system and is in fact frequently demanded by regulators, contrary to their reputation in the cryptocurrency community! Therefore privacy has been a focus of Corda’s design right from the start.

A few eyebrows went up when we released our technical white paper last year, along with open source code. Despite being heavily inspired by Bitcoin in many technical respects, Corda differs in one key way: it doesn’t organise the timeline into a chain of blocks. Instead, double-spend resolution occurs at the level of individual transactions, performed by coalitions of nodes called ‘notaries’ (the Corda equivalent of miners).

This design allowed us to make our first big privacy improvement over traditional block chains: transactions don’t get broadcast to everyone on the network. Instead they go only where you need them to go. This yields a form of partial ledger visibility in which regular participants don’t see the whole thing, even though it’s globally consistent[2]. On top of that we added Bitcoin-style key rotation with automatic identity management to de/anonymise transactions automatically, transactions structured as Merkle trees so some parts can be revealed without the rest, and full encryption of the peer-to-peer network [1].
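The Merkle tree structure mentioned above can be sketched in a few lines. This is an illustrative toy, not Corda’s actual serialisation or API: each transaction component becomes a leaf hash, and a party shown only some components (plus the sibling hashes) recomputes the same root as someone holding the full transaction.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

// Toy sketch of selective disclosure via a Merkle tree: hide components
// by handing over only their hashes, reveal others in full, and the
// recipient still arrives at the same root.
public class MerkleSketch {
    static byte[] sha256(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    // Hash of one revealed transaction component.
    static byte[] leaf(String component) {
        return sha256(component.getBytes(StandardCharsets.UTF_8));
    }

    // Interior node: hash of the two child hashes concatenated.
    static byte[] node(byte[] left, byte[] right) {
        byte[] both = new byte[left.length + right.length];
        System.arraycopy(left, 0, both, 0, left.length);
        System.arraycopy(right, 0, both, left.length, right.length);
        return sha256(both);
    }

    public static void main(String[] args) {
        // Full transaction: four components.
        byte[] root = node(node(leaf("inputs"), leaf("outputs")),
                           node(leaf("commands"), leaf("notary")));

        // A party shown only "commands" and "notary" receives the left
        // subtree as an opaque hash, yet recomputes the identical root.
        byte[] hiddenLeft = node(leaf("inputs"), leaf("outputs")); // supplied as a hash
        byte[] rootFromPartialView = node(hiddenLeft,
                                          node(leaf("commands"), leaf("notary")));
        System.out.println(Arrays.equals(root, rootFromPartialView)); // prints "true"
    }
}
```

The same trick is what lets, say, an oracle sign over a single command without ever seeing the rest of the transaction.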

This was good. But we knew we could do even better.

The innocuous phrase “where you need them to go” in the paragraph above doesn’t mean transactions are seen only by the involved counterparties, even though that would be intuitive to people who aren’t experienced with block chains. To verify the integrity of a change to the ledger, the history of that part of the ledger must still be downloaded and audited. If that weren’t the case then Corda would just be a messaging network, and a payment “transaction” would be little more than an IOU. It’s the ability to verify the history of an asset, deal, or any other piece of data that makes Corda a distributed ledger. So if Alice sends Bob $100, the Corda protocol requires Alice to also send Bob the chain of custody leading up to that transfer, all the way back to the initial issuance.

This is clearly better than a model where everyone tells everyone else about every transaction ahead of time, especially for a ledger like ours, where many use cases result in small transaction graphs that don’t circulate widely. But it doesn’t match what you intuitively want: only participants can see a transaction, yet those participants can also be sure they’re not being ripped off with funny money. With standard technology this is an unavoidable tradeoff.

Corda calls the process of asking a peer for transaction histories and verifying the results resolution. By putting the resolution process inside an SGX enclave — an encrypted tamper resistant memory space — it becomes possible to fully encrypt the ledger such that nobody has access to data they aren’t supposed to.
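The resolution process just described can be sketched as a simple graph traversal. The `Tx` class and `resolve` method below are hypothetical stand-ins, not the real Corda API: each transaction names the transactions it spends from, and a node walks those references back to the issuance, verifying every step.

```java
import java.util.*;

// Toy sketch of Corda-style "resolution": before accepting a transaction,
// walk its input references back through the chain of custody to the
// original issuance, verifying each transaction along the way.
public class ResolutionSketch {
    static class Tx {
        final String id;
        final List<String> inputs; // ids of transactions this one spends from
        Tx(String id, List<String> inputs) { this.id = id; this.inputs = inputs; }
        boolean verify() { return true; } // stand-in for contract execution
    }

    // Depth-first resolution: returns the set of transactions that had to be
    // fetched and verified in order to trust 'start'.
    static Set<String> resolve(Tx start, Map<String, Tx> ledger) {
        Set<String> verified = new LinkedHashSet<>();
        Deque<Tx> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            Tx tx = stack.pop();
            if (!verified.add(tx.id)) continue;          // already checked
            if (!tx.verify()) throw new IllegalStateException("invalid: " + tx.id);
            for (String dep : tx.inputs) stack.push(ledger.get(dep));
        }
        return verified;
    }

    public static void main(String[] args) {
        Map<String, Tx> ledger = new HashMap<>();
        ledger.put("issue", new Tx("issue", List.of()));              // issuance: no inputs
        ledger.put("aliceToBob", new Tx("aliceToBob", List.of("issue")));
        ledger.put("bobToCarol", new Tx("bobToCarol", List.of("aliceToBob")));
        // Carol must check the whole chain of custody, not just the last hop.
        System.out.println(resolve(ledger.get("bobToCarol"), ledger));
        // prints [bobToCarol, aliceToBob, issue]
    }
}
```

In the real protocol each dependency is fetched from a counterparty over the network and its contract code is actually executed; verification is stubbed out here to keep the sketch short. Putting exactly this walk inside an enclave is what allows the fetched histories to stay encrypted.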

What’s been done?

It’s easier to say “just resolve transactions in an enclave” than actually do it. There are many interesting problems to solve along the way.

The first issue is that verifying transactions involves executing their smart contract logic. This logic must be prevented from tampering with the platform code doing the verification itself, so it must be sandboxed. Corda uses a modified and restricted Java Virtual Machine to do this — whilst it is conventional in the block chain space to roll your own VM, we decided early on that it was better to adapt what already existed than start a new toolchain from scratch. I note that Ethereum is looking at using a restricted WebAssembly VM in future, so the move towards non-blockchain specific VMs is spreading.

Inside an SGX enclave there is no operating system access: you can’t make system calls to the kernel, and you can’t dynamically link against anything. An enclave is a statically linked piece of code that’s loaded into memory using special CPU instructions that hash the signed code image. Additional CPU instructions allow you to enter and exit the enclave, as well as derive private keys that aren’t accessible outside the enclave and generate ‘remote attestations’ that allow a third party to verify the hash of the code that was loaded.

Typical JVMs do not expect to run in this sort of environment and thus cannot be used out of the box. We produced a modified version of the Avian JVM along with a tiny operating system, called StubbyOS, that provides enough functionality for Corda transaction resolution to run. We then embedded it inside an enclave and showed that it could load and verify Corda cash transactions, with smart contract execution as well. StubbyOS provides its own memory management, dynamic linking, support for multi-threading (OS threads cannot be started from inside an enclave) and a few other things. The JVM itself has a JIT compiler and a compacting garbage collector, and can run ordinary Java 8 code.

Running a JVM inside an SGX enclave is an unusual move. Normally you’d try and minimise the amount of code inside an enclave, to reduce the risk of it being hacked. For enclaves written in C or C++ that is indeed a wise strategy. But another way to reduce the risk of hacking is to write your code in a managed language that eliminates typical native code exploits by design. The act of loading, parsing and verifying a Corda transaction is a complex operation that involves carefully working with data that may be malicious. It helps a lot to have a bounds-checked language with managed memory.

In addition, Java was designed from the start for sandboxing of code. We’re not using an applet-style sandbox for this project because we wish to enforce deterministic execution[3] and because we want something stronger than typical sandboxes provide. Instead we use static analysis, code rewriting and a massively reduced standard library to prevent smart contracts from doing things they aren’t supposed to. Despite this alternative approach, Java’s bytecode verification system still does much of the heavy lifting.
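As a rough illustration of the reduced-standard-library idea, here is a toy whitelist check. Corda’s real sandbox operates on bytecode with static analysis and rewriting; the class names and `permitted` helper below are purely illustrative assumptions, not the actual rule set.

```java
import java.util.List;
import java.util.Set;

// Toy sketch of the whitelist idea behind a deterministic sandbox: before
// loading contract code, inspect the classes it references and reject
// anything outside a massively reduced standard library.
public class WhitelistSketch {
    // A tiny illustrative whitelist; the real list would be far larger but
    // would still exclude I/O, threading, reflection and clocks.
    static final Set<String> ALLOWED = Set.of(
        "java.lang.Object", "java.lang.String",
        "java.math.BigDecimal", "java.util.List"
    );

    // In a real system the references are extracted from the bytecode's
    // constant pool; here the caller passes them in directly for brevity.
    static boolean permitted(List<String> referencedClasses) {
        return referencedClasses.stream().allMatch(ALLOWED::contains);
    }

    public static void main(String[] args) {
        System.out.println(permitted(List.of("java.lang.String", "java.math.BigDecimal"))); // true
        // Sources of non-determinism and I/O are rejected outright.
        System.out.println(permitted(List.of("java.lang.System", "java.io.File"))); // false
    }
}
```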

What is remote attestation?

A key part of the scheme is how we use remote attestation. RA allows an SGX CPU to produce a so-called “report”: a data structure signed by the CPU indicating the contents of the enclave. Whilst the enclave code itself cannot have any secrets when it’s loaded, by deterministically deriving a private key inside the enclave (which is unavailable to the host operating system or owner of the hardware) it becomes possible to put the public part into the report and thus establish encrypted communication with an enclave. It is also possible to encrypt data for local storage this way (this is called “sealing”).
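The key-binding step described above can be sketched as follows. This is a simplified illustration rather than the SGX SDK API: the enclave’s public key is hashed into the report’s user data, and the remote party checks the key it receives against that hash before encrypting anything to the enclave. Verification of the CPU’s signature over the report (via Intel’s attestation service) is omitted.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.util.Arrays;

// Toy sketch of binding an enclave-held keypair to a remote attestation
// report, so a third party can safely establish an encrypted channel.
public class AttestationSketch {
    // The report's user-data field carries a hash of the enclave's public
    // key (SGX report data is 64 bytes; a SHA-256 digest fits comfortably).
    static byte[] reportData(byte[] enclavePublicKey) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(enclavePublicKey);
    }

    public static void main(String[] args) throws Exception {
        // Inside the enclave: derive a keypair the host can never read,
        // and embed a commitment to the public half in the report.
        KeyPair enclaveKey = KeyPairGenerator.getInstance("EC").generateKeyPair();
        byte[] report = reportData(enclaveKey.getPublic().getEncoded());

        // Remote verifier: given the (CPU-signed) report and a claimed
        // public key, confirm they match before encrypting to it.
        byte[] claimed = enclaveKey.getPublic().getEncoded();
        boolean bound = Arrays.equals(report, reportData(claimed));
        System.out.println(bound); // prints "true"
    }
}
```

A host that substituted its own key would fail this check, because the report’s user data is fixed by the CPU at attestation time.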

By having Corda nodes remotely attest to each other, the following features become possible:

  • Nodes can reveal transaction data to each other without compromising privacy.
  • “Light” nodes can request a remote attestation from a fully verifying node and rely on it to check transactions. This is analogous to Bitcoin’s “SPV mode”, in which the acceptance of miners is used as a signal of transaction validity. The relying device does not need SGX; it could be a mobile phone.
  • Proxy oracles like the IC3 Town Crier experiment can be brought to Corda. Proxy oracles (oracles that download the underlying data from an uncooperative third party) are not so useful in Corda where financial institutions tend to be directly involved, but they could still be applicable in a few cases.

SGX also has applicability for doing multi-party computations in cases where cryptographic approaches can’t be used, and for protection of private keys without the need for expensive HSMs.

How does this compare to zero knowledge proofs?

I like zero knowledge proofs a lot. The progress in this space in recent years has been phenomenal. I have no doubt that Corda will be using various kinds of general zero knowledge proof in future. But there’s a lot of work needed to get the industry to that point, for the following reasons:

  1. The only ledgers today that use zero knowledge proofs are all ledgers for coins. They aren’t capable of doing any kind of smart contract because the proof circuit has to be artisanally hand-crafted by expert cryptographers. But the use cases financial firms are most interested in almost invariably do not involve coins, and require complex smart contract logic written by experts in finance, not cryptography. So there’s a basic incompatibility there. Research efforts like Microsoft Research’s Geppetto and vnTinyRAM showed that it’s possible to take ordinary imperative code as found in Corda contracts and compile it to ZKPs automatically, but only at a severe performance penalty. We have some plans for a way forward here that we may talk about in the future.
  2. The best known (within the blockchain community) ZKP protocol today is the zkSNARK algorithm implemented by SCIPR Lab, which has a back door problem — the scheme must be initialised in such a way that the people doing the initialisation could potentially cheat and gain the ability to forge proofs. A new construct (the so-called zkSTARK) solves this problem, but it’s very new cryptography and will take time to fully develop.

For these reasons, we chose to go with SGX. SGX can be made largely transparent to app developers who do not have to learn any new specialised skills, it does not impose severe performance penalties, and in the intended non-light deployment mode a failure of the technology compromises privacy but not integrity. It satisfies more use cases than cryptography alone is presently capable of and is easy to work with. Even in a Star Trek future where zero knowledge proofs are an unremarkable part of everyday financial life, we anticipate that SGX will still play an important role.

What comes next?

I wanted to share this update with both the Corda community and the wider cryptographic research community, because we’re frequently asked about privacy related topics and felt it was time to share our plans. Our research has progressed to the point where we feel confident it can be integrated, although there is still much work remaining. If you’re interested in systems engineering or cryptography, we are hiring.

[1] I’ve found many people think Bitcoin and Ethereum are encrypted networks, but they aren’t. Nothing in Bitcoin or Ethereum is encrypted. It must be that way because these are public broadcast networks that anyone can join, so encrypting ledger data would be pointless. The word “cryptocurrency” comes from the cryptography of digital signatures and hash functions: there’s more to crypto than hiding messages.

[2] Special nodes such as regulator, depository and fully validating notary nodes may still see large parts of the ledger, or all of it, depending on deployment model.

[3] See section 13 of the tech white paper for details on how we do this.