
Can We Build Trustable Hardware? Andrew Huang at 36C3


Andrew "bunnie" Huang recently presented 'Open Source is Insufficient to Solve Trust Problems in Hardware' at the 36th Chaos Communication Congress (36C3), with an accompanying blog post 'Can We Build Trustable Hardware?' His central point is that the Time-of-Check to Time-of-Use (TOCTOU) window is very different for hardware than for software, so open source is less helpful in mitigating the array of potential attacks in the hardware threat model. Huang closes by presenting Betrusted, a secure hardware platform for private key storage that he's been working on, and by examining the user verification mechanisms that shaped its design.

Huang opens with an illustration of how software has become trustable: users can generate a hash of the software they're about to run on their own machines and verify that it matches the published hash for that release. Thus, software has a very short TOCTOU window. Open Source Software (OSS) provides the additional benefit that users can trace the software they're running all the way back to the code in a source control system (typically git).
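As a concrete illustration of this short check-to-use window (a minimal sketch, not from the talk), verifying a downloaded artifact against a project's published SHA-256 digest takes only a few lines of Python:

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large artifacts need only constant memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python verify.py <artifact> <expected-sha256-from-the-project>
    if len(sys.argv) != 3:
        sys.exit("usage: verify.py <artifact> <expected-sha256>")
    path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(path)
    if actual == expected:
        print("digest matches: the artifact is the one that was checked upstream")
    else:
        print(f"digest MISMATCH: got {actual}")
        sys.exit(1)
```

He then goes on to say: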

I’ve concluded that open hardware is precisely as trustworthy as closed hardware. Which is to say, I have no inherent reason to trust either at all. While open hardware has the opportunity to empower users to innovate and embody a more correct and transparent design intent than closed hardware, at the end of the day any hardware of sufficient complexity is not practical to verify, whether open or closed.

Huang goes on to run through the hardware supply chain and the myriad possible attacks that can be mounted along it. He also examines the difficulty of detecting an attack, showing that there's no simple equivalent to hashing for hardware: a physical chip cannot be cheaply read back and compared against its design the way a binary can, which makes verification much more difficult. From this he extracts 'Three Principles for Building Trustable Hardware':

  1. Complexity is the enemy of verification.
  2. Verify entire systems, not just components.
  3. Empower end-users to verify and seal their hardware.

The Betrusted project provides an illustration of the three principles in action. It's simple, providing a limited range of functions: secure text and voice chat, second-factor authentication, and storage of digital currency. The entire system is verifiable, including the keyboard and screen (rather than just a hardware secure enclave). Users can check the components for themselves without needing specialist equipment. Betrusted also illustrates that there are limitations in presently available hardware that force a number of compromises.

The CPU is identified as 'the most problematic piece'; Betrusted uses a Xilinx Spartan-7 Field Programmable Gate Array (FPGA) in this role so that a number of hardware verification tasks can be moved into software. Huang acknowledges the issues with using a proprietary FPGA, but also points to some mitigations:

The downside of this approach is that the Spartan-7 FPGA is a closed source piece of silicon that currently relies on a proprietary compiler. However, there have been some compelling developments that help mitigate the threat of malicious implants or modifications within the silicon or FPGA toolchain. These are:

• The Symbiflow project is developing a F/OSS toolchain for 7-Series FPGA development, which may eventually eliminate any dependence upon opaque vendor toolchains to compile code for the devices.
• Prjxray is documenting the bitstream format for 7-Series FPGAs. The results of this work-in-progress indicate that even if we can’t understand exactly what every bit does, we can at least detect novel features being activated. That is, the activation of a previously undisclosed back door or feature of the FPGA would not go unnoticed.
• The placement of logic within an FPGA can be trivially randomized by incorporating a random seed in the source code. This means it is not practically useful for an adversary to backdoor a few logic cells within an FPGA. A broadly effective silicon-level attack on an FPGA would lead to gross size changes in the silicon die that can be readily quantified non-destructively through X-rays. The efficacy of this mitigation is analogous to ASLR: it’s not bulletproof, but it’s cheap to execute with a significant payout in complicating potential attacks.
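The Prjxray point above can be made concrete with a sketch of the kind of check that partial bitstream documentation enables. This is an illustration, not project code: the file names and flat bit-offset database format are assumptions, and real Prjxray databases are considerably richer. The idea is that even without knowing what every bit does, any set bit that falls outside the documented set can be flagged as a potential undisclosed feature:

```python
# Sketch: flag set bits in an FPGA bitstream that fall outside the documented
# set. File names and database format are hypothetical placeholders.

def load_documented_bits(path: str) -> set[int]:
    """One documented bit offset per line, e.g. derived from a Prjxray-style database."""
    with open(path) as f:
        return {int(line) for line in f if line.strip()}

def set_bit_offsets(bitstream: bytes):
    """Yield the absolute offset of every bit that is set in the bitstream."""
    for byte_index, byte in enumerate(bitstream):
        for bit in range(8):
            if byte & (1 << bit):
                yield byte_index * 8 + bit

def undocumented_bits(bitstream: bytes, documented: set[int]) -> list[int]:
    """Return offsets of set bits that no documented feature accounts for."""
    return [b for b in set_bit_offsets(bitstream) if b not in documented]

if __name__ == "__main__":
    documented = load_documented_bits("documented_bits.txt")
    with open("design.bit", "rb") as f:
        suspicious = undocumented_bits(f.read(), documented)
    print(f"{len(suspicious)} set bits fall outside the documented set")
```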

Huang seems optimistic that he can make progress with Betrusted, but ultimately the project may show the limits of trustability. Open source firmware for servers (written in safer languages like Rust), such as that proposed by Oxide, certainly helps reduce complexity and improve verifiability; but the overall complexity of servers (and PCs, phones, tablets, etc.) may still overwhelm the ability of end-users to verify and seal their hardware.

There are also cases where hardware can be compromised without the cost and complexity of supply chain attacks, such as the recent 'Fatal Fury' compromise of eFuses in ESP32 devices. We’ve come a long way since Ken Thompson’s 'Reflections on Trusting Trust (pdf)' in 1984, but overall it seems that complexity has grown faster than our ability to verify and empower.
