
Google Researchers Say Spectre Will Haunt Us for Years

According to a paper by several Google researchers, speculative vulnerabilities currently defeat all programming-language-level means of enforcing information confidentiality. This is not just an incidental property of how we build our systems, the researchers argue, but the result of flawed mental models that led us to trade security for performance without realizing it.

Our paper shows these leaks are not only design flaws, but are in fact foundational, at the very base of theoretical computation.

Information confidentiality is a highly desirable property of a system that should be enforced at different levels of abstraction, from the hardware up to the programming language.

Programming languages provide a variety of mechanisms to guarantee that confidential information is not leaked. For example, in many mainstream languages, the type system is designed to rule out a number of dangerous behaviours that could put integrity, confidentiality, and availability at risk. One of the most significant properties type systems attempt to enforce is memory safety, whose absence is the culprit behind many vulnerabilities, and a great deal of research and effort has gone into building languages that can be trusted not to do unexpected things. Spectre has changed all that, according to the Google researchers.
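To make this concrete, here is a minimal C sketch, with a hypothetical helper name, of the dynamic bounds check a memory-safe language conceptually inserts around every indexed access. Architecturally, the out-of-bounds read can never happen; Spectre's point is that the CPU may still perform it speculatively before the check resolves.

#include <stdlib.h>

/* Hypothetical sketch: the dynamic bounds check a memory-safe language
   conceptually wraps around every indexed read. */
int checked_read(const int *arr, size_t len, size_t i) {
    if (i >= len) {
        abort();   /* the language-level guarantee: trap instead of reading */
    }
    /* Architecturally this line runs only for in-bounds i; under
       speculation the CPU may execute it before the check resolves. */
    return arr[i];
}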

Spectre allows for information to be leaked from computations that should have never happened according to the architectural semantics of the processor. [...] This puts arbitrary in-memory data at risk, even data "at rest" that is not currently involved in computations and was previously considered safe from side-channel attacks.

To substantiate this claim, the researchers built a formal model of micro-architectural side-channels that relates a processor's architectural states, i.e. the view of the processor that is visible to a program, to its micro-states, the CPU's internal states below the abstraction exposed to programming languages. This mapping shows that many distinct micro-states may correspond to a single architectural state.

For example, when a CPU uses a cache or a branch predictor, the timing of a given operation varies with the micro-state, e.g., the data may already be available in the cache or may need to be fetched from memory, yet all those micro-states correspond to the same architectural state, such as reading a piece of data. Furthermore, the researchers describe a technique to amplify timing differences and make them easily detectable, which becomes the basis for the construction of a universal "read gadget", a sort of out-of-bounds memory reader giving access to all addressable memory.
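The cache example can be demonstrated directly. The following x86-specific C sketch, not taken from the paper, times two architecturally identical reads and recovers the micro-state, cached or not, from the cycle counts; the intrinsics and thresholds are machine-dependent.

#include <stdint.h>
#include <x86intrin.h>   /* __rdtscp, _mm_clflush (x86-specific) */

/* A minimal cache-timing probe: architecturally, both reads behave the
   same, but the elapsed cycles reveal the micro-state. */
static uint64_t timed_read(volatile const uint8_t *p) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*p;                        /* the access being timed */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

void demo(volatile const uint8_t *p) {
    _mm_clflush((const void *)p);    /* force the line out of the cache */
    uint64_t cold = timed_read(p);   /* slow: fetched from memory */
    uint64_t warm = timed_read(p);   /* fast: now cache-resident */
    /* cold is much larger than warm in practice; real attacks amplify
       and repeat such measurements to make the gap reliably detectable. */
    (void)cold; (void)warm;
}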

According to the researchers, such a universal read gadget may be built for any programming language, even if it is not necessarily a trivial task, by leveraging a number of language constructs or features that their model shows to be vulnerable. Those include indexed data structures with dynamic bounds checks, variadic arguments, dispatch loops, the call stack, switch statements, and more.
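The best-known instance of such a gadget is the bounds-check-bypass pattern from the original Spectre paper (variant 1). A C sketch, with illustrative array names and sizes:

#include <stddef.h>
#include <stdint.h>

/* Sketch of the classic Spectre v1 "bounds check bypass" gadget.
   After the branch predictor has been trained with in-bounds values of
   i, an out-of-bounds i is executed speculatively: the check fails
   architecturally, but the dependent load has already pulled a line of
   probe_array into the cache, where a timing probe (as above) can
   recover the secret byte. */
uint8_t array1[16];
size_t  array1_size = 16;
uint8_t probe_array[256 * 512];   /* one cache line per possible byte value */

void victim(size_t i) {
    if (i < array1_size) {                        /* predicted taken */
        uint8_t secret = array1[i];               /* speculative OOB read */
        volatile uint8_t sink = probe_array[(size_t)secret * 512];
        (void)sink;                               /* leaks via cache state */
    }
}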

In conclusion, the community faces three major challenges: discovering micro-architectural side-channels, understanding how programs can manipulate micro-states, and mitigating those vulnerabilities.

Mitigating vulnerabilities is perhaps the most challenging of the three, since efficient software mitigations for extant hardware are still in their infancy, and hardware mitigation for future designs remains a completely open problem.
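One software mitigation already deployed in practice is branchless index masking, the idea behind the Linux kernel's array_index_nospec and V8's index masking. A simplified C sketch follows; production versions use more careful arithmetic to keep compilers from reintroducing a branch.

#include <stddef.h>
#include <stdint.h>

/* Branchless index masking: the index is clamped with pure data flow,
   so even a mispredicted branch cannot form an out-of-bounds address. */
static inline size_t clamp_index(size_t i, size_t len) {
    /* mask is all-ones when i < len, all-zeros otherwise; there is no
       branch here, hence nothing for the predictor to mispredict. */
    size_t mask = (size_t)0 - (size_t)(i < len);
    return i & mask;
}

uint8_t safe_read(const uint8_t *arr, size_t len, size_t i) {
    if (i >= len)
        return 0;                     /* architectural bounds check */
    return arr[clamp_index(i, len)];  /* speculative hardening */
}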

While not reassuring, the paper makes for a dense read, full of insights into how our mental model of computation diverges from the computation that actually happens inside CPUs.
