Spectre and Meltdown exploits

I’m trying to grok how these attacks work. The white papers are posted by Google Project Zero.

General idea of what’s going on

The attacks work by triggering speculative execution of code on out-of-bounds data, to read data you’re not allowed to access, and by using cache latency to detect whether a condition about that out-of-bounds data is true or not. All modern high-performance CPUs allow out-of-order, speculative execution of code: once the CPU knows whether the code was OK to run, it either commits the results as having run, or discards them as if they were never computed in the first place. However the results themselves don’t matter. What matters is that running the code may anyway have changed what’s loaded into (or evicted from) the cache. In other words, it can make future accesses to data slower or faster. Timing these differences is what the attacks use.

  • Flush cache so that load_something_slow() is slow
  • Run your code speculatively, such as:
    if (unknown_condition) { if (array[x] == y) load_something_slow(); }

    • array[x] == y can be run speculatively because the CPU tries to issue instructions past unknown_condition, betting that knowing the result early will save time.
    • Because it is speculative, the CPU will verify whether that code needed to run, and if not will dispose of the result so the application never sees it was tried.
    • However, the speculative execution was not invisible. It can change the time the program takes to run; in particular it may have been forced to do a slow memory access that you can time.
    • If the code was fast, the condition went one way; if it was slow, it went the other. In other words, by monitoring the time you can tell what the value you wanted to read is.
    • You have detected a data value via a side channel, aka a byproduct of a physical property of the CPU.
    • You have accessed data indirectly via a covert channel, aka a hidden, indirect means of communication.


I see 3 levels of risk:

  • sandboxed code spies on the program running it
    For example: javascript user code accesses information stored in the browser
    Property: the sandboxed code is run in the same virtual address space as the program
    How: the sandboxed code can use speculative execution to access any data in the program. Even if bounds are checked to prevent out-of-bounds accesses, it can still trick the CPU’s speculative execution into doing the out-of-bounds access anyway.
  • program spies on OS
    For example: a user program tries to read passwords typed by other users
    Property: the program escapes its user virtual address space and is able to read information in inaccessible kernel address space.
    How: some kernel pages are mapped into user space (like the syscall table, I would assume, and possibly other I/O pages). The same approach as above applies, but somehow you force the OS to run code and monitor its performance.
    Bigger deal: this breaks ring isolation. The OS runs in ring 0, which is allowed to run privileged code. Your app runs in ring 3, which is only allowed to mess with itself, or to ask the OS permission to mess with other things via controlled system calls.
  • program spies on another program
    For example: your web browser runs code that accesses your 1password data
    How: I’m not sure, but I assume it uses libraries that are shared between programs. In that case it might find existing code with good timing properties to run, and manipulate the state of the caches or the BTB to extract information. It might work through the BTB if the BTB uses physical addresses to map branch results, because you could then flush a branch target to make it slow when it runs in the victim program.
    Even bigger deal: if that works, it breaks virtual memory isolation. Both programs run in ring 3, but they don’t know about each other because their data and code live in different, unrelated virtual address spaces. The OS maps those virtual addresses to physical memory locations as needed. For shared libraries, the same physical memory is mapped into the virtual address spaces of both programs.


Meltdown in a nutshell:

It allows a program to read any data from the kernel or any other process!

The issue is that modern OSes map their kernel address space into the address space of every process for convenience. The user process itself can’t normally access any of it because kernel addresses can only be accessed at ring-0 privilege, and any attempt triggers a fault.

Reading kernel data: By using speculative execution, the malicious user program sets a physical state in the memory hierarchy (for example, loading a cache line) based on a comparison of the value at any kernel address with any value of its choosing, without triggering a fault. It can then time accesses to that part of the memory hierarchy, get a true/false answer to the comparison it performed, and hence figure out the value.

Reading other processes’ data: Physical memory is limited, and Linux and other OSes basically map the entire physical address space into the kernel’s virtual address space (the virtual address space is pretty much infinite at 2^64 bytes, so it’s cheap to carve out a few gigabytes). This means the kernel’s virtual address space actually holds all the user program data currently in memory, so the malicious program can access all the data of the kernel or of any application, as long as it is held in memory. That includes all passwords in the clear!

KAISER patch: The patch being applied to Linux and other OSes is basically to NOT map the kernel address space into the user address space, since ring-0/ring-3 protection does not protect against speculative-execution access to ring-0 addresses. The fix is not 100% effective because the kernel must still map some parts into user space (like the interrupt table or the syscall entry page, I would say), but that’s only kilobytes and none of any other user’s data. The remaining risk is that these still-mapped addresses contain enough information to devise other attacks via other means.

Spectre idk.

