
11 November 2011

Gmail + Google Chrome XSS Vulnerability

The weekend before last, I found a flaw in Gmail that on the one hand was rather exciting for me (as I hadn't expected to find anything at all, and it was pretty clearly reward-worthy), but on the other was a little unnerving, given how quickly and easily I was able to find it and how serious the vulnerability was.


Vulnerability Discovery

While working on an exploit for an XSS flaw I had already found on another platform (details to be released in the semi-near future), I decided to check whether there were any XSS vulnerabilities I had missed. The first thing I wanted to test was whether the application properly sanitized attachment filenames, so I modified a Python email testing script I had hacked together and shot off an email with an attachment named '';!--"<XSS>=&{()}.txt (a la RSnake).
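For the curious, the harness involved really is only a few lines of Python. Here's a minimal sketch of the idea; the addresses, server, and attachment body below are placeholders of my own, not the ones actually used:

```python
import smtplib  # needed only if you uncomment the actual send below
from email.mime.multipart import MIMEMultipart
from email.mime.application import MIMEApplication

# RSnake's classic XSS test string, used as the attachment filename
filename = '\'\';!--"<XSS>=&{()}.txt'

msg = MIMEMultipart()
msg['From'] = 'tester@example.com'   # placeholder sender
msg['To'] = 'victim@example.com'     # placeholder test account
msg['Subject'] = 'attachment filename sanitization test'

# The attachment contents are irrelevant; only the filename matters here.
part = MIMEApplication(b'dummy payload')
part['Content-Disposition'] = 'attachment; filename="%s"' % filename
msg.attach(part)

# Uncomment to actually send through a local or test SMTP server:
# smtplib.SMTP('localhost').sendmail(msg['From'], [msg['To']], msg.as_string())
print(msg.as_string())
```

If the receiving webmail client renders that filename into its HTML without escaping, the test string makes the breakage visible immediately.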

Everything looked good on the platform I was testing, so I began considering other possible attack vectors. However, on a whim, I decided to open the email in Gmail as well, so I fired up Chrome and logged into my test account. I could not believe my eyes: the filename was being used unsanitized! A couple of test emails later, I had a working XSS attack on the standard Gmail interface.


Proof of Concept

Send an email from the SMTP server of your choice with an attachment named:
"><img src="http://bit.ly/XcfTv" onload="alert(String.fromCharCode(88,83,83))"/>.txt
Screenshot:


Now, at this point, I was a little incredulous. There's no way that after about 10 minutes of work I had just found an XSS flaw this basic in what I believe is the most-used webmail interface in the world, right? Well, as it turns out, it was not *quite* as bad as I had originally thought. I opened Firefox to test the flaw, and was surprised to find the filename was now perfectly sanitized. Upon testing in a number of other browsers and OSs, it appeared that the flaw affected only Chrome, across all platforms.

Given that I'm in the middle of exams right now, I sadly didn't have time to reverse-engineer the exact point of failure before the fix was applied, but my shot-in-the-dark guess would be that Google rolled out a new feature for testing in Chrome before moving it to other browsers, and somewhere in their changes a sanitization routine got bypassed. However, this is just a random guess, and Google did not enlighten me as to the specific details. Sorry to disappoint, all.

EDIT: I hate to leave questions unanswered; after all, I'm curious too. Although I could be wrong, the flaw seems to stem from Google's addition of a new "drag-files-to-the-desktop" feature a few months back. This would explain why (a) only Chrome was vulnerable and (b) it was the icons/links offering drag-and-drop that were affected, via an unsanitized alt attribute. If anyone else knows better, though, I'd love to speak with you.
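To illustrate the class of bug (a hypothetical sketch only; I don't know Gmail's actual markup, and the function and class names here are mine): if server-side code interpolates the filename into an icon's alt attribute without escaping, a crafted filename can close the attribute and inject its own tag, whereas HTML-escaping the value renders it inert:

```python
import html

# Hypothetical attachment filename crafted to break out of an alt attribute
filename = '"><img src="x" onload="alert(1)"/>.txt'

def icon_markup(name, escape=True):
    """Build (sketched) attachment-icon HTML for a file named `name`."""
    value = html.escape(name, quote=True) if escape else name
    return '<img class="attachment-icon" src="icon.png" alt="%s">' % value

unsafe = icon_markup(filename, escape=False)
safe = icon_markup(filename)

# Unescaped, the filename terminates the alt attribute and injects a
# second <img> carrying an onload handler; escaped, it stays plain text.
print(unsafe)
print(safe)
```

The whole fix, in other words, is one call to an escaping routine at the point of interpolation, which is why a refactor that routes output around that call can silently reintroduce the hole.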


Disclosure


Regardless, given that an estimated 20% of Internet users use Google Chrome and at least half of them use the webmail interface (more, probably, but I'll be conservative), at least 10% of the Gmail userbase was vulnerable to all the nasty things one can do via XSS, simply by viewing an email with a maliciously crafted attachment. Although my exploit simply added some lovely pictures of Rick Astley to the email and popped up an alert dialog, it could have done such things as steal cookies (giving an attacker a chance to hijack the account remotely), send emails (think XSS worm), and read out emails and contacts from the account, all largely hidden from the user. Given the serious nature of the issue, I promptly notified Google of the problem.

I have to commend the Google Security Team for their blazingly fast response to my disclosure. Although the screenshot demoing XSS in Gmail probably encouraged a faster reaction, they replied within 15 minutes of my initial email to let me know that they had read the disclosure and were looking into the problem. Given that I notified them at 5 P.M. on a Saturday, I was duly impressed. Within 24 hours the flaw had been patched, and soon after I received an email notifying me that a temporary fix had been put into place. I had not expected much of a response until Monday, let alone a fix, so I was quite happy with their reaction. Friendly, quick, and professional: the Google Security Team should serve as a model for other organizations working to handle disclosures by independent researchers effectively.


Random Other Blather

I still have to wonder, though, how this flaw got through Google's testing in the first place. As I stated earlier, it was a little unnerving to me that I found an attack vector that quickly and easily, given that I didn't do anything to find it that anyone should consider particularly clever. Clearly, my discovery was aided significantly by blind luck, but my research this past week has certainly made me think twice about ever using web-based interfaces to view my email.

I am glad to see, though, that Google is being very active and responsive in closing security holes, and (great for me!) rewarding those who report such flaws appropriately. I am hopeful that if organizations begin to follow Google's lead and encourage independent security research on their products, we might someday reach a point where finding vulnerabilities of similar gravity is a matter of years of research and development, rather than a few minutes of time and ~10 lines of Python.
