How not to implement AWS S3 signed URLs? $25,000 bounty
Aug 16, 2023
🔍 Get a free 2-week trial of Detectify - the sponsor of today's video 🔍 https://www.detectify.com/bbre
📧 Subscribe to BBRE Premium: https://bbre.dev/premium
✉️ Sign up for the mailing list: https://bbre.dev/nl
📣 Follow me on Twitter: https://bbre.dev/tw

This video explains an attack on an AWS S3 signed-URL implementation on an undisclosed bug bounty platform. The vulnerability was found by Frans Rosen, who received a $25,000 bounty for it.

🖥 Get $100 in credits for Digital Ocean 🖥 https://m.do.co/c/cc700f81d215
✎ Sign up for Pentesterlab from my referral ✎ https://pentesterlab.com/referral/Vtc …
Report: https://labs.detectify.com/2018/08/02 …
Reporter's Twitter: https://twitter.com/fransrosen
Follow me on Twitter: https://twitter.com/gregxsunday

Timestamps:
00:00 Intro
00:23 Detectify - the sponsor of the video
00:59 AWS S3
01:55 signed URLs
03:42 attacking signed URLs implementations
Content
0.88 -> Hello! In today's episode of Bug Bounty
Reports Explained, you will learn how
6.08 -> Frans Rosen was able to access millions of
files by bypassing AWS S3 signed URLs. The
14.08 -> vulnerability was rewarded with $25,000. The write-up,
originally published at Detectify's blog,
21.2 -> is linked in the description. Detectify is also
a sponsor of today's video. It's a DAST scanner
28.72 -> powered by some of the world's leading ethical hackers.
Their payload-based testing finds undocumented
35.2 -> vulnerabilities from OWASP top 10, vulnerabilities
behind authentication, misconfigurations in CORS, S3
42.32 -> buckets, encryption and much more. Detectify
puts hacker knowledge into the hands of security
engineers to fix their applications. Start a free
2-week trial at detectify.com/bbre to see what
57.12 -> it finds on your website. AWS S3 is Amazon's
service used for storing files in the cloud.
66.96 -> When the application needs to show the user a file
that's stored there, the developer has at least
74.08 -> two options. Let's say it's an image. One
option is to basically stream the file
80.96 -> from S3 through the server. But you can
probably tell that this is not the optimal
87.04 -> solution. It adds latency and uses the server's
resources. It would be much better to let the
user download the file from the S3 bucket directly.
However, how do you then make sure that users only
102.48 -> download their own files? If you make the bucket
public then there is no access control at all.
110.48 -> This issue can be solved using S3 signed URLs. It
works this way: a user wants to download a file
120.24 -> that's stored in an S3 bucket. The server
generates a specific link for that user
125.92 -> that's only valid for a limited time and
for the one specified file. With that link,
132.64 -> the user can access the file from the S3 bucket
directly. But what is so special about this link?
141.12 -> Among other parameters, this link contains a
signature. The signature is only valid for this
one file. If you try to access a different one with
the same signature, it will not work. The same
155.92 -> goes for accessing a file after the expiration
period. The signature is generated on the backend.
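For illustration, a Signature Version 4 presigned GET URL looks roughly like this (bucket, key, and credential values here are made up):

```
https://my-bucket.s3.amazonaws.com/reports/invoice-42.pdf
  ?X-Amz-Algorithm=AWS4-HMAC-SHA256
  &X-Amz-Credential=AKIAEXAMPLE%2F20230816%2Fus-east-1%2Fs3%2Faws4_request
  &X-Amz-Date=20230816T120000Z
  &X-Amz-Expires=300
  &X-Amz-SignedHeaders=host
  &X-Amz-Signature=0af7...d1c9
```

X-Amz-Expires bounds the validity window, and X-Amz-Signature binds the URL to exactly this object key.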
174.56 -> To create it, the server uses a secret
key which, as the name strongly suggests,
180.24 -> is secret and never appears in the link. It is 40
characters long, and the signature is an HMAC based on SHA-256.
189.28 -> Brute-forcing this would take ages, so we
certainly will not be able to generate our own
195.84 -> valid signature. We need to find
another way to hack this mechanism.
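To make the mechanism concrete, here is a deliberately simplified sketch of time-limited, per-file signing with HMAC-SHA256. This is an illustration of the idea only, not AWS's actual Signature Version 4 scheme (which signs a canonical request with a derived key); all names are hypothetical.

```python
import hashlib
import hmac
import time

# Hypothetical server-side secret; in AWS this is the secret access key,
# which never appears in the URL.
SECRET_KEY = b"hypothetical-40-character-secret-key-123"

def sign(bucket: str, key: str, expires: int) -> str:
    """Signature bound to one object key and one expiry timestamp."""
    msg = f"GET\n{bucket}\n{key}\n{expires}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

def make_signed_url(bucket: str, key: str, ttl: int = 300) -> str:
    expires = int(time.time()) + ttl
    sig = sign(bucket, key, expires)
    return f"https://{bucket}.s3.amazonaws.com/{key}?Expires={expires}&Signature={sig}"

def verify(bucket: str, key: str, expires: int, sig: str) -> bool:
    """The storage side recomputes the HMAC: a signature made for one key
    fails for any other key, and any request after `expires` is rejected."""
    if time.time() > expires:
        return False
    return hmac.compare_digest(sign(bucket, key, expires), sig)
```

Because the HMAC covers both the key and the expiry, changing either invalidates the signature, which is exactly the property described above.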
201.44 -> With mechanisms like this, it's usually better
not to look for a vulnerability in AWS's mechanism
207.92 -> but in how developers implement it.
We know that we won't break SHA-256,
214.72 -> but what if we just asked the server to
create a link to a different file? For example,
222.08 -> you could use path traversal to get a signed URL
to a file from a completely different directory
227.92 -> than the one the developers wanted to allow. But the
problem is, even if we can read arbitrary files,
234.8 -> we don't know what files are stored there. We
would have to blindly brute-force file names, which
240.64 -> would significantly reduce the impact. However,
if you could get a signature for a directory,
247.44 -> not a file, and the server had the ListObjects
permission, you could get a directory listing.
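A hypothetical example of the kind of implementation flaw meant here: the server builds the object key from an attacker-controlled filename and normalizes the path before signing. The directory names and function are invented for illustration.

```python
import posixpath

def s3_key_for(user_id: str, filename: str) -> str:
    # Hypothetical vulnerable logic: filename comes straight from the
    # request and is joined into the key without any validation.
    return posixpath.normpath(f"user-files/{user_id}/{filename}")

# Intended use: the key stays inside the user's own directory.
assert s3_key_for("1337", "avatar.png") == "user-files/1337/avatar.png"

# Path traversal: "../" segments walk out of that directory, so the
# server ends up signing a URL for a file it never meant to expose.
assert s3_key_for("1337", "../../internal/backup.sql") == "internal/backup.sql"
```

Whether the traversal actually resolves depends on how the server and S3 handle relative path segments; the sketch assumes the path is normalized before signing.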
255.68 -> And I know from experience that
some developers absolutely love
259.92 -> storing everything just in one bucket: receipts
and reports mixed with css and javascript files,
268.32 -> separated only by directory. But
coming back to our undisclosed application.
275.28 -> There, you requested a file using an s3_key
parameter with the filename, and a random_key parameter. Then, the
283.52 -> application opened the URL with this random key in
the path which then redirected to the signed URL.
291.44 -> Why did they take this additional step? I
don't know, but what I know for sure is that
296.8 -> s3_key was not validated at all. You could just
put a / there, follow the redirect, and there
305.44 -> was a directory listing of the whole bucket. And
inside... millions of files. Basically all of the
312.56 -> data was stored in this bucket. The hunter quickly
reported it and got $15,000 plus a $10,000 bonus.
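A hypothetical reconstruction of the flow just described: the endpoint signs whatever s3_key it receives, so passing / gets a signed URL for the bucket root, and with the ListObjects permission on the signing credentials that URL returns a listing of every object. The hostname, secret, and helper names are invented, and the HMAC stands in for AWS's real signing.

```python
import hashlib
import hmac
import posixpath

SECRET_KEY = b"hypothetical-signing-secret"  # stands in for the AWS secret key

def sign(s3_key: str) -> str:
    return hmac.new(SECRET_KEY, s3_key.encode(), hashlib.sha256).hexdigest()

def signed_url_endpoint(s3_key: str) -> str:
    # The bug: s3_key is taken verbatim from the request, no validation.
    return f"https://assets.example.s3.amazonaws.com{s3_key}?Signature={sign(s3_key)}"

# Intended use: one object key.
url = signed_url_endpoint("/uploads/report-123.pdf")

# Attack: s3_key = "/" makes the server sign the bucket root. Combined
# with ListObjects, a GET on this URL returns the whole bucket listing
# instead of a single file.
root_url = signed_url_endpoint("/")

# One straightforward hardening step: reject any key outside an allowed prefix.
def safe_key(s3_key: str) -> str:
    normalized = posixpath.normpath(s3_key).lstrip("/")
    if not normalized.startswith("uploads/"):
        raise ValueError(f"refusing to sign key outside uploads/: {s3_key!r}")
    return "/" + normalized
```

The allowlist check rejects both the bare `/` and any `../` traversal, since normalization happens before the prefix test.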
323.2 -> To fix this, you mustn't store internal files
alongside user files in the same bucket with
329.36 -> the same permissions; you must separate
them somehow. I think the main takeaway from this report is
335.76 -> that just because something is signed or encrypted
doesn't automatically mean that it's secure. Anyway,
342 -> if you've learned something new today, leave a
like or subscribe. Also, sign up for
347.52 -> the BBRE newsletter to get more tips and tricks.
For now, thank you for watching and goodbye!
Source: https://www.youtube.com/watch?v=G7Pre3Y46Fs