Ask HN: Our AWS account got compromised after their outage

215 points by kinj28 12 hours ago

Could there be any link between the two events?

Here is what happened:

Some 600 instances were spawned within 3 hours before AWS flagged it and sent us a health event. Numerous domains were verified, and we could see that an SES quota increase request had been made.

We are still investigating the vulnerability at our end. Our initial suspect list has two candidates: an exposed API key, or console access where MFA wasn't enabled.
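
For anyone wanting to run the same check on their own account, here's a minimal boto3 sketch (read-only IAM calls) that flags console users without MFA and shows when each access key was last used:

    import boto3

    # Flag IAM users that have a console password but no MFA device, and
    # list each access key with its last-used time. Read-only; needs
    # iam:List* / iam:Get* permissions.
    iam = boto3.client("iam")

    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            try:
                # A login profile exists only if the user has a console password.
                iam.get_login_profile(UserName=name)
                has_console = True
            except iam.exceptions.NoSuchEntityException:
                has_console = False
            if has_console and not iam.list_mfa_devices(UserName=name)["MFADevices"]:
                print(f"{name}: console access WITHOUT MFA")
            for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
                used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
                when = used["AccessKeyLastUsed"].get("LastUsedDate", "never")
                print(f"{name}: {key['Status']} key {key['AccessKeyId']}, last used {when}")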

timdev2 11 hours ago

I would normally say that "That must be a coincidence", but I had a client account compromise as well. And it was very strange:

Client was a small org, and two very old IAM accounts suddenly had recent (yesterday) console logins and password changes.

I'm investigating the extent of the compromise, but so far it seems all they did was open a ticket to turn on SES production access and increase the daily email limit to 50k.

These were basically dormant IAM users from more than 5 years ago, and it's certainly odd timing that they'd suddenly pop on this particular day.

  • tcdent 9 hours ago

    Smells like a phishing attack to me.

    Receive an email that says AWS is experiencing an outage. Log into your console to view the status, authenticate through a malicious wrapper, and compromise your account security.

    • timdev2 9 hours ago

      These were accounts that shouldn't have had console access in the first place, and were never used by humans to log in, AFAICT. I don't know exactly what they were originally for, but they were named like "foo-robots" and were very old.

      At first I thought maybe some previous dev had set passwords for troubleshooting, saved those passwords in a password manager, and then got owned all these years later. But that's really, really unlikely. And the timing is so curious.

      • portaouflop 4 hours ago

        Why keep accounts like this around anyway? Sounds like a breach was just waiting to happen…

        • Avicebron 4 hours ago

          A cost center like security? Are you crazy..

    • SoftTalker 9 hours ago

      Good point. Phishers would certainly take advantage of a widely reported outage to send emails related to "recovering your services."

      Even cautious people are more vulnerable to phishing when the message aligns with their expectations and they are under pressure because services are down.

      Always, always log in through bookmarked links or by typing them manually. Never use a link in an email unless it's in direct response to something you initiated, and even then examine it carefully.

      • plaidfuji 4 hours ago

        What if the outage and phishing attack were coordinated at a higher level? There’s a scary thought.

        • BikiniPrince 3 hours ago

          Bezos will get to Mars at any cost!

      • roblabla 5 hours ago

        You can also use phishing-resistant login/2FA like passkeys/FIDO keys where available (and I'm pretty sure Amazon supports them), to minimize the risk of accidentally logging into a phishing website while under pressure.

        • akerl_ 4 hours ago

          If my memory is correct, AWS supports FIDO for web login but not for the API, so you either have to restrict access to FIDO and then use the web UI for everything done as that user, or have a separate non-FIDO MFA device (without FIDO's phishing resistance) for terminal/API interactions.

          • jorvi an hour ago

            You can generate temporary AWS keys for privileged users: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credenti...

            Of course, as always, PEBKAC. You will have to strictly follow protocol, and not every team is willing to jump through annoying hoops every day.

            • akerl_ 26 minutes ago

              Can you actually generate temporary AWS STS credentials via FIDO MFA?

              Again, last I looked, FIDO MFA credentials cannot be used for API calls, which you'd need to make for STS credential generation.

              • jorvi 4 minutes ago

                You don't put the temporary credentials behind FIDO because they're temporary anyway. You put FIDO on the main account that has the privilege to generate the temporary credentials.

                So on the off chance that you get a phishing mail, you generate temporary credentials to take whatever actions it wants, attempt to log in with those credentials, get phished, and they only have API access for 900s (or whatever you set as the timeout; 900s is just the minimum).

                900s won't stop them from running amok, but it caps the amok at 900s.
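
                A minimal boto3 sketch of that flow; per the caveat upthread, the MFA device used here has to be a virtual/TOTP one rather than a FIDO key, and the serial number and token code are placeholders:

                    import boto3

                    sts = boto3.client("sts")

                    # Mint 900-second credentials (the minimum) from the long-lived,
                    # MFA-protected identity. SerialNumber/TokenCode are placeholders
                    # for a virtual (TOTP) MFA device.
                    resp = sts.get_session_token(
                        DurationSeconds=900,
                        SerialNumber="arn:aws:iam::123456789012:mfa/example-user",
                        TokenCode="123456",
                    )
                    creds = resp["Credentials"]

                    # Do API work with the short-lived keys; they expire on their own.
                    ec2 = boto3.client(
                        "ec2",
                        aws_access_key_id=creds["AccessKeyId"],
                        aws_secret_access_key=creds["SecretAccessKey"],
                        aws_session_token=creds["SessionToken"],
                    )
                    print("temporary credentials expire at", creds["Expiration"])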

        • SoftTalker 4 hours ago

          They probably support it but how many accounts have not configured it? I'd bet it's a lot.

      • Scoundreller 4 hours ago

        A phisher who did their homework would send out a tone-deaf email like the one AWS sent me during their outage, with a subject line like this:

        > You could win $5,000 in AWS credits at Innovate

    • highfrequencyy an hour ago

      I second this. Pretty much immediately after the outage, my organization got hit with a wave of phishing emails.

  • LeonardoTolstoy 3 hours ago

    Almost this exact thing happened to me about a year ago: a very old account login, then SES access with a request to raise the email limit. We were only tipped off quickly because they had to open a ticket to get the limit raised.

    If you haven't already, check newly made Roles as well. We quashed the compromised users pretty quickly (including my own, which we figured out was the origin), but got a little lucky because I just started cruising the Roles and killing anything less than a month old or with admin access.

    To play devil's advocate a bit: in our case we are pretty sure my key actually did get compromised, although we aren't precisely sure how (probably a combination of me being dumb, my org being dumb, and some guy putting two and two together). But we did trace the initial users being created to nearly a month prior to the actual SES request. It is entirely possible whoever did this had you compromised for a while, and once AWS went down they decided that was the perfect time to attack, when you might not notice just-another-AWS-thing happening.
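
    If you'd rather do that sweep programmatically than eyeball the console, here's a rough boto3 sketch; the 30-day cutoff is arbitrary, and the admin check only looks for the AWS-managed AdministratorAccess policy:

        from datetime import datetime, timedelta, timezone

        import boto3

        iam = boto3.client("iam")
        cutoff = datetime.now(timezone.utc) - timedelta(days=30)

        # Flag recently created roles, and roles with the managed admin policy.
        for page in iam.get_paginator("list_roles").paginate():
            for role in page["Roles"]:
                if role["CreateDate"] >= cutoff:
                    print("recent role:", role["RoleName"], role["CreateDate"])
                attached = iam.list_attached_role_policies(RoleName=role["RoleName"])
                for policy in attached["AttachedPolicies"]:
                    if policy["PolicyArn"].endswith("/AdministratorAccess"):
                        print("admin role:", role["RoleName"])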

CaptainOfCoit 10 hours ago

Is it possible that people who had already managed (and confirmed) access have been waiting for any hiccup in AWS infrastructure in order to hide among the chaos? So maybe the access token was exposed weeks/months ago, but instead of acting right away, they idled until something big was going on.

Certainly feels like a strategy I'd explore if I was on that side of the aisle.

  • iainctduncan 8 hours ago

    Absolutely. I'm in diligence, and we are hearing about attackers laying the groundwork and then waiting for company sales. The sophisticated ones are for sure smart enough to take advantage of this kind of thing, and even to be prepping in advance and waiting for golden opportunities.

  • jinen83 9 hours ago

    I am from the same team and I can concur with what you are saying. I did see a warning about the same key that was used in today's exploit about 2 years ago, from some random person in an email. But there was no exploitation until yesterday.

    • LeonardoTolstoy 3 hours ago

      This is it. I had the same thing happen to me a year ago, and there was a month between the original access to our system and the attack. And similarly, they waited until a perceived lull in org diligence (just prior to Thanksgiving) to attack.

  • shadowpho 5 hours ago

    Wouldn't this be a terrible time, though, because everyone is looking at/logging into AWS?

    If my company used AWS, I would be hyper-aware of anything it's doing right now.

    • LorenPechtel 3 hours ago

      I think the idea is that after an outage you would expect unusual patterns and thus not be sensitive to them.

ThreatSystems 11 hours ago

CloudTrail events should be able to demonstrate WHAT created the EC2s. Off the top of my head I think it's the RunInstances event.

  • ThreatSystems 10 hours ago

    I'm officially off AWS so I don't have any consoles to check against, but I'm back on a laptop.

    Based on the docs, and given concerns about this happening to someone else, I would probably start with the following (a boto3 sketch of step 1 follows the list):

    1. Check who/what created those EC2s[0] using the console to query: eventSource:ec2.amazonaws.com eventName:RunInstances

    2. Based on the userIdentity field, query the following actions.

    3. Check if someone manually logged into Console (identity dependent) [1]: eventSource:signin.amazonaws.com userIdentity.type:[Root/IAMUser/AssumedRole/FederatedUser/AWSLambda] eventName:ConsoleLogin

    4. Check if someone authenticated against Security Token Service (STS) [2]: eventSource:sts.amazonaws.com eventName:GetSessionToken

    5. Check if someone used a valid STS Session to AssumeRole: eventSource:sts.amazonaws.com eventName:AssumeRole userIdentity.arn (or other identifier)

    6. Check for any new IAM Roles/Accounts made for persistence: eventSource:iam.amazonaws.com (eventName:CreateUser OR eventName:DeleteUser)

    7. Check if any already-vulnerable IAM Roles/Accounts were modified to be more permissive [3]: eventSource:iam.amazonaws.com (eventName:CreateRole OR eventName:DeleteRole OR eventName:AttachRolePolicy OR eventName:DetachRolePolicy)

    8. Check for any access keys made [4][5]: eventSource:iam.amazonaws.com (eventName:CreateAccessKey OR eventName:DeleteAccessKey)

    9. Check if any production / persistent EC2s have had their IAMInstanceProfile changed, to allow for a backdoor using EC2 permissions from a webshell/backdoor they could have placed on your public facing infra. [6]

    etc. etc.
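
    Here's a minimal boto3 sketch of step 1 for anyone triaging right now; note LookupEvents is per-region and only covers the last 90 days of management events:

        import json

        import boto3

        # Step 1 above: who/what issued RunInstances, straight from CloudTrail.
        ct = boto3.client("cloudtrail", region_name="us-east-1")
        pages = ct.get_paginator("lookup_events").paginate(
            LookupAttributes=[
                {"AttributeKey": "EventName", "AttributeValue": "RunInstances"}
            ]
        )
        for page in pages:
            for event in page["Events"]:
                detail = json.loads(event["CloudTrailEvent"])
                print(
                    event["EventTime"],
                    detail.get("userIdentity", {}).get("arn", "?"),
                    detail.get("sourceIPAddress", "?"),
                )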

    But if you have had a compromise based on initial investigations, it's probably worthwhile getting professional support to do a thorough audit of your environment.

    [0] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/c...

    [1] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/c...

    [2] https://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-...

    [3] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/s...

    [4] https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credenti...

    [5] https://research.splunk.com/sources/0460f7da-3254-4d90-b8c0-...

    [6] https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_R...

  • jinen83 8 hours ago

    This is helpful. I will look for the logs.

    Also, some more observations:

    1) Some 20 organisations were created within our Root, all with email IDs on the same domain (co.jp)

    2) The attacker had created multiple Fargate templates

    3) They created resources in 16-17 AWS regions

    4) Quota increases were requested for SES and AWS Fargate resource rates, plus SageMaker notebook maintenance; we have no need for these instances (we received an email from AWS for all of this)

    5) In some of the emails I started seeing a new name added (random name @outlook.com)
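
    Given they hit 16-17 regions, a rough boto3 sketch for sweeping every enabled region for instances that shouldn't exist:

        import boto3

        # Count running/pending EC2 instances in every enabled region.
        regions = boto3.client("ec2", region_name="us-east-1").describe_regions()["Regions"]
        for region in (r["RegionName"] for r in regions):
            ec2 = boto3.client("ec2", region_name=region)
            count = 0
            for page in ec2.get_paginator("describe_instances").paginate(
                Filters=[{"Name": "instance-state-name", "Values": ["pending", "running"]}]
            ):
                for reservation in page["Reservations"]:
                    count += len(reservation["Instances"])
            if count:
                print(f"{region}: {count} running/pending instances")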

    • ThreatSystems 8 hours ago

      It does sound like you've been compromised by an outfit that has got automation to run these types of activities across compromised accounts. A Reddit post[0] from 3 years ago seems to indicate similar activities.

      Do what you can to triage and see what's happened. But I would strongly recommend getting a professional outfit in ASAP to remediate (if you have insurance, notify them of the incident as well, as they'll often be able to offer services to support remediation), and notify AWS that an incident has occurred.

      [0] https://www.reddit.com/r/aws/comments/119admy/300k_bill_afte...

  • sylens 11 hours ago

    RunInstances

sousastep 11 hours ago

A couple of folks on Reddit said that while they were refreshing during the outage, they were briefly logged in as a whole different user.

  • gwbas1c 9 hours ago

    Years ago I worked for a company where customers started seeing other customers' data.

    The cause was a bad hire who decided to do a live debugging session in the production environment. (I stress "bad hire" because after I interviewed them, my feedback was that we shouldn't hire them.)

    It was kind of a mess to track down and clean up, too.

  • __turbobrew__ 10 hours ago

    Maybe DynamoDB was inconsistent for a period, and since it backs IAM, credentials were scrambled? Do you have references for this? Because if it is true, that is really, really bad.

  • afandian 11 hours ago

    Got references? This is crazy.

    • blast 5 hours ago

      I saw a link to https://old.reddit.com/r/webdev/comments/1obtbmg/aws_site_re... at one point but then it was deleted

      • perpil 5 hours ago

        This is not about the AWS Console; it is talking about the customer's site hosted on CloudFront. It is possible to cross wires between user sessions when using CloudFront if you haven't set caching granularly enough to be specific to an end user. This scenario is customer error, not AWS.

      • duk3luk3 5 hours ago

        This isn't about an AWS account; it's about the auth inside the project that user is running.

  • CaptainOfCoit 10 hours ago

    > couple folks on reddit said while they were refreshing during the outage, they were briefly logged in as a whole different user

    Didn't ChatGPT have a similar issue recently? That sounds awfully similar.

    • sunaookami 9 hours ago

      Steam also had this, classic caching issue.

      • mbo 7 hours ago

        This happened to me on Twitter maybe like, 9 years ago? What's the mechanism of action that causes this to happen?

        • howinator 5 hours ago

          The easiest way to do this is to misconfigure your CDN so that it caches set-cookie headers.
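
          On CloudFront specifically, one defensive sketch with boto3: put the session cookie into the cache key so one user's cached response (Set-Cookie included) can't be served to another. The policy and cookie names here are illustrative, and for truly per-user pages, Cache-Control: private at the origin is the more robust fix:

              import boto3

              # Sketch: a cache policy that makes the session cookie part of the
              # cache key, so responses are cached per-session rather than shared.
              # "per-session-cache-key" and "session_id" are illustrative names.
              cf = boto3.client("cloudfront")
              resp = cf.create_cache_policy(
                  CachePolicyConfig={
                      "Name": "per-session-cache-key",
                      "MinTTL": 0,
                      "DefaultTTL": 60,
                      "MaxTTL": 300,
                      "ParametersInCacheKeyAndForwardedToOrigin": {
                          "EnableAcceptEncodingGzip": True,
                          "HeadersConfig": {"HeaderBehavior": "none"},
                          "QueryStringsConfig": {"QueryStringBehavior": "all"},
                          "CookiesConfig": {
                              "CookieBehavior": "whitelist",
                              "Cookies": {"Quantity": 1, "Items": ["session_id"]},
                          },
                      },
                  }
              )
              print("cache policy id:", resp["CachePolicy"]["Id"])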

  • liviux 10 hours ago

    A friend of a friend knows a friend who logged in to Netflix root account. Source: trust me bro

yfiapo 11 hours ago

Highly likely to be a coincidence. Typically it's an exposed access key. An exposed password for non-MFA-protected console access happens too, but is less common.

kondro 4 hours ago

us-east-1 is unimaginably large. The last public info I saw said it had 159 datacenters. I wouldn't be surprised if many millions of accounts are primarily located there.

While this could possibly be related to the downtime, I think it's probably an unfortunate coincidence.

itsnowandnever 11 hours ago

I can't imagine it's related. If it is related, hello Bloomberg News, or whoever will be reading this thread, because that would be a catastrophic breach of customer trust that would likely never fully return.

  • jddj 10 hours ago

    You say that, but Azure and Okta have had a handful of these, and life over there has more or less gone on.

    Inertia is a hell of a drug

    • testfrequency 5 hours ago

      Similarly, everyone is back to using CrowdStrike and their stock is just fine

didip 4 hours ago

Times of panic are when people are most vulnerable to phishing attacks.

Do a total password reset and tell your AWS representative. They usually let it slide in good faith.

geor9e 9 hours ago

If I were a burglar holding a stolen key to a house, waiting to pick a good day, a city-wide blackout would probably feel like a good day.

  • what 5 hours ago

    That’s likely a pretty bad day to burgle. People are probably going to be at home. You should wait for garbage day and see who hasn’t put their bins out.

    • bthrn 4 hours ago

      This guy burgles

bdcravens 11 hours ago

Any chance you did something crazy while troubleshooting the downtime (before you knew it was an AWS issue)? I've had to deal with a similar situation, and in my case I was lazy and pushed a key to a public repo. (Not saying you did; just saying that in my case it was a leaked API key.)

brador 8 hours ago

Lots of keys and passwords were being panic-entered on insecure laptops yesterday.

Do not discount the possibility of regular malware.

  • tylergetsay 5 hours ago

    Or the keys were long compromised, and yesterday someone opened up permissions on them in order to mitigate the outage.

AtNightWeCode 9 hours ago

It's not uncommon for machines to get exposed during troubleshooting. Just look at the CrowdStrike incident the other year: people enabled RDP on a lot of machines to "implement the fix", and now many of those machines are more vulnerable than if they had never installed that garbage security software in the first place.

klysm 11 hours ago

Sounds like a coincidence to me
