bradgessler 3 days ago

When we did annual pen testing audits for my last company, the security audit company always offered to do phishing or social engineering attacks, but advised against it because they said it worked every single time.

One of the most memorable things they shared is that they'd throw USB sticks in the parking lot of the company they were pentesting, and somebody would always put the thing into a workstation to see what was on it and get p0wned.

Phishing isn't really that different.

Great reminder to set up Passkeys: https://help.x.com/en/managing-your-account/how-to-use-passk...

  • Aeolun 2 days ago

    Our company does regular phishing attacks against our own team, which apparently gets us a noteworthy 90% ‘not-click’ rate (don’t quote me on numbers).

    Never mind that that 10% is still 1500 people xD

    It’s gone so far that they’re now sending them from our internal domains, so when the banner to warn me it was an external email wasn’t there, I also got got.

    • antonymoose 2 days ago

      I used to work for an anti-phishing-focused brand protection firm. We provided training and testing to third parties, and heavily, aggressively dogfooded our own products.

      So, of course, we got to a point as a company where no one opened any email or clicked any link ever. This caused HR pain every year during open-enrollment season, for other annual trainings, etc.

      At one point they started putting “THIS IS NOT A PHISH” in big red letters at the top of the email body to get folks to open emails and handle paperwork.

      So then our trainers stole the “NOT A PHISH” header and got almost the entire company with that one email.

    • solid_fuel 2 days ago

      At a previous position, I had a rather strained relationship with the IT department - they were very slow to fill requests and maintained an extremely locked down Windows server that we were supposed to develop for. It wasn't the worst environment, but the constant red tape was pretty frustrating.

      I got got when they sent out a phishing test email disguised as a survey of user satisfaction with the IT department. Honestly I couldn't even be mad about it - it looked like all those other sketchy corporate surveys complete with a link to a domain similar to Qualtrics (I think it was one or two letters off).

      • taneq 2 days ago

        TBH this is probably the best argument for actually conducting phishing pentests. It shuts up the technical users who think they're too smart to need the handrails and safety nets that the IT department set up for the rest of the average plebs who work there.

        (Speaking as one of the technical users here. Of course, it wouldn't happen to ME! :P )

        • eru 2 days ago

          If you never read your emails, it's hard for them to get you with phishing emails.

        • kuschku 2 days ago

          If you've got email filters set up that sort emails by (DKIM-verified) sender into folders, phishing becomes immediately obvious, as you start to wonder why the message isn't sorted into the right folder.
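
          For example, a minimal sketch of that sorting rule in Python (this assumes your mail stack stamps an Authentication-Results header with the DKIM verdict, e.g. via Rspamd or OpenDKIM; the folder mapping is made up):

            # Hypothetical delivery hook: file mail into a per-sender folder only
            # when the From domain matches and DKIM passed for that domain.
            # Anything left unsorted in the inbox is the red flag.
            import email
            from email import policy

            EXPECTED = {"github.com": "GitHub", "x.com": "Twitter"}  # illustrative

            def pick_folder(raw_message: bytes) -> str:
                msg = email.message_from_bytes(raw_message, policy=policy.default)
                auth = msg.get("Authentication-Results", "")
                sender = (msg.get("From") or "").rsplit("@", 1)[-1].strip("> ").lower()
                folder = EXPECTED.get(sender)
                if folder and "dkim=pass" in auth and f"header.d={sender}" in auth:
                    return folder
                return "INBOX"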

          • zahlman 2 days ago

            I'd heard that the spammers are better at using DKIM correctly than legitimate users nowadays... ?

        • Aeolun a day ago

          I dunno, if I get phishing emails in my inbox I feel like a certain team has already failed. We have a firewall that blocks anything non-approved. Do the same thing with emails.

    • liquidgecka 2 days ago

      My former company would send out rewards as a thank you to employees. It was basically a “click here to receive your free gift!” email. I kept telling the security team that this was a TERRIBLE president but it continued none the less. The first time I got one I didn’t open it for ages, even after confirming the company was real. It was only after like the 5th nagging email that I asked security about it and they confirmed that it was in fact a real thing the company was using. I got a roomba, a nice outdoor chair, and some sweet headphones. =)

      • riffraff 2 days ago

        I'm pretty sure you meant "terrible precedent" but I giggled a bit thinking "yeah the company president is pretty bad for forcing this".

        • taneq 2 days ago

          I kinda want to start using "setting a terrible president" now and see who calls me out on it. :D

      • taneq 2 days ago

        There are SO MANY terrible practices like this carried out by companies big enough to know better. From registering new domains for email addresses (for a while a BigCorp customer of ours had a mix of @bigcorp.com and @bigcorp2.com email addresses, how the hell is any user meant to guess that MediumCorp hasn't also spun up a mediumcorp2.com mail server?!) to FedEx sending "click this link to pay import duties" texts from random unaffiliated (probably personal?) mobile numbers as their primary method of contacting recipients for import duties... The internet (like credit cards) is built on and around trust, and it shouldn't be.

        Congrats on the loot, though! Your former company can't be all bad. ;)

        • pixl97 2 days ago

          >mix of @bigcorp.com and @bigcorp2.com

          This pisses me off when the company I work for has a website for the new application of the week. I couldn't even begin to tell you how many websites we have. They don't have a list of them anywhere.

    • Spivak 2 days ago

      I'm so surprised by this, not because I don't think that many people would fall for a phishing attempt, but because the corporate "training" phishing emails are so glaringly obvious that I think it does a disservice to the people being tested. I feel like it gives a false impression you can detect phishing via vibes when the real ones will be much stealthier.

      Are your phishing emails good? If so, would you mind name-dropping the company so I can make a pitch to switch to them?

      • CJefferson 2 days ago

        I had the opposite problem recently: I got a work phishing email from netflix.com. I still shouldn't have clicked on it, since Netflix isn't attached to my work email, but you can't actually send a phishing email from account@netflix.com - they had to give the phishing company access to our inboxes so it could manually drop the message in.

      • masklinn 2 days ago

        Like many other scams, an “obvious” entry point can be very useful as it makes victims self-selected, and a lot more likely to follow to completion. Even if the opportunity cost of phishing is low, having nobody report the attempt makes for a longer window of operation.

    • skeeter2020 2 days ago

      >> so when the banner to warn me it was an external email

      These are so obviously useless. When the majority of your email has a warning banner, it stops being any sort of warning. It's like being at "code orange" for 20 years after 9/11; no one maintained "heightened security awareness" for decades, it just became something else to filter.

      • dogleash 2 days ago

        > When the majority of your email has a warning banner it stops to be any sort of warning.

        All they've done is teach me to spot the phishing tests, because our email is configured to let the test bypass the banner.

  • Amorymeltzer 3 days ago

    >they'd throw USB sticks in the parking lot of the company they were pentesting and somebody would always put the thing into a workstation to see what was on it and get p0wned.

    One of my favorite quotes is from an unnamed architect of the plan in a 2012 article about Stuxnet/the cyber attacks on Iran's nuclear program:

    "It turns out there is always an idiot around who doesn't think much about the thumb drive in their hand."

    • mr_mitm 2 days ago

      I don't think we should be calling the users idiots when we failed to make our systems secure by design. If a simple act like plugging in a thumb drive by a well-meaning user undermines the security of an entire operation, then why do we allow such a thing to happen?

      Relevant: https://www.schneier.com/blog/archives/2016/10/security_desi...

      • eru 2 days ago

        Yes. People used to laugh at the auto-play for CD-ROMs in Windows 95. But if a USB device can hijack your system, is it that different?

        • pas 2 days ago

          Are we still at the "Bill Gates got a BSoD during the demo of USB" level?

          I know that at least on Linux mounting filesystems can lead to nasty things, so there's FUSE, but ... I have no idea what distros and desktop environments do by default. And then there's all the preview/thumbnail generators and metadata parsers, ...

          • eru a day ago

            One big problem with USB is that something might look like a storage device to the human eyes and hands, but it's actually a keyboard as far as the computer is concerned.

            The U stands for Universal, and it's awfully convenient, but it contributes to the security nightmare.

            A CD we can just passively read the bytes off, but if we want our keyboards to just work when we plug them in, then it's going to be harder to secure a supposedly dumb storage device.

            • pas a day ago

              Sure, and it can be any kind of device, and ... it can trick the OS into loading strange drivers (with juicy vulnerabilities), but that's the point. How the fuck is this still the norm? (Despite user mode driver frameworks!?)

  • DamnInteresting 2 days ago

    Last year I got a phishing email at my work address, and it was more convincing than most. I knew it was phishing, but it might have fooled me if I'd been less attentive.

    When I see these sophisticated phishing messages I like to click through and check out how well-made the phishing site itself is; sometimes I fill their forms with bogus info to waste their time. So I opened the link in a sandboxed window, looked around, and entered nothing into any forms.

    It turns out the email was from a pen testing firm my employer had hired, and it had a code baked into the URL they sent me. So they reported that I had been successfully phished, even though I never input any data, let alone anything sensitive.

    If that's the bar pen testing firms use to say that they've succeeded in phishing, then it's not very useful.

    • bc569a80a344f9c 2 days ago

      I think it's fair to put more (or, maybe, less) nuance on that. Zero-days against browsers exist, zero-days against plugins installed via MDM exist. Sure, you didn't actually submit any credentials, but cybersecurity training and phishing simulations have to target a lowest common denominator: people shouldn't click on links in shady emails. Sometimes just the act of clicking is bad enough. So that's what they base assigning training or a pass/fail on: whether you accessed the pretend TA site, and not whether you hit a submit button there.

      For what it's worth, all vendors I've worked with in that space report on both. I'm pretty sure even o365's built-in (and rather crude) tool reports both on "clicked link" and "submitted credentials". I'd estimate it's more likely your employer was able to tell the difference, but didn't bother differentiating between the two when assigning follow-up training because just clicking is bad enough.

  • amenghra 3 days ago

    If you are getting powned by running random executables found on usb drives, passkeys aren’t going to save you. Same if the social engineering is going to get you to install random executables.

    • tialaramex 3 days ago

      If you're getting pwned, a physical Security Key still means bad guys don't have the actual credential (there's no way to get that), and they have to work relatively hard to even create a situation where you might let them use the credential you do have (inside the Security Key) while they're in position to exploit you.

      These devices want a physical interaction (this is called "User present") for most operations, typically signified by having a push button or contact sensor, so the attacker needs to have a proof of identity ready to sign, send that over - then persuade the user to push the button or whatever. It's not that difficult but it's one more step and if that doesn't work you wasted your shot.

      • amenghra 3 days ago

        Malicious binary steals browser cookies giving attacker access to all active sessions?

        • FreakLegion 2 days ago

          It gets better. With malware on the box you own the primary refresh token, which can mint new browser tokens without needing passwords or MFA.

          Definitely use FIDO2, but understand that it's not foolproof. Malware, OAuth phishing, XSS, DNS hijacking, etc. will still pwn you.

      • mr_mitm 2 days ago

        All your 2FA apps, tokens, security keys, certificates and whatnot only protect the authentication (and, in the case of online banking, a few other actions like transferring money). After that, a single bearer token authenticates each request. If your endpoint is compromised, the attackers will simply steal the bearer token after you authenticate.

        • tialaramex 2 days ago

          That's true, but in terms of system design you definitely should ask to see the proof of identity again during unusual transactions and not just that bearer token - for example attempts to add or remove 2FA should need that extra step, as well as say high value financial transactions or irreversible system changes.

    • rm445 2 days ago

      I think the claim is that plugging in the USB device is enough. If people needed to try running an executable from the device, some devices would still be compromised, but with lower frequency. I don't know exactly what happens. Automatically-triggered 'driver' install that is actually malware? Presenting as a keyboard and typing commands? Low-level cracks in the OS USB stack?

      It feels to me more like OSes ought to be more secure. But USB devices are extremely convenient.

      • scq 2 days ago

        Usually presents as a keyboard that types commands, yeah. Win-R -> powershell -> execute whatever you want.

        E.g. https://shop.hak5.org/products/usb-rubber-ducky

        • yencabulator a day ago

          Still fits "It feels to me more like OSes ought to be more secure."

          New USB-HID keyboard? Ask it to input a sequence shown on screen to gain trust.

          Though USB could be better too; having unique gadget serial numbers would help a lot. Matching by vendor:product at least means the duplicate-gadget attack would need to be targeted.

    • akerl_ 3 days ago

      Sure; the fix for that is blocking unexpected USB devices on corporate devices.

    • bee_rider 3 days ago

      I don’t disagree.

      But, haven’t there been bugs where operating systems will auto run some executable as soon as the USB is plugged in? So, just to be paranoid, I’d classify just plugging the thing in as “running random executables.” At least as a non-security guy.

      I wonder if anyone has tried going to a local Staples or Best Buy or something, and slipping the person at the register a bribe… “if anyone from so-and-so corp buys a flash drive here, put this one in their bag instead.”

      Anyway, best to just put glue in the USB ports I guess.

      • tsimionescu 2 days ago

        Even if the OS doesn't have any bad security practices and doesn't do this, there is a very simple way to execute code from a USB stick: the USB stick pretends it's a USB keyboard and starts sending input to access a terminal. As long as the computer is unlocked, this will work, and will easily get full local user access, even defeating UAC or similar measures. It can then make itself persistent by "typing in" a malicious script, and going further from there.

        • lmm 2 days ago

          > there is a very simple way to execute code from a USB stick: the USB stick pretends it's a USB keyboard and starts sending input to access a terminal

          Good systems these days won't accept such a "keyboard" until it's approved by the user.

          • tsimionescu 2 days ago

            Which systems ask before allowing you to use a keyboard you just plugged in over USB? Windows, Ubuntu, Fedora certainly don't, at least not by default.

            • JdeBP 2 days ago

              Mine. Not asking whoever happens to have local physical access interactively, strictly speaking, as that just papers over one of the problems; but controlling what Human Input Devices are allowed when plugged in, by applying rules (keyable on various device parameters) set up by the administrator.

              Working thus far on NetBSD, FreeBSD, and Linux. OpenBSD to come when I can actually get it to successfully install on the hardware that I have.

              * https://jdebp.uk/Softwares/nosh/guide/user-virtual-terminal-...

              In principle there's no reason that X11 servers or Wayland systems cannot similarly provide fine-grained control over auto-configuration instead of a just-automatically-merge-all-input-devices approach.

              • BobaFloutist 2 days ago

                Assuming the OS isn't running on a laptop, how do you approve the first keyboard or mouse you plug in?

                • JdeBP a day ago

                  It's not an interactive approval process, remember. It's a ruleset-matching process. There's not really a chicken-and-egg problem where one builds up from nothing by interactively approving things at device insertion time using a keyboard, here. One does not have to begin with nothing, and one does not necessarily need to have any keyboard plugged in to the machine to adjust the ruleset.

                  The first possible approach is to start off with a non-empty ruleset that simply uses the "old model" (q.v.) and then switch to "opt-in" before commissioning the machine.

                  The second possible approach is to configure the rules from empty having logged in via the network (or a serial terminal).

                  The third possible approach is actually the same answer that you are envisaging for the laptop. On the laptop you "know" where the USB builtin keyboard will appear, and you start off having a rule that exactly matches it. If there's a "known" keyboard that comes "in the box" with some other type of machine, you preconfigure for that whatever it is. You can loosen it to matching everything on one specific bus, or the specific vendor/product of the supplied keyboard wherever it may be plugged in, or some such, according to what is "known" about the system; and then tighten the ruleset before commissioning the machine, as before.

                  The fourth possible approach is to take the boot DASD out, add it to another machine, and change the rules with that machine.

                  The fifth possible approach is for there to be a step that is part of installation that enumerates what is present at installation time and sets up appropriate rules for it.

            • windward 2 days ago

              I've only seen it on Macs

              • tsimionescu 2 days ago

                Out of curiosity, how does that work if this is the only input method connected? Or is this only shown if you have another keyboard (and/or mouse) already connected.

                • windward a day ago

                  Sorry, IDK, I've only used their laptops

      • throwaway173738 3 days ago

        Good luck doing hardware development without USB ports, as the IT team at my employer recently found out.

        • vhcr 2 days ago

          The second best option is to whitelist USB devices by VID and PID.

    • windward 2 days ago

      USB devices can be USB HIDs

  • dilyevsky 3 days ago

    The stray USB stick is how Stuxnet allegedly got deployed. Tbh I doubt that works in this day and age.

    • anonymousiam 3 days ago

      What I heard about the Stuxnet attack was different from what you are saying:

      The enrichment facility had an air-gapped network, and just like our air-gapped networks, they had security requirements that mandated continuous anti-virus definition updates. The AV updates were brought in on a USB thumb drive that had been infected, because it WASN'T air-gapped when the updates were loaded. Obviously their AV tools didn't detect Stuxnet, because it was a state-sponsored, targeted attack, and not in the AV definition database.

      So they were a victim of their own security policies, which were very effectively exploited.

      • NicolaiS 2 days ago

        Do you have any sources that the infected USB contained AV updates?

        I can't find any sources saying that.

        • anonymousiam 2 days ago

          This was years ago by word of mouth within channels. AFAIK it wasn't classified, but maybe the guy who told me goofed.

    • roblabla 3 days ago

      A USB can pretend to be just about any type of device to get the appropriate driver installed and loaded. They can then send malformed packets to that driver to trigger some vulnerability and take over the system.

      There are a _lot_ of drivers for devices on a default windows install. There are a _lot more_ if you allow for Windows Update to install drivers for devices (which it does by default). I would not trust all of them to be secure against a malicious device.

      I know this is not how Stuxnet worked (instead using a vulnerability in how LNK files were shown in explorer.exe as the exploit), but that just goes to show how much surface there is to attack using this kind of USB stick.

      And yeah, people still routinely plug random USBs in their computers. The average person is simultaneously curious and oblivious to this kind of threat (and I don't blame them - this kind of threat is hard to explain to a lay person).

      • zahlman 2 days ago

        Do people still commonly use USB for removable storage? I kinda assumed it was all SD/microSD now.

        • JdeBP 2 days ago

          They certainly still plug those SD/TF cards into USB card readers that present themselves as USB mass storage devices.

          • zahlman 2 days ago

            Sure, but who's going to pick up a random USB-to-SD adapter from the parking lot and plug that into a computer? The point of the USB key experiment is that the "key" form factor advertises "there is potentially interesting data here and your only chance to recover it is to plug this entire thing in wholesale".

            • JdeBP a day ago

              You're moving your own goalposts, by now restricting this to a storage device that is fitted into an adapter to make it USB. There is no requirement to limit this to USB, however.

              They'll pick up the SD/TF card and put it into a card reader that they already have, and end up running something just by opening things out of curiosity to see what's on the card.

              One could pull this same trick back in the days of floppy discs. Indeed, it was a standard caution three decades ago to reformat found or (someone else's) used floppy discs. Hell, at the time the truly cautious even reformatted bought-new pre-formatted floppy discs.

              This isn't a USB-specific risk. It didn't come into being because of USB, and it doesn't go away when the storage medium becomes SD/TF cards.

              • zahlman a day ago

                > You're moving your own goalposts... This isn't a USB-specific risk

                I'm not, because I am talking about a USB-specific risk that has been described repeatedly throughout the thread. In fact, my initial response was to a comment describing that risk:

                > A USB can pretend to be just about any type of device to get the appropriate driver installed and loaded. They can then send malformed packets to that driver to trigger some vulnerability and take over the system.

                The discussion is not simply about people running malware voluntarily because they have mystery data available to them. It is about the fact that the hardware itself can behave maliciously, causing malware to run without any interaction from the user beyond being plugged in.

                The most commonly described mechanism is that the USB device represents itself to the computer as a keyboard rather than as mass storage; then sends data as if the user had typed keyboard shortcuts to open a command prompt, terminal commands etc. Because of common controller hardware on USB keys, it's even possible for a compromised computer to infect other keys plugged into it, causing them to behave in the same way. This is called https://en.wikipedia.org/wiki/BadUSB and the exploit technique has been publicly known for over a decade.

                A MicroSD card cannot represent anything other than storage, by design.

                • roblabla 15 hours ago

                  SD/MMC does restrict things a bit, however:

                  1. SD is not storage-only, see SDIO cards. While I don’t think Windows auto-installs drivers for SDIO devices on connection, it still feels risky.

                  2. It’s worth noting Stuxnet would have worked equally well off a bog-standard SD card, relying only on a malformed file ^^.

                  I wouldn’t plug a random microsd in a computer I cared about.

    • EvanAnderson 2 days ago

      Stuxnet deployment wasn't just a USB stick, though. It was a USB stick w/ a zero-day in the Windows shell for handling LNK files to get arbitrary code execution. That's not to say that random thumb drives being plugged-in by users is good, but Stuxnet deployment was a more sophisticated attack than just relying on the user to run a program.

      (They will run programs, though. They always do.)

  • danpalmer 2 days ago

    I've seen someone do a live, on stage demo of phishing audit software, where they phished a real company, and showed what happens when someone falls for it.

    Live. On stage. In minutes. People fall for it so reliably that you can do that.

    When we ran it we got fake vouchers for "cost coffee" with a redeem link, new negative reviews of the company on "trustplot" with a reply link, and abnormal activity on your "whatapp" with a map of Russia, and a report link. They were exceptionally successful even despite the silly names.

  • KingOfCoders 2 days ago

    Once a head of security worked for me (I was the CTO), and she was great, great, great. She did the same, putting USB sticks on the printers, for example, to see who would plug one into their computer.

  • croes 2 days ago

    But don’t use the passkey feature of your smartphone.

    They have no import/export, so you are stuck in the iOS/Android ecosystem or have to redo the passkey setup for every site all over again

  • kstenerud 2 days ago

    These audits are infuriating. At one company I was at it got so bad that I eventually stopped reading email and told people "If it's important, ping me on Slack"

stavros 3 days ago

Ever since I almost got phished (wasn't looking closely enough at the domain to notice a little stress mark over the "s" in the domain name, thankfully I was using a hardware wallet that prevented the attack entirely), I realized that anyone can get phished. They just rely on you being busy, or out, or tired, and just not checking closely enough.

Use passkeys for everything, like Thomas says.

  • ChrisMarshallNY 3 days ago

    If you grok Apple, I wrote up a tutorial on very basic PassKey implementation (for iOS apps), here: https://littlegreenviper.com/series/passkeys/

    • stavros 3 days ago

      Very nice, thanks! By the way, the preferred capitalization is "passkeys", like "passwords". It's not supposed to be capitalized like a proper noun.

      • ChrisMarshallNY 3 days ago

        I prefer all lowercase. Not sure where I got the CamelCase version, but it may have been from the Apple or FIDO docs.

        I’d like to write a follow-up that covers authentication apps/devices, but I need to do some research, and find free versions.

  • Y_Y 3 days ago

    Counterpoint: don't use passkeys, they're a confused mess and add limitations while not giving any benefits over a good long password in a password manager.

    • dewey 3 days ago

      They prevent you from being one of these, and from copy-pasting the password from the password manager into the wrong input field - something that still happens often, with many websites not properly auto-filling from password managers.

      > They just rely on you being busy, or out, or tired, and just not checking closely enough

      • o11c 3 days ago

        If you are "copy-pasting" you are not using your password manager correctly.

        • roywiggins 3 days ago

          Password managers rarely are able to autofill 100% of the time. Autofill breaking is not a very strong indicator of a phishing attempt, people are used to manually filling the password in sometimes for totally legit sites.

          • wavemode 2 days ago

            I'm used to 1Password not being able to autofill, yes. But I'm not used to no account showing up at all when I open the UI panel. If that happens, I immediately know I'm on the wrong domain.

            • tsimionescu 2 days ago

              You know you're on a new domain. However, sites change their auth flow much more often than any particular person gets phished. So, if you're using a larger variety of sites, you'll likely encounter the benign situation at least a dozen times before you ever encounter your first actual phishing attempt, at which point you'll have gotten used to it.

              For example, Twitter relatively recently changed from authenticating on twitter.com to redirecting you to x.com to authenticate (interestingly, Firefox somehow still knows to auto fill my password, but not my username on the first page).

        • kalleboo 2 days ago

          It's far too common for websites to redirect to some separate domain for sign in which isn't the one originally used to sign up, getting users used to "oh gotta copy the password again" as a totally normal thing that happens

          • chrismorgan 2 days ago

            I keep hearing people say this, but I haven’t found it so: in over a decade, I think I’ve only seen it twice. Looking through my password safe which I’ve been using for about twelve years with over 200 entries, I have nine cases with multiple origin URLs, and most of them I’m confident I added manually because I didn’t like the URL it recorded automatically (e.g. it’s on a different domain from the main site, and the specific path is for signup only, but I want to be able to “visit site” from the password safe and get to the login page or at least the homepage). I think that only two of them have ever actually used more than one origin: a banking one that switched from .com.au to .com at some point as part of a broader global restructuring (and they made a fair bit of noise about it, and you had to partly make a new account anyway), and a Microsoft account. There’s a third that I can’t check (COVID-related, gone) that might have been, but I don’t think so.

            Now on a few occasions I’ve had to copy passwords in order to access things in a different browser, and I think I did encounter one site some years ago where autofill didn’t work, but I really do find autofill almost completely reliable.

            • pixl97 2 days ago

              While you kind of addressed it below, I'm not sure you know how bad state government websites can be here in the US.

              In Texas I've had more than one site where you create the login on one site but use that same login on multiple different domains that are NOT directly connected to a singular authentication site (id.me in the example).

            • ericskiff 2 days ago

              go to tax.gov

              You'll identify on id.me

              People have just gotten used to this sort of thing unfortunately

              • chrismorgan 2 days ago

                That’s a different issue, though related.

                For password safe users, auth being handled entirely on a different origin is completely fine, so long as the credentials are bound to (only used on, including initial registration) that origin. The hazard is only when login occurs via multiple domains—which in this case would mean if you had <input> elements on both tax.gov and id.me taking the same username and password, which I don’t believe you do. Your password safe won’t care if you started at https://tax.gov, the origin you created the credentials on was https://id.me, and so that’s the origin it will autofill for.
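
                A sketch of the matching rule, to make it concrete (simplified; real password managers also handle subdomains and registrable domains):

                  from urllib.parse import urlsplit

                  def should_autofill(saved_origin: str, current_url: str) -> bool:
                      # Credentials are keyed to the origin they were created on, so a
                      # credential saved on https://id.me fills on id.me pages however
                      # you got there, and never on a lookalike like members-x.com.
                      saved, current = urlsplit(saved_origin), urlsplit(current_url)
                      return (saved.scheme, saved.hostname) == (current.scheme, current.hostname)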

          • vhcr 2 days ago

            That should only happen once, you should store the password for the second domain too.

        • dewey 3 days ago

          As I said in my comment above, sometimes it’s necessary as websites break the auto fill, or mobile apps don’t offer the password manager sheet.

        • otterley 3 days ago

          This very story illustrates how people will override their password manager's builtin protections when panic ensues.

        • madeofpalk 3 days ago

          If only everyone did everything perfectly all the time, we wouldn't have any issues!

    • tptacek 3 days ago

      This whole story is about us getting zapped because we relied on a good long password in a password manager!

      • dilyevsky 3 days ago

        So what happened exactly? Did Kurt enter his twitter password manually after clicking on that phishing link? Did he not get his sus detector going off after the password manager didn't suggest the password?

        • smsm42 3 days ago

          Unfortunately, this does not work. I see no end of banks, financial institutions, let alone random companies, who keep their authentication, for some reason, on a different domain than the main company, and sometimes they have initial registration (which gets recorded in the password manager) on one domain and subsequent logins on another, and sometimes it depends on how you arrived at the site, or which integration you are planning to use, etc. I wish there were a rule "one company - one auth domain" but it's just not true.

          Example: Citi bank has citibankonline.com, citi.com, citidirect.com, citientertainment.com, etc. Would you be suspicious of a link to citibankdirect.com? Would you check the certificate for each link going there, and trace it down, or just assume Citi is up to their shenanigans again and paste the password manually? It's jungle out there.

          • toast0 2 days ago

            > Would you check the certificate for each link going there, and trace it down, or just assume Citi is up to their shenanigans again and paste the password manually?

            What do you get from checking a certificate? Oh yeah, must really be citibank because they have a shitton of SANs? I'd guess most banks do have a cert with an organization name, but organization names can be misleading, and some banks might use LetsEncrypt?

            • pixl97 2 days ago

              Org certs have pretty much fallen out of favor these days.

        • stavros 3 days ago

          That happened to me as well, I put it down to "fucking password manager, it's broken again".

          For example, BitWarden has spent the past month refusing to auto fill fields for me. Bugs are really not uncommon at all, I'd think my password manager is broken before I thought I'm getting phished (which is exactly how they get you).

          • darthwalsh 2 days ago

            The ability to autofill by domain is a critical function of a password manager. It sounds like this tool is performing a lot worse than your browser's built-in password manager -- maybe that's enough to encourage a switch?

          • dilyevsky 3 days ago

            Yeah i could totally see how someone in a bind working off of phone could get p0wned like that

            • stavros 3 days ago

              For me it wasn't even a phone, it was on the desktop, I'm just so used to everything being buggy that it didn't trigger any alarms for me.

              Luckily the only things I don't use passkeys or hardware keys for are things I don't care about, so I can't even remember what was phished. It goes to show, though, that that's what saved me, not the password manager, not my strong password, nothing.

        • otterley 3 days ago

          Yes, that's exactly what happened. The nature of panic is that it overrides people's better judgment.

    • corndoge 3 days ago

      Yes, PKC authentication is good, but the way passkeys have been implemented is not great. Way too much trust built into the protocol; way too much power granted to relying parties; much harder for users to form a correct mental model.

    • Spivak 2 days ago

      I mean the problem with Passkeys is that they're unsuitable as the sole login method for an account. They're great as a stronger "keep me logged in" for certain devices but they're something you have and they don't survive a fire. And so every service that offers Passkeys also has to offer a reset mechanism and a backup auth flow if you're on a device without the Passkey.

      Any site that wants to phish you will either just not show the passkey flow and hope you forget or show it and make it look like it failed and pop up a helpful message about being able to register a new Passkey once you're logged in. And Passkeys are so finicky in browsers that I'd buy it.

      • pabs3 2 days ago

        You can do Passkeys entirely in software with KeePassXC.

        • corndoge 2 days ago

          Let me preface this by saying I use passkeys with KeepassXC.

          According to WebAuthn, this is not true. Such passkeys are considered "synced passkeys", which are distinct from "device-bound" passkeys, which are supposed to be stored in an HSM. WebAuthn allows an RP to "require" (scare quotes) that the passkey be device-bound. Furthermore, the RP can "require" that a specific key store be used. Microsoft enterprise, for example, requires use of Microsoft Authenticator.

          You might ask, how is this enforced? For example, can't KeepassXC simply report that it is a hardware device, or that it is Microsoft Authenticator?

          The answer is, there are no mechanisms to enforce this. Yes, KeepassXC can do this. So while you are actually correct that it's possible, the protocol itself pretends that it isn't, which is just one of the many issues with passkeys.
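
          To make the "require" concrete, here's roughly what the RP asks for at registration time (a sketch using the WebAuthn JSON field names, not any particular library's API; the values are illustrative):

            import os

            # Illustrative PublicKeyCredentialCreationOptions. The RP can *ask* for a
            # device-bound, attested authenticator, but nothing stops a software
            # authenticator from answering the prompt.
            creation_options = {
                "rp": {"id": "example.com", "name": "Example"},
                "user": {"id": os.urandom(16), "name": "alice", "displayName": "Alice"},
                "challenge": os.urandom(32),
                "pubKeyCredParams": [{"type": "public-key", "alg": -7}],  # ES256
                "authenticatorSelection": {
                    "authenticatorAttachment": "cross-platform",  # e.g. a hardware key
                    "residentKey": "required",
                    "userVerification": "required",
                },
                # "direct"/"enterprise" asks the authenticator to say what it is; it is
                # then up to the RP to verify that attestation, and many accept "none".
                "attestation": "direct",
            }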

          • pabs3 7 hours ago

            Hmm, I thought there was some form of attestation involved? Is it really as simple as spoofing the device ID? Do you have any more info/links on the spoofing?

    • bigyabai 3 days ago

      Yep. A technical, half-baked solution to a problem that has been solved since its inception. Really just feels like FAANG exists to invent new ways to charge rent...

      • akerl_ 3 days ago

        What’s the solution for preventing this kind of phishing attack?

        • NoGravitas 2 days ago

          TLS client certificates. Unfortunately, the browser UI for them ranges from godawful to removed-because-nobody-used-them.

          • akerl_ 2 days ago

            So the solution for this isn’t actually usable?

  • kgeist 3 days ago

    >I realized that anyone can get phished

    A few years ago, I managed to get our InfoSec head phished (as a test). No one is safe :)

pants2 3 days ago

This "content violation on your X post" phishing email is so common, we get about a dozen of those a week, and had to change the filters many times to catch them (because it's not easy to just detect the letter X and they keep changing the wording).

We also ended up dropping our email security provider because they consistently missed these. We evaluated/trialed almost a dozen different providers and finally found one that did detect every X phishing email! (Check Point fyi, not affiliated)

It was actually embarrassing for most of those security companies because the signs of phishing are very obvious if you look.

  • pixl97 2 days ago

    It's easy to block all phishing email. Just block all email.

    It's much much harder to block emails that aren't actually phishing but have components that would flag them anyway.

grinich 3 days ago

I got hit with the same kind of phishing attack a couple months ago

It's pretty incredible the level of UI engineering that went into it.

Some screenshots I took: https://x.com/grinich/status/1963744947053703309

  • fschuett 2 days ago

    Hmm, since Chromium is working on adding browser-local AI features, I wonder if this could one day be a security check (for links opened from outside the browser). E.g. the browser detects that you clicked on a new-tab link, the page looks like a commonly known site, and the AI notices that the URL isn't "x.com" and gives a heads-up warning. At least for the top 1000 most common sites, this could prevent a lot of phishing attacks.
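
    Something along these lines, presumably (purely hypothetical: there is no such browser hook today, the brand detection would have to come from the on-device model, and the naive domain handling below would really need the public suffix list):

      # Hypothetical heuristic: warn when a page "looks like" a well-known brand
      # but is served from a domain that brand doesn't own.
      KNOWN_BRANDS = {"x": {"x.com", "twitter.com"}, "paypal": {"paypal.com"}}

      def should_warn(detected_brand: str, page_host: str) -> bool:
          expected = KNOWN_BRANDS.get(detected_brand)
          if not expected:
              return False
          registrable = ".".join(page_host.lower().rsplit(".", 2)[-2:])
          return registrable not in expected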

  • giarc 2 days ago

    I'm sorry but the imagecontent-x.com url should throw red flags for anyone.

    • tptacek 2 days ago

      This is exactly how not to defend against phishing. The meaningful defense is to foreclose on it entirely, not to just get super good at spotting fakes.

      • classified 2 days ago

        > The meaningful defense is to foreclose on it entirely

        Sounds easy enough in theory. How do you do that in practice?

        • 9dev 2 days ago

          Use passkeys. Bully services that don’t offer them or lock them behind enterprise plans into implementing them.

          That’s it. The single working defense against credential theft.
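
          The reason it works: the browser reports the real origin and the authenticator scopes keys to the RP ID, so the server-side check fails for a lookalike domain no matter what the user types or clicks. A simplified sketch of those two checks (a real WebAuthn verification also checks the challenge, flags, and signature):

            import hashlib, json

            def origin_checks_pass(client_data_json: bytes, authenticator_data: bytes) -> bool:
                # The browser, not the user, fills in the origin, so a page on
                # "members-x.com" can never produce origin == "https://x.com".
                client_data = json.loads(client_data_json)
                if client_data.get("origin") != "https://x.com":
                    return False
                # The authenticator scopes the key to the RP ID; its SHA-256 hash is
                # the first 32 bytes of authenticatorData and must match the RP.
                return authenticator_data[:32] == hashlib.sha256(b"x.com").digest()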

  • everybodyknows 2 days ago

    So, in that case the browser (correctly) did not autofill? Is that a common occurrence for legit traffic from X? And no complaint about the website's identity from the browser -- the expected "lock" icon left of the URL?

    • 9dev 2 days ago

      As long as people are used to companies just buying new domains for the hell of it, yes. Just look at the number of domains Microsoft uses for signing in! My password manager currently holds 8 of them. Eight! Who could blame someone for thinking it's the password manager's fault?

KingOfCoders 2 days ago

Phishing training does not work.

"Understanding the Efficacy of Phishing Training in Practice" https://arianamirian.com/docs/ieee-25.pdf

  • karel-3d 2 days ago

    "Don't put your password into the website that you shouldn't and put it only to website that you should" is a circular advice.

    It's like those 2FA SMS that say "don't tell this token to anyone!" while you literally share it with the website that you login to. I am always so frustrated when I receive those

  • classified 2 days ago

    > We are reliably informed by our zoomer children that we are too cringe to be trusted on these matters.

    Bullseye. At least they take it with good humor.

  • man8alexd 2 days ago

    The same paper is linked in the original article.

  • ctennis1 2 days ago

    Maybe not - but I work in a regulated industry, we had an employee get phished a few years ago, and the regulatory bodies wanted detailed records of all phishing testing and training conducted for the previous 5 years. So for some of us it's a necessary evil.

__jonas 3 days ago

That's some impressive work on the attacker's part, having that whole fake landing page ready to go, and a pretty convincing phishing email.

I don't know much about crypto, so I'm not sure what makes them call the scam 'not very plausible' and say it 'probably generated $0 for the attackers' - is that something that can be verified by checking the wallet used on that fake landing page?

tgsovlerkhgsel 3 days ago

This is why properly working password managers are important, and why as a web site operator you should make sure to not break them. My password not auto-filling on a web site is a sufficient red flag to immediately become very watchful.

Code-based 2FA, on the other hand, is completely useless against phishing. If I'm logging in, I'm logging in, and you're getting my 2FA code (regardless of whether it's coming from an SMS or an app).

  • nialv7 3 days ago

    the creator of https://haveibeenpwned.com got phished once (no kidding), and he uses a password manager.

    • phsau 2 days ago

      And if you read the story, it's because he ignored the fact that the password manager didn't prompt auto-fill.

      "I went to the link which is on mailchimp-sso.com and entered my credentials which - crucially - did not auto-complete from 1Password. I then entered the OTP and the page hung. Moments later, the penny dropped, and I logged onto the official website, which Mailchimp confirmed via a notification email which showed my London IP address:"

  • akerl_ 3 days ago

    How does this square with the fact that the tech-savvy person in the post was phished despite using a password manager?

    • dgl 3 days ago

      The post calls this out:

      > the 1Password browser plugin would have noticed that “members-x.com” wasn’t an “x.com” host.

      But shared accounts are tricky here, like the post says it's not part of their IdP / SSO and can't be, so it has to be something different. Yes, they can and should use Passkeys and/or 1password browser integration, but if you only have a few shared accounts, that difference makes for a different workflow regardless.

      • akerl_ 3 days ago

        Yes; 1Password was used. And it worked properly. But because humans are fallible, a human made a mistake anyways.

        "Properly working password managers" do not provide a strong defense against real world phishing attacks. The weak link of a phishing attack is human fallibility.

    • otterley 3 days ago

      Precisely. 1Password's browser integration would have noticed a domain mismatch and refused to autofill the password -- but in a panic, Kurt apparently opened 1Password and then copied/pasted the credentials manually.

      • sergiotapia 3 days ago

        This is how they got my Steam account credentials, although I realized the stupid shit I did the second I clicked submit form, and reset my password to random 32 characters using bitwarden. Me! Someone who is deeply technical AND paranoid.

        The key here is the hacker must create the most incisive, scary email that will short circuit your higher brain functions and get you to log in.

        I should have noticed that Bitwarden did not autofill and taken that as a sign.

        • stavros 3 days ago

          Same thing happened to me (not with Steam), but it's also the thought that "this could never happen to me" that leads you to assign an almost zero probability to the problem being a phishing attempt.

        • zahlman 2 days ago

          > The key here is the hacker must create the most incisive, scary email that will short circuit your higher brain functions and get you to log in.

          ... and specifically by using the link in the email, yes?

      • akerl_ 3 days ago

        Which is why a properly working password manager is not a strong defense against phishing.

        • jopsen 2 days ago

          Not a strong defense, but it helps.

          But it's also why sites that don't work well with a password manager are actively setting their users up to be phished.

          Same with every site that uses sketchy domains, or worse redirects you to xyz.auth0.com to sign in.

        • otterley 3 days ago

          Correct. The moral of the story is that hardware MFA and/or passkeys are a necessity in today's world. An infinitely complex password and 2FA are no match for attacks that leverage human psychology.

        • onionisafruit 3 days ago

          It's a strong defense that this guy decided not to use

          • akerl_ 3 days ago

            User security that doesn’t meet real users where they are is just nerd theatre.

            • onionisafruit 3 days ago

              It works for me. I’m unconcerned if it works for anybody else.

              • otterley 3 days ago

                It works for lots of people, until it doesn't. You may well fall victim to such a scheme someday.

                • onionisafruit 3 days ago

                  That’s almost guaranteed now that I made such a confident statement that it works for me.

    • rtpg 3 days ago

      Because CEOs at startups are notorious for trying to problem solve aggressively by "just" doing the thing rather than throwing it at a person who _might_ have made the same mistake, but might be more primed to be confused as to why they are not logged into x dot com and why 1password's password prompt doesn't show up and why the passkey doesn't work or whatever.

      It's always possible to have issues, of course, and to make mistakes. But there's a risk profile to this kind of stuff that doesn't align well with how certain people work. Yet those same people will jump on these to fix it up!

      • akerl_ 3 days ago

        It’s a bold move to typecast all CEOs as uniquely vulnerable to a problem that the evidence shows every single one of us is vulnerable to.

        Blaming some attribute about user as why they fell for a phishing attempt is categorically misguided.

  • esseph 3 days ago

    Turn off autofill; it is exploited by modern attacks, including tapjacking

roughly 3 days ago

I was reading this and wondering why it was posted so high (I didn’t recognize the company name), and then I got to the name at the bottom. I think the lesson here is “if it could happen to Thomas, it could happen to anyone.” Yeah, the consequences here were pretty limited, but everyone’s got some vulnerability, and it’s usually in the junk pile in the corner that you’re ignoring. If the attacker were genuinely trying to do damage (as opposed to just running a two-bit crypto scam), assuming the company’s official account is a fine start to leverage for some social engineering.

  • akerl_ 3 days ago

    I think you mean Kurt.

    • stavros 3 days ago

      It would help if they mentioned his name anywhere in the post, title, or subtitle.

      • roughly 3 days ago

        Yeah, that was definitely a pebkac on my part.

        • stavros 3 days ago

          It's ok, I just couldn't pass up a good opportunity for snark!

          • roughly 3 days ago

            It genuinely took me a second - I was midway through writing a very different comment. Apparently reading comprehension is not on my skills list today…

    • roughly 3 days ago

      You’re right - I flagged on Thomas’s name in the signature and because I’ve seen him around here, well, forever, but Kurt is also extremely savvy.

      • tptacek 3 days ago

        No he's not! He got taken by this dumb phishing thing!

        • bpicolo 2 days ago

          The post is good but this comment is funnier than the whole thing.

          • mrkurt a day ago

            Please don't encourage him.

herval 3 days ago

Great writeup, but also gotta say that’s some excellent phishing

  • tptacek 3 days ago

    This exact phish has been going around lately and we're not the only ones who got bit. But we didn't know that before it happened.

  • ChrisMarshallNY 3 days ago

    I enjoyed the self-deprecating humor behind it.

    I have been almost got, a couple of times. I'm not sure, but I may have realized that I got got about 0.5 seconds after clicking[0], and was able to lock down before they were able to grab it.

    [0] https://imgur.com/EfQrdWY

silexia 3 days ago

CEO here, I also almost got taken by a fake legal notice about a Facebook post. My password manager would not auto enter my password so I tried manually entering it like a dummy. Fortunately, it was the wrong one.

  • latchkey 3 days ago

    This is exactly why I turned off auto enter.

    • akerl_ 3 days ago

      Isn’t turning off auto enter exacerbating the problem?

      The avenue for catching this is that the password manager’s autofill won’t work on the phishing site, and the user could notice that and catch that it’s a malicious domain

      • latchkey 3 days ago

        Autofill doesn't always work for every site. So, now you're having to store in your mind where it works and where it doesn't. By disabling it, it forces you to go the extra step (command-shift-L) every time.

        • akerl_ 3 days ago

          Autofill and the hotkey use the same mechanism, and neither is going to work on a phishing site.

          • latchkey 3 days ago

            You're right. The point is that the hotkey makes me think and observe more. Again, I don't have to remember whether the site previously worked with autofill or not.

            • akerl_ 3 days ago

              Sure. Except this is a story about the user manually copying the credential into a phishing site after the password manager didn’t fill it in.

              Whether that’s via a hotkey or not seems totally irrelevant.

              • latchkey 3 days ago

                It doesn't seem irrelevant to me at all. Security these days isn't just one action, it is a multitude of actions and steps and thought processes.

                By removing the expectation that my password manager is going to autofill something, I'm now making the conscious decision to always try to fill it myself.

                This makes me think more about what I'm doing, and prevents me from making nearly as many mistakes. I don't let my guard down to let the tools do all the work for me. I have to think: ok, I'll autofill things now, realize that it isn't working, and then look more closely at why it wasn't working as I expected.

                I won't just blindly copy/paste my credentials into the site because whoops, I think it might have worked previously.

      • tptacek 3 days ago

        Yes. This is the problem with the "just use a password manager" answer to phishing-resistance. They can be a line of defense, situationally, but you have to have them configured just right, and if you're using phishing-resistant authentication you don't need that line of defense in the first place.

        • rtpg 3 days ago

          Isn't this backwards? If the autocomplete doesn't show up that's a flag that the password is going somewhere it doesn't belong. If you're always copy-pasting from a password manager then you're not getting that check "for free".

          Obviously SSO-y stuff is _better_, but autofill seems important for helping to prevent this kind of scam. Doesn't prevent everything of course!

          • tptacek 3 days ago

            None of this password manager configuration stuff matters; we've just got Passkeys set up for the account now, which is what we should have done, but didn't, because we spent the last 2 years with one foot out the door on Twitter altogether.

            Since this attack happened despite Kurt using 1Password, I'm really not all that receptive to the idea that 1Password is a good answer to this problem.

            • rtpg 3 days ago

              I guess I'm just saying "1Password with autofill" will help more than "1Password without autofill".

              We can always make mistakes of course. And yeah, sometimes we just haven't done something.

              • tptacek 3 days ago

                I'm saying: an intervention was required here, and that intervention was not changing how we use auto-fill. Doing that would be playing to lose.

                • rtpg 3 days ago

                  Makes sense, I think we might have been talking past each other. Agreed that what you all actually did was right.

    • OkayPhysicist 3 days ago

      No, that's the opposite of the moral of that story. If the person you responded to had listened to the fact that the auto-enter didn't auto-enter, they wouldn't have been at any risk. Likewise in the article, the problem was that the CEO copy-pasted the password into the phishing page's password field, NOT that the auto-enter prompted him to do so.

      • latchkey 3 days ago

        As I mention below: Autofill doesn't always work for every site. So, now you're having to store in your mind where it works and where it doesn't. By disabling it, it forces you to go the extra step (command-shift-L) every time.

        • throw-10-8 2 days ago

          Hide Sidebar?

          Honestly it sounds like you are missing the point while simultaneously using a bad password manager.

          • latchkey 2 days ago

            That isn't hide sidebar and I use a common PM.

deepfriedrice 3 days ago

I don't know the gullibility of the average tech CEO but this doesn't strike me as a very convincing phishing attempt.

* "We've received reports about the latest content" - weird copy

* "which doesn't meet X Terms of Service" - bad grammar lol

* "Important:Simply ..." - no spacing lol

* "Simply removing the content from your page doesn't help your case" - weird tone

* "We've opened a support portal for you " - weird copy

There are so many red flags here if you're a native English speaker.

There are some UX red flags as well, but I admit those are much less noticeable.

* Weird and inconsistent font size/weight

* Massive border radius on the twitter card image (lol)

* Gap sizes are weird/small

* Weird CTA

  • akerl_ 3 days ago

    I think you'll be led astray thinking this is CEO-specific.

    The whole theory of phishing, and especially targeted phishing, is to present a scenario that tricks the user into ignoring the red flags. Usually, this is an urgent call to action that something negative will happen, coupled with a tie-in to something that seems legit. In this case, it was referencing a real post that the company had made.

    A parallel example is when parents get phone calls saying "hey it's your kid, I took a surprise trip to a tiny island nation and I've been kidnapped, I need you to wire $1000 immediately or they're going to kill me". That interaction is full of red flags, but the psychological hit is massive and people pay out all the time.

    • deepfriedrice 3 days ago

      I razz CEOs in jest, but my point is: This is an example of a good phishing attempt? ChatGPT could surely find and fix most of the red flags I called out. Perhaps the red flags ensure they don't phish more people than they can productively exploit.

      • akerl_ 3 days ago

        There are certainly phishing attempts that are pixel perfect, but I'd say way more energy tends to go into making the phishing websites perfect. The goal of the email is to flip people into action as quickly as possible, with as little validation as possible.

chews 3 days ago

if anyone @ x.com infosec is here, my buddy got her account phished / there is someone in CS selling creds. It was then used to pump a crypto scam, and she has been trying for months to get it sorted. She's had the account for 16-plus years; it's surprising it's this hard to fix.

It's x.com/leighleighsf, and we've tried every channel short of filing a small claims lawsuit in Texas to get her account back.

zahlman 2 days ago

> Had this been an impactful attack, we would not be this flippant about it. For this, though, any other tone on our part would be false.

> ...

> If you were inclined to take us up on an “airdrop” to “claim a share” of the “token” powering Fly.io, the site is still up. You can connect your wallet it [sic] it! You’ll lose all your money. But if we’d actually done an ICO, you’d have lost all your money anyways.

> Somebody involved in pulling this attack off had to come up with “own a piece of the sky!”, and I think that’s punishment enough for them.

I was amused by all of this, but I still feel like they should care more about how impactful this was for anyone who got crypto-scammed at the link. I mean, yes, those are people who would believe the story and also click a link like that. But what if fly.io were found to share liability?

siskiyou 2 days ago

The part I found surprising: 'Twitter fell outside the “things we take seriously” boundary'

Sure Twitter is rubbish, but it's still a huge platform, still tied to your brand, you're still using it, so it can still hurt you. Either take it seriously or stop using it.

  • tptacek 2 days ago

    Before the Twitter Change of Control, we were actively using it. After, it fell into a kind of limbo. There was a solid 6 months or so when we thought maybe we were just going to do everything via our Hachyderm account. Shit's complicated. And if we'd stopped using it altogether, we'd still be in the same boat!

    • siskiyou 2 days ago

      Thanks for your answer. I get it, you're people, it's a mistake, it's not the most horrible thing ever, but a simple "oops my bad" would have been a shorter blog post. Would you really have been in the same boat if the tweet referenced in the phishing attempt had never been posted, or did the phisher fabricate that entirely?

  • loloquwowndueo 2 days ago

    You mean X, right? Sounds like neither them nor you take it seriously :)

    • siskiyou 2 days ago

      (Turning up my hearing aids) that's right mate

black_puppydog 2 days ago

Kudos to Thomas and whoever else contributed here, the writing is great! <3

tptacek 3 days ago

I want to say again that the key thing in this post is that anything "serious" at Fly.io couldn't have gotten phished: your SSO login won't work if you don't have mandatory phish-resistant 2FA set up for it. What went wrong here is that Twitter wasn't behind that perimeter, because, well, we have trouble taking Twitter seriously.

We shouldn't have, and we do take it seriously now.

  • breakingcups 2 days ago

    I will say that a "Critical Security Vulnerability in flyctl, update now: https://bad-link/to/update.zip" tweet will have very serious consequences for a portion of your userbase, despite not directly compromising your own infra.

    • tptacek 2 days ago

      You could do that yourself today by getting a blue-checked @realFlyDotIo. But there's a paragraph in the article about this, and we know what we would have done had there been any signs of direct attacks on our users.

      • zahlman 2 days ago

        > You could do that yourself today by getting a blue-checked @realFlyDotIo

        Wouldn't that also require convincing your customers to follow that account?

  • latchkey 3 days ago

    [deleted]

    • tptacek 3 days ago

      Twitter isn't an operational dependency of ours and we don't attest to it at all. It also doesn't require we do that: what SOC2 actually demands of vendor security practices is much more complicated (and performative) than that. If Twitter were a real vendor dependency of ours, most of what we'd need would be a SOC2 attestation from them.

rtpg 3 days ago

Fly has consistently surprised me with how late they have been to doing the "standard company" stuff. Their lack of support engineering teams for a while affected me way more, though.

You gotta take the Legos away from the CEO! Being CEO means you stop doing the other stuff! Sorry!

And yes, they have their silly disclaimer on their blog, but this is Yet Another instance of the "oh lol we made a whoopsie" tone they've taken several times in the past for "real" issues. My favorite being "we did a thing, you should have read the forums where we posted about it, but clearly some of you didn't". You have my e-mail address!

Please.... please... get real comms. I'm tired of the "oh lol we're just doing shit" vibes from the only place I can _barely_ recommend as an alternative to Heroku. I don't need the cuteness. And 60% of that is because one of your main competitors has a totally unsearchable name.

Still using fly, just annoyed.

  • akerl_ 3 days ago

    I don't know where the official list of "standard company" stuff is, but I'd wager that for small to medium sized tech companies, it's relatively unsurprising for "leadership" to still be in the weeds on various operational projects and systems.

  • nberkman 2 days ago

    Don't know why this is getting downvoted. Agree with this so hard, as a continually aggrieved Fly customer (close to becoming an ex-customer). The too cool for school schtick gets old fast when they don't have the goods to back it up.

  • tptacek 3 days ago

    We've had an unusually large security team for the size of our company since 2021. I'm sorry if you don't like the way I communicate about it but I have no plans to change that. We take security extremely seriously. We just didn't take Twitter that seriously.

    The "CEO" thing is just a running joke. Kurt's an engineer. Any of us could have been taken by this. I joke about this because I assume everybody gets the subtext, which is that anything you don't have behind phishing-resistant authentication is going to get phished. You apparently took it on the surface level, and believe I'm actually dunking on Kurt. No.

    • rtpg 3 days ago

      I'm not talking about security, which I generally feel is probably being done correctly.

      I was thinking about, IIRC, back in 2023[0], when you all were suffering a lot of issues. And I _believe_ I saw some chatter about Fly building out a team of support/devops-y/SRE engineers around that time. I had just assumed up until then that, as a company about operations, you would already have a team focused on reliability.

      I am not a major user of yours (you're only selling me like 40 bucks a month of compute/storage/etc), but I had relatively often been hitting weird stuff. Some of it was me, some of it was on your side. But... well... I was using Heroku for this stuff before and it seemed to run swimmingly for a very long time. So I was definitely a bit like "oh, OK, so you just didn't care about reliability until then?" I mean this lightly, but I started basically anti-recommending you after the combo of the issues and the statements your team was making (both about this kind of operations work and about the communications after the fact).

      I think you all generally do this better now though, so maybe I'm just bringing up old grudges.

      > You apparently took it on the surface level, and believe I'm actually dunking on Kurt.

      No, I took it in the same tone I take a lot of your company's writing.

      > The "CEO" thing is just a running joke. Kurt's an engineer.

      I think if you are the CEO of a company above a certain (very low!) headcount you put down the Legos. There are enough "running a company" things to do. Maybe your dynamics are different, since your team is indeed quite small according to the teams page.

      Every startup engineer has had to deal with "The CEO is the one with admin rights on this account and he's not doing the thing, because somehow we haven't pried the credentials from him so the people doing the work can do it". And then the dual of this, "The CEO fixes the thing at 2AM but does it the wrong way and now the thing is weird". A way you avoid this is by yanking all credentials from the CEO.

      I'm being glib here, because obviously y'all have your success, the Twitter thing "doesn't matter", etc. I just want to be able to recommend you fully, and the issues I hit + the amateur hour comms in response (EDIT: in the past) gets on my nerves and prevents me from doing it!

      Anyways, I want you all to succeed.

      [0]: https://community.fly.io/t/reliability-its-not-great/11253

vednig 2 days ago

The irony would be if we found out the hackers ran their website on fly.io. That would be swell.

foxglacier 2 days ago

Like with occupational safety, we should worry about near misses as well as actual hacks. If you realize you just logged into X from a link in an email, you should berate yourself for could-have-been-hacked. Never enter credentials into links from emails!

lawik 2 days ago

Funny!

Now that Kurt doesn't have commit access, who do I ask to get the internal Fly Slack bot fizz off my back?

I was in a devrel channel for a short while and ever since it has asked me to write updates in a channel I don't have access to. Frequently.

reassess_blind 2 days ago

Is there an anti-phishing extension that detects whether a domain is close to, but not exactly, a popular legitimate domain? It would probably need to use a local LLM for the detection. If there isn't one, I might look into making one.
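
A rough sketch of what the non-LLM version of that check could look like: compare the page's domain against a short list of high-value domains with an edit-distance threshold. (The domain list, threshold, and function names below are illustrative; a real extension would also need the Public Suffix List, homoglyph handling, and an allowlist, since false positives are the hard part.)

    # Illustrative typosquat check, e.g. run by a browser extension on each navigation.
    POPULAR = ["x.com", "google.com", "github.com", "paypal.com", "qualtrics.com"]

    def edit_distance(a: str, b: str) -> int:
        # Classic Levenshtein distance via dynamic programming.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def looks_like_typosquat(domain: str, max_distance: int = 2):
        domain = domain.lower()
        for legit in POPULAR:
            if domain == legit:
                return None                 # exact match: fine
            if edit_distance(domain, legit) <= max_distance:
                return legit                # close but not equal: warn the user
        return None

    print(looks_like_typosquat("github.com"))   # None
    print(looks_like_typosquat("githib.com"))   # "github.com"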

  • haruka_ff 2 days ago

    MetaMask (the crypto wallet) has one that shows warning pages for any domain remotely similar to crypto-related domains, and it's very prone to false positives and annoying. They have to maintain a list of real domains to skip the detection for, and it's really inefficient.

    Feels like this kind of detection is hard to balance, and calling legit websites possible phishing might be problematic...

    • reassess_blind 2 days ago

      Seems like the kind of problem LLMs would be perfect for. ChatGPT does a great job of scoring whether a domain is attempting to appear legitimate, but of course no one wants their browsing history sent off to OpenAI. Unfortunately, from my testing, local 4B-7B LLMs aren't up to the task.

  • typpilol 2 days ago

    Edge has some basic typosquat protection.

classified 2 days ago

X Terms of Service error: Meme not dank enough.

jryio 3 days ago

I'm always glad to see companies, developers, and CEOs make a heartfelt and humanistic mea culpa.

We would like to think that we're the smart ones and above such low level types of exploits, but the reality is that they can catch us at any moment on a good or bad day.

Good write up

x0x0 3 days ago

... could we get webauthn / yubikeys prioritized for fly? afaik (don't want to disable 2fa to find out), it only supports totp.

For everyone reading though, you should try fly. Unaffiliated except for being a happy customer. 50 lines of toml is so so much better than 1k+ lines of cloudformation.

  • tptacek 3 days ago

    We don't like TOTP, at all, for reasons even more obvious now, but our standard answer for advanced MFA has been OIDC, which is what most people should do rather than setting up bespoke U2F/FIDO2/Passkeys.

    We will get to this though.

    https://fly.io/blog/tokenized-tokens/

    • parliament32 2 days ago

      That would be great, but

      > Fly.io supports Google and GitHub as Identity Providers[1]

      How about you just support SAML like a real enterprise vendor, so IdP-specific support isn't your problem anymore? I get it, SAML is hard, but it's really the One True Path when it comes to this stuff.

      [1] https://fly.io/docs/security/sso/

      • tptacek 2 days ago

        SAML is awful, maybe the worst cryptographic protocol ever devised, and we won't implement it unless we absolutely have to. OIDC is the future.

        I'm not exaggerating; you can use the search bar and find longer comments from me on SAML and XMLDSIG. You might just as well ask when we're going to implement DNSSEC.

        • parliament32 2 days ago

          I certainly see you whining a lot about SAML in your history. This lines up with my "SAML is hard" comment above -- SAML is filled with footguns and various perils, but that doesn't necessarily make it bad. OIDC is certainly better in a few aspects (note that trading XML parsing for JSON parsing is not one of them), but the killer SAML feature that you (and by you, I mean fly.io, to be clear) are missing is being IdP-agnostic. You cannot reasonably expect those two vendors to cover even half of your potential enterprise user base; and yes, for anyone working in an even remotely regulated industry, not being compatible with our SSO ensures you get dropped even before the evaluation phase.

          My favourite slop-generator summarizes this as "While SAML is significantly more complex to implement than OIDC, its design for robust enterprise federation and its maturity have resulted in vendors converging on a more uniform interpretation of its detailed specification, reducing the relative frequency of non-standard implementation quirks when dealing with core B2B SSO scenarios." That being said, if your org is more B2C, maybe it makes sense you haven't prioritized this yet. You'll get there one day :)

          • tptacek 2 days ago

            "SAML is filled with footguns and various perils" is in fact why it's bad. You don't look at an archaic cryptosystem full of design flaws and go "skills issue". The "skills issue" would be using it at all. Sorry, SAML is dead.

            • x0x0 21 hours ago

              But also, almost every vendor's implementation is subtly different!

0xdeadbeefbabe 2 days ago

Huh, so I'm stupid I guess, but how is MFA phish-proof? And why did Kurt's commit access get revoked?

  • tptacek 2 days ago

    The commit access thing is a joke. I think it's a joke. It's mostly a joke.

    MFA is not in general phish-resistant. But Passkeys, U2F, and FIDO2 generally are, because they mutually authenticate; they're not just "one time passwords" you type into a field, but rather a cryptographic protocol running between you and the site.
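
    A heavily simplified sketch of that mutual authentication, using only the Python standard library (the names are made up, HMAC stands in for the authenticator's public-key signature, and real WebAuthn adds server-stored challenges, RP ID hashes, counters, and attestation). The point it illustrates: the browser, not the user, reports the origin, the origin is covered by the signature, and an assertion produced on a lookalike site is rejected by the real one.

        import hashlib, hmac, json, os

        CREDENTIAL_KEY = os.urandom(32)       # lives inside the authenticator
        EXPECTED_ORIGIN = "https://x.com"     # what the relying party registered

        def authenticator_assert(origin_seen_by_browser: str, challenge: bytes):
            # The browser supplies the origin; the user never types it and can't spoof it.
            client_data = json.dumps({
                "type": "webauthn.get",
                "challenge": challenge.hex(),
                "origin": origin_seen_by_browser,
            }).encode()
            sig = hmac.new(CREDENTIAL_KEY, hashlib.sha256(client_data).digest(), "sha256").digest()
            return client_data, sig

        def relying_party_verify(client_data: bytes, sig: bytes, challenge: bytes) -> bool:
            parsed = json.loads(client_data)
            if parsed["origin"] != EXPECTED_ORIGIN or parsed["challenge"] != challenge.hex():
                return False   # signed for another site, or a stale challenge
            expected = hmac.new(CREDENTIAL_KEY, hashlib.sha256(client_data).digest(), "sha256").digest()
            return hmac.compare_digest(sig, expected)

        challenge = os.urandom(16)
        print(relying_party_verify(*authenticator_assert("https://x.com", challenge), challenge))          # True
        print(relying_party_verify(*authenticator_assert("https://x-helpdesk.com", challenge), challenge)) # False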

theturtle 3 days ago

[flagged]

  • tomhow 3 days ago

    We've banned this account.

nofriend 3 days ago

> But if we’d actually done an ICO, you’d have lost all your money anyways.

tru tru

paxys 3 days ago

> This is, in fact, how all of our infrastructure is secured at Fly.io; specifically, we get everything behind an IdP (in our case: Google’s) and have it require phishing-proof MFA.

Every system is only as secure as its weakest link. If the company's CEO is idiotic enough to pull credentials from 1Password and manually copy-paste them into a random website whose domain does not match the service that issued them, what's to say they won't do the same for an MFA token?

  • roblabla 3 days ago

    They literally explain in the article that they're using FIDO MFA, which is phishing-proof because the key authenticates the website (it's not your run-of-the-mill SMS 2FA; it's WebAuthn talking to your authenticator).

    With this setup, you can't fuck up.

  • akerl_ 3 days ago

    FIDO2 won’t send an authentication to a fake site, no matter what the human does.

    That’s what makes it phishing-resistant.

  • parliament32 2 days ago

    Passkeys are called "phishing-resistant" because (when properly implemented) it's impossible for users to fuck up. They literally cannot be phished into giving an adversary their credentials, no matter what they click or what they do.

  • tptacek 3 days ago

    The. whole. point. of. phishing-resistant. MFA. is. that. you. can't. do. the. same. thing.
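
    For contrast, here is roughly what a server checks for a TOTP code (a minimal RFC 6238 sketch, stdlib only, with an example secret). Nothing in it records where the user typed the code, so a phishing page can simply relay it to the real site within the validity window; closing that gap is exactly what the origin binding in WebAuthn/FIDO2 does.

        import base64, hmac, struct, time

        def totp(secret_b32: str, at: float, step: int = 30, digits: int = 6) -> str:
            # RFC 6238: HMAC-SHA1 over the 30-second counter, dynamically truncated.
            key = base64.b32decode(secret_b32, casefold=True)
            mac = hmac.new(key, struct.pack(">Q", int(at // step)), "sha1").digest()
            offset = mac[-1] & 0x0F
            code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
            return str(code).zfill(digits)

        def server_accepts(secret_b32: str, submitted: str) -> bool:
            # The server only compares digits against the shared secret; it has no idea
            # whether they were typed into the real login page or a lookalike that relayed them.
            now = time.time()
            return any(hmac.compare_digest(submitted, totp(secret_b32, now + drift))
                       for drift in (-30, 0, 30))

        SECRET = "JBSWY3DPEHPK3PXP"  # example base32 secret
        print(server_accepts(SECRET, totp(SECRET, time.time())))  # True, no matter where it was phished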

lijok 2 days ago

This makes fly.io seem like an unserious business. I was under the impression they were trying to build something of substance.