
I think it's worth repeating: at this point, MFA that is not based on Webauthn (https://webauthn.guide/#about-webauthn) should be considered dangerously insecure. Uber almost certainly enforces MFA for remote access; I strongly suspect we'll end up hearing that it was successfully provided during the authentication step (update: screenshots on Twitter appear to confirm this). As we saw in the case of the 0ktapus campaign, a sufficiently-skilled attacker will simply proxy the MFA calls to the real identity provider in real-time, the user none the wiser.

Webauthn, however, binds the credential to the site's web origin and requires HTTPS. If a user gets phished, they cannot be compromised: the phisher's origin will not match, and any Webauthn authentication challenge will fail.
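The phishing resistance comes from a check the relying party's server performs on the `clientDataJSON` the browser returns: the browser itself records the origin, so a proxy running on a look-alike domain can't forge it. A minimal sketch in Python (the origin and challenge values are hypothetical, and real verification also checks the signature and authenticator data):

```python
import json

# Hypothetical relying-party origin; a phishing proxy on another
# domain cannot make the victim's browser emit this value.
EXPECTED_ORIGIN = "https://corp.example.com"

def verify_client_data(client_data_json: bytes, expected_challenge: str) -> dict:
    """Check the type, origin, and challenge fields of WebAuthn clientDataJSON.

    The browser, not the page's JavaScript, fills in `origin`, so a
    credential relayed through a phishing site fails this check server-side.
    """
    data = json.loads(client_data_json)
    if data.get("type") not in ("webauthn.create", "webauthn.get"):
        raise ValueError("unexpected ceremony type")
    if data.get("origin") != EXPECTED_ORIGIN:
        raise ValueError("origin mismatch - likely phishing proxy")
    if data.get("challenge") != expected_challenge:
        raise ValueError("challenge mismatch - possible replay")
    return data

# A legitimate response passes; one relayed from another origin raises.
good = json.dumps({"type": "webauthn.get",
                   "origin": "https://corp.example.com",
                   "challenge": "abc123"}).encode()
verify_client_data(good, "abc123")
```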

So if your workplace is letting you authenticate with SMS codes, push notifications to an app, or 6-digit codes generated by an authenticator app/hardware device, you need to start banging on pots and pans up your reporting chain to get your security team the support they need to make Webauthn + FIDO2 hardware tokens or Webauthn + Mac Touch ID happen.



> you need to start banging on pots and pans up your reporting chain to get your security team

Not sure how Webauthn works, but as long as it can be stored in the cloud, I'm fine.

The industry is moving towards physical 2FA schemes that assume the living situation of whoever designed them, without alternatives for people who don't fit that ideal. Physical stuff gets lost or stolen. I'm optimizing for the case where I'm traveling on the other side of the world, 6 time zones from home, and my phone / stuff gets lost.

2FA is already unmanageable at this point: "just use your recovery keys" is what people tell you, but that's NOT a viable solution to the problem. Sorry, but my recovery codes are in a locked safe 10,000 km away from me, I just lost the purse with my phone, or my device broke, or got stolen, or whatever, and I need the damn TOTP code to telework _right now_.


Your bus factor is the problem here, not your 2FA system.

"I absolutely need full access to systems while working on a random device using only my password" means that phishing gives all the keys to the kingdom and that you don't have systems in place to make sure things work when people go on vacation.


> ...or whatever, and need the damn TOTP code to telework _right now_.

Do you? Really?

"Your lack of planning is not my emergency"

Unless you're the founder+owner, I'd expect that tech support at your company wouldn't expedite your access request just because you feel entitled to it.

Will a million-dollar sales call fail and/or have to be rescheduled because you didn't have 2FA access? You should accept the responsibility, apologise to whoever it is that you let down, and move on.

Of course, companies should give us all the tools we need to succeed, not cheap out on their budget and then shift the blame onto us. That also means giving you multiple devices and tokens, to ensure that you have redundancy (if a device fails you have a backup; if you lose a token you have a backup).

Even then, it can happen that right after setting off on a trip you realize that you left your security token at home and/or misplaced it. The professional thing to do is to communicate it to the company as soon as you discover it (so they can arrange to verify your identity and/or ship something to you), not ignore the problem until you're back at work and urgently need to attend some meeting.

Travelling across the world, 6 time zones away, is not something you can do with every job. If your company allows it, that's a perk, but you should also treat it with the due care it requires.

Missing a day of work is small stuff compared to the risk the whole company runs by allowing their employees' auth to be phished. If you have to skip a day (or more!) of work on extremely short notice, it might be an unpleasant conversation with your manager, but it's a conversation you should have nonetheless. (By the way, do you have your manager's phone number on your personal phone? If you lose your work devices, it's important to still have a way to reach them.)

Security tokens are cheap, just make sure that you have N+1 (one for each device you need, plus one)


> Will a million dollar sales call fail and/or have to be rescheduled because you didn't have 2FA access? You should accept the responsibility, apologise with whoever it is that you let down and move on.

Spoken as someone who has clearly never had any tech duties in the financial sector.

You don't understand what time critical means until a dealer's access stops working / computer freezes 10 minutes before market close. ;-)

That could easily cost millions. That could easily lose the company the entire account.

And god help you if the market moves overnight and you were unable to get the trade on ...

A grovelling apology to the client might help avoid a complaint to the regulator, but you're unlikely to keep their business.

So yes, there will always be a genuine need for IT to be able to bypass a user's 2FA, because it's certain that user won't be able to wait until you send them a new Yubikey in the post.

And yes, financial companies are also well aware of phishing / SE and take appropriate steps to ID the user.


The answer there, clearly, is to not have an individual be a potential SPOF. If failure of that kind of support costs millions of dollars, you absolutely need to have the ‘walked in front of a bus’ scenarios worked out.


> The answer there, clearly, is to not have an individual be a potential SPOF. If failure of that kind of support costs millions of dollars, you absolutely need to have the ‘walked in front of a bus’ scenarios worked out.

I'm not going to post details in public, but suffice to say, you are oversimplifying and don't understand the context.

Sticking with my example of dealers, let's just say people like dealers are not employed in great numbers in all but the largest financial organisations. Let's also say that there are certain events and certain times of day when the entire dealing desk is, shall we say, "busy and stressed out". There is little scope for a colleague to step in at those times, because everyone is frantically busy on the phones with their own workload.

In terms of 2FA, therefore, the "walked in front of a bus" scenario is to (after correct security protocol, which includes, but is not limited to, senior board-level management and compliance being told and approving) temporarily bypass 2FA for that dealer. Telling the dealer to pass his work to a colleague is just not going to work.

Of course financial organisations have "walked in front of a bus" plans. But they equally have levels of escalation within those plans. Sometimes handling things at a lower level with the help of the IT department is more than sufficient.

I'm not going to elaborate further.


> Sticking with my example of dealers, let's just say people like dealers are not employed in great numbers in all but the largest financial organisation. Let's also say that there are certain events and certain times of day when the entire dealing desk is, shall we say, "busy and stressed out". There is little scope for a colleague to step in at those times, because everyone is franticly busy on the phones with their own workload.

That just sounds like optimizing for efficiency over redundancy, which is a trade off you can make, but not one that is required. Financial organizations could hire more dealers so you don’t have “little scope” for others to help out. Or they could staff an IT group that is open 24/7 ready to help these traders instantly.


The options you seem to be proposing over bypassing MFA are:

- hire more dealers ($$$$$$$$$)
- staff an IT group that is open 24/7 ($$$$$$$$$)
- bypass MFA ($)

Not sure if you're seriously suggesting the first two options are comparable in cost to the third for a business.


That’s what I mean by optimizing for efficiency. They’d rather not spend the money to operate in a way that allows for them to be secure or redundant.

Honestly, if they are going to just skip MFA every time it's a bother, they might as well not use it at all.


I see what you mean, appreciate the clarification


Unfortunately, some places' idea of having this problem "worked out" is to react by making the SPOF's life miserable with punishment or firing. And the bus scenario is "covered" by having the scapegoat be dead. Not a good strategy for the business, of course, but it's definitely the reality at some places. Actually having the SPOF scenarios prevented would be a much more mature approach.


This does not even apply to Uber in any way.

There is no job at Uber that could not be done by a colleague who has access. There is no situation where a 10-minute delay at Uber loses you 100 million dollars.


> "Your lack of planning is not my emergency"

> Unless you're the founder+owner, I'd expect that tech support at your company wouldn't expedite your access request just because you feel entitled to it.

You think corporate IT support doesn't help out users who've forgotten their credentials?

I can assure you, resetting forgotten passwords is probably one of the most frequent things first-tier IT support does. And sorting it out synchronously while they're on the phone is normal - it's not like you can do it asynchronously when they're locked out of all the async messaging systems.

(Of course, the bypass might take an inconvenient form - like calling you back on the phone number HR has on file for you, or a three-way video call where your boss vouches for you.)


I'm sure it can be frustrating: employees who don't seem sufficiently diligent, and who then expect IT to drop everything to fix problems the employee caused.

Ideally, the IT department is empowered to work proactively on effective infosec, for all of the company's real-world situations.

Then the standard of responsibility for each non-IT employee is only good-faith compliance with what the IT dept. told them -- not being an IT expert who can reason about infosec tactics and strategy.


I think the tradeoff between "the entire company is breached" and "I lost my device while on vacation and I have a tight deadline" is probably best geared toward preventing the former rather than the latter.

(Webauthn by design requires physical hardware tokens, not cloud storage.)


WebAuthn does not mandate any kind of form factor[1]: external tokens use CTAP over USB/Bluetooth/NFC, while Apple FaceID/TouchID and Windows Hello use proprietary interfaces to the built-in hardware. Blink-based browsers ship with a virtual authenticator for debugging[2], and there are a few more[3].

Apple and Google already announced cloud syncing earlier this year, using "passkey" as a friendlier term for end users. QR codes already allow for cross-ecosystem non-synced use cases, like using my personal Android phone to log in to an account on my work Macbook. https://securitycryptographywhatever.buzzsprout.com/1822302/... is a good listen to catch up on the latest developments.

[1]: https://www.w3.org/TR/webauthn-2/#authenticator-model [2]: https://developer.chrome.com/docs/devtools/webauthn/ [3]: https://github.com/herrjemand/awesome-webauthn#software-auth...


You are correct, and I should have said "Webauthn is designed to rely on something you have" rather than saying "physical tokens," since the latter is confusing and could be taken to imply a form factor.

If you lose the things you have while on vacation, though, it will be inconvenient (which is what the OP seemed to be against, and what I meant to be responding to). I think for a corporate environment that inconvenience is a reasonable tradeoff.


> (Webauthn by design requires physical hardware tokens, not cloud storage.)

That's not true. As an obvious reductio ad absurdum, you could just build a fake USB driver that presents as a security key. But more practically, I'm pretty sure that on iOS the Webauthn secrets are synced cross-device via iCloud.


Not true. Apple Passkeys is cloud based; syncs through iCloud and is webauthn.

Nothing about the spec mandates physical hardware tokens. Software tokens work fine too. https://developer.apple.com/passkeys/


To start with, let's say you have your corp laptop and corp phone (both are access devices and both should have device bound certificates), and your yubikey for webauthn, and your corp photo-badge and your government issued photo identity (all three are authentication factors).

Let's say you are traveling and you lost one or both of your access devices. First immediate step is contact your security hotline and notify them that you lost the device(s). They should immediately remote-wipe/disable said devices.

If you are visiting another branch office of your company, local IT should be able to physically verify you with your government issued ID and your corp badge and issue you a temporary laptop/phone and get you basic access privileges.

If you need high-trust access, it should require more verification steps (your manager has to confirm you are not on vacation and that it is really you who is meeting with the IT shop etc), and more elapsed time (this sucks but it is important to slow things down anytime primary access devices are reissued due to loss/theft).

Obviously, if you are a super critical person and have become a single point of failure, that's bad.


> I just lost the purse with my phone, or my device broke, or got stolen, or whatever, and need the damn TOTP code to telework _right now_.

So, wait for a new authenticator device to be shipped to you. Like, if the work laptop broke, you'd presumably have to wait on a replacement for that, too...


The way I view these types of objections: I know one or two people who have lost their wallet more than a couple of times in their lives ("I need a new ID!"), and the rest of the people I have known have NEVER lost their wallet. Even people who have their wallet stolen often recover it later (thieves remove the valuable part and throw the rest away so they aren't caught with it). We probably shouldn't design our security procedures around the very, very small number of people who have lost their wallet a few times as an adult.


"User lost their token/forgot their password/lost access to their e-mail/lost their phone/changed their phone number/changed their mailing address" may only be 0.1% of users - but it's 100% of social engineering attacks :)


Yep, and the GP is saying that you should optimize your procedures to deal with the attacks, not with the honest errors. Exactly because the honest errors almost never happen (even when there are thousands of people in the organization).

Anyway, if a complaint about procedures reads exactly the same when you replace the cause with "gets sick and spends a day in the hospital", then it's not a valid complaint.


I'm curious what problems physical tokens have that don't have easy workarounds. I have a Yubikey on my keychain with home/vehicle keys that supports USB and NFC (so it works with my phone).

I use that as a backup for work. For personal use, I just leave a cheap Feitian plugged into my desktop; for work, I leave a slim USB-C key plugged into my laptop all the time (if the laptop gets stolen, the thief still needs the first factor, the password).

In addition, emergency recovery codes just get copied to a note in my password manager. You don't need to lock recovery codes in a safe--they're only 1 of the two factors. You just need to make sure you don't store them in the same place as the other factor (or, if you do, that place is protected with multiple factors)


Just plug your recovery key into a backup server on a network you have physical access to and expose the SSH port as a Tor hidden service (optionally with onion client auth if you're paranoid). Either way, as long as you have backup keys that don't involve biometrics, you can set up your own infra for that as you see fit.

But yeah, the face whatevers really must not be the only option.


I just travelled eight time zones away and (apparently) lost my security key there. It caused pretty much zero issues, I just stopped by the office and picked up a new one to bootstrap off a spare key. Then I revoked the old one. If you don't have another key (you should really get one!) you can call in and they'll figure it out.


How did you drop into the office to get a new key? Is your office eight time zones away from where you live?


Nope, I was traveling on business to a place with another office. This is true of a significant portion of business travel, no?


Perhaps, but what does this have to do with the situation you are responding to, where the person WASN'T doing that?

I have literally never travelled to "another office" in my entire career. But then I've always worked for startups ...


> Not sure how Webauthn works, but as long as it can be stored in the cloud, I'm fine.

Apple is solving for this exact request with Apple Passkeys; it is just WebAuthn under the hood with cloud backup, multi-device, and sharing built in.


Truly secure access to computers is simply not possible under those conditions. A hardware root of trust used to verify your identity is mandatory.


It’s too bad the user experience across devices sucks. The best experience by far is a yubikey nano since it is mostly permanently attached to your laptop. It’s always there and you just quickly tap it. Love it.

Of course that doesn’t work with my iPhone. So I guess I need a second NFC yubikey that stays on my key chain in my pocket (which I don’t have since I don’t carry keys.). So then I have to remember to register both yubikeys. Then every time I have to login to GitHub or whatever on my phone I have to pull out my keychain (which I don’t have) and tap it on my phone.

I wonder when I can just get a virtual yubikey built into my phone. No extra device. My phone is my device. It kind of sounds like what Passkey is but I don’t want to pull out my phone to auth my laptop.

I really loved the idea and convenience trade off of SoftU2F. Too bad it’s dead now.


> I wonder when I can just get a virtual yubikey built into my phone. No extra device. My phone is my device.

Last year, Apple shipped the first version of this: you can enroll your phone (or TouchID-equipped Mac) on sites like GitHub.com and it’ll use the Secure Enclave for WebAuthn secrets. I’ve been doing this since 15.4 came out and it’s great. Prior to that, I used a Yubikey 5 with USB and NFC, which is still handy since that’s where I store TOTP seeds for less secure sites.

Passkeys extend that idea further by allowing you to register once and have it synced rather than having to register every device on every site[1].

That last part is important because AWS has a huge barrier: the number of MFA devices you get is one, which means you either need insecure things like synced TOTP seeds or you have to be comfortable never losing your Yubikey. I have been asking our TAM to prioritize fixing that for years so backups can be a real thing.

1. Over simplified a little - hear Adam Langley at https://securitycryptographywhatever.buzzsprout.com/1822302/... for the right version


> That last part is important because AWS has a huge barrier: the number of MFA devices you get is one, which means you either need insecure things like synced TOTP seeds or you have to be comfortable never losing your Yubikey. I have been asking our TAM to prioritize fixing that for years so backups can be a real thing.

Actually, can we mark AWS as insecure? Seriously, it was a bother at the time it was rolled out, but at this point I'd say Amazon is insecure. If they couldn't be bothered with this, there are definitely more security holes intentionally hidden from the general public.


No – the problem here is really the reverse: they limit the number of devices which can gate access to your account, on the theory that you'll handle a breach by contacting them. That's theoretically more secure but slower.

(Non-root MFA can be reset by another admin or root so this is most of a concern for the root account)


> That's theoretically more secure

I fail to see how that can be true. How will they validate that it is you on the phone?


If you haven't dealt with this before, it's not calling support and social-engineering someone into hitting the reset button. The last time I knew someone who had to reset an AWS root account password (broken Yubikey), it required multiple phone calls AWS initiated to the billing & technical contacts (which can only be set by root so an attacker can't easily change them) to confirm intention and then they had to sign a form in front of a notary who checked photo ID.

I classed that as more secure since you can't do it remotely and the in-person portion further increases the difficulty.


You do not have to even talk to AWS to remove the MFA from the root account. You simply need access to the phone number on the account (though there are ways around the phone number, see below) and the email address for the root account.

It's been a little over a year since I've done it, but as I recall this is how it goes. You receive an email with a link that takes you to a site that starts a verification process via the phone. You get a number from the site that you are prompted to enter when they call you on the phone. Once that's done you can log into the account without the MFA device and then even remove the MFA device entirely.

The email address I believe can only be changed by AWS (and at least the last time this was an issue for me can't ever be reused for a new AWS account).

The phone number can be changed by anyone with aws-portal:ModifyAccount, which probably means someone with admin access. It is NOT restricted to being modified by the root account.

So if you have working access to an account with that permission and access to the email, you can change the phone number to one you have access to and go through the whole process. Meaning, if you have the above permission, you really only need access to the email.

Link to the documentation for this flow: https://aws.amazon.com/blogs/security/reset-your-aws-root-ac...


Ok, that's not trivial to hack, but it's in no way more secure than accepting a few more backup tokens.

Both email and phone numbers have widely known and exploited vulnerabilities that won't ever be fixed (worse if the phone part is only SMS). Requiring both at the same time is OKish, but not any exemplary security.


For what it's worth the phone portion is a voice call where you have to enter a number with touchtone.


It's possible that even though we are not using GovCloud they had additional precautions enabled for us (this was a few years back). My coworker vividly remembers having to wait for the notary to show up.


Slowing things down is the right approach when resetting/reissuing/rebinding auth devices.


When removing MFA, yes. What I'd like would be changing n=1 to n=2 so you could have a backup against a single failure.


Regarding AWS, it's truly insane that they only support 1 device (breaking from the FIDO2 recommendation) but you can also put AWS behind SSO, and then have 2FA on the SSO.


> it's truly insane that they only support 1 device

I recently got a notification to create a separate AWS account (I don’t use it), and thought might as well enable 2FA. Added the first key. Looked for a way to add the backup key. Was confused, removed 2FA. What is that, lockout Russian roulette?


This predates FIDO2 by at least a decade. SSO works great for everything except the root account, where you really do just need to lock it up tightly and make sure you know how to authenticate to support in a disaster.


Recently my phone broke and it took a day to get a new one. I was so glad to have my 2fa codes somewhere else as well. I'd never want to rely solely on one device for access to everything.

Apple's plan is that I need to own several Apple devices for that; this is a non-starter for me.


Your backup plan currently can be hardware devices (a $20 Yubikey works great) or printed codes. When vendors other than Apple ship passkeys, that should extend to include other devices.


The backup option can be a Yubikey rather than another Apple device.


To add, Chrome currently supports WebAuthn using your phone via BLE. When you try to enroll/sign in, if you click 'add an Android device', that QR code will also work in the iOS camera app and allow you to use iCloud to store & log in with that security key. The only real requirements here are browser support and a desktop with Bluetooth, something not super common on gaming / custom-built PCs until a few years ago.


Apple did something similar: you can login using your phone’s WebAuthn credentials if you’re in BLE range. The main problem is that it’s Safari-only but the passkey spec should allow Firefox to implement it.


TouchID unfortunately does not work with Firefox, making it nonviable for a large rollout.

Yubikeys have the advantage of working with all browsers


Corporate environments usually have no problem mandating a specific browser be used.


Right, that's how we got IE6 :)


How do you recover if you lose access to your device?


Currently: use a hardware key or printed backup codes.

Passkeys: same, or one of the other participating devices.


Backup key on your keychain.


Do you mean a physical key?


I can't speak for them but that's exactly what I do with a Yubikey on my personal keys & work badgecard lanyard. My rationale is that the physical key prevents remote attacks since they can't use the token and anyone who finds/steals my keys won't have the password. Scenarios where the attacker has both aren't on my personal radar — that's the kind of situation where you're looking at things like duress codes.


create a second account or use the root account?


If you have a Linux PC with a TPM, you can use https://github.com/psanford/tpm-fido to create and "plug in" a virtual USB WebAuthn key whose secret is irretrievably stored in the machine's TPM. This effectively asserts that your specific machine is being used to enter a given site. However, it's important to remember it doesn't necessarily verify that *you're* present, or even if *anyone* is present at all, since the presence check is done via a software dialog and can be pwned along with the rest of the system.


>The best experience by far is a yubikey nano since it is mostly permanently attached to your laptop.

Unfortunately ports are increasingly at a premium. You basically need to "mostly permanently" block the use of one of your USB ports for this to work. It's OK with my old MacBook Pro which has a USB port I mostly don't need to use for other purposes but thinner/lighter laptops don't have a lot of ports to spare.


Android has a built-in FIDO2/webauthn authenticator these days (well, built-in to Chrome, and by Android I mean Pixel phones). I'm sure Apple will build something similar as they have the hardware for it.


They called them Passkeys. It's FIDO2 with resident keys only AFAIK though https://developer.apple.com/passkeys/


Here’s Apple’s documentation from 2020: https://webkit.org/blog/11312/meet-face-id-and-touch-id-for-...


> Of course that doesn’t work with my iPhone. So I guess I need a second NFC yubikey that stays on my key chain in my pocket (which I don’t have since I don’t carry keys.). So then I have to remember to register both yubikeys. Then every time I have to login to GitHub or whatever on my phone I have to pull out my keychain (which I don’t have) and tap it on my phone.

That's why I use one of these rather than a yubikey: https://www.ftsafe.com/products/FIDO/NFC . I don't know why yubikey doesn't make a dual-method key when it's such an obviously nicer way to do things.


YubiKey does make a dual key. The problem is that these keys suck to carry because you can't leave them in your laptop and carry it around like that, which makes them easy to forget.

I personally was always forgetting my key when I worked in the office. Or, more likely, I'd have the key and forget a USB-A to USB-C adapter (work was cheap and wouldn't give out USB-C keys).

My new job gives out USB-C keys and I have yet to forget mine when I needed it. They just don't work as NFC on a phone.

https://www.yubico.com/product/yubikey-5ci/


Regarding the keychain issue, what works for me is using a wallet with a side pocket, that way the yubikey doesn't fall out... (can't remember the name for such an item; clutch wallet?).


And some sites like AWS management console don’t allow you to register more than one key :/


So, where does it end?

Cybersecurity has been playing those silly games of "increasing security" and some 80% of recommendations were frankly BS

"longer passwords" yes. "password has to have symbols, numbers and be rotated every 3 mo" no

2FA yes, but not SMS, but not OTP because people get phished, blah blah blah

Not to forget the "put everything in a password manager" then you lose or forget your "extra safe random password" and are SOL

Meanwhile there are still incompetent people around that think asking for Mother's Maiden Name should be a security question

So where does it end?


> So where does it end?

It doesn't (can't) ever end because security is a process, not something you achieve and are done.

With ~infinite budget, one could achieve perfect security (but only for a clearly scoped threat model) for an instant, but both the infrastructure and the attackers move on, things constantly change, so it's not perfect anymore. And of course infinite budgets don't exist.


I don’t think it ever truly ends, so long as there are secrets and people who want to uncover them. This is the classic arms race, and subsequently people who can’t keep up are just casualties…


It ends when you have an internet that has consequences. To get proper consequences, you need all users on the network to be identifiable and every device linked to that identity. With IPv6, you can give each person a (few?) static IPv6 address(es). This will basically allow you to determine where and who originated every piece of information on the internet. Next to every comment/photo or other upload, your real name/surname and facial photo need to show.

For each country you travel to, you get an IP address and you are bound to all laws of that country. Your IP address basically becomes your online passport. Anonymity is a source of massive amounts of nastiness online; there are things that people say/do online that they would never do in real life, where there are social consequences. The same rules can be applied (or even stricter ones) to businesses or any server; that way you know who your attackers are. It should also be easier to block out whole countries from connecting to you, if you want that (not enforced).

Obviously, no children should be allowed online even in a clean / locked down version of the internet. Ideally no ads/marketing should ever target children either.

I have the same impulse as most people, that locking the internet down is a bad idea, but it seems like that is the only natural outcome eventually. We either stop abusing the internet and keep some form of freedom of speech / freedom (some countries value this more than others), or eventually the internet becomes heavily regulated/controlled, which is the path we are currently on. Companies are already testing the waters with this; in some cases they are already committed to it (see self-hosting email vs big providers blocking small senders).

The problem of security is just an arms race towards the above, it is just a side effect because it is tolerated / no real consequences for misuse of the internet in most countries (that includes leaking data, being breached and data stolen, or just plain ol phishing - companies face zero consequences for data breaches).

Another way to think of it, you have a scale: on one side you have ultimate control, ultimate safety, no freedom - on the other side you have less control, less safety, but absolute freedom. The security arms race is just the shifting of balance between the two extremes.


Smart cards have long been mostly phishing proof. WebAuthN is essentially a more convenient interface to the same technology.


Then your phishing involves getting them to install some type of remote access on their machines, or to hand over some information you need.


Don’t worry, we’ll cover this in annual mandatory security training.


I sure hope WebAuthn is easier to implement than that site makes it look. I know nothing about how it works, but the sample code (under "Example: Parsing the authenticator data") that requires parsing slices of bytes out of the response and then constructing objects with magic numbers looks really hacky. Maybe it is supposed to be exposed at a low level like that so wrappers can be made around it, but if there is any hope of migrating sites over to it, it's going to need to be dead simple to implement without screwing up.
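For what it's worth, the slicing in that demo isn't arbitrary: the layout of the authenticator data is fixed by the WebAuthn spec (32-byte RP ID hash, one flags byte, 4-byte big-endian signature counter). A minimal Python sketch of parsing that fixed prefix, with field names of my own choosing:

```python
import struct

def parse_authenticator_data(auth_data: bytes) -> dict:
    """Parse the fixed-length prefix of WebAuthn authenticator data.

    Layout per the WebAuthn spec:
      bytes 0-31  : SHA-256 hash of the RP ID (the domain)
      byte  32    : flags (bit 0 = user present, bit 2 = user verified,
                    bit 6 = attested credential data follows)
      bytes 33-36 : 32-bit big-endian signature counter
    """
    if len(auth_data) < 37:
        raise ValueError("authenticator data too short")
    flags = auth_data[32]
    (sign_count,) = struct.unpack(">I", auth_data[33:37])
    return {
        "rp_id_hash": auth_data[:32],
        "user_present": bool(flags & 0x01),
        "user_verified": bool(flags & 0x04),
        "attested_cred_data": bool(flags & 0x40),
        "sign_count": sign_count,
    }

# A synthetic 37-byte blob: zeroed RP ID hash, "user present" flag, counter 7
demo = bytes(32) + bytes([0x01]) + struct.pack(">I", 7)
parsed = parse_authenticator_data(demo)
print(parsed["user_present"], parsed["sign_count"])  # True 7
```

So the magic numbers are really just offsets into a spec-defined binary structure; in practice a server-side library does this for you.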


For the most part, none of that matters client side. The web client side is mainly just a passthrough to your backend, which does the actual processing of those binary blobs; the slicing is only there as example code.


I see, if all that is server side then it makes a lot more sense as I assume backend libraries will get created to handle this for various languages. Bookmarking it to check it out later when I have time to read it all, hopefully it isn't as daunting as it looks.



How would one slice a byte array without magic numbers? Or without constants representing said magic numbers?


By having an API that abstracts the details away, like any other browser API. I would expect something like attestationObject.getPublicKey() versus whatever is going on in that demo.

I believe it was some part of OAuth or SAML (again, not an expert here) where developers were making a common mistake by not verifying everything in the spec, leading to an easy bypass if you knew how it worked. Having devs implement a complex spec relating to authentication is a recipe for disaster.


It's good to have, but in the real world it wouldn't have stopped a determined attacker. They could have social engineered them into running code on their PC.


So an entire class of attacks would have been removed and the attacker would have moved to another class of attacks.

As for running code in the environment there are many, many ways to deal with that. Obviously it's an easier environment to audit, but it's also much easier to control.


Yes, I don't disagree with anything you said. I am not saying MFA wouldn't have at least slowed down the threat actor, but the focus here should be how easy lateral movement was. Like you said, there are many ways to get in. If the network share was treated the same as internet-facing stuff, that points to a deeper issue many orgs face, but I am surprised that a fairly new org like Uber is not doing that already.


> So if your workplace is letting you authenticate with SMS codes

There's an old saying in photography, "the best camera is the one you have with you".

IMHO its very much the same thing with 2FA.

Any 2FA is better than no 2FA.

Sure some 2FA options are more secure than others, but by the same token, there's also a scary number of websites out there that have zero 2FA options. Others make it inordinately difficult to find (e.g. I'm looking at you Slack ... finding where to turn on 2FA in Slack is a nightmare).

Ironically there's no 2FA option for HN either. ;-)


That's like saying MD5 is fine for hashing passwords, because it's better than plaintext.


> That's like saying MD5 is fine for hashing passwords, because it's better than plaintext.

No, I'm saying we need to come down to planet earth and recognise we live in the real world. Hence SMS or TOTP is preferable to nothing at all.

It's a bit like the hardcore open-source types who can't see the wood for the trees and cannot fathom why anyone would possibly want to use anything else other than Linux and fully open-source alternatives to Microsoft Office or Photoshop. Sometimes you have to compromise.


> That's like saying MD5 is fine for hashing passwords, because it's better than plaintext.

If for whatever reason you can't have anything else, MD5 is obviously better than plaintext. Not fine, but better.

With passwords you don't have external dependencies but with MFA, you do. Things are more complicated and real life is messy.


They're saying perfect shouldn't be the enemy of good enough. Completely valid.


> Ironically there's no 2FA option for HN either. ;-)

You can't even delete comments or your account on Hacker News, so it's not like it takes privacy or security seriously.


Any 2FA is better than no 2FA

That's simply false because of the poor customer service of the providers and fates of many phones.


How is that false? Name a single example where SMS 2FA is worse than none. And just because it will always come up: 2FA, not treating the second factor as the only factor.


> Name a single example where SMS 2FA is worse than none.

SMS is terrible because it is so easy to lose account access.

Phone broken/stolen? Completely locked out.

Or, I have this one financial institution that insists on sending SMS 2FA to the phone number on file, which is a 20+ year old landline which obviously can't receive SMS. Completely locked out. Someday I'll have to find out some way to get my money out of there (they have no local branches).

I will always use TOTP if at all possible, because it's not a single point of failure. I store the seed values securely and they are backed up, so can't be lost.
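That portability is a direct consequence of how TOTP works: the code is just an HMAC over a 30-second time counter (RFC 6238), so the shared seed is the only secret, and anything holding a copy of the seed can generate valid codes. A minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter."""
    t = time.time() if for_time is None else for_time
    counter = int(t // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII seed "12345678901234567890",
# time 59 seconds, 8 digits -> "94287082"
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

Since the whole scheme reduces to that seed, backing up the seed values (as the parent describes) really does eliminate the single point of failure that a SIM or a single phone represents.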


That is actually a good point. Hadn’t thought of that.

I hate TOTP, can handle SMS 2FA (sim-swapping is super rare here) and love FIDO/U2F/Webauthn (or whatever it’s called today). I have one with NFC on my keychain, and a backup device in the drawer. No off-site backup key, but encrypted backup codes.


when your sim gets hijacked and someone steals your entire bitcoin wallet?

worse than none because it "justifies" being sloppy with the first factor (i.e. account password).


Okay, I guess if you stretch that hard you can reach your goal.

edit: Your first sentence is meaningless because that is just as stolen with no 2FA.


> Any 2FA is better than no 2FA.

False.

SMS 2FA is significantly, categorically, undeniably worse than no 2FA at all.

If your SIM card is hijacked, most websites/companies will quite happily let the impostor click a "Forgot password" link and get a SMS code to verify their identity, which will allow them into the account to take/change whatever other details they want at that time.


That's a poor password reset process, not SMS 2FA. You can do SMS 2FA without having that terrible reset process, you can have a terrible SMS reset process without SMS 2FA. They're two different concepts.


> most websites/companies will quite happily let the impostor click a "Forgot password" link and get a SMS code to verify their identity

That's not 2FA. There is one single factor there, the SMS code.

SMS 2FA does not require you to have a 1FA backdoor, so you can't claim the latter is an inherent fault of the former.

For example, pairing "enter the SMS code" with "click the link we sent to your backup e-mail address" gets you a two-factor password recovery process.

That isn't the best or only method of 2FA password resets, it just comes to mind first because it's the last one I used and it is sufficient to prevent access via SIM hijacking alone.


You’re right and fair enough.

I do feel though that SMS porting is such a lax system that using it as an authentication factor leads you into a lot of (SMS && social-engineering) situations that would be more preventable if SMS was not involved.

I say this fully realising that in this scenario the party allowing these attacks to work, due to poor understanding or lack of proper checks, is the real problem.


100%. The common thread in all of these recent attacks (Uber, Twilio, Okta, etc) is the “phishability” of the authentication methods involved -- as you mention, the unphishability of WebAuthn is what makes it particularly compelling.

What’s head-scratching to me is why tech-forward enterprises haven’t been faster to adopt unphishable forms of authentication like WebAuthn. I’m biased as I run an identity and access management company (stytch.com), but I hope more companies will consider integrating WebAuthn to support unphishable MFA.

WebAuthn introduces some nuances that can discourage a B2C company from supporting it today (e.g. account recovery with lost devices), but it's a clear win for corporate network/workplace authentication and B2B apps. I believe some of the lack of adoption is due to the complexity of building it (more complex than traditional MFA) and the cost of off-the-shelf solutions (incumbents like Auth0/Okta require ~$30k annual commitments to let developers use WebAuthn). If developers decide to build with Stytch, WebAuthn is included in our pay-as-you-go plan and can be integrated in an afternoon (https://stytch.com/products/webauthn)


I’ve not read up about webauthn yet. How does it work & what makes it unphishable?


Here's a bit more background on WebAuthn: https://stytch.com/blog/an-introduction-to-webauthn/

What makes it unphishable is that the authentication is not based upon something that a user can be deceived into sharing with an attacker. Passwords and one-time passcodes (OTPs) can both be remotely acquired from users when attackers convince users to share these text-based verifications with them.

Because WebAuthn validates possession of a primary device that was previously enrolled (either the computer/phone the user is leveraging for the biometric check or the user's YubiKey), it's device-bound and cannot be phished.


How do you proxy MFA unless you're using a third party service for authentication? Plenty of password apps can bind to specific URLs and ports to support TOTP. In what ways do you think it is more secure if an authentication provider gets hacked? Then they could just as likely proxy the hardware token handoff. I don't think hardware tokens are all that much better for someone who is more security conscious, but they are certainly great for people that have no clue what they are doing, or as just one step in an MFA process.


Actually, using a PW manager for TOTP is quite rare. People using a PW manager at all outside of the tech space is rare in my experience as well, outside of the built-in Chrome PW manager.

Their 2fa is most likely okta or Duo style, in that the default authentication method is via push notification.

> Then they could just as likely proxy the hardware token handoff.

You can't do that because the security token itself receives the "relying party" in the form of the domain name it's trying to present authentication for. Requesting "uber.com" when on "ubeer.com" won't work.
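On the server side this binding is checked explicitly: the clientDataJSON the browser produces (and which the authenticator's signature covers) includes the origin, filled in by the browser rather than the user. A simplified, hypothetical sketch of just that check (real verification also validates the challenge, the RP ID hash, and the signature; the origin value here is an assumption for illustration):

```python
import json

EXPECTED_ORIGIN = "https://uber.com"  # the relying party's registered origin

def check_client_data(client_data_json: bytes, expected_type="webauthn.get") -> bool:
    """Reject any assertion whose browser-supplied origin doesn't match.

    A phishing proxy served from ubeer.com cannot forge this: the browser
    writes its own origin into clientDataJSON before the authenticator signs.
    """
    data = json.loads(client_data_json)
    return (data.get("type") == expected_type
            and data.get("origin") == EXPECTED_ORIGIN)

legit = json.dumps({"type": "webauthn.get", "origin": "https://uber.com",
                    "challenge": "..."}).encode()
phish = json.dumps({"type": "webauthn.get", "origin": "https://ubeer.com",
                    "challenge": "..."}).encode()
print(check_client_data(legit), check_client_data(phish))  # True False
```

This is why real-time proxying, which works fine against TOTP and push approvals, fails against WebAuthn: the mismatched origin poisons the signed payload itself.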


You think that it is worth repeating that multifactor authentication not based on the latest unproven marketing hype technology, Webauthn, is dangerously insecure? You don't know what you're talking about.


> not based on the latest unproven marketing hype technology, Webauthn

WebAuthn is an ongoing project but the history goes back almost a decade to U2F, and the ongoing work has been carefully reviewed by a number of industry heavy-hitters. We know that it’s robust against phishing, too, which is why it’s so relevant to this conversation.

I’d also like to know more about your rationale for describing a system all of the major players have implemented as “unproven marketing hype”.


Maybe "unproven" was a poor choice of words. I'd be willing to go so far as to say that it is "proving" itself as bleeding edge technology. However, if measured by adoption and risk-taking, it is largely unproven.

The history may go back almost a decade, as experimental technologies driven by industry working groups tend to do, but that work does not extend beyond the theoretical. If Facebook and Google implemented WebAuthn, they're still not staking their reputations on it. If they did, we wouldn't be using password-based logins nor MFA. Instead, they're slowly testing the waters in the real world, waiting to see how hackers respond to it. Consequently, WebAuthn remains on the bleeding edge, in the very early part of the adoption curve as it proves itself.


> If Facebook and Google implemented WebAuthn, they're still not staking their reputations on it. If they did, we wouldn't be using password-based logins nor MFA.

I think the naming here is causing confusion. This thread started about MFA usage, which is a chain of functionality going back to U2F:

> at this point, MFA that is not based on Webauthn (https://webauthn.guide/#about-webauthn) should be considered dangerously insecure.

That is broadly adopted, and all of the companies you mentioned use it internally and are recommending it as the most secure form of MFA, based on both the strong phishing resistance and the ease-of-use improvements. I don't think the position I quoted is especially controversial in the security community, other than that people in enterprise environments acknowledge the challenge of retrofitting older applications and services.

WebAuthn also allows you to set up passwordless login flows, which rely on some newer features, such as attestations about how the token was unlocked (i.e. corporate IT probably wants to require biometrics, not just a Yubikey-style tap). That is definitely newer, but again, you're talking about something which Microsoft and Apple have already shipped.


Well, they have recently added a bunch of weirdness to the spec.

At one point U2F was simple - a USB token with a hardwired user presence button, providing a second factor alongside a username and password. Trivial to move between different computers and OSes. Secure even if the host OS can't be trusted. Physically unpluggable.

These days there's a mad variety of options. Options that are only secure if the host OS can be trusted. TPM-based options that are tied to a single laptop. Options like Windows Hello that are locked to a single OS vendor. Passwordless login, turning two factors into one. Copying credentials between devices, through the cloud. Sketchy low-cost biometric scanners.

For a security system, the latest versions sure are embracing a lot of complexity.


When you make a communications protocol open, people create all kinds of crazy endpoints for it.


I will grant you that the parent's opinion is a little strong, but fundamentally, they have a point. The weakness here is the human. Standards like this make social engineering attacks much more difficult.

With that said, MFA in some form is better than none. However, some implementations provide better security than others (of course).


I'm a security professional with a decade and a half of hands-on, real world experience. My most recent position being the product manager for Identity and Access Management for a leading B2B SaaS, dealing with real world attacks from extremely sophisticated threat actors. I assure you I know what I'm talking about, I've lived and breathed it every day for many years and have followed these standards since their initial drafts.

If you have knowledge on superior ways to protect users from MFA passthrough, please share it. I am always happy to learn about better ways of doing things. But contrarian bandwagoning without providing effective alternatives isn't helpful.


I would be shocked if they didn’t issue all employees YubiKeys.


A lot of people still have legacy Yubikeys floating around, and these are replayable. What you need now is something like the Google Titan FIDO2 key or one of the Yubikey FIDO2 keys. Transitioning an entire company to these, getting everyone to self-enroll, and then removing the ability to use all the less safe options across the employee base, contractors, etc is not cheap nor easy, and of course requires a massive amount of retraining. It's not as trivial as just buying them, sadly.


What retraining? You install the yubikey by plugging it in, registering it, and using it by tapping it as needed. What is complex?

There was literally no training involved for this during my time at ElGoog. One wiki page covered it adequately.


For someone who claims to have worked in big corporations and accuses others of not having worked in them, you sure are optimistic about the capabilities of the average user. Not every company is Google.

For starters, how do you register the Yubikey? On Okta, this is a multi-step process with at least one non-obvious step, and one easy way to screw it up.


The people who work at Google are not smarter than the people who work at Uber, or any other tech company. Getting people to understand 2FA might take a little bit of work but it's not hard.


WebAuthn (well, U2F, but that's essentially WebAuthn with a slightly different browser API and WebAuthn is backward compatible with U2F only devices) support has been on the YubiKey since the Neo in 2014 [0]. It's basically impossible to have a YubiKey that does not support WebAuthn. You do not need FIDO2 on-key resident credentials to benefit from WebAuthn.

[0] https://www.yubico.com/blog/neo-u2f-key-apps/


They did for a while but it was too expensive. Uber uses OneLogin, who I'm sure is also investigating. We had apps on our phones that received an "is this you trying to log in?" notification. You had to consciously hit "Yes" in order to continue the login flow. It wasn't offline 2FA like Authy or something. There _was_ a much higher standard of security there.

This was _social engineering_, something that even the finest MFA algorithms don't guard against.


In my experience, prepare to be shocked.

And even if they do, they likely have a “backup” for people who never seem to be able to use the Yubikey right.


[flagged]


This is false, a gross oversimplification. Every organization has complexities, it doesn't reduce to a common idiocy. Even when the net result is idiotic in hindsight.


[flagged]


It sounds like your experience has been at a small firm. At scale, 2FA and Yubikeys are a no-brainer with regard to risk vs reward/safety.

Do you think all security engineers or whatever you want to call them are total incompetent idiots? If yes, I can't help you. If no, then you don't need further explanation from me.

Security requires a complex balancing act, and in this case they got it wrong, end of story. As stated elsewhere in this thread, there are only those who've been breached and those who don't know they've been. End of story.


Corporate security teams are built on three different traditions:

1. The policy/compliance tradition - the kind of people who looked at PCI-DSS and decided that was what they wanted to devote their life to. The accountants of the tech world. You've got resources and want to roll out U2F? That's not in the policy document, we'd rather spend the resources on this great audit of our suppliers' compliance that I've been planning....

2. The be-lazy-be-popular tradition - for the team that hires a guy to reduce other people's workload, not increase it. Resources to roll out U2F? I won't stop you if you've got the money to spend, but what we've got is probably enough - a lot of companies don't use U2F, you know.

3. The hacker tradition - the kind of people who see every real, exploitable vulnerability they find as proof of their 1337 status. They don't care if a policy document says you should disable paste, that's bad advice, don't do it. Rolling out U2F sounds like a great idea - but a lot of corporate environments will chase these types out, or curb their enthusiasm by ignoring their reports.

Perhaps kirbys-memeteam worked in companies with security teams that tended more towards traditions 1 and 2, while your employer's security team had more of tradition 3?


[flagged]


The bigcorps don't make exceptions for Tiny Tony's. If you work at these sorts of firms, you should probably start an anonymous exposé blog, it would be enlightening for the rest of us. It would also probably help get things fixed so they could avoid further embarrassment before it becomes a real problem (like in this case).

I bet you could make a fair sum from the ad impressions alone, and feel good knowing you were acting as the force multiplier for positive change.

Edit: Your personal jabs aren't in the spirit of a collaborative or curious conversation. You've revealed yourself as just another 007 wannabe. Boring.


Explain why what is false? You're just making vague claims about anecdotal experiences. It seems unlikely, in general, that a security team would not support a Yubikey rollout. Even one that only cares about compliance would likely support it because it will make compliance easier - auditors care a lot about phishing and if you can say "OK, yeah, we had some users fail the phishing test again BUT our 2FA is phish-proof" that's an easier conversation.

I'm sure there are truly lazy and incompetent security teams out there, but it makes no sense that they would be the majority or even particularly prevalent. Maybe you're just unlucky and ran into one, or maybe there were real reasons why a Yubikey rollout wouldn't work:

a) Who's going to ship the keys? Yubico provides services for that, will you use those? Pay for them?

b) Who's paying for this? Did your infra team ask the security team to pay for it? Who's paying for replacements and support?

c) Is this a high priority for the team vs other issues?

d) Do all of your vendors support Yubikeys or are you going to have to have a hybrid solution? What will migration from vendors configured for some other MFA solution to Yubikeys look like?

I support a rollout at any company, for the record, but these vague statements with the conclusion of "security people don't care" leave a lot to be desired.


> Security exists just to check boxes at most firms

And then you get the bullshit solutions that the OP was complaining about.

Nobody serious ever claimed that SMS based MFA was secure. Large companies implemented it anyway, and pushed it onto unknowing developers nonetheless.


What? Security is the one domain I found where you can't just waltz in because you've heard of a computer. You need to do the work upfront with Sec+ or the like, it would take months for a newbie. Past that point, what more guarantee can you have? Even work experience can be meaningless if they weren't in the right team/role.


Security is a cost center, not a profit center. Most companies cut that investment to the bone, which means paying the bare minimum that lets them check boxes.

This is true for basically any non-tech company, and is true for like 75% of the tech companies.

> You need to do the work upfront with Sec+

Sec+ is part of the paper mill parent is referring to. A book of terms to memorize for 3 months and then call it good.


>Security is a cost center, not a profit center. Most companies cut that investment to the bone, which means paying the bare minimum that lets them check boxes.

This. Definitely.

But even at organizations with the budget, the knowledge and the infrastructure to do security right, no matter what the security folks think/want/suggest, UX and low friction matters. If a process is too onerous (and that varies from org to org and person to person), it will be rejected post haste -- as will you if you try force it.

This is especially true in the finance sector. Joe trader is too busy making bank to worry about all that security bullshit. "Just make it work! I don't have time for this. I banked seven-figure bonuses in three quarters this year and you're just some asshole! Get the fuck out of here, I'm busy!"

And that attitude often extends to management as well.

If you get away from the front-end and its users, the InfoSec guys are all over the back end like a cheap suit. Because their (not seven-figures, but not a kick in the teeth either) bonuses depend on making sure nothing bad happens.

Money (especially in the finance sector) is a powerful motivator, but it sometimes (more often than I'd like) creates incentives that thwart optimal security practices.


+100

Not only is it a cost center it’s also seen as a hindrance to the fast progress. Rarely will you come across an exec who takes security seriously. For them it’s just a checkbox at best and an obstacle at worst. I’m speaking about application security though. It’s possible that IT sec, physical security etc are taken more seriously.


Cuts both ways, of course. There are terrible IT Security departments that don't understand the concept of false positives, create approval flows for critically needed items with 2 week SLA turnarounds, topple the network with poorly designed endpoint security scanners and tons of useless telemetry and so on.


That is why government intervention is needed. Australia is proposing significant changes to its cyber security framework and legislation. https://www.homeaffairs.gov.au/reports-and-pubs/files/streng... Mandatory cyber security obligations backed by penalties and direct government intervention for critical national security companies. https://www.homeaffairs.gov.au/reports-and-pubs/files/exposu...


We kinda sorta already have laws designed to make software systems secure, but they aren't really followed in spirit. I suspect what's needed is an expansion of NIST-like bodies, more concrete specifications for what is and isn't allowed (e.g. like seatbelt regulations) and such.

Ultimately, it's going to be a cat and mouse game. A really determined hacker will find a way. But admin users with permissions over everything, passwords/encryption keys stored in plaintext, etc. - these are things that we can probably patch up really well, and we can force companies that can't afford to do that to (justifiably) go out of business.


Sec+ is laughably insufficient.


And even when you do get it setup you end up having to make all sorts of exceptions for various people who can’t be told “no”.


Painfully true. And once that exception is made, it's easy to poke holes for more requests, until the security system becomes somewhat pointless.



