Opening a phishing email should not be considered a failure. The email client is specifically designed to be able to display untrusted mail.
Even clicking a hyperlink in a phishing email isn't too bad - web browsers are designed to be able to load untrusted content from the internet safely.
It's only entering credentials by hand into a phishing website, or downloading and executing something from a phishing site that is a real failure.
IT departments should probably enforce single sign on and use a password alert to prevent a password being typed into a webpage. They should also prevent downloads of executable files from non-whitelisted origins for most staff.
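A minimal sketch of the "password alert" idea (all class and parameter names here are hypothetical; real deployments such as Google's Password Alert extension hash keystrokes incrementally in the browser): store only a salted hash of the corporate password, then hash whatever gets typed into password fields on non-SSO pages and compare.

```python
import hashlib
import hmac

class PasswordAlert:
    """Sketch: keep a salted hash of the corporate password and flag
    any page where the same secret is typed. Names are illustrative."""

    def __init__(self, corp_password: str, salt: bytes = b"demo-salt"):
        self.salt = salt
        self.digest = hashlib.pbkdf2_hmac(
            "sha256", corp_password.encode(), salt, 100_000
        )

    def typed_on_foreign_page(self, typed: str) -> bool:
        # Compare in constant time; a match means the corporate
        # password just went into a non-SSO page.
        candidate = hashlib.pbkdf2_hmac("sha256", typed.encode(), self.salt, 100_000)
        return hmac.compare_digest(candidate, self.digest)
```

A match would then trigger a warning to the user and, ideally, a forced reset.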
> It's important to note the nature of the failure.
Definitely! UCSF had a security firm send out a fishy-looking phishing email. My email client pointed out the URL did not match the link text, whois told me it was a security company, and I opened the URL in a VM.
“You just got phished!” eye roll
I wouldn’t be surprised if most of those employees at GitLab were not so much phished as curious.
I opened the email and I forwarded the email to abuse at corporate domain just like the corporate website says and my manager still got an email saying I failed the test.
Maybe because the tracking pixel remote image loaded? I remember reading an article where people sent an email to Apple and it got passed around within Apple and iirc either Steve Jobs or someone who reports directly to Steve Jobs opened the email not knowing that they were sending out a makeshift read receipt every time they opened the email.
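For what it's worth, the read-receipt trick is tiny; a sketch of how a sender might build it (the token scheme and URL layout are made up):

```python
import uuid

def add_tracking_pixel(body_html: str, beacon_base: str) -> tuple[str, str]:
    # Mint a per-message token and append a 1x1 remote image. Any
    # request for that URL later tells the sender this particular
    # copy of the mail was rendered: a makeshift read receipt.
    token = uuid.uuid4().hex
    pixel = f'<img src="{beacon_base}/open/{token}.gif" width="1" height="1" alt="">'
    return token, body_html + pixel
```

This is also why "the remote image loaded" is enough to count as an open, even if the reader never clicked anything.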
I'm not even going to get to the point of wondering whether every component is faked or not, since my thought process will stop at "I'm not going to ever enter credentials into a site I got to from a random link in an email". Which seems to me to be a far better policy than trying to figure out whether a particular site I got to from a random link in an email is faked or not.
Nobody is demanding you do. But if you go around claiming people "got phished", then you should be sure.
I've also entered fake credentials into a clearly faked login form to see what'd happen. Would it redirect me to the right site? Just claim the information was wrong? Send me to a mock up of the intranet I was trying to access? You can call it bad policy if you want (although you don't know about my precautions), but it doesn't mean I was phished.
Isn't this fairly common? I've now worked at several organizations where sensitive information was stored on air-gapped networks. Software updates or data were moved in and out using pre-approved external drives.
I tend to think this is good software dev practice anyway. You ought to be able to test everything on your testing servers, and if this doesn't adequately reproduce the production environment, it's a problem with your test system.
This is kinda ridiculous. You first need the email client to have a bug which enables some kind of cross-site scripting just rendering an email, then a sandbox bug for a webpage to leak into the underlying system, and THEN a bug for the VM to escape to the parent OS.
At that point, I think it's as likely that your airgapped email laptop can hack into your work machine through local network exploits.
If you think a hacker is going to manage all that, you might as well assume that the hacker can trick Gmail into opening the email for you. There's a point at which we have to realistically assume that some layer of security works, and go about our lives.
1. Nothing about that post says it's just network-layer segmentation. C2S is its own region, with multiple AZs (data centers). Why would you believe those are collocated with commercial AWS and not, as they write, air-gapped?
2. Please don't contribute to giving marketing license to remove what little meaning words still have.
The wrong one, I suspect. "Air-gapped" is a term reserved for a machine never connected to the internet, hence the gap. Usually for extreme security concerns, like managing a paper crypto wallet or grid infrastructure.
It is a paranoid stance. But if you are a developer in a large company, think about how likely it is that your computer has (direct or not) access to data/funds worth more than $100k to someone, and what kind of exploits that money can buy.
Anyone can get phished if, on an off day when you're tired or distracted by personal issues or whatever, your guard is down and you happen to receive a phishing attempt that also pattern matches something you're kind of expecting, either because it's a targeted attempt or just randomly for a wide-net phishing attempt. That's my model of how phishing works, they just make lots of attempts and know they will get lucky some small percentage of the time.
With that as my model: the email getting to your inbox is of course the first failure and increases the chance of getting phished from zero to not zero. Opening the email is another failure that raises the chance. Clicking the link is another.
Each of the steps leading up to entering credentials or downloading and executing something from a phishing site is a real failure in that it increases the chances of becoming compromised.
That's even true if you're suspicious the whole way through. If you know it's a phishing attempt and are investigating, fine. But if you are suspicious, that means you can still go either way. You can also get distracted and end up with the phishing link in some tab waiting for you to return to it with all the contextual clues missing.
Someone once posted a link on hackernews titled "new phishing attack uses google domain to look legit"
I opened it in a new tab along with several other links to read, I was expecting a nice blog post explaining an exploit.
After about 20min of reading the other tabs I came across that tab again. I had forgotten the title of what I had clicked, I'm not sure I even remembered it was a hackernews link that got me to that page.
"Oh, looks like Google has randomly logged me out, that doesn't happen often" I think as I instinctively enter my email and password and hit enter.
Followed half a second later by "oh shit, that wasn't a legitimate google login prompt."
I raced off to quickly change my password, kick off any unknown IPs and make sure nothing had changed in my email configuration.
I'm lucky I came to my senses quickly. I think it was the redirect to generic google home page that made me click, along with the memory of the phishing related link I had clicked 20min ago.
There should really be a browser-managed 'tainted' flag on any tab opened from an email that prevents password input. Or if not prevents, at least a scary warning click through like an unsigned certificate creates, which at least shows the true full domain name.
Whenever I read about phishing, it seems insane that we have a system that requires human judgement for this task. If there isn't a deterministic strategy to detect it, how could the user ever reliably succeed? And if there is such a strategy, it should be done by the mail server, mail client, and browser.
Even an extension doing this might work in a corporate context. That makes me wonder if companies do their own extensions to enhance the browser for their needs. If all your employees are using web browsers for multiple hours per day it might really be worth it.
That's exactly what it's for: finding patterns that are too hard or too complex for humans to find. Enumerating every edge case of "enter a password" is not possible for a human, and whatever edge cases we humans miss _will_ be exploited by someone to compromise someone else.
It's also a matter of volume. How many pages can you evaluate and categorize in an hour versus how many can an ML system do in the same time? I once saw a demo where a firewall/virus scanner app could detect malware heuristics dynamically by comparing to a baseline system, and could do so in 10 seconds or less per item. It would take a human more than 10 seconds just to read the report to generate a rule, and humans don't scale nearly well enough.
There are lots of complaints to be had about ML and privacy / fairness / ethics / effectiveness, but this shouldn't be one of them.
>There should really be a browser-managed 'tainted' flag on any tab opened from an email that prevents password input
I was going to say that couldn't be done, but thinking about it: the way the OS currently works you can't know whether a tab came from an email, but you can know it came from an application other than the browser (which would require the browser to keep track of where each tab came from, which I assume it already does). But then links opened from a web-based email client would not get this scary warning click-through.
The problem is clearly pretty deep. One possibility is that it's inherently inconsistent with a deep, high speed, long range, high bandwidth data regime. We live in a universe where all of us are ventriloquists, or may be ventriloquist dummies.
There's the questions of what identity is, and its distinction from identifiers or assertions of identity.
There is the matter of when you do or do not need to assert or verify a specific long-term identity. When identifiers require a close 1:1 mapping, and when they don't. Of what the threat models and failure modes of strong vs. weak authentication schemes are.
And ultimately of why we find ourselves (individually, collectively, playing specific roles, aligned or opposed with convention, the majority, or other interests) desiring either strongly identified or pseudonymous / anonymous interactions.
Easy or facile mechanisms have fared poorly. Abuses and dysfunctions emerge unexpectedly.
This is why an auto-filling password manager is an essential security tool for every internet user. If your password manager doesn't autofill/offer to fill your passwords, the domain isn't legitimate.
Password managers are great for security and super convenient. It continues to shock me how many people surf the web while continuing to type the same password into dozens of sites, and then they wonder why they fall for phishing.
That sounds awful, but all you need to do is add all the legitimate domains to your Chase login record, and then you are phish-proof.
Obviously autofill itself can break on complex page layouts, and that's fine. The security comes from the password manager doing domain matching and offering to fill the password when you click on its addon menu.
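A minimal sketch of that domain matching, using strict host equality as a stand-in for what real managers do with the Public Suffix List (so e.g. subdomain handling is deliberately simplified here):

```python
from urllib.parse import urlsplit

def should_offer_fill(saved_url: str, current_url: str) -> bool:
    # Only offer the credential when the current page's host exactly
    # matches the host the credential was saved for. A lookalike
    # domain fails this check no matter how convincing it looks.
    saved_host = urlsplit(saved_url).hostname
    return saved_host is not None and saved_host == urlsplit(current_url).hostname
```

The security property is exactly this comparison; the autofill UI is just convenience on top.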
No, this is why FIDO/U2F is essential. Password managers are good but people regularly search and autofill across domains because most companies, especially in industries like finance and HR, have spent years training users to expect random vanity domains and renaming every time someone in marketing wants to mark their territory. People phish TOTP similarly.
In contrast, the FIDO design cannot be used across domains no matter how successfully you fool the human.
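A rough sketch of the server-side piece of that guarantee: the browser, not the page, writes the origin into clientDataJSON, so an assertion collected on a lookalike domain can never verify for the real one. (A real verifier also checks the signature, RP ID hash, counters, and flags; this only shows the origin binding.)

```python
import json

def verify_client_data(client_data_json: bytes, expected_origin: str,
                       expected_challenge: str) -> bool:
    # The browser fills in "origin" itself; a phishing page cannot
    # forge it, so an assertion harvested on examp1e.com fails here.
    data = json.loads(client_data_json)
    return (
        data.get("type") == "webauthn.get"
        and data.get("origin") == expected_origin
        and data.get("challenge") == expected_challenge
    )
```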
> Anyone can get phished if, on an off day when you're tired or distracted by personal issues or whatever
It shouldn't matter how tired or distracted you are: you should never enter credentials into any place you get to from anything you receive in an email--or indeed by any channel that you did not initiate yourself. If you get an email that claims there is some work IT issue you need to resolve, you call your work IT department yourself to ask them what's going on; you don't enter credentials into a website you got to from a link in the email.
It's the same rule you should apply to attempted phone call scams: never give any information to someone who calls you; instead, end the call and initiate another call yourself to a number you know to see if there is actually a legitimate issue you need to deal with.
Rules like this should be ingrained in you to the point where you follow them even when you're tired or distracted, like muscle memory.
I just realized that this might happen to me. On my home PC my alarm bells would definitely go off when Firefox stops suggesting credentials for a supposedly known domain. But on my work computer we're a bit higher security, and a password manager integrated into the browser (even with a master password, quickly installed patches, and whatnot) is just not up to scratch. So what I realized is that I may not notice a lookalike domain, because I need to grab the creds from another program anyway.
Is there an add-on for Firefox that warns when you enter credentials on a new domain? Or puts a warning triangle in a password field when today is the first day you visited the domain, or something? Firefox already tracks the latter (you can see it in the page info screen), so both should be easy to make, but I'm not sure anyone has thought of making this before.
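The core logic of such an add-on would be small, assuming you can hook page visits and password-field focus (the class and method names here are invented):

```python
from urllib.parse import urlsplit

class FirstVisitWarner:
    # Hypothetical add-on logic: warn when a password field appears
    # on a host we've never recorded a visit to before. Firefox's
    # own first-visit data could seed the seen_hosts set.
    def __init__(self) -> None:
        self.seen_hosts: set[str] = set()

    def record_visit(self, url: str) -> None:
        host = urlsplit(url).hostname
        if host:
            self.seen_hosts.add(host)

    def warn_on_password_field(self, url: str) -> bool:
        return urlsplit(url).hostname not in self.seen_hosts
```

A lookalike domain is by definition one you've never visited, so it always trips the warning.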
I’m not so sure about that. With enough dedication and time I think you could target a specific company from HN. Start writing a few good blog posts that would appeal to your audience, and only run the attack when some attribute is true of that company (e.g. requests come from their corp IP addresses).
You could even combine the two. Post the blog to Hacker News, then send a phishing email pointing to the HN post. That is a trusted link. Then the user will likely click the source link in HN.
Obviously, a lot harder and lower chance of success, but not impossible.
> cannot start phishing GitLab employees using HN posts
You definitely could perform a watering hole attack if you compromised a site that always gets on the front page of HN. If I were an evil hacker and I wanted to compromise HN I would instead attack a site like rachelbythebay.com or some other popular blogger then just wait for HN’ers to click the link.
The point is to recognise the email/situation as phishing or otherwise malicious before deciding to click the link. The chance of clicking a malicious link on HN is pretty low if you stick to the front page.
Ok, so you close a tiny window, while leaving the entire web open as a giant door by its side.
And you do it by a really invasive means that will ensure that everybody who knows what they are doing but is curious enough to safely inspect it further will be marked as clueless. That leads to false positive and negative errors larger than the signal, but you still expect to get useful data from it.
Usually I mouseover and see where the link would take me. If it's something like micr0soft.co, it raises some red flags. For something like a targeted phishing email, it's even more reasonable to be concerned about things like browser 0 days
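Unicode homoglyphs extend the "micr0soft.co" idea: an IDN can render innocently in the URL bar while its wire form is punycode. A quick sketch of a first-pass check (the function name is made up; real browsers apply much more nuanced IDN display policies):

```python
from typing import Optional

def punycode_form(domain: str) -> Optional[str]:
    # Return the punycode ("xn--") wire form when a domain contains
    # non-ASCII characters, else None. Seeing the xn-- form is often
    # enough to make a lookalike obvious.
    if domain.isascii():
        return None
    return domain.encode("idna").decode("ascii")
```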
It's not about defending against something specific.
It's using strategies like teaching people to check links before clicking them that can prevent a number of different things (phishing, malware, etc.)
If you've already clicked a link, attackers know exactly what browser you are using, and that you're probably also willing to click on the next link they send you, allowing them to go from a blanket attack to a targeted attack.
My understanding of the text is that 10 of 50 actually entered credentials. So the 1/5th is really the number of people from whom a phisher would've stolen credentials (although they say later they use 2FA, which would've prevented a real attack, but still bad enough, as you can expect these people use other accounts which may not even support 2FA).
2FA (assuming TOTP, not hardware keys) prevents attacks using credentials leaked from side channels, but does not work in phishing attacks using a fake login form. The attacker just needs to channel the TOTP you entered into the real login form, and on average they have a bit more than 15 seconds to do so, which is more than enough.
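To make that window concrete, here's a from-scratch RFC 6238 TOTP sketch (it matches the RFC's own test vectors): every submission inside the same 30-second step yields the same code, and that step is all a phishing relay needs.

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226 HOTP: dynamically truncated HMAC-SHA1 of the counter.
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def totp(key: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    # RFC 6238 TOTP: HOTP over the current 30-second window. Any two
    # timestamps in the same window produce the same code, so a fake
    # login form just forwards the code within that window.
    return hotp(key, unix_time // step, digits)
```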
This is what makes security keys so great: you can't steal a token from one domain and use it on another. They completely remove this type of attack, which no amount of training will ever fully protect you from. You can't put the onus on the employee; you have to make it impossible for them to do the wrong thing in this case.
Most sites, certainly consumer sites, that offer WebAuthn make it very optional. So doing it the current way just adds a step after the password step. You need a (perhaps stolen) password to even find out there's a next step and you're not in after all.
But if we swap it, now we're telling bad guys if this account is protected up front. "This one is WebAuthn, forget it, same for the next one, aha, this one asks for a password, let's target that".
The people with WebAuthn are no worse off than before, maybe even arguably better in terms of password re-use, but everybody else gives away that they aren't protected.
> The email client is specifically designed to be able to display untrusted mail.
Email clients often do things like load images, which can tell the sender you've read the email, which is an information leak.
Some email clients try not to do this, but that's actually somewhat recent, and I wouldn't say they're 'specifically designed to be able to display untrusted mail', rather 'they try to avoid common exploits when they become known'.
Most companies have e-mail addresses that are completely predictable, so you can pretty much assume that this e-mail address exists. If this really was a security risk shouldn't you have UUID emails for everyone?
Also, how do you as an attacker know that it was a user and not an e-mail server checking those images?
I mean you can just get employees from LinkedIn and already know their e-mail addresses with high certainty and know when they work by the timezones. If this information was abusable, why is it so easy to guess in the first place and why is it not actionable then?
It would be arbitrary to have the image links switched out by the server so they always go through a proxy/urldefense and it would never be the user ip address or user agent the attacker sees.
I would assume a company like Gitlab would have such measures if this info was indeed abusable.
> I mean you can just get employees from LinkedIn and already know their e-mail addresses with high certainty and know when they work by the timezones.
Do you put your IP number on LinkedIn?
When you travel do you put the hotel you're staying in on LinkedIn?
Also, not everyone is on LinkedIn in the first place.
> It would be arbitrary to have the image links switched out by the server so they always go through a proxy/urldefense and it would never be the user ip address or user agent the attacker sees.
The word 'arbitrary' doesn't make any sense to me in this context so not sure what you mean sorry.
In general, I don't know what you're trying to say - that there are ways to try to defend against these attacks? Yeah I know. I'm not sure what point of mine you're refuting or replying to anymore.
You asked 'What can be done with this information?' - this is the list of things you can do with that information. Can you defend against some of it? Yes to some extent. But it still leaks for many people.
Which companies own which IP address blocks is public information.
> When you travel do you put the hotel you're staying in on LinkedIn?
Conferences are announced; advertised, even.
> Also, not everyone is on LinkedIn in the first place.
That's OK, companies do a fine job publishing employee information all on their own.
> You asked 'What can be done with this information?' - this is the list of things you can do with that information.
You've moved from Step A, getting the information to Step B, correlating the information, but you've left off Step C, which is profiting from the information. What is a benefit you can gain from knowing someone at some IP address opened your email? Can you get that benefit some other way, such as by looking in a phone book or viewing the company's website?
What I am trying to say is that someone opening an e-mail should not be considered a failure; you can't expect people not to do this. All of this can be avoided if you just use some service to proxy the images. The IP would not be leaked, because the proxy server is fetching the image, and it could easily do this no matter what, even if it determines the message to be spam and the user never sees the e-mail.
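A sketch of that rewrite (the proxy URL scheme and the regex are illustrative assumptions; a production rewriter would use a real HTML parser and sign the proxied URLs, which is roughly how Gmail's image proxy works):

```python
import re
from urllib.parse import quote

def proxy_remote_images(html: str, proxy_base: str) -> str:
    # Route every remote image through a proxy, so the sender's
    # tracking pixel only ever sees the proxy's IP and user agent,
    # never the reader's.
    return re.sub(
        r'src="(https?://[^"]+)"',
        lambda m: f'src="{proxy_base}?url={quote(m.group(1), safe="")}"',
        html,
    )
```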
Also called agent fingerprinting. You can look at exactly how the agent is responding and make educated guesses at what agent it is. You think one HTTP request looks like any other, but there's enough little bits of information here and there to leak info.
Now you know who's curious enough to open a shady-looking email, and perhaps click a link out of curiosity. It means your list for the next round of attacks is much smaller and more targeted, making it easier to evade detection.
This is one thing I like about Outlook. It doesn't load embedded images unless you click on a button at the top. All email clients should do this. Not only is it safer, but it discourages people from putting a ton of images in emails which is just annoying in general anyway.
Email clients started out without embedded images. Images came after the initial email implementation. So one could say that displaying images in email clients is rather new. Also, most if not all email clients have the option of disabling Inline images.
Email clients, just like browsers, are made specifically to handle untrusted user content. That some people/clients then allow information to leak is another thing. Just like websockets in modern browsers.
Sure, let's pretend images in email are a new development and should be stopped.
Meanwhile in the real world some of us have actual users. Pretending we should stop using widely used and useful technology while flailing your arms and shouting "but security!" is not going to help anyone.
> Sure, let's pretend images in email are a new development and should be stopped.
What? No. No one is arguing that...
The only thing I'm refuting in my previous comment is "Some email clients try not to do this, but that's actually somewhat recent" which seems to indicate chrisseaton thinks that email clients that don't load images is a new thing. So the idea is that first we had email clients, then the email clients added the option to hide images.
When in reality, email clients started out without images, then they added images.
Way to reply to a comment without reading the context and subsequently completely miss the point.
Discriminating between different failure modes is important. However, every situation you've described is still some form of failure mode.
1. A user opening a phishing email means the email made it into their inbox (a spam-filtering failure, unless whitelisted for the sake of the test) and the user was moved to click the email based on the subject line. This in itself is the lowest risk of the failure modes we're about to describe, but some risk still exists, considering that e.g. malware has spread through the simple opening of emails before.
2. Clicking a link in a phishing email is much higher risk and, regardless of how the phishing test was crafted, is considered with absolute certainty to be a failure mode of any phishing test or event for three reasons: A user has definitively disclosed their presence within a company (email clients today may block trackers from loading, but clicking a link gives it away), the user has disclosed their receptivity to the message, and in a real world attack, merely landing on the page may trigger an event such as the delivery of a malware payload via a functioning exploit against the browser and the underlying operating system.
3. Entering credentials is probably the most obvious one.
Rather than a "password alert" control that just alerts a user that their account was signed into, what would be more helpful is a second factor. A bare minimum would be a prompt on a user's phone indicating that a login attempt was detected and requesting confirmation before that attempt can succeed. This at least helps a user potentially preempt an attack against their own account (assuming they're trained on how this works), even if they never figure out that they've entered their credentials into a phishing site. And if the second-factor challenge is never met, an automatic alert could prompt the security team to triage the risky login.
What can be done with the info that a user has opened the e-mail and clicked through to the website? Our company, for example, has completely predictable e-mail addresses: first letter of first name plus last name @ company.com. You would have this knowledge even without having to send e-mails. I assume GitLab has it similarly.
Reworked to sales terms: it's the difference between a cold lead and a hot lead. A user who's clicked through has proven themselves to at least be warm or receptive to phishing campaigns in general.
As an adversary, I'd probably couple unique links (for tracking clicks) with heatmapping and other front-end tracking technologies to see what exactly the user is doing and how far they've gone before backing out, which helps me refine the attack. Most attackers probably wouldn't go that far (spear phishing the people who clicked would probably be the extent of it), but if someone is after something of particular value at your firm, there's no reason why they wouldn't put more effort into sharpening the attack.
> Opening a phishing email should not be considered a failure. The email client is specifically designed to be able to display untrusted mail.
I'll go further:
It is impossible to never open a phishing email.
From addresses can be spoofed. Path information... well, it isn't available at all unless you open the email, is it? Also, it can be spoofed right up to the point it enters your company's email system. The Subject can be made appropriate and innocuous, or it can be made just as "OPEN THIS EMAIL IF YOU WANT TO KEEP YOUR JOB!" as the sender desires, and there isn't a person on Earth who has to respond to emails who will be able to divine the inner intent of the sender from just the Subject line.
Should corporate email systems prevent address spoofing? Argue amongst yourselves. My point is, they don't, or at least they haven't anywhere I've worked.
> IT departments should probably enforce single sign on and use a password alert to prevent a password being typed into a webpage. They should also prevent downloads of executable files from non-whitelisted origins for most staff.
I can hear the developers raising Hell at just the suggestion that they don't have local root and free rein with brew, docker, and npm. PMs and marketing can be relied upon to react similarly to being told that they have to use SSO-equipped tools that have been through procurement, and not someone's random free shared Retrium or whatever. That SSO tends to add a zero or two to the cost makes them even more skittish, on top of the chance that the procurement process says no.
Agreed that entering credentials is the most serious security failure here.
It is worth noting that credentials alone are never sufficient to access a GitLab employee's account.
GitLab employees are required to use MFA on all accounts, including GitLab.com. https://about.gitlab.com/handbook/security/#security-process....
A Yubikey/hardware token or TOTP (time-based one-time password) from an authenticator is necessary to access employee accounts. OTP via SMS or email is strongly discouraged and not an option for employees.
You cannot reasonably expect a person to refuse to even look at a suspicious email. Let's say you support product X at your workplace. From: a person you don't know. Subject: Bug report.
No one would delete that email without reading it, just like the finance department when they see something claiming to be a bill. Now what if the email said "I have a sample website that demonstrates this bug"? Again, there's no reason for you not to click that. The only thing that you should be able to reasonably expect a person to "fail" on is getting there and downloading a .exe or providing a set of credentials.
I'm a web developer with a focus on security and I nearly got phished multiple times. Once was a legitimate-looking email from Linode, which I opened and was fooled by (I didn't check the domain because I trusted my spam filter too much to consider that it might be fake), I was saved by my password manager not auto-filling the credentials because the domain didn't match, which made me look and see that I was on the wrong domain.
The second time, someone was about to steal $30k worth of cryptocurrency from me with a very convincing page on śtellar.org, where I nearly entered my wallet seed (did you notice the accent over the s? I didn't), and was saved by the fact that I keep my cryptocurrency in a hardware wallet, so I had no seed to enter.
Both times, what saved me from being phished wasn't that I'm trained or that I'm more observant (which my parents have no hope of ever being), but that I had used best practices so I didn't have to rely on being trained or observant.
I'm hoping WebAuthn takes off, which will really kill phishing for good, but you can take steps now: Use hardware U2F keys as second factors, use a password manager, don't use SMS auth. Make long, random passwords, etc.
Two years ago I was fooled by "colnbase.com" (L instead of i) to the point that I was annoyed that 1Password "wasn't working". Of course, 1Password didn't have a uname/password for a phishing site. I almost opened it to copy the password in manually when I spotted the L. It's sobering.
As for WebAuthn and U2F, unfortunately they chose every trade-off possible away from practical usability. They're doomed. Go look up the impl/ux flow for WebAuthn right now for example.
We need less of that and more good ideas that people would actually implement and use.
Really? What do you think is impractical about it? I just tap my USB key and I'm logged in.
Hell, it even supports a mode where you don't have to have a username or password at all (e.g. log in and try adding a key on https://pastery.net, you can then just log in with the key with no username/password at all).
Note that to do the latter ("Usernameless login") you need a FIDO2 key. A relatively modern Yubico product can do FIDO2, but cheaper alternatives mostly don't offer this.
The reason it's a cost upgrade? Those credentials have to live somewhere, and that means they're using Flash storage baked inside the FIDO2 key, ordinary FIDO keys don't have close to enough storage.
Next you might wonder: Wait, how does a FIDO key log me into Google if it isn't storing the keys?
Magic. Well, cryptography. When you registered the key it minted a key pair (elliptic curve, most likely) and obviously it gave Google the public key, but it also provides Google a large random-looking "identifier" which Google must give back each time you authenticate. That identifier could, by the specification, just be some sort of hidden "serial number", but in reality what everybody does is encrypt the private key (or its moral equivalent) with an AEAD scheme using a device-specific secret key, and then use that as the identifier. So when Google gives back the "identifier", the FIDO device decrypts it to discover its own private key for the site, which it can use to log you in. The FIDO dongle doesn't actually even know you have a Google account, yet it works anyway. Magic!
FIDO2 is a much less clever trick, and that flash storage is too expensive to use it everywhere - but the UX is so seamless it makes username plus password look like they asked you to undergo a cavity search by comparison.
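The stateless trick above fits in a few lines. This sketch uses the key-derivation variant (derive the per-site key from the device secret plus a random credential ID) rather than literal AEAD wrapping, but the stateless property and the domain binding come out the same:

```python
import hashlib
import hmac
import os

DEVICE_SECRET = os.urandom(32)  # stands in for the secret baked into the dongle

def register(rp_id: str) -> tuple[bytes, bytes]:
    # Mint a random credential ID; the site stores it and hands it
    # back at login. The private key is derived, never stored.
    cred_id = os.urandom(16)
    priv = hmac.new(DEVICE_SECRET, cred_id + rp_id.encode(), hashlib.sha256).digest()
    return cred_id, priv  # the site keeps cred_id and the matching public key

def authenticate(rp_id: str, cred_id: bytes) -> bytes:
    # Re-derive the same private key from what the site handed back;
    # nothing about the account ever lives on the dongle. A phishing
    # domain derives a different (useless) key from the same cred_id.
    return hmac.new(DEVICE_SECRET, cred_id + rp_id.encode(), hashlib.sha256).digest()
```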
Why is this downvoted? It's 100% correct, except the distinction is not FIDO vs FIDO2, it's "resident key mode" vs not (FIDO2 supports both, and does non-resident keys in the way described above).
Yes, the fact that you need flash storage for FIDO2 resident credentials is unfortunate, but that's why I'm excited about the new SoloKeys, which I heard will have enough flash space for thousands of keys. In comparison, the Yubikey has 25, which makes it useless for what I want it for, and they don't even advertise that limitation anywhere.
Logging in with this usernameless mode is just amazing, you can go to an untrusted computer, plug the key in, tap a button and you're logged in with no possibility of any credential theft anywhere (just make sure to log out afterwards).
That's fair, there probably are more people with a suitable Android phone than with a FIDO or FIDO2 dongle, and you're correct that the phone (having more than enough storage) offers this feature and unlike a dongle I think you can be comfortable the phone won't "run out" of space if you sign up for frivolous nonsense this way.
I finally have direct implementation experience (thanks COVID-19 I guess?) of WebAuthn now so I can speak confidently to this consideration.
I built a toy implementation on my vanity site and am gradually integrating it into a site friends built back when we all lived in the same city at the turn of the century. That site is old PHP (actually parts of it are terrifying Perl CGI code that looks like it was written before HTTP/1.1 existed) so my WebAuthn implementation is also PHP at the backend. This is neither the simplest nor the most capable technology; I have no doubt it can be done faster and better in your preferred language (it certainly can in mine).
I wrote <1 KLOC, no frameworks, no libraries beyond standard components, there's a little corner cutting in my PHP CBOR implementation but nothing likely to break in the real world for this purpose (we can treat all "I don't understand" cases as "Probably bogus, refuse entry" and be fine).
The JS is a little bit of Promises and some JSON processing, nothing every browser (that can do WebAuthn) doesn't offer already and I included it in my < 1 KLOC total.
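For a sense of scale, the corner-cutting CBOR decoder really is tiny. A rough Python transliteration of the approach (my real one is PHP; anything unrecognised raises, i.e. "probably bogus, refuse entry"):

```python
def cbor_decode(buf: bytes, pos: int = 0):
    """Minimal CBOR decoder: uints, byte/text strings, arrays, maps.

    Returns (value, next_position). Anything else -> ValueError,
    which a WebAuthn backend can safely treat as "refuse entry".
    """
    major, info = buf[pos] >> 5, buf[pos] & 0x1F
    pos += 1
    if info < 24:
        n = info
    elif info in (24, 25, 26):                   # 1-, 2-, 4-byte lengths
        width = {24: 1, 25: 2, 26: 4}[info]
        n = int.from_bytes(buf[pos:pos + width], "big")
        pos += width
    else:
        raise ValueError("unsupported length encoding")
    if major == 0:                               # unsigned integer
        return n, pos
    if major == 2:                               # byte string
        return buf[pos:pos + n], pos + n
    if major == 3:                               # text string
        return buf[pos:pos + n].decode("utf-8"), pos + n
    if major == 4:                               # array of n items
        items = []
        for _ in range(n):
            item, pos = cbor_decode(buf, pos)
            items.append(item)
        return items, pos
    if major == 5:                               # map of n key/value pairs
        d = {}
        for _ in range(n):
            k, pos = cbor_decode(buf, pos)
            v, pos = cbor_decode(buf, pos)
            d[k] = v
        return d, pos
    raise ValueError("unsupported major type")
```

That covers everything an attestation-skipping registration flow needs to pull apart; negative ints, tags and floats are exactly the "I don't understand, refuse entry" cases.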
Now, you aren't going to get this done by thinking it's something else. Trying to do all the work on the client? That won't happen. Hoping to hide all the WebAuthn credentials in a 64-character "password" field your database already has for each user? They won't fit.
But if a team has one person who understands in principle what this looks like, I'd say it's maybe a week for a backend person, a week for a frontend person and a week for a tester to spin up on what's going on and learn it. And that's the first time. And that's going to be markedly less for people who aren't learning the components (Web Crypto, public key crypto) as they go.
The pay off is huge. When you store passwords, that's a liability you've got there, it's like toxic waste you're storing. If somebody gets those passwords you can face fines, somebody might sue you, even at best you'll need a PR firm to help try to sell how sorry you are about it. But stored WebAuthn credentials aren't even secret. They make your preferred sock colour look like the crown jewels of PII by comparison, yet they're far stronger than a password as login credentials.
Thanks, I had originally typed the URL in the comment with https:// and HN converted it to punycode, foiling the attack. I never use IDNs, even though I'm in Greece, so I've set that option, thank you.
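For anyone wondering what that conversion looks like, Python's stdlib IDNA codec shows it directly. The lookalike domain below is a made-up illustration whose first letter is Cyrillic, not Latin:

```python
# Homograph example: the first character is Cyrillic U+0430, not Latin "a".
lookalike = "\u0430pple.com"

# Encoding to the IDNA wire format exposes the trick as punycode ("xn--...").
wire_form = lookalike.encode("idna")
print(wire_form)
```

Rendered as punycode the domain no longer looks like the brand it imitates, which is exactly why the conversion foils the attack.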
This is similar to a correlation problem. I was complaining multiple times to a company, finally they called me back. They had this elaborate explanation and needed me to reset my password.
Then they asked for my password. I was pretty confused but almost gave it to them. It was just a coincidence that some con-artist had called me to try to phish me when I had been trying to reach out to the company.
Assumptions like that, where you "know" it's real, are dangerous because they can make you ignore red flags.
The most important element is definitely finding a device that suits your needs in terms of connecting it: USB connectors, NFC and so on. The whole idea is that these things are trivial to either plug in and leave in a machine that's with you everywhere, or carry on a keyring to use quickly; if it's a whole performance to use your key then you just won't.
I can vouch without hesitation for the Yubico Security Key (the newer version has a "2" printed on it clearly, and also does the FIDO2 protocol with resident credentials). This is a relatively expensive option for the purpose but it's robust (lots of people put these on key rings and carry them everywhere) and simple, and the people who built it know what they're doing. But it's a USB-A device; if you need Bluetooth or USB-C or whatever then don't buy one hoping to like it.
That product skips all the fancy Yubico features other than being a Security Key, thus saving a big fraction of the cost - but there are much cheaper options that work if budget is tight, if you're just playing around, or for testing a potential deployment: I also have a "KEY-ID FIDO U2F Security Key", again USB-A, and it works nicely, but many people don't love the bright green LED (it's on all the time, not just when authenticating). It clearly feels cheaper than the Yubico product, though; this is not an heirloom product.
I have a Yubikey 5C but it might be a bit of a waste of money, since all I use it for is FIDO2/U2F, especially now that SSH supports that.
I'm excited about the new version of the SoloKeys (https://solokeys.com/) coming out next month, they aren't using secure elements like the Yubikeys are but I'm not really worried about someone stealing the key from me to extract the credentials with physical attacks, so they might be a good alternative.
Other than that, I eventually see password managers having built-in software FIDO2 implementations, so you just open your password manager and it automatically intercepts U2F requests and authenticates them, but that's a different thing.
Basically, anything you get that's U2F/FIDO2 compatible is fine, and much better than the second best thing (TOTP or whatever). Get something that's cheap enough for you to get two of, have one with you and the other at home as a backup, and that's it.
I’ve seen this a lot in my work where companies hesitate to conduct phishing exercises that are “too convincing” (or, put another way, too realistic) because they fear documenting poor results. Of course that means the exercise and the learning opportunities are much less impactful. I’ll concede it’s a little different with financial institutions because regulators and auditors will usually see the results at some point but I really admire Gitlab’s commitment to transparency.
I try to emphasize to clients that it’s not a test but a phishing exercise akin to a fire drill. You don’t pass or fail a fire drill - you use it to assess how prepared you are for a fire. And if you find that you’re totally unprepared, well, wouldn’t you prefer to figure that out before anything is actually on fire?
I love the lure, and I respect the GitLab team for making it public, but this is a tough read - it’s putting way too much responsibility on the end-user. For example I’m a huge fan of security teams using email headers to analyze suspicious messages, but I think it’s a step too far to expect a user to ever look at an email header, no? We can hardly get regular end-users to hover over a link; encouraging them to open up email headers to see what service the mail was sent from, or to understand what a “received” message header vs an x-originating-ip means is counter-productive. Headers are hard to understand even for a security analyst, asking HR or Recruitment or Sales to analyze them and understand them feels like the red-team are underestimating how little time everyone has and overestimating how technical most employees are!
My company regularly runs internal phishing tests like this, using an outside organization. We apparently have a near-constant 7% failure rate. Personally, I cheat: Long ago I discovered that the outside org puts some identifying headers into the email, so I wrote an email rule that adds "[PHISHME]" to the subject line.
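The rule itself is trivial to replicate anywhere you can script mail handling. A Python sketch, with a made-up header name since the real identifying header varies by vendor:

```python
import email
import email.message

def tag_phish_test(raw: bytes, marker: str = "X-PhishTest-Id") -> email.message.Message:
    """Prefix the subject if the vendor's identifying header is present.

    "X-PhishTest-Id" is a hypothetical name; inspect a real test mail
    from your vendor to find the actual giveaway header.
    """
    msg = email.message_from_bytes(raw)
    if marker in msg:
        subject = msg["Subject"] or ""
        if not subject.startswith("[PHISHME]"):
            del msg["Subject"]
            msg["Subject"] = f"[PHISHME] {subject}"
    return msg
```

In practice the same check lives in a mail client filter rule rather than code, but the logic is identical: match a header, rewrite the subject.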
The phishing emails are sometimes very good. They appear to be from senior management and address projects or other internal events everyone knows about. Some emails are very easy to spot, in the Nigerian prince category. It is very interesting that we have that 7% failure rate no matter how good or bad the phishing email is.
In general, I think internal phishing tests are a great way to educate the workforce.
> My company regularly runs internal phishing tests like this... I think internal phishing tests are a great way to educate the workforce
Yes and no. I used to report phishing attempts to IT. Then we started running tests like every month, so I'd just delete suspicious messages and move on. Of course, that's when we got a real phishing message.
Frequent company-wide tests are, in my opinion, overboard. Once a year company-wide tests, followed up by more-frequent tests for sensitive groups and/or those who failed previous tests, makes more sense.
That's the thing: reporting a phishing email in my org excludes you from the test emails for one month... then two months... then four months... I spoke to the guy in charge and he checked: my account is set to not receive them for 2 years.
Our tests seem to be somewhat staggered. We may see phishing email tests twice in a month, then nothing for several months. Typically there is a two-month lag between the tests.
I should note that phishing tests are just one component of many company-wide education programs regarding physical, computer, data, and network security. My company deals with very sensitive data, so information security is a Big Deal.
The problem with targeting these tests is that new employees are constantly coming in and need to be educated/trained. Also, the persistent failures do not seem to be confined to only certain work groups; they're spread around the company fairly randomly, and they move.
Exactly how phishing tests are run probably depends quite a bit on what kind of company you have and what kind of employees work there. A workforce full of programmers would -- I would hope! -- be much less susceptible to phishing scams. The sales force, possibly more susceptible. That may be stereotyping, though.
I'm not a huge fan of these phishing-test exercises. I run the service at https://urlscan.io which a lot of folks use regularly to check out suspicious links in mails / chat messages. I've been approached by some of these phishing-test companies asking me to prevent scanning their domains/IPs. They flat-out told me that they weren't happy about users using my service to check the link, which I always found odd, and I never got an explanation for it. Probably less spectacular findings for these companies if users can figure out a phishing test by themselves...
> Probably less spectacular findings for these companies if users can figure out a phishing test by themselves...
It's the same issue as "ad companies"... if you don't cook the numbers that show your expensive service is worth it, then people will switch to the service that looks worse (this one has 7% fail rate but this one has 50% fail rate)
Not many, I usually only do it when the domain or URL pattern in question is almost exclusively used for sessions/invites/sharing-links and basically every URL submitted leaks either a customer name and/or invite-token and/or PII. zoom.us is a good example, certain DocuSign URL patterns, the sort of thing where knowing the URL gets you a sensitive document, etc.
When I worked at Google, orange teams weren't allowed to use phishing tactics because they worked so reliably every single time that they provided no new information about the security of internal systems.
The reality is that humans are hard to secure, so defense in depth generally involves preventing compromised accounts from causing lots of damage, detecting them as early as possible, and controls for shutting them down.
While in the office you're connected to the internal network, supposedly within the internal domain, and the IT dept. has direct access to push updates automatically. When outside you're supposed to connect via a VPN (best case) or communicate via something encrypted (email, FTP, etc.), but you'll need to enter your credentials somewhere.
Also, please remember, it's not your laptop, it's company's laptop, merely given to you to do your work on it. Anybody within the company with correct credential would have the right to touch that laptop.
Bring-your-own-device is bad for companies. Any of them using this approach are just begging to have their talent pool drained. If I do work for a company on my own device there's absolutely no difference between my personal research and the company's research, and in the eyes of the law these companies will always lose if they try to enforce some "secret sauce" not going to their competition. Ever wonder why FAANG companies, which pinch every penny from whatever corner they can, never did this? Exactly because they know too well they'd lose badly. Just look at the guy that got bankrupted by Google after he went to Uber - HN had an article a few weeks back.
Shouldn't exactly that be appealing to the talent: not having to worry about the company claiming their side projects as its own?
I very often work on my side projects, and it is quite an annoyance having to move around with 2 laptops or paranoidly erasing my personal work from the company computer.
Also, from my experience working at a FAANG-like company, they definitely don't seem to pinch every penny. We have company laptops for security reasons, but phones are bring-your-own, which they pay for. They also pay for WFH office equipment as long as you can argue it makes you more productive or is good for your health. Basically anything that makes you more productive or sustainable, they will pay for.
> Also, please remember, it's not your laptop, it's company's laptop
> Anybody within the company with correct credential would have the right to touch that laptop.
That is only partially correct. In many European countries people enjoy quite a lot of protection in work life too. So, in order not to do anything illegal, the employer has to carefully control access rights to your PC. And the ones who have access rights cannot do whatever they like. Reading emails is typically illegal - yes, even emails on the work account! (Just to mention the legal concepts; of course in today's architecture emails are rarely stored on your PC.)
I understand that in the US employees enjoy little protection while at work. I would guess video surveillance in the toilets would still be unacceptable. Just to make the point: even if the location, paper and water are paid for by the employer, and more importantly the time is paid, it shouldn't follow that the employer controls everything. (Although there have been reports that Amazon warehouse workers in the UK use bottles for their needs, because the employer does not provide more humane arrangements in practice. Some employers are always worse than others, and that's why I have stopped ordering from that company.)
Most companies will have a firewall on their corp network, so new domains or malicious-categorized websites will usually be blocked, which offers additional protection over working from home. You can obviously use an always-on VPN for WFH companies, or tools like Cisco Umbrella, ZScaler or Netskope, but many companies haven't done that yet.
I think there's some sort of anti-work-from-home agenda going on here. It's completely irrelevant to the story. If you were in an office you'd get exactly the same email and presumably respond to it in exactly the same way.
It's relevant to the story because so many people are currently in their first months of WFH so a headline that mentions WFH will be more interesting to them than one that doesn't. Another way to put it would be "WFH pioneer gitlab phished its own staff", nothing wrong with that.
> If you fail, the last page is corporate training on the topic.
In my work, the policy is 3 strikes and you are gone. The first two fails are trainings with tests, and the third fail is an instantly fireable event. As we work with clients and their data, this is strictly enforced too.
I've never gotten in trouble for missing a phishing test, but everywhere I've worked there are real emails that have all the hallmarks of a phishing one: misspellings, weird domains, etc. So I don't think it's reasonable to punish people, nor is it sufficient to raise awareness. The security people don't address the issue of real emails that look fake conditioning people to click on similar things, because obviously it's outside of their area of responsibility and control.
Also, what do you do if you have a draconian policy and someone important clicks on one?
I guess that depends if failure is visiting the unique URL they've sent you or actually inputting credentials.
I got curious about an obvious internal phishing test and decided to copy the link to another machine to see how convincing it was... I hadn't clicked, it wasn't my work machine, and I didn't enter any details - but I instantly received an email informing me I'd failed.
Yeah right, I obviously haven't done the associated failure training and I will forever refuse to do so out of principle.
Concur. I do hope that the "well meaning" security team that thought this up is diligent in investigating and accounting for false positives: "Oh, I clicked the link in the phishing email IN A VM to see what the F* it was" and "I entered 'fakeceo' and 'mrpassword123'".
People have different methods of exploring and learning to decide if something is legit or not. Nor should any "security policy" be a 3-strikes zero-tolerance policy. Everything needs context.
P.S. I'm pretty sure that the mental and behavioral damage done by this 3 strikes policy can easily be weaponized.
> Hunt said GitLab has implemented multi-factor authentication and that would have protected employees had the attack not been a simulation.
"Protected employees" is a weird way to put it, to say the least. It's not about protecting employees, it's about protecting the GitLab company and their customers. And the protection would have failed: the attacker would have needed to use the credentials (including the one-time credential) in real time, which makes the attack-site logic a bit more difficult, but it would still have allowed them to break in. I doubt GitLab employees have to reauthenticate very often during a working day.
Well, unless they really use a challenge-response system. At least what I use as a GitLab customer is not one; it's just standard OTP. I would provide a valid one-time password to a phishing site, should I fall for it.
(Edit: reworded. Commenting on the phone is never a good idea...)
Most challenge response systems don't help either, the attacker gets to forward the challenge to you, and then your response back to the real site. It's some extra work but you can get ready-made software to help perform this attack.
WebAuthn (and the older U2F) works, because it's recruiting the browser (which knows perfectly well which site this is) to mint site-specific credentials every time.
So the fake site's options all dead-end:
* Just don't do WebAuthn; now they don't have a second factor and can't get in.
* Ask the browser for legitimate WebAuthn credentials for fake-gitlab.example. But of course GitLab won't accept those credentials, any more than it'd accept a made-up username, so they're useless.
* Show the browser the "cookie" GitLab offered for GitLab WebAuthn credentials, the browser will cheerfully give a user's FIDO dongle this cookie and the fake-gitlab.example name, and the dongle will explain that it doesn't recognise the combination, maybe use a different dongle? No joy.
* Show the browser that cookie and tell it this is gitlab.com. But this is fake-gitlab.example not gitlab.com, so the browser will just raise a DOMException SecurityError in the fake site's JS code. The code can hide that easily, but it doesn't get any credentials.
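The server-side half of that last bullet is unglamorous. A minimal sketch of the clientDataJSON check (a real verifier also checks the signature, rpIdHash, flags and counter, and the challenge is base64url-encoded in practice):

```python
import json

def verify_client_data(client_data_json: bytes,
                       expected_origin: str,
                       expected_challenge: str) -> bool:
    # The browser, not the page, fills in clientDataJSON, so a phishing
    # page cannot lie about its own origin here.
    cd = json.loads(client_data_json)
    return (cd.get("type") == "webauthn.get"
            and cd.get("challenge") == expected_challenge
            and cd.get("origin") == expected_origin)
```

An assertion relayed from fake-gitlab.example carries that origin in its clientDataJSON, so the real site rejects it even though the challenge and signature are otherwise valid.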
Thanks for mentioning https://en.m.wikipedia.org/wiki/WebAuthn . According to Wikipedia, Dropbox supports it. Any other widely used adopters? I need to check whether GitLab supports it when I am at my computer. It might well be that they even mandate it for their employees, but the statement, or at least the part of it that made it to the article, was not that specific.
My understanding is that Google mandates U2F (the de facto predecessor to WebAuthn) for employee systems, certainly the Google employees I know have FIDO keys. One interesting thing is that some of them don't really understand how those keys work - and the U2F/WebAuthn design means that doesn't matter at all. I believe way more firms should do this and I've tried to gently encourage it at places I've worked.
Older sites tend to support U2F rather than WebAuthn. If you're on a greenfield install, you should just do WebAuthn, but it can be complicated in some scenarios to migrate from U2F especially if you're huge so it's understandable that not all have. In at least Chrome and Firefox the UX is identical anyway.
So, not differentiating them:
Facebook, GitHub and Google are three popular examples
You can also authenticate for some US Federal Government business on Login.gov (even if you aren't a US citizen)
And the UK's "Gov.uk verify" authentication can use Digidentity's offering which in turn relies on WebAuthn or U2F.
Edited to add:
AWS can do it but, for some crazy reason, they won't let you register more than one FIDO dongle. So I would not advise securing an "admin" AWS account this way - only users who can go to someone with admin privs for a reset if they lose the dongle - but it's good for a team of developers, I guess.
Not allowing multiple dongles goes against the intended security design, ignores a SHOULD in the WebAuthn standard, and also makes a bunch of the fairly complicated design pointless, I can't tell if Amazon are incompetent or had some particular weird reason to do it.
Right, according to https://en.m.wikipedia.org/wiki/Universal_2nd_Factor it's U2F. So I would not be surprised if gitlab requires their employees to use the dongle instead of simple OTP which they allow for customers/users. A shortcoming of the article not to mention whether that's the case or not.
Phishing emails often look pretty obvious - that’s part of the program! It filters out people you can’t trick and leaves you only with the most gullible ones.
Had the same at a previous company. If you use GMail, IT needs to manually approve the mail to avoid it going into the spam folder. A huge warning saying “this message has been excluded from your spam filter by your IT department” shows up at the top. People still click through...
A better approach is to implement anti-phishing measures way up the chain -- at the MTA level itself. Simpler ideas like stripping URLs from mail, stripping attachments if the email originates outside the organization, converting HTML email to plain text, or disallowing HTML email entirely yield substantial benefit in stopping phishing.
Basically, don't try to solve a problem by humans when it can be solved more efficiently by technology!
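As a flavor of how cheap those MTA-side measures can be, here's a sketch of URL defanging (a production setup would use a milter and parse the MIME structure properly rather than regexing raw bodies):

```python
import re

URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)

def defang(body: str) -> str:
    """Make each URL non-clickable: https://a.b -> https[://]a[.]b"""
    def _one(m: re.Match) -> str:
        return m.group(0).replace("://", "[://]").replace(".", "[.]")
    return URL_RE.sub(_one, body)
```

The recipient can still read and, if genuinely needed, reconstruct the link, but a reflexive click is no longer possible.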
Phishing exercises are absolutely pointless in my experience and contribute zero to increasing the awareness. Shaming does not address the underlying human weaknesses that make us fall for phishing, they simply make the IT Guys look cooler, and increase CISOs' and Red Team budget. :-(
Two decades of experience suggests that "strengthening human security by training" ain't happening, no matter how hard/smart you try. The technical controls have to be beefed up to a point where that human-weak-link is eliminated.
These tests are nothing but CISOs'(and Red Teams, and the whole industry around it) justifying their existence, and potentially doing a song-and-dance about it at the quarterly all-hands. Nothing more, nothing less. We can come back to this thread in another year/two years/five years/decade, and I can bet dollars-to-doughnuts, the industry will still be training humans, and claiming these pointless statistics about phishing. ;-)
The problem seems to me that companies and orgs want to send emails when it is convenient for them to do so (paystub ready, benefits enrollment open click here, etc.) but distribute the cognitive load to its employees/customers to figure out which emails are trustworthy and which emails are not. You eventually get trained to click on links in emails as a form of legitimate interaction.
The company I work at handles a significant amount of PII and regularly phishes our own staff. It’s usually between 1 in 5 and 1 in 4 that will click on the link. Despite all of the education and quarterly repeat phishes, those numbers really aren’t improving much. I think at some point you have to accept that end users will click on things, and add additional protections to help mitigate the risk.
I regularly perform tests like these. Overall there's a flat 10% 'critical failure' rate across organizations. You send a phishing e-mail pretending to be from the IT department, with some instructions to install the 'anti-virus scanner' or whatever, and 1 out of 10 people will open the e-mail, click the link, give their credentials, follow all instructions, click through all warnings and infect their machines.
If your organization is above a certain size, remote code execution in your network is a given. There are several technical measures you can take to make these attacks _much_ harder to perform on Windows in general:
* Disable unsigned Office macro execution (if on windows with office)
* Disable mshta.exe or remove the .hta file association
If you can get away with it productivity-wise, enable application whitelisting for all software.
Attackers can often still find weak points in your organization. It's not always the marketing or HR department on Windows that gets phished. I once observed a colleague phish a webdev on a MacBook with a recruitment 'challenge'.
100% of people will fall for a good spear-phish; when you fail to accept that, you start doing things like punishing people who fail. The point of these tests is to raise awareness and train people so that successful phishing attacks need that much more targeting precision in addition to accuracy.
It's like combat training: the goal isn't to train your army so they all become elite fighters and martial artists, the goal is to improve their fighting skills so that they stand a good chance at victory against similar-ranking enemy troops.
So, if your people fall for an Emotet phish, that's bad. If they fell for a pentester's phish where he did background research on his subjects and spoofed email header fields, that's normal - just like a Navy SEAL beating up an Air Force sergeant would be normal.
All companies should be doing internal penetration/security testing. If you don't do it, someone in China or Russia will do it for you, you just won't know. I hope GitHub is doing this too. Google, for example, has an entire team whose task it is to exploit such attack vectors and close the holes in all sorts of products and processes, often with stunning results. I'm not sure if the rest of FAANG does this, although I'd be surprised if Facebook doesn't do essentially the same. I would not be surprised if Amazon or Apple don't do it, at least not to the extent you'd see at Google (no holds barred, the red team gets to pwn everything). Netflix, I'm not sure, they probably have something. Microsoft probably doesn't do it, since it'd make people look bad, and in their back-stabbing corporate culture people can't afford to look bad.
Is this newsworthy? My company does this very regularly, and the phishes are well crafted and convincing.
20% seems low if they're reasonably well put together emails. In the wild there's plenty of badly made, easy to spot phishing campaigns but one would hope any decent Red Team could put together a good one.
I support this action and wish more companies did it. It would tremendously improve security in every organization. The people that "bought" the fake login link feel ashamed, I'm sure, and they'll think twice before logging in next time.
Kudos to GitLab.
Someone told me they did the same thing at his company: sent out phishing emails to see who fell for it.
Those who did (management was disproportionately represented) had to attend some training lessons.
They sent another phishing email a few months later.
Most people who fell for it the first time, fell again, despite the training.
I don't think additional training is needed, at least in an IT company. The fake-phishing success should be enough to make everyone who fell curious enough to at least research the subject.
What the company has to communicate clearly is that failing the fake phishing test will not affect the employee's status in the company at all, but that eventual failure in a real phishing event would have at least some consequences.
For non-IT companies the training should begin and end with the message above and in between should be short and concise with ideas how and where to learn more about the subject.
As I read through these comments and the linked handbook, it kinda makes me want to work for a company like that. As important as security is, even the security handbook has an appropriate tone vs. treating people (CS talented or not) as idiots who cannot be trusted. Good job gitlabs.
I take the point but I also take it with a degree of realism.
I've been at companies where they did this and I usually 'fail the test'.
I received the email, but given the highly targeted nature (it wasn't very generic) I got curious. When you can tell it's an internal test, it's fun to see if you can trace it back to a particular person or department. So I created a VM on a secondary clean laptop and opened it.
So based on the test I failed because they detected I followed a link.
I don't for one second believe that 1 in 5 Gitlab employees also did this, but I'm certainly distrustful of test numbers like this.
My company does this often. It sends legitimate-looking emails, and I finally fell for one recently.
I thought about it, then I understood why. My company uses a lot of SaaS products - for submitting expenses, for giving appreciations, etc. These SaaS products regularly send emails, and they come from other domains.
When my company used all home-grown or on-premise web apps, I never opened emails coming from a different domain, or opened them very cautiously.
And now I think these SaaS emails have probably taught my brain to trust emails from other domains.
I worked at a place where they sometimes sent phishing emails to see what people did. They also had mandatory annual training on e-risks, which wasn't in fact too painful.
The fun arose when the company employed third-party service providers that required employees to respond to an external email (infrequent but it did happen). Inevitably there had to be a certain amount of internal comms to let people know that this external email was in fact safe to respond to.
That's hilarious but it also highlights that ultimately, there are no 100% inherently safe communications channels. A sufficiently motivated actor can go through extreme lengths to compromise your IT even if it means faking email, voice, letters, physical interaction.
It's not that strange. In a talk at last year's CCC, someone explained that it's a good learning experience when you educate the people who clicked on phishing right in/after the phishing process.
He also found that the learning effect only applies to the method the people failed at - so learning from phishing doesn't teach anything about passwords.
While sad, I think it's important to acknowledge this and not be too harsh on people who fail on the first attempt...
... also because my biggest learnings came from really embarrassing moments and failures too.
Man I hate these, and I hate that companies get paid serious real US American Dollars to stage these for other companies.
Every time I see a colleague laid off, and then see one of these stupid phishing tests land in my inbox, I think about losing my job during a pandemic in order to ensure the security team still had the budget to pull this stupid crap.
It doesn't help that our own customers send us stupider looking emails that are actually legitimate.
But the little green padlock was there! It must have been OK.
My company has just decided to enable 2FA to combat phishing. I'm not sure how that would help, since a fake login page can relay one-time codes too. What amazes me is that we allow HTML email at all; disallowing it alone would greatly reduce successful phishing attempts. Requiring all emails to carry valid signatures doesn't even seem too difficult for an organisation.
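To make the HTML email point concrete: the classic trick is a link whose visible text is one domain while the `href` points somewhere else, which plain-text email can't do. A rough illustration (my own sketch, not any real mail filter) of flagging such mismatches with only the Python standard library:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from <a> tags in an HTML body."""
    def __init__(self):
        super().__init__()
        self.links = []    # finished (href, text) pairs
        self._href = None  # href of the <a> currently open, if any
        self._text = []    # text chunks seen inside that <a>

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html):
    """Flag links whose visible text looks like a URL on a different host
    than the one the href actually points to."""
    auditor = LinkAuditor()
    auditor.feed(html)
    flagged = []
    for href, text in auditor.links:
        # Treat the visible text as a URL so we can extract a hostname from it.
        text_host = urlparse(text if "://" in text else "http://" + text).hostname
        href_host = urlparse(href).hostname
        # Only flag when the text plausibly names a host (contains a dot)
        # and that host differs from where the link really goes.
        if text_host and "." in text_host and text_host != href_host:
            flagged.append((href, text))
    return flagged
```

For example, `suspicious_links('<a href="http://evil.example/login">bank.example.com</a>')` flags the pair, while a link whose text and target agree passes. A real filter would of course need to handle redirectors, punycode, and subdomain games; this only shows why the mismatch is trivially machine-detectable once HTML mail is in play.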
I'd just note that Google documented that U2F keys were the only technology they'd tried that reduced employee credentials to Google systems being stolen in phishing attacks to zero. Maybe we need more of that going around.
I also don't understand why they keep mentioning that their staff is all-remote. I don't see what difference that makes.
"While an attacker would be able to easily capture both the username and password entered into the fake site, the Red Team determined that only capturing email addresses or login names was necessary for this exercise."
It says in the article that they never asked for passwords.
I wonder if the statistics would have been different if they did? You usually think twice before entering a password.
buying a phishing-as-a-service trainer is the single best bang for your buck in the realm of security. obviously, all computer security is relative to your use case and threat model, so your mileage will definitely vary. if all your servers are publicly routable with no firewall or antivirus, emails are the least of your worries.
however, spam is not a solved problem. phishing is hard to stop, and spearphishing basically impossible. professionals you know get compromised, upstream toolchains get compromised, etc. the attack effort and risk vs. reward is wildly skewed in the attacker's favor. it has been the vector of compromise for many high-profile breaches.
find a reputable company, pay them, and whitelist them in your spam filters. they will generate incredibly convincing phishing emails (using your domain and corporate info, since you let them) and give you a way to train your users that is irreplaceable.
Seems pretty typical for results of phishing campaigns - although they targeted only 50 people, which is too small a sample to be representative of overall numbers and stats across different disciplines in a larger organization.
Results are largely driven by the kind of phish that is sent and whether it's click-worthy.