Hope this isn’t a repeated submission. Funny how they’re trying to deflect blame after they tried to change the EULA post breach.
I’m seeing so much FUD and misinformation being spread about this that I have to wonder what the motivation behind the stories reporting it is. These are as close to the facts as I can state from what I’ve read about the situation:
- 23andMe was not hacked or breached.
- Another site (as of yet undisclosed) was breached and a database of usernames, passwords/hashes, last known login location, personal info, and recent IP addresses was accessed and downloaded by an attacker.
- The attacker took the database dump to the dark web and attempted to sell the leaked info.
- Another attacker purchased the data and began testing the logins on 23andMe using a botnet, feeding it the retrieved username/password pairs and picking nodes close to each account’s last known login location.
- None of the compromised accounts had MFA enabled.
- Data visible to a compromised account, such as anything shared through opt-in data sharing, was also visible to whoever compromised it.
- No data was exposed that hadn’t been opted into sharing.
- 23andMe now requires MFA on all accounts (started once they were notified of a potential issue).
I agree with 23andMe. I don’t see how it’s their fault that users reused their passwords from other sites and didn’t turn on Multi-Factor Authentication. In my opinion, they should have forced MFA for people but not doing so doesn’t suddenly make them culpable for users’ poor security practices.
I think most internet users are straight up smooth brained. I have to pull my wife’s hair to get her to not use my first name twice and the year we were married as a password, and even then I only succeed 30% of the time. And she had the nerve to bitch and moan when her Walmart account got hacked; she’s just lucky she didn’t have the cc attached to it.
And she makes 3 times as much as I do, there is no helping people.
These people remind me of my old roommate who “just wanted to live in a neighborhood where you don’t have to lock your doors.”
We lived kind of in the fucking woods outside of town, and some of our nearest neighbors had a fucking meth lab on their property.
I literally told him you can’t fucking will that want into reality, man.
You can’t just choose to leave your doors unlocked hoping that this will turn out to be that neighborhood.
I eventually moved the fuck out because I can’t deal with that kind of hippie dippie bullshit. Life isn’t fucking The Secret.
I have friends that occasionally bitch about the way things are but refuse to engage with whatever systems are set up to help solve whatever given problem they have. “it shouldn’t be like that! It should work like X”
Well, it doesn’t. We can try to change things for the better but refusal to engage with the current system isn’t an excuse for why your life is shit.
The bootlickers really come out of the woodwork here to suck on corporate boot. Edit: wrong thread.
What in the fuck are you talking about? You’re the one standing up for the corporation
Yeah that is my bad, responded to the wrong thread.
In this case, the corporation isn’t wrong that users aren’t doing due diligence.
Happens to the best of us
That’s a lot of fucking
I would definitely want my door locked for that.
Lately I try to get people to use Chrome’s built-in password manager. It’s simple and it works across platforms.
I get that people aren’t a fan of Google, and I’m not either, but this is a reasonable option that would be better than what the vast majority of people are doing now…
That’s what I’m getting at. It’s an upgrade for most users and certainly novices. I thought I was being clever with a password manager and they got hacked twice (you know who).
Bitwarden is simple, works across platforms, is open source, and isn’t trusting your data to a company whose *checks notes* entire business model is based on sucking up as much data as possible to use for ad-targeting.
I’ll trust the company whose business model isn’t built on data-harvesting, thanks.
Also, Firefox is better for the health of the web, Google is using Chrome as a backdoor to dictate web standards, yadda yadda.
You and I can choose our tools as the best for our use case and for the good of the internet in general, but our non-tech friends can’t.
I convinced a friend to use KeePass, but he wouldn’t spend the time to learn it. I now tell him and others like him to just use Chrome’s suggested password.
~~internet users~~ people
I agree, by all accounts 23andMe didn’t do anything wrong, however could they have done more?
For example, the 14,000 compromised accounts:
- Did they all login from the same location?
- Did they all login around the same time?
- Did they exhibit strange login behavior, like an account that always logged in from California suddenly logging in from Europe?
- Did these accounts, after logging in, perform actions that seemed automated?
- Did these accounts access more data than the average user?
In hindsight some of these questions might be easier to answer. It’s possible a company with even better security could have detected and shut down these compromised accounts before they collected the data of millions of accounts. It’s also possible they did everything right.
A full investigation makes sense.
I already said they could have done more. They could have forced MFA.
All the other bullet points were already addressed: they used a botnet that, combined with the “last login location” allowed them to use endpoints from the same country (and possibly even city) that matched that location over the course of several months. So, to put it simply - no, no, no, maybe but no way to tell, maybe but no way to tell.
A full investigation makes sense but the OP is about 23andMe’s statement that the crux is users reusing passwords and not enabling MFA and they’re right about that. They could have done more but, even then, there’s no guarantee that someone with the right username/password combo could be detected.
I’m not sure how much MFA would have mattered in this case.
23andme login is an email address. Most MFAs seem to use email as an option these days. If they’re already reusing passwords, the bad actor already has a password to use for their emails that’s likely going to work for the accounts that were affected. Would it have brought it down? Sure, but doesn’t seem like it would’ve been the silver bullet that everyone thinks it is.
It’s a big enough detractor to make it cumbersome. It’s not that easy to automate pulling an MFA code from an email when there are different providers involved and all that. The people that pulled this off pulled it off via a botnet and I would be very surprised if that botnet was able to recognize an MFA login and also login, get the code, enter it, and then proceed. It seems like more effort than it’s worth at that point.
Those are my questions, too. It boggles my mind that so many accounts didn’t seem to raise a red flag. Did 23&Me have any sort of suspicious behavior detection?
And how did those breached accounts access that much data without it being observed as an obvious pattern?
If the accounts were logged into from geographically similar locations at normal volumes then it wouldn’t look too out of the ordinary.
The part that would probably look suspicious would be the increase in traffic from data exfiltration. However, that would probably be a low priority alert for most engineering orgs.
Even less likely when you have a bot network that is performing normal logins with limited data exfiltration over the course of multiple months to normalize any sort of monitoring and analytics. Rendering such alerting inert, since the data would appear normal.
Setting up monitoring and analysis for user accounts, where they’re logging in from, and suspicious activity isn’t exactly easy. It’s so difficult that most companies tend to just defer to large players like Google and Microsoft to do this for them. And even if they had this set up, which I imagine they already did, it was defeated.
If the accounts were logged into from geographically similar locations at normal volumes then it wouldn’t look too out of the ordinary.
I mean, device fingerprinting is used for this purpose. Then there is the geographic pattern, the IP reputation etc. Any difference -> ask MFA.
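To make that concrete, here’s a minimal sketch of the kind of check being described: compare a login’s signals (device fingerprint, geolocation, IP reputation) against what has been seen before on that account, and step up to MFA on any difference. All names and signal sources here are illustrative, not anything 23andMe actually runs.

```python
# Minimal sketch of risk-based ("step-up") authentication: compare the current
# login's signals against what we've seen for this account before, and require
# a second factor whenever anything looks new. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class LoginHistory:
    known_device_fingerprints: set = field(default_factory=set)
    known_countries: set = field(default_factory=set)

@dataclass
class LoginAttempt:
    device_fingerprint: str   # e.g. a hash of UA + screen + timezone, however you derive it
    country: str              # from IP geolocation
    ip_reputation_bad: bool   # from whatever reputation feed you trust

def requires_mfa(history: LoginHistory, attempt: LoginAttempt) -> bool:
    """Return True if this login should be challenged with a second factor."""
    if attempt.ip_reputation_bad:
        return True
    if attempt.device_fingerprint not in history.known_device_fingerprints:
        return True
    if attempt.country not in history.known_countries:
        return True
    return False  # everything matches what we've seen before

# Example: a login from a never-seen device always gets challenged.
history = LoginHistory({"fp-abc123"}, {"US"})
print(requires_mfa(history, LoginAttempt("fp-zzz999", "DE", False)))  # True
```

The catch, as noted elsewhere in the thread, is that the attackers deliberately matched the last known login location, so the geography signal alone wouldn’t have fired; device fingerprinting and IP reputation are what would have had to carry the weight.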
It’s so difficult that most companies tend to just defer to large players like Google and Microsoft to do this for them.
Cloudflare, Imperva, Akamai I believe all offer these services. These are some of the players who can help against this type of attack, plus of course in-house tools. If you decide to collect sensitive data, you should also provide appropriate security. If you don’t want to pay for services, force MFA at every login.
deleted by creator
deleted by creator
Common thing, a lot of people despise MFA. I somewhat recently talked with one person who works in IT (a programmer) who has not set up MFA for their personal mail account.
deleted by creator
Credential stuffing is an attack which is well known and that organizations like 23andme definitely should have in their threat model. There are mitigations, such as preventing compromised credentials from being used at registration, protecting from bots (as imperfect as it is), enforcing MFA, etc.
This is their breach indeed.
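For the “prevent compromised credentials at registration” mitigation, one common approach is checking candidate passwords against the Pwned Passwords range API, which uses k-anonymity so only the first five characters of the SHA-1 hash ever leave your server. A rough sketch, with the endpoint and response format written from memory of the public docs, so verify before relying on it:

```python
# Sketch of a registration-time check against the Pwned Passwords range API
# (k-anonymity: only the first 5 hex chars of the SHA-1 hash leave your server).
import hashlib
import urllib.request

def password_is_pwned(password: str) -> bool:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"user-agent": "pwned-check-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each line is "HASH_SUFFIX:COUNT"; a match means the password appeared in a dump.
    return any(line.split(":")[0] == suffix for line in body.splitlines())

if __name__ == "__main__":
    print(password_is_pwned("password123"))  # almost certainly True
```

It wouldn’t have caught this particular credential set (which, per comments below, was sold privately rather than dumped publicly), but it raises the bar against run-of-the-mill stuffing lists.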
They did. They had MFA available and these users chose not to enable it. Every 23andMe account is prompted to set up MFA when they start. If people chose not to enable it and then someone gets access to their username and password, that is not 23andMe’s fault.
Also, how do you go about “preventing compromised credentials” if you don’t know that the credentials are compromised ahead of time? The dataset in question was never publicly shared. It was being sold privately.
The fact that they did not enforce 2fa on everyone (mandatory, not just having the feature available) is their responsibility. You are handling super sensitive data, and credential stuffing is an attack with a super low level of complexity and high likelihood.
Similarly, they probably did not enforce complexity requirements on passwords (making an educated guess here), or at least not sufficiently, which is also their fault.
Regarding the last bit, it might not have helped against this specific breach, but we don’t know that. There are companies who offer threat intelligence services and buy breached data specifically to offer this service.
Anyway, in general the point I want to make is simple: if the only defense you have against a known attack like this is a user who chooses a strong and unique password, you don’t have sufficient controls.
I guess we just have different ideas of responsibility. It was 23andMe’s responsibility to offer MFA, and they did. It was the user’s responsibility to choose secure passwords and enable MFA and they didn’t. I would even play devil’s advocate and say that sharing your info with strangers was also the user’s responsibility but that 23andMe could have forced MFA on accounts who shared data with other accounts.
Many people hate MFA systems. It’s up to each user to determine how securely they want to protect their data. The users in question clearly didn’t if they reused passwords and didn’t enable MFA when prompted.
My idea is definitely biased by the fact that I am a security engineer by trade. I believe a company is ultimately responsible for the security of their users, even if the threat is the users’ own behavior. The company is the one able to afford a security department who is competent about the attacks their users are exposed to and able to mitigate them (to a certain extent), and that’s why you enforce things.
Very often companies use “ease” or “users don’t like it” to justify the absence of security measures such as enforced 2fa. However, this is their choice: they prioritize not pissing off a (potentially) small % of users at the price of less security for all users (especially the less proficient ones). It is a business choice that they need to be accountable for. I also want to stress that, despite being mostly useless, different compliance standards also require measures that protect users who use simple or repeated passwords. That’s why complexity requirements are sometimes demanded, or also the trivial bruteforce protection with a lockout period (for example, most gambling licenses require both of these, and companies who don’t enforce them cannot operate in a certain market). Preventing credential stuffing is no different, and if we look at OWASP’s recommendations, it’s clear that enforcing MFA is the way to go, even if maybe in a way that does not trigger all the time, which would have worked in this case.
It’s up to each user to determine how securely they want to protect their data.
Hard disagree. The company, i.e. the data processor, is the only one who has the full understanding of the data (sensitivity, amount, etc.) and a security department. That’s the entity who needs to understand what threat actors exist for the users and implement controls appropriately. Would you trust a bank that allowed you to login and make bank transfers using just a login/password with no requirements whatsoever on the password and no brute force prevention?
This wasn’t a brute force attack, though. Even if they had brute force detection, which I’m not sure whether they had or not, it would have done nothing to help this situation, as nothing was brute forced in a way that would have been detected. The attempts were spread out over months using bots that were local to the last good login location. That’s the primary issue here. The logins looked legitimate. It wasn’t until after the exposure that they knew they weren’t, and that was because of other signals that 23andMe obviously had in place (I’m guessing usage patterns or automation detection).
Of course this is not a brute force attack, credentials stuffing is different from bruteforcing and I am well aware of it. What I am saying is that the “lockout period” or the rate limiting (useful against brute force attacks) for logins are both security measures that are sometimes demanded from companies. However, even in the case of bruteforcing, it’s the user who picks a “brute-forceable” password. A 100 character password with numbers, letters, symbols and capital letters is essentially not possible to be bruteforced. The industry recognized however that it’s the responsibility of organizations to implement protections from bruteforcing, even though users can already “protect themselves”. So, why would it be different in the case of credentials stuffing? Of course, users can “protect themselves” by using unique passwords, but I still think that it’s the responsibility of the company to implement appropriate controls against this attack, in the same exact way that it’s their responsibility to implement a rate-limiting on logins or a lockout after N failed attempts. In case of stuffing attacks, MFA is the main control that should simply be enforced or at the very least required (e.g., via email - which is weak but better than nothing) when any new pattern in a login emerges (new device, for example). 23andMe failed to implement this, and blaming users is the same as blaming users for having their passwords bruteforced, when no rate-limiting, lockout period, complexity requirements etc. are implemented.
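For reference, the “lockout after N failed attempts” control mentioned above is about this much code; the hard part is operational, not technical. This is an in-memory toy with made-up thresholds, not a drop-in implementation:

```python
# Toy sketch of a "lockout after N failed attempts" control. In-memory only; a real
# deployment would persist counters (e.g. in Redis) and probably key them on
# username + IP rather than username alone.
import time

MAX_FAILURES = 5
LOCKOUT_SECONDS = 15 * 60

_failures: dict[str, list[float]] = {}   # username -> timestamps of recent failures

def is_locked_out(username: str) -> bool:
    now = time.time()
    recent = [t for t in _failures.get(username, []) if now - t < LOCKOUT_SECONDS]
    _failures[username] = recent
    return len(recent) >= MAX_FAILURES

def record_failure(username: str) -> None:
    _failures.setdefault(username, []).append(time.time())

def record_success(username: str) -> None:
    _failures.pop(username, None)
```

And, as the comment says, this mainly addresses brute force: in a stuffing attack each account sees one attempt with the correct password, so it’s the MFA or new-device challenge that actually bites.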
There are services that check provided credentials against a dictionary of compromised ones and reject them. Off the top of my head Microsoft Azure does this and so does Nextcloud.
This assumes that the compromised credentials were made public prior to the exfiltration. In this case, it wasn’t as the data was being sold privately on the dark web. HIBP, Azure, and Nextcloud would have done nothing to prevent this.
Yea, you’re right. Good point.
Is there a standards body web developers should rely on which suggests requiring MFA for every account? OWASP, for example, only recommends requiring it for administrative users, and recommends giving regular users the option without requiring it.
There are some positives to requiring MFA for all users, but like any decision there are trade-offs. How can we throw 23andme under the bus when they were compliant with industry best practices?
I don’t think it’s possible to make a blanket statement in this sense. For example, Lemmy doesn’t handle data as sensitive as 23andMe’s. In this case, it might be totally acceptable to have the feature but not require it. Banks (at least in Europe) never let you log in with just a username and password. They definitely comply with different standards, and in general it is well understood that the sensitivity of the data (and actions) needs to be reflected in more severe controls against attacks which are relevant.
For a company with such sensitive data (such as 23andMe), their security model should have definitely included credential stuffing attacks, and therefore they should have implemented the measures that are recommended against this attack. Quoting from OWASP:
Multi-factor authentication (MFA) is by far the best defense against the majority of password-related attacks, including credential stuffing and password spraying, with analysis by Microsoft suggesting that it would have stopped 99.9% of account compromises. As such, it should be implemented wherever possible; however, depending on the audience of the application, it may not be practical or feasible to enforce the use of MFA.
In other words, unless 23andMe had specific reasons not to implement such a control, they should have. If they simply chose not to (because security is an afterthought, because that would have meant losing a few customers, etc.), it’s their fault for not building a security posture appropriate for the risk they are subject to, and therefore they are responsible for it.
Obviously not every service should be worried about credential stuffing, therefore OWASP can’t say “every account needs to have MFA”. It is the responsibility of each organization (and their security department) to do the job of identifying the threats they are exposed to.
I agree. The people blaming the website are ridiculous here.
It’s just odd that people get such big hate boners from ignorance. Everything I’m reading about this is telling me that 23andMe should have enabled forced MFA before this happened rather than after, which I agree with, but that doesn’t mean this result is entirely their fault either. People need to take some personal responsibility sometimes with their own personal info.
deleted by creator
Laziness alone is a pretty big reason. MFA was available and users were prompted to set it up. The fact that they didn’t should tell you something.
deleted by creator
Step 4 is where 23andme got hacked
By your logic I hack into every site I use by … *checks notes* … presenting the correct username and password.
It’s called social hacking.
Would bet that you’re a crypto fan.
How much we talking? I’ll take that bet.
Why?
Would bet your password includes “password” or something anyone could guess in 10 minutes after viewing your Facebook profile.
Edit: Your l33t hacker name is your mother’s maiden name and the last four of your social, bro. Mine’s hunter1337, what’s yours?
The data breach started with hackers accessing only around 14,000 user accounts. The hackers broke into this first set of victims by brute-forcing accounts with passwords that were known to be associated with the targeted customers
Turns out, it is.
What should a website do when you present it with correct credentials?
- IP based rate limiting
- IP locked login tokens
- Email 2FA on login with new IP
IP-based mitigation strategies are pretty useless for ATO (account takeover) and credential stuffing attacks.
These days, botnets for hire are easy to come by, and you can rotate your IP on every request, limiting your controls to simply blocking known-bad IPs and data-center IPs.
- The attackers used IPs situated in their victims’ regions to log in, across months, bypassing rate limiting or region locks/warnings.
- I don’t know if they did, but it would seem trivial to just use the tokens in-situ once they managed to log in instead of saving and reusing said tokens. Also, those tokens are the end-user client tokens; IP-locking them would make people with dynamic IPs or logged in over 5G throw a fuss after the 5th login in half an hour on the subway.
- Yeah, 2FA should be a default everywhere, but people just throw a fuss at the slightest inconvenience. We very much need 2FA to become the norm so it’s not seen as such.
I’m cool with 2fa, I’m not cool with a company demanding my cellphone number to send me SMS for 2fa or to be forced to get a 2fa code via email…like my bank. I can ONLY link 2fa to my phone. So when my phone goes missing or stolen, I can’t access my bank. The only time I have resisted 2fa is when this poorly implemented bullshit happens.
Pro tip, when making a new Google account and putting your phone number in be sure to look into more options. There is a choice to only use it for 2fa and not for data linking.
2 factor beats the hell outta that “match the horse with the direction of the arrow 10x” bs
What should a website do when you present it with correct credentials?
Not then give you access to half their customers’ personal info?
Credential stuffing one grandpa who doesn’t understand data security shouldn’t give me access to the names and genetics of 500 other people.
That’s a shocking lack of security for some of the most sensitive personal data that exists.
You either didn’t read or just really need this to be the company’s fault.
Those initial breaches led to more info being leaked because users chose to share data with those breached users before their accounts were compromised.
When you change a setting on a website do you want to have to keep setting it back to what you want or do you want it to stay the first time you set it?
Not then give you access to half their customers’ personal info?
That’s a feature of the service that you opt into when you’re setting up your account. You’re not required to share anything with anyone, but a lot of people choose to. I actually was able to connect with a half-sibling that I knew I had, but didn’t know how to contact, via that system.
Hi! If you’ve used it, there’s something I was curious about - how many people’s names did it show you?
If 50%+ of the 14000 had the feature enabled, it was showing an average of 500-1000 “relatives”. Was that what you saw? What degree of relatedness did they have?
I don’t think that opting in changes a company’s responsibility to not launch a massive, inevitable data security risk, but tbh I’m less interested in discussing who’s to blame than I am in hearing more about your experience using the feature. Thanks in advance!
This list shows 1500 people for me. I assume that’s just some arbitrary limit on the number of results. There’s significant overlap in the relationship lists, so the total number of people with data available is less than what the (14000 x 0.5 x 1500) math might indicate.
My list of possible relations goes from 25% to 0.28% shared DNA. That’s half-sibling down to 4th cousin (shared 3rd-great-grandparents).
The only thing I can see for people who I haven’t “connected” with is our shared ancestry and general location (city or state) if they share it. I can see “health reports” if the person has specifically opted to share it with me after “connecting”.
deleted by creator
But why do you need access to any of your half sibling’s personal data to do that?
Nobody “needs” it, lol. People do it because it’s interesting to them. That’s why it’s opt-in.
Why do you need access to everyone who opted in’s data to do that?
Why does Facebook need to show you other people’s profiles and posts? Why does Lemmy show me your profile and posts? It’s how those services work, and people choose to use those services because they work that way.
What should it do? It should ask you to confirm the login with a configured 2FA
Yeah they offered that. I don’t think anyone with it turned on was compromised.
This shouldn’t be “offered” IMHO, this should be mandatory. Yes, people are very ignorant about cyber security (I’ve studied in this field, trust me, I know). But the answer isn’t to put the responsibility on the user! It is to design products and services which are secure by design.
If someone is actually able to crack accounts via brute-forcing common passwords, you did not design a secure service/product.
[Edit: spelling]
I’ve noticed that many users in this thread are just angry that the average person doesn’t take cybersecurity seriously, blaming the user for using a weak password. I really don’t understand how out of touch these Lemmy users are. The average person is not thinking of cybersecurity. They just want to be able to log into their account and want a password they can remember. Most people out there are not techies, don’t really use a computer outside of office work, and even more people only use a smartphone. It’s on the company to protect user data because the company knows its value and will suffer from a breach.
How should the company be protecting user data when, like you said, the average person doesn’t take cybersecurity seriously, isn’t a techie, doesn’t use a computer outside the office, and just wants to log into their account with a password they remember?
Are you basically just saying the company should’ve enforced 2FA? Or maybe one of those “confirm you’re logging in” emails, every time they want to log in?
From what I’m seeing, the hackers used the weak password accounts to access a larger vulnerability once they were behind the curtain. The company I work for deals with sensitive proprietary data daily and we are keenly aware that individuals should never have an opportunity to access the information of any other user. Things like single-user quarantining of data blocks are a minimum for security. Users log in and live on their own private island floating in a void. On top of that, we use behavior tracking to detect access patterns that attempt to exit the void and revoke credentials. That is also not even remotely mentioning that you have a single point of access entering thousands of accounts. That on its own should be throwing enough red flags to pull down the webserver for a few minutes to hours. There is a lot they could have done.
It wasn’t exploiting a vulnerability, they gained access to other peoples data because the site has a deliberate feature to share your data with your relatives if both have allowed that. That’s why the term used is “scraped”, they copied what the site showed.
When someone logs in to a Facebook account, it’s not a vulnerability that they can now see all of the info their friends have set to “friends only”, essentially. Also, they used a botnet so the login attempts weren’t suspicious enough to do anything about - they weren’t brute forcing a single user multiple times, but each trying once with the correct password.
Yes, one of those “confirm it’s you” emails. They’re less intrusive than regular 2FA, and are only needed when a user logs in from a machine without the right cookie
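A rough sketch of that flow, assuming a signed “remembered device” cookie and an emailed one-time code for anything else; the names, storage and cookie format here are all illustrative:

```python
# Sketch of the "confirm it's you" flow described above: remember a device with a
# signed cookie and only email a one-time code when that cookie is missing or invalid.
import hmac, hashlib, secrets

SERVER_SECRET = secrets.token_bytes(32)   # in reality: a persistent, configured secret

def issue_device_cookie(user_id: str) -> str:
    device_id = secrets.token_urlsafe(16)
    sig = hmac.new(SERVER_SECRET, f"{user_id}:{device_id}".encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{device_id}:{sig}"

def device_cookie_valid(user_id: str, cookie: str) -> bool:
    try:
        cookie_user, device_id, sig = cookie.split(":")
    except ValueError:
        return False
    expected = hmac.new(SERVER_SECRET, f"{cookie_user}:{device_id}".encode(), hashlib.sha256).hexdigest()
    return cookie_user == user_id and hmac.compare_digest(sig, expected)

def login(user_id: str, password_ok: bool, cookie: str | None) -> str:
    if not password_ok:
        return "rejected"
    if cookie and device_cookie_valid(user_id, cookie):
        return "logged in (known device)"
    # Unknown device: send a short-lived code to the account's email and hold the
    # session until the user enters it; on success, issue_device_cookie() for next time.
    one_time_code = f"{secrets.randbelow(10**6):06d}"
    return f"emailed confirmation code {one_time_code}; awaiting verification"
```

In practice you’d use your framework’s signing and session helpers rather than hand-rolling this, but the shape is the same.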
Hello, as I said, it’s about “security by design”, which means to design a system that ‘doesn’t allow for insecure things’ in the first place. Like a microwave oven doesn’t operate when the door is open. IT-/cyber-security is a complex field, but 2FA is a good place to start, regarding user facing services. There are lots more things than that of course.
You’re right, most people either don’t care, or don’t even know enough to care in the first place.
And that’s a huge problem. Yes, companies have some responsibility here, but ultimately it’s the user who decides to use the service, and how to use it.
don’t even know enough to care in the first place.
but ultimately it’s the user who decides to use the service, and how to use it.
So you admit they don’t have access to the knowledge needed to make better choices for their digital security, then immediately blame them. I think you’re biased by the point of view of someone who is already more informed about this sort of thing. If they don’t know they need to know more, how can they be expected to do any research? There’s only so much time in a day, so you can’t expect people to learn “enough” about literally everything.
I don’t intend to blame them, I’m just making an observation.
The fact that they don’t know is a problem in itself too, and spreading awareness about cybersecurity and teaching general tech literacy and common sense is not done as much as it should be.
It’s exactly like you say. They don’t know, and how would they? No one is ever giving them the information they need.
That’s exactly right. I was about to say how people usually don’t even “not take it seriously” but rather don’t even think or know about it. But you already said that yourself haha :D
Or, worse, they don’t even understand it. I definitely have people in my life who know about the idea of cybersecurity and are terrified of getting hacked, but constantly do things the wrong way or worry about the wrong things. Because it’s just too confusing for them, and it’s always changing.
Just use a VPN bro and you’re fine /s
Fuck mandatory 2FA. Most sites just throw SMS on there and leave it at that. I’m so tired of putting yet more of my information into services that don’t require it to utilize the service.
If TOTP was more prevalent (getting there) I might agree but then we’d be talking about how the typical user doesn’t know how to set that up.
Companies pay for SMS; TOTP is free for them (just a computation…). It is utterly dumb to implement the same logic with a paid service rather than TOTP (or security keys, at this point). So yeah, I agree with the idea, but I think nowadays most 2fa is TOTP (sadly, some require their shitty apps to do just that - Blizzard once was one of them, maybe still is).
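“Just a computation” really is the whole story server-side: RFC 6238 TOTP is an HMAC over a 30-second counter, which the standard library can do directly. A sketch below; the example secret is the usual documentation demo value, not anything real:

```python
# A standard RFC 6238 TOTP check using only the stdlib, to illustrate why
# server-side TOTP costs nothing per login (unlike SMS). The shared secret comes
# from enrollment (the QR code the user scanned).
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, timestep: int = 30, digits: int = 6, when: float | None = None) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((when if when is not None else time.time()) // timestep)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted: str, drift_steps: int = 1) -> bool:
    # Accept codes from adjacent time steps to tolerate clock drift.
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, when=now + d * 30), submitted)
               for d in range(-drift_steps, drift_steps + 1))

print(totp("JBSWY3DPEHPK3PXP"))  # common demo secret used by authenticator app examples
```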
It’s a thinly veiled method to gather more info from users when SMS is the only option.
2FA should be forced, it’s not a hard thing to do.
Too bad biometric data couldn’t be used…
So… we are ignoring the 6+ million users who had nothing to do with the 14 thousand users, because convenience?
Not to mention, the use of “brute force” there insinuates that the site should have had password requirements in place.
Please excuse the rehash from another of my comments:
How do you people want options on websites to work?
These people opted into information sharing.
When I set a setting on a website, device, or service I damn sure want the setting to stick. What else would you want? Force users to set the setting every time they log in? Every day?
I admit, I’ve not used the site so I don’t know the answers to the questions I would need, in order to properly respond:
- Were these opt-in or opt-out?
- Were the risks made clear?
- Were the options fine tuned enough that you could share some info, but not all?
From the sounds of it, I doubt enough was done by the company to ensure people were aware of the risks. Because so many people were shocked by what was able to be skimmed.
I’m convinced that everyone pissed at the company for users reusing passwords has a reading comprehension problem because I definitely already answered your first question in the comment you responded to.
I haven’t used the service either - I don’t want more of my data out there. So I can’t answer the other questions.
Users were probably not thinking about the implications of a breach after sharing but it stands to reason that if you share data with an account, and that account gets compromised, your data is compromised.
We’ve all been through several of those from actual hacks at other companies (looking at you, T-Mobile). I refuse to believe people aren’t aware of this general issue by now.
It was credential stuffing. Basically these people were hacked in other services. Those services probably told them “Hey, you need to change your password because our database was hacked” and then they were like “meh, I’ll keep using this password and won’t update the other services where I reuse it, even though it leaked along with personally identifiable information about me and my relatives”.
Both are at fault, but the users reusing passwords with no MFA are dumb as fuck.
by brute-forcing accounts with passwords that were known
That’s not what “brute force” means.
Agreed.
Blaming your customers is definitely a strategy. It’s not a good one, but it is a strategy.
BRB deleting my 23AndMe account
As if deleting your account deletes your data.
Surely they have a GDPR-compliant way to have your info removed. Right?
They’re an American company, and I’m not yet aware of any lawsuits setting the precedent of the GDPR applying to server infrastructure in the USA, which is outside the jurisdiction of the GDPR.
So if they’ve copied your data to their American servers already (you can bet they have), it’s there for good.
UPDATE user_data SET deleted = 1 WHERE ID = you.
Done. Data deleted. All gone forever. Definitely doesn’t just hide it from the user.
You should parameterize that query before Little Bobby enrolls.
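For the avoidance of Bobby Tables, the soft-delete one-liner above with a bound parameter looks something like this (sqlite3 used purely for illustration; the table and column names are the joke’s, not 23andMe’s):

```python
# The soft-delete above, written with a bound parameter so user input can never be
# interpreted as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_data (id INTEGER PRIMARY KEY, deleted INTEGER DEFAULT 0)")
conn.execute("INSERT INTO user_data (id) VALUES (?)", (42,))

user_id = "42; DROP TABLE user_data; --"   # Little Bobby's input stays inert data
conn.execute("UPDATE user_data SET deleted = 1 WHERE id = ?", (user_id,))
conn.commit()

print(conn.execute("SELECT id, deleted FROM user_data").fetchall())  # [(42, 0)] - table intact
```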
OP spreading disinformation.
Users used bad passwords. Their accounts were accessed using their legitimate, bad, passwords.
Users cry about the consequences of their bad passwords.
Yeah, 23AndMe has some culpability here, but the lion’s share is still on the users themselves
From these 14,000 initial victims, however, the hackers were able to then access the personal data of the other 6.9 million victims because they had opted in to 23andMe’s DNA Relatives feature.
How exactly are these 6.9M users at fault? They opted in to a feature of the platform that had nothing to do with their passwords.
On top of that, the company should have enforced strong passwords and forced 2FA for all accounts. What they’re doing is victim blaming.
users knowingly opted into a feature that had a clear privacy risk.
Strong passwords often aren’t the issue, password re-use is. If unsalted or unhashed passwords were compromised in a previous breach, then it doesn’t matter how strong those passwords are.
Every user who was compromised:
- Put their DNA profile online
- Opted to share their information in some way
A further subset of users failed to use a unique and strong password.
A 2FA token (think Matrix) might have helped here; other than that, individuals need to take greater responsibility for personal privacy. This isn’t an essential service like water, banking, electricity, etc. This is a place to upload your DNA profile…
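On the unsalted/unhashed point a few lines up: what the other breached site should have been storing is a per-user random salt plus a slow hash, so a stolen table can’t be replayed as ready-made credentials. A minimal sketch with hashlib.scrypt; the parameters are illustrative, not a recommendation:

```python
# Store a per-user random salt plus a slow hash, never the password itself, so a
# stolen database can't be used directly as credentials.
import hashlib, hmac, secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
```

Note this protects passwords at rest at the site that stores them; it does nothing once a user reuses the same password somewhere that leaks it in usable form.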
As I said elsewhere, the company implemented this feature and apparently did not do absolutely jack about the increased risk of account compromise deriving from it. If I would sit in a meeting discussing this feature I would immediately say that accounts which share data with others are way too sensitive and at least these should have 2fa enforced. If you don’t want it, you don’t share data. Probably the company does not have a good security culture and this was not done.
users knowingly opted into a feature that had a clear privacy risk.
Your aunt who still insists she’s part Cherokee is not as capable of understanding data security risks as the IT department of the multi-million dollar company that offered the ludicrously stupid feature in the first place.
People use these sites once right? Who’s changing their password on a site they don’t log into anymore? Given that credential stuffing was inevitable and foreseeable, the feature is obviously a massive risk that shouldn’t have been launched.
Are you telling me a password of “23AndMe!” is bad? It meets all the requirements.
Users used bad passwords. Their accounts were accessed using their legitimate, bad, passwords.
Just as an anecdotal counterpoint, I am a 23andMe customer who did receive notification that my account was accessed and personal information obtained.
This was my password at the time: 7Kk5bXjIdfB25
That password was auto-generated for me by the BitWarden app.
So for what it’s worth I don’t think my password was a ‘bad’ password.
Was your account directly accessed, or was some of your information accessed through a compromised account? Those are big differences, and from what I’ve read only the latter should have been possible. And, in my opinion, that’s not such a big deal.
How am I spreading disinformation? I just contributed an article I found interesting for discussion.
It’s worth noting that OP simply used the article title.
The article title is a little biased, individuals must take greater personal responsibility.
I don’t know title etiquette in this forum. I used the author’s title because it is their article, not mine, and thus their opinion/research/AI output.
Oh no, I was just pointing it out for others. I think using the title post is perfectly reasonable.
Thank you for posting, I found it interesting.
The lion’s share IMHO is with 23&me. Offering such a poorly secured service is negligence, given the data’s highly sensitive nature.
Yeah, 23AndMe has some culpability here, but the lion’s share is still on the users themselves
Tell me you didn’t read the article without telling me.
If 14,000 users who didn’t change a password on a single use website they probably only ever logged into twice gives you 6.9 million user’s personal info, that’s the company’s fault.
You didn’t read it either. They gained access to shared information between the accounts because both accounts had enabled “share my info with my relatives” option.
Logging into someones Facebook and seeing their friends and all the stuff they posted as “friends only” and their private DM discussions isn’t a hack or a vulnerability, it’s how the website works.
It doesn’t matter. It is a known attack and the company should have implemented measures against it.
At the very least, they should have made a threat modeling exercise and concluded that with this sharing feature, the compromise of a single account can lead to compromise of data for other users. One possible conclusion is that users who shared data should be forced to have 2fa.
It doesn’t matter. It is a known attack and the company should have implemented measures against it.
At the very least, they should have made a threat modeling exercise and concluded that with this sharing feature, the compromise of a single account can lead to compromise of data for other users. One possible conclusion is that users who shared data should be forced to have 2fa.
Launching a feature that lets an inevitable attack access 500 other people’s info for every compromised account is a glaring security failure.
Accounting for foreseeable risks to users’ data is the company’s responsibility and they launched a feature that made a massive breach inevitable. It’s not the users’ fault for opting in to a feature that obviously should never have been launched.
deleted by creator
23andMe admitted that hackers had stolen the genetic and ancestry data of 6.9 million users
I’m honestly asking what the impact to the users is from this breach. Wasn’t 23andMe already free to sell or distribute this data to anybody they wanted, without notifying the users?
That’s not how this works. They are running internationally, and GDPR would hit them like a brick if they did that.
I would assume they had some deals with law enforcement to transmit data under narrow circumstances.
I’m honestly asking what the impact to the users is from this breach.
Well if you signed up there and did an ancestry inquiry, those hackers can now without a doubt link you to your ancestry. They might be able to doxx famous people and in the wrong hands this could lead to stalking, and even more dangerous situations. Basically everyone who is signed up there has lost their privacy and has their sensitive data at the mercy of a criminal.
This is different. This is a breach, and if you have a company taking care of such sensitive data, it’s your job to do the best you can to protect it. If they really do blame this on the users, they are in for a class action and a hefty fine from the EU, especially now that they’ve established even more guidelines towards companies regarding the maintenance of sensitive data. This will hurt in some regard.
If they really do blame this on the users
It’s not that they said:
It’s your fault your data leaked
What they said was (paraphrasing):
A list of compromised emails/passwords from another site leaked, and people found some of those worked on 23andme. If a DNA relative that you volunteered to share information with was one of those people, then the info you volunteered to share was compromised to a 3rd party.
Which, honestly?
Completely valid. The only way to stop this would be for 23andme to monitor these “hack lists” and notify any email that also has an account on their website.
Side note:
Any tech company can provide info if asked by the police. The good ones require a warrant first, but as data owners they can provide it without a warrant.
That’s not 23andMe’s fault at all then. Basically it boils down to password reuse. All I would say is they should have provided 2fa if they didn’t.
All i would say is they should have provided 2fa if they didn’t.
At this point, every company not using 2FA is at fault for data hacks. Most people using the internet have logins to hundreds of sites. Knowing where to go to change all your passwords is nearly impossible for a seasoned internet user.
A seasoned internet user has a password manager.
Not using one is your negligence, no one else’s.
One password to break them all, and in the dark web bind them.
The sad thing is you have to balance the costs of requiring your customer to use 2FA with the risk of losing business because of it and the risk of losing reputation because your customers got hacked and suffered loss.
The sad thing is some (actually most) people are brain dead; you will lose business if you make them use a complicated password or MFA, and it puts them in the position of making a hard call.
They took the easy route and gave the customer the option to use MFA if they wished, and unfortunately a lot of people declined. Those people should not have the ability to claim damages (or vote, for that matter)
I feel like that argument could be made for some things, but inherently cannot apply to companies involved in personal, genetic, or financial information.
The only way to stop this would be for 23andme to monitor these “hack lists”
Unfortunately, from the information that I’ve seen, the hack lists didn’t have these credentials. HIBP is the most popular one and it’s claimed that the database used for these wasn’t posted publicly but was instead sold on the dark web. I’m sure there’s some overlap with previous lists if people used the same passwords but the specific dataset in this case wasn’t made public like others.
I would guess (hope?) that the data sets they sell are somewhat anonymized, like listing people by an i.d. number instead of the person’s name, and not including contact information like home address and telephone number. If so then the datasets sold to companies don’t contain the personal information that hackers got in this security breach.
I’m honestly asking what the impact to the users is from this breach.
The stolen info was used to build databases of people with Jewish ancestry that were sold on the dark web. I think there was a similar DB of people with Chinese ancestry. 23andme’s poor security practices have directly helped violent white supremacists find targets.
If you’re so incompetent that you can’t stop white supremacists from getting identifiable information about people from minorities, there is a compelling public interest for your company to be shut down.
That is a whoooolllee lot of assumptions
Why do you think someone would buy illegally obtained lists of people with Jewish or Chinese ancestry? And who do you think would be buying it?
Scammers, that opens up a lot of scam potential.
Hi, I’m your new cousin.
Scammers would buy all info, not specifically targeted to people of Jewish or Chinese descent. That’s not what’s being sold.
Who do you think would want only information about people with Jewish or Chinese ancestry, and why?
OK you’re gonna have to give me a link to what you’re talking about. It feels like you are being specific, and I am being generic.
It’s the same incident, the OP article just didn’t mention it.
deleted by creator
Reusing credentials is their fault. Sure, 23&me should’ve done better, but someone was likely to get fucked, and if you’re using the same password everywhere it is objectively your fault. Get a password manager, don’t make the key the same compromised password, and stop being stupid.
It’s at least 99.8% the company’s fault.
Even if we blame those 14k password reusers, we’re blaming 1 in every 500 victims. Being able to access the genetic information and names of 6.9 million people - half your entire customer base! - by hacking 0.2% of it is the fault of the company. They structured that access and failed to act on the obvious threat it represents.
But why blame password reusers? Not every grandparent interested in their family tree is capable of even understanding data security, let alone juggling multiple passwords or a PW manager.
Credential stuffing is an inevitable part of security landscape - especially for one time use accounts like genetics sites. A multimillion dollar IT department is just clearly responsible for preventing egregious data security failures.
They didn’t get genetic raw data of anyone beyond the 14K, they got family relationship information. Which is an option you can turn on or off, if you want. It’s very clear that you’re exposing yourself to other people if you choose to see who you’re related to. It doesn’t expose raw data and it doesn’t instantly expose names, just how they’re related to you. (And most of the “relations” are 3rd to 5th cousins, aka strangers.)
Hackers used the genetic ancestry data of the 14K hacked users and their “relatives” connections to deduce large families of Ashkenazi Jews.
Given the sensitivity of the data in both cases they should have had mandatory 2fa set up. However, the other person is right, there’s probably a ton of tech illiterate people using this and they likely saw better security as barriers to entry and making less money.
some people just aren’t that worried about sharing their dna info. Hell, I’d venture I’d give an actual sample to a good % of the population if they asked me in a sexy way.
I would say it’s partially their fault. IMHO 23&me is mainly to blame. They should’ve enforced (proper) 2FA. Sure, people should’ve known better, but they didn’t; they often don’t. But 23&me did know better.
Edit: spelling
“users negligently recycled and failed to update their passwords following these past security incidents, which are unrelated to 23andMe…Therefore, the incident was not a result of 23andMe’s alleged failure to maintain reasonable security measures,”
This is a failure to design securely. Breaking into one account via cred stuffing should give you access to one account’s data, but because of their poor design hackers were able to leverage 14,000 compromised accounts into 500x that much data. What that tells me is that, by design, every account on 23andMe has access to the confidential data of many, many other accounts.
It’s terrible design. If they know their users are going to do this, they’re supposed to work around that. Not leave it as a vulnerability.
I don’t think so. Those users had opted in to share information within a certain group. They’ve already accepted the risk of sharing info with someone who might be untrustworthy.
Plenty of other systems do the same thing. I can share the list of games on my Steam account with my friends - the fact that a hacker might break into one of their accounts and access my data doesn’t mean that this sharing of information is broken by design.
If you choose to share your secrets with someone, you accept the risk that they may not protect them as well as you do.
There may be other reasons to criticise 23andMe’s security, but this isn’t a broken design.
And it’s your fault you have access to them. Stop doing bad things and keep your information secure.
You clearly have no familiarity with the principles of information security. 23andMe failed to follow a basic principle: defense in depth. The system should be designed such that compromises are limited in scope and cannot be leveraged into a greater scope. Password breaches are going to happen. They happen every day, on every system on the internet. They happen to weak passwords, reused passwords and strong passwords. They’re so common that if you don’t design your system assuming the occasional user account will be compromised, then you’re completely ignoring a threat vector, which is on you as a designer. 23andMe didn’t force 2-factor auth (https://techcrunch.com/2023/11/07/23andme-ancestry-myheritage-two-factor-by-default/) and they made it so every account had access to information beyond what that account could control. These are two design decisions that enabled this attack to succeed, and then escalate.
Fiivemacs was joking, speaking in 23&me’s voice. They don’t actually believe it’s the user’s fault.
That was very much sarcasm on my part
Didn’t say /s…
Bro just don’t have DNA.
If you were really on your sigma grindset, your DNA would have never existed.
More of a gamma grindset since if you get hit by enough of those rays, you might not have recognizable DNA anymore.
…this checks out. Gamma grindset origin story.
Too late man…
They’re right. It’s the customer’s fault for giving them the data in the first place.
But hear me out, I have no control over my cousin or aunt or some random relative getting one of these tests and now this shitty company has a pretty good idea what a large chunk of my DNA looks like. If people from both sides of my family do it they have an even better idea what my genetic profile looks like. That’s not my fault, I never consented to it, and it doesn’t seem ok.
I also know about 99.9% of your DNA.
Sorry, I thought it was obvious that we’re talking about the part that varies by individual humans…
It was, just being a smartass.
If your credit card information gets stolen because someone stole it from a website you bought something off of, is that your fault?
I can change my credit card. I can’t change my dna. This wasn’t even for any medical reasons. 23andme is just a vanity service.
I have a relative who did it.
But they are super into genealogy.
At this point, to go deeper, they would need to learn a new language and travel half way across the world.
I was not consulted before this was done. I would have cautioned against it.
And what of the money lost? Should the credit card company say “well you’re an idiot that gave sensitive information to some company, we’re not going to help you?” It’s still victim blaming.
In reality, yes. If the data breach happened because users were reusing passwords, then they are partially at fault. If someone gets rear-ended by a drunk driver and their injuries could have been limited by wearing a seatbelt, then yes, they are partially at fault for it. People who don’t wear their seatbelts are the same types that reuse passwords. They don’t think it will happen to them and take their luck up to that point for granted.
Even if they are partially at fault, the company tends to have more power to fix security problems than the customer does. That’s why we tend to put the onus on the company to fix these issues. It’s not really fair to put it on either one for something criminals did, but at least the company has more power to control things.
In the case of credit cards, the US industry has implemented PCI compliance to force a level of security on all the individual companies. Now, I happen to think PCI is a flawed approach. Payment gateways in most other countries work something like PayPal or Google Wallet, where only the processing company ever sees payment data. The merchant only sees that the payment is verified and has the correct amount. However, US internet sites evolved where each individual merchant has to hold on to credit card data, and that necessitates PCI. Fortunately, PCI compliance is such a PITA that many companies are turning to payment gateways like everywhere else in the world.
In the case of 23andme, they had a few broken passwords that then affected half their customer base through the relationship feature. Aside from dropping relationships, they also could have used MFA methods. My Steam account uses MFA, and it’s far less important than my DNA information.
Bad analogy. The only people who had their information exposed are people who reused passwords and people who decided to make their info semi-public. It’s more like deciding to tell all your cousins and 2nd cousins your credit card info and one of them leaked it.
And then trying to hold the card issuer liable rather than your cousin…
This is such a fucking braindead, victim blaming take.
They became a victim the moment they gave their data to that company. Why is anyone that works at 23andme more trustworthy than rando hackers? They aren’t obligated to any HIPAA laws.
I SHOULD NOT BE GETTING GASLIT FOR WHAT SEEMED LIKE A NEAT IDEA AT THE TIME
Absolutely; and this is another example in a long list which should serve as a lesson for people to not share their personal data with any company if possible. Yet, I feel that lesson will never be learned.
Gentle reminder to plop your email address into haveibeenpwned.com and see if you, much like 14,000 23andMe users, have had an account compromised somewhere. Enable two-factor where you can and don’t reuse passwords.
Welp, my two Gmail addresses have been pwned. Good thing I don’t use them and I have limited use of Google services.
Just to clarify; It doesn’t necessarily mean that your Google account password is compromised. It lists data breaches of services where you used the provided email to register. The password you chose for that service at the time of the breach has been compromised. If you don’t use the same password everywhere, or changed your password after the breach, your other accounts are not compromised.
Also, as OP said, use two-factor authentication. And please also use a password manager.
I understand that. I use KeePassXC and love it. I just notice that those gmail accounts get all the spam so I abandoned them.
It’s saying I’ve been hacked on websites I’ve legitimately never even heard of, websites I have 100% never interacted with. Is this just a normal consequence of companies sharing all my data with other companies?
I can’t speak to how you ended up on the list. The way haveibeenpwned works is that they crawl publicly available credential dumps and grab the associated usernames/emails for each cred pair. However it got there, your email ended up in one of those dumps. Recommend you change your passwords, make sure you don’t repeat the same password across multiple sites and use a password manager so you don’t have to remember dozens of passwords yourself.
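If you want to automate that check instead of pasting addresses into the site, HIBP exposes it over an API. As I recall the v3 docs, the account-search endpoint needs a paid API key in a hibp-api-key header plus a user-agent, and returns 404 when the address isn’t in any indexed dump; treat the details below as assumptions to verify against the current documentation:

```python
# Hedged sketch of checking an email against HIBP's v3 "breachedaccount" endpoint.
# Endpoint, headers and response shape are from memory of the public docs.
import json
import urllib.parse
import urllib.request
from urllib.error import HTTPError

def breaches_for(email: str, api_key: str) -> list[str]:
    req = urllib.request.Request(
        "https://haveibeenpwned.com/api/v3/breachedaccount/" + urllib.parse.quote(email),
        headers={"hibp-api-key": api_key, "user-agent": "breach-check-sketch"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return [b["Name"] for b in json.load(resp)]   # breach names only (truncated response)
    except HTTPError as err:
        if err.code == 404:          # not found in any indexed breach
            return []
        raise

# print(breaches_for("you@example.com", "YOUR-API-KEY"))
```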
deleted by creator
Giving your genetic info to them is the first mistake
I see this trend of websites requesting your identification and all I think is: I don’t even trust my own government with a copy, why the hell should I trust a business?
Instant skip.
And I agree with them, I mean 23andMe should have a brute-force resistant login implementation and 2FA, but you know that when you create an account.
If you are reusing creds you should expect to be compromised pretty easily.
A successful breach of a family member’s account due to their bad security shouldn’t result in the breach of my account. That’s the problem.
Edit: so people stop asking, here’s their docs on DNA relatives: https://customercare.23andme.com/hc/en-us/articles/212170838
Showing your genetic ancestry results makes select information available to your matches in DNA Relatives
It clearly says select information, which one could reasonably assume is protecting of your privacy. All the reports seem to imply the hackers got access to much more than just the couple fun numbers the UI shows you.
At minimum I hold them responsible for not thinking this feature through enough that it could be used for racial profiling. That’s the equivalent of being searchable on Facebook but they didn’t think to not make your email, location and phone number available to everyone who searches for you. I want to be discoverable by my friends and family but I’m not intending to make more than my name and picture available.
A successful breach of a family member’s account due to their bad security shouldn’t result in the breach of my account. That’s the problem
I mean…
You volunteered to share your info with that person.
And that person reused a email/password that was compromised.
How can 23andme prevent that?
It sucks, but it’s the fault of your relative that you entrusted with access to your information.
No different than if you handed them a hardcopy and they left it on a table at McDonald’s.
Quick edit:
It sounds like you think your account would be compromised; that’s not what happened. Only the info you shared with the compromised relative becomes compromised. They don’t magically get your password.
But you still chose to make it accessible to that relative’s account by accepting their request to share.
Could I please have your personal information?
No.
See… it’s that easy.
Ok, who else would be able to give me your personal information? I’ll go get it from them instead.
Your mom has my contact information. You can ask her.
/pwn3d.
Oh, so you’re actually not consenting to have some personal information you’ve given to family given to me as well? Odd, you sure seemed ok when it was people having their information snagged from 23andMe.
And that’s exactly how the attackers got in in the first place lol.
The ding dongs used the same creds elsewhere which were leaked.
Thank you for explaining the point I was making to me.
Yep, it was 14,000 accounts that were hacked; the other 6.9 million were exposed through that DNA Relatives functionality they have. Unfortunately, 23andMe’s response is what to expect, since companies will never put their customers’ safety ahead of their profits.
It doesn’t. Sharing that info was opt-in only. In this scenario, no 23andMe accounts were breached. The users reused their credentials from other sites. It would be like you sharing your bank account access with a family member and their account getting accessed because their banking password was “Password1” or their PIN was “1234”.
So if you enabled an opt-in setting that allows sharing data between accounts, and you are surprised that data was shared between accounts, how is that not your fault?
afaik there was no breach of private data, only the kind of data shared to find relatives, which is opt-in and obviously not private to anyone who has seen how this service works. In other words, the only data “leaked” was the kind of data that was already shared with other 23andMe users.
Name, sex, and ancestry were sold on the dark web; that’s a breach of private data.
The feature that lets a hacker see 500 other people’s personal information when they hack an account is obviously a massive security risk, especially if you run a single-use service: no one updates their password on a site they don’t use anymore.
Launching the feature in the first place made this inevitable.
Name, sex, and ancestry were sold on the dark web; that’s a breach of private data.
It would be a breach if the data were private, but the feature itself exposes this data. That would be like performing a concert for hundreds of people and then complaining that your facial features were leaked on social media.
You shouldn’t have shared your information with someone who is untrustworthy then. Data sharing is opt-in.
Credential stuffing attacks will always yield results on a single-use website, because no one changes their password on a site they don’t use anymore.
Launching a feature that enables an inevitable attack to access 500 other people’s info is very clearly the fault of the company who launched the feature.
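And it’s a detectable attack pattern too: one source cycling through lots of different usernames looks nothing like a real user typo-ing their own password. Toy Python heuristic below; the thresholds are made up for illustration, and a botnet spread across many IPs makes this harder, which is why you’d combine it with MFA and rate limiting rather than rely on it alone:

```python
# Toy heuristic for spotting credential stuffing: a single source attempting
# logins against many *different* accounts in a short window is suspicious.
# Thresholds are invented for illustration only.
import time
from collections import defaultdict

WINDOW_SECONDS = 600           # look at the last 10 minutes
DISTINCT_ACCOUNT_LIMIT = 20    # more distinct accounts than this from one source -> flag

_attempts: defaultdict[str, list[tuple[float, str]]] = defaultdict(list)

def looks_like_stuffing(source_ip: str, username: str) -> bool:
    now = time.time()
    recent = [(t, u) for t, u in _attempts[source_ip] if now - t < WINDOW_SECONDS]
    recent.append((now, username))
    _attempts[source_ip] = recent
    return len({u for _, u in recent}) > DISTINCT_ACCOUNT_LIMIT
```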
How do you and the surprising number of people who upvoted you want options on websites to work?
These people opted into information sharing.
When I set a setting on a website, device, or service, I damn sure want the setting to stick. What else would you want? Force users to set the setting every time they log in? Every day?
Wtf?
Even if you didn’t reuse a compromised password yourself, the fact that your relatives did indicates that you’re genetically predisposed to bad security practices. /s
A successful breach of a family member’s account due to their bad security shouldn’t result in the breach of my account. That’s the problem.
How the hell would they prevent that if you voluntarily shared a bunch of information with the breached account? This is like being mad that your buddy’s Facebook account got breached and someone downloaded shared posts from your profile, too. It’s how the fucking service works.
Is it also the users’ fault that the 6,898,600 people who didn’t reuse a password were still breached?
Yes, because you have to choose to share that data with other people. 23andMe isn’t responsible if grandma uses the same password for every site.
23andMe is responsible for sandboxing that data, however. Which they obviously didn’t do.
The users opted in to share that data.
You opt in to share your data with Facebook. Would you still consider it an issue if your data was breached because someone else’s account was hacked?
I would consider it normal that photos I only share with some people were leaked if one of those people’s accounts got hacked.
Sure, it’s a breach, but I would blame my idiot friend for re-using passwords. I wouldn’t blame the service for doing exactly what I expected the service to do, which is the reason I chose to use the service in the first place.
It’s also the reason I’m very selective about what I share with anyone online, friend or otherwise.
If you share your nudes with the “friends only” privacy setting on Facebook, and someone else accesses one of your friends’ accounts because they reused their password and proceeds to leak those photos, is it the fault of Facebook, your friend, the person leaking them, or you?
Because that is exactly what happened here: credential stuffing of reused passwords, and scraping of opt-in “friends only” data shared between accounts.
Private health data was compromised as well, on a smaller scale. It doesn’t make sense to blame users for a security breach of a corporation, literally ever. That’s my point. The friend was dumb, and you shared something maybe you shouldn’t have. But that doesn’t also absolve the company of poor security practices. I very strongly doubt that 14,000 people knew or consciously chose to directly share with a collective 7 million people.
Did you not read my comment? Users opt in to sharing data with other accounts, which means if one account is compromised, then every account that allowed it access would have their data compromised too. That’s not on the company, because the feature can’t work without allowing access.
They weren’t breached. The data they willingly shared with the compromised accounts was available to the people that compromised them.
Pretty sure nobody clicked a button that said “share my data with compromised accounts.”
There was a button that said “share my data with this account”. If that person went and shared that info publicly, how is that any different? The accounts were accessed with valid credentials through the normal login process. They weren’t “breached” or “hacked”.
I mean, if you use the same password on all websites, even a strong one, it is legitimately your fault. Not your fault that it was leaked, or that the company had shit security practices, but your fault for not doing due diligence given the current state of online security best practices.
Not your fault if you did have a strong, unique password but your data was leaked through the sharing anyway…
Well, it’s also their fault for falling for 23andMe in the first place, because it’s basically a scam. The data starts as self-selected data sets, and correlating a few markers tested once to match you to their arbitrary groups isn’t exactly how genetics research is done.
It’s actually about as cheap, maybe cheaper, to get 50x full genome sequencing from a company that doesn’t sell your data; 23andMe’s business model was running a few marker tests to appease an audience they kept in the dark about how modern genetics works, then keeping the sample for full genome sequencing later, because that shit only gets more valuable over time.
It’s what makes genetics weird: a sample taken 10 years ago will reveal so much more about you 5 years from now, like massively more.