I had something similar happen at my previous job. The company was using Google Workspace and GCP. The person who had set up GCP initially left the company, and a month later HR deleted his Google account. Little did we know, the payment profile was attached to the GCP projects through his account, so deleting his account effectively removed the CC. All our stuff was shut down immediately (within minutes).
At first we had no idea what was going on. GCP support just told us "it seems you deleted your CC". Eventually, we figured out what had happened.
We set up a new payment profile and started migrating our GCP projects to it. We eventually had to create several profiles, because there is an arbitrary quota on how many projects you can attach per payment profile (~4), and support told us it would take days to increase it.
Fortunately, all our data was still there, even though support had initially told us it was "all gone".
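
For anyone untangling the same mess: checking and relinking the billing account on each project can be scripted. A rough sketch with the google-cloud-billing Python client (the project IDs and billing account ID below are placeholders, not real ones):

    # Sketch: find projects whose billing link is gone and re-attach them
    # to a new billing account. All IDs are placeholders.
    from google.cloud import billing_v1

    NEW_BILLING_ACCOUNT = "billingAccounts/000000-AAAAAA-BBBBBB"  # placeholder
    PROJECT_IDS = ["prod-api", "prod-db", "staging"]              # placeholders

    client = billing_v1.CloudBillingClient()

    for project_id in PROJECT_IDS:
        info = client.get_project_billing_info(name=f"projects/{project_id}")
        if info.billing_enabled:
            print(f"{project_id} -> {info.billing_account_name}")
            continue
        # Billing link is gone (e.g. the payment profile was deleted along with
        # the account that owned it); point the project at the new billing account.
        info.billing_account_name = NEW_BILLING_ACCOUNT
        client.update_project_billing_info(
            name=f"projects/{project_id}",
            project_billing_info=info,
        )
        print(f"relinked {project_id}")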
That's why you always use service accounts for those kinds of things. Admin, billing, etc. Never let a "daily driver" account hold the keys to the kingdom.
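
Auditing who actually holds billing admin is cheap to do, too. A minimal sketch, again with the google-cloud-billing client and a placeholder billing account ID, that flags personal accounts holding the role:

    # Sketch: warn if roles/billing.admin is held by an individual user account
    # rather than a group or service account. Billing account ID is a placeholder.
    from google.cloud import billing_v1

    BILLING_ACCOUNT = "billingAccounts/000000-AAAAAA-BBBBBB"  # placeholder

    client = billing_v1.CloudBillingClient()
    policy = client.get_iam_policy(resource=BILLING_ACCOUNT)

    for binding in policy.bindings:
        if binding.role != "roles/billing.admin":
            continue
        for member in binding.members:
            if member.startswith("user:"):
                # A "daily driver" account holds the keys to the kingdom.
                print(f"WARNING: {member} is a billing admin")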
There's currently no requirement for a real name (though Google did push that at one point, back when it was going nuts with Google+), but they do push really strongly for a cell phone number, which can easily end up attached to someone who is later no longer with the company.
You need to manufacture a persona (with password hints, a cell phone plan, etc) to really be secure - or have multiple avenues to access your system.
Their point still stands. A false positive algorithmic account deactivation coupled with the impossibility of getting a human to review the decision is a very real scenario.
Unfortunately, personal accounts come with a usable quota, while service accounts have to go through all the approvals (at least four: getting allocated at all, which cost center, what resource allocation, what scheduling priority, and that's assuming you need zero "special permissions").
Doing with a service account what you can do with a personal account takes weeks before you can even get started, and it informs the whole management chain of what you're doing, which means every manager who could complain about it learns exactly the right time to complain in order to be maximally obstructionist about it.
Or to put it perhaps less ...: using service accounts requires the company's processes to be well thought out, well resourced with people who understand the system (which, as this issue shows, they don't even have at Google itself), well planned, and generally cooperative. Often, there will be a problem somewhere.
> Google Cloud accidentally deletes UniSuper’s online account due to ‘unprecedented misconfiguration’
which is a lot more alarming.
I've heard of sudden and unwarranted bans before, but never an accidental deletion of a customer who they only just convinced to migrate to Google Cloud last year!
Yes, I'm surprised this hasn't hit the top of Hacker News and has instead gone unnoticed. If Google did delete the account, this is massive.
A large pension fund with an advertised $124 billion in funds under management - not some toy cat-gif startup - had its account accidentally deleted by Google. That can very easily wipe out a company that uses the cloud the way cloud vendors advertise you should. From the article it sounds like they were lucky to have offsite backups, but there was still potential for data loss, and restoring offsite backups is likely a task in itself.
It's a major incident, I feel for the ops team who'll be working under massive pressure trying to get everything back up.
Indeed. My gut feeling is that most companies using AWS, Azure or Google Cloud are not going to be making backups elsewhere. I wonder how much data would've been lost if they didn't have backups elsewhere?
Interestingly the Australian financial services regulator (APRA) has a requirement that companies have a multi-cloud plan for each of their applications. For example, a 'company critical' application needs to be capable of migrating to a secondary cloud service within 4 weeks.
I'm not sure how common this regulation would be across industries in Australia or whether it's something that is common in other countries as well.
US federal financial regulators and NYDFS have similar concerns and strong opinions, but nothing in statute or regulatory rule making yet (to my knowledge; I haven’t had to sit in those meetings for about 2 years).
No, back up your data to a service independent of the service hosting what you're backing up.
It reminds me of people saying "I use Box/iCloud/some-other-cloud-drive service so I don't need backups", not understanding that the model is "I deleted/broke the data on my machine, and then synced that data loss to every machine".
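
The cheap insurance is a copy the primary provider can't reach. A minimal sketch of a GCS-to-S3 copy job using google-cloud-storage and boto3 (bucket names are placeholders, and a real job would stream large objects and verify checksums rather than buffering everything in memory):

    # Sketch: copy every object from a GCS bucket to an S3 bucket at a
    # different provider. Bucket names are placeholders; credentials for
    # both clouds are assumed to already be configured in the environment.
    import boto3
    from google.cloud import storage

    GCS_BUCKET = "my-prod-backups"    # placeholder
    S3_BUCKET = "my-offsite-backups"  # placeholder

    gcs = storage.Client()
    s3 = boto3.client("s3")

    for blob in gcs.bucket(GCS_BUCKET).list_blobs():
        # Fine for small objects; this buffers each object fully in memory.
        s3.put_object(Bucket=S3_BUCKET, Key=blob.name, Body=blob.download_as_bytes())
        print(f"copied {blob.name}")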
Google Cloud accidentally deleted a company's entire cloud environment (Unisuper, an investment company, which manages $80B). The company had backups in another region, but GCP deleted those too. Luckily, they had yet more backups on another provider.
Whoever made the decision to use GCP should be fired. Google’s random deletion of accounts and their resources is well known. Somehow there wasn’t anyone in the whole organisation who knew about this risk or Google had convinced them it doesn’t happen to big players.
This article doesn’t challenge the assertion by Google that this is a once-off, which is really sloppy journalism.
I really would like to hear the actual-actual story here, since it's basically impossible that it really was "Google let customer data be completely lost in ~hours/days". This is compounded by the bizarre announcements: UniSuper putting up TK quotes on their website, which Google neither publishes nor disputes.
If a massive client came and said "hey, our thing is completely broken", then there would have been a war room of SREs and SWEs running 24/7 across two continents until it wasn't.
It's just mind-boggling that their architecture allows this to happen so quickly, IMO. There are so many resources and dependencies that completely nuking a cloud account cannot and should not be easy or fast... and should not actually be something the cloud vendor is able to do.
I suppose they need to guard against someone setting up costly infrastructure and doing a "runner" (letting the credit card lapse) - but even in that scenario, deleting all the customer's data should be the absolute last resort, after it's been reasonably determined that they are being malicious.
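
The obvious shape for that is a staged teardown: suspend when the card lapses, keep the data through a grace period, and only delete once the window has passed and the customer has been contacted. A toy sketch of that state machine (a generic pattern, not any vendor's actual process):

    # Toy sketch of a staged teardown: suspend -> retain -> delete.
    # A generic pattern, not GCP's or AWS's actual process.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    GRACE_PERIOD = timedelta(days=30)

    @dataclass
    class Account:
        name: str
        suspended_at: datetime | None = None

    def on_billing_lapsed(account: Account, now: datetime) -> str:
        if account.suspended_at is None:
            account.suspended_at = now
            return "suspend"   # stop serving traffic, keep all data
        if now - account.suspended_at < GRACE_PERIOD:
            return "retain"    # still inside the grace window; keep notifying
        return "delete"        # absolute last resort, after the window expires

    acct = Account("example-customer")
    print(on_billing_lapsed(acct, datetime(2024, 5, 1)))   # suspend
    print(on_billing_lapsed(acct, datetime(2024, 5, 10)))  # retain
    print(on_billing_lapsed(acct, datetime(2024, 6, 15)))  # delete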
How does AWS manage these scenarios? I'm sure they follow up multiple times before hitting the nuke button. In fact, they know their "larger accounts" and treat them with special privileges and assurances. UniSuper is not a small fish.
Hate to disillusion you, but exactly this happened to us on AWS a couple of years ago. A month's research compute - no biggie, but no more AWS for us either.
To counterpoint this with a bit of "not for thee, but for me": if you are spending > $1m/yr with AWS, this will never happen. The decisions need to go through a TAM, who will block this kind of thing.
For smaller users I can imagine this sort of thing happens pretty regularly with every cloud, even smaller ones.
Please reply if you had a TAM and were spending that much! I'd be personally interested to hear that was the case.
No, we had the lowest tier (research - funding is an issue) - but more seriously, SMEs will get trapped by this as well, and if I had $1m/yr I would definitely be running my own datacenter.
They offered to investigate if we paid for support; I counter-offered with not using the chat script in one of my courses as an example of AWS customer "support", and at least we ended up getting a full refund.
Three of those are about Exchange and one is about Bing (it involved AAD, but it was a misconfigured AAD app, not an issue with AAD itself). The teams that run Azure are in entirely separate organizations with wildly different product stacks.
Exchange has a bunch of decades old infrastructure and is a security nightmare afaik. Dunno much about Bing.
The "org chart" graphic with MS orgs all pointing guns at each other is real shit. Different orgs have very different security postures, and Azure's is much stronger than others.