Super fund boss and Google Cloud global CEO issue joint statement apologising for ‘extremely frustrating and disappointing’ outage
A week of downtime, and the data was only recovered because the customer had a proper disaster recovery protocol and held backups with another provider; otherwise Google would have deleted the backups too.
Google Cloud's CEO says "it won't happen again". It's insane that an "instantly delete everything" failure mode is even possible.
That's what I've been trying to explain to my family forever. Their answer always amounts to something like "it would be illegal for them to look at my data!" As if those companies would care.
They said the outage was caused by a misconfiguration that resulted in UniSuper’s cloud account being deleted, something that had never happened to Google Cloud before.
Bullshit. I've heard of people having their Google accounts randomly banned or even deleted before. Remember when the Terraria devs cancelled the Stadia port of Terraria because Google randomly banned their account and then took weeks to acknowledge it? The only reason why Google responded so quickly to this is because the super fund manages over $100b and could sue the absolute fuck out of Google.
This happened to me years ago. Suddenly got a random community guidelines violation on YouTube for a 3 second VFX shot that was not pornographic or violent and that I owned all the rights to. After that my whole Google account was locked down. I never found out what triggered this response and I could never resolve the issue with them since I only ever got automated responses. Fuck Google.
This sort of story is what made me switch away from Google Fi and ultimately mostly degoogling. Privacy was a big part later on, but initially it was realizing that a YouTube comment or a file in my drive could get my cell service turned off.
For large businesses, you essentially have two ways to spend money:
OPEX: "operational expenditure" - this is money that you spend on an ongoing basis: things like rent, wages, the third-party cleaning company, cloud services, etc. The expectation is that when you use OPEX, the money disappears off the books and you don't get a tangible thing back in return. Most departments will have an OPEX budget to spend for the year.
CAPEX: "capital expenditure" - buying physical stuff: things like buildings, stock, machinery and servers. When you buy a physical thing, it gets listed as an asset on the company accounts, usually being "worth" whatever you paid for it. The problem is that things tend to lose value over time (with the exception of property), so when you buy a thing the accountants will want to know a depreciation rate - how much value it will lose per year. For computer equipment, this is typically ~20% per year, so it's "worthless" after 5 years. Departments typically don't have a big CAPEX budget, and big purchases usually need to be approved by the company board.
This leaves companies in a slightly odd spot where, from an accounting standpoint, it might look better on the books to spend $3 million/year on cloud services than $10 million every 5 years on servers.
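The trade-off described above can be sketched as a quick back-of-the-envelope calculation. All figures are the hypothetical ones from the comment ($10M of servers vs $3M/year of cloud, 20%/year straight-line depreciation), not real numbers from the article:

```python
# Back-of-the-envelope comparison of the two spending models described above.
# All figures are the hypothetical ones from the comment, not real quotes.

def book_value_schedule(cost, annual_rate=0.20, years=5):
    """Straight-line depreciation: the asset's book value at the end of
    each year, written down by a flat percentage of the purchase price."""
    yearly_writedown = cost * annual_rate
    return [round(max(cost - yearly_writedown * y, 0)) for y in range(years + 1)]

server_capex = 10_000_000        # $10M of servers, replaced every 5 years
cloud_opex_per_year = 3_000_000  # $3M/year of cloud spend

# CAPEX: one big cash outlay, but the books show a slowly shrinking asset.
print(book_value_schedule(server_capex))
# [10000000, 8000000, 6000000, 4000000, 2000000, 0]

# OPEX: more total cash over 5 years, but no board-level purchase approval
# and no depreciating asset sitting on the balance sheet.
print(f"cloud, 5-year total: ${cloud_opex_per_year * 5:,}")
# cloud, 5-year total: $15,000,000
```

Note the raw cash over 5 years is $15M for cloud vs $10M for servers; the accounting appeal is in how, not how much, the money leaves the books.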
If you are a small company, then yes. But I would argue that for larger companies this doesn't hold true. If you have 200 employees you'll need an IT department either way; you need IT expertise either way. So having some people who know how to plan, implement, and maintain physical hardware makes sense too.
There is a breaking point between economies of scale and the added effort of coordinating between your company and the service provider, plus paying that service provider's overhead and profits.
It's absolutely not. If you are at any kind of scale whatsoever, your yearly spend at a cloud provider will be a minimum of 2x the cost of building and operating the same system locally, including all the employees, contracts, etc.
G Suite is a legitimate option for small-medium businesses. It's seen as the cheaper, simpler option versus Azure. I usually recommend it for nonprofits as they have a decent free option for 501c3 orgs.
They had backups at multiple locations, and lost data at multiple (Google Cloud) locations because of the account deletion.
They restored from backups stored at another provider. It would have been far more devastating if they had relied exclusively on Google for backups. So having an "offsite backup" isn't enough in some cases; that offsite location needs to be at a different provider.
@Hirom By "offsite" I mean either a different cloud provider or your own hardware (if you hold your regular data at some cloud provider, as in this case).
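The distinction being made here can be captured in a tiny sketch (provider names are made up): geographic redundancy within one provider is not the same as provider redundancy, because an account-level deletion takes out every region at once.

```python
# Toy model of the backup discussion above (provider names are made up).
# A provider-level failure, like an entire account being deleted, wipes out
# every copy stored with that provider, no matter how many regions it spans.

def survives_account_deletion(primary_provider: str, backup_providers: list[str]) -> bool:
    """True if at least one backup copy lives with a different provider
    than the primary data, and so survives the primary account vanishing."""
    return any(p != primary_provider for p in backup_providers)

# Duplicated across two Google Cloud geographies: geographically offsite,
# but both copies disappear when the subscription is deleted.
print(survives_account_deletion("google-cloud", ["google-cloud", "google-cloud"]))  # False

# One extra copy with a second provider: this is what let UniSuper restore.
print(survives_account_deletion("google-cloud", ["google-cloud", "another-provider"]))  # True
```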
While UniSuper normally has duplication in place in two geographies, to ensure that if one service goes down or is lost then it can be easily restored, because the fund’s cloud subscription was deleted, it caused the deletion across both geographies.
TFW your BCDR gets disastered.
Also, "massive misconfiguration" is the "spontaneous disassembly" of cloud computing. I'm sure multiple misconfigured systems were causing chaos, but it sounds hilarious.
Just an FYI in case you don't follow cloud news: Google has deleted customer accounts on multiple occasions and has been doing so for years. This time they just did it to someone large enough to make the news. I work in SRE and no longer recommend GCP to anyone.
More than half a million UniSuper fund members went a week with no access to their superannuation accounts after a “one-of-a-kind” Google Cloud “misconfiguration” led to the financial services provider’s private cloud account being deleted, Google and UniSuper have revealed.
Services began being restored for UniSuper customers on Thursday, more than a week after the system went offline.
Investment account balances would reflect last week’s figures and UniSuper said those would be updated as quickly as possible.
In an extraordinary joint statement from Chun and the global CEO for Google Cloud, Thomas Kurian, the pair apologised to members for the outage, and said it had been “extremely frustrating and disappointing”.
“These backups have minimised data loss, and significantly improved the ability of UniSuper and Google Cloud to complete the restoration,” the pair said.
“Restoring UniSuper’s Private Cloud instance has called for an incredible amount of focus, effort, and partnership between our teams to enable an extensive recovery of all the core systems.