Garmin's CEO et al. must be reading this impatiently, looking for some clever magic clue, which is not gonna arrive, I'm afraid.
Meanwhile Garmin watch users (like me) are wondering how it is that syncing the watch I bought with the application on the smartphone I bought requires the presence of some distant online service.
I can understand that some parts like the "social" stuff might depend on a central service, but, hey, something as simple as syncing one's exercise achievements between phone and watch? Really? Who designed that?
It is surprisingly difficult to make synchronisation work between two devices that might run different hardware, firmware, and potentially even software versions. Cloud-based APIs as middleware are so much easier in comparison.
I am completely with you conceptually, but from experience I can tell you that even when there is a commercial incentive to allow local communication, it takes a few days to get syncing working through the cloud and months to do it locally only. And you really need to know what you are doing to make it safe and reliable in all eventualities.
>It is surprisingly difficult to make synchronisation work between two devices
No it isn't. We were doing it for years before "the cloud" or even the modern Internet existed, using Bluetooth, RF, IR, and cables. Have you ever looked at a .fit file on a Garmin watch? It's a binary format, but it is straightforward to convert to CSV, and it doesn't contain much beyond timestamp, latitude, longitude, altitude, heart rate, cadence, and a few other fields. Calculating distance, duration, training effect, calories burned, etc. is simple summarization and arithmetic; plotting the course on a map just requires an offline map and a plotting library. The watch already does all of this ON THE WATCH itself without needing any connection to anything else, so there's nothing stopping an app from doing the same thing locally on a much more powerful iPhone or Android phone. The Garmin cloud is only inserted in the process here so they can monetize your data.
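To make "simple summarization and arithmetic" concrete, here's a rough sketch of what an app could do locally with already-decoded activity records. The record layout and field names (`ts`, `lat`, `lon`, `hr`) are hypothetical stand-ins for the kind of fields a .fit activity contains, not the actual FIT schema:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def summarize(records):
    """Summarize a list of decoded trackpoints.

    Each record is a dict with 'ts' (seconds), 'lat', 'lon', 'hr'.
    Distance is the sum of leg distances between consecutive points.
    """
    dist = sum(
        haversine_m(a["lat"], a["lon"], b["lat"], b["lon"])
        for a, b in zip(records, records[1:])
    )
    duration = records[-1]["ts"] - records[0]["ts"]
    avg_hr = sum(r["hr"] for r in records) / len(records)
    return {"distance_m": dist, "duration_s": duration, "avg_hr": avg_hr}
```

A phone can run this over tens of thousands of trackpoints in milliseconds, which is the commenter's point about the cloud being unnecessary for the math itself.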
And even many Garmin devices sync via the phone and app: My Edge bike computer connects via Bluetooth to the Garmin app, which uploads the file to the connect website and then downloads the analysis again. There is no technical hindrance for doing analysis in the app (I wrote my own decoders for the fit file format some time ago for building my own archive) except that this makes it simple to have consistent state and defined usage of things like altitude correction.
Except battery life. Nobody wants apps that are battery hogs. Sending the data off-device is less computationally expensive and therefore uses less energy. It also removes the difficulty of figuring out how this processing might impact various phone models. Or, in a word: they didn't care or they were lazy, and both are bad PR.
Not true. It is much, much more energy intensive to run an RF link up to a tower or over WiFi than to sum 2,000 rows of 10-column data. RF works by emitting energy into free space, and there is absolutely no way it is cheaper for this type of thing (especially if it takes the transmitter out of sleep mode, as when you're really out there and in airplane mode).
I can tell you that I would welcome the option to sync locally the last few days, even if I had to plug in my phone to support the massive power costs. Maybe they could offload the processing to the watch (since it already is able to display a summary of these files).
I used to sync my old 910 via ANT+ to my former Samsung Galaxy which has ANT+.
Using a third-party transfer app, and then viewing it with another third-party app, directly on my phone. No battery complaints, but those ANT+ transfers were pretty flaky.
I don't think you can browse files on the watch through Bluetooth, but an OTG cable would do the trick.
I imagine it's more difficult to make it so every one of your different devices (Garmin has an extensive lineup) is able to interface appropriately with every other device than it is to code against a straightforward HTTP(S) server sitting somewhere in the cloud under your complete control. Fits Hanlon's Razor better, too.
> It is surprisingly difficult to make synchronisation work between two devices that might run different hard- and firmware and even potentially software versions. Cloud based APIs as middleware is soo much easier in comparison.
Even if that were true (and it's not), it's not how Garmin sync works. The website isn't cloud based middleware, and the watch and phone sync over bluetooth. It really is as dumb and frustrating as the OP describes.
Before syncing with the watch, over bluetooth, the app connects to the Garmin website. If the web connection fails then the transfer fails.
My understanding[1] was that these types of devices sync by sending a blob over bluetooth to the paired cell phone, and then the cell phone uploads this to the cloud to be decrypted. What kinds of devices are you talking about?
It is not intrinsically difficult, it's made difficult by the fact that the companies themselves specifically want to have their infrastructure in the mix to have access to valuable user data. There's no particularly difficult challenge to sync the phone and watch directly, offline. A good chunk of revenue comes from services which rely on the data being in the cloud.
It was 100% about user expectations, the data was never mined for anything.
People use multiple phones, replace their phone, delete apps to free up space, and still expect their data to be there.
Running cloud infrastructure for PII isn't exactly low cost, and bundling a lifetime subscription with a one-time device purchase is horrible economics. It isn't done for no reason.
Oh I wasn't implying what's generally seen as "data mining". But for such a company to offer many of the features in the paid services they need that data. They don't have to sell it to others, they only have to sell it back to the users as added services even included in the price of the hardware itself. Those added services can be as simple as sync between multiple phones or sharing on social media, or premium features like advanced analytics.
On the other hand this kind of data can be also sold entirely anonymized. Strava does this and many cities' urban planners buy the data to understand better how the city infrastructure is used by the people (running, cycling, etc.) and how to develop it.
I'm not saying there's no value in it, just that without it the value of the product decreases significantly. So it's in the manufacturer's best interest to have it as part of the basis of their offering.
I agree that gathering user data might be a significant factor why even products of large companies do not support cloudless communication and sync.
I just wanted to point out that since cloud based communication is so easy nowadays, doing sync without cloud support is a significantly larger effort. Not unachievable though.
You need to open a connection, which requires user action. When both devices are already connected to the cloud, the synchronisation can be done "transparently", which is what users today have come to expect.
Another reason is that if you still want to sync with the cloud, you need two synchronisation systems that have to be kept in sync too.
Over two days now, and almost radio silence from Garmin. I can sympathize with their issues, but not keeping us informed at all about what's going on is quickly leading people to become angry on various fitness forums I frequent. Not a good way to treat us customers.
I wonder how many people Garmin has working on this. It seems like there’d be someone in an English speaking country that could come up with a statement and answer some questions. Do they have their marketing team busy negotiating with Evil Corp or something?
Obviously there's a business incentive to maximize gatekeeping. Even if the gate's left wide open (for now), having control of a gate is of tremendous potential value.
Just imagine how much more $$ one can elicit in an acquisition if potential buyers may add tolls to an already established and well-traveled gate.
I never really liked the fact that I need an internet connection to sync the .fit files from my activities from my watch via my smartphone to a database, only to then download them from the web to view them.
Fortunately the watch appears as a usb device when connecting with a charging cable. So it is possible to get data out without relying on accounts and social features.
A Fitbit won't even sync with the app on your phone unless it has internet access and can connect to Fitbit's cloud servers. Without them it's basically a paperweight.
Before this incident I was thinking about getting a Garmin, but now I think I’ll just wait for Apple Watch 6, which will probably be better in every way except battery life.
> Meanwhile Garmin watch users (like me) are wondering how it is that syncing the watch I bought with the application on the smartphone I bought requires the presence of some distant online service.
You really wonder that? I'm sorry, how stupid are you? It's obviously to harvest data and control users. We've been warning and educating people about this for decades. When are you going to wake up and vote with your wallets? Open services! Open software! Open formats! Own your data! Own your devices! Your life will be full of such disruptions if you keep using products that let corporations dominate you.
No one likes to admit that they screwed up, though -- and we all know that the truth can hurt sometimes.
---
Unfortunately, you were (likely) downvoted mostly because your blunt, honest statement comes across as condescending and rude: "Maybe you're right, but you didn't have to be such an asshole about it."
What those people are either not realizing or conveniently choosing to forget is that, just as you said, we've all been warned about this exact thing for decades! Apparently, though, the message isn't getting through. When that happens, this type of "tone" becomes necessary in order to get people to pay attention.
Unfortunately, most will still choose not to heed the warnings. Eventually, when it (inevitably) happens to them, they'll say something like "Meanwhile ... users (like me) are wondering how it is that [this could happen]" and, of course, they'll avoid admitting any responsibility for their choices. They would much rather play the part of "completely innocent victim who could never have imagined something like this might happen".
---
It reminds me a lot of a small child that is told, repeatedly, "don't do 'X' because 'Y' will happen" and, later, is absolutely SHOCKED when they "learn" -- the hard way, typically -- that, when they do 'X', 'Y' happens.
I think saying that the parent's statement "comes across" as condescending and rude is taking the principle of charity and presumption of good faith a bit far. It is condescending and rude by any standard of good manners and productive conversation that I care to engage under.
If a child learns the hard way that X -> Y I don't berate them and call them stupid for not listening. It doesn't make them more likely to listen the next time. Same with adults, really.
In the interests of making this somewhat productive, what open alternatives are there that we should be using? For instance, is there a commercially available, open source cycling computer that I can put my money towards?
For a different industry, is there an open source e-reader I can support instead of Amazon/kobo/nook?
Maybe I'm stupid as you say, but I genuinely don't know if these things are out there and a quick Google search didn't turn up anything I'd call usable.
None of those are cycling computers. They are applications that can, in some situations, replace cycling computers.
Hardware cycling computers, like the Garmin 830 or Wahoo Elemnt Roam, are physical devices designed for the use case of being strapped to handlebars and used in a wide range of environments. Their battery life and performance are tuned toward always-on GPS and Bluetooth for connecting to sensors. They utilize a wider range of GNSS constellations. They have physical buttons, as touchscreens don't work well when wet. The screens are designed to be visible in sunlight.
Like the GP, I'm not aware of any open hardware cycling computers.
The Garmin data actually is in an open format. I've written software to decode it using publicly available documentation; the software is free to use. You can copy the .FIT file off the watch over USB.
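As an illustration of how approachable the format is: every .FIT file starts with a small, publicly documented header (12 or 14 bytes, where byte 0 gives the size). A minimal sketch of a header parser, based on the public FIT SDK layout, might look like this:

```python
import struct

def read_fit_header(data: bytes) -> dict:
    """Parse the 12- or 14-byte header at the start of a .FIT file.

    Layout (little-endian): header size (u8), protocol version (u8),
    profile version (u16), data size (u32), the ASCII signature ".FIT",
    and, for 14-byte headers, a header CRC (u16).
    """
    size = data[0]
    if size not in (12, 14):
        raise ValueError("not a FIT header")
    protocol, profile, data_size, magic = struct.unpack_from("<BHI4s", data, 1)
    if magic != b".FIT":
        raise ValueError("missing .FIT signature")
    crc = struct.unpack_from("<H", data, 12)[0] if size == 14 else None
    return {"header_size": size, "protocol": protocol,
            "profile": profile, "data_size": data_size, "crc": crc}
```

The records that follow the header take more work (definition messages describe the fields of the data messages that follow), but it's all spelled out in the same public documentation.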
Are there similar but open and private alternatives to Garmin etc. for hardware, and Strava/RunKeeper etc. for software?
Some solution where I can track my runs, swims, rides, and treks just like Strava (et al.) does, but where I can choose to keep the data wherever I wish: local, synced to an app on my computer, or to a self-hosted server.
Maybe not Garmin but I would love to use such an iOS Strava alternative with similar accuracy and detail.
> In other words, it’s less a question of how to stop hackers from breaking in than how to best survive the inevitable damage.
There doesn't seem to be conventional wisdom about how to build systems that are easy to restore. How do you optimize for recovery after an attack? How do you ensure that you've eliminated all the backdoors?
My guess is a combination of "continuous restoration", version controlled code, and a complete separation of code from data.
I want to read books about this but they don't seem to exist.
Just having a decent and reasonable way to nuke and pave machines goes a long way. Most organizations don't have a good way to shoot a machine in the face and have it back up and serving in 2 minutes. Most organizations are absolutely married to "stateful services" like SQL databases with local storage, that are hard to kill, hard to restore, and give attackers a place to hang out.
If you can take all your hosts down and bring them all back up quickly, that gives you at least one tool for disrupting the attackers.
I was thinking about this recently. Tight, centralized control over servers, employees computers, and devices is hard to set up, hard to manage, and a huge surface where misconfigurations can allow attackers to jump right in.
Decentralization is the key. Microservices, or segregated services, stateless (as much as possible), and perhaps even partitioning groups of users into totally separate instances. One group gets attacked and service only goes out for the 1000 users in that group. Of course, infrastructure costs would go up, but maybe not that much (since you need fewer resources for 1000 users than for 100,000). This is relatively easy to build from scratch these days thanks to various IaaS providers and DevOps tools (obviously hard for established companies with legacy tech).
Then, keep employee computers totally separate from production servers. Let employees back them up themselves, especially since so much can just be stored in the cloud (I'll get there next). Forget about VPNs where everyone can talk to everyone else. Don't try to save money by hosting your own Jira and Bitbucket instances. Pay extra for Atlassian to host for you, then let them deal with security (actually it's cheaper from what I remember). Companies already pay for Office 365 and don't self-host that. Don't host your own email servers. Just focus on the core of what the company needs to do and that's it.
This way you spread your attack surface across a whole bunch of services that are better than you at security, and you get the benefit of not having to deal with other issues. If Atlassian or Cloudflare goes down, no big deal; they'll fix it. And all your other stuff still works.
Historically, centralization in the computer world grew from the glass house in which the one computer a company or institution could afford to buy or lease was installed, and programmed by the tech gurus colocated there. The centralization and batch-processing constraints of that era are now obsolete, replaced by inexpensive distributable computing, data storage, and network building blocks. I suspect that many organizations' failure to fully embrace this far more reliable, dependable, and scalable approach to delivering services is related more to managerial issues ("tried and proven") than to technical creativity and ability. The "C-Team" needs to be sold on the benefits that are obvious to today's tech gurus, in language corporate leadership can understand, to motivate their commitment.
One of the things I loved when working at a large corp with military contracts. Anything fishy from any PC and the PC was gone. No notice to the employee, no waiting, no nothing. It was removed from the network, wiped, and reimaged as quickly as possible.
There are often two threats from a ransomware attack: what you describe mitigates data loss, but not data being posted online by an attacker.
I disagree. Restoring quickly to a known-good state is a crucial aspect of shutting down an attack after the intrusion is detected. If you can't get the attacker out of the system, they will have more opportunity to exfiltrate data.
How is ransomware able to spread to all the PCs in a company? (Especially PCs at different locations around the globe)
The malware needs to execute itself on each computer. But I would think this would be thwarted by hardware firewalls as well as software firewalls like Windows Firewall.
If my PC at work gets infected, somehow it can magically infect the guy down the hall's PC too? I thought that was made impossible years ago.
The key point is that the recent major incidents are generally not automated attacks by a simple virus. In these situations the malware opens a command-and-control link that is [ab]used by multiple skilled people for weeks: they gain persistence, move laterally throughout the network, find systems and user accounts with elevated privileges, disable monitoring and backups, deploy to all machines just as your administrators can (because at that point they are the de facto admins of all your systems), and only "pull the trigger" on the ransomware when all the prep work is done.
In the case discussed in this article, attackers took three months between the initial compromise and the ransomware attack. One can do a lot in that time.
The common components in these ransomware attacks are Windows and AD.
Some leverage known exploits against components like LSASS; if the person infected has credentials for another computer, why not slurp up all the credential tokens on every remote computer you can log into, too?
If you use Linux/Unix, on the other hand, you can do decent things to contain access. Firstly, elevated management accounts can restrict login sources, either via ssh authorized_keys or deny rules in sshd_config. Secondly, and very importantly, you can constrain what applications can access through SELinux.
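For example, restricting where a privileged key can be used from is a one-line change in authorized_keys, plus a couple of sshd_config directives. The usernames and addresses below are hypothetical; this is a sketch of the idea, not a complete hardening guide:

```
# ~/.ssh/authorized_keys -- this key is only accepted when the
# connection originates from the management subnet
from="10.0.5.0/24" ssh-ed25519 AAAAC3NzaC1lZDI1... admin@mgmt

# /etc/ssh/sshd_config -- keys only, no direct root login, and the
# admin account may only connect from management hosts
PermitRootLogin no
PasswordAuthentication no
AllowUsers admin@10.0.5.*
```

With rules like these, a stolen private key from a random compromised workstation is not enough on its own; the attacker would also have to pivot through the management subnet.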
Running Windows these days is like walking around with "Kick me" hung around your neck.
I think your generalisation doesn't hold once you get to sites with advanced staff and budget. For example, this will stop all but extremely targeted attacks: https://docs.microsoft.com/en-us/windows/security/threat-pro... but it requires a lot of time to manage, and the more varied things you do, the more annoying it becomes. (Plus it's probably impossible for devs.)
None of what you mentioned requires a lot of effort on Windows. Exploits in LSASS are no different from exploits in the Linux kernel, and if you stay up to date and configure everything correctly you should be fine.
Apart from some special cases like Wannacry/NotPetya, ransomware crews do only as much lateral movement as is required for privilege escalation. Once they have DA, they can just disable protections and push malware centrally through AD.
Depending on how the network is configured, node-to-node spread may be possible. Firewalls - hardware or software - are not magic and can definitely miss things.
It may also have been a matter of servers getting infected, infecting hosted files in shares, and client machines opening those files and getting infected in turn.
Or, as another user points out, domain controllers can readily do this.
It's common to install security management software on systems to allow for centralized update pushes. That system was probably compromised and used to push out the ransomware.
Probably through Active Directory, which has the ability to deploy software. If a domain controller was compromised, the payload could be pushed out across the board.
Endpoints like PCs and servers check in with domain controllers at recurring intervals, so even if all endpoints are behind firewalls and can’t talk to one another, they still reach out to domain controllers periodically to pull down configuration updates and so forth.
Firewalls do absolutely nothing once someone got your weakest link to click something and go to town.
From my last pen test it goes: phish, get a click-and-execute or credentials; use a hack like a legacy NetBIOS exploit to give up hashes for all your users; crack the hashes and hope someone used a short 12-char password or something dictionary-easy like "Wr3st1ing1!"; then leverage that access again and again until you have a printer that someone gave domain admin access because it was easier than setting correct policies, an actual admin, or a service account with good AD privileges. Then start pushing software as admin.
Most of the time it’s not even this complicated.
The only thing that “saves” you from paying the ransom is good backups. But if a group is fairly competent, they’ll encrypt your backups too. So it needs to be offline.
I don’t have much love for Barracuda Backup, but for very little money you get nightly offsite backups that might just save your cyber insurance or company itself from having to pay.
Your backups will contain all the backdoors that the attackers managed to deploy - so even ignoring the normal massive effort of restoring all your computers, you can't simply restore backups, you need to carefully audit everything that you're restoring to clean hardware, and you need everyone to change their credentials (and not just by appending "2" at the end) otherwise you'll be owned again immediately afterwards.
If you want to restore operations of a large company, data files are not really sufficient: if you have the data but need to rebuild all the internal server and application infrastructure and configuration, that's going to take you a very, very, very long time. It's tricky to rebuild from scratch even basic things such as email, file sharing, payroll, and inventory control systems when spread out over many countries and offices and scaled to, say, 10,000 employees worldwide; much less something like a smelting plant control system developed 25 years ago by a company that's now out of business.
Disaster recovery can be quick if you can restore hundreds of virtual servers (and you're going to have hundreds), whole key machines, and all the user and network config from backup. If all you have is data files and bare hardware, then the business is going to have a lot of expensive downtime while you rebuild all the infrastructure. "cattle not pets" approach and automated provisioning of machines from config files helps in this regard, but almost no company has that for all their critical infrastructure, especially if we're talking about non-IT companies whose critical infrastructure is not some single consumer-facing app (e.g. Twitter), but a diverse, distributed collection of third-party IT solutions for various business-critical needs.
> The only thing that “saves” you from paying the ransom is good backups. But if a group is fairly competent, they’ll encrypt your backups too. So it needs to be offline.
This is the part I’ve never understood. Surely you should be backing up in an append-only fashion, initiated from the backup server?
My best guess is that this gets managed from AD as well, so they find it and take over?
> Surely you should be backing up in an append only fashion initiated from the backup server
The key idea is assuming everything is compromised. Whether you use append-only or whatever is not helpful if the functionality to change that configuration exists, because that configuration gets changed, the backup server is gone, the backup storage is gone, etc.
You have to design a system where even a rogue IT admin with full access to everything can’t screw you. Usually that involves third parties where there is no mechanism for a rogue IT person or other attacker to delete your offline backups. So some people have Iron Mountain pick up disks in a lock box daily, or use an online backup service that specifically has features for this, where they keep extra copies of your backups completely offline and provide no mechanism for the customer to delete them.
> You have to design a system where even a rogue IT admin with full access to everything can’t screw you.
OK, brilliant — this is a nice articulation of a fundamental principle.
Do you know of any books that describe how to design such systems?
Guaranteeing that offline backups exist is a great start, but if the backups contain backdoors, restoring could be extremely laborious and yet unsuccessful.
I never considered the rogue sysadmin perspective but it’s a good point!
I do wonder if some of my misunderstanding is because my experience is mostly SaaS companies, so paying external providers is more “natural” vs a company that makes its money selling units.
This is definitely doable, but it's harder than the naive solution so often it's not done. Same as log storage for example, or any other incremental data.
Related - see how many examples of S3 policies split access into read and write rather than read, append, write. It doesn't even matter where the logic lives - only whether the storage service allows you to delete anything.
S3 is easy to handle but not intuitive. Production systems always get write access to S3 because they have to be able to send the backup (any strategy that aims to prevent writing to S3 is doomed).
The trick is to have a second S3 account (or any large storage, really) download everything from that bucket periodically. The "replication" needs only read access to the first bucket. The second account doesn't need to be accessed by anything or anybody, so it is relatively safe.
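A complementary belt-and-braces step is a bucket policy on the replica that denies deletes outright, so that even leaked credentials in that second account can't destroy history. A sketch (the bucket name is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllDeletes",
      "Effect": "Deny",
      "Principal": "*",
      "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
      "Resource": "arn:aws:s3:::backup-replica-example/*"
    }
  ]
}
```

Note that an account administrator can still edit this policy, which is exactly the "rogue admin" problem discussed above; S3 Object Lock in compliance mode is the purpose-built feature for making objects undeletable for a retention period even by the root account.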
> Firewalls do absolutely nothing once someone got your weakest link to click something and go to town.
Well, firewalls would be effective if organizations used network segmentation effectively, but of course close to no one does that in practice (e.g. usually IT/support has access to everything).
The few times I assisted companies with ongoing ransomware attacks, they all followed a similar pattern. Some initial breach of a client system (think malicious Office document) gave attackers a foothold inside the network. From there the attackers ultimately pivoted to own the Active Directory. Equipped with this level of access, they identified key assets and proceeded to encrypt them. Backups, if not stored offline, were rendered useless.
It is quite challenging to recover from this kind of breach since the attackers had every opportunity to touch every system connected to the AD and leave backdoors behind. I have seen companies trashing their whole AD, re-imaging all machines and basically starting from scratch at great cost.
That doesn't actually say much at all. Symantec's report has more detail, but it still has gaps:
> The initial compromise of an organization involves the SocGholish framework, which is delivered to the victim in a zipped file via compromised legitimate websites.
> The zipped file contains malicious JavaScript, masquerading as a browser update.
So are people just like "this random website is trying to download a browser update, ok I'll unzip it and run it, even though I never normally have to do this". Seems plausible.
Then:
> Privilege escalation was performed using a publicly documented technique [there's a link] involving the Software Licensing User Interface tool (slui.exe), a Windows command line utility that is responsible for activating and updating the Windows operating system.
> The attackers used the Windows Management Instrumentation Command Line Utility (wmic.exe) to execute commands on remote computers, such as adding a new user or executing additional downloaded PowerShell scripts.
It's not really clear to me how local privilege escalation allows you to execute commands on remote computers though.
If you gain local privilege escalation on some workstation user, you can gain access to credentials of user(s) of that workstation which allow you to impersonate that user throughout the network.
If it's a privileged user, then you can move to many more workstations; if it's a non-privileged user, then you may be able to use their normal access (email, network shares, access to internal applications) to try to trick some privileged user into compromising their workstation in a way that you could not from the outside. Or you can wait a month until some tech support person logs in to that workstation and you can steal their credentials.
Windows caches the logons of the last few users as a hash on the local PC, malware can use those hashes to authenticate against network resources as that user. If one of those users was a domain admin, on most networks they can access just about anything
Firewalls can only protect against what's known. Once you've invented or discovered a method the firewall doesn't know about, you're trusted as much as any regular program. Sometimes even changing the binary or payload slightly will thwart some firewalls because they're precise machines looking for precise signatures. It's not super easy to get past a firewall with a known vulnerability, but not impossible. With a 0day the firewall is almost irrelevant.
This is true regarding "next-gen" firewalls. But, if you design a plain old segmentation strategy with simple but well thought out allow/deny rules, then a firewall will be pretty valuable in many situations.
Extreme example: you can think of an air gap as a "firewall" with all deny rules. Air gaps are pretty secure. (Yes, there are still ways in, but finding them will be many orders of magnitude harder than finding a 0day in a "next-gen" firewall.)
Another example: I put all printers in a dedicated VLAN and block all traffic in and out except specific print ports from the print server IP only. In practice, way more secure than any "next-gen" firewall will ever be.
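That printer-VLAN rule set can be expressed in a handful of iptables rules on the router between the VLANs. The addresses and interface below are hypothetical placeholders; real deployments would differ:

```
# Printer VLAN: 10.20.30.0/24; print server: 192.168.1.10.
# Only the print server may open connections to the printers'
# print ports (9100 = raw/JetDirect, 631 = IPP, 515 = LPD).
iptables -A FORWARD -s 192.168.1.10 -d 10.20.30.0/24 \
         -p tcp -m multiport --dports 9100,631,515 -j ACCEPT

# Allow reply traffic for those established connections.
iptables -A FORWARD -s 10.20.30.0/24 \
         -m state --state ESTABLISHED,RELATED -j ACCEPT

# Everything else in or out of the printer VLAN is dropped.
iptables -A FORWARD -d 10.20.30.0/24 -j DROP
iptables -A FORWARD -s 10.20.30.0/24 -j DROP
```

The point is that none of this needs deep packet inspection: a compromised printer simply has no route to anything except replies to the print server.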
I know this is always contentious, but are there any of these ransomware attacks on non-Windows machines? I mean prominent ones? I understand everyone is running Windows on the desktop, but why are Linux servers not targeted by the same thing, given how prominent they are? I know they get hacked all the time, but I never read stories like this about them. I read that Mongo was hacked (and yeah, using Mongo, sorry but...), which probably ran on Linux; however, pure ransomware attacks I cannot find outside Windows. People keep saying that if other devices were as popular, they would be attacked too; but for instance my mother has an iPad, an Android phone, and a Windows laptop, and the only (penis-enlarger...) malware is on Windows, which has up-to-date AV. Android is more popular than Windows, Linux on servers is as well, iOS is as well. And yet all the crap is always on Windows. I do not get it.
Ransomware attacks absolutely do target Linux servers because one needs to take down all the servers to have a proper business disruption for which someone will pay a million dollar ransom; in all the recent prominent attacks Linux servers were taken down as well.
Perhaps there's some issue with what you mean by "pure ransomware". If you mean automatically spreading worms, those aren't that relevant anymore; prominent examples like Petya were four years ago, and NotPetya was not ransomware but a destructive weapon. In the current environment, and also in the attack described in this article, a "ransomware attack" means a takeover of your systems by a ransomware crew of hackers manually working on your specific network. They generally start with spearphishing that targets Windows desktop machines, because usually the easiest way to target Linux servers is through client-side attacks, obtaining user credentials and a foothold inside the network that helps with firewall restrictions.
> obtaining user credentials and a foothold inside the network that helps with firewall restrictions.
Yes, but those are arguably human errors. My point is more along the lines that Linux might be the primary target for the entire attack, yet it always starts with attacks on Windows. I was looking for a case, specifically with ransomware, that started with Linux/Mac OS X instead of Windows.
In my opinion (and to be honest, PCI DSS actually enforces this to some extent), it should not be possible to gather Linux credentials from a single hacked machine. If you hack my system, you will not be able to log in to our prod Linux machines; you will need my hardware device to generate OTPs. This is what we actually do for a living, and it is rather strange that people don't at least use google-authenticator as standard when a hardware token isn't available; then even your stolen private key would not get the hackers anywhere. With hardware tokens + non-Windows servers, basically none of these attacks would work.
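To be clear about what the OTP device adds: the code is derived from a shared secret and the current time, so a stolen password or key alone is useless. A minimal sketch of the underlying RFC 6238 TOTP math, the same computation a hardware token or authenticator app performs offline:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter,
    dynamically truncated to a short decimal code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server does the same computation and compares; nothing replayable crosses the wire, which is why a keylogged OTP is worthless thirty seconds later.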
Yes, but my question was specifically about a high-profile one, so a huge company or huge money stolen. This is just "yes, it exists", but nothing came of it.
Given there's been a recent trend about ransomware not only encrypting, but also exfiltrating data, backups won't save you from the bad PR of the leak.
The unfortunate thing is that the ransom is probably priced such that it's cheaper than the company resolving the problem on its own; otherwise the company would just resolve it without paying. On the other hand, it's bad on many fronts if the company decides it's just a cost of doing business and doesn't do a better job of securing its systems in the future...
Many companies just get by rather than doing serious security design. How do you change that culture in a company? Will paying the ransom do that? Probably, if it only costs $1M. If it cost them $100M, would they do it?
Offline backups have been a thing for decades. Why is this not standard practice, especially for a technology company like Garmin? It can't be about cost savings; businesses still pay for insurance and security systems. For that matter, backups should also be kept offsite in case of fires, floods, tornadoes, theft, etc.
Offline backups are not a complete solution. What if your backups are themselves infected? Even if they are clean, your IT department has to manually scrap and rebuild every computer, from the data centers to the warehouse to the receptionist's desk. And in the meantime, as the article describes, you still have to pay your employees and suppliers and continue to ship products to customers.
I think the point here is that it's not as trivial as having an offline copy of your SQL DB or whatever. If the ransomware has encrypted a huge chunk of your infra the chances are that you no longer have anything to restore the backup to — maybe your configurations are encrypted, your DB hosts aren't up, user accounts etc are missing. Assuming that only user data is affected and can be easily restored likely falls very short of the full picture. I expect the folks at Garmin are faced with an infrastructure that looks like a grenade fell into it.
Backup price for 8TB is cheap enough. Backup price for 8PB does not scale nearly as well.
I don’t know how much data Garmin has company-wide, but offline backups are a much simpler proposition for an individual than for a company of this size and complexity.
There are plenty of companies that can back up 8PB of data from a wide variety of sources for you, and make it a relatively straightforward task to integrate with them.
There is complexity, yes, but it's mostly a solved problem.
The easiest way to back up 8PiB of business critical data is to be a business with a need for exabytes of non-business-critical data.
I think the people who are the worst off have petabytes of business critical customer data, but don't do massive datamining projects on top of this. Then you end up with a data center that is 90% (business critical) prod, and triplicating that becomes much more relatively expensive than having a data center that is 20% prod.
Exactly. Although not always practical, offline backups, if kept virus-free (which of course is possible), are the best solution, because they can't be broken into remotely. ANYTHING online is risky; we all know this, so why do corporations still practically ignore the obvious?
Good story, bad title. There is no real lesson on how to survive ransomware, except that a company called Hydro was able to use employees to rebuild some of the data needed for running its plants. Most companies deal with databases that cannot be similarly rebuilt in just days.
Isn't a ransomware attack essentially the same as a catastrophic disk drive failure? You reformat and restore from backup. Of course, the companies profiled in the article had all their computers infected, so it could take some time. Still, a recovery boot disk could be distributed and a clean image restored over the network.
No, because you can't consider your backups as a "known good state". A malicious attack is fundamentally different from a disaster or accident.
You should expect that any backup of systems (as opposed to backups of 100% pure data) will contain backdoors, that any oddball systems (routers, printers, phone switches) may be compromised even if they seem fine, and that the credentials of all employees, along with any private keys and certificates, have been exfiltrated and need to be changed.
Even “100% pure data” isn’t necessarily safe. Word documents, Excel sheets, PowerPoint decks, etc. (and their Google Docs counterparts) are all suspect, because they can contain embedded code. Some “data” formats are really not data formats at all, but code which produces the data you use (e.g. PDF, Postscript, or any Excel sheet with formulas). It’s even possible to corrupt certain otherwise inert data files in such a way as to cause malicious behavior by exploiting bugs in the software that reads them.
So, yes, while you’re technically correct that 100% inert and uncorrupted data files are safe, you have to prove that your files are not corrupted. And since so many "data" formats either are code or can contain embedded code, those need to be treated as suspect until proven otherwise as well.
Well, yes, I would not consider arbitrary documents as "pure data" - for that I was thinking of something like a dump of a particular database table's contents only, separate from all the database structure/metadata/triggers/functions/etc.
You could restore a dump of pure structured data to a known-clean system and that would be safe - but once you include arbitrary files as you describe, no way. Embedding malware in some periodically accessed document on a public file share is a reasonable persistence mechanism for an attacker.
I would expect a SQL dump of a database to be safe, as long as the schema only contains standard data types and no BLOBs. Once you start throwing BLOBs in there, anything goes.
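That "no BLOBs" rule is mechanically checkable before a restore. A hypothetical pre-restore audit that scans a dump's CREATE TABLE statements for column types that can carry arbitrary binary payloads (the type list and the `suspect_columns` helper are illustrative, not from any real tool):

```python
import re

# Column types that can smuggle arbitrary binary content (sample list).
SUSPECT_TYPES = {"blob", "tinyblob", "mediumblob", "longblob",
                 "binary", "varbinary", "bytea"}

def suspect_columns(dump_sql):
    """Return table.column names in a SQL dump whose declared type
    could hide a binary payload in an otherwise 'pure data' restore."""
    hits = []
    pattern = r"CREATE TABLE\s+`?(\w+)`?\s*\((.*?)\)\s*;"
    for table, body in re.findall(pattern, dump_sql, re.S | re.I):
        for line in body.splitlines():
            parts = line.strip().split()
            if len(parts) >= 2 and parts[1].split("(")[0].lower() in SUSPECT_TYPES:
                hits.append(f"{table}.{parts[0].strip('`')}")
    return hits
```

It's a rough sketch (the regex won't handle every dialect), but flagging `users.avatar LONGBLOB` before you replay the dump is exactly the kind of cheap gate that keeps a "data-only" restore honest.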
The network was infiltrated in December, the attack happened in March. So the last clean backup was from November. This would be a very old backup. Not sure how useful that would have been.
I find it very interesting that Volume Shadow Copies and VLANs, basic tools that have been around forever and cost very little, can mitigate a lot of ransomware attempts.
There's no reason for the secretary's computer to be able to connect to the onsite SQL server... unless she uses an application that uses that SQL server.
The complicated part isn't figuring out that you should segment access, or finding technology that lets you do it; it's actually knowing what to segment in a way that balances risk with speed and cost.
The same is true for most things. Problems are often well known, solutions are often understood, but doing things is where the actual work is.
What if they hacked you months before pulling the trigger? The article mentions they were hacked in December and the attack launched in March. Restoring a backup would then still leave the hackers inside.
And even if most data were backed up, most computers still have to be wiped and reinstalled. I don't think most companies back up the entire disks of all employees; it's normally just a dedicated file area. So while the data can be restored, the IT department still has to set up hundreds of computers for all kinds of different workers and machines on the spot.
Nothing is ever easy, don't be so dismissive about things you haven't thought through.
Companies of non-trivial size often have (and should have) a system allowing for remote device management. Which means:
- It should be easy to reinstall a known-good image with all the relevant software, settings, drivers, etc., then restore the backed-up data. This is relatively common in corps.
- Once you observe the malware and know how it reaches the C&C server, you can push rules blocking that host or block the bad binary network-wide.
Of course there will be companies that didn't have a good enough system in place and, once exploited, are doomed.
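For the "push rules blocking that host" step, the push can be as simple as a firewall rule deployed through the existing management tooling (RMM, GPO startup script, etc.). A sketch using Windows' `New-NetFirewallRule` cmdlet; the address is made up (203.0.113.7 is a documentation range):

```shell
# Pushed to every managed Windows endpoint during incident response
New-NetFirewallRule -DisplayName "IR: block C2 host" `
    -Direction Outbound -RemoteAddress 203.0.113.7 -Action Block
```

Blocking the known-bad binary's hash via AppLocker or the EDR would typically accompany this, for the reasons the reply below/above about multiple persistence mechanisms makes clear.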
"Once you observe the malware and know how it reaches the C&C server" presumes a single malware and a single mechanism for reaching the C&C server, which is unrealistic. We're not speaking about some piece of automated malware spreading on its own, which you could reverse engineer and see what it does and does not, we're talking about skilled people working for weeks to compromise your network. You should expect multiple different types of persistence, backdoors in publicly reachable systems and leaked privileged credentials.
Sure, you need to make sure your AD and device management are clean before starting the process. My point was that once you're bootstrapped, you shouldn't need a fully manual recovery process.
And I'm pointing out that when your attacker has control of device management they can also disable device management on all the devices after their attack is deployed.
> Restoring a backup would then still leave the hackers inside.
Even if they could comfortably restore a backup from a year prior, they are still left with hackers who know how to penetrate their network, at least until they determine how the breach occurred.
Some of these companies need to start suing Microsoft, since it's usually Windows that is affected by these malware attacks. If Microsoft wants to keep serving the majority of the corporate world, they need an OS based on user-space isolation, i.e. each program runs in its own sandbox and any data is passed via message passing.
It's the fault of companies for never updating their machines, giving full administrator access to every employee, and using Admin123 as the domain administrator password.
If we believe the article, the virus came from an attachment in an email to a random employee. Why are executable attachments not blocked? Why is an executable running as an unprivileged user able to storm through every computer in the company?
But did they? Were their machines updated? Did they have some "corporate AV" solution that was useless?
> Why are executable attachments not blocked?
Because there are dozens of weird Windows extension types that execute automatically. That said, yeah, the attachment should have been blocked, and a customer-service machine should have a whitelist of programs that are allowed to run (or only run signed ones).
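The extension-blocking half is the easy part; an illustrative mail-gateway check (the blocklist is a sample, not exhaustive, and real gateways also inspect content, not just names):

```python
# Extensions Windows will execute on double-click; note the check also
# catches double extensions like "invoice.pdf.exe".
BLOCKED = {".exe", ".scr", ".com", ".pif", ".bat", ".cmd", ".js", ".jse",
           ".vbs", ".vbe", ".wsf", ".hta", ".cpl", ".msi", ".jar", ".lnk"}

def is_blocked(filename):
    """True if the attachment name ends in an executable extension."""
    name = filename.lower().rstrip(". ")  # Windows strips trailing dots/spaces
    return any(name.endswith(ext) for ext in BLOCKED)

print(is_blocked("invoice.pdf.exe"))  # True
print(is_blocked("report.pdf"))       # False
```

The hard part is the other half mentioned above: an application whitelist (AppLocker or similar) so that even an attachment that slips through can't run.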
I've seen hundreds of enterprise networks and you'd be surprised how many of them have passwords in public shares, no configured firewalls or software policies and only file level backups. We obviously can't know right now what Garmin's setup was but I'd wager they were guilty of something.