Crazy World

After I successfully upgraded two Windows 10 VMs to the 1809 release at the beginning of October, I tried to do the same with more VMs and an actual laptop this week. But I couldn’t; no update was offered. While trying to find out how to install the 1809 update, I discovered why: Microsoft had withdrawn it after the update, among other things, deleted users’ data.

Reading about the details left me stunned (though only momentarily, and now I can write about it). The complaints included this now-famous gem: “I have just updated my windows using the October update (10, version 1809) it deleted all my files of 23 years in amount of 220gb. This is unbelievable, I have been using Microsoft products since 1995 and nothing like that ever happened to me. […] I am extremely upset. Not sure what to do….please let me know.”

One would like to answer “easy, just restore the files from your latest backup,” but that could perhaps be construed as insensitive.

The fact that someone has been using PCs for 20+ years and still hasn’t figured out that yes, your data can and will go poof is just mind-boggling. External hard disk? NAS? USB stick or two? A cloud backup if there’s nothing else? There are so many options. And are there really people so lucky that in over twenty years, they never had a bad sector on a hard disk, never a bricked SSD, never an accidental file deletion caused by a slip of the finger? That’s amazing luck.

But then there is the other side of the story. Files got deleted because of confusion caused by “Known Folder Redirection” aka KFR, which manages to make symlinks look good. The key takeaways are that a) the complexity of Windows is out of control (not news, I know), and b) someone at Microsoft thinks that upon encountering a set of unknown user files, deleting them without asking is an appropriate response. That’s just… wow.
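For background: KFR lets Windows point a “known folder” such as Documents at a different physical location (OneDrive setups use it heavily), and the 1809 bug reportedly hit machines where a folder had been redirected but files were still sitting in the old location, which the upgrade then removed. As an illustration only (this is not the upgrade’s actual logic, which isn’t public, and FOLDERID_Documents is just an example), here is a minimal sketch of how a program can ask the documented shell API whether a known folder has been redirected away from its default path:

    /* Minimal sketch, illustration only: check whether the Documents
       known folder has been redirected away from its default location.
       Build (MSVC): cl kfr_check.c shell32.lib ole32.lib uuid.lib */
    #include <windows.h>
    #include <shlobj.h>        /* SHGetKnownFolderPath, KF_FLAG_* */
    #include <knownfolders.h>  /* FOLDERID_Documents */
    #include <stdio.h>
    #include <wchar.h>

    int main(void)
    {
        PWSTR current = NULL, fallback = NULL;

        /* Where the folder actually points right now (follows KFR). */
        if (FAILED(SHGetKnownFolderPath(&FOLDERID_Documents, KF_FLAG_DEFAULT,
                                        NULL, &current)))
            return 1;

        /* Where it would point if it had never been redirected. */
        if (FAILED(SHGetKnownFolderPath(&FOLDERID_Documents,
                                        KF_FLAG_DEFAULT_PATH, NULL, &fallback))) {
            CoTaskMemFree(current);
            return 1;
        }

        wprintf(L"Current path: %s\nDefault path: %s\n", current, fallback);
        wprintf(_wcsicmp(current, fallback) ? L"Folder is redirected.\n"
                                            : L"Folder is not redirected.\n");

        CoTaskMemFree(current);
        CoTaskMemFree(fallback);
        return 0;
    }

Comparing the two paths is enough to see that “my Documents folder” and “the directory C:\Users\me\Documents” are not necessarily the same thing, which is exactly the kind of confusion at the heart of the bug.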

It also turned out that the file deletion bug had been reported months ago but ignored, presumably (speculation on my part) because it was a) lost in the deluge of low-quality bug reports, and b) only affected a tiny fraction of users who were convinced that there was nothing special about their configuration and that anyone not seeing the bug was either not looking hard enough or lying. That at least is not surprising.


9 Responses to Crazy World

  1. Richard Wells says:

    With a normal backup schedule, the home user could still lose a week’s worth of work. Very annoying. An announced service pack, which gives users a chance to make an extra backup before starting the install, is a better method. Though even with a backup, a restore that takes several hours will not be received with happy feelings.

    MS seems to have skipped the wider ring of testers before shipping the update. MS needs to put less faith in the magic of DevOps making QC unnecessary.

  2. Michal Necasek says:

    Yes, skipping the extra testing step and then having to pull the update because it may cause catastrophic data loss seems… ironic.

  3. Jeff says:

    Perhaps Microsoft should shift gears and get into reality TV with a new show called “MSFT’s Wide World of Windows”, featuring “the thrill of new features” but also “the agony of delete”.

  4. Michal Necasek says:

    I really hope MSFT isn’t looking for product ideas here. Because that sounds like something which would probably be a hit in today’s crazy world.

  5. John Elliott says:

    The fact that someone has been using PCs for 20+ years and still hasn’t figured out that yes, your data can and will go poof is just mind-boggling. External hard disk? NAS? USB stick or two?

    From the ‘now-famous gem’ link – the user in that case did back up to an external hard drive a couple of months prior to loading the update, so ‘only’ lost data since then.

  6. Michal Necasek says:

    Ah, so “deleted all my files of 23 years” was merely a gross exaggeration. That’s good to hear.

  7. MiaM says:

    The problem with backups for a “normal user” (however that may be defined) is that one of the big threats, malware and bugs, carries a non-negligible risk of ending up in the backup too, either during the backup itself or during attempts to restore data.

    The other major threat to a “normal user” is hardware failure. It seems like storage nowadays gives warnings via S.M.A.R.T., and the firmware is smart enough to write-protect deteriorated storage devices so they can be read back instead of being wiped by a write attempt. This makes backups less important than they used to be.

    The only really safe backup is some off-site storage and a client-server model where the server is impenetrable enough to withstand attacks, and where every old version of everything is saved. That’s a bit expensive for most “normal users”.

    For users who actually store important data locally on their computer, it seems cheaper and simpler to use that computer only for whatever is done with that data, and use another computer for everything else (especially including web browsing).

  8. Michal Necasek says:

    With hard disks that is the case; they tend to fail in stages and give enough warning. SSDs are known to fail catastrophically, i.e. they completely die with no warning.

    I find a home NAS to be a good backup solution; it protects from normal hardware failures with RAID. Good for photos, ISO images, and such. For source code, an off-site repository is useful.

  9. ender says:

    Nowadays I usually suggest a NAS and Veeam Agent to home (and even small business) users – the free edition of Veeam is easy to set up, and it works without the need for user interaction, which is the most important thing about backup: any solution that requires users to do something (even if it’s a single click) will be forgotten sooner or later.
