Smarty Pants Monday, September 24, 2018

Executive Summary

SuperDuper! 3.2 is now available. It includes Smart Delete, Mojave support, and a number of smaller improvements, all covered below.

In the Less Smart Days of Old(e)

Since SuperDuper!'s first release, we've had Smart Update, which speeds up copying by quickly evaluating a drive on the fly, copying and deleting where appropriate. It does this in one pass for speed and efficiency. Works great.

However, there's a small downside to this approach: if your disk is relatively full, and a change is made that could temporarily fill the disk during processing, even though the final result would fit, we trigger a disk full error and stop.

Recovery typically involved doing an "Erase, then copy" backup, which took time and was riskier than we'd like.

Safety First (and second)

There are some subtleties in the way Smart Update is done that can aggravate this situation -- but for a good cause.

While we don't "leave all the deletions to the end", as some have suggested (usually via a peeved support email), we consciously delete files as late as is practical: what we call "post-traversal". So, in a depth-first copy, we clean up as we "pop" back up the directory tree.

In human (as opposed to developer) terms, that means when we're about to leave a folder, we tidy it up, removing anything that shouldn't be there.
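For the code-minded, here's a hedged Swift sketch of the general idea--an illustration of "post-traversal" cleanup, not SuperDuper!'s actual engine (copyIfChanged is a simplified, hypothetical helper):

import Foundation

// Hypothetical helper: copy only if the destination is missing.
// (Real logic would compare dates, sizes, attributes, and so on.)
func copyIfChanged(from src: URL, to dst: URL) throws {
    let fm = FileManager.default
    if fm.fileExists(atPath: dst.path) { return }
    try fm.copyItem(at: src, to: dst)
}

// Copy depth-first; tidy a folder only when "popping" back out of it.
func smartUpdate(source: URL, destination: URL) throws {
    let fm = FileManager.default
    let sourceNames = Set(try fm.contentsOfDirectory(atPath: source.path))

    for name in sourceNames {
        let src = source.appendingPathComponent(name)
        let dst = destination.appendingPathComponent(name)
        var isDirectory: ObjCBool = false
        fm.fileExists(atPath: src.path, isDirectory: &isDirectory)
        if isDirectory.boolValue {
            try fm.createDirectory(at: dst, withIntermediateDirectories: true)
            try smartUpdate(source: src, destination: dst)  // recurse first
        } else {
            try copyIfChanged(from: src, to: dst)
        }
    }

    // About to leave this folder: delete anything that shouldn't be here,
    // after all the copying above is done.
    for name in try fm.contentsOfDirectory(atPath: destination.path)
        where !sourceNames.contains(name) {
        try fm.removeItem(at: destination.appendingPathComponent(name))
    }
}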

Why do we do it this way?

Well, when users make mistakes, we want to give them the best chance of recovering lost files with a data salvaging tool. By copying before deleting at a given level, we avoid overwriting the deleted files' data with new data any sooner than necessary. So, in an emergency, it's much easier for a data salvaging tool to get the files back.

The downside, though, is a potential for disk full errors when there's not much free space on a drive.

Smart Delete

Enter Smart Delete!

This is something we've been thinking about and working on for a while. The problem has always been balancing safety with convenience. But we've finally come up with an idea (and implementation) that works really well.

Basically, if we hit a disk full error, we "peek" ahead and clean things up before Smart Update gets there, just enough so it can do what it needs to do. Once we have the space, Smart Delete stops and allows the regular Smart Update to do its thing.
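In hedged pseudocode terms--an illustrative Swift sketch, not the shipping engine, with a hypothetical pendingDeletions list standing in for the "peek ahead"--the recovery path looks something like this:

import Foundation

// On a disk full error, delete not-yet-visited orphans until this copy
// fits, then hand control back to the normal Smart Update flow.
func copyWithSmartDelete(item: URL, to dst: URL,
                         pendingDeletions: inout [URL]) throws {
    let fm = FileManager.default
    do {
        try fm.copyItem(at: item, to: dst)
    } catch let error as CocoaError where error.code == .fileWriteOutOfSpace {
        let needed = (try? item.resourceValues(forKeys: [.fileSizeKey]).fileSize) ?? 0
        var freed = 0
        // Free just enough space for this item--no more.
        while freed < needed, let doomed = pendingDeletions.popLast() {
            freed += (try? doomed.resourceValues(forKeys: [.fileSizeKey]).fileSize) ?? 0
            try fm.removeItem(at: doomed)
        }
        try fm.copyItem(at: item, to: dst)  // retry; Smart Update continues
    }
}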

Smart Update and Smart Delete work hand-in-hand to minimize disk full errors while maximizing safety, with no significant speed penalty.

Everyone Wins!

So there you go: another completely "invisible" feature that improves SuperDuper! in significant ways that you don't have to think about...or even notice. You'll just see (or, rather, not see) fewer failures in more "extreme" copies.

This is especially useful for photographers and others who typically deal with large data files, and who rename or move huge folders of content. Whereas before those changes might have temporarily filled the drive, now the copy will succeed.

Mojave Managed

We're also supporting Mojave in 3.2 with one small caveat: for the moment, we've opted out of Dark Mode. We just didn't have enough time to finish our Dark Mode implementation, didn't like what we had, and rather than delay things, decided to keep it in the lab for more testing and refinement. It'll be in a future update.

More Surprises in Store

We've got more things planned for the future, of course, so thanks for using SuperDuper! -- we really appreciate each and every one of you.

Enjoy the new version, and let us know if you have any questions!

Download SuperDuper! 3.2

3.2 B3: The Revenge! Wednesday, September 12, 2018

(OK, yeah, I should have used "The Revenge" for B4. Stop being such a stickler.)

Announcing SuperDuper 3.2 B3: a cavalcade of unnoticeable changes!

The march towards Mojave continues, and with the SAE (September Apple Event) happening today, I figured we'd release a beta with a bunch of polish that you may or may not notice.

But First...Something Technical!

As I've mentioned in previous posts, we've rewritten our scheduling, moving from AppleScript to Swift, to avoid the various security prompts Mojave added for pretty basic operations.

Initially, I followed the basic structure of what I'd done before, effectively implementing a fully functional "proof of concept" to make sure it was going to do what it needed to do, without any downside.

In this Beta, I've moved past the original logic, and have taken advantage of capabilities that weren't possible, or weren't efficient, in AppleScript.

For example: the previously mentioned com.shirtpocket.lastVolumeList.plist was a file that kept track of the list of volumes mounted on the system, generated by sdbackuponmount at login. When a new mount occurred, or when the /Volumes folder changed, launchd would run sdbackuponmount again. It'd get a list of current volumes, compare that to the list of previous volumes, run the appropriate schedules for any new volumes, update com.shirtpocket.lastVolumeList.plist and quit.

This all made sense in AppleScript: the only way to find out about new volumes was to poll, and polling is terrible, so we used launchd to do it intelligently, and kept state in a file. I kept the approach in the rewritten version at first.

But...Why?

When I reworked things to properly handle ThrottleInterval, I initially took this same approach and kept checking for new volumes for 10 seconds, with a sleep in between. I wrote up the blog post to document ThrottleInterval for other developers, and posted it.

That was OK, and worked fine, but also bugged me. Polling is bad. Even slow polling is bad.

So, I spent a while reworking things to block on a semaphore, using mount notifications to release it so the disk list could be checked, adding more stuff to deal with the complex control flow...

...and then, looking at what I had done, I realized I was being a complete and utter fool.

Not by trying to avoid polling. But by not doing this the "right way". The solution was staring me right in the face.

Block-head

Thing is, volume notifications are built into Workspace, and always have been. Those couldn't be used in AppleScript, but they're right there for use in Objective-C or Swift.

So all I had to do was subscribe to those notifications, block waiting for them to happen, and when one came in, react to it. No need to quit, since it's no longer polling at all. And no state file, because it's no longer needed: the notification itself says what volume was mounted.
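In a hedged Swift sketch (illustrative, not our actual agent), the whole thing boils down to something like:

import AppKit

// Subscribe to NSWorkspace mount notifications and block. No polling,
// and no state file: the notification itself names the mounted volume.
let center = NSWorkspace.shared.notificationCenter
center.addObserver(forName: NSWorkspace.didMountNotification,
                   object: nil, queue: .main) { note in
    if let volume = note.userInfo?[NSWorkspace.volumeURLKey] as? URL {
        print("Mounted: \(volume.path)")
        // ...run any "on mount" schedules for this volume here...
    }
}
RunLoop.main.run()  // block forever, reacting only when a mount arrives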

It's been said many times: if you're writing a lot of code to accomplish something simple, you're not using the Frameworks properly.

Indeed.

There really is nothing much more satisfying than taking code that's become overly complicated and deleting most of it. The new approach is simpler, cleaner, faster and more reliable. All good things.

Download

That change is in there, along with a bunch more. You probably won't notice any big differences, but they're there and they make things better.

Download SuperDuper! 3.2 B3

Backup on Connect, launchd and ThrottleInterval Sunday, September 09, 2018

Warning: this is a technical post, put here in the hopes that it'll help someone else someday.

We've had a problem over the years where our Backup on Connect LaunchAgent produces a ton of logging after a drive is attached and a copy is running. The logging looks something like:

9/2/18 8:00:11.182 AM com.apple.xpc.launchd[1]: (sdbackuponmount) Service only ran for 0 seconds. Pushing respawn out by 60 seconds.

Back when we originally noticed the problem, over 5 years ago, we "fixed" it by adjusting ThrottleInterval to 0 (found experimentally at the time). It had no negative effects, but the problem came back later and I never could understand why...certainly, it didn't make sense based on the man page, which says:

ThrottleInterval <integer>

This key lets one override the default throttling policy imposed on jobs by launchd. The value is in seconds, and by default, jobs will not be spawned more than once every 10 seconds. The principle behind this is that jobs should linger around just in case they are needed again in the near future. This not only reduces the latency of responses, but it encourages developers to amortize the cost of program invocation.

So. That implies that jobs won't be spawned more often than once every n seconds. OK, not a problem! Our agent processes the mount changes quickly, launches the backups if needed, and quits. That seemed sensible--get in, do your thing quickly, and get out. We didn't respawn the job ourselves, and we processed all of the potential intervening mounts and unmounts that might happen during a 10-second "throttled" respawn.

It should have been fine... but wasn't.

The only thing I could come up with was that there must be a weird bug in WatchPaths where under some conditions, it would trigger on writes to child folders, even though it was documented not to. I couldn't figure out how to get around it, so we just put up with the logging.

But that wasn't the problem. The problem is what the man page isn't saying, but is implied in the last part: "jobs should linger around just in case they are needed again" is the key.

Basically, the job must run for at least as long as ThrottleInterval is set to (default: 10 seconds). If it doesn't run that long, launchd respawns the job, pushed out by a certain amount of time, even when the watch condition isn't triggered again.

So, in our case, we'd do our thing quickly and quit. But we didn't run for the minimum amount of time, and that caused the logging. launchd would then respawn us. We wouldn't have anything to do, so we'd quit quickly again, repeating the cycle.

Setting ThrottleInterval to 0 worked, when that was allowed, because we'd run for more than 0 seconds, so we wouldn't respawn. But when they started disallowing it ("you're not that important")...boom.

Once I figured out what the deal was, it was an easy enough fix. The new agent runs for the full, default, 10-second ThrottleInterval. Rather than quitting immediately after processing the mounts, it sleeps for a second and processes them again. It continues doing this until it's been running for 10 seconds, then quits.
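A hedged sketch of the fix (the processMounts helper here is hypothetical):

import Foundation

func processMounts() {
    // compare the current volume list to the previous one and
    // kick off any "on mount" copies (details elided)
}

// Stay alive for the full default ThrottleInterval so launchd doesn't
// count a quick exit against us and respawn the job.
let throttleInterval: TimeInterval = 10
let start = Date()
repeat {
    processMounts()
    Thread.sleep(forTimeInterval: 1)
} while Date().timeIntervalSince(start) < throttleInterval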

With that change, the logging has stopped, and a long mystery has been solved.

This'll be in the next beta. Yay!

Technical Update! Thursday, September 06, 2018

SuperDuper! 3.2 B1 was well received. We literally had no bugs reported against it, which was pretty gratifying.

So, let's repeat that with SuperDuper! 3.2 B2! (There's a download link at the bottom of this post.)

Remember - SuperDuper! 3.2 runs with macOS 10.10 and later, and has improvements for every user, not just those using Mojave.

Here are some technical things that you might not immediately notice:

  1. If you're running SuperDuper! under Mojave, you need to add it to Full Disk Access. SuperDuper! will prompt you and refuse to run until this permission has been granted.

    Due to the nature of Full Disk Access, it has to be enabled before SuperDuper! is launched--that's why we can't simply wait for you to add it and then automatically proceed.

  2. As I explained in the last post, we've completely rewritten our scheduling so it's no longer in AppleScript. We've split that into a number of parts, one of which can be used by you from AppleScript, Automator, shell script--whatever--to automatically perform a copy using saved SuperDuper settings.

    In case you didn't realize it: copy settings, which include the source and destination drives, the copy script and all the options, plus the log from when it was run, can be saved using the File menu, and you can put them anywhere you'd like.

    The command line tool that runs settings is called sdautomatedcopycontroller (so catchy!) and is in our bundle. For convenience, there's a symlink to it available in ~/Library/Application Support/SuperDuper!/Scheduled Copies, and we automatically update that symlink if you move SuperDuper.

    The command takes one or more settings files as parameters (either as Unix paths or file:// URLs) and handles all the details needed to run SuperDuper! automatically. If there's a copy in progress, it waits until SuperDuper! is available. Any number of these can be active at once: you could launch 20 of them in the background, or supply 20 files on a single command line: it's up to you. sdautomatedcopycontroller manages the details of interacting with SuperDuper! for you. (There's an example command after this list.)

  3. We've also created a small Finder extension that lets you select one or more settings files and run them--select "Run SuperDuper! settings" in the Services menu. The location and name of this particular command may change in future betas. (FYI, it's a very simple Automator action and uses the aforementioned sdautomatedcopycontroller.)
  4. We now automatically mount the source and destination volumes during automated copies. Previously, we only mounted the destination. The details are managed by sdautomatedcopycontroller, so the behavior will work for your own runs as well.

    Any volumes that were automatically mounted are automatically scheduled for unmount at the end of a successful copy. The unmounts are performed when SuperDuper quits (unless the unmount is vetoed by other applications such as Spotlight or Antivirus).

  5. There is no #5.
  6. sdautomatedcopycontroller also automatically unlocks source or destination volumes if you have the volume password in the keychain.

    If you have a locked APFS volume and you've scheduled it (or have otherwise set up an automated copy), you'll get two security prompts the first time through. The first authorizes sdautomatedcopycontroller to access your keychain. The second allows it to access the password for the volume.

    To allow things to run automatically, click "Always allow" for both prompts. As you'd expect, once you've authorized for the keychain, other locked volumes will only prompt to access the volume password.

  7. We've added Notification Center support for scheduled copies. If Growl is not present and running, we fall back to Notification Center. Our existing, long-term Growl support remains intact.

    If you have need of more complicated notifications, we still suggest using Growl, since, in addition to supporting "forwarding" to the notification center, it can also be configured to send email and other handy things.

    Plus, supporting other developers is cool. Growl is in the App Store and still works great. We support 3rd party developers and think you should kick them some dough, too! All of us work hard to make your life better.

  8. Minor issue, but macOS used to clean up "local temporary files" (which were deleted on logout) by moving them to the Trash. We used a local temporary file for Backup on Connect, and would get occasional questions from users wondering why a file we used for that feature had ended up in their Trash.

    Well, no more. The file has been sent to the land of wind and ghosts.
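One more note on item 2: running saved settings by hand really is a one-liner. For example (the settings file names and locations here are hypothetical):

~/'Library/Application Support/SuperDuper!/Scheduled Copies/sdautomatedcopycontroller' \
    ~/Backups/Daily.settings ~/Backups/Weekly.settings

The single quotes keep the shell from tripping over the space and the "!" in the path.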

That'll do for now: enjoy!

Download SuperDuper! 3.2 B2

macOS Mojave: Opening New Vistas in Security for Mac Users Friday, August 31, 2018

Executive summary: sure, it's the Friday before Labor Day weekend, but there's a beta of SuperDuper for Mojave at the bottom of this (interesting?) post!

It Gets Worse

Back when OS X Lion (10.7) was released, the big marketing push was that iOS features were coming "Back to the Mac", after the (pretty stellar) Snow Leopard update that focused on stability, but didn't add much in the way of features.

Mojave (10.14) also focuses on stability and security. But in some ways, it takes an iOS "sandbox" approach to the task, and that makes things worse, not only for "traditional" users who use the Mac as a Mac (as opposed to a faster iPad-with-a-keyboard), but for regular applications as well.

Not Just Automation

Many more advanced Mac users employ AppleScript or Automator to automate complicated or repetitive tasks. Behind the scenes, many applications use Apple Events--which underlie AppleScript--to ask other applications, or parts of the system, to perform tasks for which they are designed.

A Simple Example

A really simple example is Xcode. There's a command in Xcode's File menu to Show in Finder.

When you choose that command, Xcode sends an Apple Event that asks Finder to open the folder where the file is, and to select that file. Pretty basic, and that type of thing has been in Mac applications since well before OS X.

In Beta 8 of Mojave, that action is considered unsafe. When the command is selected, the system alarmingly warns that “Xcode” would like to control the application “Finder”, and asks the user if they want to allow it.

Now, the prompt offers no real explanation of why this should alarm anyone, and in this case, the user did ask to show the file in Finder, so they're likely to Allow it; once they do, they won't be prompted again when Xcode asks Finder to do things.

A More Complex Example

Back in 2006, when we added scheduling to SuperDuper!, we decided to do it in a way that was as user-extensible as possible. We designed and implemented an AppleScript interface, used that interface to run scheduled copies, and provided the schedule driver, "Copy Job", in source form, so users would have an example of how to script SuperDuper!.

That's worked out well, but as of Mojave, the approach had to change because of these security prompts.

Wake Up, Time to Die

An AppleScript of any reasonable complexity needs to talk to many different parts of the system in order to do its thing: that is, after all, what it's designed for.

But those parts of the system aren't necessarily things a user would recognize.

For example, our schedule driver needs to talk to System Events, Finder and, of course, SuperDuper itself.

When a schedule starts, those prompts suddenly appear, referencing an invisible application called Copy Job. And while a user might recognize a prompt for SuperDuper, it's quite unlikely they'll know what System Events is, or why they should allow the action.

Worse, a typical schedule runs when the user isn't even present, and so the prompts go without response, and the events time out.

Worse still, a timeout (the system defaults to two minutes) doesn't re-prompt, but assumes the answer is "no".

And even worse yet, a negative response fundamentally breaks scheduling in a way users can't easily recover from. (In Beta 8, a command-line utility is the "solution", but asking the user to resort to an obscure Unix command to repair this is unreasonable.)

That's just one example. There are many others.

Reaching an Accommodation

Of course, this is not acceptable. We can't have everything break randomly (and confusingly) for users just because they've installed a new OS version with an ill-considered implementation detail.

Instead, we've worked around the problem.

Scheduling has been completely rewritten for the next version of SuperDuper!. We're still using our scripting interface, but the schedule driver is now a command-line application that doesn't need to talk to other system services via Apple Events to do the things it needs to do. It only needs to talk to SuperDuper!, and since it's signed with the same developer certificate, it can do that without prompting. A link to the beta with this change, among others, is at the end of the post.

This does mean, unfortunately, that users who edited our schedule driver can't do that any more: our driver has to be signed, and thus can't be modified. (I'll have more on this in a future post.)

It's more than a bit ironic that an approach that avoids the prompting can do far more, silently, than the original ever could, but that's what happens when you use a 16-ton weight to hammer in a security nail.

When SuperDuper! starts, we've added a blocking prompt for Full Disk Access, which is required to copy your data in Mojave, and--if you're using Sleep or Shut Down--a prompt for access to the aforementioned System Events, which is used to provide those features. Still ugly, but we've done what we can to minimize the prompts.

What a View

This should remind you of one thing: Windows Vista.

Back when Microsoft released Vista, they added a whole bunch of security prompts that proved to be one of the worst ideas Microsoft ever had. It didn't work: it annoyed users and caused such a huge backlash that they backed off the approach and got smarter about their prompting in later releases.

Perhaps Apple's marketing team needs to talk to engineering?

Those who ignore history...

Download SuperDuper! 3.2 Public Beta 1

iFixit Drive Replacement with SuperDuper! Sunday, August 19, 2018

iFixit recently released a video showing how to upgrade to an SSD using SuperDuper!.

Thanks, iFixit!

Antivirus Shenanigans, Part 87655 Sunday, August 19, 2018

This isn’t the first time Antivirus programs have caused problems for us, and it won’t be the last. But, if you’re using Sophos Antivirus and are seeing the error

| 05:21:41 PM | Info | ......COMMAND => Blessing OS X System Folder
| 05:21:41 PM | Error | umount(/private/var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/bless.5cmA): Resource busy -- try 'diskutil unmount'
| 05:21:41 PM | Error | /sbin/umount returned non-0 exit status

the problem is that Sophos’ “Anti-Ransomware” module is preventing macOS’s bless tool from working.

This is just another example of Antivirus programs trying to “help” but, in the end, making a system less reliable in an obscure, confusing way.

My guess is that it’s actively scanning the Preboot volume that bless has mounted in order to properly set up things for boot, and thus vetoes the ejection of that volume when bless tries to eject it.

One of the fundamental jobs of an antivirus tool is to not break basic system functionality while doing its thing. At least in its current release, Sophos’ Anti-Ransomware feature fails at that task. If you’re using it, and you’re getting errors, you’ll have to turn that feature off until they fix their bug.

Quick Mojave Public Beta 2 (DP3) Tip Sunday, July 08, 2018

Hey, folks.

As you may know, one of Mojave's focuses is stability and security. Part of that is denying applications, including those with 'root' privileges, the ability to access certain files on the drive.

This started with SIP, under which certain system files were made inaccessible. But Mojave extends this protection to "sensitive" user files as well.

As you might expect, not being able to copy files throws a bit of a wrench into the whole "backing up" process, and bad things happened when we tried (crashes, etc).

Fortunately, as of Public Beta 2, you can now tell Mojave that SuperDuper! is a "good actor" that should be allowed to access your data. Mojave doesn't prompt you to "OK" us, however, so you need to do this before you run.

Open System Preferences and switch to the Security & Privacy preference pane. Under Privacy > Application Data (Full Disk Access in later builds), add SuperDuper.

Once that's done, your backup should work.

On My Mark Wednesday, April 25, 2018

Executive Summary

Changes in macOS 10.13.4's behavior caused some problems for scheduling, and for the final steps involved in preparing an earlier major OS version's copy for boot. The beta resolves these problems (among others), and adds the ability to multi-select schedules in the Scheduled Copies window and run them all with one click.

SuperDuper! 3.1.5 Beta 1 Download

Replacing Gaskets

Setting up a drive for boot involves running the appropriate system tools, from that OS version, so that things are configured as the older OS expects them to be.

Unfortunately, 10.13.4 broke compatibility with earlier major macOS versions of update_dyld_shared_cache, causing the tool to crash on launch.

While the copies are fine, the error was, at the very least, disconcerting. This beta resolves the issue, at the cost of a slower boot...or, potentially, the need to attempt the boot more than once the first time you start up from a fresh copy.

COSC Timing

When 10.13.4 was released, we also started seeing an increase in user reports of either "missed" schedules or schedules that didn't run until the user logged in. (Typically, they'd see SuperDuper! launched, unexpanded, and not running; the run would then start a few seconds after logging in.)

We tried a lot of different things to resolve this problem over a number of weeks, and finally hit on the multi-factor solution we needed to get things working reliably again.

Basically, it all came down to the system going back to sleep before we had a chance to hold an assertion to keep it awake.

Unfinished Base Movement

In previous versions, our launchd job responsible for running our schedules would run on a repeating interval, every 60 seconds, and check to see if there was a job to run. While it was guaranteed to run once a minute, the second wasn't guaranteed, and thus could potentially be as late as 59 seconds past the minute.

I did this because, at the time, I couldn't figure out how to get a job to run every minute on the minute.

This meant that the wake, typically scheduled a minute beforehand, could occur more than two minutes before the real copy started...and the system would go back to sleep before we had a chance to tell it not to (since we didn't want to tell the system not to sleep when we'd just launched, but only while we were actually running a copy).

Chamfered Edges

The first attempt at this was to use caffeinate in the launchd job to hold the system awake in the background while the scheduled copy had a chance to get going. But, even though this worked in some testing, inside launchd it just didn't work: the assertion wouldn't persist.

As a second cut, we changed SuperDuper! to hold a brief assertion when launched. That helped somewhat...but didn't solve the problem.

To improve things, I delved into launchd some more, and (finally) figured out how to get our launchd job to run every minute, on the minute (for those who want to do this, you counter-intuitively use an empty dictionary). This helped even more: we can now set the wake event for the same time as the schedule...but it didn't fully resolve the issue, especially when users had a bunch of schedules set to run overnight.
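(For other developers who want the every-minute trick: in the LaunchAgent plist, an empty StartCalendarInterval dictionary matches every calendar minute, so the job fires every minute, on the minute. A minimal illustration, not our shipping agent:

<!-- old approach: fire every 60 seconds, at an arbitrary second -->
<key>StartInterval</key>
<integer>60</integer>

<!-- new approach: an empty calendar dictionary fires every minute,
     on the minute; use one approach or the other, not both -->
<key>StartCalendarInterval</key>
<dict/>

)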

Finally, we thoroughly investigated the internal script client scheduler: the part of SuperDuper! that sequences multiple AppleScript clients so they don't interrupt each other or all try to run at once. In there, we found some obscure logic errors that caused individual schedules to sometimes not start immediately, leaving them waiting in the queue for a while, even though they should have run right away.

Since, in the worst case, this potential delay outlasted the brief assertion we'd added, the "real" assertion, set up during the actual copy, would sometimes never get a chance to take hold...and once again, the system would sleep. Fixing those logic errors resolved the problem.

Blued Screws and Gold Chatons

At the same time, we fixed a longstanding problem with "progress spinners" sticking around in the Scheduled Copies window even though the schedule was complete, and with schedule sequencing when more than one schedule was pending: now, if you space the schedules one minute apart, they're guaranteed to run in the order they're scheduled, rather than in a semi-random order.

And, as a final just-in-case bonus, we retain a "don't sleep" assertion as long as there are any pending AppleScript clients, to ensure that the system doesn't go to sleep before they get a chance to start their copies.
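For the curious, "holding an assertion" looks roughly like this hedged Swift sketch using IOKit's power management calls (illustrative, not our exact implementation):

import IOKit.pwr_mgt

// Hold a "no idle sleep" assertion while the copy is in progress.
var assertionID: IOPMAssertionID = 0
let result = IOPMAssertionCreateWithName(
    kIOPMAssertionTypeNoIdleSleep as CFString,
    IOPMAssertionLevel(kIOPMAssertionLevelOn),
    "SuperDuper! copy in progress" as CFString,
    &assertionID)

// ...perform the scheduled copy...

if result == kIOReturnSuccess {
    IOPMAssertionRelease(assertionID)  // let the system sleep again
}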

The sum total of all these changes: problem resolved (with even better behavior than before), and groundwork laid for future improvements as well.

Glashütte Stripes

For a finishing touch, we added in something we've wanted for a while: you can now select multiple schedules in the Scheduled Copies window and run them all on demand by clicking Copy Now.

There are a bunch of other fixes in here, too. We worked around another(!!1one!) very weird system pipe bug that was causing communication between the copy engine and the UI to break. This also improved the retrieval of error information from the copy engine.

We fixed a problem with file copying when a file was deleted out from under us between the data and attribute parts of the copy - now we continue past that point, since the cause is clear.

We fixed another problem where an image file might get recreated, rather than updated, in some situations.

And we finally turned off the spell checker for the log view (yeah, I know, don't ask), and improved its operation, so it won't scroll out from under you if you open it during a copy...and people won't think that the spell checker underlines are errors.

There are things I'm forgetting, but since we're already nearly 1000 words in...

Final Adjustment in Six Positions

We're pretty confident that this version will work great for you, as it's been tested on a broad spectrum of user systems, macOS versions, etc. To be sure, though, before we update the world, we've decided to release this as a Beta today. As explained in the beta post, if you install this, and we need to release updates to the beta, it will update automatically and separately from the release update feed. And when the final version is released, it'll also automatically become a "release version" and transition to the regular update feed.

So...have at it, and let me know how it works for you.

SuperDuper! 3.1.5 Beta 1 Download

Practices Make Perfect (Backups) Wednesday, February 21, 2018

We've been getting a lot of questions lately about the APFS image bug (recently documented by Mike Bombich, author of CCC) where, when an image container can't grow, APFS doesn't reliably indicate an error, which can silently lose data. Worse still, double-checking the data seems to indicate it's OK—even a checksum succeeds—until you eject, at which point the data is lost.

This looks to be another manifestation of a problem we've known about for years (and documented in the User's Guide back in 2006): when an image can't grow, whether due to disk full errors or because the host volume doesn't support large files (e.g., FAT32 or ext2), writes to that image can fail catastrophically.

Failure is Inevitable, Success Requires Planning

Whether the problem is a bug in the OS or a hardware error, failure is inevitable. But if you implement a successful backup plan, you can protect your data, even in the face of this sort of adversity.

So let's take a moment to talk about "best practices" for backup devices and methods.

Simpler is Better

This should be obvious, but is often ignored in the search for convenience (or expedience): the fewer layers there are between your computer and the storage device, the more reliable your backups are going to be.

What do I mean by that?

This: you should write directly to a local drive. That's going to be your most reliable, bootable solution. Don't write to an image stored on that drive. Don't format it as a foreign file system (like NTFS) and use a special driver to access it. Write. Directly. To. The. Drive.

Of course, things can still go wrong! But recovery will be easier and faster. Remember, a single bad sector isn't likely to take out your whole backup...but it could destroy an image.

You Can't Start Up From a Network Drive or Image

This is pretty basic, but important. You can only boot from a backup written to a properly partitioned and formatted, locally connected, non-network drive.

You can't boot from a disk image (it's not a "real" drive), whether it's stored on a local or network drive.

But I Want to Store More Than One Backup on a Drive!

If you need to store more than one backup on a physical device:

  • If you're on 10.12 or earlier, partition the drive into the number of volumes you need to back up. So, three source volumes to back up? Three partitions on the backup drive.
  • If you're running 10.13 or later, format the backup drive as APFS, and use APFS's very flexible "volumes" as your backup destinations, one per source volume.
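For example, on 10.13 and later, adding another backup volume to an existing APFS container is a single command (disk2 is a hypothetical container identifier--check yours with diskutil list):

diskutil apfs addVolume disk2 APFS "Weekly Backup"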

We hear all the time that people are using images to "make more efficient use of space". That's the wrong thing to do—there's just no other way to put it. An image needs to be able to grow to its maximum size, unimpeded, no matter what. Images do not necessarily "hug" the data inside them, and may grow to full size even when their contents are smaller.

If the image cannot grow, it's quite likely you will corrupt that image and lose the data in it. So you need to have all the space available anyway!

High Sierra's native APFS volumes do this better. All the volumes share the same storage pool, and allocate that storage intelligently, managing that storage better than images ever did. Use them.

But I Want to Store Backups on the Same Volumes as Some Data!

Don't ever do this, except in a short-term emergency situation.

What people seem to forget is that the data they're storing their backups alongside also needs to be backed up. And if you're storing your backup on that same volume, you're probably not backing up the data.

This is a mistake. A huge mistake that we see way too often.

Typically, we'll hear from people who work with large amounts of data, such as photographers, designers or musicians. They'll move their "older" work to an "archive" volume, and they'll want to store backups of their current work alongside that archive.

But they'll often forget they need to back up their archive too!

Don't be a goofus. Having an archive volume is fine. Keep it separate from your backup volume. And back it up.

But I Want to Back Up to a Network Volume!

OK, so we've covered the cases where you've got local drives. With local drives, the best thing is to never use images unless you absolutely need to. Never.

But what about a network drive?

First, I truly believe you should never, ever use a network drive as your only backup. It's fine to have a network backup as a secondary backup. But by its very nature, it's going to be the least reliable one. It's not only subject to the potential flakiness of images, it's also subject to the flakiness of networking in general.

It's one thing when you try to access a web site and the page doesn't load. It's another thing entirely when the network "burps" when you are updating the structures of a backup volume.

You've probably already seen the result of this kind of failure. That message Time Machine displays that says "In order to improve reliability, we've started a new Time Machine backup"? That's because the image failed...and your backups have been completely lost. (Kind of a mild sounding message for a catastrophic failure, don't you think?)

So, here's my advice for networked backups.

If you're using a desktop, and you have a modern NAS device that supports iSCSI (like a Synology or QNAP), purchase an iSCSI initiator (like the Studio Network Solutions GlobalSAN initiator, or ATTO's Xtend SAN), create an iSCSI volume, format it natively for the Mac (as HFS+ or APFS) and back up to it directly.

Yes, iSCSI volumes are also containers, but those containers are managed on the far side of the connection, rather than the near side. As such, a near-side failure is much less likely to corrupt the container...and more closely resembles a typical local drive failure, which is almost always repairable by Disk Utility.

If you have a laptop, or can't use iSCSI, create a share that has more space than you need to back up, and back up to a sparse bundle. If your device doesn't support sparse bundles, use a sparse image.
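Creating a generously sized sparse bundle is a one-liner with hdiutil (the size, volume name and path here are hypothetical):

hdiutil create -size 2t -type SPARSEBUNDLE -fs HFS+J -volname Backup /Volumes/NAS/Backup.sparsebundle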

Broad Spectrum Protection

Remember, you should never rely on a single backup device, or a single backup program. No matter what you're using for your backups...use something else too.

So, if you're using SuperDuper!, also use Time Machine, preferably to a separate device. Have multiple backups, with both SuperDuper! and Time Machine. And use something like Backblaze, so you get an offsite backup!

Backup Frequency

As I've indicated in the User's Guide, I suggest a daily, weekly and monthly backup with SuperDuper. It's easy to do - just make three schedules to different drives.

I set up my weekly and monthly backups "on connect" - that is, when I plug in the drive (which I keep separately), SuperDuper! launches, copies and then (since I've set an "On successful completion" action), ejects the backup and quits.

My daily backup is left unmounted but connected. SuperDuper! launches every day, mounts the drive, makes its copy, and then unmounts the backup.

I leave Time Machine on its default schedule, so it's doing a roughly hourly backup.

And Backblaze is backing up continuously.

Be Smart, and Don't Cheap Out

As I've said many times, no one was ever sad because they had too many backups.

Drives are inexpensive. It costs less than $100 (more like $60) to get a 1TB USB3 external drive. For $10 more you can get 2TB. 512GB external SSDs are less than $200. Spend the money. Your data's worth it.

Don't let your data loss story serve as a warning to others. Be the hero who planned ahead and saved your family's precious photographs. We're happy to be the invisible partner alongside you as you save the day.

Just remember the most important rule of all...
