MacInTouch Reader Reports

Hard Drives: ZFS

Feb. 14, 2012
Feb. 16, 2012
Feb. 17, 2012
Feb. 28, 2012
Feb. 29, 2012
Mar. 1, 2012
Mar. 6, 2012
Mar. 7, 2012
Mar. 12, 2012
Jun. 18, 2012
Jun. 19, 2012
Jun. 20, 2012
Jun. 21, 2012
Jun. 22, 2012
Jun. 23, 2012
Jun. 25, 2012
Jun. 26, 2012
Jun. 28, 2012
Jun. 29, 2012
Jul. 3, 2012
Jul. 5, 2012

Newer entries...
Feb. 14, 2012

item.152247

MacInTouch Reader

Thought I'd share my week's experience with the free and open source MacZFS (http://code.google.com/p/maczfs/) for those interested in maintaining bit accurate integrity of their data.

I have a lot of music in a library I've built up over the years. Now that it's mostly the way I want in iTunes, I dread going through all that effort again and I certainly don't want "bit rot" to set in, ruining my files over time. Until now I've been keeping a source copy on a hard drive in my computer with a copy in a network storage unit and another copy in an identical storage unit at my dad's place, swapping them from time to time. In the last two years I've had two hard drives fail in the network devices, but simply replaced them and copied the files over again.

My interest in ZFS comes from its automatic checksumming: every block read from the drive is verified against its checksum, guaranteeing data integrity. If you set up a mirror with two drives (or RaidZ with at least three drives), ZFS will automatically correct any errors it finds from the good copy. Once a month you run a "scrub" command, which reads every block and verifies its checksum. This process takes about 2 hours on my system, a Core i7 with 6 GB of RAM and 2x1TB mirrored SATA "music" drives running Mac OS X 10.6.8.
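The monthly check is a single command. As a sketch (the pool name "tank" here is hypothetical; substitute your own):

```shell
# Start a background integrity check of every block in the pool;
# the pool stays mounted and usable while it runs
zpool scrub tank

# Check scrub progress and see any checksum errors found or repaired
zpool status -v tank
```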

What I did to set up MacZFS was install the package, shut down the computer, add another drive identical in size to my music drive, reboot, partition and format the drive as ZFS and create a ZFS pool. This involves the command line but it's not hard if you follow the instructions on the MacZFS "getting started" page. A "pool" is what you see on your desktop as a hard drive. This pool can have any number of drives in it but you see it as one drive.
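As a rough sketch of those steps (the device name disk2 is an example only — confirm yours with `diskutil list` first, since pointing zpool at the wrong disk destroys its contents):

```shell
# Identify the new disk first -- getting this wrong is destructive
diskutil list

# After partitioning the disk for ZFS per the MacZFS "Getting Started"
# page, create a single-disk pool from its ZFS slice (here disk2s2).
# The pool then mounts on the desktop like any other volume.
zpool create tank disk2s2
```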

Once formatted, the ZFS drive automatically shows up on the desktop just like any other Mac drive. I used rsync to copy all my music to the ZFS drive. Then, making sure I had updated my network backup, I erased the source HFS+ drive, formatted it ZFS from the command line and added it to the ZFS pool that already included the new drive.

Here's the cool part: As soon as you add the second drive to the pool, ZFS begins rebuilding it as a mirror of the other. This is called "resilvering", a pun on making a mirror. A few hours later, the process is complete and the mirror can withstand a single drive failure as well as correct any errors that creep in with bit rot.
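Continuing with the hypothetical names above, turning the single-disk pool into a mirror is one command:

```shell
# Attach the erased second drive (disk3s2) to the existing device;
# ZFS immediately begins resilvering the new drive from the first
zpool attach tank disk2s2 disk3s2

# Watch the resilver progress; the pool stays usable the whole time
zpool status tank
```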

I had been planning to build an Atom-based ZFS file server using FreeNAS but decided against it because such servers are not designed to go to sleep. The D-Link 323's I use only consume 8 watts when in low-power mode and I've been very happy with them in spite of being slow. They're fast enough for watching video and listening to music anywhere in the house.

So now I have the best of both worlds. I have two backups of a redundant, parity-checked source that is only on when my computer is on.

Some things to note: ZFS does not play well with USB drives because of the way USB enclosures report read/write activity. Firewire externals are fine, so long as there is no hardware RAID involved. Check out the MacZFS SystemConfig and UserProfiles pages for more hardware information.

Additionally, ZFS does not work with Spotlight. Drag the ZFS drive to the Privacy tab in the Spotlight System Preference. Check out the KnownIssues page for more information.

Feb. 16, 2012

item.152491

MacInTouch Reader

Sorry, but I got my numbers wrong in my earlier post. Going from 4x1TB to 4x2TB would increase 3TB storage to 6TB storage.

item.152325

John Kratochvil

You had a reader submit their experiences using MacZFS, a project I've been interested in for some time. The feature of ZFS that would be very useful to me is the ability to expand a drive pool - I am constantly running out of room for my media library (passed 12 TB recently!).

Does anyone have any experience with this? I would dearly love to have a bunch of cheaper drives appearing as a single volume, but be able to expand that as needed.

Thanks!

item.152340

Laurence Gatehouse

MacInTouch Reader commented that he used DNS-323s as his network storage devices. Two questions, really:
How has he got communication set up between the DNS-323 and his Mac? Samba or AFP, and how is it set up on the DNS-323?
Have I got the right end of the stick here? The ZFS driver takes whatever storage devices you tell it to (apart from USB) and presents the lot as a single hard drive, with all the ZFS wonders (RAID, mirroring, etc.) happening under the hood and out of sight.
I have (like most people, I guess) a legacy of various discs, drives and devices. I have cobbled some of them together to do the network storage (DNS-323) and Time Machine stuff. ZFS would seem a much better way of doing all this. I would love to hear more about it.

Thanks Lorenzo

Feb. 17, 2012

item.152490

MacInTouch Reader

John Kratochvil - just add another drive to the pool with the "zpool add" command. The space shows up right away. Look here for more info.

If you are using RaidZ, you need to replace each drive, one by one, with a larger one, waiting for re-silvering to complete before replacing the next. Once all drives are replaced, the bigger space becomes available. (For instance, replacing 4x1TB drives with 4x2TB drives will grow your space from 4TB to 8TB without losing any data... unless you do something wrong, of course!)

Here are some step by step instructions on growing a RaidZ pool.
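The arithmetic is simple (and, per the Feb. 16 correction above, single-parity RaidZ gives you the capacity of all drives minus one):

```shell
# Usable RaidZ capacity is (drives - parity) x drive size
drives=4; parity=1

size_tb=1
echo "4x1TB: $(( (drives - parity) * size_tb ))TB usable"

size_tb=2
echo "4x2TB: $(( (drives - parity) * size_tb ))TB usable"
```

So the 4x1TB-to-4x2TB upgrade goes from 3TB to 6TB of usable space, not 4TB to 8TB.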

I have a home-built computer with 12 SATA ports and space for 12 drives, so this is easy (and one of the main reasons I went the home-built route).

To learn more about what ZFS can do, look here.

Just realize that MacZFS is a few versions behind so it can't do everything you can do if you run FreeNAS on a standalone server.

item.152496

MacInTouch Reader

Laurence Gatehouse - I run ZFS on my personal computer, not the DNS323 (which uses Ext3). My computer safely stores the source copy of my music on a mirrored ZFS pool. I edit the iTunes library and music tags, cover art, etc., and then copy this music to the DNS323 to share throughout the house, updating the network copy with rsync from time to time.

I use Samba on my DNS-323s, which is the default. You need to install "funplug" on the DNS-323 and telnet in to run Netatalk, which I did have working at one time but haven't found necessary.

One trick I have discovered is that I use the same name for the network share of the music as I have the volume name in my computer. (So I have two ZFS pools named Music_A and Music_B which show up as volumes on my desktop. In the DNS323 I have two drives shared as Music_A and Music_B, which are identical copies of Music_A and Music_B on my computer.)

This allows me to speedily import all the music into iTunes on my desktop from fast local drives and then copy the resulting iTunes library from my computer to the other computers on the network. When the other computers run iTunes and mount the DNS-323 volumes, their local copy of iTunes sees the network volumes as the same path as my computer sees its local volumes. This saves a lot of time trying to import the iTunes library from the very slow network device.

For real flexibility, build a FreeNAS server using a compatible computer and share the drives with NETATALK. But, like I said, this means starting up and shutting down the server to save electricity.

What I wanted was robust data integrity on my source copy. The DNS323 is extremely efficient for sharing a copy of this data throughout the house.

Feb. 28, 2012

item.153198

Randall Voth

Further ramblings in my use of MacZFS, for those who may be interested:

I just successfully increased the storage of a mirrored ZFS pool. These "pools" show up on your desktop as a single hard drive [volume] but can include many drives, mirrored or RAID.

I had two 1TB drives mirrored in a pool, with 700GB in use and 300GB free. After adding two new 2TB drives to the computer, partitioning and formatting them as ZFS, I used the "zpool attach" command to add the new drives to the mirror. This did not yet increase the space but gave me 4 mirrored copies of checksummed data, so I could lose 3 drives and still not lose any data. The free space of a mirror is always determined by the smallest drive.

After "attaching" the new drives to the pool, it took about 2.5 hours to "re-silver" them with the mirrored data. It automatically does both at once! After that process completed, I used the "zpool detach" command to remove the two 1TB drives from the mirror, leaving me with 2x2TB drives mirroring my original 700GB of data and 2x1TB drives ready for something else.

Following a reboot, I now have 1.3TB of free space on a checksummed, mirrored pool, and I can reuse my two 1TB drives for some other purpose.
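A sketch of that sequence, with hypothetical device names (disk2s2/disk3s2 as the old 1TB slices, disk4s2/disk5s2 as the new 2TB ones):

```shell
# Attach both new 2TB drives to the existing mirror;
# ZFS resilvers them in parallel, giving a 4-way mirror
zpool attach tank disk2s2 disk4s2
zpool attach tank disk2s2 disk5s2

# Once "zpool status" shows resilvering is complete,
# drop the two old 1TB drives from the mirror
zpool detach tank disk2s2
zpool detach tank disk3s2

# The mirror's capacity grows to that of the smallest remaining drive
zpool list tank
```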

How incredibly cool is that?

item.153215

Mark Thomas

I'm wondering what issues, if any, might crop up from storing an Aperture or Lightroom library on a ZFS-formatted partition, or if it would simply behave as expected with the added benefit of being more robust?

item.153220

Robert Mohns

Randall Voth writes:

"Further ramblings in my use of MacZFS, for those who may be interested: [?] How incredibly cool is that?"

Incredibly cool! Thanks for writing up your experience. I've been looking forward to Ten's Complement's ZEVO ZFS implementation for Mac; I'm awaiting Platinum edition.

The one concern I have is that, according to Ten's Complement, Apple File Sharing on Lion is not compatible with their ZFS implementation (SMB is). Does MacZFS have the same issue?

Feb. 29, 2012

item.153260

Emory Lundberg

You can work around OS X's (and OS X Server's) inability to export non-HFS+ volumes via AFP by using liberate-applefileserver [1], though be warned that it obviously isn't supported.

I would urge anyone that wants to export ZFS to OS X clients to open this as a bug with Apple. The expected behavior is clearly that a filesystem selected by the user in Server Admin (and one that shows up there!) should be available to network users.

[1] https://github.com/joshado/liberate-applefileserver

item.153272

Randall Voth

Mark Thomas asked about issues with Aperture or Lightroom. I don't know but I doubt there would be problems for data. I've used ZFS as a data drive for platform-independent music files and also worked with Logic Audio data files, which worked fine from what I could tell. Now that I have so much free space on the drives I'll do some more work with Logic projects on ZFS in the next few weeks and post an update.

The main obstacle with MacZFS is that it is case-sensitive, which causes problems when trying to do something radical like moving your Applications directory to ZFS. Apparently, Adobe programs cannot run from a ZFS drive because of case sensitivity. The solution is to store offending applications on an HFS+-formatted sparsebundle disk image located on the ZFS drive. I suppose that method would work for your library files, if there is a problem. One test, without installing ZFS, would be to format a drive as case-sensitive HFS+ and see.

Another thing I've noticed is that some files with accented characters cause the Finder to keep refreshing the folder contents. If I rename the files from iTunes, using an AppleScript, the problem goes away. This apparently has something to do with how the Finder handles Unicode filename normalization.

Here is a FAQ page that may help: code.google.com/p/maczfs/wiki/FAQ


Robert Mohns asks about File Sharing under Lion. I don't know because I'm not upgrading to Lion. It appears to work under Snow Leopard. My test was to turn on File Sharing and load up an MP3 on my notebook over the network from the ZFS formatted volume. If that's not what you meant, explain and I'll do some tests for you.

But, really, it's so easy to experiment with MacZFS. Just install the package, attach a spare FireWire drive with a small disk and follow the steps on the Getting Started page found here: http://code.google.com/p/maczfs/wiki/GettingStarted

My computer (running 10.6.8) is rock solid in the month since installing MacZFS, and I am so happy to finally have my data mirrored and protected from bit rot. The drive volumes simply show up and you use them like any drive you've ever had on your desktop.


One more thing I've noticed: the four 2TB drives I formatted with ZFS from the command line partitioned fine with the instructions on the "Getting Started" page but did not format and mount properly. The 1TB drives all worked right away.

I didn't explore the issue. My solution was first to partition and format the drives using Disk Utility as HFS+ and then format the drive as ZFS from the command line using their instructions under "Using an Existing Partition" on the Getting Started page. This worked perfectly every time.

Mar. 1, 2012

item.153306

Emory Lundberg

Re:

The main obstacle with MacZFS is that it is case-dependent,

It doesn't have to be, it's a designation set when the zpool is created. One of my pools is case-sensitive:


@buster ~ % zfs get casesensitivity cornballer
NAME        PROPERTY         VALUE      SOURCE
cornballer  casesensitivity  sensitive  -

It can't be changed later because (obviously) that would introduce several other issues. It becomes a read-only value after the pool has been created.
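So if case-insensitivity matters to you, choose it when the pool is created. Assuming an implementation that supports the property at creation time (ZEVO evidently does, per the output above; older MacZFS builds may not), something like:

```shell
# casesensitivity is read-only after creation, so set it when the
# pool's root filesystem is created (-O passes filesystem properties)
zpool create -O casesensitivity=insensitive tank disk2s2

# Verify the setting
zfs get casesensitivity tank
```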

Mar. 6, 2012

item.153540

Marc Heusser

Tried that - storing a copy of my Aperture library, some 90 GB on an external 1 TB 2.5" USB hard disk formatted with ZEVO's ZFS (silver edition). Seems to work like a charm, mounts automatically like any external hard disk, new symbol, reports in disk utility as ZFS disk. With the added benefit of being able to check for data errors.

Mar. 7, 2012

item.153568

Randall Voth

Can you set the "zfs set copies=2" value with the ZEVO Silver version? This way you get redundancy of important files on a single disk and errors can be corrected. Otherwise, you may find out your data is corrupt but must restore it from a backup, if possible.
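For reference, this is the stock ZFS syntax (whether ZEVO Silver exposes it is exactly the question; "tank" is a hypothetical pool name). Note that copies only applies to blocks written after the property is set, so set it before copying data in:

```shell
# Store two copies of every block, even on a single disk,
# so checksum errors can be repaired from the duplicate
zfs set copies=2 tank

# Confirm the property took effect
zfs get copies tank
```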

Mar. 12, 2012

item.153955

Marc Heusser

In the meantime I also use a 2.5" external hard disk via USB formatted with Zevo's ZFS implementation successfully with Time Machine. It does create a sparse disk image bundle on the disk, and then does back up as it should. Entering Time Machine works. Restoring a full disk from this backup obviously would need another computer, as the disk image first needs to be mounted, i.e. you cannot directly read the disk from a newly installed system.

Jun. 18, 2012

item.158775

Marc Heusser

I have been using www.tenscomplement.com Zevo Silver ZFS File System for Mac OS X with great success for my Time Machine backups, but also archives.

Since a few days the website announces "We're making some changes!". Any ideas what they might be?

Of course I'd be hoping for integration into Mac OS X 10.8 Mountain Lion, but that would be too good!

(ZFS is a modern filesystem, which deals with e.g. data corruption from the ground up, developed at Sun a few years ago. But it also makes it easy to manage whole pools of disks, save snapshots like Time Machine, does not have size limitations, implements encryption, deduplication, compression and mirroring as well as RAID configurations in the file system.

From the website:

Peace Of Mind
It's time to stop worrying about the state of your data.

ZEVO's Checkups provide efficient validation of all your valuable data. Even single-bit errors can be detected anywhere in a file.

This is a huge improvement over traditional repair utilities that can only patch up the system data, but are not concerned with your file contents and require taking your data offline for extended periods.

State-of-the-Art Design
The technology used in ZEVO was designed from the ground up to address today's storage concerns.

The new design throws away obsolete assumptions. Fundamental features like data integrity and scalability are built in, not bolted on.)

item.158795

Larry Nelson

Regarding ZFS: The last time this topic came up on MacInTouch, I looked into it. The idea is very attractive; I have a large music library, and would like to protect it from invisible corruption. I don't want to find errors requiring a re-rip of a CD buried in a box someplace.

The problem is that the instructions I've found have been highly cryptic, at least to me. I'm not a complete newcomer to file systems, having started with CP/M and then moved to Macs and offboard hard drives, and Silverlining. None of that seems to apply to ZFS.

My existing music library is on a Guardian Maximus Mini RAID-1 system, backed up to another RAID-1. A Mac Mini (Spring 2011, 10.6) runs it.

I've heard that ZFS doesn't work well with standalone RAID systems, so I thought of buying two off-board FW800 hard drives and having ZFS use them together in RAID-1. The ZFS info I've found doesn't show me, clearly enough, where to start. Maybe I'm making it more complicated than it really is.

One criterion for this set-up is silence. The computer and disk drives are at my desk, where I listen to music. The G Maximus Mini is nearly silent, and I wish they made a version of it that allowed the two drives to be connected independently. I know Wiebetech makes such enclosures for 3.5 inch drives, but I wonder how loud they are.

I'd very much appreciate hearing from more experienced ZFS users about where to start. Thank you.

Jun. 19, 2012

item.158826

MV Fabbri

I have also turned to ZFS for my valuable data, but I have chosen the ZEVO implementation of ZFS by tenscomplement. Don Brady, the developer, is a former Apple filesystem engineer. When Apple eventually decided not to add ZFS to OS X, he started a new company. I have been in the beta-testing program for several months, and I am very satisfied.
ZEVO comes in three different editions. The Silver edition came out at the beginning of 2012. You would need the Gold edition, which supports mirroring. As soon as it comes out, I will grab it myself.

item.158831

Marc Heusser

To Larry Nelson: ZFS makes many things simpler, yet is more powerful - think of it as the Mac OS X of file systems.
Because it drops assumptions carried along for decades that are no longer necessary, some things can seem too simple. For example, thanks to copy-on-write it simply does not have a write hole (the question of what happens if a disk write gets interrupted, i.e., half of it already changed, the rest not executed), so specialized RAID hardware with nonvolatile RAM is not necessary at all.

The key idea is to take any cheap (not specialized) hard disks connected to a machine and make them reliable through a good file system. For example, *every* block of data gets a checksum, not just directories. Data is *never* overwritten, but copied on write. All data can be verified to be good; if anything gets corrupted you will be notified immediately, and ZFS can often "resilver" the data back to a good state, i.e., correct errors by itself. And it can constantly monitor drive health.

For larger installations you can add more disks to a storage pool. In this total capacity you can then define individual file systems (like Mac OS X volumes, MS DOS drives etc). You do not have to take care of what goes where and whether it fits.
There are many more goodies.

So, in your situation you'd just connect disks and tell ZFS to use two of your disks as mirrored (like RAID1) or protected (like RAID-5). And then you'd put filesystems (Mac OS X volumes) in these pools.

Now to make things easy, Tenscomplement put all this into a Mac-like package. Founder Don Brady had been leading Apple's team integrating ZFS, and before that developing HFS+ etc. I have the Silver package which has already been released to the public.

With that, you get a system preferences panel that will let you format a connected disk with ZFS. After that, such a disk will mount automatically like any other disk you connect, and Disk utility will list it with file system ZFS. To check the health or erase the disk (to ZFS or HFS+) you click on a button in the same system preferences panel.

I have been using such a ZFS disk (2.5-inch 1TB USB) now for a few months for my Time Machine backups, and it works like a charm. I can check data integrity from a system preferences panel.

With the command-line tools you get as well (inside Terminal, the two commands zfs and zpool are all you need), you could even keep redundant copies on a single disk, somewhat like RAID 1 (simply by setting the option copies=2 on the filesystem).

The Gold edition will give you that in a control panel, Platinum will add redundancy similar to RAID5 etc.

But those have not been released yet. And there comes the "We're making some changes ..." I had wondered about.

item.158835

Marc Heusser

Here's a description in detail of ZFS, the Sun ZFS admin guide, from "What is ZFS?" onwards:
 ZFS Administration Guide

item.158847

Emmett Gray

Regarding ZFS: I'm in charge of preserving an archive of about 10TB of music masters, which I have stored on 3 redundant sets of 1TB drives (from 3 manufacturers), each set in a different location. When I first read the ZEVO promo material, I thought this is magic: "Get back what you put in. Guaranteed". Well, not so fast. It took some digging to find out that in order for ZFS to be able to recover from bit rot or other corruption, you have to have redundant data: you have to write twice to the same disk if you're using single disks, reducing your storage space by a significant factor. If you don't have redundant data, all ZFS can do for you is flag a problem.

I already have something that flags problems:   http://diglloydtools.com/integritychecker.html
I have been using this for a couple of years to keep an eye on the archive.

At some point in the future I'm going to want to migrate the archive to a new set of drives, and when I get around to that I'll probably give up the space and use write-twice ZFS.

item.158868

Randall Voth

Larry Nelson - Can you set up your Guardian Maximus as a simple two-drive unit where the drives show up independently in the Finder? If so, then all you need to do is install MacZFS and follow the simple instructions here: http://code.google.com/p/maczfs/wiki/GettingStarted.

Use this command to create a mirror of two drives:

# zpool create tank mirror disk2s2 disk3s2

If you want to practice with MacZFS, you can do so with an old disk partitioned into equally sized partitions using Disk Utility. Format these partitions as instructed and bind them together as mirrors or raidz. It thrashes the disk terribly, but you can muck around and gain confidence, adding pools, removing them, etc.
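Another low-risk way to practice, assuming your ZFS build supports plain files as vdevs (upstream ZFS does; if MacZFS refuses, fall back to the partition approach above):

```shell
# Create four 128MB backing files to stand in for disks
# (mkfile ships with OS X)
mkfile 128m /tmp/vdev1 /tmp/vdev2 /tmp/vdev3 /tmp/vdev4

# Build a throwaway raidz pool on them and experiment freely
zpool create scratch raidz /tmp/vdev1 /tmp/vdev2 /tmp/vdev3 /tmp/vdev4
zpool status scratch

# Tear it all down when finished
zpool destroy scratch
rm /tmp/vdev?
```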

I haven't had any problems with two mirrors of four drives in 5 months use. I even upgraded one mirror from 1TB drives to 2TB drives by removing one of the 1TB drives from the mirror, adding a 2TB drive, letting it resilver and then replacing the first 1TB drive with another 2TB. No data copying was necessary, ZFS handled everything!

One gotcha is how the Finder handles accented characters on ZFS volumes: it constantly refreshes folders containing accented filenames, making renaming impossible except from the command line. These were files imported with iTunes naming and managing them.

I also found that I could not create and format a 2TB drive as easily as the 1TB drive, which I did in one step. I had to partition the 2TB drive with Disk Utility and then follow the instructions for using an existing partition further down the page.

You should also turn off Spotlight on the ZFS formatted drives, because they don't work together.

item.158872

Henry Harrison

Larry, in response to your question about where to start

http://code.google.com/p/maczfs/wiki/GettingStarted

did the trick for me. I'm running two 2x1TB FW800 cases, one set up as a 1TB mirror, giving me the duplication that is needed for robust error recovery, and the other as a 2TB pool. I use the first for my iTunes library and, once I remembered that ZFS is case-sensitive, have had no problems with it. I use the other for large files, movies and the like, and don't need the error recovery there, as I have the original media as backups in most cases.

The MacZFS people don't have a GUI, but their implementation already provides the equivalent of Ten's Complement's Gold level, which is what I wanted. If you are at all comfortable with the Terminal, you should be fine. I started out by experimenting with a bunch of old small drives until I was sure what the different commands were doing.

Lastly, I love the comparative quietness. I moved the drives from HFS to ZFS, and they are significantly quieter in operation.

item.158875

Michael Blake

I, too, bought Zevo Silver and used it for a while. However, in the end I uninstalled it because it was using a lot of RAM, which seemed to be added to the kernel process. Once I uninstalled the ZFS driver, the kernel's footprint dropped dramatically.

item.158890

Mike Jackson

Regarding all the talk about ZFS: I also looked into it when TensComplement first released ZFS for OS X, just after I had a 2TB RAID setup go bad. So much for data redundancy.

After researching ZFS, its different implementations, and how to integrate it with my own workflow and setup, I decided to whip together a FreeNAS box, which runs ZFS under the hood. I have a 6-drive RAIDZ2 setup with 4x1TB and 2x2TB drives. The array can handle a full 2-drive failure at any point and still have all data available.

I AFP-share out the FreeNAS box and use it for my data and a backup of my Home directory. I also have FreeNAS run a "ZFS Snapshot" every night, so that I get a daily back up much like Time Machine.
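A nightly snapshot like that is a single command, easily run from cron on any ZFS system (the dataset name "tank/home" is hypothetical):

```shell
# Date-stamped, read-only point-in-time copy; near-instant and
# initially consumes no extra space, thanks to copy-on-write
zfs snapshot tank/home@$(date +%Y-%m-%d)

# List snapshots to confirm
zfs list -t snapshot
```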

FreeNAS is run through a Web browser-based GUI and is more or less straightforward to administer. There are a few nice guides on the Internet, specific to FreeNAS, OS X and Time Machine, that walk you through all the necessary setup.

The FreeNAS box is connected over Gigabit Ethernet, and I routinely get read/write rates of about 100MB/sec to/from the box, which is significantly faster than my old FW800 RAID (Drobo) could muster.

I would have loved to put ZFS on the 4 drives in my 2009 Mac Pro but without a GUI to admin the ZPools, Snapshots and other ZFS utilities I didn't want to go down that path.

One caveat the astute ZFS guru will see in my setup: the pair of 2TB drives is not used to full capacity until I upgrade the other 1TB drives. I can do this one at a time, without ever taking the system down, until all drives are 2TB; then my capacity will jump from the current 4TB to 8TB. Again, all without ever having to take the drives offline.

I run an "rsync" script at the end of the day, which syncs my local home directory to my FreeNAS-based home directory backup. This only copies the changed files. Then the ZFS Snapshots kick in later in the evening, which gives me the Time Machine-like backup of my home directory that I am looking for.

For the hardware folks out there I used a quad core AMD Bulldozer-based processor, 8 GB RAM and motherboard with 6 SATA ports on it - all put into a case that can handle 10 drives.

Total cost was $300 (Not including the drives, which were reused from the failed Drobo).

In the end I am very happy with this setup. Is this setup for everyone? Not really, but if you are the "do it yourself" type of person, then this is an inexpensive way to get a ZFS system that can survive a simultaneous 2-drive failure and still stay up.

For *really* long-term permanent storage, I have turned to "M-Discs" which are billed as the "1000 Year DVD". But that is for another topic.

Good Day.

Jun. 20, 2012

item.158906

Ted McCabe

Regarding Emmett Gray's comment for the need to write twice: ZFS has RAIDZ, which is like conventional RAID in that parity data is written instead of duplicate data - the result being that you can have a zpool of say, four disks, so that for any write, one of those disks acts as parity (which one varies, so all act equally). Those four disks then present themselves as three disks' worth of data with the full data recovery ability of mirroring. More disks could be used with still just one extra for parity, and two extra for double parity is also possible. So the storage reduction factor needn't be as onerous as Emmett described.

Regarding Michael Blake's comment about memory usage, I see a similar issue since I started using MacZFS. But my need for ZFS is strong enough that I solved it by having my nightly Carbon Copy Cloner run conclude with a script that runs /usr/bin/purge (since the run itself fills up my 18GB of RAM). If you don't have CCC, you can easily schedule such a script using Automator and iCal. I'll grant, though, that that's not as user-friendly as Zevo's version aims for.
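The script itself can be as small as this (purge ships with the OS X developer tools on 10.6; it forces inactive file-cache memory to be released):

```shell
#!/bin/sh
# Post-backup cleanup: flush inactive disk-cache memory
# after a large copy has filled up RAM
/usr/bin/purge
```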

item.158936

Emory Lundberg

Re:

"You should also turn off Spotlight on the ZFS formatted drives, because they don't work together."

ZEVO's ZFS works perfectly fine with Spotlight, including child volumes. I recall MacZFS not working reliably with Spotlight, but it's one of many important distinctions between the two.

item.158940

Emory Lundberg

Emmett Gray writes:

...you have to write twice to the same disk if you're using single disks, reducing your storage space by a significant factor. If you don't have redundant data, all ZFS can do for you is flag a problem.

It is not best practice to use ZFS in this way, as it doesn't provide any data redundancy or protect you from downtime when the single disk fails. You can set multiple copies like you said, but then you're still relying on one disk.

At some point in the future I'm going to want to migrate the archive to a new set of drives, and when I get around to that I'll probably give up the space and use write-twice ZFS.

If you opt to use ZFS (and I probably would), a raidz2 pool of six 3TB disks would give you 12TB usable, and you could have two disks fail and ZFS would still be able to heal any errors. Or you could go a more vanilla route: buy a NAS appliance and store your masters and par2 files there, to rebuild and repair.

You could also get a Synology DS1512+ with 5x4TB disks for about $2300. That'd be 12TB usable with the ability to lose two disks.

Jun. 21, 2012

item.158994

MacInTouch Reader

In the discussion on ZFS, Michael Blake noted on 6/19 that

"...in the end I uninstalled it because it was using a lot of RAM, which seemed to be added to the kernel process. Once I uninstalled the ZFS driver, the kernel's footprint dropped dramatically."

What he describes is actually an intentional feature of ZFS, not a bug. ZFS is designed to grab idle RAM and use it for caching to accelerate the filesystem; in theory, at least, it releases that RAM when the system needs it for other tasks. That methodology is probably more reliable in the more consistent environment of a server or workstation than it is in a desktop or laptop.

Those wanting to understand more about this can find much technical detail available with a search for "ZFS ARC."

item.158995

Michael Blake

On my iMac i7, Zevo ZFS was using active memory in kernel, and purge could not help. I hope they reduce its footprint because I'd like to go back to using it - alas I have only 8GB RAM!

item.159007

Emmett Gray

I previously wrote:

"...you have to write twice to the same disk if you're using single disks, reducing your storage space by a significant factor. If you don't have redundant data, all ZFS can do for you is flag a problem."

Emory Lundberg commented:

"It is not best practice to use ZFS in this way, as it doesn't provide any data redundancy or protect you from downtime when the single disk fails. You can set multiple copies like you said, but then you're still relying on one disk.... If you opt to use ZFS (and I probably would), a raidz2 pool of six 3TB disks would give you 12TB usable and you could have two disks fail, ZFS would be able to heal any errors."

I want to keep using multiple sets of single drives. This is in part because my archive is regularly accessed for restoration of masters for reuse, and it's so convenient to simply mount a drive in a dock and pull something off. I also want to continue storing 3 sets of drives at separate locations. At present, I can already survive the loss of any two drives.

Jun. 22, 2012

item.159008

Robert Mohns

Marc Heusser writes about TensComplement:

"For a few days now, the website has been announcing 'We're making some changes!' Any ideas what they might be?"

My guess is they're changing their commerce system. I can't find any way to purchase a license on their site, and the sticky note is very similar in style to the one Apple uses when it takes store.apple.com offline.

I just asked @TensComplement if there was an ETA on being able to buy a license. We'll see if there's a reply -- the account isn't very active.

item.159019

Ken Bowman

The TensComplement website for their Zevo ZFS utility has undergone some interesting changes.

First the ship dates for the Gold and Platinum versions slipped from late spring to summer. That's understandable with a new product.

Now the "We're making some changes..." note has been posted on their website for about a week.

You can no longer buy any versions of the software, including the previously shipped Silver edition, and there are no prices or ship dates for any products.

It makes me wonder if things are not going well or if they are being acquired. They don't seem to be managing their public image very well.

item.159034

Randall Voth

Regarding MacZFS and accented filenames:
I just spent a few minutes experimenting. I changed one of the problematic folders' names using the Finder's Get Info dialog box, and the problem has completely gone away. I can change the name back to its accented form, and there is no problem at all.

Perhaps it was some loop the Finder got into, thinking the folder had been updated, and the filename edit solved the problem. It's not the first Finder bug I've encountered!

I wouldn't worry about it. But I would do some experimenting with a small drive just to get comfortable with the filesystem in your own environment. I feel a lot better about all the work I've put into my music library now that it is stored on ZFS.

item.159043

Matthew Griffin

Is there any advantage to using ZFS in a single-drive environment?

I just ordered a SSD for my 13" MacBook Pro, and I'm using OWC's Disk Doubler kit to move the HDD to the optical bay.

My plan is to move my iPhoto, iTunes and (Boot Camp) Steam games to the spinning platter. Would there be any benefit to formatting the HDD as ZFS?

Thanks.

Jun. 23, 2012

item.159061

Emory Lundberg

The benefits and caveats of ZFS in a single-disk scenario:

With copies=1:
- you get health checks on your data, but in the event of corruption you will need to restore from a backup.

With copies=2:
- if a file is corrupt, ZFS can recover it from the other copy, but every file you write to the disk takes twice the space.

In both scenarios:
- compression will save space, and so will de-dupe (though de-dupe has high memory requirements).
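A minimal sketch of those settings, assuming a hypothetical pool named 'tank' with a 'music' filesystem (note that copies is a per-dataset property and applies only to blocks written after it is set):

```
# copies=2: ZFS stores two copies of every block, so corruption of one
# copy can be healed from the other -- at double the space cost.
zfs set copies=2 tank/music

# compression saves space at a modest CPU cost
zfs set compression=on tank/music

# confirm the properties took effect
zfs get copies,compression tank/music
```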

There is a great opportunity for someone to document some common use cases for MacZFS/ZEVO.

item.159062

MacInTouch Reader

Re: ZEVO's "Making some changes" and relative silence

I'm not privy to any insider information regarding the direction of the company or anything, but I will say that ZEVO's development team has been receiving a lot of feedback from the beta testers regarding pricing models and capabilities of their product.

It is my hope that some of this is being taken to heart in order to make the product more compelling for end users and easier to support on their end. It would make me really happy to see the product line get an overhaul; in my opinion, it is overly complicated and limited in ways that don't really make sense.

Jun. 25, 2012

item.159119

Marc Heusser

Has anyone tried to set up a Mac Pro, using three of its internal drive bays, as a RAIDZ drive - i.e., to set up Zevo Silver edition using the zpool shell command to combine the three disks like that?

Connecting three external USB disks to a MacBook Pro, I tried the following command (-n says "simulate"):

marc$ zpool create -n 'tank' raidz /dev/disk2s1 /dev/disk3s1 /dev/disk4s1

would create 'tank' with the following layout:

  tank
    raidz1
      /dev/disk2s1
      /dev/disk3s1
      /dev/disk4s1

As I still need the contents of the disks, I could not execute the command. But it sure looks like it would work.

Jun. 26, 2012

item.159158

Emory Lundberg

re: ZFS and a Mac Pro with multiple disks, ZEVO Silver doesn't support this. From their website:

Multiple drive pools are not supported. Silver is a single disk solution.

This is why I've recommended people wait before buying, unless they only need single-disk and redundant metadata.

Jun. 28, 2012

item.159294

Lyman Taylor

Re:

ZFS and a Mac Pro with multiple disks, ZEVO Silver doesn't support this. From their website...

This seems likely to be a GUI restriction. The command line utilities should still be around. If not, that's a bit dubious. If the developers didn't completely lobotomize the underlying ZFS infrastructure, then the functionality is present, just not easy to use.

IMHO, the product split is flawed. Instead of lobotomizing features that are standard in ZFS, the product should be selling the "value added" Mac-specific stuff, not trying to parcel out functionality that for the most part others have already done.

I think that is why the Silver edition has been retracted and the schedule revamped. A revision of the whole product structure is being developed.

A better split would be Zevo DAS and Zevo NAS: one product would be aimed at users who attach a large amount of directly-attached storage (DAS) to their Mac, primarily for use on that same Mac. The other would be oriented to people who are trying to turn a Mac into a NAS/iSCSI server. The network-attached storage product would fully support AFP, compression, and dedupe, and would put a GUI interface on more of the admin/replication ZFS features. (After the Oracle fork, compression, encryption, and dedupe are the features most likely to drift outside the common core subset.)

Most of the compression/encryption/dedupe features soak up more CPU and memory resources. Workstation users are far more inclined to put a high premium on throughput and want to save those 'extra' resources for applications. On a server, by contrast, allocating CPU/RAM to services is the primary task.

One problem with artificially limiting features is that ZFS is open source. There are going to be competitive ports with the full feature lists around (e.g., MacZFS). So kneecapping the core functionality is problematic.

The core issue for Mac users is making ZFS more "Mac friendly" - that is, giving it a GUI, integration with HFS+ (and Spotlight and other Apple proprietary file system utilities: Disk Utility, Time Machine, versioning, etc.), and Apple file sharing via AFP. There are also usage mismatches: ZFS typically isn't used on disks that hotplug through FireWire/USB/Thunderbolt (i.e., there is a whole automount mismatch with Mac OS X as well). Just keeping up with Apple's changes to all of these aspects is going to be challenging, especially if Apple moves to releasing Mac OS X on a 12-month cycle. The mismatch between the ZFS release cycle and Apple's OS X cycle will be challenging, too.

"More Mac friendly" features are worth paying for. The core features of ZFS not so much. Support is another factor worth paying for, but that is simpler and more cost effective staying closer to the mainstream ZFS core. That is harder for a small company to make proprietary, unless it is about the software features they are the primary authors of.

The second problem is if the developers go so far as to "fork" the code to lobotomize it, then that is work primarily just for artificial product segmentation. That means time not being spent on better Mac integration. That is the wrong way to allocate effort for a small development team. They should be trying to reuse "as is" as much ZFS code as they can. There is a huge time sink just keeping up with Apple, ZFS, and supporting the product.

Perhaps after they develop a steady revenue stream they could think about what value-add there is for a single drive. (It isn't big enough to justify even smallish app prices, especially if the drive can't boot... which is another large can of worms.) That isn't a good revenue base to work from. Those users are not the core user base, because they're not using the core distinguishing features of ZFS.

item.159339

Marc Heusser

Answering my own question, now with new disks in a Mac Pro: would ZEVO Silver from Tenscomplement support mirror or raidz from the command line? The answer, unfortunately, is no; the commands have been limited. Creating a ZFS pool on a single internal disk of the Mac Pro works as advertised.

$ sudo zpool create 'tank' raidz /dev/disk1s2 /dev/disk2s2 /dev/disk3s2
cannot create 'tank': operation not supported on this type of pool
$ sudo zpool create 'tank' mirror /dev/disk1s2 /dev/disk2s2
cannot create 'tank': operation not supported on this type of pool
$ sudo zpool create 'tank' /dev/disk1s2
$ zpool list
NAME   SIZE    ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank   2.73Ti  1.19Mi  2.73Ti  0%   1.00x  ONLINE  -

Hoping for integration of it all into Mountain Lion or the release of the Platinum and command line version of ZEVO by the end of July.

item.159350

MacInTouch Reader

I mostly agree with Lyman Taylor's assessment but would like to point out that ZEVO is closer to the mainline ZFS code than MacZFS. Some of this is academic, given that ZEVO doesn't yet have a product available for purchase that does dedupe/triple-parity raidz and everything else that came after zpool version 8.

The company is likely willing to listen to feedback and suggestions on how the product line can make more sense for your use cases; I'd really recommend taking the time to let them know your thoughts before things are set in concrete.

Jun. 29, 2012

item.159370

Ken Bowman

Marc Heusser says ...

"Hoping for integration of it all into Mountain Lion or the release of the Platinum and command line version of ZEVO by the end of July."

Me too, but I believe TensComplement is located in Woodland Park, CO, which is currently partially evacuated due to the devastating Waldo Canyon Fire. The remainder of the city has been issued pre-evacuation notices. The local disruptions have been severe, and my best wishes for safety of life and property go to everyone in the area.

Jul. 3, 2012

item.159506

Doug Weiner

ZFS wish list.

1) ZFS server of some sort, so I can create a RAIDZ setup and just keep adding drives to the pool as I need increased space.

2) I want to be able to read and write to these drives from only two machines - one Mac and one Windows 7.

3) I don't want to be swapping cables.

4) I want to move the drives into a closet, so I want to be able to have semi-long cable lengths - 30 feet?

5) I want it to be semi-easy to setup.

6) Need for big expansion - 16 drive slots, up to 32.

7) Of course as cheap as possible.

So it looks like what I want is ZEVO, xTarget, iSanMP - but, of course, Zevo is not really ready for multiple drives, and iSanMp doesn't write to ZFS drives.

Also, the cost gets up there for creating the server - Software (xTarget $900, 2x iSanMp Clients $400), plus hardware, which is either Mac Mini with Thunderbolt to PCIe adapter or a Hackintosh, so let's say anywhere between $600 - $1200 for hardware. That's a total of $1900 - $2500, just for the server and connecting to it.

The upside is that I can buy SAS expander boxes on eBay on the cheap, like 32 slots for ~ $300 total. My current setup is 11 eSATA drives, so having room to nearly triple my drive slots is great.

Anyone have another solution?

Jul. 5, 2012

item.159552

Mike Jackson

Comment on Doug Weiner's wish list:

I built a FreeNAS server where the hardware cost was $300. The machine can only hold 8GB of RAM and 8 hard drives, but for what *I* needed, this works just fine. Building your own FreeNAS server isn't really that difficult, assuming you have ever built your own PC before. (If you have never hand-built a PC, then stop reading...). I'll try to hit each point from Doug's list:

1: FreeNAS does ZFS natively with full RAIDZ support.
2,3,4: FreeNAS supports both AppleShare and Windows Share, so one would network the server with the other two machines.
5: FreeNAS was not difficult to set up and get running. The installer walks you through the choices that it needs to get everything up and running. It has a web front-end to fully administer the server.
6: Purchasing the correct case to house everything is the key here. These larger cases are going to run the cost up.

The ZFS best-practices guide recommends 1GB of RAM for each TB of storage. So if you really are going to have 32 drives, you are going to need an expensive server board that can handle lots of memory. This, again, will drive up the cost. The nice part is that FreeNAS doesn't really need that powerful a CPU to run things. You could pick up a used Xeon server from 2 or 3 years ago and be perfectly fine.

Also iXSystems, who help sponsor FreeNAS, sell prebuilt/preconfigured servers. You could get a quote from them for such a system.

Lastly, just so we are clear about ZFS: once you create a ZFS RAIDZ pool from a set of drives, the only way to increase its capacity is to replace each drive, one by one, with a larger drive. You cannot add additional drives to the RAIDZ once it is created. Of course, you could create multiple RAIDZ sets and then stripe them together in the same pool.
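That last approach - growing a pool by striping an additional RAIDZ set into it rather than enlarging an existing set - looks like this in a sketch with hypothetical device names:

```
# Start with a 4-disk raidz vdev...
zpool create tank raidz da0 da1 da2 da3

# ...later, buy four more disks and stripe a second raidz vdev into the pool.
# Existing data stays where it is; new writes spread across both vdevs.
zpool add tank raidz da4 da5 da6 da7
```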

Hope some of that helps
Mike Jackson (Happy FreeNAS User and Mac User).

item.159560

Taylor Armstrong

Re:

ZFS wish list.
1) ZFS server of some sort, so I can create a RAIDZ setup and just keep adding drives to the pool as I need increased space.
2) I want to be able to read and write to these drives from only two machines - one Mac and one Windows 7.
3) I don't want to be swapping cables.
4) I want to move the drives into a closet, so I want to be able to have semi-long cable lengths - 30 feet?
5) I want it to be semi-easy to setup.
6) Need for big expansion - 16 drive slots, up to 32.
7) Of course as cheap as possible.
So it looks like what I want is ZEVO, xTarget, iSanMP - but, of course, Zevo is not really ready for multiple drives, and iSanMp doesn't write to ZFS drives.

Also, the cost gets up there for creating the server - Software (xTarget $900, 2x iSanMp Clients $400), plus hardware, which is either Mac Mini with Thunderbolt to PCIe adapter or a Hackintosh, so let's say anywhere between $600 - $1200 for hardware. That's a total of $1900 - $2500, just for the server and connecting to it.

The upside is that I can buy SAS expander boxes on eBay on the cheap, like 32 slots for ~ $300 total. My current setup is 11 eSATA drives, so having room to nearly triple my drive slots is great.
Anyone have another solution?

Yeah... give up  ;)

In all seriousness, though - #1 is not going to happen; this is architecturally not something that ZFS supports. It isn't a Drobo... you can't just add drives and expand. The closest thing you can do is add another *set* of drives and stripe across them (e.g., you have a 4-disk raidz; buy another *4* disks, make another raidz, and then add that 2nd raidz to your pool...).

As for the rest of your wishlist... just build the thing! Seriously. Forget about long cables, and all of that. Just get a box (generic PC) and install OpenIndiana, or one of the other ZFS distributions. Now, "long cable" becomes "cat5 network cable". #5 is relative... depends how comfy you are with "alternative" OS's, but again... ZFS was designed for large-scale enterprise storage, you have to get your hands dirty a little bit. #2 can be taken care of via firewall restrictions. #6 - depends on your budget and what case you want to pay for... there are certainly cases on the market that will hold 32 drives, but they're *not* cheap.

Once you do all of this though, you can more or less forget about SanMP, etc.... just treat it like a file server.

FWIW, I've been running ZFS at home for several years, currently running OpenIndiana + napp-it to serve up approximately 12TB of storage. With 4-disk raidz sets, I can pretty easily saturate a gigE connection over AFP or SMB. Internal speeds on the arrays are about 350MB/sec; throughput to clients over the network is usually in the 70-80MB/sec range.

item.159580

Emory Lundberg

Doug Weiner's ZFS wish list may be solved to his satisfaction with free software and inexpensive hardware. For 16-32 disks, you'll need to invest in more hardware, but the cheap seats are pretty excellent, too. Most of the cost will be for disks.

A six-disk storage server could be built around the Fractal Design Array R2 mini-ITX case, which has six usable drive bays, for USD$200. Adding an eSATA card with a port multiplier would let you chain up an eSATA JBOD, or you could go with USB 3 or even FireWire if you wanted to recycle what you already have.

The shopping list isn't that involved, once you find the case you want.

- a mini-ITX board with on-board video ports and I/O you want for ~USD$60-90
- an Intel 1155 Sandy Bridge CPU (core-i3 on up with integrated video) from ~USD$65-200 depending on speed
- 8GB of memory minimum ~USD$50-$60
- two USB flash drives (one required, but have a spare) of 2GB or more to boot from (your junk drawer has these!)
- up to six disks of your choosing (cost varies, of course)
* six 2TB disks in a raidz2 ZFS pool gives you the ability to lose two disks with 8TB of usable storage
* six 2TB disks in a raidz1 ZFS pool gives you the ability to lose one disk with 10TB of usable storage.
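Those usable-capacity figures follow the standard rule of thumb that an N-disk raidz vdev with parity p yields (N - p) disks' worth of space:

```shell
# Usable capacity of an N-disk raidz vdev with parity p is (N - p) * disk_size.
n=6          # disks in the vdev
size_tb=2    # TB per disk
raidz1_tb=$(( (n - 1) * size_tb ))
raidz2_tb=$(( (n - 2) * size_tb ))
echo "raidz1: ${raidz1_tb}TB usable, raidz2: ${raidz2_tb}TB usable"
```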

Then you install FreeNAS on the flash drive, store your config on the spare for safe keeping, and start carving up storage space for use on your network over gigabit ethernet, or, optionally, WiFi, if you don't mind the performance hit.

There is a web interface to the storage service, where you can configure SMB/NFS/AFP access to your ZFS-backed storage. You then create pools with children, export those filesystems, and schedule regular snapshots. Your photos and iTunes library could live on ZFS-protected redundant storage in your closet storage server with hourly snapshots. Or use it for backups, Time Machine for your household Macs - any number of uses.

I recommend trying FreeNAS on VMWare or VirtualBox to kick the tires a bit. Try creating four virtual disks: one 2GB for the FreeNAS/OS installation and then three other images of equal size (3x5GB would be fine); this will let you create a raidz1 pool of 3x5GB disks for a 10GB pool that can lose a disk and still function.
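If you would rather skip the VM, most ZFS ports will also accept plain files as vdevs, which lets you practice on any one machine (for experimentation only, never for real data). A hedged sketch, with hypothetical paths and a throwaway pool name:

```
# Create three 100MB backing files and build a disposable raidz1 pool on them.
truncate -s 100m /tmp/d1 /tmp/d2 /tmp/d3   # on Solaris or OS X, mkfile also works
zpool create play raidz /tmp/d1 /tmp/d2 /tmp/d3
zpool status play
# ...experiment with snapshots, compression, scrubs...
zpool destroy play
rm /tmp/d1 /tmp/d2 /tmp/d3
```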

Play around with the web interface for management, create a pool, delete a pool, turn on compression, make a child filesystem, enable dedupe, have some fun!

item.159587

Benhamin Saenz

In response to the ZFS wishlist:

I run Nas4Free BSD-based servers with 4-drive RAIDZ and 6-drive RAIDZ2 arrays. Right now, these run on HP microserver 40Ls, due to their use of ECC RAM, although in the future I will be upgrading to an i5 Supermicro board with ECC and a low-power CPU.

Anyway, 10Gb Ethernet cards using the older CX4/InfiniBand connectors may be found on eBay for $50-200. Both Nas4Free (based on FreeBSD 9) and Mac OS X have drivers for cards from Myricom, and they are compatible with the Thunderbolt PCI expansion boxes, although I haven't tried those myself. I get reads/writes of close to 250MB/s on the RAIDZ array with direct-connected 10Gb Ethernet on my Linux machine (no switch), and 90-100MB/s to my Mac via regular ethernet.

So, for the hassle of learning to set up Nas4Free, which is GUI-based and not that hard, you are looking at just hardware costs for a solid ZFS-based disk array with SAN-like speeds.

item.159588

Michael Newbery

Re: Doug Weiner's ZFS wish list:

"1) ZFS server of some sort, so I can create a RAIDZ setup and just keep adding drives to the pool as I need increased space."

Just to point out that you can't add disks to a RAIDZ after the fact, one at a time. That is, once you have a RAIDZ set, you can't add disks to that set.

A ZFS pool can contain individual disks, or sets (such as mirrors, or RAIDZ). You can add disks, or sets to a pool. So you can expand a pool easily by adding more disks one at a time (but you don't get RAID), or you can expand a pool by adding a minimum of three more disks in a new RAIDZ set.

Or, if you have enough disks, you can shuffle storage around by import/export, so if you had a 3-disk RAIDZ, you could create a 4-disk RAIDZ (using 4 new disks) and migrate everything over to that, then reuse the original three disks.

ZFS seems to assume you have a giant Sun disk farm, so it doesn't really account for expanding a RAIDZ set, shrinking it, or increasing the size of one of its disks. (If you do replace one set member with a bigger disk, it works fine - just without using the extra space. If you progressively replace all the set members, ZFS will begin to use the extra space once every disk is the new, larger size.)
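The progressive-replacement path Michael describes looks roughly like this (device names are hypothetical; the autoexpand property appeared in later zpool versions, and older pools instead pick up the new space on export/import):

```
zpool set autoexpand=on tank    # newer ZFS versions only
zpool replace tank da0 da6      # wait for the resilver to finish, then...
zpool replace tank da1 da7      # ...repeat for each remaining member
# once every member is the larger size, the extra capacity becomes usable
```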

item.159595

Pat Storr

You might want to take a look at Drobo. We use some at work, and even though I don't see ZFS mentioned specifically, it sure looks like that's what's on the back end. The ability to add drives on the fly makes them very attractive, and there is supposed to be a new Drobo Mini with Thunderbolt that looks especially attractive for my uses.

item.159609

Harris Upham

Doug Weiner posted a good storage wishlist. I've seen many like it over the years, and my own isn't that different.

One bit that just leapt from the page was the interest in moving the storage setup into a closet. Doug: be ready for some creative HVAC work and expense on that front!

Larger home storage systems come with substantial penalties for power, cooling & noise abatement. I've seen some great systems worn out in days or weeks by stuffing them into a closet without 24/7 air conditioning. You also want to make sure you have a plan for automatic shutdown if it gets too hot for any reason.

item.159616

Chris Ridd

A simpler/cheaper "ZFS server" solution might be something like the community edition of Nexentastor running on an HP microserver.

Nexentastor is free and is an open source fork of Solaris running ZFS with a reasonably nice web management GUI. Serves Mac clients using NFS or CIFS, serves Windows clients using CIFS. There may be an Appletalk add-in for it.

The HP Proliant microserver (HP URLs are insane; google the phrase instead!) is often found in the UK with a cash-back offer. It contains space for 4 data drives, and with some creativity you can get a few more inside, too.

The cost of the disks will dominate the overall price, of course.
