Tuesday, 29 April 2008

Boot Camp vs Parallels vs VMWare Fusion

Update: (2009/06/01) This post is quite old now, and both companies have released new versions. As a very brief summary, I'm actually finding the performance pretty similar these days (they've both obviously put a lot of work into the areas they lagged behind the other) so I'd say it's now much more an issue of cost, eye candy and utility features (rather than performance) to decide on one or the other. Personally, I'm using Parallels v4... for now (;

I did some benchmarks a while back (August 2007) that I had always intended to post online somewhere, and since I've finally gone and made this blog I'm posting them, albeit a little late.

Now firstly, there have been updates to Parallels and Fusion since August '07, so these benchmarks are out of date, but I don't think that makes them totally worthless. Having updated Parallels and Fusion in April this year, and tested both with Visual Studio 2005, my (subjective) feeling is that the performance (at least for compiling a C++ project in Visual Studio in Windows) has changed very little since August last year. Still, I'm supplying these benchmarks with a few grains of salt...

I ran these benchmarks using SiSoft Sandra, which I've always liked for benchmarking (you get each measure individually, and it doesn't try to combine them all into a single weighted average that has no real-world measurable value - but then again, as an Engineer and Physicist I may be a little biased against calculations that fail on dimensional analysis... sorry, I digress...).

My platform is a MacBook Pro 15", 2.2 GHz Core 2 Duo (4MB), 800 MHz Bus, 2 GiB 667 MHz RAM (2x1 GiB DIMMs), GeForce 8600M GT 128MB 16xPCIe (and so on, as per the stock MacBook Pro from August '07 with the 160 GB HDD upgrade).

The benchmarks were run in SiSoft Sandra as mentioned above, in a Windows XP machine on my MacBook Pro. It was actually the same Windows XP install for all three setups - I installed Windows in Boot Camp and could run that system either by launching it in one of the virtual machines (Parallels Desktop or VMWare Fusion) or by booting into it via Boot Camp. The upside of this was consistency (and, for me, ease of switching between the VMs and Boot Camp for my actual work); the downside is that any hardware detection that happens only at install time could leave one of the VMs performing below its best (I mention this only because I recall something about needing to re-install Windows XP when upgrading to a multi-core processor, or to one supporting certain new CPU instruction sets, in order to make use of the extra cores/instructions... I'm not sure how relevant that is here). Sharing the Boot Camp partition also means the VMs access the physical disk directly rather than a virtual disk image, which will affect their disk results.

The software versions at the time of benchmarking were
  • Parallels Desktop 3.0 build 4560
  • VMWare Fusion 1.0 (either RC1 or build 51348 - can't remember which)
  • Boot Camp (beta 1.3 or 1.4 - can't remember which)
For the record, current versions are Parallels 3.0 build 5584, Fusion 1.1.2, Boot Camp 2.1.

Where possible the machines were set up identically, however Parallels still only supports a single processor for virtualisation. This obviously caused some major differences in the benchmarks, but is not necessarily a Bad Thing™ when you're using a VM. A single core is fine if the application(s) you use in the VM are, on the whole, restricted by a single thread.

So down to the benchmarks... In each of the images below, the systems are represented as follows
  • Parallels - Red bars/lines, labelled "Current X" (e.g. Current Processor) - as the screenshots were taken from Parallels.
  • VMWare Fusion - Orange bars/lines, labelled "VMWare Fusion 1.0"
  • Boot Camp - Green, labelled "BootCamp"
Apologies to the colour-blind; some of the graphs make it very difficult to differentiate between the data, and it didn't really help that I left a couple of other similar-performing devices in the graphs for comparison... ignore the blue and magenta scores (;

For a good (professional) explanation of these benchmarks (by the people who made them) you should check out SiSoft's Benchmarking 101 Pages.

First up, Processor Multi-Media - Fusion runs at almost native speed, while Parallels falls way behind at about 25% of native. Given Parallels is only running with one processor, one would expect about half the native score, so its single core seems to be running at roughly half the per-core speed (when it comes to Multi-Media instructions) of the cores running natively or in Fusion.

Processor Arithmetic - Again Fusion is close to native speed (92.2% Dhrystone, 93.0% Whetstone), and Parallels, as expected, is sitting at about half native speed due to the single-processor virtualisation (47.6% Dhrystone, 54.0% Whetstone). Actually, per-core, Parallels outperforms Fusion here by a few percentage points.
Multi-Core Efficiency - Parallels doesn't show up in this graph, but again you can see Fusion falls just short of the native Boot Camp scores (looks to be around 90% of native values).
Cache and Memory - This benchmark shows memory bandwidth as a function of block size, and indicates the speeds of the different levels of cache (note the significant steps around 32-64 kiB and 4 MiB). The two VMs perform at pretty much native speed above 128 kiB, but things are very different below 32 kiB. Parallels is about 60-70% of native speed, while Fusion seems to be about 40-50% faster than native speed in this range. My guess is it's actually something odd going on with the timing of this benchmark within the VM - I can't imagine it would be possible to generate memory bandwidths beyond that of the native L1 cache! In the 'real world' I expect the VMWare line basically follows the native one for cache speeds.
Memory Bandwidth - This one's quite surprising, as both VMs exceed the native memory bandwidth for both integer and floating point benchmarks. Perhaps (at least in the Boot Camp beta) Apple's drivers for Windows weren't quite up to date. For the integer benchmark, Parallels beats Fusion by 38%, and for the floating point benchmark, Fusion beats Parallels by 13.5%. Unless you really know what your applications are doing in terms of memory access, these results don't really give a clear winner...
Memory Latency (Linear) - Another benchmark that clearly indicates the different cache levels (note the jumps between 16 and 64 kiB, and again either side of 4 MiB). Parallels tends to sit one (or so) clock cycle faster than Fusion for the whole range, and seems to get further ahead as the test range size increases. Parallels only really deviates from native speed at/around the 4 MiB mark (i.e. the size of the L2 cache). From memory the graph is the same (or very similar) for the second processor in Boot Camp and Fusion (not worth graphing twice).
Memory Latency (Random) - Again, Parallels is slightly faster than Fusion for all the test range sizes, and the native Boot Camp benchmark jumps ahead significantly at/around the 4 MiB L2 cache limit.
File Systems - Fairly notable differences between the three platforms here, with Fusion about 20% faster than Parallels, and native Boot Camp disk access about 20% faster than Fusion.
Summary - Certainly back in August 2007 (drivers may have been updated since...) there was no clear winner between Boot Camp, Parallels Desktop and VMware Fusion for all-round performance. Boot Camp came top of everything except memory bandwidth (where it lost to both VMs by quite a large margin), Parallels tended to perform better than Fusion in memory and cache access, while Fusion's dual-core virtualisation put it a long way ahead of Parallels for raw CPU performance.

From my personal experience (here comes the subjective bit...) running Windows XP in Parallels and Fusion, even though Parallels only supports a single processor for virtualisation, Fusion seems to use a lot of its second core for processing graphics. I'm not 100% sure of this, but Parallels seems to do a better job of passing the work off to the graphics card (perhaps due to its OpenGL support, which Fusion lacks... clutching at straws though), while Fusion does more of the rendering in CPU-land. I've seen Fusion using something like 80% of one processor while Windows is doing nothing but displaying a few windows; Parallels was down at 10-20% of the one processor (though the GPU does produce plenty of heat while it's running).

A more objective test, and this is what has me sold (for now...) is that the C++ library I spend a lot of time using at work takes about 3-4 minutes to compile in Visual Studio 2005 under Parallels, and about 12 minutes under VMWare Fusion. This is a test I ran a few weeks back (April 2008) with the latest version of both Parallels and Fusion, and I came to the same conclusion (that compiling a large C++ project in VS2005 was faster in Parallels than Fusion) in August 2007 as well. As for Boot Camp, I've given up on that completely - I hate rebooting (and the fact that Windows thinks the system clock should be in local time for some really stupid reasons, despite good arguments to the contrary).

I guess this suggests that compiling (at least the combination of C++, VS2005, Windows XP, and our code-base) is more memory-bandwidth/latency-bound than processor/disk-speed-bound.

Fortunately, I've ported this library and some of our applications to Mac OS X (thanks to Leopard's POSIX compliance) so I don't have to spend much time in the VMs any more (;

On a final note, I've noticed that a lot of people compare them by their features, interface, 'feel' and such, saying that one or the other feels more like a Mac program, or feels more robust, etc. I've avoided any mention of these because I didn't find them different enough in these areas to be able to suggest everyone (or even a significant majority) would feel the same way on these subjective matters.

Apart from graphics support and number of CPUs virtualised, they're very very similar applications on the surface, though performance can differ quite a bit for applications that are bound to the performance of specific components of the system.

More ZFS on FreeBSD

Update: (2009/06/01) I should point out that a few pieces of info here are now out of date, specifically where I have mentioned bugs in the ZFS implementation on FreeBSD. The developers have done, and continue to do, an amazing job, and in case anyone's not aware, I'm a big fanboy of pjd for the massive effort put in. That said, I think the info herein is still useful, but where I mention 'bugs' they may well have been resolved (;

My last post was a basic intro to ZFS on FreeBSD. This one is meant to provide some information I didn't really get out of the documentation on my first (or second... third...) read. It is basically a collection of potential 'gotchas' that didn't seem obvious to me as a new user. These things become really obvious once you really understand ZFS, but to the newbie some of the features of ZFS can blur together to make them seem like something they're not.

I have intentionally left out example code, as I'm not trying to provide another 'howto' document, or a replacement for the man page; I just want to share some of the features/limitations/bugs/... of ZFS.


Copies=N Property
The first point of confusion for me was the ZFS property 'copies'. This property tells a ZFS file system to keep multiple copies of each file. My initial assumption was that if you create a zpool out of two disk vdevs (i.e. no redundancy) then you could add redundancy back in for some of the file systems within the zpool this way. This is only partially true.

If a bit is flipped somehow, or the region of disk holding one of the copies is overwritten, then the other copy can be used to recover the data. However (and this is obvious once you learn more about ZFS), 'copies' does not and cannot provide protection against complete disk failure. If one of the vdevs (i.e. one of the disks in my example) in a zpool is missing, then you cannot access any of the file systems within it, so the file system on which you keep two copies of each file is still unavailable.

Put simply, the 'copies' property protects against small-scale errors (e.g. bad sectors, bit flips, accidental overwriting of a portion of the block device) but does not provide redundancy against disk failure. That said, if you have a single disk and can't expand this to multiple disks, then copies is a good way of reducing the likelihood of losing data due to bad sectors, etc, since zfs can only tell you there's an error with 1 copy, but it can (most likely) fix it with 2+ copies.
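For example (a sketch; 'mypool/important' is just a made-up file system name), you could turn this on for a single file system like so:
# zfs set copies=2 mypool/important
As far as I'm aware, the extra copies only apply to data written after the property is set, so it's best to set it when the file system is created.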


ZPool vdev Failure
Re-iterating from the previous section: a zpool is created from one or more vdevs and is effectively a RAID0 across those vdevs, with the same weakness as RAID0 when a device is lost. If one vdev fails or goes missing, the zpool cannot be accessed, and running the zpool status command will show that the zpool is in the FAULTED state due to the missing vdev.

A failed block device within a raidz or mirror vdev is, of course, fine; the block device's status will become FAULTED (or similar) and the raidz/mirror vdev's status will become DEGRADED until the block device is fixed or replaced.


ZVOLs
For me, on my hardware (amd64 system, 4 GB RAM, nForce4 chipset, fresh FreeBSD 7.0-RELEASE install), ZVOLs do not perform at all well. My advice would be to seriously test them out before using them. I found I was limited to write speeds of around 5 MiB/s, and using a ZVOL as a block device in a zpool caused instability of the whole system (the kernel would rapidly grow in size and the machine would eventually lock up) when writing large files or a lot of small files.

The first issue is clearly a bug and should be fixed. The second may not be an intelligent thing to do in the first place, and perhaps using a ZVOL as a block device in a vdev should be denied outright (or at least warned against).


Size of Multi-block-device vdevs
The total available space of a mirror is the size of the smallest block device within it.

The total available space of a raidz vdev is approximately as follows.
N = the number of block devices in the raidz vdev
P = the number of parity blocks per stripe (i.e. 1 for raidz1, 2 for raidz2)
S = the size of the smallest block device
Total Space = S x (N - P)
(actually it will be a little below this value)
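For example, four 500 GB drives in a raidz1 vdev give S = 500 GB, N = 4, P = 1, so Total Space ≈ 500 GB x (4 - 1) = 1500 GB; the same four drives in a raidz2 vdev give ≈ 500 GB x (4 - 2) = 1000 GB.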


Adding and Removing Block Devices
You can add and remove block devices to/from vdevs only under very specific conditions at present (more add/remove options will probably be added in the future); a couple of example commands follow the list below.
  • mirror - you can freely add block devices to a mirror (provided they're at least as big as the smallest existing block device), and devices can be removed provided there are at least two devices left in the vdev
  • raidz - you cannot add or remove block devices to/from a raidz vdev
  • disk - N/A - a disk vdev is a single block device
You can always add new vdevs to a zpool (provided they're of the same type), so if you have a zpool of disks you can add another disk at any time. You cannot, however, remove vdevs from a zpool.
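As a rough sketch of what these operations look like (the device names are made up; check the zpool man page before trying this on real data), attaching a new device to the mirror containing ad4, detaching it again, and then adding a whole new disk vdev to the pool would be:
# zpool attach mypool ad4 ad12
# zpool detach mypool ad12
# zpool add mypool ad14
The first two commands grow and shrink an existing mirror vdev; the last one adds another (striped) vdev to the zpool, which cannot be undone.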

Replacing Block Devices in a vdev
You can replace any block device in a vdev, subject to the following conditions (a sketch of the command follows the list):
  • mirror - the new block device must be at least as big as the smallest device in the mirror, the old device does not need to still exist (due to redundancy in the mirror)
  • raidz - the new block device must be at least as big as the smallest device in the raidz; the old device need not still exist (provided that, without it, the raidz is not FAULTED)
  • disk - the new block device must be at least as big as the old one, and the old disk must still be in place to copy the data from (as there's no redundancy in a disk vdev)
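A sketch of the replace operation (again, made-up device names): to swap ad6 out for a new disk ad12 you would run
# zpool replace mypool ad6 ad12
and zfs will resilver the data onto ad12 (from ad6 itself for a disk vdev, or from the remaining devices for a mirror/raidz).
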
Expanding Multi-Disk vdevs
A nice feature of the raidz vdev (and, I assume, the mirror vdev, though I have not personally tested it) is that if you increase the size of the smallest block device (by replacing it) the whole vdev size will increase to fill that space. In order to see this expansion, however, you must export and re-import the zpool (actually it might work by just taking it offline also...).

This allows you to increase the size of your raidz vdev (and thus your zpool) very easily (i.e. you don't have to try to fit both the new and old raidz vdevs into the system at once and copy the data to the new zpool). You can simply replace one (or all, if you have the space) drive at a time, and furthermore, if you really lack the space, you can completely remove a drive and replace it with the new one, which will be resilvered from the contents of the other drives in the raidz vdev (thanks to the redundancy of your raidz vdev).
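As a rough sketch (made-up device names once more): replace each small drive in turn with a larger one, wait for each resilver to finish (zpool status shows the progress), and then export and re-import the pool to pick up the extra space:
# zpool replace mypool ad4 ad12
# zpool export mypool
# zpool import mypool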

ZFS on FreeBSD

First of all, ZFS is awesome. I've been using it for a good 3 days now and will be singing its praises for much longer (at least until my raidz volume dies due to a bug and I lose everything...). I'm running ZFS on FreeBSD 7.0, and after spending a week playing with various configurations I have a few things to share that I didn't find obvious from the documentation.

In this post I intend to give an overview of ZFS (the overview I'd like to have had up-front when I was planning my system); in the next post I've got a few more advanced things to discuss. These posts aren't meant to be full how-to guides (though I link to some of those...), just an introduction to the terminology, the structure and some ideas of how to adjust to the awesomeness of zfs (let's face it, anyone using FreeBSD, myself included, has been held back in the dark ages by UFS for quite some time now, if for no other reason than the lack of journalling).

First up, if you want to use ZFS, you'll want to read (and follow!) the ZFS Tuning Guide on the FreeBSD wiki. It's still marked as 'experimental' in FreeBSD for a reason (i.e. it won't necessarily play nice straight out of the box, you might have to manually tune some things). Also there's a quick-start guide with some useful commands for those who want to know how to use the 'zfs' and 'zpool' commands. But for now, an introduction...

For those new to ZFS there are a few things to learn up-front that will make it easier to understand. Firstly, despite its name (Zettabyte File System), ZFS is not just a file system, it's a volume management system as well. It can take care of whole disks, splitting them into smaller file systems and/or combining multiple disks into larger file systems. It manages compression, quotas and a bunch of other 'tuneables' at the file system level. It manages the mount points for drives, including boot-time mounting (you cannot, at the time of writing, boot from zfs, but you can put your root on zfs, and everything apart from /boot - guides here and here). Of course it's not limited to using whole disks: you can use slices or partitions if you want to do something else with part of the disk, and you can even use files as the base block devices (though they're really only for testing out zfs) - as the documentation says, any GEOM provider.
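If you just want to experiment with all of this without touching real disks, something like the following (a sketch; the file names and sizes are arbitrary) gives you a throw-away pool backed by files, and zpool destroy testpool cleans it up again when you're done:
# truncate -s 1g /tmp/zfile0
# truncate -s 1g /tmp/zfile1
# zpool create testpool mirror /tmp/zfile0 /tmp/zfile1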

Understanding the structure of zfs and zpools helps a fair bit in realising what zfs is useful for, so here goes...

Virtual Devices (vdevs)
At the lowest level you have one or more block devices available to put data on; these may be disks/slices/files as mentioned above. One or more of these can form a vdev (Virtual Device), of which there are about 7 types, so first a bit about the vdevs:
  • disk is a vdev that is just a disk/partition/slice-style block device
  • file is a vdev that is created from a file on an existing file system (really just for testing though)
  • mirror is effectively RAID1, and is created out of 2 or more block devices
  • raidz (raidz1 or raidz2) is effectively RAID5/6 (with 1 or 2 devices' worth of parity), created from 2 (for raidz1) or 3 (for raidz2) or more block devices
  • spare is a hot spare block device (which I believe can be part of a number of vdevs)
  • log and cache are for more specialised purposes and you probably know much more than me about zfs if you know you need to use them (;
So now that we know what a vdev might be, we can start to look at what a zpool is and how it's constructed. A zpool is created from one or more such vdevs of the same type (ignoring spares, logs and caches). This is a fairly important point, and is perhaps not an obvious limitation.

Z-Pools
Starting out simple, a zpool (called mypool) might be created from a single disk (/dev/ad0) vdev using the following command:
# zpool create mypool ad0
Actually this command has done a few things. It's created the disk vdev (okay, it's just a disk, not very exciting), created a zpool using that disk, created a file system (zfs, obviously) in that zpool ready to use, and finally mounted the file system at /mypool (creating that directory if it didn't already exist). That command is analogous to (something along the lines of) the following in UFS land:
# fdisk -I /dev/ad0
# bsdlabel -w /dev/ad0s1
# newfs -U /dev/ad0s1a
# mkdir /mypool
# echo "/dev/ad0s1a /mypool ufs rw 1 2" >> /etc/fstab
# mount /mypool
A more complex example (and there's many many examples around if you hit up google) would be to create a raidz using a bunch of disks (ad4, 6, 8 and 10):
# zpool create mypool raidz ad4 ad6 ad8 ad10
Not much more complex to type, but this changes the simple step from the previous example, in which a (boring) disk vdev was created, to creating a raidz vdev out of the four disks listed. Again, all the other steps are also performed. The important point I'm trying to make here is that vdevs, although important to understand, don't actually get explicitly created; they are created implicitly when you use zpool create or zpool add. Much the same applies to mirrors; if I want to create a three-disk mirror I'd type something like
# zpool create mypool mirror ad4 ad6 ad8
Now in each of these examples, the zpool has been created from a single vdev (disk, raidz or mirror). And the zpool will act (pretty much) like a simple disk, RAID5 (for raidz) or RAID1 (for mirror). Of course on top of that zpool is the file system with its checksumming, compression, quotas, journal, etc.

However, a zpool can consist of multiple vdevs as mentioned above; in this case it acts as a RAID0 over all the vdevs within it. So it's very easy to create a RAID0, RAID1+0 or RAID5+0 by creating a zpool consisting of multiple disks, mirrors or raidz vdevs respectively. Examples for these three follow:
# zpool create mypool ad4 ad6
# zpool create mypool raidz ad4 ad6 ad8 raidz ad10 ad12 ad14 ad16
# zpool create mypool mirror ad4 ad6 mirror ad8 ad10
In each example, a zpool is created with two vdevs. The first uses two simple disks, the second uses two raidz vdevs separately created out of three and four disks, and the third uses two mirrors each consisting of two disks.

I'm not completely familiar with the way zfs stripes data between vdevs, but I'm fairly sure it does intelligent things with stripe sizes. In any case, there's no requirement for the vdevs to be the same size in a zpool.

Zettabyte File Systems (ZFS's - the actual file systems you put files on, not the whole architecture!)
We've already seen that creating a zpool creates a ZFS file system (the root zfs of that zpool) automatically. However, it doesn't stop there; you can create file systems within this file system, and each child shares the free space of its sibling/parent file systems. At the very least this is useful for grouping files on your system and setting compression, quotas, etc. But these file systems also inherit properties, which can be useful: for example, each user's home directory can be a separate file system with its own quota, while inheriting other properties (e.g. compression, or an overall quota for everyone's home directories) from an encompassing file system that holds all the users' home directories.

Each ZFS file system can have an arbitrary mount point (it's an independent property of each file system, with the initial/default value coming from its position in the parent file system), so your directory structure need not follow the file system structure, despite the inheritance of properties. And finally, the file systems (including the root file system of the zpool) don't have to be mounted at all if you're just using them as a container object.
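As a quick sketch of the home-directory idea mentioned above (the pool and user names are made up):
# zfs create mypool/home
# zfs set compression=on mypool/home
# zfs set mountpoint=/home mypool/home
# zfs create mypool/home/alice
# zfs set quota=10g mypool/home/alice
Here mypool/home/alice inherits the compression setting and its mount point (/home/alice) from mypool/home, while the quota applies to that one user's file system alone.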

So as a simple example, setting up a system that has an existing root, /, file system, I might create a few file systems within a zpool 'mypool' as follows
# zfs set mountpoint=none mypool
# zfs create mypool/usr
# zfs create mypool/var
# zfs set mountpoint=/usr mypool/usr
# zfs set mountpoint=/var mypool/var
# zfs create mypool/usr/ports
# zfs set compression=gzip mypool/usr/ports
# zfs create mypool/usr/ports/distfiles
# zfs set compression=none mypool/usr/ports/distfiles
The commands above first tell zfs not to mount the root file system of the mypool zpool, then create 'usr' and 'var' file systems and set appropriate mount points for them (since we don't want them at /mypool/usr and /mypool/var). Two more file systems are then created, one for /usr/ports (with gzip compression turned on) and one for the distfiles within ports (with compression off, since the distfiles are already compressed). I won't go into any more detail on this stuff, I'm just providing some ideas of why you might create a few different file systems within a zpool.

Of course there's no limitation on the file systems being mounted within each other, since mount points can be set arbitrarily; and there's nothing stopping you mounting UFS and ZFS volumes in directories within each other. Personally I have a UFS boot partition (because there's no support for booting off zfs yet) mounted (read-only, except when rebuilding FreeBSD) within my root ZFS, and the rest of the system is running on ZFS using a raidz based pool with a couple of other drive based pools as well.

Z-Volumes (ZVOL's)
This section comes with a warning, that I'm not convinced ZVOLs are working properly yet (at least in the FreeBSD implementation of ZFS at the time of writing).

Within a zpool you can create a new block device known as a ZVOL. This can then be used as a block device just like a disk, slice or partition. But it lives in a zpool on ZFS, so as you can imagine this has the potential for some cool tricks. You create a ZVOL much like you create a file system, except that you need to specify its size in advance (since it's a fixed-size block of space).
# zfs create -V 2g mypool/swapspace
This command creates a 2 GiB ZVOL in the 'mypool' zpool called 'swapspace', and this device can thereafter be found at /dev/zvol/mypool/swapspace, so you could use it as swap space (as suggested by the name I gave it) or use newfs to create a UFS volume on it.
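For example (a sketch; given the warnings below, test this carefully before relying on it), to actually use it as swap you could add it to /etc/fstab and enable it:
# echo "/dev/zvol/mypool/swapspace none swap sw 0 0" >> /etc/fstab
# swapon -a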

Now my personal experience with ZFS and FreeBSD 7.0 at the time of writing is that ZVOLs have a few problems, at least on my hardware (I've seen others complain but I'm not sure if this is a global issue or hardware specific). Basically, the ZVOLs I created (both on a single disk zpool and on a raidz of 3-4 drives) had write speeds of around 5 MiB/s. The raidz I'd created at the time had a write speed (outside the ZVOL) of around 100 MiB/s. This was on a fresh install of FreeBSD/amd64 7.0 with 2 GiB RAM and no other data on the zpool. In theory though (and hopefully in the future, in practice as well) this should be a "good" way of creating a volume for swap space, or for a block device taking up part of a drive, allowing you to avoid editing BSD labels and messing about with slices.

I'll make another post about how, in theory, you could set up a pseudo-RAID0+1 or 0+5 setup (i.e. RAID1 or 5 on top of RAID0) using zfs. That's more of an academic exercise though, and is probably a Bad Idea™. Certainly in the current implementation of ZFS on FreeBSD, on my hardware, this fails pretty badly: creating zpools using ZVOLs as the vdevs (I hope you're following all this...) seems to cause the kernel to hang (not panic, just stop responding to anything file system related).

So again, to re-emphasise, be warned that ZVOLs may not work at all well right now. It might just be my hardware, but if you really want to use them, do some testing, including some very large writes (using both dd from /dev/zero and copying files on the file system), before you commit to it. I found they'd be stable for small writes (~100 MiB), but once you started any serious use of them (dumping a ~1 GiB file or even just copying a large number of small files) they'd hang the system.
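Something along the lines of the following (a sketch; 'testvol' is a throw-away ZVOL created just for the test, and dd will destroy anything on it) should exercise the large-write case before you trust ZVOLs with real data:
# zfs create -V 4g mypool/testvol
# dd if=/dev/zero of=/dev/zvol/mypool/testvol bs=1m count=2048
# zfs destroy mypool/testvol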

ZFS Notation Summary
In summary, the structure of zfs, zpool and vdevs might look something like this...

A hierarchy of ZFS file systems exists at the top level, each may contain other file systems and ZVOLs
- These file systems sit in a zpool
--- This zpool sits on top of one or more vdevs (of the same type)
----- These vdevs may each be a disk (a single block device)
----- Alternatively they may each be a raidz/mirror (i.e. a collection of block devices)

For the picky, I have specifically ignored spares, logs and caches in this, but if you're looking at those then this introduction to ZFS is aimed below your level (;

Useful Commands
A few useful commands for seeing what you've done (this would be at the top if this were a 'howto' guide)
# zpool status -v tank
This shows you useful information about a specific zpool ('tank' in this example) and the vdevs and block devices within it - the structure, any errors, and any current operations such as a replace/scrub/resilver.
# zpool list
This shows you a list of all zpools on the system; note that the sizes shown here are the amount of physical space used/available to the zpool. This does not represent their data capacity if the zpool has raidz or mirror vdevs, as these vdevs store less data than their total physical size due to redundancy.
# zfs list
This shows you a list of all zfs file systems on the available zpools. You can see here how free space is shared between different zfs file systems, and also how much data capacity they actually have (as opposed to their physical size as in zpool list).
# zfs get all tank/fs
This shows you all the properties of the 'fs' file system within the zpool 'tank'. This includes things like compression, quotas and how many copies of each file to store (for a different type of redundancy to multi-disk vdevs).

Hopefully this makes ZFS a little clearer for you; it took me a few days to really understand the structure and a few more to find out some of the limitations...

Update: (2009/06/01) Just a short update due to some comments made on this post (worth reading). Most importantly, if you plan on using ZFS on FreeBSD then you need to read the ZFS Tuning Guide. And again, for emphasis... Don't set up ZFS on FreeBSD without reading the ZFS Tuning Guide. It's not difficult to follow, there's not a lot to it, and at present some tuning is still a requirement for most systems.