
Re: [Xen-users] Cheap IOMMU hardware and ECC support importance


  • To: xen-users@xxxxxxxxxxxxx
  • From: Kuba <kuba.0000@xxxxx>
  • Date: Sat, 28 Jun 2014 14:23:58 +0200
  • Delivery-date: Sat, 28 Jun 2014 12:24:23 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

On 2014-06-28 13:25, lee wrote:
> Kuba <kuba.0000@xxxxx> writes:

>> On 2014-06-28 09:45, lee wrote:

>>> I don't know about ZFS, though, never used that.  How much CPU overhead
>>> is involved with that?  I don't need any more CPU overhead like what
>>> comes with software RAID.


>> ZFS offers you two things a RAID controller AFAIK cannot do for you:
>> end-to-end data checksumming and SSD caching.

> There might be RAID controllers that can do SSD caching.  SSD caching
> means two extra disks for the cache (or what happens when the cache disk
> fails?), and ZFS doesn't increase the number of SAS/SATA ports you have.
I'm not sure what happens when a read from or write to the SLOG (the
separate log device for synchronous writes, often thought of as a write
cache) fails. Anyone? As for reads, if a read from the L2ARC (the read
cache on an SSD in our case) fails, it's just ignored (AFAIK) and the
data is read from the vdevs ("primary" storage) instead.
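The general idea, as I understand it, looks roughly like this toy Python
sketch (made-up names, nothing to do with the actual ZFS code): the
cached copy is used only if it passes its checksum; otherwise it is
skipped and the block is read from the pool itself.

import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def read_block(cache: dict, primary: dict, key: str, expected: str) -> bytes:
    """Try the SSD read cache first; if the cached copy is missing or
    fails its checksum, ignore it and fall back to the primary storage."""
    data = cache.get(key)
    if data is not None and checksum(data) == expected:
        return data                      # good copy found in the cache
    data = primary[key]                  # fall back to the "vdevs"
    if checksum(data) != expected:
        raise IOError("primary copy failed its checksum too")
    return data

# toy demo: the cached copy is corrupt, so the read silently falls back
block = b"hello"
print(read_block({"k": b"garbage"}, {"k": block}, "k", checksum(block)))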

> How does it do the checksumming?  Read everything after it's been
> written to verify?

Each time data is read from the disk, it is checksummed and the checksum
is compared with the value that was stored when the data was written
(ZFS keeps the checksum in the parent block pointer rather than next to
the data itself). This way you know whether the data you just read is
good, not just that it had been written correctly. Take a look here for
example, slides 12-16:

http://wiki.illumos.org/download/attachments/1146951/zfs_last.pdf

It's a little bit outdated and Solaris-centric, but it might give you a
general overview.
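Just to illustrate the principle (a toy Python sketch, not how ZFS is
actually implemented): the checksum is kept apart from the data block
and verified on every read, so silent corruption is detected instead of
being handed back to the application.

import hashlib

class TinyStore:
    """Toy model of end-to-end checksumming: the checksum is stored
    separately from the data (ZFS keeps it in the parent block pointer)
    and every read is verified against it."""

    def __init__(self):
        self.blocks = {}      # block id -> data on "disk"
        self.checksums = {}   # block id -> checksum, stored elsewhere

    def write(self, key: str, data: bytes) -> None:
        self.blocks[key] = data
        self.checksums[key] = hashlib.sha256(data).hexdigest()

    def read(self, key: str) -> bytes:
        data = self.blocks[key]
        if hashlib.sha256(data).hexdigest() != self.checksums[key]:
            # With redundancy ZFS would try another copy and repair it;
            # here we can only report the corruption.
            raise IOError("checksum mismatch on block %r" % key)
        return data

store = TinyStore()
store.write("a", b"important data")
store.blocks["a"] = b"silently corrupted bits"   # simulate bit rot
try:
    store.read("a")
except IOError as e:
    print(e)   # the bad read is detected, not returned as good data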

Kuba

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

