
Re: [Xen-users] Cheap IOMMU hardware and ECC support importance


  • To: xen-users@xxxxxxxxxxxxx
  • From: Gordan Bobic <gordan@xxxxxxxxxx>
  • Date: Sat, 28 Jun 2014 12:56:51 +0100
  • Delivery-date: Sat, 28 Jun 2014 11:58:01 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

On 06/28/2014 08:45 AM, lee wrote:
Gordan Bobic <gordan@xxxxxxxxxx> writes:

On 2014-06-26 18:36, lee wrote:
Gordan Bobic <gordan@xxxxxxxxxx> writes:

On 2014-06-26 17:12, lee wrote:
Mihail Ivanov <mihail.ivanov93@xxxxxxxxx> writes:

So the next thing I've read about is RAID, so I am thinking of raiding 2 x WD
Black 2 TB. (Should I do software raid or hardware raid?)

Software raid can mean quite a slowdown compared to hardware raid.

The only situation where hardware RAID helps is if you have
a _large_ battery backed write cache, and then it only helps
on small bursty writes. A recent x86 CPU can do the RAID
checksumming orders of magnitude faster than most RAID card
ASICs, and hardware RAID cache is completely useless since
anything that is likely to be caught in it will also be in
the OS page cache.
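
As a rough illustration of just how cheap parity is on the host CPU, here is a minimal Python/NumPy sketch that XORs a few in-memory "stripes" and reports the throughput. The buffer size and disk count are arbitrary, the numbers will vary by machine, and this is not how md or any controller firmware actually implements parity; it is only meant to show the order of magnitude a single core manages.

    # Rough sketch: time RAID5-style XOR parity on the host CPU.
    # Buffer size and "disk" count are arbitrary; numbers vary by machine,
    # and this is not how md or controller firmware actually does it.
    import time
    import numpy as np

    CHUNK = 64 * 1024 * 1024      # 64 MiB per data "disk"
    NDISKS = 3                    # data disks contributing to one parity chunk

    rng = np.random.default_rng(0)
    data = [rng.integers(0, 256, CHUNK, dtype=np.uint8) for _ in range(NDISKS)]

    start = time.perf_counter()
    parity = data[0] ^ data[1]
    for d in data[2:]:
        parity ^= d
    elapsed = time.perf_counter() - start

    mb = NDISKS * CHUNK / 1e6
    print(f"XORed {mb:.0f} MB in {elapsed * 1000:.1f} ms "
          f"({mb / elapsed / 1000:.1f} GB/s on one core)")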

The CPU may be able to handle the raid faster, and there may be lots of
RAM available for caching.  But using both CPU and RAM draws on resources
that may be needed for something else.

A typical caching hardware RAID controller has maybe 3% of RAM of
a typical server. And I'm pretty sure that for the price of one
you could easily get more than an extra 3% of CPU and RAM.

That depends on what you have and need.  I needed at least 9 SATA
ports.  Choices:


+ buy a new board plus CPU plus RAM
   - costs at least 10 times what I paid for the controller and gives
     me only max. 8 ports

+ max out the RAM
   - means buying 16GB of RAM and throwing 8GB away; costs more than what I
     paid for the controller

+ buy some relatively cheap SATA controller
   - might not work at all, or not work well, and gives me only 1--2
     additional ports, i.e. a total of only 8.  It would have cost less
     than what I paid for the RAID controller, but is it worth the
     trouble?  It would have blocked a PCIe slot for only 1--2 more
     ports.  I didn't find that worthwhile; it seemed like a waste of money.


The hardware RAID controller gives me 10fps more with my favourite game
I'm playing, compared to software raid.  Since fps rates can be rather
low (because I'm CPU limited), that means a significant difference.

If your game is grinding on disk I/O during play, all is lost anyway. If your CPU and RAM are _that_ constrained, there is probably a better way to spend whatever you would pay for a new caching RAID controller these days.

The time where hardware RAID was worthwhile has passed.

I'm not sure what you consider "recent".  I have an AMD Phenom 965, and
I do notice the slowdowns due to software raid compared to hardware
raid, on the very same machine.

I can believe that if you have a battery backed cache module

It has one.

and your workload includes a lot of synchronous writes. But
for that workload you would probably be better off getting an
SSD and using ZFS with ZIL in terms of total cost, performance
and reliability.
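
For reference, putting the ZIL on a dedicated SSD is a one-liner once the pool exists. The sketch below just wraps the stock zpool command from Python; the pool name "tank" and the device path are placeholders, not anything from this thread.

    # Minimal sketch: attach an SSD partition as a dedicated log (SLOG) device
    # so synchronous writes land on flash instead of the spinning pool.
    # Pool name and device path are placeholders -- adjust to your setup.
    import subprocess

    POOL = "tank"                                    # hypothetical pool name
    SLOG = "/dev/disk/by-id/ata-EXAMPLE-SSD-part1"   # hypothetical SSD partition

    # Equivalent to: zpool add tank log /dev/disk/by-id/ata-EXAMPLE-SSD-part1
    subprocess.run(["zpool", "add", POOL, "log", SLOG], check=True)

    # Show the resulting layout, including the new "logs" vdev.
    subprocess.run(["zpool", "status", POOL], check=True)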

SSDs still lose badly when you compare price with capacity.  For what I
paid for the RAID controller, I could now buy two 120GB SSDs (I couldn't
back then).  That means two more disks, requiring two more SATA ports
(11 in total), and an increased overall chance of disk failures, because
the more disks you have, the more can fail.

I don't know about ZFS, though; I've never used it.  How much CPU overhead
is involved?  I don't need any more CPU overhead like the overhead that
comes with software raid.

If you are that CPU constrained, tuning the storage is the wrong thing to be looking at.

[...] expensive ones.  Perhaps the lack of ports is not so much of a problem
with the available disk capacities nowadays; however, it is what made me
get a hardware raid controller.

Hardware RAID is, IMO, far too much of a liability with
modern disks. Latent sector errors happen a lot more
often than most people realize, and there are error
situations that hardware RAID cannot meaningfully handle.

So far, it works very well here.  Do you think that software RAID can
handle errors better?

Possibly in some cases.

And where do you find a mainboard that has like
12 SAS/SATA ports?

I use a Marvell 88SX7042 4-port card with two SIL3726 SATA port multipliers on it. This works very well for me and provides more bandwidth than my 12 disks can serve in a realistic usage pattern.

In contrast, I have three SAS RAID cards, two LSI and one Adaptec, none of which work at all on my motherboard with the IOMMU enabled.
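
If anyone wants to check what the kernel actually thinks of their IOMMU topology, walking /sys/kernel/iommu_groups is a quick sanity test. A minimal sketch, assuming a kernel recent enough to expose that directory:

    # List IOMMU groups and the PCI devices in each, as the kernel sees them.
    # If the IOMMU is disabled (or the kernel is too old), the directory is
    # empty or missing.
    import os

    GROUPS = "/sys/kernel/iommu_groups"

    if not os.path.isdir(GROUPS) or not os.listdir(GROUPS):
        print("No IOMMU groups found - IOMMU disabled or not supported?")
    else:
        for group in sorted(os.listdir(GROUPS), key=int):
            devices = os.listdir(os.path.join(GROUPS, group, "devices"))
            print(f"group {group}: {' '.join(sorted(devices))}")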

I can say that the quality of Debian has been declining quite a lot over
the years, and I can't say that about Fedora.  I haven't used Fedora that
long, and it's working quite well.

Depends on what your standards and requirements are, I suppose.
I have long bailed on Fedora other than for experimental testing
purposes to get an idea of what to expect in the next EL. And
enough bugs filter down to EL despite the lengthy stabilization
stage that it's becoming quite depressing.

It seems that things are getting more and more complicated --- even though
they don't need to --- and that people are getting more and more
clueless.  More bugs might be a side effect of that, and things aren't
done as thoroughly as they used to be done.

Indeed. The chances of getting a filed Fedora bug fixed, or even acknowledged, before Fedora's 6-month EOL bug zapper closes it for you are vanishingly small, in my experience.

I find that on my motherboard most RAID controllers don't work
at all with the IOMMU enabled. Something about the transparent
bridges used to attach native PCI-X RAID ASICs to PCIe makes things not work.

Perhaps that's a problem of your board, not of the controllers.

It may well be, but it does show that the idea that a SAS RAID controller with many ports is a better solution does not universally apply.

Cheap SAS cards, OTOH, work just fine, and at a fraction of
the cost.

And they provide only a fraction of the ports and features.

When I said SAS above I meant SATA. And PMPs help. The combination of SATA card and PMPs supports FIS-based switching and NCQ, which means that the SATA controller's bandwidth per port is used very efficiently.

As I said, I had far more problems with SAS RAID cards than SATA
controllers, and I use PMPs on top of those SATA controllers. I
might look at alternatives if I were running on pure solid state,
but for spinning rust SATA+PMP+FIS+NCQ yields results that a
hardware RAID controller would be unlikely to improve on.
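
For the curious, sysfs already shows which controller and which PMP link each disk sits behind, and whether NCQ is in play. A rough sketch; the paths are standard Linux sysfs, nothing vendor-specific:

    # For each disk, show the resolved sysfs device path (which reveals the
    # controller, the ATA port and, for PMP-attached disks, the linkX.Y hop)
    # plus the current queue depth (>1 generally means NCQ is in use).
    import glob
    import os

    for disk in sorted(glob.glob("/sys/block/sd*")):
        name = os.path.basename(disk)
        devpath = os.path.realpath(disk)   # e.g. .../0000:05:00.0/ata7/link7.2/...
        try:
            with open(os.path.join(disk, "device", "queue_depth")) as f:
                qdepth = f.read().strip()
        except OSError:
            qdepth = "n/a"
        print(f"{name}: queue_depth={qdepth}")
        print(f"  {devpath}")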

I plugged the controller in, connected the disks, created the volumes,
copied the data over, and it has been working without any problems ever
since, eliminating the CPU overhead of software raid.  After some time,
one of the disks failed, so I replaced it with no trouble.

The server is the same --- except that it crashes (unless that is finally
fixed).  The crashes may be due to a kernel or xen bug, or to the
software for the raid controller being too old.

Anyway, I have come to like hardware RAID better than software RAID.

Whatever works for you. My view is that traditional RAID, certainly anything below RAID6 (and even with RAID6 I don't trust the closed, opaque, undocumented implementation that might be in the firmware), is no longer fit for purpose with disks of the size that ship today.
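
To put a number on that, here is a back-of-the-envelope calculation using the commonly quoted 1-per-10^14-bits URE spec for consumer drives. The disk size and survivor count below are illustrative figures, not measurements from any particular array.

    # Back-of-the-envelope: chance of hitting at least one unrecoverable read
    # error (URE) while rebuilding a degraded single-parity array. Assumes the
    # usual consumer-drive spec of 1 URE per 1e14 bits and independent errors;
    # crude, but it makes the point.
    import math

    URE_RATE = 1e-14      # probability of a URE per bit read (datasheet figure)
    DISK_TB = 4           # size of each surviving disk, in TB
    N_SURVIVORS = 3       # disks read in full to rebuild, e.g. a 4-disk RAID5

    bits_read = N_SURVIVORS * DISK_TB * 1e12 * 8
    p_fail = 1 - math.exp(-URE_RATE * bits_read)

    print(f"Bits read during rebuild:   {bits_read:.2e}")
    print(f"Chance of at least one URE: {p_fail:.0%}")
    # With these numbers the rebuild has roughly a 62% chance of tripping
    # over an unreadable sector.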

You could as well argue that graphics cards are evil.

It comes down to what makes a good tool for the job. There are jobs that GPUs are good at. When it comes to traditional RAID, there are things that are more fit for the purpose of ensuring data integrity.

Alternatives aren't better, IMO. Having tried Xen, VMware and KVM,
Xen was the only one I managed to (eventually) get working in the
way I originally envisaged.

Hm, I find that surprising.  I haven't tried VMware and thought that as
a commercial product, it would make it easy to set up some VMs and to
run them reliably.

It's fine as long as you don't have quirky hardware.
Unfortunately, most hardware is buggy to some degree,
in which case things like PCI passthrough are likely
to not work at all.

With Xen there is always the source that can be modified
to work around at least the more workaroundable problems.
And unlike on the KVM development lists, Xen developers
actually respond to questions about working around such
hardware bugs.

So with VMware, you'd have to get certified hardware.

You wouldn't _have_ to get certified hardware. It just means that if you find that there is a total of one motherboard that fits your requirements and it's not on the certified list, you can plausibly take your chances with it even if it doesn't work out of the box. I did that with the SR-2 and got it working eventually in a way that would never have been possible with ESX.

Plus, dom0 being Linux I can use features that simply don't exist on ESX.

After all, I'm not convinced that virtualization as it's done with xen
and the like is the right way to go.
[...]

I am not a fan of virtualization for most workloads, but sometimes
it is convenient, not least in order to work around deficiencies of
other OS-es you might want to run. For example, I don't want to
maintain 3 separate systems - partitioning up one big system is
much more convenient. And I can run Windows gaming VMs while
still having the advantages of easy full system rollbacks by
having my domU disks backed by ZFS volumes. It's not for HPC
workloads, but for some things it is the least unsuitable solution.

Not even for most?  It seems as if everyone is using it quite a lot,
whether it makes sense or not.

Most people haven't realized yet that the king's clothes are not suitable for every occasion, so to speak. In terms of the hype cycle, different users are at different stages. Many are still around the "peak of inflated expectations". Those that do the testing for the particular high-performance workloads they were hoping to virtualize hit the "trough of disillusionment" pretty quickly most of the time. But there ARE things that it is useful for, as I mentioned in the paragraph above. Consolidating mostly idle machines, and using virtualization to augment the ease and convenience of backup/restore procedures by adding features that don't exist in the guest OS, are obvious examples of uses that virtualization is very good for. That would be the "plateau of productivity".
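
As a concrete example of the rollback convenience mentioned above: with a domU disk backed by a ZFS zvol, checkpointing and rolling back the whole guest is two commands. The sketch just wraps the stock zfs commands; the zvol name is made up for illustration.

    # Minimal sketch of whole-guest rollback when a domU disk is a ZFS zvol.
    # The dataset name is made up; shut the guest down first (or accept a
    # crash-consistent image) before rolling back.
    import subprocess

    ZVOL = "tank/win7-gaming-disk0"    # hypothetical zvol backing the domU disk

    def snapshot(name):
        # zfs snapshot tank/win7-gaming-disk0@<name>
        subprocess.run(["zfs", "snapshot", f"{ZVOL}@{name}"], check=True)

    def rollback(name):
        # zfs rollback -r discards everything newer than the named snapshot
        subprocess.run(["zfs", "rollback", "-r", f"{ZVOL}@{name}"], check=True)

    snapshot("pre-patch-tuesday")
    # ... boot the Windows domU, let it update, see whether it still works ...
    # rollback("pre-patch-tuesday")    # one command undoes the lot if not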



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

