
Re: [Xen-users] Cheap IOMMU hardware and ECC support importance

Gordan Bobic <gordan@xxxxxxxxxx> writes:

> On 2014-06-26 17:12, lee wrote:
>> Mihail Ivanov <mihail.ivanov93@xxxxxxxxx> writes:
>>> So the next thing I've read about is RAID, and I am thinking of
>>> RAIDing 2 x WD Black 2 TB drives. (Should I do software RAID or
>>> hardware RAID?)
>> Software RAID can mean quite a slowdown compared to hardware RAID.
> The only situation where hardware RAID helps is if you have
> a _large_ battery backed write cache, and then it only helps
> on small bursty writes. A recent x86 CPU can do the RAID
> checksumming orders of magnitude faster than most RAID card
> ASICs, and hardware RAID cache is completely useless since
> anything that is likely to be caught in it will also be in
> the OS page cache.

The CPU may be able to handle the RAID faster, and there may be lots of
RAM available for caching.  But both of those draw on CPU cycles and
memory that might otherwise be put to other use.
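For what it's worth, the checksumming in question is mostly just XOR
parity.  A toy sketch in Python (purely illustrative -- md does this in
optimized C with SSE/AVX, not anything like this) of what a RAID-5
stripe computes:

```python
# Toy sketch of RAID-5 XOR parity -- illustrative only, not how md
# actually implements it.

def parity(blocks):
    """XOR all blocks together byte-by-byte to produce the parity block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def reconstruct(surviving_blocks, parity_block):
    """Rebuild one lost data block: XOR the parity with the survivors."""
    return parity(list(surviving_blocks) + [parity_block])

# Three data blocks of one stripe on a hypothetical 4-disk RAID-5 array.
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\xff\x00"
p = parity([d0, d1, d2])
assert reconstruct([d0, d2], p) == d1  # "disk 1" lost, rebuilt from the rest
```

The point being that this is pure integer math, which is exactly the
kind of thing a modern CPU core chews through far faster than a RAID
card's ASIC.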

> The time where hardware RAID was worthwhile has passed.

I'm not sure what you consider "recent".  I have an AMD Phenom 965, and
I do notice slowdowns with software RAID compared to hardware RAID on
the very same machine.

Besides, try to find a board that has more than six SATA ports, or one
that can do both SAS and SATA.  There are few, and they are the more
expensive ones.  Perhaps the lack of ports is less of a problem with
the disk capacities available nowadays; still, it is what made me get a
hardware RAID controller.

>>> Also I will be using ZFS and my Dom0 will be Fedora.
>> Fedora for a dom0 is a rather bad choice.  Fedora is an experimental
>> testing distribution with a very limited lifetime and prone to
>> experience lots of unexpected or undesirable changes.
> [...]
> True, and very unfortunate. Doubly so because my preferred distro (EL)
> is based on Fedora. The problem is that the quality (or lack thereof)
> trickles down, even after a lot of polishing.

I can say that the quality of Debian has declined quite a lot over the
years, and I can't say that about Fedora.  I haven't used Fedora for
that long, but it's working quite well.

>>> The question I am still pondering is whether I should get an E3 Xeon
>>> (no E3's with IGP are sold in my country), an 6-core E5 Xeon or AMD FX
>>> 8***,
>> Is power consumption an issue you need to consider?
>> As someone suggested, it might be a good idea to go for certified
>> hardware.  My server is going down about every 24 hours with a flood
>> of messages in dom0 like "aacraid 0000:04:00.0: swiotlb buffer is
>> full (sz: 4096 bytes)", and it's actual server hardware.  I made a
>> bug report a while ago; nobody cares, and I keep pressing the reset
>> button.  You probably don't want to end up like that.
> Hardware RAID is just downright evil.

I don't think the problem is due to the RAID controller.  Its driver is
considered very stable and mature, and one theory is that the problem
has to do with memory alignment: the block layer ends up being asked to
write out data via DMA from addresses the device cannot actually reach,
which forces everything through the bounce buffer.  If that is true,
every dom0 under Xen is prone to the same problem, regardless of
whether software or hardware RAID is used.

I've looked at the code --- it seems that the relevant part of the
kernel hands the failed write request back to the Xen side, expecting
the problem to be handled somewhere else.  I didn't trace it any
further because there's no point: I won't be able to fix this anyway.

I suspect that the problem occurs only under certain circumstances, like
depending on the number of VMs and on how they are set up.
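If the bounce buffer really is what's overflowing, one thing that might
be worth trying (a guess on my part, not a verified fix) is enlarging
dom0's swiotlb via the kernel command line:

```shell
# Sketch of a possible workaround, assuming the swiotlb is simply
# running out of slots.  swiotlb=<n> sets the number of 2 KiB slabs,
# so 131072 slabs is roughly 256 MiB.
# In /etc/default/grub (path varies by distro), on the dom0 kernel line:
GRUB_CMDLINE_LINUX="... swiotlb=131072"
# then regenerate the bootloader config (e.g. update-grub) and reboot dom0
```

Whether that actually helps depends on what is eating the buffer, of
course.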

> I use plain old SATA with on-board and cheap add-in controllers and
> find that to be by far the least problematic combination.

I haven't found any cheap SATA controller that looked like it would
both work with Linux and be a decent piece of hardware.  Compared to
the ones that do seem decent, a used SAS/SATA RAID controller is
cheaper and much more capable.

>> And after spending quite a bunch of money on hardware, you might
>> want to use something other than Xen.  Give it a try and set up a
>> couple of VMs on a testing machine so you get an idea of what you're
>> getting into, and reconsider.
> Alternatives aren't better, IMO. Having tried Xen, VMware and KVM,
> Xen was the only one I managed to (eventually) get working in the
> way I originally envisaged.

Hm, I find that surprising.  I haven't tried VMware, and I thought
that, as a commercial product, it would make it easy to set up some VMs
and run them reliably.  KVM/QEMU I tried years ago, and it seemed much
more straightforward than Xen does now, which appears to be very
chaotic.

In any case, I'm not convinced that virtualization as it's done with
Xen and the like is the right way to go.  It has advantages and solves
some problems while creating other problems of its own.  It's like
going back to mainframes because the hardware has become too powerful,
using software to turn this very hardware into "multiframes" --- and
then finding out that it doesn't work so well because the hardware,
though powerful enough, was never designed for it.  It's like taking an
axe to the hardware to cut it into pieces and expecting those pieces to
be particularly useful.

Perhaps, given some time, we will find more, less powerful hardware
that serves its purpose more efficiently, and, if need be, we just plug
in another piece of very efficient hardware to serve the next purpose.
That would make more sense to me.

Knowledge is volatile and fluid.  Software is power.

Xen-users mailing list


