
Re: [Xen-users] how to start VMs in a particular order

  • To: xen-users@xxxxxxxxxxxxx
  • From: "J. Roeleveld" <joost@xxxxxxxxxxxx>
  • Date: Thu, 03 Jul 2014 09:08:02 +0200
  • Delivery-date: Thu, 03 Jul 2014 07:09:50 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

On Thursday, July 03, 2014 04:03:26 AM lee wrote:
> Joost Roeleveld <joost@xxxxxxxxxxxx> writes:
> > On Tuesday 01 July 2014 23:48:51 lee wrote:
> >> Joost Roeleveld <joost@xxxxxxxxxxxx> writes:
> >> > Check the howtos for smartctl, they explain how to interpret the data.
> >> > I'd recommend:
> >> > http://www.smartmontools.org/
> >> 
> >> Ok, if I get to see the numbers, I can look there.  I never believed in
> >> this smart thing ...
> > 
> > You just wait for disks to die suddenly?
> yes
> You stock up on new disks just because smart might tell you that your
> disks will die eventually?

No, when SMART shows errors that indicate a dying disk, I order a replacement. 
When it arrives, I swap the disk. The dying disk is then either sent back under 
warranty or used for testing.
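
For reference, a minimal sketch of what that SMART check looks like with smartmontools (the device name and the attribute line below are illustrative, not from a real disk):

```shell
# Quick health check (run as root; /dev/sda is an assumed device name):
#   smartctl -H /dev/sda   # overall health self-assessment
#   smartctl -A /dev/sda   # full vendor attribute table
#
# A sample attribute line as smartctl -A prints it (made-up values);
# the last column is the raw count, and a non-zero reallocated-sector
# count is a classic sign of a dying disk:
line="  5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 12"
raw=$(echo "$line" | awk '{print $NF}')
echo "reallocated sectors: $raw"
```

Attributes 197 (Current_Pending_Sector) and 198 (Offline_Uncorrectable) are worth watching for the same reason.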

> >> You have seen three (or more) disks going bad all at the same time just
> >> because they were connected to a different controller?
> > 
> > Yes,
> And smart didn't tell you they would go bad? ;)

SMART didn't exist back then.

> > it was a cheap controller though, but it did actually kill any disk I
> > connected to it.
> Hm now that really sucks and is rather unexpected.
> > I was working at a computer shop at the time and the owner wanted us to
> > try
> > different disks even though the first 2(!) died and those wouldn't work on
> > any other system anymore.
> I'll keep this in mind ... and in the future, I might as well connect
> defective disks to unknown controllers before good ones to see if the
> controller kills them.

This was around 1998; I wouldn't expect dodgy hardware like that anymore, 
especially not in products used by bigger companies.

> >> They really aren't the greatest disk one can imagine.  I'd say they are
> >> ok for what they are and better than their reputation, considering the
> >> price --- you could get them for EUR 65 new a few years ago, maybe even
> >> less, before all disk prices increased.  I'll replace them with
> >> something suitable when they fail.
> > 
> > For twice that, I got 3TB WD Red drives a few years ago, after the
> > factories came back online.
> Are they twice as good?  I know they're quite a bit faster.  However,
> when I bought the WD20EARS, there weren't any red ones, only RE ones,
> which, IIRC, cost about 4 times as much as the WD20EARS.  That was just
> too much.

I agree, the RE ones are too expensive.
They are a lot faster: instead of taking a week(!) to build the RAID-6 array 
with 6 disks, it now takes 20 hours.
Both figures are with the 3TB versions.
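
A back-of-the-envelope check of what those rebuild times imply per disk (taking 3 TB as 3,000,000 MB; the times are from this thread, not a benchmark):

```shell
# Sustained rebuild rate implied by each figure (integer division):
echo "one week: $(( 3000000 / (7 * 24 * 3600) )) MB/s"   # about 5 MB/s
echo "20 hours: $(( 3000000 / (20 * 3600) )) MB/s"       # about 42 MB/s
```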

> >> Systems would go down all the time if exceeding their I/O capacity
> >> would make them crash.
> > 
> > It depends on how big the capacity is and how the underlying hardware
> > handles it.
> The I/O capacity is either exceeded, or it isn't.  It doesn't matter how
> big it is or how the hardware handles it.
> Just copy some data from /dev/null to a file, and you'll exceed the I/O
> capacity of your system.  Does it crash?
> Start an application like seamonkey (with a hundred tabs open).  When
> you have a fast CPU and a slow I/O system, doing so will exceed the I/O
> capacity of your system.  Does it crash?
> Boot some version of MS windoze from a HDD.  That exceeds the I/O
> capacity of your system or otherwise ppl wouldn't see huge improvements
> from booting from SSDs.  When it crashes, is it because the I/O capacity
> was exceeded?

Not normally, I agree; error handling has improved a lot over the past couple 
of years. I've been using computers for a very long time, and on occasion 
experimental software and hardware as well.
Bad handling of filled buffers can lead to crashes. Fortunately, that doesn't 
happen much anymore.

> >> without a backplane in the way.  It is probably true that IBM --- and/or
> >> Adaptec
> > 
> > I believe you are using an IBM raid controller. Not an Adaptec part. At
> > least, I can't see Adaptec in any of the documentation I saw online.
> It's an IBM when you go by the labels and documentation.  Apparently
> Adaptec made it (for IBM).

But who wrote the firmware?

> It's rather weird because it's a card that plugs into a special slot,
> with apparently some/most of the controller integrated into the board.
> Without the board, that card is useless.

If you remove the card, can you still see the drives from an OS?

> >> --- ran into problems with SATA drives connected to the
> >> controller they couldn't really solve, for otherwise there wouldn't be a
> >> need to implement different PHY settings and even a utility in the
> >> controllers' BIOS to let users change them.
> > 
> > The backplane used in these systems, from my understanding, have a port
> > multiplier built-in. I think it is that part causing the problem.
> Hm, did you find any documentation about it?  It would appear to be an
> IBM-ESXS VSC7160 enclosure, and I haven't found any documentation for
> it.  Apparently there are various drivers for it --- why would those be
> needed?

My info was based on that page and on what I can see in pictures.
I see a single SAS port on the mainboard. If that is all that is connected to 
the backplane (enclosure), then either a PMP is used, or only 4 disks can be 
connected.
The drivers might be needed for:
1) The PMP
2) Reading some environmental values
3) Some other function

> >> The documentation speaks of "different SATA channels" and claims that
> >> improvements have been made to the PHY settings, apparently hiding
> >> what's actually going on.
> > 
> > SAS and SATA controllers often talk about sata channels. My raid
> > controller
> > even still calls them IDE-channels. It's just a name.
> It's obfuscating --- a better explanation would be much more helpful.

It's sticking to old names. As long as I can identify the drives using the 
name (IDE1, IDE2,...), I don't care much about the name given.

> >> Anyway, server uptime is 3 days, 9 hours now.  That's a great
> >> improvement :)
> >> 
> >> So for what's it worth:  For WD20EARS on a ServeRaid 8k, try different
> >> PHY settings.  PHY 2 seems to work much better than 0, 1 and 5.
> > 
> > That is useful news, especially if that keeps the system running. Maybe
> > post that online somewhere, including on that page?
> That was my intention :)  There are archives of this mailing list,
> aren't there?

Yes, including on my mail server. But not everyone will necessarily find this 
mailing list.

> >> > True, but, SATA drives don't always work when used with port
> >> > multipliers,
> >> > which from the above, I think you are actually using.
> >> 
> >> Hm, I doubt it.  The drive slots are numbered 0--5, and I can set a PHY
> >> setting for each drive individually.  Would I be able to do that if a
> >> PMP was used?
> > 
> > Yes, the question is, does the PMP used handle that correctly?
> How would the RAID controller know which disk is in which slot when they
> are all behind a PMP?  It does know that.

It will need to; how else could it tell you which disk to replace?
If it only gives you the port on the RAID card, you have to test every disk 
behind the PMP to find out which one died...
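
On Linux, one way to answer "which physical disk is that" is /dev/disk/by-path, which encodes the controller port (and, behind a PMP, the link on the PMP) in the name. A minimal sketch with a made-up entry:

```shell
# `ls -l /dev/disk/by-path/` lists stable, port-derived names such as
# pci-0000:00:1f.2-ata-3 (directly attached) or ...-ata-3.1 (second
# link of a PMP on port 3). Extracting the port from a sample entry:
name="pci-0000:00:1f.2-ata-3.1"   # hypothetical by-path entry
port=${name##*-ata-}
echo "ATA port/link: $port"
```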

> >>  And can a single port keep up with 6 SAS drives?
> > 
> > How many drives do you know of that can provide a sustained datastream of
> > 3Gb/s?
> > Or, in the case of 6 drives, 500Mb/s?
> > Assuming you have a drive that can sustain 200Mb/s, that still means a
> > single port can theoretically handle 3000 / 200 = 15 disks.
> > With SSDs the picture is slightly different. With a sustained read speed
> > of
> > 550Mb/s, you would get nearly 5.5 disks.
> > 
> > So, yes, a single port can easily keep up with 6 SAS drives.
> Aren't you confusing Gbit/sec with MB/sec?

I think I might have mixed them up.

> 3 Gbit/sec divided by 8 gives you Gbytes/sec, i. e. 0.375.  That's 375
> MB/sec.  There's some protocol overhead, so you can keep up with three,
> perhaps four disks, and you can't with six.

Not if you have a lot of constant I/O. I only know of a few types of usage 
where that happens. Storing data from sensor equipment is one of them, but for 
those situations, you wouldn't use PMPs.

I don't see an issue with most usage, provided you use components (controller, 
PMP, disks) that support all the nice features that help performance.
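
A quick sanity check of the arithmetic above: SATA II signals at 3 Gbit/s on the wire, but 8b/10b encoding spends 10 line bits per data byte, so the usable payload rate is 300 MB/s rather than 375 (protocol overhead then takes a little more):

```shell
# 3 Gbit/s line rate = 3000 Mbit/s; 8b/10b means 10 line bits per byte.
echo "usable payload:        $(( 3000 / 10 )) MB/s"
echo "per-disk share (of 6): $(( 3000 / 10 / 6 )) MB/s"
```

At 50 MB/s per disk of sustained throughput, six spinning disks behind one 3 Gbit/s link would indeed be bandwidth-limited under constant sequential I/O, which matches the point made above.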

> >> Yes --- I have two PHY settings left I can try if I have to.  If that
> >> doesn't help, I can look into disabling power saving.
> > 
> > I hope setting 2, as you mentioned above, keeps it stable.
> It still hasn't crashed yet :)  I wonder if 3 or 4 might be better ...

It's your hardware and data. But if it were me, I would keep it on 2 :)


Xen-users mailing list


