
Re: [Xen-users] questions regarding HVM and maximum block device size



Hi Mark,

Mark Williamson <mark.williamson@xxxxxxxxxxxx> wrote: 
> > right now I run a bunch of PV Xen guests, and everything is fine. But on
> > the horizon there is a potential need that I may have to run one or more
> > HVM guests.
> >
> > Some time ago I did some tests, and I observed the following on a host:
> > I activated the AMD-V extension in the BIOS, because I wanted to test
> > setting up an HVM machine. While this was activated, the PV domU running
> > on the same host had unusually slow NFS performance. After I was done
> > with the tests, I disabled AMD-V in the BIOS again, and the NFS speed
> > was "normal" again. The NFS speed with AMD-V enabled was about 1/3rd
> > slower than without. The dom0 and domU are 64-bit SLES10 SP1 systems.
> > Is what I've seen normal?
> 
> I don't think that's normal at all - it's certainly not the intended 
> behaviour!  You're *just* running PV domains on the box, right?  The only 
> difference is that you've enabled AMD-V in the bios?  That shouldn't make 
> any difference at all, so it's most curious if there's a performance 
> difference.
> 
> Have you also tried enquiring about this on SLES mailing lists / forums, 
> in case it's a SLES-specific problem?
> 
> > If yes, I guess it's not recommended to run PV and HVM systems on the
> > same dom0? Or if no, any idea what I can do about it?
> 
> It should be fine to mix PV and HVM guests on the same system.  This is a 
> pretty weird problem you're seeing though - I've no idea what would be 
> causing it.  Are you sure that the bios setting is the only thing that 
> changed?  Have you double checked your measurements here?  I don't mean to 
> sound disbelieving, it's just a very very strange problem to see!
> 
> Assuming this is definitely reproducible, further enquiries are the way 
> forward.  Asking on the SLES support channels makes sense.  Asking on 
> xen-devel may also be worthwhile.
> 
> Check xm dmesg and /var/log/xen/xend.log for any differences in output 
> between the two cases.  I don't know what I'd expect to see differ but 
> it's worth a try.

Thank you for these comments. Right now I do not have spare hardware 
available to do some new tests, but what you say makes me hope that I 
either observed something incorrectly, or that the behaviour was specific 
to the machine I saw it on. I'll retest when I get the new box for the HVM 
machine, and will ask on the -devel and SLES lists if I see the behaviour 
again.
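
When I do retest, I plan to capture the hypervisor and xend output with 
AMD-V enabled and disabled and compare the two runs, roughly like this 
(file names are just examples):

    xm dmesg > xm-dmesg-amdv-on.txt
    cp /var/log/xen/xend.log xend-amdv-on.log
    # ...then disable AMD-V in the BIOS, reboot, repeat with "-off" names...
    diff -u xm-dmesg-amdv-off.txt xm-dmesg-amdv-on.txt
    diff -u xend-amdv-off.log xend-amdv-on.log

If anything interesting shows up in the diffs, I'll post it to the lists.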

> 
> > Further, I'd like to know whether xm mem-set will work for HVM domUs?
> > I guess, in case the OS supports it, then it will work?
> > I've also read about paravirtual drivers for HVM guests, and I've seen a
> > xen-balloon.ko for HVM Linux guests, but I want to run MS Windows - are
> > there also such drivers available?
> 
> xm mem-set can work in principle for HVM domUs, yes.  AFAIK you won't be 
> able to grow a domain beyond its initial allocation at this point in time, 
> but you should be able to shrink and grow it within those bounds.
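
Good to know. So once the guest has a working balloon driver, I guess it 
would just be the usual command, e.g. for a domU that was started with 
1024MB (the domain name is only an example):

    xm mem-set winguest 512     # shrink the guest to 512MB
    xm mem-set winguest 1024    # grow it back, but not beyond the initial 1024MB
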
> 
> You need an appropriate driver for the HVM OS though.  As you've noticed, 
> there is a Linux driver available.  For Windows, you'll need to find some 
> PV-on-HVM drivers for your platform.  I seem to recall Novell providing a 
> driver pack for Windows on SLES - maybe you could look into that?  But 
> there's also a free set of PV-on-HVM drivers, with the development being 
> led by James Harper, although I don't know if these have a balloon driver 
> at this time...?  These are still in development, so they may not be 
> recommended for use on a system containing important data or requiring 
> high uptimes.  That said, I get the impression quite a few people are 
> using them successfully having worked out any local problems.  Make sure 
> to read through some mailing list archives on the drivers so you can 
> learn of possible problems and actions to take to avoid them!
> 
> You may well want to experiment with PV-on-HVM anyhow to get better 
> Windows IO performance.
> 
> > VMware had, or maybe still has (I don't use it anymore since there is
> > Xen ;)), a limit on the maximum size of a block device at 2TB. So if I
> > wanted to share a disk larger than 2TB, the VMware guest was/is only
> > able to see the first 2TB but not more. Does a similar limit on block
> > device size exist in Xen?
> 
> I think there is a maximum block device size under Xen but I'm not sure 
> what it is.  If you search the mailing list archives you may find some 
> useful information on this.

Well, I did, but maybe not with the right keywords, or maybe not thoroughly 
enough. I'll take another look.
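
I can probably also just test it directly once the new hardware is here: 
create an LVM volume bigger than 2TB, attach it to a guest, and check what 
size the guest actually sees. Something along these lines (volume group, 
domain and device names are just examples):

    lvcreate -L 3T -n bigvol vg0
    xm block-attach mydomu phy:/dev/vg0/bigvol xvdb w
    # then, inside the guest:
    blockdev --getsize64 /dev/xvdb
    cat /proc/partitions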


thanks a lot
Sebastian


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

