
RE: [Xen-users] Debian/Xen usage summary



 

> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of 
> Didier Trosset
> Sent: 27 April 2007 14:56
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] Debian/Xen usage summary
> 
> Hello,
> 
> I just set up a few virtual systems, and I ran into some
> limitations using Xen. I'd like to share these, just to know if
> the limitations are in the system or in the user :)
> 
> I am using a standard Debian 4.0 (etch) GNU/Linux
> distribution. The system is an Intel Core 2 Duo with hardware
> virtualization support. I use the amd64 flavour, so my kernel
> is '2.6.18-4-xen-amd64'. Xen is version 3.0.3.
> 
> Using paravirtualization, I can start only other 64-bit guests:
>    either the standard linux-image-2.6.18-4-xen-amd64,
>    or a specially compiled one, with IDE built in rather than as
>    a module.
>    I cannot manage to start the Ubuntu 7.04 kernels for amd64.

This is normal and expected - the PV kernel and the Dom0 +
hypervisor need to be of the same type (64-bit, 32-bit PAE or
32-bit non-PAE). The 3.0.5 release (RC3 is available now) will
change this to allow 32-bit PAE guests on top of a 64-bit
hypervisor (32-bit without PAE is much harder: 64-bit and 32-bit
PAE share a very similar page-table format, whilst the non-PAE
format is noticeably different).
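
A quick way to see which guest types your hypervisor build can run
is the xen_caps line in "xm info" (run in Dom0; the exact output
depends on your build):

    # On a 64-bit 3.0.3 hypervisor with VT enabled, this typically
    # lists the 64-bit PV ABI plus the HVM types, e.g.
    #   xen_caps : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_64
    xm info | grep xen_caps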

> 
> Using full virtualization (hvm), I can start only 32-bit guests.
>    Starting from an ISO of a Debian amd64 or Fedora x86_64
>    installer fails, but starting from an i386 ISO of these allows
>    the install to run OK.

64-bit guests of some sorts should work fine in 3.0.3, but some may
not. However, if you can't get any 64-bit guests to work, I'd guess
that some settings are missing from the config file: apic=1 and
pae=1 should be the minimum. Changing the setting of acpi={1,0} may
also contribute to success - try both options to see if one works
better than the other. Bear in mind, though, that significant
effort went into 3.0.4's 64-bit support, so some guests will work
much better with 3.0.4 or a later version of Xen.
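
For reference, a minimal sketch of an HVM config with those
settings (the file name, paths and disk volume below are just
examples and may differ on a Debian install):

    # /etc/xen/hvm64-example.cfg - hypothetical example config
    name    = 'hvm64-example'
    builder = 'hvm'
    kernel  = '/usr/lib/xen/boot/hvmloader'
    device_model = '/usr/lib/xen/bin/qemu-dm'
    memory  = 512
    disk    = [ 'phy:/dev/vg0/hvm64-example,ioemu:hda,w' ]
    # The two settings that matter most for 64-bit guests:
    apic = 1
    pae  = 1
    # If the guest still won't boot, try flipping this one:
    acpi = 1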

> 
> Using hvm, I have to use ioemu for the network card to be
> recognized:
>    vif = [ "type=ioemu, ip=..." ]
>    Without it, the card is not detected.
>    BTW, I am using NAT for networking.

Yes, you probably need ioemu here to tell the builder that the
network device is an emulated one. In theory it could be a
para-virtual device, so if it's not declared as ioemu, the builder
may set it up as a PV device, which of course doesn't work under
full virtualization unless special para-virtual drivers have been
added to the guest.
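
So the line in your mail is the right idea; spelled out a bit more
(the IP and MAC values are placeholders - 00:16:3e is the
Xen-assigned OUI):

    # Force the emulated NIC rather than a PV netfront device
    # that an unmodified guest can't drive:
    vif = [ 'type=ioemu, ip=10.0.0.2, mac=00:16:3e:00:00:01' ]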

> 
> Then, starting to use all of these is a bit of a nightmare.
> Indeed, hvm systems do start, but they do not show the SDL
> display. xm list reports the system as running, but I have no
> access to it (and the network is not yet configured), although
> the display was present during install. I don't know what
> happens here. I'd be glad for some hints.

Use VNC instead? I suspect that the SDL option wasn't compiled into
your (i.e. Debian's) QEMU device model, but that's just a guess.
Try checking the /var/log/xen/qemu-dm.*.log files to see if they
say anything about SDL/VNC/etc. [I generally use SDL, but I've used
VNC lately too - both need to be compiled in when building the
qemu-dm application.]
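
If you want to try the VNC route, the relevant entries would be
roughly this (a sketch; the display number and listen address
depend on your setup):

    # In the HVM guest config: disable SDL, enable VNC.
    sdl = 0
    vnc = 1
    # Then connect to qemu-dm's VNC server from Dom0 with
    # something like:  vncviewer localhost:0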

Of course, it could be any of a hundred other problems... :-(

--
Mats
> 
> One more question: how do I mount inside dom0 a logical volume
> that is given as a whole disk to an hvm guest (which has
> partitioned it)?
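
One common way to do that in Dom0 is to map the partitions inside
the volume with kpartx (a sketch; the volume path is an example,
the exact /dev/mapper names vary with the tool version, and the
guest must be shut down while you have its filesystem mounted):

    # Create /dev/mapper entries for each partition found in the
    # guest's whole-disk volume:
    kpartx -av /dev/vg0/guestdisk
    mount /dev/mapper/guestdisk1 /mnt
    # ... and when finished:
    umount /mnt
    kpartx -d /dev/vg0/guestdisk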
> 
> Thanks in advance
> Didier
> 
> -- 
> Didier Trosset-Moreau
> Agilent Technologies
> Geneva, Switzerland
> 
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users
> 
> 
> 


