
[Xen-users] Some problems with VT on new Intel S5000PSL board (5000P chipset)



Hi,

My job is to set up two identical servers with Xen, so that each of them runs one or 
more HVM-enabled Windows guests (Windows Server 2003).

The first thing I noticed was that the complete system freezes when the 
hypervisor starts dom0 if I use the Xen packages that are officially 
available for Debian unstable. These packages are based on Xen 3.0.2 
(xen-3.0-testing changeset 9697). After that I started to compile my own 
version (once again), and to my surprise, the only slightly newer changeset 
9762 works. Later on I also tested xen-unstable (changeset 11224), which also 
seems to work.
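
For anyone who wants to reproduce the build: I checked the trees out and built 
them roughly like this (the repository URL is from memory, so treat it as an 
example):

  hg clone http://xenbits.xensource.com/xen-unstable.hg
  cd xen-unstable.hg
  hg update 11224          # changeset 9762 lives in the xen-3.0-testing tree
  make world && make install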

But there are still some problems that I am not able to fix on my own, which 
seem to have something to do with the fact that this is quite a new Intel 
board (S5000PSL) with the Intel 5000P chipset.

If I use xen-testing (9762), I have these problems:

If I configure a Xen VM for a Windows system with more than 1GB of RAM (for 
example 1.5GB or 2GB), I can start the domain with "xm create", but it doesn't 
really start, even though I get the usual output on my console. "xm list" 
shows that the new domain doesn't use any CPU time. With 1GB or less the HVM 
guest runs stably and very fast.

If I configure the Xen VM to use more than one CPU, then the whole machine 
(not just the VM) reboots without giving any hint why. After the VM starts, 
the system no longer reacts to key presses, and about 10 seconds later it 
reboots.
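
For reference, the relevant part of my guest config looks roughly like this 
(names and paths are examples, not the literal file):

  kernel  = "/usr/lib/xen/boot/hvmloader"
  builder = "hvm"
  memory  = 2048    # anything above 1024 shows the hang described above
  vcpus   = 2       # more than 1 reboots the whole machine on 9762
  device_model = "/usr/lib/xen/bin/qemu-dm"

With memory = 1024 and vcpus = 1, the same file boots fine.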

If I use xen-unstable, the memory problem is exactly the same, but with 
xen-unstable the machine doesn't reboot when I assign more than one CPU to an 
HVM-enabled guest. However, regardless of what I configure with the "vcpus" 
parameter, only one CPU is used; all others seem to be paused. I haven't taken 
a closer look at this problem yet. Maybe it's possible to activate the CPUs, 
or maybe it was just because Windows was still booting when I looked at the 
"xm list" output.

Additionally, I noticed another problem, but it doesn't seem to be related to 
the new server board; it seems to be a general problem:

I initially tried to use LVM so that I don't have to use real partitions for 
the guest systems. With paravirtualized guests that is no problem, and an LVM 
device like "/dev/xen-volumes/vm-abc" can be used without issues. With an HVM 
guest, however, qemu-dm seems to have a problem with the LVM device: after 
starting an HVM-enabled guest, I just see "qemu-dm [defunct]" in the process 
listing. I configured the LVM device with: 
"disk = [ 'phy:xen-volumes/vm-abc,ioemu:hda,w' ]".
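
In case the volume setup matters: it was created in the usual way, roughly 
like this (name and size are examples):

  lvcreate -L 8G -n vm-abc xen-volumes    # creates /dev/xen-volumes/vm-abc

One thing I still want to try is writing out the full device path in the disk 
line, i.e. 'phy:/dev/xen-volumes/vm-abc,ioemu:hda,w', in case qemu-dm does 
not resolve the short form.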

I have always tested this on 32-bit Xen (with and without PAE), but not yet 
on 64-bit Xen. If I find the time tomorrow, I will try the 64-bit version and 
report whether the memory and VCPU problems exist there as well. My problem 
is that I need to have the systems running by Friday.

Facts about the server:
Maxdata 3000 server
Intel S5000PSL board
Intel 5000P chipset
8x 512MB RAM (4GB in total)
1x Intel dual-core Xeon CPU at 2.66GHz (with VT)
Intel SATA controller
2x 250GB SATA HDDs
2x Gigabit Ethernet ports (Intel e1000)

OS: Debian unstable
Xen: 3.0.2-3 (changeset 9762) and xen-unstable (changeset 11224), PAE and non-PAE versions

I am already using "dom0_mem" to set the memory size for domain 0, so the 
problem is not caused by the automatic management of domain 0 memory.
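
For reference, the boot entry in menu.lst looks roughly like this (file names 
and the memory value are examples):

  title  Xen 3.0, Debian unstable
  kernel /boot/xen-3.0.gz dom0_mem=512M
  module /boot/vmlinuz-2.6-xen root=/dev/sda2 ro console=tty0
  module /boot/initrd.img-2.6-xen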

Thanks for any help... Please let me know if more detailed information is needed.

--Ralph

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

