
RE: [Xen-users] Does Xen HVM 32bit guest support SMP and PAE?


  • To: "psboy" <psboy.liu@xxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
  • From: "Petersson, Mats" <Mats.Petersson@xxxxxxx>
  • Date: Mon, 4 Dec 2006 16:49:25 +0100
  • Delivery-date: Mon, 04 Dec 2006 07:50:50 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AccWFrJ2rnf27fJqQnqpvXeHlpomDgBpAZkw
  • Thread-topic: [Xen-users] Does Xen HVM 32bit guest support SMP and PAE?

 

> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of psboy
> Sent: 02 December 2006 13:34
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] Does Xen HVM 32bit guest support SMP and PAE?
> 
> Hi folks,
> I have built Xen from the 3.0.3 source.
> I have two questions.
> 1. SMP HVM guest
> Even though I set "VCPU=2" in w2k3.hvm,
> there is still only one CPU in the HVM guest domain.
> I am sure Dom0 has been built with SMP and PAE support.
> [root@xenunsvr xen]# xm list
> Name                                      ID Mem(MiB) VCPUs State   Time(s)
> Domain-0                                   0      255     1 r-----    534.5
> w2k3                                       4     1536     1 ------      0.8
> I tried the command "xm vcpu-set w2k3 2", but it still does
> not work.

That's unlikely to work, as there's no way to tell an "unknown" guest
that it has suddenly got a second CPU. You may have somewhat better luck
setting "vcpus=2" in the configuration file before you start the
DomU - although I'm not entirely sure that works right either - there
are certainly some fixes coming in 3.0.4 that will sort out Windows SMP
on the ACPI HAL.
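For reference, here's a minimal sketch of what the SMP-related lines in
a 3.0.x HVM config file such as w2k3.hvm might look like, loosely based
on the xmexample.hvm shipped with the source tree (the values and paths
are illustrative, not taken from your actual file):

   # w2k3.hvm - illustrative fragment only
   kernel = "/usr/lib/xen/boot/hvmloader"
   builder = 'hvm'
   device_model = '/usr/lib/xen/bin/qemu-dm'
   memory = 1536     # MiB; you report that >1536 fails, see below
   vcpus = 2         # lower-case "vcpus", read once at domain creation
   acpi = 1          # Windows needs the ACPI HAL for SMP
   apic = 1
   pae = 1           # guest-visible PAE

The point is that vcpus is only read when the domain is created, which
is why changing it afterwards with "xm vcpu-set" can't help an HVM
guest that has no way of being told about the new CPU.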


> I can see another VCPU if I use this command, although that
> VCPU is in the "p" state (paused?).
> [root@xenunsvr xen]# xm vcpu-list
> Name                              ID VCPUs   CPU State   Time(s) CPU Affinity
> Domain-0                           0     0     0   r--     680.6 any cpu
> Domain-0                           0     1     -   --p      26.2 any cpu
> Domain-0                           0     2     -   --p       4.4 any cpu
> Domain-0                           0     3     -   --p       4.3 any cpu
> w2k3                               4     0     3   -b-      14.5 any cpu
> w2k3                               4     1     -   --p       0.0 any cpu
> 
> 2. Another question is PAE.
> I am sure Dom0 has been built with PAE support.
> If I set memory > 1536 in w2k3.hvm,
> the HVM machine cannot start.
> [root@xenunsvr xen]# xm list
> Name                                      ID Mem(MiB) VCPUs State   Time(s)
> Domain-0                                   0      255     1 r-----    811.4
> w2k3                                       5     2048     1 ------      0.0
> 
> I have no idea why memory <= 1536 works, but > 1536 cannot
> start the HVM guest.
> I tried changing the HVM guest OS to RHEL AS4U4 but still got the
> same result....

Someone else said that 2048MB doesn't work right - I'm not sure whether
it has been reported as a bug or not.

I don't know if 3.0.3 has the "dynamic mapping" that is necessary to
map more than around 2GB of guest memory (even in PAE mode). The limit
is essentially set by the address space available for the QEMU
mappings, which is 3GB in total. Since QEMU allocates other "bits" of
memory for its own purposes within that space, not all of the 3GB is
available as guest "RAM": roughly, 3GB of QEMU address space minus
QEMU's own allocations leaves a bit more than 2GB of RAM space if the
"dynamic mapping" feature is absent.

You should be able to see whether you have PAE by running
"xm info | grep 32p", which should list hvm-3.0-x86_32p and also
xen-3.0-x86_32p.
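As an illustration (not output from your machine), on a PAE-capable
32-bit host the relevant xm info line looks something like:

   # xm info | grep 32p
   xen_caps               : xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p

If the _32p entries are missing, the hypervisor was built without PAE,
and a PAE HVM guest (or a host with more than 4GB of memory) won't
work.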

--
Mats
> 
> Greets
> 
> WS Liu
> 
> 



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

