
[Xen-ia64-devel] RE: [Xen-devel] [PATCH][RFC] New command: xm pcpu-list



> I would like to propose a new command, "xm pcpu-list", that reports
> the physical CPU configuration.
> I expect Xen to be run on machines with many physical CPUs
> installed.  It is useful for users to know the physical CPU
> configuration so that they can allocate VCPUs efficiently, and this
> command offers a means to do that.
> 
> I began this patch with ia64 machines, simply because that is what
> I have.
> 
> I would like to make this command work on x86 and powerpc machines
> as well.  Unfortunately, I don't have any with dual-core or
> multi-threading features, and I don't have much information on them
> either.  I would appreciate any help in making the command work on
> x86 and powerpc.


The example you give below is a truly bizarre enumeration of CPUs. X86
effectively enumerates [nodes][sockets][cores][threads] (in C array
terminology), hence on a hyperthreaded system PCPUs 0 and 1 are in the
same core.

I think it would be good if ia64 followed suit.
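
To make the ordering concrete, here is a rough Python sketch (the xm
tools are Python) of how a flat PCPU number decomposes under that
enumeration; pcpu_to_topology is just an illustrative name, not an
existing xend function, and the counts are the ones xm info already
reports:

# Decompose a flat PCPU number under the canonical
# [nodes][sockets][cores][threads] enumeration, threads varying
# fastest.  Illustrative sketch only, not existing xend code.
def pcpu_to_topology(pcpu, sockets_per_node, cores_per_socket,
                     threads_per_core):
    thread = pcpu % threads_per_core
    pcpu //= threads_per_core
    core = pcpu % cores_per_socket
    pcpu //= cores_per_socket
    socket = pcpu % sockets_per_node
    node = pcpu // sockets_per_node
    return (node, socket, core, thread)

# On the 1-node, 4-socket, 2-core, 2-thread box below:
#   pcpu_to_topology(0, 4, 2, 2) == (0, 0, 0, 0)
#   pcpu_to_topology(1, 4, 2, 2) == (0, 0, 0, 1)
# i.e. PCPUs 0 and 1 are the two threads of the same core.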

There was some discussion ages back about having the tools interpret
hierarchical PCPU 'addressing' rather than just a flat PCPU number,
i.e. you could refer to CPU 1.2.1.0 for the first hyperthread on the
second core of the third socket of the second node.

For systems missing levels of the hierarchy, e.g. a single node or no
hyperthreads, the hierarchy could be collapsed in the obvious way.

I'd still like to see this implemented.

pcpu-list would then be less necessary, but you'd still want something
like it to see which CPUs are online once we start to do physical CPU
hotplug.

Thanks,
Ian

> Best regards,
>  Kan
> 
> 
> cf.
> # xm pcpu-list
> PCPU      Node      Socket      Core    Thread     State
>    0         0    0x001802         0         0     online
>    1         0    0x001803         0         0     online
>    2         0    0x001800         1         0     online
>    3         0    0x001801         1         0     online
>    4         0    0x001802         1         0     online
>    5         0    0x001803         1         0     online
>    6         0    0x001800         0         1     online
>    7         0    0x001801         0         1     online
>    8         0    0x001802         0         1     online
>    9         0    0x001803         0         1     online
>   10         0    0x001800         1         1     online
>   11         0    0x001801         1         1     online
>   12         0    0x001802         1         1     online
>   13         0    0x001803         1         1     online
>   14         0    0x001800         0         0     online
>   15         0    0x001801         0         0     online
> # xm info
> host                   : tiger154
> release                : 2.6.16.13-xen
> version                : #1 SMP Fri Sep 22 11:28:14 JST 2006
> machine                : ia64
> nr_cpus                : 16
> nr_nodes               : 1
> sockets_per_node       : 4
> cores_per_socket       : 2
> threads_per_core       : 2
> cpu_mhz                : 1595
> hw_caps                : 00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000
> total_memory           : 8166
> free_memory            : 7586
> xen_major              : 3
> xen_minor              : 0
> xen_extra              : -unstable
> xen_caps               : xen-3.0-ia64 hvm-3.0-ia64
> xen_pagesize           : 16384
> platform_params        : virt_start=0xe800000000000000
> xen_changeset          : Thu Sep 21 15:35:45 2006 -0600 11460:da942e577e5e
> cc_compiler            : gcc version 3.4.4 20050721 (Red Hat 3.4.4-2)
> cc_compile_by          : root
> cc_compile_domain      :
> cc_compile_date        : Fri Sep 22 11:23:42 JST 2006
> xend_config_format     : 2

