
Re: [Xen-devel] RFC: very initial PVH design document

> ## SMP discover and bring up ##
> The process of bringing up secondary CPUs is obviously different from native,
> since PVH doesn't have a local APIC. The first thing to do is to figure out
> how many vCPUs the guest has. This is done using the `VCPUOP_is_up` hypercall,
> using for example this simple loop:
>     for (i = 0; i < MAXCPU; i++) {
>         ret = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
>         if (ret >= 0)
>             /* vCPU#i is present */
>     }
> Note that when running as Dom0, the ACPI tables might report a different
> number of available CPUs. This is because the value in the ACPI tables is
> the number of physical CPUs the host has, and it might bear no resemblance
> to the number of vCPUs Dom0 actually has, so it should be ignored.
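Expanding the quoted loop into a minimal, self-contained sketch (assuming the Xen public headers and a `HYPERVISOR_vcpu_op()` hypercall wrapper provided by the guest's hypercall glue; `MAX_VIRT_CPUS` comes from the `arch-x86/xen.h` public header):

```c
/* Sketch only: assumes <xen/xen.h>, <xen/vcpu.h> and a HYPERVISOR_vcpu_op()
 * wrapper from the guest's hypercall glue. */
#include <xen/xen.h>
#include <xen/vcpu.h>

static unsigned int count_vcpus(void)
{
    unsigned int i, nr_vcpus = 0;

    for (i = 0; i < MAX_VIRT_CPUS; i++) {
        /* VCPUOP_is_up returns >= 0 (0 = down, 1 = up) for an existing
         * vCPU and a negative error for a non-existent one. */
        int ret = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);

        if (ret >= 0)
            nr_vcpus++; /* vCPU#i is present (whether up or down) */
    }

    return nr_vcpus;
}
```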
> In order to bring up the secondary vCPUs they must be configured first. This
> is achieved using the `VCPUOP_initialise` hypercall. A valid context has to
> be passed to the vCPU in order to boot. The relevant fields for PVH guests
> are the following:
>   * `flags`: contains VGCF_* flags (see `arch-x86/xen.h` public header).
>   * `user_regs`: struct that contains the register values that will be set
>     on the vCPU before booting. The most relevant ones are `rip` and `rsp`,
>     which set the start address and the stack.

The OS can use `rdi` and `rsi` for its own purposes.

[Any other ones that are free to be used?]

>   * `ctrlreg[3]`: contains the address of the page tables that will be used by
>     the vCPU.

Other registers, if not set to zero, will cause the hypercall to fail with an error.
> After the vCPU is initialized with the proper values, it can be started by
> using the `VCPUOP_up` hypercall. The values of the other control registers of
> the vCPU will be the same as the ones described in the `control registers`
> section.
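Putting the two hypercalls together, a bring-up sketch might look like this (a sketch under assumptions, not a definitive implementation: `vcpu_guest_context` and the `VGCF_*` flags come from the public `arch-x86/xen.h` header, `HYPERVISOR_vcpu_op()` is the guest's hypercall wrapper, and the entry point, stack and page-table base are hypothetical guest symbols):

```c
/* Sketch only: secondary_start, boot_stack and the page-table base are
 * hypothetical; real guests supply their own. */
#include <string.h>
#include <xen/xen.h>
#include <xen/vcpu.h>

static int bring_up_vcpu(unsigned int vcpu, void (*entry)(void),
                         unsigned long stack, unsigned long cr3)
{
    struct vcpu_guest_context ctxt;
    int ret;

    /* Zero the whole context first: registers that are not meant to be
     * set must be zero, or VCPUOP_initialise will fail. */
    memset(&ctxt, 0, sizeof(ctxt));

    ctxt.flags = VGCF_in_kernel;               /* VGCF_* flags        */
    ctxt.user_regs.rip = (unsigned long)entry; /* start address       */
    ctxt.user_regs.rsp = stack;                /* initial stack       */
    ctxt.ctrlreg[3] = cr3;                     /* page-table base     */

    ret = HYPERVISOR_vcpu_op(VCPUOP_initialise, vcpu, &ctxt);
    if (ret < 0)
        return ret;

    /* The remaining control registers are set by Xen as described in
     * the `control registers` section. */
    return HYPERVISOR_vcpu_op(VCPUOP_up, vcpu, NULL);
}
```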
