
Re: [Xen-devel] Could you please answer some questions regarding Solaris PVHVM pvclock support.



CC'ing xen-devel and Roger.

On Wed, 15 Jan 2014, Qin Li wrote:
> Hi Stefano,
> 
> How do you do?
> 
> Currently, Solaris only works as a PV-on-HVM guest on Xen, but recently

Actually now you have a better way of running Solaris on Xen: PVH.

http://wiki.xen.org/wiki/Xen_Overview#PV_in_an_HVM_Container_.28PVH.29_-_New_in_Xen_4.4

Roger already ported FreeBSD to Xen as a PVH guest:

http://marc.info/?l=freebsd-current&m=138971161228874&w=2


> we decided to change this situation by implementing the following two
> features for the Solaris guest OS:
> 
> /* x86: Does this Xen host support the HVM callback vector type? */
> #define XENFEAT_hvm_callback_vector 8
> 
> /* x86: pvclock algorithm is safe to use on HVM */
> #define XENFEAT_hvm_safe_pvclock 9

FYI both these features are available and used by PVH guests.
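
In case it helps, a guest can test for these flags with the
XENVER_get_features hypercall. The sketch below follows the usual
PV-on-HVM pattern and is not Solaris code: xen_has_feature() and
xen_hvm_time_setup() are illustrative names, and HYPERVISOR_xen_version()
stands in for whatever hypercall wrapper the guest provides.

#include <xen/interface/version.h>   /* XENVER_get_features, xen_feature_info */
#include <xen/interface/features.h>  /* XENFEAT_hvm_callback_vector, ...       */

static int xen_has_feature(unsigned int feature)
{
    struct xen_feature_info fi;

    fi.submap_idx = feature / 32;
    if (HYPERVISOR_xen_version(XENVER_get_features, &fi) != 0)
        return 0;
    return (fi.submap >> (feature % 32)) & 1;
}

/* Example: only take the pvclock/vector path when Xen advertises both. */
static void xen_hvm_time_setup(void)
{
    if (xen_has_feature(XENFEAT_hvm_callback_vector) &&
        xen_has_feature(XENFEAT_hvm_safe_pvclock)) {
        /* register the upcall vector, switch the clocksource to pvclock */
    }
}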


> For XENFEAT_hvm_callback_vector, it's straightforward: the APIC
> interrupt handler needs to be registered in each vCPU's IDT.
> But for XENFEAT_hvm_safe_pvclock, I have some questions:
> . Why does the pvclock implementation within the guest OS have to depend
> on XENFEAT_hvm_callback_vector?

Because you need to be able to receive timer interrupts on multiple
vCPUs, and without XENFEAT_hvm_callback_vector you would only receive
interrupts from the Xen platform PCI device.
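
Concretely, the guest asks Xen for vector delivery by writing
HVM_PARAM_CALLBACK_IRQ with the "vector" encoding documented in
xen/hvm/params.h. A minimal sketch, assuming a HYPERVISOR_hvm_op wrapper
and a made-up vector number 0xf3 (the handler for that vector has to be
installed in every vCPU's IDT); xen_enable_callback_vector() is an
illustrative name:

#include <xen/interface/hvm/hvm_op.h>  /* HVMOP_set_param, xen_hvm_param */
#include <xen/interface/hvm/params.h>  /* HVM_PARAM_CALLBACK_IRQ         */

#define XEN_UPCALL_VECTOR 0xf3  /* example; must be present in every vCPU's IDT */

static int xen_enable_callback_vector(void)
{
    struct xen_hvm_param p;

    p.domid = DOMID_SELF;
    p.index = HVM_PARAM_CALLBACK_IRQ;
    /* val[63:56] == 2 selects "vector" delivery, val[7:0] is the vector
     * (see the comment above HVM_PARAM_CALLBACK_IRQ in xen/hvm/params.h). */
    p.value = ((uint64_t)2 << 56) | XEN_UPCALL_VECTOR;

    return HYPERVISOR_hvm_op(HVMOP_set_param, &p);
}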


> . For a PV-on-HVM guest OS, the "shared_info->vcpu_info->vcpu_time_info"
> is already visible. Does the guest OS still need to ask the hypervisor
> to update this piece of memory periodically?

I don't think you need to ask the hypervisor to update vcpu_time_info
periodically; what gave you that idea?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

