[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

RE: [Xen-devel] [PATCH 3/10] Add HVM support


  • To: "Keir Fraser" <keir@xxxxxxxxxxxxx>
  • From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
  • Date: Wed, 11 Jul 2007 09:14:15 +0800
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 10 Jul 2007 18:12:32 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Ace4v8AvEHvNElKqQi6Qqle8QqzaiAKGA7QQAAVN3AAAAXlToAABp7KOABeOrXA=
  • Thread-topic: [Xen-devel] [PATCH 3/10] Add HVM support

>From: Keir Fraser [mailto:keir@xxxxxxxxxxxxx]
>Sent: 10 July 2007 21:52
>
>Okay.
>
>Anyway, back to your patch 3/10. With a view to cleanly adding VMXOFF
>on
>suspend, and to allow efficient VMCLEARing if we need it in future, e.g.,
>for deep-C states, I think you should change the suspend_domain() hook
>into
>suspend_cpu():
> 1. This is then symmetric with the resume_cpu() hook.
> 2. It's a natural place to put VMXOFF (unlike suspend_domain()).
>
>Of course, the question then is: how do you find the active VMCS's that
>need
>clearing? I suggest you add a list_head to arch_vmx_struct, have a
>per-cpu
>list of active VMCS's, enqueue on vmx_load_vmcs() and dequeue on
>__vmx_clear_vmcs().
>
>Could you revise patch 3/10 and resend, please?
>
> -- Keir

Sure, I'll do it.

Thanks,
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

