Re: [Xen-devel] Re: [Xen-changelog] [xen-unstable] Clean up handling of IS_PRIV_FOR() and rcu_[un]lock_domain().
Keir Fraser, on Sat 05 Apr 2008 17:31:39 +0100, wrote:
> On 5/4/08 15:28, "Samuel Thibault" <samuel.thibault@xxxxxxxxxxxxx> wrote:
> 
> >> They were all fine, except there was one inexplicable check of
> >> IS_PRIV_FOR()
> >> in bind_interdomain() which I nuked. It was so bizarre that I assumed you
> >> must have put it there for a reason, and this would be one that you'd
> >> complain about.
> > 
> > I'm now complaining :)
> > 
> > The bind_interdomain() trick is needed for the ioreq events channel:
> > when it gets installed, it is supposed to be between the HVM domain and
> > dom0 (the stub domain doesn't exist anyway). The meaning of the test is
> > hence to allow the stub domain to hijack that event channel (because it
> > has privileges on the remote domain).
> 
> The hack kind of sucks. :-) Add a new hvm_param to indicate the device model
> domain. Default it to zero, and if it becomes set to some other value (by
> the stub domain itself, when it starts) then re-create the event-channel
> port with new remote domid.
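For concreteness, here is a minimal sketch of the device-model side of that
scheme, i.e. what a device model running in the stub domain would do: set the
new parameter to DOMID_SELF, map the shared ioreq page exactly as qemu's
xen_machine_fv.c already does, then bind the per-vcpu event channels that Xen
has just re-created. This is not part of the patch below; the
xc_set_hvm_param/xc_get_hvm_param/xc_map_foreign_range calls mirror the patch,
while the binding loop and the vp_eport field path are assumptions about this
era's qemu-dm and xen/hvm/ioreq.h.

/*
 * Sketch only: a device model running in a stub domain claims the ioreq
 * event channels for HVM guest 'domid'.  The xc_*_hvm_param and
 * xc_map_foreign_range calls mirror tools/ioemu/hw/xen_machine_fv.c;
 * the binding loop and the vp_eport field path are assumptions.
 */
#include <stdio.h>
#include <sys/mman.h>
#include <xenctrl.h>
#include <xen/hvm/params.h>
#include <xen/hvm/ioreq.h>

static int claim_ioreq_channels(int domid, int nr_vcpus)
{
    int xc_handle = xc_interface_open();
    int xce_handle = xc_evtchn_open();
    unsigned long ioreq_pfn;
    shared_iopage_t *shared_page;
    int i, port;

    if (xc_handle < 0 || xce_handle < 0)
        return -1;

    /* Tell Xen that the calling domain is now the device model; Xen
     * re-creates the per-vcpu ioreq event channels against us. */
    if (xc_set_hvm_param(xc_handle, domid, HVM_PARAM_DM_DOMAIN, DOMID_SELF))
        return -1;

    /* Map the shared ioreq page, as xen_machine_fv.c already does. */
    if (xc_get_hvm_param(xc_handle, domid, HVM_PARAM_IOREQ_PFN, &ioreq_pfn))
        return -1;
    shared_page = xc_map_foreign_range(xc_handle, domid, XC_PAGE_SIZE,
                                       PROT_READ | PROT_WRITE, ioreq_pfn);
    if (shared_page == NULL)
        return -1;

    /* Bind the freshly created ports that Xen published in the shared page
     * (vp_eport location assumed per this era's xen/hvm/ioreq.h). */
    for (i = 0; i < nr_vcpus; i++) {
        port = xc_evtchn_bind_interdomain(xce_handle, domid,
                                          shared_page->vcpu_iodata[i].vp_eport);
        if (port < 0)
            return -1;
        fprintf(stderr, "vcpu %d: bound ioreq evtchn, local port %d\n", i, port);
    }
    return 0;
}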
hvm: Add HVM_PARAM_DM_DOMAIN to let ioreq events go to a stub domain
instead of dom0.

Signed-off-by: Samuel Thibault <samuel.thibault@xxxxxxxxxxxxx>

diff -r fec296bcfd21 tools/ioemu/hw/xen_machine_fv.c
--- a/tools/ioemu/hw/xen_machine_fv.c   Fri Apr 11 09:57:06 2008 +0100
+++ b/tools/ioemu/hw/xen_machine_fv.c   Fri Apr 11 15:27:36 2008 +0100
@@ -205,6 +205,7 @@ static void xen_init_fv(uint64_t ram_siz
     }
 #endif
+    xc_set_hvm_param(xc_handle, domid, HVM_PARAM_DM_DOMAIN, DOMID_SELF);
     xc_get_hvm_param(xc_handle, domid, HVM_PARAM_IOREQ_PFN, &ioreq_pfn);
     fprintf(logfile, "shared page at pfn %lx\n", ioreq_pfn);
     shared_page = xc_map_foreign_range(xc_handle, domid, XC_PAGE_SIZE,
diff -r fec296bcfd21 xen/arch/ia64/vmx/vmx_hypercall.c
--- a/xen/arch/ia64/vmx/vmx_hypercall.c Fri Apr 11 09:57:06 2008 +0100
+++ b/xen/arch/ia64/vmx/vmx_hypercall.c Fri Apr 11 15:27:36 2008 +0100
@@ -165,6 +165,23 @@ do_hvm_op(unsigned long op, XEN_GUEST_HA
         iorp = &d->arch.hvm_domain.buf_pioreq;
         rc = vmx_set_ioreq_page(d, iorp, a.value);
         break;
+    case HVM_PARAM_DM_DOMAIN:
+        /* Recreate ioreq event channels */
+        if (a.value == DOMID_SELF)
+            a.value = current->domain->domain_id;
+        iorp = &d->arch.hvm_domain.ioreq;
+        for_each_vcpu ( d, v ) {
+            rc = alloc_unbound_xen_event_channel(v, a.value);
+            if (rc < 0)
+                goto param_fail;
+            free_xen_event_channel(v, v->arch.arch_vmx.xen_port);
+            v->arch.arch_vmx.xen_port = rc;
+            spin_lock(&iorp->lock);
+            if (iorp->va != NULL)
+                get_ioreq(v)->vp_eport = rc;
+            spin_unlock(&iorp->lock);
+        }
+        break;
     default:
         /* nothing */
         break;
diff -r fec296bcfd21 xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c    Fri Apr 11 09:57:06 2008 +0100
+++ b/xen/arch/x86/hvm/hvm.c    Fri Apr 11 15:27:36 2008 +0100
@@ -2239,6 +2239,23 @@ long do_hvm_op(unsigned long op, XEN_GUE
             domain_unpause(d);
             break;
+        case HVM_PARAM_DM_DOMAIN:
+            /* Recreate ioreq event channels */
+            if (a.value == DOMID_SELF)
+                a.value = current->domain->domain_id;
+            iorp = &d->arch.hvm_domain.ioreq;
+            for_each_vcpu ( d, v ) {
+                rc = alloc_unbound_xen_event_channel(v, a.value);
+                if (rc < 0)
+                    goto param_fail;
+                free_xen_event_channel(v, v->arch.hvm_vcpu.xen_port);
+                v->arch.hvm_vcpu.xen_port = rc;
+                spin_lock(&iorp->lock);
+                if (iorp->va != NULL)
+                    get_ioreq(v)->vp_eport = rc;
+                spin_unlock(&iorp->lock);
+            }
+            break;
         }
         d->arch.hvm_domain.params[a.index] = a.value;
         rc = 0;
diff -r fec296bcfd21 xen/include/public/hvm/params.h
--- a/xen/include/public/hvm/params.h   Fri Apr 11 09:57:06 2008 +0100
+++ b/xen/include/public/hvm/params.h   Fri Apr 11 15:27:36 2008 +0100
@@ -85,6 +85,9 @@
 #define HVM_PARAM_HPET_ENABLED 11
 #define HVM_PARAM_IDENT_PT     12
 
-#define HVM_NR_PARAMS          13
+/* Device Model domain, defaults to 0 */
+#define HVM_PARAM_DM_DOMAIN    13
+
+#define HVM_NR_PARAMS          14
 
 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
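One design point worth spelling out: DOMID_SELF is resolved to the caller's
domain id inside the hypervisor (the "if (a.value == DOMID_SELF)" branch), so
the device model never needs to know its own domid. The single
xc_set_hvm_param(..., HVM_PARAM_DM_DOMAIN, DOMID_SELF) call added to
xen_machine_fv.c therefore works unchanged whether qemu-dm runs in dom0 or in
a stub domain, and guests whose device model never sets the parameter keep the
old behaviour, since it defaults to 0 (dom0). Because do_hvm_op stores the
resolved value in d->arch.hvm_domain.params[], a toolstack can also read it
back; the snippet below is a hypothetical check using xc_get_hvm_param (the
same call the patch uses), not an existing tool.

/*
 * Hypothetical helper (not an existing tool): report which domain is
 * currently registered as device model for an HVM guest.  Only uses
 * xc_get_hvm_param(), the same libxc call the patch relies on.
 */
#include <stdio.h>
#include <stdlib.h>
#include <xenctrl.h>
#include <xen/hvm/params.h>

int main(int argc, char **argv)
{
    int xc = xc_interface_open();
    unsigned long dm_domid = 0;

    if (argc < 2 || xc < 0)
        return 1;
    if (xc_get_hvm_param(xc, atoi(argv[1]), HVM_PARAM_DM_DOMAIN, &dm_domid))
        return 1;
    /* 0 is the default: ioreq events still go to dom0. */
    printf("device model for domain %s is domain %lu\n", argv[1], dm_domid);
    return 0;
}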