Re: [Xen-devel] [PATCH v2 4/4] x86/dom0: re-order DMA remapping enabling for PVH Dom0
On Tue, Aug 22, 2017 at 06:37:15AM -0600, Jan Beulich wrote:
> >>> On 11.08.17 at 18:43, <roger.pau@xxxxxxxxxx> wrote:
> > Make sure the reserved regions are setup before enabling the DMA
> > remapping in the IOMMU, by calling dom0_setup_permissions before
> > iommu_hwdom_init.
>
> I can't match up this part with ...
>
> > --- a/xen/arch/x86/hvm/dom0_build.c
> > +++ b/xen/arch/x86/hvm/dom0_build.c
> > @@ -605,13 +605,6 @@ static int __init pvh_setup_cpus(struct domain *d, paddr_t entry,
> >          return rc;
> >      }
> >  
> > -    rc = dom0_setup_permissions(d);
> > -    if ( rc )
> > -    {
> > -        panic("Unable to setup Dom0 permissions: %d\n", rc);
> > -        return rc;
> > -    }
> > -
> >      update_domain_wallclock_time(d);
> >  
> >      clear_bit(_VPF_down, &v->pause_flags);
> > @@ -1059,7 +1052,12 @@ int __init dom0_construct_pvh(struct domain *d, const module_t *image,
> >  
> >      printk("** Building a PVH Dom0 **\n");
> >  
> > -    iommu_hwdom_init(d);
> > +    rc = dom0_setup_permissions(d);
> > +    if ( rc )
> > +    {
> > +        printk("Unable to setup Dom0 permissions: %d\n", rc);
> > +        return rc;
> > +    }
> >  
> >      rc = pvh_setup_p2m(d);
> >      if ( rc )
> > @@ -1068,6 +1066,8 @@ int __init dom0_construct_pvh(struct domain *d, const module_t *image,
> >          return rc;
> >      }
> >  
> > +    iommu_hwdom_init(d);
>
> ... you not changing the relative order between these two function
> calls. As to the other half I'm inclined to also wait for better
> understanding of what's going on here, as said in reply to patch 3.
Why not?

dom0_setup_permissions was previously called from pvh_setup_cpus, while
iommu_hwdom_init was the first function called in dom0_construct_pvh, so
the IOMMU was initialized before Dom0's permissions were set up.

After this patch dom0_setup_permissions is always called before
iommu_hwdom_init.
Roger.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel