
Re: [Xen-devel] [PATCH v3] x86/HVM: p2m_ram_ro is incompatible with device pass-through



On Tue, Oct 01, 2019 at 01:40:57PM +0200, Jan Beulich wrote:
> On 01.10.2019 13:30, Roger Pau Monné wrote:
> > On Tue, Oct 01, 2019 at 11:07:55AM +0200, Jan Beulich wrote:
> >> --- a/xen/arch/x86/hvm/dm.c
> >> +++ b/xen/arch/x86/hvm/dm.c
> >> @@ -255,16 +255,33 @@ static int set_mem_type(struct domain *d
> >>  
> >>      mem_type = array_index_nospec(data->mem_type, ARRAY_SIZE(memtype));
> >>  
> >> -    if ( mem_type == HVMMEM_ioreq_server )
> >> +    switch ( mem_type )
> >>      {
> >>          unsigned int flags;
> >>  
> >> +    case HVMMEM_ioreq_server:
> >>          if ( !hap_enabled(d) )
> >>              return -EOPNOTSUPP;
> >>  
> >>          /* Do not change to HVMMEM_ioreq_server if no ioreq server mapped. */
> >>          if ( !p2m_get_ioreq_server(d, &flags) )
> >>              return -EINVAL;
> >> +
> >> +        break;
> >> +
> >> +    case HVMMEM_ram_ro:
> >> +        /* p2m_ram_ro can't be represented in IOMMU mappings. */
> >> +        domain_lock(d);
> >> +        if ( has_arch_pdevs(d) )
> > 
> > I would use is_iommu_enabled because I think it's clearer in this
> > context (given the comment above explicitly refers to having iommu
> > mappings).
> 
> But the whole point of the re-basing over Paul's change is that now
> the operation gets refused only if a device was actually assigned.
> 
> >> +            rc = -EXDEV;
> > 
> > EOPNOTSUPP might be better, since it's possible that future iommus
> > will support such a page type?
> 
> I don't think future IOMMU behavior affects the choice of error
> code. I wanted to use something half way reasonable, yet not too
> common, in order to be able to easily identify the source of the
> error. If you and others think this isn't a meaningful concern,
> I'd be okay switching to -EOPNOTSUPP.
> 
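For context, the quoting trims the rest of the HVMMEM_ram_ro case; given
that the pci.c hunk below tests d->arch.hvm.p2m_ram_ro_used, it presumably
continues by recording the type's use while still holding the domain lock,
roughly along these lines (the exact rc handling here is a guess, not
taken from the patch):

    case HVMMEM_ram_ro:
        /* p2m_ram_ro can't be represented in IOMMU mappings. */
        domain_lock(d);
        if ( has_arch_pdevs(d) )
            /* Refuse: a device is already assigned to the domain. */
            rc = -EXDEV;
        else
            /* Record the use, so assign_device() can refuse later on. */
            d->arch.hvm.p2m_ram_ro_used = true;
        domain_unlock(d);

        if ( rc )
            return rc;

        break;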
> >> --- a/xen/drivers/passthrough/pci.c
> >> +++ b/xen/drivers/passthrough/pci.c
> >> @@ -1486,15 +1486,33 @@ static int assign_device(struct domain *
> >>      if ( !is_iommu_enabled(d) )
> >>          return 0;
> >>  
> >> -    /* Prevent device assign if mem paging or mem sharing have been 
> >> -     * enabled for this domain */
> >> -    if ( unlikely(d->arch.hvm.mem_sharing_enabled ||
> >> -                  vm_event_check_ring(d->vm_event_paging) ||
> >> +    domain_lock(d);
> >> +
> >> +    /*
> >> +     * Prevent device assignment if any of
> >> +     * - mem paging
> >> +     * - mem sharing
> >> +     * - the p2m_ram_ro type
> >> +     * - global log-dirty mode
> >> +     * are in use by this domain.
> >> +     */
> >> +    if ( unlikely(vm_event_check_ring(d->vm_event_paging) ||

It would be nice to have some syntactic sugar like vm_event_enabled or
some such.
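
A minimal sketch of such a helper, just to illustrate the idea (the name
and exact shape are of course up for debate, this isn't existing code):

    /* Hypothetical syntactic sugar around vm_event_check_ring(). */
    static inline bool vm_event_enabled(struct vm_event_domain *ved)
    {
        return vm_event_check_ring(ved);
    }

so the condition above could then read vm_event_enabled(d->vm_event_paging).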

> >> +#ifdef CONFIG_HVM
> >> +                  (is_hvm_domain(d) &&
> >> +                   (d->arch.hvm.mem_sharing_enabled ||
> >> +                    d->arch.hvm.p2m_ram_ro_used)) ||
> >> +#endif

Do you really need the CONFIG_HVM guards? is_hvm_domain already has an
IS_ENABLED(CONFIG_HVM) check.
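From memory, the predicate reads roughly like this (the exact form in the
tree may differ):

    static inline bool is_hvm_domain(const struct domain *d)
    {
        /* Evaluates to constant false when CONFIG_HVM is off. */
        return IS_ENABLED(CONFIG_HVM) &&
               evaluate_nospec(d->guest_type == guest_type_hvm);
    }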

> >>                    p2m_get_hostp2m(d)->global_logdirty) )
> > 
> > Is such check needed anymore?
> > 
> > With the enabling of the iommu right at domain creation it shouldn't
> > be possible to enable any of the above features at all anymore.
> 
> See above - all such checks should now be / get converted to check
> whether devices are assigned, not whether IOMMU page tables exist.
> After all we want to refuse requests only if strictly necessary.

Oh right, I was missing the whole point then. So we still keep the
iommu enabled together with introspection or ram_ro as long as there
are no devices assigned.
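
For reference, has_arch_pdevs() is, as far as I recall, just a check that
the domain's PCI device list is non-empty, roughly:

    #define has_arch_pdevs(d) (!list_empty(&(d)->arch.pdev_list))

so the refusal only triggers once a device has actually been assigned.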

Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Thanks, Roger.
