
Re: [Xen-devel] [PATCH 1/2] VT-d: re-phrase logic in vtd_set_hwdom_mapping() for clarity



> -----Original Message-----
> From: Roger Pau Monne
> Sent: 11 June 2018 11:31
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Kevin Tian <kevin.tian@xxxxxxxxx>;
> Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>;
> George Dunlap <George.Dunlap@xxxxxxxxxx>; Andrew Cooper
> <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>; Tim
> (Xen.org) <tim@xxxxxxx>; Julien Grall <julien.grall@xxxxxxx>; Jan Beulich
> <jbeulich@xxxxxxxx>
> Subject: Re: [Xen-devel] [PATCH 1/2] VT-d: re-phrase logic in
> vtd_set_hwdom_mapping() for clarity
> 
> On Fri, Jun 08, 2018 at 04:30:29PM +0100, Paul Durrant wrote:
> > diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
> > index 8712a833a2..6beb28dada 100644
> > --- a/docs/misc/xen-command-line.markdown
> > +++ b/docs/misc/xen-command-line.markdown
> > @@ -1212,8 +1212,8 @@ wait descriptor timed out', try increasing this value.
> >
> >  Use this to work around firmware issues providing incorrect RMRR entries.
> >  Rather than only mapping RAM pages for IOMMU accesses for Dom0, with this
> > -option all pages not marked as unusable in the E820 table will get a mapping
> > -established.
> > +option all pages up to and including 4GB, not marked as unusable in the
> > +E820 table, will get a mapping established.
> 
> Sorry, I've reviewed the patches in the wrong order. You can ignore
> the comments I've made related to this in patch 2.
> 
> >  ### irq\_ratelimit (x86)
> >  > `= <integer>`
> > diff --git a/xen/drivers/passthrough/vtd/x86/vtd.c b/xen/drivers/passthrough/vtd/x86/vtd.c
> > index 88a60b3307..5c440ba183 100644
> > --- a/xen/drivers/passthrough/vtd/x86/vtd.c
> > +++ b/xen/drivers/passthrough/vtd/x86/vtd.c
> > @@ -118,22 +118,26 @@ void __hwdom_init vtd_set_hwdom_mapping(struct domain *d)
> >
> >      for ( i = 0; i < top; i++ )
> >      {
> > +        unsigned long pfn = pdx_to_pfn(i);
> > +        bool map;
> >          int rc = 0;
> >
> >          /*
> > -         * Set up 1:1 mapping for dom0. Default to use only conventional RAM
> > -         * areas and let RMRRs include needed reserved regions. When set, the
> > -         * inclusive mapping maps in everything below 4GB except unusable
> > -         * ranges.
> > +         * Set up 1:1 mapping for dom0. Default to include only
> > +         * conventional RAM areas and let RMRRs include needed reserved
> > +         * regions. When set, the inclusive mapping maps in every pfn up
> > +         * to and including 4GB except those that fall in unusable ranges.
> >           */
> > -        unsigned long pfn = pdx_to_pfn(i);
> > +        if ( iommu_inclusive_mapping &&
> > +             pfn <= (0xffffffffUL >> PAGE_SHIFT) )
> 
> Please use GB(4) here for clarity.

That would be better. I left it since that's what the old code used, but I'll 
change this one and the one below.
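
i.e. something along these lines (just a sketch; GB(4) >> PAGE_SHIFT is the
first pfn at 4GB, so the -1 keeps the boundary inclusive, matching the old
0xffffffffUL >> PAGE_SHIFT value):

    if ( iommu_inclusive_mapping &&
         pfn <= (GB(4) >> PAGE_SHIFT) - 1 )
        map = !page_is_ram_type(pfn, RAM_TYPE_UNUSABLE);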

> 
> > +            map = !page_is_ram_type(pfn, RAM_TYPE_UNUSABLE);
> > +        else
> > +            map = page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL);
> > +
> > +        if ( !map )
> > +            continue;
> >
> > -        if ( pfn > (0xffffffffUL >> PAGE_SHIFT) ?
> > -             (!mfn_valid(_mfn(pfn)) ||
> > -              !page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL)) :
> > -             iommu_inclusive_mapping ?
> > -             page_is_ram_type(pfn, RAM_TYPE_UNUSABLE) :
> > -             !page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL) )
> > +        if ( pfn > (0xffffffffUL >> PAGE_SHIFT) && !mfn_valid(_mfn(pfn)) )
> 
> I would maybe do this check before the page_is_ram_type one, so that
> you can discard invalid mfns earlier.

True.
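
So, with both changes folded in, the loop body would end up something like
this (untested sketch; the max_pfn local is just for illustration):

        unsigned long pfn = pdx_to_pfn(i);
        /* Last pfn whose page lies entirely below the 4GB boundary. */
        unsigned long max_pfn = (GB(4) >> PAGE_SHIFT) - 1;
        bool map;
        int rc = 0;

        /* Discard invalid mfns before looking at the RAM type. */
        if ( pfn > max_pfn && !mfn_valid(_mfn(pfn)) )
            continue;

        if ( iommu_inclusive_mapping && pfn <= max_pfn )
            map = !page_is_ram_type(pfn, RAM_TYPE_UNUSABLE);
        else
            map = page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL);

        if ( !map )
            continue;

        /* ... rest of the loop (the actual mapping, using rc) unchanged ... */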

  Paul

> 
> Thanks, Roger.
