Re: [Xen-devel] [PATCH v12 3/3] iommu: add rmrr Xen command line option for extra rmrrs
On Wed, Oct 28, 2015 at 10:05:31AM -0600, Jan Beulich wrote:
> >>> On 27.10.15 at 21:36, <elena.ufimtseva@xxxxxxxxxx> wrote:
> > +static void __init add_extra_rmrr(void)
> > +{
> > + struct acpi_rmrr_unit *acpi_rmrr;
> > + struct acpi_rmrr_unit *rmrru;
> > + unsigned int dev, seg, i;
> > + unsigned long pfn;
> > + bool_t overlap;
> > +
> > + for ( i = 0; i < nr_rmrr; i++ )
> > + {
> > + if ( extra_rmrr_units[i].base_pfn > extra_rmrr_units[i].end_pfn )
> > + {
> > + printk(XENLOG_ERR VTDPREFIX
> > + "Invalid RMRR Range "ERMRRU_FMT"\n",
> > + ERMRRU_ARG(extra_rmrr_units[i]));
> > + continue;
> > + }
> > +
> > + if ( extra_rmrr_units[i].end_pfn - extra_rmrr_units[i].base_pfn >=
> > + MAX_EXTRA_RMRR_PAGES )
> > + {
> > + printk(XENLOG_ERR VTDPREFIX
> > + "RMRR range "ERMRRU_FMT" exceeds
> > "__stringify(MAX_EXTRA_RMRR_PAGES)" pages\n",
> > + ERMRRU_ARG(extra_rmrr_units[i]));
> > + continue;
> > + }
> > +
> > + overlap = 0;
> > + list_for_each_entry(rmrru, &acpi_rmrr_units, list)
> > + {
> > + if ( pfn_to_paddr(extra_rmrr_units[i].base_pfn) < rmrru->end_address &&
> > + rmrru->base_address < pfn_to_paddr(extra_rmrr_units[i].end_pfn + 1) )
>
> Aren't both ranges inclusive? I.e. shouldn't the first one be <= (and
> the second one could be <= too when dropping the +1), matching
> the check acpi_parse_one_rmrr() does?
The end_address is not inclusive, while the start_address is.
These two snippets are from rmrr_identity_mapping():
...
ASSERT(rmrr->base_address < rmrr->end_address);
and:
...
while ( base_pfn < end_pfn )
{
    int err = set_identity_p2m_entry(d, base_pfn, p2m_access_rw, flag);
    if ( err )
        return err;
    base_pfn++;
}
...
I think this condition should not be a problem, but yes, it's not uniform with
acpi_parse_one_rmrr.
I guess I should send another version then?
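
FWIW, a minimal stand-alone sketch of that overlap test under the convention
above (base inclusive, end exclusive); the paddr_t/pfn_to_paddr definitions
below are just stand-ins mimicking Xen's 4K-page helpers for the example, not
the real ones:

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t;                      /* stand-in for Xen's paddr_t */
#define PAGE_SHIFT 12
#define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)

/*
 * The extra range is inclusive in pfns: [base_pfn, end_pfn].
 * The existing ACPI range is [base_address, end_address) in bytes,
 * i.e. the end is exclusive.  Converting the pfn range to bytes gives
 * [pfn_to_paddr(base_pfn), pfn_to_paddr(end_pfn + 1)), so the usual
 * half-open overlap test uses strict '<' on both sides.
 */
static bool ranges_overlap(unsigned long base_pfn, unsigned long end_pfn,
                           paddr_t base_address, paddr_t end_address)
{
    return pfn_to_paddr(base_pfn) < end_address &&
           base_address < pfn_to_paddr(end_pfn + 1);
}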
>
> > + {
> > + printk(XENLOG_ERR VTDPREFIX
> > + "Overlapping RMRRs: "ERMRRU_FMT" and [%lx-%lx]\n",
> > + ERMRRU_ARG(extra_rmrr_units[i]),
> > + paddr_to_pfn(rmrru->base_address),
> > + paddr_to_pfn(rmrru->end_address));
> > + overlap = 1;
> > + break;
> > + }
> > + }
> > + /* Don't add overlapping RMRR. */
> > + if ( overlap )
> > + continue;
> > +
> > + pfn = extra_rmrr_units[i].base_pfn;
> > + do
> > + {
> > + if ( !mfn_valid(pfn) )
> > + {
> > + printk(XENLOG_ERR VTDPREFIX
> > + "Invalid pfn in RMRR range "ERMRRU_FMT"\n",
> > + ERMRRU_ARG(extra_rmrr_units[i]));
> > + break;
> > + }
> > + } while ( pfn++ < extra_rmrr_units[i].end_pfn );
> > +
> > + /* Invalid pfn in range as the loop ended before end_pfn was reached. */
> > + if ( pfn <= extra_rmrr_units[i].end_pfn )
> > + continue;
> > +
> > + acpi_rmrr = xzalloc(struct acpi_rmrr_unit);
> > + if ( !acpi_rmrr )
> > + return;
> > +
> > + acpi_rmrr->scope.devices = xmalloc_array(u16,
> > + extra_rmrr_units[i].dev_count);
> > + if ( !acpi_rmrr->scope.devices )
> > + {
> > + xfree(acpi_rmrr);
> > + return;
> > + }
> > +
> > + seg = 0;
> > + for ( dev = 0; dev < extra_rmrr_units[i].dev_count; dev++ )
> > + {
> > + acpi_rmrr->scope.devices[dev] = extra_rmrr_units[i].sbdf[dev];
> > + seg = seg | PCI_SEG(extra_rmrr_units[i].sbdf[dev]);
>
> Once again - |= please.
>
Missed this one.
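The fixed line would then presumably read:

seg |= PCI_SEG(extra_rmrr_units[i].sbdf[dev]);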
> > + }
> > + if ( seg != PCI_SEG(extra_rmrr_units[i].sbdf[0]) )
> > + {
> > + printk(XENLOG_ERR VTDPREFIX
> > + "Segments are not equal for RMRR range "ERMRRU_FMT"\n",
> > + ERMRRU_ARG(extra_rmrr_units[i]));
> > + scope_devices_free(&acpi_rmrr->scope);
> > + xfree(acpi_rmrr);
> > + continue;
> > + }
> > +
> > + acpi_rmrr->segment = seg;
> > + acpi_rmrr->base_address = pfn_to_paddr(extra_rmrr_units[i].base_pfn);
> > + acpi_rmrr->end_address = pfn_to_paddr(extra_rmrr_units[i].end_pfn + 1);
>
> And this seems wrong too, unless I'm mistaken with the inclusive-ness.
>
The end_address is exclusive, see above.
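As a concrete (made-up) example: base_pfn = 0x100 and end_pfn = 0x1ff from the
command line give base_address = 0x100000 and end_address = 0x200000 with 4K
pages, so the last byte covered is 0x1fffff.  Assuming the identity-mapping
code derives its exclusive end_pfn as end_address >> PAGE_SHIFT (= 0x200), the
while ( base_pfn < end_pfn ) loop above maps exactly pfns 0x100 through 0x1ff.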
> Jan
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel