Re: [Xen-devel] [PATCH RFC 5/5] pvh: dom0 boot option to specify iommu rw ranges
>>> On 18.02.15 at 19:15, <konrad.wilk@xxxxxxxxxx> wrote:
> On Tue, Feb 17, 2015 at 02:14:20PM +0000, Andrew Cooper wrote:
>> On 17/02/15 13:39, Jan Beulich wrote:
>> >>>> On 17.02.15 at 14:32, <andrew.cooper3@xxxxxxxxxx> wrote:
>> >> On 17/02/15 12:36, Jan Beulich wrote:
>> >>>>>> On 14.02.15 at 00:21, <elena.ufimtseva@xxxxxxxxxx> wrote:
>> >>>> On Fri, Feb 13, 2015 at 10:09:39PM +0000, Andrew Cooper wrote:
>> >>>>> If I understand the problem correctly, I believe that the correct
>> >>>>> solution would be to add a dmar_rmrr[ command line parameter along the
>> >>>>> same lines as ivrs_hpet[ and ivrs_ioapic[ which allows the user to
>> >>>>> inject corrections to the ACPI tables via the command line.
>> >>>>
>> >>>> Yes, if we agree to classify those magic locations as being not reported
>> >>>> by ACPI machinery.
>> >>>
>> >>> One fundamental problem for someone to use this proposed option
>> >>> in practice is - how does (s)he learn which region(s) to specify?
>> >>
>> >> Trial and improvement, or find a manual for the affected system.
>> >
>> > If such an address range would appear in a manual, it would
>> > almost certainly also appear in the ACPI tables
>>
>> In an ideal world.
>>
>> > (unless by manual you mean errata documentation).
>>
>> Also a valid source of information.
>
> The way Elena found it is by looking at the EPT violations. Perhaps
> that should be also mentioned in the Documentation for said parameter?

Along with clarifying that this is a rather fragile approach: What if
most of the time you see faults on, say, three (perhaps consecutive)
MFNs, and only after many months one on a fourth? This may be useful
for development purposes, but I very much doubt it would be of much
use for an affected production system.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
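[Editor's note: to make the proposal above concrete, here is a minimal sketch of how a boot option of this shape might be tokenized. The `start-end=sbdf[,sbdf...]` syntax, the option name, and the parser are assumptions modeled loosely on Xen's existing `ivrs_ioapic[` / `ivrs_hpet[` override options, not the actual implementation discussed in this thread.]

```python
# Hypothetical tokenizer for a dmar_rmrr-style override option.
# Assumed (not actual) syntax: "start[-end]=sbdf[,sbdf...]", with
# start/end given as hex page addresses, e.g. "0xe8000-0xe8fff=00:1f.2".
def parse_rmrr_opt(opt: str):
    rng, _, devs = opt.partition("=")        # split range from device list
    start_s, _, end_s = rng.partition("-")   # split "start-end"
    start = int(start_s, 16)
    end = int(end_s, 16) if end_s else start # single-page range if no end
    return (start, end, devs.split(",") if devs else [])

# Example: one reserved page claimed to be used by SBDF 00:1f.2
print(parse_rmrr_opt("0xe8000-0xe8fff=00:1f.2"))
```

As the thread notes, the hard part is not the syntax but discovering correct values: ranges gleaned from EPT violation logs may be incomplete, so any such override is best treated as a debugging aid rather than a production configuration.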