Re: [Xen-devel] acpidump crashes on some machines
On 08/17/2012 10:52 PM, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 26, 2012 at 03:02:58PM +0200, Andre Przywara wrote:
>> On 06/30/2012 04:19 AM, Konrad Rzeszutek Wilk wrote:
>>
>> Konrad, David,
>>
>> back on track for this issue. Thanks for your input; I could do some
>> more debugging (see below for a refresh):
>>
>> It seems to affect only the first page of the 1:1 mapping. I didn't
>> have any issues with the last PFN or the page behind it (which failed
>> properly).
>>
>> David, thanks for the hint about varying the dom0_mem parameter. I
>> thought I had already checked this, but I did it once again, and it
>> turned out that it is only an issue if dom0_mem is smaller than the
>> ACPI area, which generates a hole in the memory map. So we have
>> (simplified):
>>
>> * 1:1 mapping up to 1 MB
>> * normal mapping up to dom0_mem
>> * unmapped area up to the ACPI E820 area
>> * ACPI E820 1:1 mapping
>>
>> As far as I could chase it down, the 1:1 mapping itself looks OK; I
>> couldn't find any off-by-one bugs there. So maybe it is code that
>> later on invalidates areas between the normal guest mapping and the
>> ACPI memory?
>
> I think I found it. Can you try this, please [and if you can't find
> early_to_phys.. just use the __set_phys_to call]

Yes, that works. At least after a quick test on my test box, both the
test module and acpidump work as expected.

If I replace the "<" in your patch with the original "<=", I get the
warning (and due to the "continue" it also works). I also successfully
tested the minimal fix (just replacing <= with <). I will feed it to
the testers here to cover more machines.

Do you want to keep the warnings in (which exceed 80 characters, btw)?

Thanks a lot, and:

Tested-by: Andre Przywara <andre.przywara@xxxxxxx>

Regards,
Andre.

--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel