
Re: [Xen-devel] crash on boot with 4.6.1 on fedora 24

>>> On 11.05.16 at 07:49, <JGross@xxxxxxxx> wrote:
> On 10/05/16 18:35, Boris Ostrovsky wrote:
>> On 05/10/2016 11:43 AM, Juergen Gross wrote:
>>> On 10/05/16 17:35, Jan Beulich wrote:
>>>>>>> On 10.05.16 at 17:19, <JGross@xxxxxxxx> wrote:
>>>>> On 10/05/16 15:57, Jan Beulich wrote:
>>>>>>>>> On 10.05.16 at 15:39, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>>>>>> I didn't finish unwinding the stack yesterday. Here it is:
>>>>>>> setup_arch -> dmi_scan_machine -> dmi_walk_early -> early_ioremap
>>>>>> Ah, that makes sense. Yet why would early_ioremap() involve an
>>>>>> M2P lookup? As said, MMIO addresses shouldn't be subject to such
>>>>>> lookups.
>>>>> early_ioremap()->
>>>>>   __early_ioremap()->
>>>>>     __early_set_fixmap()->
>>>>>       set_pte()->
>>>>>         xen_set_pte_init()->
>>>>>           mask_rw_pte()->
>>>>>             pte_pfn()->
>>>>>               pte_val()->
>>>>>                 xen_pte_val()->
>>>>>                   pte_mfn_to_pfn()
>>>> Well, I understand (also from Boris' first reply) that's how it is,
>>>> but not why it is so. I.e. the call flow above doesn't answer my
>>>> question.
>>> On x86, early_ioremap() and early_memremap() share a common sub-function,
>>> __early_ioremap(). Together with pvops this forces a common set_pte()
>>> implementation, which ends up performing the mfn validation.
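
[For context, a simplified model of the final pte_mfn_to_pfn() step above; the names and the sentinel value are illustrative, not the actual kernel code. The real machine_to_phys_mapping array is virtually mapped and sparse, so a slot for an MMIO or otherwise unpopulated machine frame has no backing page and reading it faults; here a sentinel stands in for the unbacked slot:]

```c
/* Simplified model of the sparse M2P table: a sentinel value stands
 * in for a slot with no backing page (which would fault in reality). */
#define M2P_SIZE 8
#define M2P_HOLE (~0UL)

static unsigned long m2p[M2P_SIZE] = {
    5, 7, 2, M2P_HOLE, M2P_HOLE, M2P_HOLE, M2P_HOLE, M2P_HOLE
};

/* Rough analogue of pte_mfn_to_pfn(): translate a machine frame
 * number back to a pseudo-physical one, falling back to the identity
 * mapping for frames (e.g. MMIO) that have no M2P entry. */
static unsigned long mfn_to_pfn(unsigned long mfn)
{
    if (mfn >= M2P_SIZE || m2p[mfn] == M2P_HOLE)
        return mfn;            /* identity fallback */
    return m2p[mfn];
}
```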
>> Do we make any assumptions about where DMI data lives?
> I don't think so.
> So the basic problem is the page fault due to the sparse m2p map before
> the #PF handler is registered.
> What do you think about registering a minimal #PF handler in
> xen_arch_setup() capable of handling this fault? This should be
> doable without major problems. I can write a patch.

To me that would feel like working around the issue instead of
admitting that the removal of _PAGE_IOMAP was a mistake.

