
Re: [Xen-devel] [xen-unstable test] 56456: regressions - FAIL

On 18/05/15 at 12:17, Tim Deegan wrote:
> At 09:34 +0100 on 18 May (1431941676), Jan Beulich wrote:
>>>>> On 16.05.15 at 13:45, <roger.pau@xxxxxxxxxx> wrote:
>>> On 16/05/15 at 10:51, osstest service user wrote:
>>>> flight 56456 xen-unstable real [real]
>>>> http://logs.test-lab.xenproject.org/osstest/logs/56456/
>>>> Regressions :-(
>>> This is my fault: paging_gva_to_gfn cannot be used to translate a PV
>>> guest VA to a GFN. The patch above restores the previous path for PV
>>> callers.
>> While Tim would have the final say, I certainly would prefer to revert
>> the offending patch and then apply a correct new version in its stead
>> in this case (where the fix is not a simple, few lines change).
> I would be OK with a follow-up fix here, but I'm not convinced that
> this is it.
> In particular, paging_mode_enabled() should be true for any PV domain
> that's in log-dirty mode, so presumably the failure is only for lgd
> ops on VMs that don't have lgd enabled.  So maybe we can either:
>  - return an error for that case (but we'd want to understand how we
>    got there first); or

The error occurs because we are calling paging_gva_to_gfn on a Dom0 VA,
and Dom0 is a PV guest.

gva_to_gfn is set to sh_gva_to_gfn for a PV Dom0, but calling
vtlb_lookup against a PV guest crashes Xen because the paging structures
are not populated.

>  - have map_dirty_bitmap() DTRT, with something like access_ok() +
>    a linear-pagetable lookup to find the frame.

That was my first intention, but AFAICT we have no in-tree function to
resolve a PV guest VA into a GFN/MFN. The closest thing I could find is
using guest_walk_tables + guest_walk_to_gfn to obtain the GFN. Should I
send a patch introducing a pv_gva_to_gfn function based on that?

Thanks, Roger.

Xen-devel mailing list


