
Re: [Xen-devel] [PATCH] xen/arm: Implement domain_get_maximum_gpfn

On Mon, Sep 8, 2014 at 10:47 PM, Tamas K Lengyel <tamas.lengyel@xxxxxxxxxxxx> wrote:

On Mon, Sep 8, 2014 at 10:43 PM, Julien Grall <julien.grall@xxxxxxxxxx> wrote:
Hello Tamas,

On 03/09/14 02:00, Tamas K Lengyel wrote:

On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

    On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
     > Hi Ian,
     > On 16/07/14 12:02, Ian Campbell wrote:
     > > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed
    for ARM
     > > and continue to punt on this interface until it is actually
    needed by
     > > something unavoidable on the guest side (and simultaneously
    hope that
     > > day never comes...).
     > This patch is a requirement to make Xen memory access work on ARM.
     > Could you reconsider the possibility to apply this patch on Xen?

    Needs more rationale as to why it is required for Xen Memory (do you
    mean xenaccess?). I assume I'll find that in the relevant thread once I
    get to it?

It's used in a non-critical sanity check, for performance reasons.
Without the sanity check we might attempt to set mem_access permissions
on gpfns that don't exist for the guest. Doing so wouldn't break
anything, but if we know beforehand that a gpfn is outside the scope of
what the guest has, we can skip the entire operation.
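The sanity check being described could be sketched roughly as below. This is a hedged, self-contained illustration, not Xen code: `xen_pfn_t`, `max_gpfn`, and `set_mem_access_checked` are stand-ins for the real types and helpers, and the cutoff value is an arbitrary example.

```c
#include <stdint.h>

/* Illustrative stand-in for Xen's frame-number type. */
typedef uint64_t xen_pfn_t;

/*
 * Example value as would be reported by domain_get_maximum_gpfn();
 * 0x40000 4 KiB pages corresponds to a 1 GiB guest.
 */
static const xen_pfn_t max_gpfn = 0x40000;

/*
 * Sketch of the sanity check: skip the (comparatively expensive)
 * mem_access permission update entirely when the requested gpfn
 * lies beyond what the guest actually has.
 */
static int set_mem_access_checked(xen_pfn_t gpfn)
{
    if (gpfn > max_gpfn)
        return 0; /* nothing to do: gpfn is outside the guest's memory */

    /* ... the real code would walk the p2m and update access rights ... */
    return 1; /* permissions updated */
}
```

Nothing breaks if the check is omitted, as the thread notes; the point is purely to avoid pointless work on out-of-range gpfns.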

It might be better if you carry this patch in your series.


Julien Grall



As a side note, if this patch is problematic to merge for some reason, the current implementation still needs to change to return 0 instead of -ENOSYS, to conform to the x86 implementation, where 0 is used to indicate failure. See commit 7ffc9779aa5120c5098d938cb88f69a1dda9a0fe ("x86: make certain memory sub-ops return valid values") for more info.
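The suggested return-value fix amounts to a one-line change; the sketch below is only an illustration of the convention, with a hypothetical function name standing in for the ARM handler:

```c
/*
 * Illustrative sketch of the convention discussed above: on x86,
 * the maximum-gpfn memory sub-op uses 0 (not a negative errno) to
 * signal failure, per commit 7ffc9779aa51. An unimplemented ARM
 * handler should therefore return 0 rather than -ENOSYS.
 */
static long domain_get_maximum_gpfn_sketch(void)
{
    /* Before: return -ENOSYS;   (negative errno, wrong convention) */
    /* After:                    (0 means "failure / no valid gpfn") */
    return 0;
}
```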


Xen-devel mailing list