
Re: [Xen-devel] [PATCH v2 4/4] x86: fix pinned cache attribute handling

>>> On 28.03.14 at 15:52, <Ian.Campbell@xxxxxxxxxxxxx> wrote:
> On Fri, 2014-03-28 at 14:43 +0000, Jan Beulich wrote:
>> >>> On 28.03.14 at 14:36, <Ian.Campbell@xxxxxxxxxxxxx> wrote:
>> > I think it is generally better if libxc remains a pretty thin layer over
>> > the hypercall, so adjustments to the inclusiveness of the arguments
>> > don't really belong there IMHO.
>> > 
>> > What is considered more normal for our hypercall interfaces, in- or
>> > exclusive?
>> I think ranges specified via two GFNs should generally be inclusive, in
>> order to make sure they can extend to the end of address space.
> This is a strong argument IMHO and would seem to suggest that qemu is at
> fault and is the one which should be fixed.

With this ...

>> > For the cacheflush thing I added recently you suggested to use a nrpages
>> > instead of end which nicely sidestepped the issue. Is that an option
>> > here while you are changing things anyway?
>> I considered that, but didn't want to propose it because such a
>> change to the interface would go unnoticed by the caller (at
>> build time). But of course it is an option.
> Right, that is a real downside.
> In the domctl struct itself you would naturally change the field name,
> so that would catch that.
> For libxc, which is how all the real callers end up calling it, I guess
> you would have to change the function name, which would also allow the
> existing one to remain for compatibility/transitional purposes (if we
> wanted that).
> xc_domain_pin_memory_cacheattr_range isn't a very good name, but it is
> not terrible I suppose.

... and the basic rule of not changing interfaces without reason, I think
we should leave the interface alone and just fix all qemu variants. May
I hope that the qemu maintainers could take care of this as well as the
ROM issue pointed out earlier today?
