
Re: [Xen-devel] [PATCH 02/20] PVH xen: add XENMEM_add_to_physmap_range



>>> On 17.05.13 at 01:56, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> On Thu, 16 May 2013 08:21:16 +0100
> "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> 
>> >>> On 16.05.13 at 01:05, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
>> >>> wrote:
>> > On Wed, 15 May 2013 10:58:43 +0100
>> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> > 
>> >> >>> On 15.05.13 at 02:52, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
>> >> >>> wrote:
>> >> > --- a/xen/arch/x86/mm.c
>> >> > +++ b/xen/arch/x86/mm.c
>> >> > @@ -4519,7 +4519,8 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
>> >> >  static int xenmem_add_to_physmap_once(
>> >> >      struct domain *d,
>> >> > -    const struct xen_add_to_physmap *xatp)
>> >> > +    const struct xen_add_to_physmap *xatp,
>> >> > +    domid_t foreign_domid)
>> >> >  {
>> >> >      struct page_info *page = NULL;
>> >> >      unsigned long gfn = 0; /* gcc ... */
>> >> > @@ -4646,7 +4647,7 @@ static int xenmem_add_to_physmap(struct domain *d,
>> >> 
>> >> I know I said this before: This patch can't be complete, or else
>> >> the new function parameter would actually get used. With the way
>> >> things are, if this patch gets applied, a user of the new XENMEM_
>> >> sub-op would not get the expected behavior.
>> >> 
>> > 
>> > No, the new foreign_domid parameter is meaningful only for the
>> > XENMAPSPACE_gmfn_foreign op, which is defined in patch 0018, so we
>> > should be OK here.
>> 
>> Mukesh, please. Go look at your own patch again: It adds handling
>> of XENMEM_add_to_physmap_range to arch_memory_op(), calling
>> xenmem_add_to_physmap_range(), which in turn calls
>> xenmem_add_to_physmap_once() passing xatpr->foreign_domid
>> as the last argument. I don't see anywhere in the patch prevention
>> of that execution flow for an arbitrary guest. If I'm overlooking
>> something, please point me to it.
> 
> Hmm... looking at the code again, xenmem_add_to_physmap_once() will
> start by setting mfn = 0; then, in the switch statement, it won't find
> XENMAPSPACE_gmfn_foreign, so it will fall out of the switch to:
> 
>     if ( !paging_mode_translate(d) || (mfn == 0) )
> 
> Since mfn is 0, it will return -EINVAL. The error would be copied
> to xatpr->errs and the guest will see that.  What am I missing?

Okay, I see now, but that was entirely unrecognizable from the
patch context and the patch description.
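
A stand-alone sketch of the behaviour being described may help here
(illustrative only, not the hypervisor source; the constants and the fake
frame lookup are made up). Along the path traced above, arch_memory_op() ->
xenmem_add_to_physmap_range() -> xenmem_add_to_physmap_once(), an unhandled
XENMAPSPACE_gmfn_foreign leaves mfn at 0, so the request fails with -EINVAL,
which the range handler would then copy into xatpr->errs:

    #include <stdio.h>
    #include <errno.h>

    /* Stand-alone illustration only, not the hypervisor code.  It mimics the
     * shape of xenmem_add_to_physmap_once(): an unhandled map space leaves
     * mfn at 0, so the final check rejects the request with -EINVAL. */

    #define XENMAPSPACE_gmfn          2   /* illustrative values only */
    #define XENMAPSPACE_gmfn_foreign  4   /* handling only arrives with patch 18 */

    static int add_to_physmap_once(unsigned int space)
    {
        unsigned long mfn = 0;

        switch ( space )
        {
        case XENMAPSPACE_gmfn:
            mfn = 0x1234;                 /* pretend a frame was looked up */
            break;
            /* no case for XENMAPSPACE_gmfn_foreign in this patch */
        }

        /* stands in for the quoted !paging_mode_translate(d) || (mfn == 0) */
        if ( mfn == 0 )
            return -EINVAL;

        return 0;
    }

    int main(void)
    {
        printf("gmfn_foreign: %d\n", add_to_physmap_once(XENMAPSPACE_gmfn_foreign));
        printf("gmfn:         %d\n", add_to_physmap_once(XENMAPSPACE_gmfn));
        return 0;
    }

Compiled with gcc, this prints -22 (-EINVAL) for the foreign space and 0 for
the handled one.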

> I can
> add full support in just one patch by folding patch 18 in here, but in
> the past I was asked to break up big patches.

With the above briefly stated in the patch description, I'm fine
with you keeping it separate.

>> Furthermore, now that you forced me to look at that code yet
>> another time,
>> 
>> >+        xen_ulong_t idx;
>> >+        xen_pfn_t gpfn;
>> 
>> Pointless variables, ...
>> 
>> >+        struct xen_add_to_physmap xatp;
>> >+
>> >+        if ( copy_from_guest_offset(&idx, xatpr->idxs, xatpr->size-1, 1) ||
>> >+             copy_from_guest_offset(&gpfn, xatpr->gpfns, xatpr->size-1, 1) )
>> 
>> ... you can read directly into the respective xatp fields here.
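
Concretely, the suggestion presumably amounts to something like the following
(a sketch only, reusing the copy_from_guest_offset() calls from the quoted
hunk but targeting the xatp fields directly):

            struct xen_add_to_physmap xatp;

            if ( copy_from_guest_offset(&xatp.idx, xatpr->idxs, xatpr->size-1, 1) ||
                 copy_from_guest_offset(&xatp.gpfn, xatpr->gpfns, xatpr->size-1, 1) )

With that, the idx and gpfn locals (and whatever assignments into xatp
presumably follow in the patch) can simply go away.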
> 
> I could, but it makes the lines in the if statement long enough to wrap,
> making the code harder to read IMO. The compiler should do the exact same
> thing in both cases.

I'm afraid it's not permitted to do so, because the addresses of
the variables are taken and, through some macro levels, passed
to global functions.

> If it really bothers you, I can change it.

It's certainly a matter of taste to some degree, but to me it's
inefficient code (unless you could prove that at least modern gcc
indeed does said optimization despite the use of the & operator), and
it would sooner or later (once I stumble across that code again)
prompt me to submit a cleanup patch...
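
As a toy illustration of that point (names purely illustrative, nothing here
is from the patch; fill() merely stands in for the out-of-line copy routine
the macro ultimately calls, assumed to be defined in another translation
unit):

    /* Toy illustration only, not Xen code. */
    struct dest { unsigned long idx; };

    extern int fill(unsigned long *p);   /* assumed defined elsewhere */

    unsigned long via_local(struct dest *d)
    {
        unsigned long idx;               /* separate object ...                 */

        if ( fill(&idx) )                /* ... whose address escapes to fill() */
            return 0;
        d->idx = idx;                    /* plus the extra copy into the field  */
        return d->idx;
    }

    unsigned long direct(struct dest *d)
    {
        if ( fill(&d->idx) )             /* write straight into the field       */
            return 0;
        return d->idx;
    }

Whether a given compiler manages to turn the first form into the second
despite the escaping address is exactly the open question above; writing the
second form sidesteps it.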

Jan




 

