
Re: [Xen-devel] [V13 PATCH 1/2] pvh dom0: Add and remove foreign pages



On Wed, 21 May 2014 08:59:27 +0100
"Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> >>> On 21.05.14 at 01:46, <mukesh.rathor@xxxxxxxxxx> wrote:
> > On Tue, 20 May 2014 11:33:11 +0100
> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> > 
> >> >>> On 20.05.14 at 01:51, <mukesh.rathor@xxxxxxxxxx> wrote:
> >> > +static int atomic_write_ept_entry(ept_entry_t *entryptr,
..
> >> > + out:
> >> > +    if ( rc )
> >> > +        gdprintk(XENLOG_ERR, "epte o:%"PRIx64" n:%"PRIx64"
> >> > rc:%d\n",
> >> > +                 entryptr->epte, new.epte, rc);
> >> > +    return rc;
> >> > +}
> >> 
> >> There's still no sign of any use of is_epte_present() here, and
> >> also no mention anywhere that taking refcounts even for
> >> inaccessible entries is correct. I think this is actually okay,
> >> but the policy
> > 
> > I thought we now have no paths leading to that!  Can you please
> > point me to a path that this would happen for foreign type?
> 
> No I can't - as implied by the rest of the response seen below. _If_
> indeed there are no such paths, a respective ASSERT() might be the
> way to go.
> 
> >> entries is correct. I think this is actually okay, but the policy
> >> (refcount taken even for inaccessible pages) should be spelled out
> >> somewhere.
> > 
> > Can you please point to where this is spelled out for grant types?
> > They are very similar to foreign types.
> 
> Can you please stop using examples of bad situations as justification
> for adding further badness? But then again, it's Tim to decide whether
> he's fine with all sorts of things being implicit rather than
> explicit here.

I feel that by just failing atomic_write_ept_entry for non-present
entries, we don't need to add any more is_pvh_* checks/asserts, which
in the past have been frowned upon.

> Apart from that the question would need to be raised to those not
> having spelled out properly what the rules for grants are -
> considering that it's the second switch() in ept_p2m_type_to_flags()
> that can actually lead to non-present entries associated with a valid
> MFN, it's likely the addition of mem_access to blame here.

That would be denied for PVH: mem_event_enable() would fail because
the only param allowed to be set for PVH is HVM_PARAM_CALLBACK_IRQ.

> >> > @@ -688,10 +748,10 @@ ept_set_entry(struct p2m_domain *p2m,
> >> > unsigned long gfn, mfn_t mfn, ept_p2m_type_to_flags(&new_entry,
> >> > p2mt, p2ma); }
> >> >  
> >> > -    atomic_write_ept_entry(ept_entry, new_entry);
> >> > +    rc = atomic_write_ept_entry(ept_entry, new_entry, target);
> >> 
> >> To me it would seem cleaner to clear old_entry here right away, so
> >> there's no confusion about it needing freeing on eventual new error
> >> paths getting added in the future.
> > 
> > Not sure I understand why. If the error happens only before the
> > entry is ever written, leaving the old entry seems reasonable. IOW,
> > if going from A to B, if there's an error, nothing is changed, A is
> > still around. Clearing the old entry may make things worse,
> > specially if clearing the entry needs any special handling, like
> > clearing old refcnt etc.. Having an api change state from A to C
> > when failing to set to B seems odd to me.
> 
> You're right with what you state, yet you apparently didn't spot that
> I talked about "old_entry", since all your response refers to "old
> entry". I.e. I was asking for the local variable to be cleared right
> away, not an entry in the active table.

Sorry, my eyes missed the underscore between old and entry... According
to the comment, it needs to be done after the domain sync. I'd rather
just leave it as is for now, since I'm unable to anticipate how the
function might change in the future.

Sending V14, please take a look.

thanks,
Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

