
Re: [Xen-devel] [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache



On Tue, 12 Dec 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 12/08/2017 10:43 PM, Stefano Stabellini wrote:
> > On Fri, 8 Dec 2017, Julien Grall wrote:
> > > On 8 Dec 2017 22:26, "Stefano Stabellini" <sstabellini@xxxxxxxxxx> wrote:
> > >        On Fri, 8 Dec 2017, Julien Grall wrote:
> > >        > On 06/12/17 12:27, Julien Grall wrote:
> > >        > > On 12/06/2017 01:26 AM, Stefano Stabellini wrote:
> > >        > > > On Thu, 23 Nov 2017, Julien Grall wrote:
> > >        > > > > Hi Andrew,
> > >        > > > >
> > >        > > > > On 23/11/17 18:49, Andrew Cooper wrote:
> > >        > > > > > On 23/11/17 18:32, Julien Grall wrote:
> > >        > > > > > > This new function will be used in a follow-up patch to copy data to the
> > >        > > > > > > guest using the IPA (aka guest physical address) and then clean the cache.
> > >        > > > > > >
> > >        > > > > > > Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
> > >        > > > > > > ---
> > >        > > > > > >    xen/arch/arm/guestcopy.c           | 10 ++++++++++
> > >        > > > > > >    xen/include/asm-arm/guest_access.h |  6 ++++++
> > >        > > > > > >    2 files changed, 16 insertions(+)
> > >        > > > > > >
> > >        > > > > > > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> > >        > > > > > > index be53bee559..7958663970 100644
> > >        > > > > > > --- a/xen/arch/arm/guestcopy.c
> > >        > > > > > > +++ b/xen/arch/arm/guestcopy.c
> > >        > > > > > > @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
> > >        > > > > > >                          COPY_from_guest | COPY_linear);
> > >        > > > > > >    }
> > >        > > > > > >    +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
> > >        > > > > > > +                                              paddr_t gpa,
> > >        > > > > > > +                                              void *buf,
> > >        > > > > > > +                                              unsigned int len)
> > >        > > > > > > +{
> > >        > > > > > > +    /* P2M is shared between all vCPUs, so the vCPU used does not matter. */
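
(The body of the new function is cut off in the quoted hunk above; in the series it presumably just forwards to the internal copy_guest() helper that the quoted raw_copy_from_guest() already uses. A minimal sketch of what that forwarding might look like, assuming copy_guest() takes the buffer, address, length, vCPU and flags in that order, and assuming COPY_to_guest, COPY_ipa and COPY_flush_dcache flags exist elsewhere in the series; none of that is shown in this excerpt:

    unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
                                                  paddr_t gpa,
                                                  void *buf,
                                                  unsigned int len)
    {
        /* P2M is shared between all vCPUs, so the vCPU used does not matter. */
        return copy_guest(buf, gpa, len, d->vcpu[0],
                          COPY_to_guest | COPY_ipa | COPY_flush_dcache);
    }

The reliance on d->vcpu[0] here is exactly what the replies below pick up on.)
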
> > >        > > > > >
> > >        > > > > > Be very careful with this line of thinking.  It only works after
> > >        > > > > > DOMCTL_max_vcpus has succeeded, and before that point, it is a latent
> > >        > > > > > NULL pointer dereference.
> > >        > > > >
> > >        > > > > I really don't expect that function to be used before DOMCTL_max_vcpus
> > >        > > > > is set. It is only used for hardware emulation or for Xen loading an
> > >        > > > > image into the hardware domain memory. I could add a check on d->vcpus
> > >        > > > > to be safe.
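
(The check mentioned here would presumably be a guard at the top of the new function, along these lines; illustrative only, not part of the patch:

    /* Defend against use before DOMCTL_max_vcpus has populated d->vcpu. */
    if ( d->max_vcpus == 0 || d->vcpu[0] == NULL )
        return len;    /* nothing copied */
)
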
> > >        > > > >
> > >        > > > > >
> > >        > > > > > Also, what about vcpus configured with alternative views?
> > >        > > > >
> > >        > > > > It is not important because the underlying call is get_page_from_gfn,
> > >        > > > > which does not care about the alternative view (that function takes a
> > >        > > > > domain as parameter). I can update the comment.
> > >        > > > Since this is a new function, would it make sense to take a
> > >        > > > struct vcpu* as parameter, instead of a struct domain* ?
> > >        > >
> > >        > > Well, I suggested this patch this way because likely everyone will use it
> > >        > > with d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and
> > >        > > not d->vcpus[1]...
> > >        >
> > >        > Thinking a bit more about this, it might be better/safer to pass either
> > >        > a domain or a vCPU to copy_guest. I can see 2 solutions:
> > >        >       1# Introduce a union that uses the same parameter:
> > >        >               union
> > >        >               {
> > >        >                       struct
> > >        >                       {
> > >        >                               struct domain *d;
> > >        >                       } ipa;
> > >        >                       struct
> > >        >                       {
> > >        >                               struct vcpu *v;
> > >        >                       } gva;
> > >        >               }
> > >        >         The structures here would make it clear that only a domain
> > >        > (resp. vcpu) should be used with ipa (resp. gva).
> > >        >
> > >        >       2# Have 2 parameters, vcpu and domain.
> > >        >
> > >        > Any opinions?
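
For illustration, option 1# might end up looking something like the sketch below in guestcopy.c. The copy_info_t name and the IPA_INFO()/GVA_INFO() helpers are made up here to show the intent, not taken from the series:

    typedef union
    {
        struct
        {
            struct domain *d;
        } ipa;
        struct
        {
            struct vcpu *v;
        } gva;
    } copy_info_t;

    /* Build the right member, so a caller cannot pair a GVA with a domain. */
    #define IPA_INFO(dom)  ((copy_info_t){ .ipa = { dom } })
    #define GVA_INFO(vcpu) ((copy_info_t){ .gva = { vcpu } })

A copy to a guest physical address would then pass IPA_INFO(d), while a copy using a guest virtual address would pass GVA_INFO(current).
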
> > > 
> > > I think that would be clearer. You could also add a paddr_t/vaddr_t
> > > correspondingly inside the union maybe.
> > > 
> > > 
> > > Well, you will have a name clash happening, I think.
> > > 
> > > 
> > > And vaddr_t and paddr_t are confusing because you don't know which address
> > > you are speaking about (guest vs hypervisor). At least ipa/gpa and gva are
> > > well-known names.
> > 
> > That's not what I meant. ipa and gva are good names.
> > 
> > I was suggesting to put an additional address field inside the union to
> > avoid the issue with paddr_t and vaddr_t discussed elsewhere
> > (alpine.DEB.2.10.1712081313070.8052@sstabellini-ThinkPad-X260).
> > 
> > I am happy either way, it was just a suggestion.
> 
> Actually, looking at it, it will not be that handy to have the paddr_t/vaddr_t
> inside the union, mostly because of the common code using the address
> parameter.
> 
> I could add another union for the address, but I don't like it much.
> As you are happy either way, I will use uint64_t.
 
Sounds good
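
Putting the two decisions together (the domain/vCPU union from option 1# and a plain uint64_t for the address), the shared helper and the new function would end up roughly as below. Again a sketch under the assumptions already stated above, not the final patch:

    /* addr is a GVA or a GPA depending on flags; info carries the matching
     * vCPU or domain, so no paddr_t/vaddr_t needs to live in the union. */
    static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
                                    copy_info_t info, unsigned int flags);

    unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
                                                  paddr_t gpa,
                                                  void *buf,
                                                  unsigned int len)
    {
        /* P2M is shared between all vCPUs, so only the domain is needed. */
        return copy_guest(buf, gpa, len, IPA_INFO(d),
                          COPY_to_guest | COPY_ipa | COPY_flush_dcache);
    }
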