
Re: [Xen-devel] [PATCH v2 3/8] dm_op: convert HVMOP_track_dirty_vram



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 15 December 2016 15:45
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>; Ian
> Jackson <Ian.Jackson@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx; Daniel
> De Graaf <dgdegra@xxxxxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>
> Subject: Re: [PATCH v2 3/8] dm_op: convert HVMOP_track_dirty_vram
> 
> >>> On 06.12.16 at 14:46, <paul.durrant@xxxxxxxxxx> wrote:
> > @@ -68,6 +70,35 @@ static int copy_buf_to_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
> >      return copy_to_guest(buf.h, src, size) ? -EFAULT : 0;
> >  }
> >
> > +static int track_dirty_vram(struct domain *d,
> > +                            unsigned int nr_bufs,
> > +                            XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
> > +                            xen_pfn_t first_pfn, unsigned int nr)
> > +{
> > +    struct xen_dm_op_buf buf;
> > +    int rc;
> > +
> > +    if ( nr > (GB(1) >> PAGE_SHIFT) )
> > +        return -EINVAL;
> > +
> > +    if ( d->is_dying )
> > +        return -ESRCH;
> > +
> > +    if ( !d->vcpu || !d->vcpu[0] )
> 
> It may not have been this patch, but I think we had settled on the left
> side here wanting to be !d->max_vcpus for consistency with other
> (modern) code.

Oh sorry, I must have missed that.
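I'll switch it over in v3. Roughly what I have in mind (a sketch only; the error code and the d->vcpu[0] part stay as they are in the current patch):

    if ( d->is_dying )
        return -ESRCH;

    /* Check d->max_vcpus, as suggested, rather than probing d->vcpu
     * directly; keep the d->vcpu[0] check as-is. */
    if ( !d->max_vcpus || !d->vcpu[0] )
        return -EINVAL;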

> 
> > --- a/xen/arch/x86/mm/hap/hap.c
> > +++ b/xen/arch/x86/mm/hap/hap.c
> > @@ -68,7 +68,7 @@
> >  int hap_track_dirty_vram(struct domain *d,
> >                           unsigned long begin_pfn,
> >                           unsigned long nr,
> > -                         XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
> > +                         XEN_GUEST_HANDLE_PARAM(void) guest_dirty_bitmap)
> >  {
> >      long rc = 0;
> >      struct sh_dirty_vram *dirty_vram;
> > --- a/xen/arch/x86/mm/shadow/common.c
> > +++ b/xen/arch/x86/mm/shadow/common.c
> > @@ -3677,7 +3677,7 @@ static void sh_clean_dirty_bitmap(struct domain *d)
> >  int shadow_track_dirty_vram(struct domain *d,
> >                              unsigned long begin_pfn,
> >                              unsigned long nr,
> > -                            XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
> > +                            XEN_GUEST_HANDLE_PARAM(void) guest_dirty_bitmap)
> >  {
> >      int rc = 0;
> >      unsigned long end_pfn = begin_pfn + nr;
> > --- a/xen/include/asm-x86/hap.h
> > +++ b/xen/include/asm-x86/hap.h
> > @@ -43,7 +43,7 @@ void  hap_vcpu_init(struct vcpu *v);
> >  int   hap_track_dirty_vram(struct domain *d,
> >                             unsigned long begin_pfn,
> >                             unsigned long nr,
> > -                           XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
> > +                           XEN_GUEST_HANDLE_PARAM(void) dirty_bitmap);
> >
> >  extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
> >  int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
> > --- a/xen/include/asm-x86/shadow.h
> > +++ b/xen/include/asm-x86/shadow.h
> > @@ -63,7 +63,7 @@ int shadow_enable(struct domain *d, u32 mode);
> >  int shadow_track_dirty_vram(struct domain *d,
> >                              unsigned long first_pfn,
> >                              unsigned long nr,
> > -                            XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
> > +                            XEN_GUEST_HANDLE_PARAM(void) dirty_bitmap);
> 
> I think all of these will go away with the type change required in the
> earlier patch.
> 

Yes, they probably will.
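Assuming the buffer handle in the earlier patch ends up typed such that it can be passed straight through (e.g. as XEN_GUEST_HANDLE_64(uint8)), the helper would reduce to something along these lines, and none of the prototypes above need retyping:

    /* Sketch only: buf is the xen_dm_op_buf already fetched from the
     * guest; its handle is forwarded untouched to the paging code. */
    return shadow_mode_enabled(d) ?
           shadow_track_dirty_vram(d, first_pfn, nr, buf.h) :
           hap_track_dirty_vram(d, first_pfn, nr, buf.h);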

> With all of this taken care of, hypervisor parts
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> 

Thanks,

  Paul

> Jan



 

