Re: [Xen-devel] [PATCH v3 1/8] xen_backend: add grant table helpers
> -----Original Message-----
> From: Anthony PERARD [mailto:anthony.perard@xxxxxxxxxx]
> Sent: 16 May 2018 14:50
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; qemu-block@xxxxxxxxxx;
> qemu-devel@xxxxxxxxxx; Stefano Stabellini <sstabellini@xxxxxxxxxx>
> Subject: Re: [PATCH v3 1/8] xen_backend: add grant table helpers
>
> On Fri, May 04, 2018 at 08:26:00PM +0100, Paul Durrant wrote:
> > This patch adds grant table helper functions to the xen_backend code to
> > localize error reporting and use of xen_domid.
> >
> > The patch also defers the call to xengnttab_open() until just before the
> > initialise method in XenDevOps is invoked. This method is responsible for
> > mapping the shared ring. No prior method requires access to the grant
> > table.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > ---
> > Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> > Cc: Anthony Perard <anthony.perard@xxxxxxxxxx>
> >
> > v2:
> >  - New in v2
> > ---
> >  hw/xen/xen_backend.c         | 123 ++++++++++++++++++++++++++++++++++++++-----
> >  include/hw/xen/xen_backend.h |  33 ++++++++++++
> >  2 files changed, 144 insertions(+), 12 deletions(-)
> >
> > diff --git a/hw/xen/xen_backend.c b/hw/xen/xen_backend.c
> > index 7445b50..50412d6 100644
> > --- a/hw/xen/xen_backend.c
> > +++ b/hw/xen/xen_backend.c
> > @@ -106,6 +106,103 @@ int xen_be_set_state(struct XenDevice *xendev, enum xenbus_state state)
> >      return 0;
> >  }
> >
> > +void xen_be_set_max_grant_refs(struct XenDevice *xendev,
> > +                               unsigned int nr_refs)
>
> Is it fine to ignore an error from set_max_grants and continue?
> xen_disk.c seems to fail the initialisation if the set_max_grants call
> fails. On the other hand, xen-usb.c just keeps going.
>

I guess the upshot will be that a subsequent grant map would fail, so I
think it should be sufficient to deal with the failure there. As you say,
its use is inconsistent, and just plain missing in some cases.
> > +{
> > +    assert(xendev->ops->flags & DEVOPS_FLAG_NEED_GNTDEV);
> > +
> > +    if (xengnttab_set_max_grants(xendev->gnttabdev, nr_refs)) {
> > +        xen_pv_printf(xendev, 0, "xengnttab_set_max_grants failed: %s\n",
> > +                      strerror(errno));
> > +    }
> > +}
> > +
> > +int xen_be_copy_grant_refs(struct XenDevice *xendev,
> > +                           bool to_domain,
> > +                           XenGrantCopySegment segs[],
> > +                           unsigned int nr_segs)
> > +{
> > +    xengnttab_grant_copy_segment_t *xengnttab_segs;
> > +    unsigned int i;
> > +    int rc;
> > +
> > +    assert(xendev->ops->flags & DEVOPS_FLAG_NEED_GNTDEV);
> > +
> > +    xengnttab_segs = g_new0(xengnttab_grant_copy_segment_t, nr_segs);
> > +
> > +    for (i = 0; i < nr_segs; i++) {
> > +        XenGrantCopySegment *seg = &segs[i];
> > +        xengnttab_grant_copy_segment_t *xengnttab_seg = &xengnttab_segs[i];
> > +
> > +        if (to_domain) {
> > +            xengnttab_seg->flags = GNTCOPY_dest_gref;
> > +            xengnttab_seg->dest.foreign.domid = xen_domid;
> > +            xengnttab_seg->dest.foreign.ref = seg->dest.foreign.ref;
> > +            xengnttab_seg->dest.foreign.offset = seg->dest.foreign.offset;
> > +            xengnttab_seg->source.virt = seg->source.virt;
> > +        } else {
> > +            xengnttab_seg->flags = GNTCOPY_source_gref;
> > +            xengnttab_seg->source.foreign.domid = xen_domid;
> > +            xengnttab_seg->source.foreign.ref = seg->source.foreign.ref;
> > +            xengnttab_seg->source.foreign.offset =
> > +                seg->source.foreign.offset;
> > +            xengnttab_seg->dest.virt = seg->dest.virt;
> > +        }
>
> That's not going to work because xengnttab_grant_copy_segment_t doesn't
> exist on Xen 4.7.

Ah, I'd missed the ifdef around that in xen_disk. I'll add it here.

Cheers,

  Paul

> > +
> > +        xengnttab_seg->len = seg->len;
> > +    }
> > +
> > +    rc = xengnttab_grant_copy(xendev->gnttabdev, nr_segs, xengnttab_segs);
>
> Thanks,
>
> --
> Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel