Re: [Xen-devel] [PATCH v5 6/9] ioreq-server: add support for multiple servers
> -----Original Message-----
> From: Ian Campbell
> Sent: 06 May 2014 11:46
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx; Ian Jackson; Stefano Stabellini; Jan Beulich
> Subject: Re: [PATCH v5 6/9] ioreq-server: add support for multiple servers
>
> On Thu, 2014-05-01 at 13:08 +0100, Paul Durrant wrote:
> > NOTE: To prevent emulators running in non-privileged guests from
> > potentially allocating very large amounts of xen heap, the core
> > rangeset code has been modified to introduce a hard limit of 256
> > ranges per set.
>
> OOI how much RAM does that correspond to?
Each range is two pointers (a list_head) and two unsigned longs (start and
end), so that's 32 bytes on a 64-bit build - 256 ranges is therefore two
pages' worth.
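For reference, the range structure in xen/common/rangeset.c is essentially
this (from memory, so take the exact layout as approximate):

    struct range {
        struct list_head list; /* two pointers: 16 bytes on 64-bit */
        unsigned long s, e;    /* start and end: 16 bytes */
    };

i.e. 256 ranges * 32 bytes = 8192 bytes, or two 4K pages per rangeset.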
>
> (Arguably this and the asprintf change should/could have been separate)
>
I debated that with myself and decided to leave it in the same patch in case
someone objected to me adding a function with no callers. I'll separate it for
v6.
> I've only really looked at tools/ and xen/include/public here.
>
> > diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> > index 369c3f3..b3ed029 100644
> > --- a/tools/libxc/xc_domain.c
> > +++ b/tools/libxc/xc_domain.c
> > @@ -1284,6 +1284,225 @@ int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long
> >      return rc;
> > }
> >
> > +int xc_hvm_create_ioreq_server(xc_interface *xch,
> > +                               domid_t domid,
> > +                               ioservid_t *id)
> > +{
> [...]
> > +}
> > +
> > +int xc_hvm_get_ioreq_server_info(xc_interface *xch,
> > +                                 domid_t domid,
> > +                                 ioservid_t id,
> > +                                 xen_pfn_t *ioreq_pfn,
> > +                                 xen_pfn_t *bufioreq_pfn,
> > +                                 evtchn_port_t *bufioreq_port)
> > +{
> [...]
> > +}
> > +
> > +int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch, domid_t domid,
> > +                                        ioservid_t id, int is_mmio,
> > +                                        uint64_t start, uint64_t end)
> > +{
> [...]
> > +}
> > +
> > +int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch, domid_t domid,
> > +                                            ioservid_t id, int is_mmio,
> > +                                            uint64_t start, uint64_t end)
> > +{
> [...]
> > +}
>
> Those all look like reasonable layers over an underlying hypercall, so
> if the hypervisor guys are happy with the hypervisor interface then I'm
> happy with this.
>
Ok.
> > +
> > +int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch, domid_t domid,
> > +                                      ioservid_t id, uint16_t segment,
> > +                                      uint8_t bus, uint8_t device,
> > +                                      uint8_t function)
> > +{
> > +    DECLARE_HYPERCALL;
> > +    DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
> > +    int rc;
> > +
> > +    if (device > 0x1f || function > 0x7) {
> > +        errno = EINVAL;
> > +        return -1;
>
> I suppose without this HVMOP_PCI_SBDF will produce nonsense, which the
> hypervisor may or may not reject. Hopefully we aren't relying on this
> check here for any security properties.
>
I don't think there are any security implications. As you say it's just to make
sure the caller sees a failure if they pass in stupid values as opposed to them
just sitting there wondering why they are not seeing any ioreqs.
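For reference, HVMOP_PCI_SBDF packs the fields along these lines
(paraphrased from the public header, so treat the exact form as
approximate):

    #define HVMOP_PCI_SBDF(s, b, d, f)  \
        ((((s) & 0xffff) << 16) |       \
         (((b) & 0xff) << 8) |          \
         (((d) & 0x1f) << 3) |          \
          ((f) & 0x7))

The device field is only 5 bits wide and the function field only 3, so
out-of-range values would be silently masked rather than rejected - hence
the explicit check in the wrapper.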
> > +    }
> > +
> > +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> > +    if ( arg == NULL )
> > +        return -1;
> > +
> > +    hypercall.op = __HYPERVISOR_hvm_op;
> > +    hypercall.arg[0] = HVMOP_map_io_range_to_ioreq_server;
> > +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> > +
> > +    arg->domid = domid;
> > +    arg->id = id;
> > +    arg->type = HVMOP_IO_RANGE_PCI;
> > +    arg->start = arg->end = HVMOP_PCI_SBDF((uint64_t)segment,
>
> Since you have HVMOP_IO_RANGE_PCI do you not want to expose that via
> this interface?
>
I could have crunched this into the map_range function. I left it separate
because I thought it was more convenient for callers - who I think will most
likely deal with one PCI device at a time.
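A typical call would look something like this (the SBDF values are purely
illustrative):

    /* Map emulated PCI device 0000:00:05.0 to ioreq server 'id'. */
    rc = xc_hvm_map_pcidev_to_ioreq_server(xch, domid, id,
                                           0,  /* segment */
                                           0,  /* bus */
                                           5,  /* device */
                                           0); /* function */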
> > diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> > index bcb0ae0..af2bf3a 100644
> > --- a/tools/libxc/xc_domain_restore.c
> > +++ b/tools/libxc/xc_domain_restore.c
> > @@ -1748,6 +1770,29 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
> >      if (pagebuf.viridian != 0)
> >          xc_set_hvm_param(xch, dom, HVM_PARAM_VIRIDIAN, 1);
> >
> > +    /*
> > +     * If we are migrating in from a host that does not support
> > +     * secondary emulators then nr_ioreq_server_pages will be 0, since
> > +     * there will be no XC_SAVE_ID_HVM_NR_IOREQ_SERVER_PAGES chunk in
> > +     * the image.
> > +     * If we are migrating from a host that does support secondary
> > +     * emulators then the XC_SAVE_ID_HVM_NR_IOREQ_SERVER_PAGES chunk
> > +     * will exist and is guaranteed to have a non-zero value. The
> > +     * existence of that chunk also implies the existence of the
> > +     * XC_SAVE_ID_HVM_IOREQ_SERVER_PFN chunk, which is also guaranteed
> > +     * to have a non-zero value.
>
> Please can you also note this both-or-neither behaviour in
> xg_save_restore.h
Ok.
>
> > +     */
> > +    if (pagebuf.nr_ioreq_server_pages != 0) {
> > +        if (pagebuf.ioreq_server_pfn != 0) {
> > +            xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVER_PAGES,
> > +                             pagebuf.nr_ioreq_server_pages);
> > +            xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_SERVER_PFN,
> > +                             pagebuf.ioreq_server_pfn);
> > +        } else {
> > +            ERROR("ioreq_server_pfn is invalid");
>
> Or ioreq_server_pages was. Perhaps say they are inconsistent? Perhaps
> log their values?
Ok - I'll do that.
>
> > +        }
> > +    }
> else if (..server_pfn != 0)
> Also an error I think
>
> (there might be better ways to structure things to catch both error
> cases...)
>
>
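Ok. Something along these lines would catch both inconsistent cases, I
think (sketch only; exact error handling to be settled in v6):

    if ( pagebuf.nr_ioreq_server_pages != 0 &&
         pagebuf.ioreq_server_pfn != 0 ) {
        xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVER_PAGES,
                         pagebuf.nr_ioreq_server_pages);
        xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_SERVER_PFN,
                         pagebuf.ioreq_server_pfn);
    } else if ( pagebuf.nr_ioreq_server_pages != 0 ||
                pagebuf.ioreq_server_pfn != 0 ) {
        /* Exactly one of the two chunks was present or valid. */
        ERROR("inconsistent ioreq server state (nr_pages = %lu, pfn = %lu)",
              (unsigned long)pagebuf.nr_ioreq_server_pages,
              (unsigned long)pagebuf.ioreq_server_pfn);
    }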
> > +
> >      if (pagebuf.acpi_ioport_location == 1) {
> >          DBGPRINTF("Use new firmware ioport from the checkpoint\n");
> >          xc_set_hvm_param(xch, dom, HVM_PARAM_ACPI_IOPORTS_LOCATION, 1);
> > diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
> > index 71f9b59..acf3685 100644
> > --- a/tools/libxc/xc_domain_save.c
> > +++ b/tools/libxc/xc_domain_save.c
> > @@ -1737,6 +1737,30 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
> >              PERROR("Error when writing the viridian flag");
> >              goto out;
> >          }
> > +
> > +        chunk.id = XC_SAVE_ID_HVM_IOREQ_SERVER_PFN;
> > +        chunk.data = 0;
> > +        xc_get_hvm_param(xch, dom, HVM_PARAM_IOREQ_SERVER_PFN,
> > +                         (unsigned long *)&chunk.data);
> > +
> > +        if ( (chunk.data != 0) &&
> > +             wrexact(io_fd, &chunk, sizeof(chunk)) )
> > +        {
> > +            PERROR("Error when writing the ioreq server gmfn base");
> > +            goto out;
> > +        }
> > +
> > +        chunk.id = XC_SAVE_ID_HVM_NR_IOREQ_SERVER_PAGES;
> > +        chunk.data = 0;
> > +        xc_get_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVER_PAGES,
> > +                         (unsigned long *)&chunk.data);
> > +
> > +        if ( (chunk.data != 0) &&
> > +             wrexact(io_fd, &chunk, sizeof(chunk)) )
> > +        {
> > +            PERROR("Error when writing the ioreq server gmfn count");
> > +            goto out;
> > +        }
>
> Probably arranging to assert that both of these are either zero or
> non-zero is too much faff.
>
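If it did seem worthwhile, a minimal version might be (sketch only,
assuming the usual 'out' error path):

    unsigned long ioreq_pfn = 0, nr_ioreq_pages = 0;

    xc_get_hvm_param(xch, dom, HVM_PARAM_IOREQ_SERVER_PFN, &ioreq_pfn);
    xc_get_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVER_PAGES,
                     &nr_ioreq_pages);

    /* Both params should be zero, or both non-zero. */
    if ( (ioreq_pfn == 0) != (nr_ioreq_pages == 0) )
    {
        PERROR("Inconsistent ioreq server params");
        goto out;
    }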
> > @@ -502,6 +505,31 @@ static int setup_guest(xc_interface *xch,
> >                       special_pfn(SPECIALPAGE_SHARING));
> >
> > +    /*
> > +     * Allocate and clear additional ioreq server pages. The default
> > +     * server will use the IOREQ and BUFIOREQ special pages above.
> > +     */
> > +    for ( i = 0; i < NR_IOREQ_SERVER_PAGES; i++ )
> > +    {
> > +        xen_pfn_t pfn = ioreq_server_pfn(i);
> > +
> > +        rc = xc_domain_populate_physmap_exact(xch, dom, 1, 0, 0, &pfn);
> > +        if ( rc != 0 )
> > +        {
> > +            PERROR("Could not allocate %d'th ioreq server page.", i);
>
> This will say things like "1'th". "Could not allocate ioreq server page
> %d" avoids that.
Ok. It was a cut'n'paste from the special_pfn code just above. I'll fix that
while I'm in the neighbourhood.
>
> You could do this allocation all in one go if pfn was an array of
> [NR_IOREQ_SERVER_PAGES], you can still use order-0 allocations.
>
Ok.
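Something like this, presumably (sketch only; assuming the usual
setup_guest error path):

    xen_pfn_t pfns[NR_IOREQ_SERVER_PAGES];

    for ( i = 0; i < NR_IOREQ_SERVER_PAGES; i++ )
        pfns[i] = ioreq_server_pfn(i);

    /* One batched call, still order-0 extents as you suggest. */
    rc = xc_domain_populate_physmap_exact(xch, dom, NR_IOREQ_SERVER_PAGES,
                                          0, 0, pfns);
    if ( rc != 0 )
    {
        PERROR("Could not allocate ioreq server pages");
        goto error_out;
    }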
Paul
> Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel