
Re: [Xen-devel] [PATCH v6 11/11] xen, common: do not implicitly permit access to mapped I/O memory



>>> On 28.04.14 at 15:33, <Ian.Campbell@xxxxxxxxxxxxx> wrote:
> On Tue, 2014-04-22 at 10:12 +0100, Jan Beulich wrote:
>> >>> On 21.04.14 at 15:45, <avanzini.arianna@xxxxxxxxx> wrote:
>> >  tools/libxl/libxl_create.c | 17 +++++++++++++++++
>> >  tools/libxl/libxl_pci.c    | 24 ++++++++++++++++++------
>> 
>> I doubt that doing it at this layer is correct: There are quite a few uses
>> of xc_domain_memory_mapping() in qemu,
> 
> Here the toolstack grants permission to access the I/O mem etc
> associated with a PCI device. Then later qemu uses it via the mapping.

Am I reading this right to mean that qemu doesn't really care about
the current side effect of granting permissions? In that case the
change would seem to be okay.
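
To illustrate the split, an untested sketch -- the two xc_* calls are
the real libxc functions, while the wrapper names and arguments are
placeholders:

    /* Toolstack side (libxl): grant permission only; no mapping here. */
    static int grant_mmio_access(libxl_ctx *ctx, uint32_t domid,
                                 unsigned long first_mfn,
                                 unsigned long nr_mfns)
    {
        /* The toolstack now grants access explicitly, instead of
         * XEN_DOMCTL_memory_mapping doing it as a side effect. */
        return xc_domain_iomem_permission(ctx->xch, domid, first_mfn,
                                          nr_mfns, 1 /* allow */);
    }

    /* Device model side (qemu): establish the p2m mapping, relying on
     * the explicit grant above rather than the old side effect. */
    static int map_mmio(xc_interface *xch, uint32_t domid,
                        unsigned long first_gfn, unsigned long first_mfn,
                        unsigned long nr_mfns)
    {
        return xc_domain_memory_mapping(xch, domid, first_gfn, first_mfn,
                                        nr_mfns, DPCI_ADD_MAPPING);
    }

If qemu only ever relies on the mapping and never on the permission
side effect, only the ordering of the two calls matters.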

>> so I think as a first step you should retain libxc functionality
>> unchanged by issuing both domctls there.
> 
> This doesn't fit very well with non-PCI passthrough of devices on ARM,
> where there is no qemu -- i.e. the direct iomem=[list] stuff which
> Arianna is trying to make work with this series. Or does it?

Why would it not? How the resources got specified (iomem= or
pci=) shouldn't matter here.
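
Concretely, the first step suggested above would look something like
this (untested sketch, modelled on the current implementation of
xc_domain_memory_mapping() in libxc):

    int xc_domain_memory_mapping(xc_interface *xch, uint32_t domid,
                                 unsigned long first_gfn,
                                 unsigned long first_mfn,
                                 unsigned long nr_mfns,
                                 uint32_t add_mapping)
    {
        DECLARE_DOMCTL;
        int rc;

        /* Grant (or revoke) access explicitly, now that the hypervisor
         * no longer does so as a side effect of the mapping domctl. */
        rc = xc_domain_iomem_permission(xch, domid, first_mfn, nr_mfns,
                                        add_mapping ? 1 : 0);
        if ( rc < 0 )
            return rc;

        /* Then issue the mapping domctl exactly as before. */
        domctl.cmd = XEN_DOMCTL_memory_mapping;
        domctl.domain = domid;
        domctl.u.memory_mapping.first_gfn = first_gfn;
        domctl.u.memory_mapping.first_mfn = first_mfn;
        domctl.u.memory_mapping.nr_mfns = nr_mfns;
        domctl.u.memory_mapping.add_mapping = add_mapping;

        return do_domctl(xch, &domctl);
    }

That way existing callers (qemu included) would see no behavioral
change, regardless of whether the resources came from iomem= or pci=.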

>> > --- a/tools/libxl/libxl_create.c
>> > +++ b/tools/libxl/libxl_create.c
>> > @@ -1151,6 +1151,23 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
>> >              libxl__spawn_stub_dm(egc, &dcs->dmss);
>> >          else
>> >              libxl__spawn_local_dm(egc, &dcs->dmss.dm);
>> > +
>> > +        /*
>> > +         * If VGA passthru is enabled by domain config, be sure that the
>> > +         * domain can access VGA-related iomem regions.
>> > +         */
>> > +        if (d_config->b_info.u.hvm.gfx_passthru.val) {
>> > +            uint64_t vga_iomem_start = 0xa0000 >> XC_PAGE_SHIFT;
>> > +            ret = xc_domain_iomem_permission(CTX->xch, domid,
>> > +                                             vga_iomem_start, 0x20, 1);
>> > +            if (ret < 0) {
>> > +                LOGE(ERROR,
>> > +                     "failed to give dom%d access to iomem range "
>> > +                     "%"PRIx64"-%"PRIx64" for VGA passthru",
>> > +                     domid, vga_iomem_start, (vga_iomem_start + 0x20 - 1));
>> > +                goto error_out;
>> > +            }
>> > +        }
>> 
>> Considering that this code isn't being moved here - what's the
>> rationale for the sudden need to special case VGA pass-through
>> here?
> 
> I suppose it is because VGA has these "magic" regions, which used to
> be granted permission implicitly via xc_domain_memory_mapping and now
> aren't, so the toolstack has to grant them explicitly (which is the
> general goal of the patch).

If there was anything implicit previously, pointing out where that
happened (perhaps in the commit message, so others can understand it
in the future too) would basically address that question.
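
For reference, the implicit grant being removed sits in the hypervisor's
XEN_DOMCTL_memory_mapping handler; roughly (quoting from memory of
xen/arch/x86/domctl.c, so details may differ by version):

    case XEN_DOMCTL_memory_mapping:
        /* ... argument extraction and permission/range checks ... */
        if ( add )
        {
            /* The implicit grant this series removes: */
            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
            if ( !ret )
                ret = establish_p2m_mappings(d, gfn, mfn, nr_mfns);
            /* (establish_p2m_mappings is a placeholder; the real code
             * sets up the p2m entries inline) */
        }
        /* ... */
        break;

As to why VGA needs special-casing in libxl: the window granted in the
hunk above -- 0xa0000 >> XC_PAGE_SHIFT for 0x20 pages, i.e.
0xa0000-0xbffff -- is presumably not described by any PCI BAR, so the
pci= path never grants it, and with the implicit grant gone someone
has to do so explicitly.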

>> > @@ -886,14 +889,14 @@ static int do_pci_add(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev, i
>> >                  continue;
>> >              size = end - start + 1;
>> >              if (start) {
>> > -                if (flags & PCI_BAR_IO) {
>> > +                if ((flags & PCI_BAR_IO) && !hvm) {
>> >                      rc = xc_domain_ioport_permission(ctx->xch, domid, start, size, 1);
>> 
>> If this whole thing gets unified, treating I/O ports and MMIO differently
>> isn't an optimal choice.
> 
> Where are these granted for x86 HVM guests today? I don't see qemu
> calling this function, so I must be missing something.

Right here, I thought: what the patch does is _exclude_ HVM from
being given the permission here (in favor of doing it elsewhere), and
all I was trying to understand was whether we really need to go down
the same route as in xend again, where for everything there are two
places where things get done.
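
To make that concrete: rather than special-casing HVM, granting both
resource types in the one place, for PV and HVM alike, could look
something like this (untested sketch along the lines of the existing
do_pci_add() code):

    if (start) {
        if (flags & PCI_BAR_IO)
            /* I/O port BAR: grant here for every guest type. */
            rc = xc_domain_ioport_permission(ctx->xch, domid,
                                             start, size, 1);
        else
            /* MMIO BAR: likewise grant here, rather than relying on
             * the mapping path to do it somewhere else. */
            rc = xc_domain_iomem_permission(ctx->xch, domid,
                                            start >> XC_PAGE_SHIFT,
                                            (size + (XC_PAGE_SIZE - 1))
                                                >> XC_PAGE_SHIFT,
                                            1);
    }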

Jan

