
Re: [Xen-devel] [PATCH v4.1 4/6] xen/x86: Allow stubdom access to irq created for msi.



On Fri, Mar 08, 2019 at 11:26:28AM +0100, Roger Pau Monné wrote:
> On Thu, Mar 07, 2019 at 11:28:25PM +0100, Marek Marczykowski-Górecki wrote:
> > On Thu, Mar 07, 2019 at 03:48:01PM +0100, Roger Pau Monné wrote:
> > > Another option would be to store which domains are given permissions
> > > over which irqs in a way that's optimized to get the list of domains
> > > given an irq (ie: without having to iterate over the list of domains
> > > like my proposed solution above).
> > 
> > This may make sense, but if that would be instead of the current
> > structure, we'd have another problem: when destroying stubdomain, you'd
> > need to get list IRQs it has access to, to explicitly revoke
> > stubdomain's access.
> 
> Indeed. You would have to come up with a structure that allows you to
> both get the list of irqs given a domain and get the domain list given
> an irq.
> 
> Maybe as a start you could expand irq_desc to contain the list of
> domains that have permission over the irq, and then use this
> information on cleanup?
> 
> The main issue with this approach is that you would also need to
> cleanup this information from irq_desc if the stubdomain is destroyed
> before destroying the IRQ.

In the case of stubdomains specifically, such cleanup wouldn't be that
hard, as I can get the list of IRQs by iterating over the pirqs of
d->target. Also, I can simply look up which IRQs such a stubdomain has
access to through d->irq_caps. But similar to the target->stubdomain
mapping idea, this would introduce a circular dependency, which needs to
be broken at some point. Adding irq_desc->dm_domain (*) is slightly
easier than the target->stubdomain mapping, as a single entry is enough
here, because a specific (stub)domain created this IRQ.
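
To make the single-entry variant concrete, something like the sketch
below - field placement and helper names are only illustrative, and
locking / refcounting details are omitted:

    /* Illustrative only: a single back-pointer from the IRQ descriptor
     * to the (stub)domain whose device model created the MSI. */
    struct irq_desc {
        ...
        struct domain *dm_domain;   /* NULL if no device model domain */
    };

    /* When the device model maps the MSI, after irq_permit_access(): */
    struct domain *dm = current->domain;

    get_knownalive_domain(dm);      /* this is the cycle to break later */
    desc->dm_domain = dm;

    /* In destroy_irq(), regardless of who triggered the unmap: */
    if ( desc->dm_domain )
    {
        irq_deny_access(desc->dm_domain, irq);
        put_domain(desc->dm_domain);
        desc->dm_domain = NULL;
    }

    /* In domain_kill(stubdom), break the cycle by walking the target's
     * pirqs, as described above: */
    unsigned int pirq;
    int irq;

    if ( stubdom->target )
        for ( pirq = 0; pirq < stubdom->target->nr_pirqs; pirq++ )
        {
            irq = domain_pirq_to_irq(stubdom->target, pirq);
            if ( irq <= 0 || irq_to_desc(irq)->dm_domain != stubdom )
                continue;
            irq_to_desc(irq)->dm_domain = NULL;
            put_domain(stubdom);
        }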

On the other hand, having a generic mechanism that revokes IRQ
permissions during destroy_irq(), regardless of where
irq_permit_access() was called, would be nice. A generic mechanism would
require a domain list in irq_desc, not a single entry. Is it worth it?
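
For comparison, a rough sketch of the generic variant (the entry type
and list name are made up for illustration):

    /* Illustrative only: track every grant on the descriptor itself, so
     * destroy_irq() can revoke them all without knowing who called
     * irq_permit_access().  irq_desc would gain a
     * "struct list_head access_list;" member. */
    struct irq_access_entry {
        struct list_head list;
        domid_t domid;
    };

    /* Tail of destroy_irq(): */
    struct irq_access_entry *entry, *tmp;

    list_for_each_entry_safe ( entry, tmp, &desc->access_list, list )
    {
        struct domain *d = rcu_lock_domain_by_id(entry->domid);

        if ( d )        /* a stale domid means nothing left to revoke */
        {
            irq_deny_access(d, irq);
            rcu_unlock_domain(d);
        }
        list_del(&entry->list);
        xfree(entry);
    }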

(*) Regarding the dm_domain name: if qemu is running in dom0, then dom0
is the device model domain in this case.

> > In theory it could be done by looking at the target
> > domain and iterating over its IRQs, but this is getting more and more
> > complex.
> > 
> > I think the tricky part is unmap_domain_pirq->msi_free_irq, which can be
> > called:
> > 1. from PHYSDEVOP_unmap_pirq, by stubdomain
> > 2. from PHYSDEVOP_unmap_pirq, by dom0 when device model runs there
> > 3. from PHYSDEVOP_unmap_pirq, by dom0 even with device model in
> > stubdomain - normally it shouldn't happen for MSI allocated by
> > stubdomain, but dom0 is allowed to do so, and it shouldn't cause any
> > harm
> > 4. from free_domain_pirqs, during domain destruction
> > 5. various error paths
> > 
> > If unmap_domain_pirq knew where the device model is running, even if
> > not called by it, that would help. What about adding a back reference
> > in struct domain to a stubdomain? That would help a lot here. The only
> > problem is a circular dependency stubdom->target->stubdom. This cycle
> > would need to be broken during stubdom teardown. domain_kill(stubdom)
> > looks like a good place to break it. Is it guaranteed to be called, or
> > is there some other path to destroying a stubdomain?
> 
> A stubdomain AFAICT is handled like other domains from a hypervisor
> PoV, there's no distinction between guests and stubdomains inside the
> hypervisor, so I think domain_kill would be an appropriate place.

As Jan said in the other message, this could also be solved by storing
a domain id instead of a pointer, because it's easy to detect whether
that domain id is stale.
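
To make that concrete, a sketch of how I understand Jan's suggestion
(dm_domid is an illustrative field name):

    /* Illustrative only: store a domid_t, so no reference is held and a
     * dead stubdomain needs no explicit cleanup in irq_desc. */
    if ( desc->dm_domid != DOMID_INVALID )
    {
        struct domain *dm = rcu_lock_domain_by_id(desc->dm_domid);

        if ( dm )       /* the lookup simply fails for a stale domid */
        {
            irq_deny_access(dm, irq);
            rcu_unlock_domain(dm);
        }
        desc->dm_domid = DOMID_INVALID;
    }

The remaining corner case would be a recycled domid: the revocation
would then hit an unrelated new domain, but removing a permission it
never had should be a no-op.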

> > Can one HVM domain have multiple stubdomains? If so, it should be a
> > list, not a single field.
> 
> Yes, I think that's a supported setup. You can register multiple ioreq
> servers handling accesses for a single domain, and there's no
> restriction that forces running all those ioreq servers in the same
> control domain.

I was looking at it, and adding a list of struct domain pointers would
be easy, but it would require breaking the circular dependency during
teardown (in domain_kill), and it would add yet another struct list_head
to struct domain. On the other hand, keeping a list of domain ids would
require a wrapper structure or similar - I don't see any API in Xen for
storing a list of plain integers (domid_t specifically). Using an array
for something that should really be a list looks clumsy - all the
searching for the first free slot, skipping empty entries, etc. Maybe a
rangeset could be used for that?
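
If a rangeset is acceptable for this, it could look roughly like the
following (dm_domids is an illustrative name, error handling omitted):

    /* Illustrative only: the target domain keeps a rangeset with the
     * domids of its device model domains, instead of a list of struct
     * domain pointers or an open-coded domid_t array.  struct domain
     * would gain a "struct rangeset *dm_domids;" member. */

    /* At domain creation: */
    d->dm_domids = rangeset_new(d, "dm-domids", 0);

    /* In DOMCTL_set_target: */
    rc = rangeset_add_singleton(tgt->dm_domids, stubdom->domain_id);

    /* In domain_kill(stubdom): drop the back-reference. */
    if ( stubdom->target )
        rangeset_remove_singleton(stubdom->target->dm_domids,
                                  stubdom->domain_id);

rangeset_report_ranges() could then be used wherever the full list of
device model domains is needed.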

> Similarly you could have a single domain that has control permissions
> over multiple domains, ie: like the hardware domain. domain->target
> should likely be a list also in order to support this use case, but I
> guess no one has yet required such a use case.

DOMCTL_set_target currently returns -EINVAL if you try to assign a
second stubdomain to an HVM domain.

> But maybe I'm just overdesigning this when there's no use-case of a
> domain having multiple stubdomains, or a stubdomain serving multiple
> domains.
> 
> Maybe it's enough to have a clear statement of the scope of this
> mechanism and its current limitations:
> 
>  - A domain different from the hardware domain can only have control
>    permissions over a single other domain.
> 
>  - When a domain with passed through devices that have mediators
>    running in a domain different than the hardware domain is destroyed,
>    the domain running those mediators must have been destroyed
>    beforehand.

Yes, both of those are true, even without my patches.

> With those limitations in mind I think you could then use
> get_dm_domain in destroy_irq. IMO I think this is fragile, but would
> be enough to solve the issue you are currently facing.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
