
Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...



>>> On 03.01.18 at 13:19, <paul.durrant@xxxxxxxxxx> wrote:
> +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> +{
> +    struct domain *d = s->domain;
> +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> +
> +    if ( !iorp->page )
> +        return;
> +
> +    page_list_add_tail(iorp->page, &d->arch.hvm_domain.ioreq_server.pages);

Afaict s->domain is the guest, not the domain containing the
emulator. Hence this new model of freeing the pages is safe only
when the emulator domain is dead by the time the guest is being
cleaned up. I'm not only unaware of this being enforced anywhere,
but also unconvinced this is a desirable restriction: Why shouldn't
emulator domains be allowed to be long lived, servicing more than
a single guest if so desired by the admin?
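
To illustrate what I mean, here is a rough sketch (not meant to be the
exact code) of a teardown path that releases the page directly instead
of parking it on the guest's ioreq_server.pages list, so destroying the
server no longer depends on the guest's cleanup having run. It assumes
the page comes from the domain heap with the emulating domain as owner,
as sketched further down:

static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
{
    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
    struct page_info *page = iorp->page;

    if ( !page )
        return;

    iorp->page = NULL;

    /* Drop Xen's own (global) mapping of the page. */
    unmap_domain_page_global(iorp->va);
    iorp->va = NULL;

    /* Drop the allocation reference, then the writable type reference. */
    if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
        put_page(page);
    put_page_and_type(page);
}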

What additionally confuses me is the page ownership: Wasn't
the (original) intention to make the pages owned by the emulator
domain rather than the guest? I seem to recall you referring to
restrictions in do_mmu_update(), but a domain should always be
able to map pages it owns, shouldn't it?

Furthermore you continue to use Xen heap pages rather than
domain heap ones.
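
For both of the last two points, what I would have expected is something
along these lines (again only a sketch, not the code I'd insist on;
"s->emulator" is a hypothetical field standing for however the server
records the emulating domain):

static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
{
    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
    struct page_info *page;

    /* Domain heap page, owned by the emulating domain. */
    page = alloc_domheap_page(s->emulator, MEMF_no_refcount);
    if ( !page )
        return -ENOMEM;

    if ( !get_page_and_type(page, s->emulator, PGT_writable_page) )
    {
        /* Nothing else can know about the page yet, so this shouldn't fail. */
        domain_crash(s->emulator);
        return -ENODATA;
    }

    /* Xen's own mapping of the ioreq structures. */
    iorp->va = __map_domain_page_global(page);
    if ( !iorp->va )
    {
        put_page_and_type(page);
        return -ENOMEM;
    }

    iorp->page = page;
    clear_page(iorp->va);

    return 0;
}

That way the emulator domain would simply be mapping a page it owns.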

And finally I'm still missing the is_hvm_domain() check in
hvm_get_ioreq_server_frame(). Nor is there a NULL check for
the return value of get_ioreq_server(), as I notice only now.
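
I.e. I'd expect something like this at the top of the function (a sketch
against the structure visible in the patch; exact names may differ):

int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
                               unsigned long idx, mfn_t *mfn)
{
    struct hvm_ioreq_server *s;
    int rc;

    if ( !is_hvm_domain(d) )
        return -EINVAL;

    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);

    s = get_ioreq_server(d, id);

    rc = -ENOENT;
    if ( !s )
        goto out;

    /* ... existing frame lookup ... */

 out:
    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);

    return rc;
}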

Jan

