
Re: [Xen-devel] [PATCH v16 06/11] x86/hvm/ioreq: add a new mappable resource type...



> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf
> Of Jan Beulich
> Sent: 20 December 2017 16:35
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson
> <Ian.Jackson@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Julien Grall
> <julien.grall@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v16 06/11] x86/hvm/ioreq: add a new
> mappable resource type...
> 
> >>> On 15.12.17 at 11:41, <paul.durrant@xxxxxxxxxx> wrote:
> > +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > +{
> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > +
> > +    if ( iorp->page )
> > +    {
> > +        /*
> > +         * If a guest frame has already been mapped (which may happen
> > +         * on demand if hvm_get_ioreq_server_info() is called), then
> > +         * allocating a page is not permitted.
> > +         */
> > +        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> > +            return -EPERM;
> > +
> > +        return 0;
> > +    }
> > +
> > +    iorp->va = alloc_xenheap_page();
> > +    if ( !iorp->va )
> > +        return -ENOMEM;
> > +
> > +    clear_page(iorp->va);
> > +
> > +    iorp->page = virt_to_page(iorp->va);
> > +    share_xen_page_with_guest(iorp->page, s->domain,
> XENSHARE_writable);
> > +    return 0;
> > +}
> 
> Why the much more limited (on huge systems) Xen heap all of the
> sudden?

Largely, I'm trying to follow the same procedure as is used for the grant 
tables. Also, Xen will always need a mapping of these pages, so using the 
xenheap is convenient. If you think that's too limited then I can go back to 
the domheap (allocating for the target domain rather than the tools domain) 
and map the page into Xen explicitly.
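For illustration, the domheap alternative might look something like the sketch 
below. The names follow Xen's internal API, but the choice of memflags and the 
error handling are assumptions on my part, not a tested patch:

```c
static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
{
    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
    struct page_info *page;

    if ( iorp->page )
        /* A guest frame mapped on demand forbids allocation. */
        return gfn_eq(iorp->gfn, INVALID_GFN) ? 0 : -EPERM;

    /* Allocate from the target domain's heap rather than the xenheap. */
    page = alloc_domheap_page(s->domain, MEMF_no_refcount);
    if ( !page )
        return -ENOMEM;

    /* Give Xen an explicit (global) mapping of the page. */
    iorp->va = __map_domain_page_global(page);
    if ( !iorp->va )
    {
        free_domheap_page(page);
        return -ENOMEM;
    }

    iorp->page = page;
    clear_page(iorp->va);

    return 0;
}
```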

> 
> > +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > +{
> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > +
> > +    if ( !iorp->page )
> > +        return;
> > +
> > +    iorp->page = NULL;
> > +
> > +    free_xenheap_page(iorp->va);
> > +    iorp->va = NULL;
> > +}
> 
> I've looked over the code paths coming here, and I can't convince
> myself that any mapping that the server has established would be
> gone by the time the page is being freed. I'm likely (hopefully)
> overlooking some aspect here.
> 

Hmm. Maybe you're right. The lack of ref counting might be a problem. It was so 
much simpler to allocate from the tools domain's heap, but the restrictions in 
do_mmu_update() rule that out. I'm really not sure how to fix this.
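One possible shape for a fix, purely as a sketch: have the allocation take 
general and writable-type references via get_page_and_type(), so any mapping 
the emulator establishes holds its own reference and the free path merely 
drops the server's. The names below are Xen internals; the scheme itself is 
untested:

```c
static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
{
    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
    struct page_info *page = iorp->page;

    if ( !page )
        return;

    iorp->page = NULL;

    unmap_domain_page_global(iorp->va);
    iorp->va = NULL;

    /*
     * Assumes the allocation path took a general plus writable-type
     * reference; any mapping the emulator still holds has its own
     * reference, so the page only goes back to the allocator once
     * the last of those is dropped.
     */
    put_page_and_type(page);
}
```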

> > +int arch_acquire_resource(struct domain *d, unsigned int type,
> > +                          unsigned int id, unsigned long frame,
> > +                          unsigned int nr_frames, xen_pfn_t mfn_list[])
> > +{
> > +    int rc;
> > +
> > +    switch ( type )
> > +    {
> > +    case XENMEM_resource_ioreq_server:
> > +    {
> > +        ioservid_t ioservid = id;
> > +        unsigned int i;
> > +
> > +        rc = -EINVAL;
> > +        if ( id != (unsigned int)ioservid )
> > +            break;
> > +
> > +        rc = 0;
> > +        for ( i = 0; i < nr_frames; i++ )
> > +        {
> > +            mfn_t mfn;
> > +
> > +            rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
> 
> Neither up from here nor in the called function is it checked
> that d is actually an HVM domain.

Yes, that's an oversight.
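Something along these lines at the top of the case would plug the hole 
(sketch only; whether -EOPNOTSUPP is the right error value here is an 
assumption):

```c
    case XENMEM_resource_ioreq_server:
    {
        ioservid_t ioservid = id;
        unsigned int i;

        /* IOREQ servers only exist for HVM guests. */
        rc = -EOPNOTSUPP;
        if ( !is_hvm_domain(d) )
            break;

        rc = -EINVAL;
        if ( id != (unsigned int)ioservid )
            break;
        ...
```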

  Paul

> 
> Jan
> 
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

