
Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages



> -----Original Message-----
> From: Chao Gao [mailto:chao.gao@xxxxxxxxx]
> Sent: 12 December 2017 23:39
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Tim
> (Xen.org) <tim@xxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> xen-devel@xxxxxxxxxxxxx; Jan Beulich <jbeulich@xxxxxxxx>; Ian Jackson
> <Ian.Jackson@xxxxxxxxxx>
> Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages
> 
> On Tue, Dec 12, 2017 at 09:07:46AM +0000, Paul Durrant wrote:
> >> -----Original Message-----
> >[snip]
> >>
> >> Hi, Paul.
> >>
> >> I merged the two QEMU patches and the privcmd patch [1], and did some tests.
> >> I encountered a small issue and am reporting it so you can pay more
> >> attention to it when testing. The symptom is that using the new
> >> interface to map the grant table in xc_dom_gnttab_seed() always fails. After
> >> adding some printk in privcmd, I found it is
> >> xen_remap_domain_gfn_array() that fails with errcode -16. Mapping the ioreq
> >> server doesn't have such an issue.
> >>
> >> [1]
> >>
> >> http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=ce59a05e6712
> >>
> >
> >Chao,
> >
> >  That privcmd patch is out of date. I've just pushed a new one:
> >
> >http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=9f00199f5f12cef401c6370c94a1140de9b318fc
> >
> >  Give that a try. I've been using it for a few weeks now.
> 
> Mapping ioreq server always fails, while mapping grant table succeeds.
> 
> QEMU fails with the following log:
> xenforeignmemory: error: ioctl failed: Device or resource busy
> qemu-system-i386: failed to map ioreq server resources: error 16 handle=0x5614a6df5e00
> qemu-system-i386: xen hardware virtual machine initialisation failed
> 
> Xen encountered the following error:
> (XEN) [13118.909787] mm.c:1003:d0v109 pg_owner d2 l1e_owner d0, but real_pg_owner d0
> (XEN) [13118.918122] mm.c:1079:d0v109 Error getting mfn 5da5841 (pfn ffffffffffffffff) from L1 entry 8000005da5841227 for l1e_owner d0, pg_owner d2

Hmm. That looks like it is because the ioreq server pages are not owned by the 
correct domain. The Xen patch series underwent some changes late in review, and 
I did not re-test my QEMU patch afterwards, so I wonder whether mapping IOREQ 
pages has simply become broken. I'll investigate.

  Paul

> 
> I only fixed some obvious issues with a patch to your privcmd patch:
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -181,7 +181,7 @@ int xen_remap_domain_gfn_range(struct vm_area_struct *vma,
>         if (xen_feature(XENFEAT_auto_translated_physmap))
>                 return -EOPNOTSUPP;
> 
> -       return do_remap_gfn(vma, addr, &gfn, nr, NULL, prot, domid, pages);
> +       return do_remap_pfn(vma, addr, &gfn, nr, NULL, prot, domid, false,
> +                           pages);
>  }
>  EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_range);
> 
> @@ -200,8 +200,8 @@ int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
>          * cause of "wrong memory was mapped in".
>          */
>         BUG_ON(err_ptr == NULL);
> -        do_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
> -                    false, pages);
> +       return do_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
> +                       false, pages);
>  }
>  EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_array);
> 
> Thanks
> Chao

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

