
Re: [Xen-devel] IOREQ server on Arm



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 26 September 2018 11:58
> To: Julien Grall <julien.grall@xxxxxxx>; Paul Durrant
> <Paul.Durrant@xxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Roger Pau Monne
> <roger.pau@xxxxxxxxxx>; Stefano Stabellini <sstabellini@xxxxxxxxxx>;
> xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Subject: RE: IOREQ server on Arm
> 
> >>> On 26.09.18 at 12:51, <Paul.Durrant@xxxxxxxxxx> wrote:
> >>  -----Original Message-----
> >> From: Julien Grall [mailto:julien.grall@xxxxxxx]
> >> Sent: 26 September 2018 11:41
> >> To: Jan Beulich <JBeulich@xxxxxxxx>; Paul Durrant
> >> <Paul.Durrant@xxxxxxxxxx>
> >> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Roger Pau Monne
> >> <roger.pau@xxxxxxxxxx>; Stefano Stabellini <sstabellini@xxxxxxxxxx>;
> >> xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> >> Subject: Re: IOREQ server on Arm
> >>
> >> Hi Jan,
> >>
> >> On 09/26/2018 09:08 AM, Jan Beulich wrote:
> >> >>>> On 26.09.18 at 00:39, <julien.grall@xxxxxxx> wrote:
> >> >> Hi Paul,
> >> >>
> >> >> I am looking at porting the IOREQ server infrastructure to Arm. I
> >> >> didn't need many modifications to make it run on Arm, although the
> >> >> implementation could be simplified compared to the x86 one.
> >> >>
> >> >> I noticed an issue while trying to implement the hypercall
> >> >> XENMEM_acquire_resource. Per my understanding, all the pages mapped
> >> >> via that hypercall will use the type p2m_mapping_foreign.
> >> >>
> >> >> This results in triggering the ASSERT(fdom != dom) in
> >> >> get_page_from_gfn() (asm-arm/p2m.h), because the IOREQ page has been
> >> >> allocated to the emulator domain and is mapped into it. AFAICT x86
> >> >> has the same assert in p2m_get_page_from_gfn(...).
> >> >>
> >> >> IMHO, the ASSERT makes sense because you are only meant to map pages
> >> >> belonging to other domains with that type.
> >> >>
> >> >> So I am wondering whether an IOREQ server running in PVH Dom0 has
> >> >> been tested. What would be the best course of action to fix the
> >> >> issue?
> >> >
> >> > I think the p2m type needs to be chosen based on
> >> > XENMEM_rsrc_acq_caller_owned.
> >>
> >> I am thinking of introducing p2m_mapping_owned. Or do we have a
> >> p2m_type that we could re-use?
> >>
> >
> > I think we should be able to just use p2m_ram_rw if it is caller owned.
> 
> Yes, that's what I too would have thought. If there ever was a resource
> which may only be mapped r/o, p2m_ram_ro should then be equally usable.
> 

Yes, that's true. The only existing resources are read-write, though, so I
guess another flag passed back to the caller could be used to indicate a
read-only resource.
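
For illustration, one shape this could take is below. This is a sketch only:
the XENMEM_rsrc_acq_read_only name and the helper are hypothetical, nothing
like them exists in the public headers today.

/* Hypothetical OUT flag, mirroring the existing
 * XENMEM_rsrc_acq_caller_owned; not part of the real ABI. */
#define XENMEM_rsrc_acq_read_only (1u << 1)

/* Pick the p2m type for mapping a caller-owned resource page, based on
 * the flags the resource-specific acquire handler reported back. Only
 * meaningful when XENMEM_rsrc_acq_caller_owned is also set. */
static p2m_type_t owned_resource_p2m_type(uint32_t flags)
{
    return (flags & XENMEM_rsrc_acq_read_only) ? p2m_ram_ro : p2m_ram_rw;
}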

I was thinking along the lines of a patch like this:

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 996f94b103..82c18fa9ad 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1105,8 +1105,11 @@ static int acquire_resource(

         for ( i = 0; !rc && i < xmar.nr_frames; i++ )
         {
-            rc = set_foreign_p2m_entry(currd, gfn_list[i],
-                                       _mfn(mfn_list[i]));
+            rc = (xmar.flags & XENMEM_rsrc_acq_caller_owned) ?
+                guest_physmap_add_entry(currd, _gfn(gfn_list[i]),
+                                        _mfn(mfn_list[i]), 0, p2m_ram_rw) :
+                set_foreign_p2m_entry(currd, gfn_list[i],
+                                      _mfn(mfn_list[i]));
             /* rc should be -EIO for any iteration other than the first */
             if ( rc && i )
                 rc = -EIO;

But guest_physmap_add_entry() is problematic, as it will map the pages in the
IOMMU as well, which is probably not wanted.
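
One possible way around that is sketched below. Purely illustrative:
set_owned_p2m_entry() is a hypothetical helper modelled on x86's
set_foreign_p2m_entry(), and whether it actually avoids the IOMMU mapping
would need auditing per architecture.

/* Hypothetical helper, not in the tree: like x86's
 * set_foreign_p2m_entry(), but installing a p2m_ram_rw entry for a
 * caller-owned page instead of going through guest_physmap_add_entry().
 * Whether the arch's set_entry handler still updates the IOMMU for this
 * type would need checking. x86 signatures assumed throughout. */
static int set_owned_p2m_entry(struct domain *d, unsigned long gfn,
                               mfn_t mfn)
{
    struct p2m_domain *p2m = p2m_get_hostp2m(d);
    int rc;

    gfn_lock(p2m, gfn, PAGE_ORDER_4K);
    rc = p2m_set_entry(p2m, _gfn(gfn), mfn, PAGE_ORDER_4K,
                       p2m_ram_rw, p2m->default_access);
    gfn_unlock(p2m, gfn, PAGE_ORDER_4K);

    return rc;
}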

  Paul 

> Jan
> 


