
Re: [Xen-devel] [PATCH] docs/design: introduce HVMMEM_ioreq_serverX types



> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
> Sent: 26 February 2016 04:25
> To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-devel] [PATCH] docs/design: introduce
> HVMMEM_ioreq_serverX types
> 
> > From: Paul Durrant
> > Sent: Thursday, February 25, 2016 11:49 PM
> >
> > This patch adds a new 'designs' subdirectory under docs as a repository
> > for this and future design proposals.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > ---
> >
> > For convenience this document can also be viewed in PDF at:
> >
> > http://xenbits.xen.org/people/pauldu/hvmmem_ioreq_server.pdf
> > ---
> >  docs/designs/hvmmem_ioreq_server.md | 63 +++++++++++++++++++++++++++++++++++++
> >  1 file changed, 63 insertions(+)
> >  create mode 100755 docs/designs/hvmmem_ioreq_server.md
> >
> > diff --git a/docs/designs/hvmmem_ioreq_server.md b/docs/designs/hvmmem_ioreq_server.md
> > new file mode 100755
> > index 0000000..47fa715
> > --- /dev/null
> > +++ b/docs/designs/hvmmem_ioreq_server.md
> > @@ -0,0 +1,63 @@
> > +HVMMEM\_ioreq\_serverX
> > +----------------------
> > +
> > +Background
> > +==========
> > +
> > +The concept of the IOREQ server was introduced to allow multiple
> > +distinct device emulators to attach to a single VM. The XenGT project
> > +uses an IOREQ server to provide mediated pass-through of Intel GPUs to
> > +guests and, as part of the mediation, needs to intercept accesses to
> > +GPU page-tables (or GTTs) that reside in guest RAM.
> > +
> > +The current implementation of this sets the type of GTT pages to type
> > +HVMMEM\_mmio\_write\_dm, which causes Xen to emulate writes to such
> > +pages, and then maps the guest physical addresses of those pages to
> > +the XenGT IOREQ server using the
> > +HVMOP\_map\_io\_range\_to\_ioreq\_server hypercall. However, because
> > +the number of GTTs is potentially large, using this approach does not
> > +scale well.
> > +
> > +Proposal
> > +========
> > +
> > +Because the number of spare types available in the P2M type-space is
> > +currently very limited, it is proposed that HVMMEM\_mmio\_write\_dm be
> > +replaced by a single new type HVMMEM\_ioreq\_server. In future, if the
> > +P2M type-space is increased, this can be renamed to
> > +HVMMEM\_ioreq\_server0 and new HVMMEM\_ioreq\_server1,
> > +HVMMEM\_ioreq\_server2, etc. types can be added.
> > +
> > +Accesses to a page of type HVMMEM\_ioreq\_serverX should be the same
> > +as HVMMEM\_ram\_rw until the type is _claimed_ by an IOREQ server.
> > +Furthermore, it should only be possible to set the type of a page to
> > +HVMMEM\_ioreq\_serverX if that page is currently of type
> > +HVMMEM\_ram\_rw.
> 
> Is there a similar assumption on the opposite change, i.e. from
> ioreq_serverX only to ram_rw?
> 

Yes, I will call that out.
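
To make that concrete, the rule I have in mind is that the only
permitted transitions involving the new type are HVMMEM_ram_rw ->
HVMMEM_ioreq_serverX and HVMMEM_ioreq_serverX -> HVMMEM_ram_rw. Purely
as an illustration, a stand-alone sketch of that check (not actual Xen
code; the enum values below are simplified stand-ins for the real
HVMMEM_* definitions):

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum {
        MEM_RAM_RW,        /* stand-in for HVMMEM_ram_rw */
        MEM_IOREQ_SERVER,  /* stand-in for HVMMEM_ioreq_serverX */
        MEM_MMIO_DM,       /* some other, unrelated type */
    } mem_type_t;

    /* Validate a type change involving the ioreq_server type: a page
     * may only become MEM_IOREQ_SERVER if it is currently MEM_RAM_RW,
     * and may only leave MEM_IOREQ_SERVER by returning to MEM_RAM_RW.
     * Changes not involving MEM_IOREQ_SERVER are out of scope here. */
    static bool ioreq_server_type_change_ok(mem_type_t from, mem_type_t to)
    {
        if (to == MEM_IOREQ_SERVER)
            return from == MEM_RAM_RW;
        if (from == MEM_IOREQ_SERVER)
            return to == MEM_RAM_RW;
        return true;
    }

    int main(void)
    {
        printf("ram_rw -> ioreq_server: %d\n",
               ioreq_server_type_change_ok(MEM_RAM_RW, MEM_IOREQ_SERVER));
        printf("mmio_dm -> ioreq_server: %d\n",
               ioreq_server_type_change_ok(MEM_MMIO_DM, MEM_IOREQ_SERVER));
        return 0;
    }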

> > +
> > +To allow an IOREQ server to claim or release a claim to a type, a new
> > +pair of hypercalls will be introduced:
> > +
> > +- HVMOP\_map\_mem\_type\_to\_ioreq\_server
> > +- HVMOP\_unmap\_mem\_type\_from\_ioreq\_server
> > +
> > +and an associated argument structure:
> > +
> > +    struct hvm_ioreq_mem_type {
> > +        domid_t domid;      /* IN - domain to be serviced */
> > +        ioservid_t id;      /* IN - server id */
> > +        hvmmem_type_t type; /* IN - memory type */
> > +        uint32_t flags;     /* IN - types of access to be
> > +                               intercepted */
> > +
> > +    #define _HVMOP_IOREQ_MEM_ACCESS_READ 0
> > +    #define HVMOP_IOREQ_MEM_ACCESS_READ \
> > +            (1 << _HVMOP_IOREQ_MEM_ACCESS_READ)
> > +
> > +    #define _HVMOP_IOREQ_MEM_ACCESS_WRITE 1
> > +    #define HVMOP_IOREQ_MEM_ACCESS_WRITE \
> > +            (1 << _HVMOP_IOREQ_MEM_ACCESS_WRITE)
> > +
> > +    };
> > +
> > +
> > +Once the type has been claimed, the requested types of access to any
> > +page of the claimed type will be passed to the IOREQ server for
> > +handling. Only HVMMEM\_ioreq\_serverX types may be claimed.
> > --
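
As an aside, this is roughly how I'd expect an emulator such as XenGT
to populate the new argument structure (a self-contained sketch only:
the typedefs, the placeholder type value and issue_map_hypercall() are
stand-ins for illustration, not real interfaces):

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in typedefs; the real ones come from the Xen public headers. */
    typedef uint16_t domid_t;
    typedef uint16_t ioservid_t;
    typedef uint16_t hvmmem_type_t;

    /* Placeholder value for the proposed HVMMEM_ioreq_server type. */
    #define HVMMEM_ioreq_server ((hvmmem_type_t)6)

    #define _HVMOP_IOREQ_MEM_ACCESS_WRITE 1
    #define HVMOP_IOREQ_MEM_ACCESS_WRITE \
            (1 << _HVMOP_IOREQ_MEM_ACCESS_WRITE)

    struct hvm_ioreq_mem_type {
        domid_t domid;      /* IN - domain to be serviced */
        ioservid_t id;      /* IN - server id */
        hvmmem_type_t type; /* IN - memory type */
        uint32_t flags;     /* IN - types of access to be intercepted */
    };

    /* Placeholder for whatever ends up issuing
     * HVMOP_map_mem_type_to_ioreq_server (e.g. a libxc wrapper). */
    static int issue_map_hypercall(const struct hvm_ioreq_mem_type *arg)
    {
        printf("map type %u to server %u for dom%u, flags %#x\n",
               (unsigned)arg->type, (unsigned)arg->id,
               (unsigned)arg->domid, (unsigned)arg->flags);
        return 0;
    }

    int main(void)
    {
        /* XenGT-style usage: intercept writes only, so reads of GTT
         * pages are still satisfied directly from guest RAM. */
        struct hvm_ioreq_mem_type arg = {
            .domid = 1,                  /* guest to be serviced */
            .id = 2,                     /* this emulator's server id */
            .type = HVMMEM_ioreq_server,
            .flags = HVMOP_IOREQ_MEM_ACCESS_WRITE,
        };

        return issue_map_hypercall(&arg);
    }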
> 
> It'd be good to also add how to handle multiple ioreq servers claiming
> the same type for a given domain.
> 

That would clearly be an error, so I imagine -EBUSY would be an appropriate
return value from the map hypercall.
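
For illustration only, the kind of bookkeeping I'd expect is a single
per-domain record of the claim, along these lines (a toy model with
made-up names, not the actual implementation):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint16_t ioservid_t;

    #define IOSERVID_NONE ((ioservid_t)~0)

    /* Per-domain record of which IOREQ server, if any, has claimed
     * the HVMMEM_ioreq_server type. */
    struct domain_ioreq_claim {
        ioservid_t owner;   /* IOSERVID_NONE when unclaimed */
        uint32_t flags;     /* requested access types to intercept */
    };

    static int map_mem_type_to_ioreq_server(struct domain_ioreq_claim *c,
                                            ioservid_t id, uint32_t flags)
    {
        if (c->owner != IOSERVID_NONE && c->owner != id)
            return -EBUSY;  /* type already claimed by another server */

        c->owner = id;
        c->flags = flags;
        return 0;
    }

    static int unmap_mem_type_from_ioreq_server(struct domain_ioreq_claim *c,
                                                ioservid_t id)
    {
        if (c->owner != id)
            return -EINVAL; /* only the owning server may release it */

        c->owner = IOSERVID_NONE;
        c->flags = 0;
        return 0;
    }

    int main(void)
    {
        struct domain_ioreq_claim claim = { .owner = IOSERVID_NONE };

        printf("server 1 claims:  %d\n",
               map_mem_type_to_ioreq_server(&claim, 1, 1 << 1 /* write */));
        printf("server 2 claims:  %d (expect -EBUSY)\n",
               map_mem_type_to_ioreq_server(&claim, 2, 1 << 1));
        printf("server 1 unmaps:  %d\n",
               unmap_mem_type_from_ioreq_server(&claim, 1));
        return 0;
    }

In other words, the first server to map the type would own it until
that same server unmaps it again.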

  Paul

> Thanks
> Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

