
Re: [Xen-devel] [PATCH v6 1/4] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.





On 9/6/2016 4:13 PM, Jan Beulich wrote:
On 06.09.16 at 10:03, <Paul.Durrant@xxxxxxxxxx> wrote:
  -----Original Message-----
From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
Sent: 06 September 2016 08:58
To: George Dunlap <George.Dunlap@xxxxxxxxxx>; Yu Zhang
<yu.c.zhang@xxxxxxxxxxxxxxx>
Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Paul Durrant
<Paul.Durrant@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
JunNakajima <jun.nakajima@xxxxxxxxx>; Kevin Tian <kevin.tian@xxxxxxxxx>;
zhiyuan.lv@xxxxxxxxx; xen-devel@xxxxxxxxxxxxx; Tim (Xen.org)
<tim@xxxxxxx>
Subject: Re: [PATCH v6 1/4] x86/ioreq server: Add HVMOP to map guest ram
with p2m_ioreq_server to an ioreq server.

On 05.09.16 at 19:20, <george.dunlap@xxxxxxxxxx> wrote:
On 05/09/16 14:31, Jan Beulich wrote:
On 02.09.16 at 12:47, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
@@ -178,8 +179,27 @@ static int hvmemul_do_io(
          break;
      case X86EMUL_UNHANDLEABLE:
      {
-        struct hvm_ioreq_server *s =
-            hvm_select_ioreq_server(curr->domain, &p);
+        struct hvm_ioreq_server *s = NULL;
+        p2m_type_t p2mt = p2m_invalid;
+
+        if ( is_mmio )
+        {
+            unsigned long gmfn = paddr_to_pfn(addr);
+
+            (void) get_gfn_query_unlocked(currd, gmfn, &p2mt);
+
+            if ( p2mt == p2m_ioreq_server && dir == IOREQ_WRITE )
+            {
+                unsigned int flags;
+
+                s = p2m_get_ioreq_server(currd, &flags);
+                if ( !(flags & XEN_HVMOP_IOREQ_MEM_ACCESS_WRITE) )
+                    s = NULL;
+            }
+        }
+
+        if ( !s && p2mt != p2m_ioreq_server )
+            s = hvm_select_ioreq_server(currd, &p);
What I recall is that we had agreed that p2m_ioreq_server pages would be
treated as ordinary RAM ones as long as no server can be found. The
type check here contradicts that. Is there a reason?
I think it must be a confusion as to what "treated like ordinary RAM
ones" means.  p2m_ram_rw types that get here will go through
hvm_select_ioreq_server(), and (therefore) potentially be treated as
MMIO accesses, which is not how "ordinary RAM" would behave.  If what
you meant was that you want p2m_ioreq_server to behave like p2m_ram_rw
(and be treated as MMIO if it matches an iorange), then yes.  If what
you want is for p2m_ioreq_server to actually act like RAM, then
probably something more needs to happen here.
Well, all I'm questioning is the special casing of p2m_ioreq_server in the if().
That's imo orthogonal to p2m_ram_rw pages not being supposed to make it
here (hence the is_mmio check in the earlier if() also looks questionable).
Perhaps it would already help if there was a comment explaining what the
exact intended behavior here is.

My understanding is that we want accesses that make it here for pages that
are not of type 'ioreq_server' to result in MMIO emulation (i.e. they hit an
emulator's requested ranges, or the access is completed as unclaimed MMIO by
Xen). Accesses that make it here because the page *is* of type 'ioreq_server'
should be sent to the emulator that has claimed the type and, if no emulator
currently has a claim to the type, be handled as if the access were to
r/w RAM.
Makes sense, but it doesn't look like the code above does that, as
- keeping s NULL means ignoring the access
- finding a server by some means would imply handling the access as I/O
  instead of RAM.

Jan

Thanks Paul for helping me explain this. :)
And I'd like to give some clarification:
Accesses to p2m_ioreq_server pages are supposed to be delivered to the device
model selected by p2m_get_ioreq_server(); only port I/O and MMIO accesses are
supposed to use the routine hvm_select_ioreq_server() to select the device model.
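
To make the intended routing concrete, here is a small, self-contained toy
model in plain C. This is not Xen code: the route_unhandleable() helper, the
ROUTE_* names and the write_claimed parameter are made up for illustration,
and the p2m/ioreq enums are redefined locally just so the sketch compiles on
its own.

    #include <stdbool.h>

    typedef enum { p2m_ram_rw, p2m_mmio_dm, p2m_ioreq_server } p2m_type_t;
    typedef enum { IOREQ_READ, IOREQ_WRITE } ioreq_dir_t;

    typedef enum {
        ROUTE_BY_RANGE,      /* hvm_select_ioreq_server(): port I/O / MMIO   */
        ROUTE_BY_TYPE,       /* p2m_get_ioreq_server(): server claiming type */
        ROUTE_HANDLE_IN_XEN, /* complete the access inside the hypervisor    */
    } route_t;

    static route_t route_unhandleable(bool is_mmio, p2m_type_t p2mt,
                                      ioreq_dir_t dir, bool write_claimed)
    {
        if ( is_mmio && p2mt == p2m_ioreq_server )
        {
            /* A write to a claimed page goes to the server bound to the type. */
            if ( dir == IOREQ_WRITE && write_claimed )
                return ROUTE_BY_TYPE;
            /* Reads (and writes with no claimant) stay in the hypervisor. */
            return ROUTE_HANDLE_IN_XEN;
        }
        /* Everything else is matched against the servers' registered ranges. */
        return ROUTE_BY_RANGE;
    }

In this toy model, the read half of a guest read-modify-write instruction to a
p2m_ioreq_server page falls into ROUTE_HANDLE_IN_XEN, which corresponds to the
description of patch 3/4 below.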

For p2m_ioreq_server pages, the variable s (the ioreq server) can be NULL in the following cases:

1> Since we currently only support the emulation of write accesses, we must not forward the access to the device model if it is a read. That is why p2m_get_ioreq_server() is only called under the "if ( p2mt == p2m_ioreq_server && dir == IOREQ_WRITE )" condition. However, the guest may issue a read-modify-write instruction against such a page; the read should first be emulated inside the hypervisor, and only then should the write be delivered to the device model. In that case s is NULL, and the access is handled by the mem handler introduced in patch 3/4.

2> Even when p2m_get_ioreq_server() returns a valid ioreq server, s may still be reset to NULL if (flags & XEN_HVMOP_IOREQ_MEM_ACCESS_WRITE) does not hold. This part is not strictly necessary; it is there for future support of read-access emulation. For now, hvm_map_mem_type_to_ioreq_server() guarantees that the flag is XEN_HVMOP_IOREQ_MEM_ACCESS_WRITE.
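
For reference, a minimal sketch of what such a guard could look like. The bit
values and the check_ioreq_mem_flags() helper are assumptions for illustration
only; the real check lives in hvm_map_mem_type_to_ioreq_server() in this patch.

    #include <errno.h>

    #define XEN_HVMOP_IOREQ_MEM_ACCESS_READ  (1u << 0) /* assumed value */
    #define XEN_HVMOP_IOREQ_MEM_ACCESS_WRITE (1u << 1) /* assumed value */

    /* Hypothetical guard: only write interception is accepted for now. */
    static int check_ioreq_mem_flags(unsigned int flags)
    {
        if ( flags != XEN_HVMOP_IOREQ_MEM_ACCESS_WRITE )
            return -EINVAL;
        return 0;
    }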

Note: we do not need to worry about the situation where a p2m_ioreq_server page is written
while no ioreq server is bound, because:
1> hvmop_set_mem_type() requires that an ioreq server already be bound when we set a page to
p2m_ioreq_server;

2> hvmop_map_mem_type_to_ioreq_server() also makes sure all p2m_ioreq_server pages are reset back to p2m_ram_rw when it unbinds an ioreq server (in patch 4/4). So when no ioreq
server is bound to p2m_ioreq_server, no such page should exist.
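
A toy illustration of that invariant, again self-contained C rather than the
actual hypercall handlers; the toy_* names and the struct are hypothetical.

    #include <errno.h>
    #include <stdbool.h>

    struct toy_domain {
        bool server_bound;            /* an ioreq server claims the type  */
        unsigned long nr_typed_pages; /* pages currently p2m_ioreq_server */
    };

    /* Models the check in hvmop_set_mem_type(): refuse the type while
     * no server is bound. */
    static int toy_set_ioreq_type(struct toy_domain *d)
    {
        if ( !d->server_bound )
            return -EINVAL;
        d->nr_typed_pages++;
        return 0;
    }

    /* Models the unbind path of patch 4/4: every typed page is swept
     * back to p2m_ram_rw before the binding goes away. */
    static void toy_unbind(struct toy_domain *d)
    {
        d->nr_typed_pages = 0;
        d->server_bound = false;
    }

With these two rules, a "p2m_ioreq_server page with no bound server" state is
never reachable.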

Thanks
Yu




 

