
Re: [Xen-devel] [Qemu-devel] [PATCH v5 2/2] Xen: Use the ioreq-server API when available



On 01/29/15 14:14, Don Slutz wrote:
> On 01/29/15 07:09, Paul Durrant wrote:
>>> -----Original Message-----
>>> From: Don Slutz [mailto:dslutz@xxxxxxxxxxx]
>>> Sent: 29 January 2015 00:58
>>> To: Don Slutz; Paul Durrant; qemu-devel@xxxxxxxxxx; Stefano Stabellini
>>> Cc: Peter Maydell; Olaf Hering; Alexey Kardashevskiy; Stefan Weil; Michael
>>> Tokarev; Alexander Graf; Gerd Hoffmann; Stefan Hajnoczi; Paolo Bonzini
>>> Subject: Re: [Qemu-devel] [PATCH v5 2/2] Xen: Use the ioreq-server API
>>> when available
>>>

...

> 
> You can see that the guest is still waiting for the inl from 0x00000cfe.
> 
> 
> 
...

The one-line patch:


From 5269b1fb947f207057ca69e320c79b397db3e8f5 Mon Sep 17 00:00:00 2001
From: Don Slutz <dslutz@xxxxxxxxxxx>
Date: Thu, 29 Jan 2015 21:24:05 -0500
Subject: [PATCH] Prevent hang if read of HVM_PARAM_IOREQ_PFN,
 HVM_PARAM_BUFIOREQ_PFN, HVM_PARAM_BUFIOREQ_EVTCHN is done
 before hvmloader starts.

Signed-off-by: Don Slutz <dslutz@xxxxxxxxxxx>
---
 xen/arch/x86/hvm/hvm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index bad410e..7ac4b45 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -993,7 +993,7 @@ static int hvm_create_ioreq_server(struct domain *d, domid_t domid,
     spin_lock(&d->arch.hvm_domain.ioreq_server.lock);

     rc = -EEXIST;
-    if ( is_default && d->arch.hvm_domain.default_ioreq_server != NULL )
+    if ( is_default && !list_empty(&d->arch.hvm_domain.ioreq_server.list) )
         goto fail2;

     rc = hvm_ioreq_server_init(s, d, domid, is_default, handle_bufioreq,
-- 
1.7.11.7


This does "fix" the hang, but I have no idea if it is the right way to go.
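
For anyone following along, here is a minimal sketch of why routing to an
unbound default server hangs the guest.  This is simplified, not verbatim
Xen 4.5 code, and hvm_matches_range() is a hypothetical stand-in for the
real MMIO/portio/PCI range checks in hvm_select_ioreq_server():

static struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                                                        ioreq_t *p)
{
    struct hvm_ioreq_server *s;

    list_for_each_entry ( s,
                          &d->arch.hvm_domain.ioreq_server.list,
                          list_entry )
    {
        if ( s == d->arch.hvm_domain.default_ioreq_server )
            continue;
        if ( hvm_matches_range(s, p) )   /* hypothetical helper */
            return s;
    }

    /* No secondary server claimed the I/O, so fall back to the default
     * server.  If QEMU never called xc_evtchn_bind_interdomain() for that
     * server's ports, notify_via_xen_event_channel() signals a port with
     * no listener and the vCPU sits in hvm_wait_for_io() forever. */
    return d->arch.hvm_domain.default_ioreq_server;
}

The one-liner works because creating the default server now fails with
-EEXIST as soon as any ioreq server is on the list, so a late read of the
legacy HVM params can no longer conjure up a default server that QEMU
will never bind.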

    -Don Slutz

>    -Don Slutz
> 
> 
>>   Paul
>>
>>>    -Don Slutz
>>>
>>>
>>>>     -Don Slutz
>>>>
>>>>
>>>>> So far I have tracked it back to hvm_select_ioreq_server()
>>>>> which selects the "default_ioreq_server".  Since I have only one
>>>>> QEMU, it is both the "default_ioreq_server" and an enabled
>>>>> 2nd ioreq_server.  I am still trying to understand why my changes
>>>>> are causing this.  More below.
>>>>>
>>>>> This patch causes QEMU to only call xc_evtchn_bind_interdomain()
>>>>> for the enabled 2nd ioreq_server.  So when (if)
>>>>> hvm_select_ioreq_server() selects the "default_ioreq_server", the
>>>>> guest hangs on an I/O.
>>>>>
>>>>> Using the debug key 'e':
>>>>>
>>>>> (XEN) [2015-01-28 18:57:07] 'e' pressed -> dumping event-channel info
>>>>> (XEN) [2015-01-28 18:57:07] Event channel information for domain 0:
>>>>> (XEN) [2015-01-28 18:57:07] Polling vCPUs: {}
>>>>> (XEN) [2015-01-28 18:57:07]     port [p/m/s]
>>>>> (XEN) [2015-01-28 18:57:07]        1 [0/0/0]: s=5 n=0 x=0 v=0
>>>>> (XEN) [2015-01-28 18:57:07]        2 [0/0/0]: s=6 n=0 x=0
>>>>> (XEN) [2015-01-28 18:57:07]        3 [0/0/0]: s=6 n=0 x=0
>>>>> (XEN) [2015-01-28 18:57:07]        4 [0/0/0]: s=5 n=0 x=0 v=1
>>>>> (XEN) [2015-01-28 18:57:07]        5 [0/0/0]: s=6 n=0 x=0
>>>>> (XEN) [2015-01-28 18:57:07]        6 [0/0/0]: s=6 n=0 x=0
>>>>> (XEN) [2015-01-28 18:57:07]        7 [0/0/0]: s=5 n=1 x=0 v=0
>>>>> (XEN) [2015-01-28 18:57:07]        8 [0/0/0]: s=6 n=1 x=0
>>>>> (XEN) [2015-01-28 18:57:07]        9 [0/0/0]: s=6 n=1 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       10 [0/0/0]: s=5 n=1 x=0 v=1
>>>>> (XEN) [2015-01-28 18:57:07]       11 [0/0/0]: s=6 n=1 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       12 [0/0/0]: s=6 n=1 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       13 [0/0/0]: s=5 n=2 x=0 v=0
>>>>> (XEN) [2015-01-28 18:57:07]       14 [0/0/0]: s=6 n=2 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       15 [0/0/0]: s=6 n=2 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       16 [0/0/0]: s=5 n=2 x=0 v=1
>>>>> (XEN) [2015-01-28 18:57:07]       17 [0/0/0]: s=6 n=2 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       18 [0/0/0]: s=6 n=2 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       19 [0/0/0]: s=5 n=3 x=0 v=0
>>>>> (XEN) [2015-01-28 18:57:07]       20 [0/0/0]: s=6 n=3 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       21 [0/0/0]: s=6 n=3 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       22 [0/0/0]: s=5 n=3 x=0 v=1
>>>>> (XEN) [2015-01-28 18:57:07]       23 [0/0/0]: s=6 n=3 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       24 [0/0/0]: s=6 n=3 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       25 [0/0/0]: s=5 n=4 x=0 v=0
>>>>> (XEN) [2015-01-28 18:57:07]       26 [0/0/0]: s=6 n=4 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       27 [0/0/0]: s=6 n=4 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       28 [0/0/0]: s=5 n=4 x=0 v=1
>>>>> (XEN) [2015-01-28 18:57:07]       29 [0/0/0]: s=6 n=4 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       30 [0/0/0]: s=6 n=4 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       31 [0/0/0]: s=5 n=5 x=0 v=0
>>>>> (XEN) [2015-01-28 18:57:07]       32 [0/0/0]: s=6 n=5 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       33 [0/0/0]: s=6 n=5 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       34 [0/0/0]: s=5 n=5 x=0 v=1
>>>>> (XEN) [2015-01-28 18:57:07]       35 [0/0/0]: s=6 n=5 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       36 [0/0/0]: s=6 n=5 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       37 [0/0/0]: s=5 n=6 x=0 v=0
>>>>> (XEN) [2015-01-28 18:57:07]       38 [0/0/0]: s=6 n=6 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       39 [0/0/0]: s=6 n=6 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       40 [0/0/0]: s=5 n=6 x=0 v=1
>>>>> (XEN) [2015-01-28 18:57:07]       41 [0/0/0]: s=6 n=6 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       42 [0/0/0]: s=6 n=6 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       43 [0/0/0]: s=5 n=7 x=0 v=0
>>>>> (XEN) [2015-01-28 18:57:07]       44 [0/0/0]: s=6 n=7 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       45 [0/0/0]: s=6 n=7 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       46 [0/0/0]: s=5 n=7 x=0 v=1
>>>>> (XEN) [2015-01-28 18:57:07]       47 [0/0/0]: s=6 n=7 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       48 [0/0/0]: s=6 n=7 x=0
>>>>> (XEN) [2015-01-28 18:57:07]       49 [0/0/0]: s=3 n=0 x=0 d=0 p=58
>>>>> (XEN) [2015-01-28 18:57:07]       50 [0/0/0]: s=5 n=0 x=0 v=9
>>>>> (XEN) [2015-01-28 18:57:07]       51 [0/0/0]: s=4 n=0 x=0 p=9 i=9
>>>>> (XEN) [2015-01-28 18:57:07]       52 [0/0/0]: s=5 n=0 x=0 v=2
>>>>> (XEN) [2015-01-28 18:57:07]       53 [0/0/0]: s=4 n=4 x=0 p=16 i=16
>>>>> (XEN) [2015-01-28 18:57:07]       54 [0/0/0]: s=4 n=0 x=0 p=17 i=17
>>>>> (XEN) [2015-01-28 18:57:07]       55 [0/0/0]: s=4 n=6 x=0 p=18 i=18
>>>>> (XEN) [2015-01-28 18:57:07]       56 [0/0/0]: s=4 n=0 x=0 p=8 i=8
>>>>> (XEN) [2015-01-28 18:57:07]       57 [0/0/0]: s=4 n=0 x=0 p=19 i=19
>>>>> (XEN) [2015-01-28 18:57:07]       58 [0/0/0]: s=3 n=0 x=0 d=0 p=49
>>>>> (XEN) [2015-01-28 18:57:07]       59 [0/0/0]: s=5 n=0 x=0 v=3
>>>>> (XEN) [2015-01-28 18:57:07]       60 [0/0/0]: s=5 n=0 x=0 v=4
>>>>> (XEN) [2015-01-28 18:57:07]       61 [0/0/0]: s=3 n=0 x=0 d=1 p=1
>>>>> (XEN) [2015-01-28 18:57:07]       62 [0/0/0]: s=3 n=0 x=0 d=1 p=2
>>>>> (XEN) [2015-01-28 18:57:07]       63 [0/0/0]: s=3 n=0 x=0 d=1 p=3
>>>>> (XEN) [2015-01-28 18:57:07]       64 [0/0/0]: s=3 n=0 x=0 d=1 p=5
>>>>> (XEN) [2015-01-28 18:57:07]       65 [0/0/0]: s=3 n=0 x=0 d=1 p=6
>>>>> (XEN) [2015-01-28 18:57:07]       66 [0/0/0]: s=3 n=0 x=0 d=1 p=7
>>>>> (XEN) [2015-01-28 18:57:07]       67 [0/0/0]: s=3 n=0 x=0 d=1 p=8
>>>>> (XEN) [2015-01-28 18:57:07]       68 [0/0/0]: s=3 n=0 x=0 d=1 p=9
>>>>> (XEN) [2015-01-28 18:57:07]       69 [0/0/0]: s=3 n=0 x=0 d=1 p=4
>>>>> (XEN) [2015-01-28 18:57:07] Event channel information for domain 1:
>>>>> (XEN) [2015-01-28 18:57:07] Polling vCPUs: {}
>>>>> (XEN) [2015-01-28 18:57:07]     port [p/m/s]
>>>>> (XEN) [2015-01-28 18:57:07]        1 [0/0/0]: s=3 n=0 x=0 d=0 p=61
>>>>> (XEN) [2015-01-28 18:57:07]        2 [0/0/0]: s=3 n=0 x=0 d=0 p=62
>>>>> (XEN) [2015-01-28 18:57:07]        3 [0/0/0]: s=3 n=0 x=1 d=0 p=63
>>>>> (XEN) [2015-01-28 18:57:07]        4 [0/0/0]: s=3 n=0 x=1 d=0 p=69
>>>>> (XEN) [2015-01-28 18:57:07]        5 [0/0/0]: s=3 n=1 x=1 d=0 p=64
>>>>> (XEN) [2015-01-28 18:57:07]        6 [0/0/0]: s=3 n=2 x=1 d=0 p=65
>>>>> (XEN) [2015-01-28 18:57:07]        7 [0/0/0]: s=3 n=3 x=1 d=0 p=66
>>>>> (XEN) [2015-01-28 18:57:07]        8 [0/0/0]: s=3 n=4 x=1 d=0 p=67
>>>>> (XEN) [2015-01-28 18:57:07]        9 [0/0/0]: s=3 n=5 x=1 d=0 p=68
>>>>> (XEN) [2015-01-28 18:57:07]       10 [0/0/0]: s=2 n=0 x=1 d=0
>>>>> (XEN) [2015-01-28 18:57:07]       11 [0/0/0]: s=2 n=0 x=1 d=0
>>>>> (XEN) [2015-01-28 18:57:07]       12 [0/0/0]: s=2 n=1 x=1 d=0
>>>>> (XEN) [2015-01-28 18:57:07]       13 [0/0/0]: s=2 n=2 x=1 d=0
>>>>> (XEN) [2015-01-28 18:57:07]       14 [0/0/0]: s=2 n=3 x=1 d=0
>>>>> (XEN) [2015-01-28 18:57:07]       15 [0/0/0]: s=2 n=4 x=1 d=0
>>>>> (XEN) [2015-01-28 18:57:07]       16 [0/0/0]: s=2 n=5 x=1 d=0
>>>>>
>>>>> You can see that domain 1 has only half of its event channels
>>>>> fully set up.  So when (if) hvm_send_assist_req_to_ioreq_server()
>>>>> does:
>>>>>
>>>>>             notify_via_xen_event_channel(d, port);
>>>>>
>>>>> Nothing happens and you hang in hvm_wait_for_io() forever.
>>>>>
>>>>>
>>>>> This does raise the questions:
>>>>>
>>>>> 1) Does this patch cause extra event channels to be created
>>>>>    that cannot be used?
>>>>>
>>>>> 2) Should the "default_ioreq_server" be deleted?
>>>>>
>>>>>
>>>>> Not sure which is the right way to go.
>>>>>
>>>>>     -Don Slutz
>>>>>
>>>>>
>>>>>>
>>>>>> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
>>>>>> Acked-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
>>>>>> Cc: Peter Maydell <peter.maydell@xxxxxxxxxx>
>>>>>> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
>>>>>> Cc: Michael Tokarev <mjt@xxxxxxxxxx>
>>>>>> Cc: Stefan Hajnoczi <stefanha@xxxxxxxxxx>
>>>>>> Cc: Stefan Weil <sw@xxxxxxxxxxx>
>>>>>> Cc: Olaf Hering <olaf@xxxxxxxxx>
>>>>>> Cc: Gerd Hoffmann <kraxel@xxxxxxxxxx>
>>>>>> Cc: Alexey Kardashevskiy <aik@xxxxxxxxx>
>>>>>> Cc: Alexander Graf <agraf@xxxxxxx>
>>>>>> ---
>>>>>>  configure                   |   29 ++++++
>>>>>>  include/hw/xen/xen_common.h |  223 +++++++++++++++++++++++++++++++++++++++++++
>>>>>>  trace-events                |    9 ++
>>>>>>  xen-hvm.c                   |  160 ++++++++++++++++++++++++++-----
>>>>>>  4 files changed, 399 insertions(+), 22 deletions(-)
>>>>>>
>>>>>> diff --git a/configure b/configure
>>>>>> index 47048f0..b1f8c2a 100755
>>>>>> --- a/configure
>>>>>> +++ b/configure
>>>>>> @@ -1877,6 +1877,32 @@ int main(void) {
>>>>>>    xc_gnttab_open(NULL, 0);
>>>>>>    xc_domain_add_to_physmap(0, 0, XENMAPSPACE_gmfn, 0, 0);
>>>>>>    xc_hvm_inject_msi(xc, 0, 0xf0000000, 0x00000000);
>>>>>> +  xc_hvm_create_ioreq_server(xc, 0, 0, NULL);
>>>>>> +  return 0;
>>>>>> +}
>>>>>> +EOF
>>>>>> +      compile_prog "" "$xen_libs"
>>>>>> +    then
>>>>>> +    xen_ctrl_version=450
>>>>>> +    xen=yes
>>>>>> +
>>>>>> +  elif
>>>>>> +      cat > $TMPC <<EOF &&
>>>>>> +#include <xenctrl.h>
>>>>>> +#include <xenstore.h>
>>>>>> +#include <stdint.h>
>>>>>> +#include <xen/hvm/hvm_info_table.h>
>>>>>> +#if !defined(HVM_MAX_VCPUS)
>>>>>> +# error HVM_MAX_VCPUS not defined
>>>>>> +#endif
>>>>>> +int main(void) {
>>>>>> +  xc_interface *xc;
>>>>>> +  xs_daemon_open();
>>>>>> +  xc = xc_interface_open(0, 0, 0);
>>>>>> +  xc_hvm_set_mem_type(0, 0, HVMMEM_ram_ro, 0, 0);
>>>>>> +  xc_gnttab_open(NULL, 0);
>>>>>> +  xc_domain_add_to_physmap(0, 0, XENMAPSPACE_gmfn, 0, 0);
>>>>>> +  xc_hvm_inject_msi(xc, 0, 0xf0000000, 0x00000000);
>>>>>>    return 0;
>>>>>>  }
>>>>>>  EOF
>>>>>> @@ -4283,6 +4309,9 @@ if test -n "$sparc_cpu"; then
>>>>>>      echo "Target Sparc Arch $sparc_cpu"
>>>>>>  fi
>>>>>>  echo "xen support       $xen"
>>>>>> +if test "$xen" = "yes" ; then
>>>>>> +  echo "xen ctrl version  $xen_ctrl_version"
>>>>>> +fi
>>>>>>  echo "brlapi support    $brlapi"
>>>>>>  echo "bluez  support    $bluez"
>>>>>>  echo "Documentation     $docs"
>>>>>> diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
>>>>>> index 95612a4..519696f 100644
>>>>>> --- a/include/hw/xen/xen_common.h
>>>>>> +++ b/include/hw/xen/xen_common.h
>>>>>> @@ -16,7 +16,9 @@
>>>>>>
>>>>>>  #include "hw/hw.h"
>>>>>>  #include "hw/xen/xen.h"
>>>>>> +#include "hw/pci/pci.h"
>>>>>>  #include "qemu/queue.h"
>>>>>> +#include "trace.h"
>>>>>>
>>>>>>  /*
>>>>>>   * We don't support Xen prior to 3.3.0.
>>>>>> @@ -179,4 +181,225 @@ static inline int xen_get_vmport_regs_pfn(XenXC xc, domid_t dom,
>>>>>>  }
>>>>>>  #endif
>>>>>>
>>>>>> +/* Xen before 4.5 */
>>>>>> +#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 450
>>>>>> +
>>>>>> +#ifndef HVM_PARAM_BUFIOREQ_EVTCHN
>>>>>> +#define HVM_PARAM_BUFIOREQ_EVTCHN 26
>>>>>> +#endif
>>>>>> +
>>>>>> +#define IOREQ_TYPE_PCI_CONFIG 2
>>>>>> +
>>>>>> +typedef uint32_t ioservid_t;
>>>>>> +
>>>>>> +static inline void xen_map_memory_section(XenXC xc, domid_t dom,
>>>>>> +                                          ioservid_t ioservid,
>>>>>> +                                          MemoryRegionSection *section)
>>>>>> +{
>>>>>> +}
>>>>>> +
>>>>>> +static inline void xen_unmap_memory_section(XenXC xc, domid_t dom,
>>>>>> +                                            ioservid_t ioservid,
>>>>>> +                                            MemoryRegionSection *section)
>>>>>> +{
>>>>>> +}
>>>>>> +
>>>>>> +static inline void xen_map_io_section(XenXC xc, domid_t dom,
>>>>>> +                                      ioservid_t ioservid,
>>>>>> +                                      MemoryRegionSection *section)
>>>>>> +{
>>>>>> +}
>>>>>> +
>>>>>> +static inline void xen_unmap_io_section(XenXC xc, domid_t dom,
>>>>>> +                                        ioservid_t ioservid,
>>>>>> +                                        MemoryRegionSection *section)
>>>>>> +{
>>>>>> +}
>>>>>> +
>>>>>> +static inline void xen_map_pcidev(XenXC xc, domid_t dom,
>>>>>> +                                  ioservid_t ioservid,
>>>>>> +                                  PCIDevice *pci_dev)
>>>>>> +{
>>>>>> +}
>>>>>> +
>>>>>> +static inline void xen_unmap_pcidev(XenXC xc, domid_t dom,
>>>>>> +                                    ioservid_t ioservid,
>>>>>> +                                    PCIDevice *pci_dev)
>>>>>> +{
>>>>>> +}
>>>>>> +
>>>>>> +static inline int xen_create_ioreq_server(XenXC xc, domid_t dom,
>>>>>> +                                          ioservid_t *ioservid)
>>>>>> +{
>>>>>> +    return 0;
>>>>>> +}
>>>>>> +
>>>>>> +static inline void xen_destroy_ioreq_server(XenXC xc, domid_t dom,
>>>>>> +                                            ioservid_t ioservid)
>>>>>> +{
>>>>>> +}
>>>>>> +
>>>>>> +static inline int xen_get_ioreq_server_info(XenXC xc, domid_t dom,
>>>>>> +                                            ioservid_t ioservid,
>>>>>> +                                            xen_pfn_t *ioreq_pfn,
>>>>>> +                                            xen_pfn_t *bufioreq_pfn,
>>>>>> +                                            evtchn_port_t *bufioreq_evtchn)
>>>>>> +{
>>>>>> +    unsigned long param;
>>>>>> +    int rc;
>>>>>> +
>>>>>> +    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_IOREQ_PFN, &param);
>>>>>> +    if (rc < 0) {
>>>>>> +        fprintf(stderr, "failed to get HVM_PARAM_IOREQ_PFN\n");
>>>>>> +        return -1;
>>>>>> +    }
>>>>>> +
>>>>>> +    *ioreq_pfn = param;
>>>>>> +
>>>>>> +    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_PFN, &param);
>>>>>> +    if (rc < 0) {
>>>>>> +        fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_PFN\n");
>>>>>> +        return -1;
>>>>>> +    }
>>>>>> +
>>>>>> +    *bufioreq_pfn = param;
>>>>>> +
>>>>>> +    rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_EVTCHN,
>>>>>> +                          &param);
>>>>>> +    if (rc < 0) {
>>>>>> +        fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_EVTCHN\n");
>>>>>> +        return -1;
>>>>>> +    }
>>>>>> +
>>>>>> +    *bufioreq_evtchn = param;
>>>>>> +
>>>>>> +    return 0;
>>>>>> +}
>>>>>> +
>>>>>> +static inline int xen_set_ioreq_server_state(XenXC xc, domid_t dom,
>>>>>> +                                             ioservid_t ioservid,
>>>>>> +                                             bool enable)
>>>>>> +{
>>>>>> +    return 0;
>>>>>> +}
>>>>>> +
>>>>>> +/* Xen 4.5 */
>>>>>> +#else
>>>>>> +
>>>>>> +static inline void xen_map_memory_section(XenXC xc, domid_t dom,
>>>>>> +                                          ioservid_t ioservid,
>>>>>> +                                          MemoryRegionSection *section)
>>>>>> +{
>>>>>> +    hwaddr start_addr = section->offset_within_address_space;
>>>>>> +    ram_addr_t size = int128_get64(section->size);
>>>>>> +    hwaddr end_addr = start_addr + size - 1;
>>>>>> +
>>>>>> +    trace_xen_map_mmio_range(ioservid, start_addr, end_addr);
>>>>>> +    xc_hvm_map_io_range_to_ioreq_server(xc, dom, ioservid, 1,
>>>>>> +                                        start_addr, end_addr);
>>>>>> +}
>>>>>> +
>>>>>> +static inline void xen_unmap_memory_section(XenXC xc, domid_t dom,
>>>>>> +                                            ioservid_t ioservid,
>>>>>> +                                            MemoryRegionSection *section)
>>>>>> +{
>>>>>> +    hwaddr start_addr = section->offset_within_address_space;
>>>>>> +    ram_addr_t size = int128_get64(section->size);
>>>>>> +    hwaddr end_addr = start_addr + size - 1;
>>>>>> +
>>>>>> +    trace_xen_unmap_mmio_range(ioservid, start_addr, end_addr);
>>>>>> +    xc_hvm_unmap_io_range_from_ioreq_server(xc, dom, ioservid, 1,
>>>>>> +                                            start_addr, end_addr);
>>>>>> +}
>>>>>> +
>>>>>> +static inline void xen_map_io_section(XenXC xc, domid_t dom,
>>>>>> +                                      ioservid_t ioservid,
>>>>>> +                                      MemoryRegionSection *section)
>>>>>> +{
>>>>>> +    hwaddr start_addr = section->offset_within_address_space;
>>>>>> +    ram_addr_t size = int128_get64(section->size);
>>>>>> +    hwaddr end_addr = start_addr + size - 1;
>>>>>> +
>>>>>> +    trace_xen_map_portio_range(ioservid, start_addr, end_addr);
>>>>>> +    xc_hvm_map_io_range_to_ioreq_server(xc, dom, ioservid, 0,
>>>>>> +                                        start_addr, end_addr);
>>>>>> +}
>>>>>> +
>>>>>> +static inline void xen_unmap_io_section(XenXC xc, domid_t dom,
>>>>>> +                                        ioservid_t ioservid,
>>>>>> +                                        MemoryRegionSection *section)
>>>>>> +{
>>>>>> +    hwaddr start_addr = section->offset_within_address_space;
>>>>>> +    ram_addr_t size = int128_get64(section->size);
>>>>>> +    hwaddr end_addr = start_addr + size - 1;
>>>>>> +
>>>>>> +    trace_xen_unmap_portio_range(ioservid, start_addr, end_addr);
>>>>>> +    xc_hvm_unmap_io_range_from_ioreq_server(xc, dom, ioservid, 0,
>>>>>> +                                            start_addr, end_addr);
>>>>>> +}
>>>>>> +
>>>>>> +static inline void xen_map_pcidev(XenXC xc, domid_t dom,
>>>>>> +                                  ioservid_t ioservid,
>>>>>> +                                  PCIDevice *pci_dev)
>>>>>> +{
>>>>>> +    trace_xen_map_pcidev(ioservid, pci_bus_num(pci_dev->bus),
>>>>>> +                         PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
>>>>>> +    xc_hvm_map_pcidev_to_ioreq_server(xc, dom, ioservid,
>>>>>> +                                      0, pci_bus_num(pci_dev->bus),
>>>>>> +                                      PCI_SLOT(pci_dev->devfn),
>>>>>> +                                      PCI_FUNC(pci_dev->devfn));
>>>>>> +}
>>>>>> +
>>>>>> +static inline void xen_unmap_pcidev(XenXC xc, domid_t dom,
>>>>>> +                                    ioservid_t ioservid,
>>>>>> +                                    PCIDevice *pci_dev)
>>>>>> +{
>>>>>> +    trace_xen_unmap_pcidev(ioservid, pci_bus_num(pci_dev->bus),
>>>>>> +                           PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
>>>>>> +    xc_hvm_unmap_pcidev_from_ioreq_server(xc, dom, ioservid,
>>>>>> +                                          0, pci_bus_num(pci_dev->bus),
>>>>>> +                                          PCI_SLOT(pci_dev->devfn),
>>>>>> +                                          PCI_FUNC(pci_dev->devfn));
>>>>>> +}
>>>>>> +
>>>>>> +static inline int xen_create_ioreq_server(XenXC xc, domid_t dom,
>>>>>> +                                          ioservid_t *ioservid)
>>>>>> +{
>>>>>> +    int rc = xc_hvm_create_ioreq_server(xc, dom, 1, ioservid);
>>>>>> +
>>>>>> +    if (rc == 0) {
>>>>>> +        trace_xen_ioreq_server_create(*ioservid);
>>>>>> +    }
>>>>>> +
>>>>>> +    return rc;
>>>>>> +}
>>>>>> +
>>>>>> +static inline void xen_destroy_ioreq_server(XenXC xc, domid_t dom,
>>>>>> +                                            ioservid_t ioservid)
>>>>>> +{
>>>>>> +    trace_xen_ioreq_server_destroy(ioservid);
>>>>>> +    xc_hvm_destroy_ioreq_server(xc, dom, ioservid);
>>>>>> +}
>>>>>> +
>>>>>> +static inline int xen_get_ioreq_server_info(XenXC xc, domid_t dom,
>>>>>> +                                            ioservid_t ioservid,
>>>>>> +                                            xen_pfn_t *ioreq_pfn,
>>>>>> +                                            xen_pfn_t *bufioreq_pfn,
>>>>>> +                                            evtchn_port_t *bufioreq_evtchn)
>>>>>> +{
>>>>>> +    return xc_hvm_get_ioreq_server_info(xc, dom, ioservid,
>>>>>> +                                        ioreq_pfn, bufioreq_pfn,
>>>>>> +                                        bufioreq_evtchn);
>>>>>> +}
>>>>>> +
>>>>>> +static inline int xen_set_ioreq_server_state(XenXC xc, domid_t dom,
>>>>>> +                                             ioservid_t ioservid,
>>>>>> +                                             bool enable)
>>>>>> +{
>>>>>> +    trace_xen_ioreq_server_state(ioservid, enable);
>>>>>> +    return xc_hvm_set_ioreq_server_state(xc, dom, ioservid, enable);
>>>>>> +}
>>>>>> +
>>>>>> +#endif
>>>>>> +
>>>>>>  #endif /* QEMU_HW_XEN_COMMON_H */
>>>>>> diff --git a/trace-events b/trace-events
>>>>>> index b5722ea..abd1118 100644
>>>>>> --- a/trace-events
>>>>>> +++ b/trace-events
>>>>>> @@ -897,6 +897,15 @@ pvscsi_tx_rings_num_pages(const char* label, uint32_t num) "Number of %s pages:
>>>>>>  # xen-hvm.c
>>>>>>  xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: %#lx, size %#lx"
>>>>>>  xen_client_set_memory(uint64_t start_addr, unsigned long size, bool log_dirty) "%#"PRIx64" size %#lx, log_dirty %i"
>>>>>> +xen_ioreq_server_create(uint32_t id) "id: %u"
>>>>>> +xen_ioreq_server_destroy(uint32_t id) "id: %u"
>>>>>> +xen_ioreq_server_state(uint32_t id, bool enable) "id: %u: enable: %i"
>>>>>> +xen_map_mmio_range(uint32_t id, uint64_t start_addr, uint64_t end_addr) "id: %u start: %#"PRIx64" end: %#"PRIx64
>>>>>> +xen_unmap_mmio_range(uint32_t id, uint64_t start_addr, uint64_t end_addr) "id: %u start: %#"PRIx64" end: %#"PRIx64
>>>>>> +xen_map_portio_range(uint32_t id, uint64_t start_addr, uint64_t end_addr) "id: %u start: %#"PRIx64" end: %#"PRIx64
>>>>>> +xen_unmap_portio_range(uint32_t id, uint64_t start_addr, uint64_t end_addr) "id: %u start: %#"PRIx64" end: %#"PRIx64
>>>>>> +xen_map_pcidev(uint32_t id, uint8_t bus, uint8_t dev, uint8_t func) "id: %u bdf: %02x.%02x.%02x"
>>>>>> +xen_unmap_pcidev(uint32_t id, uint8_t bus, uint8_t dev, uint8_t func) "id: %u bdf: %02x.%02x.%02x"
>>>>>>
>>>>>>  # xen-mapcache.c
>>>>>>  xen_map_cache(uint64_t phys_addr) "want %#"PRIx64
>>>>>> diff --git a/xen-hvm.c b/xen-hvm.c
>>>>>> index 7548794..31cb3ca 100644
>>>>>> --- a/xen-hvm.c
>>>>>> +++ b/xen-hvm.c
>>>>>> @@ -85,9 +85,6 @@ static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
>>>>>>  }
>>>>>>  #  define FMT_ioreq_size "u"
>>>>>>  #endif
>>>>>> -#ifndef HVM_PARAM_BUFIOREQ_EVTCHN
>>>>>> -#define HVM_PARAM_BUFIOREQ_EVTCHN 26
>>>>>> -#endif
>>>>>>
>>>>>>  #define BUFFER_IO_MAX_DELAY  100
>>>>>>
>>>>>> @@ -101,6 +98,7 @@ typedef struct XenPhysmap {
>>>>>>  } XenPhysmap;
>>>>>>
>>>>>>  typedef struct XenIOState {
>>>>>> +    ioservid_t ioservid;
>>>>>>      shared_iopage_t *shared_page;
>>>>>>      shared_vmport_iopage_t *shared_vmport_page;
>>>>>>      buffered_iopage_t *buffered_io_page;
>>>>>> @@ -117,6 +115,8 @@ typedef struct XenIOState {
>>>>>>
>>>>>>      struct xs_handle *xenstore;
>>>>>>      MemoryListener memory_listener;
>>>>>> +    MemoryListener io_listener;
>>>>>> +    DeviceListener device_listener;
>>>>>>      QLIST_HEAD(, XenPhysmap) physmap;
>>>>>>      hwaddr free_phys_offset;
>>>>>>      const XenPhysmap *log_for_dirtybit;
>>>>>> @@ -467,12 +467,23 @@ static void xen_set_memory(struct MemoryListener *listener,
>>>>>>      bool log_dirty = memory_region_is_logging(section->mr);
>>>>>>      hvmmem_type_t mem_type;
>>>>>>
>>>>>> +    if (section->mr == &ram_memory) {
>>>>>> +        return;
>>>>>> +    } else {
>>>>>> +        if (add) {
>>>>>> +            xen_map_memory_section(xen_xc, xen_domid, state->ioservid,
>>>>>> +                                   section);
>>>>>> +        } else {
>>>>>> +            xen_unmap_memory_section(xen_xc, xen_domid, state->ioservid,
>>>>>> +                                     section);
>>>>>> +        }
>>>>>> +    }
>>>>>> +
>>>>>>      if (!memory_region_is_ram(section->mr)) {
>>>>>>          return;
>>>>>>      }
>>>>>>
>>>>>> -    if (!(section->mr != &ram_memory
>>>>>> -          && ( (log_dirty && add) || (!log_dirty && !add)))) {
>>>>>> +    if (log_dirty != add) {
>>>>>>          return;
>>>>>>      }
>>>>>>
>>>>>> @@ -515,6 +526,50 @@ static void xen_region_del(MemoryListener *listener,
>>>>>>      memory_region_unref(section->mr);
>>>>>>  }
>>>>>>
>>>>>> +static void xen_io_add(MemoryListener *listener,
>>>>>> +                       MemoryRegionSection *section)
>>>>>> +{
>>>>>> +    XenIOState *state = container_of(listener, XenIOState, io_listener);
>>>>>> +
>>>>>> +    memory_region_ref(section->mr);
>>>>>> +
>>>>>> +    xen_map_io_section(xen_xc, xen_domid, state->ioservid, section);
>>>>>> +}
>>>>>> +
>>>>>> +static void xen_io_del(MemoryListener *listener,
>>>>>> +                       MemoryRegionSection *section)
>>>>>> +{
>>>>>> +    XenIOState *state = container_of(listener, XenIOState, io_listener);
>>>>>> +
>>>>>> +    xen_unmap_io_section(xen_xc, xen_domid, state->ioservid, section);
>>>>>> +
>>>>>> +    memory_region_unref(section->mr);
>>>>>> +}
>>>>>> +
>>>>>> +static void xen_device_realize(DeviceListener *listener,
>>>>>> +                               DeviceState *dev)
>>>>>> +{
>>>>>> +    XenIOState *state = container_of(listener, XenIOState, device_listener);
>>>>>> +
>>>>>> +    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
>>>>>> +        PCIDevice *pci_dev = PCI_DEVICE(dev);
>>>>>> +
>>>>>> +        xen_map_pcidev(xen_xc, xen_domid, state->ioservid, pci_dev);
>>>>>> +    }
>>>>>> +}
>>>>>> +
>>>>>> +static void xen_device_unrealize(DeviceListener *listener,
>>>>>> +                                 DeviceState *dev)
>>>>>> +{
>>>>>> +    XenIOState *state = container_of(listener, XenIOState, device_listener);
>>>>>> +
>>>>>> +    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
>>>>>> +        PCIDevice *pci_dev = PCI_DEVICE(dev);
>>>>>> +
>>>>>> +        xen_unmap_pcidev(xen_xc, xen_domid, state->ioservid, pci_dev);
>>>>>> +    }
>>>>>> +}
>>>>>> +
>>>>>>  static void xen_sync_dirty_bitmap(XenIOState *state,
>>>>>>                                    hwaddr start_addr,
>>>>>>                                    ram_addr_t size)
>>>>>> @@ -615,6 +670,17 @@ static MemoryListener xen_memory_listener = {
>>>>>>      .priority = 10,
>>>>>>  };
>>>>>>
>>>>>> +static MemoryListener xen_io_listener = {
>>>>>> +    .region_add = xen_io_add,
>>>>>> +    .region_del = xen_io_del,
>>>>>> +    .priority = 10,
>>>>>> +};
>>>>>> +
>>>>>> +static DeviceListener xen_device_listener = {
>>>>>> +    .realize = xen_device_realize,
>>>>>> +    .unrealize = xen_device_unrealize,
>>>>>> +};
>>>>>> +
>>>>>>  /* get the ioreq packets from share mem */
>>>>>>  static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
>>>>>>  {
>>>>>> @@ -863,6 +929,27 @@ static void handle_ioreq(XenIOState *state, ioreq_t *req)
>>>>>>          case IOREQ_TYPE_INVALIDATE:
>>>>>>              xen_invalidate_map_cache();
>>>>>>              break;
>>>>>> +        case IOREQ_TYPE_PCI_CONFIG: {
>>>>>> +            uint32_t sbdf = req->addr >> 32;
>>>>>> +            uint32_t val;
>>>>>> +
>>>>>> +            /* Fake a write to port 0xCF8 so that
>>>>>> +             * the config space access will target the
>>>>>> +             * correct device model.
>>>>>> +             */
>>>>>> +            val = (1u << 31) |
>>>>>> +                  ((req->addr & 0x0f00) << 16) |
>>>>>> +                  ((sbdf & 0xffff) << 8) |
>>>>>> +                  (req->addr & 0xfc);
>>>>>> +            do_outp(0xcf8, 4, val);
>>>>>> +
>>>>>> +            /* Now issue the config space access via
>>>>>> +             * port 0xCFC
>>>>>> +             */
>>>>>> +            req->addr = 0xcfc | (req->addr & 0x03);
>>>>>> +            cpu_ioreq_pio(req);
>>>>>> +            break;
>>>>>> +        }
>>>>>>          default:
>>>>>>              hw_error("Invalid ioreq type 0x%x\n", req->type);
>>>>>>      }
>>>>>> @@ -993,9 +1080,15 @@ static void xen_main_loop_prepare(XenIOState *state)
>>>>>>  static void xen_hvm_change_state_handler(void *opaque, int running,
>>>>>>                                           RunState rstate)
>>>>>>  {
>>>>>> +    XenIOState *state = opaque;
>>>>>> +
>>>>>>      if (running) {
>>>>>> -        xen_main_loop_prepare((XenIOState *)opaque);
>>>>>> +        xen_main_loop_prepare(state);
>>>>>>      }
>>>>>> +
>>>>>> +    xen_set_ioreq_server_state(xen_xc, xen_domid,
>>>>>> +                               state->ioservid,
>>>>>> +                               (rstate == RUN_STATE_RUNNING));
>>>>>>  }
>>>>>>
>>>>>>  static void xen_exit_notifier(Notifier *n, void *data)
>>>>>> @@ -1064,8 +1157,9 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
>>>>>>                   MemoryRegion **ram_memory)
>>>>>>  {
>>>>>>      int i, rc;
>>>>>> -    unsigned long ioreq_pfn;
>>>>>> -    unsigned long bufioreq_evtchn;
>>>>>> +    xen_pfn_t ioreq_pfn;
>>>>>> +    xen_pfn_t bufioreq_pfn;
>>>>>> +    evtchn_port_t bufioreq_evtchn;
>>>>>>      XenIOState *state;
>>>>>>
>>>>>>      state = g_malloc0(sizeof (XenIOState));
>>>>>> @@ -1082,6 +1176,12 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
>>>>>>          return -1;
>>>>>>      }
>>>>>>
>>>>>> +    rc = xen_create_ioreq_server(xen_xc, xen_domid, &state->ioservid);
>>>>>> +    if (rc < 0) {
>>>>>> +        perror("xen: ioreq server create");
>>>>>> +        return -1;
>>>>>> +    }
>>>>>> +
>>>>>>      state->exit.notify = xen_exit_notifier;
>>>>>>      qemu_add_exit_notifier(&state->exit);
>>>>>>
>>>>>> @@ -1091,8 +1191,18 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
>>>>>>      state->wakeup.notify = xen_wakeup_notifier;
>>>>>>      qemu_register_wakeup_notifier(&state->wakeup);
>>>>>>
>>>>>> -    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_IOREQ_PFN, &ioreq_pfn);
>>>>>> +    rc = xen_get_ioreq_server_info(xen_xc, xen_domid, state->ioservid,
>>>>>> +                                   &ioreq_pfn, &bufioreq_pfn,
>>>>>> +                                   &bufioreq_evtchn);
>>>>>> +    if (rc < 0) {
>>>>>> +        hw_error("failed to get ioreq server info: error %d handle=" XC_INTERFACE_FMT,
>>>>>> +                 errno, xen_xc);
>>>>>> +    }
>>>>>> +
>>>>>>      DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
>>>>>> +    DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
>>>>>> +    DPRINTF("buffered io evtchn is %x\n", bufioreq_evtchn);
>>>>>> +
>>>>>>      state->shared_page = xc_map_foreign_range(xen_xc, xen_domid, XC_PAGE_SIZE,
>>>>>>                                                PROT_READ|PROT_WRITE, ioreq_pfn);
>>>>>>      if (state->shared_page == NULL) {
>>>>>> @@ -1114,10 +1224,10 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
>>>>>>          hw_error("get vmport regs pfn returned error %d, rc=%d", errno, rc);
>>>>>>      }
>>>>>>
>>>>>> -    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_PFN, &ioreq_pfn);
>>>>>> -    DPRINTF("buffered io page at pfn %lx\n", ioreq_pfn);
>>>>>> -    state->buffered_io_page = xc_map_foreign_range(xen_xc, xen_domid, XC_PAGE_SIZE,
>>>>>> -                                                   PROT_READ|PROT_WRITE, ioreq_pfn);
>>>>>> +    state->buffered_io_page = xc_map_foreign_range(xen_xc, xen_domid,
>>>>>> +                                                   XC_PAGE_SIZE,
>>>>>> +                                                   PROT_READ|PROT_WRITE,
>>>>>> +                                                   bufioreq_pfn);
>>>>>>      if (state->buffered_io_page == NULL) {
>>>>>>          hw_error("map buffered IO page returned error %d", errno);
>>>>>>      }
>>>>>> @@ -1125,6 +1235,12 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
>>>>>>      /* Note: cpus is empty at this point in init */
>>>>>>      state->cpu_by_vcpu_id = g_malloc0(max_cpus * sizeof(CPUState *));
>>>>>>
>>>>>> +    rc = xen_set_ioreq_server_state(xen_xc, xen_domid, state->ioservid, true);
>>>>>> +    if (rc < 0) {
>>>>>> +        hw_error("failed to enable ioreq server info: error %d handle=" XC_INTERFACE_FMT,
>>>>>> +                 errno, xen_xc);
>>>>>> +    }
>>>>>> +
>>>>>>      state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
>>>>>>
>>>>>>      /* FIXME: how about if we overflow the page here? */
>>>>>> @@ -1132,22 +1248,16 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
>>>>>>          rc = xc_evtchn_bind_interdomain(state->xce_handle, xen_domid,
>>>>>>                                          xen_vcpu_eport(state->shared_page, i));
>>>>>>          if (rc == -1) {
>>>>>> -            fprintf(stderr, "bind interdomain ioctl error %d\n", errno);
>>>>>> +            fprintf(stderr, "shared evtchn %d bind error %d\n", i, errno);
>>>>>>              return -1;
>>>>>>          }
>>>>>>          state->ioreq_local_port[i] = rc;
>>>>>>      }
>>>>>>
>>>>>> -    rc = xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_EVTCHN,
>>>>>> -            &bufioreq_evtchn);
>>>>>> -    if (rc < 0) {
>>>>>> -        fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_EVTCHN\n");
>>>>>> -        return -1;
>>>>>> -    }
>>>>>>      rc = xc_evtchn_bind_interdomain(state->xce_handle, xen_domid,
>>>>>> -            (uint32_t)bufioreq_evtchn);
>>>>>> +                                    bufioreq_evtchn);
>>>>>>      if (rc == -1) {
>>>>>> -        fprintf(stderr, "bind interdomain ioctl error %d\n", errno);
>>>>>> +        fprintf(stderr, "buffered evtchn bind error %d\n", errno);
>>>>>>          return -1;
>>>>>>      }
>>>>>>      state->bufioreq_local_port = rc;
>>>>>> @@ -1163,6 +1273,12 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
>>>>>>      memory_listener_register(&state->memory_listener, &address_space_memory);
>>>>>>      state->log_for_dirtybit = NULL;
>>>>>>
>>>>>> +    state->io_listener = xen_io_listener;
>>>>>> +    memory_listener_register(&state->io_listener, &address_space_io);
>>>>>> +
>>>>>> +    state->device_listener = xen_device_listener;
>>>>>> +    device_listener_register(&state->device_listener);
>>>>>> +
>>>>>>      /* Initialize backend core & drivers */
>>>>>>      if (xen_be_init() != 0) {
>>>>>>          fprintf(stderr, "%s: xen backend core setup failed\n", __FUNCTION__);
>>>>>>
>>>>>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

