
Re: [Xen-devel] [PATCH v2 01/18] OvmfPkg: Add public headers from Xen Project.



On Thu, Sep 04, 2014 at 05:50:56PM +0100, Anthony PERARD wrote:
> This patch imports public headers in order to use features from Xen
> like XenStore, PV Block... Only the necessary header files are
> included, and there are only a few modifications, in order to
> facilitate a future merge of more recent headers (which would be
> necessary to access new features).

I am not exactly sure why you copied the full headers instead of
just copying what is needed.

As in, you have defines for segments, cpu_user_regs, and the full
range of hypercalls that you are not using.

Why not trim this down to just what you need and rip the rest out?

This link:
http://stackoverflow.com/questions/1301850/tools-to-find-included-headers-which-are-unused
has an example of a tool that can help automate that.

> 
> There are only small modifications compared to the original files:
> - Use of ZeroMem() instead of memset()
> - Replace types to be more UEFI compliant using a script.
> 
> Command to run to change types:
> find OvmfPkg/Include/IndustryStandard/Xen -type f -name '*.h' -exec sed
>   --regexp-extended --file=fix_type_in_xen_includes.sed --in-place {} \;
> 
> This line is commented out instead of being changed, as I'm not sure
> why it does not compile (when s/char/CHAR8/), and it does not seem
> necessary so far.
>   /* __DEFINE_XEN_GUEST_HANDLE(uchar, unsigned char); */
>   in OvmfPkg/Include/IndustryStandard/Xen/arch-x86/xen.h
> 
> Avoid changing the 'long' that is not a type (with the first line).
> $ cat fix_type_in_xen_includes.sed
> /as long as/b
> 
> s/([^a-zA-Z0-9_]|^)uint8_t([^a-zA-Z0-9_]|$)/\1UINT8\2/g
> s/([^a-zA-Z0-9_]|^)uint16_t([^a-zA-Z0-9_]|$)/\1UINT16\2/g
> s/([^a-zA-Z0-9_]|^)uint32_t([^a-zA-Z0-9_]|$)/\1UINT32\2/g
> s/([^a-zA-Z0-9_]|^)uint64_t([^a-zA-Z0-9_]|$)/\1UINT64\2/g
> 
> s/([^a-zA-Z0-9_]|^)int8_t([^a-zA-Z0-9_]|$)/\1INT8\2/g
> s/([^a-zA-Z0-9_]|^)int16_t([^a-zA-Z0-9_]|$)/\1INT16\2/g
> s/([^a-zA-Z0-9_]|^)int32_t([^a-zA-Z0-9_]|$)/\1INT32\2/g
> s/([^a-zA-Z0-9_]|^)int64_t([^a-zA-Z0-9_]|$)/\1INT64\2/g
> 
> s/([^a-zA-Z0-9_]|^)void([^a-zA-Z0-9_]|$)/\1VOID\2/g
> s/([^a-zA-Z0-9_]|^)unsigned int([^a-zA-Z0-9_]|$)/\1UINT32\2/g
> s/([^a-zA-Z0-9_]|^)int([^a-zA-Z0-9_]|$)/\1INT32\2/g
> s/([^a-zA-Z0-9_]|^)char([^a-zA-Z0-9_]|$)/\1CHAR8\2/g
> s/([^a-zA-Z0-9_]|^)unsigned long([^a-zA-Z0-9_]|$)/\1UINTN\2/g
> s/([^a-zA-Z0-9_]|^)long([^a-zA-Z0-9_]|$)/\1INTN\2/g
> 
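
As a sanity check, here is what those rules do to a made-up
declaration (my example, not from the patch):

    /* Before: */
    typedef unsigned long xen_pfn_t;
    int HYPERVISOR_sched_op(int cmd, void *arg);

    /* After fix_type_in_xen_includes.sed: */
    typedef UINTN xen_pfn_t;
    INT32 HYPERVISOR_sched_op(INT32 cmd, VOID *arg);
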
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
> ---
>  .../IndustryStandard/Xen/arch-x86/xen-x86_32.h     | 171 ++++
>  .../IndustryStandard/Xen/arch-x86/xen-x86_64.h     | 202 +++++
>  .../Include/IndustryStandard/Xen/arch-x86/xen.h    | 273 +++++++
>  .../Include/IndustryStandard/Xen/event_channel.h   | 381 +++++++++
>  OvmfPkg/Include/IndustryStandard/Xen/grant_table.h | 662 +++++++++++++++
>  OvmfPkg/Include/IndustryStandard/Xen/hvm/hvm_op.h  | 275 +++++++
>  OvmfPkg/Include/IndustryStandard/Xen/hvm/params.h  | 150 ++++
>  OvmfPkg/Include/IndustryStandard/Xen/io/blkif.h    | 608 ++++++++++++++
>  .../Include/IndustryStandard/Xen/io/protocols.h    |  40 +
>  OvmfPkg/Include/IndustryStandard/Xen/io/ring.h     | 312 +++++++
>  OvmfPkg/Include/IndustryStandard/Xen/io/xenbus.h   |  80 ++
>  OvmfPkg/Include/IndustryStandard/Xen/io/xs_wire.h  | 138 ++++
>  OvmfPkg/Include/IndustryStandard/Xen/memory.h      | 480 +++++++++++
>  OvmfPkg/Include/IndustryStandard/Xen/sched.h       | 174 ++++
>  OvmfPkg/Include/IndustryStandard/Xen/trace.h       | 310 +++++++
>  OvmfPkg/Include/IndustryStandard/Xen/xen-compat.h  |  44 +
>  OvmfPkg/Include/IndustryStandard/Xen/xen.h         | 897 +++++++++++++++++++++
>  17 files changed, 5197 insertions(+)
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/arch-x86/xen-x86_32.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/arch-x86/xen-x86_64.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/arch-x86/xen.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/event_channel.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/grant_table.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/hvm/hvm_op.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/hvm/params.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/io/blkif.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/io/protocols.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/io/ring.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/io/xenbus.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/io/xs_wire.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/memory.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/sched.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/trace.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/xen-compat.h
>  create mode 100644 OvmfPkg/Include/IndustryStandard/Xen/xen.h
> 
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/arch-x86/xen-x86_32.h b/OvmfPkg/Include/IndustryStandard/Xen/arch-x86/xen-x86_32.h
> new file mode 100644
> index 0000000..98f2f03
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/arch-x86/xen-x86_32.h
> @@ -0,0 +1,171 @@
> +/******************************************************************************
> + * xen-x86_32.h
> + * 
> + * Guest OS interface to x86 32-bit Xen.
> + * 
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (c) 2004-2007, K A Fraser
> + */
> +
> +#ifndef __XEN_PUBLIC_ARCH_X86_XEN_X86_32_H__
> +#define __XEN_PUBLIC_ARCH_X86_XEN_X86_32_H__
> +
> +/*
> + * Hypercall interface:
> + *  Input:  %ebx, %ecx, %edx, %esi, %edi, %ebp (arguments 1-6)
> + *  Output: %eax
> + * Access is via hypercall page (set up by guest loader or via a Xen MSR):
> + *  call hypercall_page + hypercall-number * 32
> + * Clobbered: Argument registers (e.g., 2-arg hypercall clobbers %ebx,%ecx)
> + */
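
As an aside, for readers following along: a minimal sketch of what a
caller of this interface could look like, assuming a HypercallPage
pointer already set up via the Xen MSR (this helper is hypothetical,
not part of the patch):

    extern VOID  *HypercallPage;

    /* Two-argument hypercall on x86-32: arguments in %ebx/%ecx,
       result in %eax, argument registers clobbered as noted above. */
    STATIC INTN
    XenHypercall2 (UINT32 HypercallNum, UINTN Arg1, UINTN Arg2)
    {
      INTN Result;

      asm volatile (
        "call *%[entry]"
        : "=a" (Result), "+b" (Arg1), "+c" (Arg2)
        : [entry] "rm" ((UINT8 *)HypercallPage + HypercallNum * 32)
        : "memory"
      );
      return Result;
    }
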
> +
> +/*
> + * These flat segments are in the Xen-private section of every GDT. Since these
> + * are also present in the initial GDT, many OSes will be able to avoid
> + * installing their own GDT.
> + */
> +#define FLAT_RING1_CS 0xe019    /* GDT index 259 */
> +#define FLAT_RING1_DS 0xe021    /* GDT index 260 */
> +#define FLAT_RING1_SS 0xe021    /* GDT index 260 */
> +#define FLAT_RING3_CS 0xe02b    /* GDT index 261 */
> +#define FLAT_RING3_DS 0xe033    /* GDT index 262 */
> +#define FLAT_RING3_SS 0xe033    /* GDT index 262 */
> +
> +#define FLAT_KERNEL_CS FLAT_RING1_CS
> +#define FLAT_KERNEL_DS FLAT_RING1_DS
> +#define FLAT_KERNEL_SS FLAT_RING1_SS
> +#define FLAT_USER_CS    FLAT_RING3_CS
> +#define FLAT_USER_DS    FLAT_RING3_DS
> +#define FLAT_USER_SS    FLAT_RING3_SS
> +
> +#define __HYPERVISOR_VIRT_START_PAE    0xF5800000
> +#define __MACH2PHYS_VIRT_START_PAE     0xF5800000
> +#define __MACH2PHYS_VIRT_END_PAE       0xF6800000
> +#define HYPERVISOR_VIRT_START_PAE      \
> +    mk_unsigned_long(__HYPERVISOR_VIRT_START_PAE)
> +#define MACH2PHYS_VIRT_START_PAE       \
> +    mk_unsigned_long(__MACH2PHYS_VIRT_START_PAE)
> +#define MACH2PHYS_VIRT_END_PAE         \
> +    mk_unsigned_long(__MACH2PHYS_VIRT_END_PAE)
> +
> +/* Non-PAE bounds are obsolete. */
> +#define __HYPERVISOR_VIRT_START_NONPAE 0xFC000000
> +#define __MACH2PHYS_VIRT_START_NONPAE  0xFC000000
> +#define __MACH2PHYS_VIRT_END_NONPAE    0xFC400000
> +#define HYPERVISOR_VIRT_START_NONPAE   \
> +    mk_unsigned_long(__HYPERVISOR_VIRT_START_NONPAE)
> +#define MACH2PHYS_VIRT_START_NONPAE    \
> +    mk_unsigned_long(__MACH2PHYS_VIRT_START_NONPAE)
> +#define MACH2PHYS_VIRT_END_NONPAE      \
> +    mk_unsigned_long(__MACH2PHYS_VIRT_END_NONPAE)
> +
> +#define __HYPERVISOR_VIRT_START __HYPERVISOR_VIRT_START_PAE
> +#define __MACH2PHYS_VIRT_START  __MACH2PHYS_VIRT_START_PAE
> +#define __MACH2PHYS_VIRT_END    __MACH2PHYS_VIRT_END_PAE
> +
> +#ifndef HYPERVISOR_VIRT_START
> +#define HYPERVISOR_VIRT_START mk_unsigned_long(__HYPERVISOR_VIRT_START)
> +#endif
> +
> +#define MACH2PHYS_VIRT_START  mk_unsigned_long(__MACH2PHYS_VIRT_START)
> +#define MACH2PHYS_VIRT_END    mk_unsigned_long(__MACH2PHYS_VIRT_END)
> +#define MACH2PHYS_NR_ENTRIES  ((MACH2PHYS_VIRT_END-MACH2PHYS_VIRT_START)>>2)
> +#ifndef machine_to_phys_mapping
> +#define machine_to_phys_mapping ((UINTN *)MACH2PHYS_VIRT_START)
> +#endif
> +
> +/* 32-/64-bit invariability for control interfaces (domctl/sysctl). */
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +#undef ___DEFINE_XEN_GUEST_HANDLE
> +#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
> +    typedef struct { type *p; }                                 \
> +        __guest_handle_ ## name;                                \
> +    typedef struct { union { type *p; uint64_aligned_t q; }; }  \
> +        __guest_handle_64_ ## name
> +#undef set_xen_guest_handle_raw
> +#define set_xen_guest_handle_raw(hnd, val)                  \
> +    do { if ( sizeof(hnd) == 8 ) *(UINT64 *)&(hnd) = 0;   \
> +         (hnd).p = val;                                     \
> +    } while ( 0 )
> +#define uint64_aligned_t UINT64 __attribute__((aligned(8)))
> +#define __XEN_GUEST_HANDLE_64(name) __guest_handle_64_ ## name
> +#define XEN_GUEST_HANDLE_64(name) __XEN_GUEST_HANDLE_64(name)
> +#endif
> +
> +#ifndef __ASSEMBLY__
> +
> +struct cpu_user_regs {
> +    UINT32 ebx;
> +    UINT32 ecx;
> +    UINT32 edx;
> +    UINT32 esi;
> +    UINT32 edi;
> +    UINT32 ebp;
> +    UINT32 eax;
> +    UINT16 error_code;    /* private */
> +    UINT16 entry_vector;  /* private */
> +    UINT32 eip;
> +    UINT16 cs;
> +    UINT8  saved_upcall_mask;
> +    UINT8  _pad0;
> +    UINT32 eflags;        /* eflags.IF == !saved_upcall_mask */
> +    UINT32 esp;
> +    UINT16 ss, _pad1;
> +    UINT16 es, _pad2;
> +    UINT16 ds, _pad3;
> +    UINT16 fs, _pad4;
> +    UINT16 gs, _pad5;
> +};
> +typedef struct cpu_user_regs cpu_user_regs_t;
> +DEFINE_XEN_GUEST_HANDLE(cpu_user_regs_t);
> +
> +/*
> + * Page-directory addresses above 4GB do not fit into architectural %cr3.
> + * When accessing %cr3, or equivalent field in vcpu_guest_context, guests
> + * must use the following accessor macros to pack/unpack valid MFNs.
> + */
> +#define xen_pfn_to_cr3(pfn) (((unsigned)(pfn) << 12) | ((unsigned)(pfn) >> 20))
> +#define xen_cr3_to_pfn(cr3) (((unsigned)(cr3) >> 12) | ((unsigned)(cr3) << 20))
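
(Note those two macros are just 12-bit rotates of a 32-bit value, so
the round trip xen_cr3_to_pfn(xen_pfn_to_cr3(pfn)) is lossless for any
MFN that fits in 32 bits.)
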
> +
> +struct arch_vcpu_info {
> +    UINTN cr2;
> +    UINTN pad[5]; /* sizeof(vcpu_info_t) == 64 */
> +};
> +typedef struct arch_vcpu_info arch_vcpu_info_t;
> +
> +struct xen_callback {
> +    UINTN cs;
> +    UINTN eip;
> +};
> +typedef struct xen_callback xen_callback_t;
> +
> +#endif /* !__ASSEMBLY__ */
> +
> +#endif /* __XEN_PUBLIC_ARCH_X86_XEN_X86_32_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/arch-x86/xen-x86_64.h b/OvmfPkg/Include/IndustryStandard/Xen/arch-x86/xen-x86_64.h
> new file mode 100644
> index 0000000..7fcf68d
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/arch-x86/xen-x86_64.h
> @@ -0,0 +1,202 @@
> +/******************************************************************************
> + * xen-x86_64.h
> + * 
> + * Guest OS interface to x86 64-bit Xen.
> + * 
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (c) 2004-2006, K A Fraser
> + */
> +
> +#ifndef __XEN_PUBLIC_ARCH_X86_XEN_X86_64_H__
> +#define __XEN_PUBLIC_ARCH_X86_XEN_X86_64_H__
> +
> +/*
> + * Hypercall interface:
> + *  Input:  %rdi, %rsi, %rdx, %r10, %r8, %r9 (arguments 1-6)
> + *  Output: %rax
> + * Access is via hypercall page (set up by guest loader or via a Xen MSR):
> + *  call hypercall_page + hypercall-number * 32
> + * Clobbered: argument registers (e.g., 2-arg hypercall clobbers %rdi,%rsi)
> + */
> +
> +/*
> + * 64-bit segment selectors
> + * These flat segments are in the Xen-private section of every GDT. Since these
> + * are also present in the initial GDT, many OSes will be able to avoid
> + * installing their own GDT.
> + */
> +
> +#define FLAT_RING3_CS32 0xe023  /* GDT index 260 */
> +#define FLAT_RING3_CS64 0xe033  /* GDT index 261 */
> +#define FLAT_RING3_DS32 0xe02b  /* GDT index 262 */
> +#define FLAT_RING3_DS64 0x0000  /* NULL selector */
> +#define FLAT_RING3_SS32 0xe02b  /* GDT index 262 */
> +#define FLAT_RING3_SS64 0xe02b  /* GDT index 262 */
> +
> +#define FLAT_KERNEL_DS64 FLAT_RING3_DS64
> +#define FLAT_KERNEL_DS32 FLAT_RING3_DS32
> +#define FLAT_KERNEL_DS   FLAT_KERNEL_DS64
> +#define FLAT_KERNEL_CS64 FLAT_RING3_CS64
> +#define FLAT_KERNEL_CS32 FLAT_RING3_CS32
> +#define FLAT_KERNEL_CS   FLAT_KERNEL_CS64
> +#define FLAT_KERNEL_SS64 FLAT_RING3_SS64
> +#define FLAT_KERNEL_SS32 FLAT_RING3_SS32
> +#define FLAT_KERNEL_SS   FLAT_KERNEL_SS64
> +
> +#define FLAT_USER_DS64 FLAT_RING3_DS64
> +#define FLAT_USER_DS32 FLAT_RING3_DS32
> +#define FLAT_USER_DS   FLAT_USER_DS64
> +#define FLAT_USER_CS64 FLAT_RING3_CS64
> +#define FLAT_USER_CS32 FLAT_RING3_CS32
> +#define FLAT_USER_CS   FLAT_USER_CS64
> +#define FLAT_USER_SS64 FLAT_RING3_SS64
> +#define FLAT_USER_SS32 FLAT_RING3_SS32
> +#define FLAT_USER_SS   FLAT_USER_SS64
> +
> +#define __HYPERVISOR_VIRT_START 0xFFFF800000000000
> +#define __HYPERVISOR_VIRT_END   0xFFFF880000000000
> +#define __MACH2PHYS_VIRT_START  0xFFFF800000000000
> +#define __MACH2PHYS_VIRT_END    0xFFFF804000000000
> +
> +#ifndef HYPERVISOR_VIRT_START
> +#define HYPERVISOR_VIRT_START mk_unsigned_long(__HYPERVISOR_VIRT_START)
> +#define HYPERVISOR_VIRT_END   mk_unsigned_long(__HYPERVISOR_VIRT_END)
> +#endif
> +
> +#define MACH2PHYS_VIRT_START  mk_unsigned_long(__MACH2PHYS_VIRT_START)
> +#define MACH2PHYS_VIRT_END    mk_unsigned_long(__MACH2PHYS_VIRT_END)
> +#define MACH2PHYS_NR_ENTRIES  ((MACH2PHYS_VIRT_END-MACH2PHYS_VIRT_START)>>3)
> +#ifndef machine_to_phys_mapping
> +#define machine_to_phys_mapping ((UINTN *)HYPERVISOR_VIRT_START)
> +#endif
> +
> +/*
> + * INT32 HYPERVISOR_set_segment_base(UINT32 which, UINTN base)
> + *  @which == SEGBASE_*  ;  @base == 64-bit base address
> + * Returns 0 on success.
> + */
> +#define SEGBASE_FS          0
> +#define SEGBASE_GS_USER     1
> +#define SEGBASE_GS_KERNEL   2
> +#define SEGBASE_GS_USER_SEL 3 /* Set user %gs specified in base[15:0] */
> +
> +/*
> + * INT32 HYPERVISOR_iret(VOID)
> + * All arguments are on the kernel stack, in the following format.
> + * Never returns if successful. Current kernel context is lost.
> + * The saved CS is mapped as follows:
> + *   RING0 -> RING3 kernel mode.
> + *   RING1 -> RING3 kernel mode.
> + *   RING2 -> RING3 kernel mode.
> + *   RING3 -> RING3 user mode.
> + * However RING0 indicates that the guest kernel should return to itself
> + * directly with
> + *      orb   $3,1*8(%rsp)
> + *      iretq
> + * If flags contains VGCF_in_syscall:
> + *   Restore RAX, RIP, RFLAGS, RSP.
> + *   Discard R11, RCX, CS, SS.
> + * Otherwise:
> + *   Restore RAX, R11, RCX, CS:RIP, RFLAGS, SS:RSP.
> + * All other registers are saved on hypercall entry and restored to user.
> + */
> +/* Guest exited in SYSCALL context? Return to guest with SYSRET? */
> +#define _VGCF_in_syscall 8
> +#define VGCF_in_syscall  (1<<_VGCF_in_syscall)
> +#define VGCF_IN_SYSCALL  VGCF_in_syscall
> +
> +#ifndef __ASSEMBLY__
> +
> +struct iret_context {
> +    /* Top of stack (%rsp at point of hypercall). */
> +    UINT64 rax, r11, rcx, flags, rip, cs, rflags, rsp, ss;
> +    /* Bottom of iret stack frame. */
> +};
> +
> +#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
> +/* Anonymous union includes both 32- and 64-bit names (e.g., eax/rax). */
> +#define __DECL_REG(name) union { \
> +    UINT64 r ## name, e ## name; \
> +    UINT32 _e ## name; \
> +}
> +#else
> +/* Non-gcc sources must always use the proper 64-bit name (e.g., rax). */
> +#define __DECL_REG(name) UINT64 r ## name
> +#endif
> +
> +struct cpu_user_regs {
> +    UINT64 r15;
> +    UINT64 r14;
> +    UINT64 r13;
> +    UINT64 r12;
> +    __DECL_REG(bp);
> +    __DECL_REG(bx);
> +    UINT64 r11;
> +    UINT64 r10;
> +    UINT64 r9;
> +    UINT64 r8;
> +    __DECL_REG(ax);
> +    __DECL_REG(cx);
> +    __DECL_REG(dx);
> +    __DECL_REG(si);
> +    __DECL_REG(di);
> +    UINT32 error_code;    /* private */
> +    UINT32 entry_vector;  /* private */
> +    __DECL_REG(ip);
> +    UINT16 cs, _pad0[1];
> +    UINT8  saved_upcall_mask;
> +    UINT8  _pad1[3];
> +    __DECL_REG(flags);      /* rflags.IF == !saved_upcall_mask */
> +    __DECL_REG(sp);
> +    UINT16 ss, _pad2[3];
> +    UINT16 es, _pad3[3];
> +    UINT16 ds, _pad4[3];
> +    UINT16 fs, _pad5[3]; /* Non-zero => takes precedence over fs_base.     */
> +    UINT16 gs, _pad6[3]; /* Non-zero => takes precedence over gs_base_usr. */
> +};
> +typedef struct cpu_user_regs cpu_user_regs_t;
> +DEFINE_XEN_GUEST_HANDLE(cpu_user_regs_t);
> +
> +#undef __DECL_REG
> +
> +#define xen_pfn_to_cr3(pfn) ((UINTN)(pfn) << 12)
> +#define xen_cr3_to_pfn(cr3) ((UINTN)(cr3) >> 12)
> +
> +struct arch_vcpu_info {
> +    UINTN cr2;
> +    UINTN pad; /* sizeof(vcpu_info_t) == 64 */
> +};
> +typedef struct arch_vcpu_info arch_vcpu_info_t;
> +
> +typedef UINTN xen_callback_t;
> +
> +#endif /* !__ASSEMBLY__ */
> +
> +#endif /* __XEN_PUBLIC_ARCH_X86_XEN_X86_64_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/arch-x86/xen.h b/OvmfPkg/Include/IndustryStandard/Xen/arch-x86/xen.h
> new file mode 100644
> index 0000000..e2906f0
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/arch-x86/xen.h
> @@ -0,0 +1,273 @@
> +/******************************************************************************
> + * arch-x86/xen.h
> + * 
> + * Guest OS interface to x86 Xen.
> + * 
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (c) 2004-2006, K A Fraser
> + */
> +
> +#include "../xen.h"
> +
> +#ifndef __XEN_PUBLIC_ARCH_X86_XEN_H__
> +#define __XEN_PUBLIC_ARCH_X86_XEN_H__
> +
> +/* Structural guest handles introduced in 0x00030201. */
> +#if __XEN_INTERFACE_VERSION__ >= 0x00030201
> +#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
> +    typedef struct { type *p; } __guest_handle_ ## name
> +#else
> +#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
> +    typedef type * __guest_handle_ ## name
> +#endif
> +
> +/*
> + * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
> + * in a struct in memory.
> + * XEN_GUEST_HANDLE_PARAM represents a guest pointer, when passed as a
> + * hypercall argument.
> + * XEN_GUEST_HANDLE_PARAM and XEN_GUEST_HANDLE are the same on X86 but
> + * they might not be on other architectures.
> + */
> +#define __DEFINE_XEN_GUEST_HANDLE(name, type) \
> +    ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
> +    ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
> +#define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
> +#define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
> +#define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
> +#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
> +#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
> +#ifdef __XEN_TOOLS__
> +#define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
> +#endif
> +#define set_xen_guest_handle(hnd, val) set_xen_guest_handle_raw(hnd, val)
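
A quick (hypothetical) illustration of these macros with the converted
types, to show what declaring and filling a handle looks like:

    DEFINE_XEN_GUEST_HANDLE(UINT8);   /* typedef struct { UINT8 *p; } ... */

    VOID
    Example (VOID)
    {
      STATIC UINT8 Buffer[16];
      XEN_GUEST_HANDLE(UINT8) Handle;

      set_xen_guest_handle (Handle, Buffer);   /* Handle.p = Buffer */
    }
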
> +
> +#if defined(__i386__)
> +#include "xen-x86_32.h"
> +#elif defined(__x86_64__)
> +#include "xen-x86_64.h"
> +#endif
> +
> +#ifndef __ASSEMBLY__
> +typedef UINTN xen_pfn_t;
> +#define PRI_xen_pfn "lx"
> +#endif
> +
> +#define XEN_HAVE_PV_GUEST_ENTRY 1
> +
> +#define XEN_HAVE_PV_UPCALL_MASK 1
> +
> +/*
> + * `incontents 200 segdesc Segment Descriptor Tables
> + */
> +/*
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_set_gdt(const xen_pfn_t frames[], UINT32 entries);
> + * `
> + */
> +/*
> + * A number of GDT entries are reserved by Xen. These are not situated at the
> + * start of the GDT because some stupid OSes export hard-coded selector values
> + * in their ABI. These hard-coded values are always near the start of the GDT,
> + * so Xen places itself out of the way, at the far end of the GDT.
> + *
> + * NB The LDT is set using the MMUEXT_SET_LDT op of HYPERVISOR_mmuext_op
> + */
> +#define FIRST_RESERVED_GDT_PAGE  14
> +#define FIRST_RESERVED_GDT_BYTE  (FIRST_RESERVED_GDT_PAGE * 4096)
> +#define FIRST_RESERVED_GDT_ENTRY (FIRST_RESERVED_GDT_BYTE / 8)
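
(14 * 4096 / 8 = 7168, so the reserved range starts at GDT entry 7168;
e.g. FLAT_RING1_CS from xen-x86_32.h is 0xe019 >> 3 == 7171, i.e.
entry 7168 + 3.)
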
> +
> +
> +/*
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_update_descriptor(u64 pa, u64 desc);
> + * `
> + * ` @pa   The machine physical address of the descriptor to
> + * `       update. Must be either a descriptor page or writable.
> + * ` @desc The descriptor value to update, in the same format as a
> + * `       native descriptor table entry.
> + */
> +
> +/* Maximum number of virtual CPUs in legacy multi-processor guests. */
> +#define XEN_LEGACY_MAX_VCPUS 32
> +
> +#ifndef __ASSEMBLY__
> +
> +typedef UINTN xen_ulong_t;
> +#define PRI_xen_ulong "lx"
> +
> +/*
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_stack_switch(UINTN ss, UINTN esp);
> + * `
> + * Sets the stack segment and pointer for the current vcpu.
> + */
> +
> +/*
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_set_trap_table(const struct trap_info traps[]);
> + * `
> + */
> +/*
> + * Send an array of these to HYPERVISOR_set_trap_table().
> + * Terminate the array with a sentinel entry, with traps[].address==0.
> + * The privilege level specifies which modes may enter a trap via a software
> + * interrupt. On x86/64, since rings 1 and 2 are unavailable, we allocate
> + * privilege levels as follows:
> + *  Level == 0: No one may enter
> + *  Level == 1: Kernel may enter
> + *  Level == 2: Kernel may enter
> + *  Level == 3: Everyone may enter
> + */
> +#define TI_GET_DPL(_ti)      ((_ti)->flags & 3)
> +#define TI_GET_IF(_ti)       ((_ti)->flags & 4)
> +#define TI_SET_DPL(_ti,_dpl) ((_ti)->flags |= (_dpl))
> +#define TI_SET_IF(_ti,_if)   ((_ti)->flags |= ((!!(_if))<<2))
> +struct trap_info {
> +    UINT8       vector;  /* exception vector                              */
> +    UINT8       flags;   /* 0-3: privilege level; 4: clear event enable?  */
> +    UINT16      cs;      /* code selector                                 */
> +    UINTN address; /* code offset                                   */
> +};
> +typedef struct trap_info trap_info_t;
> +DEFINE_XEN_GUEST_HANDLE(trap_info_t);
> +
> +typedef UINT64 tsc_timestamp_t; /* RDTSC timestamp */
> +
> +/*
> + * The following is all CPU context. Note that the fpu_ctxt block is filled 
> + * in by FXSAVE if the CPU has feature FXSR; otherwise FSAVE is used.
> + *
> + * Also note that when calling DOMCTL_setvcpucontext and VCPU_initialise
> + * for HVM and PVH guests, not all information in this structure is updated:
> + *
> + * - For HVM guests, the structures read include: fpu_ctxt (if
> + * VGCT_I387_VALID is set), flags, user_regs, debugreg[*]
> + *
> + * - PVH guests are the same as HVM guests, but additionally use ctrlreg[3] to
> + * set cr3. All other fields not used should be set to 0.
> + */
> +struct vcpu_guest_context {
> +    /* FPU registers come first so they can be aligned for FXSAVE/FXRSTOR. */
> +    struct { CHAR8 x[512]; } fpu_ctxt;       /* User-level FPU registers     */
> +#define VGCF_I387_VALID                (1<<0)
> +#define VGCF_IN_KERNEL                 (1<<2)
> +#define _VGCF_i387_valid               0
> +#define VGCF_i387_valid                (1<<_VGCF_i387_valid)
> +#define _VGCF_in_kernel                2
> +#define VGCF_in_kernel                 (1<<_VGCF_in_kernel)
> +#define _VGCF_failsafe_disables_events 3
> +#define VGCF_failsafe_disables_events  (1<<_VGCF_failsafe_disables_events)
> +#define _VGCF_syscall_disables_events  4
> +#define VGCF_syscall_disables_events   (1<<_VGCF_syscall_disables_events)
> +#define _VGCF_online                   5
> +#define VGCF_online                    (1<<_VGCF_online)
> +    UINTN flags;                    /* VGCF_* flags                 */
> +    struct cpu_user_regs user_regs;         /* User-level CPU registers     */
> +    struct trap_info trap_ctxt[256];        /* Virtual IDT                  */
> +    UINTN ldt_base, ldt_ents;       /* LDT (linear address, # ents) */
> +    UINTN gdt_frames[16], gdt_ents; /* GDT (machine frames, # ents) */
> +    UINTN kernel_ss, kernel_sp;     /* Virtual TSS (only SS1/SP1)   */
> +    /* NB. User pagetable on x86/64 is placed in ctrlreg[1]. */
> +    UINTN ctrlreg[8];               /* CR0-CR7 (control registers)  */
> +    UINTN debugreg[8];              /* DB0-DB7 (debug registers)    */
> +#ifdef __i386__
> +    UINTN event_callback_cs;        /* CS:EIP of event callback     */
> +    UINTN event_callback_eip;
> +    UINTN failsafe_callback_cs;     /* CS:EIP of failsafe callback  */
> +    UINTN failsafe_callback_eip;
> +#else
> +    UINTN event_callback_eip;
> +    UINTN failsafe_callback_eip;
> +#ifdef __XEN__
> +    union {
> +        UINTN syscall_callback_eip;
> +        struct {
> +            UINT32 event_callback_cs;    /* compat CS of event cb     */
> +            UINT32 failsafe_callback_cs; /* compat CS of failsafe cb  */
> +        };
> +    };
> +#else
> +    UINTN syscall_callback_eip;
> +#endif
> +#endif
> +    UINTN vm_assist;                /* VMASST_TYPE_* bitmap */
> +#ifdef __x86_64__
> +    /* Segment base addresses. */
> +    UINT64      fs_base;
> +    UINT64      gs_base_kernel;
> +    UINT64      gs_base_user;
> +#endif
> +};
> +typedef struct vcpu_guest_context vcpu_guest_context_t;
> +DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
> +
> +struct arch_shared_info {
> +    UINTN max_pfn;                  /* max pfn that appears in table */
> +    /* Frame containing list of mfns containing list of mfns containing p2m. */
> +    xen_pfn_t     pfn_to_mfn_frame_list_list;
> +    UINTN nmi_reason;
> +    UINT64 pad[32];
> +};
> +typedef struct arch_shared_info arch_shared_info_t;
> +
> +#endif /* !__ASSEMBLY__ */
> +
> +/*
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_fpu_taskswitch(INT32 set);
> + * `
> + * Sets (if set!=0) or clears (if set==0) CR0.TS.
> + */
> +
> +/*
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_set_debugreg(INT32 regno, UINTN value);
> + *
> + * ` UINTN
> + * ` HYPERVISOR_get_debugreg(INT32 regno);
> + * For 0<=reg<=7, returns the debug register value.
> + * For other values of reg, returns ((UINTN)-EINVAL).
> + * (Unfortunately, this interface is defective.)
> + */
> +
> +/*
> + * Prefix forces emulation of some non-trapping instructions.
> + * Currently only CPUID.
> + */
> +#ifdef __ASSEMBLY__
> +#define XEN_EMULATE_PREFIX .byte 0x0f,0x0b,0x78,0x65,0x6e ;
> +#define XEN_CPUID          XEN_EMULATE_PREFIX cpuid
> +#else
> +#define XEN_EMULATE_PREFIX ".byte 0x0f,0x0b,0x78,0x65,0x6e ; "
> +#define XEN_CPUID          XEN_EMULATE_PREFIX "cpuid"
> +#endif
> +
> +#endif /* __XEN_PUBLIC_ARCH_X86_XEN_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/event_channel.h b/OvmfPkg/Include/IndustryStandard/Xen/event_channel.h
> new file mode 100644
> index 0000000..a45c65c
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/event_channel.h
> @@ -0,0 +1,381 @@
> +/******************************************************************************
> + * event_channel.h
> + *
> + * Event channels between domains.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (c) 2003-2004, K A Fraser.
> + */
> +
> +#ifndef __XEN_PUBLIC_EVENT_CHANNEL_H__
> +#define __XEN_PUBLIC_EVENT_CHANNEL_H__
> +
> +#include "xen.h"
> +
> +/*
> + * `incontents 150 evtchn Event Channels
> + *
> + * Event channels are the basic primitive provided by Xen for event
> + * notifications. An event is the Xen equivalent of a hardware
> + * interrupt. They essentially store one bit of information, the event
> + * of interest is signalled by transitioning this bit from 0 to 1.
> + *
> + * Notifications are received by a guest via an upcall from Xen,
> + * indicating when an event arrives (setting the bit). Further
> + * notifications are masked until the bit is cleared again (therefore,
> + * guests must check the value of the bit after re-enabling event
> + * delivery to ensure no missed notifications).
> + *
> + * Event notifications can be masked by setting a flag; this is
> + * equivalent to disabling interrupts and can be used to ensure
> + * atomicity of certain operations in the guest kernel.
> + *
> + * Event channels are represented by the evtchn_* fields in
> + * struct shared_info and struct vcpu_info.
> + */
> +
> +/*
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_event_channel_op(enum event_channel_op cmd, VOID *args)
> + * `
> + * @cmd  == EVTCHNOP_* (event-channel operation).
> + * @args == struct evtchn_* Operation-specific extra arguments (NULL if none).
> + */
> +
> +/* ` enum event_channel_op { // EVTCHNOP_* => struct evtchn_* */
> +#define EVTCHNOP_bind_interdomain 0
> +#define EVTCHNOP_bind_virq        1
> +#define EVTCHNOP_bind_pirq        2
> +#define EVTCHNOP_close            3
> +#define EVTCHNOP_send             4
> +#define EVTCHNOP_status           5
> +#define EVTCHNOP_alloc_unbound    6
> +#define EVTCHNOP_bind_ipi         7
> +#define EVTCHNOP_bind_vcpu        8
> +#define EVTCHNOP_unmask           9
> +#define EVTCHNOP_reset           10
> +#define EVTCHNOP_init_control    11
> +#define EVTCHNOP_expand_array    12
> +#define EVTCHNOP_set_priority    13
> +/* ` } */
> +
> +typedef UINT32 evtchn_port_t;
> +DEFINE_XEN_GUEST_HANDLE(evtchn_port_t);
> +
> +/*
> + * EVTCHNOP_alloc_unbound: Allocate a port in domain <dom> and mark as
> + * accepting interdomain bindings from domain <remote_dom>. A fresh port
> + * is allocated in <dom> and returned as <port>.
> + * NOTES:
> + *  1. If the caller is unprivileged then <dom> must be DOMID_SELF.
> + *  2. <rdom> may be DOMID_SELF, allowing loopback connections.
> + */
> +struct evtchn_alloc_unbound {
> +    /* IN parameters */
> +    domid_t dom, remote_dom;
> +    /* OUT parameters */
> +    evtchn_port_t port;
> +};
> +typedef struct evtchn_alloc_unbound evtchn_alloc_unbound_t;
> +
> +/*
> + * EVTCHNOP_bind_interdomain: Construct an interdomain event channel between
> + * the calling domain and <remote_dom>. <remote_dom,remote_port> must identify
> + * a port that is unbound and marked as accepting bindings from the calling
> + * domain. A fresh port is allocated in the calling domain and returned as
> + * <local_port>.
> + *
> + * In case the peer domain has already tried to set our event channel
> + * pending, before it was bound, EVTCHNOP_bind_interdomain always sets
> + * the local event channel pending.
> + *
> + * The usual pattern of use, in the guest's upcall (or subsequent
> + * handler) is as follows: (Re-enable the event channel for subsequent
> + * signalling and then) check for the existence of whatever condition
> + * is being waited for by other means, and take whatever action is
> + * needed (if any).
> + *
> + * NOTES:
> + *  1. <remote_dom> may be DOMID_SELF, allowing loopback connections.
> + */
> +struct evtchn_bind_interdomain {
> +    /* IN parameters. */
> +    domid_t remote_dom;
> +    evtchn_port_t remote_port;
> +    /* OUT parameters. */
> +    evtchn_port_t local_port;
> +};
> +typedef struct evtchn_bind_interdomain evtchn_bind_interdomain_t;
> +
> +/*
> + * EVTCHNOP_bind_virq: Bind a local event channel to VIRQ <irq> on specified
> + * vcpu.
> + * NOTES:
> + *  1. Virtual IRQs are classified as per-vcpu or global. See the VIRQ list
> + *     in xen.h for the classification of each VIRQ.
> + *  2. Global VIRQs must be allocated on VCPU0 but can subsequently be
> + *     re-bound via EVTCHNOP_bind_vcpu.
> + *  3. Per-vcpu VIRQs may be bound to at most one event channel per vcpu.
> + *     The allocated event channel is bound to the specified vcpu and the
> + *     binding cannot be changed.
> + */
> +struct evtchn_bind_virq {
> +    /* IN parameters. */
> +    UINT32 virq; /* enum virq */
> +    UINT32 vcpu;
> +    /* OUT parameters. */
> +    evtchn_port_t port;
> +};
> +typedef struct evtchn_bind_virq evtchn_bind_virq_t;
> +
> +/*
> + * EVTCHNOP_bind_pirq: Bind a local event channel to a real IRQ (PIRQ <irq>).
> + * NOTES:
> + *  1. A physical IRQ may be bound to at most one event channel per domain.
> + *  2. Only a sufficiently-privileged domain may bind to a physical IRQ.
> + */
> +struct evtchn_bind_pirq {
> +    /* IN parameters. */
> +    UINT32 pirq;
> +#define BIND_PIRQ__WILL_SHARE 1
> +    UINT32 flags; /* BIND_PIRQ__* */
> +    /* OUT parameters. */
> +    evtchn_port_t port;
> +};
> +typedef struct evtchn_bind_pirq evtchn_bind_pirq_t;
> +
> +/*
> + * EVTCHNOP_bind_ipi: Bind a local event channel to receive events.
> + * NOTES:
> + *  1. The allocated event channel is bound to the specified vcpu. The binding
> + *     may not be changed.
> + */
> +struct evtchn_bind_ipi {
> +    UINT32 vcpu;
> +    /* OUT parameters. */
> +    evtchn_port_t port;
> +};
> +typedef struct evtchn_bind_ipi evtchn_bind_ipi_t;
> +
> +/*
> + * EVTCHNOP_close: Close a local event channel <port>. If the channel is
> + * interdomain then the remote end is placed in the unbound state
> + * (EVTCHNSTAT_unbound), awaiting a new connection.
> + */
> +struct evtchn_close {
> +    /* IN parameters. */
> +    evtchn_port_t port;
> +};
> +typedef struct evtchn_close evtchn_close_t;
> +
> +/*
> + * EVTCHNOP_send: Send an event to the remote end of the channel whose local
> + * endpoint is <port>.
> + */
> +struct evtchn_send {
> +    /* IN parameters. */
> +    evtchn_port_t port;
> +};
> +typedef struct evtchn_send evtchn_send_t;
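
To make the calling convention concrete, a hedged sketch of notifying
the remote end, reusing the hypothetical XenHypercall2() helper from my
earlier comment (__HYPERVISOR_event_channel_op comes from xen.h):

    STATIC INTN
    NotifyRemote (evtchn_port_t LocalPort)
    {
      struct evtchn_send Send;

      Send.port = LocalPort;
      return XenHypercall2 (__HYPERVISOR_event_channel_op,
                            EVTCHNOP_send, (UINTN)&Send);
    }
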
> +
> +/*
> + * EVTCHNOP_status: Get the current status of the communication channel which
> + * has an endpoint at <dom, port>.
> + * NOTES:
> + *  1. <dom> may be specified as DOMID_SELF.
> + *  2. Only a sufficiently-privileged domain may obtain the status of an event
> + *     channel for which <dom> is not DOMID_SELF.
> + */
> +struct evtchn_status {
> +    /* IN parameters */
> +    domid_t  dom;
> +    evtchn_port_t port;
> +    /* OUT parameters */
> +#define EVTCHNSTAT_closed       0  /* Channel is not in use.                 */
> +#define EVTCHNSTAT_unbound      1  /* Channel is waiting interdom connection.*/
> +#define EVTCHNSTAT_interdomain  2  /* Channel is connected to remote domain. */
> +#define EVTCHNSTAT_pirq         3  /* Channel is bound to a phys IRQ line.   */
> +#define EVTCHNSTAT_virq         4  /* Channel is bound to a virtual IRQ line */
> +#define EVTCHNSTAT_ipi          5  /* Channel is bound to a virtual IPI line */
> +    UINT32 status;
> +    UINT32 vcpu;                 /* VCPU to which this channel is bound.   */
> +    union {
> +        struct {
> +            domid_t dom;
> +        } unbound;                 /* EVTCHNSTAT_unbound */
> +        struct {
> +            domid_t dom;
> +            evtchn_port_t port;
> +        } interdomain;             /* EVTCHNSTAT_interdomain */
> +        UINT32 pirq;             /* EVTCHNSTAT_pirq        */
> +        UINT32 virq;             /* EVTCHNSTAT_virq        */
> +    } u;
> +};
> +typedef struct evtchn_status evtchn_status_t;
> +
> +/*
> + * EVTCHNOP_bind_vcpu: Specify which vcpu a channel should notify when an
> + * event is pending.
> + * NOTES:
> + *  1. IPI-bound channels always notify the vcpu specified at bind time.
> + *     This binding cannot be changed.
> + *  2. Per-VCPU VIRQ channels always notify the vcpu specified at bind time.
> + *     This binding cannot be changed.
> + *  3. All other channels notify vcpu0 by default. This default is set when
> + *     the channel is allocated (a port that is freed and subsequently reused
> + *     has its binding reset to vcpu0).
> + */
> +struct evtchn_bind_vcpu {
> +    /* IN parameters. */
> +    evtchn_port_t port;
> +    UINT32 vcpu;
> +};
> +typedef struct evtchn_bind_vcpu evtchn_bind_vcpu_t;
> +
> +/*
> + * EVTCHNOP_unmask: Unmask the specified local event-channel port and deliver
> + * a notification to the appropriate VCPU if an event is pending.
> + */
> +struct evtchn_unmask {
> +    /* IN parameters. */
> +    evtchn_port_t port;
> +};
> +typedef struct evtchn_unmask evtchn_unmask_t;
> +
> +/*
> + * EVTCHNOP_reset: Close all event channels associated with specified domain.
> + * NOTES:
> + *  1. <dom> may be specified as DOMID_SELF.
> + *  2. Only a sufficiently-privileged domain may specify other than DOMID_SELF.
> + */
> +struct evtchn_reset {
> +    /* IN parameters. */
> +    domid_t dom;
> +};
> +typedef struct evtchn_reset evtchn_reset_t;
> +
> +/*
> + * EVTCHNOP_init_control: initialize the control block for the FIFO ABI.
> + *
> + * Note: any events that are currently pending will not be resent and
> + * will be lost.  Guests should call this before binding any event to
> + * avoid losing any events.
> + */
> +struct evtchn_init_control {
> +    /* IN parameters. */
> +    UINT64 control_gfn;
> +    UINT32 offset;
> +    UINT32 vcpu;
> +    /* OUT parameters. */
> +    UINT8 link_bits;
> +    UINT8 _pad[7];
> +};
> +typedef struct evtchn_init_control evtchn_init_control_t;
> +
> +/*
> + * EVTCHNOP_expand_array: add an additional page to the event array.
> + */
> +struct evtchn_expand_array {
> +    /* IN parameters. */
> +    UINT64 array_gfn;
> +};
> +typedef struct evtchn_expand_array evtchn_expand_array_t;
> +
> +/*
> + * EVTCHNOP_set_priority: set the priority for an event channel.
> + */
> +struct evtchn_set_priority {
> +    /* IN parameters. */
> +    UINT32 port;
> +    UINT32 priority;
> +};
> +typedef struct evtchn_set_priority evtchn_set_priority_t;
> +
> +/*
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_event_channel_op_compat(struct evtchn_op *op)
> + * `
> + * Superseded by new event_channel_op() hypercall since 0x00030202.
> + */
> +struct evtchn_op {
> +    UINT32 cmd; /* enum event_channel_op */
> +    union {
> +        struct evtchn_alloc_unbound    alloc_unbound;
> +        struct evtchn_bind_interdomain bind_interdomain;
> +        struct evtchn_bind_virq        bind_virq;
> +        struct evtchn_bind_pirq        bind_pirq;
> +        struct evtchn_bind_ipi         bind_ipi;
> +        struct evtchn_close            close;
> +        struct evtchn_send             send;
> +        struct evtchn_status           status;
> +        struct evtchn_bind_vcpu        bind_vcpu;
> +        struct evtchn_unmask           unmask;
> +    } u;
> +};
> +typedef struct evtchn_op evtchn_op_t;
> +DEFINE_XEN_GUEST_HANDLE(evtchn_op_t);
> +
> +/*
> + * 2-level ABI
> + */
> +
> +#define EVTCHN_2L_NR_CHANNELS (sizeof(xen_ulong_t) * sizeof(xen_ulong_t) * 64)
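
(That works out to 8 * 8 * 64 = 4096 channels for a 64-bit
xen_ulong_t, and 4 * 4 * 64 = 1024 for a 32-bit one.)
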
> +
> +/*
> + * FIFO ABI
> + */
> +
> +/* Events may have priorities from 0 (highest) to 15 (lowest). */
> +#define EVTCHN_FIFO_PRIORITY_MAX     0
> +#define EVTCHN_FIFO_PRIORITY_DEFAULT 7
> +#define EVTCHN_FIFO_PRIORITY_MIN     15
> +
> +#define EVTCHN_FIFO_MAX_QUEUES (EVTCHN_FIFO_PRIORITY_MIN + 1)
> +
> +typedef UINT32 event_word_t;
> +
> +#define EVTCHN_FIFO_PENDING 31
> +#define EVTCHN_FIFO_MASKED  30
> +#define EVTCHN_FIFO_LINKED  29
> +#define EVTCHN_FIFO_BUSY    28
> +
> +#define EVTCHN_FIFO_LINK_BITS 17
> +#define EVTCHN_FIFO_LINK_MASK ((1 << EVTCHN_FIFO_LINK_BITS) - 1)
> +
> +#define EVTCHN_FIFO_NR_CHANNELS (1 << EVTCHN_FIFO_LINK_BITS)
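
(i.e. 1 << 17 = 131072 channels under the FIFO ABI.)
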
> +
> +struct evtchn_fifo_control_block {
> +    UINT32 ready;
> +    UINT32 _rsvd;
> +    UINT32 head[EVTCHN_FIFO_MAX_QUEUES];
> +};
> +typedef struct evtchn_fifo_control_block evtchn_fifo_control_block_t;
> +
> +#endif /* __XEN_PUBLIC_EVENT_CHANNEL_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/grant_table.h b/OvmfPkg/Include/IndustryStandard/Xen/grant_table.h
> new file mode 100644
> index 0000000..f46b48d
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/grant_table.h
> @@ -0,0 +1,662 @@
> +/******************************************************************************
> + * grant_table.h
> + *
> + * Interface for granting foreign access to page frames, and receiving
> + * page-ownership transfers.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (c) 2004, K A Fraser
> + */
> +
> +#ifndef __XEN_PUBLIC_GRANT_TABLE_H__
> +#define __XEN_PUBLIC_GRANT_TABLE_H__
> +
> +#include "xen.h"
> +
> +/*
> + * `incontents 150 gnttab Grant Tables
> + *
> + * Xen's grant tables provide a generic mechanism for memory sharing
> + * between domains. This shared memory interface underpins the split
> + * device drivers for block and network IO.
> + *
> + * Each domain has its own grant table. This is a data structure that
> + * is shared with Xen; it allows the domain to tell Xen what kind of
> + * permissions other domains have on its pages. Entries in the grant
> + * table are identified by grant references. A grant reference is an
> + * integer, which indexes into the grant table. It acts as a
> + * capability which the grantee can use to perform operations on the
> + * granter's memory.
> + *
> + * This capability-based system allows shared-memory communications
> + * between unprivileged domains. A grant reference also encapsulates
> + * the details of a shared page, removing the need for a domain to
> + * know the real machine address of a page it is sharing. This makes
> + * it possible to share memory correctly with domains running in
> + * fully virtualised memory.
> + */
> +
> +/***********************************
> + * GRANT TABLE REPRESENTATION
> + */
> +
> +/* Some rough guidelines on accessing and updating grant-table entries
> + * in a concurrency-safe manner. For more information, Linux contains a
> + * reference implementation for guest OSes (drivers/xen/grant_table.c, see
> + * http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=drivers/xen/grant-table.c;hb=HEAD
> + *
> + * NB. WMB is a no-op on current-generation x86 processors. However, a
> + *     compiler barrier will still be required.
> + *
> + * Introducing a valid entry into the grant table:
> + *  1. Write ent->domid.
> + *  2. Write ent->frame:
> + *      GTF_permit_access:   Frame to which access is permitted.
> + *      GTF_accept_transfer: Pseudo-phys frame slot being filled by new
> + *                           frame, or zero if none.
> + *  3. Write memory barrier (WMB).
> + *  4. Write ent->flags, inc. valid type.
> + *
> + * Invalidating an unused GTF_permit_access entry:
> + *  1. flags = ent->flags.
> + *  2. Observe that !(flags & (GTF_reading|GTF_writing)).
> + *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
> + *  NB. No need for WMB as reuse of entry is control-dependent on success of
> + *      step 3, and all architectures guarantee ordering of ctrl-dep writes.
> + *
> + * Invalidating an in-use GTF_permit_access entry:
> + *  This cannot be done directly. Request assistance from the domain controller
> + *  which can set a timeout on the use of a grant entry and take necessary
> + *  action. (NB. This is not yet implemented!).
> + *
> + * Invalidating an unused GTF_accept_transfer entry:
> + *  1. flags = ent->flags.
> + *  2. Observe that !(flags & GTF_transfer_committed). [*]
> + *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
> + *  NB. No need for WMB as reuse of entry is control-dependent on success of
> + *      step 3, and all architectures guarantee ordering of ctrl-dep writes.
> + *  [*] If GTF_transfer_committed is set then the grant entry is 'committed'.
> + *      The guest must /not/ modify the grant entry until the address of the
> + *      transferred frame is written. It is safe for the guest to spin waiting
> + *      for this to occur (detect by observing GTF_transfer_completed in
> + *      ent->flags).
> + *
> + * Invalidating a committed GTF_accept_transfer entry:
> + *  1. Wait for (ent->flags & GTF_transfer_completed).
> + *
> + * Changing a GTF_permit_access from writable to read-only:
> + *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writing.
> + *
> + * Changing a GTF_permit_access from read-only to writable:
> + *  Use SMP-safe bit-setting instruction.
> + */
> +
> +/*
> + * Reference to a grant entry in a specified domain's grant table.
> + */
> +typedef UINT32 grant_ref_t;
> +
> +/*
> + * A grant table comprises a packed array of grant entries in one or more
> + * page frames shared between Xen and a guest.
> + * [XEN]: This field is written by Xen and read by the sharing guest.
> + * [GST]: This field is written by the guest and read by Xen.
> + */
> +
> +/*
> + * Version 1 of the grant table entry structure is maintained purely
> + * for backwards compatibility.  New guests should use version 2.
> + */
> +#if __XEN_INTERFACE_VERSION__ < 0x0003020a
> +#define grant_entry_v1 grant_entry
> +#define grant_entry_v1_t grant_entry_t
> +#endif
> +struct grant_entry_v1 {
> +    /* GTF_xxx: various type and flag information.  [XEN,GST] */
> +    UINT16 flags;
> +    /* The domain being granted foreign privileges. [GST] */
> +    domid_t  domid;
> +    /*
> +     * GTF_permit_access: Frame that @domid is allowed to map and access. [GST]
> +     * GTF_accept_transfer: Frame whose ownership transferred by @domid. [XEN]
> +     */
> +    UINT32 frame;
> +};
> +typedef struct grant_entry_v1 grant_entry_v1_t;
> +
> +/* The first few grant table entries will be preserved across grant table
> + * version changes and may be pre-populated at domain creation by tools.
> + */
> +#define GNTTAB_NR_RESERVED_ENTRIES     8
> +#define GNTTAB_RESERVED_CONSOLE        0
> +#define GNTTAB_RESERVED_XENSTORE       1
> +
> +/*
> + * Type of grant entry.
> + *  GTF_invalid: This grant entry grants no privileges.
> + *  GTF_permit_access: Allow @domid to map/access @frame.
> + *  GTF_accept_transfer: Allow @domid to transfer ownership of one page frame
> + *                       to this guest. Xen writes the page number to @frame.
> + *  GTF_transitive: Allow @domid to transitively access a subrange of
> + *                  @trans_grant in @trans_domid.  No mappings are allowed.
> + */
> +#define GTF_invalid         (0U<<0)
> +#define GTF_permit_access   (1U<<0)
> +#define GTF_accept_transfer (2U<<0)
> +#define GTF_transitive      (3U<<0)
> +#define GTF_type_mask       (3U<<0)
> +
> +/*
> + * Subflags for GTF_permit_access.
> + *  GTF_readonly: Restrict @domid to read-only mappings and accesses. [GST]
> + *  GTF_reading: Grant entry is currently mapped for reading by @domid. [XEN]
> + *  GTF_writing: Grant entry is currently mapped for writing by @domid. [XEN]
> + *  GTF_PAT, GTF_PWT, GTF_PCD: (x86) cache attribute flags for the grant [GST]
> + *  GTF_sub_page: Grant access to only a subrange of the page.  @domid
> + *                will only be allowed to copy from the grant, and not
> + *                map it. [GST]
> + */
> +#define _GTF_readonly       (2)
> +#define GTF_readonly        (1U<<_GTF_readonly)
> +#define _GTF_reading        (3)
> +#define GTF_reading         (1U<<_GTF_reading)
> +#define _GTF_writing        (4)
> +#define GTF_writing         (1U<<_GTF_writing)
> +#define _GTF_PWT            (5)
> +#define GTF_PWT             (1U<<_GTF_PWT)
> +#define _GTF_PCD            (6)
> +#define GTF_PCD             (1U<<_GTF_PCD)
> +#define _GTF_PAT            (7)
> +#define GTF_PAT             (1U<<_GTF_PAT)
> +#define _GTF_sub_page       (8)
> +#define GTF_sub_page        (1U<<_GTF_sub_page)
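
To make the update rules from the "GRANT TABLE REPRESENTATION" comment
above concrete, a hypothetical sketch of introducing a valid
GTF_permit_access entry, assuming GrantTable points at the shared v1
table and using MemoryFence() from BaseLib as the write barrier:

    STATIC VOID
    GrantAccess (
      grant_entry_v1_t  *GrantTable,
      grant_ref_t       Ref,
      domid_t           DomId,
      UINT32            Frame,
      BOOLEAN           ReadOnly
      )
    {
      GrantTable[Ref].domid = DomId;     /* 1. Write ent->domid. */
      GrantTable[Ref].frame = Frame;     /* 2. Write ent->frame. */
      MemoryFence ();                    /* 3. Write memory barrier. */
      GrantTable[Ref].flags = GTF_permit_access |
        (ReadOnly ? GTF_readonly : 0);   /* 4. Write ent->flags, inc. type. */
    }
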
> +
> +/*
> + * Subflags for GTF_accept_transfer:
> + *  GTF_transfer_committed: Xen sets this flag to indicate that it is committed
> + *      to transferring ownership of a page frame. When a guest sees this flag
> + *      it must /not/ modify the grant entry until GTF_transfer_completed is
> + *      set by Xen.
> + *  GTF_transfer_completed: It is safe for the guest to spin-wait on this flag
> + *      after reading GTF_transfer_committed. Xen will always write the frame
> + *      address, followed by ORing this flag, in a timely manner.
> + */
> +#define _GTF_transfer_committed (2)
> +#define GTF_transfer_committed  (1U<<_GTF_transfer_committed)
> +#define _GTF_transfer_completed (3)
> +#define GTF_transfer_completed  (1U<<_GTF_transfer_completed)
> +
> +/*
> + * Version 2 grant table entries.  These fulfil the same role as
> + * version 1 entries, but can represent more complicated operations.
> + * Any given domain will have either a version 1 or a version 2 table,
> + * and every entry in the table will be the same version.
> + *
> + * The interface by which domains use grant references does not depend
> + * on the grant table version in use by the other domain.
> + */
> +#if __XEN_INTERFACE_VERSION__ >= 0x0003020a
> +/*
> + * Version 1 and version 2 grant entries share a common prefix.  The
> + * fields of the prefix are documented as part of struct
> + * grant_entry_v1.
> + */
> +struct grant_entry_header {
> +    UINT16 flags;
> +    domid_t  domid;
> +};
> +typedef struct grant_entry_header grant_entry_header_t;
> +
> +/*
> + * Version 2 of the grant entry structure.
> + */
> +union grant_entry_v2 {
> +    grant_entry_header_t hdr;
> +
> +    /*
> +     * This member is used for V1-style full page grants, where either:
> +     *
> +     * -- hdr.type is GTF_accept_transfer, or
> +     * -- hdr.type is GTF_permit_access and GTF_sub_page is not set.
> +     *
> +     * In that case, the frame field has the same semantics as the
> +     * field of the same name in the V1 entry structure.
> +     */
> +    struct {
> +        grant_entry_header_t hdr;
> +        UINT32 pad0;
> +        UINT64 frame;
> +    } full_page;
> +
> +    /*
> +     * If the grant type is GTF_grant_access and GTF_sub_page is set,
> +     * @domid is allowed to access bytes [@page_off,@page_off+@length)
> +     * in frame @frame.
> +     */
> +    struct {
> +        grant_entry_header_t hdr;
> +        UINT16 page_off;
> +        UINT16 length;
> +        UINT64 frame;
> +    } sub_page;
> +
> +    /*
> +     * If the grant is GTF_transitive, @domid is allowed to use the
> +     * grant @gref in domain @trans_domid, as if it was the local
> +     * domain.  Obviously, the transitive access must be compatible
> +     * with the original grant.
> +     *
> +     * The current version of Xen does not allow transitive grants
> +     * to be mapped.
> +     */
> +    struct {
> +        grant_entry_header_t hdr;
> +        domid_t trans_domid;
> +        UINT16 pad0;
> +        grant_ref_t gref;
> +    } transitive;
> +
> +    UINT32 __spacer[4]; /* Pad to a power of two */
> +};
> +typedef union grant_entry_v2 grant_entry_v2_t;
> +
> +typedef UINT16 grant_status_t;
> +
> +#endif /* __XEN_INTERFACE_VERSION__ */
> +
> +/***********************************
> + * GRANT TABLE QUERIES AND USES
> + */
> +
> +/* ` enum neg_errnoval
> + * ` HYPERVISOR_grant_table_op(enum grant_table_op cmd,
> + * `                           VOID *args,
> + * `                           UINT32 count)
> + * `
> + *
> + * @args points to an array of per-command data structures. The array
> + * has @count members.
> + */
> +
> +/* ` enum grant_table_op { // GNTTABOP_* => struct gnttab_* */
> +#define GNTTABOP_map_grant_ref        0
> +#define GNTTABOP_unmap_grant_ref      1
> +#define GNTTABOP_setup_table          2
> +#define GNTTABOP_dump_table           3
> +#define GNTTABOP_transfer             4
> +#define GNTTABOP_copy                 5
> +#define GNTTABOP_query_size           6
> +#define GNTTABOP_unmap_and_replace    7
> +#if __XEN_INTERFACE_VERSION__ >= 0x0003020a
> +#define GNTTABOP_set_version          8
> +#define GNTTABOP_get_status_frames    9
> +#define GNTTABOP_get_version          10
> +#define GNTTABOP_swap_grant_ref       11
> +#endif /* __XEN_INTERFACE_VERSION__ */
> +/* ` } */
> +
> +/*
> + * Handle to track a mapping created via a grant reference.
> + */
> +typedef UINT32 grant_handle_t;
> +
> +/*
> + * GNTTABOP_map_grant_ref: Map the grant entry (<dom>,<ref>) for access
> + * by devices and/or host CPUs. If successful, <handle> is a tracking number
> + * that must be presented later to destroy the mapping(s). On error, <handle>
> + * is a negative status code.
> + * NOTES:
> + *  1. If GNTMAP_device_map is specified then <dev_bus_addr> is the address
> + *     via which I/O devices may access the granted frame.
> + *  2. If GNTMAP_host_map is specified then a mapping will be added at
> + *     either a host virtual address in the current address space, or at
> + *     a PTE at the specified machine address.  The type of mapping to
> + *     perform is selected through the GNTMAP_contains_pte flag, and the
> + *     address is specified in <host_addr>.
> + *  3. Mappings should only be destroyed via GNTTABOP_unmap_grant_ref. If a
> + *     host mapping is destroyed by other means then it is *NOT* guaranteed
> + *     to be accounted to the correct grant reference!
> + */
> +struct gnttab_map_grant_ref {
> +    /* IN parameters. */
> +    UINT64 host_addr;
> +    UINT32 flags;               /* GNTMAP_* */
> +    grant_ref_t ref;
> +    domid_t  dom;
> +    /* OUT parameters. */
> +    INT16  status;              /* => enum grant_status */
> +    grant_handle_t handle;
> +    UINT64 dev_bus_addr;
> +};
> +typedef struct gnttab_map_grant_ref gnttab_map_grant_ref_t;
> +DEFINE_XEN_GUEST_HANDLE(gnttab_map_grant_ref_t);
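
As an aside, exercising this struct is short; an untested sketch, assuming the
HYPERVISOR_grant_table_op() wrapper named above and a caller-chosen,
page-aligned virtual address:

    struct gnttab_map_grant_ref Map;

    ZeroMem (&Map, sizeof (Map));
    Map.host_addr = HostAddr;        /* page-aligned VA picked by caller */
    Map.flags     = GNTMAP_host_map; /* CPU mapping, read-write */
    Map.ref       = Ref;             /* grant offered by domain Dom */
    Map.dom       = Dom;
    if (HYPERVISOR_grant_table_op (GNTTABOP_map_grant_ref, &Map, 1) != 0 ||
        Map.status != GNTST_okay) {
      /* Map.handle is only meaningful when status is GNTST_okay */
    }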
> +
> +/*
> + * GNTTABOP_unmap_grant_ref: Destroy one or more grant-reference mappings
> + * tracked by <handle>. If <host_addr> or <dev_bus_addr> is zero, that
> + * field is ignored. If non-zero, they must refer to a device/host mapping
> + * that is tracked by <handle>.
> + * NOTES:
> + *  1. The call may fail in an undefined manner if either mapping is not
> + *     tracked by <handle>.
> + *  2. After executing a batch of unmaps, it is guaranteed that no stale
> + *     mappings will remain in the device or host TLBs.
> + */
> +struct gnttab_unmap_grant_ref {
> +    /* IN parameters. */
> +    UINT64 host_addr;
> +    UINT64 dev_bus_addr;
> +    grant_handle_t handle;
> +    /* OUT parameters. */
> +    INT16  status;              /* => enum grant_status */
> +};
> +typedef struct gnttab_unmap_grant_ref gnttab_unmap_grant_ref_t;
> +DEFINE_XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t);
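
And the matching teardown, per note 3 under GNTTABOP_map_grant_ref above
(same assumed wrapper; Handle is the value returned at map time):

    struct gnttab_unmap_grant_ref Unmap;

    Unmap.host_addr    = HostAddr;   /* the VA used at map time */
    Unmap.dev_bus_addr = 0;          /* zero: no device mapping to undo */
    Unmap.handle       = Handle;
    HYPERVISOR_grant_table_op (GNTTABOP_unmap_grant_ref, &Unmap, 1);
    /* check Unmap.status == GNTST_okay */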
> +
> +/*
> + * GNTTABOP_setup_table: Set up a grant table for <dom> comprising at least
> + * <nr_frames> pages. The frame addresses are written to the <frame_list>.
> + * Only <nr_frames> addresses are written, even if the table is larger.
> + * NOTES:
> + *  1. <dom> may be specified as DOMID_SELF.
> + *  2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF.
> + *  3. Xen may not support more than a single grant-table page per domain.
> + */
> +struct gnttab_setup_table {
> +    /* IN parameters. */
> +    domid_t  dom;
> +    UINT32 nr_frames;
> +    /* OUT parameters. */
> +    INT16  status;              /* => enum grant_status */
> +#if __XEN_INTERFACE_VERSION__ < 0x00040300
> +    XEN_GUEST_HANDLE(ulong) frame_list;
> +#else
> +    XEN_GUEST_HANDLE(xen_pfn_t) frame_list;
> +#endif
> +};
> +typedef struct gnttab_setup_table gnttab_setup_table_t;
> +DEFINE_XEN_GUEST_HANDLE(gnttab_setup_table_t);
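
The DOMID_SELF case is the one a guest firmware would use; roughly (the
set_xen_guest_handle() helper comes from the xen.h headers in this series,
the hypercall wrapper is again an assumption):

    struct gnttab_setup_table Setup;
    xen_pfn_t                 Frames[1];

    Setup.dom       = DOMID_SELF;
    Setup.nr_frames = 1;
    set_xen_guest_handle (Setup.frame_list, Frames);
    if (HYPERVISOR_grant_table_op (GNTTABOP_setup_table, &Setup, 1) != 0 ||
        Setup.status != GNTST_okay) {
      /* error */
    }
    /* Frames[0] now holds the machine frame of the grant-table page */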
> +
> +/*
> + * GNTTABOP_dump_table: Dump the contents of the grant table to the
> + * xen console. Debugging use only.
> + */
> +struct gnttab_dump_table {
> +    /* IN parameters. */
> +    domid_t dom;
> +    /* OUT parameters. */
> +    INT16 status;               /* => enum grant_status */
> +};
> +typedef struct gnttab_dump_table gnttab_dump_table_t;
> +DEFINE_XEN_GUEST_HANDLE(gnttab_dump_table_t);
> +
> +/*
> + * GNTTABOP_transfer_grant_ref: Transfer <frame> to a foreign domain. The
> + * foreign domain has previously registered its interest in the transfer via
> + * <domid, ref>.
> + *
> + * Note that, even if the transfer fails, the specified page no longer belongs
> + * to the calling domain *unless* the error is GNTST_bad_page.
> + */
> +struct gnttab_transfer {
> +    /* IN parameters. */
> +    xen_pfn_t     mfn;
> +    domid_t       domid;
> +    grant_ref_t   ref;
> +    /* OUT parameters. */
> +    INT16       status;
> +};
> +typedef struct gnttab_transfer gnttab_transfer_t;
> +DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
> +
> +
> +/*
> + * GNTTABOP_copy: Hypervisor-based copy.
> + * Source and destination can be either MFNs or, for foreign domains,
> + * grant references. The foreign domain has to grant read/write access
> + * in its grant table.
> + *
> + * The flags specify whether the source and destination are MFNs or
> + * grant references.
> + *
> + * Note that this can also be used to copy data between two domains
> + * via a third party if the source and destination domains had previously
> + * granted appropriate access to their pages to the third party.
> + *
> + * source_offset specifies an offset in the source frame, dest_offset
> + * the offset in the target frame, and len specifies the number of
> + * bytes to be copied.
> + */
> +
> +#define _GNTCOPY_source_gref      (0)
> +#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
> +#define _GNTCOPY_dest_gref        (1)
> +#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
> +
> +struct gnttab_copy {
> +    /* IN parameters. */
> +    struct {
> +        union {
> +            grant_ref_t ref;
> +            xen_pfn_t   gmfn;
> +        } u;
> +        domid_t  domid;
> +        UINT16 offset;
> +    } source, dest;
> +    UINT16      len;
> +    UINT16      flags;          /* GNTCOPY_* */
> +    /* OUT parameters. */
> +    INT16       status;
> +};
> +typedef struct gnttab_copy  gnttab_copy_t;
> +DEFINE_XEN_GUEST_HANDLE(gnttab_copy_t);
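
Concretely, copying one sector out of a foreign grant into a local frame
would look something like this sketch (all names and the wrapper assumed):

    struct gnttab_copy Copy;

    ZeroMem (&Copy, sizeof (Copy));
    Copy.source.u.ref = RemoteRef;           /* grant offered by RemoteDomid */
    Copy.source.domid = RemoteDomid;
    Copy.dest.u.gmfn  = LocalGmfn;           /* plain local frame */
    Copy.dest.domid   = DOMID_SELF;
    Copy.len          = 512;
    Copy.flags        = GNTCOPY_source_gref; /* only the source is a gref */
    HYPERVISOR_grant_table_op (GNTTABOP_copy, &Copy, 1);
    /* check Copy.status == GNTST_okay */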
> +
> +/*
> + * GNTTABOP_query_size: Query the current and maximum sizes of the shared
> + * grant table.
> + * NOTES:
> + *  1. <dom> may be specified as DOMID_SELF.
> + *  2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF.
> + */
> +struct gnttab_query_size {
> +    /* IN parameters. */
> +    domid_t  dom;
> +    /* OUT parameters. */
> +    UINT32 nr_frames;
> +    UINT32 max_nr_frames;
> +    INT16  status;              /* => enum grant_status */
> +};
> +typedef struct gnttab_query_size gnttab_query_size_t;
> +DEFINE_XEN_GUEST_HANDLE(gnttab_query_size_t);
> +
> +/*
> + * GNTTABOP_unmap_and_replace: Destroy one or more grant-reference mappings
> + * tracked by <handle> but atomically replace the page table entry with one
> + * pointing to the machine address under <new_addr>.  <new_addr> will be
> + * redirected to the null entry.
> + * NOTES:
> + *  1. The call may fail in an undefined manner if either mapping is not
> + *     tracked by <handle>.
> + *  2. After executing a batch of unmaps, it is guaranteed that no stale
> + *     mappings will remain in the device or host TLBs.
> + */
> +struct gnttab_unmap_and_replace {
> +    /* IN parameters. */
> +    UINT64 host_addr;
> +    UINT64 new_addr;
> +    grant_handle_t handle;
> +    /* OUT parameters. */
> +    INT16  status;              /* => enum grant_status */
> +};
> +typedef struct gnttab_unmap_and_replace gnttab_unmap_and_replace_t;
> +DEFINE_XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t);
> +
> +#if __XEN_INTERFACE_VERSION__ >= 0x0003020a
> +/*
> + * GNTTABOP_set_version: Request a particular version of the grant
> + * table shared table structure.  This operation can only be performed
> + * once in any given domain.  It must be performed before any grants
> + * are activated; otherwise, the domain will be stuck with version 1.
> + * The only defined versions are 1 and 2.
> + */
> +struct gnttab_set_version {
> +    /* IN/OUT parameters */
> +    UINT32 version;
> +};
> +typedef struct gnttab_set_version gnttab_set_version_t;
> +DEFINE_XEN_GUEST_HANDLE(gnttab_set_version_t);
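
Since version is IN/OUT, a refusal shows up as the old value coming back;
sketch:

    struct gnttab_set_version SetVersion;

    SetVersion.version = 2;
    if (HYPERVISOR_grant_table_op (GNTTABOP_set_version, &SetVersion, 1) != 0 ||
        SetVersion.version != 2) {
      /* stuck on version 1: keep using the grant_entry_v1 layout */
    }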
> +
> +
> +/*
> + * GNTTABOP_get_status_frames: Get the list of frames used to store grant
> + * status for <dom>. In grant format version 2, the status is separated
> + * from the other shared grant fields to allow more efficient synchronization
> + * using barriers instead of atomic cmpexch operations.
> + * <nr_frames> specifies the size of vector <frame_list>.
> + * The frame addresses are returned in the <frame_list>.
> + * Only <nr_frames> addresses are returned, even if the table is larger.
> + * NOTES:
> + *  1. <dom> may be specified as DOMID_SELF.
> + *  2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF.
> + */
> +struct gnttab_get_status_frames {
> +    /* IN parameters. */
> +    UINT32 nr_frames;
> +    domid_t  dom;
> +    /* OUT parameters. */
> +    INT16  status;              /* => enum grant_status */
> +    XEN_GUEST_HANDLE(UINT64) frame_list;
> +};
> +typedef struct gnttab_get_status_frames gnttab_get_status_frames_t;
> +DEFINE_XEN_GUEST_HANDLE(gnttab_get_status_frames_t);
> +
> +/*
> + * GNTTABOP_get_version: Get the grant table version which is in
> + * effect for domain <dom>.
> + */
> +struct gnttab_get_version {
> +    /* IN parameters */
> +    domid_t dom;
> +    UINT16 pad;
> +    /* OUT parameters */
> +    UINT32 version;
> +};
> +typedef struct gnttab_get_version gnttab_get_version_t;
> +DEFINE_XEN_GUEST_HANDLE(gnttab_get_version_t);
> +
> +/*
> + * GNTTABOP_swap_grant_ref: Swap the contents of two grant entries.
> + */
> +struct gnttab_swap_grant_ref {
> +    /* IN parameters */
> +    grant_ref_t ref_a;
> +    grant_ref_t ref_b;
> +    /* OUT parameters */
> +    INT16 status;             /* => enum grant_status */
> +};
> +typedef struct gnttab_swap_grant_ref gnttab_swap_grant_ref_t;
> +DEFINE_XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t);
> +
> +#endif /* __XEN_INTERFACE_VERSION__ */
> +
> +/*
> + * Bitfield values for gnttab_map_grant_ref.flags.
> + */
> + /* Map the grant entry for access by I/O devices. */
> +#define _GNTMAP_device_map      (0)
> +#define GNTMAP_device_map       (1<<_GNTMAP_device_map)
> + /* Map the grant entry for access by host CPUs. */
> +#define _GNTMAP_host_map        (1)
> +#define GNTMAP_host_map         (1<<_GNTMAP_host_map)
> + /* Accesses to the granted frame will be restricted to read-only access. */
> +#define _GNTMAP_readonly        (2)
> +#define GNTMAP_readonly         (1<<_GNTMAP_readonly)
> + /*
> +  * GNTMAP_host_map subflag:
> +  *  0 => The host mapping is usable only by the guest OS.
> +  *  1 => The host mapping is usable by guest OS + current application.
> +  */
> +#define _GNTMAP_application_map (3)
> +#define GNTMAP_application_map  (1<<_GNTMAP_application_map)
> +
> + /*
> +  * GNTMAP_contains_pte subflag:
> +  *  0 => This map request contains a host virtual address.
> + *  1 => This map request contains the machine address of the PTE to update.
> +  */
> +#define _GNTMAP_contains_pte    (4)
> +#define GNTMAP_contains_pte     (1<<_GNTMAP_contains_pte)
> +
> +#define _GNTMAP_can_fail        (5)
> +#define GNTMAP_can_fail         (1<<_GNTMAP_can_fail)
> +
> +/*
> + * Bits to be placed in guest kernel available PTE bits (architecture
> + * dependent; only supported when XENFEAT_gnttab_map_avail_bits is set).
> + */
> +#define _GNTMAP_guest_avail0    (16)
> +#define GNTMAP_guest_avail_mask ((UINT32)~0 << _GNTMAP_guest_avail0)
> +
> +/*
> + * Values for error status returns. All errors are -ve.
> + */
> +/* ` enum grant_status { */
> +#define GNTST_okay             (0)  /* Normal return.                        */
> +#define GNTST_general_error    (-1) /* General undefined error.              */
> +#define GNTST_bad_domain       (-2) /* Unrecognised domain id.               */
> +#define GNTST_bad_gntref       (-3) /* Unrecognised or inappropriate gntref. */
> +#define GNTST_bad_handle       (-4) /* Unrecognised or inappropriate handle. */
> +#define GNTST_bad_virt_addr    (-5) /* Inappropriate virtual address to map. */
> +#define GNTST_bad_dev_addr     (-6) /* Inappropriate device address to unmap.*/
> +#define GNTST_no_device_space  (-7) /* Out of space in I/O MMU.              */
> +#define GNTST_permission_denied (-8) /* Not enough privilege for operation.  */
> +#define GNTST_bad_page         (-9) /* Specified page was invalid for op.    */
> +#define GNTST_bad_copy_arg    (-10) /* copy arguments cross page boundary.   */
> +#define GNTST_address_too_big (-11) /* transfer page address too large.      */
> +#define GNTST_eagain          (-12) /* Operation not done; try again.        */
> +/* ` } */
> +
> +#define GNTTABOP_error_msgs {                   \
> +    "okay",                                     \
> +    "undefined error",                          \
> +    "unrecognised domain id",                   \
> +    "invalid grant reference",                  \
> +    "invalid mapping handle",                   \
> +    "invalid virtual address",                  \
> +    "invalid device address",                   \
> +    "no spare translation slot in the I/O MMU", \
> +    "permission denied",                        \
> +    "bad page",                                 \
> +    "copy arguments cross page boundary",       \
> +    "page address size too large",              \
> +    "operation not done; try again"             \
> +}
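
Since the GNTST_* values run 0..-12 and this table is index-aligned with
them, a lookup helper is a single negation (sketch; the function name is
mine):

    STATIC CONST CHAR8 *mGnttabErrorMsgs[] = GNTTABOP_error_msgs;

    CONST CHAR8 *
    GrantStatusToString (
      INT16  Status    /* a GNTST_* value: 0 or negative */
      )
    {
      UINTN Index = (UINTN)(-Status);

      if (Status > 0 ||
          Index >= sizeof mGnttabErrorMsgs / sizeof mGnttabErrorMsgs[0]) {
        return "unknown grant status";
      }
      return mGnttabErrorMsgs[Index];
    }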
> +
> +#endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/hvm/hvm_op.h b/OvmfPkg/Include/IndustryStandard/Xen/hvm/hvm_op.h
> new file mode 100644
> index 0000000..6bfe115
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/hvm/hvm_op.h
> @@ -0,0 +1,275 @@
> +/*
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + */
> +
> +#ifndef __XEN_PUBLIC_HVM_HVM_OP_H__
> +#define __XEN_PUBLIC_HVM_HVM_OP_H__
> +
> +#include "../xen.h"
> +#include "../trace.h"
> +
> +/* Get/set subcommands: extra argument == pointer to xen_hvm_param struct. */
> +#define HVMOP_set_param           0
> +#define HVMOP_get_param           1
> +struct xen_hvm_param {
> +    domid_t  domid;    /* IN */
> +    UINT32 index;    /* IN */
> +    UINT64 value;    /* IN/OUT */
> +};
> +typedef struct xen_hvm_param xen_hvm_param_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_param_t);
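
This pair is presumably what the XenBus code later in the series keys off;
a sketch, assuming a HYPERVISOR_hvm_op(cmd, arg) wrapper and
HVM_PARAM_STORE_PFN from params.h below:

    struct xen_hvm_param Param;

    Param.domid = DOMID_SELF;
    Param.index = HVM_PARAM_STORE_PFN;
    if (HYPERVISOR_hvm_op (HVMOP_get_param, &Param) == 0) {
      /* Param.value is the guest frame holding the XenStore ring */
    }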
> +
> +/* Set the logical level of one of a domain's PCI INTx wires. */
> +#define HVMOP_set_pci_intx_level  2
> +struct xen_hvm_set_pci_intx_level {
> +    /* Domain to be updated. */
> +    domid_t  domid;
> +    /* PCI INTx identification in PCI topology (domain:bus:device:intx). */
> +    UINT8  domain, bus, device, intx;
> +    /* Assertion level (0 = unasserted, 1 = asserted). */
> +    UINT8  level;
> +};
> +typedef struct xen_hvm_set_pci_intx_level xen_hvm_set_pci_intx_level_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_pci_intx_level_t);
> +
> +/* Set the logical level of one of a domain's ISA IRQ wires. */
> +#define HVMOP_set_isa_irq_level   3
> +struct xen_hvm_set_isa_irq_level {
> +    /* Domain to be updated. */
> +    domid_t  domid;
> +    /* ISA device identification, by ISA IRQ (0-15). */
> +    UINT8  isa_irq;
> +    /* Assertion level (0 = unasserted, 1 = asserted). */
> +    UINT8  level;
> +};
> +typedef struct xen_hvm_set_isa_irq_level xen_hvm_set_isa_irq_level_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_isa_irq_level_t);
> +
> +#define HVMOP_set_pci_link_route  4
> +struct xen_hvm_set_pci_link_route {
> +    /* Domain to be updated. */
> +    domid_t  domid;
> +    /* PCI link identifier (0-3). */
> +    UINT8  link;
> +    /* ISA IRQ (1-15), or 0 (disable link). */
> +    UINT8  isa_irq;
> +};
> +typedef struct xen_hvm_set_pci_link_route xen_hvm_set_pci_link_route_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_pci_link_route_t);
> +
> +/* Flushes all VCPU TLBs: @arg must be NULL. */
> +#define HVMOP_flush_tlbs          5
> +
> +typedef enum {
> +    HVMMEM_ram_rw,             /* Normal read/write guest RAM */
> +    HVMMEM_ram_ro,             /* Read-only; writes are discarded */
> +    HVMMEM_mmio_dm,            /* Reads and writes go to the device model */
> +} hvmmem_type_t;
> +
> +/* Following tools-only interfaces may change in future. */
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +
> +/* Track dirty VRAM. */
> +#define HVMOP_track_dirty_vram    6
> +struct xen_hvm_track_dirty_vram {
> +    /* Domain to be tracked. */
> +    domid_t  domid;
> +    /* First pfn to track. */
> +    uint64_aligned_t first_pfn;
> +    /* Number of pages to track. */
> +    uint64_aligned_t nr;
> +    /* OUT variable. */
> +    /* Dirty bitmap buffer. */
> +    XEN_GUEST_HANDLE_64(uint8) dirty_bitmap;
> +};
> +typedef struct xen_hvm_track_dirty_vram xen_hvm_track_dirty_vram_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_track_dirty_vram_t);
> +
> +/* Notify that some pages got modified by the Device Model. */
> +#define HVMOP_modified_memory    7
> +struct xen_hvm_modified_memory {
> +    /* Domain to be updated. */
> +    domid_t  domid;
> +    /* First pfn. */
> +    uint64_aligned_t first_pfn;
> +    /* Number of pages. */
> +    uint64_aligned_t nr;
> +};
> +typedef struct xen_hvm_modified_memory xen_hvm_modified_memory_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_modified_memory_t);
> +
> +#define HVMOP_set_mem_type    8
> +/* Notify that a region of memory is to be treated in a specific way. */
> +struct xen_hvm_set_mem_type {
> +    /* Domain to be updated. */
> +    domid_t domid;
> +    /* Memory type */
> +    UINT16 hvmmem_type;
> +    /* Number of pages. */
> +    UINT32 nr;
> +    /* First pfn. */
> +    uint64_aligned_t first_pfn;
> +};
> +typedef struct xen_hvm_set_mem_type xen_hvm_set_mem_type_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_mem_type_t);
> +
> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
> +
> +/* Hint from PV drivers for pagetable destruction. */
> +#define HVMOP_pagetable_dying        9
> +struct xen_hvm_pagetable_dying {
> +    /* Domain with a pagetable about to be destroyed. */
> +    domid_t  domid;
> +    UINT16 pad[3]; /* align next field on 8-byte boundary */
> +    /* guest physical address of the toplevel pagetable dying */
> +    UINT64 gpa;
> +};
> +typedef struct xen_hvm_pagetable_dying xen_hvm_pagetable_dying_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_pagetable_dying_t);
> +
> +/* Get the current Xen time, in nanoseconds since system boot. */
> +#define HVMOP_get_time              10
> +struct xen_hvm_get_time {
> +    UINT64 now;      /* OUT */
> +};
> +typedef struct xen_hvm_get_time xen_hvm_get_time_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_time_t);
> +
> +#define HVMOP_xentrace              11
> +struct xen_hvm_xentrace {
> +    UINT16 event, extra_bytes;
> +    UINT8 extra[TRACE_EXTRA_MAX * sizeof(UINT32)];
> +};
> +typedef struct xen_hvm_xentrace xen_hvm_xentrace_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_xentrace_t);
> +
> +/* Following tools-only interfaces may change in future. */
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +
> +#define HVMOP_set_mem_access        12
> +typedef enum {
> +    HVMMEM_access_n,
> +    HVMMEM_access_r,
> +    HVMMEM_access_w,
> +    HVMMEM_access_rw,
> +    HVMMEM_access_x,
> +    HVMMEM_access_rx,
> +    HVMMEM_access_wx,
> +    HVMMEM_access_rwx,
> +    HVMMEM_access_rx2rw,       /* Page starts off as r-x, but automatically
> +                                * changes to r-w on a write */
> +    HVMMEM_access_n2rwx,       /* Log access: starts off as n, automatically 
> +                                * goes to rwx, generating an event without
> +                                * pausing the vcpu */
> +    HVMMEM_access_default      /* Take the domain default */
> +} hvmmem_access_t;
> +/* Notify that a region of memory is to have specific access types */
> +struct xen_hvm_set_mem_access {
> +    /* Domain to be updated. */
> +    domid_t domid;
> +    /* Memory type */
> +    UINT16 hvmmem_access; /* hvm_access_t */
> +    /* Number of pages, ignored on setting default access */
> +    UINT32 nr;
> +    /* First pfn, or ~0ull to set the default access for new pages */
> +    uint64_aligned_t first_pfn;
> +};
> +typedef struct xen_hvm_set_mem_access xen_hvm_set_mem_access_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_mem_access_t);
> +
> +#define HVMOP_get_mem_access        13
> +/* Get the specific access type for that region of memory */
> +struct xen_hvm_get_mem_access {
> +    /* Domain to be queried. */
> +    domid_t domid;
> +    /* Memory type: OUT */
> +    UINT16 hvmmem_access; /* hvm_access_t */
> +    /* pfn, or ~0ull for default access for new pages.  IN */
> +    uint64_aligned_t pfn;
> +};
> +typedef struct xen_hvm_get_mem_access xen_hvm_get_mem_access_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_mem_access_t);
> +
> +#define HVMOP_inject_trap            14
> +/* Inject a trap into a VCPU, which will get taken up on the next
> + * scheduling of it. Note that the caller should know enough of the
> + * state of the CPU before injecting, to know what the effect of
> + * injecting the trap will be.
> + */
> +struct xen_hvm_inject_trap {
> +    /* Domain to be queried. */
> +    domid_t domid;
> +    /* VCPU */
> +    UINT32 vcpuid;
> +    /* Vector number */
> +    UINT32 vector;
> +    /* Trap type (HVMOP_TRAP_*) */
> +    UINT32 type;
> +/* NB. This enumeration precisely matches hvm.h:X86_EVENTTYPE_* */
> +# define HVMOP_TRAP_ext_int    0 /* external interrupt */
> +# define HVMOP_TRAP_nmi        2 /* nmi */
> +# define HVMOP_TRAP_hw_exc     3 /* hardware exception */
> +# define HVMOP_TRAP_sw_int     4 /* software interrupt (CD nn) */
> +# define HVMOP_TRAP_pri_sw_exc 5 /* ICEBP (F1) */
> +# define HVMOP_TRAP_sw_exc     6 /* INT3 (CC), INTO (CE) */
> +    /* Error code, or ~0u to skip */
> +    UINT32 error_code;
> +    /* Instruction length */
> +    UINT32 insn_len;
> +    /* CR2 for page faults */
> +    uint64_aligned_t cr2;
> +};
> +typedef struct xen_hvm_inject_trap xen_hvm_inject_trap_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_trap_t);
> +
> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
> +
> +#define HVMOP_get_mem_type    15
> +/* Return hvmmem_type_t for the specified pfn. */
> +struct xen_hvm_get_mem_type {
> +    /* Domain to be queried. */
> +    domid_t domid;
> +    /* OUT variable. */
> +    UINT16 mem_type;
> +    UINT16 pad[2]; /* align next field on 8-byte boundary */
> +    /* IN variable. */
> +    UINT64 pfn;
> +};
> +typedef struct xen_hvm_get_mem_type xen_hvm_get_mem_type_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_mem_type_t);
> +
> +/* Following tools-only interfaces may change in future. */
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +
> +/* MSI injection for emulated devices */
> +#define HVMOP_inject_msi         16
> +struct xen_hvm_inject_msi {
> +    /* Domain to be injected */
> +    domid_t   domid;
> +    /* Data -- lower 32 bits */
> +    UINT32  data;
> +    /* Address (0xfeexxxxx) */
> +    UINT64  addr;
> +};
> +typedef struct xen_hvm_inject_msi xen_hvm_inject_msi_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_msi_t);
> +
> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
> +
> +#endif /* __XEN_PUBLIC_HVM_HVM_OP_H__ */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/hvm/params.h b/OvmfPkg/Include/IndustryStandard/Xen/hvm/params.h
> new file mode 100644
> index 0000000..517a184
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/hvm/params.h
> @@ -0,0 +1,150 @@
> +/*
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + */
> +
> +#ifndef __XEN_PUBLIC_HVM_PARAMS_H__
> +#define __XEN_PUBLIC_HVM_PARAMS_H__
> +
> +#include "hvm_op.h"
> +
> +/*
> + * Parameter space for HVMOP_{set,get}_param.
> + */
> +
> +/*
> + * How should CPU0 event-channel notifications be delivered?
> + * val[63:56] == 0: val[55:0] is a delivery GSI (Global System Interrupt).
> + * val[63:56] == 1: val[55:0] is a delivery PCI INTx line, as follows:
> + *                  Domain = val[47:32], Bus  = val[31:16],
> + *                  DevFn  = val[15: 8], IntX = val[ 1: 0]
> + * val[63:56] == 2: val[7:0] is a vector number, check for
> + *                  XENFEAT_hvm_callback_vector to know if this delivery
> + *                  method is available.
> + * If val == 0 then CPU0 event-channel notifications are not delivered.
> + */
> +#define HVM_PARAM_CALLBACK_IRQ 0
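
To make the val[] encoding concrete: requesting delivery method 2 with, say,
vector 0x70 (my choice; XENFEAT_hvm_callback_vector should be checked first,
and the hvm_op wrapper is assumed) would be:

    struct xen_hvm_param Param;

    Param.domid = DOMID_SELF;
    Param.index = HVM_PARAM_CALLBACK_IRQ;
    Param.value = ((UINT64)2 << 56) | 0x70;  /* val[63:56]==2, val[7:0]=vector */
    HYPERVISOR_hvm_op (HVMOP_set_param, &Param);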
> +
> +/*
> + * These are not used by Xen. They are here for convenience of HVM-guest
> + * xenbus implementations.
> + */
> +#define HVM_PARAM_STORE_PFN    1
> +#define HVM_PARAM_STORE_EVTCHN 2
> +
> +#define HVM_PARAM_PAE_ENABLED  4
> +
> +#define HVM_PARAM_IOREQ_PFN    5
> +
> +#define HVM_PARAM_BUFIOREQ_PFN 6
> +#define HVM_PARAM_BUFIOREQ_EVTCHN 26
> +
> +#if defined(__i386__) || defined(__x86_64__)
> +
> +/* Expose Viridian interfaces to this HVM guest? */
> +#define HVM_PARAM_VIRIDIAN     9
> +
> +#endif
> +
> +/*
> + * Set mode for virtual timers (currently x86 only):
> + *  delay_for_missed_ticks (default):
> + *   Do not advance a vcpu's time beyond the correct delivery time for
> + *   interrupts that have been missed due to preemption. Deliver missed
> + *   interrupts when the vcpu is rescheduled and advance the vcpu's virtual
> + *   time stepwise for each one.
> + *  no_delay_for_missed_ticks:
> + *   As above, missed interrupts are delivered, but guest time always tracks
> + *   wallclock (i.e., real) time while doing so.
> + *  no_missed_ticks_pending:
> + *   No missed interrupts are held pending. Instead, to ensure ticks are
> + *   delivered at some non-zero rate, if we detect missed ticks then the
> + *   internal tick alarm is not disabled if the VCPU is preempted during the
> + *   next tick period.
> + *  one_missed_tick_pending:
> + *   Missed interrupts are collapsed together and delivered as one 'late tick'.
> + *   Guest time always tracks wallclock (i.e., real) time.
> + */
> +#define HVM_PARAM_TIMER_MODE   10
> +#define HVMPTM_delay_for_missed_ticks    0
> +#define HVMPTM_no_delay_for_missed_ticks 1
> +#define HVMPTM_no_missed_ticks_pending   2
> +#define HVMPTM_one_missed_tick_pending   3
> +
> +/* Boolean: Enable virtual HPET (high-precision event timer)? (x86-only) */
> +#define HVM_PARAM_HPET_ENABLED 11
> +
> +/* Identity-map page directory used by Intel EPT when CR0.PG=0. */
> +#define HVM_PARAM_IDENT_PT     12
> +
> +/* Device Model domain, defaults to 0. */
> +#define HVM_PARAM_DM_DOMAIN    13
> +
> +/* ACPI S state: currently support S0 and S3 on x86. */
> +#define HVM_PARAM_ACPI_S_STATE 14
> +
> +/* TSS used on Intel when CR0.PE=0. */
> +#define HVM_PARAM_VM86_TSS     15
> +
> +/* Boolean: Enable aligning all periodic vpts to reduce interrupts */
> +#define HVM_PARAM_VPT_ALIGN    16
> +
> +/* Console debug shared memory ring and event channel */
> +#define HVM_PARAM_CONSOLE_PFN    17
> +#define HVM_PARAM_CONSOLE_EVTCHN 18
> +
> +/*
> + * Select location of ACPI PM1a and TMR control blocks. Currently two locations
> + * are supported, specified by version 0 or 1 in this parameter:
> + *   - 0: default, use the old addresses
> + *        PM1A_EVT == 0x1f40; PM1A_CNT == 0x1f44; PM_TMR == 0x1f48
> + *   - 1: use the new default qemu addresses
> + *        PM1A_EVT == 0xb000; PM1A_CNT == 0xb004; PM_TMR == 0xb008
> + * You can find these address definitions in <hvm/ioreq.h>
> + */
> +#define HVM_PARAM_ACPI_IOPORTS_LOCATION 19
> +
> +/* Enable blocking memory events, async or sync (pause vcpu until response) 
> + * onchangeonly indicates messages only on a change of value */
> +#define HVM_PARAM_MEMORY_EVENT_CR0          20
> +#define HVM_PARAM_MEMORY_EVENT_CR3          21
> +#define HVM_PARAM_MEMORY_EVENT_CR4          22
> +#define HVM_PARAM_MEMORY_EVENT_INT3         23
> +#define HVM_PARAM_MEMORY_EVENT_SINGLE_STEP  25
> +#define HVM_PARAM_MEMORY_EVENT_MSR          30
> +
> +#define HVMPME_MODE_MASK       (3 << 0)
> +#define HVMPME_mode_disabled   0
> +#define HVMPME_mode_async      1
> +#define HVMPME_mode_sync       2
> +#define HVMPME_onchangeonly    (1 << 2)
> +
> +/* Boolean: Enable nestedhvm (hvm only) */
> +#define HVM_PARAM_NESTEDHVM    24
> +
> +/* Params for the mem event rings */
> +#define HVM_PARAM_PAGING_RING_PFN   27
> +#define HVM_PARAM_ACCESS_RING_PFN   28
> +#define HVM_PARAM_SHARING_RING_PFN  29
> +
> +/* SHUTDOWN_* action in case of a triple fault */
> +#define HVM_PARAM_TRIPLE_FAULT_REASON 31
> +
> +#define HVM_NR_PARAMS          32
> +
> +#endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/io/blkif.h b/OvmfPkg/Include/IndustryStandard/Xen/io/blkif.h
> new file mode 100644
> index 0000000..2f0bb8b
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/io/blkif.h
> @@ -0,0 +1,608 @@
> +/******************************************************************************
> + * blkif.h
> + *
> + * Unified block-device I/O interface for Xen guest OSes.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (c) 2003-2004, Keir Fraser
> + * Copyright (c) 2012, Spectra Logic Corporation
> + */
> +
> +#ifndef __XEN_PUBLIC_IO_BLKIF_H__
> +#define __XEN_PUBLIC_IO_BLKIF_H__
> +
> +#include "ring.h"
> +#include "../grant_table.h"
> +
> +/*
> + * Front->back notifications: When enqueuing a new request, sending a
> + * notification can be made conditional on req_event (i.e., the generic
> + * hold-off mechanism provided by the ring macros). Backends must set
> + * req_event appropriately (e.g., using RING_FINAL_CHECK_FOR_REQUESTS()).
> + *
> + * Back->front notifications: When enqueuing a new response, sending a
> + * notification can be made conditional on rsp_event (i.e., the generic
> + * hold-off mechanism provided by the ring macros). Frontends must set
> + * rsp_event appropriately (e.g., using RING_FINAL_CHECK_FOR_RESPONSES()).
> + */
> +
> +#ifndef blkif_vdev_t
> +#define blkif_vdev_t   UINT16
> +#endif
> +#define blkif_sector_t UINT64
> +
> +/*
> + * Feature and Parameter Negotiation
> + * =================================
> + * The two halves of a Xen block driver utilize nodes within the XenStore to
> + * communicate capabilities and to negotiate operating parameters.  This
> + * section enumerates these nodes which reside in the respective front and
> + * backend portions of the XenStore, following the XenBus convention.
> + *
> + * All data in the XenStore is stored as strings.  Nodes specifying numeric
> + * values are encoded in decimal.  Integer value ranges listed below are
> + * expressed as fixed sized integer types capable of storing the conversion
> + * of a properly formatted node string, without loss of information.
> + *
> + * Any specified default value is in effect if the corresponding XenBus node
> + * is not present in the XenStore.
> + *
> + * XenStore nodes in sections marked "PRIVATE" are solely for use by the
> + * driver side whose XenBus tree contains them.
> + *
> + * XenStore nodes marked "DEPRECATED" in their notes section should only be
> + * used to provide interoperability with legacy implementations.
> + *
> + * See the XenBus state transition diagram below for details on when XenBus
> + * nodes must be published and when they can be queried.
> + *
> + *****************************************************************************
> + *                            Backend XenBus Nodes
> + *****************************************************************************
> + *
> + *------------------ Backend Device Identification (PRIVATE) ------------------
> + *
> + * mode
> + *      Values:         "r" (read only), "w" (writable)
> + *
> + *      The read or write access permissions to the backing store to be
> + *      granted to the frontend.
> + *
> + * params
> + *      Values:         string
> + *
> + *      A free-form string providing sufficient information for the
> + *      backend driver to open the backing device.  (e.g. the path to the
> + *      file or block device representing the backing store.)
> + *
> + * type
> + *      Values:         "file", "phy", "tap"
> + *
> + *      The type of the backing device/object.
> + *
> + *--------------------------------- Features ---------------------------------
> + *
> + * feature-barrier
> + *      Values:         0/1 (boolean)
> + *      Default Value:  0
> + *
> + *      A value of "1" indicates that the backend can process requests
> + *      containing the BLKIF_OP_WRITE_BARRIER request opcode.  Requests
> + *      of this type may still be returned at any time with the
> + *      BLKIF_RSP_EOPNOTSUPP result code.
> + *
> + * feature-flush-cache
> + *      Values:         0/1 (boolean)
> + *      Default Value:  0
> + *
> + *      A value of "1" indicates that the backend can process requests
> + *      containing the BLKIF_OP_FLUSH_DISKCACHE request opcode.  Requests
> + *      of this type may still be returned at any time with the
> + *      BLKIF_RSP_EOPNOTSUPP result code.
> + *
> + * feature-discard
> + *      Values:         0/1 (boolean)
> + *      Default Value:  0
> + *
> + *      A value of "1" indicates that the backend can process requests
> + *      containing the BLKIF_OP_DISCARD request opcode.  Requests
> + *      of this type may still be returned at any time with the
> + *      BLKIF_RSP_EOPNOTSUPP result code.
> + *
> + * feature-persistent
> + *      Values:         0/1 (boolean)
> + *      Default Value:  0
> + *      Notes: 7
> + *
> + *      A value of "1" indicates that the backend can keep the grants used
> + *      by the frontend driver mapped, so the same set of grants should be
> + *      used in all transactions. The maximum number of grants the backend
> + *      can map persistently depends on the implementation, but ideally it
> + *      should be RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST. Using this
> + *      feature the backend doesn't need to unmap each grant, preventing
> + *      costly TLB flushes. The backend driver should only map grants
> + *      persistently if the frontend supports it. If a backend driver chooses
> + *      to use the persistent protocol when the frontend doesn't support it,
> + *      it will probably hit the maximum number of persistently mapped grants
> + *      (due to the fact that the frontend won't be reusing the same grants),
> + *      and fall back to non-persistent mode. Backend implementations may
> + *      shrink or expand the number of persistently mapped grants without
> + *      notifying the frontend depending on memory constraints (this might
> + *      cause a performance degradation).
> + *
> + *      If a backend driver wants to limit the maximum number of persistently
> + *      mapped grants to a value less than RING_SIZE *
> + *      BLKIF_MAX_SEGMENTS_PER_REQUEST, an LRU strategy should be used to
> + *      discard the grants that are less commonly used. Using an LRU in the
> + *      backend driver paired with a LIFO queue in the frontend will
> + *      allow us to have better performance in this scenario.
> + *
> + *----------------------- Request Transport Parameters ------------------------
> + *
> + * max-ring-page-order
> + *      Values:         <UINT32>
> + *      Default Value:  0
> + *      Notes:          1, 3
> + *
> + *      The maximum supported size of the request ring buffer in units of
> + *      lb(machine pages). (e.g. 0 == 1 page, 1 == 2 pages, 2 == 4 pages,
> + *      etc.).
> + *
> + * max-ring-pages
> + *      Values:         <UINT32>
> + *      Default Value:  1
> + *      Notes:          DEPRECATED, 2, 3
> + *
> + *      The maximum supported size of the request ring buffer in units of
> + *      machine pages.  The value must be a power of 2.
> + *
> + *------------------------- Backend Device Properties -------------------------
> + *
> + * discard-alignment
> + *      Values:         <UINT32>
> + *      Default Value:  0
> + *      Notes:          4, 5
> + *
> + *      The offset, in bytes from the beginning of the virtual block device,
> + *      to the first, addressable, discard extent on the underlying device.
> + *
> + * discard-granularity
> + *      Values:         <UINT32>
> + *      Default Value:  <"sector-size">
> + *      Notes:          4
> + *
> + *      The size, in bytes, of the individually addressable discard extents
> + *      of the underlying device.
> + *
> + * discard-secure
> + *      Values:         0/1 (boolean)
> + *      Default Value:  0
> + *      Notes:          10
> + *
> + *      A value of "1" indicates that the backend can process BLKIF_OP_DISCARD
> + *      requests with the BLKIF_DISCARD_SECURE flag set.
> + *
> + * info
> + *      Values:         <UINT32> (bitmap)
> + *
> + *      A collection of bit flags describing attributes of the backing
> + *      device.  The VDISK_* macros define the meaning of each bit
> + *      location.
> + *
> + * sector-size
> + *      Values:         <UINT32>
> + *
> + *      The logical sector size, in bytes, of the backend device.
> + *
> + * physical-sector-size
> + *      Values:         <UINT32>
> + *
> + *      The physical sector size, in bytes, of the backend device.
> + *
> + * sectors
> + *      Values:         <UINT64>
> + *
> + *      The size of the backend device, expressed in units of its logical
> + *      sector size ("sector-size").
> + *
> + *****************************************************************************
> + *                            Frontend XenBus Nodes
> + *****************************************************************************
> + *
> + *----------------------- Request Transport Parameters -----------------------
> + *
> + * event-channel
> + *      Values:         <UINT32>
> + *
> + *      The identifier of the Xen event channel used to signal activity
> + *      in the ring buffer.
> + *
> + * ring-ref
> + *      Values:         <UINT32>
> + *      Notes:          6
> + *
> + *      The Xen grant reference granting permission for the backend to map
> + *      the sole page in a single page sized ring buffer.
> + *
> + * ring-ref%u
> + *      Values:         <UINT32>
> + *      Notes:          6
> + *
> + *      For a frontend providing a multi-page ring, a "number of ring pages"
> + *      sized list of nodes, each containing a Xen grant reference granting
> + *      permission for the backend to map the page of the ring located
> + *      at page index "%u".  Page indexes are zero based.
> + *
> + * protocol
> + *      Values:         string (XEN_IO_PROTO_ABI_*)
> + *      Default Value:  XEN_IO_PROTO_ABI_NATIVE
> + *
> + *      The machine ABI rules governing the format of all ring request and
> + *      response structures.
> + *
> + * ring-page-order
> + *      Values:         <UINT32>
> + *      Default Value:  0
> + *      Maximum Value:  MAX(ffs(max-ring-pages) - 1, max-ring-page-order)
> + *      Notes:          1, 3
> + *
> + *      The size of the frontend allocated request ring buffer in units
> + *      of lb(machine pages). (e.g. 0 == 1 page, 1 == 2 pages, 2 == 4 pages,
> + *      etc.).
> + *
> + * num-ring-pages
> + *      Values:         <UINT32>
> + *      Default Value:  1
> + *      Maximum Value:  MAX(max-ring-pages,(0x1 << max-ring-page-order))
> + *      Notes:          DEPRECATED, 2, 3
> + *
> + *      The size of the frontend allocated request ring buffer in units of
> + *      machine pages.  The value must be a power of 2.
> + *
> + * feature-persistent
> + *      Values:         0/1 (boolean)
> + *      Default Value:  0
> + *      Notes: 7, 8, 9
> + *
> + *      A value of "1" indicates that the frontend will reuse the same grants
> + *      for all transactions, allowing the backend to map them with write
> + *      access (even when it should be read-only). If the frontend hits the
> + *      maximum number of allowed persistently mapped grants, it can fall
> + *      back to non-persistent mode. This will cause a performance degradation,
> + *      since the backend driver will still try to map those grants
> + *      persistently. Since the persistent grants protocol is compatible with
> + *      the previous protocol, a frontend driver can choose to work in
> + *      persistent mode even when the backend doesn't support it.
> + *
> + *      It is recommended that the frontend driver stores the persistently
> + *      mapped grants in a LIFO queue, so a subset of all persistently mapped
> + *      grants gets used commonly. This is done in case the backend driver
> + *      decides to limit the maximum number of persistently mapped grants
> + *      to a value less than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
> + *
> + *------------------------- Virtual Device Properties -------------------------
> + *
> + * device-type
> + *      Values:         "disk", "cdrom", "floppy", etc.
> + *
> + * virtual-device
> + *      Values:         <UINT32>
> + *
> + *      A value indicating the physical device to virtualize within the
> + *      frontend's domain.  (e.g. "The first ATA disk", "The third SCSI
> + *      disk", etc.)
> + *
> + *      See docs/misc/vbd-interface.txt for details on the format of this
> + *      value.
> + *
> + * Notes
> + * -----
> + * (1) Multi-page ring buffer scheme first developed in the Citrix XenServer
> + *     PV drivers.
> + * (2) Multi-page ring buffer scheme first used in some RedHat distributions
> + *     including a distribution deployed on certain nodes of the Amazon
> + *     EC2 cluster.
> + * (3) Support for multi-page ring buffers was implemented independently,
> + *     in slightly different forms, by both Citrix and RedHat/Amazon.
> + *     For full interoperability, block front and backends should publish
> + *     identical ring parameters, adjusted for unit differences, to the
> + *     XenStore nodes used in both schemes.
> + * (4) Devices that support discard functionality may internally allocate space
> + *     (discardable extents) in units that are larger than the exported logical
> + *     block size. If the backing device has such discardable extents the
> + *     backend should provide both discard-granularity and discard-alignment.
> + *     Providing just one of the two may be considered an error by the frontend.
> + *     Backends supporting discard should include discard-granularity and
> + *     discard-alignment even if they support discarding individual sectors.
> + *     Frontends should assume discard-alignment == 0 and discard-granularity
> + *     == sector size if these keys are missing.
> + * (5) The discard-alignment parameter allows a physical device to be
> + *     partitioned into virtual devices that do not necessarily begin or
> + *     end on a discardable extent boundary.
> + * (6) When there is only a single page allocated to the request ring,
> + *     'ring-ref' is used to communicate the grant reference for this
> + *     page to the backend.  When using a multi-page ring, the 'ring-ref'
> + *     node is not created.  Instead 'ring-ref0' - 'ring-refN' are used.
> + * (7) When using persistent grants, data has to be copied from/to the page
> + *     where the grant is currently mapped. However, the overhead of doing
> + *     this copy does not cancel out the speed improvement of not having to
> + *     unmap the grants.
> + * (8) The frontend driver has to allow the backend driver to map all grants
> + *     with write access, even when they should be mapped read-only, since
> + *     further requests may reuse these grants and require write permissions.
> + * (9) Linux implementation doesn't have a limit on the maximum number of
> + *     grants that can be persistently mapped in the frontend driver, but
> + *     due to the frontend driver implementation it should never be bigger
> + *     than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
> + *(10) The discard-secure property may be present and will be set to 1 if the
> + *     backing device supports secure discard.
> + */
> +
> +/*
> + * STATE DIAGRAMS
> + *
> + *****************************************************************************
> + *                                   Startup                                 *
> + *****************************************************************************
> + *
> + * Tool stack creates front and back nodes with state XenbusStateInitialising.
> + *
> + * Front                                Back
> + * =================================    =====================================
> + * XenbusStateInitialising              XenbusStateInitialising
> + *  o Query virtual device               o Query backend device identification
> + *    properties.                          data.
> + *  o Setup OS device instance.          o Open and validate backend device.
> + *                                       o Publish backend features and
> + *                                         transport parameters.
> + *                                                      |
> + *                                                      |
> + *                                                      V
> + *                                      XenbusStateInitWait
> + *
> + * o Query backend features and
> + *   transport parameters.
> + * o Allocate and initialize the
> + *   request ring.
> + * o Publish transport parameters
> + *   that will be in effect during
> + *   this connection.
> + *              |
> + *              |
> + *              V
> + * XenbusStateInitialised
> + *
> + *                                       o Query frontend transport parameters.
> + *                                       o Connect to the request ring and
> + *                                         event channel.
> + *                                       o Publish backend device properties.
> + *                                                      |
> + *                                                      |
> + *                                                      V
> + *                                      XenbusStateConnected
> + *
> + *  o Query backend device properties.
> + *  o Finalize OS virtual device
> + *    instance.
> + *              |
> + *              |
> + *              V
> + * XenbusStateConnected
> + *
> + * Note: Drivers that do not support any optional features, or the negotiation
> + *       of transport parameters, can skip certain states in the state machine:
> + *
> + *       o A frontend may transition to XenbusStateInitialised without
> + *         waiting for the backend to enter XenbusStateInitWait.  In this
> + *         case, default transport parameters are in effect and any
> + *         transport parameters published by the frontend must contain
> + *         their default values.
> + *
> + *       o A backend may transition to XenbusStateInitialised, bypassing
> + *         XenbusStateInitWait, without waiting for the frontend to first
> + *         enter the XenbusStateInitialised state.  In this case, default
> + *         transport parameters are in effect and any transport parameters
> + *         published by the backend must contain their default values.
> + *
> + *       Drivers that support optional features and/or transport parameter
> + *       negotiation must tolerate these additional state transition paths.
> + *       In general this means performing the work of any skipped state
> + *       transition, if it has not already been performed, in addition to the
> + *       work associated with entry into the current state.
> + */
> +
> +/*
> + * REQUEST CODES.
> + */
> +#define BLKIF_OP_READ              0
> +#define BLKIF_OP_WRITE             1
> +/*
> + * All writes issued prior to a request with the BLKIF_OP_WRITE_BARRIER
> + * operation code ("barrier request") must be completed prior to the
> + * execution of the barrier request.  All writes issued after the barrier
> + * request must not execute until after the completion of the barrier request.
> + *
> + * Optional.  See "feature-barrier" XenBus node documentation above.
> + */
> +#define BLKIF_OP_WRITE_BARRIER     2
> +/*
> + * Commit any uncommitted contents of the backing device's volatile cache
> + * to stable storage.
> + *
> + * Optional.  See "feature-flush-cache" XenBus node documentation above.
> + */
> +#define BLKIF_OP_FLUSH_DISKCACHE   3
> +/*
> + * Used in SLES sources for device specific command packet
> + * contained within the request. Reserved for that purpose.
> + */
> +#define BLKIF_OP_RESERVED_1        4
> +/*
> + * Indicate to the backend device that a region of storage is no longer in
> + * use, and may be discarded at any time without impact to the client.  If
> + * the BLKIF_DISCARD_SECURE flag is set on the request, all copies of the
> + * discarded region on the device must be rendered unrecoverable before the
> + * command returns.
> + *
> + * This operation is analogous to performing a trim (ATA) or unmap (SCSI)
> + * command on a native device.
> + *
> + * More information about trim/unmap operations can be found at:
> + * http://t13.org/Documents/UploadedDocuments/docs2008/
> + *     e07154r6-Data_Set_Management_Proposal_for_ATA-ACS2.doc
> + * http://www.seagate.com/staticfiles/support/disc/manuals/
> + *     Interface%20manuals/100293068c.pdf
> + *
> + * Optional.  See "feature-discard", "discard-alignment",
> + * "discard-granularity", and "discard-secure" in the XenBus node
> + * documentation above.
> + */
> +#define BLKIF_OP_DISCARD           5
> +
> +/*
> + * Recognized if "feature-max-indirect-segments" is present in the backend
> + * xenbus info. The "feature-max-indirect-segments" node contains the maximum
> + * number of segments allowed by the backend per request. If the node is
> + * present, the frontend might use blkif_request_indirect structs in order to
> + * issue requests with more than BLKIF_MAX_SEGMENTS_PER_REQUEST (11). The
> + * maximum number of indirect segments is fixed by the backend, but the
> + * frontend can issue requests with any number of indirect segments as long as
> + * it's less than the number provided by the backend. The indirect_grefs field
> + * in blkif_request_indirect should be filled by the frontend with the
> + * grant references of the pages that are holding the indirect segments.
> + * These pages are filled with an array of blkif_request_segment that hold the
> + * information about the segments. The number of indirect pages to use is
> + * determined by the number of segments an indirect request contains. Every
> + * indirect page can contain a maximum of
> + * (PAGE_SIZE / sizeof(struct blkif_request_segment)) segments, so to
> + * calculate the number of indirect pages to use we have to do
> + * ceil(indirect_segments / (PAGE_SIZE / sizeof(struct blkif_request_segment))).
> + *
> + * If a backend does not recognize BLKIF_OP_INDIRECT, it should *not*
> + * create the "feature-max-indirect-segments" node!
> + */
> +#define BLKIF_OP_INDIRECT          6
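
The ceil() in the comment above reduces to integer arithmetic; something like
the following, where both macro names are mine and 4096 stands in for
PAGE_SIZE:

    #define SEGS_PER_INDIRECT_FRAME \
            (4096 / sizeof (struct blkif_request_segment))
    #define INDIRECT_PAGES(Segs) \
            (((Segs) + SEGS_PER_INDIRECT_FRAME - 1) / SEGS_PER_INDIRECT_FRAME)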
> +
> +/*
> + * Maximum scatter/gather segments per request.
> + * This is carefully chosen so that sizeof(blkif_ring_t) <= PAGE_SIZE.
> + * NB. This could be 12 if the ring indexes weren't stored in the same page.
> + */
> +#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11
> +
> +/*
> + * Maximum number of indirect pages to use per request.
> + */
> +#define BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST 8
> +
> +/*
> + * NB. first_sect and last_sect in blkif_request_segment, as well as
> + * sector_number in blkif_request, are always expressed in 512-byte units.
> + * However they must be properly aligned to the real sector size of the
> + * physical disk, which is reported in the "physical-sector-size" node in
> + * the backend xenbus info. Also the xenbus "sectors" node is expressed in
> + * 512-byte units.
> + */
> +struct blkif_request_segment {
> +    grant_ref_t gref;        /* reference to I/O buffer frame        */
> +    /* @first_sect: first sector in frame to transfer (inclusive).   */
> +    /* @last_sect: last sector in frame to transfer (inclusive).     */
> +    UINT8     first_sect, last_sect;
> +};
> +
> +/*
> + * Starting ring element for any I/O request.
> + */
> +struct blkif_request {
> +    UINT8        operation;    /* BLKIF_OP_???                         */
> +    UINT8        nr_segments;  /* number of segments                   */
> +    blkif_vdev_t   handle;       /* only for read/write requests         */
> +    UINT64       id;           /* private guest value, echoed in resp  */
> +    blkif_sector_t sector_number;/* start sector idx on disk (r/w only)  */
> +    struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
> +};
> +typedef struct blkif_request blkif_request_t;
> +
> +/*
> + * Cast to this structure when blkif_request.operation == BLKIF_OP_DISCARD
> + * sizeof(struct blkif_request_discard) <= sizeof(struct blkif_request)
> + */
> +struct blkif_request_discard {
> +    UINT8        operation;    /* BLKIF_OP_DISCARD                     */
> +    UINT8        flag;         /* BLKIF_DISCARD_SECURE or zero         */
> +#define BLKIF_DISCARD_SECURE (1<<0)  /* ignored if discard-secure=0      */
> +    blkif_vdev_t   handle;       /* same as for read/write requests      */
> +    UINT64       id;           /* private guest value, echoed in resp  */
> +    blkif_sector_t sector_number;/* start sector idx on disk             */
> +    UINT64       nr_sectors;   /* number of contiguous sectors to discard*/
> +};
> +typedef struct blkif_request_discard blkif_request_discard_t;
> +
> +struct blkif_request_indirect {
> +    UINT8        operation;    /* BLKIF_OP_INDIRECT                    */
> +    UINT8        indirect_op;  /* BLKIF_OP_{READ/WRITE}                */
> +    UINT16       nr_segments;  /* number of segments                   */
> +    UINT64       id;           /* private guest value, echoed in resp  */
> +    blkif_sector_t sector_number;/* start sector idx on disk (r/w only)  */
> +    blkif_vdev_t   handle;       /* same as for read/write requests      */
> +    grant_ref_t    indirect_grefs[BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST];
> +#ifdef __i386__
> +    UINT64       pad;          /* Make it 64 byte aligned on i386      */
> +#endif
> +};
> +typedef struct blkif_request_indirect blkif_request_indirect_t;
> +
> +struct blkif_response {
> +    UINT64        id;              /* copied from request */
> +    UINT8         operation;       /* copied from request */
> +    INT16         status;          /* BLKIF_RSP_???       */
> +};
> +typedef struct blkif_response blkif_response_t;
> +
> +/*
> + * STATUS RETURN CODES.
> + */
> + /* Operation not supported (only happens on barrier writes). */
> +#define BLKIF_RSP_EOPNOTSUPP  -2
> + /* Operation failed for some unspecified reason (-EIO). */
> +#define BLKIF_RSP_ERROR       -1
> + /* Operation completed successfully. */
> +#define BLKIF_RSP_OKAY         0
> +
> +/*
> + * Generate blkif ring structures and types.
> + */
> +DEFINE_RING_TYPES(blkif, struct blkif_request, struct blkif_response);
> +
> +#define VDISK_CDROM        0x1
> +#define VDISK_REMOVABLE    0x2
> +#define VDISK_READONLY     0x4
> +
> +#endif /* __XEN_PUBLIC_IO_BLKIF_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/io/protocols.h b/OvmfPkg/Include/IndustryStandard/Xen/io/protocols.h
> new file mode 100644
> index 0000000..80b196b
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/io/protocols.h
> @@ -0,0 +1,40 @@
> +/******************************************************************************
> + * protocols.h
> + * 
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + */
> +
> +#ifndef __XEN_PROTOCOLS_H__
> +#define __XEN_PROTOCOLS_H__
> +
> +#define XEN_IO_PROTO_ABI_X86_32     "x86_32-abi"
> +#define XEN_IO_PROTO_ABI_X86_64     "x86_64-abi"
> +#define XEN_IO_PROTO_ABI_ARM        "arm-abi"
> +
> +#if defined(__i386__)
> +# define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_X86_32
> +#elif defined(__x86_64__)
> +# define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_X86_64
> +#elif defined(__arm__) || defined(__aarch64__)
> +# define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_ARM
> +#else
> +# error arch fixup needed here
> +#endif
> +
> +#endif
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/io/ring.h b/OvmfPkg/Include/IndustryStandard/Xen/io/ring.h
> new file mode 100644
> index 0000000..a8e9ea0
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/io/ring.h
> @@ -0,0 +1,312 @@
> +/******************************************************************************
> + * ring.h
> + * 
> + * Shared producer-consumer ring macros.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Tim Deegan and Andrew Warfield November 2004.
> + */
> +
> +#ifndef __XEN_PUBLIC_IO_RING_H__
> +#define __XEN_PUBLIC_IO_RING_H__
> +
> +#include "../xen-compat.h"
> +
> +#if __XEN_INTERFACE_VERSION__ < 0x00030208
> +#define xen_mb()  mb()
> +#define xen_rmb() rmb()
> +#define xen_wmb() wmb()
> +#endif
> +
> +typedef UINT32 RING_IDX;
> +
> +/* Round a 32-bit unsigned constant down to the nearest power of two. */
> +#define __RD2(_x)  (((_x) & 0x00000002) ? 0x2                  : ((_x) & 0x1))
> +#define __RD4(_x)  (((_x) & 0x0000000c) ? __RD2((_x)>>2)<<2    : __RD2(_x))
> +#define __RD8(_x)  (((_x) & 0x000000f0) ? __RD4((_x)>>4)<<4    : __RD4(_x))
> +#define __RD16(_x) (((_x) & 0x0000ff00) ? __RD8((_x)>>8)<<8    : __RD8(_x))
> +#define __RD32(_x) (((_x) & 0xffff0000) ? __RD16((_x)>>16)<<16 : __RD16(_x))
> +
> +/*
> + * Calculate size of a shared ring, given the total available space for the
> + * ring and indexes (_sz), and the name tag of the request/response structure.
> + * A ring contains as many entries as will fit, rounded down to the nearest 
> + * power of two (so we can mask with (size-1) to loop around).
> + */
> +#define __CONST_RING_SIZE(_s, _sz) \
> +    (__RD32(((_sz) - offsetof(struct _s##_sring, ring)) / \
> +         sizeof(((struct _s##_sring *)0)->ring[0])))
> +/*
> + * The same for passing in an actual pointer instead of a name tag.
> + */
> +#define __RING_SIZE(_s, _sz) \
> +    (__RD32(((_sz) - (INTN)(_s)->ring + (INTN)(_s)) / sizeof((_s)->ring[0])))
> +
> +/*
> + * Macros to make the correct C datatypes for a new kind of ring.
> + * 
> + * To make a new ring datatype, you need to have two message structures,
> + * let's say request_t, and response_t already defined.
> + *
> + * In a header where you want the ring datatype declared, you then do:
> + *
> + *     DEFINE_RING_TYPES(mytag, request_t, response_t);
> + *
> + * These expand out to give you a set of types, as you can see below.
> + * The most important of these are:
> + * 
> + *     mytag_sring_t      - The shared ring.
> + *     mytag_front_ring_t - The 'front' half of the ring.
> + *     mytag_back_ring_t  - The 'back' half of the ring.
> + *
> + * To initialize a ring in your code you need to know the location and size
> + * of the shared memory area (PAGE_SIZE, for instance). To initialise
> + * the front half:
> + *
> + *     mytag_front_ring_t front_ring;
> + *     SHARED_RING_INIT((mytag_sring_t *)shared_page);
> + *     FRONT_RING_INIT(&front_ring, (mytag_sring_t *)shared_page, PAGE_SIZE);
> + *
> + * Initializing the back follows similarly (note that only the front
> + * initializes the shared ring):
> + *
> + *     mytag_back_ring_t back_ring;
> + *     BACK_RING_INIT(&back_ring, (mytag_sring_t *)shared_page, PAGE_SIZE);
> + */
> +
> +#define DEFINE_RING_TYPES(__name, __req_t, __rsp_t)                     \
> +                                                                        \
> +/* Shared ring entry */                                                 \
> +union __name##_sring_entry {                                            \
> +    __req_t req;                                                        \
> +    __rsp_t rsp;                                                        \
> +};                                                                      \
> +                                                                        \
> +/* Shared ring page */                                                  \
> +struct __name##_sring {                                                 \
> +    RING_IDX req_prod, req_event;                                       \
> +    RING_IDX rsp_prod, rsp_event;                                       \
> +    union {                                                             \
> +        struct {                                                        \
> +            UINT8 smartpoll_active;                                   \
> +        } netif;                                                        \
> +        struct {                                                        \
> +            UINT8 msg;                                                \
> +        } tapif_user;                                                   \
> +        UINT8 pvt_pad[4];                                             \
> +    } private;                                                          \
> +    UINT8 __pad[44];                                                  \
> +    union __name##_sring_entry ring[1]; /* variable-length */           \
> +};                                                                      \
> +                                                                        \
> +/* "Front" end's private variables */                                   \
> +struct __name##_front_ring {                                            \
> +    RING_IDX req_prod_pvt;                                              \
> +    RING_IDX rsp_cons;                                                  \
> +    UINT32 nr_ents;                                               \
> +    struct __name##_sring *sring;                                       \
> +};                                                                      \
> +                                                                        \
> +/* "Back" end's private variables */                                    \
> +struct __name##_back_ring {                                             \
> +    RING_IDX rsp_prod_pvt;                                              \
> +    RING_IDX req_cons;                                                  \
> +    UINT32 nr_ents;                                               \
> +    struct __name##_sring *sring;                                       \
> +};                                                                      \
> +                                                                        \
> +/* Syntactic sugar */                                                   \
> +typedef struct __name##_sring __name##_sring_t;                         \
> +typedef struct __name##_front_ring __name##_front_ring_t;               \
> +typedef struct __name##_back_ring __name##_back_ring_t
> +
> +/*
> + * Macros for manipulating rings.
> + * 
> + * FRONT_RING_whatever works on the "front end" of a ring: here 
> + * requests are pushed on to the ring and responses taken off it.
> + * 
> + * BACK_RING_whatever works on the "back end" of a ring: here 
> + * requests are taken off the ring and responses put on.
> + * 
> + * N.B. these macros do NO INTERLOCKS OR FLOW CONTROL. 
> + * This is OK in 1-for-1 request-response situations where the 
> + * requestor (front end) never has more than RING_SIZE()-1
> + * outstanding requests.
> + */
> +
> +/* Initialising empty rings */
> +#define SHARED_RING_INIT(_s) do {                                       \
> +    (_s)->req_prod  = (_s)->rsp_prod  = 0;                              \
> +    (_s)->req_event = (_s)->rsp_event = 1;                              \
> +    (VOID)ZeroMem((_s)->private.pvt_pad, sizeof((_s)->private.pvt_pad)); \
> +    (VOID)ZeroMem((_s)->__pad, sizeof((_s)->__pad));                  \
> +} while(0)
> +
> +#define FRONT_RING_INIT(_r, _s, __size) do {                            \
> +    (_r)->req_prod_pvt = 0;                                             \
> +    (_r)->rsp_cons = 0;                                                 \
> +    (_r)->nr_ents = __RING_SIZE(_s, __size);                            \
> +    (_r)->sring = (_s);                                                 \
> +} while (0)
> +
> +#define BACK_RING_INIT(_r, _s, __size) do {                             \
> +    (_r)->rsp_prod_pvt = 0;                                             \
> +    (_r)->req_cons = 0;                                                 \
> +    (_r)->nr_ents = __RING_SIZE(_s, __size);                            \
> +    (_r)->sring = (_s);                                                 \
> +} while (0)
> +
> +/* How big is this ring? */
> +#define RING_SIZE(_r)                                                   \
> +    ((_r)->nr_ents)
> +
> +/* Number of free requests (for use on front side only). */
> +#define RING_FREE_REQUESTS(_r)                                          \
> +    (RING_SIZE(_r) - ((_r)->req_prod_pvt - (_r)->rsp_cons))
> +
> +/* Test if there is an empty slot available on the front ring.
> + * (This is only meaningful from the front. )
> + */
> +#define RING_FULL(_r)                                                   \
> +    (RING_FREE_REQUESTS(_r) == 0)
> +
> +/* Test if there are outstanding messages to be processed on a ring. */
> +#define RING_HAS_UNCONSUMED_RESPONSES(_r)                               \
> +    ((_r)->sring->rsp_prod - (_r)->rsp_cons)
> +
> +#ifdef __GNUC__
> +#define RING_HAS_UNCONSUMED_REQUESTS(_r) ({                             \
> +    UINT32 req = (_r)->sring->req_prod - (_r)->req_cons;          \
> +    UINT32 rsp = RING_SIZE(_r) -                                  \
> +        ((_r)->req_cons - (_r)->rsp_prod_pvt);                          \
> +    req < rsp ? req : rsp;                                              \
> +})
> +#else
> +/* Same as above, but without the nice GCC ({ ... }) syntax. */
> +#define RING_HAS_UNCONSUMED_REQUESTS(_r)                                \
> +    ((((_r)->sring->req_prod - (_r)->req_cons) <                        \
> +      (RING_SIZE(_r) - ((_r)->req_cons - (_r)->rsp_prod_pvt))) ?        \
> +     ((_r)->sring->req_prod - (_r)->req_cons) :                         \
> +     (RING_SIZE(_r) - ((_r)->req_cons - (_r)->rsp_prod_pvt)))
> +#endif
> +
> +/* Direct access to individual ring elements, by index. */
> +#define RING_GET_REQUEST(_r, _idx)                                      \
> +    (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req))
> +
> +#define RING_GET_RESPONSE(_r, _idx)                                     \
> +    (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
> +
> +/* Loop termination condition: Would the specified index overflow the ring? */
> +#define RING_REQUEST_CONS_OVERFLOW(_r, _cons)                           \
> +    (((_cons) - (_r)->rsp_prod_pvt) >= RING_SIZE(_r))
> +
> +/* Ill-behaved frontend determination: Can there be this many requests? */
> +#define RING_REQUEST_PROD_OVERFLOW(_r, _prod)                           \
> +    (((_prod) - (_r)->rsp_prod_pvt) > RING_SIZE(_r))
> +
> +#define RING_PUSH_REQUESTS(_r) do {                                     \
> +    xen_wmb(); /* back sees requests /before/ updated producer index */ \
> +    (_r)->sring->req_prod = (_r)->req_prod_pvt;                         \
> +} while (0)
> +
> +#define RING_PUSH_RESPONSES(_r) do {                                    \
> +    xen_wmb(); /* front sees resps /before/ updated producer index */   \
> +    (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;                         \
> +} while (0)
> +
> +/*
> + * Notification hold-off (req_event and rsp_event):
> + * 
> + * When queueing requests or responses on a shared ring, it may not always be
> + * necessary to notify the remote end. For example, if requests are in flight
> + * in a backend, the front may be able to queue further requests without
> + * notifying the back (if the back checks for new requests when it queues
> + * responses).
> + * 
> + * When enqueuing requests or responses:
> + * 
> + *  Use RING_PUSH_{REQUESTS,RESPONSES}_AND_CHECK_NOTIFY(). The second argument
> + *  is a boolean return value. True indicates that the receiver requires an
> + *  asynchronous notification.
> + * 
> + * After dequeuing requests or responses (before sleeping the connection):
> + * 
> + *  Use RING_FINAL_CHECK_FOR_REQUESTS() or RING_FINAL_CHECK_FOR_RESPONSES().
> + *  The second argument is a boolean return value. True indicates that there
> + *  are pending messages on the ring (i.e., the connection should not be put
> + *  to sleep).
> + * 
> + *  These macros will set the req_event/rsp_event field to trigger a
> + *  notification on the very next message that is enqueued. If you want to
> + *  create batches of work (i.e., only receive a notification after several
> + *  messages have been enqueued) then you will need to create a customised
> + *  version of the FINAL_CHECK macro in your own code, which sets the event
> + *  field appropriately.
> + */
> +
> +#define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {           \
> +    RING_IDX __old = (_r)->sring->req_prod;                             \
> +    RING_IDX __new = (_r)->req_prod_pvt;                                \
> +    xen_wmb(); /* back sees requests /before/ updated producer index */ \
> +    (_r)->sring->req_prod = __new;                                      \
> +    xen_mb(); /* back sees new requests /before/ we check req_event */  \
> +    (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <           \
> +                 (RING_IDX)(__new - __old));                            \
> +} while (0)
> +
> +#define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {          \
> +    RING_IDX __old = (_r)->sring->rsp_prod;                             \
> +    RING_IDX __new = (_r)->rsp_prod_pvt;                                \
> +    xen_wmb(); /* front sees resps /before/ updated producer index */   \
> +    (_r)->sring->rsp_prod = __new;                                      \
> +    xen_mb(); /* front sees new resps /before/ we check rsp_event */    \
> +    (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <           \
> +                 (RING_IDX)(__new - __old));                            \
> +} while (0)
> +
> +#define RING_FINAL_CHECK_FOR_REQUESTS(_r, _work_to_do) do {             \
> +    (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);                   \
> +    if (_work_to_do) break;                                             \
> +    (_r)->sring->req_event = (_r)->req_cons + 1;                        \
> +    xen_mb();                                                           \
> +    (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);                   \
> +} while (0)
> +
> +#define RING_FINAL_CHECK_FOR_RESPONSES(_r, _work_to_do) do {            \
> +    (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);                  \
> +    if (_work_to_do) break;                                             \
> +    (_r)->sring->rsp_event = (_r)->rsp_cons + 1;                        \
> +    xen_mb();                                                           \
> +    (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);                  \
> +} while (0)
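
Tying these together, a frontend's submit path under the hold-off
scheme would look roughly like this (a sketch, not the series' actual
code; XenEventChannelNotify and Port stand in for whatever
event-channel send the driver ends up using):

    INT32 Notify;

    /* requests were filled in at req_prod_pvt, as in blkif.h */
    RING_PUSH_REQUESTS_AND_CHECK_NOTIFY (&FrontRing, Notify);
    if (Notify) {
      XenEventChannelNotify (Port);  /* kick the backend only if asked */
    }
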
> +
> +#endif /* __XEN_PUBLIC_IO_RING_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/io/xenbus.h b/OvmfPkg/Include/IndustryStandard/Xen/io/xenbus.h
> new file mode 100644
> index 0000000..927f9db
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/io/xenbus.h
> @@ -0,0 +1,80 @@
> +/*****************************************************************************
> + * xenbus.h
> + *
> + * Xenbus protocol details.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (C) 2005 XenSource Ltd.
> + */
> +
> +#ifndef _XEN_PUBLIC_IO_XENBUS_H
> +#define _XEN_PUBLIC_IO_XENBUS_H
> +
> +/*
> + * The state of either end of the Xenbus, i.e. the current communication
> + * status of initialisation across the bus.  States here imply nothing about
> + * the state of the connection between the driver and the kernel's device
> + * layers.
> + */
> +enum xenbus_state {
> +    XenbusStateUnknown       = 0,
> +
> +    XenbusStateInitialising  = 1,
> +
> +    /*
> +     * InitWait: Finished early initialisation but waiting for information
> +     * from the peer or hotplug scripts.
> +     */
> +    XenbusStateInitWait      = 2,
> +
> +    /*
> +     * Initialised: Waiting for a connection from the peer.
> +     */
> +    XenbusStateInitialised   = 3,
> +
> +    XenbusStateConnected     = 4,
> +
> +    /*
> +     * Closing: The device is being closed due to an error or an unplug event.
> +     */
> +    XenbusStateClosing       = 5,
> +
> +    XenbusStateClosed        = 6,
> +
> +    /*
> +     * Reconfiguring: The device is being reconfigured.
> +     */
> +    XenbusStateReconfiguring = 7,
> +
> +    XenbusStateReconfigured  = 8
> +};
> +typedef enum xenbus_state XenbusState;
> +
> +#endif /* _XEN_PUBLIC_IO_XENBUS_H */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/io/xs_wire.h b/OvmfPkg/Include/IndustryStandard/Xen/io/xs_wire.h
> new file mode 100644
> index 0000000..4a9e667
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/io/xs_wire.h
> @@ -0,0 +1,138 @@
> +/*
> + * Details of the "wire" protocol between Xen Store Daemon and client
> + * library or guest kernel.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (C) 2005 Rusty Russell IBM Corporation
> + */
> +
> +#ifndef _XS_WIRE_H
> +#define _XS_WIRE_H
> +
> +enum xsd_sockmsg_type
> +{
> +    XS_DEBUG,
> +    XS_DIRECTORY,
> +    XS_READ,
> +    XS_GET_PERMS,
> +    XS_WATCH,
> +    XS_UNWATCH,
> +    XS_TRANSACTION_START,
> +    XS_TRANSACTION_END,
> +    XS_INTRODUCE,
> +    XS_RELEASE,
> +    XS_GET_DOMAIN_PATH,
> +    XS_WRITE,
> +    XS_MKDIR,
> +    XS_RM,
> +    XS_SET_PERMS,
> +    XS_WATCH_EVENT,
> +    XS_ERROR,
> +    XS_IS_DOMAIN_INTRODUCED,
> +    XS_RESUME,
> +    XS_SET_TARGET,
> +    XS_RESTRICT,
> +    XS_RESET_WATCHES
> +};
> +
> +#define XS_WRITE_NONE "NONE"
> +#define XS_WRITE_CREATE "CREATE"
> +#define XS_WRITE_CREATE_EXCL "CREATE|EXCL"
> +
> +/* We hand errors as strings, for portability. */
> +struct xsd_errors
> +{
> +    INT32 errnum;
> +    const CHAR8 *errstring;
> +};
> +#ifdef EINVAL
> +#define XSD_ERROR(x) { x, #x }
> +/* LINTED: static unused */
> +static struct xsd_errors xsd_errors[]
> +#if defined(__GNUC__)
> +__attribute__((unused))
> +#endif
> +    = {
> +    XSD_ERROR(EINVAL),
> +    XSD_ERROR(EACCES),
> +    XSD_ERROR(EEXIST),
> +    XSD_ERROR(EISDIR),
> +    XSD_ERROR(ENOENT),
> +    XSD_ERROR(ENOMEM),
> +    XSD_ERROR(ENOSPC),
> +    XSD_ERROR(EIO),
> +    XSD_ERROR(ENOTEMPTY),
> +    XSD_ERROR(ENOSYS),
> +    XSD_ERROR(EROFS),
> +    XSD_ERROR(EBUSY),
> +    XSD_ERROR(EAGAIN),
> +    XSD_ERROR(EISCONN),
> +    XSD_ERROR(E2BIG)
> +};
> +#endif
> +
> +struct xsd_sockmsg
> +{
> +    UINT32 type;  /* XS_??? */
> +    UINT32 req_id;/* Request identifier, echoed in daemon's response.  */
> +    UINT32 tx_id; /* Transaction id (0 if not related to a transaction). */
> +    UINT32 len;   /* Length of data following this. */
> +
> +    /* Generally followed by nul-terminated string(s). */
> +};
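
Since the header is all there is on the wire, a message is just this
struct followed by len bytes of payload. A hedged sketch of laying out
an XS_READ of the "domid" node, using EDK II's CopyMem in keeping with
the ZeroMem substitution elsewhere in this patch:

    CHAR8 Node[] = "domid";
    UINT8 Buffer[sizeof (struct xsd_sockmsg) + sizeof (Node)];
    struct xsd_sockmsg *Msg = (struct xsd_sockmsg *)Buffer;

    Msg->type   = XS_READ;
    Msg->req_id = 1;              /* echoed in the daemon's reply  */
    Msg->tx_id  = 0;              /* not part of a transaction     */
    Msg->len    = sizeof (Node);  /* payload length, NUL included  */
    CopyMem (Msg + 1, Node, sizeof (Node));
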
> +
> +enum xs_watch_type
> +{
> +    XS_WATCH_PATH = 0,
> +    XS_WATCH_TOKEN
> +};
> +
> +/*
> + * `incontents 150 xenstore_struct XenStore wire protocol.
> + *
> + * Inter-domain shared memory communications. */
> +#define XENSTORE_RING_SIZE 1024
> +typedef UINT32 XENSTORE_RING_IDX;
> +#define MASK_XENSTORE_IDX(idx) ((idx) & (XENSTORE_RING_SIZE-1))
> +struct xenstore_domain_interface {
> +    CHAR8 req[XENSTORE_RING_SIZE]; /* Requests to xenstore daemon. */
> +    CHAR8 rsp[XENSTORE_RING_SIZE]; /* Replies and async watch events. */
> +    XENSTORE_RING_IDX req_cons, req_prod;
> +    XENSTORE_RING_IDX rsp_cons, rsp_prod;
> +};
> +
> +/* Violating this is very bad.  See docs/misc/xenstore.txt. */
> +#define XENSTORE_PAYLOAD_MAX 4096
> +
> +/* Violating these just gets you an error back */
> +#define XENSTORE_ABS_PATH_MAX 3072
> +#define XENSTORE_REL_PATH_MAX 2048
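
A quick sketch of producing into the request ring, assuming the caller
has already verified there is room (XENSTORE_RING_SIZE minus
(req_prod - req_cons)); the required write barrier is only noted here:

    VOID
    XsRingWrite (
      struct xenstore_domain_interface *Intf,
      CONST UINT8                      *Data,
      UINT32                           Len
      )
    {
      UINT32 Index;

      for (Index = 0; Index < Len; Index++) {
        Intf->req[MASK_XENSTORE_IDX (Intf->req_prod + Index)] = Data[Index];
      }
      /* a real driver must issue a write barrier before this update */
      Intf->req_prod += Len;
    }
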
> +
> +#endif /* _XS_WIRE_H */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/memory.h b/OvmfPkg/Include/IndustryStandard/Xen/memory.h
> new file mode 100644
> index 0000000..93b73b5
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/memory.h
> @@ -0,0 +1,480 @@
> +/******************************************************************************
> + * memory.h
> + * 
> + * Memory reservation and information.
> + * 
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (c) 2005, Keir Fraser <keir@xxxxxxxxxxxxx>
> + */
> +
> +#ifndef __XEN_PUBLIC_MEMORY_H__
> +#define __XEN_PUBLIC_MEMORY_H__
> +
> +#include "xen.h"
> +
> +/*
> + * Increase or decrease the specified domain's memory reservation. Returns the
> + * number of extents successfully allocated or freed.
> + * arg == addr of struct xen_memory_reservation.
> + */
> +#define XENMEM_increase_reservation 0
> +#define XENMEM_decrease_reservation 1
> +#define XENMEM_populate_physmap     6
> +
> +#if __XEN_INTERFACE_VERSION__ >= 0x00030209
> +/*
> + * Maximum # bits addressable by the user of the allocated region (e.g., I/O 
> + * devices often have a 32-bit limitation even in 64-bit systems). If zero 
> + * then the user has no addressing restriction. This field is not used by 
> + * XENMEM_decrease_reservation.
> + */
> +#define XENMEMF_address_bits(x)     (x)
> +#define XENMEMF_get_address_bits(x) ((x) & 0xffu)
> +/* NUMA node to allocate from. */
> +#define XENMEMF_node(x)     (((x) + 1) << 8)
> +#define XENMEMF_get_node(x) ((((x) >> 8) - 1) & 0xffu)
> +/* Flag to populate physmap with populate-on-demand entries */
> +#define XENMEMF_populate_on_demand (1<<16)
> +/* Flag to request allocation only from the node specified */
> +#define XENMEMF_exact_node_request  (1<<17)
> +#define XENMEMF_exact_node(n) (XENMEMF_node(n) | XENMEMF_exact_node_request)
> +#endif
> +
> +struct xen_memory_reservation {
> +
> +    /*
> +     * XENMEM_increase_reservation:
> +     *   OUT: MFN (*not* GMFN) bases of extents that were allocated
> +     * XENMEM_decrease_reservation:
> +     *   IN:  GMFN bases of extents to free
> +     * XENMEM_populate_physmap:
> +     *   IN:  GPFN bases of extents to populate with memory
> +     *   OUT: GMFN bases of extents that were allocated
> +     *   (NB. This command also updates the mach_to_phys translation table)
> +     * XENMEM_claim_pages:
> +     *   IN: must be zero
> +     */
> +    XEN_GUEST_HANDLE(xen_pfn_t) extent_start;
> +
> +    /* Number of extents, and size/alignment of each (2^extent_order pages). */
> +    xen_ulong_t    nr_extents;
> +    UINT32   extent_order;
> +
> +#if __XEN_INTERFACE_VERSION__ >= 0x00030209
> +    /* XENMEMF flags. */
> +    UINT32   mem_flags;
> +#else
> +    UINT32   address_bits;
> +#endif
> +
> +    /*
> +     * Domain whose reservation is being changed.
> +     * Unprivileged domains can specify only DOMID_SELF.
> +     */
> +    domid_t        domid;
> +};
> +typedef struct xen_memory_reservation xen_memory_reservation_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_memory_reservation_t);
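
To illustrate how this gets filled in, a hedged single-page
XENMEM_populate_physmap sketch; HYPERVISOR_memory_op is assumed to be
provided by a hypercall wrapper elsewhere in the series, GuestPfn is
hypothetical, and set_xen_guest_handle comes from the imported xen.h:

    xen_memory_reservation_t Reservation;
    xen_pfn_t                Pfn = GuestPfn;
    INTN                     Ret;

    set_xen_guest_handle (Reservation.extent_start, &Pfn);
    Reservation.nr_extents   = 1;
    Reservation.extent_order = 0;           /* one 4KB page */
    Reservation.mem_flags    = 0;
    Reservation.domid        = DOMID_SELF;
    /* returns the number of extents populated; 1 on success */
    Ret = HYPERVISOR_memory_op (XENMEM_populate_physmap, &Reservation);
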
> +
> +/*
> + * An atomic exchange of memory pages. If return code is zero then
> + * @out.extent_list provides GMFNs of the newly-allocated memory.
> + * Returns zero on complete success, otherwise a negative error code.
> + * On complete success then always @nr_exchanged == @in.nr_extents.
> + * On partial success @nr_exchanged indicates how much work was done.
> + */
> +#define XENMEM_exchange             11
> +struct xen_memory_exchange {
> +    /*
> +     * [IN] Details of memory extents to be exchanged (GMFN bases).
> +     * Note that @in.address_bits is ignored and unused.
> +     */
> +    struct xen_memory_reservation in;
> +
> +    /*
> +     * [IN/OUT] Details of new memory extents.
> +     * We require that:
> +     *  1. @in.domid == @out.domid
> +     *  2. @in.nr_extents  << @in.extent_order == 
> +     *     @out.nr_extents << @out.extent_order
> +     *  3. @in.extent_start and @out.extent_start lists must not overlap
> +     *  4. @out.extent_start lists GPFN bases to be populated
> +     *  5. @out.extent_start is overwritten with allocated GMFN bases
> +     */
> +    struct xen_memory_reservation out;
> +
> +    /*
> +     * [OUT] Number of input extents that were successfully exchanged:
> +     *  1. The first @nr_exchanged input extents were successfully
> +     *     deallocated.
> +     *  2. The corresponding first entries in the output extent list correctly
> +     *     indicate the GMFNs that were successfully exchanged.
> +     *  3. All other input and output extents are untouched.
> +     *  4. If not all input extents are exchanged then the return code of this
> +     *     command will be non-zero.
> +     *  5. THIS FIELD MUST BE INITIALISED TO ZERO BY THE CALLER!
> +     */
> +    xen_ulong_t nr_exchanged;
> +};
> +typedef struct xen_memory_exchange xen_memory_exchange_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_memory_exchange_t);
> +
> +/*
> + * Returns the maximum machine frame number of mapped RAM in this system.
> + * This command always succeeds (it never returns an error code).
> + * arg == NULL.
> + */
> +#define XENMEM_maximum_ram_page     2
> +
> +/*
> + * Returns the current or maximum memory reservation, in pages, of the
> + * specified domain (may be DOMID_SELF). Returns -ve errcode on failure.
> + * arg == addr of domid_t.
> + */
> +#define XENMEM_current_reservation  3
> +#define XENMEM_maximum_reservation  4
> +
> +/*
> + * Returns the maximum GPFN in use by the guest, or -ve errcode on failure.
> + */
> +#define XENMEM_maximum_gpfn         14
> +
> +/*
> + * Returns a list of MFN bases of 2MB extents comprising the machine_to_phys
> + * mapping table. Architectures which do not have a m2p table do not implement
> + * this command.
> + * arg == addr of xen_machphys_mfn_list_t.
> + */
> +#define XENMEM_machphys_mfn_list    5
> +struct xen_machphys_mfn_list {
> +    /*
> +     * Size of the 'extent_start' array. Fewer entries will be filled if the
> +     * machphys table is smaller than max_extents * 2MB.
> +     */
> +    UINT32 max_extents;
> +
> +    /*
> +     * Pointer to buffer to fill with list of extent starts. If there are
> +     * any large discontiguities in the machine address space, 2MB gaps in
> +     * the machphys table will be represented by an MFN base of zero.
> +     */
> +    XEN_GUEST_HANDLE(xen_pfn_t) extent_start;
> +
> +    /*
> +     * Number of extents written to the above array. This will be smaller
> +     * than 'max_extents' if the machphys table is smaller than max_e * 2MB.
> +     */
> +    UINT32 nr_extents;
> +};
> +typedef struct xen_machphys_mfn_list xen_machphys_mfn_list_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_machphys_mfn_list_t);
> +
> +/*
> + * Returns the location in virtual address space of the machine_to_phys
> + * mapping table. Architectures which do not have a m2p table, or which do not
> + * map it by default into guest address space, do not implement this command.
> + * arg == addr of xen_machphys_mapping_t.
> + */
> +#define XENMEM_machphys_mapping     12
> +struct xen_machphys_mapping {
> +    xen_ulong_t v_start, v_end; /* Start and end virtual addresses.   */
> +    xen_ulong_t max_mfn;        /* Maximum MFN that can be looked up. */
> +};
> +typedef struct xen_machphys_mapping xen_machphys_mapping_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_machphys_mapping_t);
> +
> +/* Source mapping space. */
> +/* ` enum phys_map_space { */
> +#define XENMAPSPACE_shared_info  0 /* shared info page */
> +#define XENMAPSPACE_grant_table  1 /* grant table page */
> +#define XENMAPSPACE_gmfn         2 /* GMFN */
> +#define XENMAPSPACE_gmfn_range   3 /* GMFN range, XENMEM_add_to_physmap only. */
> +#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another dom,
> +                                    * XENMEM_add_to_physmap_batch only. */
> +/* ` } */
> +
> +/*
> + * Sets the GPFN at which a particular page appears in the specified guest's
> + * pseudophysical address space.
> + * arg == addr of xen_add_to_physmap_t.
> + */
> +#define XENMEM_add_to_physmap      7
> +struct xen_add_to_physmap {
> +    /* Which domain to change the mapping for. */
> +    domid_t domid;
> +
> +    /* Number of pages to go through for gmfn_range */
> +    UINT16    size;
> +
> +    UINT32 space; /* => enum phys_map_space */
> +
> +#define XENMAPIDX_grant_table_status 0x80000000
> +
> +    /* Index into space being mapped. */
> +    xen_ulong_t idx;
> +
> +    /* GPFN in domid where the source mapping page should appear. */
> +    xen_pfn_t     gpfn;
> +};
> +typedef struct xen_add_to_physmap xen_add_to_physmap_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_t);
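
This is the call a guest uses to place, e.g., the shared info page in
its physmap. A hedged sketch, again assuming a HYPERVISOR_memory_op
wrapper and a driver-chosen SharedInfoPfn:

    xen_add_to_physmap_t Xatp;
    INTN                 Ret;

    Xatp.domid = DOMID_SELF;
    Xatp.size  = 0;                       /* unused outside gmfn_range */
    Xatp.space = XENMAPSPACE_shared_info;
    Xatp.idx   = 0;                       /* the one shared info page  */
    Xatp.gpfn  = SharedInfoPfn;           /* where the guest wants it  */
    Ret = HYPERVISOR_memory_op (XENMEM_add_to_physmap, &Xatp);
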
> +
> +/* A batched version of add_to_physmap. */
> +#define XENMEM_add_to_physmap_batch 23
> +struct xen_add_to_physmap_batch {
> +    /* IN */
> +    /* Which domain to change the mapping for. */
> +    domid_t domid;
> +    UINT16 space; /* => enum phys_map_space */
> +
> +    /* Number of pages to go through */
> +    UINT16 size;
> +    domid_t foreign_domid; /* IFF gmfn_foreign */
> +
> +    /* Indexes into space being mapped. */
> +    XEN_GUEST_HANDLE(xen_ulong_t) idxs;
> +
> +    /* GPFN in domid where the source mapping page should appear. */
> +    XEN_GUEST_HANDLE(xen_pfn_t) gpfns;
> +
> +    /* OUT */
> +
> +    /* Per index error code. */
> +    XEN_GUEST_HANDLE(INT32) errs;
> +};
> +typedef struct xen_add_to_physmap_batch xen_add_to_physmap_batch_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_batch_t);
> +
> +#if __XEN_INTERFACE_VERSION__ < 0x00040400
> +#define XENMEM_add_to_physmap_range XENMEM_add_to_physmap_batch
> +#define xen_add_to_physmap_range xen_add_to_physmap_batch
> +typedef struct xen_add_to_physmap_batch xen_add_to_physmap_range_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_range_t);
> +#endif
> +
> +/*
> + * Unmaps the page appearing at a particular GPFN from the specified guest's
> + * pseudophysical address space.
> + * arg == addr of xen_remove_from_physmap_t.
> + */
> +#define XENMEM_remove_from_physmap      15
> +struct xen_remove_from_physmap {
> +    /* Which domain to change the mapping for. */
> +    domid_t domid;
> +
> +    /* GPFN of the current mapping of the page. */
> +    xen_pfn_t     gpfn;
> +};
> +typedef struct xen_remove_from_physmap xen_remove_from_physmap_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_remove_from_physmap_t);
> +
> +/*** REMOVED ***/
> +/*#define XENMEM_translate_gpfn_list  8*/
> +
> +/*
> + * Returns the pseudo-physical memory map as it was when the domain
> + * was started (specified by XENMEM_set_memory_map).
> + * arg == addr of xen_memory_map_t.
> + */
> +#define XENMEM_memory_map           9
> +struct xen_memory_map {
> +    /*
> +     * On call the number of entries which can be stored in buffer. On
> +     * return the number of entries which have been stored in
> +     * buffer.
> +     */
> +    UINT32 nr_entries;
> +
> +    /*
> +     * Entries in the buffer are in the same format as returned by the
> +     * BIOS INT 0x15 EAX=0xE820 call.
> +     */
> +    XEN_GUEST_HANDLE(VOID) buffer;
> +};
> +typedef struct xen_memory_map xen_memory_map_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_memory_map_t);
> +
> +/*
> + * Returns the real physical memory map. Passes the same structure as
> + * XENMEM_memory_map.
> + * arg == addr of xen_memory_map_t.
> + */
> +#define XENMEM_machine_memory_map   10
> +
> +/*
> + * Set the pseudo-physical memory map of a domain, as returned by
> + * XENMEM_memory_map.
> + * arg == addr of xen_foreign_memory_map_t.
> + */
> +#define XENMEM_set_memory_map       13
> +struct xen_foreign_memory_map {
> +    domid_t domid;
> +    struct xen_memory_map map;
> +};
> +typedef struct xen_foreign_memory_map xen_foreign_memory_map_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_foreign_memory_map_t);
> +
> +#define XENMEM_set_pod_target       16
> +#define XENMEM_get_pod_target       17
> +struct xen_pod_target {
> +    /* IN */
> +    UINT64 target_pages;
> +    /* OUT */
> +    UINT64 tot_pages;
> +    UINT64 pod_cache_pages;
> +    UINT64 pod_entries;
> +    /* IN */
> +    domid_t domid;
> +};
> +typedef struct xen_pod_target xen_pod_target_t;
> +
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +
> +#ifndef uint64_aligned_t
> +#define uint64_aligned_t UINT64
> +#endif
> +
> +/*
> + * Get the number of MFNs saved through memory sharing.
> + * The call never fails. 
> + */
> +#define XENMEM_get_sharing_freed_pages    18
> +#define XENMEM_get_sharing_shared_pages   19
> +
> +#define XENMEM_paging_op                    20
> +#define XENMEM_paging_op_nominate           0
> +#define XENMEM_paging_op_evict              1
> +#define XENMEM_paging_op_prep               2
> +
> +#define XENMEM_access_op                    21
> +#define XENMEM_access_op_resume             0
> +
> +struct xen_mem_event_op {
> +    UINT8     op;         /* XENMEM_*_op_* */
> +    domid_t     domain;
> +
> +    /* PAGING_PREP IN: buffer to immediately fill page in */
> +    uint64_aligned_t    buffer;
> +    /* Other OPs */
> +    uint64_aligned_t    gfn;           /* IN:  gfn of page being operated on */
> +};
> +typedef struct xen_mem_event_op xen_mem_event_op_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_mem_event_op_t);
> +
> +#define XENMEM_sharing_op                   22
> +#define XENMEM_sharing_op_nominate_gfn      0
> +#define XENMEM_sharing_op_nominate_gref     1
> +#define XENMEM_sharing_op_share             2
> +#define XENMEM_sharing_op_resume            3
> +#define XENMEM_sharing_op_debug_gfn         4
> +#define XENMEM_sharing_op_debug_mfn         5
> +#define XENMEM_sharing_op_debug_gref        6
> +#define XENMEM_sharing_op_add_physmap       7
> +#define XENMEM_sharing_op_audit             8
> +
> +#define XENMEM_SHARING_OP_S_HANDLE_INVALID  (-10)
> +#define XENMEM_SHARING_OP_C_HANDLE_INVALID  (-9)
> +
> +/* The following allows sharing of grant refs. This is useful
> + * for sharing utilities sitting as "filters" in IO backends
> + * (e.g. memshr + blktap(2)). The IO backend is only exposed 
> + * to grant references, and this allows sharing of the grefs */
> +#define XENMEM_SHARING_OP_FIELD_IS_GREF_FLAG   (1ULL << 62)
> +
> +#define XENMEM_SHARING_OP_FIELD_MAKE_GREF(field, val)  \
> +    (field) = (XENMEM_SHARING_OP_FIELD_IS_GREF_FLAG | val)
> +#define XENMEM_SHARING_OP_FIELD_IS_GREF(field)         \
> +    ((field) & XENMEM_SHARING_OP_FIELD_IS_GREF_FLAG)
> +#define XENMEM_SHARING_OP_FIELD_GET_GREF(field)        \
> +    ((field) & (~XENMEM_SHARING_OP_FIELD_IS_GREF_FLAG))
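
Reading the three macros together: bit 62 tags the field as carrying a
grant reference rather than a gfn. A tiny round-trip sketch (Gref is
hypothetical):

    uint64_aligned_t Field;
    UINT32           Gref = 42;  /* hypothetical grant reference */

    XENMEM_SHARING_OP_FIELD_MAKE_GREF (Field, Gref);
    if (XENMEM_SHARING_OP_FIELD_IS_GREF (Field)) {
      Gref = (UINT32)XENMEM_SHARING_OP_FIELD_GET_GREF (Field);
    }
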
> +
> +struct xen_mem_sharing_op {
> +    UINT8     op;     /* XENMEM_sharing_op_* */
> +    domid_t     domain;
> +
> +    union {
> +        struct mem_sharing_op_nominate {  /* OP_NOMINATE_xxx           */
> +            union {
> +                uint64_aligned_t gfn;     /* IN: gfn to nominate       */
> +                UINT32      grant_ref;  /* IN: grant ref to nominate */
> +            } u;
> +            uint64_aligned_t  handle;     /* OUT: the handle           */
> +        } nominate;
> +        struct mem_sharing_op_share {     /* OP_SHARE/ADD_PHYSMAP */
> +            uint64_aligned_t source_gfn;    /* IN: the gfn of the source page */
> +            uint64_aligned_t source_handle; /* IN: handle to the source page */
> +            uint64_aligned_t client_gfn;    /* IN: the client gfn */
> +            uint64_aligned_t client_handle; /* IN: handle to the client page */
> +            domid_t  client_domain; /* IN: the client domain id */
> +        } share; 
> +        struct mem_sharing_op_debug {     /* OP_DEBUG_xxx */
> +            union {
> +                uint64_aligned_t gfn;      /* IN: gfn to debug          */
> +                uint64_aligned_t mfn;      /* IN: mfn to debug          */
> +                UINT32 gref;     /* IN: gref to debug         */
> +            } u;
> +        } debug;
> +    } u;
> +};
> +typedef struct xen_mem_sharing_op xen_mem_sharing_op_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
> +
> +/*
> + * Attempt to stake a claim for a domain on a quantity of pages
> + * of system RAM, but _not_ assign specific pageframes.  Only
> + * arithmetic is performed so the hypercall is very fast and need
> + * not be preemptible, thus sidestepping time-of-check-time-of-use
> + * races for memory allocation.  Returns 0 if the hypervisor page
> + * allocator has atomically and successfully claimed the requested
> + * number of pages, else non-zero.
> + *
> + * Any domain may have only one active claim.  When sufficient memory
> + * has been allocated to resolve the claim, the claim silently expires.
> + * Claiming zero pages effectively resets any outstanding claim and
> + * is always successful.
> + *
> + * Note that a valid claim may be staked even after memory has been
> + * allocated for a domain.  In this case, the claim is not incremental,
> + * i.e. if the domain's tot_pages is 3, and a claim is staked for 10,
> + * only 7 additional pages are claimed.
> + *
> + * Caller must be privileged or the hypercall fails.
> + */
> +#define XENMEM_claim_pages                  24
> +
> +/*
> + * XENMEM_claim_pages flags - there are no flags at this time.
> + * The zero value is appropriate.
> + */
> +
> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
> +
> +#endif /* __XEN_PUBLIC_MEMORY_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/sched.h b/OvmfPkg/Include/IndustryStandard/Xen/sched.h
> new file mode 100644
> index 0000000..b93123f
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/sched.h
> @@ -0,0 +1,174 @@
> +/******************************************************************************
> + * sched.h
> + *
> + * Scheduler state interactions
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (c) 2005, Keir Fraser <keir@xxxxxxxxxxxxx>
> + */
> +
> +#ifndef __XEN_PUBLIC_SCHED_H__
> +#define __XEN_PUBLIC_SCHED_H__
> +
> +#include "event_channel.h"
> +
> +/*
> + * `incontents 150 sched Guest Scheduler Operations
> + *
> + * The SCHEDOP interface provides mechanisms for a guest to interact
> + * with the scheduler, including yield, blocking and shutting itself
> + * down.
> + */
> +
> +/*
> + * The prototype for this hypercall is:
> + * ` INTN HYPERVISOR_sched_op(enum sched_op cmd, VOID *arg, ...)
> + *
> + * @cmd == SCHEDOP_??? (scheduler operation).
> + * @arg == Operation-specific extra argument(s), as described below.
> + * ...  == Additional Operation-specific extra arguments, described below.
> + *
> + * Versions of Xen prior to 3.0.2 provided only the following legacy version
> + * of this hypercall, supporting only the commands yield, block and shutdown:
> + *  INTN sched_op(INT32 cmd, UINTN arg)
> + * @cmd == SCHEDOP_??? (scheduler operation).
> + * @arg == 0               (SCHEDOP_yield and SCHEDOP_block)
> + *      == SHUTDOWN_* code (SCHEDOP_shutdown)
> + *
> + * This legacy version is available to new guests as:
> + * ` INTN HYPERVISOR_sched_op_compat(enum sched_op cmd, UINTN arg)
> + */
> +
> +/* ` enum sched_op { // SCHEDOP_* => struct sched_* */
> +/*
> + * Voluntarily yield the CPU.
> + * @arg == NULL.
> + */
> +#define SCHEDOP_yield       0
> +
> +/*
> + * Block execution of this VCPU until an event is received for processing.
> + * If called with event upcalls masked, this operation will atomically
> + * reenable event delivery and check for pending events before blocking the
> + * VCPU. This avoids a "wakeup waiting" race.
> + * @arg == NULL.
> + */
> +#define SCHEDOP_block       1
> +
> +/*
> + * Halt execution of this domain (all VCPUs) and notify the system controller.
> + * @arg == pointer to sched_shutdown_t structure.
> + *
> + * If the sched_shutdown_t reason is SHUTDOWN_suspend then this
> + * hypercall takes an additional extra argument which should be the
> + * MFN of the guest's start_info_t.
> + *
> + * In addition, when the reason is SHUTDOWN_suspend, this hypercall
> + * returns 1 if suspend was cancelled or the domain was merely
> + * checkpointed, and 0 if it is resuming in a new domain.
> + */
> +#define SCHEDOP_shutdown    2
> +
> +/*
> + * Poll a set of event-channel ports. Return when one or more are pending. An
> + * optional timeout may be specified.
> + * @arg == pointer to sched_poll_t structure.
> + */
> +#define SCHEDOP_poll        3
> +
> +/*
> + * Declare a shutdown for another domain. The main use of this function is
> + * in interpreting shutdown requests and reasons for fully-virtualized
> + * domains.  A para-virtualized domain may use SCHEDOP_shutdown directly.
> + * @arg == pointer to sched_remote_shutdown_t structure.
> + */
> +#define SCHEDOP_remote_shutdown        4
> +
> +/*
> + * Latch a shutdown code, so that when the domain later shuts down it
> + * reports this code to the control tools.
> + * @arg == sched_shutdown_t, as for SCHEDOP_shutdown.
> + */
> +#define SCHEDOP_shutdown_code 5
> +
> +/*
> + * Setup, poke and destroy a domain watchdog timer.
> + * @arg == pointer to sched_watchdog_t structure.
> + * With id == 0, setup a domain watchdog timer to cause domain shutdown
> + *               after timeout, returns watchdog id.
> + * With id != 0 and timeout == 0, destroy domain watchdog timer.
> + * With id != 0 and timeout != 0, poke watchdog timer and set new timeout.
> + */
> +#define SCHEDOP_watchdog    6
> +/* ` } */
> +
> +struct sched_shutdown {
> +    UINT32 reason; /* SHUTDOWN_* => enum sched_shutdown_reason */
> +};
> +typedef struct sched_shutdown sched_shutdown_t;
> +DEFINE_XEN_GUEST_HANDLE(sched_shutdown_t);
> +
> +struct sched_poll {
> +    XEN_GUEST_HANDLE(evtchn_port_t) ports;
> +    UINT32 nr_ports;
> +    UINT64 timeout;
> +};
> +typedef struct sched_poll sched_poll_t;
> +DEFINE_XEN_GUEST_HANDLE(sched_poll_t);
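
For completeness, a hedged sketch of blocking on one event channel via
SCHEDOP_poll, following the HYPERVISOR_sched_op prototype quoted above
(StorePort is hypothetical driver state):

    evtchn_port_t Port = StorePort;
    sched_poll_t  Poll;
    INTN          Ret;

    set_xen_guest_handle (Poll.ports, &Port);
    Poll.nr_ports = 1;
    Poll.timeout  = 0;      /* 0 means no timeout: wait indefinitely */
    Ret = HYPERVISOR_sched_op (SCHEDOP_poll, &Poll);
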
> +
> +struct sched_remote_shutdown {
> +    domid_t domain_id;         /* Remote domain ID */
> +    UINT32 reason;       /* SHUTDOWN_* => enum sched_shutdown_reason */
> +};
> +typedef struct sched_remote_shutdown sched_remote_shutdown_t;
> +DEFINE_XEN_GUEST_HANDLE(sched_remote_shutdown_t);
> +
> +struct sched_watchdog {
> +    UINT32 id;                /* watchdog ID */
> +    UINT32 timeout;           /* timeout */
> +};
> +typedef struct sched_watchdog sched_watchdog_t;
> +DEFINE_XEN_GUEST_HANDLE(sched_watchdog_t);
> +
> +/*
> + * Reason codes for SCHEDOP_shutdown. These may be interpreted by control
> + * software to determine the appropriate action. For the most part, Xen does
> + * not care about the shutdown code.
> + */
> +/* ` enum sched_shutdown_reason { */
> +#define SHUTDOWN_poweroff   0  /* Domain exited normally. Clean up and kill. */
> +#define SHUTDOWN_reboot     1  /* Clean up, kill, and then restart.          */
> +#define SHUTDOWN_suspend    2  /* Clean up, save suspend info, kill.         */
> +#define SHUTDOWN_crash      3  /* Tell controller we've crashed.             */
> +#define SHUTDOWN_watchdog   4  /* Restart because watchdog time expired.     */
> +#define SHUTDOWN_MAX        4  /* Maximum valid shutdown reason.             */
> +/* ` } */
> +
> +#endif /* __XEN_PUBLIC_SCHED_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/trace.h b/OvmfPkg/Include/IndustryStandard/Xen/trace.h
> new file mode 100644
> index 0000000..f7334ce
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/trace.h
> @@ -0,0 +1,310 @@
> +/******************************************************************************
> + * include/public/trace.h
> + * 
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Mark Williamson, (C) 2004 Intel Research Cambridge
> + * Copyright (C) 2005 Bin Ren
> + */
> +
> +#ifndef __XEN_PUBLIC_TRACE_H__
> +#define __XEN_PUBLIC_TRACE_H__
> +
> +#define TRACE_EXTRA_MAX    7
> +#define TRACE_EXTRA_SHIFT 28
> +
> +/* Trace classes */
> +#define TRC_CLS_SHIFT 16
> +#define TRC_GEN      0x0001f000    /* General trace            */
> +#define TRC_SCHED    0x0002f000    /* Xen Scheduler trace      */
> +#define TRC_DOM0OP   0x0004f000    /* Xen DOM0 operation trace */
> +#define TRC_HVM      0x0008f000    /* Xen HVM trace            */
> +#define TRC_MEM      0x0010f000    /* Xen memory trace         */
> +#define TRC_PV       0x0020f000    /* Xen PV traces            */
> +#define TRC_SHADOW   0x0040f000    /* Xen shadow tracing       */
> +#define TRC_HW       0x0080f000    /* Xen hardware-related traces */
> +#define TRC_GUEST    0x0800f000    /* Guest-generated traces   */
> +#define TRC_ALL      0x0ffff000
> +#define TRC_HD_TO_EVENT(x) ((x)&0x0fffffff)
> +#define TRC_HD_CYCLE_FLAG (1UL<<31)
> +#define TRC_HD_INCLUDES_CYCLE_COUNT(x) ( !!( (x) & TRC_HD_CYCLE_FLAG ) )
> +#define TRC_HD_EXTRA(x)    (((x)>>TRACE_EXTRA_SHIFT)&TRACE_EXTRA_MAX)
> +
> +/* Trace subclasses */
> +#define TRC_SUBCLS_SHIFT 12
> +
> +/* trace subclasses for SVM */
> +#define TRC_HVM_ENTRYEXIT 0x00081000   /* VMENTRY and #VMEXIT       */
> +#define TRC_HVM_HANDLER   0x00082000   /* various HVM handlers      */
> +
> +#define TRC_SCHED_MIN       0x00021000   /* Just runstate changes */
> +#define TRC_SCHED_CLASS     0x00022000   /* Scheduler-specific    */
> +#define TRC_SCHED_VERBOSE   0x00028000   /* More inclusive scheduling */
> +
> +/*
> + * The highest 3 bits of the last 12 bits of TRC_SCHED_CLASS above are
> + * reserved for encoding what scheduler produced the information. The
> + * actual event is encoded in the last 9 bits.
> + *
> + * This means we have 8 scheduling IDs available (which means at most 8
> + * schedulers generating events) and, in each scheduler, up to 512
> + * different events.
> + */
> +#define TRC_SCHED_ID_BITS 3
> +#define TRC_SCHED_ID_SHIFT (TRC_SUBCLS_SHIFT - TRC_SCHED_ID_BITS)
> +#define TRC_SCHED_ID_MASK (((1UL<<TRC_SCHED_ID_BITS) - 1) << TRC_SCHED_ID_SHIFT)
> +#define TRC_SCHED_EVT_MASK (~(TRC_SCHED_ID_MASK))
> +
> +/* Per-scheduler IDs, to identify scheduler specific events */
> +#define TRC_SCHED_CSCHED   0
> +#define TRC_SCHED_CSCHED2  1
> +#define TRC_SCHED_SEDF     2
> +#define TRC_SCHED_ARINC653 3
> +
> +/* Per-scheduler tracing */
> +#define TRC_SCHED_CLASS_EVT(_c, _e) \
> +  ( ( TRC_SCHED_CLASS | \
> +      ((TRC_SCHED_##_c << TRC_SCHED_ID_SHIFT) & TRC_SCHED_ID_MASK) ) + \
> +    (_e & TRC_SCHED_EVT_MASK) )
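To illustrate for other reviewers how these compose (a rough sketch; the
macro name and the event number 3 are mine, not from this header):

/* Hypothetical: event 3 of the credit scheduler. Expands to
 * TRC_SCHED_CLASS | (TRC_SCHED_CSCHED << TRC_SCHED_ID_SHIFT) | 3,
 * i.e. 0x00022003. */
#define TRC_CSCHED_EXAMPLE_EVT TRC_SCHED_CLASS_EVT(CSCHED, 3)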
> +
> +/* Trace classes for Hardware */
> +#define TRC_HW_PM           0x00801000   /* Power management traces */
> +#define TRC_HW_IRQ          0x00802000   /* Traces relating to the handling of IRQs */
> +
> +/* Trace events per class */
> +#define TRC_LOST_RECORDS        (TRC_GEN + 1)
> +#define TRC_TRACE_WRAP_BUFFER  (TRC_GEN + 2)
> +#define TRC_TRACE_CPU_CHANGE    (TRC_GEN + 3)
> +
> +#define TRC_SCHED_RUNSTATE_CHANGE   (TRC_SCHED_MIN + 1)
> +#define TRC_SCHED_CONTINUE_RUNNING  (TRC_SCHED_MIN + 2)
> +#define TRC_SCHED_DOM_ADD        (TRC_SCHED_VERBOSE +  1)
> +#define TRC_SCHED_DOM_REM        (TRC_SCHED_VERBOSE +  2)
> +#define TRC_SCHED_SLEEP          (TRC_SCHED_VERBOSE +  3)
> +#define TRC_SCHED_WAKE           (TRC_SCHED_VERBOSE +  4)
> +#define TRC_SCHED_YIELD          (TRC_SCHED_VERBOSE +  5)
> +#define TRC_SCHED_BLOCK          (TRC_SCHED_VERBOSE +  6)
> +#define TRC_SCHED_SHUTDOWN       (TRC_SCHED_VERBOSE +  7)
> +#define TRC_SCHED_CTL            (TRC_SCHED_VERBOSE +  8)
> +#define TRC_SCHED_ADJDOM         (TRC_SCHED_VERBOSE +  9)
> +#define TRC_SCHED_SWITCH         (TRC_SCHED_VERBOSE + 10)
> +#define TRC_SCHED_S_TIMER_FN     (TRC_SCHED_VERBOSE + 11)
> +#define TRC_SCHED_T_TIMER_FN     (TRC_SCHED_VERBOSE + 12)
> +#define TRC_SCHED_DOM_TIMER_FN   (TRC_SCHED_VERBOSE + 13)
> +#define TRC_SCHED_SWITCH_INFPREV (TRC_SCHED_VERBOSE + 14)
> +#define TRC_SCHED_SWITCH_INFNEXT (TRC_SCHED_VERBOSE + 15)
> +#define TRC_SCHED_SHUTDOWN_CODE  (TRC_SCHED_VERBOSE + 16)
> +
> +#define TRC_MEM_PAGE_GRANT_MAP      (TRC_MEM + 1)
> +#define TRC_MEM_PAGE_GRANT_UNMAP    (TRC_MEM + 2)
> +#define TRC_MEM_PAGE_GRANT_TRANSFER (TRC_MEM + 3)
> +#define TRC_MEM_SET_P2M_ENTRY       (TRC_MEM + 4)
> +#define TRC_MEM_DECREASE_RESERVATION (TRC_MEM + 5)
> +#define TRC_MEM_POD_POPULATE        (TRC_MEM + 16)
> +#define TRC_MEM_POD_ZERO_RECLAIM    (TRC_MEM + 17)
> +#define TRC_MEM_POD_SUPERPAGE_SPLINTER (TRC_MEM + 18)
> +
> +#define TRC_PV_ENTRY   0x00201000 /* Hypervisor entry points for PV guests. */
> +#define TRC_PV_SUBCALL 0x00202000 /* Sub-call in a multicall hypercall */
> +
> +#define TRC_PV_HYPERCALL             (TRC_PV_ENTRY +  1)
> +#define TRC_PV_TRAP                  (TRC_PV_ENTRY +  3)
> +#define TRC_PV_PAGE_FAULT            (TRC_PV_ENTRY +  4)
> +#define TRC_PV_FORCED_INVALID_OP     (TRC_PV_ENTRY +  5)
> +#define TRC_PV_EMULATE_PRIVOP        (TRC_PV_ENTRY +  6)
> +#define TRC_PV_EMULATE_4GB           (TRC_PV_ENTRY +  7)
> +#define TRC_PV_MATH_STATE_RESTORE    (TRC_PV_ENTRY +  8)
> +#define TRC_PV_PAGING_FIXUP          (TRC_PV_ENTRY +  9)
> +#define TRC_PV_GDT_LDT_MAPPING_FAULT (TRC_PV_ENTRY + 10)
> +#define TRC_PV_PTWR_EMULATION        (TRC_PV_ENTRY + 11)
> +#define TRC_PV_PTWR_EMULATION_PAE    (TRC_PV_ENTRY + 12)
> +#define TRC_PV_HYPERCALL_V2          (TRC_PV_ENTRY + 13)
> +#define TRC_PV_HYPERCALL_SUBCALL     (TRC_PV_SUBCALL + 14)
> +
> +/*
> + * TRC_PV_HYPERCALL_V2 format
> + *
> + * Only some of the hypercall arguments are recorded. Bit fields A0 to
> + * A5 in the first extra word are set if the argument is present and
> + * the arguments themselves are packed sequentially in the following
> + * words.
> + *
> + * The TRC_64_FLAG bit is not set for these events (even if there are
> + * 64-bit arguments in the record).
> + *
> + * Word
> + * 0    bit 31 30|29 28|27 26|25 24|23 22|21 20|19 ... 0
> + *          A5   |A4   |A3   |A2   |A1   |A0   |Hypercall op
> + * 1    First 32 bit (or low word of first 64 bit) arg in record
> + * 2    Second 32 bit (or high word of first 64 bit) arg in record
> + * ...
> + *
> + * A0-A5 bitfield values:
> + *
> + *   00b  Argument not present
> + *   01b  32-bit argument present
> + *   10b  64-bit argument present
> + *   11b  Reserved
> + */
> +#define TRC_PV_HYPERCALL_V2_ARG_32(i) (0x1 << (20 + 2*(i)))
> +#define TRC_PV_HYPERCALL_V2_ARG_64(i) (0x2 << (20 + 2*(i)))
> +#define TRC_PV_HYPERCALL_V2_ARG_MASK  (0xfff00000)
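For reference, decoding those bit fields from the first extra word could
look like this (an untested sketch of mine, not part of the patch):

/* Returns 0 if argument i (0..5) is absent, 1 if it is recorded as
 * 32-bit, 2 if it is recorded as 64-bit. */
static UINT32 hypercall_v2_arg_bits(UINT32 event, UINT32 i)
{
    return (event >> (20 + 2 * i)) & 0x3;
}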
> +
> +#define TRC_SHADOW_NOT_SHADOW                 (TRC_SHADOW +  1)
> +#define TRC_SHADOW_FAST_PROPAGATE             (TRC_SHADOW +  2)
> +#define TRC_SHADOW_FAST_MMIO                  (TRC_SHADOW +  3)
> +#define TRC_SHADOW_FALSE_FAST_PATH            (TRC_SHADOW +  4)
> +#define TRC_SHADOW_MMIO                       (TRC_SHADOW +  5)
> +#define TRC_SHADOW_FIXUP                      (TRC_SHADOW +  6)
> +#define TRC_SHADOW_DOMF_DYING                 (TRC_SHADOW +  7)
> +#define TRC_SHADOW_EMULATE                    (TRC_SHADOW +  8)
> +#define TRC_SHADOW_EMULATE_UNSHADOW_USER      (TRC_SHADOW +  9)
> +#define TRC_SHADOW_EMULATE_UNSHADOW_EVTINJ    (TRC_SHADOW + 10)
> +#define TRC_SHADOW_EMULATE_UNSHADOW_UNHANDLED (TRC_SHADOW + 11)
> +#define TRC_SHADOW_WRMAP_BF                   (TRC_SHADOW + 12)
> +#define TRC_SHADOW_PREALLOC_UNPIN             (TRC_SHADOW + 13)
> +#define TRC_SHADOW_RESYNC_FULL                (TRC_SHADOW + 14)
> +#define TRC_SHADOW_RESYNC_ONLY                (TRC_SHADOW + 15)
> +
> +/* trace events per subclass */
> +#define TRC_HVM_NESTEDFLAG      (0x400)
> +#define TRC_HVM_VMENTRY         (TRC_HVM_ENTRYEXIT + 0x01)
> +#define TRC_HVM_VMEXIT          (TRC_HVM_ENTRYEXIT + 0x02)
> +#define TRC_HVM_VMEXIT64        (TRC_HVM_ENTRYEXIT + TRC_64_FLAG + 0x02)
> +#define TRC_HVM_PF_XEN          (TRC_HVM_HANDLER + 0x01)
> +#define TRC_HVM_PF_XEN64        (TRC_HVM_HANDLER + TRC_64_FLAG + 0x01)
> +#define TRC_HVM_PF_INJECT       (TRC_HVM_HANDLER + 0x02)
> +#define TRC_HVM_PF_INJECT64     (TRC_HVM_HANDLER + TRC_64_FLAG + 0x02)
> +#define TRC_HVM_INJ_EXC         (TRC_HVM_HANDLER + 0x03)
> +#define TRC_HVM_INJ_VIRQ        (TRC_HVM_HANDLER + 0x04)
> +#define TRC_HVM_REINJ_VIRQ      (TRC_HVM_HANDLER + 0x05)
> +#define TRC_HVM_IO_READ         (TRC_HVM_HANDLER + 0x06)
> +#define TRC_HVM_IO_WRITE        (TRC_HVM_HANDLER + 0x07)
> +#define TRC_HVM_CR_READ         (TRC_HVM_HANDLER + 0x08)
> +#define TRC_HVM_CR_READ64       (TRC_HVM_HANDLER + TRC_64_FLAG + 0x08)
> +#define TRC_HVM_CR_WRITE        (TRC_HVM_HANDLER + 0x09)
> +#define TRC_HVM_CR_WRITE64      (TRC_HVM_HANDLER + TRC_64_FLAG + 0x09)
> +#define TRC_HVM_DR_READ         (TRC_HVM_HANDLER + 0x0A)
> +#define TRC_HVM_DR_WRITE        (TRC_HVM_HANDLER + 0x0B)
> +#define TRC_HVM_MSR_READ        (TRC_HVM_HANDLER + 0x0C)
> +#define TRC_HVM_MSR_WRITE       (TRC_HVM_HANDLER + 0x0D)
> +#define TRC_HVM_CPUID           (TRC_HVM_HANDLER + 0x0E)
> +#define TRC_HVM_INTR            (TRC_HVM_HANDLER + 0x0F)
> +#define TRC_HVM_NMI             (TRC_HVM_HANDLER + 0x10)
> +#define TRC_HVM_SMI             (TRC_HVM_HANDLER + 0x11)
> +#define TRC_HVM_VMMCALL         (TRC_HVM_HANDLER + 0x12)
> +#define TRC_HVM_HLT             (TRC_HVM_HANDLER + 0x13)
> +#define TRC_HVM_INVLPG          (TRC_HVM_HANDLER + 0x14)
> +#define TRC_HVM_INVLPG64        (TRC_HVM_HANDLER + TRC_64_FLAG + 0x14)
> +#define TRC_HVM_MCE             (TRC_HVM_HANDLER + 0x15)
> +#define TRC_HVM_IOPORT_READ     (TRC_HVM_HANDLER + 0x16)
> +#define TRC_HVM_IOMEM_READ      (TRC_HVM_HANDLER + 0x17)
> +#define TRC_HVM_CLTS            (TRC_HVM_HANDLER + 0x18)
> +#define TRC_HVM_LMSW            (TRC_HVM_HANDLER + 0x19)
> +#define TRC_HVM_LMSW64          (TRC_HVM_HANDLER + TRC_64_FLAG + 0x19)
> +#define TRC_HVM_RDTSC           (TRC_HVM_HANDLER + 0x1a)
> +#define TRC_HVM_INTR_WINDOW     (TRC_HVM_HANDLER + 0x20)
> +#define TRC_HVM_NPF             (TRC_HVM_HANDLER + 0x21)
> +#define TRC_HVM_REALMODE_EMULATE (TRC_HVM_HANDLER + 0x22)
> +#define TRC_HVM_TRAP             (TRC_HVM_HANDLER + 0x23)
> +#define TRC_HVM_TRAP_DEBUG       (TRC_HVM_HANDLER + 0x24)
> +#define TRC_HVM_VLAPIC           (TRC_HVM_HANDLER + 0x25)
> +
> +#define TRC_HVM_IOPORT_WRITE    (TRC_HVM_HANDLER + 0x216)
> +#define TRC_HVM_IOMEM_WRITE     (TRC_HVM_HANDLER + 0x217)
> +
> +/* trace events per class */
> +#define TRC_PM_FREQ_CHANGE      (TRC_HW_PM + 0x01)
> +#define TRC_PM_IDLE_ENTRY       (TRC_HW_PM + 0x02)
> +#define TRC_PM_IDLE_EXIT        (TRC_HW_PM + 0x03)
> +
> +/* Trace events for IRQs */
> +#define TRC_HW_IRQ_MOVE_CLEANUP_DELAY (TRC_HW_IRQ + 0x1)
> +#define TRC_HW_IRQ_MOVE_CLEANUP       (TRC_HW_IRQ + 0x2)
> +#define TRC_HW_IRQ_BIND_VECTOR        (TRC_HW_IRQ + 0x3)
> +#define TRC_HW_IRQ_CLEAR_VECTOR       (TRC_HW_IRQ + 0x4)
> +#define TRC_HW_IRQ_MOVE_FINISH        (TRC_HW_IRQ + 0x5)
> +#define TRC_HW_IRQ_ASSIGN_VECTOR      (TRC_HW_IRQ + 0x6)
> +#define TRC_HW_IRQ_UNMAPPED_VECTOR    (TRC_HW_IRQ + 0x7)
> +#define TRC_HW_IRQ_HANDLED            (TRC_HW_IRQ + 0x8)
> +
> +/*
> + * Event Flags
> + *
> + * Some events (e.g., TRC_PV_TRAP and TRC_HVM_IOMEM_READ) have multiple
> + * record formats.  These event flags distinguish between the
> + * different formats.
> + */
> +#define TRC_64_FLAG 0x100 /* Addresses are 64 bits (instead of 32 bits) */
> +
> +/* This structure represents a single trace buffer record. */
> +struct t_rec {
> +    UINT32 event:28;
> +    UINT32 extra_u32:3;         /* # entries in trailing extra_u32[] array */
> +    UINT32 cycles_included:1;   /* u.cycles or u.no_cycles? */
> +    union {
> +        struct {
> +            UINT32 cycles_lo, cycles_hi; /* cycle counter timestamp */
> +            UINT32 extra_u32[7];         /* event data items */
> +        } cycles;
> +        struct {
> +            UINT32 extra_u32[7];         /* event data items */
> +        } nocycles;
> +    } u;
> +};
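Since the record is variable-length, a consumer has to derive its size
from the header word, along these lines (my sketch):

/* Bytes occupied by a t_rec: 4-byte header word, an optional 8-byte
 * cycle stamp, then extra_u32 payload words of 4 bytes each. */
static UINT32 t_rec_size(const struct t_rec *rec)
{
    return 4 + (rec->cycles_included ? 8 : 0) + rec->extra_u32 * 4;
}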
> +
> +/*
> + * This structure contains the metadata for a single trace buffer.  The cons
> + * and prod fields index into an array of struct t_rec's.
> + */
> +struct t_buf {
> +    /* Assume the data buffer size is X.  X is generally not a power of 2.
> +     * CONS and PROD are incremented modulo (2*X):
> +     *     0 <= cons < 2*X
> +     *     0 <= prod < 2*X
> +     * This is done because addition modulo X breaks at 2^32 when X is not a
> +     * power of 2:
> +     *     (((2^32 - 1) % X) + 1) % X != (2^32) % X
> +     */
> +    UINT32 cons;   /* Offset of next item to be consumed by control tools. */
> +    UINT32 prod;   /* Offset of next item to be produced by Xen.           */
> +    /*  Records follow immediately after the meta-data header.    */
> +};
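The modulo-2*X scheme means a consumer folds the free-running counters
back into the buffer itself; the arithmetic is roughly this, with
data_size standing in for X (my names, not from the header):

/* Turn a cons/prod counter in [0, 2*X) into a byte offset in [0, X). */
static UINT32 t_buf_offset(UINT32 counter, UINT32 data_size)
{
    return (counter >= data_size) ? (counter - data_size) : counter;
}

/* Bytes produced but not yet consumed, i.e. (prod - cons) mod 2*X. */
static UINT32 t_buf_unconsumed(UINT32 prod, UINT32 cons, UINT32 data_size)
{
    return (prod >= cons) ? (prod - cons) : (2 * data_size - cons + prod);
}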
> +
> +/* Structure used to pass MFNs to the trace buffers back to trace consumers.
> + * Offset is an offset into the mapped structure where the mfn list will be held.
> + * MFNs will be at ((UINTN *)(t_info))+(t_info->mfn_offset[cpu]).
> + */
> +struct t_info {
> +    UINT16 tbuf_size; /* Size in pages of each trace buffer */
> +    UINT16 mfn_offset[];  /* Offset within t_info structure of the page list per cpu */
> +    /* MFN lists immediately after the header */
> +};
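Taking the comment above at face value, the per-CPU lookup would be
(my sketch, assuming the offsets really are in UINTN-sized words as the
comment states):

/* Locate the MFN list for 'cpu'. */
static UINTN *t_info_mfn_list(struct t_info *t_info, UINT32 cpu)
{
    return (UINTN *)t_info + t_info->mfn_offset[cpu];
}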
> +
> +#endif /* __XEN_PUBLIC_TRACE_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/xen-compat.h b/OvmfPkg/Include/IndustryStandard/Xen/xen-compat.h
> new file mode 100644
> index 0000000..3eb80a0
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/xen-compat.h
> @@ -0,0 +1,44 @@
> +/******************************************************************************
> + * xen-compat.h
> + * 
> + * Guest OS interface to Xen.  Compatibility layer.
> + * 
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (c) 2006, Christian Limpach
> + */
> +
> +#ifndef __XEN_PUBLIC_XEN_COMPAT_H__
> +#define __XEN_PUBLIC_XEN_COMPAT_H__
> +
> +#define __XEN_LATEST_INTERFACE_VERSION__ 0x00040400
> +
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +/* Xen is built with matching headers and implements the latest interface. */
> +#define __XEN_INTERFACE_VERSION__ __XEN_LATEST_INTERFACE_VERSION__
> +#elif !defined(__XEN_INTERFACE_VERSION__)
> +/* Guests which do not specify a version get the legacy interface. */
> +#define __XEN_INTERFACE_VERSION__ 0x00000000
> +#endif
> +
> +#if __XEN_INTERFACE_VERSION__ > __XEN_LATEST_INTERFACE_VERSION__
> +#error "These header files do not support the requested interface version."
> +#endif
> +
> +#endif /* __XEN_PUBLIC_XEN_COMPAT_H__ */
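For anyone wondering how this is consumed: a guest pins the ABI it was
written against by defining the version before including any Xen
header, e.g. (my sketch):

/* Request the 4.4 interface; without this define, the #elif above
 * selects the legacy (0x00000000) interface. */
#define __XEN_INTERFACE_VERSION__ 0x00040400
#include "xen.h"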
> diff --git a/OvmfPkg/Include/IndustryStandard/Xen/xen.h b/OvmfPkg/Include/IndustryStandard/Xen/xen.h
> new file mode 100644
> index 0000000..e19f30c
> --- /dev/null
> +++ b/OvmfPkg/Include/IndustryStandard/Xen/xen.h
> @@ -0,0 +1,897 @@
> +/******************************************************************************
> + * xen.h
> + * 
> + * Guest OS interface to Xen.
> + * 
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (c) 2004, K A Fraser
> + */
> +
> +#ifndef __XEN_PUBLIC_XEN_H__
> +#define __XEN_PUBLIC_XEN_H__
> +
> +#include "xen-compat.h"
> +
> +#if defined(__i386__) || defined(__x86_64__)
> +#include "arch-x86/xen.h"
> +#elif defined(__arm__) || defined (__aarch64__)
> +#include "arch-arm.h"
> +#else
> +#error "Unsupported architecture"
> +#endif
> +
> +#ifndef __ASSEMBLY__
> +/* Guest handles for primitive C types. */
> +DEFINE_XEN_GUEST_HANDLE(CHAR8);
> +/* __DEFINE_XEN_GUEST_HANDLE(uchar, unsigned char); */
> +DEFINE_XEN_GUEST_HANDLE(INT32);
> +__DEFINE_XEN_GUEST_HANDLE(uint,  UINT32);
> +#if __XEN_INTERFACE_VERSION__ < 0x00040300
> +DEFINE_XEN_GUEST_HANDLE(INTN);
> +__DEFINE_XEN_GUEST_HANDLE(ulong, UINTN);
> +#endif
> +DEFINE_XEN_GUEST_HANDLE(VOID);
> +
> +DEFINE_XEN_GUEST_HANDLE(UINT64);
> +DEFINE_XEN_GUEST_HANDLE(xen_pfn_t);
> +DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
> +#endif
> +
> +/*
> + * HYPERCALLS
> + */
> +
> +/* `incontents 100 hcalls List of hypercalls
> + * ` enum hypercall_num { // __HYPERVISOR_* => HYPERVISOR_*()
> + */
> +
> +#define __HYPERVISOR_set_trap_table        0
> +#define __HYPERVISOR_mmu_update            1
> +#define __HYPERVISOR_set_gdt               2
> +#define __HYPERVISOR_stack_switch          3
> +#define __HYPERVISOR_set_callbacks         4
> +#define __HYPERVISOR_fpu_taskswitch        5
> +#define __HYPERVISOR_sched_op_compat       6 /* compat since 0x00030101 */
> +#define __HYPERVISOR_platform_op           7
> +#define __HYPERVISOR_set_debugreg          8
> +#define __HYPERVISOR_get_debugreg          9
> +#define __HYPERVISOR_update_descriptor    10
> +#define __HYPERVISOR_memory_op            12
> +#define __HYPERVISOR_multicall            13
> +#define __HYPERVISOR_update_va_mapping    14
> +#define __HYPERVISOR_set_timer_op         15
> +#define __HYPERVISOR_event_channel_op_compat 16 /* compat since 0x00030202 */
> +#define __HYPERVISOR_xen_version          17
> +#define __HYPERVISOR_console_io           18
> +#define __HYPERVISOR_physdev_op_compat    19 /* compat since 0x00030202 */
> +#define __HYPERVISOR_grant_table_op       20
> +#define __HYPERVISOR_vm_assist            21
> +#define __HYPERVISOR_update_va_mapping_otherdomain 22
> +#define __HYPERVISOR_iret                 23 /* x86 only */
> +#define __HYPERVISOR_vcpu_op              24
> +#define __HYPERVISOR_set_segment_base     25 /* x86/64 only */
> +#define __HYPERVISOR_mmuext_op            26
> +#define __HYPERVISOR_xsm_op               27
> +#define __HYPERVISOR_nmi_op               28
> +#define __HYPERVISOR_sched_op             29
> +#define __HYPERVISOR_callback_op          30
> +#define __HYPERVISOR_xenoprof_op          31
> +#define __HYPERVISOR_event_channel_op     32
> +#define __HYPERVISOR_physdev_op           33
> +#define __HYPERVISOR_hvm_op               34
> +#define __HYPERVISOR_sysctl               35
> +#define __HYPERVISOR_domctl               36
> +#define __HYPERVISOR_kexec_op             37
> +#define __HYPERVISOR_tmem_op              38
> +#define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
> +
> +/* Architecture-specific hypercall definitions. */
> +#define __HYPERVISOR_arch_0               48
> +#define __HYPERVISOR_arch_1               49
> +#define __HYPERVISOR_arch_2               50
> +#define __HYPERVISOR_arch_3               51
> +#define __HYPERVISOR_arch_4               52
> +#define __HYPERVISOR_arch_5               53
> +#define __HYPERVISOR_arch_6               54
> +#define __HYPERVISOR_arch_7               55
> +
> +/* ` } */
> +
> +/*
> + * HYPERCALL COMPATIBILITY.
> + */
> +
> +/* New sched_op hypercall introduced in 0x00030101. */
> +#if __XEN_INTERFACE_VERSION__ < 0x00030101
> +#undef __HYPERVISOR_sched_op
> +#define __HYPERVISOR_sched_op __HYPERVISOR_sched_op_compat
> +#endif
> +
> +/* New event-channel and physdev hypercalls introduced in 0x00030202. */
> +#if __XEN_INTERFACE_VERSION__ < 0x00030202
> +#undef __HYPERVISOR_event_channel_op
> +#define __HYPERVISOR_event_channel_op __HYPERVISOR_event_channel_op_compat
> +#undef __HYPERVISOR_physdev_op
> +#define __HYPERVISOR_physdev_op __HYPERVISOR_physdev_op_compat
> +#endif
> +
> +/* New platform_op hypercall introduced in 0x00030204. */
> +#if __XEN_INTERFACE_VERSION__ < 0x00030204
> +#define __HYPERVISOR_dom0_op __HYPERVISOR_platform_op
> +#endif
> +
> +/* 
> + * VIRTUAL INTERRUPTS
> + * 
> + * Virtual interrupts that a guest OS may receive from Xen.
> + * 
> + * In the side comments, 'V.' denotes a per-VCPU VIRQ while 'G.' denotes a
> + * global VIRQ. The former can be bound once per VCPU and cannot be re-bound.
> + * The latter can be allocated only once per guest: they must initially be
> + * allocated to VCPU0 but can subsequently be re-bound.
> + */
> +/* ` enum virq { */
> +#define VIRQ_TIMER      0  /* V. Timebase update, and/or requested timeout.  */
> +#define VIRQ_DEBUG      1  /* V. Request guest to dump debug info.           */
> +#define VIRQ_CONSOLE    2  /* G. (DOM0) Bytes received on emergency console. */
> +#define VIRQ_DOM_EXC    3  /* G. (DOM0) Exceptional event for some domain.   */
> +#define VIRQ_TBUF       4  /* G. (DOM0) Trace buffer has records available.  */
> +#define VIRQ_DEBUGGER   6  /* G. (DOM0) A domain has paused for debugging.   */
> +#define VIRQ_XENOPROF   7  /* V. XenOprofile interrupt: new sample available */
> +#define VIRQ_CON_RING   8  /* G. (DOM0) Bytes received on console            */
> +#define VIRQ_PCPU_STATE 9  /* G. (DOM0) PCPU state changed                   */
> +#define VIRQ_MEM_EVENT  10 /* G. (DOM0) A memory event has occurred          */
> +#define VIRQ_XC_RESERVED 11 /* G. Reserved for XenClient                     */
> +#define VIRQ_ENOMEM     12 /* G. (DOM0) Low on heap memory       */
> +
> +/* Architecture-specific VIRQ definitions. */
> +#define VIRQ_ARCH_0    16
> +#define VIRQ_ARCH_1    17
> +#define VIRQ_ARCH_2    18
> +#define VIRQ_ARCH_3    19
> +#define VIRQ_ARCH_4    20
> +#define VIRQ_ARCH_5    21
> +#define VIRQ_ARCH_6    22
> +#define VIRQ_ARCH_7    23
> +/* ` } */
> +
> +#define NR_VIRQS       24
> +
> +/*
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_mmu_update(const struct mmu_update reqs[],
> + * `                       unsigned count, unsigned *done_out,
> + * `                       unsigned foreigndom)
> + * `
> + * @reqs is an array of mmu_update_t structures ((ptr, val) pairs).
> + * @count is the length of the above array.
> + * @pdone is an output parameter indicating number of completed operations
> + * @foreigndom[15:0]: FD, the expected owner of data pages referenced in this
> + *                    hypercall invocation. Can be DOMID_SELF.
> + * @foreigndom[31:16]: PFD, the expected owner of pagetable pages referenced
> + *                     in this hypercall invocation. The value of this field
> + *                     (x) encodes the PFD as follows:
> + *                     x == 0 => PFD == DOMID_SELF
> + *                     x != 0 => PFD == x - 1
> + * 
> + * Sub-commands: ptr[1:0] specifies the appropriate MMU_* command.
> + * -------------
> + * ptr[1:0] == MMU_NORMAL_PT_UPDATE:
> + * Updates an entry in a page table belonging to PFD. If updating an L1 table,
> + * and the new table entry is valid/present, the mapped frame must belong to
> + * FD. If attempting to map an I/O page then the caller assumes the privilege
> + * of the FD.
> + * FD == DOMID_IO: Permit /only/ I/O mappings, at the priv level of the caller.
> + * FD == DOMID_XEN: Map restricted areas of Xen's heap space.
> + * ptr[:2]  -- Machine address of the page-table entry to modify.
> + * val      -- Value to write.
> + *
> + * There are also certain implicit requirements when using this hypercall. The
> + * pages that make up a pagetable must be mapped read-only in the guest.
> + * This prevents uncontrolled guest updates to the pagetable. Xen strictly
> + * enforces this, and will disallow any pagetable update which would end up
> + * mapping a pagetable page RW, and will disallow using any writable page as a
> + * pagetable. In practice it means that when constructing a page table for a
> + * process, thread, etc., we MUST be very diligent in following these rules:
> + *  1). Start with the top-level page (PGD or, in Xen language, L4). Fill out
> + *      the entries.
> + *  2). Keep on going, filling out the upper (PUD or L3), and middle (PMD
> + *      or L2).
> + *  3). Start filling out the PTE table (L1) with the PTE entries. Once
> + *      done, make sure to set each of those entries to RO (so the writeable
> + *      bit is unset). Once that has been completed, set the PMD (L2) for this
> + *      PTE table as RO.
> + *  4). When completed with all of the PMD (L2) entries, and all of them have
> + *      been set to RO, make sure to set RO the PUD (L3). Do the same
> + *      operation on PGD (L4) pagetable entries that have a PUD (L3) entry.
> + *  5). Now, before you can use those pages (i.e., set cr3 to them), you MUST
> + *      also pin them so that the hypervisor can verify the entries. This is
> + *      done via HYPERVISOR_mmuext_op(MMUEXT_PIN_L4_TABLE, guest physical frame
> + *      number of the PGD (L4)). At this point HYPERVISOR_mmuext_op(
> + *      MMUEXT_NEW_BASEPTR, guest physical frame number of the PGD (L4)) can be
> + *      issued.
> + * For 32-bit guests, the L4 is not used (as there are fewer pagetable
> + * levels), so instead use L3.
> + * At this point the pagetables can be modified using the MMU_NORMAL_PT_UPDATE
> + * hypercall. Also, if so desired, the OS can try to write to a PTE
> + * and be trapped by the hypervisor (as the PTE entry is RO).
> + *
> + * To deallocate the pages, the operations are the reverse of the steps
> + * mentioned above. The argument is MMUEXT_UNPIN_TABLE for all levels and the
> + * pagetable MUST not be in use (meaning that cr3 is not set to it).
> + * 
> + * ptr[1:0] == MMU_MACHPHYS_UPDATE:
> + * Updates an entry in the machine->pseudo-physical mapping table.
> + * ptr[:2]  -- Machine address within the frame whose mapping to modify.
> + *             The frame must belong to the FD, if one is specified.
> + * val      -- Value to write into the mapping entry.
> + * 
> + * ptr[1:0] == MMU_PT_UPDATE_PRESERVE_AD:
> + * As MMU_NORMAL_PT_UPDATE above, but A/D bits currently in the PTE are ORed
> + * with those in @val.
> + *
> + * @val is usually the machine frame number along with some attributes.
> + * The attributes by default follow the architecture-defined bits. Meaning that
> + * if this is an X86_64 machine and a four-level page table layout is used, the
> + * layout of val is:
> + *  - 63 if set means No execute (NX)
> + *  - 46-13 the machine frame number
> + *  - 12 available for guest
> + *  - 11 available for guest
> + *  - 10 available for guest
> + *  - 9 available for guest
> + *  - 8 global
> + *  - 7 PAT (PSE is disabled, must use hypercall to make 4MB or 2MB pages)
> + *  - 6 dirty
> + *  - 5 accessed
> + *  - 4 page cached disabled
> + *  - 3 page write through
> + *  - 2 userspace accessible
> + *  - 1 writeable
> + *  - 0 present
> + *
> + *  The one bit that does not fit the default layout is PAGE_PSE (also
> + *  called PAGE_PAT). The MMUEXT_[UN]MARK_SUPER arguments to
> + *  HYPERVISOR_mmuext_op serve as the mechanism to set a pagetable to be 4MB
> + *  (or 2MB) instead of using the PAGE_PSE bit.
> + *
> + *  The reason that the PAGE_PSE (bit 7) is not being utilized is due to Xen
> + *  using it as the Page Attribute Table (PAT) bit - for details on it please
> + *  refer to Intel SDM 10.12. The PAT allows setting the caching attributes of
> + *  pages instead of using MTRRs.
> + *
> + *  The PAT MSR is as follows (it is a 64-bit value, each entry is 8 bits):
> + *                    PAT4                 PAT0
> + *  +-----+-----+----+----+----+-----+----+----+
> + *  | UC  | UC- | WC | WB | UC | UC- | WC | WB |  <= Linux
> + *  +-----+-----+----+----+----+-----+----+----+
> + *  | UC  | UC- | WT | WB | UC | UC- | WT | WB |  <= BIOS (default when machine boots)
> + *  +-----+-----+----+----+----+-----+----+----+
> + *  | rsv | rsv | WP | WC | UC | UC- | WT | WB |  <= Xen
> + *  +-----+-----+----+----+----+-----+----+----+
> + *
> + *  The lookup of this index table translates to looking up
> + *  Bit 7, Bit 4, and Bit 3 of the val entry:
> + *
> + *  PAT/PSE (bit 7) ... PCD (bit 4) .. PWT (bit 3).
> + *
> + *  If all bits are off, then we are using PAT0. If bit 3 is turned on,
> + *  then we are using PAT1; if bit 3 and bit 4 are on, then PAT3; and so on.
> + *
> + *  As you can see, Linux's PAT1 translates to PAT4 under Xen. This means that
> + *  a guest that follows Linux's PAT setup and would like to set Write-Combined
> + *  on pages MUST use the PAT4 entry, meaning that Bit 7 (PAGE_PAT) is set.
> + *  For example, Linux only uses PAT0, PAT1, and PAT3 for caching, as:
> + *
> + *   WB = none (so PAT0)
> + *   WC = PWT (bit 3 on)
> + *   UC = PWT | PCD (bits 3 and 4 are on).
> + *
> + * To make it work with Xen, it needs to translate the WC bit as follows:
> + *
> + *  PWT (so bit 3 on) --> PAT (so bit 7 is on) and clear bit 3
> + *
> + * And to translate back:
> + *
> + * PAT (bit 7 on) --> PWT (bit 3 on) and clear bit 7.
> + */
> +#define MMU_NORMAL_PT_UPDATE      0 /* checked '*ptr = val'. ptr is MA.      */
> +#define MMU_MACHPHYS_UPDATE       1 /* ptr = MA of frame to modify entry for */
> +#define MMU_PT_UPDATE_PRESERVE_AD 2 /* atomically: *ptr = val | (*ptr&(A|D)) */
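Since the sub-command travels in ptr[1:0], building a request looks
roughly like this (my sketch; struct mmu_update is defined further down
in this header, and the hypercall stub itself is not part of this patch):

static void build_mmu_update(struct mmu_update *req,
                             UINT64 pte_machine_addr, UINT64 new_val)
{
    /* The PTE's machine address is aligned, so its low two bits are
     * free to carry the sub-command. */
    req->ptr = (pte_machine_addr & ~3ULL) | MMU_PT_UPDATE_PRESERVE_AD;
    req->val = new_val;
}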
> +
> +/*
> + * MMU EXTENDED OPERATIONS
> + *
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_mmuext_op(mmuext_op_t uops[],
> + * `                      UINT32 count,
> + * `                      UINT32 *pdone,
> + * `                      UINT32 foreigndom)
> + */
> +/* HYPERVISOR_mmuext_op() accepts a list of mmuext_op structures.
> + * A foreigndom (FD) can be specified (or DOMID_SELF for none).
> + * Where the FD has some effect, it is described below.
> + *
> + * cmd: MMUEXT_(UN)PIN_*_TABLE
> + * mfn: Machine frame number to be (un)pinned as a p.t. page.
> + *      The frame must belong to the FD, if one is specified.
> + *
> + * cmd: MMUEXT_NEW_BASEPTR
> + * mfn: Machine frame number of new page-table base to install in MMU.
> + *
> + * cmd: MMUEXT_NEW_USER_BASEPTR [x86/64 only]
> + * mfn: Machine frame number of new page-table base to install in MMU
> + *      when in user space.
> + *
> + * cmd: MMUEXT_TLB_FLUSH_LOCAL
> + * No additional arguments. Flushes local TLB.
> + *
> + * cmd: MMUEXT_INVLPG_LOCAL
> + * linear_addr: Linear address to be flushed from the local TLB.
> + *
> + * cmd: MMUEXT_TLB_FLUSH_MULTI
> + * vcpumask: Pointer to bitmap of VCPUs to be flushed.
> + *
> + * cmd: MMUEXT_INVLPG_MULTI
> + * linear_addr: Linear address to be flushed.
> + * vcpumask: Pointer to bitmap of VCPUs to be flushed.
> + *
> + * cmd: MMUEXT_TLB_FLUSH_ALL
> + * No additional arguments. Flushes all VCPUs' TLBs.
> + *
> + * cmd: MMUEXT_INVLPG_ALL
> + * linear_addr: Linear address to be flushed from all VCPUs' TLBs.
> + *
> + * cmd: MMUEXT_FLUSH_CACHE
> + * No additional arguments. Writes back and flushes cache contents.
> + *
> + * cmd: MMUEXT_FLUSH_CACHE_GLOBAL
> + * No additional arguments. Writes back and flushes cache contents
> + * on all CPUs in the system.
> + *
> + * cmd: MMUEXT_SET_LDT
> + * linear_addr: Linear address of LDT base (NB. must be page-aligned).
> + * nr_ents: Number of entries in LDT.
> + *
> + * cmd: MMUEXT_CLEAR_PAGE
> + * mfn: Machine frame number to be cleared.
> + *
> + * cmd: MMUEXT_COPY_PAGE
> + * mfn: Machine frame number of the destination page.
> + * src_mfn: Machine frame number of the source page.
> + *
> + * cmd: MMUEXT_[UN]MARK_SUPER
> + * mfn: Machine frame number of head of superpage to be [un]marked.
> + */
> +/* ` enum mmuext_cmd { */
> +#define MMUEXT_PIN_L1_TABLE      0
> +#define MMUEXT_PIN_L2_TABLE      1
> +#define MMUEXT_PIN_L3_TABLE      2
> +#define MMUEXT_PIN_L4_TABLE      3
> +#define MMUEXT_UNPIN_TABLE       4
> +#define MMUEXT_NEW_BASEPTR       5
> +#define MMUEXT_TLB_FLUSH_LOCAL   6
> +#define MMUEXT_INVLPG_LOCAL      7
> +#define MMUEXT_TLB_FLUSH_MULTI   8
> +#define MMUEXT_INVLPG_MULTI      9
> +#define MMUEXT_TLB_FLUSH_ALL    10
> +#define MMUEXT_INVLPG_ALL       11
> +#define MMUEXT_FLUSH_CACHE      12
> +#define MMUEXT_SET_LDT          13
> +#define MMUEXT_NEW_USER_BASEPTR 15
> +#define MMUEXT_CLEAR_PAGE       16
> +#define MMUEXT_COPY_PAGE        17
> +#define MMUEXT_FLUSH_CACHE_GLOBAL 18
> +#define MMUEXT_MARK_SUPER       19
> +#define MMUEXT_UNMARK_SUPER     20
> +/* ` } */
> +
> +#ifndef __ASSEMBLY__
> +struct mmuext_op {
> +    UINT32 cmd; /* => enum mmuext_cmd */
> +    union {
> +        /* [UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR
> +         * CLEAR_PAGE, COPY_PAGE, [UN]MARK_SUPER */
> +        xen_pfn_t     mfn;
> +        /* INVLPG_LOCAL, INVLPG_ALL, SET_LDT */
> +        UINTN linear_addr;
> +    } arg1;
> +    union {
> +        /* SET_LDT */
> +        UINT32 nr_ents;
> +        /* TLB_FLUSH_MULTI, INVLPG_MULTI */
> +#if __XEN_INTERFACE_VERSION__ >= 0x00030205
> +        XEN_GUEST_HANDLE(const_void) vcpumask;
> +#else
> +        const VOID *vcpumask;
> +#endif
> +        /* COPY_PAGE */
> +        xen_pfn_t src_mfn;
> +    } arg2;
> +};
> +typedef struct mmuext_op mmuext_op_t;
> +DEFINE_XEN_GUEST_HANDLE(mmuext_op_t);
> +#endif
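As a concrete example of step 5) in the big comment earlier, pinning an
L4 table would fill the op in like so (again my sketch):

static void build_pin_l4(struct mmuext_op *op, xen_pfn_t l4_mfn)
{
    op->cmd      = MMUEXT_PIN_L4_TABLE;
    op->arg1.mfn = l4_mfn;    /* arg2 is unused for this command */
}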
> +
> +/*
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_update_va_mapping(UINTN va, u64 val,
> + * `                              enum uvm_flags flags)
> + * `
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_update_va_mapping_otherdomain(UINTN va, u64 val,
> + * `                                          enum uvm_flags flags,
> + * `                                          domid_t domid)
> + * `
> + * ` @va: The virtual address whose mapping we want to change
> + * ` @val: The new page table entry, must contain a machine address
> + * ` @flags: Control TLB flushes
> + */
> +/* These are passed as 'flags' to update_va_mapping. They can be ORed. */
> +/* When specifying UVMF_MULTI, also OR in a pointer to a CPU bitmap.   */
> +/* UVMF_LOCAL is merely UVMF_MULTI with a NULL bitmap pointer.         */
> +/* ` enum uvm_flags { */
> +#define UVMF_NONE               (0UL<<0) /* No flushing at all.   */
> +#define UVMF_TLB_FLUSH          (1UL<<0) /* Flush entire TLB(s).  */
> +#define UVMF_INVLPG             (2UL<<0) /* Flush only one entry. */
> +#define UVMF_FLUSHTYPE_MASK     (3UL<<0)
> +#define UVMF_MULTI              (0UL<<2) /* Flush subset of TLBs. */
> +#define UVMF_LOCAL              (0UL<<2) /* Flush local TLB.      */
> +#define UVMF_ALL                (1UL<<2) /* Flush all TLBs.       */
> +/* ` } */
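So a typical flags value ORs one flush type with one scope, e.g.
(my sketch):

/* Flush just the updated VA, and only from the local TLB. */
UINTN flags = UVMF_INVLPG | UVMF_LOCAL;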
> +
> +/*
> + * Commands to HYPERVISOR_console_io().
> + */
> +#define CONSOLEIO_write         0
> +#define CONSOLEIO_read          1
> +
> +/*
> + * Commands to HYPERVISOR_vm_assist().
> + */
> +#define VMASST_CMD_enable                0
> +#define VMASST_CMD_disable               1
> +
> +/* x86/32 guests: simulate full 4GB segment limits. */
> +#define VMASST_TYPE_4gb_segments         0
> +
> +/* x86/32 guests: trap (vector 15) whenever above vmassist is used. */
> +#define VMASST_TYPE_4gb_segments_notify  1
> +
> +/*
> + * x86 guests: support writes to bottom-level PTEs.
> + * NB1. Page-directory entries cannot be written.
> + * NB2. Guest must continue to remove all writable mappings of PTEs.
> + */
> +#define VMASST_TYPE_writable_pagetables  2
> +
> +/* x86/PAE guests: support PDPTs above 4GB. */
> +#define VMASST_TYPE_pae_extended_cr3     3
> +
> +#define MAX_VMASST_TYPE                  3
> +
> +#ifndef __ASSEMBLY__
> +
> +typedef UINT16 domid_t;
> +
> +/* Domain ids >= DOMID_FIRST_RESERVED cannot be used for ordinary domains. */
> +#define DOMID_FIRST_RESERVED (0x7FF0U)
> +
> +/* DOMID_SELF is used in certain contexts to refer to oneself. */
> +#define DOMID_SELF (0x7FF0U)
> +
> +/*
> + * DOMID_IO is used to restrict page-table updates to mapping I/O memory.
> + * Although no Foreign Domain need be specified to map I/O pages, DOMID_IO
> + * is useful to ensure that no mappings to the OS's own heap are accidentally
> + * installed. (e.g., in Linux this could cause havoc as reference counts
> + * aren't adjusted on the I/O-mapping code path).
> + * This only makes sense in MMUEXT_SET_FOREIGNDOM, but in that context can
> + * be specified by any calling domain.
> + */
> +#define DOMID_IO   (0x7FF1U)
> +
> +/*
> + * DOMID_XEN is used to allow privileged domains to map restricted parts of
> + * Xen's heap space (e.g., the machine_to_phys table).
> + * This only makes sense in MMUEXT_SET_FOREIGNDOM, and is only permitted if
> + * the caller is privileged.
> + */
> +#define DOMID_XEN  (0x7FF2U)
> +
> +/* DOMID_COW is used as the owner of sharable pages. */
> +#define DOMID_COW  (0x7FF3U)
> +
> +/* DOMID_INVALID is used to identify pages with unknown owner. */
> +#define DOMID_INVALID (0x7FF4U)
> +
> +/* Idle domain. */
> +#define DOMID_IDLE (0x7FFFU)
> +
> +/*
> + * Send an array of these to HYPERVISOR_mmu_update().
> + * NB. The fields are natural pointer/address size for this architecture.
> + */
> +struct mmu_update {
> +    UINT64 ptr;       /* Machine address of PTE. */
> +    UINT64 val;       /* New contents of PTE.    */
> +};
> +typedef struct mmu_update mmu_update_t;
> +DEFINE_XEN_GUEST_HANDLE(mmu_update_t);
> +
> +/*
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_multicall(multicall_entry_t call_list[],
> + * `                      UINT32 nr_calls);
> + *
> + * NB. The fields are natural register size for this architecture.
> + */
> +struct multicall_entry {
> +    UINTN op, result;
> +    UINTN args[6];
> +};
> +typedef struct multicall_entry multicall_entry_t;
> +DEFINE_XEN_GUEST_HANDLE(multicall_entry_t);
> +
> +#if __XEN_INTERFACE_VERSION__ < 0x00040400
> +/*
> + * Event channel endpoints per domain (when using the 2-level ABI):
> + *  1024 if an INTN is 32 bits; 4096 if an INTN is 64 bits.
> + */
> +#define NR_EVENT_CHANNELS EVTCHN_2L_NR_CHANNELS
> +#endif
> +
> +struct vcpu_time_info {
> +    /*
> +     * Updates to the following values are preceded and followed by an
> +     * increment of 'version'. The guest can therefore detect updates by
> +     * looking for changes to 'version'. If the least-significant bit of
> +     * the version number is set then an update is in progress and the guest
> +     * must wait to read a consistent set of values.
> +     * The correct way to interact with the version number is similar to
> +     * Linux's seqlock: see the implementations of read_seqbegin/read_seqretry.
> +     */
> +    UINT32 version;
> +    UINT32 pad0;
> +    UINT64 tsc_timestamp;   /* TSC at last update of time vals.  */
> +    UINT64 system_time;     /* Time, in nanosecs, since boot.    */
> +    /*
> +     * Current system time:
> +     *   system_time +
> +     *   ((((tsc - tsc_timestamp) << tsc_shift) * tsc_to_system_mul) >> 32)
> +     * CPU frequency (Hz):
> +     *   ((10^9 << 32) / tsc_to_system_mul) >> tsc_shift
> +     */
> +    UINT32 tsc_to_system_mul;
> +    INT8   tsc_shift;
> +    INT8   pad1[3];
> +}; /* 32 bytes */
> +typedef struct vcpu_time_info vcpu_time_info_t;
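The version protocol plus the formula above translate to roughly the
following (my sketch; ReadTsc is an assumed helper, and barriers plus
the full-width 64x32 multiply are elided for brevity):

static UINT64 get_system_time_ns(const volatile struct vcpu_time_info *t,
                                 UINT64 (*ReadTsc)(VOID))
{
    UINT32 version, mul;
    UINT64 delta, system_time;
    INT8   shift;

    do {
        version     = t->version;       /* odd => update in progress */
        system_time = t->system_time;
        delta       = ReadTsc() - t->tsc_timestamp;
        mul         = t->tsc_to_system_mul;
        shift       = t->tsc_shift;
    } while ((version & 1) || t->version != version);

    delta = (shift >= 0) ? (delta << shift) : (delta >> -shift);
    /* Real code needs the high bits of a 64x32 multiply here. */
    return system_time + ((delta * mul) >> 32);
}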
> +
> +struct vcpu_info {
> +    /*
> +     * 'evtchn_upcall_pending' is written non-zero by Xen to indicate
> +     * a pending notification for a particular VCPU. It is then cleared 
> +     * by the guest OS /before/ checking for pending work, thus avoiding
> +     * a set-and-check race. Note that the mask is only accessed by Xen
> +     * on the CPU that is currently hosting the VCPU. This means that the
> +     * pending and mask flags can be updated by the guest without special
> +     * synchronisation (i.e., no need for the x86 LOCK prefix).
> +     * This may seem suboptimal because if the pending flag is set by
> +     * a different CPU then an IPI may be scheduled even when the mask
> +     * is set. However, note:
> +     *  1. The task of 'interrupt holdoff' is covered by the per-event-
> +     *     channel mask bits. A 'noisy' event that is continually being
> +     *     triggered can be masked at source at this very precise
> +     *     granularity.
> +     *  2. The main purpose of the per-VCPU mask is therefore to restrict
> +     *     reentrant execution: whether for concurrency control, or to
> +     *     prevent unbounded stack usage. Whatever the purpose, we expect
> +     *     that the mask will be asserted only for short periods at a time,
> +     *     and so the likelihood of a 'spurious' IPI is suitably small.
> +     * The mask is read before making an event upcall to the guest: a
> +     * non-zero mask therefore guarantees that the VCPU will not receive
> +     * an upcall activation. The mask is cleared when the VCPU requests
> +     * to block: this avoids wakeup-waiting races.
> +     */
> +    UINT8 evtchn_upcall_pending;
> +#ifdef XEN_HAVE_PV_UPCALL_MASK
> +    UINT8 evtchn_upcall_mask;
> +#else /* XEN_HAVE_PV_UPCALL_MASK */
> +    UINT8 pad0;
> +#endif /* XEN_HAVE_PV_UPCALL_MASK */
> +    xen_ulong_t evtchn_pending_sel;
> +    struct arch_vcpu_info arch;
> +    struct vcpu_time_info time;
> +}; /* 64 bytes (x86) */
> +#ifndef __XEN__
> +typedef struct vcpu_info vcpu_info_t;
> +#endif
> +
> +/*
> + * `incontents 200 startofday_shared Start-of-day shared data structure
> + * Xen/kernel shared data -- pointer provided in start_info.
> + *
> + * This structure is defined to be both smaller than a page, and the
> + * only data on the shared page, but may vary in actual size even within
> + * compatible Xen versions; guests should not rely on the size
> + * of this structure remaining constant.
> + */
> +struct shared_info {
> +    struct vcpu_info vcpu_info[XEN_LEGACY_MAX_VCPUS];
> +
> +    /*
> +     * A domain can create "event channels" on which it can send and receive
> +     * asynchronous event notifications. There are three classes of event that
> +     * are delivered by this mechanism:
> +     *  1. Bi-directional inter- and intra-domain connections. Domains must
> +     *     arrange out-of-band to set up a connection (usually by allocating
> +     *     an unbound 'listener' port and advertising that via a storage service
> +     *     such as xenstore).
> +     *  2. Physical interrupts. A domain with suitable hardware-access
> +     *     privileges can bind an event-channel port to a physical interrupt
> +     *     source.
> +     *  3. Virtual interrupts ('events'). A domain can bind an event-channel
> +     *     port to a virtual interrupt source, such as the virtual-timer
> +     *     device or the emergency console.
> +     *
> +     * Event channels are addressed by a "port index". Each channel is
> +     * associated with two bits of information:
> +     *  1. PENDING -- notifies the domain that there is a pending notification
> +     *     to be processed. This bit is cleared by the guest.
> +     *  2. MASK -- if this bit is clear then a 0->1 transition of PENDING
> +     *     will cause an asynchronous upcall to be scheduled. This bit is only
> +     *     updated by the guest. It is read-only within Xen. If a channel
> +     *     becomes pending while the channel is masked then the 'edge' is lost
> +     *     (i.e., when the channel is unmasked, the guest must manually handle
> +     *     pending notifications as no upcall will be scheduled by Xen).
> +     *
> +     * To expedite scanning of pending notifications, any 0->1 pending
> +     * transition on an unmasked channel causes a corresponding bit in a
> +     * per-vcpu selector word to be set. Each bit in the selector covers a
> +     * 'C INTN' in the PENDING bitfield array.
> +     */
> +    xen_ulong_t evtchn_pending[sizeof(xen_ulong_t) * 8];
> +    xen_ulong_t evtchn_mask[sizeof(xen_ulong_t) * 8];
> +
> +    /*
> +     * Wallclock time: updated only by control software. Guests should base
> +     * their gettimeofday() syscall on this wallclock-base value.
> +     */
> +    UINT32 wc_version;      /* Version counter: see vcpu_time_info_t. */
> +    UINT32 wc_sec;          /* Secs  00:00:00 UTC, Jan 1, 1970.  */
> +    UINT32 wc_nsec;         /* Nsecs 00:00:00 UTC, Jan 1, 1970.  */
> +
> +    struct arch_shared_info arch;
> +
> +};
> +#ifndef __XEN__
> +typedef struct shared_info shared_info_t;
> +#endif
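The selector-word scan described in the comment then looks something
like this (my sketch; production code must test-and-clear these bits
atomically, and handle() is expected to clear PENDING for the port):

#define EVTCHN_WORD_BITS (sizeof(xen_ulong_t) * 8)

static VOID scan_pending_events(shared_info_t *s, struct vcpu_info *v,
                                VOID (*handle)(UINT32 port))
{
    UINT32 word, bit;

    for (word = 0; word < EVTCHN_WORD_BITS; word++) {
        xen_ulong_t pending;

        if (!(v->evtchn_pending_sel & ((xen_ulong_t)1 << word)))
            continue;
        v->evtchn_pending_sel &= ~((xen_ulong_t)1 << word);
        pending = s->evtchn_pending[word] & ~s->evtchn_mask[word];
        for (bit = 0; bit < EVTCHN_WORD_BITS; bit++)
            if (pending & ((xen_ulong_t)1 << bit))
                handle((UINT32)(word * EVTCHN_WORD_BITS + bit));
    }
}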
> +
> +/*
> + * `incontents 200 startofday Start-of-day memory layout
> + *
> + *  1. The domain is started within contiguous virtual-memory region.
> + *  2. The contiguous region ends on an aligned 4MB boundary.
> + *  3. This is the order of bootstrap elements in the initial virtual region:
> + *      a. relocated kernel image
> + *      b. initial ram disk              [mod_start, mod_len]
> + *      c. list of allocated page frames [mfn_list, nr_pages]
> + *         (unless relocated due to XEN_ELFNOTE_INIT_P2M)
> + *      d. start_info_t structure        [register ESI (x86)]
> + *      e. bootstrap page tables         [pt_base and CR3 (x86)]
> + *      f. bootstrap stack               [register ESP (x86)]
> + *  4. Bootstrap elements are packed together, but each is 4kB-aligned.
> + *  5. The initial ram disk may be omitted.
> + *  6. The list of page frames forms a contiguous 'pseudo-physical' memory
> + *     layout for the domain. In particular, the bootstrap virtual-memory
> + *     region is a 1:1 mapping to the first section of the pseudo-physical map.
> + *  7. All bootstrap elements are mapped read-writable for the guest OS. The
> + *     only exception is the bootstrap page table, which is mapped read-only.
> + *  8. There is guaranteed to be at least 512kB padding after the final
> + *     bootstrap element. If necessary, the bootstrap virtual region is
> + *     extended by an extra 4MB to ensure this.
> + *
> + * Note: Prior to 25833:bb85bbccb1c9 ("x86/32-on-64 adjust Dom0 initial page
> + * table layout") a bug caused the pt_base (3.e above) and cr3 to not point
> + * to the start of the guest page tables (they were offset by two pages).
> + * This only manifested itself on 32-on-64 dom0 kernels and not 32-on-64 domU
> + * or 64-bit kernels of any colour. The page tables for a 32-on-64 dom0 got
> + * allocated in the order 'first L1', 'first L2', 'first L3', so the page
> + * table base is offset back by two pages. A 32-bit initial domain running
> + * under a 64-bit hypervisor should _NOT_ use the two pages preceding pt_base,
> + * and should mark them as reserved/unused.
> + */
> +#ifdef XEN_HAVE_PV_GUEST_ENTRY
> +struct start_info {
> +    /* THE FOLLOWING ARE FILLED IN BOTH ON INITIAL BOOT AND ON RESUME.    */
> +    CHAR8 magic[32];             /* "xen-<version>-<platform>".            */
> +    UINTN nr_pages;     /* Total pages allocated to this domain.  */
> +    UINTN shared_info;  /* MACHINE address of shared info struct. */
> +    UINT32 flags;             /* SIF_xxx flags.                         */
> +    xen_pfn_t store_mfn;        /* MACHINE page number of shared page.    */
> +    UINT32 store_evtchn;      /* Event channel for store communication. */
> +    union {
> +        struct {
> +            xen_pfn_t mfn;      /* MACHINE page number of console page.   */
> +            UINT32  evtchn;   /* Event channel for console page.        */
> +        } domU;
> +        struct {
> +            UINT32 info_off;  /* Offset of console_info struct.         */
> +            UINT32 info_size; /* Size of console_info struct from start.*/
> +        } dom0;
> +    } console;
> +    /* THE FOLLOWING ARE ONLY FILLED IN ON INITIAL BOOT (NOT RESUME).     */
> +    UINTN pt_base;      /* VIRTUAL address of page directory.     */
> +    UINTN nr_pt_frames; /* Number of bootstrap p.t. frames.       */
> +    UINTN mfn_list;     /* VIRTUAL address of page-frame list.    */
> +    UINTN mod_start;    /* VIRTUAL address of pre-loaded module   */
> +                                /* (PFN of pre-loaded module if           */
> +                                /*  SIF_MOD_START_PFN set in flags).      */
> +    UINTN mod_len;      /* Size (bytes) of pre-loaded module.     */
> +#define MAX_GUEST_CMDLINE 1024
> +    INT8 cmd_line[MAX_GUEST_CMDLINE];
> +    /* The pfn range here covers both page table and p->m table frames.   */
> +    UINTN first_p2m_pfn;/* 1st pfn forming initial P->M table.    */
> +    UINTN nr_p2m_frames;/* # of pfns forming initial P->M table.  */
> +};
> +typedef struct start_info start_info_t;
> +
> +/* New console union for dom0 introduced in 0x00030203. */
> +#if __XEN_INTERFACE_VERSION__ < 0x00030203
> +#define console_mfn    console.domU.mfn
> +#define console_evtchn console.domU.evtchn
> +#endif
> +#endif /* XEN_HAVE_PV_GUEST_ENTRY */
> +
> +/* These flags are passed in the 'flags' field of start_info_t. */
> +#define SIF_PRIVILEGED    (1<<0)  /* Is the domain privileged? */
> +#define SIF_INITDOMAIN    (1<<1)  /* Is this the initial control domain? */
> +#define SIF_MULTIBOOT_MOD (1<<2)  /* Is mod_start a multiboot module? */
> +#define SIF_MOD_START_PFN (1<<3)  /* Is mod_start a PFN? */
> +#define SIF_PM_MASK       (0xFF<<8) /* reserve 1 byte for xen-pm options */
> +
> +/*
> + * A multiboot module is a package containing modules very similar to a
> + * multiboot module array. The only differences are:
> + * - the array of module descriptors is by convention simply at the beginning
> + *   of the multiboot module,
> + * - addresses in the module descriptors are based on the beginning of the
> + *   multiboot module,
> + * - the number of modules is determined by a termination descriptor that has
> + *   mod_start == 0.
> + *
> + * This permits one both to build it statically and to reference it in a
> + * configuration file, and lets the PV guest easily rebase the addresses to
> + * virtual addresses and at the same time count the number of modules.
> + */
> +struct xen_multiboot_mod_list
> +{
> +    /* Address of first byte of the module */
> +    UINT32 mod_start;
> +    /* Address of last byte of the module (inclusive) */
> +    UINT32 mod_end;
> +    /* Address of zero-terminated command line */
> +    UINT32 cmdline;
> +    /* Unused, must be zero */
> +    UINT32 pad;
> +};
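So consumers count the modules by walking to the terminator, e.g.
(my sketch):

/* Walk the descriptor array until the mod_start == 0 terminator. */
static UINT32 count_multiboot_mods(const struct xen_multiboot_mod_list *list)
{
    UINT32 n = 0;
    while (list[n].mod_start != 0)
        n++;
    return n;
}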
> +/*
> + * `incontents 200 startofday_dom0_console Dom0_console
> + *
> + * The console structure in start_info.console.dom0
> + *
> + * This structure includes a variety of information required to
> + * have a working VGA/VESA console.
> + */
> +typedef struct dom0_vga_console_info {
> +    UINT8 video_type; /* DOM0_VGA_CONSOLE_??? */
> +#define XEN_VGATYPE_TEXT_MODE_3 0x03
> +#define XEN_VGATYPE_VESA_LFB    0x23
> +#define XEN_VGATYPE_EFI_LFB     0x70
> +
> +    union {
> +        struct {
> +            /* Font height, in pixels. */
> +            UINT16 font_height;
> +            /* Cursor location (column, row). */
> +            UINT16 cursor_x, cursor_y;
> +            /* Number of rows and columns (dimensions in characters). */
> +            UINT16 rows, columns;
> +        } text_mode_3;
> +
> +        struct {
> +            /* Width and height, in pixels. */
> +            UINT16 width, height;
> +            /* Bytes per scan line. */
> +            UINT16 bytes_per_line;
> +            /* Bits per pixel. */
> +            UINT16 bits_per_pixel;
> +            /* LFB physical address, and size (in units of 64kB). */
> +            UINT32 lfb_base;
> +            UINT32 lfb_size;
> +            /* RGB mask offsets and sizes, as defined by VBE 1.2+ */
> +            UINT8  red_pos, red_size;
> +            UINT8  green_pos, green_size;
> +            UINT8  blue_pos, blue_size;
> +            UINT8  rsvd_pos, rsvd_size;
> +#if __XEN_INTERFACE_VERSION__ >= 0x00030206
> +            /* VESA capabilities (offset 0xa, VESA command 0x4f00). */
> +            UINT32 gbl_caps;
> +            /* Mode attributes (offset 0x0, VESA command 0x4f01). */
> +            UINT16 mode_attrs;
> +#endif
> +        } vesa_lfb;
> +    } u;
> +} dom0_vga_console_info_t;
> +#define xen_vga_console_info dom0_vga_console_info
> +#define xen_vga_console_info_t dom0_vga_console_info_t
> +
> +typedef UINT8 xen_domain_handle_t[16];
> +
> +/* Turn a plain number into a C UINTN constant. */
> +#define __mk_unsigned_long(x) x ## UL
> +#define mk_unsigned_long(x) __mk_unsigned_long(x)
> +
> +__DEFINE_XEN_GUEST_HANDLE(uint8,  UINT8);
> +__DEFINE_XEN_GUEST_HANDLE(uint16, UINT16);
> +__DEFINE_XEN_GUEST_HANDLE(uint32, UINT32);
> +__DEFINE_XEN_GUEST_HANDLE(uint64, UINT64);
> +
> +#else /* __ASSEMBLY__ */
> +
> +/* In assembly code we cannot use C numeric constant suffixes. */
> +#define mk_unsigned_long(x) x
> +
> +#endif /* !__ASSEMBLY__ */
> +
> +/* Default definitions for macros used by domctl/sysctl. */
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +
> +#ifndef uint64_aligned_t
> +#define uint64_aligned_t UINT64
> +#endif
> +#ifndef XEN_GUEST_HANDLE_64
> +#define XEN_GUEST_HANDLE_64(name) XEN_GUEST_HANDLE(name)
> +#endif
> +
> +#ifndef __ASSEMBLY__
> +struct xenctl_bitmap {
> +    XEN_GUEST_HANDLE_64(uint8) bitmap;
> +    UINT32 nr_bits;
> +};
> +#endif
> +
> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
> +
> +#endif /* __XEN_PUBLIC_XEN_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> -- 
> Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

