
RE: [Xen-devel] [PATCH] [HVM] Rename file hvm_info_table.h to platform.h



>>What kinds of things are going to be added? 'platform.h' is vague 
>>enough it could end up a dumping ground for all kinds of crud.
>>
>
>To support HVM guests with RAM above the 4G physical address space, we
>need to define a constant HVM_RAM_LIMIT_BELOW_4G; the physical address
>space from HVM_RAM_LIMIT_BELOW_4G to 4G is reserved for PCI device MMIO
>use. So if an HVM guest has more than HVM_RAM_LIMIT_BELOW_4G of RAM,
>the RAM beyond HVM_RAM_LIMIT_BELOW_4G should go to the physical address
>space above 4G, and the p2m table and e820 table need to be adjusted
>accordingly.
>The constant HVM_RAM_LIMIT_BELOW_4G will be used in the control panel,
>the device model and the hypervisor, and I need a header file to hold
>the definition. It's hard for me to find a good English name for the
>header file. Another concern is that, in the future, we may have more
>such definitions.
>BTW, qemu-dm allocates PCI device MMIO from 0xf0000000.
>
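For illustration, the split described above might look like the sketch
below (HVM_RAM_LIMIT_BELOW_4G is the name from this thread, the value
matches qemu-dm's MMIO base mentioned above, and the helper itself is
hypothetical, not code from the patch):

#include <stdio.h>

/* Hypothetical sketch only; the constant name is from this thread and
 * the value matches the qemu-dm MMIO base mentioned above. */
#define HVM_RAM_LIMIT_BELOW_4G  0xf0000000ULL   /* 3.75G */
#define FOUR_GB                 0x100000000ULL

/* Split a guest's RAM into the part below the MMIO hole and the part
 * that must be relocated above 4G. */
static void split_ram(unsigned long long ram_size,
                      unsigned long long *below, unsigned long long *above)
{
    if (ram_size <= HVM_RAM_LIMIT_BELOW_4G) {
        *below = ram_size;
        *above = 0;
    } else {
        *below = HVM_RAM_LIMIT_BELOW_4G;
        *above = ram_size - HVM_RAM_LIMIT_BELOW_4G;  /* mapped from 4G up */
    }
}

int main(void)
{
    unsigned long long below, above;

    split_ram(6ULL << 30, &below, &above);   /* a 6G guest */
    printf("below hole: 0x%llx, above 4G: 0x%llx\n", below, above);
    return 0;
}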

This patch supports HVM guests with more than 3.75G of memory; please
comment.
Changes are:
1) The M2P table and e820 table are changed to skip the address space
from HVM_RAM_LIMIT_BELOW_4G to 4G.
2) The shared I/O page location changes: with less than
HVM_RAM_LIMIT_BELOW_4G of memory it remains the last page of RAM, as
today; otherwise it is the last page below HVM_RAM_LIMIT_BELOW_4G (see
the sketch after this list).
3) In qemu-dm, the address space from HVM_RAM_LIMIT_BELOW_4G to 4G is
stuffed with mappings of the shared I/O page, so the 1:1 mapping still
works (sketched further below). This is ugly, but the limit-check patch
in changeset 10757 will prevent qemu-dm from accessing this range.
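A rough sketch of the shared I/O page placement in change 2 (the helper
name is made up for illustration; PAGE_SHIFT is the usual 4K page
shift):

#define HVM_RAM_LIMIT_BELOW_4G  0xf0000000UL   /* as in the sketch above */
#define PAGE_SHIFT              12

/* Return the PFN that holds the shared I/O page for a guest with
 * nr_pages pages of RAM. */
static unsigned long shared_iopage_pfn(unsigned long nr_pages)
{
    unsigned long limit_pfn = HVM_RAM_LIMIT_BELOW_4G >> PAGE_SHIFT;

    if (nr_pages <= limit_pfn)
        return nr_pages - 1;   /* last page of RAM, as today */
    return limit_pfn - 1;      /* last page below HVM_RAM_LIMIT_BELOW_4G */
}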

This patch should work together with the future patch that removes the
1:1 mapping from qemu-dm.
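Until that removal happens, the stuffing in change 3 might look roughly
like this (page_array and shared_iopage_mfn are illustrative names, not
the real qemu-dm symbols):

#define HVM_RAM_LIMIT_BELOW_4G  0xf0000000ULL
#define FOUR_GB                 0x100000000ULL
#define PAGE_SHIFT              12

/* Back every PFN in [HVM_RAM_LIMIT_BELOW_4G, 4G) with the shared I/O
 * page so qemu-dm's 1:1 address arithmetic keeps working. */
static void stuff_mmio_hole(unsigned long *page_array,
                            unsigned long shared_iopage_mfn)
{
    unsigned long long pfn;

    for (pfn = HVM_RAM_LIMIT_BELOW_4G >> PAGE_SHIFT;
         pfn < (FOUR_GB >> PAGE_SHIFT); pfn++)
        page_array[pfn] = shared_iopage_mfn;
}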

I believe many people will comment :-), and comments are certainly welcome.

Thanks
-Xin

Attachment: 3.5G.11.patch
Description: 3.5G.11.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

