
[Xen-devel] RE: [PATCH][1/6] add a hypercall number for virtual device in unmodified guest




Ian Pratt <mailto:m+Ian.Pratt@xxxxxxxxxxxx> wrote:
>> Subject: [PATCH][1/6] add a hypercall number for virtual device in
>> unmodified guest 
> Xiaofeng,
> 
> Please can you write a few paragraphs describing the xen-side
> changes. I thought we were going to have a separate hypercall table
> for hvm guests, but it looks like you've not gone down this route.
> Please can you describe your approach to simplify the review.   
> 
OK. The hypervisor-side changes consist of these 6 patches.
The amount of hypervisor change is now much smaller than in the previous
patch series.
The changes fall into 3 parts:
1. Let a VMX guest issue hypercalls (patches 1, 2; a rough sketch of how
   the para-driver can do this via VMCALL follows this list).
2. Deliver events to a VMX guest via an IRQ (patches 3, 5).
3. Set up the two kinds of shared page (patches 4, 6):
      1) the hypercall parameter shared page,
      2) the grant table shared page.
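
Roughly, the VMCALL path on the guest side looks like the sketch below
(illustrative only, not the actual patch code; the helper name is made up):

/*
 * Minimal sketch: an unmodified (HVM/VMX) guest has no int 0x82
 * hypercall path, so the para-driver traps into Xen with VMCALL.
 * eax carries the hypercall number, ebx/ecx the arguments
 * (x86-32 calling convention assumed here).
 */
static inline long hvm_hypercall2(unsigned long op,
                                  unsigned long arg1,
                                  unsigned long arg2)
{
    long ret;

    __asm__ __volatile__ (
        "vmcall"
        : "=a" (ret)
        : "a" (op), "b" (arg1), "c" (arg2)
        : "memory" );

    return ret;
}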


patch 1: Just reserves one hypercall number for the para-driver. Even if
we used a separate call table, we would still need to reserve one number.
patch 2: Hypercall entry for VMCALL. In this patch I use a permit bitmap
to check the hypercall number and then dispatch through the
hypercall_table (a rough sketch of the idea follows this list). Using a
separate call table would also be OK; I'll send that patch later.
patch 3: Adds a callback IRQ member.
patch 4: Adds two virtual device operations.
patch 5: Grant table shared page setup.
patch 6: Injects the callback IRQ.
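
The permit bitmap idea in patch 2 is roughly the following (an
illustrative sketch, not the patch itself; the function name and the
dispatch signature are made up, although __HYPERVISOR_grant_table_op,
__HYPERVISOR_event_channel_op and NR_hypercalls are the usual Xen names):

/*
 * On a VMCALL exit, only hypercalls whitelisted in a permit bitmap are
 * forwarded to the ordinary hypercall_table; everything else fails.
 */
#define HVM_PERMITTED_HYPERCALLS                  \
    ((1UL << __HYPERVISOR_grant_table_op) |       \
     (1UL << __HYPERVISOR_event_channel_op))

static long hvm_do_vmcall(struct cpu_user_regs *regs)
{
    unsigned long nr = regs->eax;

    /* Reject hypercall numbers HVM guests are not permitted to use. */
    if ( (nr >= NR_hypercalls) ||
         !(HVM_PERMITTED_HYPERCALLS & (1UL << nr)) )
        return -ENOSYS;

    /* Dispatch through the normal hypercall table. */
    return hypercall_table[nr](regs->ebx, regs->ecx, regs->edx,
                               regs->esi, regs->edi);
}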

For the xen-linux patch:
ctrl_if.c   just resolves the setup_irq problem
gnttab.c    sets up the shared page
blkfront    uses macros for virt_to_mfn and page_to_phys
xenbus      uses a macro for the xenstore shared page
hypercall wrapper   copies the parameters to the shared page before
                    issuing the hypercall (see the sketch below)
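
The wrapper idea is roughly the following (illustrative sketch only;
hypercall_arg_page and the helper name are made up, and hvm_hypercall2
refers to the VMCALL sketch above):

/*
 * Xen cannot follow guest-virtual argument pointers from an HVM guest
 * the way it can for a PV guest, so the arguments are first copied into
 * a page that was shared with Xen at setup time, and the hypercall is
 * issued against that page instead.
 */
extern void *hypercall_arg_page;   /* registered with Xen at init time */

static long hvm_hypercall_with_args(unsigned long op,
                                    const void *args, unsigned long len)
{
    if ( len > PAGE_SIZE )
        return -EINVAL;

    /* Stage the parameters where the hypervisor can see them. */
    memcpy(hypercall_arg_page, args, len);

    return hvm_hypercall2(op, (unsigned long)hypercall_arg_page, len);
}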

I've removed support for the phys_to_machine_map in the guest.
For blkfront, the translation will be done through the grant table in
shadow-translate mode, roughly as in the sketch below.
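
(Illustrative sketch based on the ordinary Linux grant table API, not the
patch itself; the function name is made up, and the header path for the
grant table declarations varies between xen-linux trees.)

/*
 * Instead of exposing a machine frame number (which would need a
 * phys_to_machine_map in the guest), blkfront grants the backend access
 * to the page and passes the grant reference; Xen resolves the frame
 * itself in shadow-translate mode.
 */
static int share_request_page(domid_t backend_domid, struct page *page,
                              grant_ref_t *gref_out)
{
    int ref;

    /* page_to_pfn() yields a guest-physical frame; no M2P/P2M lookup. */
    ref = gnttab_grant_foreign_access(backend_domid, page_to_pfn(page),
                                      0 /* read-write */);
    if ( ref < 0 )
        return ref;

    *gref_out = ref;
    return 0;
}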




 

