Re: [Xen-devel] [RFC PATCH 8/8]: PVH: privcmd changes



On Tue, 11 Sep 2012 15:10:23 +0100
Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:

> I think you can't rely on the implicit teardown here since you need to
> unmap before you hand the page back to the balloon. The reason this
> doesn't look necessary now is that you don't give the page back.
> Also, not ordering the stage 1 and stage 2 teardown correctly is
> dangerous: depending on the eventual ordering, you potentially turn
> an erroneous access to a virtual address, which should result in a
> guest OS level page fault (and e.g. a seg fault to the offending
> process), into a "hypervisor shoots the guest due to an unexpected
> stage 2 fault" type of failure, which is somewhat undesirable ;-)
> 
> With that in mind I think you do in the end need to add
> xen_unmap_domain_mfn_range which does the unmap from both stage 1 and
> stage 2 -- that balances out the interface (making pvh_rem_xen_p2m
> internal) too, which is nice. This function may turn out to be a nop
> on !pvh, but that's ok (although maybe there would be no harm in doing
> explicit unmaps, for consistency?).
 
Ok, I added xen_unmap_domain_mfn_range(); take a look. It appears that
by the time privcmd_close() is called, the kernel has already freed the
process resources, and lookup_address() returns NULL. Now I am wondering
whether the kernel/mmu does anything to the page while shooting down the
pte entry. Well, the page originally came from the balloon, so the
cleanup hopefully leaves it alone.
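
Roughly the shape I ended up with, as a sketch (not the attached file
verbatim; the XENMEM_remove_from_physmap hypercall and xen_feature()
are the real interfaces, the argument layout here is illustrative):

#include <linux/mm.h>
#include <xen/features.h>
#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

/* vma is kept for symmetry with the remap side */
int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
			       struct page **pages, int count)
{
	int i, rc;

	if (!xen_feature(XENFEAT_auto_translated_physmap))
		return 0;	/* !pvh: nop, stage 1 teardown suffices */

	/*
	 * Ordering: the stage 1 (guest pagetable) mappings must already
	 * be gone, and the pages go back to the balloon only after the
	 * stage 2 entries are removed.
	 */
	for (i = 0; i < count; i++) {
		struct xen_remove_from_physmap xrp;

		xrp.domid = DOMID_SELF;
		xrp.gpfn = page_to_pfn(pages[i]);
		rc = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrp);
		if (rc)
			return rc;
	}
	return 0;
}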

I had looked for other hooks initially when I did this, but
vm_operations_struct->close was the only one that panned out.
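
The wiring looks about like this (again a sketch; the contents of
struct pvh_privcmd_resv_pfns are illustrative, while
free_xenballooned_pages() is the real balloon API):

#include <linux/mm.h>
#include <linux/slab.h>
#include <xen/balloon.h>

struct pvh_privcmd_resv_pfns {		/* per-vma bookkeeping (sketch) */
	int count;
	struct page *pages[];		/* balloon pages backing the vma */
};

static void privcmd_close(struct vm_area_struct *vma)
{
	struct pvh_privcmd_resv_pfns *p = vma->vm_private_data;

	if (!p)
		return;
	/* ptes are already zapped (lookup_address() returns NULL here);
	 * undo stage 2, then hand the pages back to the balloon. */
	xen_unmap_domain_mfn_range(vma, p->pages, p->count);
	free_xenballooned_pages(p->count, p->pages);
	kfree(p);
}

static struct vm_operations_struct privcmd_vm_ops = {
	.close = privcmd_close,
};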

I can't really move pvh_privcmd_resv_pfns to mmu.c, because
xen_remap_domain_mfn_range() is called one page at a time and I need to
allocate the array first. I'd have to change it to a linked list; is
that worth it? Or I'd have to move and export it.
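
To make the constraint concrete: each remap call only sees a single
page, so the bookkeeping has to be sized before the loop starts,
something like this (sketch, continuing the assumed names above):

static struct pvh_privcmd_resv_pfns *pvh_resv_alloc(int count)
{
	struct pvh_privcmd_resv_pfns *p;

	/* sized once up front; filled in page by page during remap */
	p = kzalloc(sizeof(*p) + count * sizeof(struct page *),
		    GFP_KERNEL);
	if (p)
		p->count = count;
	return p;
}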


> WRT passing data between interfaces in vma->vm_private, which is
> pretty subtle, can we push that whole thing down into
> xen_{remap,unmap}_domain_mfn_range too? This would make the
> requirement on the caller simply "never use vm_private", as opposed
> to now, where the requirement is "sometimes you might have to
> allocate some stuff and stick it in here". The downside is that it
> pushes code which could be generic down into per-arch stuff, although
> with suitable generic helper functions this isn't so bad (whatever
> happened to {alloc,free}_empty_pages_and_pagevec from the classic
> kernels? Those did exactly what we want here, I think)

Well, it has to hang off of vma->vm_private. The alternative would be
a hash table keyed by process id or something; again, I'm not sure it's
worth it.
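
That is, the stash happens once at mmap/ioctl time so that close() can
find it later (sketch, same assumed names as above):

static int privcmd_vma_setup(struct vm_area_struct *vma, int numpgs)
{
	vma->vm_private_data = pvh_resv_alloc(numpgs);
	if (!vma->vm_private_data)
		return -ENOMEM;
	vma->vm_ops = &privcmd_vm_ops;
	return 0;
}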

Take a look at my latest files, attached. The alloc_balloon and free
paths are now split between privcmd.c and mmu.c. The alternative would
be to call xen_unmap_domain_mfn_range() one pfn at a time, have it call
pvh_rem_xen_p2m(), and move free_xenballooned_pages() to privcmd. But
that would be the same as just renaming pvh_rem_xen_p2m() to
xen_unmap_domain_mfn_range(), and remap and unmap wouldn't be very
symmetric then.

Not sure if there's a lot we could do here, to be honest. LMK what you
think.

thanks,
Mukesh

Attachment: privcmd.c
Attachment: mmu.c
