Re: [Xen-devel] Re: Next steps with pv_ops for Xen
Stephen C. Tweedie wrote:
> I can't help wondering if this is a hint that now is the time to find a
> better API, which doesn't have the requirement (a) that seems to be
> causing such trouble?  Are other PV guests --- *BSD, Solaris --- going
> to have the same problems with their VM layers if they try to implement
> this API?

Well, it isn't that easy unfortunately.  We have to separate two things
here:

  (a) the grant table hypercall API (linux kernel <-> xen).
  (b) the grant table device (userspace interface).

The hypercall API *is* heavily used; the block and network drivers use it,
for example.  It works quite well as long as the drivers live in kernel
space, so the grants are also mapped in kernel space only.  Controlling
map and unmap isn't very hard then (a rough sketch of that path is
appended below).

The problems start when gntdev comes into play, which wants to allow
userspace applications to map grant references.  At that point the whole
VM subsystem becomes involved, and the requirement of the hypercall API
that any pte manipulation go through grant table hypercalls becomes a
real burden.  The linux VM design simply doesn't allow that.

Consequently the current gntdev implementation tries to get the job done
by bypassing the VM (and hooking into it).  It establishes mappings by
doing the page table manipulation itself in the fops->mmap function, and
it tears mappings down using the hook discussed earlier (the second
sketch below shows roughly what that looks like).

gntdev doesn't even try to handle forking.  I wouldn't be surprised if
that is a great way to kill Domain-0.  The xen hypervisor will most
likely not be amused to find a pte referring to a granted (but foreign)
page which wasn't established using the grant table interface.  Pinning
the pgd of the child process will most likely fail and make the kernel
BUG().

cheers,
  Gerd
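
Sketch 1: a minimal, hedged sketch (not the actual blkback/netback code)
of the in-kernel path.  The helper names map_foreign_grant() and
unmap_foreign_grant() are made up for the example; gnttab_set_map_op(),
gnttab_set_unmap_op() and HYPERVISOR_grant_table_op() are the real
helpers that in-kernel users go through.

  /*
   * Rough sketch: map a grant from another domain into kernel address
   * space only, then unmap it again.  Everything happens in kernel
   * context, so the generic VM code never sees these ptes.
   */
  #include <linux/kernel.h>
  #include <xen/grant_table.h>
  #include <xen/interface/grant_table.h>
  #include <asm/xen/hypercall.h>

  static int map_foreign_grant(void *kaddr, grant_ref_t ref,
                               domid_t otherend, grant_handle_t *handle)
  {
          struct gnttab_map_grant_ref op;

          /* kaddr is a kernel virtual address reserved for the mapping */
          gnttab_set_map_op(&op, (unsigned long)kaddr, GNTMAP_host_map,
                            ref, otherend);
          if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
                  BUG();
          if (op.status != GNTST_okay)
                  return -EINVAL;

          *handle = op.handle;
          return 0;
  }

  static void unmap_foreign_grant(void *kaddr, grant_handle_t handle)
  {
          struct gnttab_unmap_grant_ref op;

          /* the unmap must go through the grant table hypercall, too */
          gnttab_set_unmap_op(&op, (unsigned long)kaddr, GNTMAP_host_map,
                              handle);
          if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1))
                  BUG();
  }

The point is that the kernel owns kaddr, issues the hypercalls itself and
knows exactly when to unmap, so there is no collision with the VM.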
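Sketch 2: again a simplified, hedged sketch rather than the actual gntdev
code, showing roughly what establishing a single userspace pte at mmap
time looks like.  The helper name gntdev_map_at_pte() is made up;
arbitrary_virt_to_machine() and the GNTMAP_* flags are real.  The machine
address of the pte slot is handed to xen together with
GNTMAP_contains_pte, so the hypervisor writes the pte instead of the
normal fault path.

  /*
   * Simplified sketch of the mmap-time mapping: xen writes the pte for
   * us.  Undoing it later must also go through GNTTABOP_unmap_grant_ref,
   * which is exactly what the generic VM code (zap_pte_range() on
   * munmap/exit, or fork's page table copying) will never do -- hence
   * the problems described above.
   */
  #include <asm/xen/page.h>   /* arbitrary_virt_to_machine() */

  static int gntdev_map_at_pte(pte_t *pte, grant_ref_t ref,
                               domid_t otherend, grant_handle_t *handle)
  {
          struct gnttab_map_grant_ref op;
          /* machine address of the pte slot itself, not of the data page */
          phys_addr_t pte_maddr = arbitrary_virt_to_machine(pte).maddr;

          gnttab_set_map_op(&op, pte_maddr,
                            GNTMAP_host_map | GNTMAP_application_map |
                            GNTMAP_contains_pte,
                            ref, otherend);
          if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
                  BUG();
          if (op.status != GNTST_okay)
                  return -EFAULT;

          *handle = op.handle;
          return 0;
  }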