
Re: [Xen-devel] managing address space inside paravirtualized guest



On Tue, 2011-08-16 at 14:44 +0100, Mike Rapoport wrote:
> On Mon, Aug 15, 2011 at 12:33 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> 
> wrote:
> > On Sun, 2011-08-14 at 06:59 +0100, Mike Rapoport wrote:
> >> Hello all,
> >>
> >> I am working on a project that runs in paravirtualized Linux. The
> >> application should have its own address space that is not managed
> >> by the underlying Linux kernel. I use a kernel module to allocate
> >> pages for the application page table and to communicate the pages'
> >> physical and machine addresses between the kernel and the
> >> application.
> >> The page tables I create in the application seem to be correct and I
> >> can successfully pin them using Xen hypercalls.
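(For readers following the thread: the pinning step referred to here is
roughly the sketch below; it assumes a 64-bit guest and a hypothetical
pgd_mfn holding the machine frame number of the top-level table.)

    #include <xen/interface/xen.h>
    #include <asm/xen/hypercall.h>

    /* Pin a top-level (L4) page table so Xen will later accept it as
     * a cr3 value. The frame must already be mapped read-only
     * everywhere, or the pin is refused. */
    static int pin_l4_table(unsigned long pgd_mfn)
    {
        struct mmuext_op op;
        int done = 0;

        op.cmd = MMUEXT_PIN_L4_TABLE;
        op.arg1.mfn = pgd_mfn;
        return HYPERVISOR_mmuext_op(&op, 1, &done, DOMID_SELF);
    }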
> >
> > What is your end goal here?
> 
> Unfortunately I cannot elaborate on the application because of an
> NDA, but I can say that certain parts of the application are required
> to have control over the hardware MMU and interrupts.

:-/ and the requirement is for the _application_ to control these
mappings etc. rather than simply asking the kernel to do it (i.e. by
mmap'ing a device)? Have you seen drivers/uio in the kernel, for
example?
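(FWIW, the uio route usually looks roughly like the sketch below; the
device node name /dev/uio0 and the single-page mapping are assumptions,
since everything depends on the driver that exports the device.)

    /* Map a device's first UIO memory region into userspace. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long pagesz = sysconf(_SC_PAGESIZE);
        int fd = open("/dev/uio0", O_RDWR);

        if (fd < 0) {
            perror("open /dev/uio0");
            return 1;
        }

        /* UIO exposes mapping N at file offset N * page size. */
        void *regs = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* ... access device registers through 'regs' ... */

        munmap(regs, pagesz);
        close(fd);
        return 0;
    }

A blocking read() on the same fd returns an interrupt count, so the
application can also wait for device interrupts this way.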

Anyway, perhaps you could post a dummy user application (which just
creates the PT and perhaps flips to/from them?), along with the
complete hypervisor and Linux modifications? I don't think anyone is
going to be able to help if they have to guess what you might have
actually done.

> > Does this scheme work for you under native Linux?
> 
> Yes, it does.
> 
> >  In general doing an
> > end-run around the OS like this seems likely to be fraught with
> > pitfalls.
> 
> Agree :)
> 
> >>  However, when I try to
> >> set cr3 to point to these page tables with MMUEXT_NEW_{USER}BASEPTR I
> >> get the following error:
> >>
> >> (XEN) domain_crash_sync called from entry.S
> >> (XEN) Domain 1 (vcpu#0) crashed on cpu#0:
> >> (XEN) ----[ Xen-4.0.1  x86_64  debug=n  Not tainted ]----
> >> (XEN) CPU:    0
> >> (XEN) RIP:    e033:[<0000000fb0013d09>]
> >
> > What does this address correspond to?
> 
> This address corresponds to the printf("success") in the following code:
> 
> {
>     struct mmuext_op op;
>     int success_count;
>     int ret;
> 
>     op.cmd = MMUEXT_NEW_BASEPTR;
>     op.arg1.mfn = new_cr3 >> PAGE_SHIFT;
> 
>     ret = HYPERVISOR_mmuext_op(&op, 1, &success_count, DOMID_SELF);
>     if (ret || success_count != 1)
>         printf("%s: ret=%d, success_count=%d\n", __func__, ret,
>                success_count);
> 
>     printf("%s: success\n", __func__);
> }
> 
> i.e. the hypercall apparently returns successfully, but further
> execution faults.

Where does new_cr3 come from?

Are you sure that your new page tables include mappings for the kernel
text, data, etc.?
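(As a point of reference, copying the kernel half of the address space
into a freshly built top-level table might look like the sketch below;
it assumes an x86_64 PV guest and a module-allocated 'new_pgd' that has
not been pinned yet. The helper name is hypothetical, and since init_mm
is not exported to modules, it goes through current->active_mm.)

    #include <linux/sched.h>
    #include <asm/pgtable.h>

    static void copy_kernel_mappings(pgd_t *new_pgd)
    {
        unsigned int i;

        /* Kernel text/data, vmalloc, fixmap, etc. all live above
         * PAGE_OFFSET; copy those top-level entries so the kernel
         * (including its stacks) stays reachable after the switch.
         * The table is not pinned yet, so plain writes are legal. */
        for (i = pgd_index(PAGE_OFFSET); i < PTRS_PER_PGD; i++)
            new_pgd[i] = current->active_mm->pgd[i];
    }

If the kernel stack is not mapped when an event needs delivering, the
bounce frame cannot be built, which would be consistent with the
domain_crash_sync from entry.S mentioned below.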

There was a cr2 value in the trace; I wonder if it is valid at this
point (it's not clear whether you've taken a page fault or some other
form of fault).

> >> Any leads on how to debug this would be highly appreciated.
> >
> > There are only a few calls to domain_crash_sync in entry.S and they
> > all involve errors while creating a bounce frame (i.e. setting up a
> > return to guest context with an event injection).
> >
> > Since you are replacing cr3, you are presumably taking steps to
> > ensure that no interrupts or anything like that can occur, since
> > they will necessarily want to be running on the kernel's page
> > tables and not on some application-controlled page tables.
> 
> We have interrupts disabled. Besides, the behavior is consistent,
> and I wouldn't expect that if interrupts were the reason for the
> faults...

OK.
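For what it's worth, a rough sketch of switching both base pointers of
a 64-bit PV guest in one batched hypercall, with event delivery masked
for the duration, assuming it runs from the kernel module (so
local_irq_save() maps onto the PV event mask) and that both tables are
already pinned; kernel_mfn/user_mfn and the helper name are
hypothetical:

    #include <linux/errno.h>
    #include <linux/irqflags.h>
    #include <xen/interface/xen.h>
    #include <asm/xen/hypercall.h>

    static int switch_baseptrs(unsigned long kernel_mfn,
                               unsigned long user_mfn)
    {
        struct mmuext_op ops[2];
        unsigned long flags;
        int done = 0;
        int ret;

        /* 64-bit PV guests have separate kernel and user page
         * tables, so both base pointers are replaced together. */
        ops[0].cmd = MMUEXT_NEW_BASEPTR;
        ops[0].arg1.mfn = kernel_mfn;
        ops[1].cmd = MMUEXT_NEW_USER_BASEPTR;
        ops[1].arg1.mfn = user_mfn;

        /* Mask events so no upcall runs on half-switched tables. */
        local_irq_save(flags);
        ret = HYPERVISOR_mmuext_op(ops, 2, &done, DOMID_SELF);
        local_irq_restore(flags);

        if (ret == 0 && done != 2)
            ret = -EINVAL;
        return ret;
    }

Note that anything touched after the switch (including the shared info
page consulted when events are unmasked again) must still be mapped in
the new tables.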



