
Re: [Xen-users] Error accessing memory mapped by xenforeignmemory_map()



Hello Brett,

On 27/10/2017 22:58, Brett Stahlman wrote:
On Fri, Oct 27, 2017 at 3:22 PM, Stefano Stabellini
<sstabellini@xxxxxxxxxx> wrote:
CC'ing the tools Maintainers and Paul

On Fri, 27 Oct 2017, Brett Stahlman wrote:
On Fri, Oct 27, 2017 at 9:31 AM, Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
Adding the ARM maintainers.

On Wed, Oct 25, 2017 at 11:54:59AM -0500, Brett Stahlman wrote:
I'm trying to use the "xenforeignmemory" library to read arbitrary
memory ranges from a Xen domain. The code performing the reads is
designed to run in dom0 on a Zynq ultrascale MPSoC (ARM64), though I'm
currently testing in QEMU. I constructed a simple test program, which
reads an arbitrary domid/address pair from the command line, converts
the address (assumed to be physical) to a page frame number, and uses
xenforeignmemory_map() to map the page into the test app's virtual
memory space. Although xenforeignmemory_map() returns a non-NULL
pointer, my attempt to dereference it fails with the following error:

(XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
gpa=0x00000030555000

[   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
at 0x0000007f965f7000
Bus error
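
A minimal sketch of the kind of test program described above (hypothetical,
since the original code was not posted; it assumes the 4 KiB Xen page size
and abbreviates error handling):

/*
 * Hypothetical sketch of the test program described above.
 * Build roughly as: gcc map_test.c -lxenforeignmemory -o map_test
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <xenforeignmemory.h>

#define PAGE_SHIFT_4K 12   /* assumed Xen page shift */

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <domid> <guest-phys-addr>\n", argv[0]);
        return 1;
    }

    uint32_t domid = (uint32_t)strtoul(argv[1], NULL, 0);
    uint64_t paddr = strtoull(argv[2], NULL, 0);
    xen_pfn_t gfn = paddr >> PAGE_SHIFT_4K;  /* physical address -> frame */
    int err = 0;

    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
    if (!fmem) {
        perror("xenforeignmemory_open");
        return 1;
    }

    /* Map a single read-only page; per-page status comes back in 'err'. */
    void *p = xenforeignmemory_map(fmem, domid, PROT_READ, 1, &gfn, &err);
    if (!p) {
        perror("xenforeignmemory_map");
    } else if (err) {
        /* A non-NULL return does not imply success: check the err array. */
        fprintf(stderr, "page not mapped: %s\n", strerror(abs(err)));
    } else {
        /* Dereference only if the page really mapped. */
        printf("first byte: 0x%02x\n", *(volatile unsigned char *)p);
    }

    if (p)
        xenforeignmemory_unmap(fmem, p, 1);
    xenforeignmemory_close(fmem);
    return 0;
}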

I'm not sure what a Bus error means on ARM; have you tried looking
at traps.c:2508 to see if there's some comment explaining why this
fault is triggered?

I believe the fault is occurring because mmap() failed to map the page.
Although xenforeignmemory_map() is indeed returning a non-NULL pointer,
code comments indicate that this does not imply success: page-level
errors might still be returned in the provided "err" array. In my case,
it appears that an EINVAL is produced by mmap(): specifically, I believe
it's coming from privcmd_ioctl_mmap_batch() (drivers/xen/privcmd.c), but
there are a number of conditions that can produce this error code, and I
haven't yet determined which is to blame...
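
A hedged sketch of that per-page check (the array names here are
hypothetical stand-ins for whatever was passed to the map call):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xenforeignmemory.h>

/* Report which of 'npages' frames in 'gfns' failed to map, based on
 * the per-page 'errs' array filled in by xenforeignmemory_map(). */
static int report_map_errors(const xen_pfn_t *gfns, const int *errs,
                             size_t npages)
{
    int failures = 0;

    for (size_t i = 0; i < npages; i++) {
        if (errs[i]) {
            /* The values are errno codes; normalise the sign, since
             * conventions have differed between layers. */
            fprintf(stderr, "gfn 0x%llx: %s\n",
                    (unsigned long long)gfns[i], strerror(abs(errs[i])));
            failures++;
        }
    }
    return failures;
}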

So although I'm not sure why I would get an "address size" fault, it
makes sense that the pointer dereference would generate some sort of
paging-related fault, given that the page mapping was unsuccessful.
Hopefully, ARM developers will be able to explain why it was
unsuccessful, or at least give me an idea of what sorts of things could
cause a mapping attempt to fail... At this point, I'm not particular
about what address I map. I just want to be able to read known data at a
fixed (non-paged) address (e.g., kernel code/data), so I can prove to
myself that the page is actually mapped.

The fault means "Data Abort from a lower Exception level". It could be
an MMU fault or an alignment fault, according to the ARM ARM.
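
For reference, a quick way to decode that HSR value (a rough sketch; the
field layout is per the ARMv8 ARM):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t hsr = 0x93810007;           /* from the trap message above */
    unsigned ec   = (hsr >> 26) & 0x3f;  /* Exception Class */
    unsigned dfsc = hsr & 0x3f;          /* Data Fault Status Code */

    /* EC 0x24 = Data Abort from a lower Exception level;
     * DFSC 0x07 = translation fault, level 3. */
    printf("EC=0x%02x DFSC=0x%02x\n", ec, dfsc);
    return 0;
}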

I guess that the address range is not good. What DomU addresses are you
trying to map?

The intent was to map fixed "guest physical" addresses corresponding to
(e.g) the "zero page" of a guest's running kernel. Up until today, I'd

What do you mean by "zero page"? Is it guest physical address 0? If so, the current guest memory layout does not have anything mapped at that address.

assumed that a PV guest's kernel would be loaded at a known "guest
physical" address (like 0x100000 on i386), and that such addresses
corresponded to the gfn's expected by xenforeignmemory_map(). But now I
suspect this was an incorrect assumption, at least for the PV case. I've
had trouble finding relevant documentation on the Xen site, but I did
find a presentation earlier today suggesting that for PV guests, gfn ==
mfn, which, IIUC, would effectively preclude the use of fixed addresses
in a PV guest. IOW, unlike an HVM kernel, a PV kernel cannot be loaded
at a "known" address (e.g., 0x100000 on i386).

Perhaps my use case (reading a guest kernel's code/data from dom0) makes
sense for an HVM, but not a PV? Is it not possible for dom0 to use the
foreignmemory interface to map PV guest pages read-only, without knowing
in advance what, if anything, those pages represent in the guest? Or is
the problem that the very concept of "guest physical" doesn't exist in a
PV? I guess it would help if I had a better understanding of what sort
of frame numbers are expected by xenforeignmemory_map() when the target
VM is a PV. Is the Xen code the only documentation for this sort of
thing, or is there some place I could get a high-level overview?
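
As an aside, libxc appears to ship xc_translate_foreign_address(), which
walks a vCPU's page tables to turn a guest-virtual address (e.g. a kernel
symbol) into a frame that can then be mapped. A hedged sketch, for x86
guests at least (I have not verified it is implemented for Arm); the
domid and address below are placeholders:

#include <stdio.h>
#include <xenctrl.h>

int main(void)
{
    uint32_t domid = 1;                              /* placeholder domid */
    unsigned long long kva = 0xffffffff81000000ULL;  /* placeholder kernel VA */

    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    if (!xch)
        return 1;

    /* Walks the guest's page tables in the context of vCPU 0. */
    unsigned long gfn = xc_translate_foreign_address(xch, domid, 0, kva);
    if (gfn)
        printf("frame: 0x%lx\n", gfn);

    xc_interface_close(xch);
    return 0;
}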

I am a bit confused by the rest of this e-mail. There is no concept of HVM or PV on Arm; that distinction is x86-specific. On Arm there is a single type of guest that borrows the good parts of both HVM and PV.

For instance, as with HVM, the hardware is used to provide a separate address space for each virtual machine; Arm calls that stage-2 translation. So gfn != mfn.

Cheers,

--
Julien Grall

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
https://lists.xen.org/xen-users

 

