[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: XENMAPSPACE_grant_table vs. GNTTABOP_setup_table

On Tuesday, 09.06.2020 at 11:22, Andrew Cooper wrote:
> There is a little bit of history here...
> GNTTABOP_setup_table was the original PV way of doing things (specify
> size as an input, get a list of frames as an output to map), and
> XENMAPSPACE_grant_table was the original HVM way of doing things (as
> mapping is the other way around - I specify a GFN which I'd like to turn
> into a grant table mapping).
> When grant v2 came along, only XENMAPSPACE_grant_table was updated to
> be compatible.  i.e. you have to use XENMAPSPACE_grant_table to map the
> status frames even if you used GNTTABOP_setup_table previously.
> It is a mistake that GNTTABOP_setup_table was usable in HVM guests to
> begin with.  Returning -1 is necessary to avoid an information leak (the
> physical address of the frames making up the grant table), which an HVM
> guest shouldn't know, and has no use for knowing.
> And on that note, ARM is extra special because the grant API is
> specified to use host physical addresses rather than guest physical (at
> least for dom0, for reasons of there generally not being an IOMMU),
> which is why it does use the old PV way.
> It is all a bit of a mess.

Thanks for explaining, this is helpful.

So, going with the grant v2 ABI, is there a modern equivalent of
GNTTABOP_get_status_frames? Reading memory.h, I'm guessing that it might be
XENMEM_add_to_physmap with space=XENMAPSPACE_grant_table and
idx=(XENMAPIDX_grant_table_status + N), where N is the frame I want, but
this is not explicitly mentioned anywhere, and Linux uses the GNTTABOP.
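To make the guess concrete, here is a sketch of the argument encoding I
have in mind. The constant names and values are taken from Xen's public
headers (memory.h, xen.h); the struct layout is abbreviated for
illustration, and the actual HYPERVISOR_memory_op(XENMEM_add_to_physmap,
...) call is omitted since I haven't verified this path:

```c
#include <stdint.h>

/* Constants as in Xen's public headers; struct layout abbreviated. */
#define XENMAPSPACE_grant_table      1
#define XENMAPIDX_grant_table_status 0x80000000UL
#define DOMID_SELF                   0x7FF0

struct xen_add_to_physmap {
    uint16_t      domid;  /* DOMID_SELF for the calling guest */
    uint16_t      size;   /* unused by XENMAPSPACE_grant_table */
    unsigned int  space;  /* which mapping space */
    unsigned long idx;    /* index into that space */
    unsigned long gpfn;   /* GFN at which the frame should appear */
};

/*
 * Build the arguments to map status frame N of our own grant table at
 * gpfn.  The hypercall itself is omitted; this only shows the encoding:
 * the high bit of idx selects status frames rather than shared frames.
 */
static struct xen_add_to_physmap map_status_frame(unsigned int n,
                                                  unsigned long gpfn)
{
    struct xen_add_to_physmap xatp = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_grant_table,
        .idx   = XENMAPIDX_grant_table_status + n,
        .gpfn  = gpfn,
    };
    return xatp;
}
```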

Further to that, what is the format of the grant status frames?
grant_table.h doesn't have much to say about it.

And lastly, given that I want the v2 grant ABI exclusively, I presume it's
sufficient to call GNTTABOP_set_version (version=2) first thing and abort
if it fails? Presumably the default is always v1 at start of day?
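In other words, something like the sketch below at start of day.  The op
number and struct are from the public grant_table.h (the version field is
in/out: on return it holds the version actually in use);
fake_grant_table_op() is a stand-in I made up for whatever hypercall stub
the environment provides (e.g. HYPERVISOR_grant_table_op on Linux), so
the control flow can be exercised outside a guest:

```c
#define GNTTABOP_set_version 8  /* as in Xen's public grant_table.h */

struct gnttab_set_version {
    unsigned short version;  /* in: requested; out: version now in use */
};

/*
 * Stand-in for the real hypercall stub.  It pretends the hypervisor
 * granted v2; a real guest would call its HYPERVISOR_grant_table_op
 * wrapper here instead.
 */
static int fake_grant_table_op(unsigned int op, void *arg,
                               unsigned int count)
{
    if (op == GNTTABOP_set_version && count == 1) {
        ((struct gnttab_set_version *)arg)->version = 2;
        return 0;
    }
    return -38;  /* -ENOSYS */
}

/* Demand v2 first thing; a nonzero return means the caller must abort. */
static int require_grant_v2(void)
{
    struct gnttab_set_version gsv = { .version = 2 };
    int rc = fake_grant_table_op(GNTTABOP_set_version, &gsv, 1);

    /* Check both rc and the echoed version, in case only v1 is offered. */
    return (rc == 0 && gsv.version == 2) ? 0 : -1;
}
```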




