
Re: grant_table_op v2 support for HVM?

On Mon, 20 Apr 2020 at 22:56, Andrew Cooper
<andrew.cooper3@xxxxxxxxxx> wrote:
> Really?  The status handling is certainly different, but v2 is much
> harder to use correctly.

In which sense?

From the granter's standpoint it seems to be just checking the status
in a different place. Of course you can't atomically check the flags
and status any more, but with cooperating grantees that shouldn't be a
problem - once a grantee indicates it's done with the grant and unmaps
the pages, it doesn't map them again. Even e.g. Linux xbdback with
feature-persistent just keeps the page mapped until it decides to g/c it.

Actually, connected to this - am I correct to assume that for small
requests (say under 1500 bytes) it's faster to do just a memory copy
using the grant than to map+unmap the granted page into the grantee's
memory space, due to the cost of TLB flushes on the grantee side?

> You want add_to_physmap(), requesting XENMAPSPACE_grant_table and or-ing
> XENMAPIDX_grant_table_status into the index.  (Because a new
> XENMAPSPACE_grant_status apparently wasn't the most logical way to
> extend the existing interface.)

This indeed works, so NetBSD can use v2 for both PV and HVM, thank you!

Interestingly, the Linux kernel doesn't seem to use
XENMAPIDX_grant_table_status anywhere; I found only the standard setup
using the GNTTABOP_get_status_frames hypercall. How is the HVM case
handled in Linux - is it just using v1?

I have another unrelated question, for MSI/MSI-X support in Dom0.

Is it necessary to do anything special to properly use the pirq/GSI
returned by physdev_op PHYSDEVOP_map_pirq?
After the map call for MSI interrupts (which succeeds), I only execute
the regular PHYSDEVOP_alloc_irq_vector for it, but interrupts don't
seem to be delivered under Dom0 right now (the same code works natively).

Of course this is likely a bug in my code somewhere; I'd just like to
rule out that anything else is necessary on the Xen side.



