
Re: [Xen-devel] [virtio-dev] Re: VIRTIO - compatibility with different virtualization solutions



On Thu, Feb 20, 2014 at 4:54 PM, Rusty Russell <rusty@xxxxxxxxxxx> wrote:
> Daniel Kiper <daniel.kiper@xxxxxxxxxx> writes:
>> Hey,
>>
>> On Thu, Feb 20, 2014 at 06:18:00PM +1030, Rusty Russell wrote:
>>> Ian Campbell <Ian.Campbell@xxxxxxxxxx> writes:
>>> > On Wed, 2014-02-19 at 10:56 +1030, Rusty Russell wrote:
>>> >> For platforms using EPT, I don't think you want anything but guest
>>> >> addresses, do you?
>>> >
>>> > No, the arguments for preventing unfettered access by backends to
>>> > frontend RAM applies to EPT as well.
>>>
>>> I can see how you'd parse my sentence that way, I think, but the two
>>> are orthogonal.
>>>
>>> AFAICT your grant-table access restrictions are page granularity, though
>>> you don't use page-aligned data (e.g. in xen-netfront).  This level of
>>> access control is possible using the virtio ring too, but no one has
>>> implemented such a thing AFAIK.
>>
>> Could you say in short how it should be done? DMA API is an option but
>> if there is a simpler mechanism available in VIRTIO itself we will be
>> happy to use it in Xen.
>
> OK, this challenged me to think harder.
>
> The queue itself is effectively a grant table (as long as you don't give
> the backend write access to it).  The available ring tells you where the
> buffers are and whether they are readable or writable.  The used ring
> tells you when they're used.
>
> However, performance would suck due to no caching: you'd end up doing a
> map and unmap on every packet.  I'm assuming Xen currently avoids that
> somehow?  Seems likely...
>
> On the other hand, if we wanted a more Xen-like setup, it would look
> like this:
>
> 1) Abstract away the "physical addresses" to "handles" in the standard,
>    and allow some platform-specific mapping setup and teardown.

At the risk of beating a dead horse, passing handles (grant
references) is going to be slow.  virtio-blk would never be as fast as
xen-blkif.  I don't want to see virtio adopt a bouncing mechanism like
the one blkfront has developed, especially not in a way where every
driver has to implement it on its own.

I really think the best paths forward for virtio on Xen are either (1)
reject the memory isolation requirement and leave things as they are, or
(2) assume bounce buffering at the transport layer (by using the PCI DMA
API).

Regards,

Anthony Liguori

> 2) In Linux, implement a virtio DMA ops which handles the grant table
>    stuff for Xen (returning grant table ids + offset or something?),
>    noop for others.  This would be a runtime thing.
>
> 3) In Linux, change the drivers to use this API.
>
> Now, Xen will not be able to use vhost to accelerate, but it doesn't now
> anyway.
>
> Am I missing anything?
>
> Cheers,
> Rusty.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

