
Re: [Xen-devel] VIRTIO - compatibility with different virtualization solutions



Anthony Liguori <anthony@xxxxxxxxxxxxx> writes:
> On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@xxxxxxxxxxx> wrote:
>> Daniel Kiper <daniel.kiper@xxxxxxxxxx> writes:
>>> Hi,
>>>
>>> Below you can find a summary of work regarding VIRTIO compatibility
>>> with different virtualization solutions.  It was done mainly from the
>>> Xen point of view, but the results are quite generic and can be applied
>>> to a wide spectrum of virtualization platforms.
>>
>> Hi Daniel,
>>
>>         Sorry for the delayed response, I was pondering...  CC changed
>> to virtio-dev.
>>
>> From a standard POV: It's possible to abstract out the places where we
>> use 'physical address' into an 'address handle'.  It's also possible to
>> define this per-platform (i.e. Xen-PV vs. everyone else).  This is sane,
>> since Xen-PV is a distinct platform from x86.
>
> I'll go even further and say that "address handle" doesn't make sense either.

I was trying to come up with a unique term, I wasn't trying to define
semantics :)

There are three debates here now: (1) what should the standard say,
(2) how would Linux implement it, and (3) should we use each platform's
PCI IOMMU?

> Just using grant table references is not enough to make virtio work
> well under Xen.  You really need to use bounce buffers a la persistent
> grants.

Wait, if you're using bounce buffers, you didn't make it "work well"!

> I think what you ultimately want is virtio using a DMA API (I know
> benh has scoffed at this but I don't buy his argument at face value)
> and a DMA layer that bounces requests to a pool of persistent grants.

We can have a virtio DMA API, sure.  It'd be a no-op for non-Xen.
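
A minimal sketch of what such a hook might look like (hypothetical names,
not actual Linux or virtio code): the transport fills descriptors through a
per-platform map operation, and on non-Xen platforms that operation is just
the identity on the guest physical address.

        /*
         * Hypothetical sketch only: all names (virtio_dma_ops, gpa_map, ...)
         * are invented for illustration.
         */
        #include <stdint.h>
        #include <stddef.h>

        typedef uint64_t vio_addr_t;            /* what the ring carries */

        struct virtio_dma_ops {
                /* Turn a guest buffer into the handle put in a descriptor. */
                vio_addr_t (*map)(void *buf, size_t len);
                void       (*unmap)(vio_addr_t handle, size_t len);
        };

        /* Non-Xen: the handle *is* the guest physical address. */
        static vio_addr_t gpa_map(void *buf, size_t len)
        {
                (void)len;
                return (vio_addr_t)(uintptr_t)buf; /* stand-in for virt_to_phys() */
        }

        static void gpa_unmap(vio_addr_t handle, size_t len)
        {
                (void)handle;
                (void)len;                      /* nothing to tear down */
        }

        static const struct virtio_dma_ops gpa_dma_ops = {
                .map   = gpa_map,
                .unmap = gpa_unmap,
        };

A Xen-PV implementation of .map could hand back a grant reference, or copy
into a pool of persistently granted bounce pages; everyone else keeps using
the identity ops, so the common case costs one indirect call and nothing more.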

But emulating the programming of an IOMMU seems masochistic.  The PowerPC
folks have made it clear they don't want this.  And no one else has come
up with a compelling reason to want it: virtio passthrough?

>> For platforms using EPT, I don't think you want anything but guest
>> addresses, do you?
>>
>> From an implementation POV:
>>
>> On IOMMU, start here for previous Linux discussion:
>>         
>> http://thread.gmane.org/gmane.linux.kernel.virtualization/14410/focus=14650
>>
>> And this is the real problem.  We don't want to use the PCI IOMMU for
>> PCI devices.  So it's not just a matter of using existing Linux APIs.
>
> Is there any data to back up that claim?

Yes, for powerpc.  Implementer gets to measure, as always.  I suspect
that if you emulate an IOMMU on Intel, your performance will suck too.

> Just because power currently does hypercalls for anything that uses
> the PCI IOMMU layer doesn't mean this cannot be changed.

Does someone have an implementation of an IOMMU which doesn't use
hypercalls, or is this theoretical?

>  It's pretty
> hacky that virtio-pci just happens to work well by accident on power
> today.  Not all architectures have this limitation.

It's a fundamental assumption of virtio that the host can access all of
guest memory.  That's paravirt, not a hack.

But tomayto, tomahto aside, it's unclear to me how you'd build an
efficient IOMMU today.  And it's unclear what benefit you'd gain.  But
the cost for Power is clear.

So if someone wants to do this for PCI, they need to implement it and
benchmark it.  But this is a little orthogonal to the Xen discussion.

Cheers,
Rusty.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

