
Re: [Xen-devel] Setting up a call to discuss PCI Emulation - Future Direction

On Fri, 13 Apr 2018 11:01:49 +0100
Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:

>On Thu, Apr 12, 2018 at 05:50:00PM +0100, Lars Kurth wrote:
>> On 12/04/2018, 17:41, "Roger Pau Monne" <roger.pau@xxxxxxxxxx>
>> wrote:
>>     On Thu, Apr 12, 2018 at 05:32:57PM +0100, Lars Kurth wrote:
>>     >    may work. For me Mon, Wed and Fri’s generally work at those
>>     > time-slots. Next week is a little busy for me, so I would
>>     > prefer the following week. If you could fill out the following
>>     > Google poll, if this week works that would be great. Otherwise
>>     > please scream.  
>>     I'm afraid I'm on vacation from the 21st to the 29th of April,
>> so I won't be able to join the meeting unless we move it to the week
>> after. Let's see what people think of the current dates.
>>     Roger.
>> Hi, I changed the dates to the week after. Poll so far has been
>> invalidated.
>> See https://doodle.com/poll/gdnmcrvnibmw563n  
>Thanks! I've already fixed my vote.
>I guess this will come later, but we need a clear agenda of items
>because the x86 and ARM topics are probably going to be completely
>different (albeit all related to PCI).

1. Different approaches to handling certain critical chipset-specific
   registers (first of all the MCH PCIEXBAR), which are currently
   emulated by QEMU, and the role QEMU should play in the emulation of
   MMCONFIG accesses.

2. MMIO hole sizing in general (for HVM): which passthrough use cases
   need it, plus its requirements and limitations. This is tied to the
   emulated chipset-specific resources and depends on the solution
   chosen for #1.

Before the meeting I'll try to describe the possible implementations for
making the multiple PCI device emulators feature compatible with emulated
MMCONFIG, so we have a basis for discussion. There are currently at least
three possible directions for solving this problem.
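As a strawman for that discussion, here is a hypothetical sketch of one
direction: a single component decodes the MMCONFIG access and dispatches it
to whichever device emulator has claimed that BDF. The names here
(`claim_bdf`, `route_cfg`, `emulator_id`) are illustrative and do not
correspond to any existing Xen or QEMU interface.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_CLAIMS   16
#define NO_EMULATOR  (-1)

struct claim {
    uint16_t bdf;       /* bus:8 | dev:5 | func:3 */
    int emulator_id;
};

static struct claim claims[MAX_CLAIMS];
static int nr_claims;

/* A device emulator registers interest in one bus/device/function. */
static void claim_bdf(uint8_t bus, uint8_t dev, uint8_t func, int emu)
{
    claims[nr_claims].bdf = (uint16_t)(bus << 8 | dev << 3 | func);
    claims[nr_claims].emulator_id = emu;
    nr_claims++;
}

/* Route a decoded config access to the emulator owning that BDF;
 * unclaimed BDFs fall back to NO_EMULATOR (e.g. read as all-ones). */
static int route_cfg(uint8_t bus, uint8_t dev, uint8_t func)
{
    uint16_t bdf = (uint16_t)(bus << 8 | dev << 3 | func);

    for (int i = 0; i < nr_claims; i++)
        if (claims[i].bdf == bdf)
            return claims[i].emulator_id;

    return NO_EMULATOR;
}
```

The open question the three directions differ on is where this decode and
dispatch lives: in Xen itself, in a privileged "master" emulator such as
QEMU, or split between the two.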
