
Re: [Xen-devel] Xen/arm: Virtual ITS command queue handling



On 15/05/15 11:59, Ian Campbell wrote:
>>>> AFAIU the process suggested, Xen will inject small batches as long as
>>>> the physical command queue is not full.
>>>
>>>> Let's take a simple case: only a single domain is using the vITS on the
>>>> platform. If it injects a huge number of commands, Xen will split them
>>>> into lots of small batches. All the batches will be injected in the same
>>>> pass as long as they fit in the physical command queue. Am I correct?
>>>
>>> That's how it is currently written, yes. With the "possible
>>> simplification" above the answer is no, only a batch at a time would be
>>> written for each guest.
>>>
>>> BTW, it doesn't have to be a single guest, the sum total of the
>>> injections across all guests could also take a similar amount of time.
>>> Is that a concern?
>>
>> Yes, the example with only one guest was easier to explain.
> 
> So as well as limiting the number of commands in each domain's batch we
> also want to limit the total number of batches?

Right. We want to have a "short" scheduling pass no matter the size of
the queue.
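
Something along these lines is what I have in mind (a purely
illustrative sketch: struct pits/vits, pits_queue_space() and
vits_copy_batch() are all made up, only the list helpers come from
Xen's xen/list.h):

#include <xen/types.h>   /* bool_t */
#include <xen/list.h>    /* Xen's intrusive lists */

/* Hypothetical per-pass limits, values picked arbitrarily. */
#define VITS_MAX_CMDS_PER_BATCH    16  /* commands copied per guest batch */
#define VITS_MAX_BATCHES_PER_PASS   4  /* batches across *all* domains   */

struct vits {
    struct list_head sched_link;   /* position on the pITS scheduling list */
    bool_t batch_in_flight;        /* at most one outstanding batch */
    /* ... virtual command queue state ... */
};

struct pits {
    struct list_head sched_list;   /* vITSs with commands pending */
    /* ... physical command queue state ... */
};

static void vits_sched_pass(struct pits *pits)
{
    unsigned int batches = 0;
    struct vits *v;

    list_for_each_entry ( v, &pits->sched_list, sched_link )
    {
        /* Stop after a fixed number of batches even if the physical
         * queue still has room, so the pass stays short. */
        if ( batches == VITS_MAX_BATCHES_PER_PASS ||
             pits_queue_space(pits) < VITS_MAX_CMDS_PER_BATCH )
            break;

        if ( v->batch_in_flight )
            continue;

        /* Copy up to VITS_MAX_CMDS_PER_BATCH commands from the virtual
         * queue to the physical one; returns whether anything was copied. */
        if ( vits_copy_batch(v, pits, VITS_MAX_CMDS_PER_BATCH) )
        {
            v->batch_in_flight = 1;
            batches++;
        }
    }
}

Anything not copied would simply wait for a later pass.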

>>>> I think we have to restrict the total number of batches (i.e. for all
>>>> the domains) injected in the same scheduling pass.
>>>>
>>>> I would even tend to allow only one in-flight batch per domain. That
>>>> would limit the possible problem I pointed out.
>>>
>>> This is the "possible simplification" I think. Since it simplifies other
>>> things (I think) as well as addressing this issue I think it might be a
>>> good idea.
>>
>> With the limit on the number of commands sent per batch, would the
>> fairness you were talking about in the design doc still be required?
> 
> I think we still want to schedule the guests in a strict round-robin
> manner, to avoid one guest monopolising things.

I agree, although I was talking about the fairness you mentioned in
"However this may need some careful thought wrt fairness for
guests submitting frequent small batches of commands vs those sending
large batches."
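
To make that concrete, the pass could pick domains via something like
the following rotation instead of always walking the list from the head
(same imaginary structures as in the sketch above), so a guest posting
one huge batch cannot starve a guest posting frequent small batches:

/* Pick the next vITS to service and move it to the back of the list,
 * so every domain gets at most one batch before any domain gets a
 * second one. */
static struct vits *vits_pick_next(struct pits *pits)
{
    struct vits *v, *tmp;

    list_for_each_entry_safe ( v, tmp, &pits->sched_list, sched_link )
    {
        if ( v->batch_in_flight )
            continue;              /* already has its one batch in flight */

        list_del(&v->sched_link);
        list_add_tail(&v->sched_link, &pits->sched_list);
        return v;
    }

    return NULL;                   /* nothing schedulable this pass */
}

When a batch completes, batch_in_flight would be cleared and the domain
becomes eligible again on its next turn.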

>>>>> Therefore it is proposed that the restriction that a single vITS maps
>>>>> to one pITS be retained. If a guest requires access to devices
>>>>> associated with multiple pITSs then multiple vITS should be
>>>>> configured.
>>>>
>>>> Having multiple vITS per domain brings other issues:
>>>>    - How do you know the number of ITS to describe in the device tree at 
>>>> boot?
>>>
>>> I'm not sure. I don't think 1 vs N is very different from the question
>>> of 0 vs 1 though, somehow the tools need to know about the pITS setup.
>>
>> I don't see why the tools would need to know the pITS setup.
> 
> Even with only a single vITS the tools need to know if the system has 0,
> 1, or more pITSs, to know whether to create a vITS at all or not.

In the 1 vITS solution they don't: it's only necessary to add a new GIC
define for the gic_version field in xen_arch_domainconfig.

Although I agree that in a multiple vITS configuration we would need to
know the number of vITS to create (not necessarily the number of pITS).
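
For illustration only (the _ITS define below is entirely hypothetical,
name and value invented for this example, and the struct is a
trimmed-down stand-in for the real public header):

#include <stdint.h>

/* Hypothetical new value for the existing gic_version field. */
#define XEN_DOMCTL_CONFIG_GIC_V3_ITS  3   /* vGICv3 with a single vITS */

/* Trimmed-down stand-in for the real xen_arch_domainconfig. */
struct xen_arch_domainconfig {
    uint8_t gic_version;
    /* ... other fields elided ... */
};

/* The toolstack would then just request the ITS-capable flavour when
 * creating the domain: */
static void request_vits(struct xen_arch_domainconfig *cfg)
{
    cfg->gic_version = XEN_DOMCTL_CONFIG_GIC_V3_ITS;
}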

>>>>    - How do you tell the guest that the PCI device is mapped to a
>>>> specific vITS?
>>>
>>> Device Tree or IORT, just like on native and just like we'd have to tell
>>> the guest about that mapping even if there was a single vITS.
>>
>> Right, although the root controller can only be attached to one ITS.
>>
>> It will be necessary to have multiple root controllers in the guest in
>> the case where we pass through devices using different ITSs.
>>
>> Is pci-back able to expose multiple root controllers?
> 
> In principle the xenstore protocol supports it, but AFAIK all toolstacks
> have only ever used "bus" 0, so I wouldn't be surprised if there were
> bugs lurking.
> 
> But we could fix those, I don't think it is a requirement that this
> stuff suddenly springs into life on ARM even with existing kernels.

Right.

> 
>>> I think the complexity of having one vITS target multiple pITSs is going
>>> to be quite high in terms of data structures and the amount of
>>> thinking/tracking scheduler code will have to do, mostly down to out of
>>> order completion of things put in the pITS queue.
>>
>> I understand the complexity, but exposing one vITS per pITS means that
>> we are exposing the underlying hardware to the guest.
> 
> Some aspect of it, yes, but it is still a virtual ITS.

Yes and no. It makes the migration case more complex (even without PCI
passthrough). See below.

>> If we are going to expose multiple vITS to the guest, we should only use
>> a vITS for guests using PCI passthrough. This is because migration won't
>> be compatible with it.
> 
> It would be possible to support one s/w-only vITS for migration, i.e. the
> evtchn thing at the end, but for the general case that is correct. On
> x86 I believe that if you hot unplug all passthrough devices you can
> migrate and then plug in other devices at the other end.

What about migration onto a platform having fewer/more pITSs (AFAIU on
Cavium it may be possible because there is only one node)? If we want to
migrate a vITS we would have to handle the case where there is a mismatch,
which brings us back to the solution with one vITS.

As said in your event channel paragraph, we should put aside the event
channel injected by the vITS for now. It was only a suggestion and it
will require more thought than the vITS emulation.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

