
Re: [Xen-devel] Xen on ARM vITS Handling Draft B (Was Re: Xen/arm: Virtual ITS command queue handling)



On 22/05/15 14:58, Vijay Kilari wrote:
> On Fri, May 22, 2015 at 6:19 PM, Julien Grall <julien.grall@xxxxxxxxxx> wrote:
>>> 1) Command translation:
>>> -----------------------------------
>>>
>>>  - ITS commands contain Device ID, Event ID (vID), Collection ID (vCID),
>>>    and Target Address (vTA) parameters
>>>  - All these parameters should be validated
>>>  - These parameters should be translated from Virtual to Physical
>>>
>>> Of the existing GICv3 ITS commands, MAPC, MAPD and MAPVI/MAPI are the
>>> time-consuming commands, as they create entries in the Xen ITS
>>> structures which are used to validate other ITS commands.
>>>
>>> 1.1 MAPC command translation
>>> -----------------------------------------------
>>>    Format: MAPC vCID, vTA
>>>
>>>    - vTA is validated against the Redistributor address or CPU number
>>>      (depending on GITS_TYPER.PTA) by searching the Redistributor
>>>      regions, and the Physical Collection ID and Physical Target
>>>      Address are retrieved
>>>    - Each vITS will have a cid_map (struct cid_mapping) which holds the
>>>      mapping of Virtual Collection ID and Virtual Target Address to
>>>      Physical Collection ID.
>>
>> How is the vCID mapped to the pCID? How would that fit with interrupt
>> migration?
> 
> The physical ITS driver creates one collection ID (pCID) per CPU.
> A DomU's vCID should always be in the range 0 to MAXVCPUS, as
> GITS_TYPER.PTA is set to 0 (as suggested by you below).

Why do you speak about GITS_TYPER.PTA? No matter the value of this
field, there will always be no more than MAXVCPUS collections.
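For illustration, such a bounded mapping could look like the sketch
below (purely illustrative; the field names are my assumptions, not
part of the draft):

    /* Sketch only: one entry per possible vCID. Since the number of
     * collections is bounded by the number of vCPUs, a flat array
     * indexed by vCID gives O(1) lookup during command translation.
     * MAX_VIRT_CPUS is Xen's per-arch vCPU limit. */
    struct cid_mapping
    {
        uint8_t nr_cids;                /* number of valid entries */
        uint64_t vta[MAX_VIRT_CPUS];    /* vCID -> virtual target address */
        uint16_t pcid[MAX_VIRT_CPUS];   /* vCID -> physical collection ID */
    };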

> So migration should be within 0 - 8. There is scope for improvement
> here: migrating the interrupt to the pCPU on which the vCPU is running.

Are you aware that the physical collection may contain interrupts from
other domains and from Xen?

>>> 1.2 MAPD Command translation:
>>> -----------------------------------------------
>>>    Format: MAPD device, ITT IPA, ITT Size
>>>
>>>    MAPD is sent with the Validation bit set if the device needs to be
>>>    added, and with the bit cleared when the device is removed.
>>>
>>> If the Validation bit is set:

One more concern about MAPD: how do you handle a guest which wants to
change the ITT by calling MAPD again?

>>      - Check if the device is assigned to the domain
>>
>>>    - Allocate memory for its_device struct
>>
>> Allocation can't be done in interrupt context.
> 
> Can't we allocate in softirq context?

It should be possible in softirq context, although we still want
something quick.
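For reference, deferring the command handling out of IRQ context with
Xen's softirq machinery could look roughly like this (VITS_CMD_SOFTIRQ
and vits_process_pending() are hypothetical names, not existing code):

    #include <xen/softirq.h>

    /* Sketch: process pending vITS commands in softirq context, where
     * memory allocation is permitted. VITS_CMD_SOFTIRQ would need a new
     * entry in the softirq enumeration. */
    static void vits_cmd_softirq(void)
    {
        vits_process_pending();    /* may call xzalloc() & friends */
    }

    static void __init vits_init(void)
    {
        open_softirq(VITS_CMD_SOFTIRQ, vits_cmd_softirq);
    }

    /* The GITS_CWRITER trap handler would then just do:
     *     raise_softirq(VITS_CMD_SOFTIRQ);
     */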

> 
>>
>>>    - Validate the ITT IPA & ITT size and update the its_device struct
>>>    - Find the number of vectors (nrvecs) for this device by querying a
>>>      PCI helper function
>>
>> This could be read only once when the device is added to Xen via the
>> hypercall PHYSDEV_*pci*
> 
> If so, this value should be stored in the pci_dev struct.

Or in a specific its_device structure in the ITS... because the
{,v}ITS code has to be as device-agnostic as possible.
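If it lives on the ITS side, the per-device state could be grouped as
in this sketch (all field names are illustrative assumptions; the
draft only names its_device and vlpi_map):

    /* Sketch: device-agnostic per-device state, populated once when the
     * device is assigned/added to Xen rather than at MAPD emulation
     * time. */
    struct its_device
    {
        uint32_t devid;             /* physical device ID */
        uint32_t nrvecs;            /* read once from the PCI helper */
        paddr_t itt_addr;           /* physical ITT, once MAPD validates it */
        uint32_t itt_size;
        struct vlpi_map *vlpi_map;  /* vLPI <-> pLPI mapping */
    };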

>>
>>>    - Allocate nrvecs LPIs
>>>    - Allocate memory for a struct vlpi_map for this device. This
>>>      vlpi_map holds the mapping of Virtual LPI to Physical LPI and ID.
>>>    - Find the physical ITS node to which this device is assigned
>>
>> Not necessary with a 1 vITS = 1 pITS model, which seems to be the
>> solution we will choose.
>>
>>>    - Call p2m_lookup on the ITT IPA and get the physical ITT address
>>>    - Validate the ITT size
>>
>> You already do this in "Validate ITT IPA & ITT size...". Also, all the
>> checks should be done before any allocation.
>>
>>>    - Generate/format physical ITS command: MAPD, ITT PA, ITT Size
>>>
>>>    Here the overhead is the memory allocation for its_device and vlpi_map
>>
>> As suggested earlier, the memory allocation for its_device and vlpi_map
>> can be done when the device is assigned to the domain or added to Xen.
>>
>> The only things you would have to do here are checking the ITT size and
>> marking the device as enabled.
>>
>>>
>>> If the Validation bit is not set:
>>>     - Validate that the device exists by checking the vITS device list
>>
>> Using a list can be very expensive... I would use a radix tree.
>>
>>>     - Clear all vLPIs assigned to this device
>>
>> What happens to the interrupts assigned to this device? Are they
>> disabled? Unrouted?
> 
>     They should be disabled via an LPI configuration table update. I
> think release_irq is called.

So calling release_irq on every associated LPI? That could take very long.

>>
>>>     - Remove this device from the vITS device list
>>>     - Free the memory
>>>
>>> 1.3 MAPVI/MAPI Command translation:
>>> -----------------------------------------------
>>>    Format: MAPVI device, ID, vID, vCID
>>>
>>> - Validate if the device exits by checking vITS device list
>>
>> exists
>>
>>> - Validate vCID and get the pCID by searching cid_map
>>> - Check whether vID already has an entry in the vlpi_entries of this
>>>   device. If not, allocate a pID from the vlpi_map of this device and
>>>   update vlpi_entries with the new pID
>>> - Allocate an irq descriptor and add it to the RB tree
>>> - Call route_irq_to_guest() for this pID
>>> - Generate/format the physical ITS command: MAPVI device, ID, pID, pCID
>>>
>>> Here the overhead is allocating the physical ID, allocating memory for
>>> the irq descriptor, and routing the interrupt
>>
>> An overhead which can be removed by routing the IRQ when the device is
>> assigned.
> 
>    But routing requires the pID, which is not known when the device is
> assigned. nrvecs could be as high as 256/2K, so we cannot route all the
> pIDs at assignment time.

Why? You just need to allocate a chunk of pIDs and have an optimized
function to route multiple IRQs at once. We could also improve the way
we store the irq descs.
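Something like the sketch below, where the chunk of pLPIs is allocated
contiguously up front (the helper name is made up, and I am assuming
route_irq_to_guest() takes (d, virq, irq, devname)):

    /* Sketch: route a contiguous chunk of pre-allocated pLPIs to a guest
     * in one go, instead of one route_irq_to_guest() call per MAPVI
     * emulation. */
    static int route_lpi_chunk(struct domain *d, unsigned int first_virq,
                               unsigned int first_pirq, unsigned int nr)
    {
        unsigned int i;
        int rc;

        for ( i = 0; i < nr; i++ )
        {
            rc = route_irq_to_guest(d, first_virq + i, first_pirq + i,
                                    "vits");
            if ( rc )
                return rc;  /* caller unroutes entries 0 .. i-1 */
        }

        return 0;
    }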

>>
>>> All the other ITS commands, like MOVI, DISCARD, INV, INVALL, INT,
>>> CLEAR and SYNC, just validate and generate the physical command
>>
>> With the data structure you suggested, that's not the case: the
>> validation can be very expensive.
> 
> Which data structure?

The list ...
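To spell it out: with a per-vITS device list, every command validation
is a list walk, while a radix tree keyed by the device ID makes it a
single lookup. A minimal sketch using Xen's radix tree helpers (the
surrounding function names are hypothetical):

    #include <xen/radix-tree.h>

    /* Sketch: per-vITS device store keyed by device ID, so validating a
     * command is one radix_tree_lookup() instead of a list traversal. */
    static struct radix_tree_root its_devices;

    static int its_device_add(uint32_t devid, struct its_device *dev)
    {
        return radix_tree_insert(&its_devices, devid, dev);
    }

    static struct its_device *its_device_find(uint32_t devid)
    {
        return radix_tree_lookup(&its_devices, devid);
    }

    /* radix_tree_init(&its_devices) must be called once at vITS setup. */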

-- 
Julien Grall
