
Re: [Xen-devel] [Draft F] Xen on ARM vITS Handling



On Thu, 2015-06-11 at 10:40 +0100, Ian Campbell wrote:
> Here's a quick update based on feedback prior to the meeting on #xenarm at
> 12:00 BST / 7:00AM EDT / 4:30PM IST (which is ~1:20 from now)

Here is the log.

(12:02:38) ijc: VK: So, are you happy that the design doc is something which 
could be implemented?
(12:03:26) VK: ijc: I have some doubts, as conclusions were not reached in some 
cases, or I have missed following up
(12:03:53) ijc: OK, shall we go through them then?
(12:04:12) ijc: I'll be working from the Draft F I sent out an hour ago
(12:04:24) VK: ijc: I have listed my queries and will go through them 
topic-wise as per draft E
(12:04:44) ijc: ok
(12:05:59) VK: ijc: 2.2.8 - Xen will use the completion INT mechanism and 
trigger a softIRQ for scheduling,
(12:05:59) VK:    and one completion INT per domain is allocated for mapping 
the completion INT to the
(12:05:59) VK:    domain's vITS. OK?
(12:06:41) ijc: VK: Why is that needed? AFAICT the pITS driver can either poll 
or use a single host wide completion interrupt
(12:06:52) ijc: From the PoV of vits we don't care how the pits driver gets 
completions I think
(12:07:01) ijc: Or at least this design does not require it 
(12:08:07) ijc: There is no softirq nor ITS scheduling in draft E, so I don't 
think it is needed. Do you understand differently?
(12:08:58) VK: ijc:  As there is no info about ITS scheduling in draft E, I 
want some clarification.
(12:09:29) ijc: VK: Everything is done synchronously in the GITS_CWRITER handler
(12:09:48) ijc: Things have been arranged such that the commands are all cheap 
enough to do this
(12:10:16) ijc: The section "Command Queue Virtualisation" covers this I think
(12:10:29) ijc: 7.11 in draft E
(12:10:41) VK: ijc:  OK, then it is the same as what I have done in the RFC v2 
patch
(12:10:49) ijc: and 7.14 in draft F
(12:11:21) ijc: VK: At least that aspect, maybe, I think. I don't know though.
(12:11:56) VK: ijc: OK
(12:12:28) julieng: VK: The difference is that there is no physical command 
sent to the ITS
(12:13:00) julieng: nor any allocation
(12:13:39) ijc: julieng: Right, the intention was to do as much stuff at setup 
or assignment time such that the command processing was cheap
(12:15:10) VK: ijc:  But the vCPU still polls, right? And for that you have 
proposed a rudimentary form of preemption
(12:15:55) VK: ijc: in 7.14 in draft F. Is there any guidance on this?
(12:16:01) ijc: VK I think the polling and the preemption are unrelated. The 
write to GITS_CWRITER is processed synchronously, so the vcpu cannot be polling 
then, I think?
(12:16:26) ijc: The preemption thing is an optional extension to consider to 
allow that synchronous processing to be split up e.g. to allow other vcpus to 
run
(12:17:22) ijc: If other vcpus are reading GITS_CREADR then I suppose we would 
want them to see progress, i.e. by updating the internal CREADR stepwise rather 
than all at once at the end.
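[The stepwise-progress idea above can be sketched roughly as follows. This is a 
hypothetical, simplified sketch; the struct and function names are illustrative 
and not taken from the draft:]

```c
#include <assert.h>
#include <stdint.h>

#define ITS_CMD_SIZE 32  /* each GICv3 ITS command is 32 bytes */

/* Illustrative per-vITS state (names are hypothetical) */
struct vits_state {
    uint64_t creadr;   /* guest-visible read pointer (GITS_CREADR) */
    uint64_t cwriter;  /* last value the guest wrote to GITS_CWRITER */
    uint64_t cq_size;  /* command queue size in bytes */
};

static void vits_process_one(struct vits_state *v)
{
    /* ... decode and emulate the 32-byte command at v->creadr ... */
    v->creadr = (v->creadr + ITS_CMD_SIZE) % v->cq_size;
}

/* Process synchronously in the GITS_CWRITER handler, but advance the
 * internal CREADR after each command so other vcpus reading GITS_CREADR
 * see progress, rather than updating it all at once at the end. */
static void vits_handle_cwriter(struct vits_state *v, uint64_t new_cwriter)
{
    v->cwriter = new_cwriter;
    while (v->creadr != v->cwriter)
        vits_process_one(v);
}
```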
(12:18:36) VK: ijc: but when the vCPU posts a command on a CWRITER write, the 
vCPU polls for completion in the pITS driver
(12:19:04) julieng: VK: There is no command sent to the physical ITS.
(12:19:20) ijc: If a command generates a request to the generic code which 
results in a call to the pits driver then it is up to the pits driver how to 
deal with that and polling would be a valid response
(12:19:27) ijc: s/response/way to implement that/
(12:19:52) ijc: VK: I've deliberately decoupled the vits and pits here (via the 
abstraction of the generic code) so that from a vits PoV you are not required 
to worry about it
(12:20:22) julieng: ijc: AFAICT, there is no vITS command requiring a physical 
command anymore
(12:20:37) ijc: julieng: that would be even better ;-)
(12:20:44) ijc: and I think you are right
(12:21:01) julieng: If not, this would be a concern as a guest would be able to 
block a pCPU for a while.
(12:21:20) ijc: (I wouldn't be too worried about that in the end, but it is 
moot anyway)
(12:22:43) ijc: VK: Does that resolve your concern?
(12:23:54) VK: ijc: I am not getting it here. How does a vITS command _not_ 
translate to a physical ITS command?
(12:24:13) ijc: VK: Everything is setup at start of day, so there is nothing to 
do during vits command processing
(12:24:36) ijc: Look through 7.15.2.* and you should see no calls to anything 
which interacts with the physical its
(12:25:28) ijc: (NB: 7.15.2.7 and .8 should read "Since LPI Configuration table 
updates are handled synchronously, there
(12:25:28) ijc: is nothing to do here." in Draft F; I missed updating them)
(12:32:22) VK: ijc:  when you say it is set up at start of day, do you mean the 
guest Device Table and ITT are directly updated by Xen instead of sending 
physical ITS commands?
(12:32:59) ijc: VK: 
http://xenbits.xen.org/people/ianc/vits/draftF.html#device-discoveryregistration-and-configuration
(12:33:11) ijc: and the following section "6 Device Assignment"
(12:33:28) ijc: All of the events are routed to pLPIs during setup (either xen 
boot or during device assignment)
(12:37:36) VK: ijc: OK, on PHYSDEVOPS_pci_assign_device, the MAPD & MAPVI 
commands required for this device are sent for all events of this device
(12:38:24) ijc: VK: According to 5.5 that happens upon discovery/registration, 
i.e. pci_device_add, rather than during assign.
(12:38:53) ijc: Since the physical ITT mapping doesn't depend on the specific 
domain I don't think it needs to be deferred
(12:39:29) VK: ijc: if so, then only the routing of interrupts will be changed 
to the assigned domain on device assignment, right?
(12:40:02) ijc: right. the (Device,Event)=>(pLPI) mapping is always there. On 
assignment what changes is what Xen does with the pLPI
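[A minimal sketch of that split, assuming illustrative names and a fixed 
per-device event count, neither of which is taken from the draft:]

```c
#include <assert.h>
#include <stdint.h>

#define NR_EVENTS 4
#define PLPI_BASE 8192  /* LPI INTIDs start at 8192 in GICv3 */

/* Hypothetical per-device state: the (device, event) -> pLPI mapping is
 * built once at pci_device_add time and never changes; assignment only
 * changes which domain the pLPIs are routed to. */
struct its_device {
    uint32_t devid;
    uint32_t plpi[NR_EVENTS];  /* fixed per-event pLPIs, set at add time */
    int domid;                 /* the only field changed on assignment */
};

static void its_device_add(struct its_device *d, uint32_t devid,
                           uint32_t first_plpi)
{
    d->devid = devid;
    for (int i = 0; i < NR_EVENTS; i++)
        d->plpi[i] = first_plpi + i;  /* mapping fixed at registration */
    d->domid = 0;                     /* initially handled by Xen/dom0 */
}

static void its_device_assign(struct its_device *d, int domid)
{
    d->domid = domid;  /* reroute pLPIs; (device,event)->pLPI unchanged */
}
```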
(12:41:32) VK: ijc: you have also mentioned "Events will be assigned to 
physical collections in a round-robin fashion". Why? Is round-robin chosen 
just to distribute events fairly?
(12:42:50) ijc: VK: It was arbitrary, but better than "all to collection 0" or 
something
(12:45:11) VK: ijc:  next is on 7.11 in draft F (ITT Virtualisation)
(12:45:19) VK: struct vitt { uint16_t valid:1; uint16_t pad:15; 
uint16_t collection; uint32_t vlpi; }
(12:46:19) VK: ijc: Is it OK to store the pLPI and virtual collection in vitt? 
Because this helps to easily map vLPI to pLPI
(12:47:15) ijc: There is no 1:1 map from vLPI to pLPI, so no. What need do you 
foresee for this mapping?
(12:47:46) ijc: the collection in vitt is already virtual
(12:48:18) ijc: s/collection/vcollection/ done on that struct and the uses
(12:50:20) VK: ijc: I think (Device, vID) is mapped to pLPI
(12:51:05) ijc: VK: Where?
(12:53:41) VK: ijc: OK, because in this design the pLPI is generated based on 
(Device, Event); pLPI is not mapped to vLPI.
(12:53:53) ijc: Right
(12:54:38) ijc: I'm quite likely to get preempted by another thing shortly 
after 1pm BST (i.e. between 6 and 15 mins from now).
(12:54:43) ijc: Is there anything else we need to cover?
(12:55:43) VK: ijc: now vLPI and pLPI are mapped using the Event
(12:56:07) ijc: For draftG I've got updates for 7.15.2.* mentioned above, a 
change to vitt to contain vcollection not collection and I need to update 7.14 
("Command Queue Virt") to consider multiple vcpus all pounding 
GITS_CREADR/CWRITER in parallel and how that should work (which I need to think 
about a bit)
(12:56:43) ijc: VK: mapped using Event> I'm not sure I follow, or was that just 
finishing your previous thought?
(12:57:06) ijc: Shall I post minutes (i.e. the IRC log) to xen-devel? VK and 
julieng are you OK with that?
(12:58:21) julieng: ijc: I'm fine with that. Thanks
(12:58:22) VK: ijc:  OK. I will ping you whenever I have some queries tomorrow.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
