
Re: [Xen-devel] [PATCH v5 17/30] ARM: vITS: add command handling stub and MMIO emulation



Hi Andre,

On 04/06/2017 11:22 PM, André Przywara wrote:
On 06/04/17 22:43, Julien Grall wrote:
On 04/06/2017 12:19 AM, Andre Przywara wrote:
+static int vgic_v3_its_mmio_read(struct vcpu *v, mmio_info_t *info,
+                                 register_t *r, void *priv)
+{
+    struct virt_its *its = priv;
+    uint64_t reg;
+
+    switch ( info->gpa & 0xffff )
+    {
+    case VREG32(GITS_CTLR):
+        if ( info->dabt.size != DABT_WORD ) goto bad_width;
+
+        spin_lock(&its->vcmd_lock);
+        spin_lock(&its->its_lock);
+        if ( its->enabled )
+            reg = GITS_CTLR_ENABLE;
+        else
+            reg = 0;
+
+        if ( its->cwriter == its->creadr )

I think it would be better if you can read atomically cwriter and
creadr.

What do you mean by "atomically" here? To read and compare *both*
registers I have to hold a lock, don't I?

That would avoid taking the vcmd_lock here; taking it would prevent a
guest vCPU from accessing this register while you are processing the
command queue.

That's a noble goal, but I don't know if this is easily feasible.
I wonder if one could use spin_trylock(&its->vcmd_lock) here and report
"not quiescent" if that fails. My understanding is that an OS polls
CTLR until it becomes quiescent (Linux does so), so it would retry,
returning here until the lock actually becomes free.
Would that work?
But looking at the Linux code again I see that this is only done once at
probe time (before the ITS is enabled), so at least from that side it's
not really worth optimizing.

As you may know, I don't buy the reason: "Linux is only doing it once".

In this case my worry is not about optimizing for a guest that calls it often, but about the fact that you might end up with all the vCPUs but one reading GITS_CTLR while the remaining one handles the command queue. The former will then wait on the lock, monopolizing the pCPUs.

The command queue handling is already not ideal (it can take a bit of time), but this would be worse. And I really don't want to see that.

+        *r = vgic_reg64_extract(its->cwriter, info);
+        spin_unlock(&its->vcmd_lock);
+        break;
+    case VREG64(GITS_CREADR):
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+        spin_lock(&its->vcmd_lock);

Ditto.

Currently this does not work, because a reader could observe creadr
after it has reached the end of the buffer (before the code above
resets it to 0). But by rewriting that code to use a local variable
this should be doable.

Please do it.

+
+static int vgic_v3_its_mmio_write(struct vcpu *v, mmio_info_t *info,
+                                  register_t r, void *priv)
+{
+    struct domain *d = v->domain;
+    struct virt_its *its = priv;
+    uint64_t reg;
+    uint32_t reg32, ctlr;
+
+    switch ( info->gpa & 0xffff )
+    {
+    case VREG32(GITS_CTLR):
+        if ( info->dabt.size != DABT_WORD ) goto bad_width;
+
+        spin_lock(&its->vcmd_lock);

I am not sure to understand why taking the vcmd_lock here.

To prevent a nasty guest from turning off the ITS while commands are
still handled.

Please document it.


+        spin_lock(&its->its_lock);
+        ctlr = its->enabled ? GITS_CTLR_ENABLE : 0;
+        reg32 = ctlr;
+        vgic_reg32_update(&reg32, r, info);
+
+        if ( ctlr ^ reg32 )
+            its->enabled = vgic_v3_verify_its_status(its,
+                                                     reg32 & GITS_CTLR_ENABLE);

Shouldn't you print a warning to help a user debug the case where the
ITS is not enabled as expected?

Good idea.
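A self-contained sketch of that warning on the GITS_CTLR write path; the predicate, the write_ctlr() wrapper, and the fprintf stand-in (for what would presumably be a gdprintk(XENLOG_WARNING, ...) in Xen) are all illustrative assumptions:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define GITS_CTLR_ENABLE  (1U << 0)

/* Stand-in for vgic_v3_verify_its_status(): enabling only succeeds
 * once the guest has programmed valid tables (assumed predicate). */
static bool tables_valid;

static bool verify_its_status(bool enable)
{
    return enable && tables_valid;
}

/* Sketch of the GITS_CTLR write path: when the guest asked for ENABLE
 * but the status check rejected it, emit a warning so the silent
 * failure is debuggable. Returns the new enabled state. */
static bool write_ctlr(uint32_t old_ctlr, uint32_t new_ctlr)
{
    bool enabled = old_ctlr & GITS_CTLR_ENABLE;

    if ( old_ctlr ^ new_ctlr )
    {
        enabled = verify_its_status(new_ctlr & GITS_CTLR_ENABLE);
        if ( (new_ctlr & GITS_CTLR_ENABLE) && !enabled )
            fprintf(stderr, "ITS: tables not valid, refusing to enable\n");
    }

    return enabled;
}
```

Clearing the ENABLE bit takes the same path with the check trivially returning false, so disabling needs no extra warning.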

Also, how do you disable it?

Not sure what you mean? vgic_v3_verify_its_status() returns false if the
ENABLE bit has been cleared, so its->enabled becomes false.
Or what am I missing?

I think I wrote the question and answered myself before sending the e-mail. Though I forgot to drop it.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
