
[Xen-devel] [PATCH v5 2/6] vpci: fix deferral of long operations



The current logic to handle long-running operations is flawed because it
doesn't prevent the guest vcpu from running. Fix this by raising a
scheduler softirq when preemption is required, so that the do_softirq
call in the guest entry path performs a rescheduling. Also move the
call to vpci_process_pending into handle_hvm_io_completion, together
with the IOREQ code that handles pending I/O instructions.

Note that a scheduler softirq is also raised when the long-running
operation is queued, in order to prevent the guest vcpu from resuming
execution.
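
For reference, a minimal sketch (not the actual Xen entry code) of how the
raised SCHEDULE_SOFTIRQ gets consumed before the vcpu re-enters the guest.
guest_entry_path() and resume_guest() are illustrative placeholders only;
softirq_pending(), smp_processor_id() and do_softirq() are the existing
softirq primitives:

  /*
   * Illustrative only: handle_hvm_io_completion() returning false keeps
   * the vcpu from resuming, and the softirq raised there (or by
   * defer_map()) is serviced here, so the scheduler can preempt the vcpu
   * while vPCI mapping work is still pending.
   */
  static void guest_entry_path(struct vcpu *v)      /* hypothetical name */
  {
      if ( softirq_pending(smp_processor_id()) )
          do_softirq();                /* SCHEDULE_SOFTIRQ -> scheduler */

      resume_guest(v);                 /* hypothetical: re-enter the guest */
  }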

Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Cc: Julien Grall <julien.grall@xxxxxxx>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Tim Deegan <tim@xxxxxxx>
---
Changes since v4:
 - Add a comment to clarify defer_map raising a scheduler softirq.
 - Reword commit message.
 - Raise the scheduler softirq in handle_hvm_io_completion rather than
   vpci_process_pending.

Changes since v3:
 - Don't use a tasklet.
---
 xen/arch/x86/hvm/ioreq.c  | 9 ++++++---
 xen/drivers/vpci/header.c | 5 +++++
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index a56d634f31..71f23227e6 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -85,9 +85,6 @@ bool hvm_io_pending(struct vcpu *v)
     struct hvm_ioreq_server *s;
     unsigned int id;
 
-    if ( has_vpci(d) && vpci_process_pending(v) )
-        return true;
-
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
@@ -186,6 +183,12 @@ bool handle_hvm_io_completion(struct vcpu *v)
     enum hvm_io_completion io_completion;
     unsigned int id;
 
+    if ( has_vpci(d) && vpci_process_pending(v) )
+    {
+        raise_softirq(SCHEDULE_SOFTIRQ);
+        return false;
+    }
+
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index 39dffb21fb..c9bdc2ced3 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -184,6 +184,11 @@ static void defer_map(struct domain *d, struct pci_dev *pdev,
     curr->vpci.mem = mem;
     curr->vpci.cmd = cmd;
     curr->vpci.rom_only = rom_only;
+    /*
+     * Raise a scheduler softirq in order to prevent the guest from resuming
+     * execution with pending mapping operations.
+     */
+    raise_softirq(SCHEDULE_SOFTIRQ);
 }
 
 static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
-- 
2.19.1

