[Xen-devel] [PATCH 5/6] vpci: fix execution of long running operations
BAR map/unmap is a long running operation that needs to be preempted in
order to avoid overrunning the assigned vCPU time (or even triggering
the watchdog).

Current logic for this preemption is wrong, and won't work at all for
AMD since only Intel makes use of hvm_io_pending (and even in that case
the current code is wrong).

Instead move the code that performs the mapping/unmapping to
handle_hvm_io_completion and use do_softirq in order to execute the
pending softirqs while the {un}mapping takes place. Note that
do_softirq can also trigger a context switch to another vCPU, so having
pending vpci operations shouldn't prevent fair scheduling.

When the {un}map operation is queued (as a result of a trapped PCI
access) a schedule softirq is raised in order to force a context switch
and the execution of handle_hvm_io_completion.

Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Cc: Julien Grall <julien.grall@xxxxxxx>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Tim Deegan <tim@xxxxxxx>
---
 xen/arch/x86/hvm/ioreq.c  |  6 +++---
 xen/drivers/vpci/header.c | 16 ++++++++++------
 xen/include/xen/vpci.h    |  6 +++---
 3 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 3569beaad5..cf3abd0f58 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -85,9 +85,6 @@ bool hvm_io_pending(struct vcpu *v)
     struct hvm_ioreq_server *s;
     unsigned int id;
 
-    if ( has_vpci(d) && vpci_process_pending(v) )
-        return true;
-
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
@@ -186,6 +183,9 @@ bool handle_hvm_io_completion(struct vcpu *v)
     enum hvm_io_completion io_completion;
     unsigned int id;
 
+    if ( has_vpci(d) )
+        vpci_process_pending(v);
+
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index 9234de9b26..7a1380a5e7 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -118,7 +118,7 @@ static void modify_decoding(const struct pci_dev *pdev, bool map, bool rom_only)
         cmd);
 }
 
-bool vpci_process_pending(struct vcpu *v)
+void vpci_process_pending(struct vcpu *v)
 {
     if ( v->vpci.mem )
     {
@@ -126,10 +126,11 @@ bool vpci_process_pending(struct vcpu *v)
             .d = v->domain,
             .map = v->vpci.map,
         };
-        int rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
+        int rc;
 
-        if ( rc == -ERESTART )
-            return true;
+        while ( (rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data)) ==
+                -ERESTART )
+            do_softirq();
 
         spin_lock(&v->vpci.pdev->vpci->lock);
         /* Disable memory decoding unconditionally on failure. */
@@ -149,8 +150,6 @@ bool vpci_process_pending(struct vcpu *v)
          */
         vpci_remove_device(v->vpci.pdev);
     }
-
-    return false;
 }
 
 static int __init apply_map(struct domain *d, const struct pci_dev *pdev,
@@ -183,6 +182,11 @@ static void defer_map(struct domain *d, struct pci_dev *pdev,
     curr->vpci.mem = mem;
     curr->vpci.map = map;
     curr->vpci.rom_only = rom_only;
+    /*
+     * Force a scheduler softirq in order to execute handle_hvm_io_completion
+     * (as part of hvm_do_resume) before attempting to return to guest context.
+     */
+    raise_softirq(SCHEDULE_SOFTIRQ);
 }
 
 static int modify_bars(const struct pci_dev *pdev, bool map, bool rom_only)
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index af2b8580ee..df0537f523 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -50,10 +50,10 @@ uint32_t vpci_hw_read32(const struct pci_dev *pdev, unsigned int reg,
                         void *data);
 
 /*
- * Check for pending vPCI operations on this vcpu. Returns true if the vcpu
- * should not run.
+ * Execute pending vPCI operations on this vcpu.
+ * Note that this call might force a rescheduling.
  */
-bool __must_check vpci_process_pending(struct vcpu *v);
+void vpci_process_pending(struct vcpu *v);
 
 struct vpci {
     /* List of vPCI handlers for a device. */
-- 
2.19.0

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel