[Xen-changelog] Fix a race condition for multi-thread qemu dma, where vmx linux guests show warning "dma interrupt lost" and dma becomes very slow
# HG changeset patch
# User kaf24@xxxxxxxxxxxxxxxxxxxx
# Node ID 3c687c6905e79747953b0458fa5cd5ec83560e44
# Parent  ec370b3d2df3043989b7cb996c28e458d0b5304f
Fix a race condition for multi-thread qemu dma, where vmx linux guests
show warning "dma interrupt lost" and dma becomes very slow.

Root cause: in the window between raising the IDE irq and setting the
DMA status, if the guest receives the irq and queries the status, it
finds the status not yet ready and treats the interrupt as spurious.
Setting the DMA status before raising the irq fixes this issue.

Signed-off-by: Ke Yu <ke.yu@xxxxxxxxx>

diff -r ec370b3d2df3 -r 3c687c6905e7 tools/ioemu/hw/ide.c
--- a/tools/ioemu/hw/ide.c  Tue Nov 29 01:00:10 2005
+++ b/tools/ioemu/hw/ide.c  Tue Nov 29 10:38:53 2005
@@ -669,6 +669,8 @@
     }
     if (s->io_buffer_index >= s->io_buffer_size && s->nsector == 0) {
         s->status = READY_STAT | SEEK_STAT;
+        s->bmdma->status &= ~BM_STATUS_DMAING;
+        s->bmdma->status |= BM_STATUS_INT;
         ide_set_irq(s);
 #ifdef DEBUG_IDE_ATAPI
         printf("dma status=0x%x\n", s->status);
@@ -736,6 +738,8 @@
         if (n == 0) {
             /* end of transfer */
             s->status = READY_STAT | SEEK_STAT;
+            s->bmdma->status &= ~BM_STATUS_DMAING;
+            s->bmdma->status |= BM_STATUS_INT;
             ide_set_irq(s);
             return 0;
         }
@@ -983,6 +987,8 @@
     if (s->packet_transfer_size <= 0) {
         s->status = READY_STAT;
         s->nsector = (s->nsector & ~7) | ATAPI_INT_REASON_IO | ATAPI_INT_REASON_CD;
+        s->bmdma->status &= ~BM_STATUS_DMAING;
+        s->bmdma->status |= BM_STATUS_INT;
         ide_set_irq(s);
 #ifdef DEBUG_IDE_ATAPI
         printf("dma status=0x%x\n", s->status);
@@ -2065,8 +2071,6 @@
     }
     /* end of transfer */
  the_end:
-    bm->status &= ~BM_STATUS_DMAING;
-    bm->status |= BM_STATUS_INT;
     bm->dma_cb = NULL;
     bm->ide_if = NULL;
 }
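For readers unfamiliar with the race: below is a minimal, self-contained C
sketch of the two orderings. It is an illustration only, not the actual qemu
or guest code; bm_status and guest_irq_handler are hypothetical names, and
the BM_STATUS_* bit values are assumptions for the sketch.

/* Sketch of the race fixed above: the guest's irq handler reads the
 * bus-master DMA status register; if BM_STATUS_INT is not yet set when
 * the irq arrives, the handler treats the interrupt as spurious. */
#include <stdio.h>

#define BM_STATUS_DMAING 0x01   /* assumed bit values for illustration */
#define BM_STATUS_INT    0x04

static unsigned bm_status;

/* Guest-side view: what the irq handler observes when it runs. */
static void guest_irq_handler(const char *ordering)
{
    if (bm_status & BM_STATUS_INT)
        printf("%s: genuine DMA-complete interrupt\n", ordering);
    else
        printf("%s: \"dma interrupt lost\" (status not ready)\n", ordering);
}

int main(void)
{
    /* Buggy order: irq raised first; the guest's handler can run
     * before the device model updates the status register. */
    bm_status = BM_STATUS_DMAING;
    guest_irq_handler("irq-before-status");              /* spurious */
    bm_status = (bm_status & ~BM_STATUS_DMAING) | BM_STATUS_INT;

    /* Fixed order (this patch): status updated first, then irq. */
    bm_status = BM_STATUS_DMAING;
    bm_status = (bm_status & ~BM_STATUS_DMAING) | BM_STATUS_INT;
    guest_irq_handler("status-before-irq");              /* genuine */
    return 0;
}

Reordering alone closes the window because the guest only samples the DMA
status in response to the irq: once the status update is made visible
before ide_set_irq() is called, the handler can never observe the
not-yet-ready state.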