[Xen-devel] RE: [PATCH][RFC] Support S3 for MSI interrupt in latest kernel dom0



Yes, I tried it in SLES11 RC1, but some changes are needed both for this patch and
for the kernel, including:

a) This patch calls pci_get_pdev() and is protected by pcidevs_lock, while in
SLES11 it would be pci_lock_pdev() and be protected by the pci_dev's own lock
(see the hypervisor-side sketch after the kernel snippet below).

b) In the .27 kernel, the current pci_restore_msi_state() calls
__pci_restore_msix_state()/__pci_restore_msi_state() one after the other. Since we
now pass the bus/devfn to Xen, only a single hypercall is needed in
pci_restore_msi_state(), and there is no need to distinguish MSI from MSI-X
anymore. It looks something like the following:

/* Ask Xen to restore this device's MSI/MSI-X state after S3 resume. */
static int msi_restore_msi(struct pci_dev *dev)
{
        struct physdev_restore_msi restore_msi;
        int rc;

        /* Xen only needs the bus/devfn; it restores both MSI and MSI-X
         * state from its own saved copy. */
        restore_msi.bus = dev->bus->number;
        restore_msi.devfn = dev->devfn;
        if ((rc = HYPERVISOR_physdev_op(PHYSDEVOP_restore_msi, &restore_msi)))
                printk(KERN_WARNING "restore msi failed\n");

        return rc;
}

void pci_restore_msi_state(struct pci_dev *dev)
{
        msi_restore_msi(dev);
}
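
For reference, the Xen-side handling in do_physdev_op() would be roughly along
these lines. This is only a sketch, not the actual patch: it assumes pcidevs_lock
is taken as a plain spinlock and the pci_get_pdev(bus, devfn) /
pci_restore_msi_state() interfaces mentioned above; a SLES11-based tree would use
pci_lock_pdev() and drop the per-device lock instead.

    case PHYSDEVOP_restore_msi: {
        struct physdev_restore_msi restore_msi;
        struct pci_dev *pdev;

        ret = -EFAULT;
        if ( copy_from_guest(&restore_msi, arg, 1) != 0 )
            break;

        /* Look up the device named by bus/devfn and replay its saved
         * MSI/MSI-X state (sketch only). */
        spin_lock(&pcidevs_lock);
        pdev = pci_get_pdev(restore_msi.bus, restore_msi.devfn);
        ret = pdev ? pci_restore_msi_state(pdev) : -ENODEV;
        spin_unlock(&pcidevs_lock);
        break;
    }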

At least it works for my AHCI mode disk and my e1000 NIC.

Thanks
Yunhong Jiang

Jan Beulich <jbeulich@xxxxxxxxxx> wrote:
>>>> "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> 19.12.08 11:10 >>>
>> Attached is the patch with a new hypercall added. Jan/Keir, can you
>> please have a look at it? I didn't change dom0 since it has already
>> implemented save/restore logic with dom0's internal data structures,
>> but the latest kernel will need this.
> 
> Looks fine to me - did you try it with a patched .27 kernel?
> 
> Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

