
Re: [Xen-devel] [PATCH qemu-traditional] ioreq: Support 32-bit default_ioport_* accesses

On 05/25/2016 10:35 AM, Ian Jackson wrote:
> Ian Jackson writes ("Re: [PATCH qemu-traditional] ioreq: Support 32-bit 
> default_ioport_* accesses"):
>> Boris Ostrovsky writes ("[PATCH qemu-traditional] ioreq: Support 32-bit 
>> default_ioport_* accesses"):
>>> Recent changes in ACPICA (specifically, Linux commit 66b1ed5aa8dd ("ACPICA:
>>> ACPI 2.0, Hardware: Add access_width/bit_offset support for
>>> acpi_hw_write()") result in guests issuing 32-bit accesses to IO space.
>>> QEMU needs to be able to handle them.
>> I'm kind of missing something here.  If the specification has recently
>> been updated to permit this, why should old hardware support it ?
>> (I tried to find the Linux upstream git commit you're referring to but
>> my linux.git is up to date and it seems not to be fetching within a
>> reasonable time, so I thought I would reply now.)
> I have looked at this commit now and I am none the wiser.
> It says just "This patch adds access_width/bit_offset support in
> acpi_hw_write()".  I also looked at the two linked messages:
>   https://github.com/acpica/acpica/commit/48eea5e7
>   https://bugs.acpica.org/show_bug.cgi?id=1240
> and none of this explains why this support is needed in our
> deep-frozen ancient branch.

IIUIC, the Linux/ACPICA patch makes ACPICA use the correct field in
ACPI's Generic Address Structure (defined in the 6.0 spec). Before the
patch it used the register's bit_width; now it uses access_size.
According to the spec, access_size 0 means undefined/legacy access.
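For reference, a minimal C sketch of the structure under discussion. The
field names follow the ACPI 6.0 spec, not the actual ACPICA or hvmloader
definitions, and the helper access_bytes() is hypothetical:

```c
#include <stdint.h>

/* Sketch of ACPI's Generic Address Structure (GAS); names follow the
 * ACPI 6.0 spec, not any particular implementation. */
struct gas {
    uint8_t  space_id;     /* address space, e.g. 1 = system I/O */
    uint8_t  bit_width;    /* register width in bits (legacy sizing) */
    uint8_t  bit_offset;
    uint8_t  access_size;  /* 0 = undefined/legacy, 1..4 = byte..qword */
    uint64_t address;
};

/* Per the spec, access_size n in 1..4 selects a 2^(n-1)-byte access;
 * 0 leaves the width undefined (the legacy case discussed here). */
static unsigned access_bytes(uint8_t access_size)
{
    if (access_size == 0 || access_size > 4)
        return 0;          /* undefined: caller must pick a fallback */
    return 1u << (access_size - 1);
}
```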

I just looked at what hvmloader provides, and at least for the FADT
access_size is 0. And I wonder whether ACPICA uses 4-byte accesses in
these cases.

So maybe, instead of trying to patch qemu-trad, I should see whether I
can make hvmloader provide a proper access size. Let me poke at that.
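As a rough illustration of that idea (not the actual hvmloader code:
struct gas, the constants, and gas_fill_io() are all hypothetical), a
helper could derive access_size from the register width instead of
leaving it 0:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical GAS definition; field names follow the ACPI 6.0 spec. */
struct gas {
    uint8_t  space_id, bit_width, bit_offset, access_size;
    uint64_t address;
};

enum { GAS_ACCESS_UNDEFINED = 0, GAS_ACCESS_BYTE, GAS_ACCESS_WORD,
       GAS_ACCESS_DWORD, GAS_ACCESS_QWORD };

/* Fill a GAS for a system-I/O-space register, setting access_size
 * explicitly rather than leaving it 0 (undefined/legacy). */
static void gas_fill_io(struct gas *g, uint64_t addr, uint8_t bit_width)
{
    memset(g, 0, sizeof(*g));
    g->space_id  = 1;            /* system I/O space */
    g->address   = addr;
    g->bit_width = bit_width;
    switch (bit_width) {
    case 8:  g->access_size = GAS_ACCESS_BYTE;  break;
    case 16: g->access_size = GAS_ACCESS_WORD;  break;
    case 32: g->access_size = GAS_ACCESS_DWORD; break;
    case 64: g->access_size = GAS_ACCESS_QWORD; break;
    default: g->access_size = GAS_ACCESS_UNDEFINED; break;
    }
}
```

With access_size populated this way, ACPICA would no longer need to
guess a width for these registers.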

