
Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support for Xen



>>> On 29.02.16 at 12:52, <haozhong.zhang@xxxxxxxxx> wrote:
> On 02/29/16 03:12, Jan Beulich wrote:
>> >>> On 29.02.16 at 10:45, <haozhong.zhang@xxxxxxxxx> wrote:
>> > On 02/29/16 02:01, Jan Beulich wrote:
>> >> >>> On 28.02.16 at 15:48, <haozhong.zhang@xxxxxxxxx> wrote:
>> >> > Anyway, we may avoid some conflicts between ACPI tables/objects by
>> >> > restricting which tables and objects can be passed from QEMU to Xen:
>> >> > (1) For ACPI tables, Xen does not accept those it builds itself,
>> >> >     e.g. DSDT and SSDT.
>> >> > (2) Xen does not accept ACPI tables for devices that are not attached
>> >> >     to a domain, e.g. an NFIT cannot be passed if a domain does not
>> >> >     have vNVDIMM.
>> >> > (3) For ACPI objects, Xen only accepts namespace devices and requires
>> >> >     that their names do not conflict with existing ones provided by Xen.
>> >> 
>> >> And how do you imagine to enforce this without parsing the
>> >> handed AML? (Remember there's no AML parser in hvmloader.)
>> > 
>> > As I proposed in my last reply, instead of passing an entire ACPI
>> > object, QEMU passes the device name and the AML code under the AML
>> > device entry separately. Because the name is given explicitly, no AML
>> > parser is needed in hvmloader.
>> 
>> I must not only have missed that proposal, but I also don't see
>> how you mean this to work: Are you suggesting for hvmloader to
>> construct valid AML from the passed in blob? Or are you instead
>> considering to pass redundant information (name once given
>> explicitly and once embedded in the AML blob), allowing the two
>> to be out of sync?
> 
> I mean the former one.

Which will involve adding how much new code to it?

Jan
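
[Editor's note] For reference, the construction under discussion, wrapping a
QEMU-supplied AML blob into a Device() definition under a separately passed
name, can be sketched in C as below. The encodings (ExtOpPrefix 0x5B,
DeviceOp 0x82, PkgLength, 4-character NameSeg) follow the ACPI specification;
the function names are illustrative and are not actual hvmloader code.

```c
#include <stdint.h>
#include <string.h>

/*
 * Encode an AML PkgLength covering 'payload' bytes plus the PkgLength
 * field itself.  Returns the number of PkgLength bytes written (1-4),
 * or 0 if the length cannot be encoded (> 2^28 - 1).
 */
static unsigned encode_pkg_length(uint8_t *out, unsigned payload)
{
    for (unsigned nbytes = 1; nbytes <= 4; nbytes++) {
        unsigned total = payload + nbytes;  /* PkgLength includes itself */
        unsigned max = (nbytes == 1) ? 0x3F
                                     : (1u << (4 + 8 * (nbytes - 1))) - 1;
        if (total > max)
            continue;
        if (nbytes == 1) {
            out[0] = (uint8_t)total;        /* bits 7-6 zero: 1-byte form */
        } else {
            /* Bits 7-6: count of following bytes; bits 3-0: low nibble. */
            out[0] = (uint8_t)(((nbytes - 1) << 6) | (total & 0x0F));
            total >>= 4;
            for (unsigned i = 1; i < nbytes; i++, total >>= 8)
                out[i] = (uint8_t)(total & 0xFF);
        }
        return nbytes;
    }
    return 0;
}

/*
 * Wrap 'body' (the AML inside the device) into Device(name) { body }.
 * 'name' must be a valid 4-character NameSeg, e.g. "NVDR".
 * Returns the total number of bytes written to 'out'.
 */
static unsigned wrap_aml_device(uint8_t *out, const char name[4],
                                const uint8_t *body, unsigned body_len)
{
    unsigned pos = 0;

    out[pos++] = 0x5B;                      /* ExtOpPrefix */
    out[pos++] = 0x82;                      /* DeviceOp */
    pos += encode_pkg_length(out + pos, 4 + body_len); /* NameSeg + body */
    memcpy(out + pos, name, 4);             /* NameSeg */
    pos += 4;
    memcpy(out + pos, body, body_len);      /* device's inner AML */
    return pos + body_len;
}
```

Because the name arrives out of band, code of roughly this size is all the
AML handling hvmloader would need; the blob's contents are copied verbatim
and never parsed.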

