
Re: [Xen-devel] [RFC XEN PATCH v3 06/39] acpi: probe valid PMEM regions via NFIT



On 11/03/17 14:15 +0800, Chao Peng wrote:
> 
> > +static void __init acpi_nfit_register_pmem(struct acpi_nfit_desc *desc)
> > +{
> > +    struct nfit_spa_desc *spa_desc;
> > +    struct nfit_memdev_desc *memdev_desc;
> > +    struct acpi_nfit_system_address *spa;
> > +    unsigned long smfn, emfn;
> > +
> > +    list_for_each_entry(memdev_desc, &desc->memdev_list, link)
> > +    {
> > +        spa_desc = memdev_desc->spa_desc;
> > +
> > +        if ( !spa_desc ||
> > +             (memdev_desc->acpi_table->flags &
> > +              (ACPI_NFIT_MEM_SAVE_FAILED | ACPI_NFIT_MEM_RESTORE_FAILED |
> > +               ACPI_NFIT_MEM_FLUSH_FAILED | ACPI_NFIT_MEM_NOT_ARMED |
> > +               ACPI_NFIT_MEM_MAP_FAILED)) )
> > +            continue;
> 
> If a failure is detected, is it reasonable to continue? I think we
> should at least print some messages.

I got something wrong here. I should iterate over the SPA structures and
check all memdevs in each SPA range. If any memdev carries a failure flag,
skip the whole SPA range and print an error message. A rough sketch of that
revised loop is below.
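
For illustration only, a minimal sketch of that approach. It assumes the
descriptor keeps the parsed SPA structures on a desc->spa_list analogous to
memdev_list, and that nfit_spa_desc also has a 'link' member; neither is
taken from the quoted hunk:

static void __init acpi_nfit_register_pmem(struct acpi_nfit_desc *desc)
{
    struct nfit_spa_desc *spa_desc;
    struct nfit_memdev_desc *memdev_desc;
    struct acpi_nfit_system_address *spa;
    unsigned long smfn, emfn;
    bool failed;

    /* Assumed field: desc->spa_list links all parsed SPA structures. */
    list_for_each_entry(spa_desc, &desc->spa_list, link)
    {
        spa = spa_desc->acpi_table;

        /* Only persistent-memory SPA ranges are of interest. */
        if ( memcmp(spa->range_guid, nfit_spa_pmem_guid, 16) )
            continue;

        /* Check every memdev that maps into this SPA range. */
        failed = false;
        list_for_each_entry(memdev_desc, &desc->memdev_list, link)
        {
            if ( memdev_desc->spa_desc != spa_desc )
                continue;
            if ( memdev_desc->acpi_table->flags &
                 (ACPI_NFIT_MEM_SAVE_FAILED | ACPI_NFIT_MEM_RESTORE_FAILED |
                  ACPI_NFIT_MEM_FLUSH_FAILED | ACPI_NFIT_MEM_NOT_ARMED |
                  ACPI_NFIT_MEM_MAP_FAILED) )
            {
                failed = true;
                break;
            }
        }

        /* Skip the whole SPA range if any of its memdevs reports failure. */
        if ( failed )
        {
            printk(XENLOG_ERR
                   "NFIT: failed NVDIMM in SPA range %#lx - %#lx, skipped\n",
                   (unsigned long)spa->address,
                   (unsigned long)(spa->address + spa->length));
            continue;
        }

        smfn = paddr_to_pfn(spa->address);
        emfn = paddr_to_pfn(spa->address + spa->length);
        printk(XENLOG_INFO "NFIT: PMEM MFNs 0x%lx - 0x%lx\n", smfn, emfn);
    }
}

The inner walk over memdev_list reuses the spa_desc back-pointer from the
quoted hunk to find the memdevs belonging to each SPA range.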

Haozhong

> 
> Chao
> > +
> > +        spa = spa_desc->acpi_table;
> > +        if ( memcmp(spa->range_guid, nfit_spa_pmem_guid, 16) )
> > +            continue;
> > +        smfn = paddr_to_pfn(spa->address);
> > +        emfn = paddr_to_pfn(spa->address + spa->length);
> > +        printk(XENLOG_INFO "NFIT: PMEM MFNs 0x%lx - 0x%lx\n", smfn, emfn);
> > +    }
> > +}
