Re: [Xen-devel] [PATCH v2] xen: fix setup of PVH Dom0 memory map
On 23/05/14 09:13, Jan Beulich wrote:
>>>> On 22.05.14 at 19:01, <roger.pau@xxxxxxxxxx> wrote:
>> @@ -369,6 +375,100 @@ static __init void pvh_map_all_iomem(struct domain *d)
>> nump = end_pfn - start_pfn;
>> pvh_add_mem_mapping(d, start_pfn, start_pfn, nump);
>> }
>> +
>> + /*
>> + * Add the memory removed by the holes at the end of the
>> + * memory map.
>> + */
>> + for ( i = 0, entry = e820.map; i < e820.nr_map && nr_holes > 0;
>> + i++, entry++ )
>> + {
>> + if ( entry->type != E820_RAM )
>> + continue;
>> +
>> + end_pfn = PFN_UP(entry->addr + entry->size);
>> + if ( end_pfn <= nr_pages )
>> + continue;
>> +
>> + navail = end_pfn - nr_pages;
>> + nmap = navail > nr_holes ? nr_holes : navail;
>
> min()?
>
>> + nr_holes -= nmap;
>> + start_pfn = PFN_DOWN(entry->addr) < nr_pages ?
>> + nr_pages : PFN_DOWN(entry->addr);
>
> max_t()?
Fixed both.
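(With that, the two lines would presumably just become something like:

    nmap = min(nr_holes, navail);
    start_pfn = max_t(unsigned long, PFN_DOWN(entry->addr), nr_pages);

assuming nr_holes and navail share a type so plain min() is fine, otherwise
min_t() would be needed; the exact form is whatever ends up in v3.)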
>> + /*
>> + * Populate this memory region using the pages
>> + * previously removed by the MMIO holes.
>> + */
>> + page_list_for_each ( page, &d->page_list )
>> + {
>> + mfn = page_to_mfn(page);
>> + if ( get_gpfn_from_mfn(mfn) != INVALID_M2P_ENTRY )
>> + continue;
>
> This being inside another loop, this is going to be unnecessarily
> expensive on big memory systems without dom0_mem=, as you're
> going to repeatedly skip a huge (and increasing) number of pages
> here. So apart from needing to add a process_pending_softirqs()
> call somewhere, I think you should have a cursor into the list
> (which isn't going to change under your feet).
>
> Talking of which, Mukesh, I think the lack of periodic calls to
> process_pending_softirqs() is also a latent problem in the function
> without Roger's changes. Furthermore now that I look at it again I
> wonder whether this whole thing shouldn't be SRAT based (when
> there is an SRAT) in order to avoid mapping hotplug memory
> regions as MMIO.
I've added a call to process_pending_softirqs in pvh_add_mem_mapping and
also one here, in case we are dealing with big holes. I will leave the
SRAT stuff for later, since it's probably a much wider change.
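Roughly along these lines (a sketch only, not the actual v3 code; it assumes
the compact page-list accessors, where page_list_next() returns NULL past the
tail, and picks an arbitrary 0xfff batching interval for the softirq calls):

    /*
     * Cursor kept across the outer e820 loop, so pages already consumed
     * are not rescanned from the head of the list each time.
     */
    struct page_info *page = page_list_first(&d->page_list);
    unsigned long scanned = 0;

    /* ... outer loop over the e820 map as before ... */

        for ( ; page != NULL; page = page_list_next(page, &d->page_list) )
        {
            /* Yield to softirqs periodically while walking a huge list. */
            if ( !(++scanned & 0xfff) )
                process_pending_softirqs();

            mfn = page_to_mfn(page);
            if ( get_gpfn_from_mfn(mfn) != INVALID_M2P_ENTRY )
                continue;

            rc = guest_physmap_add_page(d, start_pfn, mfn, 0);
            if ( rc != 0 )
                panic("Unable to add gpfn %#lx mfn %#lx to Dom0 physmap: %d",
                      start_pfn, mfn, rc);

            start_pfn++;
            if ( --nmap == 0 )
                break;
        }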
>> +
>> + rc = guest_physmap_add_page(d, start_pfn, mfn, 0);
>> + if ( rc != 0 )
>> + panic("Unable to add gpfn %#lx mfn %#lx to Dom0 physmap",
>> + start_pfn, page_to_mfn(page));
>
> Please print rc here too, to help diagnosing eventual problems. Also
> please use "mfn" rather than "page_to_mfn(page)".
Done.
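I.e. roughly:

    rc = guest_physmap_add_page(d, start_pfn, mfn, 0);
    if ( rc != 0 )
        panic("Unable to add gpfn %#lx mfn %#lx to Dom0 physmap: %d",
              start_pfn, mfn, rc);

(the exact message wording may still differ in v3).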
>> + start_pfn++;
>> + if ( --nmap == 0 )
>> + break;
>> + }
>> + ASSERT(nmap == 0);
>> + }
>> +
>> + ASSERT(nr_holes == 0);
>> +}
>> +
>> +static __init void pvh_setup_e820(struct domain *d, unsigned long nr_pages)
>> +{
>> + struct e820entry *entry, *entry_guest;
>> + unsigned int i;
>> + unsigned long pages, cur_pages = 0;
>> +
>> + /*
>> + * Craft the e820 memory map for Dom0 based on the hardware e820 map.
>> + */
>> + d->arch.e820 = xzalloc_array(struct e820entry, e820.nr_map);
>> + if ( !d->arch.e820 )
>> + panic("Unable to allocate memory for Dom0 e820 map");
>> + entry_guest = d->arch.e820;
>> +
>> + /* Clamp e820 memory map to match the memory assigned to Dom0 */
>> + for ( i = 0, entry = e820.map; i < e820.nr_map; i++, entry++ )
>> + {
>> + if ( entry->type != E820_RAM )
>> + {
>> + *entry_guest = *entry;
>> + goto next;
>> + }
>> +
>> + if ( nr_pages == cur_pages )
>> + {
>> + /*
>> + * We already have all the assigned memory,
>> + * skip this entry
>> + */
>> + continue;
>> + }
>> +
>> + *entry_guest = *entry;
>> + pages = (entry_guest->size + PAGE_SIZE-1) >> PAGE_SHIFT;
>
> PFN_UP()?
>
>> + if ( (cur_pages + pages) > nr_pages )
>> + {
>> + /* Truncate region */
>> + entry_guest->size = (nr_pages - cur_pages) << PAGE_SHIFT;
>> + cur_pages = nr_pages;
>> + }
>> + else
>> + {
>> + cur_pages += pages;
>> + }
>> +next:
>
> Labels indented by at least one space please (avoiding diff's -p option
> picking this up as hunk context in future patches).
Both of the above comments are fixed; I'm going to send v3 now.
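For reference, those two changes would presumably end up looking roughly like
this (illustrative only, not the actual v3 hunk):

    pages = PFN_UP(entry_guest->size);
    ...
 next:
    /* (rest of the loop body unchanged) */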
Roger.