
[Xen-devel] Ubuntu 16.04.1 LTS kernel 4.4.0-57 over-allocation and xen-access fail



Hello,

We've come across a weird phenomenon: an Ubuntu 16.04.1 LTS HVM guest
running kernel 4.4.0-57, installed via XenCenter on XenServer Dundee,
seems to eat up all the RAM it can:

(XEN) [  394.379760] d1v1 Over-allocation for domain 1: 524545 > 524544
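
(For reference, those counts are in 4 KiB pages: 524,544 pages * 4 KiB =
2,098,176 KiB = 2,049 MiB, so the guest appears to have already populated
every page of its maximum allocation before anything tries to add the ring
page.)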

This leads to a problem with xen-access, or more precisely with libxc,
which does this in xc_vm_event_enable() (this is Xen 4.6):

ring_page = xc_map_foreign_batch(xch, domain_id, PROT_READ | PROT_WRITE,
                                 &mmap_pfn, 1);

if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
{
    /* Map failed, populate ring page */
    rc1 = xc_domain_populate_physmap_exact(xch, domain_id, 1, 0, 0,
                                           &ring_pfn);
    if ( rc1 != 0 )
    {
        PERROR("Failed to populate ring pfn\n");
        goto out;
    }

The first time everything works fine: xen-access can map the ring page.
But most of the time the second run fails in the
xc_domain_populate_physmap_exact() call, and this is again dumped in the
Xen log (once for each failed attempt):

(XEN) [  395.952188] d0v3 Over-allocation for domain 1: 524545 > 524544
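
The over-allocation message itself just says that the domain is already at
its maximum page count, so populating one more page for the ring pushes it
over the limit. A minimal, untested sketch of a standalone check (the domid
argument is a placeholder) that compares the populated pages with the
maximum via the same libxc interface:

/* Untested diagnostic sketch against Xen 4.6 libxc (link with -lxenctrl):
 * prints how many pages the domain has populated versus its maximum,
 * to confirm there is no headroom left for the ring page. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <xenctrl.h>

int main(int argc, char *argv[])
{
    uint32_t domid = (argc > 1) ? atoi(argv[1]) : 1;
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    xc_dominfo_t info;

    if ( !xch )
        return 1;

    /* xc_domain_getinfo() returns the number of domains it filled in,
     * and may return a different domain if domid does not exist. */
    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 || info.domid != domid )
    {
        fprintf(stderr, "Cannot look up domain %u\n", domid);
        xc_interface_close(xch);
        return 1;
    }

    /* max_memkb is in KiB; divide by 4 to get 4 KiB pages. */
    printf("d%u: %lu pages populated, %lu pages max\n",
           domid, info.nr_pages, info.max_memkb / 4);

    xc_interface_close(xch);
    return 0;
}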

This is the only guest we've seen so far doing this. All other HVM
guests (Linux, Windows) behave.

We've tried setting max_pfn and mem as kernel parameters for the guest,
and even setting HVM-shadow-multiplier to 10 from XenCenter, but none of
it has made any difference.
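
A possible workaround, untested and sketched here only as an idea (the
single-page bump is an assumption), would be to give the domain one page
of headroom with xc_domain_setmaxmem() before starting xen-access, so that
the populate call for the ring page no longer trips the over-allocation
check:

/* Untested workaround sketch (Xen 4.6 libxc, link with -lxenctrl):
 * raise the domain's maximum by one 4 KiB page so that populating
 * the vm_event ring page no longer exceeds the allocation limit. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <xenctrl.h>

int main(int argc, char *argv[])
{
    uint32_t domid = (argc > 1) ? atoi(argv[1]) : 1;
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    xc_dominfo_t info;
    int rc = 1;

    if ( !xch )
        return 1;

    if ( xc_domain_getinfo(xch, domid, 1, &info) == 1 && info.domid == domid )
    {
        /* max_memkb is in KiB, so +4 adds exactly one page of headroom. */
        rc = xc_domain_setmaxmem(xch, domid, info.max_memkb + 4);
        printf("d%u: max raised to %lu KiB (rc %d)\n",
               domid, info.max_memkb + 4, rc);
    }

    xc_interface_close(xch);
    return rc ? 1 : 0;
}

Presumably an equivalent bump could also go inside xc_vm_event_enable()
itself, right before the populate call quoted above, but we'd rather
understand why this particular guest has no free pages to begin with.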

Is this something that anyone else has encountered? Any suggestions
appreciated.


Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

