
Re: [Xen-devel] WARNING: at drivers/xen/gntdev.c:426 unmap_if_in_range+0x5d/0x60 [xen_gntdev]()



On 16/12/14 23:04, Christopher S. Aker wrote:
> On Dec 15, 2014, at 6:11 AM, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>> On 11/12/14 15:12, Christopher S. Aker wrote:
>>> Xen: 4.4.2-pre (28573:f6f6236af933) + xsa111, xsa112, xsa114
>>> Dom0: 3.17.4
>>>
>>> Things go badly after a day or four.  We've hit this on a number of 
>>> previously healthy hosts, since moving from 3.10.x dom0 to 3.17.4:
>>>
>>> printk: 5441 messages suppressed.
>>> grant_table.c:567:d0 Failed to obtain maptrack handle.
>>> grant_table.c:567:d0 Failed to obtain maptrack handle.
>>> grant_table.c:567:d0 Failed to obtain maptrack handle.
>>> grant_table.c:567:d0 Failed to obtain maptrack handle.
>>
>> Can you provide more details about your networking and storage setup?
>> In particular, do you have a domU providing networked storage (iSCSI,
>> for example) to other domains on the same host?
> 
> Certainly. Thanks for the response. We're not using iSCSI, but we do
> have some serious kit going on. This is the setup:
> 
> Storage: BBU hardware RAID (LSI), SSD drives, LVM logical volumes
> exported to the guests via blkback.
> 
> Network: Four 10Gbit links (ixgbe), bonded, then bridged onto br0 and
> exported to the guests as netback vifs.

Neither the hardware nor the configuration is particularly unusual.
What's the total number of VIFs and VBDs you have on the host?  It may
be that you're simply running out of space in the maptrack table.
There's a command line option to increase this (indirectly, by raising
the maximum number of grant table frames with the gnttab_max_nr_frames
option).
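
As a rough sketch (the paths and numbers below are only illustrative):
if your dom0 exposes the xen-backend bus in sysfs, you can get an
approximate count of the backend devices with something like

    # one entry per network / block backend device in dom0
    ls /sys/bus/xen-backend/devices | grep -c '^vif-'
    ls /sys/bus/xen-backend/devices | grep -c '^vbd-'

and you can raise the grant table frame limit by adding something like

    gnttab_max_nr_frames=64

to the Xen (hypervisor) line in your bootloader configuration and
rebooting.  64 is just an example value; I believe the default in this
Xen version is 32, so try doubling it and see whether the "Failed to
obtain maptrack handle" messages go away.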

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
