Re: [Xen-users] ARM: "xen_add_mach_to_phys_entry: cannot add ... already exists and panics"
Hi Ian,

thank you for your reply.

2014-07-02 11:56 GMT+02:00 Ian Campbell <Ian.Campbell@xxxxxxxxxx>:
> First thing I would recommend would be to try the latest mainline stable
> 3.15.x release. I think everything needed for a usable sunxi system is
> in there already so no need for the sunxi-devel branch

I tried Linus' linux.git/master, which corresponds to 3.16 -- it results
in the same messages and the same panic. Apart from that, the mainline
kernel works quite well.

BTW, git shows that sunxi-devel and mainline Linux v3.15.2 are identical
for drivers/net/xen-netback, while linux.git/master does carry some
changes (one way to check this is sketched in the P.S. below).

The bug can easily be triggered by putting load on blkback and netback
in parallel (thanks to Maximilian), e.g.:

  domU: iperf -s & cat /dev/xvda > /dev/null
  dom0: iperf -t 3600 -c domU

(A scripted version of this reproducer is sketched in the P.P.S. below.)
It does not matter whether the underlying dom0 block device is a SATA,
USB or mmc device; the panic is similar in each case.

> The reason I suggest the latest 3.15.x is that there were a few
> interesting netback bugs but I think they've all been backported to
> stable by now.

I hope they are all included in linux.git/master @ 16874b2. Regarding
xen-netback, these are the changes from sunxi-devel to 16874b2:

* xen-netback: bookkeep number of active queues in our own module
* net: xen-netback: include linux/vmalloc.h again
* xen-netback: Add support for multiple queues
* xen-netback: Factor queue-specific data into queue struct
* xen-netback: Move grant_copy_op array back into struct xenvif.
* net: get rid of SET_ETHTOOL_OPS

Interestingly, it takes some time until the bug triggers, and that time
increased when I switched from linux-sunxi to mainline.

Do you have any idea what happens here? I am a bit clueless about what
is going on.

Denis
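P.S. One way to do the branch comparison mentioned above with plain git.
This is only a sketch: it assumes a single local repository that has the
v3.15.2 tag, the sunxi-devel branch and commit 16874b2 all fetched.

  # No netback differences between v3.15.2 and sunxi-devel:
  git diff --stat v3.15.2 sunxi-devel -- drivers/net/xen-netback/

  # Netback commits in linux.git/master @ 16874b2 that are not
  # in sunxi-devel:
  git log --oneline sunxi-devel..16874b2 -- drivers/net/xen-netback/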
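P.P.S. The reproducer wrapped in a small script, run from dom0. Also a
sketch: it assumes the guest is reachable over ssh as root@domU and has
iperf installed; /dev/xvda is its Xen virtual disk, as above.

  #!/bin/sh
  # Start both guest-side loads: an iperf server (exercises
  # xen-netback) and a sequential read of the virtual disk
  # (exercises xen-blkback).
  ssh root@domU 'iperf -s >/dev/null 2>&1 & cat /dev/xvda >/dev/null 2>&1 &'

  # From dom0, push network traffic at the guest for an hour;
  # the panic shows up after some time under this combined load.
  iperf -t 3600 -c domU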