
Re: Network driver domain broken


  • To: Andrea Stevanato <andrea.stevanato@xxxxxxxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 4 Mar 2022 13:27:53 +0100
  • Cc: Jason Andryuk <jandryuk@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "wl@xxxxxxx" <wl@xxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>
  • Delivery-date: Fri, 04 Mar 2022 12:28:14 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, Mar 04, 2022 at 01:05:55PM +0100, Andrea Stevanato wrote:
> On 3/4/2022 12:52 PM, Roger Pau Monné wrote:
> > On Thu, Mar 03, 2022 at 01:08:31PM -0500, Jason Andryuk wrote:
> > > On Thu, Mar 3, 2022 at 11:34 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> 
> > > wrote:
> > > > 
> > > > On Thu, Mar 03, 2022 at 05:01:23PM +0100, Andrea Stevanato wrote:
> > > > > On 03/03/2022 15:54, Andrea Stevanato wrote:
> > > > > > Hi all,
> > > > > > 
> > > > > > According to the conversation I had with royger, commit
> > > > > > aa67b97ed34 broke driver domain support.
> > > > > > 
> > > > > > What I'm trying to do is to set up networking between guests
> > > > > > using a driver domain. The driver domain guest has been started
> > > > > > with the following cfg:
> > > > > > 
> > > > > > name    = "guest0"
> > > > > > kernel  = "/media/sd-mmcblk0p1/Image"
> > > > > > ramdisk = "/media/sd-mmcblk0p1/rootfs.cpio.gz"
> > > > > > extra   = "console=hvc0 rdinit=/sbin/init root=/dev/ram0"
> > > > > > memory  = 1024
> > > > > > vcpus   = 2
> > > > > > driver_domain = 1
> > > > > > 
> > > > > > On guest0 I created the bridge, assigned it a static IP, and
> > > > > > started udhcpd on the xenbr0 interface.
> > > > > > The second guest has been started with the following cfg:
> > > > > > 
> > > > > > name    = "guest1"
> > > > > > kernel  = "/media/sd-mmcblk0p1/Image"
> > > > > > ramdisk = "/media/sd-mmcblk0p1/rootfs.cpio.gz"
> > > > > > extra   = "console=hvc0 rdinit=/sbin/init root=/dev/ram0"
> > > > > > memory  = 1024
> > > > > > vcpus   = 2
> > > > > > vif = [ 'bridge=xenbr0, backend=guest0' ]
> > > > > > 
> > > > > > The output of `strace xl devd` follows:
> > > > > > 
> > > > > > # strace xl devd
> > > > > > execve("/usr/sbin/xl", ["xl", "devd"], 0xffffdf0420c8 /* 13 vars 
> > > > > > */) = 0
> > > 
> > > > > > ioctl(5, _IOC(_IOC_NONE, 0x50, 0, 0x30), 0xffffe6e41b40) = -1 EPERM 
> > > > > > (Operation not permitted)
> > > > > > write(2, "libxl: ", 7libxl: )                  = 7
> > > > > > write(2, "error: ", 7error: )                  = 7
> > > > > > write(2, "libxl_utils.c:820:libxl_cpu_bitm"..., 
> > > > > > 87libxl_utils.c:820:libxl_cpu_bitmap_alloc: failed to retrieve the 
> > > > > > maximum number of cpus) = 87
> > > > > > write(2, "\n", 1
> > > > > > )                       = 1
> > > > > > clone(child_stack=NULL, 
> > > > > > flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, 
> > > > > > child_tidptr=0xffff9ee7a0e0) = 814
> > > > > > wait4(814, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 814
> > > > > > --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=814, 
> > > > > > si_uid=0, si_status=0, si_utime=2, si_stime=2} ---
> > > 
> > > xl devd is daemonizing, but strace is only following the first
> > > process.  Use `strace xl devd -F` to prevent the daemonizing (or
> > > `strace -f xl devd` to follow children).
> > 
> > Or as a first step try to see what kind of messages you get from `xl
> > devd -F` when trying to attach a device using the driver domain.
> 
> Nothing has changed. On guest0 (the driver domain):
> 
> # xl devd -F
> libxl: error: libxl_utils.c:820:libxl_cpu_bitmap_alloc: failed to retrieve
> the maximum number of cpus
> libxl: error: libxl_utils.c:820:libxl_cpu_bitmap_alloc: failed to retrieve
> the maximum number of cpus
> libxl: error: libxl_utils.c:820:libxl_cpu_bitmap_alloc: failed to retrieve
> the maximum number of cpus
> [  696.805619] xenbr0: port 1(vif2.0) entered blocking state
> [  696.810334] xenbr0: port 1(vif2.0) entered disabled state
> [  696.824518] device vif2.0 entered promiscuous mode

Can you use `xl -vvv devd -F` here?

I assume the process doesn't die unexpectedly?
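For reference, the two suggestions from earlier in the thread can be combined into a single invocation (a sketch, assuming strace is available inside the driver domain; the log path is just an example):

```shell
# Run xl devd in the foreground (-F) with maximum libxl verbosity (-vvv),
# while also following forked children under strace (-f) so activity in
# child processes isn't lost. Syscall traces go to /tmp/xl-devd.strace,
# libxl error/debug messages appear on the terminal.
strace -f -o /tmp/xl-devd.strace xl -vvv devd -F
```

Keeping the daemon in the foreground with `-F` avoids the daemonization that made the earlier `strace xl devd` run stop at the first fork.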

Thanks, Roger.



 

