[Xen-users] API calls not found
Dear Xen users,

I installed Xen 3.4.2 from source on Slackware64. I'm using the latest dom0 kernel from git with the recommended kernel options (http://wiki.xensource.com/xenwiki/XenParavirtOps), without using pv_ops. When xend launches, I get the following in the logs:

[2010-01-24 12:14:18 3123] WARNING (XendAPI:701) API call: VBD.set_device not found
[2010-01-24 12:14:18 3123] WARNING (XendAPI:701) API call: VBD.set_type not found
[2010-01-24 12:14:18 3123] WARNING (XendAPI:701) API call: session.get_all_records not found
[2010-01-24 12:14:18 3123] WARNING (XendAPI:701) API call: event.get_record not found
[2010-01-24 12:14:18 3123] WARNING (XendAPI:701) API call: event.get_all not found
[2010-01-24 12:14:18 3123] WARNING (XendAPI:701) API call: VIF.get_network not found
[2010-01-24 12:14:18 3123] WARNING (XendAPI:701) API call: VIF.set_device not found
[2010-01-24 12:14:18 3123] WARNING (XendAPI:701) API call: VIF.set_MAC not found
[2010-01-24 12:14:18 3123] WARNING (XendAPI:701) API call: VIF.set_MTU not found
[2010-01-24 12:14:18 3123] WARNING (XendAPI:701) API call: debug.get_all not found

How can I fix this? Could it be the reason why, even though I'm able to start a PV guest (NetBSD) and configure the vif it sees, the network doesn't respond at all inside it? I use network-bridge on the dom0. Some additional info below.
teillard# uname -a
Linux teillard 2.6.31.6 #4 SMP Sun Jan 24 11:49:13 CET 2010 x86_64 AMD Phenom(tm) II X2 550 Processor AuthenticAMD GNU/Linux

teillard# xm info
host                   : teillard
release                : 2.6.31.6
version                : #4 SMP Sun Jan 24 11:49:13 CET 2010
machine                : x86_64
nr_cpus                : 2
nr_nodes               : 1
cores_per_socket       : 2
threads_per_core       : 1
cpu_mhz                : 3113
hw_caps                : 178bf3ff:efd3fbff:00000000:00000310:00802001:00000000:000037ff:00000000
virt_caps              :
total_memory           : 3839
free_memory            : 3280
node_to_cpu            : node0:0-1
node_to_memory         : node0:3280
xen_major              : 3
xen_minor              : 4
xen_extra              : .2
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : unavailable
cc_compiler            : gcc version 4.3.3 (GCC)
cc_compile_by          : root
cc_compile_domain      : nethence.local
cc_compile_date        : Sat Jan 23 17:38:31 CET 2010
xend_config_format     : 4

teillard# ifconfig
eth0      Link encap:Ethernet  HWaddr 90:e6:ba:a6:51:c7
          inet addr:192.168.0.70  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::92e6:baff:fea6:51c7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3447 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3200 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:230242 (224.8 KiB)  TX bytes:628333 (613.6 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:52 errors:0 dropped:0 overruns:0 frame:0
          TX packets:52 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4152 (4.0 KiB)  TX bytes:4152 (4.0 KiB)

peth0     Link encap:Ethernet  HWaddr 90:e6:ba:a6:51:c7
          inet6 addr: fe80::92e6:baff:fea6:51c7/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:3794 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3324 errors:0 dropped:0 overruns:0 carrier:1
          collisions:0 txqueuelen:1000
          RX bytes:303752 (296.6 KiB)  TX bytes:636269 (621.3 KiB)
          Interrupt:241
vif2.0    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:240 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

teillard# brctl show
bridge name     bridge id               STP enabled     interfaces
eth0            8000.90e6baa651c7       no              peth0
                                                        vif2.0

P.S. Isn't vif0.0 missing? The network is fine on the dom0 anyway.

Thanks,
-Pierre-Philipp

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
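[Editor's note: as a minimal sketch of the check behind the P.S., the snippet below tests whether each vif name appears among the bridge members. It runs against the `brctl show` output captured above as a hard-coded sample; on a live dom0 one would pipe the output of `brctl show` instead. The interface names are the ones from the post; everything else is illustrative.]

```shell
# Sample input: the brctl show output quoted above (hard-coded here
# so the check is self-contained; replace with `brctl show` output
# on a live system).
brctl_output='bridge name  bridge id          STP enabled  interfaces
eth0         8000.90e6baa651c7  no           peth0
                                             vif2.0'

# For each candidate vif, report whether it is listed as a bridge member.
for vif in vif0.0 vif2.0; do
  if printf '%s\n' "$brctl_output" | grep -qwF "$vif"; then
    echo "$vif: attached to a bridge"
  else
    echo "$vif: not listed"
  fi
done
```

With the sample above this reports vif2.0 as attached and vif0.0 as not listed, matching the P.S.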