
[Xen-users] RE: Xen-users Digest, Vol 76, Issue 56





Peter Olson
RAMSES NAR COORDINATOR
ALCATEL-LUCENT
peter.olson@xxxxxxxxxxxxxxxxxx


-----Original Message-----
From: xen-users-request@xxxxxxxxxxxxxxxxxxx [xen-users-request@xxxxxxxxxxxxxxxxxxx]
Received: Monday, 27 Jun 2011, 9:03pm
To: xen-users@xxxxxxxxxxxxxxxxxxx [xen-users@xxxxxxxxxxxxxxxxxxx]
Subject: Xen-users Digest, Vol 76, Issue 56

Send Xen-users mailing list submissions to
        xen-users@xxxxxxxxxxxxxxxxxxx

To subscribe or unsubscribe via the World Wide Web, visit
        http://lists.xensource.com/mailman/listinfo/xen-users
or, via email, send a message with subject or body 'help' to
        xen-users-request@xxxxxxxxxxxxxxxxxxx

You can reach the person managing the list at
        xen-users-owner@xxxxxxxxxxxxxxxxxxx

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Xen-users digest..."


Today's Topics:

   1. Re: Malfunctioning bridge (Fajar A. Nugraha)
   2. Re: http-traffic rejected, domU (Markus Plessing)
   3. RE: Malfunctioning bridge (J.Witvliet@xxxxxxxxx)
   4. how to share vdi among several xcp pools (shreyas pandya)
   5. sug (babu krthk)
   6. [Xen-users]how to share vdi among several xcp pools
      (shreyas pandya)
   7. NAT networking in Xen (Iordan Iordanov)
   8. Re: annoying 2 - 3 second lag every few minutes in Windows XP
      VM on Debian 6 Squeeze (Iordan Iordanov)
   9. NAT networking in Xen (Iordan Iordanov)
  10. Re: NAT networking in Xen (Iordan Iordanov)
  11. Re: [Xen-devel] Re: VM disk I/O limit patch
      (Konrad Rzeszutek Wilk)
  12. AW: [Xen-users] IPv6 with Bridge Modus (Stefan Becker)
  13. Invitation to connect on LinkedIn (David Rhodus via LinkedIn)


----------------------------------------------------------------------

Message: 1
Date: Mon, 27 Jun 2011 16:30:34 +0700
From: "Fajar A. Nugraha" <list@xxxxxxxxx>
Subject: Re: [Xen-users] Malfunctioning bridge
To: J.Witvliet@xxxxxxxxx
Cc: Xen User-List <xen-users@xxxxxxxxxxxxxxxxxxx>
Message-ID: <BANLkTin+RJ5q=W6apWfUi0k_Q6YY3ZW3=g@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset=windows-1252

On Mon, Jun 27, 2011 at 4:25 PM, <J.Witvliet@xxxxxxxxx> wrote:
> To simulate different networks, I created dummy ethernet devices and connected bridges to them.
> All of the bridges are working OK, except ONE: BR2 (for the setup, see the attachment)

>
> If I ping from the vpn-box (the VPN is not set up yet) towards the internal firewall, or the other way round, I see no traffic at all
> ( 172.16.100.1 => 172.16.100.2 OR 172.16.100.2 => 172.16.100.1)
>
> I looked at the [internal] firewall, the bridges, and the routing, but I'm clueless.
> Tests I've done so far:
>
> Any suggestion where to look next?

Your picture shows br2 is connected to the server's eth1. "brctl show"
from your attachment shows br2 is connected to dummy1, not eth1.

--
Fajar



------------------------------

Message: 2
Date: Mon, 27 Jun 2011 11:29:41 +0200
From: Markus Plessing <info@xxxxxxxxxxx>
Subject: Re: [Xen-users] http-traffic rejected, domU
To: xen-users@xxxxxxxxxxxxxxxxxxx
Message-ID: <4E084D85.9000405@xxxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-15; format=flowed

Hi list,

responding for the archives.

Several parallel issues (as ever) led to the behaviour I was facing.

DNS requests were rejected, and so were IMAP and HTTP requests.

DNS and IMAP were responsive from localhost, but HTTP was not ... hmm.

The main problem was solved by writing 0 to all entries
under /proc/sys/net/bridge/bridge-nf-call-*

The problem with IMAP was a DNS issue: the correct sub-sub-domain was
not being resolved, so the requests were redirected to our main internet
server (because of the TLD). Fixed with corrected DNS entries for bind9.

The next issue was that the webserver rejected every connection
attempt; even a localhost connection could not be established.
It turned out it was listening only on IPv6 addresses; editing the
apache.conf to Listen 0.0.0.0:80 resolved this problem. (There was no
Listen directive at all in the apache2 config, so Apache made a guess.)
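For anyone hitting the same problem later: the writes to /proc do not
survive a reboot. A sysctl fragment (a sketch; these keys are only present
while the bridge module is loaded) makes the change persistent:

```
# /etc/sysctl.conf (or a file in /etc/sysctl.d/) -- persistent form of the
# /proc/sys/net/bridge/bridge-nf-call-* change described above
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
```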

Have a nice week :-D

Am 26.06.2011 15:46, schrieb Markus Plessing:
> Hi list,
>
> i've been migrating our RAID1 system onto bigger drives. Therefore I've
> set up a new dom0 with a 2.6.32-5-xen-amd64 kernel and Xen version
> 4.0, because the old 2.6.19 kernel with Xen 3.0.1 was causing problems
> from time to time.
>
> My main problem is that the domU running our intranet services
> (webserver, mail, databases, etc.) is not responding to, or is
> rejecting, each attempt to connect to a service.
>
> I think that the root cause of the problem lies in the network
> bridge settings of Xen in dom0. Maybe someone can give me a kick in
> the right direction to get these things up and running before the doors
> are opened again :-)
>
> As network options I've chosen (network-script network-bridge) and
> (vif-script vif-bridge), leading to the following outputs.
>
> $# brctl show
> bridge name     bridge id               STP enabled     interfaces
> eth0            8000.00241d89463a       no              peth0
>                                                         vif1.0
>                                                         vif2.0
>                                                         vif3.0
>                                                         vif4.0
>                                                         vif5.0
> $# iptables -L -v : http://paste.debian.net/121061/
>
>
> $# ifconfig : http://paste.debian.net/121060/
>
> Many thanks for each hint :)
>
> Bye
>
> Markus
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users



------------------------------

Message: 3
Date: Mon, 27 Jun 2011 13:51:14 +0200
From: <J.Witvliet@xxxxxxxxx>
Subject: RE: [Xen-users] Malfunctioning bridge
To: <list@xxxxxxxxx>
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Message-ID: <20110627115142.556CC21C8A5@xxxxxxxxxxxxxxxxx>
Content-Type: text/plain; charset="us-ascii"

Avoip T.P.

-----Original Message-----
From: Fajar A. Nugraha [mailto:list@xxxxxxxxx]
Sent: Monday, June 27, 2011 11:31 AM
To: Witvliet, J, CDC/IVENT/OPS/I&S/HIN
Cc: Xen User-List
Subject: Re: [Xen-users] Malfunctioning bridge

On Mon, Jun 27, 2011 at 4:25 PM, <J.Witvliet@xxxxxxxxx> wrote:
> To simulate different networks, I created dummy ethernet devices and connected bridges to them.
> All of the bridges are working OK, except ONE: BR2 (for the setup, see
> the attachment)

>
> If I ping from the vpn-box (the VPN is not set up yet) towards the
> internal firewall, or the other way round, I see no traffic at all
> ( 172.16.100.1 => 172.16.100.2 OR 172.16.100.2 => 172.16.100.1)
>
> I looked at the [internal] firewall, the bridges, and the routing, but I'm clueless....
> Tests I've done so far:
>
> Any suggestion where to look next?

Your picture shows br2 is connected to the server's eth1. "brctl show"
from your attachment shows br2 is connected to dummy1, not eth1.

-----Original Message-----

Hi Fajar,

The output from "brctl show" was taken on dom-0.
There br0 provides access to the real world;
all the others (br1, br2 and br3) are restricted to within the machine.

So BR2 is connected to:
A) Dummy0 on dom-0
B) eth1 on kc3072 (vpn)
C) eth1 on kc3041 (fw-int)

The bridges themselves are only visible on the dom-0, not on the dom-U.


Involved (kc3041, kc3072) startup scripts:

name="kc3041"
description="sumunatie interne firewall"
uuid="8cbb5269-e40e-0297-d27a-b2b8e1e2b613"
memory=500
maxmem=1000
vcpus=1
localtime=0
keymap="en-us"
builder="linux"
bootloader="/usr/lib/xen/boot/domUloader.py"
bootargs="--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen"
extra=" "
disk=[ 'phy:/dev/xen-productie/kc3041-boot,xvda,w',
       'phy:/dev/xen-productie/kc3041-swap,xvdb,w',
       'phy:/dev/xen-productie/kc3041-syst,xvdc,w',
       'phy:/dev/xen-productie/kc3041-data,xvdd,w', ]
vif=[ 'mac=00:16:3e:30:41:00,bridge=br0',
      'mac=00:16:3e:30:41:01,bridge=br2',
      'mac=00:16:3e:30:41:02,bridge=br3', ]
vfb=['type=vnc,vncunused=1']


name="kc3072"
description="int vpn server"
uuid="99ee7c72-493b-e69d-3cfa-7b438fcd2988"
memory=1000
maxmem=1000
vcpus=1
localtime=0
keymap="en-us"
builder="linux"
bootloader="/usr/bin/pygrub"
bootargs=""
extra=" "
disk=[ 'phy:/dev/xen-productie/kc3072-boot,xvda,w',
       'phy:/dev/xen-productie/kc3072-swap,xvdb,w',
       'phy:/dev/xen-productie/kc3072-syst,xvdc,w',
       'phy:/dev/xen-productie/kc3072-data,xvdd,w', ]
vif=[ 'mac=00:16:3e:30:72:01,bridge=br1',
      'mac=00:16:3e:30:72:02,bridge=br2',
      'mac=00:16:3e:30:72:03,bridge=br3', ]
vfb=['type=vnc,vncunused=1']


______________________________________________________________________

This message may contain information that is not intended for you. If you are not the addressee or if this message was sent to you by mistake, you are requested to inform the sender and delete the message. The State accepts no liability for damage of any kind resulting from the risks inherent in the electronic transmission of messages.



------------------------------

Message: 4
Date: Mon, 27 Jun 2011 19:16:15 +0530
From: shreyas pandya <pandyashreyas1@xxxxxxxxx>
Subject: [Xen-users] how to share vdi among several xcp pools
To: xen-users@xxxxxxxxxxxxxxxxxxx
Message-ID: <4E0889A7.7010803@xxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

hi
I have several XCP pools with NFS shared storage as the default SR
on each pool, and I want to do the following in this arrangement:

- I want to share VDIs among all pools; for example, detach a VDI
from a VM in pool-1 and attach it to a VM in pool-2.
Is this possible?

VDI export is one solution, but I want to do it seamlessly, just like
it happens within the same pool, and without actually moving the VDI
over the network: all the pools share the same NFS, so after an
import/export the VDI is going to sit on the same NFS anyway. So why
waste bandwidth, instead of making it visible to all pools? Any ideas?



------------------------------

Message: 5
Date: Mon, 27 Jun 2011 21:00:08 +0530
From: babu krthk <r.g.babukarthik@xxxxxxxxx>
Subject: [Xen-users] sug
To: xen-users@xxxxxxxxxxxxxxxxxxx
Message-ID: <BANLkTim2W0ZxTFAjrB_qPVwQ7TBRexreEg@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset="iso-8859-1"

hi, I have installed the Xen API SDK using XenCenter, and when the
installation finished it asked for a login name and password. For XCP I
have set the login name as root and the password as karthik.
What login name and password do I need to give for the Xen API SDK template?

--
R.G.BABUKARTHIK
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.xensource.com/archives/html/xen-users/attachments/20110627/dd3406d6/attachment.html

------------------------------

Message: 6
Date: Mon, 27 Jun 2011 21:01:28 +0530
From: shreyas pandya <pandyashreyas1@xxxxxxxxx>
Subject: [Xen-users]how to share vdi among several xcp pools
To: xen-users@xxxxxxxxxxxxxxxxxxx
Message-ID: <4E08A250.9040107@xxxxxxxxx>
Content-Type: text/plain; charset="iso-8859-1"

hi
I have several XCP pools with NFS shared storage as the default SR
on each pool, and I want to do the following in this arrangement:

- I want to share VDIs among all pools; for example, detach a VDI
from a VM in pool-1 and attach it to a VM in pool-2.
Is this possible?

VDI export is one solution, but I want to do it seamlessly, just like
it happens within the same pool, and without actually moving the VDI
over the network: all the pools share the same NFS, so after an
import/export the VDI is going to sit on the same NFS anyway. So why
waste bandwidth, instead of making it visible to all pools? Any ideas?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.xensource.com/archives/html/xen-users/attachments/20110627/51cc1eb5/attachment.html

------------------------------

Message: 7
Date: Mon, 27 Jun 2011 11:32:41 -0400
From: Iordan Iordanov <iordan@xxxxxxxxxxxxxxx>
Subject: [Xen-users] NAT networking in Xen
To: xen-users@xxxxxxxxxxxxxxxxxxx
Message-ID: <4E08A299.8080606@xxxxxxxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hello everyone,

Last Friday I tried without success to get NAT networking working under
Xen using mainly this guide:

http://wiki.kartbuilding.net/index.php/Xen_Networking

We are trying to get this set up under Debian Squeeze with Xen 4.1 built
from source from Sid (Unstable).

Can somebody provide a guide or link to a guide that does work,
including how to do port forwarding to the virtual machines?
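Not a full guide, but a minimal NAT-plus-port-forwarding sketch with plain
iptables, for the archives. The interface name (eth0), the domU network
(10.0.0.0/24), the domU address (10.0.0.2), and the external port (2222)
are all illustrative assumptions, not values from this thread:

```
# Configuration sketch, run in dom0 as root; adjust names and addresses.
echo 1 > /proc/sys/net/ipv4/ip_forward                 # let dom0 route for the domUs

# masquerade outbound traffic from the domU network behind dom0's address
iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.0/24 -j MASQUERADE

# forward dom0's TCP port 2222 to the domU's SSH port
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 \
        -j DNAT --to-destination 10.0.0.2:22
iptables -A FORWARD -p tcp -d 10.0.0.2 --dport 22 -j ACCEPT
```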

Many thanks!
Iordan Iordanov



------------------------------

Message: 8
Date: Mon, 27 Jun 2011 11:37:16 -0400
From: Iordan Iordanov <iordan@xxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] annoying 2 - 3 second lag every few minutes
        in Windows XP VM on Debian 6 Squeeze
To: Ian Tobin <itobin@xxxxxxxxxxxxx>
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Message-ID: <4E08A3AC.6060708@xxxxxxxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi Ian,

On 06/10/11 14:22, Ian Tobin wrote:
> Hi,
>
> Have you tried a vm with the pv drivers installed?

Just a follow-up. I never got around to installing the PV drivers in the
VM. Are they of any use? We won't be using Windows much, but it would be
nice if we got a benefit.

Aside from that, I decided to pull Sid's package into Squeeze and
build it, and with Xen 4.1 the annoying 2 - 3 second lag seems to have
disappeared.

Cheers,
Iordan



------------------------------

Message: 9
Date: Mon, 27 Jun 2011 11:38:00 -0400
From: Iordan Iordanov <iordan@xxxxxxxxxxxxxxx>
Subject: [Xen-users] NAT networking in Xen
To: xen-users@xxxxxxxxxxxxxxxxxxx
Message-ID: <4E08A3D8.4010607@xxxxxxxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hello everyone,

Last Friday I tried without success to get NAT networking working under
Xen using mainly this guide:

http://wiki.kartbuilding.net/index.php/Xen_Networking

We are trying to get this set up under Debian Squeeze with Xen 4.1 built
from source from Sid (Unstable).

Can somebody provide a guide or link to a guide that does work,
including how to do port forwarding to the virtual machines?

Many thanks!
Iordan Iordanov



------------------------------

Message: 10
Date: Mon, 27 Jun 2011 11:38:40 -0400
From: Iordan Iordanov <iordan@xxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] NAT networking in Xen
To: xen-users@xxxxxxxxxxxxxxxxxxx
Message-ID: <4E08A400.9000201@xxxxxxxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Oops, ignore that, I hit reply instead of compose... :).



------------------------------

Message: 11
Date: Mon, 27 Jun 2011 11:41:48 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: [Xen-users] Re: [Xen-devel] Re: VM disk I/O limit patch
To: Shaun Reitan <mailinglists@xxxxxxxxxxxxxxxx>
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, xen-users@xxxxxxxxxxxxxxxxxxx
Message-ID: <20110627154148.GK6978@xxxxxxxxxxxx>
Content-Type: text/plain; charset=us-ascii

On Thu, Jun 23, 2011 at 01:45:36PM -0700, Shaun Reitan wrote:
> Does this patch only limit throughput, or can it also limit the guest
> by disk IOPS?  Christopher Aker had a patch way back for UML that

Just throughput.

> did disk-based QoS.  What I really liked about that patch was that
> it allowed for bursting by using a bucket.  If I remember correctly,
> you specified that a guest's bucket could hold, say, 4000 tokens, and
> the bucket would be filled with 10 tokens a second.  Each IO took
> one token from the bucket.  When the bucket was empty, IO was paused
> and processed as the bucket was refilled.  This allowed a guest to
> burst for a short period of time until the bucket was empty, and
> then it would slowly be filled back up.

Uhhh... are you sure you are talking about the same patch?
>
> Also what was nice is that the guest had a /proc/ entry that told
> the customer how many tokens they currently had in their bucket.

OK.. but how would this help the customers? They don't have access
to the /proc in Dom0.
>
> I would like to see something like this in Xen; I've even thought
> about posting to the devel forums to see if somebody wanted to write
> one for $$$.

Why not use dm-ioband (here is a doc about it:
http://lwn.net/Articles/344441/), which has many more options and also
provides the bucket and tokens you are looking for?

[edit: Looks like dm-ioband never made it in the Linux kernel. But there
was something that I thought Vivek wrote that was superior to dm-ioband..
Ah, yes: blkio-controller.txt.

Look in Documentation/cgroups/blkio-controller.txt]
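The token-bucket behaviour described in the quoted message (a bucket of
burst credit, refilled at a constant rate, one token per IO) is easy to
model. A toy Python sketch -- not any of the patches discussed, just the
arithmetic, with the 4000-token / 10-per-second numbers from the message
as defaults:

```python
class TokenBucket:
    """Toy model of the token-bucket I/O throttle described above:
    a fixed-capacity bucket refilled at a constant rate, where each
    I/O consumes one token and I/O pauses when the bucket is empty."""

    def __init__(self, capacity=4000, refill_per_sec=10):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)   # start full: the guest may burst
        self.last = 0.0                 # time of the last refill

    def _refill(self, now):
        elapsed = now - self.last
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_sec)
        self.last = now

    def try_io(self, now):
        """Return True if one I/O may proceed at time `now`, else False."""
        self._refill(now)
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With the defaults, a guest can issue 4000 back-to-back IOs and is then
held to roughly 10 IOs per second -- the burst-then-trickle behaviour the
poster describes.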




------------------------------

Message: 12
Date: Mon, 27 Jun 2011 20:08:22 +0200
From: "Stefan Becker" <stefan.becker@xxxxxxxxxxxxxxx>
Subject: AW: [Xen-users] IPv6 with Bridge Modus
To: "'Fajar A. Nugraha'" <list@xxxxxxxxx>
Cc: Xen-users@xxxxxxxxxxxxxxxxxxx
Message-ID:
        <!&!AAAAAAAAAAAYAAAAAAAAAC1c4i8UU+1IvBVspIXy2QcihAAAEAAAAAO4YZyivJNEgMwOQSqc+44BAAAAAA==@xxxxxxxxxxxxxxx>
       
Content-Type: text/plain;       charset="us-ascii"

Sorry for my query!

This machine is a dedicated server at my ISP. I don't have a fixed MAC for
the IPv6 address, only for IPv4. Is your solution OK for my problem?

Stefan


> Can I use an IPv6 address with Xen 4.0.1 on Debian 6?

Should be possible. Bridge mode passes ethernet packets, and doesn't really
care what's on top of it.

> In my cfg I use this setting:
>
>
>
> vif = [ 'ip=85.10.210.154 46.4.44.162 2a01:4f8:130:9301::3' ]

Drop the "ip" setting; it's useless in most setups anyway. Instead, you
should explicitly specify mac (needed for lots of things, including
persistent DHCP), and optionally bridge (to make it clear which one is
meant, in case you have multiple bridges) and vifname (to make the
interface easier to identify from dom0). Something like

vif = [ 'mac=00:16:3E:A8:31:24, bridge=eth0, vifname=ubuntu-e0' ]

Then specify ipv6 address inside domU using usual OS methods
(/etc/network/interfaces on Debian/Ubuntu)
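As a concrete illustration of that last step, a static-IPv6 stanza inside
the domU might look like the fragment below; the address is the one from
the original question, while the gateway is a placeholder you would
replace with the ISP-supplied one:

```
# fragment of /etc/network/interfaces inside the domU (Debian/Ubuntu)
iface eth0 inet6 static
    address 2a01:4f8:130:9301::3
    netmask 64
    gateway 2a01:4f8:130:9301::1   # placeholder -- use your ISP's gateway
```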

Most modern OSes have IPv6 enabled by default, using a link-local address.
So if (for example) your dom0 bridge is using fe80::21e:bff:fe5e:9f58, and
your domU only has one interface (eth0), you can test IPv6 connectivity from
domU to dom0 with something like

ping6 -I eth0 fe80::21e:bff:fe5e:9f58

--
Fajar




------------------------------

Message: 13
Date: Mon, 27 Jun 2011 18:28:01 +0000 (UTC)
From: David Rhodus via LinkedIn <member@xxxxxxxxxxxx>
Subject: [Xen-users] Invitation to connect on LinkedIn
To: Chris Chen <xen-users@xxxxxxxxxxxxxxxxxxx>
Message-ID:
        <1340882608.5830964.1309199281882.JavaMail.app@xxxxxxxxxxxxxxx>
Content-Type: text/plain; charset="utf-8"

LinkedIn
------------




    David Rhodus requested to add you as a connection on LinkedIn:
 
------------------------------------------

Chris,

I'd like to add you to my professional network on LinkedIn.

- David

Accept invitation from David Rhodus
http://www.linkedin.com/e/2i0xaq-gpfrg72u-4b/GxGJC4SCt4uc8Q7R7NT1A8cO6oOe8b9-LgXs7vVBD5ie/blk/I160990681_9/pmpxnSRJrSdvj4R5fnhv9ClRsDgZp6lQs6lzoQ5AomZIpn8_elYNe3oMejAMdz59bR98iQFvjmRIbPsTd3kQdjgRcPcLrCBxbOYWrSlI/EML_comm_afe/

View invitation from David Rhodus
http://www.linkedin.com/e/2i0xaq-gpfrg72u-4b/GxGJC4SCt4uc8Q7R7NT1A8cO6oOe8b9-LgXs7vVBD5ie/blk/I160990681_9/0VnP4Udz0Vej0SckALqnpPbOYWrSlI/svi/
------------------------------------------

DID YOU KNOW LinkedIn can help you find the right service providers using recommendations from your trusted network? Using LinkedIn Services, you can take the risky guesswork out of selecting service providers by reading the recommendations of credible, trustworthy members of your network.
http://www.linkedin.com/e/2i0xaq-gpfrg72u-4b/svp/inv-25/

 
--
(c) 2011, LinkedIn Corporation
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.xensource.com/archives/html/xen-users/attachments/20110627/1301b301/attachment.html

------------------------------

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


End of Xen-users Digest, Vol 76, Issue 56
*****************************************

 

