
Re: [Xen-devel] Paid Xen Admin



Hello,

 

thanks for answering, Neil. I think Neil means the block devices?

 

Neil, can you show us how to verify whether those devices are still running for the null domain IDs?
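In the meantime, one way we could check ourselves might be something like this. This is only a sketch: it assumes the xenstore command-line tools are installed in dom0, the usual /local/domain/0/backend path layout, and domid 2 is just an example value taken from the list further down.

```shell
# Example domid of a (null) domain, taken from `xl list` output.
DOMID=2

# Show any vbd (block) backend entries dom0 still holds for that domain.
# Assumes the usual backend layout under /local/domain/0/backend/vbd.
xenstore-ls /local/domain/0/backend/vbd/$DOMID 2>/dev/null

# Broader sweep: any xenstore path still mentioning the domain.
xenstore-ls -f | grep -w "domain/$DOMID"
```

If either command still prints entries after the guest is supposed to be gone, the backends were not torn down.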

 

I also think it is maybe just a timing problem; maybe they do not always shut down as they should.

 

We can certainly give you access to such a box and you could have a look.

 

Kind regards

 

Thomas Toka

 

- Second Level Support -

 


IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn

Phone: 09306 - 76499-0
Fax: 09306 - 76499-15
E-mail: info@xxxxxxxxxxxxxx

Managing Director: Michael Schinzel
Commercial register Würzburg: HRA 6798
General partner: IP-Projects Verwaltungs GmbH

 

 

From: Michael Schinzel
Sent: Monday, 28 November 2016 18:20
To: Neil Sikka <neilsikka@xxxxxxxxx>
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>; Thomas Toka <toka@xxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Paid Xen Admin

 

Hello,

 

thank you for your response. There are no qemu processes which we can identify with the ID of the failed guest.

 

 

Kind regards

 

Michael Schinzel

- Managing Director -

 


 

 

From: Neil Sikka [mailto:neilsikka@xxxxxxxxx]
Sent: Monday, 28 November 2016 14:30
To: Michael Schinzel <schinzel@xxxxxxxxxxxxxx>
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Paid Xen Admin

 

Usually, I've seen that (null) domains are not running but their QEMU device models are. You could probably remove the (null) entries from the list by using "kill -9" on the qemu PIDs.
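A hedged sketch of how the matching qemu process could be located first. This assumes an upstream qemu device model, which carries the domid on its command line as "-xen-domid <id>"; with the traditional device model the process name and arguments differ (e.g. qemu-dm with "-d <id>"), and the domid 45 below is just an example value.

```shell
# Example domid of a (null) domain (hypothetical value).
DOMID=45

# List qemu processes whose full command line mentions this domid.
# Upstream qemu device models are started with "-xen-domid <id>".
pgrep -af qemu | grep -- "-xen-domid $DOMID"

# Only if the domain is really gone should the reported pid be killed:
#   kill -9 <pid>
```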

 

On Nov 27, 2016 11:55 PM, "Michael Schinzel" <schinzel@xxxxxxxxxxxxxx> wrote:

Good Morning,

 

we have some issues with our Xen hosts. It seems to be a Xen bug, but we cannot find the solution.

 

Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0 16192     4     r-----  147102.5
(null)                                       2     1     1     --p--d    1273.2
vmanager2268                                 4  1024     1     -b----   34798.8
vmanager2340                                 5  1024     1     -b----    5983.8
vmanager2619                                12   512     1     -b----    1067.0
vmanager2618                                13  1024     4     -b----    1448.7
vmanager2557                                14  1024     1     -b----    2783.5
vmanager1871                                16   512     1     -b----    3772.1
vmanager2592                                17   512     1     -b----   19744.5
vmanager2566                                18  2048     1     -b----    3068.4
vmanager2228                                19   512     1     -b----     837.6
vmanager2241                                20   512     1     -b----     997.0
vmanager2244                                21  2048     1     -b----    1457.9
vmanager2272                                22  2048     1     -b----    1924.5
vmanager2226                                23  1024     1     -b----    1454.0
vmanager2245                                24   512     1     -b----     692.5
vmanager2249                                25   512     1     -b----   22857.7
vmanager2265                                26  2048     1     -b----    1388.1
vmanager2270                                27   512     1     -b----    1250.6
vmanager2271                                28  2048     3     -b----    2060.8
vmanager2273                                29  1024     1     -b----   34089.4
vmanager2274                                30  2048     1     -b----    8585.1
vmanager2281                                31  2048     2     -b----    1848.9
vmanager2282                                32   512     1     -b----     755.1
vmanager2288                                33  1024     1     -b----     543.6
vmanager2292                                34   512     1     -b----    3004.9
vmanager2041                                35   512     1     -b----    4246.2
vmanager2216                                36  1536     1     -b----   47508.3
vmanager2295                                37   512     1     -b----    1414.9
vmanager2599                                38  1024     4     -b----    7523.0
vmanager2296                                39  1536     1     -b----    7142.0
vmanager2297                                40   512     1     -b----     536.7
vmanager2136                                42  1024     1     -b----    6162.9
vmanager2298                                43   512     1     -b----     441.7
vmanager2299                                44   512     1     -b----     368.7
(null)                                      45     4     1     --p--d    1296.3
vmanager2303                                46   512     1     -b----    1437.0
vmanager2308                                47   512     1     -b----     619.3
vmanager2318                                48   512     1     -b----     976.8
vmanager2325                                49   512     1     -b----     480.2
vmanager2620                                53   512     1     -b----     346.2
(null)                                      56     0     1     --p--d       8.8
vmanager2334                                57   512     1     -b----     255.5
vmanager2235                                58   512     1     -b----    1724.2
vmanager987                                 59   512     1     -b----     647.1
vmanager2302                                60   512     1     -b----     171.4
vmanager2335                                61   512     1     -b----      31.3
vmanager2336                                62   512     1     -b----      45.1
vmanager2338                                63   512     1     -b----      22.6
vmanager2346                                64   512     1     -b----      20.9
vmanager2349                                65  2048     1     -b----      14.4
vmanager2350                                66   512     1     -b----     324.8
vmanager2353                                67   512     1     -b----       7.6

 

 

HVM VMs sometimes change to the (null) state.
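For reference, the (null) entries can be picked out of that list automatically; a small sketch, assuming the `xl list` column layout shown above:

```shell
# Print the domids of all (null) entries reported by `xl list`.
xl list | awk '$1 == "(null)" { print $2 }'
```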

 

We already upgraded Xen from 4.1.1 to 4.8, and we upgraded the system kernel:

 

root@v8:~# uname -a

Linux v8.ip-projects.de 4.8.10-xen #2 SMP Mon Nov 21 18:56:56 CET 2016 x86_64 GNU/Linux

 

But none of these steps helped us solve this issue.

 

Now we are searching for a Xen administrator who can help us analyse and solve this issue. We would also pay for this service.

 

Hardware Specs of the host:

 

2x Intel Xeon E5-2620v4
256 GB DDR4 ECC Reg RAM
6x 3 TB WD RE
2x 512 GB Kingston KC
2x 256 GB Kingston KC
2x 600 GB SAS
LSI MegaRAID 9361-8i
MegaRAID Kit LSICVM02

 

 

The reasoning behind this setup:

 

6x 3 TB WD RE – RAID 10 – W/R IO Cache + CacheCade LSI – Data Storage

2x 512 GB Kingston KC400 SSDs – RAID 1 – SSD Cache for RAID 10 Array

2x 256 GB Kingston KC400 SSD – RAID 1 – SWAP Array for Para VMs

2x 600 GB SAS  - RAID 1 – Backup Array for faster Backup of the VMs to external Storage.

 

 

 

 

Kind regards

 

Michael Schinzel

- Managing Director -

 


 

 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel


 

