
[Xen-devel] XEN and ipq_read


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: "plamen .." <paco078@xxxxxx>
  • Date: Tue, 27 Apr 2010 11:31:33 +0300 (EEST)
  • Delivery-date: Tue, 27 Apr 2010 01:32:33 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

 Hi all,

I'm using Ubuntu Hardy, Xen version 3.2.1-rc1-pre, Dom0 kernel 2.6.24-27-xen, 
PV DomU kernel 2.6.24-27-xen. 

I'm setting up a DomU as a router with iptables 1.3.8. I put the Snort IDS in
inline mode (IPS) on the router, configured to retrieve specific packets from
the kernel (iptables ... -j QUEUE and the ip_queue module). At first Snort
reported errors on every received packet. After a bit of debugging and writing
a sample application to test ipq_read(), I found that the raw data sent from
the kernel contains about 24 bytes more than expected. The additional bytes sit
in the metadata structure before the actual packet content, which breaks the
raw data parsing. After some more debugging I noticed that this happens only on
Xen DomU VMs. On Dom0 it works fine, and on other servers not running Xen it
also works fine.
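
For reference, below is a minimal sketch of the kind of test program I used to
inspect what ipq_read() returns (buffer size and output format are just
illustrative, assuming the stock libipq headers that ship with iptables 1.3.8).
On a non-Xen kernel the payload starts with the IP header, so the version check
passes; on the PV DomU it fails because of the extra leading bytes:

/* ipq-test.c: hypothetical libipq reader used to check where the packet
 * payload actually starts. Requires iptables ... -j QUEUE and ip_queue. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <netinet/ip.h>
#include <linux/netfilter.h>   /* NF_ACCEPT */
#include <libipq.h>

#define BUFSIZE 4096

static void die(struct ipq_handle *h)
{
    ipq_perror("ipq-test");
    if (h)
        ipq_destroy_handle(h);
    exit(1);
}

int main(void)
{
    unsigned char buf[BUFSIZE];
    struct ipq_handle *h;

    h = ipq_create_handle(0, PF_INET);
    if (!h)
        die(NULL);

    /* Ask the kernel to copy full packet payloads to user space. */
    if (ipq_set_mode(h, IPQ_COPY_PACKET, BUFSIZE) < 0)
        die(h);

    for (;;) {
        ssize_t n = ipq_read(h, buf, BUFSIZE, 0);
        if (n < 0)
            die(h);

        if (ipq_message_type(buf) != IPQM_PACKET)
            continue;

        ipq_packet_msg_t *m = ipq_get_packet(buf);
        struct iphdr *iph = (struct iphdr *)m->payload;

        /* On a normal kernel m->payload begins with the IP header, so the
         * version nibble is 4; on the PV DomU the extra ~24 bytes shift
         * the real header and this prints a bogus version. */
        printf("id=%lu data_len=%lu ip_version=%u\n",
               m->packet_id, (unsigned long)m->data_len, iph->version);

        if (ipq_set_verdict(h, m->packet_id, NF_ACCEPT, 0, NULL) < 0)
            die(h);
    }

    ipq_destroy_handle(h);
    return 0;
}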

Currently I'm about to install the router DomU as HVM, and I think that will
work fine, but I don't want to leave it like this in production.

Is there anything in the Xen kernel that would break sending packets from the
kernel to user space through the ip_queue module? If so, is there any way to
work around this issue?

Thanks in advance,
Plamen


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

