
RE: [Xen-devel] [HVM] Corruption of buffered_io_page



 

> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Ian Pratt
> Sent: 06 December 2006 22:05
> To: Trolle Selander; xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-devel] [HVM] Corruption of buffered_io_page
> 
> > read_pointer is the first member of buffered_ioreq_t, so on the
> > hunch that the corruption was occurring through something other
> > than a wrong value actually being written into the structure
> > member (either an overflow of a previous structure in memory, or
> > a pointer-variable mistake), I added a 64-bit dummy member to
> > "pad" the buffered_ioreq_t structure at the start. As I had
> > suspected, the bad value gets written into this dummy member
> > rather than into read_pointer. I haven't (yet) been able to track
> > down what actually writes the bad value, and any help finding it
> > would be welcome.
> 
> What compiler are you using? What guest OS? Are you using PV or
> emulated drivers? Any idea if there are particular workloads that
> provoke the problem?

I'll answer for Trolle as best I can:
Compiler: gcc 4.1, I believe.
Guest OS: OS/2
Drivers would be the emulated ones.
I think it's failing during initial boot, as Trolle hasn't told me "It
works" yet... ;-)

By the way, I'm still a bit worried that this is caused by a segment
base != 0 in x86_emulate.c - that can cause all sorts of "interesting"
interactions between the page-table updates and the actual memory
being affected.
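
The concern, schematically (this is not the actual x86_emulate.c code,
just an illustration of the failure mode): an emulated access targets
the linear address segment base + offset, so any path that drops the
base, or applies it twice, for a guest segment with base != 0 will
read or write somewhere other than the intended location:

    /* An emulated access touches base + offset; if seg_base != 0 and
     * a path forgets to add it (or adds it twice), the access lands on
     * whatever happens to live at the miscomputed address. */
    unsigned long linear_addr(unsigned long seg_base, unsigned long offset)
    {
        return seg_base + offset;
    }

That would look exactly like "random" corruption of whichever structure
sits at the miscomputed address - e.g. the start of buffered_io_page.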

--
Mats
> 
> Best,
> Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

