
Re: [Xen-devel] [PATCH] fix "Error flushing ioemu cache" message in xenpaging



On Thu, Jan 06, Ian Jackson wrote:

> Han-Lin Li writes ("[Xen-devel] [PATCH] fix "Error flushing ioemu cache" 
> message in xenpaging"):
> > While using xenpaging, the "Error flushing ioemu cache" message is shown
> > on screen even when the "flush-cache" command is written to xenstore
> > correctly. That is because xenpaging assumes that
> > xc_mem_paging_flush_ioemu_cache() returns a non-zero value when the
> > operation fails. But xc_mem_paging_flush_ioemu_cache() returns the
> > return value of xs_write(), which is zero on failure. So we should
> > invert the return value of xs_write() before using it as the function's
> > return value, to avoid printing these incorrect error messages.
> 
> I'd like to give Olaf Hering a chance to respond, though, as it seems
> he hasn't already.

I see these harmless error messages as well, but haven't looked at the
root cause yet.

> Perhaps xc_mem_paging_flush_ioemu_cache ought to return -1 on error
> and 0 on success, like most other xc functions ?

Like 'return rc ? 0 : -1;'?
Either way is fine with me.
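
For illustration only, here is a minimal sketch of that convention. The
helper names below (fake_xs_write, flush_ioemu_cache) are stand-ins, not
the real libxenstore or xc code; the point is just mapping a boolean
xs_write()-style result onto the usual xc convention of 0 on success and
-1 on error:

```c
#include <stdbool.h>

/* Stand-in for xs_write(): like the real call, it returns a
 * boolean-style result that is zero (false) on failure. */
static bool fake_xs_write(bool succeed)
{
    return succeed;
}

/* Map the boolean result onto the common xc convention:
 * return 0 on success, -1 on error. */
static int flush_ioemu_cache(bool succeed)
{
    bool rc = fake_xs_write(succeed);
    return rc ? 0 : -1;
}
```

With this shape the caller can use the idiomatic `if (rc < 0)` error
check instead of guessing which polarity the function uses.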

Olaf

> > ---
> > Signed-off-by: Han-Lin Li <Han-Lin.Li@xxxxxxxxxxx>
> > 
> > diff -r 89116f28083f tools/xenpaging/xc.c
> > --- a/tools/xenpaging/xc.c	Wed Dec 08 10:46:31 2010 +0000
> > +++ b/tools/xenpaging/xc.c	Wed Dec 15 19:23:53 2010 +0800
> > @@ -62,7 +62,7 @@
> >      xs_daemon_close(xsh);
> > -    return rc;
> > +    return !rc;
> >  }
> > Âint xc_wait_for_event_or_timeout(xc_interface *xch, int xce_handle,
> > unsigned long ms)
> > 
> > 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
