
RE: [Xen-devel] FE driver and log dirty


  • To: "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx>, "Xen-Devel (E-mail)" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
  • Date: Mon, 14 Jul 2008 17:22:44 +0800
  • Cc:
  • Delivery-date: Mon, 14 Jul 2008 02:23:45 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcjlgcleH6AfAfjWTPSrkcaRDBcgVgAAVJukAAAXLwAAA6NjYAAARK+Q
  • Thread-topic: [Xen-devel] FE driver and log dirty

>From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx] 
>Sent: Monday, July 14, 2008 5:15 PM
>
>On 14/7/08 08:39, "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:
>
>> I can understand the replay trick here. My question is whether some
>> requests/responses have been dequeued by the FE driver and already
>> sent to an upper-level component, yet have not been accessed by the
>> CPU (e.g. only the descriptor is accessed) before __xen_suspend is
>> entered. Take network receive for example (I'm not familiar with this
>> path): is it possible that some data page is already queued in an
>> upper-level protocol, and then the suspend watch is triggered before
>> the receive process is scheduled, and unfortunately within that window
>> the page has not been accessed yet? In this case, the page becomes
>> dirty during live migration, but is not recorded by the log-dirty
>> logic, since only the BE accesses it...
>
>Granted mappings cause the mapped page to become dirtied when the BE
>relinquishes the grant. This happens after the I/O transfer but before
>the response has been queued for the FE.
>

Got it. Thanks,
- Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

