RE: [Xen-devel] [PATCH 0/4] HVM Virtual S3 --- Revised and resent
Hi, Keir,

I had a rough look and noticed one thing: in arch_vcpu_reset(), if the vcpu is HVM, we no longer call vcpu_destroy_pagetables(). This may cause a problem. Since S3 suspends in protected mode but wakes up in real mode, the CR3 that was in use in protected mode at suspend time is never freed. That leaked reference keeps the domain's page count non-zero, so domain_destroy() cannot complete and some resources are never released.

We found the problem when creating an HVM guest with a VT-d device assigned, destroying it, and then re-creating it: the second create failed with "vtd device is assigned already". We logged this CR3 and added a put_page() on it, which fixed the issue, so we do need vcpu_destroy_pagetables(). I am not sure whether the problem still exists after your restructuring; I will test further next Monday :)

Thanks & Regards,
Criping

void arch_vcpu_reset(struct vcpu *v)
{
-    destroy_gdt(v);
-    vcpu_destroy_pagetables(v);
+    if ( !is_hvm_vcpu(v) )
+    {
+        destroy_gdt(v);
+        vcpu_destroy_pagetables(v);
+    }
+    else
+    {
+        vcpu_end_shutdown_deferral(v);
+    }
}

Ke, Liping wrote:
> Sure, I will try it on Monday.
> Thanks a lot!
> Criping
> Keir Fraser wrote:
>> I think all these issues are fixed as of c/s 17713. However, when I
>> S3-resume a Linux guest I find it is unresponsive and the VGA display
>> is corrupted by re-printing of BIOS start-of-day messages. Perhaps
>> the BIOS is taking an incorrect path on S3 resume? It would be good
>> if you can look into this now -- I think the hypervisor issues at
>> least are now resolved and this is probably something in the
>> higher-level rombios or ioemu logic.
>>
>> -- Keir
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel