
Re: Re: Re: [Xen-devel] [PATCH] qemu-xen: let xenfb_guest_copy() handle depth=32 case



Sorry for the delay. The complete commit message follows:

In hw/xenfb.c, xenfb_guest_copy() only handles the xenfb->depth=8 and 24 cases; presumably it assumes that when xenfb->depth is 16 or 32 the buffer is shared. That is not always the case: the code path that allows the buffer to be shared when xenfb->depth is 16 or 32 requires xenfb->do_resize to be set. On a guest VNC console, however, pressing CTRL+ALT+2 to switch to the qemu monitor console and then CTRL+ALT+1 to switch back to the guest window does not set xenfb->do_resize, so the buffer is not shared, and since xenfb_guest_copy() does not handle the xenfb->depth=32 case, the guest screen cannot be restored.
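
For reference, the buffer-sharing condition described above boils down to a simple predicate. Below is a minimal sketch condensed from this description only; the struct and helper names are hypothetical and this is not the actual hw/xenfb.c code:

/* Sketch only: illustrates the sharing condition described above.
 * Names here are hypothetical, not taken from hw/xenfb.c. */
struct fb_state {
    int depth;       /* guest framebuffer depth (bits per pixel) */
    int do_resize;   /* set when the display surface is (re)created */
};

/* The guest buffer can be handed to the display directly only when
 * do_resize is set and the depth is one the display can use as-is;
 * otherwise xenfb_guest_copy() must copy/convert every update. */
static int can_share_guest_buffer(const struct fb_state *fb)
{
    return (fb->depth == 16 || fb->depth == 32) && fb->do_resize;
}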

To fix the above problem, this patch does two things:
1. Set xenfb->do_resize in xenfb_invalidate() so that in the console-switch case the buffer is shared when xenfb->depth is 16 or 32. This fixes the "screen cannot be restored" bug described above.
2. To keep other special cases from hitting the same problem, let xenfb_guest_copy() handle all cases by adding handling for xenfb->depth=16 and 32 (a standalone sketch of this kind of conversion follows below).
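
As a standalone illustration of what the new depth=16/32 handling does (point 2), the sketch below shows the two copy strategies: a straight memcpy of one line when the guest depth matches the surface bpp, and a per-pixel repack when they differ. This is illustration only, not the BLT macro from hw/xenfb.c; it assumes the 32-bit layout is XRGB8888 and the 16-bit layout is RGB565:

/* Illustration only -- not the BLT macro from hw/xenfb.c. */
#include <stdint.h>
#include <string.h>

/* depth == bpp: the pixel formats match, so one line can be copied verbatim */
static void copy_line_same_format(uint8_t *dst, const uint8_t *src,
                                  int w, int bytes_per_pixel)
{
    memcpy(dst, src, (size_t)w * bytes_per_pixel);
}

/* depth 32 -> bpp 16: repack each XRGB8888 pixel into RGB565 by dropping low bits */
static void copy_line_32_to_16(uint16_t *dst, const uint32_t *src, int w)
{
    for (int i = 0; i < w; i++) {
        uint32_t p = src[i];
        uint32_t r = (p >> 16) & 0xff;
        uint32_t g = (p >> 8) & 0xff;
        uint32_t b = p & 0xff;
        dst[i] = (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }
}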


Signed-off-by: Chunyan Liu <cyliu@xxxxxxxxxx>

diff -r 1e5cb7d6a96c hw/xenfb.c
--- a/hw/xenfb.c    Mon Oct 18 17:24:50 2010 +0100
+++ b/hw/xenfb.c    Sat Oct 30 00:48:45 2010 +0800
@@ -630,6 +630,18 @@
                 oops = 1;
             }
             break;
+        case 16:
+            if (bpp == 16) {
+                for (line = y; line < (y+h); line++) {
+                    memcpy(data + (line * linesize) + (x * bpp / 8), xenfb->pixels + xenfb->offset
+                           + (line * xenfb->row_stride) + (x * xenfb->depth / 8), w * xenfb->depth / 8);
+                }
+            } else if (bpp == 32) {
+                BLT(uint16_t, uint32_t,   5, 6, 5,   8, 8, 8);
+            } else {
+                oops = 1;
+            }
+            break;
         case 24:
             if (bpp == 16) {
                 BLT(uint32_t, uint16_t,  8, 8, 8,   5, 6, 5);
@@ -639,6 +651,18 @@
                 oops = 1;
             }
             break;
+        case 32:
+            if (bpp == 16) {
+                BLT(uint32_t, uint16_t,  8, 8, 8,   5, 6, 5);
+            } else if (bpp == 32) {
+                for (line = y; line < (y+h); line++) {
+                    memcpy(data + (line * linesize) + (x * bpp / 8), xenfb->pixels + xenfb->offset
+                           + (line * xenfb->row_stride) + (x * xenfb->depth / 8), w * xenfb->depth / 8);
+                }
+            } else {
+                oops = 1;
+            }
+            break;
         default:
             oops = 1;
     }
@@ -792,6 +816,7 @@
 static void xenfb_invalidate(void *opaque)
 {
     struct XenFB *xenfb = opaque;
+    xenfb->do_resize = 1;
     xenfb->up_fullscreen = 1;
 }
 


>>> Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx> 11/03/10 8:36 AM >>>
Chun Yan Liu writes ("Re: Re: [Xen-devel] [PATCH] qemu-xen: let xenfb_guest_copy() handle depth=32 case"):
> > Could you please resubmit a patch with both changes and a signed-off-by
> > line?
>
> Sure. Following is the patch with both changes.
>
> Signed-off by Chunyan Liu

The patch is fine, and sorry for not applying it right away, but I'm
afraid I wasn't able to extract a coherent commit message from the
email thread, and I'm not confident enough that I understand the
details to write it myself.

Can you provide a commit message please? The explanatory text you
supplied with your original patch, on the 20th of October, is in the
right style. We just need something like that which properly describes
the final version of the patch.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

