
Re: [Xen-devel] [RFC] design/API for plugging tmem into existing xen physical memory management code


  • To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>, "Xen-Devel (E-mail)" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Date: Sat, 14 Feb 2009 07:41:50 +0000
  • Cc:
  • Delivery-date: Fri, 13 Feb 2009 23:42:47 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcmOKl/zGhrvXKgGThO+IbeZX8T1SAATVJS5
  • Thread-topic: [Xen-devel] [RFC] design/API for plugging tmem into existing xen physical memory management code

On 13/02/2009 22:26, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx> wrote:

> 4) Does anybody have a list of alloc requests of
>      order > 0

Domain and vcpu structs are order 1. Shadow pages are allocated in order-2
blocks.

> ** tmem has been working for months but the code has
> until now allocated (and freed) to (and from)
> xenheap and domheap.  This has been a security hole
> as the pages were released unscrubbed and so data
> could easily leak between domains.  Obviously this
> needed to be fixed :-)  And scrubbing data at every
> transfer from tmem to domheap/xenheap would be a huge
> waste of CPU cycles, especially since the most likely
> next consumer of that same page is tmem again.

Then why not mark pages as coming from tmem when you free them, and scrub
them on next use if they aren't going back to tmem?

I wasn't clear on who would call your C and D functions, and why they can't
be merged. I might veto those depending on how ugly and exposed the changes
are outside tmem.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

