
Re: Xen Memory Sharing Query


  • To: Marc Bonnici <Marc.Bonnici@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Date: Fri, 15 Jul 2022 17:28:54 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: Xen Memory Sharing Query

On 15/07/2022 16:56, Marc Bonnici wrote:
> Hi All,
>
> I was wondering if someone could help me understand some of the rules of the 
> memory sharing implementation in Xen?
>
> Specifically I'm looking to understand what restrictions Xen places on
> granting access from one Dom to another from Xen's perspective, and what 
> types 
> of grant requests would be allowed/rejected by Xen?
>
> I.e. How would the situation be handled if the same frame of memory was 
> attempted 
> to be shared multiple times?
>
> As an example scenario, DomA shares 1 physical page of memory in a 
> transaction 
> with DomB. And then without releasing any memory, DomA attempts to share
> another region of memory, which includes the same physical page of the 
> previous share
> with DomB again. This would result in two concurrent shares containing an 
> overlap.
>
> Apologies if I've missed something but is there any documentation / threat 
> model
> that would cover these types of scenarios? So far I have been trying to read 
> through 
> the code but was wondering if there is something else I could refer to help 
> me 
> with my understanding?

There's nothing adequately written down.  It ought to live in sphinx
docs, but my copious free time is non-existent for speculative security
reasons.

This all pertains to gnttab v1 which is the only supported one on ARM
right now.  gnttab v2 is horribly more complicated.  Refer to
https://github.com/xen-project/xen/blob/master/xen/include/public/grant_table.h#L132-L186

When DomA and DomB are set up and running, they each have a grant
table.  The grant table is some shared memory (of Xen's) mapped into the
guest, and is a bidirectional communication interface between the guest
kernel and Xen.

The guest kernel logically owns the grant table, and it's a simple array
of grant entries.  Entries 0 thru 7 are reserved for system use, and
indeed two entries (one for xenstore, one for xenconsole) are set up on
the guest kernel's behalf by the domain builder.  Entries 8 thru $N are
entirely under the guest's control.

A guest kernel (domA) creates a grant by filling in a grant table entry,
and passing the grant reference (the entry's index in the table) to some
other entity in the system (in this case, domB).

The grant table entry is formed of:

u16 flags
u16 domid
u32 frame

so for domA to grant a frame to domB, it would pick a free gref (any
entry in the table with flags=0) and fill in:

frame = f
domid = domB
smp_wmb()
flags = GTF_permit_access (think "grant usable")

GTF_readonly is another relevant flag that domA might choose to set.
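The entry layout and fill-in sequence above can be sketched in C.  This is
a hedged illustration, not the real Xen headers: the struct mirrors
grant_entry_v1 from xen/include/public/grant_table.h and the GTF_* values
follow that header, but NR_GRANT_ENTRIES and the free-slot scan are made up
for the example (the real table is shared memory provided by Xen).

```c
#include <stdatomic.h>
#include <stdint.h>

/* Flag values as defined in xen/include/public/grant_table.h. */
#define GTF_permit_access 1u        /* "grant usable" */
#define GTF_readonly      (1u << 2) /* peer may only map read-only */

/* v1 grant table entry: flags, domid, frame. */
typedef struct {
    uint16_t flags;
    uint16_t domid;
    uint32_t frame;
} grant_entry_v1_t;

/* Illustrative table size only. */
#define NR_GRANT_ENTRIES 64
static grant_entry_v1_t gnttab[NR_GRANT_ENTRIES];

/* domA grants 'frame' to 'domid'.  Returns the gref to hand to the
 * peer, or -1 if no free entry.  Entries 0 thru 7 are reserved. */
static int grant_frame(uint16_t domid, uint32_t frame, int readonly)
{
    for (int gref = 8; gref < NR_GRANT_ENTRIES; gref++) {
        if (gnttab[gref].flags == 0) {      /* free entry */
            gnttab[gref].frame = frame;
            gnttab[gref].domid = domid;
            /* Plays the role of smp_wmb(): frame/domid must be
             * visible before the entry goes live. */
            atomic_thread_fence(memory_order_release);
            gnttab[gref].flags =
                GTF_permit_access | (readonly ? GTF_readonly : 0);
            return gref;
        }
    }
    return -1;
}
```

Note that granting the same frame twice is simply two calls with the same
frame argument, yielding two live grefs.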

Then, domB would take the gref it has been given by domA, and make a
gnttab_op_map() hypercall, passing {domA, gref} as an input.

Xen looks up gref in domA's grant table, checks e.g. domA granted access
to domB, and if everything is happy, sets the GTF_{reading,writing}
flags (as appropriate) in flags.  This tells domA that the grant is
currently mapped readably and/or writeably.
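The checks Xen makes on the map path can be sketched as follows.  This is
an illustration of the rules just described, not the hypervisor's actual
code; the flag values follow grant_table.h, everything else is assumed.

```c
#include <stdint.h>

#define GTF_permit_access 1u
#define GTF_readonly      (1u << 2)
#define GTF_reading       (1u << 3)  /* set by Xen while mapped readably */
#define GTF_writing       (1u << 4)  /* set by Xen while mapped writeably */

typedef struct {
    uint16_t flags;
    uint16_t domid;
    uint32_t frame;
} grant_entry_v1_t;

/* Validate a map request against domA's entry 'e'.  Returns the
 * granted frame on success, or -1 on failure. */
static int64_t map_grant(grant_entry_v1_t *e, uint16_t mapper_domid,
                         int want_write)
{
    if (!(e->flags & GTF_permit_access))
        return -1;                      /* grant not usable */
    if (e->domid != mapper_domid)
        return -1;                      /* granted to a different domain */
    if (want_write && (e->flags & GTF_readonly))
        return -1;                      /* read-only grant */
    /* Record the in-use state in domA's visible entry. */
    e->flags |= want_write ? (GTF_reading | GTF_writing) : GTF_reading;
    return e->frame;
}
```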

Later, when domB unmaps the grant, Xen clears the GTF_{reading,writing}
bits, telling domA that the grant is no longer in use.

DomA then clears GTF_permit_access to mark this gref as invalid, and
can then free the frame.
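domA's side of that teardown can be sketched like this.  try_end_grant()
is an assumed helper, not a real Xen or Linux API, and a real
implementation would need an atomic compare-and-swap to close the race
with a concurrent map; flag values follow grant_table.h.

```c
#include <stdint.h>

#define GTF_permit_access 1u
#define GTF_reading       (1u << 3)  /* set by Xen while mapped readably */
#define GTF_writing       (1u << 4)  /* set by Xen while mapped writeably */

typedef struct {
    uint16_t flags;
    uint16_t domid;
    uint32_t frame;
} grant_entry_v1_t;

/* Returns 1 if the grant was ended and the frame may be freed,
 * 0 if the peer still has it mapped. */
static int try_end_grant(grant_entry_v1_t *e)
{
    if (e->flags & (GTF_reading | GTF_writing))
        return 0;   /* still mapped; domA cannot revoke it */
    e->flags = 0;   /* gref is now invalid and reusable */
    return 1;
}
```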


Now, that's the simple overview.  To answer some of your specific questions:

DomA is perfectly free to grant away the same frame multiple times. 
DomA does this by writing multiple different grefs with the same frame
field.  These grefs could be to the same, or different domains, and can
have any (valid) combination of flags.

DomB is perfectly free to map the same gref multiple times.  This is
actually a necessity for x86 PV guests, because of how we reference
count pagetable entries.  It is not necessary for HVM guests (x86 or
ARM) because of how guest physical address space works.

IMO it should have been restricted when the HVM ABI was designed, but
alas.  In practice, Xen has an internal refcount which prevents a gref
being mapped more than 127 times IIRC.

While a gref is mapped, domA is not permitted to edit the associated
entry.  Doing so shouldn't cause a security violation (Xen has a local
copy of the entry in the maptrack table), but will at least confuse
diagnostics of the granted state.

Importantly, and what may come as a surprise, is that domA has no way to
revoke a currently-mapped grant.  Fixing this limitation has been
discussed several times; there are some very complicated corner cases,
and I'm not aware of any work having started in earnest.

Xen does have logic to unmap grants of VMs which have shut down (for
whatever reason) with grants still mapped.  This prevents deadlocks
(e.g. two domains grant to each other, then both crash deliberately).


From a grant perspective, Xen doesn't enforce any policy.  domA's grefs
can be mapped in the manner permitted by what domA wrote into the grant
table.

If you want to get into policies that Xen may enforce, that would be a
discussion about XSM, Xen Security Modules.

Does any of this help?

~Andrew

 

