
Re: [Xen-devel] [PATCH v2 07/10] vm_event: Add vm_event_ng interface


  • To: Petre Ovidiu PIRCALABU <ppircalabu@xxxxxxxxxxxxxxx>, Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Wed, 17 Jul 2019 16:32:56 +0000
  • Accept-language: en-US
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>, Alexandru Stefan ISAILA <aisaila@xxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Wed, 17 Jul 2019 16:34:53 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v2 07/10] vm_event: Add vm_event_ng interface

On 17.07.2019 16:41, Petre Ovidiu PIRCALABU wrote:
> On Wed, 2019-07-17 at 10:06 +0000, Jan Beulich wrote:
>> On 16.07.2019 19:06, Petre Pircalabu wrote:
>>> +static void vm_event_channels_free_buffer(struct vm_event_channels_domain *impl)
>>>    {
>>> -    vm_event_ring_resume(to_ring(v->domain->vm_event_monitor));
>>> +    int i;
>>> +
>>> +    vunmap(impl->slots);
>>> +    impl->slots = NULL;
>>> +
>>> +    for ( i = 0; i < impl->nr_frames; i++ )
>>> +        free_domheap_page(mfn_to_page(impl->mfn[i]));
>>>    }
>>
>> What guarantees that there are no mappings left of the pages you free
>> here? Sharing pages with guests is (so far) an (almost) irreversible
>> action, i.e. they may generally only be freed upon domain
>> destruction.
>> See gnttab_unpopulate_status_frames() for a case where we actually
>> make an attempt at freeing such pages (but where we fail the request
>> in case references are left in place).
>>
> I've manually tested two cases and both work (no crashes):
> 1: introspected domain starts -> monitor starts -> monitor stops ->
> domain stops
> 2: introspected domain starts -> monitor starts -> domain stops ->
> monitor stops.

Well, testing is important, but doing tests like you describe won't
catch any misbehaving or malicious monitor domain.

> However, I will take a closer look at gnttab_unpopulate_status_frames
> and post a follow up email.

Thanks.
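
To be concrete about what I have in mind - a rough sketch only (the
helper name is made up, and the exact reference accounting would need
checking against what gnttab_unpopulate_status_frames() does):

static int vm_event_channels_try_free_page(struct page_info *pg)
{
    /*
     * Beyond the allocation reference there shouldn't be any further
     * references left; if there are, someone still has the page
     * mapped, and the caller would need to fail the disable request.
     */
    if ( (pg->count_info & PGC_count_mask) > 1 )
        return -EBUSY;

    free_domheap_page(pg);

    return 0;
}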

>> Furthermore, is there any guarantee that the pages you free here
>> were actually allocated? ->nr_frames gets set ahead of the actual
>> allocation.
>>
> vm_event_channels_free_buffer is called only from
> vm_event_channels_disable. The latter is called only if vm_event_check
> succeeds (in vm_event_cleanup and vm_event_domctl/VM_EVENT_DISABLE).
> vm_event_check will only return true if vm_event_enable succeeds.

Hmm, it looks like I was misled into believing the freeing function
would be called by vm_event_channels_enable(); in fact the enable path
doesn't itself free what it may have allocated upon error. So perhaps
the bug is there, not where I thought it was.
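
Just to illustrate - the allocation path would then want to unwind its
own work on failure, along the lines of the below (function and field
names taken from your patch where obvious, everything else guessed):

static int vm_event_channels_alloc_buffer(struct vm_event_channels_domain *impl,
                                          struct domain *d)
{
    int i;

    for ( i = 0; i < impl->nr_frames; i++ )
    {
        struct page_info *pg = alloc_domheap_page(d, 0);

        if ( !pg )
            goto err;

        impl->mfn[i] = page_to_mfn(pg);
    }

    impl->slots = vmap(impl->mfn, impl->nr_frames);
    if ( !impl->slots )
        goto err;

    return 0;

 err:
    /* Free only what was actually allocated above. */
    while ( i-- )
        free_domheap_page(mfn_to_page(impl->mfn[i]));

    return -ENOMEM;
}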

>>> +int vm_event_ng_get_frames(struct domain *d, unsigned int id,
>>> +                           unsigned long frame, unsigned int nr_frames,
>>> +                           xen_pfn_t mfn_list[])
>>> +{
>>> +    struct vm_event_domain *ved;
>>> +    int i;
>>> +
>>> +    switch (id )
>>> +    {
>>> +    case XEN_VM_EVENT_TYPE_MONITOR:
>>> +        ved = d->vm_event_monitor;
>>> +        break;
>>> +
>>> +    default:
>>> +        return -ENOSYS;
>>
>> Various other error codes might be fine here, but ENOSYS is not
>> (despite pre-existing misuse elsewhere in the tree).
> 
> vm_event_domctl also returns -ENOSYS if the type is neither
> XEN_VM_EVENT_TYPE_PAGING, XEN_VM_EVENT_TYPE_MONITOR, nor
> XEN_VM_EVENT_TYPE_SHARING. I've just followed the existing convention.

Right - that's one of the pre-existing misuses that I was referring
to.
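
Something along these lines would be preferable (illustrative only;
-EINVAL would be fine as well):

    default:
        /*
         * -ENOSYS ought to be reserved for genuinely unimplemented
         * hypercalls; an unknown type is better reported as e.g.
         * -EOPNOTSUPP.
         */
        return -EOPNOTSUPP;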

>>> +    }
>>> +
>>> +    if ( !vm_event_check(ved) )
>>> +        return -EINVAL;
>>> +
>>> +    if ( frame != 0 || nr_frames != to_channels(ved)->nr_frames )
>>> +        return -EINVAL;
>>
>> Is there a particular reason for this all-or-nothing model?
> 
> I've added this extra check because of the way the acquire_resource
> interface is designed. In our case, the memory is allocated by
> XEN_VM_EVENT_ENABLE and the size is known beforehand: the number of
> pages needed to store (vcpus_count * sizeof vm_event_slot) bytes.
> However, acquire_resource needs a "nr_frames" parameter, which is
> computed in a similar manner in the libxc wrapper.

Hmm, maybe I'm not up to date here: Paul, is the general resource
obtaining/mapping logic indeed meant to be an all-or-nothing thing
only?
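
For reference, the computation being talked about boils down to
roughly the following on the libxc side (purely illustrative; the
helper name isn't from the patch):

static unsigned int vm_event_ng_nr_frames(unsigned int vcpus_count)
{
    size_t bytes = vcpus_count * sizeof(struct vm_event_slot);

    /* Round up to a whole number of pages. */
    return (bytes + XC_PAGE_SIZE - 1) / XC_PAGE_SIZE;
}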

Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

