
Re: [PATCH v2 03/13] libxenguest: deal with log-dirty op stats overflow

  • To: Juergen Gross <jgross@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 19 Aug 2021 13:06:54 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 19 Aug 2021 11:07:21 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 19.08.2021 12:20, Juergen Gross wrote:
> On 05.07.21 17:13, Jan Beulich wrote:
>> In send_memory_live() the precise value the dirty_count struct field
>> gets initialized to doesn't matter much (apart from the triggering of
>> the log message in send_dirty_pages(), see below), but it is important
>> that it not be zero on the first iteration (or else send_dirty_pages()
>> won't get called at all). Saturate the initializer value at the maximum
>> value the field can hold.
>>
>> While there also initialize struct precopy_stats' respective field to a
>> more sane value: We don't really know how many dirty pages there are at
>> that point.
>>
>> In suspend_and_send_dirty() and verify_frames() the local variables
>> don't need initializing at all, as they're only an output from the
>> hypercall which gets invoked first thing.
>>
>> In send_checkpoint_dirty_pfn_list() the local variable can be dropped
>> altogether: It's optional to xc_logdirty_control() and not used anywhere
>> else.
>>
>> Note that in case the clipping actually takes effect, the "Bitmap
>> contained more entries than expected..." log message will trigger. This
>> being just an informational message, I don't think this is overly
>> concerning.
> Is there any real reason why the width of the stats fields can't be
> expanded to avoid clipping? This could avoid the need to set the
> initial value to -1, which seems to be one of the more controversial
> changes.

While not impossible, it comes with a price tag, as we'd either need
to decouple xc_shadow_op_stats_t from struct xen_domctl_shadow_op_stats
or alter the underlying domctl. Neither of which looked appealing or
necessary to me; instead I'm still struggling with Andrew's comments,
yet I didn't receive any clarification or further explanation. Plus I
continue to think that statistics output like this shouldn't be assumed
to be precise anyway, and for practical purposes I don't think it
really matters how large the counts actually are once they've moved
into the billions.
