
Re: [Xen-devel] [PATCH v4] xmalloc: add support for checking the pool integrity



>>> On 17.12.14 at 11:08, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 17/12/14 09:59, Jan Beulich wrote:
>>>>> On 16.12.14 at 21:28, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 16/12/14 19:33, Mihai Donțu wrote:
>>>> +static bool_t __xmem_pool_check_locked(const char *file, int line,
>>>> +                                       const struct xmem_pool *pool)
>>>> +{
>>>> +    unsigned int i;
>>>> +    static bool_t once = 1;
>>> What is this static doing?  Surely corruption in one pool has no
>>> effect on corruption in a separate pool (other than the usual side
>>> effects of general memory corruption, which tend to be bad).
>>>
>>> It looks as if it wants to be an extra field in an xmem_pool.
>> Question is whether logging more than the first corruption ever is
>> really all that useful.
> 
> True - as soon as one bit of memory corruption is detected, all other
> bets are off.
> 
> Might it perhaps be prudent to then panic() on a positive detection of
> memory corruption?

Indeed.
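
Something along these lines (just a sketch; pool_check_failed(), its use
of pool->name and the exact message are illustrative, not part of the
posted patch) could then replace the static "once" logic:

    /*
     * Sketch only: report and panic on the first detected inconsistency
     * instead of suppressing further reports with a static flag.
     */
    static void pool_check_failed(const char *file, int line,
                                  const struct xmem_pool *pool,
                                  const char *what)
    {
        printk(XENLOG_ERR "xmem_pool '%s': %s (checked from %s:%d)\n",
               pool->name, what, file, line);
        panic("xmem_pool integrity check failed");
    }

Each individual consistency check in __xmem_pool_check_locked() would
then call this instead of printing and clearing a flag.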

>>>> +bool_t __xmem_pool_check(const char *file, int line, struct xmem_pool *pool)
>>>> +{
>>>> +    return __xmem_pool_check_unlocked(file, line, pool ?: xenpool);
>>> Why should a NULL pool be tolerated here?  This is debug code only, so
>>> we can require and trust that we are called appropriately.
>> xenpool is not and should not be visible to code outside this file.
> 
> Quite, but the only plausible external callers will be checking pools of
> their own.  This patch adds internal callers for checking the xenpool.

If you look up the earlier conversation on this patch and its origin, I
think you'll find that it is specifically intended that calls to this
check can be added at arbitrary places in the code.
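
To make that concrete, the intended usage is roughly the following (the
xmem_pool_check() wrapper name and the call site are my illustration,
not text from the patch):

    /* xmalloc.h: record the call site automatically. */
    #define xmem_pool_check(pool) \
        __xmem_pool_check(__FILE__, __LINE__, (pool))

    /* Dropped in temporarily at any suspect spot in the hypervisor,
     * where xenpool itself is not (and should not be) visible: */
    xmem_pool_check(NULL);    /* NULL => check the default xmalloc pool */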

Jan


 

