Re: [Xen-devel] page ref/type count overflows



>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 26.01.09 14:33 >>>
>On 26/01/2009 13:10, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>
>> The backend drivers, in my opinion, have no alternative but to be taught
>> to do full error checking, in order to avoid the respective DomU-induced
>> problems.
>
>Backend drivers, which get their mappings via grants, have sufficient
>checking already, don't they?

Actually, after some checking, almost. There's one leak in netback, but
that's trivial to fix.
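
For context, the checking at issue is of the following shape: refuse any
reference-count increment that could wrap the counter, since a wrapped count
is exactly what a malicious DomU could otherwise provoke. Below is a minimal
sketch, assuming GCC atomic builtins and a count kept in the low bits of
count_info; get_page_sketch and count_mask are hypothetical names, not Xen's
actual identifiers.

/* Hypothetical sketch of an overflow-checked refcount increment; not
 * the real Xen get_page().  Assumes GCC builtins and a count occupying
 * the low bits of count_info, selected by count_mask. */
#include <stdbool.h>

struct page_sketch {
    unsigned long count_info;
};

static bool get_page_sketch(struct page_sketch *pg, unsigned long count_mask)
{
    unsigned long prev, old;

    for ( prev = pg->count_info; ; prev = old )
    {
        /* Refuse a free page (count 0) or an increment that would
         * wrap the count field back to 0. */
        if ( (prev & count_mask) == 0 ||
             ((prev + 1) & count_mask) == 0 )
            return false;
        old = __sync_val_compare_and_swap(&pg->count_info, prev, prev + 1);
        if ( old == prev )
            return true;
    }
}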

>As for the general issue, why fudge around it when for any modern system we
>can just fix it? By which I mean, x64 systems with CMPXCHG16B support
>(everything but ancient Opterons?) can have a full domain pointer and a long
>count_info in struct page_info, and still update both atomically.
>
>It'd be a smaller patch, and it'd be less kludgy. The disadvantages are an
>extra 8 bytes per page (which I think is okay, particularly to fix this nasty
>issue in a clean way) and that it only fixes the issue for x64 (I personally
>don't care about leaving i386 unfixed, especially when the cost of the fix in
>that case would IMO be high in code complexity and ugliness).
>
>There is already no reason why type_info (an unsigned long) could not have a
>wider count. It's just of no use until count_info is widened.
>
>What do you say to that then? :-)
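
For concreteness, here is a rough sketch of what such a combined update could
look like, assuming x86-64 and GCC inline assembly; struct page_info_sketch
and cmpxchg16b_page are hypothetical names, not the actual Xen patch.

/* Sketch of atomically updating a widened page_info -- a full domain
 * pointer plus a full-width count -- with CMPXCHG16B.  Hypothetical
 * names; x86-64 and GCC inline asm assumed. */
#include <stdbool.h>

struct domain;                          /* opaque placeholder */

struct page_info_sketch {
    unsigned long count_info;           /* low quadword  (RAX/RBX) */
    struct domain *domain;              /* high quadword (RDX/RCX) */
} __attribute__((__aligned__(16)));     /* CMPXCHG16B needs 16-byte alignment */

static bool cmpxchg16b_page(struct page_info_sketch *pg,
                            unsigned long old_count, struct domain *old_dom,
                            unsigned long new_count, struct domain *new_dom)
{
    bool ok;

    /* Compares RDX:RAX with the 16 bytes at *pg; if equal, stores
     * RCX:RBX there.  ZF reports success. */
    asm volatile ( "lock; cmpxchg16b %1\n\tsetz %0"
                   : "=q" (ok), "+m" (*pg), "+a" (old_count), "+d" (old_dom)
                   : "b" (new_count), "c" (new_dom)
                   : "memory", "cc" );
    return ok;
}

With both fields covered by a single compare-and-exchange, count_info can use
the full register width, so the overflow in question would no longer be
reachable.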

I did consider all of this first, but as far as I recall it's not only
ancient Opterons that lack cmpxchg16b.

But (just having got your second response) if we can do without that ugly
cmpxchg8b altogether, then of course this is the much preferred solution.
Growing struct page_info isn't very fortunate, of course, but it's pretty
much unavoidable.

Jan

