Re: [PATCH] tools/xg: increase LZMA_BLOCK_SIZE for uncompressing the kernel
On 09.10.2024 11:52, Marek Marczykowski-Górecki wrote:
> On Wed, Oct 09, 2024 at 09:19:57AM +0200, Jan Beulich wrote:
>> On 08.10.2024 23:32, Marek Marczykowski-Górecki wrote:
>>> --- a/tools/libs/guest/xg_dom_bzimageloader.c
>>> +++ b/tools/libs/guest/xg_dom_bzimageloader.c
>>> @@ -272,8 +272,7 @@ static int _xc_try_lzma_decode(
>>>      return retval;
>>>  }
>>>
>>> -/* 128 Mb is the minimum size (half-way) documented to work for all inputs. */
>>> -#define LZMA_BLOCK_SIZE (128*1024*1024)
>>> +#define LZMA_BLOCK_SIZE (256*1024*1024)
>>
>> That's as arbitrary as before, now just not even with a comment at least
>> hinting at it being arbitrary. Quoting from one of the LZMA API headers:
>>
>>  * Decoder already supports dictionaries up to 4 GiB - 1 B (i.e.
>>  * UINT32_MAX), so increasing the maximum dictionary size of the
>>  * encoder won't cause problems for old decoders.
>>
>> IOW - what if the Linux folks decided to increase the dictionary size
>> further? I therefore wonder whether we don't need to make this more
>> dynamic, perhaps by peeking into the header to obtain the dictionary
>> size used. The one thing I'm not sure about is whether there can't be
>> multiple such headers throughout the file, and hence (in principle)
>> differing dictionary sizes.
>
> What is the purpose of this block size limit? From the error message, it
> seems to be avoiding excessive memory usage during decompression (which
> could be DoS via OOM). If that's the case, then taking the limit from
> the kernel binary itself will miss this point (especially in case of
> pygrub or similar, but there may be other cases of not-fully-trusted
> kernel binaries).

Indeed. The question then simply is: Where do we want to draw the line
between what we permit and what we reject?

Jan
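
[Editorial note: a rough sketch, not a tested patch, of what the
"peek into the header" idea combined with an explicit upper bound
could look like for the legacy LZMA-alone format that the Linux
kernel uses. The first 13 bytes of that format are: one byte of
lc/lp/pb properties, a 32-bit little-endian dictionary size, and a
64-bit uncompressed size. LZMA_POLICY_CAP and
try_lzma_decode_dynamic are made-up names for illustration; the
ceiling is exactly the permit/reject line the thread asks about.]

#include <stdint.h>
#include <stddef.h>
#include <lzma.h>

#define LZMA_POLICY_CAP (512ULL * 1024 * 1024) /* hypothetical ceiling */

static int try_lzma_decode_dynamic(const uint8_t *in, size_t in_size,
                                   lzma_stream *stream)
{
    uint64_t dict_size, memlimit;

    if ( in_size < 13 ) /* too short to hold an LZMA-alone header */
        return -1;

    /* Bytes 1..4 of the header: dictionary size, little endian. */
    dict_size = (uint64_t)in[1] | ((uint64_t)in[2] << 8) |
                ((uint64_t)in[3] << 16) | ((uint64_t)in[4] << 24);

    /* liblzma needs roughly the dictionary size plus a fixed overhead;
     * a precise figure could come from lzma_raw_decoder_memusage().
     * Doubling here is a crude stand-in for that. */
    memlimit = dict_size * 2;

    /* Reject inputs asking for more memory than we are willing to
     * spend on a not-fully-trusted kernel binary (the OOM-DoS concern
     * raised above). */
    if ( memlimit > LZMA_POLICY_CAP )
        return -1;

    /* Pass the derived limit as the decoder's memlimit instead of a
     * fixed LZMA_BLOCK_SIZE. */
    return lzma_alone_decoder(stream, memlimit) == LZMA_OK ? 0 : -1;
}

[This does not address Jan's open question of whether multiple
headers, and hence differing dictionary sizes, can occur within one
file; a stream-decoder-based approach would be needed there.]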