Re: [PATCH v3 2/2] x86/ioreq: Extend ioreq server to support multiple ioreq pages
On 26.02.2026 16:53, Jan Beulich wrote:
> On 23.02.2026 10:38, Julian Vetter wrote:
>> @@ -89,6 +91,39 @@ static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s)
>> return hvm_alloc_legacy_ioreq_gfn(s);
>> }
>>
>> +static gfn_t hvm_alloc_ioreq_gfns(struct ioreq_server *s,
>> + unsigned int nr_pages)
>> +{
>> + struct domain *d = s->target;
>> + unsigned long mask;
>> + unsigned int i, run;
>> +
>> + if ( nr_pages == 1 )
>> + return hvm_alloc_ioreq_gfn(s);
>> +
>> + /* Find nr_pages consecutive set bits */
>> + mask = d->arch.hvm.ioreq_gfn.mask;
>> +
>> + for ( i = 0, run = 0; i < BITS_PER_LONG; i++ )
>> + {
>> + if ( !test_bit(i, &mask) )
>> + run = 0;
>> + else if ( ++run == nr_pages )
>> + {
>> + /* Found a run - clear all bits and return base GFN */
>> + unsigned int start = i - nr_pages + 1;
>> + unsigned int j;
>> +
>> + for ( j = start; j <= i; j++ )
>> + clear_bit(j, &d->arch.hvm.ioreq_gfn.mask);
>
> Using clear_bit() here doesn't make the whole operation atomic. There will
> need to be synchronization (also with hvm_alloc_ioreq_gfn()), and once
> that's there (or if things are suitably synchronized already), __clear_bit()
> ought to suffice here.
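To illustrate that point, a synchronized variant might look like the sketch below. This is not the patch's code: the pthread mutex is a userspace stand-in for whatever lock actually serializes ioreq_gfn.mask updates in Xen, and alloc_gfn_run() is a hypothetical helper name. With the lock held, plain non-atomic clears suffice:

```c
#include <limits.h>
#include <pthread.h>

/*
 * Illustrative stand-in for the real lock that would serialize
 * ioreq_gfn.mask updates (also against hvm_alloc_ioreq_gfn()).
 */
static pthread_mutex_t gfn_mask_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Find nr_pages consecutive set bits in *mask, clear them, and return
 * the start index, or -1 if no sufficiently long run exists.
 */
int alloc_gfn_run(unsigned long *mask, unsigned int nr_pages)
{
    int ret = -1;
    unsigned int i, run = 0;

    pthread_mutex_lock(&gfn_mask_lock);

    for ( i = 0; i < sizeof(*mask) * CHAR_BIT; i++ )
    {
        if ( !(*mask & (1UL << i)) )
            run = 0;
        else if ( ++run == nr_pages )
        {
            unsigned int start = i - nr_pages + 1, j;

            /* Plain (non-atomic) clears are fine under the lock. */
            for ( j = start; j <= i; j++ )
                *mask &= ~(1UL << j);

            ret = start;
            break;
        }
    }

    pthread_mutex_unlock(&gfn_mask_lock);

    return ret;
}
```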
>
>> + return _gfn(d->arch.hvm.ioreq_gfn.base + start);
>> + }
>> + }
>> +
>> + return INVALID_GFN;
>> +}
>
> Did you consider whether fragmentation could get in the way here, as is
> usually the case when doing mixed-size allocations from a single pool? To
> what extent is it necessary for the GFNs used to be consecutive?
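As a concrete illustration of the fragmentation concern (a self-contained sketch, not patch code; find_set_run() is a hypothetical helper mirroring the patch's scan): interleaved single-page allocations can leave the mask with plenty of set bits yet no run long enough for a multi-page request.

```c
#include <limits.h>

/*
 * Return the start of a run of nr consecutive set bits in mask,
 * or -1 if no such run exists.
 */
static int find_set_run(unsigned long mask, unsigned int nr)
{
    unsigned int i, run = 0;

    for ( i = 0; i < sizeof(mask) * CHAR_BIT; i++ )
    {
        if ( !(mask & (1UL << i)) )
            run = 0;
        else if ( ++run == nr )
            return i - nr + 1;
    }

    return -1;
}
```

For example, a mask of 0xAA (bits 1, 3, 5, 7 set) still has four free GFNs, so any single-page allocation succeeds, yet a two-page allocation already fails.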
Thinking about it - isn't this GFN-based approach the legacy one? Can't we
demand use of the resource mapping approach to support bigger guests?
Jan