
Re: [Xen-devel] [RFC][v3][PATCH 4/6] xen:x86: add XENMEM_reserved_device_memory_map to expose RMRR



On 18/08/14 09:00, Chen, Tiejun wrote:
> On 2014/8/15 20:15, Andrew Cooper wrote:
>> On 15/08/14 09:27, Tiejun Chen wrote:
>>> We should expose the RMRR mappings to libxc, so that setup_guest() can
>>> check that the current MMIO range is not covered by any RMRR mapping.
>>>
>>> Signed-off-by: Tiejun Chen <tiejun.chen@xxxxxxxxx>
>>> ---
>>>   xen/arch/x86/mm.c | 32 ++++++++++++++++++++++++++++++++
>>>   1 file changed, 32 insertions(+)
>>>
>>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>>> index d23cb3f..fb6e92f 100644
>>> --- a/xen/arch/x86/mm.c
>>> +++ b/xen/arch/x86/mm.c
>>> @@ -4769,6 +4769,38 @@ long arch_memory_op(unsigned long cmd,
>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>           return 0;
>>>       }
>>>
>>> +    case XENMEM_reserved_device_memory_map:
>>> +    {
>>> +        struct xen_memory_map map;
>>> +        XEN_GUEST_HANDLE(e820entry_t) buffer;
>>> +        XEN_GUEST_HANDLE_PARAM(e820entry_t) buffer_param;
>>> +        unsigned int i;
>>> +
>>> +        if ( copy_from_guest(&map, arg, 1) )
>>> +            return -EFAULT;
>>
>> This hypercall implementation is looking somewhat more plausible, but
>> still has some issues.
>>
>>> +        if ( map.nr_entries < rmrr_maps.nr_map + 1 )
>>> +            return -EINVAL;
>>
>> This causes a fencepost error, does it not?
>
> map.nr_entries is E820MAX, and rmrr_maps.nr_map should obviously be far
> smaller than E820MAX. So what is the problem?
>
> Here I took XENMEM_machine_memory_map as a reference.

It looks like XENMEM_machine_memory_map is also wrong.

Consider the case where the caller provides a buffer of exactly the
correct number of entries.  In that case, the hypercall would fail with
-EINVAL despite being able to complete successfully.
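
For reference, a minimal sketch of the check without the fencepost error
(assuming, as in your patch, that rmrr_maps.nr_map is the number of
entries to be copied out):

    /* Only reject buffers which cannot hold all of the entries. */
    if ( map.nr_entries < rmrr_maps.nr_map )
        return -EINVAL;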

>
>>
>> Furthermore, the more useful error would be to return -ENOBUFS and fill
>> arg.nr_entries with rmrr_maps.nr_map, so the caller can allocate an
>> appropriately sized buffer.
>>
>>
>> It is also very common with hypercalls like this to allow a null
>> guest handle as an explicit request for the size.
>
> It looks like you want to issue the hypercall twice to get this done,
> but what's wrong with my way?
>
> Again, I took XENMEM_machine_memory_map as a reference here.

Some lessons have been learnt since some of the older hypercall handlers
were written.  Specifically, with your approach there is no way for the
caller to gauge the required buffer size if the buffer it provides is
too small.
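
As a rough, untested sketch (reusing the rmrr_maps structure from your
patch), the usual pattern looks something like:

        if ( copy_from_guest(&map, arg, 1) )
            return -EFAULT;

        /* A null buffer handle is an explicit request for the size. */
        if ( guest_handle_is_null(map.buffer) ||
             map.nr_entries < rmrr_maps.nr_map )
        {
            map.nr_entries = rmrr_maps.nr_map;
            if ( __copy_to_guest(arg, &map, 1) )
                return -EFAULT;
            return guest_handle_is_null(map.buffer) ? 0 : -ENOBUFS;
        }

        /* ... otherwise copy out the entries and return 0 ... */

That way the caller can always discover how big a buffer it needs,
either up front with a null handle or after an -ENOBUFS failure.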

~Andrew
