
Re: [Xen-devel] [PATCH v3 4/9] xen: add basic hypervisor filesystem support



On 04.02.2020 10:21, Jürgen Groß wrote:
> On 04.02.20 09:48, Jan Beulich wrote:
>> On 04.02.2020 07:43, Jürgen Groß wrote:
>>> On 03.02.20 16:07, Jan Beulich wrote:
>>>> On 21.01.2020 09:43, Juergen Gross wrote:
>>>>> +static int hypfs_read(const struct hypfs_entry *entry,
>>>>> +                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
>>>>> +{
>>>>> +    struct xen_hypfs_direntry e;
>>>>> +    long ret = -EINVAL;
>>>>> +
>>>>> +    if ( ulen < sizeof(e) )
>>>>> +        goto out;
>>>>> +
>>>>> +    e.flags = entry->write ? XEN_HYPFS_WRITEABLE : 0;
>>>>> +    e.type = entry->type;
>>>>> +    e.encoding = entry->encoding;
>>>>> +    e.content_len = entry->size;
>>>>> +
>>>>> +    ret = -EFAULT;
>>>>> +    if ( copy_to_guest(uaddr, &e, 1) )
>>>>> +        goto out;
>>>>> +
>>>>> +    ret = 0;
>>>>> +    if ( ulen < entry->size + sizeof(e) )
>>>>> +        goto out;
>>>>
>>>> So you return "success" even if the operation didn't complete
>>>> successfully. This isn't very nice, plus ...
>>>
>>> The direntry contains the needed size. The caller should know the
>>> size he passed to Xen.
>>>
>>>>
>>>>> +    guest_handle_add_offset(uaddr, sizeof(e));
>>>>> +
>>>>> +    ret = entry->read(entry, uaddr);
>>>>
>>>> ... how is the caller to know whether direntry was at least
>>>> copied if this then fails?
>>>
>>> Is this really important? Normally -EFAULT should just not happen. In
>>> case it does, I don't think the caller can make real use of the direntry.
>>
>> "Important" has various possible meanings. The success/failure
>> indication to the caller should at least be rational. "If the
>> data buffer was not large enough for all the data no entry data
>> is returned, but the direntry will contain the needed size for
>> the returned data" is fine to be stated in the public header,
>> but I think this wants to be -ENOBUFS then, not 0 (success).
> 
> I would be fine with this, but this contradicts your previous demand
> not to enumerate the possible failure cases, which would be essential
> for this case.

Slightly re-writing the part of the comment I did quote would be
all that's needed afaict: "If the data buffer was not large enough
for all the data, -ENOBUFS is returned and no entry data is copied,
but the direntry will contain the needed size for the returned data."
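
Concretely, the tail of the quoted hunk would then read (untested,
merely illustrating the suggested change, all names as in the patch):

    ret = -ENOBUFS;
    if ( ulen < entry->size + sizeof(e) )
        goto out;

    guest_handle_add_offset(uaddr, sizeof(e));

    ret = entry->read(entry, uaddr);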

>>>>> +    union {
>>>>> +        char buf[8];
>>>>> +        uint8_t u8;
>>>>> +        uint16_t u16;
>>>>> +        uint32_t u32;
>>>>> +        uint64_t u64;
>>>>> +    } u;
>>>>> +
>>>>> +    ASSERT(leaf->e.type == XEN_HYPFS_TYPE_UINT && leaf->e.size <= 8);
>>>>> +
>>>>> +    if ( ulen != leaf->e.size )
>>>>> +        return -EDOM;
>>>>
>>>> Is this restriction really necessary? Setting e.g. a 4-byte
>>>> field from 1-byte input is no problem at all. Since this is for
>>>> booleans, I anyway wonder why it would be helpful to allow input
>>>> larger than a single byte. But maybe all of this is again a
>>>> result of not seeing what a user of the function would look
>>>> like.
>>>
>>> I wanted to have as little functionality as possible in the hypervisor.
>>> It is no problem for the library to pass a properly sized buffer.
>>>
>>> Allowing larger variables for booleans is just a consequence of the
>>> hypervisor parameters allowing that.
>>
>> But the caller shouldn't be concerned with the hypervisor
>> implementation detail of what the chosen width is. Over time we
>> e.g. convert int (along with bool_t) to bool when it's used in
>> a boolean way. This should not result in the caller needing to
>> change, despite the width change of the variable.
> 
> This is basically a consequence of now passing binary values to and from
> the hypervisor.
> 
> The normal way of handling this (as can be seen in libxenhypfs) is to
> query the hypervisor for the size of the value (no matter whether it's
> int, uint or bool) and then to do the conversion between ASCII and the
> binary value at the caller's side.

I can see why this is needed for e.g. integer values, but I don't
see the need for booleans.
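
E.g. something along these lines (a rough sketch only, reusing the
union from the quoted hunk and assuming a little-endian layout; not
part of the patch) would let the boolean write path accept any input
no wider than the internal representation:

    /* Sketch: permit short input for booleans, zero-extending it. */
    if ( ulen > leaf->e.size )
        return -EDOM;

    u.u64 = 0;
    if ( copy_from_guest(u.buf, uaddr, ulen) )
        return -EFAULT;

That way a caller could always pass a single byte for a boolean, no
matter what width the hypervisor happens to use internally.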

>>>>> +struct xen_hypfs_direntry {
>>>>> +    uint16_t flags;
>>>>> +#define XEN_HYPFS_WRITEABLE    0x0001
>>>>> +    uint8_t type;
>>>>> +#define XEN_HYPFS_TYPE_DIR     0x0000
>>>>> +#define XEN_HYPFS_TYPE_BLOB    0x0001
>>>>> +#define XEN_HYPFS_TYPE_STRING  0x0002
>>>>> +#define XEN_HYPFS_TYPE_UINT    0x0003
>>>>> +#define XEN_HYPFS_TYPE_INT     0x0004
>>>>> +#define XEN_HYPFS_TYPE_BOOL    0x0005
>>>>> +    uint8_t encoding;
>>>>> +#define XEN_HYPFS_ENC_PLAIN    0x0000
>>>>> +#define XEN_HYPFS_ENC_GZIP     0x0001
>>>>
>>>> Meaning I can e.g. have a gzip-ed string or bool (or even dir)?
>>>> If this is just for "blob", why have separate fields instead of
>>>> e.g. BLOB_RAW and BLOB_GZIP or some such?
>>>
>>> gzip-ed string or blob are the primary targets.
>>>
>>> Maybe we want to have other encodings later (Andrew asked for that
>>> possibility when I posted the patch for retrieving the .config file
>>> contents early last year).
>>
>> To me it would seem preferable if the contents of a blob
>> identified their format themselves. But since this leaves
>> room for ambiguities I accept that the format needs
>> specifying. However, to me a gzip-ed string is as good as a
>> gzip-ed blob, and hence I still think sub-dividing "blob" is
>> the way to go, with no separate "encoding". Otherwise at the
>> very least a comment here would need adding to clarify what
>> combinations are valid / to be expected by callers.
> 
> libxenhypfs is able to handle all possible combinations. I just don't
> think some of the combinations make sense (gzip-ing a 4-byte binary
> value, e.g., is nonsense).
> 
> OTOH, in case we add large arrays of longs in the future, it might be
> beneficial to compress them in some way. So I'd like to keep type and
> encoding as separate pieces of information.

Okay, I'm not entirely opposed. But I'd be curious if anyone
else has an opinion here.
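
FTAOD the consumer pattern I'd then expect is roughly this
(hypothetical caller-side code, not from the patch; decompress()
merely stands in for whatever gunzip helper libxenhypfs uses):

    switch ( e.encoding )
    {
    case XEN_HYPFS_ENC_PLAIN:
        break;
    case XEN_HYPFS_ENC_GZIP:
        data = decompress(data, &size); /* undo the transport encoding */
        break;
    default:
        return -ENOSYS; /* unknown encoding */
    }
    /* Only now interpret the payload according to e.type. */

i.e. the encoding is undone first, independently of the type.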

Jan
