Re: [Xen-devel] [PATCH v2 2/6] xen: add basic hypervisor filesystem support
On 12.11.2019 17:06, Jürgen Groß wrote:
> On 12.11.19 14:48, Jan Beulich wrote:
>> On 02.10.2019 13:20, Juergen Gross wrote:
>>> +static int hypfs_add_entry(struct hypfs_dir *parent, struct hypfs_entry *new)
>>> +{
>>> + int ret = -ENOENT;
>>> + struct list_head *l;
>>> +
>>> + if ( !new->content )
>>> + return -EINVAL;
>>> +
>>> + spin_lock(&hypfs_lock);
>>> +
>>> + list_for_each ( l, &parent->list )
>>> + {
>>> + struct hypfs_entry *e = list_entry(l, struct hypfs_entry, list);
>>
>> const?
>
> Hmm, is this true when I add a new entry to it? l is part of *e
> after all.
But you don't use e in a way requiring it to be non-const. Question
is (as I did ask elsewhere already) whether you wouldn't better use
list_for_each_entry() here in the first place.
>>> +static unsigned int hypfs_get_entry_len(struct hypfs_entry *entry)
>>> +{
>>> + unsigned int len = 0;
>>> +
>>> + switch ( entry->type )
>>> + {
>>> + case hypfs_type_dir:
>>> + len = entry->dir->content_size;
>>> + break;
>>> + case hypfs_type_string:
>>> + len = strlen(entry->str_val) + 1;
>>> + break;
>>> + case hypfs_type_uint:
>>> + len = 11; /* longest possible printed value + 1 */
>>
>> Why would uint values be restricted to 32 bits? There are plenty of
>> 64-bit numbers that might be of interest exposing through this
>> interface (and even more so if down the road we were to re-use this
>> for something debugfs-like). And even without this I think it would
>> be better to not have a literal number here - it'll be close to
>> unnoticeable (without someone remembering) when porting to an arch
>> with unsigned int wider than 32 bits.
>
> Support of 64-bit numbers would add "hypfs_type_ulong".
At this layer I dislike making assumptions on the bitness of int
or long. Can we settle on either a type that's suitable for all
sensible values (would likely be unsigned long long) or use fixed-width
identifiers (hypfs_type_u32 et al)?
> So basically something like "(sizeof(type) * 8 * 3 + 9) / 10 + 1" ?
> (equivalent to: 10 bits make roughly 3 digits, round that up and
> add one for the NUL byte). This is correct for 1, 2, 4 and 8 byte values. For 16
> byte values the result is 40, but it should be 41.
That's one option. The other - especially worthwhile to consider
for large numbers - is to represent them in hex.
>>> + break;
>>> + }
>>> +
>>> + return len;
>>> +}
>>> +
>>> +long do_hypfs_op(unsigned int cmd,
>>> + XEN_GUEST_HANDLE_PARAM(void) arg1, unsigned long arg2,
>>> + XEN_GUEST_HANDLE_PARAM(void) arg3, unsigned long arg4)
>>> +{
>>> + int ret;
>>> + struct hypfs_entry *entry;
>>> + unsigned int len;
>>> + static char path[XEN_HYPFS_MAX_PATHLEN];
>>> +
>>> + if ( !is_control_domain(current->domain) &&
>>> + !is_hardware_domain(current->domain) )
>>> + return -EPERM;
>>
>> Replace by an XSM check?
>
> Yes, but I could use some help here. How do I add a new hypercall
> in XSM?
I guess we should rope in Daniel (now Cc-ed).
>>
>>> + spin_lock(&hypfs_lock);
>>
>> Wouldn't this better be an r/w lock from the beginning, requiring
>> only read access here?
>
> Depending on the further use cases we might even end up with per-element
> locks. I'm fine using a r/w lock for now.
Indeed I was thinking that write-value support would want a per-entry
lock, with the global one only guarding the directory tree.
>>> + ret = hypfs_get_path_user(path, arg1, arg2);
>>> + if ( ret )
>>> + goto out;
>>> +
>>> + entry = hypfs_get_entry(path);
>>> + if ( !entry )
>>> + {
>>> + ret = -ENOENT;
>>> + goto out;
>>> + }
>>> +
>>> + switch ( cmd )
>>> + {
>>> + case XEN_HYPFS_OP_read_contents:
>>> + {
>>> + char buf[12];
>>> + char *val = buf;
>>
>> const void *?
>
> Why void *? The result is always a string.
const char * might be fine too, but the code really doesn't depend
on this being other than void afaics.
>>> + * positive value: content buffer was too small, returned value is needed size
>>
>> Positive return values are problematic when reaching INT_MAX. Are you
>> convinced we want to have yet another instance?
>
> Are you convinced we want to return more than 2G long strings in one go?
Counter question: Are you convinced we'll stick to just strings?
See the gzip-ing question on the later patch for example.
>>> +struct hypfs_entry {
>>> + enum hypfs_entry_type type;
>>> + const char *name;
>>> + struct list_head list;
>>> + struct hypfs_dir *parent;
>>
>> Afaict you set this field, but you never use it anywhere. Why do you
>> add it in the first place? (Initially I meant to ask whether this
>> can be pointer-to-const.)
>
> It will be needed for cases where the entry is being changed, e.g.
> when support for custom runtime parameters is added.
Can we defer its introduction until then?
>>> + union {
>>> + void *content;
>>
>> const?
>>
>>> + struct hypfs_dir *dir;
>>
>> const?
>
> As already said above: mixing const and non-const pointers in a
> union looks fishy to me.
Hmm, well, I can see your point, but I think it still can be viewed
to have its (perhaps largely documentation) value.
>>> + char *str_val;
>>> + unsigned int *uint_val;
>>> + };
>>> +};
>>> +
>>> +extern struct hypfs_dir hypfs_root;
>>> +
>>> +int hypfs_new_dir(struct hypfs_dir *parent, const char *name,
>>> + struct hypfs_dir *dir);
>>> +int hypfs_new_entry_string(struct hypfs_dir *parent, const char *name,
>>> + char *val);
>>> +int hypfs_new_entry_uint(struct hypfs_dir *parent, const char *name,
>>> + unsigned int *val);
>>
>> Thinking about the lack of const on the last parameters here again -
>> if these are for the pointed to values to be modifiable through
>> this interface, then how would the "owning" component learn of the
>> value having changed? Not everyone may need this, but I think there
>> would want to be a callback. Until then perhaps better to add const
>> here, promising that the values won't change behind the backs of
>> the owners.
>
> That's what hypfs_lock is for (and maybe later per-element locks).
I don't understand: Are you intending random code to acquire this
lock?
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel