
Re: [PATCH v2] MAINTAINERS: consolidate vm-event/monitor entry



On Wed, Sep 06, 2023 at 02:50:22PM +0200, Jan Beulich wrote:
> On 06.09.2023 14:40, Anthony PERARD wrote:
> > On Thu, Aug 31, 2023 at 08:15:13AM +0200, Jan Beulich wrote:
> >> If the F: description is to be trusted, the two xen/arch/x86/hvm/
> >> lines were fully redundant with the earlier wildcard ones. Arch header
> >> files, otoh, were no longer covered by anything as of the move from
> >> include/asm-*/ to arch/*/include/asm/. Further also generalize (by
> >> folding) the x86- and Arm-specific mem_access.c entries.
> >>
> >> Finally, again assuming the F: description can be trusted, there's no
> >> point listing arch/, common/, and include/ entries separately. Fold
> >> them all.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> ---
> >> -F:        xen/arch/*/monitor.c
> >> -F:        xen/arch/*/vm_event.c
> >> -F:        xen/arch/arm/mem_access.c
> >> -F:        xen/arch/x86/include/asm/hvm/monitor.h
> >> -F:        xen/arch/x86/include/asm/hvm/vm_event.h
> >> -F:        xen/arch/x86/mm/mem_access.c
> >> -F:        xen/arch/x86/hvm/monitor.c
> >> -F:        xen/arch/x86/hvm/vm_event.c
> >> -F:        xen/common/mem_access.c
> >> -F:        xen/common/monitor.c
> >> -F:        xen/common/vm_event.c
> >> -F:        xen/include/*/mem_access.h
> >> -F:        xen/include/*/monitor.h
> >> -F:        xen/include/*/vm_event.h
> >> +F:        xen/*/mem_access.[ch]
> >> +F:        xen/*/monitor.[ch]
> >> +F:        xen/*/vm_event.[ch]
> > 
> > 
> > Hi,
> > 
> > Did you mean, for example, to change the maintainership of
> > "xen/arch/x86/mm/mem_access.c"? Before, it was:
> >     - VM EVENT, MEM ACCESS and MONITOR
> >     - X86 MEMORY MANAGEMENT
> >     - X86 ARCHITECTURE
> > And now, it's just:
> >     - X86 MEMORY MANAGEMENT
> >     - X86 ARCHITECTURE
> > 
> > (see ./scripts/get_maintainer.pl --sections -f xen/arch/x86/mm/mem_access.c)
> > 
> > Also, now "xen/include/xen/monitor.h" is only "THE REST".
> 
> No, no change of maintainership was intended. But there was an uncertainty,
> which is why I said "assuming the F: description can be trusted". So ...
> 
> > On the other hand, there's no change for "xen/common/monitor.c", so the
> > pattern works for this particular file.
> 
> ... together with this observation, I take it that
> 
>          F:   */net/*         all files in "any top level directory"/net
> 
> is actually at best misleading / ambiguous - I read it as not just a single
> level of directories, but it may well be that that's what is meant. At

I guess the ambiguity lies in the word "files". Here, "files" means
individual files and not directories, unlike shell globbing, where a '*'
would also match directories.

The first '*' is described as "any top level directory", but it is
effectively also "only a top level directory". That suggests to me that
'*' only matches a single level of directories.
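
For example, reading it that way, "xen/*/monitor.[ch]" gives:
    xen/common/monitor.c         matched (one directory level below xen/)
    xen/arch/x86/hvm/monitor.c   not matched (three levels)
    xen/include/xen/monitor.h    not matched (two levels)
which is exactly the difference visible in the get_maintainer.pl output
quoted above.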

> which point the question is how "any number of directories" could be
> expressed. Would **/ or .../**/... work here? I'm afraid my Perl is far
> from sufficient to actually spot where (and hence how) this is handled in
> the script.

I think you could write a regex with the "N:" type instead of "F:".
This is described in Linux's MAINTAINERS file, but not in ours, yet our
get_maintainer.pl script has the functionality. It would be nice to be
able to write just '**', but until someone implements that, we can go
with a regex, which is more complicated and more prone to mistakes.

So I think in the short-term, you want:

N:      ^xen/.*/mem_access\.[ch]$
N:      ^xen/.*/monitor\.[ch]$
N:      ^xen/.*/vm_event\.[ch]$
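
With those in place, "./scripts/get_maintainer.pl --sections -f
xen/arch/x86/mm/mem_access.c" should list "VM EVENT, MEM ACCESS and
MONITOR" again, in addition to the two x86 sections.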

As for adding "**", there may be something to be done in
"file_match_pattern()" in get_maintainer.pl; this function compares the
number of '/' in both the pattern and the file path to decide whether a
'*' matches only one level of directories or more.
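
Roughly, the idea is something like this (just a sketch of the
slash-counting approach and of the '*' -> '.*' conversion I believe the
script does; not the actual code, so details may well differ):

use strict;
use warnings;

# Sketch only: an 'F:' glob is turned into a regex, and the number of '/'
# in pattern and path is compared, so '.*' cannot silently swallow extra
# directory levels.
sub file_match_pattern {
    my ($file, $pattern) = @_;
    $pattern =~ s/\./\\./g;                 # escape literal dots
    $pattern =~ s/\*/.*/g;                  # '*' -> '.*'
    return 0 unless $file =~ m@^$pattern$@;
    return ($file =~ tr@/@@) == ($pattern =~ tr@/@@);
}

for my $f ("xen/common/mem_access.c", "xen/arch/x86/mm/mem_access.c") {
    printf "%-30s %s\n", $f,
        file_match_pattern($f, "xen/*/mem_access.[ch]") ? "matched" : "not matched";
}

Lifting the slash-count restriction when the pattern contains "**" would
presumably go somewhere around that comparison.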

Cheers,

-- 
Anthony PERARD
