
Re: [Xen-devel] [PATCH for-next v3 06/22] x86/traps: move PV hypercall handlers to pv/traps.c



On 06/06/17 08:36, Jan Beulich wrote:
>>>> On 02.06.17 at 13:01, <wei.liu2@xxxxxxxxxx> wrote:
>> On Wed, May 31, 2017 at 05:45:14AM -0600, Jan Beulich wrote:
>>>>>> On 31.05.17 at 13:14, <wei.liu2@xxxxxxxxxx> wrote:
>>>> On Tue, May 30, 2017 at 11:59:31PM -0600, Jan Beulich wrote:
>>>>>>>> On 30.05.17 at 19:40, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>>> On 29/05/17 16:40, Jan Beulich wrote:
>>>>>>>>>> On 18.05.17 at 19:09, <wei.liu2@xxxxxxxxxx> wrote:
>>>>>>>> The following handlers are moved:
>>>>>>>> 1. do_set_trap_table
>>>>>>> This one makes sense to move to pv/traps.c, but ...
>>>>>>>
>>>>>>>> 2. do_set_debugreg
>>>>>>>> 3. do_get_debugreg
>>>>>>>> 4. do_fpu_taskswitch
>>>>>>> ... none of these do. I could see them go into pv/hypercall.c,
>>>>>>> but I could also see that file dealing intentionally only with
>>>>>>> everything hypercall related except individual handlers. Andrew,
>>>>>>> do you have any opinion or thoughts here?
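
For scale, the four handlers listed above are each only a handful of lines.
The sketch below gives the rough shape of do_fpu_taskswitch (written from
memory rather than copied from the tree, so treat field and helper names as
approximate):

    long do_fpu_taskswitch(int set)
    {
        struct vcpu *v = current;

        if ( set )
        {
            /* Guest wants CR0.TS set: record it and set the real bit. */
            v->arch.pv_vcpu.ctrlreg[0] |= X86_CR0_TS;
            stts();
        }
        else
        {
            /* Guest clears TS: only touch the real bit if the FPU is live. */
            v->arch.pv_vcpu.ctrlreg[0] &= ~X86_CR0_TS;
            if ( v->fpu_dirtied )
                clts();
        }

        return 0;
    }
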
>>>>>> Despite its name, traps.c deals mostly with low-level exception
>>>>>> handling, so I am not completely convinced that do_set_trap_table()
>>>>>> would logically live in traps.c.
>>>>> I can see this being the case for traps.c, but pv/traps.c? There's
>>>>> not much _low level_ exception handling that's PV-specific. But I
>>>>> certainly don't mind such relatively small hypercall handlers being
>>>>> lumped together into some other file, ...
>>>>>
>>>>>> I'd also prefer not to mix these into hypercall.c.  The best I can
>>>>>> suggest is pv/domain.c, but even that isn't great.
>>>>>>
>>>>>> Sorry for being unhelpful.  I'm not sure pv/misc-hypercalls.c is a
>>>>>> suitable name either.
>>>>> ... be it this name or some other one (if we can think of a better
>>>>> alternative). Come to think of it: the existing file is named
>>>>> "hypercall.c" - how about "hypercalls.c"?
>>>>>
>>>> Well, I don't think moving them into hypercall(s).c is nice either.
>>>>
>>>> Since you expressed the idea of using iret.c for do_iret, maybe we can
>>>> use debugreg.c and fpu_taskswitch.c ?
>>> I did consider this too, but some of these are really small, and
>>> while I dislike overly large files, I also don't think files with just
>>> one or two dozen actual code lines are very useful to have.
>>>
>> TBH I don't really see a problem with that approach -- we have clear_page.S,
>> copy_page.S and gpr_switch.S which don't get lumped together into one
>> file.
> I view C files slightly differently from assembly ones in this regard.

Looking at those files, they really should be replaced with something
better.  Steel^W Borrowing the Linux alternatives-based clear/copy
routines would probably be a very good move.
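
The win from the alternatives-based approach is that the best routine for the
CPU is picked once, at patching time, instead of being re-decided on every
call. A simplified C-level illustration of the idea follows; all of the names
are invented for the example, and the real Linux routines are assembly that
patches the call site directly rather than going through a function pointer:

    #define PAGE_SIZE 4096

    static void clear_page_generic(void *page)
    {
        /* Baseline implementation: a plain store loop. */
        unsigned long *p = page;
        unsigned int i;

        for ( i = 0; i < PAGE_SIZE / sizeof(*p); i++ )
            p[i] = 0;
    }

    static void clear_page_erms(void *page)
    {
        /* On ERMS-capable CPUs a single "rep stosb" is the fast path. */
        unsigned long cnt = PAGE_SIZE;

        asm volatile ( "rep stosb"
                       : "+D" (page), "+c" (cnt)
                       : "a" (0)
                       : "memory" );
    }

    /*
     * Selected once at boot.  Real alternatives patching removes even this
     * indirection by rewriting the call site itself.
     */
    static void (*clear_page_fn)(void *) = clear_page_generic;

    static void clear_page_select(int cpu_has_erms)
    {
        if ( cpu_has_erms )
            clear_page_fn = clear_page_erms;
    }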

>
>> But if you don't like that, I will just put those small handlers
>> into hypercall.c and change the comment of that file to:
>>
>> diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
>> index 7c5e5a629d..b0c9e01268 100644
>> --- a/xen/arch/x86/pv/hypercall.c
>> +++ b/xen/arch/x86/pv/hypercall.c
>> @@ -1,7 +1,7 @@
>>  /******************************************************************************
>>   * arch/x86/pv/hypercall.c
>>   *
>> - * PV hypercall dispatching routines
>> + * PV hypercall dispatching routines and misc hypercalls
> As indicated before, I don't really like the idea of specific
> hypercall handlers being put here. I did suggest hypercalls.c
> as a possible place for ones which don't fit elsewhere.
>
> Andrew, do you have an opinion on both the specific situation here
> and the broader question of file granularity?

I'd prefer not to have individual files for individual functions; that
is going too far IMO.  I'd also prefer not to mix misc hypercalls into
this file.

pv/misc-hypercalls.c ?
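
If that name flies, the file need hold little more than the handlers in
question. A sketch of what it might look like (header comment modelled on the
existing pv/ files; the includes and the do_set_debugreg body are written from
memory, so approximate):

    /******************************************************************************
     * arch/x86/pv/misc-hypercalls.c
     *
     * Misc hypercall handlers
     */

    #include <xen/hypercall.h>
    #include <xen/sched.h>

    long do_set_debugreg(int reg, unsigned long value)
    {
        /* Thin wrapper around the existing per-vcpu helper. */
        return set_debugreg(current, reg, value);
    }

    /* do_get_debugreg() and do_fpu_taskswitch() would move here as well. */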

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
