Re: [PATCH] x86/HVM: convert most remaining hvm_funcs hook invocations to alt-call



On 30/11/2021 14:32, Jan Beulich wrote:
> On 30.11.2021 15:25, Andrew Cooper wrote:
>> On 30/11/2021 14:03, Jan Beulich wrote:
>>> On 30.11.2021 14:48, Andrew Cooper wrote:
>>>> On 29/11/2021 09:04, Jan Beulich wrote:
>>>>> The aim being to have as few indirect calls as possible (see [1]),
>>>>> whereas during initial conversion performance was the main aspect and
>>>>> hence rarely used hooks didn't get converted. Apparently one use of
>>>>> get_interrupt_shadow() was missed at the time.
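
For reference, the conversion being discussed is the same mechanical
pattern already used for the hot hooks, i.e. (a sketch; the exact
context lines are illustrative):

    -    return hvm_funcs.get_interrupt_shadow(v);
    +    return alternative_call(hvm_funcs.get_interrupt_shadow, v);

with alternative_vcall() being the variant for void-returning hooks.
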
>>>>>
>>>>> While I've intentionally left alone the cpu_{up,down}() etc. hooks,
>>>>> as they're not guest reachable, the nhvm_hap_walk_L1_p2m() one can't
>>>>> currently be converted as the framework supports only up to 6 arguments.
>>>>> Down the road the three booleans perhaps want folding into a single
>>>>> parameter/argument.
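
A minimal sketch of that folding (the NHVM_WALK_* flag names are
hypothetical); this would bring the hook down to 6 parameters and hence
within the framework's limit:

    /* Hypothetical flags replacing the three trailing booleans: */
    #define NHVM_WALK_R (1u << 0)
    #define NHVM_WALK_W (1u << 1)
    #define NHVM_WALK_X (1u << 2)

    int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa,
                                paddr_t *L1_gpa, unsigned int *page_order,
                                uint8_t *p2m_acc, unsigned int flags);
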
>>>> To use __initdata_cf_clobber, all hooks need to use altcall().
>>> Right, but that's not going to be sufficient: The data members then also
>>> need to move elsewhere, aiui.
>> Nope.  It is safe for data members to stay.
> But then it can't be in .init.data, can it?

Very good point.  I'll need to reconsider that plan then.

>>>> There is also an open question on how to cope with things such as the
>>>> TSC scaling hooks, which are only conditionally set in {vmx,svm}_hvm_funcs.
>>> Why's that an open question? The requirement is that the pointers be
>>> set before the 2nd pass of alternatives patching (it's really just
>>> one: setup()). That's already the case, or else the hook couldn't be
>>> invoked via altcall. And that's also not the only hook getting set
>>> dynamically.
>> This was in reference to cf_clobber, not altcall().
>>
>> If the conditional hooks aren't added into {vmx,svm}_hvm_funcs, then the
>> clobbering loop can't find them.
> Oh, I see. Which simply means the clobbering loop shouldn't run
> meaningfully earlier than the 2nd pass of patching.
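
In sketch form, the sequencing constraint looks like this
(setup_conditional_hvm_hooks() and clobber_unused_hook_endbr() are
hypothetical stand-ins; only the ordering matters):

    /* Conditional hooks (e.g. TSC scaling) are filled in first ... */
    setup_conditional_hvm_hooks(&hvm_funcs);
    /* ... then the 2nd alternatives pass patches the altcall sites ... */
    alternative_branches();
    /* ... and only then may the clobbering loop walk the final table. */
    clobber_unused_hook_endbr(&hvm_funcs);
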
>
>>>>   However...
>>>>
>>>>> [1] https://lists.xen.org/archives/html/xen-devel/2021-11/msg01822.html
>>>>> ---
>>>>> Another candidate for dropping the conditional would be
>>>>> .enable_msr_interception(), but this would then want the wrapper to also
>>>>> return void (hence perhaps better done separately).
>>>> I think that's a side effect of Intel support being added first, and
>>>> then an incomplete attempt to add AMD support.
>>>>
>>>> Seeing as support isn't disappearing, I'd be in favour of reducing it to
>>>> void.  The sole caller already doesn't check the return value.
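
The reduction would then be as simple as (a sketch against the current
declaration; exact types may differ):

    -    bool (*enable_msr_interception)(struct domain *d, uint32_t msr);
    +    void (*enable_msr_interception)(struct domain *d, uint32_t msr);
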
>>>>
>>>>
>>>> If I do a prep series sorting out nhvm_hap_walk_L1_p2m() and
>>>> enable_msr_interception(), would you be happy rebasing this patch and
>>>> adjusting every caller, including cpu_up/down() ?
>>> Sure. I don't think cleaning up enable_msr_interception() is a prereq
>>> though. The potential for doing so was merely an observation while
>>> going through the hook uses.
>> Yeah, I suppose that one can be a followup.
>>
>>> With it not being sufficient to convert the remaining hook invocations
>>> and with the patch already being quite large, I wonder though whether
>>> it wouldn't make sense to make further conversions the subject of a
>>> fresh patch. I could commit this one then with your R-b (and further
>>> acks, once they have trickled in) once the tree is fully open again.
>> Honestly, this is legitimately "tree-wide".  While the patch is already
>> large, 3 extra hooks (on top of a fix for nhvm_hap_walk_L1_p2m()) is
>> mechanical, and probably easier than two patches.
> Okay, I'll wait for your change then and re-base on top.

Thanks.  I'll get them posted, and then we can decide exactly what to do.

~Andrew
