
Re: [PATCH] xen/vm_event: introduce vm_event_is_enabled()



On 23.09.2025 10:19, Penny, Zheng wrote:
> [Public]
> 
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@xxxxxxxx>
>> Sent: Friday, September 12, 2025 3:30 PM
>> To: Penny, Zheng <penny.zheng@xxxxxxx>; Tamas K Lengyel
>> <tamas@xxxxxxxxxxxxx>
>> Cc: Huang, Ray <Ray.Huang@xxxxxxx>; Andrew Cooper
>> <andrew.cooper3@xxxxxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>;
>> Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>; Petre Pircalabu
>> <ppircalabu@xxxxxxxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx; Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
>> Subject: Re: [PATCH] xen/vm_event: introduce vm_event_is_enabled()
>>
>> On 12.09.2025 06:52, Penny Zheng wrote:
>>> @@ -2462,9 +2461,8 @@ int hvm_set_cr3(unsigned long value, bool noflush, bool may_defer)
>>>      if ( may_defer && unlikely(currd->arch.monitor.write_ctrlreg_enabled &
>>>                                 monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3)) )
>>>      {
>>> -        ASSERT(curr->arch.vm_event);
>>> -
>>> -        if ( hvm_monitor_crX(CR3, value, curr->arch.hvm.guest_cr[3]) )
>>> +        if ( vm_event_is_enabled(curr) &&
>>> +             hvm_monitor_crX(CR3, value, curr->arch.hvm.guest_cr[3]) )
>>>          {
>>>              /* The actual write will occur in hvm_do_resume(), if permitted. */
>>>              curr->arch.vm_event->write_data.do_write.cr3 = 1;
>>> @@ -2544,9 +2542,7 @@ int hvm_set_cr4(unsigned long value, bool may_defer)
>>>      if ( may_defer && unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
>>>                                 monitor_ctrlreg_bitmask(VM_EVENT_X86_CR4)) )
>>>      {
>>> -        ASSERT(v->arch.vm_event);
>>> -
>>> -        if ( hvm_monitor_crX(CR4, value, old_cr) )
>>> +        if ( vm_event_is_enabled(v) && hvm_monitor_crX(CR4, value, old_cr) )
>>>          {
>>>              /* The actual write will occur in hvm_do_resume(), if permitted. */
>>>              v->arch.vm_event->write_data.do_write.cr4 = 1;
>>> @@ -3407,7 +3403,7 @@ static enum hvm_translation_result __hvm_copy(
>>>              return HVMTRANS_bad_gfn_to_mfn;
>>>          }
>>>
>>> -        if ( unlikely(v->arch.vm_event) &&
>>> +        if ( unlikely(vm_event_is_enabled(v)) &&
>>>               (flags & HVMCOPY_linear) &&
>>>               v->arch.vm_event->send_event &&
>>>              hvm_monitor_check_p2m(addr, gfn, pfec, npfec_kind_with_gla) )
>>> @@ -3538,6 +3534,7 @@ int hvm_vmexit_cpuid(struct cpu_user_regs *regs, unsigned int inst_len)
>>>      struct vcpu *curr = current;
>>>      unsigned int leaf = regs->eax, subleaf = regs->ecx;
>>>      struct cpuid_leaf res;
>>> +    int ret = 0;
>>>
>>>      if ( curr->arch.msrs->misc_features_enables.cpuid_faulting &&
>>>           hvm_get_cpl(curr) > 0 )
>>> @@ -3554,7 +3551,10 @@ int hvm_vmexit_cpuid(struct cpu_user_regs *regs, unsigned int inst_len)
>>>      regs->rcx = res.c;
>>>      regs->rdx = res.d;
>>>
>>> -    return hvm_monitor_cpuid(inst_len, leaf, subleaf);
>>> +    if ( vm_event_is_enabled(curr) )
>>> +        ret = hvm_monitor_cpuid(inst_len, leaf, subleaf);
>>> +
>>> +    return ret;
>>>  }
>>>
>>>  void hvm_rdtsc_intercept(struct cpu_user_regs *regs)
>>> @@ -3694,9 +3694,8 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
>>>          if ( ret != X86EMUL_OKAY )
>>>              return ret;
>>>
>>> -        ASSERT(v->arch.vm_event);
>>> -
>>> -        if ( hvm_monitor_msr(msr, msr_content, msr_old_content) )
>>> +        if ( vm_event_is_enabled(v) &&
>>> +             hvm_monitor_msr(msr, msr_content, msr_old_content) )
>>>          {
>>>              /* The actual write will occur in hvm_do_resume(), if permitted. */
>>>              v->arch.vm_event->write_data.do_write.msr = 1;
>>> @@ -3854,12 +3853,10 @@ int hvm_descriptor_access_intercept(uint64_t exit_info,
>>>      struct vcpu *curr = current;
>>>      struct domain *currd = curr->domain;
>>>
>>> -    if ( currd->arch.monitor.descriptor_access_enabled )
>>> -    {
>>> -        ASSERT(curr->arch.vm_event);
>>> +    if ( currd->arch.monitor.descriptor_access_enabled &&
>>> +         vm_event_is_enabled(curr) )
>>>          hvm_monitor_descriptor_access(exit_info, vmx_exit_qualification,
>>>                                        descriptor, is_write);
>>> -    }
>>>      else if ( !hvm_emulate_one_insn(is_sysdesc_access, "sysdesc access") )
>>>          domain_crash(currd);
>>
>> Following "xen: consolidate CONFIG_VM_EVENT" this function is actually
>> unreachable when VM_EVENT=n, so no change should be needed here. It's instead
>> the unreachability which needs properly taking care of (to satisfy Misra
>> requirements) there.
>>
> 
> I'm a bit confused and may not be understanding you correctly here.
> hvm_monitor_descriptor_access() becomes unreachable code when VM_EVENT=n, 
> and to avoid writing stubs we added the vm_event_xxx check here. Or do you 
> want me to add a description saying that the new check also helps compile 
> out the unreachable code?

If the function becomes unreachable, it's not its contents that need
altering. Instead, the unreachable function should be "removed" (by
#ifdef-ary) altogether in the respective configuration. Recall that
unreachability is a Misra violation (or rather falls under a rule which,
iirc, we accepted).

Jan
