
Re: [PATCH V3 03/13] x86/HV: Add new hvcall guest address host visibility support



On Mon, Aug 09, 2021 at 01:56:07PM -0400, Tianyu Lan wrote:
> From: Tianyu Lan <Tianyu.Lan@xxxxxxxxxxxxx>
> 
> Add new hvcall guest address host visibility support to mark
> memory visible to host. Call it inside set_memory_decrypted
> /encrypted(). Add HYPERVISOR feature check in the
> hv_is_isolation_supported() to optimize in non-virtualization
> environment.
> 
> Signed-off-by: Tianyu Lan <Tianyu.Lan@xxxxxxxxxxxxx>
> ---
> Change since v2:
>        * Rework __set_memory_enc_dec() and call Hyper-V and AMD function
>          according to platform check.
> 
> Change since v1:
>        * Use new static call x86_set_memory_enc to avoid adding a
>          Hyper-V specific check in the set_memory code.
> ---
>  arch/x86/hyperv/Makefile           |   2 +-
>  arch/x86/hyperv/hv_init.c          |   6 ++
>  arch/x86/hyperv/ivm.c              | 114 +++++++++++++++++++++++++++++
>  arch/x86/include/asm/hyperv-tlfs.h |  20 +++++
>  arch/x86/include/asm/mshyperv.h    |   4 +-
>  arch/x86/mm/pat/set_memory.c       |  19 +++--
>  include/asm-generic/hyperv-tlfs.h  |   1 +
>  include/asm-generic/mshyperv.h     |   1 +
>  8 files changed, 160 insertions(+), 7 deletions(-)
>  create mode 100644 arch/x86/hyperv/ivm.c
> 
> diff --git a/arch/x86/hyperv/Makefile b/arch/x86/hyperv/Makefile
> index 48e2c51464e8..5d2de10809ae 100644
> --- a/arch/x86/hyperv/Makefile
> +++ b/arch/x86/hyperv/Makefile
> @@ -1,5 +1,5 @@
>  # SPDX-License-Identifier: GPL-2.0-only
> -obj-y                        := hv_init.o mmu.o nested.o irqdomain.o
> +obj-y                        := hv_init.o mmu.o nested.o irqdomain.o ivm.o
>  obj-$(CONFIG_X86_64) += hv_apic.o hv_proc.o
>  
>  ifdef CONFIG_X86_64
> diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
> index 0bb4d9ca7a55..b3683083208a 100644
> --- a/arch/x86/hyperv/hv_init.c
> +++ b/arch/x86/hyperv/hv_init.c
> @@ -607,6 +607,12 @@ EXPORT_SYMBOL_GPL(hv_get_isolation_type);
>  
>  bool hv_is_isolation_supported(void)
>  {
> +     if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
> +             return 0;

Nit: false instead of 0.

> +
> +     if (!hypervisor_is_type(X86_HYPER_MS_HYPERV))
> +             return 0;
> +
>       return hv_get_isolation_type() != HV_ISOLATION_TYPE_NONE;
>  }
>  
[...]
> +int hv_mark_gpa_visibility(u16 count, const u64 pfn[],
> +                        enum hv_mem_host_visibility visibility)
> +{
> +     struct hv_gpa_range_for_visibility **input_pcpu, *input;
> +     u16 pages_processed;
> +     u64 hv_status;
> +     unsigned long flags;
> +
> +     /* no-op if partition isolation is not enabled */
> +     if (!hv_is_isolation_supported())
> +             return 0;
> +
> +     if (count > HV_MAX_MODIFY_GPA_REP_COUNT) {
> +             pr_err("Hyper-V: GPA count:%d exceeds supported:%lu\n", count,
> +                     HV_MAX_MODIFY_GPA_REP_COUNT);
> +             return -EINVAL;
> +     }
> +
> +     local_irq_save(flags);
> +     input_pcpu = (struct hv_gpa_range_for_visibility **)
> +                     this_cpu_ptr(hyperv_pcpu_input_arg);
> +     input = *input_pcpu;
> +     if (unlikely(!input)) {
> +             local_irq_restore(flags);
> +             return -EINVAL;
> +     }
> +
> +     input->partition_id = HV_PARTITION_ID_SELF;
> +     input->host_visibility = visibility;
> +     input->reserved0 = 0;
> +     input->reserved1 = 0;
> +     memcpy((void *)input->gpa_page_list, pfn, count * sizeof(*pfn));
> +     hv_status = hv_do_rep_hypercall(
> +                     HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY, count,
> +                     0, input, &pages_processed);
> +     local_irq_restore(flags);
> +
> +     if (!(hv_status & HV_HYPERCALL_RESULT_MASK))
> +             return 0;
> +
> +     return hv_status & HV_HYPERCALL_RESULT_MASK;

Joseph introduced a few helper functions in 753ed9c95c37d. They will
make the code simpler.

Wei.
