
Re: [Xen-devel] [PATCH V4 1/3] xen/mem_access: Support for memory-content hiding



On 07/10/2015 03:41 PM, Jan Beulich wrote:
>>>> On 08.07.15 at 12:22, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -653,6 +653,31 @@ static int hvmemul_read(
>>      unsigned int bytes,
>>      struct x86_emulate_ctxt *ctxt)
>>  {
>> +    struct hvm_emulate_ctxt *hvmemul_ctxt =
>> +        container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>> +
>> +    if ( unlikely(hvmemul_ctxt->set_context) )
>> +    {
>> +        struct vcpu *curr = current;
>> +        unsigned int safe_bytes;
>> +
>> +        if ( !curr->arch.vm_event.emul_read_data )
>> +            return X86EMUL_UNHANDLEABLE;
>> +
>> +        safe_bytes = min_t(unsigned int,
>> +                           bytes, curr->arch.vm_event.emul_read_data->size);
> 
> Afaict min() would work here.

Ack.

>> +        if ( safe_bytes )
>> +        {
>> +            memcpy(p_data, curr->arch.vm_event.emul_read_data->data, 
>> safe_bytes);
> 
> Why is this conditional? Doesn't ->size being zero mean all data
> should be zeroed? Otherwise this isn't really consistent behavior...
> 
> Also - long line.

Ack.

>> +
>> +            if ( bytes > safe_bytes )
>> +                memset(p_data + safe_bytes, 0, bytes - safe_bytes);
> 
> No need for the conditional here.

Ack.

>> @@ -893,6 +918,28 @@ static int hvmemul_cmpxchg(
>>      unsigned int bytes,
>>      struct x86_emulate_ctxt *ctxt)
>>  {
>> +    struct hvm_emulate_ctxt *hvmemul_ctxt =
>> +        container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>> +
>> +    if ( unlikely(hvmemul_ctxt->set_context) )
>> +    {
>> +        struct vcpu *curr = current;
>> +
>> +        if ( curr->arch.vm_event.emul_read_data )
>> +        {
>> +            unsigned int safe_bytes = min_t(unsigned int, bytes,
>> +                curr->arch.vm_event.emul_read_data->size);
>> +
>> +            memcpy(p_new, curr->arch.vm_event.emul_read_data->data,
>> +                   safe_bytes);
>> +
>> +            if ( bytes > safe_bytes )
>> +                memset(p_new + safe_bytes, 0, bytes - safe_bytes);
>> +        }
>> +        else
>> +            return X86EMUL_UNHANDLEABLE;
> 
> Please make this look as similar to hvmemul_read() as possible (invert
> the if() condition etc). Once done, you'll be seeing that this would
> better be factored out...

Very true, I should have factored it out from the beginning; I only haven't
done so because the code took a somewhat convoluted path to its current
state. Ack.

>> @@ -935,6 +982,43 @@ static int hvmemul_rep_ins(
>>                                 !!(ctxt->regs->eflags & X86_EFLAGS_DF), gpa);
>>  }
>>  
>> +static int hvmemul_rep_outs_set_context(
>> +    enum x86_segment src_seg,
>> +    unsigned long src_offset,
>> +    uint16_t dst_port,
>> +    unsigned int bytes_per_rep,
>> +    unsigned long *reps,
>> +    struct x86_emulate_ctxt *ctxt)
>> +{
>> +    unsigned int bytes = *reps * bytes_per_rep;
>> +    struct vcpu *curr = current;
>> +    unsigned int safe_bytes;
>> +    char *buf = NULL;
> 
> Pointless initializer.

Ack.

>> @@ -1063,7 +1151,23 @@ static int hvmemul_rep_movs(
>>       */
>>      rc = hvm_copy_from_guest_phys(buf, sgpa, bytes);
>>      if ( rc == HVMCOPY_okay )
>> +    {
>> +        struct vcpu *curr = current;
>> +
>> +        if ( unlikely(hvmemul_ctxt->set_context) &&
>> +             curr->arch.vm_event.emul_read_data )
> 
> Factoring out will also make obvious that here you're _still_ not
> handling things consistently with the other cases.

True. Will fix it.

>> @@ -1466,6 +1466,10 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
>>          }
>>  
>>          v->arch.vm_event.emulate_flags = violation ? rsp->flags : 0;
>> +
>> +        if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA &&
> 
> Please parenthesize the &.

Ack.

>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -166,6 +166,7 @@ struct vcpu *alloc_vcpu(
>>          free_cpumask_var(v->cpu_hard_affinity_saved);
>>          free_cpumask_var(v->cpu_soft_affinity);
>>          free_cpumask_var(v->vcpu_dirty_cpumask);
>> +        xfree(v->arch.vm_event.emul_read_data);
>>          free_vcpu_struct(v);
>>          return NULL;
>>      }
>> @@ -821,6 +822,7 @@ static void complete_domain_destroy(struct rcu_head 
>> *head)
>>              free_cpumask_var(v->cpu_hard_affinity_saved);
>>              free_cpumask_var(v->cpu_soft_affinity);
>>              free_cpumask_var(v->vcpu_dirty_cpumask);
>> +            xfree(v->arch.vm_event.emul_read_data);
> 
> Common code fiddling with arch-specific bits. I can't see how this
> would build on ARM.

Indeed, I should have been more careful with the xfree() move.

> 
>> @@ -63,6 +64,21 @@ static int vm_event_enable(
>>      vm_event_ring_lock_init(ved);
>>      vm_event_ring_lock(ved);
>>  
>> +    for_each_vcpu( d, v )
>> +    {
>> +        if ( v->arch.vm_event.emul_read_data )
>> +            continue;
>> +
>> +        v->arch.vm_event.emul_read_data =
>> +            xmalloc(struct vm_event_emul_read_data);
>> +
>> +        if ( !v->arch.vm_event.emul_read_data )
>> +        {
>> +            rc = -ENOMEM;
>> +            goto err;
>> +        }
>> +    }
> 
> Same here. You need to decide whether this is to be a generic
> feature (then the pointer needs to move to struct vcpu) or x86
> specific (then the code needs to move to x86-specific files).

I think the best bet is to move it to struct vcpu, especially since Tamas
intends to use the feature in new scenarios, so it's probably best not to
restrict it to x86. Tamas, what do you think?


Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
