
Re: [Xen-devel] [PATCH v5 10/11] viridian: add implementation of synthetic timers



>>> On 13.03.19 at 15:37, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: 13 March 2019 14:06
>> 
>> >>> On 11.03.19 at 14:41, <paul.durrant@xxxxxxxxxx> wrote:
>> > +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
>> > +                                      unsigned int index,
>> > +                                      uint64_t expiration,
>> > +                                      uint64_t delivery)
>> > +{
>> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
>> > +    const union viridian_sint_msr *vs = &vv->sint[sintx];
>> > +    HV_MESSAGE *msg = vv->simp.ptr;
>> > +    struct {
>> > +        uint32_t TimerIndex;
>> > +        uint32_t Reserved;
>> > +        uint64_t ExpirationTime;
>> > +        uint64_t DeliveryTime;
>> > +    } payload = {
>> > +        .TimerIndex = index,
>> > +        .ExpirationTime = expiration,
>> > +        .DeliveryTime = delivery,
>> > +    };
>> > +
>> > +    if ( test_bit(sintx, &vv->msg_pending) )
>> > +        return false;
>> > +
>> > +    BUILD_BUG_ON(sizeof(*msg) != HV_MESSAGE_SIZE);
>> > +    msg += sintx;
>> > +
>> > +    /*
>> > +     * To avoid using an atomic test-and-set, and a barrier before
>> > +     * calling vlapic_set_irq(), this function must be called in the
>> > +     * context of the vcpu receiving the message.
>> > +     */
>> > +    ASSERT(v == current);
>> > +    if ( msg->Header.MessageType != HvMessageTypeNone )
>> > +    {
>> > +        msg->Header.MessageFlags.MessagePending = 1;
>> > +        __set_bit(sintx, &vv->msg_pending);
>> > +        return false;
>> > +    }
>> > +
>> > +    msg->Header.MessageType = HvMessageTimerExpired;
>> > +    msg->Header.MessageFlags.MessagePending = 0;
>> > +    msg->Header.PayloadSize = sizeof(payload);
>> > +    memcpy(msg->Payload, &payload, sizeof(payload));
>> 
>> Since you can't use plain assignment here, how about a
>> BUILD_BUG_ON(sizeof(payload) <= sizeof(msg->payload))?
> 
> Surely '>' rather than '<='?

Oops, yes - I was apparently thinking the ASSERT() way.
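
I.e., with the relational operator flipped (a sketch only, borrowing the
msg->Payload spelling from the hunk above):

    BUILD_BUG_ON(sizeof(payload) > sizeof(msg->Payload));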

>> As to safety of this, I have two concerns:
>> 
>> 1) TscSequence gets updated as a result of a guest action (an MSR
>> write). This makes it non-obvious that the loop above will get
>> exited in due course.
>> 
> 
> True. The domain could try to DoS this call. This could be avoided by doing
> a domain_pause() if the test continuously fails for a number of iterations,
> or maybe just one iteration.

As per what you say further down, one iteration ought to be enough
indeed. Otherwise I would have suggested a handful.
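
To make that bound concrete, a minimal sketch of a capped retry (the
TscSequence/TscScale/TscOffset fields follow the Hyper-V reference TSC
page layout, but the helper and its surrounding names are illustrative
assumptions, not the patch's own code):

    static uint64_t ref_time_read(const HV_REFERENCE_TSC_PAGE *p)
    {
        for ( ; ; )
        {
            uint32_t seq = read_atomic(&p->TscSequence);
            uint64_t scale, tsc;
            int64_t offset;

            smp_rmb();
            scale = p->TscScale;
            offset = p->TscOffset;
            smp_rmb();

            tsc = rdtsc();
            if ( read_atomic(&p->TscSequence) == seq )
                /* ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset */
                return ((unsigned __int128)tsc * scale >> 64) + offset;

            /*
             * Only a guest write of HV_X64_MSR_REFERENCE_TSC changes
             * TscSequence, so a second mismatch means the guest is
             * deliberately racing us - the point at which the
             * domain_pause() discussed above would kick in.
             */
        }
    }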

>> > +static void poll_stimer(struct vcpu *v, unsigned int stimerx)
>> > +{
>> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
>> > +    struct viridian_stimer *vs = &vv->stimer[stimerx];
>> > +
>> > +    if ( !test_bit(stimerx, &vv->stimer_pending) )
>> > +        return;
>> > +
>> > +    if ( !viridian_synic_deliver_timer_msg(v, vs->config.fields.sintx,
>> > +                                           stimerx, vs->expiration,
>> > +                                           time_now(v->domain)) )
>> > +        return;
>> > +
>> > +    clear_bit(stimerx, &vv->stimer_pending);
>> 
>> While perhaps benign, wouldn't it be better to clear the pending bit
>> before delivering the message?
> 
> No, because I only want to clear it if the delivery is successful.

Ah, I see.
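
For context, the producer side of that handshake, reconstructed from the
hunk above (the callback shape and the vcpu back-pointer are assumptions):

    static void stimer_expire(void *data)
    {
        struct viridian_stimer *vs = data;
        struct vcpu *v = vs->v;
        struct viridian_vcpu *vv = v->arch.hvm.viridian;

        /*
         * Mark the timer pending and kick the vcpu: poll_stimer()
         * clears the bit only on successful delivery, so a busy
         * message slot simply leaves it set for the next poll.
         */
        set_bit(vs - &vv->stimer[0], &vv->stimer_pending);
        vcpu_kick(v);
    }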

>> > @@ -149,6 +398,63 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
>> >          }
>> >          break;
>> >
>> > +    case HV_X64_MSR_TIME_REF_COUNT:
>> > +        return X86EMUL_EXCEPTION;
>> > +
>> > +    case HV_X64_MSR_STIMER0_CONFIG:
>> > +    case HV_X64_MSR_STIMER1_CONFIG:
>> > +    case HV_X64_MSR_STIMER2_CONFIG:
>> > +    case HV_X64_MSR_STIMER3_CONFIG:
>> > +    {
>> > +        unsigned int stimerx =
>> > +            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
>> > +                               ARRAY_SIZE(vv->stimer));
>> > +        struct viridian_stimer *vs = &vv->stimer[stimerx];
>> 
>> I again think you'd better use array_access_nospec() here (also
>> for the rdmsr counterparts).
> 
> I don't follow. I *am* using array_index_nospec().

But "index" != "access".

>> > @@ -160,6 +466,7 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
>> >
>> >  int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
>> >  {
>> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
>> 
>> const?
>> 
> 
> I don't think so. A read of the reference TSC MSR updates a flag.

But you don't make any existing code use vv in this patch. And
the new code you add doesn't look to require it to be non-const.
I can see why vd (introduced by an earlier patch in the series)
can't be constified for the reason you name.
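
I.e. nothing in the new rdmsr code writes through vv, so the local could
simply be

    const struct viridian_vcpu *vv = v->arch.hvm.viridian;

with vd remaining the only non-const one.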

>> > @@ -322,6 +324,15 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
>> >      case HV_X64_MSR_TSC_FREQUENCY:
>> >      case HV_X64_MSR_APIC_FREQUENCY:
>> >      case HV_X64_MSR_REFERENCE_TSC:
>> > +    case HV_X64_MSR_TIME_REF_COUNT:
>> > +    case HV_X64_MSR_STIMER0_CONFIG:
>> > +    case HV_X64_MSR_STIMER0_COUNT:
>> > +    case HV_X64_MSR_STIMER1_CONFIG:
>> > +    case HV_X64_MSR_STIMER1_COUNT:
>> > +    case HV_X64_MSR_STIMER2_CONFIG:
>> > +    case HV_X64_MSR_STIMER2_COUNT:
>> > +    case HV_X64_MSR_STIMER3_CONFIG:
>> > +    case HV_X64_MSR_STIMER3_COUNT:
>> 
>> For readability / brevity
>> 
>>     case HV_X64_MSR_STIMER0_CONFIG ... HV_X64_MSR_STIMER3_COUNT:
>> 
>> ?
> 
> Certainly brevity, but I'm not sure about readability. I'll make the change.

Well, you're the maintainer, so I don't want to talk you into
something you're really opposed to.
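
For reference, the range form relies on gcc's case-range extension
(already in use elsewhere in Xen) and on the eight STIMER MSR indices
being contiguous; an illustrative (hypothetical) helper:

    static bool is_stimer_msr(uint32_t idx)
    {
        switch ( idx )
        {
        case HV_X64_MSR_STIMER0_CONFIG ... HV_X64_MSR_STIMER3_COUNT:
            return true;
        default:
            return false;
        }
    }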

Jan

