Re: [Xen-devel] [PATCH v5 09/15] argo: implement the sendv op; evtchn: expose send_guest_global_virq
On Thu, Jan 31, 2019 at 03:35:23AM -0700, Jan Beulich wrote:
> >>> On 31.01.19 at 11:18, <roger.pau@xxxxxxxxxx> wrote:
> > On Wed, Jan 30, 2019 at 08:10:28PM -0800, Christopher Clark wrote:
> >> On Tue, Jan 22, 2019 at 4:08 AM Roger Pau Monné <roger.pau@xxxxxxxxxx>
> >> wrote:
> >> > On Mon, Jan 21, 2019 at 01:59:49AM -0800, Christopher Clark wrote:
> >> > > +    /*
> >> > > +     * Check padding is zeroed. Reject niov above limit or message_types
> >> > > +     * that are outside 32 bit range.
> >> > > +     */
> >> > > +    if ( unlikely(send_addr.src.pad || send_addr.dst.pad ||
> >> > > +                  (arg3 > XEN_ARGO_MAXIOV) || (arg4 & ~0xffffffffUL)) )
> >> >
> >> > arg4 & (GB(4) - 1)
> >> >
> >> > Is clearer IMO, or:
> >> >
> >> > arg4 > UINT32_MAX
> >>
> >> I've left the code unchanged, as the mask constant is used in multiple
> >> places elsewhere in Xen. UINT32_MAX is only used as a threshold value.
> >
> > The fact that other parts of the code could be improved is not an
> > excuse to follow suit. I'm having a hard time believing that you find
> > "arg4 & ~0xffffffffUL" easier to read than "arg4 & ~(GB(4) - 1)" or
> > even "arg4 >= GB(4)".
> >
> > IMO it's much more likely to miss an 'f' in the first construct, and
> > thus get the value wrong and introduce a bug.
>
> I agree with this last statement, but I'm having trouble seeing how
> message _type_ is related to a size construct like GB(4) is. I see
> only UINT32_MAX as a viable alternative for something that's not
> expressing the size of anything.

I suggested the GB construct as an alternative because the comment
above mentions the 32bit range. IMO anything that avoids using
0xffffffffUL is fine.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel