Re: [Xen-devel] [RFC][PATCH 04/13] tools/libxl: detect and avoid conflicts with RDM
>>> "Chen, Tiejun" <tiejun.chen@xxxxxxxxx> 05/07/15 4:22 AM >>> >On 2015/5/6 23:34, Jan Beulich wrote: >>>>> On 06.05.15 at 17:00, <tiejun.chen@xxxxxxxxx> wrote: >>> On 2015/4/20 19:13, Jan Beulich wrote: >>>>>>> On 10.04.15 at 11:21, <tiejun.chen@xxxxxxxxx> wrote: >>>>> + PERROR("Could not allocate memory."); >>>> >>>> Now that's exactly the kind of error message that makes no sense: >>>> As errno will already cause PERROR() to print something along the >>>> lines of the message you provide here, you're just creating >>>> redundancy. Indicating the purpose of the allocation, otoh, would >>>> add helpful context for the one inspecting the resulting log. >>> >>> What about this? >>> >>> PERROR("Could not allocate memory buffers to store reserved device >>> memory entries."); >> >> You kind of go from one extreme to the other - the message >> doesn't need to be overly long, but it should be distinct from >> all other messages (so that when seen one can identify what >> went wrong). > >I originally refer to some existing examples like this, > >int >xc_core_arch_memory_map_get(xc_interface *xch, struct >xc_core_arch_context *unused, >xc_dominfo_t *info, shared_info_any_t >*live_shinfo, >xc_core_memory_map_t **mapp, >unsigned int *nr_entries) >{ >... >map = malloc(sizeof(*map)); >if ( map == NULL ) >{ >PERROR("Could not allocate memory"); >return -1; >} > >Maybe this is wrong to my case. Could I change this? Yeah, I realize there are bad examples. But we try to avoid spreading those. >PERROR("Could not allocate memory for XENMEM_reserved_device_memory_map >hypercall"); > >Or just give me your line. How about PERROR("Could not allocate RDM buffer"); It's brief but specific enough to uniquely identify it. >>>> and hence don't have the final say on stylistic issues, I don't see >>>> why the above couldn't be expressed with a single return statement. >>> >>> Are you saying something like this? Note this was showed by yourself >>> long time ago. >> >> I know, and hence I was puzzled to still see you use the more >> convoluted form. >> >>> static bool check_mmio_hole_conflict(uint64_t start, uint64_t memsize, >>> uint64_t mmio_start, uint64_t >>> mmio_size) >>> { >>> return start + memsize > mmio_start && start < mmio_start + >>> mmio_size; >>> } >>> >>> But I don't think this really can't work out our case. >> >> It's equivalent to the original you had, so I don't see what you >> mean with "this really can't work out our case". > >Let me make this point clear. > >The original implementation, > >+static int check_rdm_hole(uint64_t start, uint64_t memsize, >+ uint64_t rdm_start, uint64_t rdm_size) >+{ >+ if (start + memsize <= rdm_start || start >= rdm_start + rdm_size) >+ return 0; >+ else >+ return 1; >+} > >means it returns 'false' in two cases: > >#1. end = start + memsize; end <= rdm_start; > >This region [start, end] is below of rdm entry. > >#2. rdm_end = rdm_start + rdm_size; stat >= rdm_end; > >This region [start, end] is above of rdm entry. > >So others conditions should indicate that rdm entry is overlapping with >this region. Actually this has three cases: > >#1. This region just conflicts with the first half of rdm entry; >#2. This region just conflicts with the second half of rdm entry; >#3. This whole region falls inside of rdm entry; > >Then it should return 'true', right? 
>>>> and hence don't have the final say on stylistic issues, I don't see
>>>> why the above couldn't be expressed with a single return statement.
>>>
>>> Are you saying something like this? Note you showed this yourself a
>>> long time ago.
>>
>> I know, and hence I was puzzled to still see you use the more
>> convoluted form.
>>
>>> static bool check_mmio_hole_conflict(uint64_t start, uint64_t memsize,
>>>                                      uint64_t mmio_start,
>>>                                      uint64_t mmio_size)
>>> {
>>>     return start + memsize > mmio_start && start < mmio_start +
>>>            mmio_size;
>>> }
>>>
>>> But I don't think this really can't work out our case.
>>
>> It's equivalent to the original you had, so I don't see what you
>> mean with "this really can't work out our case".
>
>Let me make this point clear.
>
>The original implementation,
>
>+static int check_rdm_hole(uint64_t start, uint64_t memsize,
>+                          uint64_t rdm_start, uint64_t rdm_size)
>+{
>+    if (start + memsize <= rdm_start || start >= rdm_start + rdm_size)
>+        return 0;
>+    else
>+        return 1;
>+}
>
>means it returns 'false' in two cases:
>
>#1. end = start + memsize; end <= rdm_start;
>
>This region [start, end] is below the rdm entry.
>
>#2. rdm_end = rdm_start + rdm_size; start >= rdm_end;
>
>This region [start, end] is above the rdm entry.
>
>So any other condition indicates that the rdm entry overlaps this
>region. Actually this covers three cases:
>
>#1. This region conflicts with just the first half of the rdm entry;
>#2. This region conflicts with just the second half of the rdm entry;
>#3. This whole region falls inside the rdm entry;
>
>Then it should return 'true', right?
>
>But with this single line,
>
>return start + memsize > rdm_start && start < rdm_start + rdm_size;
>
>=>
>
>return end > rdm_start && start < rdm_end;
>
>This just guarantees it returns 'true' *only* if #3 above occurs.

I don't even need to look at all the explanations you give. It is a
simple matter of expression re-writing to see that

    if (a <= b || c >= d)
        return 0;
    else
        return 1;

is equivalent to

    return !(a <= b || c >= d);

and a simple matter of formal logic (De Morgan's law) to see that this
is equivalent to

    return a > b && c < d;

Or what am I missing here?

Jan
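[Editor's note: the equivalence Jan states can be checked mechanically.
The standalone test below is hypothetical, not part of the patch: it
sweeps small operand ranges and asserts that the original if/else form
and the single-return form agree everywhere. By De Morgan's law,
!(a <= b || c >= d) is (a > b && c < d), so the single line covers the
partial-overlap cases #1 and #2 as well as the fully-contained case #3,
not case #3 alone.]

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Original form from the patch. */
    static int check_rdm_hole(uint64_t start, uint64_t memsize,
                              uint64_t rdm_start, uint64_t rdm_size)
    {
        if (start + memsize <= rdm_start || start >= rdm_start + rdm_size)
            return 0;
        else
            return 1;
    }

    /* Jan's single-return rewrite. */
    static bool overlaps(uint64_t start, uint64_t memsize,
                         uint64_t rdm_start, uint64_t rdm_size)
    {
        return start + memsize > rdm_start && start < rdm_start + rdm_size;
    }

    int main(void)
    {
        /* Exhaustive sweep over small values; an assert failure would
         * flag any input where the two forms disagree. */
        for (uint64_t s = 0; s < 8; s++)
            for (uint64_t m = 1; m < 8; m++)
                for (uint64_t rs = 0; rs < 8; rs++)
                    for (uint64_t rz = 1; rz < 8; rz++)
                        assert(check_rdm_hole(s, m, rs, rz) ==
                               (int)overlaps(s, m, rs, rz));
        printf("both forms agree on all tested inputs\n");
        return 0;
    }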
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel