Re: [Xen-devel] Ping: [PATCH v3 0/4] x86/HVM: implement memory read caching
> On Oct 11, 2018, at 5:15 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>
>>>> On 11.10.18 at 17:54, <George.Dunlap@xxxxxxxxxx> wrote:
>>> On Oct 2, 2018, at 1:47 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>
>>>>>> On 02.10.18 at 12:51, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>
>>>> This doesn't behave like real hardware, and definitely doesn't behave as
>>>> named - "struct hvmemul_cache" is simply false. If it were named
>>>> hvmemul_psc (or some other variation on Paging Structure Cache) then it
>>>> wouldn't be so bad, as the individual levels do make more sense in that
>>>> context
>>>
>>> As previously pointed out (without any suggestion coming back from
>>> you), I chose the name "cache" for the lack of a better term. However,
>>> I certainly disagree with naming it PSC or some such, as its level zero
>>> is intentionally there to be eventually used for non-paging-structure
>>> data.
>>
>> I can think of lots of descriptive names which could yield unique
>> three-letter acronyms:
>>
>> Logical Read Sequence
>> Logical Read Series
>> Logical Read Record
>> Read Consistency Structure
>> Consistent Read Structure
>> Consistent Read Record
>> Emulation Read Record
>> […]
>
> Well, I'm not sure LRS, LRR, RCS, CRS, CRR, or ERR would be
> easily recognizable as what they stand for. To be honest I'd
> prefer a non-acronym. Did you see my consideration towards
> "latch"?

Of course not; that's why you put the long form name in a comment near
the declaration. :-)

I don't think I've personally used "latch" with that meaning very
frequently (at least not in the last 10 years), so to me it sounds a bit
obscure. I would probably go with something else myself, but I don't
object to it.

 -George
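To make the naming question concrete, here is a minimal, purely
illustrative sketch of the shape being discussed: a per-level record of
latched reads, with level 0 deliberately not tied to a paging level (so
it can later hold non-paging-structure data, per Jan), and the long-form
name spelled out in a comment next to the declaration (per George). All
identifiers below (hvmemul_read_record, read_record_lookup(), and so on)
are invented for this mail and are not what the v3 patches actually
declare.

/*
 * Emulation read record (what the v3 series calls a "cache"): latches
 * the result of every memory read an emulated instruction performs, so
 * that re-walking the same paging structures during one emulation
 * cannot observe different data.  Level 0 is intentionally not bound
 * to a paging level, leaving room for non-paging-structure reads.
 */
#include <stdbool.h>
#include <stdint.h>

#define READ_RECORD_LEVELS 5            /* level 0 + 4 paging levels */

struct read_record_entry {
    uint64_t gpa;                       /* guest physical address read */
    uint64_t data;                      /* value latched on first read */
    bool set;
};

struct hvmemul_read_record {
    struct read_record_entry ent[READ_RECORD_LEVELS];
};

/* Replay a previously latched read, if one exists for this level/gpa. */
static bool read_record_lookup(const struct hvmemul_read_record *rec,
                               unsigned int level, uint64_t gpa,
                               uint64_t *data)
{
    if ( level >= READ_RECORD_LEVELS || !rec->ent[level].set ||
         rec->ent[level].gpa != gpa )
        return false;

    *data = rec->ent[level].data;
    return true;
}

/* Latch the result of a fresh read so later reads stay consistent. */
static void read_record_latch(struct hvmemul_read_record *rec,
                              unsigned int level, uint64_t gpa,
                              uint64_t data)
{
    if ( level < READ_RECORD_LEVELS )
        rec->ent[level] = (struct read_record_entry){
            .gpa = gpa, .data = data, .set = true,
        };
}

Whatever short name wins, a comment like the one above carries the
"consistent replay of reads" meaning regardless of whether the
identifier says cache, record, or latch.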