[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [DOCDAY PATCH] docs: initial documentation for xenstore paths



On 18 August 2012 01:05, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Thu, 2012-08-09 at 15:02 +0100, Ian Jackson wrote:
>> Ian Campbell writes ("Re: [DOCDAY PATCH] docs: initial documentation for 
>> xenstore paths"):
>> ...
>> > > --- a/docs/misc/xenstore-paths.markdown
>> > > +++ b/docs/misc/xenstore-paths.markdown
>> > > @@ -0,0 +1,294 @@
>> ...
>> > > +PATH can contain simple regex constructs following the POSIX regexp
>> > > +syntax described in regexp(7). In addition the following additional
>> > > +wild card names are defined and are evaluated before regexp expansion:
>>
>> Can we use a restricted perl re syntax ?  That avoids weirdness with
>> the rules for \.
>
> Is "restricted perl re syntax" a well defined thing (reference?) or do
> you just mean perlre(1)--?
>
> What's the weirdness with \.?
>
>> Also how does this interact with markdown ?
>
> The html version looks ok after a brief inspection.
>
>> > > +#### ~/image/device-model-pid = INTEGER   [r]
>>
>> This [r] tag is not defined above.  I assume you mean "readonly to the
>> domain" but that's the default.  Left over from an earlier version ?
>
> Yes, it's vestigial. Remove it.
>
>>
>> > > +The process ID of the device model associated with this domain, if it
>> > > +has one.
>> > > +
>> > > +XXX why is this visible to the guest?
>>
>> I think some of these things were put here just because there wasn't
>> another place for the toolstack to store things.  See also the
>> arbitrary junk stored by scripts in the device backend directories.
>
> Should we define a proper home for these? e.g. /$toolstack/$domid?
>
>> > > +#### ~/cpu/[0-9]+/availability = ("online"|"offline") [PV]
>> > > +
>> > > +One node for each virtual CPU up to the guest's configured
>> > > +maximum. Valid values are "online" and "offline".
>>
>> Should have a cross-reference to the cpu online/offline protocol,
>> which appears to be in xen/include/public/vcpu.h.  It doesn't seem to
>> be fully documented yet.
>
> vcpu.h has the hypercalls which are the mechanism by which a guest
> brings a cpu up/down but nothing on the xenstore protocol which might
> cause it to do so.
>
> I don't think a reference currently exists for that protocol. This
> probably belongs in the same (non-existent) protocol doc as
> ~/control/shutdown in so much as it is a toolstack<->guest kernel
> protocol.
>
>> > > +#### ~/memory/static-max = MEMKB []
>> > > +
>> > > +Specifies a static maximum amount memory which this domain should
>> > > +expect to be given. In the absence of in-guest memory hotplug support
>> > > +this set on domain boot and is usually the maximum amount of RAM which
>> > > +a guest can make use of .
>>
>> This should have a cross-reference to the documentation defining
>> static-max etc.  I thought we had some in tree but I can't seem to
>> find it.  The best I can find is docs/man/xl.cfg.pod.5.
>
> I think you might be thinking of tools/libxl/libxl_memory.txt.
>
> Shall we move that doc to docs/misc?
>
>>
>> > > +#### ~/memory/target = MEMKB []
>> > > +
>> > > +The current balloon target for the domain. The balloon driver within 
>> > > the guest is expected to make every effort
>>
>> every effort to ... ?
>
> err. yes. I appear  to have got distracted there ...
>
> Perhaps:
>
>         every effort to ... reach this target
>
> ? but I'm not sure that is strictly correct, a guest can use less if it
> wants to. So perhaps
>
>         every effort to ... not use more than this
>
> ? seems clumsy though.
>
>>
>> The interaction with the Xen maximum should be stated, preferably by
>> cross-reference.  In general it might be better to have a single place
>> where all these values and their semantics are written down ?
>>
>> > > +#### ~/device/suspend/event-channel = ""|EVTCHN [w]
>> > > +
>> > > +The domain's suspend event channel. The use of a suspend event channel
>> > > +is optional at the domain's discression. If it is not used then this
>> > > +path will be left blank.
>>
>> May it be ENOENT ?  Does the toolstack create it as "" then ?
>
> libxl seems to *mkdir* it:
>     libxl__xs_mkdir(gc, t,
>                     libxl__sprintf(gc, "%s/device/suspend/event-channel", 
> dom_path),
>                     rwperm, ARRAY_SIZE(rwperm));
>
> which I suppose is the same as writing it as "" (unless there is some
> subtle xenstore semantic difference I'm not thinking of)
>
> If xend writes this key then I can't find it. I rather suspect the
> ~/device/suspend is guest writeable in that case (but I can't find that
> either).
>
> While grepping around I noticed xs_suspend_evtchn_port which reads this.
> Seems like an odd place for it...
>
>>
>> > > +#### ~/device/serial/$DEVID/* [HVM]
>> > > +
>> > > +An emulated serial device
>>
>> You should presumably add
>>     XXX documentation for the protocol needed
>> here.
>
> I think this is in docs/misc/console.txt along with the PV stuff, so
> I've added that as a reference.
>
>>
>> > > +#### ~/store/port = EVTCHN []
>> > > +
>> > > +The event channel used by the domains connection to XenStore.
>>
>> Apostrophe.
>>
>> > > +XXX why is this exposed to the guest?
>>
>> Is there really only one event channel ?  Ie the same evtchn is used
>> to signal to xenstore that the guest has sent a command, and to signal
>> the guest that xenstore has written the response ?
>
> Yes, event channels are bidirectional so that's quite common.
>
>> Anyway surely this is something the guest needs to know.  Why it's in
>> xenstore is a bit of a mystery since you can't use xenstore without it
>> and it's in the start_info.
>
> I should have written "why is this exposed to the guest via xenstore?"
>
>> Is this the same value as start_info.store_evtchn ?  Cross reference ?
>
> I'd be semi inclined to ditch/deprecate it unless we can figure out what
> it is for -- as you say there is something of a chicken and egg problem
> with using it.
>
>>
>> > > +#### ~/store/ring-ref = GNTREF []
>> > > +
>> > > +The grant reference of the domain's XenStore ring.
>> > > +
>> > > +XXX why is this exposed to the guest?
>>
>> See above.
>
> Yup, the same issues.
>
>> > > +#### ~/device-model/$DOMID/* []
>> > > +
>> > > +Information relating to device models running in the domain. $DOMID is
>> > > +the target domain of the device model.
>> > > +
>> > > +XXX where is the contents of this directory specified?
>>
>> I think it's not specified anywhere.  It's ad-hoc.  The guest
>> shouldn't need to see it but exposing it readonly is probably
>> harmless.  Except perhaps for the vnc password ?
>
> vnc password appears to go into /vm/$uuid/vncpass (looking at libxl code
> only).
>
> AFAIK it does nothing special with the perms, but /vm/$uuid is not guest
> readable (perms are "n0") so I think that works out ok.
>
> I wonder if that's part of the point of /vm/$uuid.

What has /vm/$UUID been used for historically?

I find it useful if you set your own UUIDs as it provides a consistent
path across guest reboots (which ofcourse change the domid)
A /byname shortcut sounds good as a replacement if /vm/$UUID goes away.

>
>> > > +### /vm/$UUID/uuid = UUID []
>> > > +
>> > > +Value is the same UUID as the path.
>> > > +
>> > > +### /vm/$UUID/name = STRING []
>> > > +
>> > > +The domains name.
>>
>> IMO this should be
>>   (a) in /local/domain/$DOMID
>>   (b) also a copy in /byname/$NAME = $DOMID   for fast lookup
>> but not in 4.2.
>>
>> Guests shouldn't rely on it.  In fact do guests actually need anything
>> from here ?
>
> I'd say definitely not, but it has existed with xend for many years so
> I'd be surprised if something hadn't crept in somewhere :-(
>
> Ian.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel



-- 
CTO | Orion Virtualisation Solutions | www.orionvm.com.au
Phone: 1300 56 99 52 | Mobile: 0428 754 846

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 


Rackspace

Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.