
Re: [Xen-devel] [PATCH v3 07/15] argo: implement the register op



On Wed, Jan 9, 2019 at 10:14 AM Julien Grall <julien.grall@xxxxxxxxx> wrote:
> On Wed, 9 Jan 2019, 12:18 Stefano Stabellini, <sstabellini@xxxxxxxxxx> wrote:
>> On Wed, 9 Jan 2019, Julien Grall wrote:
>> > Hi,
>> > Sorry for the formatting. Sending it from my phone.
>> >
>> > On Wed, 9 Jan 2019, 11:03 Christopher Clark, 
>> > <christopher.w.clark@xxxxxxxxx> wrote:
>> >       On Wed, Jan 9, 2019 at 7:56 AM Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
>> >       >
>> >       > On Sun, Jan 06, 2019 at 11:42:40PM -0800, Christopher Clark wrote:
>> >       > > The register op is used by a domain to register a region of 
>> > memory for
>> >       > > receiving messages from either a specified other domain, or, if 
>> > specifying a
>> >       > > wildcard, any domain.
>> >       > >
>> >       > > xen_argo_page_descr_t type is introduced as a page descriptor, 
>> > to convey
>> >       > > both the physical address of the start of the page and its 
>> > granularity. The
>> >       > > smallest granularity page is assumed to be 4096 bytes and the 
>> > lower twelve
>> >       > > bits of the type are used to indicate the size of page of memory 
>> > supplied.
>> >       > > The implementation of the hypercall op currently only supports 
>> > 4K pages.
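(For illustration, the descriptor encoding described above amounts to
something like the sketch below; xen_argo_page_descr_t is the type named in
the patch, but the macro and helper names here are my own and may differ
from those in the posted series.)

    #include <stdint.h>

    /*
     * A page descriptor: the physical address of the start of the page
     * in the upper bits, with the low twelve bits carrying the page
     * size. Only the 4K encoding is accepted by the current
     * implementation. Macro names are illustrative.
     */
    typedef uint64_t xen_argo_page_descr_t;

    #define ARGO_PAGE_DESCR_SIZE_MASK  0xfffULL  /* low 12 bits: granularity */
    #define ARGO_PAGE_DESCR_SIZE_4K    0x000ULL  /* only size accepted today */

    static inline xen_argo_page_descr_t argo_make_descr_4k(uint64_t paddr)
    {
        /* A 4K page is 4K-aligned, so the low twelve bits are free. */
        return (paddr & ~ARGO_PAGE_DESCR_SIZE_MASK) | ARGO_PAGE_DESCR_SIZE_4K;
    }
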
>> >       >
>> >       > What is the resolution for the Arm issues mentioned by Julien? I 
>> > read
>> >       > the conversation in previous thread. A solution seemed to have been
>> >       > agreed upon, but the changelog doesn't say anything about it.
>> >
>> >       I made the interface changes that Julien had asked for. The register
>> >       op now takes arguments that can describe the granularity of the
>> >       pages supplied, though only support for 4K pages is accepted in the
>> >       current implementation. I believe it meets Julien's requirements.
>> >
>> > I still don't think allowing either 4K or 64K is the right solution. You
>> > are adding unnecessary burden in the hypervisor; it would prevent
>> > optimization in the hypervisor and have unwanted side effects.
>> >
>> > For instance a 64K hypervisor will always map 64K even when the guest is 
>> > passing 4K. You also can't map everything contiguously
>> > in Xen (if you ever wanted to).
>> >
>> > We need to stick to a single chunk size. That could be different between
>> > Arm and x86. For Arm it would need to be 64KB.
>>
>> I don't think we should force 64K as the only granularity on ARM. It
>> causes unnecessary overhead and confusion on 4K-only deployments that
>> are almost all our use-cases today.
>
> Why would that cause confusion? People should read the documentation when writing a driver...
>
>> One option is to make the granularity configurable at the guest side,
>> like Christopher did, letting the guest specifying the granularity. The
>> hypervisor could return -ENOSYS if the specified granularity is not
>> supported.
>>
>> The other option is having the hypervisor export the granularity it
>> supports for this interface: Xen would say "I support only 4K".
>> Tomorrow, it could change and Xen could say "I support only 64K". Then,
>> it would be up to the guest passing a page of the right granularity to
>> the hypervisor. I think this is probably the best option, but it would
>> require the addition of one hypercall to retrieve the supported
>> granularity from Xen.
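(A guest-side sketch of that second option might look something like the
following; the query op, hypercall wrapper and helper names are all
hypothetical, since no such hypercall exists in the current series.)

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical declarations, invented for this sketch only. */
    #define XEN_ARGO_OP_query_granularity 42
    long argo_hypercall(unsigned int op, void *arg, unsigned long flags);
    int argo_do_register(void *ring, size_t len, size_t chunk_size);

    /*
     * The guest asks Xen what granularity the Argo interface supports
     * and registers the ring in chunks of that size, instead of
     * assuming its own PAGE_SIZE.
     */
    static int argo_register_ring_portable(void *ring, size_t len)
    {
        long gran = argo_hypercall(XEN_ARGO_OP_query_granularity, NULL, 0);

        if ( gran < 0 )
            return (int)gran;   /* older Xen: no query op available */

        if ( ((uintptr_t)ring | len) & ((uintptr_t)gran - 1) )
            return -EINVAL;     /* ring not aligned/sized for Xen's chunks */

        /* Describe the ring to Xen using Xen's chunk size, not the guest's. */
        return argo_do_register(ring, len, (size_t)gran);
    }
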
>
> I would recommend to read my initial answers on the first series to 
> understand why allowing 4K is an issue.
>
> AFAIK virtio and UEFI have restrictions that allow a guest to run agnostically
> of the hypervisor page granularity. An example is mandating 64K chunks or
> 64KB-aligned addresses.
>
> With your suggestion you are going to break many use cases if the hypervisor
> moves to 64KB. At worst it could introduce security issues; at best it would
> prevent optimization in the hypervisor or prevent the guest from running (bad
> for backward compatibility).
>
> Actually, this is not going to help move towards 64K in Argo because you
> would still have to modify the kernel. So this does not meet my requirements.
>
> I don't think requiring 64K chunks is going to be a major issue or a concern.
> Unless you expect small rings... Christopher, what is the expected size?

The current implementation of the Linux driver has a default ring size
of 128K and I would expect that to be a common case. I'm not familiar
with any current use cases where a ring smaller than 64K would be
likely, but I can imagine the option could be useful in a
memory-constrained embedded environment running a service that handles
large numbers of short-lived connections: frequent setup and teardown
of many rings, with only small amounts of data exchanged on each.
That's not my current use case though, so this is speculation.

> Another solution was to require contiguous guest physical memory. That would
> solve quite a few problems on Arm. But Christopher had some convincing points
> against implementing this.
>
> As I said before, I know this is not going to be the only place with that
> issue. I merely wanted to start tackling the problem. However, IMHO, this
> interface is no more suitable than what we have currently. So this raises the
> question of whether we should just use the usual Xen interface if 64K is not
> an option...

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

