Re: [Xen-devel] [RFC PATCH 3/16]: PVH xen: Add PHYSDEVOP_map_iomem
>>> On 28.01.13 at 17:39, Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx> wrote:
> On Thu, Jan 24, 2013 at 05:03:50PM -0800, Mukesh Rathor wrote:
>> On Thu, 24 Jan 2013 15:06:29 +0000
>> Tim Deegan <tim@xxxxxxx> wrote:
>>
>> > At 17:32 -0800 on 11 Jan (1357925563), Mukesh Rathor wrote:
>> > > DEFINE_XEN_GUEST_HANDLE(physdev_dbgp_op_t);
>> > >
>> > > +
>> > > +#define PHYSDEVOP_map_iomem 30
>> > > +struct physdev_map_iomem {
>> > > + /* IN */
>> > > + unsigned long first_gfn;
>> > > + unsigned long first_mfn;
>> > > + unsigned int nr_mfns;
>> > > + unsigned int add_mapping; /* 1 == add mapping; 0 == unmap */
>> > > +
>> > > +};
>> > > +typedef struct physdev_map_iomem physdev_map_iomem_t;
>> > > +DEFINE_XEN_GUEST_HANDLE(physdev_map_iomem_t);
>> > > +
>> >
>> > This needs documentation. Also, the arguments should be explicitly
>> > sized to avoid compat difficulties.
>> >
>> > Tim.
>>
>> Done:
>>
>> /* Map given gfns to mfns where mfns are part of IO space. */
>> #define PHYSDEVOP_map_iomem 30
>> struct physdev_map_iomem {
>> /* IN */
>> uint64_t first_gfn;
>> uint64_t first_mfn;
>> uint32_t nr_mfns;
>> uint32_t add_mapping; /* 1 == add mapping; 0 == unmap */
>>
>> };
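
For reference, a minimal compile-time check of the explicitly sized
layout (a sketch assuming a C11 compiler; the assertions below are not
part of the patch) shows the same field offsets and a 24-byte size on
both 32-bit and 64-bit x86:

    #include <stddef.h>
    #include <stdint.h>

    struct physdev_map_iomem {
        /* IN */
        uint64_t first_gfn;
        uint64_t first_mfn;
        uint32_t nr_mfns;
        uint32_t add_mapping;   /* 1 == add mapping; 0 == unmap */
    };

    /* Fields tile with no padding, so no compat translation is needed. */
    _Static_assert(offsetof(struct physdev_map_iomem, first_mfn)   ==  8,
                   "first_mfn offset");
    _Static_assert(offsetof(struct physdev_map_iomem, add_mapping) == 20,
                   "add_mapping offset");
    _Static_assert(sizeof(struct physdev_map_iomem)                == 24,
                   "struct size");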
>
> Which is BTW what the Linux tree already has.
>
> Perhaps the 'add_mapping' should be just called 'flags'
> and have two #defines?
>
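Such flags might look like this (a sketch only; the names are
hypothetical, taken neither from the patch nor from the Linux tree):

    #define MAP_IOMEM_FLAG_UNMAP 0  /* hypothetical: tear the mapping down */
    #define MAP_IOMEM_FLAG_MAP   1  /* hypothetical: establish the mapping */
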
> It also has a bit of an issue when you use __packed__ - that is, it
> will shrink from 32 bytes down to 24 bytes.
How so? Or really - how would it ever end up being 32 bytes?
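
A quick sizeof check backs that question up (a sketch assuming GCC on
x86-64; the struct names are illustrative): two uint64_t fields
followed by two uint32_t fields already tile to 24 bytes with no
internal or tail padding, so __attribute__((packed)) changes nothing:

    #include <stdint.h>
    #include <stdio.h>

    struct iomem_natural { uint64_t gfn, mfn; uint32_t nr, flags; };
    struct iomem_packed  { uint64_t gfn, mfn; uint32_t nr, flags; }
        __attribute__((packed));

    int main(void)
    {
        /* Both lines print 24 on x86-64. */
        printf("natural: %zu\n", sizeof(struct iomem_natural));
        printf("packed:  %zu\n", sizeof(struct iomem_packed));
        return 0;
    }
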
> Perhaps we should make this
> hypercall be:
>
> uint64_t first_gfn;
> uint64_t first_mfn;
> uint32_t nr_mfns;
> uint32_t flags;
> uint64_t _pad;
>
> and then it is a nice 32 bytes long?
A trailing 64-bit field seems pretty pointless to me.
Jan