Re: [Xen-devel] [PATCH v1 2/2] ioreq-server: Support scatter page forwarding
>>> On 04.08.14 at 07:41, <wei.ye@xxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >>> On 28.07.14 at 19:55, <wei.ye@xxxxxxxxx> wrote:
>> > +static int hvm_ioreq_server_pglist_hash_rem(struct pglist_hash_table *pglist_ht,
>> > +                                            unsigned long gpfn)
>> > +{
>> > +    int index = pglist_hash(gpfn);
>> > +    struct pglist_hash_table *next, *prev;
>> > +
>> > +    if ( pglist_ht[index].gpfn == gpfn )
>> > +        pglist_ht[index].gpfn = PGLIST_INVALID_GPFN;
>> > +    else
>> > +    {
>> > +        prev = &pglist_ht[index];
>> > +        while ( 1 )
>> > +        {
>> > +            next = prev->next;
>> > +            if ( !next )
>> > +            {
>> > +                printk("hvm_ioreq_server_pglist_hash_rem hash_table %p remove"
>> > +                       " %lx not found\n", pglist_ht, gpfn);
>>
>> The value of this log message is questionable anyway, but there's no way for
>> this to be acceptable without a suitably low guest log level being set.
>
> You mean we should also print out the VM id in this log?

That's a secondary thing: of course you need to ask yourself what use you could make of the message if you saw it in some log, in particular when there was more than one VM running. But my point was that this lacks a suitable XENLOG_G_* prefix, thus representing a security issue (a guest being able to flood the hypervisor log).

>> And now the most fundamental question: why is the existing (range set
>> based) mechanism not enough? Is that because we limit the number of
>> entries permitted on the range set? If so, the obvious question is: why is
>> there no limit being enforced for your new type of pages?
>
> In our usage, we need to write-protect all the pages the GFX driver
> allocates for use as per-process graphics translation tables. There are
> far more such pages than MAX_NR_IO_RANGES (256), so we need a scatter
> page list to contain them all.

So am I reading this right that you could do with the existing mechanism if the limit were higher?
Together with you not answering the question about the lack of an enforced limit in your new mechanism, this clearly tells me that you should use the existing one, raising the limit and, if necessary, dealing with the (security) fallout.

>> > --- a/xen/include/public/hvm/hvm_op.h
>> > +++ b/xen/include/public/hvm/hvm_op.h
>> > @@ -367,6 +367,18 @@ struct xen_hvm_set_ioreq_server_state {
>> >  typedef struct xen_hvm_set_ioreq_server_state xen_hvm_set_ioreq_server_state_t;
>> >  DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_ioreq_server_state_t);
>> >
>> > +#define HVMOP_map_pages_to_ioreq_server 23
>> > +#define MAX_MAP_BATCH_PAGES 128
>>
>> Too generic a name.
>
> How about MAX_IOREQSVR_MAP_BATCH_PAGES?

Better, but still insufficient. XEN_IOREQSRV_MAX_MAP_BATCH_PAGES would go in the direction of being acceptable in a public header. But with the earlier comments I suppose this will go away anyway.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel