[Xen-changelog] [qemu-xen-traditional stable-4.6] virtio: error out if guest exceeds virtqueue size
commit cff044b5c8bf51d9c9f3f9439671ed378857928a
Author:     P J P <ppandit@xxxxxxxxxx>
AuthorDate: Tue Jul 26 15:31:59 2016 +0100
Commit:     Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
CommitDate: Tue Sep 20 16:36:56 2016 +0100

    virtio: error out if guest exceeds virtqueue size

    A broken or malicious guest can submit more requests than the virtqueue
    size permits. The guest can submit requests without bothering to wait
    for completion and is therefore not bound by virtqueue size. This
    requires reusing vring descriptors in more than one request, which is
    incorrect but possible. Processing a request allocates a
    VirtQueueElement and therefore causes unbounded memory allocation
    controlled by the guest.

    Exit with an error if the guest provides more requests than the
    virtqueue size permits. This bounds memory allocation and makes the
    buggy guest visible to the user.

    Reported-by: Zhenhao Hong <zhenhaohong@xxxxxxxxx>
    Signed-off-by: Stefan Hajnoczi <stefanha@xxxxxxxxxx>
    (cherry picked from commit c4e0d84d3c92923fdbc7fa922638d54e5e834753)
    (cherry picked from commit 81111451256fd2f77c361fe65fa591743dbf04db)
---
 hw/virtio.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/virtio.c b/hw/virtio.c
index c26feff..42897bf 100644
--- a/hw/virtio.c
+++ b/hw/virtio.c
@@ -421,6 +421,11 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
     /* When we start there are none of either input nor output. */
     elem->out_num = elem->in_num = 0;

+    if (vq->inuse >= vq->vring.num) {
+        fprintf(stderr, "Virtqueue size exceeded");
+        exit(1);
+    }
+
     i = head = virtqueue_get_head(vq, vq->last_avail_idx++);

     do {
         struct iovec *sg;
--
generated by git-patchbot for /home/xen/git/qemu-xen-traditional.git#stable-4.6

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog