[xen staging-4.10] xen/common: event_channel: Don't ignore error in get_free_port()
commit 6261a06d990cc50ea8765356d296a61cc2d4a3e5
Author:     Julien Grall <jgrall@xxxxxxxxxx>
AuthorDate: Tue Jul 7 15:27:47 2020 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Jul 7 15:27:47 2020 +0200

    xen/common: event_channel: Don't ignore error in get_free_port()

    Currently, get_free_port() assumes that a port has been allocated
    whenever evtchn_allocate_port() does not return -EBUSY. However,
    evtchn_allocate_port() may also return other errors:
     - All the event channels have been exhausted. This can happen if
       the limit configured by the administrator for the guest
       ('max_event_channels' in the xl cfg) is higher than what the ABI
       used by the guest supports. For instance, if the guest is using
       the 2L (2-level) ABI, the limit should not be higher than 4095.
     - Memory allocation failed (e.g. Xen has no more memory).

    Users of get_free_port() (such as EVTCHNOP_alloc_unbound) would then
    wrongly assume the returned port is valid and go on to call
    evtchn_from_port(). This results in a crash, as the memory backing
    the event channel structure is not present.

    Fixes: 368ae9a05fe ("xen/pvshim: forward evtchn ops between L0 Xen and L2 DomU")
    Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 2e9c2bc292231823a3a021d2e0a9f1956bf00b3c
    master date: 2020-07-07 14:35:36 +0200
---
 xen/common/event_channel.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index be834c5c78..07ef45a140 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -202,10 +202,10 @@ static int get_free_port(struct domain *d)
     {
         int rc = evtchn_allocate_port(d, port);
 
-        if ( rc == -EBUSY )
-            continue;
-
-        return port;
+        if ( rc == 0 )
+            return port;
+        else if ( rc != -EBUSY )
+            return rc;
     }
 
     return -ENOSPC;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.10
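
To see the corrected control flow in isolation, here is a minimal,
self-contained C sketch of the error-propagation pattern the patch
introduces. This is not the actual Xen code: the stubbed
evtchn_allocate_port(), the MAX_PORTS constant and the main() driver
are illustrative assumptions only; the real functions operate on a
struct domain and take the per-domain event-channel lock.

/*
 * Sketch of the pattern fixed by this patch. evtchn_allocate_port()
 * is stubbed so the control flow can be observed standalone.
 */
#include <errno.h>
#include <stdio.h>

#define MAX_PORTS 4   /* stand-in for the per-domain port limit */

/* Stub: ports 0-1 are busy, port 2 allocates, port 3+ hits a limit. */
static int evtchn_allocate_port(int port)
{
    if ( port < 2 )
        return -EBUSY;
    if ( port >= 3 )
        return -ENOSPC;  /* e.g. limit above what the guest ABI supports */
    return 0;
}

static int get_free_port(void)
{
    int port;

    for ( port = 0; port < MAX_PORTS; port++ )
    {
        int rc = evtchn_allocate_port(port);

        /*
         * The bug: the old code only special-cased -EBUSY and returned
         * "port" for every other rc, so -ENOMEM/-ENOSPC were silently
         * turned into an apparently valid port number. Propagating rc
         * lets the caller see the failure.
         */
        if ( rc == 0 )
            return port;
        else if ( rc != -EBUSY )
            return rc;
    }

    return -ENOSPC;
}

int main(void)
{
    /* Callers must treat any negative return as an error, not a port. */
    int port = get_free_port();

    if ( port < 0 )
        printf("allocation failed: %d\n", port);
    else
        printf("got port %d\n", port);

    return 0;
}

The design point is that callers such as the EVTCHNOP_alloc_unbound
handler distinguish errors from ports purely by sign, so
get_free_port() must never convert a negative rc into a
plausible-looking port number that is then fed to evtchn_from_port().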