
[Xen-devel] [PATCH v4 09/11] drivers/dma/iop-adma: Use dma_alloc_writecombine() kernel-style



From: "Luis R. Rodriguez" <mcgrof@xxxxxxxx>

dma_alloc_writecombine()'s call and its return value check are tangled into
one statement. Untangle them according to kernel coding style.
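
For illustration only, the general shape of the change (variable names here
are hypothetical, not taken from the driver):

	/* before: allocation and its NULL check tangled into one statement */
	if ((buf = dma_alloc_writecombine(dev, size, &dma_handle, GFP_KERNEL)) == NULL)
		return -ENOMEM;

	/* after: the call and the check untangled, per kernel coding style */
	buf = dma_alloc_writecombine(dev, size, &dma_handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;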

Signed-off-by: Luis R. Rodriguez <mcgrof@xxxxxxxx>
Acked-by: Vinod Koul <vinod.koul@xxxxxxxxx>
Cc: Vinod Koul <vinod.koul@xxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: dmaengine@xxxxxxxxxxxxxxx
Cc: x86@xxxxxxxxxx
Link: http://lkml.kernel.org/r/1435258191-543-2-git-send-email-mcgrof@xxxxxxxxxxxxxxxx
Signed-off-by: Borislav Petkov <bp@xxxxxxx>
---
 drivers/dma/iop-adma.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c
index 998826854fdd..e4f43125e0fb 100644
--- a/drivers/dma/iop-adma.c
+++ b/drivers/dma/iop-adma.c
@@ -1300,10 +1300,11 @@ static int iop_adma_probe(struct platform_device *pdev)
         * note: writecombine gives slightly better performance, but
         * requires that we explicitly flush the writes
         */
-       if ((adev->dma_desc_pool_virt = dma_alloc_writecombine(&pdev->dev,
-                                       plat_data->pool_size,
-                                       &adev->dma_desc_pool,
-                                       GFP_KERNEL)) == NULL) {
+       adev->dma_desc_pool_virt = dma_alloc_writecombine(&pdev->dev,
+                                                         plat_data->pool_size,
+                                                         &adev->dma_desc_pool,
+                                                         GFP_KERNEL);
+       if (!adev->dma_desc_pool_virt) {
                ret = -ENOMEM;
                goto err_free_adev;
        }
-- 
2.4.3

