Re: [PATCH RFC v3 for-6.8/block 04/17] mtd: block2mtd: use bdev apis



Hi,

On 2024/01/04 19:28, Jan Kara wrote:
On Thu 21-12-23 16:56:59, Yu Kuai wrote:
From: Yu Kuai <yukuai3@xxxxxxxxxx>

On the one hand, convert to using folios while reading the bdev inode; on
the other hand, avoid accessing bd_inode directly.

Signed-off-by: Yu Kuai <yukuai3@xxxxxxxxxx>
...
+               for (p = folio_address(folio); p < max; p++)
                        if (*p != -1UL) {
-                               lock_page(page);
-                               memset(page_address(page), 0xff, PAGE_SIZE);
-                               set_page_dirty(page);
-                               unlock_page(page);
-                               balance_dirty_pages_ratelimited(mapping);
+                               folio_lock(folio);
+                               memset(folio_address(folio), 0xff,
+                                      folio_size(folio));
+                               folio_mark_dirty(folio);
+                               folio_unlock(folio);
+                               bdev_balance_dirty_pages_ratelimited(bdev);

Rather then creating this bdev_balance_dirty_pages_ratelimited() just for
MTD perhaps we can have here (and in other functions):

                                ...
                                mapping = folio_mapping(folio);
                                folio_unlock(folio);
                                if (mapping)
                                        balance_dirty_pages_ratelimited(mapping);

What do you think? Because when we are working with the folios it is rather
natural to use their mapping for dirty balancing?

I think this is a great idea! And bdev_balance_dirty_pages_ratelimited()
can be removed as well.
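
For reference, applying that suggestion to the erase loop quoted above
might look roughly like this (a sketch only, untested; the surrounding
context and the `mapping` variable are assumed from the quoted patch):

```c
	for (p = folio_address(folio); p < max; p++)
		if (*p != -1UL) {
			folio_lock(folio);
			memset(folio_address(folio), 0xff,
			       folio_size(folio));
			folio_mark_dirty(folio);
			/* Grab the mapping while the folio is still
			 * locked; it may be detached (e.g. by
			 * truncation) once we unlock. */
			mapping = folio_mapping(folio);
			folio_unlock(folio);
			if (mapping)
				balance_dirty_pages_ratelimited(mapping);
			break;
		}
```

With this, MTD no longer needs to reach into the bdev at all for dirty
balancing, so the bdev_balance_dirty_pages_ratelimited() helper can go.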

Thanks,
Kuai


                                                                Honza
