Re: [PATCH v3 28/49] dm zoned: dynamically allocate the dm-zoned-meta shrinker
On 7/27/23 17:04, Qi Zheng wrote:
> In preparation for implementing lockless slab shrink, use new APIs to
> dynamically allocate the dm-zoned-meta shrinker, so that it can be freed
> asynchronously using kfree_rcu(). Then it doesn't need to wait for RCU
> read-side critical section when releasing the struct dmz_metadata.
>
> Signed-off-by: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
> Reviewed-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> ---
>  drivers/md/dm-zoned-metadata.c | 28 ++++++++++++++++------------
>  1 file changed, 16 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
> index 9d3cca8e3dc9..0bcb26a43578 100644
> --- a/drivers/md/dm-zoned-metadata.c
> +++ b/drivers/md/dm-zoned-metadata.c
> @@ -187,7 +187,7 @@ struct dmz_metadata {
>          struct rb_root mblk_rbtree;
>          struct list_head mblk_lru_list;
>          struct list_head mblk_dirty_list;
> -        struct shrinker mblk_shrinker;
> +        struct shrinker *mblk_shrinker;
>
>          /* Zone allocation management */
>          struct mutex map_lock;
> @@ -615,7 +615,7 @@ static unsigned long dmz_shrink_mblock_cache(struct dmz_metadata *zmd,
>  static unsigned long dmz_mblock_shrinker_count(struct shrinker *shrink,
>                                                 struct shrink_control *sc)
>  {
> -        struct dmz_metadata *zmd = container_of(shrink, struct dmz_metadata, mblk_shrinker);
> +        struct dmz_metadata *zmd = shrink->private_data;
>
>          return atomic_read(&zmd->nr_mblks);
>  }
> @@ -626,7 +626,7 @@ static unsigned long dmz_mblock_shrinker_count(struct shrinker *shrink,
>  static unsigned long dmz_mblock_shrinker_scan(struct shrinker *shrink,
>                                                struct shrink_control *sc)
>  {
> -        struct dmz_metadata *zmd = container_of(shrink, struct dmz_metadata, mblk_shrinker);
> +        struct dmz_metadata *zmd = shrink->private_data;
>          unsigned long count;
>
>          spin_lock(&zmd->mblk_lock);
> @@ -2936,19 +2936,23 @@ int dmz_ctr_metadata(struct dmz_dev *dev, int num_dev,
>           */
>          zmd->min_nr_mblks = 2 + zmd->nr_map_blocks + zmd->zone_nr_bitmap_blocks * 16;
>          zmd->max_nr_mblks = zmd->min_nr_mblks + 512;
> -        zmd->mblk_shrinker.count_objects = dmz_mblock_shrinker_count;
> -        zmd->mblk_shrinker.scan_objects = dmz_mblock_shrinker_scan;
> -        zmd->mblk_shrinker.seeks = DEFAULT_SEEKS;
>
>          /* Metadata cache shrinker */
> -        ret = register_shrinker(&zmd->mblk_shrinker, "dm-zoned-meta:(%u:%u)",
> -                                MAJOR(dev->bdev->bd_dev),
> -                                MINOR(dev->bdev->bd_dev));
> -        if (ret) {
> -                dmz_zmd_err(zmd, "Register metadata cache shrinker failed");
> +        zmd->mblk_shrinker = shrinker_alloc(0, "dm-zoned-meta:(%u:%u)",
> +                                            MAJOR(dev->bdev->bd_dev),
> +                                            MINOR(dev->bdev->bd_dev));
> +        if (!zmd->mblk_shrinker) {
> +                dmz_zmd_err(zmd, "Allocate metadata cache shrinker failed");

ret is not set here, so dmz_ctr_metadata() will return success. You need to
add:

        ret = -ENOMEM;

or something.

>                  goto err;
>          }
>
> +        zmd->mblk_shrinker->count_objects = dmz_mblock_shrinker_count;
> +        zmd->mblk_shrinker->scan_objects = dmz_mblock_shrinker_scan;
> +        zmd->mblk_shrinker->seeks = DEFAULT_SEEKS;
> +        zmd->mblk_shrinker->private_data = zmd;
> +
> +        shrinker_register(zmd->mblk_shrinker);

I fail to see how this new shrinker API is better... Why isn't there a
shrinker_alloc_and_register() function? That would avoid adding all this
code all over the place, as the new API call would be very similar to the
current shrinker_register() call with static allocation.
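For illustration, a combined helper along the lines asked about above might
look like the sketch below. No such function exists in the series or in the
current shrinker API; the name, the exact signature, the hardcoded
DEFAULT_SEEKS, and the "%pV" forwarding of the name format string are all
assumptions made for this sketch only.

/* Hypothetical helper; would live next to shrinker_alloc() in <linux/shrinker.h>. */
static inline struct shrinker *
shrinker_alloc_and_register(unsigned int flags, void *private_data,
                            unsigned long (*count_objects)(struct shrinker *,
                                                           struct shrink_control *),
                            unsigned long (*scan_objects)(struct shrinker *,
                                                          struct shrink_control *),
                            const char *fmt, ...)
{
        struct shrinker *shrinker;
        struct va_format vaf;
        va_list args;

        /* Forward the caller's name format string and arguments via "%pV". */
        va_start(args, fmt);
        vaf.fmt = fmt;
        vaf.va = &args;
        shrinker = shrinker_alloc(flags, "%pV", &vaf);
        va_end(args);
        if (!shrinker)
                return NULL;

        shrinker->count_objects = count_objects;
        shrinker->scan_objects = scan_objects;
        shrinker->seeks = DEFAULT_SEEKS; /* callers needing another value would still open-code */
        shrinker->private_data = private_data;

        shrinker_register(shrinker);
        return shrinker;
}

With something like that, the dm-zoned hunk above would collapse back to a
single call that returns NULL on failure, at which point dmz_ctr_metadata()
could set ret = -ENOMEM and jump to the err label.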
> +
>          dmz_zmd_info(zmd, "DM-Zoned metadata version %d", zmd->sb_version);
>          for (i = 0; i < zmd->nr_devs; i++)
>                  dmz_print_dev(zmd, i);
> @@ -2995,7 +2999,7 @@ int dmz_ctr_metadata(struct dmz_dev *dev, int num_dev,
>   */
>  void dmz_dtr_metadata(struct dmz_metadata *zmd)
>  {
> -        unregister_shrinker(&zmd->mblk_shrinker);
> +        shrinker_free(zmd->mblk_shrinker);
>          dmz_cleanup_metadata(zmd);
>          kfree(zmd);
>  }

-- 
Damien Le Moal
Western Digital Research
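For reference, the allocation-failure path flagged in the review above needs
only the one extra assignment Damien suggests. A sketch of the corrected
dmz_ctr_metadata() hunk, identical to the quoted patch apart from the added
ret = -ENOMEM line:

        /* Metadata cache shrinker */
        zmd->mblk_shrinker = shrinker_alloc(0, "dm-zoned-meta:(%u:%u)",
                                            MAJOR(dev->bdev->bd_dev),
                                            MINOR(dev->bdev->bd_dev));
        if (!zmd->mblk_shrinker) {
                /* Without this, dmz_ctr_metadata() would return success. */
                ret = -ENOMEM;
                dmz_zmd_err(zmd, "Allocate metadata cache shrinker failed");
                goto err;
        }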