
Re: [PATCH v1 0/3] vPCI capabilities filtering


  • To: Jan Beulich <jbeulich@xxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Date: Mon, 14 Aug 2023 17:11:21 -0400
  • Cc: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Mon, 14 Aug 2023 21:11:53 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 8/14/23 09:59, Jan Beulich wrote:
> On 10.08.2023 21:12, Stewart Hildebrand wrote:
>> This small series enables vPCI to filter which PCI capabilities we expose to a
>> domU. This series adds vPCI register handlers within
>> xen/drivers/vpci/header.c:init_bars(), along with some supporting functions.
>>
>> Note there are minor rebase conflicts with the in-progress vPCI series [1].
>> These conflicts fall into the category of functions and code being added
>> adjacent to one another, so are easily resolved. I did not identify any
>> dependency on the vPCI locking work, and the two series deal with different
>> aspects of emulating the PCI header.
>>
>> Future work may involve adding handlers for more registers in the vPCI
>> header, such as STATUS, VID/DID, etc.
>>
>> [1] 
>> https://lists.xenproject.org/archives/html/xen-devel/2023-07/msg01281.html
>>
>> Stewart Hildebrand (3):
>>   xen/vpci: add vpci_hw_read8 helper
>>   xen/vpci: add vpci_read_val helper
>>   xen/vpci: header: filter PCI capabilities
> 
> I'm not convinced of the split here: Seeing the new helpers in isolation
> leaves entirely open what they're to be used for.

Our code review guide [2] (section "General Patterns") explicitly suggests 
splitting out independent helper functions into separate patches, though whether 
that means one patch per helper or all helpers in a single patch is ambiguous.
That said, I'd still be happy to squash these into a single patch to avoid the 
transient dead code situation - please confirm.

> Plus besides introducing
> dead code (even if only transiently), you also introduce cf_check marked
> code which isn't really called indirectly from anywhere. Yet we'd like to
> keep the amount of these markings down (in the final binary, not so much
> in source code).

The helper functions will be added to struct vpci_register objects, where they 
will be called from vpci.c:vpci_read():

    val = r->read(pdev, r->offset, r->private);

Does this justify the cf_check attribute?
If so, should the cf_check attributes instead be added in the patch that 
actually uses the helpers, "xen/vpci: header: filter PCI capabilities"?

[2] http://xenbits.xenproject.org/governance/code-review-guide.html
