Detecting a PCI device on Hyper-V / Azure with kernel 3.14.x


The Azure platform supports "accelerated networking", which passes a real NIC (an SR-IOV virtual function) through to the VM instead of routing traffic through the virtual switch. This boosts networking performance.
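
For context, this is roughly how the feature is toggled from the Azure CLI (a sketch with placeholder resource and NIC names; toggling it on an existing NIC generally requires the VM to be deallocated first):

az vm deallocate --resource-group myResourceGroup --name kerneldev
az network nic update --resource-group myResourceGroup --name kerneldev-nic --accelerated-networking true
az vm start --resource-group myResourceGroup --name kerneldev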

I am able to launch a custom Arch Linux image on Azure and can also detect the PCI device. See the "Ethernet controller" entry in the output below:

[slashinit@kerneldev ~]$ uname -a
Linux kerneldev 5.7.2-arch1-1 #1 SMP PREEMPT Wed, 10 Jun 2020 20:36:24 +0000 x86_64 GNU/Linux
[slashinit@kerneldev ~]$ lspci
0000:00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled) (rev 03)
0000:00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 01)
0000:00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
0000:00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 02)
0000:00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V virtual VGA
6ba6:00:02.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] (rev 80)
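
For reference, something like the following confirms which driver the virtual function is bound to (the 6ba6:00:02.0 address is taken from the output above; the synthetic PCI domain differs from VM to VM):

lspci -k -s 6ba6:00:02.0                           # prints "Kernel driver in use"
readlink /sys/bus/pci/devices/6ba6:00:02.0/driver  # symlink to the bound driver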

I also found that this device is detected thanks to the pci_hyperv driver: as soon as I unload that module, the Ethernet controller no longer shows up in the lspci output. The kernel config option that enables this driver is CONFIG_PCI_HYPERV.
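
This is the kind of check I use to confirm that (a sketch; it assumes the running kernel exposes its config via /proc/config.gz, as the Arch kernel does, and that the modprobe commands are run as root):

lsmod | grep pci_hyperv                   # module is currently loaded
zgrep CONFIG_PCI_HYPERV /proc/config.gz   # option is enabled in this kernel
modprobe -r pci_hyperv && lspci           # Ethernet controller disappears
modprobe pci_hyperv && lspci              # Ethernet controller is back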

The problem is that I need to detect the same Ethernet controller on an Azure VM, but on kernel version 3.14.x. In that kernel version I do not see any kernel config option called CONFIG_PCI_HYPERV, and maybe that is why the Ethernet controller does not appear in this VM. Other Hyper-V drivers are loaded (hv_balloon, hyperv_keyboard, hv_netvsc, hv_utils, hv_storvsc), but I still do not see the controller. Below is the lspci output in this VM (you can see that the Ethernet controller is missing):

00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled) (rev 03)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 01)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 02)
00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V virtual VGA
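
On the 3.14.x VM I can at least check whether the kernel ships such a driver at all and what devices the host is offering over VMBus (a sketch; the config file location and the VMBus sysfs attribute names may differ on older kernels):

find /lib/modules/$(uname -r) -name 'pci-hyperv*'   # empty: no such module was built
grep -i PCI_HYPERV /boot/config-$(uname -r)         # no hit in this 3.14.x config
ls /sys/bus/vmbus/devices/                          # VMBus offers from the host
cat /sys/bus/vmbus/devices/*/class_id               # look for a PCI pass-through class GUID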

This is the output of dmesg | grep -i pci:

[    0.391104] ACPI: bus type PCI registered
[    0.391105] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    0.391359] PCI: Using configuration type 1 for base access
[    0.450045] PCI: Ignoring host bridge windows from ACPI; if necessary, use "pci=use_crs" and report a bug
[    0.498040] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    0.501041] PCI: root bus 00: using default resources
[    0.501043] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[    0.502166] PCI host bridge to bus 0000:00
[    0.503014] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.504008] pci_bus 0000:00: root bus resource [io  0x0000-0xffff]
[    0.505002] pci_bus 0000:00: root bus resource [mem 0x00000000-0xfffffffffff]
[    0.506278] pci 0000:00:00.0: [8086:7192] type 00 class 0x060000
[    0.510426] pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
[    0.515279] pci 0000:00:07.1: [8086:7111] type 00 class 0x010180
[    0.518398] pci 0000:00:07.1: reg 0x20: [io  0xffa0-0xffaf]
[    0.520377] pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
[    0.525465] pci 0000:00:07.3: quirk: [io  0x0400-0x043f] claimed by PIIX4 ACPI
[    0.527543] pci 0000:00:08.0: [1414:5353] type 00 class 0x030000
[    0.528589] pci 0000:00:08.0: reg 0x10: [mem 0xf8000000-0xfbffffff]
[    0.550989] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 7 9 10 *11 12 14 15)
[    0.555808] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 7 9 10 11 12 14 15) *0, disabled.
[    0.562093] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 7 9 10 11 12 14 15) *0, disabled.
[    0.569313] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 7 9 10 11 12 14 15) *0, disabled.
[    0.577178] vgaarb: device added: PCI:0000:00:08.0,decodes=io+mem,owns=io+mem,locks=none
[    0.636031] PCI: Using ACPI for IRQ routing
[    0.640051] PCI: pci_cache_line_size set to 64 bytes
[    0.806792] pci_bus 0000:00: resource 4 [io  0x0000-0xffff]
[    0.806794] pci_bus 0000:00: resource 5 [mem 0x00000000-0xfffffffffff]
[    0.859198] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    0.865071] pci 0000:00:08.0: Boot video device
[    0.865140] PCI: CLS 0 bytes, default 64
[    1.064576] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    1.377415] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[    1.383121] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[    1.389219] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
[    1.526963] ehci-pci: EHCI PCI platform driver

Have I hit a roadblock? I certainly do not have the Mellanox drivers in this VM, but I believe that should not matter: the device should at least have shown up in the lspci output. I could not find any document from Microsoft stating the minimum kernel version required for "accelerated networking"/SR-IOV, so I assume there is a way to do this on older kernels as well.
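
For what it is worth, this is how I looked for the driver in my 3.14.x source tree (the path is an assumption about where the tree is checked out; in recent kernels the driver lives at drivers/pci/host/pci-hyperv.c or drivers/pci/controller/pci-hyperv.c):

cd /usr/src/linux-3.14.x                 # wherever the 3.14.x tree lives
grep -rl PCI_HYPERV drivers/ arch/x86/   # no matches in this tree
find . -name 'pci-hyperv*'               # the source file is not there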

azure
linux-kernel
linux-device-driver
hyper-v
pci
asked on Stack Overflow Aug 18, 2020 by Insane Coder • edited Aug 18, 2020 by Insane Coder


