Showing results for tags 'proxmox ve'.

Found 19 results

  1. Once you’ve configured your Proxmox VE 8 server and NVIDIA GPU for PCI/PCIE passthrough and created a Windows 11 virtual machine (VM) on your Proxmox VE 8 server, you need to add your NVIDIA GPU to the Windows 11 Proxmox VE virtual machine. You also need to install the NVIDIA GPU drivers on the Windows 11 virtual machine and connect a monitor, a keyboard, and a mouse to use the Windows 11 Proxmox VE 8 virtual machine as a normal PC. Table of Contents Preparing Proxmox VE 8 for NVIDIA GPU Passthrough Creating a Windows 11 Virtual Machine on Proxmox VE 8 Adding Your NVIDIA GPU to the Windows 11 Proxmox VE 8 Virtual Machine (VM) Adding a Keyboard and Mouse to the Windows 11 Proxmox VE 8 Virtual Machine (VM) Checking if NVIDIA GPU Passthrough is Working on the Windows 11 Proxmox VE Virtual Machine Downloading and Installing NVIDIA GPU Drivers on the Windows 11 Proxmox VE Virtual Machine Removing the Virtual Graphics Adapter of the Windows 11 Proxmox VE Virtual Machine (VM) Conclusion Preparing Proxmox VE 8 for NVIDIA GPU Passthrough Before you can passthrough your NVIDIA GPU on Proxmox VE virtual machines (VMs), you must configure your NVIDIA GPU for PCI/PCIE passthrough on your Proxmox VE 8 server. For detailed information on how to configure NVIDIA GPUs for PCI/PCIE passthrough on Proxmox VE 8, read this article. Creating a Windows 11 Virtual Machine on Proxmox VE 8 To passthrough your NVIDIA GPU to a Windows 11 Proxmox VE 8 virtual machine, you will of course need a Windows 11 virtual machine on your Proxmox VE 8 server. For detailed information on how to create a Windows 11 virtual machine on Proxmox VE 8, read this article. Adding Your NVIDIA GPU to the Windows 11 Proxmox VE 8 Virtual Machine (VM) To add your NVIDIA GPU to a Windows 11 Proxmox VE virtual machine, open the Windows 11 virtual machine in your Proxmox VE dashboard, navigate to the Hardware section of the Windows 11 virtual machine, and click on Add > PCI Device as marked in the screenshot below. From the Add: PCI Device window, select Raw Device[1] and select your NVIDIA GPU (not the NVIDIA Audio Device of the GPU) from the Device dropdown menu[2]. Check All Functions[1], check PCI-Express[2], and click on Add[3]. Your NVIDIA GPU should be added to your selected Windows 11 Proxmox VE virtual machine (VM). Adding a Keyboard and Mouse to the Windows 11 Proxmox VE 8 Virtual Machine (VM) To use the Windows 11 Proxmox VE virtual machine as a normal PC, you need to add a keyboard and mouse to the virtual machine. First, connect a USB keyboard and a USB mouse to the USB ports of your Proxmox VE 8 server. Then, open the Windows 11 virtual machine on Proxmox VE dashboard, navigate to the Hardware section, and click on Add > USB Device. From the Add: USB Device window, select Use USB Vendor/Device ID[1] and select your mouse from the Choose Device dropdown menu[2]. Click on Add. The USB mouse should be added to your Windows 11 Proxmox VE virtual machine. In the same way, add your USB keyboard to the Windows 11 Proxmox VE virtual machine. The USB keyboard should be added to the Windows 11 Proxmox VE virtual machine. Checking if NVIDIA GPU Passthrough is Working on the Windows 11 Proxmox VE Virtual Machine To check if the NVIDIA GPU passthrough is working on the Windows 11 Proxmox VE virtual machine (VM), you need to start the Windows 11 virtual machine and see if it starts without any errors. If the NVIDIA GPU passthrough fails, the Windows 11 virtual machine won’t start. 
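As an aside, the same hardware additions can also be made from the Proxmox VE shell with the qm command instead of the web UI. The following is only a minimal sketch: the VM ID (100), the GPU's PCI address (01:00), and the USB vendor/device IDs (046d:c31c and 046d:c077) are placeholders that you must replace with the values from your own server (the same IDs shown in the Add: PCI Device and Add: USB Device windows).
$ # attach all functions of the GPU at PCI address 01:00 as a PCIe device
$ qm set 100 --hostpci0 01:00,pcie=1
$ # attach the USB mouse and keyboard by their vendor:device IDs
$ qm set 100 --usb0 host=046d:c31c
$ qm set 100 --usb1 host=046d:c077
Passing 01:00 without a function suffix (such as .0) attaches all functions of the GPU, which corresponds to ticking All Functions in the web UI.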
To start the Windows 11 virtual machine, open it on the Proxmox VE dashboard and click on Start. The Windows 11 virtual machine should start without any issues. If the NVIDIA GPU passthrough is successful, you will see two display adapters in the Device Manager of the Windows 11 virtual machine. NOTE: To open Device Manager on Windows 11, right-click (RMB) on the Start Menu and click on Device Manager. Downloading and Installing NVIDIA GPU Drivers on the Windows 11 Proxmox VE Virtual Machine Once you’ve added your NVIDIA GPU to the Windows 11 Proxmox VE virtual machine (VM), you need to install the NVIDIA GPU drivers on the Windows 11 virtual machine for it to work properly. The process of downloading and installing the NVIDIA GPU drivers on the Windows 11 virtual machine is the same as on a real Windows 11 PC. To download the latest version of the NVIDIA GPU drivers, visit the official NVIDIA GPU Drivers downloads page from a web browser on the Windows 11 virtual machine. Once the page loads, select your NVIDIA GPU from the Product Type, Product Series, and Product dropdown menus[1]. Then, select Windows 11 from the Operating System dropdown menu[2], select the type of driver (GRD – Game Ready Driver or SD – Studio Driver) you want to download from the Download Type dropdown menu[3], select your language from the Language dropdown menu[4], and click on Search[5]. Click on Download. Click on Download. Your browser should start downloading the NVIDIA GPU drivers installer file. It will take a while to complete. At this point, the NVIDIA GPU drivers installer file should be downloaded. Once the NVIDIA GPU drivers installer file is downloaded, you will find it in the Downloads folder of your Windows 11 virtual machine. To install the NVIDIA GPU drivers on the Windows 11 Proxmox VE virtual machine, double-click (LMB) on the NVIDIA GPU drivers installer file. Click on Yes. Click on OK. The NVIDIA drivers installer is being extracted to your computer. Once the NVIDIA drivers installer is extracted, the NVIDIA drivers installer window should be displayed. To install only the NVIDIA GPU drivers (not the GeForce Experience), select NVIDIA Graphics Driver[1] and click on AGREE AND CONTINUE[2]. Select Custom (Advanced)[1] and click on NEXT[2]. Check the Graphics Driver, HD Audio Driver, and PhysX System Software components from the list[1], check Perform a clean installation[2], and click on NEXT[3]. NVIDIA GPU drivers are being installed on the Windows 11 Proxmox VE virtual machine. It will take a while to complete. Once the NVIDIA GPU drivers installation is complete, click on CLOSE. You can confirm that the NVIDIA GPU (that you’ve added to the Windows 11 Proxmox VE virtual machine) is recognized from the Device Manager app of Windows 11. You can also confirm that your NVIDIA GPU is working correctly (on the Windows 11 Proxmox VE virtual machine) from the Performance section of the Task Manager app of Windows 11. NOTE: The Task Manager app can be opened on Windows 11 using the keyboard shortcut <Ctrl> + <Shift> + <Esc>. You can also right-click (RMB) on the Start Menu and click on Task Manager to open the Task Manager app on Windows 11. For more information on opening the Task Manager app on Windows 10/11, read this article. 
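In addition to Device Manager and Task Manager, the NVIDIA driver package for Windows typically installs the nvidia-smi command-line tool, so you can also confirm that the GPU is detected from a Command Prompt inside the virtual machine (a quick, optional check):
> nvidia-smi
If the passthrough and driver installation are working, it should list the GPU along with the installed driver version and the current utilization.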
Removing the Virtual Graphics Adapter of the Windows 11 Proxmox VE Virtual Machine (VM) Once the NVIDIA GPU is added to the Windows 11 Proxmox VE virtual machine and the NVIDIA GPU drivers are installed on the Windows 11 virtual machine, you can remove the virtual graphics adapter of the Windows 11 virtual machine so that you only get video output on the monitor/monitors directly connected to your NVIDIA GPU and get the best performance from the NVIDIA GPU, just like a real computer. You can use it just like a real Windows PC; you won’t notice any difference. To remove the virtual graphics adapter from the Windows 11 Proxmox VE virtual machine (VM), first, click on Shutdown from the top-right corner of the Proxmox VE dashboard to shut down the Windows 11 virtual machine. Click on Yes. Once the Windows 11 virtual machine is shut down, navigate to the Hardware section, select Display, and click on Edit. Select none from the Graphic card dropdown menu[1] and click on OK[2]. The virtual graphics adapter should be removed from the Windows 11 Proxmox VE virtual machine (VM). As you can see, the screen of the Windows 11 Proxmox VE virtual machine (VM) is displayed on the monitor connected to the NVIDIA GPU via HDMI cable, just like a real computer. The virtual graphics adapter is removed from the Windows 11 virtual machine and only the NVIDIA GPU of the virtual machine is used for displaying the screen of the virtual machine. I am running the UNIGINE Heaven benchmark on the Windows 11 Proxmox VE virtual machine and I am getting good framerates as you can see in the screenshot below. Conclusion In this article, I have shown you how to passthrough an NVIDIA GPU, a keyboard, and a mouse to a Windows 11 Proxmox VE 8 virtual machine. I have also shown you how to install the NVIDIA GPU drivers on the Windows 11 Proxmox VE virtual machine and configure it to work just like any other Windows PC. View the full article
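As a side note to the display-adapter removal described above, the same change can also be made from the Proxmox VE shell; a minimal hedged sketch, where the VM ID 100 is a placeholder for your own virtual machine:
$ # set the virtual display of VM 100 to none (equivalent to Graphic card > none in the GUI)
$ qm set 100 --vga none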
  2. In this article, I am going to show you how to download the ISO image of Windows 11 and VirtIO Windows drivers on Proxmox VE 8, create a Windows 11 virtual machine (VM) on Proxmox VE 8, install Windows 11 on the Proxmox VE 8 virtual machine (VM), and install VirtIO drivers and QEMU guest agent on the Windows 11 Proxmox VE 8 virtual machine (VM). Table of Contents How to Download/Upload the Windows 11 ISO Image on Proxmox VE 8 Downloading the Latest VirtIO Drivers ISO File for Windows 11 on Proxmox VE 8 Creating a Windows 11 Proxmox VE 8 Virtual Machine (VM) Installing Windows 11 on the Proxmox VE 8 Virtual Machine (VM) Installing VirtIO Drivers and QEMU Guest Agent on the Windows 11 Proxmox VE 8 Virtual Machine (VM) Removing the Windows 11 and VirtIO Drivers ISO Images from the Windows 11 Proxmox VE 8 Virtual Machine (VM) Conclusion References How to Download/Upload the Windows 11 ISO Image on Proxmox VE 8 There are two ways to get the Windows 11 ISO image on your Proxmox VE 8 server. Download the Windows 11 ISO image on your computer and upload it to Proxmox VE from your computer. Download the Windows 11 ISO image directly on Proxmox VE. To download the Windows 11 ISO image, visit the official download page of Windows 11 from your favorite web browser. Once the page loads, select Windows 11 (multi-edition ISO for x64 devices)[1] from the dropdown menu and click on Download Now[2]. Select your language from the dropdown menu[1] and click on Confirm[2]. The download link for the Windows 11 ISO image should be generated. To download the Windows 11 ISO image on your computer (so that you can upload it to Proxmox VE), click on the 64-bit Download button. To download the Windows 11 ISO image on your Proxmox VE 8 server directly, right-click (RMB) on the 64-bit Download button and click on Copy Link (or similar option depending on your web browser) to copy the download link of the Windows 11 ISO image. Now, navigate to the ISO Images section of your desired Proxmox VE datastore (that supports ISO image) from the resource tree of your Proxmox VE server[1]. If you’ve downloaded the Windows 11 ISO image on your computer, click on Upload and select the Windows 11 ISO image file from your computer to upload it to your Proxmox VE server[2]. If you want to download the Windows 11 ISO image directly on your Proxmox VE server, click on Download from URL on your Proxmox VE server[3]. I will demonstrate this method in this article. Once you’ve clicked on the Download from URL button, you will see the following window. Paste the Windows 11 ISO download link (that you’ve copied recently) in the URL section and click on Query URL. The correct File name[1] and File size[2] of the Windows 11 ISO image should be displayed. Click on Download[3]. Proxmox VE should start downloading the Windows 11 ISO image. It will take a while to complete as it’s a big download. Just wait till it finishes. Once the Windows 11 ISO image is downloaded on your Proxmox VE 8 server, it will be listed in the ISO Images section of your selected Proxmox VE datastore. Downloading the Latest VirtIO Drivers ISO File for Windows 11 on Proxmox VE 8 To get the best performance, you need to install the required VirtIO drivers on the Windows 11 operating system after it’s installed on the Proxmox VE virtual machine. To download the latest version of the VirtIO drivers ISO image file on Proxmox VE 8, visit the official VirtIO drivers download page from your favorite web browser. 
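As a side note, ISO images can also be fetched directly from the Proxmox VE shell instead of using the Download from URL button. This is only a sketch and assumes the default local datastore, whose ISO directory is /var/lib/vz/template/iso; the URL is a placeholder for whichever download link you copied:
$ # download an ISO straight into the local datastore's ISO directory
$ wget -O /var/lib/vz/template/iso/virtio-win.iso "https://example.com/path/to/virtio-win.iso"
The downloaded file then shows up in the ISO Images section of that datastore, just like one added with Download from URL.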
Once the page loads, right-click on the virtio-win.iso image file or virtio-win-<version>.iso image file and click on Copy Link (or similar option depending on the web browser you’re using). Then, navigate to the ISO Images section of your desired Proxmox VE datastore (that supports ISO images) and click on Download from URL. Type in the VirtIO ISO image download link (that you’ve copied recently) in the URL section and click on Query URL[1]. The file name[2] and file size[3] of the VirtIO ISO image should be displayed. Then, click on Download[4]. Proxmox VE should start downloading the VirtIO ISO image. It will take a while to complete. Once the VirtIO ISO image is downloaded, it will be displayed in the ISO Images section of the Proxmox VE datastore (where you’ve downloaded it). Creating a Windows 11 Proxmox VE 8 Virtual Machine (VM) To create a new virtual machine on Proxmox VE 8, click on Create VM from the top-right corner of the Proxmox VE dashboard. The Proxmox VE 8 virtual machine creation wizard should be displayed. In the General tab, type in a name for your Windows 11 virtual machine[1] and click on Next[2]. In the OS tab, select Use CD/DVD disc image file (iso)[1], select the datastore where you’ve uploaded/downloaded the Windows 11 ISO image from the Storage dropdown menu, and select the Windows 11 ISO image from the ISO image dropdown menu[2]. Then, select Microsoft Windows from the Guest OS Type dropdown menu[3], select 11/2022 from the Version dropdown menu[4], tick Add additional drive for VirtIO drivers[5], and select the VirtIO drivers ISO image file from the Storage and ISO image dropdown menus[6]. Once you’re done with all the steps, click on Next[7]. In the System tab, select a datastore for the EFI disk from the EFI Storage dropdown menu[1], tick the Qemu Agent checkbox[2], and select a datastore for the TPM from the TPM Storage dropdown menu[3]. Once you’re done, click on Next[4]. In the Disks tab, select VirtIO Block from the Bus/Device dropdown menu[1], select a datastore for the virtual machine’s disk from the Storage dropdown menu[2], and type in your desired disk size in the Disk size (GiB) section[3]. Once you’re done, click on Next[4]. In the CPU tab, select the number of CPU cores you want to allocate for the virtual machine from the Cores section[1], select host from the Type dropdown menu[2], and click on Next[3]. In the Memory tab, type in the amount of memory you want to allocate to the Proxmox VE virtual machine (VM) in the Memory (MiB) section[1]. If you want to overprovision the memory of your Proxmox VE server (allocate more memory to virtual machines than you have available on your Proxmox VE server), tick Ballooning Device[2] and type in the minimum amount of memory that you want to allocate to the virtual machine in the Minimum memory (MiB) section[3]. If you enable Ballooning Device for this virtual machine, the virtual machine will release unused memory to the Proxmox VE server so that it can allocate it to other virtual machines. For more information on this, read the Proxmox VE Dynamic Memory Management documentation. Once you’re done, click on Next[4]. In the Network tab, select VirtIO (paravirtualized)[1] from the Model dropdown menu and click on Next[2]. Click on Finish. A Windows 11 Proxmox VE 8 virtual machine should be created[1]. To start the Windows 11 virtual machine, click on Start[2]. Press any key and the Windows 11 installer should be displayed on the virtual machine. 
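For reference, the wizard settings above roughly correspond to a single qm create command run from the Proxmox VE shell. This is only a hedged sketch: the VM ID (100), the VM name, the local and local-lvm datastore names, the ISO file names, and the disk/memory sizes are placeholders you should adjust to your own setup.
$ qm create 100 --name win11 --ostype win11 --machine q35 --bios ovmf \
    --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1 \
    --tpmstate0 local-lvm:1,version=v2.0 \
    --virtio0 local-lvm:64 --cores 4 --cpu host \
    --memory 8192 --balloon 4096 \
    --net0 virtio,bridge=vmbr0 --agent enabled=1 \
    --ide2 local:iso/Win11.iso,media=cdrom \
    --ide0 local:iso/virtio-win.iso,media=cdrom
The two ide entries attach the Windows 11 and VirtIO drivers ISOs, mirroring the CD/DVD drive and the additional VirtIO drivers drive selected in the OS tab.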
From here, you can install Windows 11 on the Proxmox VE virtual machine as usual. Installing Windows 11 on the Proxmox VE 8 Virtual Machine (VM) To install Windows 11, select your language, time and currency format, and keyboard/input method from the respective dropdown menus[1] and click on Next[2]. Click on Install now. If you have a Windows 11 license key, type it in and click on Next. If you don’t have a Windows 11 license key or want to activate Windows 11 later, click on I don’t have a product key. Select the Windows 11 version that you want to install on the Proxmox VE virtual machine and click on Next. Tick the I accept the Microsoft Software License Terms… checkbox[1] and click on Next[2]. Click on Custom: Install Windows only (advanced). Now, you have to install the VirtIO SCSI driver and VirtIO Ethernet driver to successfully install Windows 11 on the Proxmox VE 8 virtual machine. To install the VirtIO SCSI driver from the VirtIO drivers ISO file, click on Load driver. Click on Browse. Select CD Drive: virtio-win > amd64 > w11 folder and click on OK as marked in the screenshot below. The VirtIO SCSI driver should be listed. Select the Red Hat VirtIO SCSI controller driver[1] and click on Next[2]. The VirtIO SCSI driver is being installed. It will take a few seconds to complete. Once the VirtIO SCSI driver is installed, you will see a free disk in your Proxmox VE 8 virtual machine[1]. To install the VirtIO Ethernet driver, click on Load driver again[2]. Click on Browse. Select CD Drive: virtio-win > NetKVM > w11 > amd64 folder and click on OK as marked in the screenshot below. The VirtIO Ethernet driver should be listed. Select the Red Hat VirtIO Ethernet Adapter driver[1] and click on Next[2]. The VirtIO Ethernet driver is being installed. It will take a few seconds to complete. The VirtIO Ethernet driver should be installed. Once the VirtIO SCSI and VirtIO Ethernet drivers are installed, select the free disk[1] and click on Next[2]. The Windows installer should start installing Windows 11 on the disk of the Proxmox VE 8 virtual machine. It will take a few minutes to complete. Once the required Windows 11 files are installed on the Proxmox VE 8 virtual machine, the virtual machine will reboot. On the next boot, the Windows installer will ask you a few questions to configure Windows 11 for you. First, select your country/region from the list and click on Yes. Select a keyboard layout or input method from the list and click on Yes. If you want to add another keyboard layout or input method to your Windows 11 installation, click on Add layout and follow the instructions. If you don’t want to add another keyboard layout or input method, click on Skip. You will need to wait a few minutes for the Windows 11 installer to get ready and show you the next steps. Type in a name for your Windows 11 virtual machine[1] and click on Next[2]. Select how you want this Windows 11 virtual machine set up[1] and click on Next[2]. Depending on what you select from this section, you will see different options later. I am setting up this Windows 11 virtual machine for personal use. Click on Sign in. You must have a Microsoft account to install and use Windows 11. If you don’t have a Microsoft account, you can create one from here. Once you have a Microsoft account, log in to your Microsoft account to continue the Windows 11 installation. If you’ve used the same Microsoft account on different Windows 10/11 devices, you will be asked to restore data on this virtual machine from the latest backup. 
To do that, click on Restore from this PC[1]. If the device you want to restore from is not listed or you want to set this virtual machine as a new Windows 11 device, click on More options[2]. All the Windows 10/11 devices that you’ve connected to this Microsoft account should be listed. You can restore data from any of these devices. Just select your desired Windows 10/11 device from the list and click on Restore from this PC[1]. If you want to set this virtual machine as a new Windows 11 device, click on Set up as a new PC[2]. Click on Create PIN. Type in your PIN and click on OK. Click on Next. Click on Accept. You can select the type of work you want to do in this virtual machine from the list and click on Accept so that Windows 11 can customize it for you. If you don’t want to answer it now, click on Skip. You will be asked to connect your Android phone to Windows 11. You can do that later. So, click on Skip to simplify the Windows 11 installation. You will be asked to import the browsing data from your Microsoft account. If you’re a Microsoft Edge user, this will be helpful. So, click on Accept and follow the procedures. If you don’t want to import the browsing data from your Microsoft account, click on Not now. To simplify the Windows 11 installation, I have selected this option. Click on Decline to simplify the Windows 11 installation. Click on Decline. Windows 11 should be ready to use in a few minutes. Windows 11 should be installed on the Proxmox VE 8 virtual machine. Installing VirtIO Drivers and QEMU Guest Agent on the Windows 11 Proxmox VE 8 Virtual Machine (VM) To install all the VirtIO drivers and the QEMU guest agent on the Windows 11 Proxmox VE 8 virtual machine, double-click (LMB) on the VirtIO driver CD (CD Drive virtio-win-<version>) from the File Explorer of Windows 11. Double-click (LMB) on the virtio-win-guest-tools installer file as marked in the screenshot below. The VirtIO Guest Tools installer window should be displayed. Check I agree to the license terms and conditions[1] and click on Install[2]. Click on Yes. Click on Next. Check I accept the terms in the License Agreement[1] and click on Next[2]. Click on Next. Click on Install. The VirtIO drivers are being installed. It will take a few seconds to complete. Once the VirtIO drivers are installed on the Windows 11 Proxmox VE virtual machine, click on Finish. After the VirtIO drivers are installed, the QEMU Guest Agent should start installing. It will take a few seconds to complete. Once the QEMU Guest Agent is installed, click on Close. Removing the Windows 11 and VirtIO Drivers ISO Images from the Windows 11 Proxmox VE 8 Virtual Machine (VM) Once you’ve installed Windows 11 on the Proxmox VE 8 virtual machine, you can remove the Windows 11 and VirtIO drivers ISO images from the Windows 11 virtual machine. To remove the Windows 11 ISO image from the Windows 11 Proxmox VE virtual machine, navigate to the Hardware section of the Windows 11 virtual machine, select the CD/DVD Drive that has the Windows 11 ISO image file mounted, and click on Edit. Select Do not use any media and click on OK. The Windows 11 ISO image should be removed from the CD/DVD Drive of the Windows 11 Proxmox VE virtual machine[1]. In the same way, you can remove the VirtIO drivers ISO image from the CD/DVD Drive of the Windows 11 Proxmox VE virtual machine[2]. Conclusion In this article, I have shown you how to download/upload the latest Windows 11 ISO image on your Proxmox VE 8 server directly from Microsoft. 
I have also shown you how to download the latest VirtIO drivers ISO image for the Windows 11 Proxmox VE 8 virtual machine. I have shown you how to create a Windows 11 Proxmox VE 8 virtual machine, install Windows 11 on it, and install the VirtIO drivers and QEMU guest agent on the Windows 11 virtual machine as well. Finally, after Windows 11, the VirtIO drivers, and the QEMU guest agent are installed on the Proxmox VE virtual machine, I have shown you how to remove the Windows 11 and VirtIO drivers ISO images from the Windows 11 Proxmox VE virtual machine. References Download Windows 11 Windows VirtIO Drivers – Proxmox VE View the full article
  3. Proxmox VE 8 is one of the best open-source and free Type-I hypervisors out there for running QEMU/KVM virtual machines (VMs) and LXC containers. It has a nice web management interface and a lot of features. One of the most amazing features of Proxmox VE is that it can passthrough PCI/PCIE devices (i.e. an NVIDIA GPU) from your computer to Proxmox VE virtual machines (VMs). The PCI/PCIE passthrough is getting better and better with newer Proxmox VE releases. At the time of this writing, the latest version of Proxmox VE is Proxmox VE v8.1 and it has great PCI/PCIE passthrough support. In this article, I am going to show you how to configure your Proxmox VE 8 host/server for PCI/PCIE passthrough and configure your NVIDIA GPU for PCIE passthrough on Proxmox VE 8 virtual machines (VMs). Table of Contents Enabling Virtualization from the BIOS/UEFI Firmware of Your Motherboard Installing Proxmox VE 8 Enabling Proxmox VE 8 Community Repositories Installing Updates on Proxmox VE 8 Enabling IOMMU from the BIOS/UEFI Firmware of Your Motherboard Enabling IOMMU on Proxmox VE 8 Verifying if IOMMU is Enabled on Proxmox VE 8 Loading VFIO Kernel Modules on Proxmox VE 8 Listing IOMMU Groups on Proxmox VE 8 Checking if Your NVIDIA GPU Can Be Passthrough to a Proxmox VE 8 Virtual Machine (VM) Checking for the Kernel Modules to Blacklist for PCI/PCIE Passthrough on Proxmox VE 8 Blacklisting Required Kernel Modules for PCI/PCIE Passthrough on Proxmox VE 8 Configuring Your NVIDIA GPU to Use the VFIO Kernel Module on Proxmox VE 8 Passthrough the NVIDIA GPU to a Proxmox VE 8 Virtual Machine (VM) Still Having Problems with PCI/PCIE Passthrough on Proxmox VE 8 Virtual Machines (VMs)? Conclusion References Enabling Virtualization from the BIOS/UEFI Firmware of Your Motherboard Before you can install Proxmox VE 8 on your computer/server, you must enable the hardware virtualization feature of your processor from the BIOS/UEFI firmware of your motherboard. The process is different for different motherboards. So, if you need any assistance in enabling hardware virtualization on your motherboard, read this article. Installing Proxmox VE 8 Proxmox VE 8 is free to download, install, and use. Before you get started, make sure to install Proxmox VE 8 on your computer. If you need any assistance on that, read this article. Enabling Proxmox VE 8 Community Repositories Once you have Proxmox VE 8 installed on your computer/server, make sure to enable the Proxmox VE 8 community package repositories. By default, Proxmox VE 8 enterprise package repositories are enabled and you won’t be able to get/install updates and bug fixes from the enterprise repositories unless you have bought Proxmox VE 8 enterprise licenses. So, if you want to use Proxmox VE 8 for free, make sure to enable the Proxmox VE 8 community package repositories to get the latest updates and bug fixes from Proxmox for free. Installing Updates on Proxmox VE 8 Once you’ve enabled the Proxmox VE 8 community package repositories, make sure to install all the available updates on your Proxmox VE 8 server. Enabling IOMMU from the BIOS/UEFI Firmware of Your Motherboard The IOMMU configuration is found in different locations in different motherboards. To enable IOMMU on your motherboard, read this article. Enabling IOMMU on Proxmox VE 8 Once the IOMMU is enabled on the hardware side, you also need to enable IOMMU from the software side (from Proxmox VE 8). 
To enable IOMMU from Proxmox VE 8, you have to add the following kernel boot parameters depending on your processor vendor: for Intel processors, add intel_iommu=on iommu=pt; for AMD processors, add iommu=pt. To modify the kernel boot parameters of Proxmox VE 8, open the /etc/default/grub file with the nano text editor as follows: $ nano /etc/default/grub At the end of the GRUB_CMDLINE_LINUX_DEFAULT line, add the required kernel boot parameters for enabling IOMMU depending on the processor you’re using. As I am using an AMD processor, I have added only the kernel boot parameter iommu=pt at the end of the GRUB_CMDLINE_LINUX_DEFAULT line in the /etc/default/grub file. Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/default/grub file. Now, update the GRUB boot configurations with the following command: $ update-grub2 Once the GRUB boot configurations are updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect. Verifying if IOMMU is Enabled on Proxmox VE 8 To verify whether IOMMU is enabled on Proxmox VE 8, run the following command: $ dmesg | grep -e DMAR -e IOMMU If IOMMU is enabled, you will see some outputs confirming that IOMMU is enabled. If IOMMU is not enabled, you may not see any outputs. You also need to have the IOMMU Interrupt Remapping enabled for PCI/PCIE passthrough to work. To check if IOMMU Interrupt Remapping is enabled on your Proxmox VE 8 server, run the following command: $ dmesg | grep 'remapping' As you can see, IOMMU Interrupt Remapping is enabled on my Proxmox VE 8 server. NOTE: Most modern AMD and Intel processors will have IOMMU Interrupt Remapping enabled. If, for any reason, you don’t have IOMMU Interrupt Remapping enabled, there’s a workaround. You have to enable Unsafe Interrupts for VFIO. Read this article for more information on enabling Unsafe Interrupts on your Proxmox VE 8 server. Loading VFIO Kernel Modules on Proxmox VE 8 The PCI/PCIE passthrough is done mainly by the VFIO (Virtual Function I/O) kernel modules on Proxmox VE 8. The VFIO kernel modules are not loaded at boot time by default on Proxmox VE 8. But, it’s easy to load the VFIO kernel modules at boot time on Proxmox VE 8. First, open the /etc/modules-load.d/vfio.conf file with the nano text editor as follows: $ nano /etc/modules-load.d/vfio.conf Type in the following lines in the /etc/modules-load.d/vfio.conf file. vfio vfio_iommu_type1 vfio_pci Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the changes. Now, update the initramfs of your Proxmox VE 8 installation with the following command: $ update-initramfs -u -k all Once the initramfs is updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect. Once your Proxmox VE 8 server boots, you should see that all the required VFIO kernel modules are loaded. $ lsmod | grep vfio Listing IOMMU Groups on Proxmox VE 8 To passthrough PCI/PCIE devices on Proxmox VE 8 virtual machines (VMs), you will need to check the IOMMU groups of your PCI/PCIE devices quite frequently. To make checking for IOMMU groups easier, I decided to write a shell script (I got it from GitHub, but I can’t remember the name of the original poster) in the path /usr/local/bin/print-iommu-groups so that I can just run the print-iommu-groups command and it will print the IOMMU groups on the Proxmox VE 8 shell. 
First, create a new file print-iommu-groups in the path /usr/local/bin and open it with the nano text editor as follows: $ nano /usr/local/bin/print-iommu-groups Type in the following lines in the print-iommu-groups file:
#!/bin/bash
shopt -s nullglob
for g in `find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V`; do
    echo "IOMMU Group ${g##*/}:"
    for d in $g/devices/*; do
        echo -e "\t$(lspci -nns ${d##*/})"
    done;
done;
Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the changes to the print-iommu-groups file. Make the print-iommu-groups script file executable with the following command: $ chmod +x /usr/local/bin/print-iommu-groups Now, you can run the print-iommu-groups command as follows to print the IOMMU groups of the PCI/PCIE devices installed on your Proxmox VE 8 server: $ print-iommu-groups As you can see, the IOMMU groups of the PCI/PCIE devices installed on my Proxmox VE 8 server are printed. Checking if Your NVIDIA GPU Can Be Passthrough to a Proxmox VE 8 Virtual Machine (VM) To passthrough a PCI/PCIE device to a Proxmox VE 8 virtual machine (VM), it must be in its own IOMMU group. If 2 or more PCI/PCIE devices share an IOMMU group, you can’t passthrough any of the PCI/PCIE devices of that IOMMU group to any Proxmox VE 8 virtual machines (VMs). So, if your NVIDIA GPU and its audio device are in their own IOMMU group, you can passthrough the NVIDIA GPU to any Proxmox VE 8 virtual machines (VMs). On my Proxmox VE 8 server, I am using an MSI X570 ACE motherboard paired with a Ryzen 3900X processor and Gigabyte RTX 4070 NVIDIA GPU. According to the IOMMU groups of my system, I can passthrough the NVIDIA RTX 4070 GPU (IOMMU Group 21), RTL8125 2.5Gbe Ethernet Controller (IOMMU Group 20), Intel I211 Gigabit Ethernet Controller (IOMMU Group 19), a USB 3.0 controller (IOMMU Group 24), and the Onboard HD Audio Controller (IOMMU Group 25). $ print-iommu-groups As the main focus of this article is configuring Proxmox VE 8 for passing through the NVIDIA GPU to Proxmox VE 8 virtual machines, the NVIDIA GPU and its audio device must be in their own IOMMU group. Checking for the Kernel Modules to Blacklist for PCI/PCIE Passthrough on Proxmox VE 8 To passthrough a PCI/PCIE device on a Proxmox VE 8 virtual machine (VM), you must make sure that Proxmox VE forces it to use the VFIO kernel module instead of its original kernel module. To find out the kernel module your PCI/PCIE devices are using, you will need to know the vendor ID and device ID of these PCI/PCIE devices. You can find the vendor ID and device ID of the PCI/PCIE devices using the print-iommu-groups command. $ print-iommu-groups For example, the vendor ID and device ID of my NVIDIA RTX 4070 GPU is 10de:2786 and its audio device is 10de:22bc. To find the kernel module that the PCI/PCIE device 10de:2786 (my NVIDIA RTX 4070 GPU) is using, run the lspci command as follows: $ lspci -v -d 10de:2786 As you can see, my NVIDIA RTX 4070 GPU is using the nvidiafb and nouveau kernel modules by default. So, it can’t be passed through to a Proxmox VE 8 virtual machine (VM) at this point. The audio device of my NVIDIA RTX 4070 GPU is using the snd_hda_intel kernel module. So, it can’t be passed through to a Proxmox VE 8 virtual machine at this point either. $ lspci -v -d 10de:22bc So, to passthrough my NVIDIA RTX 4070 GPU and its audio device on a Proxmox VE 8 virtual machine (VM), I must blacklist the nvidiafb, nouveau, and snd_hda_intel kernel modules and configure my NVIDIA RTX 4070 GPU and its audio device to use the vfio-pci kernel module. 
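If you only need the vendor and device IDs of the GPU and its audio function (rather than the full IOMMU listing), a quick alternative to the helper script is filtering the lspci output; a small sketch:
$ # print all NVIDIA functions with their [vendor:device] IDs
$ lspci -nn | grep -i nvidia
The IDs appear in square brackets at the end of each line, in the same 10de:xxxx form used below.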
Blacklisting Required Kernel Modules for PCI/PCIE Passthrough on Proxmox VE 8 To blacklist kernel modules on Proxmox VE 8, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/blacklist.conf To blacklist the nouveau, nvidiafb, and snd_hda_intel kernel modules (to passthrough the NVIDIA GPU), add the following lines in the /etc/modprobe.d/blacklist.conf file: blacklist nouveau blacklist nvidiafb blacklist snd_hda_intel Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/blacklist.conf file. Configuring Your NVIDIA GPU to Use the VFIO Kernel Module on Proxmox VE 8 To configure the PCI/PCIE device (i.e. your NVIDIA GPU) to use the VFIO kernel module, you need to know its vendor ID and device ID. In this case, the vendor ID and device ID of my NVIDIA RTX 4070 GPU and its audio device are 10de:2786 and 10de:22bc. To configure your NVIDIA GPU to use the VFIO kernel module, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/vfio.conf To configure your NVIDIA GPU and its audio device with the <vendor-id>:<device-id> 10de:2786 and 10de:22bc (let’s say) respectively to use the VFIO kernel module, add the following line to the /etc/modprobe.d/vfio.conf file. options vfio-pci ids=10de:2786,10de:22bc Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/vfio.conf file. Now, update the initramfs of Proxmox VE 8 with the following command: $ update-initramfs -u -k all Once initramfs is updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect. Once your Proxmox VE 8 server boots, you should see that your NVIDIA GPU and its audio device (10de:2786 and 10de:22bc in my case) are using the vfio-pci kernel module. Now, your NVIDIA GPU is ready to be passed to a Proxmox VE 8 virtual machine. $ lspci -v -d 10de:2786 $ lspci -v -d 10de:22bc Passthrough the NVIDIA GPU to a Proxmox VE 8 Virtual Machine (VM) Now that your NVIDIA GPU is ready for passthrough on Proxmox VE 8 virtual machines (VMs), you can passthrough your NVIDIA GPU on your desired Proxmox VE 8 virtual machine and install the NVIDIA GPU drivers depending on the operating system that you’re using on that virtual machine as usual. For detailed information on how to passthrough your NVIDIA GPU on a Proxmox VE 8 virtual machine (VM) with different operating systems installed, read one of the following articles: How to Passthrough an NVIDIA GPU to a Windows 11 Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to a Ubuntu 24.04 LTS Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to a LinuxMint 21 Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to a Debian 12 Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to an Elementary OS 8 Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to a Fedora 39+ Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU on an Arch Linux Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU on a Red Hat Enterprise Linux 9 (RHEL 9) Proxmox VE 8 Virtual Machine (VM) Still Having Problems with PCI/PCIE Passthrough on Proxmox VE 8 Virtual Machines (VMs)? 
Even after trying everything listed in this article correctly, if PCI/PCIE passthrough still does not work for you, be sure to try out some of the Proxmox VE PCI/PCIE passthrough tricks and/or workarounds that you can use to get PCI/PCIE passthrough working on your hardware. Conclusion In this article, I have shown you how to configure your Proxmox VE 8 server for PCI/PCIE passthrough so that you can passthrough PCI/PCIE devices (i.e. your NVIDIA GPU) to your Proxmox VE 8 virtual machines (VMs). I have also shown you how to find out the kernel modules that you need to blacklist and how to blacklist them for a successful passthrough of your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to a Proxmox VE 8 virtual machine. Finally, I have shown you how to configure your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to use the VFIO kernel module, which is also an essential step for a successful passthrough of your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to a Proxmox VE 8 virtual machine (VM). References PCI(e) Passthrough – Proxmox VE PCI Passthrough – Proxmox VE The ultimate gaming virtual machine on proxmox – YouTube View the full article
  4. On Proxmox VE, QEMU Guest Agent is installed on the virtual machines (VMs) for the following reasons: To send ACPI commands to Proxmox VE virtual machines to properly shutdown the virtual machines from the Proxmox VE web UI. To freeze/suspend the Proxmox VE virtual machines while taking backup and snapshots to make sure that no files are changed while taking backups/snapshots. To resume suspended Proxmox VE virtual machines correctly. To collect CPU, memory, disk I/O, and network usage information of Proxmox VE virtual machines for graphing the usage information in the Proxmox VE web UI. To perform dynamic memory management on Proxmox VE virtual machines. In this article, I am going to show you how to install QEMU Guest Agent on some of the most popular Linux distributions. Table of Contents How to Enable QEMU Guest Agent for a Proxmox VE Virtual Machine Installing QEMU Guest Agent on Ubuntu/Debian/Linux Mint/Kali Linux/KDE Neon Installing QEMU Guest Agent on Fedora/RHEL/CentOS Stream/Alma Linux/Rocky Linux/Oracle Linux Installing QEMU Guest Agent on OpenSUSE and SUSE Linux Enterprise Server (SLES) Installing QEMU Guest Agent on Arch Linux/Manjaro Linux Verifying If QEMU Guest Agent is Working Correctly on Proxmox VE Virtual Machines Conclusion References How to Enable QEMU Guest Agent for a Proxmox VE Virtual Machine Before installing QEMU Guest Agent on a Proxmox VE Linux virtual machine, you must enable QEMU Guest Agent for the virtual machine. Installing QEMU Guest Agent on Ubuntu/Debian/Linux Mint/Kali Linux/KDE Neon On Ubuntu/Debian and Ubuntu/Debian-based Linux distributions (i.e. Linux Mint, Kali Linux, KDE Neon, Elementary OS, Deepin Linux, Pop OS!), QEMU Guest Agent can be installed with the following commands: $ sudo apt update $ sudo apt install qemu-guest-agent -y QEMU Guest Agent should be installed. Once QEMU Guest Agent is installed on the Proxmox VE virtual machine, reboot the virtual machine for the changes to take effect with the following command: $ sudo reboot Once the Proxmox VE virtual machine boots, check if the QEMU Guest Agent service is working correctly. Installing QEMU Guest Agent on Fedora/RHEL/CentOS Stream/Alma Linux/Rocky Linux/Oracle Linux On Fedora, RHEL, CentOS, and other RHEL-based Linux distributions (i.e. Alma Linux, Rocky Linux, Oracle Linux), QEMU Guest Agent can be installed with the following commands: $ sudo dnf makecache $ sudo dnf install qemu-guest-agent To confirm the installation, press Y and then press <Enter>. QEMU Guest Agent should be installed. Once QEMU Guest Agent is installed on the Proxmox VE virtual machine, reboot the virtual machine for the changes to take effect with the following command: $ sudo reboot Once the Proxmox VE virtual machine boots, check if the QEMU Guest Agent service is working correctly. Installing QEMU Guest Agent on OpenSUSE and SUSE Linux Enterprise Server (SLES) On OpenSUSE Linux and SUSE Linux Enterprise Server (SLES), QEMU Guest Agent can be installed with the following commands: $ sudo zypper refresh $ sudo zypper install qemu-guest-agent To confirm the installation, press Y and then press <Enter>. QEMU Guest Agent should be installed. Once QEMU Guest Agent is installed on the Proxmox VE virtual machine, reboot the virtual machine for the changes to take effect with the following command: $ sudo reboot Once the Proxmox VE virtual machine boots, check if the QEMU Guest Agent service is working correctly. 
Installing QEMU Guest Agent on Arch Linux/Manjaro Linux On Arch Linux, Manjaro Linux, and other Arch Linux-based distributions, QEMU Guest Agent can be installed with the following command: $ sudo pacman -Sy qemu-guest-agent To confirm the installation, press Y and then press <Enter>. QEMU Guest Agent should be installed. Once QEMU Guest Agent is installed on the Proxmox VE virtual machine, reboot the virtual machine for the changes to take effect with the following command: $ sudo reboot Once the Proxmox VE virtual machine boots, check if the QEMU Guest Agent service is working correctly. Verifying If QEMU Guest Agent is Working Correctly on Proxmox VE Virtual Machines To verify whether the QEMU Guest Agent is working correctly, check the status of the qemu-guest-agent service with the following command: $ sudo systemctl status qemu-guest-agent.service If the QEMU Guest Agent is working correctly, the qemu-guest-agent systemd service should be active/running. Some Linux distributions may not activate/enable the qemu-guest-agent systemd service by default. In that case, you can start the qemu-guest-agent service and add it to the system startup with the following commands: $ sudo systemctl start qemu-guest-agent $ sudo systemctl enable qemu-guest-agent You can also check the Summary section of the virtual machine (from the Proxmox VE web UI) where you’ve enabled and installed QEMU Guest Agent to verify whether it’s working. If the QEMU Guest Agent is working correctly, you will see the IP information and other usage stats (i.e. CPU, memory, network, disk I/O) of the virtual machine in the Summary section of the virtual machine. Conclusion In this article, I have discussed the importance of enabling and installing the QEMU Guest Agent on Proxmox VE virtual machines. I have also shown you how to install QEMU Guest Agent on some of the most popular Linux distributions. References Qemu-guest-agent – Proxmox VE View the full article
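Besides checking the service status inside the guest as described above, you can also probe the agent from the Proxmox VE host shell; a hedged sketch, where the VM ID 100 is a placeholder for your own virtual machine:
$ # returns silently (exit code 0) if the guest agent responds
$ qm agent 100 ping
If the command reports that the guest agent is not running, the agent is either not enabled in the VM options or not installed/started inside the guest.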
  5. On Proxmox VE, QEMU Guest Agent is used to: Send ACPI commands to Proxmox VE virtual machines to properly shut down the virtual machines from the Proxmox VE web UI. Freeze/Suspend the Proxmox VE virtual machines while taking backup and snapshots to make sure that no files are changed while taking backups/snapshots. Resume suspended Proxmox VE virtual machines correctly. Collect CPU, memory, disk I/O, and network usage information of Proxmox VE virtual machines for graphing the usage information in the Proxmox VE web UI. Perform dynamic memory management on Proxmox VE virtual machines. For optimal performance and proper Proxmox VE integration, you must enable the QEMU Guest Agent on your Proxmox VE virtual machine and install the QEMU Guest Agent driver on the virtual machine. In this article, I will show you how to enable QEMU Guest Agent on Proxmox VE virtual machines. Table of Contents Enabling QEMU Guest Agent on a Proxmox VE Virtual Machine Installing QEMU Guest Agent on Windows 10/11 Installing QEMU Guest Agent on Linux Conclusion References Enabling QEMU Guest Agent on a Proxmox VE Virtual Machine To enable QEMU Guest Agent on a Proxmox VE virtual machine, navigate to the Options section of the virtual machine[1] and double-click (LMB) on the QEMU Guest Agent option[2]. Tick Use QEMU Guest Agent[1] and click on OK to save the changes[2]. QEMU Guest Agent should be enabled for the Proxmox VE virtual machine. Installing QEMU Guest Agent on Windows 10/11 Once you’ve enabled the QEMU Guest Agent on your Windows 10/11 Proxmox VE virtual machine, make sure to install the QEMU Guest Agent on the Windows 10/11 virtual machine for QEMU Guest Agent to work. Installing QEMU Guest Agent on Linux Once you’ve enabled the QEMU Guest Agent on your Linux Proxmox VE virtual machine, make sure to install the QEMU Guest Agent on the Linux virtual machine for QEMU Guest Agent to work. Conclusion In this article, I have shown you how to enable QEMU Guest Agent on Proxmox VE virtual machines. I have also linked the necessary articles to assist you in installing the QEMU Guest Agent drivers on Windows 10/11 and popular Linux distributions, which is essential for the QEMU Guest Agent to work. References Qemu-guest-agent – Proxmox VE View the full article
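The same option can also be toggled from the Proxmox VE host shell instead of the web UI; a minimal sketch, with the VM ID 100 as a placeholder:
$ # enable the QEMU Guest Agent option for VM 100
$ qm set 100 --agent enabled=1
As with the GUI method, the setting only takes effect after the virtual machine is powered off and started again.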
  6. For optimal performance and the best Proxmox VE integration, installing the VirtIO drivers and QEMU Guest Agent on your Windows 10/11 Proxmox VE virtual machine is very important. In this article, I am going to show you how to download the latest stable version of the VirtIO drivers ISO image for Windows and install the VirtIO drivers and QEMU Guest Agent on your Windows 10/11 Proxmox VE virtual machine. Table of Contents Downloading the VirtIO Drivers ISO Image for Windows Mounting the VirtIO Drivers ISO Image on Windows 10/11 Installing VirtIO Drivers and QEMU Guest Agent on Windows 10/11 Conclusion References Downloading the VirtIO Drivers ISO Image for Windows To download the latest stable version of the VirtIO drivers ISO image for Windows operating systems, click on the VirtIO drivers ISO image download link from your favorite web browser. Your browser should start downloading the latest stable version of the VirtIO drivers ISO image for Windows. It will take a while to complete. At this point, the VirtIO drivers ISO image should be downloaded. Mounting the VirtIO Drivers ISO Image on Windows 10/11 To mount the VirtIO drivers ISO image on Windows 10/11, right-click (RMB) on the VirtIO drivers ISO image file and click on Mount. The VirtIO drivers ISO image should be mounted and you should be able to access all the files. Installing VirtIO Drivers and QEMU Guest Agent on Windows 10/11 To install the VirtIO drivers and QEMU guest agent on your Proxmox VE Windows 10/11 virtual machine, double-click (LMB) on the virtio-win-guest-tools installer file from the mounted VirtIO drivers ISO image. The VirtIO Guest Tools installer window should be displayed. Select I agree to the license terms and conditions[1] and click on Install[2]. Click on Yes. Click on Next. Select I accept the terms in the License Agreement[1] and click on Next[2]. Click on Next. Click on Install. VirtIO drivers are being installed. It will take a while to complete. Once the VirtIO drivers are installed, click on Finish. The QEMU Guest Agent installation should start right away. It will take a few seconds to complete. Once the installation is complete, click on Close. At this point, the VirtIO drivers and QEMU Guest Agent should be installed on your Windows 10/11 Proxmox VE virtual machine. Conclusion In this article, I have shown you how to download the latest stable version of the VirtIO drivers ISO image for Windows. I have also shown you how to mount the VirtIO drivers ISO image and install the VirtIO drivers and the QEMU Guest Agent on a Windows 10/11 Proxmox VE virtual machine. References Windows VirtIO Drivers – Proxmox VE Official Latest Stable VirtIO Drivers Download Page View the full article
  7. In this article, I am going to show you how to mount a USB thumb drive or a USB HDD/SSD on your Proxmox VE server. Table of Contents: Finding the USB Thumb Drive/HDD/SSD to Mount on Proxmox VE Creating a Mount Point for the USB Storage Device on Proxmox VE Mounting the USB Storage Device on Proxmox VE Confirming the USB Storage Device is Mounted on Proxmox VE Conclusion Finding the USB Thumb Drive/HDD/SSD to Mount on Proxmox VE: First, insert the USB thumb drive or USB HDD/SSD into your Proxmox VE server and run the command below to find the device path of the USB storage device. $ lsblk -p In this case, my 32GB USB thumb drive has the device path /dev/sdd and it has a partition /dev/sdd1. You will be mounting the partition of your USB storage device on your Proxmox VE server. To learn more about the partition /dev/sdd1 (let’s say) of the USB storage device on your Proxmox VE server, run the blkid command as follows: $ blkid /dev/sdd1 As you can see, the partition /dev/sdd1 has the filesystem label backup[1] and is formatted with the NTFS filesystem[2]. Creating a Mount Point for the USB Storage Device on Proxmox VE: You can create a mount point /mnt/usb/backup (let’s say) for the USB storage device with the mkdir command as follows: $ mkdir -pv /mnt/usb/backup Mounting the USB Storage Device on Proxmox VE: To mount the partition /dev/sdd1 (let’s say) of the USB storage device on the mount point /mnt/usb/backup (let’s say), run the following command: $ mount /dev/sdd1 /mnt/usb/backup Confirming the USB Storage Device is Mounted on Proxmox VE: To confirm whether the partition /dev/sdd1 (let’s say) of the USB storage device is mounted, run the following command: $ df -h /dev/sdd1 As you can see, the partition /dev/sdd1 is mounted[1] in the path /mnt/usb/backup[2]. The usage information of the partition is also displayed[3]. Once the partition is mounted, you can access the files stored on the USB storage device from the Proxmox VE shell. $ ls -lh /mnt/usb/backup Conclusion: In this article, I have shown you how to find the device path of a USB thumb drive or USB HDD/SSD on Proxmox VE. I have also shown you how to create a mount point, mount the USB storage device on the mount point, and access the files stored on the USB storage device from the Proxmox VE shell. View the full article
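Note that a mount made with the mount command, as described above, does not survive a reboot of the Proxmox VE server. If you want the USB storage device mounted automatically at boot, one common approach is an /etc/fstab entry; a hedged sketch, assuming the NTFS example above and using the UUID reported by blkid (the UUID shown here is a placeholder, and the filesystem type should match your partition; NTFS support may additionally require the ntfs-3g package):
UUID=0123-4567-89AB-CDEF /mnt/usb/backup ntfs defaults,nofail 0 0
The nofail option keeps the server booting normally even if the USB drive happens to be unplugged.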
  8. You can add/mount an SMB/CIFS share from your Windows OS or NAS device on Proxmox VE as storage for storing ISO images, container images, VM disk images, backups, etc. In this article, I am going to show you how to add a Windows SMB/CIFS share on Proxmox VE as storage. Table of Contents: Adding an SMB/CIFS Share as Storage on Proxmox VE Accessing the SMB/CIFS Storage on Proxmox VE Conclusion Adding an SMB/CIFS Share as Storage on Proxmox VE: To add an SMB/CIFS share on Proxmox VE as storage, navigate to Datacenter > Storage and click on Add > SMB/CIFS as marked in the screenshot below. Type in an ID/name for the SMB/CIFS storage[1], the domain name or IP address of the SMB/CIFS server[2], and the login username[3] and password[4] of the SMB/CIFS server. If all the details are correct, you should be able to select the SMB/CIFS share you want to add to Proxmox VE from the Share dropdown menu[5]. You can also add a subdirectory of the SMB/CIFS share on Proxmox VE. To do that, type in a subdirectory path in the Subdirectory section[6]. From the Content dropdown menu, you can select the type of data you want to store on the SMB/CIFS share. Disk image: If selected, the disks of the Proxmox VE virtual machines can be stored on this storage. ISO image: If selected, the ISO installation images of different operating systems can be stored on this storage. Container template: If selected, the LXC container template files can be stored on this storage. VZDump backup file: If selected, the Proxmox VE virtual machine and container backups can be stored on this storage. Container: If selected, the disks of the Proxmox VE LXC containers can be stored on this storage. Snippets: If selected, you can store Proxmox VE snippets on this storage. Once you’re done, click on Add. A new SMB/CIFS storage should be added to Proxmox VE[1]. You can also find the mount path of the SMB/CIFS share in the Datacenter > Storage section[2]. The SMB/CIFS storage should also be displayed in the Proxmox VE server tree[3]. Accessing the SMB/CIFS Storage on Proxmox VE: You can access only the Proxmox VE contents stored on the SMB/CIFS storage from the Proxmox VE dashboard. In the Summary section of the SMB/CIFS storage, you will see usage information of the SMB/CIFS storage. For each selected content, you will see respective sections in the SMB/CIFS storage. For example, for ISO image content type, I have a section ISO Images on my SMB/CIFS storage nas-datastore that shows all the ISO installation images that I have stored on the SMB/CIFS storage. You can access all the files of the SMB/CIFS storage on your Proxmox VE server from the command line. In this case, the nas-datastore SMB/CIFS storage is mounted in the path /mnt/pve/nas-datastore and all the files of the SMB/CIFS storage are available in that mount path. Conclusion: In this article, I have shown you how to add an SMB/CIFS share as storage on Proxmox VE. I have also shown you how to access the SMB/CIFS storage on Proxmox VE. View the full article
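The same storage can also be added from the Proxmox VE shell with the pvesm command instead of the web UI; a rough sketch, where the storage ID, server address, share name, credentials, and content types are all placeholders to replace with your own values:
$ pvesm add cifs nas-datastore --server 192.168.0.10 --share proxmox \
    --username user --password 'secret' --content iso,backup
After that, the new storage appears under Datacenter > Storage just like one added through the GUI.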
  9. Usually, you don’t need a GPU on your Proxmox VE server to run the virtual machines. But if you want to enable the 3D acceleration (using VirtIO-GL or VirGL) on your Proxmox VE virtual machines, or passthrough a GPU on a Proxmox VE container for AI/CUDA acceleration, you will need a GPU and the required GPU drivers installed on your Proxmox VE server. In this article, we will show you how to install the latest version of the official NVIDIA GPU drivers on Proxmox VE 8 so that you can use it for VirtIO-GL/VirGL 3D acceleration on your Proxmox VE virtual machines or passthrough your NVIDIA GPU on Proxmox VE containers for AI/CUDA acceleration. Table of Contents: Checking If an NVIDIA GPU Is Installed on Your Proxmox VE Server Enabling the Proxmox VE Community Package Repositories (Optional for Enterprise Users) Updating the Proxmox VE Package Database Cache Installing the Proxmox VE Kernel Headers on Proxmox VE Installing the Required Dependencies for NVIDIA GPU Drivers on Proxmox VE Downloading the Latest Version of NVIDIA GPU Drivers for Proxmox VE Installing the NVIDIA GPU Drivers on Proxmox VE Checking If the NVIDIA GPU Drivers Are Installed Correctly on Proxmox VE Conclusion Checking If an NVIDIA GPU Is Installed on Your Proxmox VE Server To install the NVIDIA GPU drivers on your Proxmox VE server, you must have NVIDIA GPU hardware installed on your server. If you need any assistance in verifying whether you have NVIDIA GPU hardware available/installed on your server, read this article. Enabling the Proxmox VE Community Package Repositories (Optional for Enterprise Users) If you don’t have a Proxmox VE enterprise subscription, you must enable the Proxmox VE community package repositories to install the required header files to compile the NVIDIA GPU drivers for your Proxmox VE server. Updating the Proxmox VE Package Database Cache Once you have the Proxmox VE community package repositories enabled, navigate to pve > Shell from the Proxmox VE dashboard and run the following command to update the Proxmox VE package database cache: $ apt update Installing Proxmox VE Kernel Headers on Proxmox VE The Proxmox VE kernel headers are required to compile the NVIDIA GPU drivers kernel modules. To install the Proxmox VE kernel headers on your Proxmox VE server, run the following command: $ apt install -y pve-headers-$(uname -r) The Proxmox VE kernel headers should be installed on your Proxmox VE server. Installing the Required Dependencies for NVIDIA GPU Drivers on Proxmox VE To build the NVIDIA GPU drivers kernel modules, you need to install some dependency packages on your Proxmox VE server as well. To install all the required dependency packages on your Proxmox VE server, run the following command: $ apt install build-essential pkg-config xorg xorg-dev libglvnd0 libglvnd-dev To confirm the installation, press “Y” and then press <Enter>. The required dependency packages are being downloaded from the internet. It takes a while to complete. The required dependency packages are being installed. It takes a while to complete. At this point, the required dependency packages should be installed on your Proxmox VE server. Downloading the Latest Version of NVIDIA GPU Drivers for Proxmox VE To download the latest version of the official NVIDIA GPU drivers installer file for Proxmox VE, visit the NVIDIA Drivers Downloads page from any web browser. Once the page loads, select your GPU from the “Product Type”, “Product Series”, and “Product” dropdown menus[1]. 
Select “Linux 64-bit” as the “Operating System”[2], “Production Branch” as the “Download Type”[3], and click on “Search”[4]. Click on “Download”. Right-click (RMB) on “Agree & Download” and click on “Copy Link” to copy the download link of the NVIDIA GPU Drivers installer file. Now, go back to the Proxmox VE shell and type in the “wget” command [1], press <Space Bar>, right-click (RMB) on the Proxmox VE shell, and click on “Paste”[2] to paste the NVIDIA GPU drivers download link. Once the download link is pasted on the Proxmox VE shell, press <Enter> to run the NVIDIA GPU drivers download command: $ wget https://us.download.nvidia.com/XFree86/Linux-x86_64/535.146.02/NVIDIA-Linux-x86_64-535.146.02.run The NVIDIA GPU drivers installation file is being downloaded. It takes a while to complete. At this point, the NVIDIA GPU drivers installer file should be downloaded. You can find the NVIDIA GPU drivers installer file (NVIDIA-Linux-x86_64-535.146.02.run in our case) in the home directory of your Proxmox VE server. $ ls -lh Installing the NVIDIA GPU Drivers on Proxmox VE Before you can run the NVIDIA GPU drivers installer file on your Proxmox VE server, add executable permission to the NVIDIA GPU drivers installer file as follows: $ chmod +x NVIDIA-Linux-x86_64-535.146.02.run Now, run the NVIDIA GPU drivers installer file as follows: $ ./NVIDIA-Linux-x86_64-535.146.02.run The NVIDIA GPU drivers are now being installed on your Proxmox VE server. It takes a while to compile all the NVIDIA GPU drivers kernel modules for Proxmox VE server. When you’re asked to install the NVIDIA 32-bit compatibility libraries, select “Yes” and press <Enter>. The NVIDIA GPU drivers installation should continue. Once you see the following prompt, select “Yes” and press <Enter>. Press <Enter>. The NVIDIA GPU drivers should be installed on your Proxmox VE server. For the changes to take effect, restart your Proxmox VE server with the following command: $ reboot Checking If the NVIDIA GPU Drivers Are Installed Correctly on Proxmox VE To check whether the NVIDIA GPU drivers are installed correctly on your Proxmox VE server, run the following command from your Proxmox VE shell: $ lsmod | grep nvidia If the NVIDIA GPU drivers are installed correctly on your Proxmox VE server, the NVIDIA kernel modules should be loaded as you can see in the following screenshot: You can also use the “nvidia-smi” command to verify whether the NVIDIA GPU drivers are working correctly. As you can see, the “nvidia-smi” command shows that we have the NVIDIA GeForce RTX 4070 (12GB)[1][2] version installed on our Proxmox VE server and we are using the NVIDIA GPU drivers version 535.146.02[3]. $ nvidia-smi Conclusion In this article, we showed you how to download and install the latest version of the official NVIDIA GPU drivers on your Proxmox VE server. The NVIDIA GPU drivers must be installed on your Proxmox VE server if you want to use your NVIDIA GPU to enable the VirtIO-GL/VirGL 3D acceleration on Proxmox VE virtual machines or passthrough the NVIDIA GPU to Proxmox VE LXC containers for AI/CUDA acceleration. View the full article
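The article defers the GPU detection step to a separate guide, but as a quick sketch you can usually confirm from the Proxmox VE shell that the server sees an NVIDIA GPU with lspci (the exact output depends on your hardware):
$ lspci | grep -i nvidia
If an NVIDIA GPU is installed, it should show up in the output, usually along with its HDMI/DP audio function.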
  10. If you have an NVIDIA GPU installed on your Proxmox VE server, you can pass it to a Proxmox VE LXC container and use it in the container for CUDA/AI acceleration (e.g., TensorFlow, PyTorch). You can also use the NVIDIA GPU for media transcoding, video streaming, etc. in a Proxmox VE LXC container with Plex Media Server or NextCloud installed (for example). In this article, we will show you how to passthrough an NVIDIA GPU to a Proxmox VE 8 LXC container so that you can use it for CUDA/AI acceleration, media transcoding, or other tasks that require an NVIDIA GPU. Table of Contents: Installing the NVIDIA GPU Drivers on Proxmox VE 8 Making Sure the NVIDIA GPU Kernel Modules Are Loaded in Proxmox VE 8 Automatically Creating a Proxmox VE 8 LXC Container for NVIDIA GPU Passthrough Configuring an LXC Container for NVIDIA GPU Passthrough on Proxmox VE 8 Installing the NVIDIA GPU Drivers on the Proxmox VE 8 LXC Container Installing NVIDIA CUDA and cuDNN on the Proxmox VE 8 LXC Container Checking If the NVIDIA CUDA Acceleration Is Working on the Proxmox VE 8 LXC Container Conclusion References Installing the NVIDIA GPU Drivers on Proxmox VE 8 To passthrough an NVIDIA GPU to a Proxmox VE LXC container, you must have the NVIDIA GPU drivers installed on your Proxmox VE 8 server. If you need any assistance in installing the latest version of the official NVIDIA GPU drivers on your Proxmox VE 8 server, read this article. Making Sure the NVIDIA GPU Kernel Modules Are Loaded in Proxmox VE 8 Automatically Once you have the NVIDIA GPU drivers installed on your Proxmox VE 8 server, you must make sure that the NVIDIA GPU kernel modules are loaded automatically at boot time. First, create a new file like “nvidia.conf” in the “/etc/modules-load.d/” directory and open it with the nano text editor. $ nano /etc/modules-load.d/nvidia.conf Add the following lines and press <Ctrl> + X followed by “Y” and <Enter> to save the “nvidia.conf” file: nvidia nvidia_uvm For the changes to take effect, update the “initramfs” file with the following command: $ update-initramfs -u For some reason, Proxmox VE 8 does not create the required NVIDIA GPU device files in the “/dev/” directory. Without those device files, the Proxmox VE 8 LXC containers won’t be able to use the NVIDIA GPU. To make sure that Proxmox VE 8 creates the NVIDIA GPU device files in the “/dev/” directory at boot time, create a udev rules file “70-nvidia.rules” in the “/etc/udev/rules.d/” directory and open it with the nano text editor as follows: $ nano /etc/udev/rules.d/70-nvidia.rules Type in the following lines in the “70-nvidia.rules” file and press <Ctrl> + X followed by “Y” and <Enter> to save the file: # create necessary NVIDIA device files in /dev/* KERNEL=="nvidia", RUN+="/bin/bash -c '/usr/bin/nvidia-smi -L && /bin/chmod 0666 /dev/nvidia*'" KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -u && /bin/chmod 0666 /dev/nvidia-uvm*'" For the changes to take effect, reboot your Proxmox VE 8 server as follows: $ reboot Once your Proxmox VE 8 server boots, the NVIDIA kernel modules should be loaded automatically as you can see in the following screenshot: $ lsmod | grep nvidia The required NVIDIA device files should also be populated in the “/dev” directory of your Proxmox VE 8 server. Note the CGroup IDs (device major numbers) of the NVIDIA device files. You must allow those CGroup IDs on the LXC container where you want to passthrough the NVIDIA GPUs from your Proxmox VE 8 server. In our case, the CGroup IDs are 195, 237, and 226. 
$ ls -lh /dev/nvidia* $ ls -lh /dev/dri Creating a Proxmox VE 8 LXC Container for NVIDIA GPU Passthrough We used an Ubuntu 22.04 LTS Proxmox VE 8 LXC container in this article for the demonstration since the NVIDIA CUDA and NVIDIA cuDNN libraries are easy to install on Ubuntu 22.04 LTS from the Ubuntu package repositories and it’s easier to test if the NVIDIA CUDA acceleration is working. If you want, you can use other Linux distributions as well. In that case, the NVIDIA CUDA and NVIDIA cuDNN installation commands will vary. Make sure to follow the NVIDIA CUDA and NVIDIA cuDNN installation instructions for your desired Linux distribution. If you need any assistance in creating a Proxmox VE 8 LXC container, read this article. Configuring an LXC Container for NVIDIA GPU Passthrough on Proxmox VE 8 To configure an LXC container (container 102, let’s say) for NVIDIA GPU passthrough, open the LXC container configuration file from the Proxmox VE shell with the nano text editor as follows: $ nano /etc/pve/lxc/102.conf Type in the following lines at the end of the LXC container configuration file: lxc.cgroup.devices.allow: c 195:* rwm lxc.cgroup.devices.allow: c 237:* rwm lxc.cgroup.devices.allow: c 226:* rwm lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir Make sure to replace the CGroup IDs in the “lxc.cgroup.devices.allow” lines of the LXC container configuration file. Once you’re done, press <Ctrl> + X followed by “Y” and <Enter> to save the LXC container configuration file. Now, start the LXC container from the Proxmox VE 8 dashboard. If the NVIDIA GPU passthrough is successful, the LXC container should start without any error and you should see the NVIDIA device files in the “/dev” directory of the container. $ ls -lh /dev/nvidia* $ ls -lh /dev/dri Installing the NVIDIA GPU Drivers on the Proxmox VE 8 LXC Container NOTE: We are using an Ubuntu 22.04 LTS LXC container on our Proxmox VE 8 server for demonstration. If you’re using another Linux distribution on the LXC container, your commands will slightly vary from ours. So, make sure to adjust the commands depending on the Linux distribution you’re using on the container. You can find the NVIDIA GPU drivers version that you installed on your Proxmox VE 8 server with the “nvidia-smi” command. As you can see, we have the NVIDIA GPU drivers version 535.146.02 installed on our Proxmox VE 8 server. So, we must install the NVIDIA GPU drivers version 535.146.02 on our LXC container as well. $ nvidia-smi First, install CURL on the LXC container as follows: $ apt update && apt install curl -y CURL should be installed on the LXC container. To install the NVIDIA GPU drivers version 535.146.02 (let’s say), export the NVIDIA_VERSION environment variable and run the CURL command (on the container) to download the required version of the NVIDIA GPU drivers installer file. 
$ export NVIDIA_VERSION="535.146.02" $ curl -O "https://us.download.nvidia.com/XFree86/Linux-x86_64/${NVIDIA_VERSION}/NVIDIA-Linux-x86_64-${NVIDIA_VERSION}.run" The correct version of the NVIDIA GPU drivers installer file should be downloaded on the LXC container as you can see in the following screenshot: Now, add an executable permission to the NVIDIA GPU drivers installer file on the container as follows: $ chmod +x NVIDIA-Linux-x86_64-535.146.02.run To install the NVIDIA GPU drivers on the container, run the NVIDIA GPU drivers installer file with the “–no-kernel-module” option as follows: $ ./NVIDIA-Linux-x86_64-535.146.02.run --no-kernel-module Once you see this option, select “OK” and press <Enter>. Select “OK” and press <Enter>. Select “Yes” and press <Enter>. Select “OK” and press <Enter>. The NVIDIA GPU drivers are being installed on the LXC container. It takes a few seconds to complete. Once you see this prompt, select “Yes” and press <Enter>. Select “OK” and press <Enter>. The NVIDIA GPU drivers should be installed on the LXC container. To confirm whether the NVIDIA GPU drivers are installed and working, run the “nvidia-smi” command on the LXC container. As you can see, the NVIDIA GPU driver version 535.146.02 (the same version as installed on the Proxmox VE 8 server) is installed on the LXC container and it detected our NVIDIA RTX 4070 GPU correctly. $ nvidia-smi Installing NVIDIA CUDA and cuDNN on the Proxmox VE 8 LXC Container NOTE: We are using an Ubuntu 22.04 LTS LXC container on our Proxmox VE 8 server for demonstration. If you’re using another Linux distribution on the LXC container, your commands will slightly vary from ours. So, make sure to adjust the commands depending on the Linux distribution you’re using on the container. To install NVIDIA CUDA and cuDNN on the Ubuntu 22.04 LTS Proxmox VE 8 container, run the following command on the container: $ apt install build-essential nvidia-cuda-toolkit nvidia-cudnn To confirm the installation, press “Y” and then press <Enter>. The required packages are being downloaded and installed. It takes a while to complete. Once you see this window, select “OK” and press <Enter>. Select “I Agree” and press <Enter>. The installation should continue. The installer is downloading the NVIDIA cuDNN library archive from NVIDIA. It’s a big file, so it takes a long time to complete. Once the NVIDIA cuDNN library archive is downloaded, the installation should continue as usual. At this point, NVIDIA CUDA and cuDNN should be installed on the Ubuntu 22.04 LTS Proxmox VE 8 LXC container. Checking If the NVIDIA CUDA Acceleration Is Working on the Proxmox VE 8 LXC Container To verify whether NVIDIA CUDA is installed correctly, check if the “nvcc” command is available on the Proxmox VE 8 container as follows: $ nvcc --version As you can see, we have NVIDIA CUDA 11.5 installed on our Proxmox VE 8 container. Now, let’s write, compile, and run a simple CUDA C program and see if everything is working as expected. First, create a “~/code” project directory on the Proxmox VE 8 container to keep the files organized. 
$ mkdir ~/code Navigate to the “~/code” project directory as follows: $ cd ~/code Create a new file like “hello.cu” in the “~/code” directory of the Proxmox VE 8 container and open it with the nano text editor: $ nano hello.cu Type in the following lines of code in the “hello.cu” file:
#include <stdio.h>
__global__ void sayHello() {
    printf("Hello world from the GPU!\n");
}
int main() {
    printf("Hello world from the CPU!\n");
    sayHello<<<1, 1>>>();
    cudaDeviceSynchronize();
    return 0;
}
Once you’re done, press <Ctrl> + X followed by “Y” and <Enter> to save the “hello.cu” file. To compile the “hello.cu” CUDA program on the Proxmox VE 8 container, run the following command: $ nvcc hello.cu -o hello Now, you can run the “hello” CUDA program on the Proxmox VE 8 container as follows: $ ./hello If the Proxmox VE 8 container can use the NVIDIA GPU for NVIDIA CUDA acceleration, the program will print two lines as shown in the following screenshot. If the NVIDIA GPU is not accessible from the Proxmox VE 8 container, the program will print only the first line, which is “Hello world from the CPU!”, not the second line. Conclusion In this article, we showed you how to passthrough an NVIDIA GPU from the Proxmox VE 8 host to a Proxmox VE 8 LXC container. We also showed you how to install the same version of the NVIDIA GPU drivers on the Proxmox VE 8 container as on the Proxmox VE host. Finally, we showed you how to install NVIDIA CUDA and NVIDIA cuDNN on an Ubuntu 22.04 LTS Proxmox VE 8 container and compile and run a simple NVIDIA CUDA program on the Proxmox VE 8 container. References: Journey to Deep Learning: Nvidia GPU passthrough to LXC Container | by Mamy André-Ratsimbazafy | Medium How to Install CUDA on Ubuntu 22.04 LTS View the full article
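If you plan to use the container for AI workloads, a quick way to double-check that a framework can actually see the GPU is to test it from PyTorch. This is only a sketch and assumes you install PyTorch in the container yourself (pip and the PyTorch wheel are not covered in the article; the download is large):
$ apt install -y python3-pip
$ pip3 install torch
$ python3 -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"
If the GPU passthrough and the drivers are working, the last command should print True followed by the name of your NVIDIA GPU.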
  11. Virt-Viewer is a SPICE client that is used to access the KVM/QEMU/libvirt virtual machines remotely. Proxmox VE is built on the same KVM/QEMU virtualization technology. So, you can use Virt-Viewer to remotely access the Proxmox VE virtual machines as well. Virt-Viewer can also be used to access the Proxmox VE LXC containers remotely via SPICE. In this article, we will show you how to install Virt-Viewer on Windows 10/11, Ubuntu, Debian, Linux Mint, Kali Linux, and Fedora operating systems and access the Proxmox VE virtual machines and LXC containers remotely via the SPICE protocol using Virt-Viewer. Table of Contents: Installing Virt-Viewer on Windows 10/11 Installing Virt-Viewer on Ubuntu/Debian/Linux Mint/Kali Linux Installing Virt-Viewer on Fedora Configuring the SPICE/QXL Display for the Proxmox VE Virtual Machines and LXC Containers Accessing the Proxmox VE Virtual Machines Remotely via SPICE Protocol Using Virt-Viewer Accessing the Proxmox VE LXC Containers Remotely via SPICE Protocol Using Virt-Viewer Sharing the Remote Access to Proxmox VE Virtual Machines and LXC Containers with Others Conclusion Installing Virt-Viewer on Windows 10/11 To download Virt-Viewer for Windows 10/11, visit the official website of Virtual Machine Manager from your favorite web browser. Once the page loads, click on “Win x64 MSI” from the “virt-viewer 11.0” section as marked in the following screenshot: Your browser should start downloading the Virt-Viewer installer file. It takes a while to complete. At this point, the Virt-Viewer installer file for Windows 10/11 should be downloaded. To install Virt-Viewer on your Windows 10/11 system, double-click (LMB) on the Virt-Viewer installer file (that you just downloaded). The Virt-Viewer installer file should be found in the “Downloads” folder of your Windows 10/11 system. Click on “Yes”. Virt-Viewer is being installed on your Windows 10/11 system. It takes a while to complete the installation. Installing Virt-Viewer on Ubuntu/Debian/Linux Mint/Kali Linux Virt-Viewer is available in the official package repository of Ubuntu/Debian/Linux Mint/Kali Linux. So, you can easily install it on your computer if you’re using Ubuntu/Debian or any Ubuntu/Debian-based operating systems (e.g., Linux Mint, Kali Linux). First, update the APT package database cache with the following command: $ sudo apt update To install Virt-Viewer on Ubuntu/Debian/Linux Mint/Kali Linux, run the following command: $ sudo apt install virt-viewer To confirm the installation, press “Y” and then press <Enter>. Virt-Viewer is being installed. It takes a while to complete. Virt-Viewer should now be installed. Installing Virt-Viewer on Fedora Virt-Viewer can be easily installed from the official package repository of Fedora. First, update the DNF package database cache with the following command: $ sudo dnf makecache To install Virt-Viewer on Fedora, run the following command: $ sudo dnf install virt-viewer To confirm the installation, press “Y” and then press <Enter>. You might be asked to confirm the GPG key of the official Fedora package repository. To do that, press “Y” and then press <Enter>. Virt-Viewer should now be installed on your Fedora system. Configuring the SPICE/QXL Display for the Proxmox VE Virtual Machines and LXC Containers SPICE is enabled for LXC containers by default on Proxmox VE. So, you don’t need to do anything to access the Proxmox VE LXC containers with Virt-Viewer via the SPICE protocol. SPICE is not enabled for Proxmox VE virtual machines by default. 
To access the Proxmox VE virtual machines with Virt-Viewer via the SPICE protocol, you must configure SPICE for the display of the virtual machines that you want to access. To configure the SPICE access for a Proxmox VE virtual machine, navigate to the “Hardware” section of the virtual machine from the Proxmox VE web management interface[1]. Double-click (LMB) on the “Display” hardware[2], select SPICE from the “Graphic card” dropdown menu[3], and click on “OK”[4]. SPICE should be enabled for your Proxmox VE virtual machine. Now, you can access the Proxmox VE virtual machine with Virt-Viewer via the SPICE protocol. Accessing the Proxmox VE Virtual Machines Remotely via SPICE Protocol Using Virt-Viewer To access a Proxmox VE virtual machine remotely via the SPICE protocol using Virt-Viewer, open the virtual machine in the Proxmox VE server and click on Console > SPICE from the top-right corner of the Proxmox VE dashboard. A SPICE connection file for the virtual machine should be downloaded. To access the virtual machine with Virt-Viewer, click on the downloaded SPICE connection file. The Proxmox VE virtual machine should be opened with Virt-Viewer via the SPICE protocol. Figure 1: Ubuntu 22.04 LTS Proxmox VE virtual machine remotely accessed with Virt-Viewer from Windows 10 Figure 2: Ubuntu 22.04 LTS Proxmox VE virtual machine remotely accessed with Virt-Viewer from Fedora Accessing the Proxmox VE LXC Containers Remotely via SPICE Protocol Using Virt-Viewer You can access a Proxmox VE LXC container with Virt-Viewer in the same way as you access a Proxmox VE virtual machine. To access a Proxmox VE LXC container remotely via the SPICE protocol using Virt-Viewer, open the LXC container in the Proxmox VE server and click on Console > SPICE from the top-right corner of the Proxmox VE dashboard. A SPICE connection file for the LXC container should be downloaded. To access the LXC container with Virt-Viewer, click on the downloaded SPICE connection file. The Proxmox VE LXC container should be opened with Virt-Viewer via the SPICE protocol. Sharing the Remote Access to Proxmox VE Virtual Machines and LXC Containers with Others If you want to share a Proxmox VE virtual machine with someone, all you have to do is share the SPICE connection file (ending in the “.vv” file extension) of the virtual machine that you downloaded from the Proxmox VE web management interface. Anyone can access the Proxmox VE virtual machine only once using the SPICE connection file. NOTE: The person with whom you shared the SPICE connection file must be able to reach your Proxmox VE server to access the Proxmox VE virtual machine. If your Proxmox VE server has a private IP address, only the people connected to your home network will be able to connect to the shared virtual machines. If your Proxmox VE server has a public IP address, anyone can connect to the shared virtual machines. Conclusion In this article, we showed you how to install Virt-Viewer on Windows 10/11, Ubuntu, Debian, Linux Mint, Kali Linux, and Fedora. We also showed you how to access the Proxmox VE virtual machines and LXC containers remotely with Virt-Viewer via the SPICE protocol. We showed you how to share access to Proxmox VE virtual machines and LXC containers with other people as well. View the full article
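On Linux, you can also open the downloaded SPICE connection file from a terminal instead of clicking on it. A minimal sketch, assuming the connection file was saved as pve-spice.vv in your Downloads folder (the actual file name may differ on your system):
$ remote-viewer ~/Downloads/pve-spice.vv
The remote-viewer program is installed as part of the virt-viewer package, so this is equivalent to opening the connection file from your file manager.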
  12. VirtIO-GL/VirGL is a feature of the KVM/QEMU hypervisor that is used to provide the virtual machines with 3D acceleration capabilities. The 3D acceleration allows the virtual machines (with a graphical user interface installed) to use the GPU of the host to make the graphical user interface more responsive and capable of running 3D productivity software or games. Proxmox VE uses the KVM/QEMU technology for its virtual machines. Starting with Proxmox VE 8, you can use the VirtIO-GL/VirGL 3D acceleration on the Linux virtual machines for a better graphical user experience. In this article, we will show you how to enable the VirtIO-GL/VirGL 3D acceleration on Proxmox VE 8 virtual machines. Table of Contents: Installing the GPU Drivers on Proxmox VE 8 Installing the Required Libraries for VirtIO-GL/VirGL 3D Acceleration on Proxmox VE 8 Enabling the VirtIO-GL/VirGL GPU 3D Acceleration on a Proxmox VE 8 Virtual Machine Testing If the VirtIO-GL/VirGL GPU 3D Acceleration Is Working on Proxmox VE Virtual Machines Conclusion Installing the GPU Drivers on Proxmox VE 8 For the VirtIO-GL/VirGL 3D acceleration to work on Proxmox VE 8, you must have the following: a GPU installed on your Proxmox VE 8 server and the GPU drivers installed on your Proxmox VE 8 server. The Intel GPU drivers should be installed on your Proxmox VE 8 server by default if you have an Intel iGPU (integrated GPU) available. You don’t need any manual intervention. If you have an NVIDIA GPU on your Proxmox VE 8 server and you want to use it for VirtIO-GL/VirGL, you must download the NVIDIA GPU drivers manually and install them yourself on your Proxmox VE 8 server. If you need any assistance in installing the NVIDIA GPU drivers on your Proxmox VE 8 server, read this article. If you have an AMD GPU on your Proxmox VE 8 server, you may also need to install the required GPU drivers on your Proxmox VE 8 server. We don’t have an AMD GPU. So, we haven’t tested it. But if you’re using an AMD APU (AMD processor with integrated GPU), the GPU drivers should be installed by default as far as we know. We will update this article if we get a chance to test it. Installing the Required Libraries for VirtIO-GL/VirGL 3D Acceleration on Proxmox VE 8 For the VirtIO-GL/VirGL 3D acceleration to work on Proxmox VE 8 virtual machines, you must have the libEGL and libGL libraries installed on your Proxmox VE 8 server. The libEGL and libGL libraries are available in the official package repositories of Proxmox VE 8. So, they are very easy to install. First, navigate to Datacenter > pve > Shell to access the Proxmox VE shell of your Proxmox VE 8 server and run the following command to update the Proxmox VE package database cache: $ apt update To install the libEGL and libGL libraries on the Proxmox VE 8 server, run the following command: $ apt install -y libegl1 libgl1 The libEGL and libGL libraries should be installed. In our case, they are already installed. Enabling the VirtIO-GL/VirGL GPU 3D Acceleration on a Proxmox VE 8 Virtual Machine To enable the VirtIO-GL/VirGL 3D acceleration on a Proxmox VE 8 virtual machine, navigate to the “Hardware” section of the virtual machine[1]. Double-click (LMB) on “Display”[2] and select “VirGL GPU” from the “Graphics card” dropdown menu[3]. By default, VirGL GPU uses only 256 MB of memory/VRAM (at max) from the installed GPU on your Proxmox VE server when the virtual machine is running. This is enough for most cases. 
If you want to allocate more memory/VRAM to the virtual machine, type it in the “Memory (MiB)” section[4]. Once you’re done, click on “OK”[5]. VirtIO-GL/VirGL should be enabled for your desired Proxmox VE 8 virtual machine. Now, you can start the virtual machine as usual. If VirtIO-GL/VirGL is enabled on the Proxmox VE 8 virtual machine successfully, the virtual machine will start without any error and the screen of the virtual machine will be displayed on the Proxmox VE 8 web interface. Testing If the VirtIO-GL/VirGL GPU 3D Acceleration Is Working on Proxmox VE Virtual Machines You can navigate to Settings > About on the GNOME desktop environment to find the “Graphics” information of the virtual machine. As you can see, the virtual machine is using the NVIDIA RTX 4070 that we have on our Proxmox VE 8 server via VirtIO-GL/VirGL. On other desktop environments, you will find similar information in the “Settings” app. To test whether VirtIO-GL/VirGL actually improves 3D performance on Proxmox VE 8 virtual machines, we create two Ubuntu 22.04 LTS virtual machines on our Proxmox VE 8 server. We enable VirtIO-GL/VirGL on one of them and use the default display settings (3D acceleration disabled) on the other one. Then, we run the “glmark2” test and compare the results. If you want to perform the same tests, you can install “glmark2” on your Ubuntu 22.04 LTS virtual machine with the following commands: $ sudo apt update $ sudo apt install glmark2 -y While the “glmark2” benchmark is running, the Proxmox VE 8 virtual machine that has the VirtIO-GL/VirGL 3D acceleration enabled uses less CPU (Figure 1) compared to the one that has the VirtIO-GL/VirGL 3D acceleration disabled (Figure 2). On the Proxmox VE 8 virtual machine with the VirtIO-GL/VirGL 3D acceleration disabled, the CPU usage is almost 100% as you can see in the following screenshot (Figure 2). High CPU usage means that 3D is simulated via the CPU instead of being accelerated via the GPU. So, the VirtIO-GL/VirGL 3D acceleration improves the 3D performance of Proxmox VE 8 virtual machines and makes the user interface of the Linux graphical desktop environment more responsive. Figure 1: CPU usage while running the “glmark2” benchmark on the Proxmox VE 8 virtual machine with the VirtIO-GL/VirGL 3D acceleration enabled Figure 2: CPU usage while running the “glmark2” benchmark on the Proxmox VE 8 virtual machine with the VirtIO-GL/VirGL 3D acceleration disabled The “glmark2” score proves that the VirtIO-GL/VirGL 3D acceleration really improves the overall graphical user experience of the Proxmox VE 8 virtual machines. On the Proxmox VE 8 virtual machine with the VirtIO-GL/VirGL 3D acceleration enabled, the “glmark2” score is 2167 (Figure 3) and only 163 on the one with the VirtIO-GL/VirGL 3D acceleration disabled (Figure 4). That’s a huge difference. Figure 3: The “glmark2” score of the Proxmox VE 8 virtual machine with the VirtIO-GL/VirGL 3D acceleration enabled, when the NVIDIA RTX 4070 GPU and AMD Ryzen 3900X CPU (4 cores allocated to the virtual machine) are used on the Proxmox VE 8 server Figure 4: The “glmark2” score of the Proxmox VE 8 virtual machine with the VirtIO-GL/VirGL 3D acceleration disabled, when the AMD Ryzen 3900X CPU (4 cores allocated to the virtual machine) is used on the Proxmox VE 8 server You can also verify that the Proxmox VE 8 virtual machine is using the GPU from your Proxmox VE 8 server for 3D acceleration via VirtIO-GL/VirGL if you’re using an NVIDIA GPU on the Proxmox VE 8 server. 
To find the programs that are using the NVIDIA GPU of your Proxmox VE 8 server, open the Proxmox VE shell and run the “nvidia-smi” command. As you can see, one of the Proxmox VE 8 virtual machines consumes about 194 MiB of VRAM from the NVIDIA RTX 4070 GPU of our Proxmox VE 8 server for 3D acceleration. Conclusion In this article, we showed you how to install the required libraries on your Proxmox VE 8 server to get the VirtIO-GL/VirGL 3D acceleration working. We also showed you how to configure/enable the VirtIO-GL/VirGL 3D acceleration on a Proxmox VE 8 virtual machine. We showed you how to verify whether the VirtIO-GL/VirGL 3D acceleration is working on Proxmox VE 8 virtual machines as well. Finally, we benchmarked the VirtIO-GL/VirGL GPU of a Proxmox VE 8 virtual machine using “glmark2” to show you how it performs compared to a Proxmox VE 8 virtual machine with the VirtIO-GL/VirGL 3D acceleration disabled. View the full article
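Besides “glmark2”, you can also check which OpenGL renderer the guest is actually using. A small sketch for a Debian/Ubuntu guest (the mesa-utils package provides the glxinfo tool; the renderer string depends on your host GPU):
$ sudo apt install -y mesa-utils
$ glxinfo -B | grep -i renderer
With VirtIO-GL/VirGL enabled, the renderer string should mention virgl instead of a pure software renderer such as llvmpipe.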
  13. Proxmox VE is an open-source and enterprise-grade Type-1 virtualization platform. It has integrated support for KVM virtual machines as well as LXC containers. Proxmox VE offers community and enterprise versions of its software. The community version is free to use for everyone, while the enterprise version requires a paid subscription. On new Proxmox VE installations, the Proxmox VE enterprise package repositories are enabled by default. The Proxmox VE enterprise package repositories are not free to use. If you’re thinking of using the Proxmox VE community edition and not buying a Proxmox VE enterprise subscription at this point, you won’t be able to install any package from the Proxmox VE enterprise package repositories or upgrade the Proxmox VE packages. In that case, you have to disable the Proxmox VE enterprise package repositories and enable the Proxmox VE community package repositories. As a free Proxmox VE community user, you can install packages and upgrade Proxmox VE from the Proxmox VE community package repositories. In this article, we will show you how to disable the Proxmox VE enterprise package repositories and enable the Proxmox VE community package repositories on your Proxmox VE 8 installation. Table of Contents: Disabling the Proxmox VE Enterprise and Ceph Enterprise Package Repositories from Proxmox VE Adding and Enabling the Proxmox VE Community and Ceph Community Package Repositories on Proxmox VE Updating the Proxmox VE Package Database Cache Conclusion Disabling the Proxmox VE Enterprise and Ceph Enterprise Package Repositories from Proxmox VE To find all the package repositories that are added to your Proxmox VE server, log in to your Proxmox VE dashboard and navigate to pve[1] > Repositories[2]. As you can see, the Proxmox VE and Ceph enterprise package repositories are enabled by default[3]. To disable the Proxmox VE enterprise package repository, select it[1] and click on “Disable”[2]. The Proxmox VE enterprise package repository should be disabled as you can see in the following screenshot[1]. In the same way, to disable the Ceph enterprise repository, select it[2] and click on “Disable”[3]. The Ceph enterprise package repository should be disabled. Adding and Enabling the Proxmox VE Community and Ceph Community Package Repositories on Proxmox VE Once the Proxmox VE enterprise and Ceph enterprise package repositories are disabled from Proxmox VE, you can add and enable the Proxmox VE community and Ceph community package repositories on your Proxmox VE server. To add a new package repository on Proxmox VE, navigate to pve[1] > Repositories[2] from the Proxmox VE dashboard and click on “Add”[3]. You will see a warning that no valid subscription has been added to your Proxmox VE server. That’s fine as we want to use the Proxmox VE community version, not the enterprise version. Just click on “OK”. To add the Proxmox VE community package repository, select “No-Subscription” from the “Repository” dropdown menu[1] and click on “Add”[2]. The Proxmox VE community package repository should be added and enabled as you can see in the following screenshot: Ceph has different versions. The version of Ceph that was enabled on your Proxmox VE server by default can be found at the end of the name of the Ceph enterprise package repository that you just disabled. In this case, the version of Ceph that is enabled on our Proxmox VE server by default is Quincy[1]. 
You may see newer versions of Ceph when adding a Ceph community package repository on your Proxmox VE server. You can use the version of Ceph that you want on your Proxmox VE server. To add a Ceph community package repository on your Proxmox VE server, click on “Add”[2]. Click on “OK”. As you can see, we can add either Ceph Quincy or Ceph Reef on our Proxmox VE server. Ceph Reef is newer than Ceph Quincy. To add a Ceph community repository on your Proxmox VE server, select your desired version of the “Ceph No-Subscription” package repository. Once you have selected a “Ceph No-Subscription” repository, click on “Add”. Your desired version of the Ceph community package repository should be added and enabled on your Proxmox VE server. Updating the Proxmox VE Package Database Cache Once you have added and enabled the Proxmox VE community and Ceph community package repositories on your Proxmox VE server, you can install the packages from these repositories on your Proxmox VE server. First, you need to update the Proxmox VE package database cache. To do that, you need access to the Proxmox VE shell. To access the Proxmox VE shell, navigate to pve[1] > Shell[2]. The Proxmox VE shell should be displayed. To update the Proxmox VE package database cache, run the following command: $ apt update Proxmox VE 8 is based on Debian 12 “Bookworm”. So, it uses the APT package manager to manage the software packages. You can install the packages on your Proxmox VE server in the same way as you install them on Debian 12. Conclusion In this article, we showed you how to disable the Proxmox VE enterprise and Ceph enterprise package repositories on your Proxmox VE 8 server. We showed you how to add and enable the Proxmox VE community and Ceph community package repositories on your Proxmox VE 8 server as well. View the full article
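If you prefer doing this from the Proxmox VE shell instead of the web interface, the same change boils down to editing the APT source files. The following is only a sketch for a default Proxmox VE 8 (Debian 12 “Bookworm”) installation and assumes you picked Ceph Quincy; adjust the Ceph line if you chose Reef:
$ sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
$ sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list
$ echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
$ echo "deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription" >> /etc/apt/sources.list.d/ceph.list
$ apt update
The sed commands comment out the enterprise repository lines, and the echo commands add the community (no-subscription) repositories before the package database cache is refreshed.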
  14. The post How to Set Up Clustering and High Availability in Proxmox first appeared on Tecmint: Linux Howtos, Tutorials & Guides. A cluster is a collection of two or more nodes that offers an avenue for moving around resources between servers. Migrating resources makes it possible… View the full article
  15. The post How to Create Clones and Templates of Virtual Machines in Proxmox first appeared on Tecmint: Linux Howtos, Tutorials & Guides. In this tutorial, you will learn how to create clones and templates of Virtual Machines in Proxmox. Take a look at our earlier tutorials on… View the full article
  16. The post How to Backup and Restore VMs in Proxmox first appeared on Tecmint: Linux Howtos, Tutorials & Guides. This is our fourth guide of the Proxmox series; in this tutorial, we will explore how to backup and restore VMs in Proxmox. As a… View the full article
  17. The post How to Create Containers in Proxmox first appeared on Tecmint: Linux Howtos, Tutorials & Guides. In the previous lectures, we learned how to install Proxmox on Debian and also how to create virtual machines. In this tutorial, we will see… View the full article
  18. The post How to Create a Virtual Machine in Proxmox first appeared on Tecmint: Linux Howtos, Tutorials & Guides. In the previous tutorial, we demonstrated step-by-step how to install Proxmox on a Debian 12 system. In this second part, we will go a step… View the full article
  19. The post How to Install Proxmox (Server Virtualization) on Debian 12 first appeared on Tecmint: Linux Howtos, Tutorials & Guides. Proxmox Virtual Environment is a robust and open-source virtualization platform based on Debian GNU/Linux that ships with a custom kernel and encapsulates KVM virtualization and… View the full article