Showing results for tags 'vms'.

Found 12 results

  1. Storage-dense workloads need consistent performance, high SSD density, and predictable maintenance that preserves SSD data. Last week at Google Cloud Next ‘24, we announced the general availability of the Z3 machine series, our first storage-optimized VM family. With industry-leading 6M 100%-random-read and 6M write IOPS, an incredibly dense storage configuration of up to 409 GB of SSD per vCPU, and a highmem configuration (1 vCPU : 8 GB RAM), Z3 VMs provide consistent performance and a maintenance experience with minimal disruptions for storage-dense workloads such as horizontal, scale-out databases and log analytics workloads, allowing you to reduce total cost of ownership by offering more storage capacity for fewer cores. The Z3 machine series brings together the enterprise-grade performance and reliability of the 4th Gen Intel Xeon Scalable processor (code-named Sapphire Rapids), Google’s custom Intel Infrastructure Processing Unit (IPU), and the latest generation of Local SSD to Google Compute Engine and Google Kubernetes Engine customers. Z3 also debuts the next generation of Local SSD, with up to 3x the disk throughput of prior-generation instances and up to 35% lower disk latency. Z3 is ideal for horizontal, scale-out databases, flash-optimized databases, data warehouses, and other applications with dense storage requirements. Initially, Z3 comes in two shapes: 176 vCPUs with 1.4 TB of DDR5 memory, and 88 vCPUs with 704 GB of DDR5 memory, both with 36 TB of next-generation Local SSD.

What our customers are saying

“Google’s Z3 instances help fulfill Aerospike's commitment to deliver superior cost performance for real-time database users. 
Our testing shows that they not only meet the high expectations of our mutual customers, but can also reduce their cluster sizes by more than 70%, simplifying their environments and reducing overall costs.” - Srini Srinivasan, Chief Technology Officer, Aerospike

“bi(OS) is the real-time database that ‘scales up and scales out.’ GCP’s Z3 instance is the first cloud VM that can unleash its true potential. Over 72 hours, using only three Z3-88 instances, bi(OS) delivered ~164,000 rows/sec of throughput at a mean latency < 32 ms. All inserts (49%), upserts (12%), and selects (39%) were performed with five 9s reliability across three zones.” - Darshan Rawal, Founder and CEO, Isima

"When we tested ScyllaDB on the new Z3 instances, ScyllaDB exhibited a significant throughput improvement across workloads versus the previous generation of N2 instances. We observed a 23% increase in write throughput, 24% for mixed workloads, and 14% for reads – and that’s with 8 fewer cores (z3-highmem-88 vs n2-highmem-96). On these new instances, a cluster of just three ScyllaDB nodes can achieve around 2.2M OPS for writes and mixed workloads and around 3.6M OPS for reads. We are excited for the incredible performance and value that these new instances will offer our customers." - Avi Kivity, Co-founder and CTO, ScyllaDB

Enhanced maintenance experience

Z3 VMs come with a variety of new infrastructure lifecycle technologies that provide tighter control and specificity around maintenance. Z3 VMs receive notice from the system several days in advance of a maintenance event. You can then schedule the maintenance event at a time of your choosing, or default to the scheduled time. This allows you to plan ahead of a disruptive event more predictably, while allowing us to deliver more performant and secure infrastructure. You’ll also receive in-place upgrades that preserve your data through planned maintenance events. 
Powered by Titanium

Z3 VMs are built on Titanium, Google’s system of purpose-built custom silicon, security microcontrollers, and tiered scale-out offloads. The end result is better performance, lifecycle management, reliability, and security for your workloads. Titanium enables Z3 to deliver up to 200 Gbps of fully encrypted networking, 3x faster packet-processing capabilities than prior-generation VMs, near-bare-metal performance, integrated maintenance updates for the majority of workloads, and advanced controls for the more sensitive workloads.

“Building on our successful partnership with Google Cloud since 2016, we're proud to collaborate on Google Cloud’s first storage-optimized VM family. This collaboration delivers Intel’s 4th Gen Intel Xeon processor and Google’s custom Intel IPU that unlocks new levels of efficiency and performance.” - Suzi Jewett, General Manager - Intel Xeon Products, Intel Corporation

Hyperdisk storage

Hyperdisk is Google Cloud’s latest-generation block storage. Built on Titanium, Hyperdisk delivers significantly higher levels of performance, flexibility, and efficiency by decoupling storage processing from the virtual machine host. With Hyperdisk, you can dynamically scale storage performance and capacity independently to efficiently meet the storage I/O needs of data-intensive workloads such as data analytics and databases. Now, you don’t have to choose expensive, large compute instances just to get higher storage performance.

Get started with Z3 today

Z3 VMs are available today in the following regions: us-central1 (Iowa), europe-west4 (Netherlands), and asia-southeast1 (Singapore). To start using Z3 instances, select Z3 under the new Storage-Optimized machine family when creating a new VM or GKE node pool in the Google Cloud console. Learn more at the Z3 machine series page. Contact your Google Cloud sales representative for more information on regional availability. View the full article
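The console flow described in the announcement has a CLI equivalent via gcloud. A minimal sketch that assembles (and only prints) the command; the project, zone, and instance names are hypothetical placeholders, while z3-highmem-88 is one of the two Z3 shapes mentioned above:

```shell
# All names here are hypothetical placeholders; adapt them to your project.
PROJECT="my-project"
ZONE="us-central1-a"
# Build the gcloud invocation as a string and print it rather than run it,
# so the sketch is safe to copy-paste and adjust.
CMD="gcloud compute instances create z3-demo \
  --project=$PROJECT --zone=$ZONE \
  --machine-type=z3-highmem-88"
echo "$CMD"
```

The same machine type can be selected for a GKE node pool with the `--machine-type` flag of `gcloud container node-pools create`.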
  2. In this article, I am going to show you how to download the ISO image of Windows 11 and the VirtIO Windows drivers on Proxmox VE 8, create a Windows 11 virtual machine (VM) on Proxmox VE 8, install Windows 11 on the Proxmox VE 8 virtual machine (VM), and install the VirtIO drivers and QEMU guest agent on the Windows 11 Proxmox VE 8 virtual machine (VM).

Table of Contents
How to Download/Upload the Windows 11 ISO Image on Proxmox VE 8
Downloading the Latest VirtIO Drivers ISO File for Windows 11 on Proxmox VE 8
Creating a Windows 11 Proxmox VE 8 Virtual Machine (VM)
Installing Windows 11 on the Proxmox VE 8 Virtual Machine (VM)
Installing VirtIO Drivers and QEMU Guest Agent on the Windows 11 Proxmox VE 8 Virtual Machine (VM)
Removing the Windows 11 and VirtIO Drivers ISO Images from the Windows 11 Proxmox VE 8 Virtual Machine (VM)
Conclusion
References

How to Download/Upload the Windows 11 ISO Image on Proxmox VE 8

There are two ways to get the Windows 11 ISO image on your Proxmox VE 8 server:
Download the Windows 11 ISO image on your computer and upload it to Proxmox VE from your computer.
Download the Windows 11 ISO image directly on Proxmox VE.

To download the Windows 11 ISO image, visit the official download page of Windows 11 from your favorite web browser. Once the page loads, select Windows 11 (multi-edition ISO for x64 devices)[1] from the dropdown menu and click on Download Now[2]. Select your language from the dropdown menu[1] and click on Confirm[2]. The download link for the Windows 11 ISO image should be generated. To download the Windows 11 ISO image on your computer (so that you can upload it to Proxmox VE), click on the 64-bit Download button. To download the Windows 11 ISO image on your Proxmox VE 8 server directly, right-click (RMB) on the 64-bit Download button and click on Copy Link (or a similar option depending on your web browser) to copy the download link of the Windows 11 ISO image. 
Now, navigate to the ISO Images section of your desired Proxmox VE datastore (that supports ISO images) from the resource tree of your Proxmox VE server[1]. If you’ve downloaded the Windows 11 ISO image on your computer, click on Upload and select the Windows 11 ISO image file from your computer to upload it to your Proxmox VE server[2]. If you want to download the Windows 11 ISO image directly on your Proxmox VE server, click on Download from URL on your Proxmox VE server[3]. I will demonstrate this method in this article. Once you’ve clicked on the Download from URL button, you will see the following window. Paste the Windows 11 ISO download link (that you’ve copied recently) in the URL section and click on Query URL. The correct File name[1] and File size[2] of the Windows 11 ISO image should be displayed. Click on Download[3]. Proxmox VE should start downloading the Windows 11 ISO image. It will take a while to complete as it’s a big download. Just wait till it finishes. Once the Windows 11 ISO image is downloaded on your Proxmox VE 8 server, it will be listed in the ISO Images section of your selected Proxmox VE datastore.

Downloading the Latest VirtIO Drivers ISO File for Windows 11 on Proxmox VE 8

To get the best performance, you need to install the required VirtIO drivers on the Windows 11 operating system after it’s installed on the Proxmox VE virtual machine. To download the latest version of the VirtIO drivers ISO image file on Proxmox VE 8, visit the official VirtIO drivers download page from your favorite web browser. Once the page loads, right-click on the virtio-win.iso image file or virtio-win-<version>.iso image file and click on Copy Link (or a similar option depending on the web browser you’re using). Then, navigate to the ISO Images section of your desired Proxmox VE datastore (that supports ISO images) and click on Download from URL. 
Type in the VirtIO ISO image download link (that you’ve copied recently) in the URL section and click on Query URL[1]. The file name[2] and file size[3] of the VirtIO ISO image should be displayed. Then, click on Download[4]. Proxmox VE should start downloading the VirtIO ISO image. It will take a while to complete. Once the VirtIO ISO image is downloaded, it will be displayed in the ISO Images section of the Proxmox VE datastore (where you’ve downloaded it).

Creating a Windows 11 Proxmox VE 8 Virtual Machine (VM)

To create a new virtual machine on Proxmox VE 8, click on Create VM from the top-right corner of the Proxmox VE dashboard. The Proxmox VE 8 virtual machine creation wizard should be displayed.

In the General tab, type in a name for your Windows 11 virtual machine[1] and click on Next[2].

In the OS tab, select Use CD/DVD disc image file (iso)[1], select the datastore where you’ve uploaded/downloaded the Windows 11 ISO image from the Storage dropdown menu, and select the Windows 11 ISO image from the ISO image dropdown menu[2]. Then, select Microsoft Windows from the Guest OS Type dropdown menu[3], select 11/2022 from the Version dropdown menu[4], tick Add additional drive for VirtIO drivers[5], and select the VirtIO drivers ISO image file from the Storage and ISO image dropdown menus[6]. Once you’re done with all the steps, click on Next[7].

In the System tab, select a datastore for the EFI disk from the EFI Storage dropdown menu[1], tick the Qemu Agent checkbox[2], and select a datastore for the TPM from the TPM Storage dropdown menu[3]. Once you’re done, click on Next[4].

In the Disks tab, select VirtIO Block from the Bus/Device dropdown menu[1], select a datastore for the virtual machine’s disk from the Storage dropdown menu[2], and type in your desired disk size in the Disk size (GiB) section[3]. Once you’re done, click on Next[4]. 
In the CPU tab, select the number of CPU cores you want to allocate to the virtual machine in the Cores section[1], select host from the Type dropdown menu[2], and click on Next[3].

In the Memory tab, type in the amount of memory you want to allocate to the Proxmox VE virtual machine (VM) in the Memory (MiB) section[1]. If you want to overprovision the memory of your Proxmox VE server (allocate more memory to virtual machines than you have available on your Proxmox VE server), tick Ballooning Device[2] and type in the minimum amount of memory that you want to allocate to the virtual machine in the Minimum memory (MiB) section[3]. If you enable Ballooning Device for this virtual machine, the virtual machine will release unused memory to the Proxmox VE server so that it can allocate it to other virtual machines. For more information on this, read the Proxmox VE Dynamic Memory Management documentation. Once you’re done, click on Next[4].

In the Network tab, select VirtIO (paravirtualized)[1] from the Model dropdown menu and click on Next[2]. Click on Finish. A Windows 11 Proxmox VE 8 virtual machine should be created[1]. To start the Windows 11 virtual machine, click on Start[2]. Press any key and the Windows 11 installer should be displayed on the virtual machine. From here, you can install Windows 11 on the Proxmox VE virtual machine as usual.

Installing Windows 11 on the Proxmox VE 8 Virtual Machine (VM)

To install Windows 11, select your language, time and currency format, and keyboard/input method from the respective dropdown menus[1] and click on Next[2]. Click on Install now. If you have a Windows 11 license key, type it in and click on Next. If you don’t have a Windows 11 license key or want to activate Windows 11 later, click on I don’t have a product key. Select the Windows 11 version that you want to install on the Proxmox VE virtual machine and click on Next. Tick the I accept the Microsoft Software License Terms… checkbox[1] and click on Next[2]. 
Click on Custom: Install Windows only (advanced). Now, you have to install the VirtIO SCSI driver and the VirtIO Ethernet driver to successfully install Windows 11 on the Proxmox VE 8 virtual machine.

To install the VirtIO SCSI driver from the VirtIO drivers ISO file, click on Load driver. Click on Browse. Select CD Drive: virtio-win > amd64 > w11 folder and click on OK as marked in the screenshot below. The VirtIO SCSI driver should be listed. Select the Red Hat VirtIO SCSI controller driver[1] and click on Next[2]. The VirtIO SCSI driver is being installed. It will take a few seconds to complete. Once the VirtIO SCSI driver is installed, you will see a free disk in your Proxmox VE 8 virtual machine[1].

To install the VirtIO Ethernet driver, click on Load driver again[2]. Click on Browse. Select CD Drive: virtio-win > NetKVM > w11 > amd64 folder and click on OK as marked in the screenshot below. The VirtIO Ethernet driver should be listed. Select the Red Hat VirtIO Ethernet Adapter driver[1] and click on Next[2]. The VirtIO Ethernet driver is being installed. It will take a few seconds to complete. The VirtIO Ethernet driver should be installed.

Once the VirtIO SCSI and VirtIO Ethernet drivers are installed, select the free disk[1] and click on Next[2]. Windows installer should start installing Windows 11 on the disk of the Proxmox VE 8 virtual machine. It will take a few minutes to complete. Once the required Windows 11 files are installed on the Proxmox VE 8 virtual machine, the virtual machine will reboot. On the next boot, the Windows installer will ask you a few questions to configure Windows 11 for you. First, select your country/region from the list and click on Yes. Select a keyboard layout or input method from the list and click on Yes. If you want to add another keyboard layout or input method to your Windows 11 installation, click on Add layout and follow the instructions. If you don’t want to add another keyboard layout or input method, click on Skip. 
You will need to wait a few minutes for the Windows 11 installer to get ready and show you the next steps. Type in a name for your Windows 11 virtual machine[1] and click on Next[2]. Select how you want this Windows 11 virtual machine set up[1] and click on Next[2]. Depending on what you select from this section, you will see different options later. I am setting up this Windows 11 virtual machine for personal use. Click on Sign in. You must have a Microsoft account to install and use Windows 11. If you don’t have a Microsoft account, you can create one from here. Once you have a Microsoft account, log in to your Microsoft account to continue the Windows 11 installation. If you’ve used the same Microsoft account on different Windows 10/11 devices, you will be asked to restore data on this virtual machine from the latest backup. To do that, click on Restore from this PC[1]. If the device you want to restore from is not listed or you want to set this virtual machine as a new Windows 11 device, click on More options[2]. All the Windows 10/11 devices that you’ve connected to this Microsoft account should be listed. You can restore data from any of these devices. Just select your desired Windows 10/11 device from the list and click on Restore from this PC[1]. If you want to set this virtual machine as a new Windows 11 device, click on Set up as a new PC[2]. Click on Create PIN. Type in your PIN and click on OK. Click on Next. Click on Accept. You can select the type of work you want to do in this virtual machine from the list and click on Accept so that Windows 11 can customize it for you. If you don’t want to answer it now, click on Skip. You will be asked to connect your Android phone to Windows 11. You can do that later. So, click on Skip to simplify the Windows 11 installation. You will be asked to import the browsing data from your Microsoft account. If you’re a Microsoft Edge user, this will be helpful. So, click on Accept and follow the procedures. 
If you don’t want to import the browsing data from your Microsoft account, click on Not now. To simplify the Windows 11 installation, I have selected this option. Click on Decline to simplify the Windows 11 installation. Click on Decline. Windows 11 should be ready to use in a few minutes. Windows 11 should be installed on the Proxmox VE 8 virtual machine.

Installing VirtIO Drivers and QEMU Guest Agent on the Windows 11 Proxmox VE 8 Virtual Machine (VM)

To install all the VirtIO drivers and the QEMU guest agent on the Windows 11 Proxmox VE 8 virtual machine, double-click (LMB) on the VirtIO driver CD (CD Drive virtio-win-<version>) from the File Explorer of Windows 11. Double-click (LMB) on the virtio-win-guest-tools installer file as marked in the screenshot below. The VirtIO Guest Tools installer window should be displayed. Check I agree to the license terms and conditions[1] and click on Install[2]. Click on Yes. Click on Next. Check I accept the terms in the License Agreement[1] and click on Next[2]. Click on Next. Click on Install. The VirtIO drivers are being installed. It will take a few seconds to complete. Once the VirtIO drivers are installed on the Windows 11 Proxmox VE virtual machine, click on Finish. After the VirtIO drivers are installed, the QEMU Guest Agent should start installing. It will take a few seconds to complete. Once the QEMU Guest Agent is installed, click on Close.

Removing the Windows 11 and VirtIO Drivers ISO Images from the Windows 11 Proxmox VE 8 Virtual Machine (VM)

Once you’ve installed Windows 11 on the Proxmox VE 8 virtual machine, you can remove the Windows 11 and VirtIO drivers ISO images from the Windows 11 virtual machine. To remove the Windows 11 ISO image from the Windows 11 Proxmox VE virtual machine, navigate to the Hardware section of the Windows 11 virtual machine, select the CD/DVD Drive that has the Windows 11 ISO image file mounted, and click on Edit. Select Do not use any media and click on OK. 
The Windows 11 ISO image should be removed from the CD/DVD Drive of the Windows 11 Proxmox VE virtual machine[1]. In the same way, you can remove the VirtIO drivers ISO image from the CD/DVD Drive of the Windows 11 Proxmox VE virtual machine[2].

Conclusion

In this article, I have shown you how to download/upload the latest Windows 11 ISO image on your Proxmox VE 8 server directly from Microsoft. I have also shown you how to download the latest VirtIO drivers ISO image for a Windows 11 Proxmox VE 8 virtual machine. I have shown you how to create a Windows 11 Proxmox VE 8 virtual machine, install Windows 11 on it, and install the VirtIO drivers and QEMU guest agent on the Windows 11 virtual machine as well. After Windows 11, the VirtIO drivers, and the QEMU guest agent are installed on the Proxmox VE virtual machine, I have shown you how to remove the Windows 11 and VirtIO drivers ISO images from the Windows 11 Proxmox VE virtual machine.

References

Download Windows 11
Windows VirtIO Drivers – Proxmox VE

View the full article
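As an aside, the Create VM wizard settings walked through above roughly map to a single qm create invocation on the Proxmox host shell. A sketch that only prints the command; the VM ID, datastore names (local-lvm, local), ISO file names, and sizes are all hypothetical placeholders for your own values:

```shell
VMID=101            # hypothetical VM ID
STORE="local-lvm"   # hypothetical datastore for EFI disk, TPM state, and system disk
# Assemble the command as a string and print it instead of running it,
# so the sketch is safe to adapt before use on a real Proxmox VE host.
CMD="qm create $VMID --name win11 --ostype win11 \
  --machine q35 --bios ovmf --efidisk0 $STORE:1 \
  --tpmstate0 $STORE:1,version=v2.0 \
  --cores 4 --cpu host --memory 8192 --balloon 4096 \
  --net0 virtio,bridge=vmbr0 --virtio0 $STORE:64 \
  --ide2 local:iso/Win11.iso,media=cdrom \
  --ide3 local:iso/virtio-win.iso,media=cdrom --agent 1"
echo "$CMD"
```

The flags mirror the wizard tabs: --bios ovmf/--efidisk0/--tpmstate0 for the System tab, --virtio0 for the Disks tab, --cores/--cpu host for CPU, --memory/--balloon for Memory, and --net0 virtio for Network.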
  3. On Proxmox VE, QEMU Guest Agent is installed on the virtual machines (VMs) for the following reasons:

To send ACPI commands to Proxmox VE virtual machines to properly shut down the virtual machines from the Proxmox VE web UI.
To freeze/suspend the Proxmox VE virtual machines while taking backups and snapshots to make sure that no files are changed while taking backups/snapshots.
To resume suspended Proxmox VE virtual machines correctly.
To collect CPU, memory, disk I/O, and network usage information of Proxmox VE virtual machines for graphing the usage information in the Proxmox VE web UI.
To perform dynamic memory management on Proxmox VE virtual machines.

In this article, I am going to show you how to install QEMU Guest Agent on some of the most popular Linux distributions.

Table of Contents
How to Enable QEMU Guest Agent for a Proxmox VE Virtual Machine
Installing QEMU Guest Agent on Ubuntu/Debian/Linux Mint/Kali Linux/KDE Neon
Installing QEMU Guest Agent on Fedora/RHEL/CentOS Stream/Alma Linux/Rocky Linux/Oracle Linux
Installing QEMU Guest Agent on OpenSUSE and SUSE Linux Enterprise Server (SLES)
Installing QEMU Guest Agent on Arch Linux/Manjaro Linux
Verifying If QEMU Guest Agent is Working Correctly on Proxmox VE Virtual Machines
Conclusion
References

How to Enable QEMU Guest Agent for a Proxmox VE Virtual Machine

Before installing QEMU Guest Agent on a Proxmox VE Linux virtual machine, you must enable QEMU Guest Agent for the virtual machine.

Installing QEMU Guest Agent on Ubuntu/Debian/Linux Mint/Kali Linux/KDE Neon

On Ubuntu/Debian and Ubuntu/Debian-based Linux distributions (i.e. Linux Mint, Kali Linux, KDE Neon, Elementary OS, Deepin Linux, Pop!_OS), QEMU Guest Agent can be installed with the following commands:

$ sudo apt update
$ sudo apt install qemu-guest-agent -y

QEMU Guest Agent should be installed. 
Once QEMU Guest Agent is installed on the Proxmox VE virtual machine, reboot the virtual machine for the changes to take effect with the following command:

$ sudo reboot

Once the Proxmox VE virtual machine boots, check if the QEMU Guest Agent service is working correctly.

Installing QEMU Guest Agent on Fedora/RHEL/CentOS Stream/Alma Linux/Rocky Linux/Oracle Linux

On Fedora, RHEL, CentOS Stream, and other RHEL-based Linux distributions (i.e. Alma Linux, Rocky Linux, Oracle Linux), QEMU Guest Agent can be installed with the following commands:

$ sudo dnf makecache
$ sudo dnf install qemu-guest-agent

To confirm the installation, press Y and then press <Enter>. QEMU Guest Agent should be installed. Once QEMU Guest Agent is installed on the Proxmox VE virtual machine, reboot the virtual machine for the changes to take effect with the following command:

$ sudo reboot

Once the Proxmox VE virtual machine boots, check if the QEMU Guest Agent service is working correctly.

Installing QEMU Guest Agent on OpenSUSE and SUSE Linux Enterprise Server (SLES)

On OpenSUSE Linux and SUSE Linux Enterprise Server (SLES), QEMU Guest Agent can be installed with the following commands:

$ sudo zypper refresh
$ sudo zypper install qemu-guest-agent

To confirm the installation, press Y and then press <Enter>. QEMU Guest Agent should be installed. Once QEMU Guest Agent is installed on the Proxmox VE virtual machine, reboot the virtual machine for the changes to take effect with the following command:

$ sudo reboot

Once the Proxmox VE virtual machine boots, check if the QEMU Guest Agent service is working correctly.

Installing QEMU Guest Agent on Arch Linux/Manjaro Linux

On Arch Linux, Manjaro Linux, and other Arch Linux-based Linux distributions, QEMU Guest Agent can be installed with the following command:

$ sudo pacman -Sy qemu-guest-agent

To confirm the installation, press Y and then press <Enter>. QEMU Guest Agent should be installed. 
Once QEMU Guest Agent is installed on the Proxmox VE virtual machine, reboot the virtual machine for the changes to take effect with the following command:

$ sudo reboot

Once the Proxmox VE virtual machine boots, check if the QEMU Guest Agent service is working correctly.

Verifying If QEMU Guest Agent is Working Correctly on Proxmox VE Virtual Machines

To verify whether the QEMU Guest Agent is working correctly, check the status of the qemu-guest-agent service with the following command:

$ sudo systemctl status qemu-guest-agent.service

If the QEMU Guest Agent is working correctly, the qemu-guest-agent systemd service should be active/running. Some Linux distributions may not activate/enable the qemu-guest-agent systemd service by default. In that case, you can start the qemu-guest-agent service and add it to the system startup with the following commands:

$ sudo systemctl start qemu-guest-agent
$ sudo systemctl enable qemu-guest-agent

You can also check the Summary section of the virtual machine (from the Proxmox VE web UI) where you’ve enabled and installed QEMU Guest Agent to verify whether it’s working. If the QEMU Guest Agent is working correctly, you will see the IP information and other usage stats (i.e. CPU, memory, network, disk I/O) of the virtual machine in the Summary section of the virtual machine.

Conclusion

In this article, I have discussed the importance of enabling and installing the QEMU Guest Agent on Proxmox VE virtual machines. I have also shown you how to install QEMU Guest Agent on some of the most popular Linux distributions.

References

Qemu-guest-agent – Proxmox VE

View the full article
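Besides checking inside the guest, the agent can also be probed from the Proxmox VE host with qm agent. A sketch that only prints the command; the VM ID is a hypothetical placeholder:

```shell
# Hypothetical VM ID; run the printed command on the Proxmox VE host,
# not inside the guest.
VMID=100
# "qm agent <vmid> ping" returns silently when the guest agent is reachable
# and prints an error when it is not.
CMD="qm agent $VMID ping"
echo "$CMD"
```

This is the same channel the Proxmox VE web UI uses to fetch the IP information and usage stats shown in the Summary section.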
  4. VMware Workstation Pro virtual machines can be exported and imported back into VMware Workstation Pro on other computers or into other hypervisor programs such as Proxmox VE, KVM/QEMU/libvirt, XCP-ng, etc. VMware Workstation Pro virtual machines can be exported in the OVF and OVA formats.

OVF: OVF stands for Open Virtualization Format. The main goal of OVF is to provide a platform-independent format for distributing virtual machines between different platforms/hypervisors. A VMware Workstation Pro virtual machine exported in OVF format produces a few files containing metadata, disk images, and other data that help deploy the virtual machine on other platforms/hypervisors.

OVA: OVA stands for Open Virtualization Appliance. While an OVF export of a VMware Workstation Pro virtual machine generates a few files per virtual machine, an OVA combines all those files into a single archive. In short, an OVA is the set of OVF export files packed into a single archive file. OVA files are easier to distribute among different platforms/hypervisors.

In this article, I am going to show you how to export VMware Workstation Pro virtual machines in OVF/OVA format for keeping a copy of the virtual machine as a backup, or for importing them into other platforms/hypervisors.

Table of Contents:
How to Export VMware Workstation Pro VMs in OVA Format
How to Export VMware Workstation Pro VMs in OVF Format
Conclusion
References

How to Export VMware Workstation Pro VMs in OVA Format:

To export a VMware Workstation Pro virtual machine in OVA format, select it[1] and click on File > Export to OVF[2]. Navigate to a folder/directory where you want to export the VMware Workstation Pro virtual machine in OVA format. Type in a file name for the export file ending with the extension .ova (i.e. docker-vm.ova)[1], and click on Save[2]. The VMware Workstation Pro virtual machine is being exported in OVA format. 
It will take a while to complete depending on the size of the virtual disks of the virtual machine. Once the VMware Workstation Pro virtual machine is exported in OVA format, you will find an OVA file in your selected folder/directory.

How to Export VMware Workstation Pro VMs in OVF Format:

To export a VMware Workstation Pro virtual machine in OVF format, select it[1] and click on File > Export to OVF[2]. Navigate to a folder/directory where you want to export the VMware Workstation Pro virtual machine in OVF format. As an OVF export will create a few files for each virtual machine, you should create a dedicated folder/directory (engineering-vm in this case) for the virtual machine export and navigate to it[1]. Type in a file name for the export file ending with the extension .ovf (i.e. engineering-ws.ovf)[2], and click on Save[3]. The VMware Workstation Pro virtual machine is being exported in OVF format. It will take a while to complete depending on the size of the virtual disks of the virtual machine. Once the VMware Workstation Pro virtual machine is exported in OVF format, you will find a few virtual machine files in the selected folder/directory.

Conclusion:

In this article, I have shown you how to export a VMware Workstation Pro virtual machine in OVA format. I have also shown you how to export a VMware Workstation Pro virtual machine in OVF format.

References:

Open Virtualization Format (OVF and OVA) | XenCenter

View the full article
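The OVA/OVF relationship described above can be demonstrated directly: an OVA is just a tar archive of the OVF export files, and per the OVF specification the .ovf descriptor must be the first member of the archive. A sketch using empty placeholder files with hypothetical names in place of a real export:

```shell
set -e
# Work in a scratch directory so nothing real is touched.
dir=$(mktemp -d)
cd "$dir"
# Placeholder files standing in for a real OVF export (hypothetical names):
# the descriptor (.ovf), the manifest of checksums (.mf), and a disk image.
touch engineering-ws.ovf engineering-ws.mf engineering-ws-disk1.vmdk
# Pack them into an OVA; list the .ovf first, as the OVF spec requires.
tar -cf engineering-ws.ova engineering-ws.ovf engineering-ws.mf engineering-ws-disk1.vmdk
# Listing the archive shows the members in order, .ovf first.
tar -tf engineering-ws.ova
```

This is also why an exported OVA can be unpacked with any tar tool when a target hypervisor only accepts the individual OVF files.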
  5. Virt-Viewer is a SPICE client that is used to access KVM/QEMU/libvirt virtual machines remotely. Proxmox VE is built using the KVM/QEMU/libvirt technologies, so you can use Virt-Viewer to remotely access Proxmox VE virtual machines as well. Virt-Viewer can also be used to access Proxmox VE LXC containers remotely via SPICE. In this article, we will show you how to install Virt-Viewer on the Windows 10/11, Ubuntu, Debian, Linux Mint, Kali Linux, and Fedora operating systems and access Proxmox VE virtual machines and LXC containers remotely via the SPICE protocol using Virt-Viewer.

Table of Contents:
Installing Virt-Viewer on Windows 10/11
Installing Virt-Viewer on Ubuntu/Debian/Linux Mint/Kali Linux
Installing Virt-Viewer on Fedora
Configuring the SPICE/QXL Display for the Proxmox VE Virtual Machines and LXC Containers
Accessing the Proxmox VE Virtual Machines Remotely via SPICE Protocol Using Virt-Viewer
Accessing the Proxmox VE LXC Containers Remotely via SPICE Protocol Using Virt-Viewer
Sharing the Remote Access to Proxmox VE Virtual Machines and LXC Containers with Others
Conclusion

Installing Virt-Viewer on Windows 10/11

To download Virt-Viewer for Windows 10/11, visit the official website of Virtual Machine Manager from your favorite web browser. Once the page loads, click on “Win x64 MSI” from the “virt-viewer 11.0” section as marked in the following screenshot. Your browser should start downloading the Virt-Viewer installer file. It takes a while to complete. At this point, the Virt-Viewer installer file for Windows 10/11 should be downloaded. To install Virt-Viewer on your Windows 10/11 system, double-click (LMB) on the Virt-Viewer installer file (that you just downloaded). The Virt-Viewer installer file should be found in the “Downloads” folder of your Windows 10/11 system. Click on “Yes”. Virt-Viewer is being installed on your Windows 10/11 system. It takes a while for it to complete the installation. 
Installing Virt-Viewer on Ubuntu/Debian/Linux Mint/Kali Linux

Virt-Viewer is available in the official package repositories of Ubuntu/Debian/Linux Mint/Kali Linux. So, you can easily install it on your computer if you’re using Ubuntu/Debian or any Ubuntu/Debian-based operating system (i.e. Linux Mint, Kali Linux). First, update the APT package database cache with the following command:

$ sudo apt update

To install Virt-Viewer on Ubuntu/Debian/Linux Mint/Kali Linux, run the following command:

$ sudo apt install virt-viewer

To confirm the installation, press “Y” and then press <Enter>. Virt-Viewer is being installed. It takes a while to complete. Virt-Viewer should now be installed.

Installing Virt-Viewer on Fedora

Virt-Viewer can be easily installed from the official package repository of Fedora. First, update the DNF package database cache with the following command:

$ sudo dnf makecache

To install Virt-Viewer on Fedora, run the following command:

$ sudo dnf install virt-viewer

To confirm the installation, press “Y” and then press <Enter>. You might be asked to confirm the GPG key of the official Fedora package repository. To do that, press “Y” and then press <Enter>. Virt-Viewer should now be installed on your Fedora system.

Configuring the SPICE/QXL Display for the Proxmox VE Virtual Machines and LXC Containers

SPICE is enabled for LXC containers by default on Proxmox VE. So, you don’t need to do anything to access the Proxmox VE LXC containers with Virt-Viewer via the SPICE protocol. SPICE is not enabled for Proxmox VE virtual machines by default. To access the Proxmox VE virtual machines with Virt-Viewer via the SPICE protocol, you must configure SPICE for the display of the virtual machines that you want to access. To configure the SPICE access for a Proxmox VE virtual machine, navigate to the “Hardware” section of the virtual machine from the Proxmox VE web management interface[1]. 
Double-click on the "Display" hardware[2], select SPICE from the "Graphic card" dropdown menu[3], and click on "OK"[4]. SPICE should now be enabled for your Proxmox VE virtual machine, and you can access it with Virt-Viewer via the SPICE protocol.

Accessing the Proxmox VE Virtual Machines Remotely via SPICE Protocol Using Virt-Viewer

To access a Proxmox VE virtual machine remotely via the SPICE protocol using Virt-Viewer, open the virtual machine in the Proxmox VE server and click on Console > SPICE in the top-right corner of the Proxmox VE dashboard. A SPICE connection file for the virtual machine should be downloaded. To access the virtual machine with Virt-Viewer, click on the downloaded SPICE connection file. The Proxmox VE virtual machine should open in Virt-Viewer via the SPICE protocol.

Figure 1: Ubuntu 22.04 LTS Proxmox VE virtual machine remotely accessed with Virt-Viewer from Windows 10

Figure 2: Ubuntu 22.04 LTS Proxmox VE virtual machine remotely accessed with Virt-Viewer from Fedora

Accessing the Proxmox VE LXC Containers Remotely via SPICE Protocol Using Virt-Viewer

You can access a Proxmox VE LXC container with Virt-Viewer in the same way as a virtual machine. To access a Proxmox VE LXC container remotely via the SPICE protocol using Virt-Viewer, open the LXC container in the Proxmox VE server and click on Console > SPICE in the top-right corner of the Proxmox VE dashboard. A SPICE connection file for the LXC container should be downloaded. To access the LXC container with Virt-Viewer, click on the downloaded SPICE connection file. The Proxmox VE LXC container should open in Virt-Viewer via the SPICE protocol.
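Instead of double-clicking the downloaded connection file, you can also open it from a terminal with the remote-viewer command that ships with Virt-Viewer. A ".vv" file is just a short INI-style text file. The sketch below creates one with made-up example values (the proxy address and ticket are assumptions; a real file is generated by Proxmox VE with a short-lived one-time ticket) and then opens it if remote-viewer is available:

```shell
# Create an example SPICE connection file. These values are hypothetical;
# Proxmox VE generates a real one when you click Console > SPICE.
cat > /tmp/example.vv <<'EOF'
[virt-viewer]
type=spice
proxy=http://pve.example.com:3128
password=one-time-ticket
EOF

# The proxy line shows which host the client must be able to reach:
grep '^proxy=' /tmp/example.vv

# Open the connection file with remote-viewer, if it is installed:
if command -v remote-viewer >/dev/null 2>&1; then
  remote-viewer /tmp/example.vv || echo "could not connect (expected with made-up values)"
else
  echo "remote-viewer is not installed"
fi
```

This is also a quick way to check why a connection fails: the proxy host in the file must resolve and be reachable from the client machine.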
Sharing the Remote Access to Proxmox VE Virtual Machines and LXC Containers with Others

If you want to share a Proxmox VE virtual machine with someone, all you have to do is share the SPICE connection file (with the ".vv" file extension) of the virtual machine that you downloaded from the Proxmox VE web management interface. The SPICE connection file can be used to access the virtual machine only once.

NOTE: The person you share the SPICE connection file with must be able to reach your Proxmox VE server in order to access the virtual machine. If your Proxmox VE server has a private IP address, only people connected to your local network will be able to connect to the shared virtual machines. If your Proxmox VE server has a public IP address, anyone can connect to the shared virtual machines.

Conclusion

In this article, we showed you how to install Virt-Viewer on Windows 10/11, Ubuntu, Debian, Linux Mint, Kali Linux, and Fedora. We also showed you how to access Proxmox VE virtual machines and LXC containers remotely with Virt-Viewer via the SPICE protocol, and how to share access to them with other people.

View the full article
  6. KVM virtualization technology supports various disk image formats. Two of the most popular and widely used disk formats are qcow2 and raw disk images. View the full article
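As a quick, hedged illustration of working with these two formats, the qemu-img tool (part of the QEMU tooling KVM builds on) can create, convert, and inspect both; the file names below are arbitrary:

```shell
# Create a 1 GiB raw image and convert it to qcow2 (sparse, copy-on-write).
# Skips gracefully if qemu-img is not installed.
if command -v qemu-img >/dev/null 2>&1; then
  qemu-img create -f raw /tmp/disk.raw 1G
  qemu-img convert -f raw -O qcow2 /tmp/disk.raw /tmp/disk.qcow2
  qemu-img info /tmp/disk.qcow2
else
  echo "qemu-img is not installed"
fi
```

The `info` subcommand is useful here because it reports both the virtual size and the actual disk usage, which is where qcow2's sparseness shows.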
  7. The post Terraform: Create Azure Windows VM with file, remote-exec & local-exec provisioner appeared first on DevOpsSchool.com. View the full article
  8. Creating virtual machines (VMs) from golden images is a common practice. It minimizes the deployment time for new VMs and provides a familiar environment for the VM's owner. The admin benefits from creating golden images in an automated manner because it reflects the current configuration. View the full article
  9. Authors: Fabian Kammel (Edgeless Systems), Mikko Ylinen (Intel), Tobin Feldman-Fitzthum (IBM)

In this blog post, we will introduce the concept of Confidential Computing (CC) to improve any computing environment's security and privacy properties. Further, we will show how the cloud-native ecosystem, particularly Kubernetes, can benefit from the new compute paradigm.

Confidential Computing is not a new concept in the cloud-native world. The Confidential Computing Consortium (CCC) is a project community in the Linux Foundation that has already worked on Defining and Enabling Confidential Computing. In the whitepaper, they provide a great motivation for the use of Confidential Computing:

Data exists in three states: in transit, at rest, and in use. …Protecting sensitive data in all of its states is more critical than ever. Cryptography is now commonly deployed to provide both data confidentiality (stopping unauthorized viewing) and data integrity (preventing or detecting unauthorized changes). While techniques to protect data in transit and at rest are now commonly deployed, the third state - protecting data in use - is the new frontier.

Confidential Computing aims primarily to solve the problem of protecting data in use by introducing a hardware-enforced Trusted Execution Environment (TEE).

Trusted Execution Environments

For more than a decade, Trusted Execution Environments (TEEs) have been available in commercial computing hardware in the form of Hardware Security Modules (HSMs) and Trusted Platform Modules (TPMs). These technologies provide trusted environments for shielded computations. They can store highly sensitive cryptographic keys and carry out critical cryptographic operations such as signing or encrypting data. TPMs are optimized for low cost, allowing them to be integrated into mainboards and act as a system's physical root of trust.
To keep the cost low, TPMs are limited in scope, i.e., they provide storage for only a few keys and are capable of just a small subset of cryptographic operations. In contrast, HSMs are optimized for high performance, providing secure storage for far more keys and offering advanced physical attack detection mechanisms. Additionally, high-end HSMs can be programmed so that arbitrary code can be compiled and executed. The downside is that they are very costly: a managed CloudHSM from AWS costs around $1.50 / hour, or ~$13,500 / year.

In recent years, a new kind of TEE has gained popularity. Technologies like AMD SEV, Intel SGX, and Intel TDX provide TEEs that are closely integrated with userspace. Rather than low-power or high-performance devices that support specific use cases, these TEEs shield normal processes or virtual machines and can do so with relatively low overhead. These technologies each have different design goals, advantages, and limitations, and they are available in different environments, including consumer laptops, servers, and mobile devices.

Additionally, we should mention ARM TrustZone, which is optimized for embedded devices such as smartphones, tablets, and smart TVs, as well as AWS Nitro Enclaves, which are only available on Amazon Web Services and have a different threat model compared to the CPU-based solutions by Intel and AMD. IBM Secure Execution for Linux lets you run your Kubernetes cluster's nodes as KVM guests within a trusted execution environment on IBM Z series hardware. You can use this hardware-enhanced virtual machine isolation to provide strong isolation between tenants in a cluster, with hardware attestation about the (virtual) node's integrity.

Security properties and feature set

In the following sections, we will review the security properties and additional features these new technologies bring to the table.
Only some solutions will provide all properties; we will discuss each technology in further detail in its respective section.

The Confidentiality property ensures that information cannot be viewed while it is in use in the TEE. This provides us with the highly desired ability to secure data in use. Depending on the specific TEE used, both code and data may be protected from outside viewers. The differences between TEE architectures, and how they are used in a cloud-native context, are important considerations when designing end-to-end security for sensitive workloads with a minimal Trusted Computing Base (TCB) in mind. The CCC has recently worked on a common vocabulary and supporting material that helps to explain where confidentiality boundaries are drawn with the different TEE architectures and how that impacts the TCB size.

Confidentiality is a great feature, but an attacker can still manipulate or inject arbitrary code and data for the TEE to execute and, therefore, easily leak critical information. Integrity guarantees a TEE owner that neither code nor data can be tampered with while running critical computations.

Availability is a basic property often discussed in the context of information security. However, this property is outside the scope of most TEEs. Usually, they can be controlled (shut down, restarted, …) by some higher-level abstraction, such as the CPU itself, the hypervisor, or the kernel. This preserves the overall system's availability, not the TEE's. When running in the cloud, availability is usually guaranteed by the cloud provider in terms of Service Level Agreements (SLAs) and is not cryptographically enforceable.

Confidentiality and Integrity by themselves are only helpful in some cases. For example, consider a TEE running in a remote cloud. How would you know the TEE is genuine and running your intended software? It could be an imposter stealing your data as soon as you send it over.
This fundamental problem is addressed by Attestability. Attestation allows us to verify the identity, confidentiality, and integrity of TEEs based on cryptographic certificates issued by the hardware itself. This feature can also be made available to clients outside of the confidential computing hardware in the form of remote attestation.

TEEs can hold and process information that predates or outlives the trusted environment - across restarts, different versions, or platform migrations. Therefore, Recoverability is an important feature. Data and the state of a TEE need to be sealed before they are written to persistent storage to maintain confidentiality and integrity guarantees. Access to such sealed data needs to be well-defined. In most cases, the unsealing is bound to a TEE's identity, ensuring that recovery can only happen in the same confidential context. This does not have to limit the flexibility of the overall system: AMD SEV-SNP's migration agent (MA) allows users to migrate a confidential virtual machine to a different host system while keeping the security properties of the TEE intact.

Feature comparison

The following sections dive a little deeper into the specific implementations, compare supported features, and analyze their security properties.

AMD SEV

AMD's Secure Encrypted Virtualization (SEV) technologies are a set of features to enhance the security of virtual machines on AMD's server CPUs. SEV transparently encrypts the memory of each VM with a unique key. SEV can also calculate a signature of the memory contents, which can be sent to the VM's owner as an attestation that the initial guest memory was not manipulated. The second generation of SEV, known as Encrypted State or SEV-ES, provides additional protection from the hypervisor by encrypting all CPU register contents when a context switch occurs.
The third generation of SEV, Secure Nested Paging or SEV-SNP, is designed to prevent software-based integrity attacks and reduce the risk associated with compromised memory integrity. The basic principle of SEV-SNP integrity is that if a VM can read a private (encrypted) memory page, it must always read the value it last wrote. Additionally, by allowing the guest to obtain remote attestation statements dynamically, SNP enhances the remote attestation capabilities of SEV.

AMD SEV has been implemented incrementally: new features and improvements have been added with each new CPU generation. The Linux community makes these features available as part of the KVM hypervisor and for host and guest kernels. The first SEV features were discussed and implemented in 2016 - see AMD x86 Memory Encryption Technologies from the 2016 Usenix Security Symposium. The latest big addition was SEV-SNP guest support in Linux 5.19. Confidential VMs based on AMD SEV-SNP have been available in Microsoft Azure since July 2022. Similarly, Google Cloud Platform (GCP) offers confidential VMs based on AMD SEV-ES.

Intel SGX

Intel's Software Guard Extensions (SGX) have been available since 2015 and were introduced with the Skylake architecture. SGX is an instruction set that enables users to create a protected and isolated process called an enclave. It provides a reverse sandbox that protects enclaves from the operating system, firmware, and any other privileged execution context. The enclave memory cannot be read or written from outside the enclave, regardless of the current privilege level and CPU mode. The only way to call an enclave function is through a new instruction that performs several protection checks. Enclave memory is encrypted: tapping the memory or connecting the DRAM modules to another system will yield only encrypted data. The memory encryption key randomly changes every power cycle. The key is stored within the CPU and is not accessible.
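Whether a host CPU advertises either of these extensions can be checked from its Linux feature flags; a minimal sketch (the flag names "sev" and "sgx" are the ones the kernel exposes in /proc/cpuinfo):

```shell
# Check /proc/cpuinfo for the sev (AMD) and sgx (Intel) feature flags.
# Most desktop machines and ordinary VMs will report neither.
for flag in sev sgx; do
  if grep -qw "$flag" /proc/cpuinfo 2>/dev/null; then
    echo "$flag: advertised by the CPU"
  else
    echo "$flag: not advertised"
  fi
done
```

Note that an advertised flag only means the silicon supports the feature; firmware and kernel configuration still determine whether it is usable.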
Since enclaves are process isolated, the operating system's libraries are not usable as is; therefore, SGX enclave SDKs are required to compile programs for SGX. This also implies that applications need to be designed and implemented with the trusted/untrusted isolation boundaries in mind. On the other hand, applications get built with a very minimal TCB. An emerging approach to easily transition to process-based confidential computing, and to avoid the need to build custom applications, is to utilize library OSes. These OSes facilitate running native, unmodified Linux applications inside SGX enclaves. A library OS intercepts all application requests to the host OS and processes them securely without the application knowing it's running in a TEE.

The 3rd generation Xeon CPUs (aka Ice Lake Server - "ICX") and later generations switched to a technology called Total Memory Encryption - Multi-Key (TME-MK) that uses AES-XTS, moving away from the Memory Encryption Engine that the consumer and Xeon E CPUs used. This increased the possible enclave page cache (EPC) size (up to 512GB/CPU) and improved performance. More info about SGX on multi-socket platforms can be found in the whitepaper. A list of supported platforms is available from Intel. SGX is available on Azure, Alibaba Cloud, IBM, and many more.

Intel TDX

Where Intel SGX aims to protect the context of a single process, Intel's Trust Domain Extensions (TDX) protect a full virtual machine and are, therefore, most closely comparable to AMD SEV. As with SEV-SNP, guest support for TDX was merged in Linux kernel 5.19. However, hardware support will land with Sapphire Rapids during 2023: Alibaba Cloud provides invitational preview instances, and Azure has announced its TDX preview opportunity.

Overhead analysis

The benefits that Confidential Computing technologies provide via strong isolation and enhanced security for customer data and workloads are not free.
Quantifying this impact is challenging and depends on many factors: the TEE technology, the benchmark, the metrics, and the type of workload all have a huge impact on the expected performance overhead.

Intel SGX-based TEEs are hard to benchmark, as shown by different papers. The chosen SDK/library OS, the application itself, and the resource requirements (especially large memory requirements) have a huge impact on performance. A single-digit percentage overhead can be expected if an application is well suited to run inside an enclave.

Confidential virtual machines based on AMD SEV-SNP require no changes to the executed program and operating system and are a lot easier to benchmark. A benchmark from Azure and AMD shows that SEV-SNP VM overhead is <10%, sometimes as low as 2%. Although there is a performance overhead, it should be low enough to enable real-world workloads to run in these protected environments and improve the security and privacy of our data.

Confidential Computing compared to FHE, ZKP, and MPC

Fully Homomorphic Encryption (FHE), Zero Knowledge Proofs/Protocols (ZKP), and Multi-Party Computation (MPC) are all forms of encryption or cryptographic protocols that offer similar security guarantees to Confidential Computing without requiring hardware support.

Fully (also partially and somewhat) homomorphic encryption allows one to perform computations, such as addition or multiplication, on encrypted data. This provides the property of encryption in use but does not provide integrity protection or attestation like Confidential Computing does. Therefore, the two technologies can complement each other.

Zero Knowledge Proofs or Protocols are a privacy-preserving technique (PPT) that allows one party to prove facts about its data without revealing anything else about the data. ZKP can be used instead of, or in addition to, Confidential Computing to protect the privacy of the involved parties and their data.
Similarly, Multi-Party Computation enables multiple parties to work together on a computation, i.e., each party contributes its data to the result without leaking it to any other party.

Use cases of Confidential Computing

The presented Confidential Computing platforms show that both the isolation of a single container process (and, therefore, minimization of the trusted computing base) and the isolation of a full virtual machine are possible. This has already enabled a lot of interesting and secure projects to emerge:

Confidential Containers

Confidential Containers (CoCo) is a CNCF sandbox project that isolates Kubernetes pods inside of confidential virtual machines. CoCo can be installed on a Kubernetes cluster with an operator. The operator creates a set of runtime classes that can be used to deploy pods inside an enclave on several different platforms, including AMD SEV, Intel TDX, Secure Execution for IBM Z, and Intel SGX. CoCo is typically used with signed and/or encrypted container images, which are pulled, verified, and decrypted inside the enclave. Secrets, such as image decryption keys, are conditionally provisioned to the enclave by a trusted Key Broker Service that validates the hardware evidence of the TEE prior to releasing any sensitive information.

CoCo has several deployment models. Since the Kubernetes control plane is outside the TCB, CoCo is suitable for managed environments. CoCo can be run in virtual environments that don't support nesting with the help of an API adaptor that starts pod VMs in the cloud. CoCo can also be run on bare metal, providing strong isolation even in multi-tenant environments.

Managed confidential Kubernetes

Azure and GCP both support the use of confidential virtual machines as worker nodes for their managed Kubernetes offerings. Both services aim for better workload protection and security guarantees by enabling memory encryption for container workloads.
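With CoCo specifically, confidential execution is requested per pod via one of the runtime classes the operator creates. A minimal, hypothetical sketch (the class name "kata-qemu-sev" and the image reference are assumptions; the actual names depend on which platforms your CoCo installation enables):

```shell
# Hypothetical pod spec selecting a CoCo runtime class. The runtime class
# name and image registry are placeholders, not guaranteed values.
cat > /tmp/coco-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: confidential-demo
spec:
  runtimeClassName: kata-qemu-sev
  containers:
  - name: app
    image: registry.example.com/app:encrypted
EOF

# With a cluster and the CoCo operator in place, this would be deployed with:
#   kubectl apply -f /tmp/coco-pod.yaml
grep 'runtimeClassName' /tmp/coco-pod.yaml
```

The only CoCo-specific part of the spec is the runtimeClassName field; everything else is a standard Kubernetes pod definition.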
However, they don't seek to fully isolate the cluster or workloads against the service provider or infrastructure. Specifically, they don't offer a dedicated confidential control plane or expose attestation capabilities for the confidential cluster/nodes. Azure also enables Confidential Containers in their managed Kubernetes offering, supporting creation based on Intel SGX enclaves and AMD SEV-based VMs.

Constellation

Constellation is a Kubernetes engine that aims to provide the best possible data security. Constellation wraps your entire Kubernetes cluster into a single confidential context that is shielded from the underlying cloud infrastructure. Everything inside is always encrypted, including at runtime in memory. It shields both the worker and control plane nodes. In addition, it already integrates with popular CNCF software such as Cilium for secure networking and provides extended CSI drivers to write data securely.

Occlum and Gramine

Occlum and Gramine are examples of open source library OS projects that can be used to run unmodified applications in SGX enclaves. They are member projects under the CCC, but similar projects and products maintained by companies also exist. With these libOS projects, existing containerized applications can be easily converted into confidential-computing-enabled containers. Many curated, prebuilt containers are also available.

Where are we today? Vendors, limitations, and FOSS landscape

As we hope you have seen from the previous sections, Confidential Computing is a powerful new concept to improve security, but we are still in the (early) adoption phase. New products are starting to emerge to take advantage of its unique properties. Google and Microsoft are the first major cloud providers to offer confidential offerings that can run unmodified applications inside a protected boundary.
Still, these offerings are limited to compute, while end-to-end solutions for confidential databases, cluster networking, and load balancers have to be self-managed. These technologies provide opportunities to bring even the most sensitive workloads into the cloud and enable them to leverage all the tools in the CNCF landscape.

Call to action

If you are currently working on a high-security product that struggles to run in the public cloud due to legal requirements, or are looking to bring the privacy and security of your cloud-native project to the next level: reach out to all the great projects we have highlighted! Everyone is keen to improve the security of our ecosystem, and you can play a vital role in that journey.

Confidential Containers
Constellation: Always Encrypted Kubernetes
Occlum
Gramine
CCC also maintains a list of projects

View the full article
  10. OpenShift Virtualization provides a great solution for non-containerized applications, but it does introduce some challenges over legacy virtualization products and bare-metal systems. One such challenge involves interacting with virtual machines (VMs). OpenShift is geared toward containerized applications that do not usually need incoming connections to configure and manage them, at least not the same type of connections as a VM would need for management or use. View the full article
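One common way to get a management connection into such a VM is the virtctl client from the KubeVirt project, which OpenShift Virtualization is based on. A hedged sketch (the VM name "my-vm" is a placeholder, and a configured cluster with the virtualization operator is assumed):

```shell
# Attach to the serial console of a cluster-managed VM, if virtctl is
# available. "my-vm" is a placeholder VM name, not a real resource.
if command -v virtctl >/dev/null 2>&1; then
  virtctl console my-vm || echo "could not reach VM my-vm"
else
  echo "virtctl is not installed"
fi
```

This gives VM owners a console path analogous to `kubectl exec` for containers, without exposing the VM via a Service or NodePort.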