Installing Proxmox VE on a Spare PC.
Or: Setting Up the Foundation for a Homelab.

TL;DR.
This post is a comprehensive walk-through of how I install PVE (Proxmox Virtual Environment) on a spare PC. I cover the step-by-step installation process and the tips I use to optimize the virtual environment. It is aimed at tech enthusiasts who want to maximize their Homelab by setting up a robust virtualization platform that supports both containers and virtual machines.
Attributions:
A post from Proxmox on setting up a Proxmox VE virtual machine, and
A video from Tailscale about installing Proxmox VE on a PC.
An Introduction.
Containers and virtual machines are technologies that isolate operating systems and applications within a runtime environment. Depending on the hardware specifications, PVE allows multiple containers and virtual machines to run on a single PC.
The purpose of this post is to demonstrate how I install PVE and create a container.
The Big Picture.
This guide covers the prerequisites, the step-by-step installation process, and the tips I use to optimize my virtual environment setup. PVE is perfect for tech enthusiasts looking to maximize their Homelab PCs.
Prerequisites.
A Debian-based Linux distro (I use Ubuntu),
A USB Thumb Drive.
Updating my Base System.
- From the (base) terminal, I update my (base) system:
sudo apt clean && \
sudo apt update && \
sudo apt dist-upgrade -y && \
sudo apt --fix-broken install && \
sudo apt autoclean && \
sudo apt autoremove -y
NOTE: The Ollama LLM manager is already installed on my (base) system.
What are the Specs for my Spare PC?
An Intel NUC (Next Unit of Computing) is a small-form-factor computer designed by Intel, which offers a compact and powerful computing solution. This PC typically comes without RAM, storage, or an operating system, allowing me to customize the hardware according to my needs.
NUC Specifications.
| Model | BXNUC10i3FNHN |
| Processor | Intel i3-10110U 2.10GHz Dual Core, 4 Threads, Up to 4.10GHz, 4MB SmartCache |
| Memory | Dual Channel, 2x DDR4-2666 SODIMM slots, 1.2V |
| Graphics | Intel UHD Graphics, 1x HDMI 2.0a Port, 1x USB 3.1 Gen 2 (10 Gbps), DisplayPort 1.2 via USB-C |
| Audio | Up to 7.1 surround audio via HDMI or DisplayPort signals, Headphone/microphone jack on the front panel, dual array front mics on the chassis front |
| Peripheral Connectivity | 1x HDMI 2.0 Port with 4K at 60Hz, 1x USB 3.1 Gen 2 (10 Gbps), DisplayPort 1.2 via USB-C, 1x Front USB 3.1 Type A (Gen 2) Port, 1x Front USB 3.1 Type-C (Gen 2) Port, 2x Rear USB 3.1 Type A (Gen 2), 2x Ethernet Ports, 2x Internal USB 2.0 via header |
| Bluetooth | |
| Storage | 1x M.2 22x42/80 (key M) slot for SATA3 or PCIe X4 Gen3 NVMe, SATA Interface, SDXC slot with UHS-II support |
| Networking | Intel Wi-Fi 6 AX201, Bluetooth, i219-V Gigabit Ethernet |
| Power Adapter | 19VDC Power Adapter |
Hardware Specifications.
| Storage | 256GB M.2 internal (50GB/CT), 256GB SSD internal, 2TB HDD external |
| Memory | 64GB (12288/CT, 2048/Swap) |
| OS | A modified Debian LTS kernel running under PVE |
What is PVE?
PVE (Proxmox Virtual Environment) is an open-source virtualization platform designed for setting up hyper-converged infrastructure and, under the GNU AGPLv3 license, can be used for commercial purposes. It lets me deploy and manage containers and virtual machines. PVE is built on Debian with a modified LTS (Long Term Support) kernel, and supports two types of virtualization: containers with LXC (Linux Containers) and virtual machines with KVM (Kernel-based Virtual Machine). PVE features a web-based management interface, and there is also a mobile app for managing PVE nodes.
Creating a PVE Installation Thumb Drive.
I download the PVE ISO file from https://proxmox.com/en/downloads/proxmox-virtual-environment/iso.
I grab a 32GB thumb drive and label it.

I plug the thumb drive into my PC.
I start the balenaEtcher-1.14.3-x64.AppImage imaging utility that runs on Ubuntu.
NOTE: There are versions of Balena Etcher for Windows, macOS, and (x64 & x86) Linux.
- I select the 1.57GB ISO file as the source, the 32GB thumb drive as the target, and then I click the blue Flash button.

- After the ISO has been successfully flashed onto the thumb drive, I eject the thumb drive and remove it from my PC.
Installing PVE.
NOTE: My PVE setup uses three drives connected directly to the NUC. I have an internal 256GB M.2 drive that uses the NVMe interface labelled prox-int-nvme, an internal 256GB SSD that uses the SATA interface, split into 2 × 128GB partitions labelled prox-int-sata1 & prox-int-sata2, and an external 2TB HDD that uses the USB 3.0 interface labelled prox-ext-usb3. These configurations will be altered during the PVE setup process.
I plug the PVE installation thumb drive into the NUC.
I power up the NUC.
I follow the installation instructions.
I use the following network settings that work on my LAN:

NOTE: During the Management Network Configuration setup, I can use IPv4 or IPv6 but I CANNOT mix the two protocols. The Management Interface is the name of the NIC (Network Interface Card) installed in the NUC which, in my case, is eno1. The Hostname (FQDN) only matters if I intend to host PVE over the Internet. The IP Address (CIDR) is 192.168.0.60/24; this is a static address that PVE claims for itself, so it should not collide with the addresses my router hands out. The Gateway is the IP address of my router, 192.168.0.1, and is needed to connect PVE to the LAN. The DNS Server is also 192.168.0.1 because my router answers DNS queries for the LAN.
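After the installer applies these settings, they land in /etc/network/interfaces on the PVE host. On my install the result looks roughly like this (a sketch; vmbr0 is the default Linux bridge PVE creates over eno1):

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.60/24
        gateway 192.168.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

Containers attach their virtual NICs to vmbr0, which is why they can later take their own addresses on the same 192.168.0.0/24 network.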
After installation, the spare PC will reboot.
At this time, I remove the USB installation thumb drive.
At the login screen, I make a note of the PVE IP address and port number that is displayed (the web UI listens on port 8006 by default).
On a PC connected to the same network as PVE, I open a browser, visit that IP address and port, and bookmark the address.
At the browser login screen, my username is ‘root’ and my password is the one I set during the installation.
From the terminal, I SSH into PVE with root@ip_address and password.
A Note about Routers and DHCP Servers.
My router has 2 jobs:
Connect to an ISP (Internet Service Provider) that, in turn, provides access to the Internet, and
Route that connection to all the wired, and wireless, devices that share the LAN (Local Area Network).
As the name suggests, Internet connectivity is routed to all of the linked devices in the LAN. There are many devices, like smart phones, tablets, PCs, notebooks, and others, that use an Internet connection to improve their functionality. Many devices, like smart TVs, require that connectivity.
The problem is that all the devices that connect to the LAN require unique identifiers. These identifiers are called IP addresses. But where does a device get an IP address? To solve the IP address problem, my router has a built-in DHCP server where DHCP stands for Dynamic Host Configuration Protocol. Almost all routers have a DHCP server and the purpose of this server is to assign a dynamic IP address to every wired, and wireless, device in the LAN.
In most cases, each device in the LAN is dynamically, i.e. automatically, assigned an IP address from a pool of available, unassigned addresses. Most often, devices will use the same dynamic IP addresses when they connect to the LAN, but sometimes the DHCP server will issue a new IP address. This is a fine solution and is NOT a problem. In most cases.
Servers, however, are special cases. PVE, as well as the containers and virtual machines it manages, requires static IP addresses. My NUC 10 will need static IP addresses if it, and its containers, are to be reliably reachable on my LAN. The reason I need IP addresses that DO NOT CHANGE is that I will set up a Kubernetes cluster, and the nodes need to be able to find one another. (Setting up a cluster is beyond the scope of this post.)
Replacing dynamic IP addresses with static IP addresses requires:
Accessing my router and making changes to the DHCP settings for each container and virtual machine (which is also beyond the scope of this post), and
Reflecting those changes to each container and virtual machine running on PVE.
The PVE Server.
PVE (Proxmox Virtual Environment) is the server that hosts the containers and virtual machines I deploy.
Accessing PVE.
- I use a browser to login to PVE:

On the left of the screen, I go to Datacenter > nuclab60.
In the 2nd pane, I click ‘Shell‘.
The Helper Script.
In a new browser tab, I visit http://helper-scripts.com/.
I search for ‘pve post install‘.

- I copy the script command.
Running the Helper Script.
- Back in the Shell for nuclab60, I run the helper script command from the terminal:

NOTE: I answer ‘yes’ to MOST of the questions when asked but there are 3 exceptions, as listed below.
- I answer ‘no’ to ‘Disable high availability?’:

NOTE: High availability will be used when other nodes are created.
- I answer ‘no’ to ‘Update Proxmox VE now?‘:

NOTE: I will update PVE manually later in this post.
- I answer ‘no’ to ‘Reboot Proxmox VE now?‘:

NOTE: I will reboot once I finish updating the remote system and upgrading PVE.
- Once the script has finished, I update the system:

NOTE: The ‘sudo’ command is not required as the root account has full privileges.
- Once the updates have been downloaded, I run the ‘pveupgrade‘ command to update the system and the PVE installation:

- Due to the installation of a kernel update, I need to reboot PVE:

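Collected in one place, the post-script commands above look like this. I print them here for review (they run as root on the PVE host, so no sudo is needed), and the reboot only matters when a new kernel has landed:

```shell
# Printed for review -- on the PVE host these run directly as root.
update="apt update && apt dist-upgrade"   # refresh package lists and stage upgrades
upgrade="pveupgrade"                      # PVE wrapper that applies the upgrade and flags kernel changes
restart="reboot"                          # required after a kernel update
echo "$update"; echo "$upgrade"; echo "$restart"
```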
Preparing PVE.
Before I can create any containers or virtual machines, I need to prepare PVE and the assets it will use.
Downloading an OS to PVE.
I use a browser to login to PVE.
On the left of the screen, under Server View, I go to Datacenter > nuclab60 > local (nuclab60).
In the 2nd pane, I click ISO Images.
In the 3rd pane, I click the grey Download from URL button:

In the pop-up modal, I add https://releases.ubuntu.com/24.04.3/ubuntu-24.04.3-live-server-amd64.iso to the URL: field so that PVE can download the ISO for Ubuntu Server 24.04.3 LTS.
I click the blue Query URL button to check the link:

- I click the blue Download button to start the download:

- I close the modal once I receive the ‘TASK OK’ message:

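The same download can also be done from the PVE shell instead of the web UI; on a default install, ISO images live under /var/lib/vz/template/iso. A sketch, printed for review rather than executed here:

```shell
# Fetch the Ubuntu Server ISO straight into PVE's ISO directory
# (default path -- adjust if your storage layout differs).
iso_url="https://releases.ubuntu.com/24.04.3/ubuntu-24.04.3-live-server-amd64.iso"
cmd="wget -P /var/lib/vz/template/iso $iso_url"
echo "$cmd"
```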
Preparing the Disks.
NOTE: The following is adapted from the instructions provided by the PVE team.
I use a browser to login to PVE.
On the left of the screen, under Server View, I go to Datacenter > pve.
In the 2nd pane, I click Disks.
In the 3rd pane, I select the /dev/sda drive (that currently has 2 partitions).

I click the grey Wipe Disk button.
In the Confirm modal, I click the blue Yes button.

I repeat the process for the /dev/sdb disk.
Back in the 2nd pane, I click Disks > ZFS.
In the 3rd pane, I click the grey Create: ZFS button:

- In the Create: ZFS modal, I add the following details and then click the blue ‘Create‘ button:

NOTE: The /dev/sdb disk is the external 2TB HDD that uses the USB 3.0 interface.
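Under the hood, the Create: ZFS modal builds a zpool from the selected disks and registers it as PVE storage. A rough CLI sketch, assuming my pool name (zfs-disk) and a plain striped pool over both wiped disks (pick a RAID level to suit your redundancy needs); printed for review:

```shell
# Sketch only -- run as root on the PVE host after double-checking device names.
make_pool="zpool create zfs-disk /dev/sda /dev/sdb"   # stripe across both disks (no redundancy)
register="pvesm add zfspool zfs-disk -pool zfs-disk"  # expose the pool to PVE as storage
echo "$make_pool"; echo "$register"
```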
Installing a Container Template.
I use a browser to login to PVE.
On the left of the screen, under Server View, I go to Datacenter > pve > local (pve).
In the 2nd pane, I click CT Templates.
In the 3rd pane, I click the grey Templates button.

- In the Templates modal, I select the ubuntu-24.04-standard template and click the blue Download button:

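PVE also ships a CLI for templates, pveam. A sketch of the equivalent steps (the exact template filename changes over time, so the one below is an assumption; 'pveam available' shows the current name), printed for review:

```shell
refresh="pveam update"                          # refresh the template catalogue
list="pveam available --section system"         # list downloadable system templates
fetch="pveam download local ubuntu-24.04-standard_24.04-2_amd64.tar.zst"  # filename is an assumption
echo "$refresh"; echo "$list"; echo "$fetch"
```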
Now that all of the requirements are in place, I can take the next step by clicking the blue Create CT button (top-right of the screen), following the resulting prompts, and creating a container.
Creating a New Container.
This container is built to be cloned. As such, there is a lot of effort that goes into building it.
“Create CT.”
I use a browser to login to PVE.
At the top-right of the screen, I click the blue ‘Create CT‘ button:

- I add details to the ‘General’ tab, then I click the blue ‘Next’ button:

- I select the ‘Template:’, then I click the blue ‘Next’ button:

- I select the ‘Storage:‘ (zfs-disk), set the size (56GB), then I click the blue ‘Next’ button:

- I leave the ‘Cores:‘ set to 1, then I click the blue ‘Next‘ button:

- I set the ‘Memory (MiB):’ (12288), the ‘Swap (MiB):’ (4096), then I click the blue ‘Next‘ button:

- I set the ‘IPv4/CIDR:’, ‘Gateway (IPv4):’, then I click the blue ‘Next’ button:

- I leave the DNS settings blank, then I click the blue ‘Next’ button:

- I check my settings, then I click the blue ‘Finish’ button:

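The whole wizard collapses into a single pct create call. A sketch with my values (the template filename is an assumption; check yours with 'pveam list local'); printed for review rather than executed:

```shell
# Sketch only -- run as root on the PVE host.
cmd="pct create 101 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst --hostname nuclab61 --rootfs zfs-disk:56 --cores 1 --memory 12288 --swap 4096 --net0 name=eth0,bridge=vmbr0,ip=192.168.0.61/24,gw=192.168.0.1"
echo "$cmd"
```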
“Start at boot.”
I use a browser to login to PVE.
On the left of the screen, under Server View, I go to Datacenter > nuclab60 > 101 (nuclab61).
In the 2nd pane, I click Options.
In the 3rd pane, I select “Start at boot“ from the list and click the gray Edit button.
In the “Edit: Start at boot“ modal, I tick the “Start at boot” option, then click the blue OK button:

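The CLI equivalent of this Options toggle is a single pct set flag; a sketch with my container ID, printed for review:

```shell
cmd="pct set 101 --onboot 1"   # container 101 starts when the PVE host boots
echo "$cmd"
```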
Creating a User Account for the Container.
I use a browser to login to PVE.
On the left of the screen, under Server View, I go to Datacenter > nuclab60 > 101 (nuclab61).
I start the container.
In the 2nd pane, I click Console.
In the 3rd pane, I login to the container using root as the nuclab61 login: and the password I created while building the container.

- Once logged in, I create a new user account:
adduser brian
- I add the new user to the 'sudo' group:
usermod -aG sudo brian
- I log out of the root account for the container:
logout
- Towards the top-right of the console window, I open the drop-down menu of the Shutdown option by clicking the down arrow (⌄) and selecting Reboot:

Once the reboot is complete, I switch to a terminal on my local PC.
Creating an RSA Key Pair on the Local PC.
- From a terminal (CTRL+ALT+T) on my local PC, I start the ssh-agent:
eval "$(ssh-agent -s)"
- I generate a pair of RSA keys called "/home/brian/.ssh/key-name" (where I replace "key-name" with the name of the remote server):
ssh-keygen -t rsa -b 4096 -f /home/brian/.ssh/nuclab61
NOTE: It is my convention to name RSA keys after the remote server on which they will be used.
- I add the SSH key to my workstation account (where I replace "key-name" with the actual name of the ssh key):
ssh-add /home/brian/.ssh/nuclab61
Uploading the Public Key to the Remote Container.
- From the workstation terminal (CTRL+ALT+T), I use "ssh-copy-id" to upload the locally-generated public key to the remote container:
ssh-copy-id -i /home/brian/.ssh/nuclab61.pub brian@192.168.0.61
SSH Folder and File Permissions.
- Change the permission for the .ssh folder:
chmod 0700 /home/brian/.ssh
- Change the permission for the private key:
chmod 0600 /home/brian/.ssh/nuclab61
- Change the permission for the public key:
chmod 0644 /home/brian/.ssh/nuclab61.pub
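The permission scheme above can be rehearsed end to end on a throwaway directory, which makes it safe to run anywhere (substitute ~/.ssh and your real key names for actual use):

```shell
# Demonstration on a temp directory; the two touched files stand in for a real key pair.
dir="$(mktemp -d)"
touch "$dir/nuclab61" "$dir/nuclab61.pub"
chmod 0700 "$dir"               # only the owner may enter the key directory
chmod 0600 "$dir/nuclab61"      # private key: owner read/write only
chmod 0644 "$dir/nuclab61.pub"  # public key: world-readable is fine
ls -ld "$dir" "$dir/nuclab61" "$dir/nuclab61.pub"
```

If the private key is readable by anyone else, ssh refuses to use it, which is why these exact modes matter.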
SPECIAL NOTES
The "Permission denied (publickey)" error indicates that your SSH connection was rejected because the server could not authenticate your public key. To resolve this, ensure your public key is correctly added to your account on the server and that your private key has the correct file permissions.
Understanding the "Permission Denied (publickey)" Error
The "Permission denied (publickey)" error occurs when your SSH client cannot authenticate with the server using the provided public key. This can happen for several reasons.
Common Causes and Solutions
1. Incorrect SSH Key Configuration
Public Key Not Added: Ensure your public key is added to the server's ~/.ssh/authorized_keys file.
Key Format: Verify that the key is in the correct format and not corrupted.
2. SSH Key Permissions
File Permissions: The permissions for your SSH keys must be set correctly:
Private key: chmod 600 ~/.ssh/id_rsa
Public key: chmod 644 ~/.ssh/id_rsa.pub
.ssh directory: chmod 700 ~/.ssh
3. SSH Agent Issues
SSH Agent Not Running: Start the SSH agent with eval "$(ssh-agent -s)".
Key Not Loaded: Add your private key to the agent using ssh-add ~/.ssh/id_rsa.
4. Connection User
- Correct User: Always connect using the "git" user for GitHub or the appropriate user for your server. For example, use ssh -T git@github.com.
Additional Troubleshooting Steps
Verbose Mode: Use ssh -v user@host to get detailed output about the connection process. This can help identify where the failure occurs.
Firewall or Network Issues: Ensure that your network allows SSH connections and that the server's firewall is not blocking your access.
By following these steps, you should be able to resolve the "Permission denied (publickey)" error and successfully connect to your server.
Logging In to the Remote Container.
- From the terminal (CTRL + ALT + T), I login to the account of the remote server:
ssh -i /home/brian/.ssh/nuclab61 'brian@192.168.0.61'
- For ‘Too many authentication failures‘, use the following:
ssh -o IdentitiesOnly=yes brian@192.168.0.61
Preparing the Container.
The next step is to prepare the container for cloning.
Updating the Container.
- I update Ubuntu:
sudo apt clean && \
sudo apt update && \
sudo apt dist-upgrade -y && \
sudo apt --fix-broken install && \
sudo apt autoclean && \
sudo apt autoremove -y
Installing the Unattended Upgrades Utility.
- I install the unattended-upgrades package:
sudo apt install unattended-upgrades
- I manually trigger an Unattended Upgrade:
sudo unattended-upgrade
NOTE: -d is the switch for running this command in debug mode.
- I check the Unattended Upgrades log to ensure everything worked as expected:
sudo cat /var/log/unattended-upgrades/unattended-upgrades.log
Hardening the Container.
- I open the "sshd_config" file:
sudo nano /etc/ssh/sshd_config
- I add (CTRL + V) the following to the bottom of the "sshd_config" page, save (CTRL + S) the changes, and exit (CTRL + X) the Nano text editor:
PasswordAuthentication no
PermitRootLogin no
Protocol 2
- I restart the "ssh" service:
sudo systemctl restart ssh.service
Enabling, and Setting Up, UFW on the Container.
- I check the UFW status:
sudo ufw status
- I enable the UFW:
sudo ufw enable
- I install a UFW rule:
sudo ufw allow from 192.168.0.2
NOTE: I specify the IP address of the PC from which I will connect using SSH.
- I check the status of the UFW and list the rules by number:
sudo ufw status numbered
NOTE 1: UFW will, by default, block all incoming traffic, including SSH and HTTP.
NOTE 2: I will update the UFW rules as I deploy other services to the remote server.
- I can delete a UFW rule by number if needed:
sudo ufw delete 1
- I can also disable UFW if needed:
sudo ufw disable
Installing, and Setting Up, Fail2Ban on the Container.
- I install Fail2Ban:
sudo apt install -y fail2ban
- I copy the jail.conf file as jail.local:
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
- I open the jail.local file in Nano:
sudo nano /etc/fail2ban/jail.local
- I make the following changes to a few (SSH-centric) settings in the jail.local file, then I save (CTRL + S) those changes, and exit (CTRL + X) the Nano text editor:
[DEFAULT]
⋮
bantime = 30m
ignoreip = 127.0.0.1/8 your_ip_address
⋮
[sshd]
enabled = true
port = ssh,22
- I restart Fail2Ban:
sudo systemctl restart fail2ban
- I check the list of active Fail2Ban jails:
sudo fail2ban-client status
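The bare fail2ban-client status shows only the jail list; appending a jail name gives per-jail detail such as failure counts and currently banned IPs. A sketch, printed for review:

```shell
cmd="fail2ban-client status sshd"   # detail for the sshd jail specifically
echo "$cmd"
```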
- I check the status of Fail2Ban:
sudo systemctl status fail2ban
- I enable Fail2Ban to auto-start on boot:
sudo systemctl enable fail2ban
- I reboot the container:
sudo reboot
“Clone.”
A clone is a direct, functional copy of a container that includes all of the settings from the original. After making the clones, I will adjust the settings of each so they will function correctly.
Cloning the Container.
I use a browser to login to PVE.
On the left of the screen, under Server View, I go to Datacenter > nuclab60.
In the second pane, I select “Search”.
In the third pane, I right-click the ‘101 (nuclab61)’ container and, in the pop-up menu, I click the ‘Shutdown’ option if the container is running.
In the third pane, I right-click the ‘101 (nuclab61)’ container and, in the pop-up menu, I click the ‘Clone’ option:

- In the ‘Clone CT 101 (nuclab61)’ modal, I enter the following details, then I click the blue ‘Clone’ button:

- After a moment, a clone of the original container appears under Datacenter > nuclab60:

- I repeat this process two more times to meet my requirements.
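Cloning also has a one-line CLI equivalent; a sketch for the first clone (the IDs and hostname are my choices, and --full makes an independent copy rather than a linked clone), printed for review:

```shell
cmd="pct clone 101 102 --hostname nuclab62 --full"
echo "$cmd"
```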
Network Settings for Each Clone.
I use a browser to login to PVE.
On the left of the screen, under Server View, I go to Datacenter > nuclab60 > 101 (nuclab61).
In the 2nd pane, I click Network.
In the 3rd pane, I select the Network Device and click the gray Edit button:

In the ‘Edit: Network Device’ modal, I:
change the ‘Name:’,
ensure the ‘IPv4:’ radio button is set to ‘Static’,
ensure the ‘IPv4/CIDR:’ setting is correct, and
ensure the ‘Gateway (IPv4):’ setting is correct:

- Once I confirm these settings, I click the blue OK button and repeat the process for the three remaining containers.
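Each clone's network tweak can likewise be scripted with pct set; a sketch for the first clone (the ID and addresses are my values), printed for review:

```shell
cmd="pct set 102 --net0 name=eth0,bridge=vmbr0,ip=192.168.0.62/24,gw=192.168.0.1"
echo "$cmd"
```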
Setting Up the Local Terminal.
The following describes:
How to locally generate an RSA Key Pair for an SSH connection,
Pushing the public key to the container,
Logging in to the remote container,
Updating the OS that is running on the container,
Hardening the container by changing a few settings, and
Installing and enabling security utilities like UFW and Fail2Ban.
NOTE: These operations need to be performed for (and on) each container.
The Results.
Installing PVE (Proxmox Virtual Environment) onto a spare PC results in a compact and efficient solution for creating, and managing, containers and virtual environments. The process involves preparing the installation hardware, creating a USB drive that is used as the installation media, and configuring the system to suit my network and storage needs. By following the steps above, I can set up a robust virtualization platform that supports both containers and virtual machines. This setup not only maximizes the capabilities of the spare PC but also offers flexibility and scalability for various computing tasks.
In Conclusion.
In this guide, I created a USB installation thumb drive for PVE, installed PVE onto a spare PC, learned how to download an OS to PVE, installed CT templates, created a container, created a new account for that container, and cloned that container multiple times. By following these steps, I maximized the capabilities of the spare PC and now enjoy a robust virtualization platform that supports both containers and virtual machines. This setup offers flexibility and scalability for various computing tasks, making it perfect for tech enthusiasts and professionals alike.
Have you tried setting up PVE on a spare PC? What challenges did you face? How did you overcome those challenges? Let's discuss in the comments below!
Until next time: Be safe, be kind, be awesome.
Hash Tags.
#ProxmoxVE #pve #IntelNUC #Virtualization #Homelab #Containers #VirtualMachines #Networking #ServerSetup #ServerCluster #Linux #Debian #Ubuntu #TechGuide





