Proxmox Server

Thanks for explaining it in a way I can understand - finally this term makes a bit more sense.
With cloud becoming more of an ingrained technology, are hypervisors still necessary?
Currently, yes. Cloud is nice, but it's a super expensive recurring cost. Most companies have moved to a hybrid model (half cloud, half on-premise/data centre server hosting) where critical business applications are in the cloud.
 
Noob question, but is Proxmox better than an ESXi server? Or am I getting confused about where Proxmox sits compared to VMware and Hyper-V in the Windows stack?

Is Arcserve something else entirely then?

Total noob questions, but if you don't ask, you will never learn these terms flying around.

In order of best to worst...

VMware -> enterprise business
Hyper-V -> enterprise business
Proxmox -> home labs

Personally I run VMware in my home environment in a 3-node cluster, doing all the bells and whistles such as High Availability, DRS, and Fault Tolerance.
 
How do they compare to Docker? Because you are going to need extensive NAT rules if you want to access them externally.
To be honest I just run a little homelab, so all I have to manage things remotely is a Tailscale LXC that gives me remote access from wherever I may be.

I haven’t dabbled with PVE’s networking bits and bobs just yet but it seems like it can get quite involved.
 
LXC containers are system containers, versus Docker, which is an application container. Different use cases.

Docker containers are designed to be ephemeral and run a single service/app with all of its dependencies.

System containers such as LXC, LXD or Incus are generally intended to be more like traditional VMs, but more lightweight in terms of system resources. They tend to be good for use cases where the lifecycle of the container is longer or more persistent - a DB service or file storage for apps on your Docker Swarm or k8s cluster might be a good example.

Not sure what you mean WRT the NAT rules. I don't see how they are really different in that respect. You can still put them in a DMZ behind your favorite reverse proxy or VPN.
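
To make the distinction concrete, here's a rough sketch of spinning up each kind of container (the container names, VMID, storage and template filename are just examples - check what's in your local template storage):

# Docker: one ephemeral app container, port published on the host
docker run -d --name web -p 8080:80 nginx

# Proxmox LXC: a longer-lived system container with its own init, built from an OS template
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst --hostname files --storage local-lvm --memory 512 --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101

Either way, external access is the same reverse proxy/NAT/VPN story once the container has an IP on your bridge.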
 
Managed to do a few things today:

Set up Proxmox on the HP T530 that I bought off Bobshop for cheap (R850.00).
Added 8GB RAM to replace the 4GB module that it shipped with.
Replaced the 16GB mSATA with a 256GB mSATA drive.
Added my existing node to a cluster, then added the newly built node to the cluster as well.
Realised I forgot to enable KVM Virt in the BIOS, and performance suffered greatly as a result.
Got KVM Virt working, then migrated my services across (2x VM and 3x LXC).
I still need to remove the old Proxmox from the cluster, then destroy the cluster once I am confident all references to the old box are gone.
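
For the virt check and the node removal, something like this should cover it (the node name is hypothetical, and delnode must be run from a node that is staying in the cluster, with the old box powered off):

# non-zero output means the vmx/svm CPU flags are visible, i.e. KVM virt is enabled
egrep -c '(vmx|svm)' /proc/cpuinfo

# remove the old node from the cluster, then sanity-check quorum
pvecm delnode oldnode
pvecm status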


Why did I move from a 6C6T to a 2C2T PC?

I can better utilise the stronger PC, as it's a bit overkill for a node that sits idle for most of its life. Plus, now the closet is completely silent, as my router (Mikrotik hEX S) and work gateway (Meraki MX68) are both fanless as well.
 
PBS is up and running. PBS wouldn't work too well with iVentoy, but a normal Ventoy USB did the trick.

 
Nothing too crazy on mine.
HAOS and InfluxDB for collecting stats.

I do want to do a network-wide ad blocker at some point so I need to set that up when I'm not lazy.
 

Pi-hole is quite easy to set up, especially when you then add Unbound into the mix.
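
For reference, the usual recipe is to run Unbound as a local recursive resolver on port 5335 and point Pi-hole's upstream DNS at it - roughly this, trimmed down from the config in Pi-hole's docs:

# /etc/unbound/unbound.conf.d/pi-hole.conf
server:
    interface: 127.0.0.1
    port: 5335
    do-ip4: yes
    do-udp: yes
    do-tcp: yes
    harden-glue: yes
    harden-dnssec-stripped: yes
    prefetch: yes

Then set Pi-hole's custom upstream DNS to 127.0.0.1#5335 and untick the public upstreams.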
 
Moved my install base to an Optiplex 3000 Thin Client, and oh boy, what a disaster it was.

Got some errors saying the node I was moving to (10.0.0.2) had a link ID of 10.0.0.4. Faffed around in the corosync settings, fixed it (or so I thought), then ended up making the actual IP 10.0.0.4, and the link ID changed to 10.0.0.2! WTF? This was causing cluster issues, so my new node (the Optiplex) couldn't join.
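
(For anyone hitting the same thing: the per-link address lives in /etc/pve/corosync.conf, one entry per node, something like the below - node name hypothetical - and you're supposed to bump config_version in the totem section whenever you hand-edit it.)

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.0.2   # the address corosync uses for link0
  }
}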

Thought let me rather just nuke the cluster and start over. The GUI doesn't support this, but you can do it via the CLI. Did the deed, rebooted the node and lo and behold, no more cluster. Also, no more VMs or LXCs... all were MIA.
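
For the record, the CLI route is roughly this sequence from the Proxmox docs - and since guest configs also live under /etc/pve, back those up before you start:

systemctl stop pve-cluster corosync
pmxcfs -l                     # restart the cluster filesystem in local mode
rm /etc/pve/corosync.conf
rm -rf /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster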

Scrounged through the folders and found references to my missing LXCs and VMs, but the actual stuff was gone. At least the cluster was destroyed, though. I then created another cluster on the Optiplex and joined the old node to it, which worked. I then restored all my VMs and LXCs from my 3AM backup and migrated them to the new node. I believe there is a way to restore backups straight to the other node, but it also involves some CLI faffery and I didn't want to FAFO more than I already had.
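
The direct restore is less scary than it sounds, for what it's worth - copy the backup file over (or use shared storage) and restore it on the new node with something like this (VMIDs, paths and storage name are placeholders):

# VM backup -> new node
qmrestore /mnt/backups/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-lvm

# LXC backup -> new node
pct restore 101 /mnt/backups/vzdump-lxc-101-<timestamp>.tar.zst --storage local-lvm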

I think from installing PVE on the Optiplex to having all my services up took around 2.5 hours. One caveat: as I moved Unifi from a system running an 8500T to one running an N5105, MongoDB wouldn't start, as the N5105 is missing AVX instructions. It took another half an hour of messing about with previous versions, but I eventually found that Mongo 4.2 worked, though it did nuke my Unifi settings. Thankfully, I had backups of these as well and all was right in the world :)
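
Worth checking before a move like that - a quick way to see whether a CPU advertises AVX (prints nothing on the N5105):

grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u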

 
Aaah, the life of homelabbing - it's a never-ending cycle of upgrades and fixes.
 
It's fun though, mainly because if stuff gets lost or destroyed it's no big deal.

If I try that at work, I will have a very short work week ahead of me :popcorn:

My next step will be to set up the Tiny as a backup server, and have nightly backups run for replication and DR purposes. I don't need off-site backups, but having something to fall back on if the SSD conks out would be handy.

I might also do Wake on Timer and have it boot up at 00:50, have the backups run at 01:00, and then shut down once complete so the Tiny doesn't have to run 24/7 for no reason.
 
Right, so my nightly setup is working nicely:

at 00:55, a WOL script on my Tik wakes up the Proxmox Backup Server
at 01:00, backups kick off from PVE to PBS, with a verify of existing files every three days
at 01:15, PBS runs Garbage Collection
at 01:20, it runs a Prune job
at 01:30, a script in crontab shuts PBS down again
at 02:00, PVE kicks off local backups to itself
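
For anyone wanting to copy this: the Tik side is just a RouterOS scheduler firing a WOL packet, and the shutdown is a root crontab entry on PBS - something like the below (MAC address and interface are placeholders):

# MikroTik (RouterOS): wake PBS at 00:55 every day
/system scheduler add name=wake-pbs start-time=00:55:00 interval=1d on-event="/tool wol mac=AA:BB:CC:DD:EE:FF interface=bridge"

# on PBS, in root's crontab (crontab -e): shut down at 01:30
30 1 * * * /usr/sbin/shutdown -h now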
 
Y'all got lots of money for power, I see.

Chuck it all on a Raspberry Pi with Docker and let them all fight for CPU time, YOLO.
 
Bit of a side quest, but found this kinda cool - "Synchronize Pi-hole v6.x configuration to replicas."


Will follow this once I get v6 up and running - I tried it in the first few days after release and it wasn't a great experience.
 