I’m still running a 6th-generation Intel CPU (i5-6600K) on my media server, with 64GB of RAM and a Quadro P1000 for the rare 1080p transcoding needs. Windows 10 is still the OS from its days as a gaming PC, and I want to switch to Linux. I’m a casual Linux user on my personal machine, and I also use OpenWRT on my network hardware.

Here are the few features I need:

  • MergerFS with a RAID option for drive redundancy. I use multiple 12TB drives right now and have my media types separated between each. I’d like to have one pool that I can be flexible with space between each share.
  • Docker for *arr/media downloaders/RSS feed reader/various FOSS tools and gizmos.
  • I’d like to start working with Home Assistant. Installing with WSL hasn’t worked for me, so switching to Linux seems like the best option for this.

Guides like Perfect Media Server say that Proxmox is better than a traditional distro like Debian/Ubuntu, but I’m concerned about performance on my 6600k. Will LXCs and/or a VM for Docker push my CPU to its limits? Or should I do standard Debian or even OpenMediaVault?

I’m comfortable learning Proxmox and its intricacies, especially if I can move my Windows 10 install into a VM as a failsafe while building a storage pool with new drives.

  • catloaf@lemm.ee · 21 points · 11 days ago

    Proxmox is Debian under the hood. It’s just a qemu and lxc management interface.

    • Justin@lemmy.jlh.name · +4/−14 · 11 days ago

      Yeah, and qemu and lxc are very much legacy at this point. Stick with docker/podman/kubernetes for containers.

      • sugar_in_your_tea@sh.itjust.works · 3 points · 11 days ago

        Agreed.

        I run podman w/ rootless containers, and it works pretty well. Podman is extra nice in that it has decent support for kubernetes, so there’s a smooth transition path from podman -> kubernetes if you ever want/need it. Docker works well too, and docker compose is pretty simple to get into.

        • Justin@lemmy.jlh.name · 2 points · 11 days ago

          Yeah, Kubernetes is more automated and expandable, but docker compose has a ton of good examples and it’s really easy to get into as a beginner.
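
          To show how low the barrier is, a single-service compose file is only a few lines. This is just an illustrative sketch — the image, ports, and paths are examples to adapt:

          ```yaml
          # compose.yaml -- minimal single-service sketch; image/paths are examples
          services:
            sonarr:
              image: lscr.io/linuxserver/sonarr:latest
              ports:
                - "8989:8989"          # web UI
              volumes:
                - ./config:/config     # app config lives next to the compose file
                - /mnt/pool/tv:/tv     # media library on the storage pool
              restart: unless-stopped  # come back up after a reboot
          ```

          Then `docker compose up -d` in that directory and you’re running.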

          • sugar_in_your_tea@sh.itjust.works · 5 points · 11 days ago

            Kubernetes is also designed for clustered workloads, so if you are mostly hosting on one or two machines, YAGNI applies.

            I recommend people start w/ docker compose due to documentation, but I personally am switching to podman quadlets w/ rootless containers.
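
            For anyone curious what a quadlet looks like: it’s just a systemd unit file. A rough sketch (names and paths here are examples, not a recommendation) dropped into ~/.config/containers/systemd/ would be:

            ```ini
            # ~/.config/containers/systemd/sonarr.container -- illustrative sketch
            [Unit]
            Description=Sonarr via rootless podman

            [Container]
            Image=lscr.io/linuxserver/sonarr:latest
            PublishPort=8989:8989
            Volume=%h/sonarr-config:/config:Z

            [Service]
            Restart=always

            [Install]
            WantedBy=default.target
            ```

            systemd generates a sonarr.service from it, so `systemctl --user daemon-reload && systemctl --user start sonarr` picks it up.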

            • Justin@lemmy.jlh.name · +2/−1 · 11 days ago

              Yeah, definitely true.

              I’m a big fan of single-node kubernetes though, tbh. Kubernetes is an automation platform first and foremost, so it’s super helpful to use Kubernetes in a homelab even if you only have one node.

              • sugar_in_your_tea@sh.itjust.works · 1 point · 11 days ago

                What’s so nice about it? Have you tried quadlets or docker compose? Could you give a quick comparison to show what you like about it?

                • Justin@lemmy.jlh.name · 2 points · 10 days ago

                  Sure!

                  I haven’t used quadlets yet, but I did set up a few systemd services for containers back in the day before quadlets came out. I also used to use docker compose back in 2017/2018.

                  Docker compose and Kubernetes are very similar from a homelab admin’s perspective. Docker compose syntax is a little less verbose, and it has some shortcuts for storage and networking, but that also means it’s less flexible if you’re doing more complex things. Docker compose doesn’t start containers on boot by default (you have to set a restart policy on each service), which is pretty bad for application hosting. Docker compose also has no way of automatically deploying from git the way ArgoCD does.

                  Kubernetes also has a lot of self-healing automation, like health checks that can either disable the load balancer and/or restart the container if an app is failing, automatic killing of containers when resources are low, preventing the scheduling of new containers when resources are low, gradual roll-out of containers so that the old version of a container doesn’t get killed until the new version is up and healthy (helpful in case the new config is broken), mounting secrets as files in a container, and automatic retry on failed containers.

                  There are also a lot of ubiquitous automation tools in the Kubernetes space, like cert-manager for setting up certificates (both ACME and a local CA), Ingress for setting up a reverse proxy, CNPG for setting up Postgres clusters with automated backups, and first-class instrumentation/integration with Prometheus and Loki (both were designed for Kubernetes first).

                  The main downsides with Kubernetes in a homelab are that there is about 1–2GiB of RAM overhead for small clusters, and that most documentation and examples are written for docker-compose, so you have to convert apps into a Deployment (you get used to writing Deployments for new apps, though). I would say installing things like Ingress or CNPG is probably easier than setting up similar reverse-proxy automation with docker-compose.
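
                  To make the compose-to-Deployment conversion concrete, a single-container app ends up looking roughly like this (names, image, and probe path are illustrative):

                  ```yaml
                  apiVersion: apps/v1
                  kind: Deployment
                  metadata:
                    name: sonarr
                  spec:
                    replicas: 1
                    selector:
                      matchLabels: {app: sonarr}
                    template:
                      metadata:
                        labels: {app: sonarr}
                      spec:
                        containers:
                          - name: sonarr
                            image: lscr.io/linuxserver/sonarr:latest
                            ports:
                              - containerPort: 8989
                            livenessProbe:  # restarts the container if the app stops answering
                              httpGet: {path: /, port: 8989}
                              initialDelaySeconds: 30
                  ```

                  The livenessProbe is where the self-healing I mentioned comes from: fail the check a few times and the kubelet restarts the container on its own.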

  • tofu@lemmy.nocturnal.garden · 8 points · 11 days ago

    Your CPU should be perfectly capable of that. I ran Proxmox with some VMs and containers on an i5-2400 with 16GB RAM just fine.

    You could run on bare Debian as well but virtualization will give you more flexibility. If you get a Zigbee Dongle or the like, you can pass it through to the VM Home Assistant is running in.
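
    If you go that route, attaching the dongle is a one-liner on the Proxmox host. The VM ID and USB ID below are just examples — lsusb shows you the real one:

    ```sh
    # find the dongle's vendor:product ID
    lsusb
    # attach it to the Home Assistant VM (ID 101 here is illustrative)
    qm set 101 -usb0 host=10c4:ea60
    ```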

    I don’t know MergerFS, but usually the recommendation is ZFS.

  • geography082@lemm.ee · 6 points · 11 days ago

    Proxmox runs on Debian. But anyway, you’d be surprised how well Proxmox can run on limited hardware. I have it running on a garbage mini PC and an old notebook :D

  • Ferawyn@lemmy.world · 4 points · edited · 11 days ago

    Proxmox is Debian. :-)
    I do always suggest installing Debian first, and then installing Proxmox on top. This allows you to properly set up your disks, and networking as needed, as the Proxmox installer is a bit limited: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
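
    The short version of that wiki page, in case it helps (double-check the current repo and key lines there before pasting):

    ```sh
    # as root on a fresh Debian 12 (bookworm) install
    echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-install-repo.list
    wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
        -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
    apt update && apt full-upgrade
    apt install proxmox-ve postfix open-iscsi chrony
    ```
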
    Once you have it up and running, have a look at the CT Templates. There’s a whole set of pre-configured templates from TurnkeyLinux (again, debian+) that make it trivial to set up all kinds of services in lightweight LXC Containers.
    For Home Assistant a VM is your best bet, as it makes setting up connectivity way easier than messing with docker networking. It also allows easy USB passthrough, for things like ZWave/Zigbee/Bluetooth adapters.

  • irmadlad@lemmy.world · 3 points · 11 days ago

    OP, I’m running Proxmox on an old Dell T320 with 32GB RAM. I am not having any real issues doing so. I run Docker and a handful of Docker containers. I’m not really into the *arr stack, but I wouldn’t think you’d have much issue.

  • noli@lemm.ee · 2 points · 9 days ago

    My needs are pretty similar to yours and I’ve recently moved back to using hypervisors after running everything from Debian to Arch to NixOS bare-metal over the last decade or so. It’s so easy to bring-up/tear-down environments, which is great for testing things and pretty much the whole point of a homelab. I’ve got a few VMs + one LXC running on Proxmox with some headroom on a 6th gen i7, you should be fine resource wise tbh. Worth mentioning that you’ll most likely need to passthrough your drives to the guest VM which is not supported via the webUI, but the config is documented on their wiki.
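
    For reference, the disk passthrough is just a qm one-liner on the host. The VM ID and disk ID below are illustrative; use the real entry from /dev/disk/by-id:

    ```sh
    # list stable disk identifiers (these survive reboots, unlike /dev/sdX)
    ls -l /dev/disk/by-id/
    # hand the whole disk to VM 100 as a second SCSI device
    qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD120EDAZ-EXAMPLE
    ```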

    Overall, I’m happy with this setup and loving CoreOS as a base-OS for VMs and rootless podman containers for applications.

  • glizzyguzzler@lemmy.blahaj.zone · 2 points · 11 days ago

    I’m surprised no one’s mentioned Incus. It’s a hypervisor like Proxmox, but it’s designed to install onto Debian no problem. It does VMs and containers just like Proxmox, and snapshots too. The web UI is essential; you add a repo to get it.

    Proxmox isn’t reliable if you’re not paying them; the free users are the test users. A while back they pushed a bad update that broke things, and if I’d updated before they pulled it, I’d have been hosed.

    Basically you want a machine where you don’t have to worry about applying updates, because updates are good for security. And Proxmox ain’t that.

    On top of their custom kernel and stuff, it just has fewer eyes on it than, say, the kernel Debian ships. Proxmox isn’t worth the lock-in and brittleness just for making VMs.

    So to summarize: Debian with Incus installed. BTRFS if you’re happy with one drive or two drives in RAID 1 — BTRFS gets you scrubbing and bitrot detection (and repair, with RAID 1). ZFS for more drives. Toss on Cockpit too.
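
    To give a flavor of the day-to-day (instance names here are made up):

    ```sh
    # a container and a VM under Incus, plus a snapshot before risky changes
    incus launch images:debian/12 media-ct
    incus launch images:debian/12 ha-vm --vm
    incus snapshot create media-ct before-upgrade
    # periodic BTRFS scrub catches bitrot (and repairs it with RAID 1)
    btrfs scrub start /mnt/pool
    btrfs scrub status /mnt/pool
    ```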

    If you want something less hands-on, go with OpenMediaVault. No room for Proxmox in my view, especially if you’re not clustering.

    Also, the iGPU on the 6600K is likely good enough for whatever transcoding you’d do (especially if it’s rare and 1080p — it’ll do 4K no problem, and multiple streams at once). The Nvidia card is just wasting power.

  • TBi@lemmy.world · 2 points · 10 days ago

    I use OpenMediaVault to run something similar. It’s a headless Debian distribution with a web-based config. Takes a bit of work, but I like it.

  • bigb@lemmy.world (OP) · 1 point · 10 days ago

    Thanks everyone, I feel much better about moving forward. I’m leaning towards Proxmox at this point because I could still run Windows as a VM while playing around and setting up a new drive pool. I’d like a setup that I can gradually upgrade because I don’t often have a full day to dedicate to these matters.

    MergerFS still seems like a good fit for my media pool, simply to solve the issue where one media type fills a whole drive while another sits at 50% capacity. I’ve lost this data before and it was easy to recover via my preferred backup method (private torrent tracker with paid freeleech). A parity drive with SnapRAID might be a nice stopgap. I don’t feel confident enough with ZFS yet to potentially sacrifice uptime.
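
    Writing it down for my own notes: the mergerfs pool plus a SnapRAID parity drive is only a couple of config lines. Paths here are just how I’m imagining the layout:

    ```
    # /etc/fstab -- pool the data drives into /mnt/pool
    /mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs,moveonenospc=true  0 0

    # /etc/snapraid.conf -- one parity drive covering the pool members
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    data d1 /mnt/disk1
    data d2 /mnt/disk2
    data d3 /mnt/disk3
    ```

    Then `snapraid sync` after adding media, and `snapraid scrub` now and then.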

    My Docker containers and server databases, however, are on a separate SSD that could benefit from ZFS. These files are backed up regularly so I can recover easily, and I’d like as many failsafes as possible to protect myself. Having my Radarr database was indispensable when I lost a media drive a few weeks ago.

  • kr0n@piefed.social · 1 point · 10 days ago

    I don’t know about your first need (MergerFS), but in case it’s useful: I have an old Intel NUC 6i3SYH (i3-6100U) with 16GB RAM, and I was running Windows 10 for Plex + *arr, plus Home Assistant in VirtualBox. I kept running into issues until I switched to Proxmox. Now I run Docker with a bunch of containers (Plex + *arr and others) plus a virtual machine with Home Assistant, and everything is smooth. I have to say there is a learning curve, but it’s very stable.