• 38 Posts
  • 1.39K Comments
Joined 2 years ago
Cake day: June 23rd, 2023


  • I printed bushings for the augers we have on the bottoms of a couple of grain bins. They’ve lasted about 10 years in ABS, and the old ones were ridiculously expensive to replace even though they were just made of maple. We’ve probably run a couple of million bushels of grain through those augers since I replaced them.

    I also replaced the impellers in a couple of pumps we use to pump river water up to cattle, and the design I cribbed is probably twice as effective as the originals, which makes them easier on the solar panels we use to power them.

    I’ve replaced various implement parts around the farm with other prints, things like parts for our seeder and sprayer.



  • So if I want a new container stack, I make a new Proxmox “disk” in the ZFS filesystem under the Hardware tab of the VM. This adds the “disk” to the VM when I reboot it (there are ways of refreshing the block devices online, but this is easier). I find the new block device and mount it in the VM at a subfolder of /stacks, which will be the new container stack location. I also add this mount point to fstab.

    So now I have a mounted volume at /stacks/container-name. I put a docker-compose.yml in there, and all the data the stack uses lives in subfolders of that folder, referenced as bind mounts in the compose file. When I back up, the ZFS dataset that contains everything in that compose stack is snapshotted and backed up as a point in time. If that stack has a Postgres database, it and all the data it references are internally consistent because they were snapshotted together before backup. If I restore the entire folder from backup, the database just thinks it had a power outage, replays its journals, and all’s well.

    So when you have a backup in PBS, from your Proxmox node you can access the backups via the filesystem browser on the left.

    When you go to that backup, you can choose to do a File Restore instead of restoring the entire VM. From there I can walk the storage for my Nextcloud data within a backup, and I can do the same for every discrete backup.

    If I want to restore just a container, I download that “partition” and transfer it to the docker VM. Then I down the container stack in question, blow out everything in that folder, and restore the contents of the download to the container folder. Start the docker stack back up for that folder and it’s back to where it was. Alternatively, I could restore individual files if I wanted.
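    The setup steps above might look roughly like this from the docker VM’s shell. This is a sketch, not the poster’s exact commands: the device name /dev/sdb, the stack name “myapp”, and the compose contents are all hypothetical.

    ```shell
    # Inside the docker VM, after adding the new Proxmox "disk" and rebooting.
    # Assumes the new block device shows up as /dev/sdb (check with lsblk first).
    mkfs.ext4 /dev/sdb

    # Mount point for the new container stack (name is just an example)
    mkdir -p /stacks/myapp
    mount /dev/sdb /stacks/myapp

    # Persist the mount across reboots (UUID is more robust than /dev/sdb)
    echo "UUID=$(blkid -s UUID -o value /dev/sdb) /stacks/myapp ext4 defaults 0 2" >> /etc/fstab

    # docker-compose.yml lives in the stack folder; all data is bind-mounted
    # from subfolders, so one ZFS snapshot captures the whole stack.
    cat > /stacks/myapp/docker-compose.yml <<'EOF'
    services:
      db:
        image: postgres:16
        volumes:
          - ./pgdata:/var/lib/postgresql/data   # bind mount inside the dataset
    EOF

    cd /stacks/myapp && docker compose up -d
    ```

    The point of keeping everything under one mount is that the ZFS dataset backing it is the unit of snapshot and restore.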



  • Yes. So my Debian docker host has some ZFS datasets attached, mounted via fstab, and I specify that path as the datadir for NCAIO.

    Then when PBS calls a backup of that VM, all the datasets Proxmox is managing for that VM get snapshotted, and that’s what’s backed up to PBS. Since it’s a snapshot, I can back up hourly if I want, and PBS dedupes, so the backups don’t use a lot of space.

    Other docker containers might have a mount that’s used as a bind mount inside the compose.yml to supply data storage.

    Also, I have more than one backup job running on PBS so I have multiple backups, including on removable USB drives that I swap out (I restart the PBS server to change drives so it automounts the ZFS volumes on those removable drives and is ready for the next backup).

    You could mount ZFS datasets you create in Proxmox as SMB shares in a sharing VM, and it would be handled the same.

    As for documentation, I’ve never really seen any done this way but it seems to work. I’ve done restores of entire container stacks this way, as well as walked the backups to individually restore files from PBS.

    If you try it and have any questions, ping me.
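    For concreteness, the fstab mount and the NCAIO datadir wiring could look something like this. The UUID, mount point, and port are placeholders; NEXTCLOUD_DATADIR is the environment variable Nextcloud AIO documents for pointing its data directory at an existing path.

    ```shell
    # /etc/fstab entry in the docker VM for the Proxmox-managed "disk"
    # (hypothetical UUID and mount point):
    #   UUID=1234-abcd  /mnt/ncdata  ext4  defaults  0  2

    # Point the Nextcloud AIO mastercontainer at that path when starting it:
    docker run -d \
      --name nextcloud-aio-mastercontainer \
      -e NEXTCLOUD_DATADIR=/mnt/ncdata \
      -p 8080:8080 \
      -v nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      nextcloud/all-in-one:latest
    ```

    Because /mnt/ncdata sits on a dataset Proxmox manages, it rides along in the VM’s snapshot-based backup with everything else.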


  • I run a docker host in Proxmox using ZFS datasets for the VM storage for things like my mailserver and NextcloudAIO. When I back up the docker VM, it snapshots the VM at a point in time and backs up the snapshot to PBS. I’ve restored from that backup and, as far as the data is concerned, it’s like the machine had just shut down. It journals itself back to a consistent state with no data loss.

    I wouldn’t run TrueNAS at all because I have no idea how that’s managing its storage and wouldn’t trust the result.
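    The snapshot-mode backup described above can also be triggered manually from the Proxmox node’s shell with vzdump; the VM ID and storage name here are just examples.

    ```shell
    # On the Proxmox host: back up VM 100 to a PBS storage named "pbs",
    # using snapshot mode so the VM keeps running during the backup.
    vzdump 100 --mode snapshot --storage pbs
    ```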



  • When Linus gets pissy, it’s to defend the standards and practices that he and the rest of the kernel community have set to advance the project. Yeah, he’s direct and probably more unfiltered than he often should be. But it’s resulted in a product that’s provided a spectacularly successful platform for FOSS that would never have existed if the companies that controlled everything in the 90s had their way. I guarantee that for all the feelings he’s hurt over the years, it isn’t a patch on the suffering that Microsoft and IBM have laid on their employees. And people still clamor to contribute to the kernel.

    Seems like 99% of the contributors manage to work within that framework and get stuff done, even with the threat of being chewed out for submitting bad code at the wrong time hanging over their heads. Kent apparently can’t manage that, so maybe he should fade into the background and let someone else interact with the community for him.