• 0 Posts
  • 57 Comments
Joined 1 year ago
Cake day: June 21st, 2024

  • One such app I can think of would be a client-side issue. If the public certificate doesn’t match the back-end private key, it severs the connection and marks it as insecure. Hopefully I won’t need to deal with it much longer, though.

    I just heard back from my other team that “this project sounds great for your team” even though they manage many of their own apps and certificates. Perhaps I should just let them burn then!


  • Unfortunately, some apps require the certificate to be bound to the internal application, and that binding has to be done through the CLI or other methods that aren’t easily automated. We could front them with a reverse proxy, but we would still need to take the proxy certificate and bind it to the internal service for communication to work properly. Thankfully that’s for my other team to figure out, as I already have a migration plan for the systems I manage.




  • While I agree for my personal use, it’s not so easy in an enterprise environment. I’m currently working to migrate the services on my servers OFF public certificates to avoid the headache of manual intervention every 47 days.

    While this is possible for servers and services I manage, it’s not so easy for other software stacks we have in our environment. Thankfully I don’t manage them, but I’m sure I’ll be pulled into them at some point or another to help figure out the best path forward.

    The easy path is obviously a load-balanced front end to terminate the certificate, but many of these services are specialized and have very elaborate ways to bind certificates to services outside of IIS or Apache, and they would need to trust the newly issued load balancer certificate every 47 days.
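For the systems I do control, the renew-then-rebind step can at least be scripted. A minimal sketch using certbot’s real `--deploy-hook` mechanism; the paths and the `appctl` binding command are hypothetical placeholders for whatever your vendor’s CLI actually is:

```shell
#!/bin/sh
# Deploy hook: push a freshly renewed cert into a service that insists
# on its own manual binding step. /opt/vendor-app and `appctl` are
# stand-ins for the real product's layout and CLI.
set -eu

DOMAIN="internal.example.com"          # assumption: cert's primary name
LIVE="/etc/letsencrypt/live/$DOMAIN"   # certbot's default live directory

# Copy the new chain and key to where the app expects them.
install -m 0644 "$LIVE/fullchain.pem" /opt/vendor-app/tls/server.crt
install -m 0600 "$LIVE/privkey.pem"   /opt/vendor-app/tls/server.key

# Re-run the vendor's binding step so the new cert takes effect.
# (hypothetical command; substitute the one from your vendor docs)
/opt/vendor-app/bin/appctl tls bind \
    --cert /opt/vendor-app/tls/server.crt \
    --key  /opt/vendor-app/tls/server.key

systemctl restart vendor-app
```

Wired in once with `certbot renew --deploy-hook /usr/local/bin/rebind-cert.sh` (or dropped into `/etc/letsencrypt/renewal-hooks/deploy/`), it runs automatically on every renewal.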



  • Welcoming the incoming downvotes for correcting your comment, just like the many similar comments and posts I’ve seen on Reddit, but this is purely a configuration issue.

    Transcoding on the local network is allowed without a subscription. If you are running your own DNS server (like Pi-hole or Unbound) you need to configure an internal “plex.direct” record. You also need to uncheck the “treat your WAN IP as internal” option, which corrects double-NAT issues.
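For the DNS piece, a hedged sketch of what usually does the trick: exempting plex.direct from DNS rebind protection so `*.plex.direct` names are allowed to resolve to private LAN addresses. The file paths below are assumptions based on default layouts:

```shell
# Pi-hole (dnsmasq under the hood): allow *.plex.direct answers that
# point at RFC 1918 addresses instead of dropping them as rebind attacks.
echo 'rebind-domain-ok=/plex.direct/' | sudo tee /etc/dnsmasq.d/10-plex.conf
sudo systemctl restart pihole-FTL

# Unbound: same idea via the private-domain directive. Add under the
# `server:` clause, e.g. in /etc/unbound/unbound.conf.d/plex.conf:
#
#   server:
#     private-domain: "plex.direct"
#
sudo systemctl restart unbound
```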

    I have yet to see a need to move away from Plex. I paid for the cheap lifetime sub over a decade ago at this point, and everyone I invite to my server has no complaints and has not had to pay Plex a dime. I don’t use their plex.tv proxy; I connect directly to my own IP and leave their remote-proxy option off on the server, and everything works great.

    I will check out Jellyfin at some point if Plex makes things more difficult over time, but for now these articles are just rage bait in the homelab ecosystem. Plex enacted this change back in April 2025 already!










  • Same recommendation here. I went through two QNAP units before getting fed up and building my own 12-bay for about $1,200. My first QNAP died shortly after the 3-year warranty expired, and the second died shortly before. I was able to RMA the second and sell it to recoup some money toward building my own TrueNAS system that I can now fix myself, without relying on anything proprietary.



  • I was a little unfair in my post toward Proxmox. It really is a great solution and I can’t complain much, but it sucks in comparison to ESX: many “custom” items are still hidden in the CLI or in custom configuration files. Many of those things are available in the GUI in ESX, which makes for a rough transition for people like me who have worked in ESX for many years. ESX isn’t without its CLI moments, but they are rarely needed, and then only for drastic measures.

    The UI is not very intuitive and looks quite dated too. ESX, Nutanix, and XCP-ng have much better interfaces imo, and if Proxmox could throw some of the extra money they’ve earned from the VMware exodus at their UI, it would be worthwhile.

    Again, I shouldn’t complain, but as I get older there’s not much “tinkering” time anymore, and the more time I have to spend sifting through forum posts or official documentation on why something isn’t working as intended, the more easily frustrated I get.


  • Don’t go Podman. When I started years ago, I installed Fedora with the “containerization” option. This installs Podman, not Docker, as I’m sure most know. I did not.

    Podman works great for the most part, but its slight differences from Docker will have you fighting tooth and nail to get certain services working correctly. And few projects (if any at all) have documentation on getting their containers working with Podman if they don’t start. If you open a GitHub issue asking how to get things running in Podman because their stack doesn’t work as flawlessly as it does in Docker, good luck getting help (Mailcow comes to mind specifically here).
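    That said, for anyone already committed, a couple of compatibility shims narrow most of the gap with upstream Docker tooling. A sketch assuming Fedora and rootless Podman:

```shell
# 1) The podman-docker package ships a `docker` shim that forwards
#    every invocation to podman.
sudo dnf install -y podman-docker

# 2) Expose Podman's Docker-compatible API socket so tools that speak
#    the Docker API (docker-compose, etc.) can talk to it (rootless).
systemctl --user enable --now podman.socket
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock"

# Sanity check: the shim should now report the Podman engine.
docker info
```

    This won’t fix every incompatibility (rootless networking and some compose features still differ), but it gets most vanilla Docker documentation working as written.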

    Looking back, this decision really drilled some very fundamental ideas about containers into my head, but it was a long, hard road I would not choose again. The knowledge I gained about containers would have come soon enough on the easy road with Docker.

    And yes, you can install Docker on Fedora, but I was much too far down the Podman track before finding out. My environment has changed drastically as of late, and most things have been migrated to Docker apps on TrueNAS now, living directly next to their storage as intended (the arr stacks really take a performance hit running their databases over NFS once you have a lot of media, for example).

    Quick note about Proxmox after coming from ESX myself: it sucks compared to ESX. I’ve tried to move away from it, and Nutanix was the closest I could find to ESX, but after my server started complaining that its drives were not compatible, I jumped ship to avoid any write damage to them. I’m downsizing my lab now; I have Proxmox running on 3 small NUCs with a Ceph storage share, and it’s working pretty well. I would love to run ESX or Nutanix instead, but they require a loaf of bread in resource requirements where Proxmox only needs a slice.