That mail/[.]my-mail/[.]rocks maps to the tor network.
https://metrics.torproject.org/rs.html#details/E3F16EEB32F9C0B28325891F7BAACA8EC212343D
They don’t really, as the “smart” features subsidize the cost of the TV. Some commercial-grade TVs and large monitors won’t be smart or need Internet connectivity.
I’ll be honest, I haven’t watched his videos, so maybe it ends up stable. TrueNAS basically says in its docs that you can end up with weird issues.
If you host it in Proxmox directly there’s less overhead, as in it’s not going bare metal > Proxmox > TrueNAS > application. You might run into issues, but honestly try it and keep a configuration backup in case it fails. PCIe passthrough of the HBA card and any external graphics cards (rather than passing individual virtual devices) is the most stable approach, but you won’t be able to “share” those resources.
I personally like Docker for almost everything I can, with a few things hosted directly in Proxmox. I originally started with Portainer, which gave me a web GUI for Docker, but honestly docker-compose files are a better approach (minimal sketch below). So: Proxmox > Debian > Docker, Proxmox > TrueNAS, and Proxmox > other VMs. This has its own challenges, like passing storage from the NAS to Jellyfin, but it works for me.
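If you haven’t used compose before, a service definition can be this short. This is just a minimal sketch for something like Jellyfin; the image tag, ports, and paths are placeholders for whatever your setup actually uses:

```yaml
# docker-compose.yml (minimal sketch; paths and ports are placeholders)
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"                 # Jellyfin web UI
    volumes:
      - ./jellyfin-config:/config   # keeps app config across container rebuilds
      - /mnt/nas/media:/media:ro    # media passed in from the NAS, read-only
    restart: unless-stopped
```

Bring it up with `docker compose up -d`, and the whole service lives in one file you can back up or version instead of clicking through a GUI.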
As for components, I’m stable on an old office-desktop potato (although it does hit some limits with file transfers and transcoding multiple streams). I wouldn’t necessarily recommend going out and buying an equivalent, but if you want to mess around, don’t be afraid of having “not enough” resources in a test config.
For #3: officially, nesting TrueNAS in another hypervisor and then using it as a hypervisor itself is not really recommended, especially with any kind of virtual drives, and it can lead to problems. Virtualizing the drives is definitely not recommended; the most stable choice is PCIe passthrough of an HBA card.
That said, I have a similar setup. I’ve made backups of the important data, and I passed through a PCIe SAS HBA card that all the TrueNAS drives connect to directly, instead of using virtualized drives.
https://www.truenas.com/blog/yes-you-can-virtualize-freenas/
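For reference, passing the HBA through on the Proxmox side is roughly the following. The PCI address and VM ID are placeholders (find yours with lspci and your VM list), and IOMMU/VT-d has to be enabled in the BIOS and kernel first:

```
# find the HBA's PCI address (the address below is just an example)
lspci | grep -i -E 'sas|hba|lsi'

# hand the whole card to the TrueNAS VM (VM ID 100 and 0000:01:00.0 are placeholders)
qm set 100 -hostpci0 0000:01:00.0
```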
I personally haven’t explored self hosting mail. This thread is a year old but might give you insight from people who have.
I’ve heard about using mailbox.org to do what you’re talking about. It seems the general consensus is that getting a clean IP (mentioned in the thread linked above) is the biggest challenge.
Edit: mailbox.org isn’t what I was thinking of. I’ve definitely heard of services that let you self-host half of it while they just handle the send/receive part.
Setting up Jellyfin, I used Docker on Debian and an old Quadro card. What could possibly go wrong?
Turns out that week a faulty Nvidia driver update got pushed to Debian stable, and it broke GPU access in every container. I could either wait a week or pull the simple fix from testing. So, impatiently, I pulled it from testing.
Learning Linux is a great start.
Learning any programming language will help you understand a bit more about how the programs work; however, there isn’t much need to learn a specific language unless you plan to add custom programs or scripts.
The general advice for email is: don’t. It’s risky to host, it’s a big target for spam, and there are challenges getting the big providers to trust your domain.
However hosting things behind a VPN (or locally on your home network) can let you learn a lot about networking and firewalls without exposing yourself to much risk.
I have no direct experience with Nextcloud, but I understand it can be hosted on Linux, on a Synology NAS, or on something like TrueNAS.
Personally, my setup is on one physical server, so I use Proxmox, which lets me run two different Linux servers and TrueNAS on a single computer through virtual machines. I like it because it lets me tinker with different stuff like Home Assistant without affecting, say, my ad blocker/VPN/reverse proxy. I also use Docker to run multiple services on one virtual machine without compatibility issues.

If I started again, I’d probably have gotten bigger drives or invested in SSDs. My NAS is on hard drives because of cost, but it’s definitely hitting a limit when I need to pull a bunch of files. Super happy with wireguard-easy for VPN. I started with a proprietary version of OpenVPN on Oracle Linux, and that was a mistake.
In that case, and if you do need a GPU (for Jellyfin, Plex, or another reason), look at the GPU transcoding link in my previous comment. You can flash Nvidia consumer cards or price-compare with Intel Arc A-series GPUs. That means a bit of tinkering, but if you need transcoding and cheap Quadros aren’t available to you, it’s an option.
You can always go for a used PC with integrated graphics (like Intel) and see if that works for your use case. If you follow the recommendation for a big case with lots of space, look at any of the Dell OptiPlex or similar office PCs. If you have specific applications, Google them plus “minimum or recommended requirements”. An SSD as a boot drive is absolutely worth it over an HDD.
Your camera setup probably won’t need an external graphics card but if it does you can always upgrade later.
All of those components should be used and a few generations behind to save cost. A used Quadro M4000 is about $100 USD in the US. A used Xeon-based office PC all in should be ~$400-600 USD max stateside, and you can add whichever drives you need. I don’t know what your local economy is like or what you can expect. If you’re able to find a used office PC or an older device, give that a try and see if it works.

If you have 15 users all hitting one computer, it’s going to take resources, and those resources are going to depend on what they’re doing. If you want enterprise fault tolerance, ECC may be worth the extra cost. If you want to budget it out, you can probably get everything you want running on something 4-5 generations behind for around $100 USD plus the cost of drives.
Consider whether you’re going to do media streaming, like a Plex/Jellyfin server. Fifteen users streaming would be kinda similar to playing 15 YouTube videos on your desktop.
If it’s 15 users with maybe 2-3 hitting it at any one time, then you can build cheaper and still get decent performance. If you’re just hosting static pages or simple programs with low resource requirements, anything post-2010 with 4 cores and 8GB of RAM will probably run it fine and work as file storage for cameras.
Based on your description, you’re exposing something to the Internet. You absolutely should have things virtualized/containerized and use a reverse proxy. Use Cloudflare for the domain name registration and take advantage of their DDoS protection. Keeping everything virtualized/separated would also give an IDS a fighting chance, since an attacker would have to pivot if you bothered to set up firewalls between the devices.
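As a rough idea of what “behind a reverse proxy” looks like in practice, here’s a minimal nginx sketch. nginx is just one option (Caddy or Traefik work too), and the hostname, cert paths, and backend IP/port are all placeholders:

```
# /etc/nginx/conf.d/app.conf (minimal sketch; names, paths, and IPs are placeholders)
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://192.168.10.20:8096;              # the backend VM/container
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The proxy is the only thing with a port open to the Internet; everything behind it stays on the internal network.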
If you have the space for some used servers, you can find something affordable. Any enterprise server will be loud, and electricity costs should be factored in.
If you don’t have the space for a noisy server, an old workstation on the used market can be affordable. Otherwise you can build something yourself using consumer parts: a Ryzen 5 (Ryzen will let you use ECC RAM, which is something you might want) or an i7/Xeon from the previous generation or two should be more than enough. Add 32-64GB of RAM and an SSD boot drive. I’d probably get HDDs designed for surveillance to save cost and put the file server storage on an SSD separate from the OS. Backups of VMs are stupid easy too, which means you’re more likely to bother using and testing them.
Edit: forgot about the GPU. If you’re using it as a media server and need transcoding (or have another reason), a discrete GPU like the Nvidia P600 or M4000 will work. Use this link to figure out what you need (you don’t have to use Plex, it’s just a guideline).
Virtualization can be nice in that you can tinker and not worry about dependencies. Plus you can have one resource that’s stable on FreeBSD, another that works well on Unix, etc.
Headless servers can run surveillance stuff via web interfaces or API/app integrations. Plus you can reach a GUI via VNC, SPICE, or another service to get to your X11 environment. I find Proxmox easier than Docker/containers since most of my troubleshooting is there. I’ve got security cameras linked to Home Assistant and it’s all headless. You could also plug a monitor in and pass it through to a virtual machine to get the desktop experience.
Hardware recommendations are going to need more information: number of users? Number of cameras/tasks the server is expected to handle concurrently? Will you have media/NAS hosting, and if so, how much space and how fast do you want it to be?
Your use case in the OP for less than 4 users could probably be run on a potato (my potato is bottlenecked by wifi @ 10MBps). 10-15 users streaming media or 20 cameras constantly streaming to a monitor could easily eat up a decent chunk of resources.
If you’re not exposing anything to the Internet, you probably don’t need an IDS. It’s a lot of effort to tune it and reduce false positives, and the benefits probably aren’t worth it unless this is a business use case. The enterprise IDS/SIEM products used by actual companies are typically not FOSS, because they need the support provided by the vendor.
Proxmox has been pretty good to me. I have an ancient office PC with Proxmox installed as the hypervisor. It’s based on Debian, but everything is done via a web interface (you can SSH into it too if you need to). Then I have Debian with Docker containers, TrueNAS, and Home Assistant all installed as VMs. The benefit is that you can put mission-critical stuff on the “boring” Debian VM and then have fun with whatever you want to tinker with on an entirely different OS/virtual machine.

I also use wg-easy (WireGuard Easy), which is stupid simple for setting up a VPN (sketch below). I would strongly recommend keeping all management of the server on the local network and using the VPN to connect in; that will get you the “enterprise grade” security. Anything public should go through a reverse proxy/DMZ VM if you host something on the Internet. Use Cloudflare or similar as an extra layer if you need a domain name and want a buffer between users and your network. Keep that device and software up to date and you should have a great defense.
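For what it’s worth, wg-easy is basically one container. This is a minimal sketch; the hostname and password are placeholders, and the exact environment variable names have changed between releases (newer versions want a password hash), so check the wg-easy README for your version:

```yaml
# docker-compose.yml for wg-easy (minimal sketch; host and password are placeholders)
services:
  wg-easy:
    image: ghcr.io/wg-easy/wg-easy
    environment:
      - WG_HOST=vpn.example.com     # your public IP or DDNS name
      - PASSWORD=change-me          # web UI password (newer releases use a hash instead)
    volumes:
      - ./wg-data:/etc/wireguard    # persists peer configs
    ports:
      - "51820:51820/udp"           # WireGuard itself (forward this on the router)
      - "51821:51821/tcp"           # admin web UI - keep this LAN-only
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
```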
IDS-wise, it’s a lot of work. You’re better off spending that time building security by design: doing the above, ensuring anything that touches the public Internet has as few permissions as possible (don’t run the web server as root or as your user account), firewall management, etc. If you do want the challenge, or are interested in learning something like Security Onion or Wazuh, don’t let that stop you.
Hardware-wise, “affordable and uptime” could mean it’s cheaper to have a backup machine. Proxmox has features to support high availability, where if one of your physical servers goes down, another can take over (two physical servers that are copies of each other). You could have a decent workstation and then a used PC or whatnot as the backup. More important is probably a UPS, and workstation gear unless you want a screaming server jet in whatever room it goes in. Nothing you’ve mentioned seems too performance-heavy, so specific PC recommendations are going to vary based on expected traffic or use cases. My 2014 DDR3 office PC manages just fine, but it’s for very few people and in an air-conditioned space. You could probably price out mid-grade consumer equipment for the main server and a used office PC for redundancy.
More likely it’s a non-free repository that many people choose to use, like an Intel driver or something.
It’s a protocol called OAuth that pretty much lets Google (or whatever “sign in with XYZ” company) take over the login process, then share a unique identifier plus whatever information the app requested on that “allow 3rd party to access the following” page. It’s essentially letting Google manage the username/password authentication instead of Udemy.
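To make the flow a bit more concrete, this is roughly what happens under the hood when you click “Sign in with Google”. The two URLs are Google’s real OAuth endpoints; the client ID, callback address, and scopes are made-up examples:

```
1. Udemy redirects your browser to Google's consent page:
   https://accounts.google.com/o/oauth2/v2/auth?client_id=UDEMYS_APP_ID
     &redirect_uri=https://udemy.example/callback
     &response_type=code&scope=openid email profile

2. You log in to Google and approve the "allow access to the following" screen.

3. Google sends your browser back to Udemy with a one-time code, and Udemy's
   server swaps that code for tokens at https://oauth2.googleapis.com/token.
   The tokens carry your unique Google ID plus the approved profile info;
   your Google password never goes to Udemy.
```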
Mike’s Weather Page is a hobbyist page that aggregates a ton of info.
I’ve never used it before but it appears to fit your criteria https://f-droid.org/packages/com.jerameeldelosreyes.sushi/
Apparently you can save it to Google Drive, then install the Google Drive program and make that folder available offline so it gets downloaded to your computer.
1. When you set up the Google Takeout export, choose “Save in a Google Drive folder”.
2. Install the Google Drive PC client (Drive for desktop).
3. It will create a new drive (e.g. G:) in Explorer. Right-click on the Takeout folder and select “Make available offline”. All files in that folder will be downloaded by Drive for desktop in the background, and you’ll be able to copy them to another location, since they’ll be local files.
I’m using a commercial desktop with a Sandy Bridge i5. I maxed it out at 32GB of RAM only because I’m running TrueNAS, Debian with containers, and Home Assistant. Most of the RAM goes to TrueNAS, and TrueNAS doesn’t accurately report RAM usage. For CPU, I’m mostly just task-limited, but I don’t really think that’s a Proxmox issue. Obviously it’s not going to support an enterprise or even a small business, but it works for what I need (fewer than 4 users) on my budget.
Proxmox doesn’t really ask for much, but I’d probably recommend Docker for your ARM devices.
https://en.wikipedia.org/wiki/Robots.txt
That should cover any polite web crawlers, but compliance is voluntary.
https://platform.openai.com/docs/gptbot
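For example, per OpenAI’s GPTBot docs, blocking it is just a couple of lines in a robots.txt at your site root; the GPTBot rule is straight from their docs, the catch-all rule is optional:

```
# robots.txt served at https://your-site/robots.txt
User-agent: GPTBot
Disallow: /

# optionally tell every (polite) crawler to stay out
User-agent: *
Disallow: /
```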
You might have to put it behind a CAPTCHA or another barrier of that type to severely limit automated access.
It’s not realistic to assume it won’t get scraped eventually, e.g. by someone paying people to bypass the CAPTCHA, or by crawlers that don’t respect robots.txt. I also don’t know whether Google and Microsoft bundle their AI data collection in a way you can’t opt out of without also removing your site from web search.
If it’s something like a VPN interface, it’s still running as a daemon in the background even if the browser is closed.
Like others have said, you can check which application is using each open port. You can also check running processes (ps aux | grep keyword) and interfaces (ip a) — quick example below.
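Concretely, something like this is usually enough to map an open port back to a process (ss and ip come with iproute2 on most distros; the grep keyword is just an example):

```
# list listening TCP/UDP sockets with the owning process (needs root to see all PIDs)
sudo ss -tulpn

# look for a suspicious process by name ("wireguard" here is just an example keyword)
ps aux | grep -i wireguard

# show network interfaces - an unexpected wg0/tun0 interface points at a VPN daemon
ip a
```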