Something like 95% stays local and is accessed remotely via WireGuard. The rest is stuff I need to host under a hostname with a trusted cert, either because apps I use require that or because I need to share links to files for work, school, etc. For the external stuff I use Cloudflare Tunnels, simply because I'm on DDNS and want to avoid (or can't use) port forwarding. Works well for me.
Just in case you missed this: you can issue valid HTTPS certificates with the DNS challenge. I use Let's Encrypt, deSEC, and Traefik, but any other provider supported by Lego (the CLI) would work.
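For reference, a DNS-01 issuance with the Lego CLI looks roughly like this; the e-mail, domain, and token are placeholders, and the exact environment variable depends on your DNS provider (check Lego's provider docs):

```shell
# Hypothetical example: issue a wildcard cert via the DNS-01 challenge,
# using deSEC as the DNS provider.
export DESEC_TOKEN="your-desec-api-token"

lego --email admin@example.com \
     --dns desec \
     --domains "*.home.example.com" \
     run
```

Because the challenge is answered via a DNS TXT record, the host never needs to be reachable from the internet at all.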
Everything is accessible through VPN (Wireguard) only
Same here. Taught my wife how to start WireGuard on her Android phone and then access any of the services I run. This way I only have one port open and don't have to worry too much.
The only externally accessible service is my WireGuard VPN. Anything else is unreachable unless you are on my LAN or VPN back into it.
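A minimal sketch of that setup, assuming a typical WireGuard server config (interface name, keys, subnet, and port are illustrative placeholders):

```
# /etc/wireguard/wg0.conf on the home server (illustrative values only)
[Interface]
Address = 10.8.0.1/24          # VPN subnet for clients
ListenPort = 51820             # the single UDP port exposed to the internet
PrivateKey = <server-private-key>

[Peer]                         # one block per phone/laptop
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32       # tunnel IP this client is allowed to use
```

Only that one UDP port is open; every service behind it stays LAN-only.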
This is the way.
Funnily enough, it's exactly the opposite of where the corporate world is going, where the LAN is no longer seen as a fortress and most services are available publicly but behind 2FA.
Corporate world, I still have to VPN in before much is accessible. Then there’s also 2FA.
Homelab, ehhh. Much smaller user base and within smackable reach.
Nothing I host is internet-accessible. Everything is accessible to me via Tailscale though.
I had everything behind my LAN, but published things like Nextcloud to the outside after finally figuring out how to do that even without a public IPv4 (my provider put me behind DS-Lite).
I knew about Cloudflare Tunnels but I didn’t want to route my stuff through their service. And using Immich through their tunnel would be very slow.
I finally figured out how to publish my stuff using an external VPS that does several things:
- acts as an OpenVPN server
- acts as a cert server for OpenVPN certs
- acts as a reverse proxy using nginx with certbot
Then my servers at home just connect to the VPS as VPN clients so there’s a direct tunnel between the VPS and the home servers.
Now when I have an app running on port 8080 on my home server, I point the domain at the VPS's public IPv4 and IPv6, and nginx on the VPS routes the traffic through the VPN tunnel to the home server's port, using the home server's tunnel IPv4. The clients are configured with static IPv4 addresses inside the VPN tunnel when connecting to the VPN server.
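A rough sketch of what that nginx site config on the VPS could look like; the domain, tunnel IP, and port are placeholder values, not the actual setup:

```nginx
# /etc/nginx/sites-available/app.example.com on the VPS
server {
    listen 80;
    listen [::]:80;
    server_name app.example.com;   # DNS A/AAAA records point at the VPS

    location / {
        # 10.8.0.10 is the home server's static IP inside the VPN tunnel
        proxy_pass http://10.8.0.10:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Running `certbot --nginx` afterwards adds the HTTPS listener and certificate on top of this.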
Took me several years to figure out but resolved all my issues.
What benefit does it have instead of getting a dynamic DNS entry and port forwarding on your internet connection?
With DS-Lite you don't have a public IPv4 at all: not a static one, but not a dynamic one either. The ISP only gives you a public IPv6; your IPv4 address is shared with other users to conserve IPv4 space. Without your own public IPv4, DynDNS and the like simply don't work. It's just not possible.
You could publish your stuff via IPv6 only but good luck accessing it from a network without IPv6.
You could also spin up tunnels over plain SSH between a public server and the private one (yes, SSH can do stuff like that), but that's very hard to manage with many services, so you're better off building a setup like mine.
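For completeness, the SSH variant mentioned above is a remote port forward; the hostnames and ports here are placeholders:

```shell
# Expose port 8080 on the VPS, forwarding connections back through the
# SSH tunnel to port 8080 on the home server.
#   -N : don't open a remote shell, just forward
#   -R : remote (reverse) port forward
ssh -N -R 0.0.0.0:8080:localhost:8080 user@vps.example.com

# Note: binding to 0.0.0.0 on the VPS requires "GatewayPorts yes"
# (or "clientspecified") in the VPS's sshd_config.
```

It works fine for one or two services, but each service needs its own forward, which is why it scales poorly.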
Thanks for the great explanation!
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- DNS: Domain Name Service/System
- HA: Home Assistant automation software ~ High Availability
- HTTP: Hypertext Transfer Protocol, the Web
- HTTPS: HTTP over SSL
- IMAP: Internet Message Access Protocol for email
- IP: Internet Protocol
- NAS: Network-Attached Storage
- NAT: Network Address Translation
- Plex: Brand of media server package
- SMTP: Simple Mail Transfer Protocol
- SSH: Secure Shell for remote terminal access
- SSL: Secure Sockets Layer, for transparent encryption
- TLS: Transport Layer Security, supersedes SSL
- VPN: Virtual Private Network
- VPS: Virtual Private Server (opposed to shared hosting)
- nginx: Popular HTTP server
[Thread #549 for this sub, first seen 26th Feb 2024, 21:45]
Everything is behind a WireGuard VPN for me. It's mostly because I don't understand how to set up HTTPS, and at this point I'm afraid to ask, so everything is just HTTP.
I've been using YunoHost, which does this for you, but I'm thinking of switching to a regular Linux install, which is why I've been searching for stuff to replace YunoHost's features. That's how I came across Nginx Proxy Manager, which lets you easily configure that stuff with a web UI. From what I understand it also handles certificates for HTTPS. Haven't had the chance to try it out myself though, because I only found it earlier today.
It's not hard really, and you shouldn't be afraid to ask; if we don't ask, then we don't learn :)
Look at the Caddy web server; it does automated SSL/TLS for you.
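As an illustration, automatic HTTPS in Caddy can be as short as this Caddyfile (the domain and backend port are placeholders, and the domain's DNS must point at the machine with ports 80/443 reachable):

```
# Caddyfile — Caddy obtains and renews the certificate automatically
app.example.com {
    reverse_proxy localhost:8080
}
```

No certbot, no manual renewal cron job; certificate management is built in.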
Careful with Caddy, as it's had a few security issues.
All software has issues; such is the nature of software. I always say: if you self-host, at least follow some security-related websites to keep up to date about these things :)
Do you have any suggestions for reputable security related websites?
Too many :) Here is a snippet of my RSS feed; save it as an XML file and most RSS readers should be able to import it :) https://pastebin.com/q0c6s5UF
A few days late here, but that pastebin had some really good feeds 🙏 I noticed the OPML file was labeled FreshRSS, and I also use FreshRSS, so I fixed up the feeds and configured FreshRSS to scrape the full articles (when possible) and bypass ads, tracking, and paywalls.
I figured I’d pay it forward by sharing my revised OPML file.
I also included some of my other feeds that are related (if you or anyone else is interested).
Some of the feeds are created from scratch, since a few of these sites don't offer RSS, so if the sites change their layout the configs may need to be adjusted a bit; in my experience, though, this rarely happens.
I had to replace some of the urls with publicly hosted versions of the front-ends I host locally and scrape, but feel free to change it up however you like.
https://gist.akl.ink/Idly9231/22fd15085f1144a1b74e2f748513f911
Thank you :)
Each time I've read into self-hosting, it often sounds like opening stuff up to the internet adds a bunch of complexity and potential headaches, but I'm not sure how much of that is practical necessity versus excessive caution.
Limiting the attack surface is a big part of it: geo restrictions, reputation lists, and brute-force mitigation all play a role. Running a vulnerability scanner against your own stuff is important to catch things before others do, and so is regular patching. It can be a rewarding challenge.
Can you recommend me a vulnerability scanner?
100% is LAN-only because my ISP is a cunt.
Ah, CG-NAT, is it? There are workarounds
NAT taken to extremes… it's Starlink, so I think I'm almost completely hidden from the internet entirely.
Quite frankly, I don't really host anything that needs to be accessible from the general internet, so I never bothered with workarounds.