Hmm, so sounds like they’re moving the kernel scheduler down to a hardware layer? Basically just better SMP?
I spent a year tracking down random AFCI circuit breaker trips until I realized it was my powerline Ethernet. Never again.
lspci doesn’t care about drivers. What’s lshw say?
Sounds like maybe a fake card or something. Do you also have a 3060 in there?
If you need support outside of business hours, you’re fucked.
Friend had a network misconfig on their side take his server out on Friday night and they didn’t fix it until Monday.
Same for Samsung afaik. Pop into the bootloader and just wipe everything.
SMB.
The Windows NFS implementation sucks, but everything talks SMB.
With the hw MCE errors, it’s probably toast.
You could try reseating or swapping the RAM around, if it’s socketed.
I have a sliding door that I want to toss a stepper motor on, so my dog can push a button and let himself in / out.
I share my jellyfin with my mom
Interesting, thanks for the link!
Well, that’s technically correct, but if you’re so dependent on disk cache for system performance that you can’t live without it, you really need to look at doing an upgrade.
When a box swap-deaths, it usually never actually fills swap far enough for the kernel to OOM-kill anything. The massive performance impact of swapping slows the app, along with the entire rest of the box, to the point of being useless first. Disk cache should not be a concern during these abnormal events.
Just turn off swap? You don’t really need it, and the kernel will just OOM kill without it.
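For reference, it’s two steps: turn it off now, then keep it off across reboots. The sed pattern here is just one way to comment out the fstab entry (GNU sed assumed); edit the file by hand if you prefer.

```shell
# Immediate, until reboot (needs root):
#   swapoff -a
# Permanent: comment out the swap line(s) in /etc/fstab, e.g.:
#   sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab
# What that sed does, shown on a sample fstab line:
line='UUID=1234-abcd none swap sw 0 0'
echo "$line" | sed '/\sswap\s/s/^/#/'   # prints the line prefixed with '#'
```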
Self-hosting email is even more of a pain.
A lot of reasonably competent geeks just never get deep into networking, and VPNs can be overwhelming. It doesn’t help that for a long time it was all IPsec, which you basically need to learn voodoo to manage. Thankfully we have much better tools now, but it’s still a tech layer that many people don’t touch frequently.
The tailscale client should have created an interface, but I’ve never used it on a box also running wg. You don’t have a tailscale-specific interface in ip addr show at all? That’s… odd.
Do you have a device at /dev/net/tun?
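If you want a paste-able check (assuming Linux; tailscaled in TUN mode needs this device node):

```shell
# Check for the TUN device node; if it's missing, the module may not be loaded.
if [ -c /dev/net/tun ]; then
  tun_status="present"
else
  tun_status="missing"   # try: sudo modprobe tun
fi
echo "tun device: $tun_status"
```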
How do I do this?
Run ip route show table all. I would expect to see a line like:
192.168.178.0/24 dev tailscale0 table 52
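A paste-able version of that check (read-only; table 52 is where the Tailscale client normally puts its routes):

```shell
# Look for the advertised subnet in Tailscale's routing table (table 52).
if ip route show table 52 2>/dev/null | grep -q '192.168.178.0/24'; then
  route_status="present"
else
  route_status="missing"
fi
echo "route to 192.168.178.0/24: $route_status"
```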
Out of curiosity, on a remote node run tcpdump -i tailscale0 -n icmp, then do a ping from the other side. Does tcpdump see the ICMP packets come in?
Relay “ams” means you’re using Tailscale’s DERP node in Amsterdam, which is expected if you don’t have direct connectivity through your firewall. Since you opened the ports that’s unusual and worth looking into, but I’d worry about it after you get basic connectivity.
So to confirm your behavior: you can tailscale ping each other fine, and tailscale ping the internal network. You cannot, however, ping from the OS to the remote internal network?
Have you checked your routing tables to make sure the tailscale client added the route properly?
Also have you checked your firewall rules? If you’re using ufw or something on top, try just turning off iptables briefly and see if that lets you ping through.
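If it is plain iptables, something like this works. The commands are shown as comments since they need root and leave the firewall wide open until you restore; the backup path is just an example.

```shell
# Assumes plain iptables (different story on nftables or firewalld).
backup=/tmp/iptables.backup
# sudo iptables-save > "$backup"      # save the current rules first
# sudo iptables -P INPUT ACCEPT
# sudo iptables -P FORWARD ACCEPT
# sudo iptables -F                    # flush: firewall is now wide open
# ...retry the ping...
# sudo iptables-restore < "$backup"   # put the rules back
```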
Can your nodes ping each other on the Tailscale IPs? Check tailscale status and make sure the nodes see each other listed there.
Try tailscale ping 1.2.3.4 with the internal IP addresses and see what message it gives you.
tailscale debug netmap is useful to make sure your clients are seeing the routes that headscale pushes.
I’d recommend avoiding spinning disks and going all ssd if possible.
You can get 12V-input ATX power supplies.
You may want to consider something like a Lenovo Tiny with a few large SSDs.