If it’s the same, then after installing Docker, creating a vaultwarden user, adding that user to the docker group, and creating your vaultwarden directories, all that’s left is to curl the install script and answer the questions it asks.
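Roughly, that prep boils down to something like this. A minimal sketch only — the script URL and the /opt/vaultwarden paths are placeholders, use whatever your guide actually points at:

```sh
# Assumes a Debian-ish host with Docker already installed.
# Create a dedicated user and add it to the docker group.
sudo useradd -m -s /bin/bash vaultwarden
sudo usermod -aG docker vaultwarden

# Create the data directories and hand them to that user.
sudo mkdir -p /opt/vaultwarden/data
sudo chown -R vaultwarden:vaultwarden /opt/vaultwarden

# Fetch and run the install script (URL below is a placeholder).
sudo -u vaultwarden bash -c \
  'curl -fsSL https://example.com/vw-install.sh -o vw-install.sh && bash vw-install.sh'
```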
I use Bitwarden and the setup was fairly standard with the helper script. I use my own isolated proxy for all my services, so that was already built. I haven’t used Vaultwarden, but if anyone who has used both can tell me the differences, I could maybe help out.
One of our co-hosts on the Lugcast got one and gave a little review of it. Star Labs responded in the comments: https://youtu.be/0MG8c5HJew4?si=UnGhLtcWBkJG2D4M
I would say that is not the best way to keep/restore backups, as you are missing the integrity-checking features of a true backup system. But honestly, what really matters is how important the data is to you.
I did something similar when migrating to 8. Consumer SSDs suck with Proxmox, so I bought 2 enterprise SSDs on eBay before the migration and decided to do everything at once. I didn’t have all the moving parts you did, though. If you have an issue, you will more than likely not be able to pop the old SSDs back in and expect everything to work as normal. I’m not sure what you’re using to create backups, but if you’re not already, I would recommend PBS. That way, if there is an issue, restoring your VMs is trivial. As long as that PBS is up and running correctly (make sure to restore a backup before making any changes, to confirm it works as intended), it should be ok. I have two PBS instances: one on-site and one off-site.
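The test restore is just a couple of commands on the PVE host. Syntax from memory, so treat it as a sketch — “pbs” is a placeholder storage ID, and 100/9100 are example VMIDs:

```sh
# List the backups PVE sees on the PBS datastore; copy the exact
# volume ID from this output rather than trusting my format below.
pvesm list pbs

# Restore to a NEW vmid so the running VM is untouched, then boot it
# (without a bridge attached, to avoid IP conflicts) and poke around.
qmrestore pbs:backup/vm/100/2024-05-01T12:00:00Z 9100 --storage local-lvm
```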
PBS will keep the correct IPs of your VMs so reconnecting NFS shares shouldn’t be an issue either.
Used to be CentOS until the Stream debacle. Now Debian.
I’ve run Jitsi for 4 years now. You can keep your personal variables in an environment file that doesn’t really change and pull down a new compose file whenever you want to update. Ever since the switch to Docker from a native install, it has been much easier to maintain. I’m using an LXC with Debian 12, 4 cores, and 4 GB of RAM. The only reason I’ve allocated that many resources is that we use it to record a podcast with anywhere from 4 to 10 people on the server at a time. As far as bitrate, resolution, etc., that’s all handled within your env file. You’d have to look at the docs to see what’s available for you to choose from.
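My update routine is basically this. A rough sketch — the release tag and directory are examples, check the docker-jitsi-meet repo for the current stable tag:

```sh
# Update docker-jitsi-meet while keeping the existing .env untouched.
cd ~/jitsi
cp .env .env.bak   # personal variables live here and don't change

# Grab just the new compose file out of the release tarball
# (tag "stable-9457" is an example).
curl -fsSL -o compose.tar.gz \
  https://github.com/jitsi/docker-jitsi-meet/archive/refs/tags/stable-9457.tar.gz
tar xzf compose.tar.gz --strip-components=1 \
  docker-jitsi-meet-stable-9457/docker-compose.yml

# Pull the matching images and restart.
docker compose pull && docker compose up -d
```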
Before you buy anything, put some of the same content that buffers on a USB stick or powered drive and play it directly from the Pi 4. Also, connect to your router via ethernet from another PC and check your download speed from the NFS share.
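Something like this for the NFS half of the test. The server address, export path, and filename are placeholders:

```sh
# Mount the share from a wired PC and measure raw sequential read
# speed on a file big enough to defeat caching.
sudo mount -t nfs 192.168.1.10:/export/media /mnt/test
dd if=/mnt/test/big-movie.mkv of=/dev/null bs=1M status=progress
sudo umount /mnt/test
```

If dd reports well above the bitrate of the file and it still buffers on the Pi, the bottleneck is the Pi side, not the NAS.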
CPU is only one factor regarding specs, and a small one at that. What kind of t/s performance are you getting with a standard 13B model?
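Easiest way to measure it, if you’re on Ollama — the model tag here is just an example:

```sh
# --verbose prints timing stats (including eval rate in tokens/s)
# after the response finishes.
ollama run llama2:13b --verbose "Summarize what a reverse proxy does."
```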
What are your laptop specs?
Ollama without a GPU is pretty useless unless you’re using it with Apple silicon. I’d just get rid of it until you get a GPU.
I guess I don’t understand. Did you follow the Docker installation directions correctly and it didn’t work, or did you modify the directions in a way that you prefer and it didn’t work?
I’ve had it installed for a few years now. I started with the AIO but moved to the separate-container install after the AIO was deprecated. I imagine the install process is too complex for Portainer. https://docs.funkwhale.audio/stable/administrator/installation/docker.html
I did steps 1-4 and skipped the rest because I already have a proxy server running. I don’t remember anything related to snapd, though. Mine is running in a Debian 11 VM on Proxmox instead of an LXC, but the process should be the same. They also have a Matrix channel for help: https://matrix.to/#/#funkwhale-support:matrix.org
From what I remember it was relatively painless to install, but upgrading can be a chore, especially this last upgrade. My main interest in FW was the federation aspect as a way of finding new music. If you don’t care about federation, maybe a simpler option would work better for you.
At the very least you need to install a web server, and you need a proxy of some kind. If you truly want old school, you can just create HTML pages hosted from the root of your web server (there are easier modern ways to do this now, but you might learn more doing it the classic way rather than using a CMS).
You will want a reverse proxy sitting between your web server and the internet that handles SSL. Let’s Encrypt is a good option for generating a cert, so that you only expose port 443 on your router to the internet and your web server. You’ll have to open port 80 to generate the cert, but you can close it again once it’s generated. Then you will have HTTPS.
That’s the basics. The how-tos are easy to find online.
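For example, the cert + proxy step can look roughly like this. A minimal sketch assuming nginx on Debian — example.com and the backend address are placeholders:

```sh
# Get a cert via Let's Encrypt (HTTP-01 needs port 80 reachable briefly).
sudo certbot certonly --standalone -d example.com

# A bare-bones nginx reverse proxy terminating TLS on 443.
sudo tee /etc/nginx/sites-available/example.com >/dev/null <<'EOF'
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    location / {
        proxy_pass http://192.168.1.20:8080;  # your internal web server
        proxy_set_header Host $host;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```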
I’m not sure how soon you need this, but if you can wait, Sipeed has a $20 KVM with ATX control that should be out soon: https://lunar.computer/news/sipeed-announces-new-20-risc-v-kvm-device/
I’ve been using Kinoite for quite a while now, and not once has layering broken anything.
That’s great for you. Not everyone may use their distro in the same way as you.
https://discussion.fedoraproject.org/t/is-silverblue-rpm-ostree-intended-to-be-used-with-layered-packages/26162/2
https://discussion.fedoraproject.org/t/fedora-silverblue-36-will-not-succesfully-deploy-after-layering-packages/77502/3
https://gitlab.gnome.org/GNOME/gnome-software/-/issues/991
https://github.com/coreos/rpm-ostree/issues/4280
Not to mention the whole Firefox debacle of shipping an outdated, borked version with the system install instead of just moving to a Flatpak install of the most recent stable release. There’s a very valid reason why package layering is discouraged by the atomic maintainers and why toolbox is there by default as part of the OS. And don’t even get me started on DKMS and driver installation.
So, the points in favor of Kinoite are sticking closer to upstream; however, it seems like I would need to layer quite a few packages. My understanding is that this is discouraged in an rpm-ostree setup, particularly due to update time and possible mismatches with RPMFusion.
It’s not only discouraged, but oftentimes it’s system-breaking. I used Kinoite for a year before I just became too frustrated and gave up. The first thing I learned, though, was to stay away from package layering, because it tended to break things more often than not. Basically, if you can’t find or build a Flatpak and you don’t want to use toolbox all the time, just stick with Workstation. Immutable is great when deploying to multiple servers or locked-down corporate workstations, but it makes no sense for a personal setup, especially if you’re already familiar with Linux.
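The trade-off in commands, for anyone who hasn’t used an atomic distro. The rpm-ostree and toolbox CLIs are real; the package and container names are just examples:

```sh
# Layering bakes a package into the OS image; it gets rebased on every
# update (takes effect after a reboot), which is where breakage creeps in.
rpm-ostree install htop

# The toolbox route keeps the base image untouched: a mutable Fedora
# container where plain dnf works.
toolbox create dev
toolbox enter dev
```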
Just keep in mind that even with a Jetson board you’ll need one of the higher memory configurations to have a non-frustrating Stable Diffusion experience: 32-64 GB, like the Orin, and those aren’t cheap. The Nanos just don’t cut it without severe optimizations and very long generation times.
Elaborate on why samba is bad when it comes to security? Like list a bunch of links like this or write a paragraph summarizing them like a chatbot?
Check out their Matrix: https://pine64.org/community/