  • If it works, I don’t update unless I’m bored or something. I also spread things out across multiple machines, so there’s less chance of something like the charts feature going away, as you describe. My NAS is pretty much just a NAS now.

    You can probably back up your configs/data, upgrade, then deploy Jellyfin again, restore, and reconfigure (a rough sketch of the backup step is below). You should probably back up the data on your ZFS pool too. For what it’s worth, I recently updated to the latest TrueNAS SCALE from a roughly five-year-old FreeBSD-based version of TrueNAS, and the pools still worked fine (none of the “apps” or jails worked, obviously). The upgrade process even ported my service configurations over. I didn’t care about much of the data in the pools, so I only backed up the most important stuff.
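
    Something like this is what I mean by backing up configs first. It’s just a minimal sketch in Python; the paths are hypothetical (check where your apps actually store their config), and on ZFS you’d probably also take a snapshot of the dataset first (e.g. `zfs snapshot tank/apps@pre-upgrade`).

    ```python
    import tarfile
    import time
    from pathlib import Path

    # Hypothetical locations; check your own pool/dataset layout first.
    CONFIG_DIRS = [
        Path("/mnt/tank/apps/jellyfin/config"),
        Path("/mnt/tank/apps/jellyfin/metadata"),
    ]
    DEST = Path("/mnt/backup")  # ideally a different pool or an external disk

    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = DEST / f"jellyfin-config-{stamp}.tar.gz"

    with tarfile.open(archive, "w:gz") as tar:
        for d in CONFIG_DIRS:
            if d.is_dir():
                # Store paths relative to /mnt so the archive restores cleanly.
                tar.add(d, arcname=str(d.relative_to("/mnt")))
            else:
                print(f"skipping missing dir: {d}")

    print(f"wrote {archive}")
    ```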


  • I personally use a dual-core Pentium with 16GB of RAM. When I first installed TrueNAS (FreeNAS back then), I only had 8GB of RAM, but that proved not to be enough to run all the services I wanted, so I would suggest 12-16GB. Depending on the services you want to run, any multi-core x86 CPU that supports 16GB of RAM should be adequate. I believe TrueNAS recommends ECC RAM, but I don’t think using consumer-grade RAM and hardware has caused me any problems. I’m also using an old SSD as the system drive, which is the recommended setup now (I used to use 2 mirrored USB thumb drives, but that’s not recommended anymore). Very importantly, make sure the HDD(s) you get are not shingled (SMR) drives; I made that mistake initially, and performance was ridiculously bad.








  • The PC I’m using as a little NAS usually draws around 75 watts. My Jellyfin and general home server draws about 50 watts while idle but can jump up to 150 watts. Most of the components are very old. I know I could cut the power usage significantly with newer components, but I’m not sure the electricity savings outweigh the cost of sending the old parts to the landfill and creating demand for more new components to be manufactured (rough payback math below).
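
    Rough payback math (all the figures here are assumptions; plug in your own draw, tariff, and prices):

    ```python
    HOURS_PER_YEAR = 24 * 365

    def annual_cost(watts: float, price_per_kwh: float = 0.30) -> float:
        """Yearly electricity cost for a machine drawing `watts` 24/7."""
        return watts / 1000 * HOURS_PER_YEAR * price_per_kwh

    old = annual_cost(75)    # my little NAS at ~75 W
    new = annual_cost(15)    # a hypothetical efficient replacement at ~15 W
    replacement_price = 400  # hypothetical cost of the new machine

    savings = old - new
    print(f"old: ~{old:.0f}/yr, new: ~{new:.0f}/yr, savings: ~{savings:.0f}/yr")
    print(f"payback: ~{replacement_price / savings:.1f} years")
    # With these made-up numbers: ~197/yr vs ~39/yr, ~2.5 years to break even.
    ```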



  • Last time I looked it up and calculated it, these large models are trained on only something like 7× as many tokens as they have parameters (rough numbers at the end of this comment). If you think of it like compression, a 1:7 ratio for lossless text compression is perfectly possible.

    I think the models can still output a lot of stuff verbatim if you try to get them to; you just hit the guardrails they put in place. It seems to work fine for public domain stuff, e.g. “Give me the first 50 lines from Romeo and Juliet.” (albeit with a TOS warning, lol). “Give me the first few paragraphs of Dune.” seems to hit a guardrail, or maybe the behavior was just suppressed through reinforcement learning.

    A preprint released recently detailed how to get around that reinforcement learning by controlling the first few tokens of a model’s output, showing the “unsafe” data is still in there.
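
    Back-of-the-envelope on that ratio claim, for what it’s worth. The 7× figure compares token count to parameter count; how that translates to bytes depends entirely on what you assume per token and per weight, so treat these numbers as illustrative only:

    ```python
    params = 70e9        # a 70B-parameter model (illustrative)
    tokens = params * 7  # ~490B training tokens (the ~7x figure)

    bytes_per_param = 2  # fp16/bf16 weights
    bytes_per_token = 4  # very rough average for English text

    corpus_bytes = tokens * bytes_per_token
    model_bytes = params * bytes_per_param

    print(f"corpus:  ~{corpus_bytes / 1e12:.1f} TB of text")
    print(f"weights: ~{model_bytes / 1e9:.0f} GB")
    print(f"byte ratio: ~{corpus_bytes / model_bytes:.0f}:1")
    # ~2.0 TB of text vs ~140 GB of weights, i.e. ~14:1 in bytes here.
    ```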




  • I use GPT (4o, premium) a lot, and yes, I still sometimes experience source hallucinations. It will also sometimes hallucinate things that aren’t in the source at all. I get better results when I tell it not to browse; the large context from processing web pages seems to hurt its “performance.” I would never trust gen AI for a recipe. I usually just use Kagi to search for recipes, with it set to promote results from recipe sites I like.


  • Hmm. I just assumed 14B was distilled from 72B, because that’s what I thought Llama was doing, and that would just make sense. On further research, it’s not clear whether Llama used the traditional teacher method (sketched at the end of this comment) or just trained the smaller models on synthetic data generated by a large model. I suppose training smaller models on a larger amount of data generated by larger models is similar, though. It does seem like Qwen was also trained on synthetic data, because it sometimes thinks it’s Claude, lol.

    Thanks for the tip on Medius. Just tried it out, and it does seem better than Qwen 14B.
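
    For reference, the “traditional teacher method” I mean is classic logit distillation (Hinton et al., 2015): the student is trained to match the teacher’s full output distribution, not just text sampled from it. A minimal PyTorch-style sketch, with models and data as placeholders:

    ```python
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          T: float = 2.0, alpha: float = 0.5):
        """Blend soft teacher targets with the ordinary hard-label loss."""
        # Soft targets: match the teacher's temperature-smoothed distribution.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Hard targets: the usual cross-entropy on the real next tokens.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard
    ```

    Training on synthetic data instead just means generating text with the big model and doing plain next-token cross-entropy on it; the teacher’s logits never enter the loss.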





  • I assume the “kill it” comment was a little tongue-in-cheek. On small SBCs, like a Pi, or on old hardware, it could be a problem. I’ve seen people with Flatpaks taking up 30GB of space, which is significant (a quick way to measure it yourself is below). I’m not sure how much RAM it wastes, but I assume running 6 different applications that have loaded 6 different versions of the Qt libraries would use significantly more RAM than loading the system’s shared Qt libraries once.
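
    If you want to check the disk side on your own system, here’s a rough sketch. The two roots are the common Flatpak install locations; counting hardlinks only once matters because Flatpak’s OSTree store hardlinks files aggressively, so a naive walk overcounts.

    ```python
    import os
    from pathlib import Path

    def dir_size_gb(root: Path) -> float:
        """GB used under `root`, counting hardlinked files only once."""
        seen, total = set(), 0
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                try:
                    st = os.lstat(os.path.join(dirpath, name))
                except OSError:
                    continue  # unreadable entry; skip it
                key = (st.st_dev, st.st_ino)
                if key not in seen:
                    seen.add(key)
                    total += st.st_size
        return total / 1e9

    for root in (Path("/var/lib/flatpak"),               # system installs
                 Path.home() / ".local/share/flatpak"):  # per-user installs
        if root.exists():
            print(f"{root}: {dir_size_gb(root):.1f} GB")
    ```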