• Evil_Shrubbery@lemm.ee · ↑4 · edited · 14 hours ago

    We underfund our heroes, don’t we?

    (Also, that monitor’s model name in the thumbnail: “UHD 4K 2K” :D)

  • merthyr1831@lemmy.ml · ↑14 ↓3 · 1 day ago

    Yet another reason to back flatpaks and distro-agnostic software packaging. We can’t afford to use dozens of build systems to maintain dozens of functionally identical application repositories.

    • chaoticnumber@lemmy.dbzer0.com · ↑7 · edited · 6 hours ago

      This is such a superficial take.

      Flatpaks have their use-case. Alpine has its use-case as a small-footprint distro focused on security. Using flatpaks would nuke that ethos.

      Furthermore, they need those servers to build their core and base system packages. There is no distro out there that uses flatpaks or appimages for their CORE.

      Any distro needs to build their toolchain, libs and core. Flatpaks are irrelevant to this discussion.

      At the risk of repeating myself, flatpaks are irrelevant to Alpine because it’s a small-footprint distro used a lot in container base images, and containers use their own packaging!

      Furthermore, flatpaks are literal bloat compared to Alpine’s apk packages, which focus on security and minimalism.

      Edit: Flatpak literally uses Alpine to build its packages. No Alpine, no flatpaks. Period.

      Flatpaks have their use. This is not that. Check your ignorance.

      • merthyr1831@lemmy.ml · ↑1 · 3 hours ago

        I know there are limitations to flatpak and other agnostic app-bundling systems, but there are simply far too many resources invested in repacking the same applications for each distro. These costs wouldn’t be so bad if more resources were pooled behind a single repository and build system.

        As for using flatpaks at the core of a distro, we know from snaps that it is possible to distribute core OS components/updates via a containerised package format. As far as I know there is no fundamental design flaw that makes flatpak incapable of doing so; rather, distro maintainers lack the will to develop the features in flatpak necessary to support it.

        That being said, it’s beside my point. Even if Alpine, Fedora, Ubuntu, SUSE, etc. all kept their native package formats for core OS features and utilities, they could all stand to save a LOT in the costs of maintaining superfluous (and often buggy and outdated) software by deferring to flatpak where possible.

        There needs to be a final push for flatpak adoption, the same way we hovered between Wayland and Xorg for years until we decided that Wayland was most beneficial to the future of Linux. Of course, this meant addressing the flaws of the project and fixing a LOT of broken functionality, but we’re now closer than ever to dropping Xorg.

    • harsh3466@lemmy.ml · ↑8 · 17 hours ago

      I’m a fan of flatpaks, so this isn’t to negate your argument. Just pointing out that Flathub is also using Equinix.

      Source

      Interlude: Equinix Metal née Packet has been sponsoring our heavy-lifting servers doing actual building for the past 5 years. Unfortunately, they are shutting down, meaning we need to move out by the end of April 2025.

    • ubergeek@lemmy.today · ↑9 · 22 hours ago

      Pretty sure flatpak uses Alpine as a bootstrap… Flatpak, after all, brings along an entire distro to run an app.

    • balsoft@lemmy.ml · ↑6 · 23 hours ago

      I don’t think it’s a solution for this; it would just mean maintaining many distro-agnostic repos. Forks and alternatives always thrive in the FOSS world.

    • Mwa@lemm.ee · ↑2 · edited · 19 hours ago

      Let the community package it as deb, rpm, etc., while the devs focus on flatpak/appimage.

    • Karna@lemmy.ml (OP) · ↑7 · 18 hours ago

      That solves the storage issue related to media distribution, but not the CI/CD pipeline infra issue.

  • ryannathans@aussie.zone · ↑11 ↓1 · 2 days ago

    How are they so small and underfunded? My hobby home servers and internet connection satisfy their simple requirements.

      • DaPorkchop_@lemmy.ml · ↑4 · 18 hours ago

        That’s ~2.4Gbit/s. There are multiple residential ISPs in my area offering 10Gbit/s up for around $40/month, so even if we assume the bandwidth is significantly oversubscribed, a single cheap residential internet plan should be able to handle that no problem (let alone a datacenter setup, which probably has 100Gbit/s links or faster).
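
        As a quick sanity check on that conversion (just a sketch: the ~800TB/month figure quoted downthread and a 30-day month are assumptions):

        ```python
        # Back-of-the-envelope: convert ~800 TB/month of traffic to a sustained rate.
        # Assumes a 30-day month and SI units (1 TB = 10^12 bytes), as ISPs advertise.
        TB_PER_MONTH = 800
        SECONDS_PER_MONTH = 30 * 24 * 60 * 60  # 2,592,000 s

        bytes_per_sec = TB_PER_MONTH * 1e12 / SECONDS_PER_MONTH
        print(f"{bytes_per_sec / 1e6:.0f} MB/s sustained")        # ~309 MB/s
        print(f"{bytes_per_sec * 8 / 1e9:.2f} Gbit/s sustained")  # ~2.47 Gbit/s
        ```

        That matches both the ~2.4Gbit/s here and the ~300 megabytes per second cited elsewhere in the thread.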

        • synicalx@lemm.ee · ↑1 · 3 hours ago

          If you do 800TB in a month on any residential service you’re getting fair use policy’ed before the first day is over, sadly.

      • chaoticnumber@lemmy.dbzer0.com · ↑7 · edited · 2 days ago

        That averages out to around 300 megabytes per second. No way anyone has that at home commercially.

        One of the best commercial fiber connections I ever saw will provide 50 megabytes per second upload, and that’s best effort.

        No way in hell you can satisfy that bandwidth requirement at home. Let’s not mention that they need 3 nodes with such bandwidth.

        • Evil_Shrubbery@lemm.ee · ↑2 · edited · 14 hours ago

          Yeah, that’s almost 150% more than my (theoretical) bandwidth at home (Gbps, but I live alone & just don’t want to pay much), and that’s just assuming constant workload (peaks must be massive).

          This is indeed considerable, yet hopefully solvable. It certainly is from the link perspective.

        • DaPorkchop_@lemmy.ml · ↑4 · edited · 18 hours ago

          50MB/s is like 0.4Gbit/s. Idk where you are, but in Switzerland you can get a symmetric 10Gbit/s fiber link for like 40 bucks a month as a residential customer. Considering 100Gbit/s and even 400Gbit/s links are already widely deployed in datacenter environments, 300MB/s (or 2.4Gbit/s) could easily be handled even by a single machine (especially since the workload basically consists of serving static files).
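
          To illustrate how mundane that workload is, a toy sketch (a real mirror would sit behind nginx or a CDN; the port here is arbitrary):

          ```python
          # Toy static-file server: the same shape of workload as a package mirror,
          # i.e. reading files from disk and streaming them to many connections.
          from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

          # Serves the current directory on port 8080, one thread per connection.
          ThreadingHTTPServer(("", 8080), SimpleHTTPRequestHandler).serve_forever()
          ```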

      • ryannathans@aussie.zone · ↑5 · 2 days ago

        On my current internet plan I can move about 130TB/month, and that’s sufficient for me, but I could upgrade my plan to satisfy the requirement.

        • Karna@lemmy.ml (OP) · ↑2 · 18 hours ago

          Your home server might have the required bandwidth, but not the requisite infra to support server load (hundreds of parallel connections/downloads).

          Bandwidth is only one aspect of the problem.

          • ryannathans@aussie.zone · ↑3 · 17 hours ago

            Ten-gig fibre for internal networking, enterprise SFP+ network hardware, a big meaty 72 TB FreeBSD ZFS file server with plenty of cache, backup power supply, and a UPS.

            The tech they require really isn’t expensive anymore.