• InverseParallax@lemmy.world
    link
    fedilink
    English
    arrow-up
    15
    ·
    2 months ago

    This seems overblown, we’ve faced these things before.

    The straightforward path is to add new calls and structs while leaving the old code in place, then write tests in which the time32_t calls return -1 and see what breaks.
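
    A minimal sketch of that two-API idea in C (the names my_time32/my_time64 and the -1 convention are invented for illustration; they are not real libc symbols):

    ```c
    /* Hypothetical sketch of the two-API approach: keep the old 32-bit
       entry point in place for existing binaries and add a 64-bit
       variant alongside it. The old call returns -1 once the clock no
       longer fits, which is exactly what tests can probe for. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t my_time32_t;  /* legacy 32-bit seconds counter */
    typedef int64_t my_time64_t;  /* new 64-bit counter, safe past 2038 */

    my_time32_t my_time32(my_time64_t now) {
        if (now > INT32_MAX || now < INT32_MIN)
            return (my_time32_t)-1;  /* out of range for the old ABI */
        return (my_time32_t)now;
    }

    my_time64_t my_time64(my_time64_t now) { return now; }

    int main(void) {
        my_time64_t past_2038 = (my_time64_t)INT32_MAX + 1;  /* one tick past 2038-01-19 */
        assert(my_time32(past_2038) == -1);          /* old API visibly breaks */
        assert(my_time64(past_2038) == past_2038);   /* new API is fine */
        printf("old API: %d, new API: %lld\n",
               (int)my_time32(past_2038), (long long)my_time64(past_2038));
        return 0;
    }
    ```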

    It’s not pretty, but this is life in the new epoch. Gentoo doesn’t have it harder than anyone else, except when they’re trying to rebuild while the transition is happening.

    I know nobody wants two APIs with one deprecated, but this is an ancient design decision we have to live with, and this is how we live with it.

    • Markaos@lemmy.one

      Ah, the joys of requiring non-standard library calls for apps to function.

      The problem is that this approach breaks the C standard library API, which is one of the few things that are actually pretty universal and expected to work on any platform. You don’t want to force app developers to support your snowflake OS that doesn’t support C.

      The current way forward accepted by every other distro is to just recompile everything against the new 64-bit libraries. Unless the compiled software makes weird hardcoded assumptions about sizes of structs (hand-coded assembly might be one somewhat legitimate reason for that, but other distros have been migrating to 64-bit time_t for long enough that this should have been caught already), this fixes the problem entirely for software that can be recompiled.
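
      For what it’s worth, checking what a rebuild actually produced is a one-liner; on glibc 2.34+ a 32-bit build needs -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 to opt into 64-bit time_t, while 64-bit targets already have it:

      ```c
      /* Reports the time_t width the toolchain chose. On glibc 2.34+,
         compiling a 32-bit program with -D_FILE_OFFSET_BITS=64
         -D_TIME_BITS=64 switches time_t to 64 bits; on 64-bit targets
         it is 64-bit already. */
      #include <stdio.h>
      #include <time.h>

      int main(void) {
          printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
          if (sizeof(time_t) >= 8)
              printf("this build can represent dates past 2038\n");
          else
              printf("this build still wraps in 2038\n");
          return 0;
      }
      ```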

      That leaves just the proprietary software, for which you can either have a separate library path with 32-bit time_t dependencies, or use containers to effectively do the same.

      Sneaky edit: why not add new 64-bit APIs to C? Because the C standard never said anything about how to represent time_t. If the chosen implementation is insufficient, it’s purely on the platform to fix it. The C17 standard:

      The range and precision of times representable in clock_t and time_t are implementation-defined.
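
      Concretely, that means portable C code can’t assume a width or even a unit for time_t; the standard-blessed way to measure a distance between two times is difftime():

      ```c
      /* time_t is only guaranteed to be an arithmetic type, so portable
         code measures distances between times with difftime() instead
         of assuming seconds stored in a particular integer width. */
      #include <stdio.h>
      #include <time.h>

      int main(void) {
          time_t now = time(NULL);
          time_t same = now;
          double delta = difftime(same, now);  /* portable: seconds as double */
          printf("sizeof(time_t) = %zu, delta = %.0f\n", sizeof(time_t), delta);
          return 0;
      }
      ```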

      • InverseParallax@lemmy.world

        Your argument is to have two subtly incompatible ABIs, and one day binaries magically break.

        You’re right that it breaks the C stdlib, but that’s literally the point: libc is broken by design, and this is the fix.

        No program using time32_t will ever work after 2038, so anything compiled that way is broken from compilation.
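
        A small demo of what that breakage looks like (the narrowing cast below is implementation-defined in the standard, but wraps two’s-complement on mainstream compilers):

        ```c
        /* A 32-bit seconds counter runs out at 03:14:07 UTC on 2038-01-19;
           one second later the value no longer fits and a 32-bit view of
           it goes negative, i.e. back to December 1901. */
        #include <assert.h>
        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            int64_t last_good = INT32_MAX;              /* 2038-01-19 03:14:07 UTC */
            int32_t wrapped = (int32_t)(last_good + 1); /* one second later, truncated */
            printf("64-bit clock: %" PRId64 ", 32-bit view: %" PRId32 "\n",
                   last_good + 1, wrapped);
            assert(wrapped < 0);  /* time appears to jump back to 1901 */
            return 0;
        }
        ```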

        You’re right that the width isn’t specified, though; the issue is that silently changing types for existing triplets has unfortunate side effects.

        If you really want to be clever, mangle the symbols for the functions that handle time so they encode time64 as appropriate, but doing it silently is begging for trouble.
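
        A toy version of that mangling (using gcc/clang asm labels on ELF targets, the same mechanism as glibc’s __REDIRECT; all names here are invented):

        ```c
        /* The "library" exports old and new symbols side by side; the
           "header" steers new compiles to the 64-bit symbol, while old
           binaries linked against the 32-bit one keep working. */
        #include <assert.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef int64_t demo_time64_t;

        int32_t demo_time_32(void) { return INT32_MAX; }  /* legacy entry point */
        demo_time64_t demo_time_64(void) { return (demo_time64_t)INT32_MAX + 1; }

        /* Compile-time redirect: calls to demo_time() link against the
           demo_time_64 symbol (gcc/clang extension on ELF targets). */
        demo_time64_t demo_time(void) __asm__("demo_time_64");

        int main(void) {
            assert(demo_time() == (demo_time64_t)INT32_MAX + 1); /* new compiles get 64-bit */
            assert(demo_time_32() == INT32_MAX);                 /* old symbol still present */
            printf("demo_time() = %lld\n", (long long)demo_time());
            return 0;
        }
        ```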

        • Markaos@lemmy.one

          Your argument is to have two subtly incompatible ABIs, and one day binaries magically break.

          Whereas your argument seems to be to have a special C variant for 32-bit Linux; there’s no reason to have a special time64_t anywhere else.

          No program using time32_t will ever work after 2038, so anything compiled that way is broken from compilation.

          Yeah, so what will breaking the ABI do? Break it a bit more?

          If you really want to be clever, mangle the symbols for the functions that handle time so they encode time64 as appropriate

          That’s what musl libc does, and the result is two subtly incompatible ABIs: statically linked programs are fine, but if a dynamically linked library exports any function with a time_t parameter or return value, it will use whatever size was configured at build time, and that becomes part of its ABI. So fixing this properly would require every library that wants to pass time_t values in its API to implement its own name mangling. That’s not a reasonable request for a barely used platform (remember, this is just the 32-bit userland; 64-bit was always unaffected).
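
          The layout hazard can be shown without musl at all; the structs below (invented for illustration) simulate the same library API struct compiled under each configuration:

          ```c
          /* A library built with 32-bit time_t and a caller built with
             64-bit time_t disagree on the layout of any struct that
             embeds a time_t, with no compile- or link-time error. */
          #include <assert.h>
          #include <stddef.h>
          #include <stdint.h>
          #include <stdio.h>

          struct event32 { int32_t when; int32_t id; };  /* 32-bit time_t build */
          struct event64 { int64_t when; int32_t id; };  /* 64-bit time_t build */

          int main(void) {
              printf("32-bit build: size %zu, id at offset %zu\n",
                     sizeof(struct event32), offsetof(struct event32, id));
              printf("64-bit build: size %zu, id at offset %zu\n",
                     sizeof(struct event64), offsetof(struct event64, id));
              /* Same source, different memory layout: passing one across
                 the library boundary to the other silently corrupts data. */
              assert(sizeof(struct event32) != sizeof(struct event64));
              assert(offsetof(struct event32, id) != offsetof(struct event64, id));
              return 0;
          }
          ```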

          • InverseParallax@lemmy.world

            Great, then we just leave everything alone and say the 32-bit userland is broken past 2038. I doubt too many people are dying to run a 32-bit userland after that, but if they are, I can guarantee they’ll be running old binaries, probably without source.

            • CarrotsHaveEars@lemmy.ml

              I might be selfish for saying so, but if anyone sets their mind on running anything on a 32-bit system after 2038, they must care enough to compile it themselves, right? Any binaries compiled today will be EOL by then.

              • InverseParallax@lemmy.world

                I think this is a reasonable assumption, but my experience suggests it will absolutely not be true for a lot of proprietary software.

                That being said, that stuff will only be supported on RHEL, which will bend over backwards to keep it sort of working somehow.

  • nyan@sh.itjust.works

    One thing people reading this should remember is that you cannot guarantee all packages on a Gentoo system will be updated simultaneously. It just can’t be done. Because several of the arches affected by this are old, slow, and less-used (32-bit PowerPC, anyone?), it’s also impossible to test all combinations of USE flags for all arches in advance, so sooner or later someone will have something break in mid-compile. For this change, that could result in an unbootable system, or a badly broken one that can’t continue the upgrade because, for example, Python is broken and so portage can’t run.

    The situation really is much more complicated than it would be on a binary distro whose package updates are atomic. Not intractable, but complicated.

    That being said, even a completely borked update would not make the system unrecoverable—you boot from live media, copy a known-good toolchain from the install media for that architecture over the borked install, chroot in, and try again (possibly with USE flag tweaks) until you can get at least emerge --emptytree @system or similar to run to completion. It’s a major, major pain in the ass, though, and I can understand why the developers want to reduce the number of systems that have to be handled that way to as few as possible.

  • yoevli@lemmy.world

    I’m not familiar with the specific install/upgrade process on Gentoo so maybe I’m missing something, but what’s wrong with forcing new installations to use time64 and then forcing existing installs to do some kind of offline migration from a live disk a decade or so down the line? I feel like it’s probably somewhat uncommon for an installation of any distro to be used continuously for that amount of time (at least in a desktop context), and if anyone could be expected to be able to handle a manual intervention like this, it’s long-time Gentoo users.

    The bonus of this would be that it wouldn’t be necessary to introduce a new lib* folder - the entire system either uses time64 or it doesn’t. Maybe this still wouldn’t be possible, though, depending on how source packages are distributed; like I said, I don’t really know Gentoo.

    • Aiwendil@lemmy.ml

      I imagine the “update from another system” path runs into trouble with Gentoo installs any more complex than the base system. A full update from the live disk would have to include lots and lots of (often exotic) tools that might be used in the build process (document generators like doxygen, lexers, testing frameworks, several build systems and make-likes, programming languages…), in addition to building against the already-updated packages while not accidentally building against packages that haven’t been updated yet.

      Or you go the simpler way and only do a base update from the live system: update just the base build system and package management of the Gentoo install, then boot into a “broken” system in which only the basics work and rebuild it from there.

      For me, both of those options sound less desirable than what is suggested in the blog.

  • BoringHusband@lemmy.world

    Why would anyone be running a 32-bit application in 2038? It’s already hard enough now to continue 32-bit support.