There are quite a few brand choices when it comes to purchasing hard disks or SSDs, but which one do you find the most reliable? Personally I've had great experiences with Seagate, but I heard Chris Titus had the opposite experience with them.

So I'm curious which manufacturers people here swear by, and why. Which ones have you had the worst experience with?

    • BigMikeInAustin@lemmy.world · 7 months ago

      In general and simplifying, my understanding is:

      There is the area where data is written, and there is the File Allocation Table that keeps track of where files are placed.

      When part of a file needs to be overwritten (either because data is inserted or replaced), the data is actually written to a new area and the old data is left as-is. The File Allocation Table is updated to point to the new area.

      Eventually, as the disk gets used, writes come back around to a space that was previously written to but is no longer in use, and that old data gets physically overwritten.

      Each time a spot is physically overwritten, it very very slightly degrades.

      With a larger disk, it takes longer to come back to a spot that has already been written to.

      Oversimplifying: previously written data that is no longer part of any file is effectively lost, in the way that shredding a paper effectively destroys whatever was written on it, and more securely than on a spinning disk.
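      The "write to a new area, then repoint the table" behaviour described above can be sketched in a few lines. This is a hypothetical toy model, not any real filesystem: the class name, block layout, and write-count tracking are all made up for illustration.

```python
# Toy model of an allocation table that never overwrites in place:
# new data lands in a fresh block, the table is repointed, and the
# old block is only reused (and physically rewritten) later.

class SimpleAllocationTable:
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))   # unused block addresses
        self.table = {}                       # filename -> current block
        self.writes = [0] * num_blocks        # per-block overwrite count

    def write(self, name, _data):
        new_block = self.free.pop(0)          # data goes to a fresh block...
        self.writes[new_block] += 1           # ...each write degrades it slightly
        old_block = self.table.get(name)
        if old_block is not None:
            self.free.append(old_block)       # old data left as-is; block goes to
        self.table[name] = new_block          # the back of the reuse queue

fs = SimpleAllocationTable(num_blocks=4)
fs.write("a.txt", "v1")
fs.write("a.txt", "v2")           # overwrite lands in a different block
assert fs.table["a.txt"] == 1     # table now points at the new block
assert fs.writes == [1, 1, 0, 0]  # block 0 still physically holds stale "v1"
```

      Note how a larger `num_blocks` means the reuse queue is longer, so any given block is physically rewritten less often, which is the point made above about larger disks.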

      • teawrecks@sopuli.xyz · 7 months ago

        Afaik, the wear and tear on SSDs these days is handled under the hood by the firmware.

        Concepts like files, FATs, and copy-on-write are filesystem-specific. I believe that even if a filesystem were to deliberately write to the same location repeatedly to intentionally degrade an SSD, the firmware would intelligently shift its block mapping around under the hood so as to spread out the wear. If the SSD detects that a block is producing errors (failed parity checks), it marks it as bad and maps in a spare block. To the filesystem, there’s still perfectly good storage at that address, albeit with a potential one-off read error.

        A larger SSD just gives the firmware more spare blocks to pull from.
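        A rough sketch of what that firmware-level wear leveling could look like, under heavy simplification. This is a hypothetical model, not real SSD firmware: the class, the least-worn-block policy, and the bad-block set are invented for illustration.

```python
# Toy model of SSD wear leveling: the filesystem keeps writing one
# *logical* address, but the firmware spreads the *physical* wear and
# quietly retires blocks it decides are bad.

class WearLevelingSSD:
    def __init__(self, logical_blocks, spare_blocks):
        total = logical_blocks + spare_blocks
        self.mapping = {i: i for i in range(logical_blocks)}  # logical -> physical
        self.free = list(range(logical_blocks, total))        # spare pool
        self.wear = [0] * total                               # per-block write count
        self.bad = set()                                      # retired blocks

    def write(self, logical, _data):
        old_phys = self.mapping[logical]
        # steer the write to the least-worn free physical block
        new_phys = min(self.free, key=lambda b: self.wear[b])
        self.free.remove(new_phys)
        self.wear[new_phys] += 1
        self.mapping[logical] = new_phys      # same logical address as before
        if old_phys not in self.bad:
            self.free.append(old_phys)        # bad blocks never rejoin the pool

ssd = WearLevelingSSD(logical_blocks=2, spare_blocks=2)
for _ in range(6):
    ssd.write(0, "x")                 # hammer one logical address
assert max(ssd.wear) <= 3             # wear is spread over several blocks

ssd.bad.add(ssd.mapping[0])           # firmware decides this block is failing
ssd.write(0, "y")                     # remapped transparently to a spare
assert 0 not in ssd.free              # the retired block stays out of rotation
```

        More spare blocks simply means a bigger `free` pool for `min()` to choose from, so each physical block absorbs fewer writes: the point made above about larger SSDs.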