• 4 Posts
  • 393 Comments
Joined 9 months ago
Cake day: October 4th, 2023


  • The planning board’s decision was based on health concerns due to the possible negative environmental impact of telecommunication on the residents, especially the children studying at the school who could potentially be exposed to electromagnetic radiation. The town felt the residents would be ‘unsafe’ due to radio frequencies and rejected the company’s notion of building the tower on the land.

    I mean, I think that the planning board is idiotic, but I don’t see why T-Mobile cares enough to fight it. If they don’t build it, okay; the school in question looks to be right in the middle of town, so Wanaque is going to have crummy cell coverage. Let them have bad cell coverage and build a tower somewhere else. It’s not like this is the world’s only place that could use better cell coverage. The main people who benefit from the coverage are Wanaque residents. Sure, okay, there’s some secondary benefit to travelers, but if we get to the point that all the dead zones that travelers pass through out there are covered, then cell providers can go worry about places that are determined not to have cell coverage.

    If I were the cell companies, I’d just get together with the rest of the industry and start publishing a cell-coverage score for each city. Put it online in some accessible database format, so that when places like city-data.com put up data on a city, they can also show that the city has poor cell coverage, and would-be residents are aware of the fact.
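    Something as simple as a per-city record like the one below would do it – a toy sketch, with every field and number invented, just to show the “accessible database format” idea:

    ```python
    import json

    # Toy example of the kind of per-city record carriers could publish.
    # All fields and values here are invented; the point is just that it's
    # machine-readable and trivial for sites like city-data.com to ingest.
    record = {
        "city": "Wanaque",
        "state": "NJ",
        "carrier": "ExampleCarrier",   # hypothetical carrier name
        "coverage_score": 2.1,         # e.g. 0 (no coverage) to 5 (excellent)
        "last_updated": "2024-06-01",
    }

    print(json.dumps(record, indent=2))
    ```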




  • Yes. I wouldn’t be preemptively worried about it, though.

    Your scan is going to try to read and maybe write each sector and see whether the drive returns an error for that operation. In theory, the adapter could report a read or write error even if the operation actually worked, or return some kind of bogus data instead of an error.

    But I wouldn’t expect this to actually arise, and I wouldn’t be particularly worried about the prospect. It’s sort of a “could my grocery store checkout person murder me” thing. Theoretically, yes, but I wouldn’t worry about it unless I had some reason to believe that it was the case.
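    For what it’s worth, the scan itself boils down to something like this rough, read-only sketch – the device path is a placeholder, and a real tool like badblocks is more thorough – which is why a lying adapter is about the only way the result could be bogus:

    ```python
    import os

    # Rough sketch of a read-only surface scan of a block device.
    # /dev/sdX is a placeholder for whatever device the USB adapter exposes.
    DEVICE = "/dev/sdX"
    CHUNK = 1024 * 1024  # read 1 MiB at a time

    bad_regions = []
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        offset = 0
        while offset < size:
            os.lseek(fd, offset, os.SEEK_SET)
            try:
                os.read(fd, min(CHUNK, size - offset))
            except OSError as err:
                # The drive (or, in theory, a misbehaving adapter) reported an error here.
                bad_regions.append((offset, err.errno))
            offset += CHUNK
    finally:
        os.close(fd)

    print(f"{len(bad_regions)} unreadable region(s)")
    ```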



  • I don’t really have a problem with this – I think that it’s rarely in a consumer’s interest to choose a locked phone. Buying a locked phone basically means that you’re getting a loan to pay for hardware that you pay back with a higher service price. But I’d point out that:

    • You can get unlocked phones and service now. I do. There are some privacy benefits to doing so – my cell provider doesn’t know who I am (though they could maybe infer it from usage patterns on their network and statistical analysis). It’s not a lack of unlocked service that’s at issue. In doing this, Congress is basically arguing that the American consumer is just making a bad decision to purchase a plan-combined-with-a-locked-phone, and forcing them not to do so.

    • Consumers will pay more for cell phones up front. That’s not necessarily a bad thing – it maybe makes the carrier market more competitive to not have a large portion of consumers locked to one provider. But there are also some benefits to having the carrier select the cell phones it offers, in that the provider is probably in a better position than consumers to evaluate what phone manufacturers have on offer in terms of things like failure rates.



  • If ISP routers are anything like those in the West, that means they control the DNS servers, the ones on the router cannot be changed, and they likely block 1.1.1.1 and 8.8.8.8 and so on, as Virgin Media does in the UK (along with blocking secure DNS), for example. That definitely opens up a massive attack vector for an ISP to spin up its own website with a verified cert and malware and have the DNS resolve to that when users try to access it – either to download the software needed to access this Grid System or, if it’s a web portal, the portal itself.

    Browser page integrity – if you’re using https – doesn’t rely on DNS responses.

    If I go to “foobar.com”, there has to be a valid cert for “foobar.com”. My ISP can’t get a valid cert for foobar.com unless it has a way to insert its own CA into my browser’s list of trusted CAs (which is what some business IT departments do so that they can snoop on traffic, but an ISP probably won’t be able to do, since they don’t have access to your computer) or has access to a trusted CA’s key, as per above.

    They can make your browser go to the wrong IP address, but they can’t make that IP address present information over https that your browser believes to belong to a valid site.
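    To illustrate, here’s roughly the check the browser is doing – the hostname and IP below are placeholders (203.0.113.10 is a documentation address) – and why a bad DNS answer just gets you a failed TLS handshake rather than a trusted page:

    ```python
    import socket
    import ssl

    HOSTNAME = "foobar.com"      # the name the user typed
    SPOOFED_IP = "203.0.113.10"  # hypothetical address a tampered DNS answer returned

    # Uses the system's trusted CA store, much as a browser does.
    context = ssl.create_default_context()

    try:
        with socket.create_connection((SPOOFED_IP, 443), timeout=5) as sock:
            # server_hostname drives both SNI and certificate name verification.
            with context.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
                print("Certificate chain accepted for", HOSTNAME)
    except ssl.SSLCertVerificationError as err:
        print("Rejected: server has no valid cert for", HOSTNAME, "-", err)
    except OSError as err:
        print("Connection failed:", err)
    ```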


  • I’d also add, on an unrelated note, that if the concern is bandwidth usage, which is what the article says, I don’t see why the ISP doesn’t just throttle users based entirely on bandwidth usage. Like, sure, there are BitTorrent users that use colossal amounts of bandwidth, which will cause problems for pricing based on overselling bandwidth, the norm for consumer broadband.

    But you don’t need to do some kind of expensive, risky, fragile, and probably liability-issue-inducing attack on BitTorrent if your concern is bandwidth usage. Just start throttling down bandwidth as usage rises, regardless of protocol. Nobody ever gets cut off, but if they’re using way above their share of bandwidth, they’re gonna have a slower connection. Hell, go offer to sell them a higher-bandwidth package. You don’t lose money, nobody is installing malware, you don’t have the problem come right back as soon as some new bandwidth-munching program shows up (YouTube?), etc.
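    A protocol-agnostic policy really can be that dumb. Rough sketch, with completely made-up tiers and numbers – the only input is how much the subscriber has transferred this month:

    ```python
    # Made-up shaping tiers: (monthly cap in GB, shaped rate in Mbit/s).
    TIERS = [
        (500, 100.0),   # under 500 GB this month: full 100 Mbit/s
        (1000, 50.0),   # 500-1000 GB: 50 Mbit/s
        (2000, 20.0),   # 1-2 TB: 20 Mbit/s
    ]
    FLOOR_MBPS = 5.0    # nobody ever gets cut off entirely

    def shaped_rate_mbps(monthly_usage_gb: float) -> float:
        """Return the rate to shape a subscriber to, based only on usage."""
        for cap_gb, rate in TIERS:
            if monthly_usage_gb < cap_gb:
                return rate
        return FLOOR_MBPS

    for usage in (120, 750, 1500, 4000):
        print(f"{usage:>5} GB used -> shape to {shaped_rate_mbps(usage)} Mbit/s")
    ```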


  • I don’t really understand the attack vector the ISP is using, unless it’s exploiting some kind of flaw in higher-level software than BitTorrent itself.

    A torrent should be identified uniquely by a hash in a magnet URL.

    When a BitTorrent user obtains a hash, as long as it’s from an https webpage, the ISP shouldn’t be able to spoof the hash. You’d have to either get your own key added to a browser’s keystore or have access to one of the trusted CA’s keys for that.
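    To be concrete, the hash is just a field in the magnet link. A toy example – the link and hash here are made up:

    ```python
    from urllib.parse import urlparse, parse_qs

    # Made-up magnet link; the 40-hex-character value is the v1 infohash.
    magnet = ("magnet:?xt=urn:btih:"
              "c12fe1c06bba254a9dc9f519b335aa7c1367a88a"
              "&dn=example")

    params = parse_qs(urlparse(magnet).query)
    xt = params["xt"][0]            # "urn:btih:<infohash>"
    infohash = xt.split(":")[-1]
    print("infohash:", infohash)    # everything downloaded later must be consistent with this
    ```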

    Once you have the hash, you should be able to find and validate the Merkle hash tree from the DHT. Unless you’ve broken SHA and can generate collisions – which an ISP isn’t going to – you shouldn’t be able to feed a user a bogus hash tree from the DHT.

    Once you have the hash tree, you shouldn’t be able to feed a user any complete chunks that are bogus unless you’ve broken the hash function in BitTorrent’s tree (which I think is also SHA). You can feed them up to one byte short of a chunk to try to sandbag a download, but once they have all the data, they should be able to reject a chunk that doesn’t hash to the expected value in the tree.
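    The per-piece check is about as simple as it gets. Rough sketch with placeholder data – v1 of the protocol hashes each piece with SHA-1, v2 uses SHA-256 and a Merkle tree, but the principle is the same:

    ```python
    import hashlib

    def piece_is_valid(piece_data: bytes, expected_sha1_hex: str) -> bool:
        """Accept a downloaded piece only if it hashes to the expected value."""
        return hashlib.sha1(piece_data).hexdigest() == expected_sha1_hex

    # Placeholder data: pretend a peer fed us something bogus.
    expected = hashlib.sha1(b"the real piece contents").hexdigest()
    bogus_piece = b"...tampered bytes from a hostile peer..."

    print(piece_is_valid(bogus_piece, expected))  # False: the client throws the piece away
    ```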

    I don’t see how you can reasonably attack the BitTorrent protocol, ISP or no, to try and inject malware. Maybe some higher level protocol or software package.


  • tal@lemmy.today to Selfhosted@lemmy.world · Server for a boat

    What hardware and Linux distro would you use in this situation?

    The distro isn’t likely to be a factor here. Any (non-super-specialized) distro will be able to solve issues in about the same way.

    I mean, any recommendation is going to just be people mentioning their preferred distro.

    I don’t know whether saltwater exposure is a concern. If so, that may impose some constraints on heat generation (if you have to put the machine and its storage hardware in a waterproof case).






  • If there’s a better way to configure Docker, I’m open to it, as long as it doesn’t require rebuilding everything from scratch.

    You could try using lvmcache (block-device-level caching) or bcachefs (filesystem-level caching) or something like that: have rotational storage be the primary form of storage, but let the system use an SSD as a cache. Dunno what kind of performance improvement you might expect, though.
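    If you went the lvmcache route, the classic setup is roughly the following – the VG name, LV name, device, and size are all made up (existing VG “vg0” with the Docker data on “vg0/data”, spare SSD at /dev/nvme0n1), and it’s destructive, so it’s only meant to show the shape of it:

    ```python
    import subprocess

    def run(cmd):
        """Print and run one LVM command, stopping on any failure."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Hypothetical devices/names: SSD at /dev/nvme0n1, VG "vg0", data LV "vg0/data".
    run(["pvcreate", "/dev/nvme0n1"])                 # initialize the SSD as a PV
    run(["vgextend", "vg0", "/dev/nvme0n1"])          # add it to the existing VG
    run(["lvcreate", "--type", "cache-pool",
         "-L", "100G", "-n", "cache0",
         "vg0", "/dev/nvme0n1"])                      # carve a cache pool out of the SSD
    run(["lvconvert", "--type", "cache",
         "--cachepool", "vg0/cache0", "vg0/data"])    # attach the cache to the data LV
    ```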



  • I’d call Reddit and the Threadiverse and Usenet and such “forums”. They’re just broad ones with many different categories – “meta-forums” – as compared to a site with a forum dedicated to a single topic.

    Some other drawbacks of having many independent forums:

    • You have to create and maintain a ton of accounts.

    • Different, incompatible markup syntax.

    • Often missing features (e.g. Markdown has tables; few forums let one create tables)

    • Some forum systems ordered comments by time rather than by parent comment, which was awful to browse.

    • Often insane requirements to get an account. I can think of a few forums that were very difficult to get access to, either because the “new user” system was incompatible with some email system or just had other problems.

    I mean, there are a lot of websites with “comment” sections, which are kind of lightweight forums attached to a webpage, and they’re almost invariably awful.