
  • The problem is that those blocking extensions are based on timestamps. Those timestamps are added by users; it’s a crowdsourced thing. But the ads one user sees differ from what another user sees, and the length of the ads will likely differ too, which makes the whole timestamp approach a no-go.

    Along with the timestamps, there needs to be a way to detect where the actual video begins. That way an offset can at least be applied and the timestamps maintained, though it would introduce a certain level of error (there’s a rough sketch of the idea at the end of this comment).

    The next issue would be to advance the video to the place where the actual video begins. This can be very hard, as it requires some way of recognizing the right frame in the buffer. It needs the starting frame to actually be in the buffer (with ads longer than a few seconds, this isn’t guaranteed), the add-on to have access to that buffer (depending on the platform, this isn’t guaranteed either), and a reliable way to recognize the right frame, given the different encoding and quality setups.

    And this needs to be done cheaply, with as little infrastructure as possible. A database of timestamps is very small, and crowdsourcing those timestamps is relatively easy. But recognizing frames requires more data to be stored, and crowdsourcing the right frame is a lot harder than crowdsourcing a timestamp. If the infrastructure ends up being big and complex, someone needs to pay for it, and I don’t know if donations alone would cut it. So you would end up needing to show ads, which is exactly what you’re trying to avoid.

    I’m sure the very smart and creative people working on these things will find a way. But it won’t be easy, so I don’t expect a solution very soon.
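
    Just to illustrate the offset idea, here’s a minimal sketch in TypeScript (the kind of thing a browser extension might do). Everything in it is hypothetical, especially detectContentStart(), which hand-waves away the actual hard part of finding where the content begins, and a single offset only covers pre-roll ads, hence the error I mentioned:

    ```typescript
    interface Segment {
      start: number; // seconds, relative to the ad-free content
      end: number;
    }

    // Shift every crowdsourced segment forward by the length of the pre-roll
    // ads this particular viewer was served. Mid-roll ads would break this,
    // which is part of the "certain level of error".
    function applyOffset(segments: Segment[], contentStartSec: number): Segment[] {
      return segments.map((s) => ({
        start: s.start + contentStartSec,
        end: s.end + contentStartSec,
      }));
    }

    // Hypothetical stand-in: in a real extension this would inspect the buffer
    // or player state to find where the actual content begins. Here it just
    // returns a fixed pre-roll length so the sketch runs.
    async function detectContentStart(_video: HTMLVideoElement): Promise<number> {
      return 37.2; // pretend we detected 37.2 s of ads
    }

    async function installSkipper(video: HTMLVideoElement, crowdsourced: Segment[]) {
      const offset = await detectContentStart(video);
      const shifted = applyOffset(crowdsourced, offset);
      video.addEventListener("timeupdate", () => {
        for (const seg of shifted) {
          if (video.currentTime >= seg.start && video.currentTime < seg.end) {
            video.currentTime = seg.end; // jump past the unwanted segment
          }
        }
      });
    }
    ```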



  • No, “AI” isn’t a threat in itself. And treating generative models like LLMs as if they were general intelligence is dumb beyond words. However:

    It massively increases the reach and capacity of foreign (and sadly domestic) agents to influence people. All of those Russian trolls that pushed fascism, Brexit and the rise of the far right used to be humans. Now, using AI, a single human can do more than a whole army of people could in the past. Spreading misinformation has never been easier.

    Then there’s the whole matter of replacing people’s jobs with AI. No, the AI can’t actually do those jobs, not very well at least. But if management and the shareholders think they can increase profits using AI, they will certainly fire a lot of folks. And even if that ends up ruining the company down the line, that costs even more jobs and usually hits the people lower in the organization the hardest.

    Also, there’s a risk of people literally becoming less capable and knowledgeable because of AI. If you can have a digital assistant you carry around in your pocket at all times answer every question ever, why bother learning anything yourself? Why take the hard road when the easy road is available? People are at risk of losing information, knowledge and the ability to think for themselves because of this. And it can get so bad that when the AI just makes shit up, people take it as the truth. On a darker note, if the people behind the big AIs want something to stay unknown or be misrepresented, they can make that happen. And people would be so reliant on it, they wouldn’t even know it was happening. This is already an issue with social media; AI is much, much worse.

    Then there is the resource usage of AI. It makes the impact of cryptocurrency look like a rounding error. The energy and water usage is huge and growing every day. This has the potential to undo almost all of the climate wins we’ve had for the past two decades and push the Earth beyond the tipping point. What people seem to forget about climate change is that once things start getting bad, it’s already way too late, and the situation deteriorates at an exponential rate.

    That’s just a couple of big things I can think of off the top of my head. I’m sure there are many more issues (such as the death of the internet). But I think this is enough to call the current level of “AI” a threat to humanity.


  • And just about anything you try to do to actually read the description starts the movie, including doing nothing for 10 seconds (because you are fucking reading the description). Then you hit the back button, which just boots you back to the home screen, so you can start the selection process all over again.

    Helpfully, they do include IMDb scores when you’re browsing; sadly, all their stuff is total shite, so all the scores are low. But hey, at least they include them.

    The only way to watch anything on Prime is to make your selection in advance somewhere else and then search for it. If you type in the literal title of the movie, it will mostly be somewhere in the top 10 of the search results. This includes resuming something you were already watching, like their hit series Fallout. You would expect the resume-watching item to sit proudly at the top of the home page: OMG, you actually watched something of ours, we are so happy. Nope, it’s buried away on the 5th or 6th row and you need to scroll to get to it. It also happily resumes the previous episode at the credits, without the helpful next-episode button. And if you do manage to get to the next episode, you have to watch the first 5 seconds of the same ad you’ve seen a million times (because they only seem to have the one ad on their platform) before you can skip to the content.

    I don’t know what those guys are smoking, but their app is total garbage.

    Usually big corps collect all your personal information and tell you it’s a good thing because they use it to make useful recommendations. That way you at least get something out of it. At Amazon they just take all your personal data, and when it comes to recommendations they give you a big middle finger. I don’t know, here’s a romcom from 12 years ago, you like that stuff, right? Whatever, fuck off.



  • For those not in the know: the big issue with quantum computers is decoherence. This is (simply put) noise produced in the system, which interferes with or overwrites the calculation / signal we want to get out of the computer. A large part of this is thermal energy; all that energy bouncing around destroys any chance of reading out the signal. So the solution would be to cool the machine to within a fraction of a degree of absolute zero, which is hard but not impossible (some rough numbers on that at the end of this comment). Then there’s EM radiation coming from all around us (wifi and cellphones, but also things like radio); this is relatively easy to shield against, a bit of a pain, but still something that can be done. But then there are cosmic rays: there’s a real chance one hits with enough energy to disrupt the calculation within milliseconds. Milliseconds isn’t enough to do a useful calculation, so that’s a problem. Shielding against this is also pretty hard, since cosmic rays can carry a lot of energy.

    Then there’s the issue of measurement itself: measuring automatically means putting energy into the system. This makes it very hard (or maybe even impossible) to read out the results without destroying them, even if you get the damn thing stable enough to do a useful calculation.

    The more qubits in a system, the more powerful it becomes, and you need quite a number of them to do anything useful with the machine. But the more qubits there are, the bigger the decoherence issue becomes.

    This is why some people (myself included) don’t believe the current form of quantum computer we are researching can actually work in the real world. We need some kind of big breakthrough to create an actually useful quantum computer system. And with all the cooling and shielding requirements, we certainly won’t be using them at home any time soon.

    But of course, as with anything these days, the marketing departments and the media run with everything they can, spouting nonsense about quantum computers becoming mainstream any day now and all the amazing things they’ll do. That makes it hard to figure out what the actual level of development is right now. Plus, anybody working on this is putting in billions of dollars and sure as heck won’t share anything with anybody. So maybe someone has already made a breakthrough, but I doubt it.
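
    To put some rough numbers on the thermal part (back-of-envelope only; the ~5 GHz qubit frequency and ~15 mK fridge temperature are just typical assumed figures, not from any specific machine):

    ```typescript
    // Back-of-envelope: qubit energy splitting vs. thermal energy.
    const h = 6.626e-34;  // Planck constant, J·s
    const kB = 1.381e-23; // Boltzmann constant, J/K

    const qubitFreqHz = 5e9;   // assumed superconducting-qubit transition, ~5 GHz
    const fridgeTempK = 0.015; // assumed dilution-fridge temperature, ~15 mK

    const qubitEnergy = h * qubitFreqHz;    // ≈ 3.3e-24 J
    const thermalEnergy = kB * fridgeTempK; // ≈ 2.1e-25 J

    // The splitting has to sit well above kB*T, or thermal noise randomly
    // flips the qubit and the state decoheres.
    console.log((qubitEnergy / thermalEnergy).toFixed(1));    // ≈ 16 at 15 mK
    console.log((qubitEnergy / (kB * 300)).toExponential(1)); // ≈ 8e-4 at room temperature
    ```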


  • The biggest thing is the UI being completely different. I did use VS Code before, but only for my own projects, not stuff for work. So I knew how to use VS Code, but it’s still a major mental adjustment, with everything being in a different place and features and shortcuts working differently, etc.

    I really missed the Solution Explorer, which is probably my most-used tool at work. But thankfully there is an excellent plugin which provides a Solution Explorer in VS Code. It’s a bit different from what I’m used to, but it works just fine.

    Normally, for casual profiling, I’d use the VS built-in tools, only switching to something like DotMemory when really diving into optimization. This seems to be missing from VS Code. There’s probably a plugin to fix that, but I want to keep the number of plugins to a minimum, to avoid issues with plugins not being updated or having compatibility problems. So I’ve switched to a different workflow and now reach for tools like DotMemory sooner, instead of using the built-in stuff from VS.

    Resharper isn’t available for VS Code yet, but I don’t really mind. Some of my colleagues use it, but I prefer to do everything myself anyway and not rely on automated tooling for code.

    I miss the NuGet package manager. Everything can be done from the terminal, both in VS and VS Code, and that works the same in both. But the UI provided by the manager is so nice: it shows all the info you need and lets you do almost anything with two clicks. I’ve checked out some plugins that are supposed to help with this, but have found none as good as the VS package manager. I’m proficient enough with the terminal that it doesn’t really matter, but I still miss the manager and find myself manually checking different sources for things it used to show me at a glance. So I’ve taken an efficiency hit here, but I can still get the job done.

    Having everything in the terminal panel takes some getting used to; VS often launches separate windows for different kinds of output. This could probably be changed in the settings, but I think it’s fine.

    In VS the project is launched as a separate process and then VS attaches itself to that process for debugging and inspection. In VS Code it’s a subprocess of the main editor process. This has some implications for using third-party tools, for profiling for example, but I haven’t noticed anything going wrong. I think the way VS does it is better, but it’s probably fine? In theory an application could crash the whole VS Code process. But my code never crashes, so I should be fine, right?

    Running and debugging are different but fine, with the various profiles and debug flags managed from the UI and working perfectly. Publishing, however, is done only through the terminal, not the UI. Everything I need is available, but it took some figuring out how to do the publishing steps from the terminal. I’ve created a Confluence page for myself with all the commands, flags, etc. It took some time, but I think I’ve got everything figured out.

    For version management we already used a third-party tool, so luckily no changes there. I’ve had to add some new ignores, but other than that, nothing.

    Creating new projects is something I haven’t figured out yet. For work I only ever work in existing projects that have been around for ages, and I don’t know how easy it would be to create something new, with all the required files and parameters, that my colleagues can also use. The other day I wanted to quickly check something in an empty project and had to reach for VS again (for shame). I need to put in some time figuring this out in VS Code. It’s probably not complicated, but as I said, I wanted to check something quickly and didn’t have the time.

    There are probably a thousand little things I have changed or have to get used to. But these are the main ones.



  • It would really depend on the version of the MP3 encoder. The first encoders had some major issues with introducing artifacts; people probably listened to that and concluded all compressed music must be shit. Later encoders were much better, though I’d still think 128 kbps is too low and would be noticeable with some effort. I agree that from 192 kbps up, people can’t tell anymore.

    Does anybody use MP3 anymore? I don’t really know to be honest.