Okay good, thanks for confirming. I remember Kate feeling very nice to use during my studies, more responsive than VS Code or Eclipse. But I also had 16 GB of RAM, so I couldn’t be sure.
The lede by OP here contains this:
[…] addition to Xcode 16 […] is a feature called Predictive Code Completion. Unfortunately, if you bought into Apple’s claim that 8GB of unified memory was enough for base-model Apple silicon Macs, you won’t be able to use it
So either RecluseRamble meant that development with a feature like predictive code completion would work on 8 GB of RAM if you were using Linux or his comparison was shit.
The TechRadar article is terrible, the TechCrunch article is better, and the Flow website has some detail.
But overall I have to say I don’t believe them. You can’t just make threads independent if they logically have dependencies. Or just remove cache coherency latency by removing caches.
ARM is like Hotwheels, there are lots of cars, but you can’t make your own.
That’s not entirely true. There are companies that hold an ARM architecture license, like Apple or Cavium (since bought by Marvell). They are allowed to make their own Hotwheels using the spring system or the wheels or whatever.
Better yet you can configure gitignore globally for git.
I think you really need the project specific gitignore as well, to make sure any other contributor that joins by default has the same protections in place.
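For anyone looking for the actual knobs: a minimal sketch of both layers, assuming a `~/.gitignore_global` path as the global file (the path is my own example; any path works, and git also reads `~/.config/git/ignore` by default without any configuration):

```shell
# Point git at a global ignore file for personal, machine-wide rules
git config --global core.excludesFile ~/.gitignore_global

# Example entries: editor and OS noise that applies to every repo
printf '%s\n' '.DS_Store' '*.swp' '.idea/' >> ~/.gitignore_global

# Project-specific rules still belong in the repo's own .gitignore,
# so every contributor who joins gets the same protections by default.
```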
Wake me up when the AI travels to the network PoPs for me to replace broken parts, to install new transponder cards and new routers, to cable everything up correctly, to label it all and to photograph the result for documentation.
A language for noobs
That assertion surprises me; I find C easier to use than Rust.
Wow, thanks for the link. It seems things have gotten a lot more complicated with PoS. I didn’t even know about PBS. I haven’t been following along properly.
It’s a private MEV mempool
Are you sure there is such a thing? My understanding was that they just submit their sandwich transactions to the mempool with higher and lower gas respectively to achieve their desired priority ranking. Could be wrong though.
by fraudulently gaining access to pending transactions
That makes no sense to me. The mempool is public, everyone can see pending transactions.
I’m also using btrfs, but I originally wanted ZFS before seeing that it was only available through FUSE on my distro.
That’s why I even noticed ZFS was one of the features of Proxmox :)
They both use KVM in the end, so they are both Type 1 hypervisors.
Loading the KVM kernel module turns your kernel into a bare-metal hypervisor.
It’s really just Debian with more packages preinstalled
And a custom kernel with ZFS support
AV1 is based on VP9. Google made VP9 and it’s open source and royalty free.
Google just joined the Alliance for Open Media and gave their VP9 as a starter for AV1 instead of making some other successor called VP10 or something on their own.
During the development of AV1, Google contributed a lot to libaom, the reference implementation in C, but since this codebase grew together with the codec, its design is not the cleanest. A reference implementation also benefits more from being clear than from being fast.
That is why the later projects rav1e (an encoder in Rust, started by the Xiph.Org Foundation) and dav1d (a decoder in C, started by the VideoLAN non-profit) are the fastest these days: they started from a green field once the AV1 wire format was mostly fixed, and they focused on speed.
I think Google’s stance on the Alliance for Open Media makes sense overall. As part of the new media-streaming tech bubble, they (as well as Amazon, Facebook, even Microsoft) have an interest in getting an interoperable, royalty-free codec into the market and spreading it as far as possible, to avoid the rent-seeking behaviour of the old guard, the Moving Picture Experts Group (MPEG) from Hollywood, and similar groups. For every device that wants H.265 support, the OEM currently has to pay a license fee of around one dollar.
Disaggregated compute might be able to leverage this in the data center.
I don’t think people would fuck with amplifiers in a DC environment. Just using more fiber would be so much cheaper and easier to maintain. At least I haven’t heard of any current data centers even using conventional DWDM in the C-band.
At best Google was using BiDi optics, which I suppose is a minimal form of wavelength-division multiplexing.
over 90 channels of 400G each
You mean with 50 GHz channels in the C-band? That would put you at something like 42 Gbaud with DP-64QAM modulation. It probably works, but your reach is going to be pretty shitty, because your OSNR requirements will be high, so you can’t amplify often. I would think that 58 channels at 75 GHz, or even 44 channels at 100 GHz, are the more likely deployment scenarios.
On the other hand we aren’t struggling for spectrum yet, so I haven’t really had to make that call yet.
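The arithmetic behind those channel counts can be sketched quickly. The overhead figure and the ~4.4 THz C-band width are my own assumptions for the sake of the sanity check, not from the thread:

```python
# Rough sanity check of the "over 90 channels of 400G" claim.
# Assumptions (mine): ~25% FEC/framing overhead, and roughly 4.4 THz of
# usable conventional C-band spectrum (1528.38-1563.86 nm).

C_BAND_GHZ = 4400          # approx. usable C-band spectrum
NET_RATE_GBPS = 400        # net client rate per channel
OVERHEAD = 1.25            # FEC + framing overhead, assumed
BITS_PER_SYMBOL = 2 * 6    # dual polarisation x 64QAM (6 bits/symbol)

line_rate = NET_RATE_GBPS * OVERHEAD       # ~500 Gb/s on the wire
baud = line_rate / BITS_PER_SYMBOL         # ~42 Gbaud, barely fits 50 GHz
print(f"symbol rate: {baud:.1f} Gbaud")

for spacing in (50, 75, 100):
    print(f"{spacing} GHz grid -> {C_BAND_GHZ // spacing} channels")
```

With these numbers a plain C-band gives ~88 channels at 50 GHz, so "over 90" would already imply an extended C-band; the 58- and 75-GHz and 44- and 100-GHz pairings fall straight out of the same division.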
The zero dispersion wavelength of G.652.D fiber is between 1302 nm and 1322 nm, in the O-band.
Dispersion increases roughly linearly as you move away from the fiber’s zero-dispersion wavelength.
Typical current DWDM systems operate in the range of 1528.38 nm to 1563.86 nm, in the C-band.
Chromatic dispersion in the E-band and S-band is lower than at current DWDM wavelengths, because those bands sit between the O-band and the C-band.
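You can put numbers on that with the usual ITU-T approximation for G.652 fiber; the slope and zero-dispersion wavelength below are typical datasheet values I am assuming, not measured ones:

```python
# Chromatic dispersion of G.652 fiber from the common approximation
# D(l) = S0/4 * (l - l0**4 / l**3), with typical (assumed) values:
S0 = 0.092        # ps/(nm^2 * km), zero-dispersion slope
LAMBDA0 = 1310.0  # nm, zero-dispersion wavelength

def dispersion(wavelength_nm: float) -> float:
    """Dispersion in ps/(nm*km) at the given wavelength."""
    return S0 / 4 * (wavelength_nm - LAMBDA0**4 / wavelength_nm**3)

for band, wl in [("E", 1410), ("S", 1490), ("C", 1550), ("L", 1600)]:
    print(f"{band}-band {wl} nm: {dispersion(wl):5.1f} ps/(nm*km)")
```

The values climb monotonically from the E-band (~8 ps/(nm·km)) through the S-band (~14) to the C-band (~17) and L-band (~20), which is exactly the "closer to the zero-dispersion wavelength means less dispersion" point.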
TAT-8 went into productive service as the first transatlantic fiber-optic connection in 1988, so the lab work must already have happened in the 80s.
First of all some corrections:
By constructing a device called an optical processor, however, researchers could access the never-before-used E- and S-bands.
It’s called an amplifier, not a processor; the Aston University page has it correct. And at least the S-band has seen plenty of use in ordinary CWDM systems, just not amplified. We have at least 20 operational S-band links at 1470 and 1490 nm in our backbone right now. The E-band maybe less so, because the optical absorption peak of water in conventional fiber sits somewhere in the middle of it. You could use it with low-water-peak fiber, but for most people it hasn’t been attractive to try renting spans of only the correct fiber type.
the E-band, which sits adjacent to the C-band in the electromagnetic spectrum
No, it does not, the S-band is between them. It goes O-band, E-band, S-band, C-band, L-band, for “original” and “extended” on the left side, and “conventional”, flanked by “short” and “long” on the right side.
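The ordering is easy to mix up, so here it is as data, with the nominal ITU-T band edges (standard values, listed from memory):

```python
# Nominal ITU-T wavelength bands for single-mode fiber, in order of
# increasing wavelength (edges in nm):
BANDS = [
    ("O", "original",     1260, 1360),
    ("E", "extended",     1360, 1460),
    ("S", "short",        1460, 1530),
    ("C", "conventional", 1530, 1565),
    ("L", "long",         1565, 1625),
]

for code, name, lo, hi in BANDS:
    print(f"{code}-band ({name}): {lo}-{hi} nm")

# Note: the E-band is NOT adjacent to the C-band; the S-band sits between.
```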
Now to the actual meat: this is a cool materials-science achievement. However, in my professional opinion, it is not going to matter much for conventional terrestrial data networks. We already have the option of adding more spectrum to current C-band deployments in our networks, by using filters and additional L-band amplifiers. But I am not aware of any network around ours (AS559) that has actually done so. Because fundamentally the question is this:
Which is cheaper: lighting up more spectrum on the fiber pair you already have, or renting an additional fiber pair?
Currently, for us, there is enough spectrum still open in the C-band. And our hardware supplier is only just starting to introduce some L-band equipment. I’m currently leaning towards renting another pair being cheaper if we ever get there, but that really depends on where the big buying volume of the market will move.
Now let’s say people do end up extending to the L-band. Even then I’m not so sure that extending into the E- and S-bands as the next step is going to be equally attractive, for the simple reason that attenuation is much lower at the C-band and L-band wavelengths.
Maybe for subsea cables the economics shake out differently, but the way I understand it, their primary engineering constraint is getting enough power for amplifiers to the middle of the ocean, so more amps and higher attenuation are probably not their favourite thing to develop towards either. This is hearsay though; I am not very familiar with their world.
I generally do mention that I like my Fedora KDE, but I’m a little worried about SELinux. I have had two or three run-ins with it, and I think that would be hard for a noob to diagnose.