A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.
“Your honor, the evidence shows quite clearly that the defendant was holding a weapon with his third arm.”
RIP Charlie Robinson
If you’ve ever encountered an AI hallucinating stuff that just does not exist at all, you know how bad an idea AI-enhanced evidence actually is.
Everyone uses the word “hallucinate” when describing visual AI because it’s normie-friendly and cool-sounding, but the results are a product of math. Very complex math, yes, but computers aren’t taking drugs and randomly pooping out images, because computers can’t do anything truly random.
You know what else uses math? Basically every image modification algorithm, including resizing. I wonder how this judge would feel about viewing a 720p video on a 4k courtroom TV because “hallucination” takes place in that case too.
There is a huge difference between interpolating pixels and inserting whole objects into pictures.
Both insert pixels that didn’t exist before, so where do we draw the line of how much of that is acceptable?
Look at it this way: if you have an unreadable license plate because of low resolution, interpolating won’t make it readable (as long as we haven’t switched to a CSI universe). An AI, on the other hand, could just “invent” (I know, I know, normie speak in your eyes) a readable one.
You’ll draw the line yourself when you get your first speeding ticket for a car that wasn’t yours.
License plates are an interesting case, because with a known set of visual symbols (the known fonts used by approved plate issuers) you can often accurately deblur even very, very blurry text (not with AI algorithms, but by modeling the blur of the cameras and the unique blur gradients this produces for each letter). It does require a certain minimum pixel resolution of the letters to guarantee unambiguity, though.
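A rough sketch of that non-AI approach, for the curious: render every known plate-font character, blur each rendering with the estimated camera blur, and pick the one that best matches the observed crop. The `glyphs` dict and the Gaussian kernel below are stand-in assumptions; a real system would estimate the blur kernel from the actual camera and scene.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(size=9, sigma=3.0):
    # Stand-in for the real, camera-specific blur kernel.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def best_match(observed, glyphs, kernel):
    """Pick the plate-font character whose *blurred* rendering
    correlates best with the observed blurry crop.
    `glyphs` maps character -> 2D array, same shape as `observed`."""
    def norm(a):
        return (a - a.mean()) / (a.std() + 1e-9)
    scores = {}
    for ch, template in glyphs.items():
        blurred = fftconvolve(template, kernel, mode="same")
        scores[ch] = (norm(observed) * norm(blurred)).mean()
    return max(scores, key=scores.get)
```

Crucially, this can only ever answer “which known glyph fits best, and how well”; it cannot invent a glyph that isn’t in the set.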
Interesting example, because tickets issued by automated cameras aren’t enforced in most places in the US. You can safely ignore those tickets and the police won’t do anything about it because they know how faulty these systems are and most of the cameras are owned by private companies anyway.
“Readable” is a subjective matter of interpretation, so again, I’m confused about how exactly you’re distinguishing good & pure fictional pixels from bad & evil fictional pixels.
Whether tickets are enforced or not doesn’t change my argument, nor does it invalidate it.
You are acting stubborn and childish. Everything there was to say has been said. If you still think you are right, so be it, as you are not able or willing to understand. Let me be clear: I think you are trolling and I’m not in any mood to participate in this anymore.
Sorry, it’s just that I work in a field where making distinctions is based on math and/or logic, while you’re making a distinction between AI- and non-AI-based image interpolation based on opinion and subjective observation.
You can safely ignore those tickets and the police won’t do anything
Wait what? No.
It’s entirely possible that if you ignore the ticket, a human might review it and find there’s insufficient evidence. But if, for example, you ran a red light and they have a photo that shows your number plate and your face… then you don’t want to ignore that ticket. And they generally take multiple photos, so even if the one you received on the ticket doesn’t identify you, that doesn’t mean you’re safe.
When automated infringement systems were brand new the cameras were low quality / poorly installed / didn’t gather evidence necessary to win a court challenge… getting tickets overturned was so easy they didn’t even bother taking it to court. But it’s not that easy now, they have picked up their game and are continuing to improve the technology.
Also - if you claim someone else was driving your car, and then they prove in court that you were driving… congratulations, your slap on the wrist fine is now a much more serious matter.
I mean we “invent” pixels anyway for pretty much all digital photography based on Bayer filters.
But the answer is linear interpolation. That’s where we draw the line. We have to be able to point to a line of code and say where the data came from, rather than a giant blob of image data that could contain anything.
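That line of code is short enough to show. A minimal numpy sketch of bilinear interpolation for a grayscale image; every output value is an auditable weighted average of known input pixels, nothing more:

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Upscale a 2D grayscale array. Every output pixel is a weighted
    average of at most four input pixels; you can point at these
    lines and say exactly where each value came from."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)]
            + (1 - wy) * wx * img[np.ix_(y0, x1)]
            + wy * (1 - wx) * img[np.ix_(y1, x0)]
            + wy * wx * img[np.ix_(y1, x1)])
```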
What’s your bank account information? I’m either going to add or subtract a lot of money from it. Both alter your account balance so you should be fine with either right?
Has this argument ever worked on anyone who has ever touched a digital camera? “Resizing video is just like running it through AI to invent details that didn’t exist in the original image”?
“It uses math” isn’t the complaint and I’m pretty sure you know that.
normie-friendly
Whenever people say things like this, I wonder why that person thinks they’re so much better than everyone else.
Tangentially related: the more people support putting AI in all the things, the less they turn out to understand it.
I work in the field. I had to explain to a CIO that his beloved “ChatPPT” was just autocomplete. He became enraged. We implemented a 2015 chatbot instead, he got his bonus.
We have reached the winter of my discontent. Modern life is rubbish.
Normie, layman… as you’ve pointed out, it’s difficult to use these words without sounding condescending (which I didn’t mean to be). The media using words like “hallucinate” to describe linear algebra is necessary because most people just don’t know enough math to understand the fundamentals of deep learning - which is completely fine, people can’t know everything and everyone has their own specialties. But any time you simplify science so that it can be digestible by the masses, you lose critical information in the process, which can sometimes be harmfully misleading.
Or sometimes the colloquial term people have picked up is a simplified tool for getting the right point across.
Just because it’s guessing using math doesn’t mean it isn’t, in a sense, hallucinating the additional data. The data did not exist before, and the model willed it into existence, much like a hallucination. And the word makes it easy for people to quickly catch on that the output isn’t trustworthy, thanks to previous definitions and understanding of the word.
Part of language is finding the right words so that people can quickly understand topics, even if it means giving up nuance. But it should absolutely be about getting them to the right conclusion, even if in a simplified form, which doesn’t always happen when there is bias. I think this one works just fine.
It’s not just the media who use this term. According to this study, which I’ve had a very brief skim of, the term “hallucination” was used in the literature as early as 2000, and in Table 1 you can see hundreds of studies from various databases in which they then go on to analyse the use of “hallucination”.
It’s worth saying that this study is focused on showing how vague the term is, and how many different and conflicting definitions of “hallucination” there are in the literature, so I for sure agree it’s a confusing term. It’s just that it is used by researchers as well as laypeople.
LLMs (the models that “hallucinate” is most often used in conjunction with) are not Deep Learning, normie.
https://en.m.wikipedia.org/wiki/Large_language_model
LLMs are artificial neural networks
https://en.m.wikipedia.org/wiki/Neural_network_(machine_learning)
A network is typically called a deep neural network if it has at least 2 hidden layers
I’m not going to bother arguing with you but for anyone reading this: the poster above is making a bad faith semantic argument.
In the strictest technical terms AI, ML and Deep Learning are distinct, and they have specific applications.
This insufferable asshat is arguing that since they all use fuel, fire and air, they are all engines. Which isn’t wrong, but it’s also not the argument we are having.
@OP good day.
When you want to cite sources like me instead of making personal attacks, I’ll be here 🙂
computers aren’t taking drugs and randomly pooping out images
Sure, no drugs involved, but they are running a statistically sound pseudo-random number generator and using its output (along with non-random data) to generate the image.
The result is this: ask for the same image twice and you get two different images. Similar, but clearly not the same person; sisters or cousins perhaps… but nowhere near usable as evidence in court.
Tell me you don’t know shit about AI without telling me you don’t know shit. You can easily reproduce the exact same image by defining the starting seed and constraining the network to a specific sequence of operations.
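For anyone wondering what “defining the starting seed” looks like in practice, here is a sketch using the Hugging Face diffusers library (the model name is just an example, and exact bit-reproducibility additionally assumes the same hardware and software stack):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a portrait photo of a person"

# Same seed, same pipeline -> the same image, deterministically.
gen = torch.Generator("cuda").manual_seed(1234)
img_a = pipe(prompt, generator=gen).images[0]

gen = torch.Generator("cuda").manual_seed(1234)
img_b = pipe(prompt, generator=gen).images[0]  # identical to img_a

# No fixed seed -> the "sisters or cousins" effect described above.
img_c = pipe(prompt).images[0]
```

Of course, reproducibility only demonstrates determinism; a reproducible guess is still a guess.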
But if you don’t do that, then the ML engine doesn’t have the introspective capability to realize it failed to recreate an image.
And if you take your eyes out of their sockets you can no longer see. That’s a meaningless statement.
The point is that the AI ‘enhanced’ photos have nice clear details that are randomly produced, and thus should not be relied on. Are you suggesting that we can work around that problem by choosing a random seed manually? Do you think that solves the problem?
It’s not AI, it’s PISS. Plagiarized information synthesis software.
Just like us!
computers can’t do anything truly random.
Technically incorrect - computers can be supplied with sources of entropy, so while it’s true that they will produce the same output given identical inputs, it is in practice quite possible to ensure that they do not receive identical inputs if you don’t want them to.
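The distinction is easy to demonstrate: a seeded PRNG replays the same sequence forever, while the OS entropy pool, fed by external events, does not. A quick Python illustration:

```python
import os
import random

# Deterministic: same seed -> same "random" sequence, every run.
rng = random.Random(42)
print([rng.randint(0, 9) for _ in range(5)])  # same list on every run

# Fed with entropy: os.urandom pulls from the OS entropy pool
# (hardware events, interrupt timings, etc.), so this differs per run.
print(os.urandom(8).hex())
```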
IIRC there was a random number generator website where the machine was hooked up to a potato or some shit.
Bud, hallucinate is a perfect term for the shit AI creates, because it doesn’t understand reality, regardless of whether math is creating that hallucination or not.
No computer algorithm can accurately reconstruct data that was never there in the first place.
Ever.
This is an ironclad law, just like the speed of light and the acceleration of gravity. No new technology, no clever tricks, no buzzwords, no software will ever be able to do this.
Ever.
If the data was not there, anything created to fill it in is by its very nature not actually reality. This includes digital zoom, pixel interpolation, movement interpolation, and AI upscaling. It preemptively also includes any other future technology that aims to try the same thing, regardless of what it’s called.
One little correction, digital zoom is not something that belongs on that list. It’s essentially just cropping the image. That said, “enhanced” digital zoom I agree should be on that list.
Suddenly thinking of someone using moon photos from their phone as evidence.
Are you saying CSI lied to me?
Even CSI: Miami?
Horatio would NEVER 😎
Yeeeeeahhh he would. Sneaky one.
Digital zoom is just cropping and enlarging. You’re not actually changing any of the data. There may be enhancement applied to the enlarged image afterwards but that’s a separate process.
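That really is all plain digital zoom is. A toy sketch, assuming a numpy image array; note that every output pixel is a verbatim copy of an input pixel:

```python
import numpy as np

def digital_zoom(img, factor=2):
    """Crop the central 1/factor of the frame, then enlarge by pixel
    repetition. No new detail is created: every output pixel is a
    copy of an input pixel."""
    h, w = img.shape[:2]
    ch, cw = h // (2 * factor), w // (2 * factor)
    crop = img[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw]
    return np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)
```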
But the fact remains that digital zoom cannot create details that were invisible in the first place due to the distance from the camera to the subject. Modern implementations of digital zoom always use some manner of interpolation algorithm, even if it’s just a simple linear blur from one pixel to the next.
The problem is not in how digital zoom works; it’s in how people think it works, but doesn’t. A lot of people (i.e. [l]users, ordinary non-technical people) still labor under the impression that digital zoom somehow makes the picture “closer” to the subject and can enlarge or reveal details that were not detectable in the original photo, which is a notion we need to excise from people’s heads.
I 100% agree with your primary point. I still want to point out that a detail in a 4k picture that takes up a few pixels will likely be invisible to the naked eye unless you zoom. “Digital zoom” without interpolation is literally just that: enlarging the picture so that you can see details that take up too few pixels for you to discern them clearly at normal scaling.
Hold up. Digital zoom is, in all the cases I’m currently aware of, just cropping the available data. That’s not reconstruction, it’s just losing data.
Otherwise, yep, I’m with you there.
See this follow up:
https://lemmy.world/comment/9061929
Digital zoom makes the image bigger but without adding any detail (because it can’t). People somehow still think this will allow you to see small details that were not captured in the original image.
Also since companies are adding AI to everything, sometimes when you think you’re just doing a digital zoom you’re actually getting AI upscaling.
There was a court case not long ago where the prosecution wasn’t allowed to pinch-to-zoom evidence photos on an iPad for the jury, because the zoom algorithm creates new information that wasn’t there.
There’s a specific type of digital zoom which captures multiple frames and takes advantage of motion between frames (plus inertial sensor movement data) to interpolate to get higher detail. This is rather limited because you need a lot of sharp successive frames just to get a solid 2-3x resolution with minimal extra noise.
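This multi-frame trick is legitimate precisely because it only combines measurements that were actually captured. A naive shift-and-add sketch (using scikit-image for sub-pixel registration; real pipelines add sensor motion data and robust weighting):

```python
import numpy as np
from scipy.ndimage import shift, zoom
from skimage.registration import phase_cross_correlation

def multiframe_upscale(frames, factor=2):
    """Upsample each 2D frame, register it to the first frame at
    sub-pixel precision, then average. Extra detail comes from real
    sub-pixel offsets between frames, not from a generative model."""
    up = [zoom(f, factor, order=1) for f in frames]
    ref = up[0]
    acc = np.zeros_like(ref, dtype=float)
    for f in up:
        shifts, _, _ = phase_cross_correlation(ref, f, upsample_factor=10)
        acc += shift(f, shifts, order=1)
    return acc / len(up)
```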
If people don’t get the second law of thermodynamics, explaining this to them is useless. EDIT: … too.
It preemptively also includes any other future technology that aims to try the same thing
No it doesn’t. For example you can, with compute power, correct for distortions introduced by camera lenses/sensors/etc. and drastically increase image quality. This photo of Pluto, for instance, was taken from 7,800 miles away - click the link for a version of the image that hasn’t been resized/compressed by Lemmy:
The unprocessed image would look nothing at all like that. There’s a lot more data in an image than you can see with the naked eye, and algorithms can extract/highlight that data. That’s obviously not what a generative AI algorithm does, those should never be used, but there are other algorithms which are appropriate.
The reality is every modern photo is heavily processed - look at this example by a wedding photographer, even with a professional camera and excellent lighting the raw image on the left (where all the camera processing features are disabled) looks like garbage compared to exactly the same photo with software processing:
No computer algorithm can accurately reconstruct data that was never there in the first place.
What you are showing is (presumably) a modified visualisation of existing data. That is: given a photo with known lighting and lens distortion, we can use math to display the data (lighting, lens distortion, and input registered by the camera) in a plethora of different ways. You can invert all the colours if you like. It’s still the same underlying data. Modifying how strongly certain hues are shown, or correcting for known distortion, are just techniques to visualise the data in a clearer way.
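Lens-distortion correction is a good example of a transform that only re-maps measured pixels. A sketch with OpenCV; the camera matrix and distortion coefficients below are placeholders, whereas a real workflow would obtain them from calibration (e.g. cv2.calibrateCamera on checkerboard shots):

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")
h, w = img.shape[:2]

# Placeholder intrinsics; real values come from calibration.
K = np.array([[w, 0, w / 2],
              [0, w, h / 2],
              [0, 0, 1]], dtype=np.float64)
dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

# Re-maps existing pixels to undo known lens geometry; nothing invented.
undistorted = cv2.undistort(img, K, dist)
cv2.imwrite("frame_undistorted.png", undistorted)
```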
“Generative AI” is essentially just non-predictive extrapolation based on some data set, which is a completely different ball game, as you’re essentially making a blind guess at what could be there, based on an existing data set.
making a blind guess at what could be there, based on an existing data set.
Here’s your error. You yourself are contradicting the first part of your sentence with the last. The guess is not “blind”, because the prediction is based on an existing data set. Looking at a half-occluded circle with a model and then reconstructing the other half is not a “blind” guess; it is a highly probable extrapolation that can be very useful, because in most situations it will be the second half of the circle. With a certain probability, you have created new valuable data for further analysis.
But you are not reporting the underlying probability, just the guess. There is no way, then, to distinguish a bad guess from a good guess. Let’s take your example and place a fully occluded shape. Now the most probable guess could still be a full circle, but with a very low probability of being correct. Yet that guess is reported with the same confidence as your example. When you carry out this exercise for all extrapolations with full transparency of the underlying probabilities, you find yourself right back in the position the original commenter has taken. If the original data does not provide you with confidence in a particular result, the added extrapolations will not either.
And then circles get convictions. So even if the model did somehow start off completely unbiased, people are going to start feeding it data that weighs towards finding more circles, since a prosecution will be used as a ‘success’ to feed back into the model and ‘improve’ it.
Looking at a half circle and guessing that the “missing part” is a full circle is as much of a blind guess as you can get. You have exactly zero evidence that there is another half circle present. The missing part could be anything, from nothing to any shape that incorporates a half circle. And you would be guessing without any evidence whatsoever as to which of those things it is. That’s blind guessing.
Extrapolating into regions without prior data with a non-predictive model is blind guessing. If it wasn’t, the model would be predictive, which generative AI is not, is not intended to be, and has not been claimed to be.
None of your examples are creating new legitimate data out of whole cloth. They’re just making details that were already there visible to the naked eye. We’re not talking about taking a giant image that’s got too many pixels to fit on your display device in one go and just focusing on a specific portion of it. That’s not the same thing as attempting to interpolate missing image data. In that case the data was there to begin with; it just wasn’t visible due to limitations of the display or the viewer’s retinas.
The original grid of pixels is all of the meaningful data that will ever be extracted from any image (or video, for that matter).
Your wedding photographer’s picture actually throws away color data in the interest of contrast and to make it more appealing to the viewer. When you fiddle with the color channels like that and see all those troughs in the histogram that make it look like a comb? Yeah, all those gaps and spikes are actually original color/contrast data that is being lost. There is less data in the touched-up image than the original, technically, and if you are perverse and own a high bit depth display device (I do! I am typing this on a machine with a true 32-bit-per-pixel professional graphics workstation monitor.) you actually can stare at it and see the entirety of the detail captured in the raw image before the touchups. A viewer might not think it looks great, but how it looks is irrelevant from the standpoint of data capture.
They talked about algorithms used for correcting lens distortions with their first example. That is absolutely a valid use case and extracts new data by making certain assumptions with certain probabilities. Your newly created law of nature is just your own imagination and is not the prevalent understanding in the scientific community. No, quite the opposite, scientific practice runs exactly counter to your statements.
This is just smarter post-processing, like better noise cancellation, error correction, interpolation, etc.
But ML tools extrapolate rather than interpolate, which adds things that weren’t there.
offtopic: I like the picture on the left more. It feels more alive. Colder in color, but warmer in expression. Dunno how to say that. And I’ve been in a forest yesterday, so my perception is skewed.
In my first year of university, we had a fun project to make us get used to physics. One of the projects required filming someone throwing a ball upwards, and then using the footage to get the maximum height the ball reached, and doing some simple calculations to get the initial velocity of the ball (if I recall correctly).
One of the groups that chose that project was having a discussion on a problem they were facing: the ball was clearly moving upwards on one frame, but on the very next frame it was already moving downwards. You couldn’t get the exact apex from any specific frame.
So one of the guys, bless his heart, gave a suggestion: “what if we played the (already filmed) video in slow motion… And then we filmed the video… And we put that one in slow motion as well? Maybe do that a couple of times?”
A friend of mine was in that group and he still makes fun of that moment, to this day, over 10 years later. We were studying applied physics.
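For the record, the fix the group actually needed wasn’t slow motion at all: fit a parabola to the ball’s tracked positions and read the apex and initial velocity off the fit, no frame at the exact apex required. A sketch with made-up (but physically consistent, g ≈ 9.81 m/s²) numbers:

```python
import numpy as np

t = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6])          # frame times (s)
y = np.array([0.00, 0.25, 0.40, 0.46, 0.42, 0.27, 0.03])   # ball heights (m)

# y(t) = a*t^2 + b*t + c, with a ~ -g/2 for a thrown ball.
a, b, c = np.polyfit(t, y, 2)
v0 = b                        # initial vertical velocity (m/s)
h_max = c - b**2 / (4 * a)    # apex height, even if it fell between frames
print(f"v0 = {v0:.2f} m/s, apex = {h_max:.2f} m")
```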
That’s wrong. With a degree of certainty, you will always be able to say that this data was likely there. And because existence is all about probabilities, you can expect specific interpolations to be an accurate reconstruction of the data. We do it all the time with resolution upscaling, for example. But of course, from a certain lack of information onward, the predictions become less and less reliable.
No computer algorithm can accurately reconstruct data that was never there in the first place.
Okay, but what if we’ve got a computer program that can just kinda insert red eyes, joints, and plumes of chum smoke on all our suspects?
By your argument, nothing is ever real, so let’s all jump into a chasm.
There’s a grain of truth to that. Everything you see is filtered by the limitations of your eyes and the post-processing applied by your brain which you can’t turn off. That’s why you don’t see the blind spot on your retinas where your optic nerve joins your eyeball, for instance.
You can argue what objective reality is from within the limitations of human observation in the philosophy department, which is down the hall and to your left. That’s not what we’re talking about, here.
From a computer science standpoint you can absolutely mathematically prove the amount of data that is captured in an image and, like I said, no matter how hard you try you cannot add any more data to it that can be actually guaranteed or proven to reflect reality by blowing it up, interpolating it, or attempting to fill in patterns you (or your computer) think are there. That’s because you cannot prove, no matter how the question or its alleged solution are rephrased, that any details your algorithm adds are actually there in the real world except by taking a higher resolution/closer/better/wider spectrum image of the subject in question to compare. And at that point it’s rendered moot anyway, because you just took a higher res/closer/better/wider/etc. picture that contains the required detail, and the original (and its interpolation) are unnecessary.
You cannot technically prove it, that’s true, but that does not invalidate the interpolated or extrapolated data, because you will be able to have a certain degree of confidence in them, be able to judge their meaningfulness with a specific probability. And that’s enough, because you are never able to 100% prove something in physical sciences. Never. Even our most reliable observations, strongest theories and most accurate measurements all have a degree of uncertainty. Even the information and quantum theories you rest your argument on are unproven and unprovable by your standards, because you cannot get to 100% confidence. So, if you find that there’s enough evidence for the science you base your understanding of reality on, then rationally and by deductive reasoning you will have to accept that the prediction of a machine learning model that extrapolates some data where the probability of validity is just as great as it is for quantum physics must be equally true.
Entropy and information theory are very real; they’re embedded in quantum physics.
Unicorns are also real - we created them through our work in fiction.
Well that’s a bit close minded.
Perhaps at some point we will conquer quantum mechanics enough to be able to observe particles at every place and time they have ever and will ever exist. Do that with enough particles and you’ve got a de facto time machine, albeit a read-only one.
So many things we believe to be true today suggest this is not going to happen. The uncertainty principle, and the random nature of nuclear decay chief among them. The former prevents you gaining the kind of information you would need to do this, and the latter means that even if you could, it would not provide the kind of omniscience one might assume.
Limits of quantum observation aside, you also could never physically store the data of the position/momentum/state of every particle in any universe within that universe, because the particles that exist in the universe are the sum total of the materials with which we could ever use to build the data storage. You’ve got yourself a chicken-and-egg scenario where the egg is 93 billion light years wide, there.
Complexity relates nonlinearly to the amount of moving parts.
We might be able to spend an ungodly amount of energy to do that for one particle for an hour of its existence.
Being able to build a computer (in a broad sense) that can emulate, in a short time (less than a human life), processes involving more energy than was spent on its creation - that’s something else.
I think we need to STOP calling it “Artificial Intelligence”. IMHO that is a VERY misleading name. I do not consider guided pattern recognition to be intelligence.
A term created in order to vacuum up VC funding for spurious use cases.
It’s the new “4k”. Just buzzwords to get clicks.
My disappointment when I realised “4k” was only 2160p 😔
I can’t disagree with this… After basing the size off of the vertical pixel count, we’re now going to switch to the horizontal count to describe the resolution.
on the contrary! it’s a very old buzzword!
AI should be called machine learning. Much better. If I had my way it would be called “fancy curve fitting” henceforth.
Technically speaking AI is any effort on the part of machines to mimic living things. So computer vision for instance. This is distinct from ML and Deep Learning which use historical statistical data to train on and then forecast or simulate.
“machines mimicking living things” does not mean exclusively AI. Many scientific fields are trying to mimic living things.
AI is a very hazy concept imho as it’s difficult to even define when a system is intelligent - or when a human is.
That’s not what I said.
What I typed there is not my opinion.
This the technical, industry distinction between AI and things like ML and Neural networks.
“Mimicking living things” is obviously not exclusive to AI. It is exclusive to AI as compared to ML, for instance.
There is no technical, industry specification for what AI is. It’s solely and completely a marketing term. The best thing I’ve heard is that you know it’s ML if the file extension is cpp or py, and you know it’s AI if the extension is pdf or ppt.
I don’t see how “AI” is mimicking living things while neural networks aren’t, given that neural networks are based on neurons, the living things in your head.
Incorrect. 15 years in the industry here. Good day.
Optical Character Recognition used to be firmly in the realm of AI until it became so common that even the post office uses it. Nowadays, OCR is so common that instead of being proper AI, it’s just another mundane application of a neural network. I guess eventually Large Language Models will be outside the scope of AI.
My Conscious Cognitive Correlator is the real shit.
What is the definition of intelligence? Does it require sentience? Can a data set be intelligently compiled into interesting results without human interaction? Yes the term AI is stretched a bit thin but I believe it has enough substance to qualify.
I do not consider guided pattern recognition to be intelligence.
That’s a you problem, this debate happened 50 years ago and we decided Intelligence is the right word.
Good thing there have been no significant changes to technology, psychology, philosophy, or society in the past 50 years.
Fallacious reasoning.
You forget that we can change these definitions any time we see fit.
You cannot, because you are not a scientist and judging from your statements, you do not know what you’re talking about.
It seems you are sadly stuck in your own thought patterns.
It does not take a scientist to change things. It takes a society to change definitions.
We could… if it made any sense to do so, which it doesn’t.
How is guided pattern recognition different from imagination (and therefore intelligence) though?
There’s a lot of other layers in brains that’s missing in machine learning. These models don’t form world models and don’t have an understanding of facts, and they have no means of ensuring consistency, to start with.

I mean, if we consider just the reconstruction process used in digital photos, it feels like current AI models are already very accurate and won’t be improved by much even if we made them closer to real “intelligence”.
The point is that reconstruction itself can’t reliably produce missing details, not that a “properly intelligent” mind will be any better at it than current ai.
They absolutely do contain a model of the universe which their answers must conform to. When an LLM hallucinates, it is creating a new answer which fits its internal model.
Statistical associations are not equivalent to a world model, especially because they’re neither deterministic nor do they even try to prevent giving conflicting answers. It models only the use of language.
It models only the use of language
This phrase, so casually deployed, is doing some seriously heavy lifting. Language is by no means a trivial thing for a computer to meaningfully interpret, and the fact that LLMs do it so well is way more impressive than a casual observer might think.
If you look at earlier procedural attempts to interpret language programmatically, you will see that time and again, the developers get stopped in their tracks because in order to understand a sentence, you need to understand the universe - or at the least a particular corner of it. For example, given the sentence “The stolen painting was found by a tree”, you need to know what a tree is in order to interpret this correctly.
You can’t really use language *unless* you have a model of the universe.
But it doesn’t model the actual universe, it models rumor mills
Today’s LLM is the versificator machine of 1984. It cares not for truth; it cares for distracting you.
They are remarkably useful. Of course there are dangers relating to how they are used, but sticking your head in the sand and pretending they are useless accomplishes nothing.
Your comment is a good example of why these tools have no place in the courtroom: the things you describe are imagination.
They’re image generation tools that will generate a new, unrelated image that happens to look similar to the source image. They don’t reconstruct anything and they have no understanding of what the image contains. All they know is which colors the pixels in the output will probably have, given the pixels in the input.
It’s no different from giving a description of a scene to an author, asking them to come up with any event that might have happened in such a location and then trying to use the resulting short story to convict someone.
They don’t reconstruct anything and they have no understanding of what the image contains.
With enough training they, in fact, will have some understanding. But that still leaves us with that “enhance meme” problem, a.k.a. the limited resolution of the original data. There are no means to discover what exactly was hidden between visible pixels, only to approximate it. So yes, you are correct, you just described it a bit differently.
they, in fact, will have some understanding
These models have spontaneously acquired a concept of things like perspective, scale and lighting, which you can argue is already an understanding of 3D space.
What they do not have (and IMO won’t ever have) is consciousness. The fact we have created machines that have understanding of the universe without consciousness is very interesting to me. It’s very illuminating on the subject of what consciousness is, by providing a new example of what it is not.
I think AI doesn’t need consciousness to be able to say what is on the picture, or to guess what else could specific details contain.
You, and humans in general, are also just sophisticated pattern recognition and matching machines. If neural networks are not intelligent, then you are not intelligent.
This may be the dumbest statement I have yet seen on this platform. That’s like equating a virus with a human by saying both things replicate themselves so they must be similar.
You can say what you like, but we have absolutely zero true and full understanding of what human intelligence actually is or how it works.
“AI”, or whatever you want to call it, is not at all similar.
I do not consider guided pattern recognition to be intelligence.
Humanity has entered the chat
Seriously though, what name would you suggest?
Maybe guided pattern recognition (GPR).
Or Bob.
Calling it Bob is not going to help discourage people from attributing intelligence. They’ll start wishing “Bob” a happy birthday.
Do not personify the machine.
Maybe Boob then.
I agree. It’s restricted intelligence (RI), at best, and even that can be argued against.
How long until we get upscalers of various sorts built into tech that shouldn’t have them? For bandwidth reduction, for storage compression, or for cost savings. Can we trust what we capture with a digital camera, when companies replace a low-quality image of the moon with a professionally taken picture at capture time? Can sport replays be trusted when the ball is upscaled inside the judges’ screens? Cheap security cams with “enhanced night vision” might get somebody jailed.
I love the AI tech. But its future worries me.
Dehance! [Click click click.]
Just print the damn thing!
That scene gets replayed in my mind three or four times a month.
It will run wild for the foreseeable future, until the masses stop falling for it in gimmicks; then it will be reserved for the actual use cases where it’s beneficial, once the bullshit AI stops making money.
Lol, you think the masses will stop falling for it in gimmicks? Just look at the state of the world.
AI-based video codecs are on the way. This isn’t necessarily a bad thing because it could be designed to be lossless or at least less lossy than modern codecs. But compression artifacts will likely be harder to identify as such. That’s a good thing for film and TV, but a bad thing for, say, security cameras.
The devil’s in the details and “AI” is way too broad a term. There are a lot of ways this could be implemented.
I don’t think loss is what people are worried about, really - more injecting details that fit the training data but don’t exist in the source.
Given the hoopla Hollywood and directors made about frame-interpolation, do you think generated frames will be any better/more popular?
Han shot first.
Over Greedo’s dead body.
Correct!
In the context of video encoding, any manufactured/hallucinated detail would count as “loss”. Loss is anything that’s not in the original source. The loss you see in e.g. MPEG4 video usually looks like squiggly lines, blocky noise, or smearing. But if an AI encoder inserts a bear on a tricycle in the background, that would also be a lossy compression artifact in context.
As for frame interpolation, it could definitely be better, because the current algorithms out there are not good. It will not likely be more popular, since this is generally viewed as an artistic matter rather than a technical matter. For example, a lot of people hated the high frame rate in the Hobbit films despite the fact that it was a naturally high frame rate, filmed with high-frame-rate cameras. It was not the product of a kind-of-shitty algorithm applied after the fact.
I don’t think AI codecs will be anything revolutionary. There are plenty of lossless codecs already, but if you want more detail, you’ll need a better physical sensor, and I doubt there’s anything that can be done to get around that (that actually represents what exists, not a hallucination).
It’s an interesting thought experiment, but we don’t actually see what really exists, our brains essentially are AI vision, filling in things we don’t actually perceive. Examples are movement while we’re blinking, objects and colors in our peripheral vision, the state of objects when our eyes dart around, etc.
The difference is we can’t go back frame by frame and analyze these “hallucinations” since they’re not recorded. I think AI enhanced video will actually bring us closer to what humans see even if some of the data doesn’t “exist”, but the article is correct that it should never be used as evidence.
Nvidia’s RTX video upscaling is trying to be just that: DLSS, but run on a video stream instead of a game running on your own hardware. They’ve posited the idea of game streaming becoming lower bitrate just so you can upscale it locally, which to me sounds like complete garbage.
There are plenty of lossless codecs already
It remains to be seen, of course, but I expect to be able to get lossless (or nearly-lossless) video at a much lower bitrate, at the expense of a much larger and more compute/memory-intensive codec.
The way I see it working is that the codec would include a general-purpose model, and video files would be encoded for that model + a file-level plugin model (like a LoRA) that’s fitted for that specific video.
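A toy sketch of that architecture, to make the idea concrete: a frozen general-purpose decoder shipped with the codec, plus a tiny low-rank adapter fitted per video and stored alongside the bitstream. This is purely illustrative, not a real codec design:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Generic decoder shipped with the codec (weights frozen)."""
    def __init__(self, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 32, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, z):
        return self.net(z)

class VideoAdapter(nn.Module):
    """LoRA-style low-rank tweak, fitted per video and stored in the
    file header -- a rough sketch of the idea, not a real format."""
    def __init__(self, ch=3, rank=4):
        super().__init__()
        self.down = nn.Conv2d(ch, rank, 1, bias=False)
        self.up = nn.Conv2d(rank, ch, 1, bias=False)
        nn.init.zeros_(self.up.weight)  # starts as a no-op

    def forward(self, x):
        return x + self.up(self.down(x))

decoder = Decoder().eval()
for p in decoder.parameters():
    p.requires_grad_(False)          # the general model stays fixed
adapter = VideoAdapter()             # only this part is fitted per video

z = torch.randn(1, 64, 90, 160)     # toy latent for one frame
frame = adapter(decoder(z))         # 3 x 360 x 640 reconstruction
```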
I think there’s a possibility for long format video of stable scenes to use ML for higher compression ratios by deriving a video specific model of the objects in the frame and then describing their movements (essentially reducing the actual frames to wire frame models instead of image frames, then painting them in from the model).
But that’s a very specific thing that probably only works well for certain types of video content (think animated stuff).
AI-based video codecs are on the way.
Arguably already here.
Look at this description of Samsung’s mobile AI for their S24 phone and newer tablets:
AI-powered image and video editing
Galaxy AI also features various image and video editing features. If you have an image that is not level (horizontally or vertically) with respect to the object, scene, or subject, you can correct its angle without losing other parts of the image. The blank parts of that angle-corrected image are filled with Generative AI-powered content. The image editor tries to fill in the blank parts of the image with AI-generated content that suits the best. You can also erase objects or subjects in an image. Another feature lets you select an object/subject in an image and change its position, angle, or size.
It can also turn normal videos into slow-motion videos. While a video is playing, you need to hold the screen for the duration of the video that you want to be converted into slow-motion, and AI will generate frames and insert them between real frames to create a slow-motion effect.
Not all of those are the same thing. AI upscaling for compression in online video may not be any worse than “dumb” compression in terms of loss of data or detail, but you don’t want to treat a simple upscale of an image as a photographic image for evidence in a trial. Sport replays and hawkeye technology doesn’t really rely on upscaling, we have ways to track things in an enclosed volume very accurately now that are demonstrably more precise than a human ref looking at them. Whether that’s better or worse for the game’s pace and excitement is a different question.
The thing is, ML tech isn’t a single thing. The tech itself can be used very rigorously. Pretty much every scientific study you see these days uses ML to compile or process images or data. That’s not a problem if done correctly. The issue is everybody assumes “generative AI” chatbots, upscalers and image processors are what ML is, and people keep trying to apply those things directly in the dumbest possible way, thinking it is basically magic.
I’m not particularly afraid of “AI tech”, but I sure am increasingly annoyed at the stupidity and greed of some of the people peddling it, criticising it and using it.
Cheap security cams with “enhanced night vision” might get somebody jailed.
Might? We’ve been arresting the wrong people based on shitty facial recognition for at least 5 years now. This article has examples from 2019.
On one hand, the potential of this type of technology is impressive. OTOH, the failures are super disturbing.
It’s already being used for things it shouldn’t be.
Probably not far. NVidia has had machine learning enhanced upscaling of video games for years at this point, and now they’ve also implemented similar tech but for frame interpolation. The rendered output might be 720p at 20FPS but will be presented at 1080p 60FPS.
It’s not a stretch to assume you could apply similar tech elsewhere. Non-ML enhanced, yet still decently sophisticated frame interpolation and upscaling has been around for ages.
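The non-ML baseline really is simple, which is part of why it’s been around for ages. A toy blend-based interpolator (real systems use motion estimation rather than plain cross-fades):

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n=1):
    """Naive frame interpolation: each generated frame is a plain
    weighted blend of two real frames. DLSS-style interpolation is
    far smarter (motion vectors, learned priors), but the principle
    of manufacturing in-between frames is the same."""
    out = []
    for i in range(1, n + 1):
        t = i / (n + 1)
        out.append((1 - t) * frame_a + t * frame_b)
    return out
```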
Nvidia’s game upscaling has access to game data and also training data generated by gameplay, to make footage that is appealing to the gamer’s eye and not necessarily accurate. Security (or other) cameras don’t have access to this extra data, and the use case for video in courts is to be accurate, not pleasing.
Your comparison is apples to oranges.
No, I think you misunderstood what I’m trying to say. We already have tech that uses machine learning to upscale stuff in real time, but I’m not saying it’s accurate enough for things like court videos. I don’t think we’ll ever get to a point where it can be accurate as evidence, because by the very nature of the tech it’s making up detail, not enhancing it. You can’t enhance what isn’t there. It’s not turning nothing into accurate data; it’s guessing based on input and what it’s been trained on.
Prime example right here, this is the objectively best version of Alice in Wonderland, produced by BBC in 1999, and released on VHS. As far as I can tell there was never a high quality version available. Someone used machine learning to upscale it, and overall it looks great, but there are scenes (such as the one that’s linked) where you can clearly see the flaws. Tina Majorino has no face, because in the original data, there wasn’t enough detail to discern a face.
Now, we could obviously train a model to recognise “criminal activity”, like stabbing, shooting, what have you. Then, however, you end up with models that mistake one thing for another, like scratching your temple turning into driving while on the phone. And if, instead of detecting something, the model’s job is to fill in missing data, we have a recipe for disaster.
Any evidence that has had machine learning involved should be treated with at least as much scrutiny as a forensic sketch, which, while useful in investigations, generally doesn’t carry much weight as evidence. That said, a forensic sketch is created through collaboration between an artist and a witness, so there is intent behind it. Machine-generated artwork lacks intent; you can tweak the parameters until it generates roughly what you want, but it’s honestly better to just hire an artist and get exactly what you want.
Security (or other) cameras don’t have access to this extra data
Samsung’s AI on their latest phones and tablets does EXACTLY what @[email protected] is describing. It will literally create data including parts of scenes and even full frames, in order to make video look better.
So while a true security camera may not be able to do it there’s now widely available consumer products that WILL. You’re also forgetting that even Security Camera footage can be processed through software so footage from those isn’t immune to AI fiddling either.
Would that not fall under the “enhanced” evidence that is banned by this court decision?
The real question is could we ever really trust photographs before AI? Image manipulation has been a thing long before the digital camera and Photoshop. What makes these images we see actually real? Cameras have been miscapturing image data for as long as they have existed. Do the light levels in a photo match what was actually there according to the human eye? Usually not. What makes a photo real?
They can. But there’s a reasonable level of trust that a security feed has been kept secure and not tampered with by the owner if he doesn’t have a motive. But what if not even the owner knows that somewhere in their tech chain (maybe the camera, maybe the screen, maybe the storage device, maybe all 3) the image was “improved”? No evidence of tampering. We’ll have the police blaming Count Rugen for a bank robbery he didn’t do, but the camera clearly shows a six-fingered man!
Jesus Christ, does this even need to be pointed out!??
Unfortunately it does need pointing out. Back when I was in college, professors would need to repeatedly tell their students that real-world forensics don’t work like they do on NCIS. I’m not sure how much things may or may not have changed since then, but with American literacy levels being what they are, I don’t suppose things have changed that much.
you might be referring to the CSI Effect
It’s certainly similar, in that CSI played a role in forming unrealistic expectations in students’ minds. But rather than expecting more physical evidence in order to make a prosecution, the students expected magic to happen with computers and lab work (often faster than physically possible).
AI enhancement is not uncovering hidden visual data; rather, it generates that information based on previously existing training data and shoehorns it in. It certainly could be useful, but it is not real evidence.
ENHANCE !
Yes. When people were in full conspiracy mode on Twitter over Kate Middleton, someone took that grainy pic of her in a car and used AI to “enhance it”, to declare it wasn’t her because her mole was gone. It got so much traction people thought the AI-fixed-up pic WAS her.
Don’t forget people thinking that scanlines in a news broadcast over Obama’s suit meant that Obama was a HOLOGRAM and ACTUALLY A LIZARD PERSON.
I met a student at university last week at lunch who told me he is stressed out about some homework assignment. He told me that he needs to write a report with a minimum number of words, so he pasted the text into ChatGPT and asked it about the number of words in the text.
I told him that every common text editor has a word count built in, and that ChatGPT is probably not good at counting words (even though it pretends to be good at it).
Turns out that his report was already waaaaay above the minimum word count and even needed to be shortened.
So much for the understanding of AI in the general population.
I’m studying at a technical university.
I’m studying at a technical university.
AI is gonna fuck up an entire generation or more.
The layman is very stupid. They hear all the fantastical shit AI can do and they start to assume it’s almighty. That’s how you wind up with those lawyers that tried using ChatGPT to write up a legal brief that was full of bullshit and didn’t even bother to verify if it was accurate.
They don’t understand it; they only know that the results look good.
The layman is very stupid. They hear all the fantastical shit AI can do and they start to assume it’s almighty. That’s how you wind up with those lawyers that tried using ChatGPT to write up a legal brief that was full of bullshit and didn’t even bother to verify if it was accurate.
Especially since it gets conflated with pop culture. Someone who hears that an AI app can “enhance” an image might think it works like something out of CSI using technosmarts, rather than just making stuff up out of whole cloth.
There’s people who still believe in astrology. So, yes.
And people who believe the Earth is flat, and that Bigfoot and the Loch Ness Monster exist, and that there are reptilians replacing the British royal family…
People are very good at deluding themselves into all kinds of bullshit. In fact, I posit that they’re better even at it than learning the facts or comprehending empirical reality.
Good god, there are still people who believe in phrenology!
Of course, not everyone is technology literate enough to understand how it works.
That should be the default assumption: that something should be explained so that others understand it and can make better, informed decisions.
It’s not only that everyone isn’t technologically literate enough to understand the limits of this technology - the AI companies are actively over-inflating their capabilities in order to attract investors. When the most accessible information about the topic is designed to get non-technically proficient investors on board with your company, of course the general public is going to get an overblown idea of what the technology can do.
It’s not actually worse than eyewitness testimony.
This is not an endorsement of AI, just pointing out that truth has no place in a courtroom, and refusing to lie will get you locked in a cafe.
Too good, not fixing it.
I’d love to see the “training data” for this model, but I can already predict it will be 99.999% footage of minorities labelled ‘criminal’.
And cops going “Aha! Even AI thinks minorities are committing all the crime”!
Tell me you didn’t read the article without telling me you didn’t read the article
Imagine a prosecution or law enforcement bureau that has trained an AI from scratch on specific stimuli to enhance and clarify grainy images. Even if they all were totally on the up-and-up (they aren’t, ACAB), training a generative AI or similar on pictures of guns, drugs, masks, etc for years will lead to internal bias. And since AI makers pretend you can’t decipher the logic (I’ve literally seen compositional/generative AI that shows its work), they’ll never realize what it’s actually doing.
So then you get innocent CCTV footage this AI “clarifies” and pattern-matches every dark blurb into a gun. Black iPhone? Maybe a pistol. Black umbrella folded up at a weird angle? Clearly a rifle. And so on. I’m sure everyone else can think of far more frightening ideas like auto-completing a face based on previously searched ones or just plain-old institutional racism bias.
just plain-old institutional racism bias
Every crime attributed to this one black guy in our training data.
According to the evidence, the defendant clearly committed the crime with all 17 of his fingers. His lack of remorse is obvious by the fact that he’s clearly smiling wider than his own face.
clickity clackity
“ENHANCE”
AI enhanced = made up.
It’s incredibly obvious when you call the current generation of AI by its full name, generative AI. It’s creating data, that’s what it’s generating.
Everything that is labeled “AI” is made up. It’s all just statistically probable guessing, made by a machine that doesn’t know what it is doing.
Society = made up, so I’m not sure what your argument is.
My argument is that a video camera doesn’t make up video, an ai does.
video camera doesn’t make up video, an ai does.
What’s that even supposed to mean? Do you even know how a camera works? What about an AI?
Yes, I do. Cameras work by detecting light using a charged coupled device or an active pixel sensor (CMOS). Cameras essentially take a series of pictures, which makes a video. They can have camera or lens artifacts (like rolling shutter illusion or lens flare) or compression artifacts (like DCT blocks) depending on how they save the video stream, but they don’t make up data.
Generative AI video upscaling works by essentially guessing (generating) what would be there if the frame were larger. I’m using “guessing” colloquially, since it doesn’t have agency to make a guess. It uses a model that has been trained on real data. What it can’t do is show you what was actually there, just its best guess using its diffusion model. It is literally making up data. Like, that’s not an analogy, it actually is making up data.
Ok, you clearly have no fucking idea what you’re talking about. No, reading a few terms on Wikipedia doesn’t count as “knowing”.
CMOS isn’t the only transducer for cameras - in fact, no one would start the explanation there. Generative AI doesn’t have to be based on diffusion. You’re clearly just repeating words you’ve seen used elsewhere - you are the AI.

Yes, I also mentioned CCDs. Charge Coupled Device is what that stands for. You can tell I didn’t look it up, because I originally called it a “charged coupled device” and not a “charge coupled device”. My bad, I should have checked Wikipedia.
Can you point me to a generative AI that doesn’t make up data? GANs are still generative, and generative AIs make up data.
The fact that it made it that far is really scary.
I’m starting to think that yes, we are going to have some new middle ages before going on with all that “per aspera ad astra” space colonization stuff.
Aren’t we already in a kind of dark age?
People denying science, people scared of diseases and vaccination, people using anything AI or blockchain as if it were magic, people defending power-hungry, all-promising dictators, people divided over and calling the other side barbaric. And of course, wars based on religion.
Seems to me we’re already in the dark.
Aren’t we already in a kind of dark age?
A bit over 150 years ago, slavery was legal (and commonplace) in the United States.
Sure, lots of shitty stuff in the world today… but you don’t have to go far back to a time when a sheriff with zero evidence, relying on unverified accusations and hearsay, would’ve put up a “wanted dead or alive” poster with a drawing of the guy’s face created by an artist who had never even laid eyes on the alleged murderer.
Well, the dark ages came after late antiquity, where slavery was normal. And it took a few centuries for slavery to die out in European societies, though serfdom remained, which wasn’t too different. And then serfdom in England formally existed even into the 19th century. I’m not talking about Russia, of course, where it played the same role as slavery in the US South.
EDIT: What I meant is that this is more about knowledge and civilization, not good and bad. Also, 150 years is too far back; compared to 25 years ago, I think things are worse in many regards.
Oh for sure. We are already in a period that will have some fancy name in future anthropology studies but the question is how far down do we still have to go before we see any light.
Aren’t we already in a kind of dark age?
In the sense that actually making the things that form the backbone of our civilization has become a process, and a body of knowledge, heavily centralized and removed from most people living their daily lives? Yes.
Via many small changes we’ve come to a situation where everybody uses Intel and AMD or other very complex hardware, directly or through various mechanisms, which requires infrastructure and knowledge more expensive than most nation-states can afford to produce.
People can no longer make a computer usable for our daily processes by soldering something together out of TTL logic and parts bought in a radio store, and we could perform many tasks with such computers, if not for the network effect. We depend on something even smart people can’t build on their own, period.
It’s like tanks or airplanes or ICBMs.
A decent automatic rifle or grenade or a mortar can well be made in a workshop. Frankly even an alternative to a piece of 50s field artillery can be, and the ammunition.
What we depend on in daily civilian computing is as complex as ICBMs, and this knowledge is even more sparsely distributed in the society than the knowledge of how ICBMs work.
And also, of course, the tendency for things to be less repairable (remember the time when everything came with manuals and schematics?) and for people to treat them like magic.
This is both reminiscent of Asimov’s Foundation (only there Imperial machines were massive, while Foundation’s machines were well miniaturized, but the social mechanisms of the Imperial decay were described similarly) and just psychologically unsettling.
Why not make it a fully AI court, if they were going to go that way? It would save so much time and money.
Of course it wouldn’t be very just, but then regular courts aren’t either.
In the same vein Bloomberg just did a great study on ChatGPT 3.5 ranking resumes and it had an extremely noticeable bias of ranking black names lower than the average and Asian/white names far higher despite similar qualifications.
Archive source: https://archive.is/MrZIm
Perfect, a drop-in replacement!
You forgot the /s
Be careful what you wish for.
Honestly, an open-source auditable AI Judge/Justice would be preferable to Thomas, Alito, Gorsuch and Barrett any day.
Me, testifying to the AI judge: “Your honor I am I am I am I am I am I am I am I am I am”
AI Judge: “You are you are you are you are you are you…”
Me: Escapes from courthouse while the LLM is stuck in a loop