
Of course, but what are the odds that the algorithm just lucked into the correct book title and other cover text? It doesn't have a dictionary or semantic network.
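Back-of-envelope, with my own numbers rather than anything from the parent: treat a 15-character title as blind draws from a 26-letter alphabet, and the odds of a pure fluke are vanishing.

    # Odds of "lucking into" a correct 15-character title if each
    # character were a blind draw from 26 letters. Illustrative only;
    # real enhancement priors are far from uniform.
    chance = 1 / 26 ** 15
    print(f"{chance:.1e}")  # ~6.0e-22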

You are right that the raw sensor data should always be preserved. But sticking with the license plate example, you could challenge a picture of a single car with a visible license plate far away in a wooded area, but it would be hard to refute a picture of the same car in a parking lot surrounded by other (non-suspect) vehicles whose presence there at the same time could be independently verified. In other words, if I can show that it accurately read the license plates of 9 other cars, the chances that it got yours wrong go way down.
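A rough sketch of that intuition, under the simplifying assumption that every read is an independent trial with one shared error rate: starting from a uniform Beta(1, 1) prior, k errors in n verified reads give a posterior mean error rate of (k + 1) / (n + 2), Laplace's rule of succession.

    # Sketch of the "9 verified plates" argument, assuming independent
    # reads with a shared error rate and a uniform Beta(1, 1) prior.
    def posterior_error_rate(n_verified, n_errors=0):
        # Posterior mean error rate: (k + 1) / (n + 2)
        return (n_errors + 1) / (n_verified + 2)

    print(posterior_error_rate(0))    # no validation: 0.5
    print(posterior_error_rate(9))    # 9 correct reads: ~0.09
    print(posterior_error_rate(999))  # 999 correct reads: ~0.001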

That's assuming a single photo taken in the dark by an investigator. With a fixed security camera you would have an even larger basis for comparison: a population of hundreds or thousands of license plates against which to rate it. I predict that before long we'll see preemptive certification, at either the manufacturing or installation stage, warranting the reliability of a device's image pipeline out to a certain distance.
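In practice such a certification could amount to an upper confidence bound on the error rate over a validation population. A minimal sketch, where the Wilson interval and the 0.1% acceptance threshold are my assumptions, not any existing standard:

    import math

    # Hypothetical certification check: k misreads out of n validation
    # plates at the rated distance. Pass only if an upper confidence
    # bound on the true error rate clears the (assumed) 0.1% threshold.
    def wilson_upper_bound(errors, n, z=1.96):
        p = errors / n
        center = p + z * z / (2 * n)
        margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
        return (center + margin) / (1 + z * z / n)

    def certify(errors, n, threshold=0.001):
        return wilson_upper_bound(errors, n) <= threshold

    print(certify(0, 5000))  # True: bound ~0.0008
    print(certify(3, 5000))  # False: bound ~0.0018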



Xerox used to replace numbers in documents while copying:

https://www.theregister.co.uk/2013/08/06/xerox_copier_flaw_m...

License plates are an ideal breeding ground for false enhancement owing to their standardised appearance; an ML algo trained on lots of examples might, without due care, learn to replace an unreadable plate wholesale with a well-known texture.
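A toy illustration of that failure mode, in the spirit of the Xerox bug above; the 5x3 glyph bitmaps are invented for the example and stand in for a learned texture dictionary:

    import numpy as np

    # Toy symbol-substitution "compressor": replace each scanned glyph
    # with the nearest bitmap from a dictionary of known glyphs.
    SIX = np.array([[1,1,1],
                    [1,0,0],
                    [1,1,1],
                    [1,0,1],
                    [1,1,1]])
    EIGHT = np.array([[1,1,1],
                      [1,0,1],
                      [1,1,1],
                      [1,0,1],
                      [1,1,1]])
    glyphs = {"6": SIX, "8": EIGHT}

    def substitute(scan):
        # Pick the dictionary glyph with the fewest mismatched pixels.
        return min(glyphs, key=lambda k: int(np.sum(glyphs[k] != scan)))

    noisy_six = SIX.copy()
    noisy_six[1, 2] = 1  # one pixel of noise closes the middle gap
    print(substitute(noisy_six))  # "8": clean-looking and confidently wrong

The output is crisp precisely because the dictionary glyph replaces the scan outright, which is why a viewer gets no visual cue that anything went wrong.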


The preemptive certification I mentioned would be a validation of due care. It doesn't matter how many theoretical arguments you throw up against this: once there's sufficient empirical evidence for its reliability (and there will be), it will be accepted as evidence.

Also, y'all need to think more like prosecutors. Say you are dragged into court on the basis of photos showing your car in the dark, and you object that the photo is from the ML 9000 security camera and it might just be imagining your license plate. The police/prosecutors will simply 'borrow' your car, leave it there for a night, and leave the rest up to the jury.

Forensic evidence can be and is regularly abused, but it can also be quite easily validated and it's massively persuasive to juries.


I don’t understand the scenario you’re picturing.

Let’s assume I’m innocent but some neural net has placed my car at the location of a crime.

You’re saying that if I challenge the evidence, the prosecutors will counter that by showing that if my car were there, the neural net would have produced a picture of my car? They don’t need to do that and it adds nothing to their argument. I’m not challenging that the neural net is capable of producing an image of my car.

No, the point is that I am placed in the position of having to demonstrate that there exists some other car which, under those lighting conditions, the neural net would mistake for mine. That's a far harder burden of proof to meet.

Honestly, this is similar to the way fingerprint, DNA, and hair sample matches are presented to courtrooms all the time, so it isn't a new problem. As you say, forensics are persuasive.


I personally think juries are a great way to convict innocent people, and that adversarial court systems privilege people who can afford to pay for the best storyteller, so arguments from that direction start out hobbled.



