I’ve never used them firsthand, but crackpots sure do love claiming to solve the Riemann hypothesis, P vs NP, the Collatz conjecture, etc., and then peddling out huge piles of slop. My experience has solely been curiously following what the LLMs have been generating.
You have to be very, VERY careful. With how predisposed they are to helping, they’ll turn to “dishonesty” rather than just shut down and refuse. What I tend to see is that they get backed into a corner and do something like prove a different statement under the guise of the original:
They’ll create long pattern-matching chains, building labyrinths of state machines.
They’ll keep naming functions, values and comments to seem plausible, but you have to follow each of these to make sure they are what they say. A sneaky little trick is to drop important parameters from functions: they appear in the signature but are never used in the actual body.
They’ll do something like taking a Complex value but only working with its real projection, rounding a number, creatively making negatives disappear with abs, etc. etc.
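As a sketch of what to watch for (the names `AlmostRH` and `almostRH_holds` are invented for illustration, not from mathlib): a proposition can sound like a deep claim about complex numbers while its hypothesis quietly assumes the conclusion, so the “proof” is vacuous yet compiles cleanly.

```lean
import Mathlib.Data.Complex.Basic

-- Invented example: the name suggests a serious claim, but the
-- hypothesis `h` already assumes the conclusion, so this is trivially true.
def AlmostRH : Prop :=
  ∀ (s : ℂ), s.re = 1 / 2 → s.re = 1 / 2

-- Compiles fine; proves nothing of interest.
theorem almostRH_holds : AlmostRH := fun _ h => h
```

The type checker is perfectly happy here; only reading the statement itself reveals that nothing was proved.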
So even when it compiles, you’ve got the burden of verifying everything is above board which is a pretty huge task.
And when it doesn’t work, introducing an error or two in formal proof systems often means you’re getting exponentially further away from solving your problem.
I’ve not seen a convincing use beyond what tactics or goals in the proof assistant already provide.
>So even when it compiles, you’ve got the burden of verifying everything is above board which is a pretty huge task.
Is this true?
e.g. the Riemann hypothesis is in mathlib:
def RiemannHypothesis : Prop :=
  ∀ (s : ℂ) (_ : riemannZeta s = 0) (_ : ¬∃ n : ℕ, s = -2 * (n + 1)) (_ : s ≠ 1), s.re = 1 / 2
If I construct a term of this type without going via one of the (fairly obvious) soundness holes or a compiler bug, it's very likely proved, no? No matter how inscrutable the proof is from a mathematical perspective. (Translating it into something mathematicians understand is a separate question, but that's not really what I'm asking.)
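One practical check, for what it's worth: Lean can report which axioms a proof term actually depends on, which catches the most common soundness holes. A sketch, using a `sorry` stand-in (`rh_claimed` is an invented name) to show what the check reports:

```lean
import Mathlib

-- A stand-in "proof" via sorry, just to demonstrate the axiom check.
theorem rh_claimed : RiemannHypothesis := sorry

-- A genuine proof should report only the standard axioms
-- (propext, Classical.choice, Quot.sound); this one flags `sorryAx`.
#print axioms rh_claimed
```

That still leaves compiler bugs and statement mismatches, but it rules out the `sorry`/extra-axiom class of holes.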
Only if the shop assistant took your ID, photocopied it and stored it in a box marked “do not touch” under the counter, alongside transcriptions of everything you ever say inside the store.
That’s a strawman. It’s such a tiny aspect of what the majority of activity on the internet is that it’s irrelevant to any large-scale discussion of the internet.
Just like how school shooters are irrelevant to any large scale discussion of education? It is very relevant and impactful and you can't just hand-wave it away by saying percentages mean it doesn't count.
I’ve lived in two apartments with the setup OP described, and they were both built 2003-2006. But I’ve not had it anywhere else, so it does seem constrained to a specific window of apartment developments
It's a setup seen in a lot of new-build flats from the 2000s and 2010s, which make up a very large share of London's housing stock for flats (there has been so much construction!).
Really? Color me corrected; I only ran into TC after SonicPi.
Though this entire discussion reminds me I need to fix my TidalCycles setup, had it working on Linux with vscode but I tried it out again a month or two ago and it wasn't playing anymore.
You can pick it up passively over time, and with your skills, if you were to actively engage, I suspect you'd pick up the necessary basics very quickly; the rest comes from experience.
I picked up Linux at 13, fortuitously just in time for the release of the Nokia 770 (later getting, and still owning the N900 too).
At that time, getting really dirty with the kernel, hardware, cross-compiling, etc. was necessary, so 1) there were more resources, and 2) it was seen as mundane busywork rather than mystical and difficult.
If I were to say how to learn the same things today, I’d probably say Gentoo is ideal: it’s insanely flexible for tinkering, has good resources on compiling the kernel and packages, and I’m a fan of crossdev for cross-compiling.
Getting real dirty with hardware and electronics, the obvious answer would be one of the Raspberry Pi lineup, but if you’re very tenacious, patient and a touch unhinged, then I would actually say now’s the time to get in on RISC-V.
It’s still early days, so there are lots of resources with very thin abstractions between hardware <-> tooling <-> code. Devices are cheap and exciting. You’ll be on the same footing as most other people, so you won’t feel like a dunce.
The cons are that a lot of RISC-V devices get shipped out with very little documentation (and sometimes only in Chinese), binary blobs making mainstream kernels difficult, and you’re learning at the same time, so you might feel you’re ice skating uphill.
Wrt the bootloader and partition corruption: towards the twilight years of the N900’s life, when it became clear the N900 had been abandoned and the N950 was still only available to a select few, a bunch of smart people on the Maemo forums started reverse-engineering and writing open drivers (u-boot bootloader, wifi, camera IIRC), so these became pretty well documented.
90% manned. A lot of money and time goes into getting track access.
And collecting unmanned data is still such a pain. At the moment, you stick calibration gear to a train and hope it collects as much noise-free data as it can, all whilst going at least 40 mph over the area you want: you’re fighting vibrations, vehicle grease, too much sunlight, not enough sunlight, rain, ballast covering things, equipment not calibrated before going out, etc. etc.