Refresh rate directly affects one of the components of total input lag, and increasing refresh rate is one of the most straightforward ways for an end user to chip away at that input lag problem.
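As a rough back-of-the-envelope (my own numbers, not a measurement): if you model the display-side component as "wait for the next refresh, on average half a frame, plus one frame of scanout", the refresh-rate term shrinks quickly as the rate goes up.

    # Rough model only: display-side lag ~ half a frame of waiting + one frame of scanout.
    # Real pipelines have more stages; this just shows how the refresh-rate term scales.
    for hz in (60, 120, 240):
        frame_ms = 1000 / hz
        print(f"{hz:3d} Hz: frame = {frame_ms:5.1f} ms, refresh-related lag ~ {1.5 * frame_ms:5.1f} ms")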
> I read somewhere that TearFree is triple buffering, so -if true- it's my (perhaps mistaken) understanding that this adds a frame of latency.
True triple buffering doesn't add a full frame of latency, but because it enforces that only whole frames are sent to the display instead of tearing, it can add partial frames of latency. (It's hard to come up with a well-defined measure of frame latency when tearing is allowed.)
But there have been many systems that abused the term "triple buffering" to refer to a three-frame queue, which always does add unnecessary latency, making it almost always the wrong choice for interactive systems.
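A toy sketch of the difference (my own illustration, not any particular API): with "latest frame wins" the display grabs the newest completed frame at each vblank, while a 3-deep FIFO hands the display progressively staler frames whenever the renderer runs ahead.

    from collections import deque

    # Toy simulation: the renderer completes 2 frames per display refresh.
    # Frame numbers stand in for rendered images; the number shown at each
    # vblank tells you how stale the on-screen image is.
    RENDERS_PER_VBLANK = 2
    VBLANKS = 6

    def latest_wins():
        """'True' triple buffering: at each vblank the display takes the most
        recently completed frame; older pending frames are simply dropped."""
        shown, next_frame, pending = [], 0, None
        for _ in range(VBLANKS):
            for _ in range(RENDERS_PER_VBLANK):
                pending = next_frame          # newest completed frame replaces older ones
                next_frame += 1
            shown.append(pending)
        return shown

    def three_deep_fifo():
        """A three-frame queue: completed frames wait their turn, so what
        reaches the screen lags further behind the newest rendered frame."""
        shown, next_frame, queue = [], 0, deque()
        for _ in range(VBLANKS):
            for _ in range(RENDERS_PER_VBLANK):
                if len(queue) < 3:            # renderer stalls once the queue is full
                    queue.append(next_frame)
                    next_frame += 1
            shown.append(queue.popleft() if queue else None)
        return shown

    print("latest-wins:", latest_wins())      # [1, 3, 5, 7, 9, 11] -- always the newest frame
    print("3-deep FIFO:", three_deep_fifo())  # [0, 1, 2, 3, 4, 5]  -- frames arrive ever more stale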
The matrix/tensor math units added to GPUs do see widespread use, both for running LLMs and for the ML-based upscaling used by most video games these days (e.g. NVIDIA DLSS). The NPUs that are separate from the GPU and designed more for efficiency than raw performance are a different thing, and that's what's still looking for a killer app in spite of all the marketing effort.
What you linked to is not really evidence, just an unsubstantiated allegation. Over-the-top public shaming is something that should be pretty easy to provide direct evidence of. When Linus Torvalds does it, it gets brought up repeatedly in forums like this for many years.
> I have no reason to believe it is a lie, and it sounds plausible
Except for all of the responses from people saying it doesn't sound plausible for the project in question, and for the acute lack of real evidence or even details to accompany the allegation.
Additionally, I think "last resort" is way too high a bar. It's totally reasonable for an open source project to have a zero-tolerance policy for AI-generated spam patches or bug reports, and to respond with public shaming after the first offense. Nobody should be expected to make any allowance for such egregious behavior.
A user who is genuine but simply doesn't know how to usefully communicate about their problem doesn't deserve that treatment and should simply be ignored if the devs don't have time to engage in the interrogation necessary to extract a useful bug report. But if the user decides to try to use an LLM to compensate for a lack of content in their bug report, that user would be earning a negative response by making a bad decision. (If you're going to use an LLM, ask it how to write a bug report, rather than asking it to make up a bug report for you.)
That doesn't seem to actually provide a usable OS to run on any remotely recent Apple hardware. The most recent test build available for download is a virtual machine image of a version that aligns with macOS from eight years ago.
"The internet no longer exists" is a particularly extreme subset of off-grid scenarios. For the more plausible off-grid scenarios—the ones that have actually happened—the unavailability of the internet has been varying degrees of localized and temporary. In that context, being able to bootstrap the entire network without any reliance on internet infrastructure is more of a convenience than a hard requirement.
In particular, it seems obvious to me that any preparedness plan that requires a user to acquire specialized hardware in advance (e.g. a battery- or solar-powered long-range radio of some kind) for use with an off-grid network can reasonably expect that user to also have prepared the software needed to drive that hardware.
The whole project is a convenience. If I were in a situation where I actually had to rely on Meshtastic for comms, I'd be pretty nervous; it doesn't really work that well. Luckily, I've only enjoyed Meshtastic recreationally. This comes from me trying to learn about and set up some nodes while on vacation in an area with very limited internet. I followed the tutorials and thought I had what I needed, but I was wrong. Whoops, the documentation is online. Within the community, I've seen "that same thing happened to me" more than once.
As with many hobbies, this is a "just because I can, I will" type of thing.
What, you don't think we can put up a shadow internet running at 250 kbps?
That said, I picked up a couple of prebuilt LoRa solar nodes and a couple of mobile nodes (Seeed solar jobies and Seeed mobile jobies) and stuck the solar ones in my upper-story windows just over New Year's. One is set up as a Meshtastic repeater, the other as a MeshCore repeater.
I'm pretty amazed at the distances I hear from. This morning I'm getting stuff over MeshCore all the way from Vancouver, BC into my office in Seattle (pugetnet.org).
To get it all dialed in, having a Discord full of old ham radio guys who know RF pretty well certainly doesn't hurt.
It's certainly hobbyist grade at best. It seems like it could be very interesting for installs in small communities and larger estates, as backhaul for remote IoT applications. Obviously you aren't going to push video over that bandwidth, but for weather stations and the like it seems cool.
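For a sense of scale (hedged, since throughput depends heavily on the radio preset in use): assuming an effective link somewhere around 1 kbit/s, small sensor payloads are trivial while video misses by several orders of magnitude.

    # Hedged back-of-the-envelope; LINK_BPS is an assumed effective rate, not a spec.
    LINK_BPS = 1_000                      # ~1 kbit/s, roughly a slow long-range preset
    VIDEO_BPS = 500_000                   # even very low-bitrate video
    REPORT_BYTES = 48                     # hypothetical weather-station payload
    print(f"video would need ~{VIDEO_BPS // LINK_BPS}x the available bandwidth")
    print(f"one sensor report needs ~{REPORT_BYTES * 8 / LINK_BPS:.2f} s of airtime")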
Reticulum becomes more interesting when you are talking about some of the more robust radio technologies. Building a mesh LAN out of old wifi gear is interesting in concept.
Not only would any computer of last resort have the software installed in advance, along with easily prepared redundant archives to reinstall it, but "pip install" is perfectly fine for other use cases: testing Reticulum, regularly updating everyday computers, improvised installations on someone else's computer, etc.
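A minimal sketch of what "easily prepared redundant archives" could look like, assuming Reticulum's PyPI package name is rns and the wheel directory lives on removable media: cache the wheels while the network is up, then reinstall from that cache with no index access at all.

    import subprocess, sys

    WHEEL_DIR = "./rns-wheels"            # hypothetical local archive, e.g. on a USB stick

    # Ahead of time, while online: download the package and all dependencies as wheels.
    subprocess.run([sys.executable, "-m", "pip", "download", "rns",
                    "--dest", WHEEL_DIR], check=True)

    # Later, fully offline: install strictly from the local archive, never touching an index.
    subprocess.run([sys.executable, "-m", "pip", "install", "--no-index",
                    "--find-links", WHEEL_DIR, "rns"], check=True)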
You should look at pictures of Apple's Pro Display XDR. The Kuycon monitor is an obvious rip-off of that in terms of styling, especially the ventilation on the back.
At the end of the month, laptops with Intel's latest processors will start shipping. These use Intel's 18A process for the CPU chiplet, which makes Intel the first fab to ship a process using backside power delivery. There's no third-party testing yet to show whether Intel is still far behind TSMC when power, performance, and die size are all considered, but Intel is definitely making progress, and their execs have been promising more for the future, such as their 14A process.
P.A. Semi and Intrinsity weren't front of mind for me. My point is that Apple has proven they can buy their way into vertical integration; let's look at the history:
68K -> PowerPC, practically seamless
Mac OS 9 -> BSD / OS X with excellent backward compatibility
PowerPC -> x86
x86 -> ARM
Each major transition bit off orders of magnitude more integration complexity. Looking at this continuum, the next logical vertical-integration step for Apple is fabrication. The only question in my mind is whether Tim has the guts to take that risk.
The person in question is https://en.wikipedia.org/wiki/Jamie_Zawinski