Android supports DHCPv6, just not stateful DHCPv6. You can give each device its own /64, or if you really want to track a device's usage you should use an authenticated layer on top of your base network.
Is software just going to get worse from now on? Was the level of quality and feature improvement we've come to expect an artifact of high levels of investment based on expectations of growth that are no longer seen as valid?
We've built stacks so high we're afraid to jump off.
Nobody is really competing because nobody can build a complete product. So there's less pressure to fix the little irritations. Users are mostly satisfied, and problems get worse slowly enough that the average user doesn't notice right away how bad things are getting. So they stay, because it's too hard or completely impossible to leave.
I think the bigger issue is the update model. In the past, if a new version sucked, people wouldn't upgrade. Now, with subscriptions / continuous delivery, there's less ability to vote with one's wallet/feet.
If you're dependent on updating your OS for security fixes and basic compatibility, you are also forced to update the things you may not want to. It's all bundled together.
But it's not just the OS, but apps too, to say nothing of web SaaS products.
How many times have you launched something only to find the UI had been redone, some feature was now gone or changed, something that worked was now broken, etc.
But it's fine, you see, because we have telemetry and observability and robust CI/CD.
Users and their work are nothing more than ephemeral numbers on a metrics dashboard
This does not always work for specific programs that don't do that, and even then there are updates you might want, other than security updates, without updating other parts of the same program. Separate programs can usually be updated individually, but if they are all bundled into one program it becomes more difficult (sometimes configuration can help, but not always; sometimes they change things so that doesn't work either).
100% this. And cars are going down this road as well. For example, my Tesla Model 3's radio will go bonkers every so often and will refuse to change the channel, no matter what I do. Tapping a new channel icon changes the "currently playing" view, but the audio from the original channel continues to play. This happens until you restart the entire UI (by turning off the car or rebooting the display).
But, hey, they managed to add a Tron cross-over tie-in feature, and maybe some new fart noises!
Undoubtedly when they fix that radio bug, something else will fail. Like the SRS (supplemental restraint system, aka airbag) error message that was introduced at some point in the past six months, then silently got fixed with a more recent firmware update.
Incentives Rule Everything Around Me. What incentive does Apple have not to be shit? People aren't going to switch to anything else, they'll just suck it up and shove it in their enormous sack of learned helplessness.
Yup, it's time to let go. The forces that eat away at quality software are running an indoctrination campaign with budgets in the billions of dollars to ensure that people don't remember what quality software is. You can do right in your own work and with your own people, but most people's experiences are going to suck for the foreseeable future.
There have been bugs and regressions since forever. It’s easy to look back with rose colored glasses, but I don’t think software has actually gotten worse.
Just look back at the Snow Leopard release of OS X. It was specifically marketed as having no new features and just being a fix-and-optimization release because Leopard was such a mess. And people were happy about this.
> Just look back at the Snow Leopard release of OS X. It was specifically marketed as having no new features and just being a fix-and-optimization release because Leopard was such a mess.
This is wrong. Leopard wasn’t “such a mess”. No one was saying Leopard was more buggy than Tiger.
Further, Snow Leopard wasn't a bug-fixing release. It had a lot of new features. The difference is that the features were not user-facing but geared towards the underlying tech.
From Wikipedia:
> The goals of Snow Leopard were improved performance, greater efficiency and the reduction of its overall memory footprint, unlike previous versions of Mac OS X which focused more on new features.
> Much of the software in Mac OS X was extensively rewritten for this release in order to take full advantage of modern Macintosh hardware and software technologies (64-bit, Cocoa, etc.). New programming frameworks, such as OpenCL, were created, allowing software developers to use graphics cards in their applications.
I suspect that people not really paying for certain things has had an impact. Remember when there were a lot of high quality, paid keyboards for Android?
I doubt those were particularly profitable, but there was a lot of innovation back then.
Why pay for a keyboard app when the default keyboard is already good enough?
Moreover, why risk installing a 3rd-party keyboard app when the App Store is filled with adware and malware? All those handy flashlight and camera apps are Trojan horses; why should one assume that the various keyboard apps in the App Store aren't keyloggers trying to steal my login info?
In 2025 I can do mostly error-free blind typing on the Pixel 7 keyboard, with all autocorrect and predictive spelling intentionally turned off. Why would I need innovation?
>why should one assume that the various keyboard apps in the App Store aren't keyloggers trying to steal my login info?
Honestly, you shouldn't.
Theoretically, Apple + Google take a % of all payments that go through their store, with the stated reason being to "monitor and police the safety of the apps on the app store". You really should be able to trust apps on the official app stores, but I don't trust Apple or Google, so the whole system is moot, I guess.
>Moreover, why risk installing a 3rd-party keyboard app when the App Store is filled with adware and malware? All those handy flashlight and camera apps are Trojan horses; why should one assume that the various keyboard apps in the App Store aren't keyloggers trying to steal my login info?
And unless the app gets acquired by the big companies, it will eventually turn into malware.
> Why pay for a keyboard app when the default keyboard is already good enough?
That's probably what people would have said before Swype was invented too. But lots of people use that in their default keyboards thanks to the people that _did_ pay for keyboards back then.
Who knows what innovations we are missing out on today just because we've consolidated things down to 2-3 suppliers?
Improving the quality of existing features (or degrading it, for that matter) doesn't figure into career promotions anymore. Only new features count. Or changing the visual design.
> Is software just going to get worse from now on?
I mean, yes? I think, as a pretty universal rule, you can expect commercial software to (on average) get worse every time it is changed. Companies spend little or no time fixing bugs and spend most of their time cramming (wanted or unwanted) features. Of course software is just going to get worse and worse over time.
It's possible to have both overpopulation (too large a population for a given metric like water, energy, pollution, etc.) and demographic collapse (too many old people, not enough young workers). It's not intuitive, but they are separate phenomena.
The reaction to overpopulation concerns probably discouraged people from having kids but it's unlikely to be the main cause.
Are you sure? It's been a few years, but last I tried Firefox used its own CA store on Windows. I'm pretty sure openjdk uses "<JAVA_HOME>/jre/lib/security/cacerts" instead of the system store too.
I just reworked my home server backup strategy to use rsync.net and it's been a great experience.
I'm using btrfs and snapper to take hourly snapshots. The snapborg[0] tool then pushes those snapshots to a borg repo on rsync.net. snapper and snapborg can be configured to keep the number of hourly/daily/weekly/monthly/yearly snapshots you want and can automatically prune them.
> Are people just going to stop buying devices and computers?
I'm sure Apple and Samsung will still have access to chips. Maybe this is just the beginning of the end for access to general-purpose computing for the masses.
desec.io allows you to create (through the api) tightly-scoped tokens that can only update the "_acme-challenge.subdomain.example.com" domain needed for DNS-01 challenges.
I switched to them from cloudflare dns for that specific functionality and it works great.
The GPU driver for Apple silicon is written in Rust, and the author stated it would have been much more difficult to implement in C. It isn't upstreamed yet.
"""
Normally, when you write a brand new kernel driver as complicated as this one, trying to go from simple demo apps to a full desktop with multiple apps using the GPU concurrently ends up triggering all sorts of race conditions, memory leaks, use-after-free issues, and all kinds of badness.
But all that just… didn’t happen! I only had to fix a few logic bugs and one issue in the core of the memory management code, and then everything else just worked stably! Rust is truly magical! Its safety features mean that the design of the driver is guaranteed to be thread-safe and memory-safe as long as there are no issues in the few unsafe sections. It really guides you towards not just safe but good design.
"""
> the whole thing seems kinda cute but like, shouldn't this experiment in programming language co-development be taking place somewhere other than the source tree for the world's most important piece of software?
Rustlang doesn't aim to address race conditions. Sounds to me like overly "cautious" inefficient code you can write in any language. Think using `std::shared_ptr` for everything in C++, perchance…?
The comment probably refers to data races over memory access, which are prevented by usage of `Send` and `Sync` traits, rather than more general race conditions.
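To make that concrete, here is a minimal sketch using only the standard library (nothing from the driver discussed in this thread): `Arc` is `Send + Sync`, so it can cross a thread boundary, while `Rc` is neither and the compiler rejects the attempt.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    let shared = Arc::new(vec![1, 2, 3]);
    let handle = {
        let shared = Arc::clone(&shared);
        // Arc<Vec<i32>> is Send + Sync, so it may cross the thread boundary.
        thread::spawn(move || shared.len())
    };
    assert_eq!(handle.join().unwrap(), 3);

    let local = Rc::new(vec![1, 2, 3]);
    // Rc is neither Send nor Sync; uncommenting the next line fails to
    // compile with "`Rc<Vec<i32>>` cannot be sent between threads safely".
    // thread::spawn(move || local.len());
    drop(local);
}
```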
I see, but that's not the point of my comment. I don't know rustlang, perhaps I could address that if someone translated the rust-specific parlance to more generally accepted terms.
I'm not sure I understand the point of your comment at all.
Rust does, successfully, guarantee the lack of data races. It also guarantees the lack of memory-unsafety resulting from race conditions in general (which to be fair largely just means "it guarantees a lack of data races", though it does also include things like "race conditions won't result in a use after free or an out of bounds memory access").
If by address it you mean "show how C/C++ does this"... they don't and this is well known.
If by address it you mean "prove that rust doesn't do what it says it does"... as that point you're inviting someone to teach you the details of how rust works down to the nitty gritty in an HN comment. You'd be much better off finding and reading the relevant materials on the internet than someones off hand attempt at recreating them on HN.
The point of my comment is that in my experience, incompetently written, overly-cautious code tends to be more safe at the expense of maintainability and/or performance.
Sadly, I don't know rustlang, so I can't tell if the inability to describe its features in more commonly used terms is due to incompetence or the features being irrelevant to this discussion (see the title of the thread).
The thing is you aren't really asking about a "feature" of rust (as the word is used in the title of the thread), unless that feature is "the absence of data races" or "memory safety" which I think are both well defined terms† and which rust has. Rather you're asking how those features were implemented, and the answer is through a coherent design across all the different features of rust that maintains the properties.
As near as I can tell to give you the answer you're looking for I'd have to explain the majority of rust to you. How traits work, and auto traits, and unsafe trait impls, and ownership, and the borrow checker, and for it to make sense as a practical thing interior mutability, and then I could point you at the standard library concepts of Send and Sync which someone mentioned above and they would actually make sense, and then I could give some examples of how everything comes together to enable memory safe, efficient, and ergonomic, threading primitives.
But this would no longer be a discussion about a rust language feature, but a tutorial on rust in general. Because to properly understand how the primitives that allow rust to build safe abstractions work, you need to understand most of rust.
Send and Sync (mentioned up thread) while being useful search terms, are some of the last things in a reasonable rust curriculum, not the first. I could quickly explain them to someone who already knew rust, and hadn't used them (or threads) at all, because they're simple once you have the foundation of "how the rest of rust works". Skipping the foundation doesn't make sense.
† "Memory safety" was admittedly possibly popularized by rust, but is equivalent to "the absence of undefined behaviour" which should be understandable to any C programmer.
> The point of my comment is that in my experience, incompetently written, overly-cautious code tends to be more safe at the expense of maintainability and/or performance
Well, yes, but that's the whole value of Rust: you don't need to use these overly cautious defensive constructs (at least not to prevent data races), because the language prevents data races for you automatically.
Safe Rust does. To what extent Rust interfaces that wrap kernel APIs will achieve safety for the drivers that make use of them remains to be seen. I think it will indeed do this to some degree, but I have some doubts whether the effort and overhead are worth it. IMHO all these resources would be better invested elsewhere.
That's kinda the problem: there are concepts in rust that don't have equivalents in other common languages. In this case, rust's type system models data-race-safety: it prevents data races at compile time in a way unlike what you can do in C or C++. It will prevent getting mutable access (with a compile time error) to a value across threads unless that access is synchronized (atomics, locks, etc).
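A minimal sketch of what that looks like in practice (plain standard-library code, not from the driver): each thread can only reach the shared value through the lock; hand the threads a bare `&mut` instead and the program simply doesn't compile.

```rust
use std::sync::Mutex;
use std::thread;

fn main() {
    let total = Mutex::new(0u64);

    thread::scope(|s| {
        for _ in 0..4 {
            s.spawn(|| {
                // Mutation only happens through the guard returned by lock();
                // giving each thread a plain `&mut total` instead is rejected
                // at compile time (only one mutable borrow may exist at once).
                *total.lock().unwrap() += 1;
            });
        }
    });

    assert_eq!(*total.lock().unwrap(), 4);
}
```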
And from what I can see, rustlang mutability is also a type system construct? I.e. it assumes that all other code is Rust for the purpose of those checks?
> rustlang mutability is also a type system construct?
Yes
> I.e. it assumes that all other code is Rust for the purpose of those checks?
Not exactly, it merely assumes that you upheld the documented invariants when you wrote code to call/be-called-from other languages. For example, if I have an `extern "C" fn foo(x: &mut i32)`, it assumes that:
- x points to a properly aligned, properly allocated i32 (not to null, not to the middle of an unallocated page somewhere)
- The only way that memory will be accessed for the duration of the call to `foo` is via `x`. Which is to say that other parts of the system won't be writing to `x` or making assumptions about what value is stored in its memory until the function call returns (rust is, in principle, permitted to store some temporary value in `x`s memory even if the code never touches x beyond being passed it. So long as when `foo` returns the memory contains what it is supposed to). Note that this implies that a pointer to the same memory isn't also being passed to rust some other way (e.g. through a static which doesn't have a locked lock around it)
- foo will be called via the standard "C" calling convention (on x86_64 Linux this for instance means that the stack pointer must be 16-byte aligned at the call, which is the type of constraint that is very easy to violate from assembly and next to impossible to violate from C code).
That it's up to the programmer to verify the invariants is why FFI code is considered "unsafe" in rust - programmer error can result in unsoundness. But if you, the programmer, are confident you have upheld the invariants you still get the guarantees about the broader system.
Rust is generally all about local reasoning. It doesn't actually care very much what the rest of the system is, so long as it called us following the agreed-upon contract. It just has a much more explicit definition of what that contract is than C.
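As a rough sketch of that contract (illustrative names, nothing from the actual kernel bindings): exporting a Rust function over the C ABI is ordinary safe Rust on the inside, precisely because Rust assumes the foreign caller upheld the invariants listed above.

```rust
// Exported with the C ABI. Rust trusts whoever calls this to uphold the
// contract: `x` points at a valid, properly aligned i32 that nothing else
// touches for the duration of the call, and the standard C calling
// convention is used.
pub extern "C" fn bump(x: &mut i32) {
    // If the caller honored that contract, this body is ordinary safe Rust;
    // if not, the resulting UB is the caller's bug, not this function's.
    *x += 1;
}

fn main() {
    // Called from Rust, the same contract is checked by the compiler as usual.
    let mut v = 41;
    bump(&mut v);
    assert_eq!(v, 42);
}
```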
Also we can (in 2024 Edition) say we're vouching for an FFI function as safe to call, avoiding the need for a thin safe Rust wrapper which just passes through. We do still need the unsafe keyword to introduce the FFI function name, but by marking it safe all the actual callers don't care it wasn't written in Rust.
This is fairly narrow; often C functions aren't actually safe: for example they take a pointer that must be valid, or they have requirements about the relative values of parameters or the state of the wider system which can't be checked by Rust, so they stay unsafe. But there are cases where this affordance is a nice improvement.
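A small sketch of that 2024-edition affordance (assuming Rust 1.82+; `abs` and `strlen` here are the usual libc functions): the `extern` block itself is marked `unsafe` because we're vouching for the declarations, and items we judge to have no caller-violable preconditions can be marked `safe`.

```rust
unsafe extern "C" {
    // No precondition a caller could violate, so we vouch for it as safe:
    // callers don't need an `unsafe` block.
    safe fn abs(x: i32) -> i32;

    // Takes a pointer that must be valid and NUL-terminated, so it stays
    // unsafe to call.
    fn strlen(s: *const core::ffi::c_char) -> usize;
}

fn main() {
    println!("{}", abs(-5)); // no `unsafe` needed here

    let s = c"hello";
    // SAFETY: `s` is a valid, NUL-terminated C string.
    println!("{}", unsafe { strlen(s.as_ptr()) });
}
```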
Also "safe" and "unsafe" have very specific meanings, not the more widely used meanings. It's not inherently dangerous to call unsafe code that is well written, it's really more a statement about who is taking responsibility for the behavior, the writer or the compiler.
I like the term "checked" and "unchecked" better but not enough to actually lobby to change them, and as a term of art they're fine.
Yes. Just like C++ "const" is a type system construct that assumes all other code is C++ (or at least cooperates with the C++ code by not going around changing random bytes).
As far as I can tell, ANY guarantee provided by ANY language is "just a language construct" that fails if we assume there is other code executing which is ill-behaved.
a data race is a specific kind of race condition; it's not rust parlance, but that specificity comes up a lot in rust discussions because that's part of the value
> since Rust is not the only language susceptible to data races.
The point is rather that it's not. The “trait send sync things” specify whether a value of the type is allowed to be, respectively, moved or borrowed across thread boundaries.
I mean, reliably tracking ownership and therefore knowing that e.g. an aliased write must complete before a read is surely helpful?
It won't prevent all races, but it might help avoid mistakes in a few of em. And concurrency is such a pain; any such machine-checked guarantees are probably nice to have to those dealing with em - caveat being that I'm not such a person.
Heh. This is such a C++ thing to say: “I want to do the right thing, but then my code is slow.” I know, I used to write video games in C++. So I feel your pain.
I can only tell you: open your mind. Is Rust just a fad? The latest cool new shiny, espoused only by amateurs who don’t have a real job? Or is it something radically different? Go dig into Rust. Compile it down to assembly and see what it generates. Get frustrated by the borrow checker rules until you have the epiphany. Write some unsafe code and learn what “unsafe” really means. Form your own opinion.
> Is rust going to synchronize shared memory access for me?
Much better than that. (safe) Rust is going to complain that you can't write the unsynchronized nonsense you were probably going to write, shortcutting the step where in production everything gets corrupted and you spend six months trying to reproduce and debug your mistake...
> aren't they just annotations? proper use of mutexes and lock ordering aren't that hard, they just require a little bit of discipline and consistency.
Spatial memory safety is easy, just check the bounds before indexing an array. Temporal memory safety is easy, just free memory only after you've finished using it, and not too early or too late. As you say, thread safety is easy.
Except we have loads of empirical evidence--from widespread failures of software--that it's not easy in practice. Especially in large codebases, remembering the remote conditions you need to uphold to maintain memory safety and thread safety can be difficult. I've written loads of code that created issues like "oops, I forgot to account for the possibility that someone might use this notification to immediately tell me to shut down."
What these annotations provide is a way to have the compiler bop you in the head when you accidentally screw something up, in the same way the compiler bops you in the head if you fucked up a type or the name of something. And my experience is that many people do go through a phase with the borrow checker where they complain about it being incorrect, only to later discover that it was correct, and the pattern they thought was safe wasn't.
Proper use of lock ordering is reasonably difficult in a large, deeply connected codebase like a kernel.
Rust has real improvements here, like this example from the Fuchsia team of enforcing lock ordering at compile time [0]. This is technically possible in C++ as well (see Alon Wolf's metaprogramming), but it's truly dark magic to do so.
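The Fuchsia pattern is more involved, but a toy version of the idea looks roughly like this (all names made up for illustration, not the actual crate's API): lock levels become types, and acquiring a lock while another is held only compiles if the ordering relation between the two levels was declared.

```rust
use std::marker::PhantomData;
use std::sync::{Mutex, MutexGuard};

// Lock levels as zero-sized types, plus the declared ordering between them.
struct A;
struct B;
trait LockAfter<Held> {}
impl LockAfter<A> for B {} // "B may be acquired while A is held"

// A guard that remembers which level it holds.
struct LeveledGuard<'a, L, T>(MutexGuard<'a, T>, PhantomData<L>);

struct LeveledMutex<L, T> {
    inner: Mutex<T>,
    _level: PhantomData<L>,
}

impl<L, T> LeveledMutex<L, T> {
    fn new(value: T) -> Self {
        Self { inner: Mutex::new(value), _level: PhantomData }
    }

    // First lock in a chain.
    fn lock_first(&self) -> LeveledGuard<'_, L, T> {
        LeveledGuard(self.inner.lock().unwrap(), PhantomData)
    }

    // Subsequent lock: only compiles if the ordering relation was declared.
    fn lock_after<'a, Held, U>(
        &'a self,
        _held: &LeveledGuard<'_, Held, U>,
    ) -> LeveledGuard<'a, L, T>
    where
        L: LockAfter<Held>,
    {
        LeveledGuard(self.inner.lock().unwrap(), PhantomData)
    }
}

fn main() {
    let a: LeveledMutex<A, i32> = LeveledMutex::new(1);
    let b: LeveledMutex<B, i32> = LeveledMutex::new(2);

    let guard_a = a.lock_first();
    let _guard_b = b.lock_after(&guard_a); // fine: B is declared to come after A
    // let _bad = a.lock_after(&_guard_b); // compile error: A is not LockAfter<B>
}
```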
The lifetimes it implements are the now-unused lexical lifetimes of early Rust. Modern Rust uses non-lexical lifetimes, which accept more valid programs, and the work on Polonius will allow still more legal programs that lexical and non-lexical lifetimes can't. Additionally, the "borrow checker" they implement is RefCell, which isn't the Rust borrow checker at all but an escape hatch that does limited single-threaded borrow checking at runtime (the library won't notice if you use it from multiple threads, but Rust won't let you).
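For readers who don't know Rust, a minimal sketch of the distinction: `RefCell` does the same single-threaded borrow counting, but at runtime, so a violation is a panic rather than a compile error.

```rust
use std::cell::RefCell;

fn main() {
    let value = RefCell::new(0);

    let first = value.borrow_mut();
    // A second simultaneous mutable borrow isn't caught at compile time the
    // way the real borrow checker would catch it; it panics at runtime:
    // let second = value.borrow_mut(); // panics: already borrowed
    drop(first);

    *value.borrow_mut() += 1; // fine once the first borrow is gone
    assert_eq!(*value.borrow(), 1);
}
```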
Given how the committee works and the direction they insist on taking, C++ will never ever become a safe language.
Oh and to add on, in c++ there’s no borrow checker and no language guarantees that exploit UB in the way Rust does with ownership. What does it matter if two parts of a single threaded program have simultaneous mutable references to something - it’s not a safety or correctness issue as there’s no risk of triggering UB and there’s no ill formed program that could be generated that way. IMHO a RefCell equivalent in C++ is utterly pointless.
Bit of a fun fact, but as one of the linked articles states the C++ committee doesn't seem to be a fan of stateful metaprogramming so its status is somewhat unclear. From Core Working Group issue 2118:
> Defining a friend function in a template, then referencing that function later provides a means of capturing and retrieving metaprogramming state. This technique is arcane and should be made ill-formed.
> Notes from the May, 2015 meeting:
> CWG agreed that such techniques should be ill-formed, although the mechanism for prohibiting them is as yet undetermined.
"Just" annotations... that are automatically added (in the vast majority of cases) and enforced by the compiler.
> proper use of mutexes and lock ordering aren't that hard, they just require a little bit of discipline and consistency.
Yes, like how avoiding type confusion/OOB/use-after-free/etc. "just require[s] a little bit of discipline and consistency"?
The point of offloading these kinds of things onto the compiler/language is precisely so that you have something watching your back if/when your discipline and consistency slips, especially when dealing with larger/more complex systems/teams. Most of us are only human, after all.
> how well does it all hold up when you have teamwork and everything isn't strictly adherent to one specific philosophy.
Again, part of the point is that Send/Sync are virtually always handled by the compiler, so teamwork and philosophy generally aren't in the picture in the first place. Consider it an extension of your "regular" strong static type system checks (e.g., can't pass object of type A to a function that expects an unrelated object of type B) to cross-thread concerns.
> aren't they just annotations? proper use of mutexes and lock ordering aren't that hard, they just require a little bit of discipline and consistency.
No, they are not. You also don't need mutex ordering as much since Mutexes in Rust are a container type. You can only get ahold of the inside value as a reference when calling the lock method.
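A minimal sketch of that container shape (illustrative type, nothing from the thread): the data is only reachable through the guard that `lock()` returns, so "touched the data but forgot the lock" isn't something that can compile.

```rust
use std::sync::Mutex;

struct Stats {
    hits: Mutex<u64>,
}

impl Stats {
    fn record_hit(&self) {
        let mut hits = self.hits.lock().unwrap(); // the only path to the data
        *hits += 1;
    } // guard dropped here, lock released
}

fn main() {
    let stats = Stats { hits: Mutex::new(0) };
    stats.record_hit();
    assert_eq!(*stats.hits.lock().unwrap(), 1);
}
```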
> You also don't need mutex ordering as much since Mutexes in Rust are a container type. You can only get ahold of the inside value as a reference when calling the lock method.
Mutex as a container has no bearing on lock ordering problems (deadlock).
> What does rust have to do with thread safety and race conditions? Is rust going to synchronize shared memory access for me?
Rust’s strict ownership model enforces more correct handling of data that is shared or sent across threads.
> Speaking seriously, they surely meant data races, right? If so, what's preventing me from using C++ atomics to achieve the same thing?
C++ is not used in the Linux kernel.
You can write safe code in C++ or C if everything is attended to carefully and no mistakes are made by you or future maintainers who modify code. The benefit of Rust is that the compiler enforces it at a language level so you don’t have to rely on everyone touching the code avoiding mistakes or the disallowed behavior.
Rust's design eliminates data races completely. It also makes it much easier to write thread safe code from the start. Race conditions are possible but generally less of a thing compared to C++ (at least that's what I think).
Nothing is preventing you from writing correct C++ code. Rust is strictly less powerful (in terms of possible programs) than C++. The problem with C++ is that the easiest way to do anything is often the wrong way to do it. You might not even realize you are sharing a variable across threads and that it needs to be atomic.
> What does rust have to do with thread safety and race conditions? Is rust going to synchronize shared memory access for me?
Well, pretty close to that, actually! Rust will statically prevent you from accessing the same data from different threads concurrently without using a lock or atomic.
> what's preventing me from using C++ atomics to achieve the same thing
Now, given some C++ function (say, a `void frobFoo(Foo&)`), is it okay to call it from multiple threads at once? Maybe, maybe not -- if it's not documented (or if you don't trust the documentation), you will have to read the entire implementation to answer that.
Now take the Rust version, where `frobFoo` is a method taking `&mut self`: is it okay to call from multiple threads at once? No, and the language will automatically make it impossible to do so.
If we had `&self` instead of `&mut self`, then it might be okay, you can discover whether it's okay by pure local reasoning (looking at the traits implemented by Foo, not the implementation), and if it's not then the language will again automatically prevent you from doing so (and also prevent the function from doing anything that would make it unsafe).
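A minimal sketch of that distinction (hypothetical `Foo`; the original `frobFoo` signatures aren't shown here): with `&mut self` the compiler requires exclusive access, so two threads can never be inside the method at once; with `&self`, sharing across threads is only accepted because the type is `Sync` and the body is properly synchronized.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

struct Foo {
    count: AtomicU64,
}

impl Foo {
    // `&mut self`: callers need exclusive access, so concurrent calls on the
    // same Foo are impossible by construction; no documentation needed.
    fn frob_exclusive(&mut self) {
        *self.count.get_mut() += 1;
    }

    // `&self`: shared access is allowed, and this is only sound across
    // threads because AtomicU64 is Sync and the body uses atomic operations.
    fn frob_shared(&self) {
        self.count.fetch_add(1, Ordering::Relaxed);
    }
}

fn main() {
    let mut foo = Foo { count: AtomicU64::new(0) };
    foo.frob_exclusive();
    foo.frob_shared();
    assert_eq!(foo.count.load(Ordering::Relaxed), 2);
}
```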
i don't really care for mindless appeals to authority. make your own arguments and defend them or don't bother.
this gpu driver looks pretty cool though. looks like there's much more to the rust compatibility layer in the asahi tree and it is pretty cool that they were able to ship so quickly. i'd be curious how kernel rust compares to user space rust with respect to bloat. (user rust is pretty bad in that regard, imo)
Mindless appeal to authority? I don't think that's how the fallacy really works. It's pretty much the authority that seems to disagree with your sentiment, that is if we can agree that Torvalds still knows what he's doing. Him not sharing your skepticism is a valid argument. The point being that instead of giving weight to our distant feelings, maybe we could just pause and be more curious as to why someone with much closer involvement would not share them. Why should we care more about the opinions of randos on hn?
To be fair, assigning the highly competent BDFL of Linux who has listened to a bunch of highly competent maintainers some credibility isn't mindless.
Unless you have a specific falsifiable claim that is being challenged or defended, it's not at all a fallacy to assume expert opinions are implicitly correct. It's just wisdom and good sense, even if it's not useful to the debate you want to have.
Not every mention of an authority's opinion needs to be interpreted as an "appeal to authority". In this case I think they're just trying to give you perspective, not use Torvalds' opinion as words from god.
It’s very intellectually lazy of you not to be curious about why the creator and decades long, knowledgeable guardian of Linux has the opposite opinion as you, all because you read the Wikipedia about logical fallacies one time.
Also the guy that created "the world's most important piece of software", as you put it. Appealing to the authority on the exact thing you raised concern about is the single most important authority one can cite.
> Surely it's better to cite the authority's reasons as to why they think this way than just to cite the authority itself
Why? When disagreeing with an authority, you want the audience to pay closer attention to your arguments as you demonstrate why the authority has it wrong. When you're just sharing distant and likely under-informed opinions with no arguments to back them up, it's not up to other people to do homework to show you why you're wrong. Appeal to authority is a legit call to a fallacy only when people give next to no consideration to your arguments, focusing instead on the opposing party's stature.
So rather than pointing to experts who're in the best position to know, you'd prefer bad rephrasing and armchair experts? Do you 'do your own research' too?
On a Mac, you can switch between apps with Command-Tab or windows of the same app with Command-`, but there's no way to cycle between all windows or bounce between the two most recently used windows.
Maybe this used to make sense when apps were single-purpose, but I do basically everything in a web browser or a terminal, so not being able to bounce between the previously selected window (of whatever kind), as I can with Alt-Tab on Linux or Windows, is frustrating.
Also Command-` switches to the next window, not the previous one like I would expect.
macOS removed subpixel antialiasing, honestly for understandable reasons, making rendering on low-ppi displays blurry, but high-ppi displays are still super expensive. I got a 32" 4K monitor (~140 ppi) at Costco for $250. A >200 ppi display of the same size costs 20x that amount.
For web apps, spinning them into “installed” apps (doable in both Chrome and Safari now) is the move. This unclogs your tab bar, gets rid of the pointless persistent browser chrome, and gives you the benefit of OS task management capabilities.
You can add Shift to both Command-Tab and Command-` to move in the reverse direction.
Also I find the default Command-` to be unintuitive, especially on non-US keyboards (` is next to left Shift for me). I remapped Command-` to Option-Tab so you only have to move your thumb.
The solution is subpar, even if it's nice to have one. What Windows and Linux have is hinting for text and good antialiasing on vector elements. They map these to the actual hardware pixels so you won't have wobbly lines.
These don't matter as much when you have high PPI. But they're a lifeline on low PPI displays (and there are a lot of those).
I completely agree, having gone through that frustration myself a couple years ago, but it at least makes the experience sort of good enough for my backend swe usage instead of making my eyes hurt. It’s still much better on other oses on the same display, absolutely.
Then you are deliberately handicapping yourself, this isn't something you can blame on the OS. It's like complaining that a car has bad fuel economy because you always stay in first gear.
As for the displays, you are comparing apples to oranges. You can get a high DPI monitor which is smaller than 32 inches for cheap. Which is plenty of screen for the distances where DPI differences are important.
My experience is just the opposite. I have never encountered a cloud app which is anywhere near the best paid apps in quality. What cloud app is better at photo editing than Affinity or Photoshop? What cloud calendar is better than BusyCal? What cloud spreadsheet is better than Excel? IDE and text editor? Etc.
If you think that the purpose of OS X or Apple devices is to live in the web browser or live in the terminal, then you've been very misinformed. It's on the level of buying a motorcycle and expecting it to have a roof. And then complaining about the manufacturer. Apple stuff has worked like this for decades.