New York City has a more European balance of cars versus light trucks than most of the USA. Not easy to park a modern American pickup in any borough except maybe Staten Island. Source: lived there
I have a Modi DAC I've used for years with several different gaming and development rigs and I've never had a problem like this. Sounds like a failing component, maybe a capacitor or regulator—the article author should contact Schiit.
California did not require numbered paper plates when Jobs did this. Car dealers would put paper plates advertising themselves on the car, but you could remove them. Your temporary registration was taped on the inside of the front windshield.
I personally saw his SL500 with dealer plates a couple of times while visiting the Apple campus as a vendor. He'd park in the handicap spot too.
One of the most striking things about this article was the photos of the disguised cameras, especially the ones dressed up as traffic cones and electrical boxes.
How is that striking? We've had nanny cams with cameras hidden in teddy bears and other items for a really long time now. That's like saying you're shocked cops go undercover and do not ID themselves as cops.
Have most Americans never considered undercover operations? If you are investigating someone, you don't want them to know about it. Otherwise, you wouldn't be bothering with the undercover aspect. Now that the department has cool hidden cameras, of course they will be used for other purposes.
It's not like I'm out there hunting down police abuses, but hidden cameras are just something I would absolutely expect police to have. I didn't know they specifically had cameras hidden as traffic cones, but I'm also not shocked that they do. The shocking part, to me, is the shock of others, instead of others also going "of course they do".
I keep hearing this, but I spent a lot of time outdoors as a child. Hours every day, riding bikes with friends and running around with BB guns in the woods. We played a lot of video games and read books, too, but we spent plenty of time making tree forts and "sword fighting" with old pipes. Still needed glasses by the age of 7.
Obviously, genetics also plays a role in individual cases. We're talking about the population level here, and genetics doesn't explain the myopia epidemic, because population-level genetics hasn't changed rapidly. Time outdoors has.
I believe that more screen and reading time causes more childhood myopia. That seems hard to refute. I just do not believe, without serious peer-reviewed studies, that "a couple hours a day outside" is the magic cure. Out of my friend group, 3/4 of us needed glasses, and we definitely met the "couple hours a day outside" criterion. But we also loved our Street Fighter 2.
Thanks! The first study is interesting because it links inhibition of axial eye growth to high light levels, not necessarily to being strictly outdoors (UV) or to avoiding close-up activities like reading.
"High light levels" makes it sound like you could just replace 50 watt bulbs with 100 watt bulbs indoors to fix the problem. It's not like that at all. Sunlight is dramatically brighter than indoor lighting, far more than people realize. It would be impractical to replicate sunlight brightness indoors for many reasons: energy consumption, the cost and size of the fixtures, the immense heat it would produce, and generally because it would be uncomfortable to have light sources that bright in close proximity to you.
And again I am skeptical of claims that it is just one factor like brightness alone. There is other research about spectrum and distant objects in view that also shows effects. So even if you did use 100x brighter lights in your house there's no guarantee you would fix the problem.
I did nerdy kid stuff like read lots of books and use computer screens (uncommon back then, I'm 52)... but I also played outdoors for hours nearly every single day.
Ended up with vision at -6.00 by the time I was a young teenager (don't remember the age it started to slide that way but would estimate around 7-8 or so). Hasn't gotten any worse (or better) since then.
Seconded. I was working for a storage vendor when AWS was first ascendant. After we delivered hardware, it was typically 6-12 weeks to even get it powered up, and often a few weeks longer to complete deployment. This is with professional services, e.g. us handling the setup once we had wires to plug in. Similar lead time for ordering, racking, and provisioning standard servers.
The paperwork was massive, too. Order forms, expense justifications, conversations with Legal, invoices, etc. etc.
And when I say 6-12 weeks, I mean that was a standard time - there were outliers measured in months.
> "Desert wasteland" teems with life, just maybe not the kind that most people care about.
As somebody who lives in the Southwest US, thank you. There are so many people on HN who think the desert is just Martian dunes to be paved over like a Civ tile.
Just in the hills around me there are 30 species of plant, century-old trees, snakes, lizards, horny toads, bobcats, coyotes, hare, quail, multitudes of ants, the incredible red velvet mite, roadrunners (yes they’re real), flies, wasps, native bumblebees, mice, god it goes on and on. And the soil is encrusted, literally, with countless microbiota. In fact a single vehicle smashing it can damage that crust for years.
I know we need renewables, and yes, the Southwest is a great place for solar. But there is real ecological damage to some of the most pristine places left in America involved in developing unused land.
I take no position on the development which is under discussion here, or whether the cancellation was fair. I haven’t researched it, and probably never will. I’m just sick of the “it’s just desert, who cares, paint it with solar/oil fields/asphalt” attitude that’s everywhere.
That’s fair, it’s not ok to pretend desert has no life worth protecting.
However, there is a lot of it, and in terms of animals impacted per acre, it's got to be near the bottom. Thus, of all the places to locate big solar projects, huge expanses of flat land with low life density and lots of sun seem like they would minimize the harm.
As the article states, there's plenty of already disturbed land that can be used, instead of nature parks that harbor fragile ecosystems.
Also what people call "desert" isn't, like, the Sahara. There are many kinds of arid and semi-arid landscapes that people tend to underestimate because they aren't really habitable by humans or suitable for growing agricultural crops. The kinds of landscapes I'm referring to are highlighted on the Friends of Nevada Wilderness website:
Yes, and deserts are just as susceptible to the effects of climate change as everywhere else. You have to build solar somewhere or they’re all doomed too.
> I’m just sick of the “it’s just desert, who cares, paint it with solar/oil fields/asphalt” attitude that’s everywhere.
There is plentiful desert to expand upon and plenty of expansion coming due to ongoing climate impact. Desert fauna is not in danger, at almost any rate.
Does it even work great for nerds? I have seen a distressing amount of turning host key warnings off, or ignoring the warnings forever, or replacing a host key without any curiosity or investigation. Seems even worse in the cloud, where systems change a lot.
Even amongst nerds I've seen a significant amount of key pair re-use in my time, both 1:n::dev:servers and sometimes even 1:n::organization:devs. The transport security is moot when the user(s) discard all precautions and best practices on either end.
Even in such cases it's not really moot if a forward-secure scheme is used, and by now only old legacy implementations might not use one. So a key being shared between machines does not usually compromise the security of individual sessions, especially not retroactively.
I think it's pretty reasonable to turn off the "yes i would like to accept this key" on first connect. Just scream if it ever changes. I get that they're expecting me to compare it to something out of band but nobody does that.
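For what it's worth, stock OpenSSH has exactly this mode built in: `StrictHostKeyChecking accept-new` (available since OpenSSH 7.6) records an unknown key on first connect and then hard-fails if it ever changes. A config sketch:

```
# ~/.ssh/config
Host *
    # Auto-accept and record an unknown host key on first connect,
    # but refuse to connect if a previously recorded key ever changes
    StrictHostKeyChecking accept-new
```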
Depends on the server. A VM you just installed on your own machine? A lab machine on the proxmox cluster? Probably.
A new cloud VM running in another city? I wouldn't trust it by default, but you don't have a lot of choice in many corporate environments.
Funnily enough, there is a solution to this: SSH has a certificate authority system that will let your SSH clients trust the identity of a server if the hostkey is signed and matches the domain the SSH CA provided.
Like with HTTPS, this sort of works if you're deploying stuff internally. No need to check fingerprints or anything, as long as whatever automation configured your new VM signs the generated host key. Essentially, you get DV certificates for SSH except you can't easily automate them with Let's Encrypt/ACME because SSH doesn't have tooling like that.
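For reference, here is a sketch of what that signing step can look like with stock OpenSSH tooling. All file names, hostnames, and the validity period are made-up placeholders:

```shell
# Create the CA keypair (done once, kept by whoever provisions hosts)
ssh-keygen -t ed25519 -f ssh_ca -N '' -C 'internal-ssh-ca'

# Generate a host key, as your VM provisioning automation would
ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N ''

# Sign it: -h marks this as a *host* certificate, -n pins the hostnames
# it is valid for, -V limits the validity window
ssh-keygen -s ssh_ca -I 'vm-1234' -h -n 'vm-1234.internal.example' \
    -V '+52w' ssh_host_ed25519_key.pub

# Clients then trust any host certificate signed by this CA via a single
# known_hosts line, instead of one fingerprint per host:
echo "@cert-authority *.internal.example $(cat ssh_ca.pub)" > known_hosts.example
```

The server side then points `HostCertificate` in sshd_config at the generated `ssh_host_ed25519_key-cert.pub`, and clients connecting to any matching hostname get a verified identity with no first-use prompt.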
> I think it's pretty reasonable to turn off the "yes i would like to accept this key" on first connect.
Why is it reasonable to trust the key on first use? What if the first use itself has a man-in-the-middle that presents you the middle-man's key? Why should I trust it on first use? How do I tell if the key belongs to the real website or to a middle-man website?
What is the "real website"? You do not know this in the general case, it is just some rando on the internet, which is indistinguishable from a middle-man.
It is whoever owns the domain. The point is when your client talks to a domain, you know you are actually getting the domain, even if you don't know who owns it or if they're trustworthy.
Not at all. I might have my own DNS server to answer my DNS request or I may trust a specific DNS server (not controlled by my ISP). My DNS entry may be 100% correct and resolving to my actual bank website. But with TOFU, on first use, it is still possible for a router/ISP/middle-man in between to intercept the response being served from the real website and serve a MITM certificate to me.
Why do you say I should consider the middle-man answering the request to be the real website?
If I want to visit chase.com, I want to consider a server controlled by Chase (the company) to be the real website. PKI with a CA that attests the legal entity behind the website guarantees this. I agree with you that Let's Encrypt does not guarantee this. Is your comment scoped to Let's Encrypt only? If so, I agree.
But if we're talking about PKI in general, it seems like a terrible compromise to me to consider a middle-man to be the real website.
I guess we disagree on a very fundamental point. To me it seems ridiculous to accept that it is reasonable to consider the middle-man to be the real website. But to you it apparently makes sense. I am unable to understand why, though.
> You do not know this in the general case, it is just some rando on the internet, which is indistinguishable from a middle-man.
What I was talking about is this: I read some URL, maybe on HN. I do not know who sits behind it. The content can be malicious, but who knows whether it already was when the IP packet was assembled? In fact, what is the difference between the "MITM" and "the real one"? If "the real one" has never even received my request and the "MITM" doesn't forward it, then the "MITM" isn't a middle man but my connection partner. All the protocols, from TCP and DNS to IP and DHCP, assume that the connection partner is the one answering.
The contents can be deceptive or something, but this doesn't matter, because things on the internet in general come from people I don't know, should be taken with a grain of salt and don't matter in the real world. I don't care who is the middle man and who not, because both are the same to me.
The only problem here is if you establish credentials with one party and then accidentally send them to another party, e.g. a user/password login. This is solved by TOFU. Note that this issue is symmetric: I don't want data meant for "the real one" to go to the "MITM", but I also don't want credentials established with the "MITM" to be received by "the real one", because "the real one" is the third party in that case.
Consider Chase (the company) to provide food delivery. I visit the MITM's site, pay money to the MITM, the MITM serves me food. Where is the problem here outside of trademark issues? This problem only occurs when you have contact with Chase (the company) out-of-band.
What you are talking about is something totally different. You want to be assured that when you "visit" chase.com you are talking with Chase (the company). This is not assured in the general case. Even if there is nothing malicious going on, someone that is not Chase (the company) can be the one answering that. That is on top of issues like goog1e.com.
Yes, this is solved by
- establishing out-of-band that chase.com is Chase (the company)
- you inputting the correct string, not some look-a-like
- nothing being wrong with address resolution
- PKI with a CA that attests the legal entity
Note that you still need out-of-band knowledge.
What Let's Encrypt solves is that browsers don't like self-signed certificates. It would be also solved with self-signed certificates and TOFU.
> Is your comment scoped to Let's Encrypt only?
The article we are discussing this under criticizes Let's Encrypt. However, because the PKI hides the fact that you still need out-of-band data, we have accepted the current state. This is what the article and I criticize, and why the problem isn't only with Let's Encrypt.
In the now deleted comment you wrote:
> Yes, this is one point where I agree with you. But no bank really use Let's Encrypt for certificates. Banks do use certification authority where the legal entity is validated.
This essentially means that Let's Encrypt shouldn't be treated like a trustworthy certificate, which I actually think I would agree with. But I wouldn't propose this, because that would mean we are back to square one before Let's Encrypt, and I couldn't host websites that common people would visit.
I think this problem can't be solved, before it is accepted that it shouldn't be solved by private companies, but by jurisdictions. It is essentially the age-old identity problem.
The platform engineering team at my big corp simply disabled host key checking in the cloud-tool Python script they wrote for all of us to use to log into our bastion hosts.
Wow, that is a level of DGAF I haven't encountered before in production. No wonder data breaches are so common with that kind of YOLO security practice.
> Using modern techniques like higher order methods, scatter-gather arrays (similar to map-reduce), passing by value via copy-on-write, etc, code can be written that works like piping data between unix executables. Everything becomes a spreadsheet basically.
I have built a decent amount of multithreaded and distributed systems. I agree in principle with everything you wrote in that paragraph. In practice, I find such techniques add a lot of unstable overhead in the processing phase as memory is copied, and again while the results are merged, conflicts resolved, etc. They also lock you into a certain kind of architecture where global state is periodically synchronized. So IMO the performance of these things is highly workload-dependent; for some, it is barely an improvement over serialized, imperative code, and adds a lot of cognitive overhead. For others, it is the obvious and correct choice, and pays huge dividends.
Mostly I find the benefit to be organizational, by providing clear interfaces between system components, which is handy for things developed by teams. But as you said, it requires developers to understand the theory of operation, and it is not junior stuff.
Completely agree that we could use better language & compiler support for such things.
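To make the shape concrete, here is a minimal Python sketch of the scatter/gather pattern being discussed. The function names and the toy WEIGHTS table are invented for illustration; under fork-based multiprocessing (the Linux default), the read-only WEIGHTS table is inherited by workers copy-on-write rather than copied up front:

```python
from multiprocessing import Pool
from functools import reduce

# Read-only lookup table shared with workers. With a fork start method,
# pages are shared copy-on-write: nothing is duplicated unless written to.
WEIGHTS = {"a": 1, "b": 2, "c": 3}

def score_chunk(chunk):
    # Scatter step: each worker scores its slice of the input independently,
    # like one process in a unix pipeline working on its share of the stream.
    return {k: WEIGHTS.get(k, 0) * v for k, v in chunk}

def merge(left, right):
    # Gather step: combine partial results, resolving key conflicts by summing.
    out = dict(left)
    for k, v in right.items():
        out[k] = out.get(k, 0) + v
    return out

def scatter_gather(items, nworkers=4):
    chunks = [items[i::nworkers] for i in range(nworkers)]
    with Pool(nworkers) as pool:
        partials = pool.map(score_chunk, chunks)   # scatter
    return reduce(merge, partials, {})             # gather

result = scatter_gather([("a", 10), ("b", 5), ("a", 1), ("c", 2)])
# → {'a': 11, 'b': 10, 'c': 6}
```

The merge step is exactly where the overhead mentioned above shows up: every partial dict gets copied and reconciled, which is cheap for a toy table but can dominate for large, conflict-heavy workloads.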
Almost every branch of every story in Choose Your Own Adventure books ended with bold, centered text that said:
THE END
But there was one story I vaguely recall where if you made the "wrong" choice, you fell into a bottomless pit (the books were always in the second person) and you kept falling and falling, forever, and the text said:
THERE IS NO END
I still remember the chills I got reading this. I wonder how kids these days get their introduction to existential horror?