nwellinghoff's comments | Hacker News

You could restrict the ssh port by ip as well.
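One way to do that, sketched here with iptables (the CIDR 203.0.113.0/24 and port 22 are placeholders; adjust to your own trusted range and SSH port):

```shell
# Accept SSH only from a trusted range, drop everything else.
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```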

He is absolutely right. The soap opera effect totally ruins the look of most movies. I still use a good old 1080p plasma on its default settings. It always looks good.

Drives me insane when people say they can't tell the difference while watching with motion smoothing on. I feel for the filmmakers.

The soap opera effect drives me nuts. I just about can't watch something when it's on. It makes a multimillion dollar movie look like it was slapped together in an afternoon.

It's funny, people complain about this but I actually like smooth panning scenes over juddery ones that give me a headache trying to watch them. I go so far as to use software on my computer called SVP 4 that does this but in a way better GPU accelerated implementation. I'm never sure why people think smoothness means cheapness except that they were conditioned to it.

I watched the most recent Avatar and it was some HDR variant that had this effect turned up. It definitely dampens the experience. There’s something about that slightly fuzzed movement that just makes things on screen look better.

From what I heard, the action scenes are shot in 48 fps and the others in 24 fps, or something along those lines. You might be talking about that?

My parents’ new TV adds a Snapchat like filter to everything. Made Glenn Close look young instead of the old woman she’s supposed to be in Knives Out.

Turning it off was shocking. So much better. And it was buried several levels deep in a weirdly named setting.


Nope, nope, I can't watch 24-30hz without my eyes bleeding during camera pans.

The “non-exclusive” thing may come back to bite them. If another big player comes in to license the tech and gets “different” tech than Nvidia, it opens up lawsuits. Also this seems like it’s just a bet on time. The technology the head engineer invented will eventually be replicated. But I guess that will take a while, and the margin money machine will print Bs while the dust settles.

I'm pretty sure Nvidia overpaid so that Groq can charge the same absurd price to the second customer, to whom the company's IP is worth maybe a billion or two.

As your parents age you should convince them to transfer their assets into a trust where they still maintain control but withdrawals etc. can be optionally approved by a spouse or other family member. The trust has many other benefits but is especially good against fraud, as it can disassociate the holder's identity from the assets and impose specific conditions for withdrawal. It can also provide a clean transfer of ownership in the event of a death. I am sorry this happened to you; it is becoming more common in the US too. And all of these “companies” seem to establish bank accounts and addresses in Delaware…

I said this in a previous post and was shot down hard. I think you are right. Every time I look at an IPv6 address my brain goes “fack this”.


IPv4 isn't perfect, but it was designed to solve a specific set of problems.

IPv6 was designed by political process: go around the room and solve each engineer's pet peeve in order to rally enough support to move the proposal forward. And as a bunch of computer people realized how hard politics was, they swore never to do it again and made the address size so laughably large that it was "solved" once and for all.

I firmly believe that if they had adopted any other strategy where addresses could be meaningfully understood and worked with by the least skilled network operators, we would have had "IPv6" adoption 10 years ago.

My personal preference would have been to open up class E space (240-255.*) and claw back the 6 /8s Amazon is hoarding, be smarter about allocations going forward, and make fees logarithmic based on the number of addresses you hold.
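For scale, the spaces mentioned above are easy to size up with Python's stdlib `ipaddress` module (the "six /8s" figure is taken from the comment, not independently verified):

```python
from ipaddress import IPv4Network

# Class E (240.0.0.0/4, reserved) is a 2^28 chunk of the v4 space.
print(IPv4Network("240.0.0.0/4").num_addresses)     # 268435456 (~268M)
# Six /8s, each 2^24 addresses (10.0.0.0/8 used only to size a /8).
print(6 * IPv4Network("10.0.0.0/8").num_addresses)  # 100663296 (~100M)
```

Together that is roughly 370M addresses, less than a tenth of the existing 2^32 space.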


> IPv4 isn't perfect, but it was designed to solve a specific set of problems.

IPv4 was not designed as such, but as an academic exercise. It was an experiment. An experiment that "escaped the lab". This is per Vint Cerf:

* https://www.pcmag.com/news/north-america-exhausts-ipv4-addre...

And if you think there weren't politics in IPv4, you're dead wrong:

* https://spectrum.ieee.org/vint-cerf-mistakes

> IPv6 was designed by political process.

Only if by "political process" you mean a bunch of people got together (physically and virtually) and debated the options and chose what they thought was best. The criteria for choosing IPng were documented:

* https://datatracker.ietf.org/doc/html/rfc1726

There were a number of proposals, and three finalists, with SIPP being chosen:

* https://datatracker.ietf.org/doc/html/rfc1752

> I firmly believe that if they had adopted any other strategy where addresses could be meaningfully understood and worked with by the least skilled network operators, we would have had "IPv6" adoption 10 years ago.

The primary reason for IPng was >32 bits of address space. The only way to make them shorter is to have fewer bits, which completely defeats the purpose of the endeavour.

There was no way to move from 32-bits to >32-bits without every network stack of every device element (host, gateway, firewall, application, etc) getting new code. Anything that changed the type and size of sockaddr->sa_family (plus things like new DNS resource record types: A is 32-bit only; see addrinfo->ai_family) would require new code.
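The size mismatch is visible even from userspace; a quick sketch with Python's stdlib `socket` module:

```python
import socket

# A v4 address is 4 bytes on the wire; a v6 address is 16. Nothing sized
# for the former can hold the latter, which is why every struct, DNS
# record type (A vs AAAA), and API needed new code.
v4 = socket.inet_pton(socket.AF_INET, "192.0.2.1")
v6 = socket.inet_pton(socket.AF_INET6, "2001:db8::1")
print(len(v4), len(v6))  # 4 16
```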


This is a lot of basically sharpshooting, but I will address your last point:

> There was no way to move from 32-bits to >32-bits without every network stack of every device element (host, gateway, firewall, application, etc) getting new code. Anything that changed the type and size of sockaddr->sa_family (plus things like new DNS resource record types: A is 32-bit only; see addrinfo->ai_family) would require new code.

That is simply not true. We had one bit left (the reserved/"evil" bit) in IPv4 headers that could have been used to flag that the first N bytes of the payload were an additional IPv4.1 header indicating additional routing information. Packets would continue to transit existing networks and "4.1" capable boxes at edges could read the additional information to make further routing decisions inside of a network. It would have effectively used IPv4 as the core transport network and each connected network (think ASN) having a handful of routed /32s.

Overlay networks are widely deployed and have very minor technical issues.

But that would have only addressed the numbering exhaustion issues. Engineers often get caught in the "well if I am changing this code anyway" trap.
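The "4.1" scheme described above never existed, but as a rough sketch (the shim format is entirely invented here for illustration), the payload-prefixing part could be as simple as:

```python
import struct

# Hypothetical "IPv4.1" shim, invented purely for illustration: a set
# reserved bit in the IPv4 flags field would signal that the first 4
# bytes of the payload carry an extended destination for edge routers.
def wrap_41(extended_dst: str, payload: bytes) -> bytes:
    octets = [int(x) for x in extended_dst.split(".")]
    return struct.pack("!4B", *octets) + payload

shim = wrap_41("10.0.0.1", b"data")
print(shim.hex())  # 0a00000164617461
```

Legacy routers would forward on the outer header alone; only "4.1"-aware edge boxes would parse the prepended bytes.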


An explicit goal of IPv6, considered as important as the address expansion, was the simplification of the packet header: fewer fields, correctly aligned (unlike in the IPv4 header), in order to enable faster hardware routing.

The scheme you describe fails to achieve this goal.


I am glad you brought this up, that is another big issue with IPv6. A lot of the problems it was trying to solve literally don't exist anymore.

Header processing and alignment were an issue in the 90s when routers repurposed generic components. Now we have modern custom ASICs that can handle IPv4 inside of a GRE tunnel on a VLAN over MPLS at line rate. I have switches in my house that do 780 Gbps.


It is irrelevant what we can do now.

At the time when it was designed, IPv6 was well designed, much better than IPv4, which was normal after all the experience accumulated while using IPv4 for many years.

The designers of IPv6 have made only one mistake, but it was a huge mistake. The IPv4 address space should have been included in the IPv6 space, allowing transparent intercommunication between any IP addresses, regardless whether they were old IPv4 addresses or new IPv6 addresses.

This is the mistake that has made the transition to IPv6 so slow.


> The IPv4 address space should have been included in the IPv6 space […]

See IPv4-mapped ("IPv4-compatible") IPv6 addresses from RFC 1884 § 2.4.4 (from 1995) and follow-on RFCs:

* https://datatracker.ietf.org/doc/html/rfc1884

* https://en.wikipedia.org/wiki/IPv6#IPv4-mapped_IPv6_addresse...
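The mapping in question is visible directly in Python's stdlib `ipaddress` module:

```python
from ipaddress import IPv6Address

# ::ffff:0:0/96 holds every IPv4 address as an IPv4-mapped IPv6 address.
a = IPv6Address("::ffff:192.0.2.1")
print(a.ipv4_mapped)  # 192.0.2.1
```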


> The IPv4 address space should have been included in the IPv6 space, allowing transparent intercommunication between any IP addresses, regardless whether they were old IPv4 addresses or new IPv6 addresses.

How would you have implemented it that is different from the NAT64 that actually exists, including shoving all IPv4 addresses into 64:ff9b::/96?
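The NAT64 embedding mentioned here is purely arithmetic; a sketch with the stdlib (192.0.2.1 is just an example address):

```python
from ipaddress import IPv4Address, IPv6Address

def nat64(v4: str) -> IPv6Address:
    # Embed an IPv4 address in the well-known NAT64 prefix 64:ff9b::/96
    # (RFC 6052): the v4 address occupies the low 32 bits.
    return IPv6Address(int(IPv6Address("64:ff9b::")) + int(IPv4Address(v4)))

print(nat64("192.0.2.1"))  # 64:ff9b::c000:201
```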


Ideally, 464XLAT should have been there from the beginning and its host part (CLAT) should have been a mandatory part of IP stack.


> That is simply not true. We had one bit left (the reserved/"evil" bit) in IPv4 headers […]

Great, there's an extra bit in the IPv4 packet header.

I was talking about the data structures in operating systems: are there any extra bits in the sockaddr structure to signal things to applications? If not, an entirely new struct needs to be deployed.

And that doesn't even get into having to deploy new DNS code everywhere.


But v6 did do what you're describing here?

They didn't use the reserved bit, because there's a field that's already meant for this purpose: the next protocol field. Set that to 0x29 and it indicates that the first bytes of the payload contain a v6 address. Every v4 address has a /48 of v6 space tunnelled to it using this mechanism, and any two v4 addresses can talk v6 between them (including to the entire networks behind those addresses) via it.

If doing basically exactly what you suggested isn't enough to stop you from complaining about v6's designers, how could they possibly have done any better?
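The mechanism described above is 6to4 (RFC 3056, IP protocol 41): each public v4 address deterministically owns a /48 under 2002::/16. A sketch of the derivation:

```python
from ipaddress import IPv4Address, IPv6Network

def six_to_four(v4: str) -> IPv6Network:
    # 6to4: 2002:AABB:CCDD::/48, where AA.BB.CC.DD is the v4 address.
    # 0x2002 fills the top 16 bits; the v4 address fills the next 32.
    addr = (0x2002 << 112) | (int(IPv4Address(v4)) << 80)
    return IPv6Network((addr, 48))

print(six_to_four("192.0.2.1"))  # 2002:c000:201::/48
```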


Imo they should have just clawed 1 or 2 bits out of the IPv4 header for additional routing and called it good enough.


This would require new software and new ASICs on all hosts and routers and wouldn't be compatible with the old system. If you're going to cause all those things, might as well add 96 new bits instead of just 2 new bits, so you won't have the same problem again soon.


IPv6 is literally just IPv4 + longer addresses + really minor tweaks (like no checksum) + things you don't have to use (like SLAAC). Is that not what you wanted? What did you want?

And what's wrong with a newer version of a thing solving all the problems people had with it...?

There are more people than IPv4 addresses, so the pigeonhole principle says you can't give every person an IPv4 address, never mind when you add servers as well. Expanding the address space by 6% does absolutely nothing to solve anything, and I'm confused about why you think it would.
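The pigeonhole point in rough numbers (the world population here is an assumed round figure, not sourced):

```python
# 2^32 addresses vs. roughly 8 billion people: the space is exhausted
# before every person gets one address, let alone servers and devices.
world_pop = 8_000_000_000
print(2**32)              # 4294967296
print(2**32 < world_pop)  # True
```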


> Every time I look at a [long] ipv6 address my brain goes “fack this”.

I do get that, but I also get "there are so many, I could have all I wanted... or I could if any of our fiber ISPs would support it, that is".


It finally clicked when I worked out it was 2^64 subnets. You have a common prefix of your /48, which isn't much longer than an IPv4 address, especially as it seems everything is 2001::/16, which means you basically have to remember a 32-bit network prefix just like 12.45.67.8/32.

That becomes 2001:0c2d:4308::/48 instead

After that you just need to remember the subnet number and the host number. If you remember 12.45.67.8 maps to 192.168.13.7 you might have

2001:0c2d:4308:13::7

So subnet “13” and host “7”

It’s not much different to remembering 12.45.67.8 -> 192.168.13.7
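The mnemonic above can be checked mechanically; a sketch using the comment's example prefix with the stdlib `ipaddress` module:

```python
from ipaddress import IPv6Address, IPv6Network

site = IPv6Network("2001:c2d:4308::/48")  # the example /48 from above
subnet, host = 0x13, 0x7                  # subnet "13", host "7"
# The subnet ID occupies the fourth hextet (bits 64-79 from the bottom).
addr = IPv6Address(int(site.network_address) | (subnet << 64) | host)
print(addr)  # 2001:c2d:4308:13::7
```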


> especially as it seems everything is 2001::/16

I was sort of expecting that this week.

I had to transcribe a v6 addy for a WAN-to-WAN test (a few miles apart).

That's when I noticed that Charter (Spectrum) had issued

   2603:: for one WAN and 
   2602:: for the other WAN.
ref: https://bgp.he.net/AS33363#_prefixes6


The current global unicast space is actually limited to just 2000::/3.

https://www.iana.org/assignments/ipv6-address-space/ipv6-add...
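Easy to verify with the stdlib (fe80::1 added as a counterexample):

```python
from ipaddress import IPv6Address, IPv6Network

gua = IPv6Network("2000::/3")  # IANA's current global unicast range
print(IPv6Address("2603::1") in gua)  # True  (2602::/2603:: both fit)
print(IPv6Address("fe80::1") in gua)  # False (link-local lives elsewhere)
```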


pfSense firewall. There is a week-long learning curve and it's best to put it on dedicated hardware.


What a fucking joke. They are going to charge me for running a script I wrote on MY server that is merely launched by their server, which I am already paying an outrageous amount for just to have a private repository. By the minute!!!! It never ends.


Nice write-up. It would be great if the authors could follow up with a detailed technical walk-through of how to use the various tools to figure out what an extension is really doing.

Could one just feed the extension and a good prompt to Claude to do this? Seems like automation CAN sniff this kind of stuff out pretty easily.


Too bad AWS does not support any of these other vector extensions in managed RDS.


You single-handedly built all of Cloudflare Workers? Impressive; most of us would have required a team, or a "we".

