Hacker News | CivBase's comments

> Use an Apple TV for the "smart" features.

Use a PC for "smart" features. Used PC hardware is cheap and plenty effective. And the Logitech K400 is better than any TV remote.

No spying (unless you run Windows). Easy ad blocking. No reliance on platform-specific app support. Native support for multiple simultaneous content feeds (windows) - even from different services.

And it's not like it's complicated. My parents are as tech-illiterate as they come and they've been happily using an HTPC setup for well over a decade. Anyone who can operate a "Smart TV" can certainly use a web browser.


Of course that's a viable option, but it likely uses far more electricity in a year, and unless you're sailing the high seas, you're unlikely to consistently get 4K HDR from streaming services.

>but likely uses far more electricity in a year

Unlikely. An Apple TV is itself a "PC"; it's not much different.

An actual PC doesn't cost much in electricity over a year either (say $30/year running headless, watching several hours a day and sleeping the rest of the time). Make it an ARM machine and it will be quite a bit less.
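(Back-of-envelope with assumed numbers: a ~50 W box running 5 hours a day is about 50 W x 5 h x 365 ≈ 91 kWh a year, which works out to roughly $15-30 at typical residential rates.)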


I have the same setup and have never looked back. My kids can control the TV now via the browser instead of asking me to fiddle with a smartphone, and I can easily block e.g. YouTube via the hosts file. The ability to have multiple streaming services open in different tabs and reading online reviews all on the same screen is also vastly superior to any UX offered by e.g. Chromecast or similar devices.
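For reference, the hosts-file trick is just a couple of entries pointing the domains at a null address (the domains here are illustrative; YouTube serves from several):

    # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
    0.0.0.0 youtube.com
    0.0.0.0 www.youtube.com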

I also have a 10 year old laptop with no TPM 2.0 module. It was pretty high end for the time too (Dell XPS). I haven't needed it for much in recent years, but it still runs perfectly fine and I'm happy to continue using it if the need arises again. Sounds like I'll have to switch that over to Linux like I have all my other PCs.

The author just wants Microsoft to stop harassing him. He's not asking for handouts. He's not even asking to be allowed to bypass the hardware requirements for Windows 11. He just wants to stop getting nagged by Microsoft to upgrade.

He could buy new hardware and run Windows 11. But this pattern will only continue from Microsoft. The only way out is to run a non-Microsoft OS (assuming he can).


The important point here is that data collection and telemetry are worthless and were never about improving the experience for you as a user. The coders behind the update nag had every opportunity to do a hardware check, but as I say, big data is never used to improve anything for end users.

You're not getting what I'm saying. Hassling him is the point. They want him to use Windows 11 or go away. He's a security update expense because he's too cheap to upgrade his laptop or run Linux on it.

I don't think you understand the situation. He's not getting security updates. He's not an expense. Microsoft is incurring no costs by allowing him to continue using his existing operating system without updates.

Microsoft doesn't want him to go away. They want him to buy their new product.


I switched back to Firefox around the Quantum release and have been very happy with it since. I certainly have some complaints, but it's night and day compared to what Google wants me to deal with.

Of course it is. But that doesn't make my above comment wrong. Not to mention, many stayed silent about their actions for so long. Now it looks like the entire community has started speaking out against them. The ball is now in Mozilla's court.

Not to mention there is more than just the technical aspect to Firefox and its community. A lot of people have invested a ton of time in it.

Mozilla warrants all the flak they are getting. I am just saying they can't virtue-signal their way through this. It won't work.


I think the author was close to something here but messed up the landing.

To me the difference between something like AI translation and an LLM is that the former is a useful feature and the latter is an annoyance. I want to be able to translate text across languages in my web browser. I don't want a chat bot for my web browser. I don't want a virtual secretary - and even if I did, I wouldn't want it limited to the confines of my web browser.

It's not about whether there is machine learning, LLMs, or any kind of "AI" involved. It's about whether the feature is actually useful. I'm sick of AI non-features getting shoved in my face, begging for my attention.


This article has a weird progression.

It starts with the origins of TNR. Then it basically says it's a decent font with no significant problems. Then it talks about how it's popular because it's the default.

Then in the last paragraph it takes a hard stance that you should not use TNR unless required. It even implores the reader with a bold "please stop". It makes no arguments to support this stance and offers no alternatives.


That's because it's not an article, it's a section of Butterick's book. (He also has a book at https://practicaltypography.com/ that isn't targeted at lawyers, and I think a lot of the content overlaps.)

I agree that he's a bit too mean to mainstream fonts, though.


The problem with some of the less mainstream fonts is that they are not always readily available/transferable. I don't tend to have issues with TNR going funny in another format. As for Helvetica, I don't think Microsoft supports it; they created Arial instead, which is an inferior version of it.

Here's what it says about Times New Roman:

> Objectively, there’s nothing wrong with Times New Roman. It was designed for a newspaper, so it’s a bit narrower than most text fonts—especially the bold style. (Newspapers prefer narrow fonts because they fit more text per line.) The italic is mediocre. But those aren’t fatal flaws. Times New Roman is a workhorse font that’s been successful for a reason.

It says that there are problems. They're just not fatal.

> It even implores the reader with a bold "please stop". It makes no arguments to support this stance and offers no alternatives.

It says that there are plenty of alternatives (it specifically mentions Helvetica) that are better than Times New Roman. The argument is that Times New Roman is okay, but that it has flaws, and that there are easily available fonts that are superior. If someone is devoted enough to fonts to write a blog about them, then the existence of superior alternatives is enough of a reason to not use a font.


The author provides a single criticism ("The italic is mediocre"), does not elaborate, then immediately hedges their critique.

Helvetica is used as an example of a font which garners more "affection" in contrast to TNR, but is never praised by the author or recommended as an alternative - at least not in the linked passage.


The author also criticizes the narrowness of the font (and particularly of the bold style). They're not trying to argue that Times New Roman is terrible - just that it's substandard.

Helvetica is not usually in the running for use by lawyers.

As a body copy font, sans serif is generally seen as "friendlier" and more casual--which is one reason you see more of it than you used to in marketing copy and many other uses. Friendly and casual are generally not things I'm looking for in legal documents.

[flagged]


Did you wake up on the wrong side of the bed this morning?

This is not a standalone article but a section from Butterick's book, "Typography for Lawyers", which is hosted in full on the website. The book is an opinionated style manual, and many alternatives are described in nearby sections.

It does seem like it is trying to force a trend without giving one solid reason.

Extension (adblock) support on mobile is worth more to me than anything you just listed off.

It'd be kinda funny if they asked each AI every day and updated the clocks/rationale accordingly.


They do (with only one AI though): "How does it work? Every night, a script runs that asks Gemini to search the web for the latest AI news—both hype and criticism. Based on the sentiment and economic indicators it finds, it updates the predicted "burst date" and explains its reasoning."


The LLM doesn't have a concept of time and it doesn't incorporate new data unless it's put into the context, so I don't see the point of this suggestion.


The whole thing is a joke anyways. The numbers are meaningless. The inconsistency in output from day to day highlights that even more.


WSL1 felt like a useful compatibility layer for running some Linux applications in Windows. It had plenty of warts, but it quickly became my preferred command shell for Windows.

WSL2 is more capable, but it's not Windows anymore. I might as well run a proper Linux VM or dual boot. Better yet, I'd rather run a Windows VM in a bare metal Linux OS. Why even bother with WSL2? What's the value add?


Vs WSL1:

GPU access. Actual graphics use is so-so, but it's essential for doing CUDA/AI stuff.

Faster filesystem access on the Linux side (for Linux compiles etc). Ironically, accessing the Windows filesystem is slower than in WSL1.

Better Linux compatibility.

Vs a Linux VM:

GPU access!

Easier testing for localhost stuff, Linux ports get autoforwarded to Windows (if your test http server is running in WSL2 at port 8080, you can browse to http://localhost:8080 in your Windows browser)

Easy Windows filesystem interaction. Windows local drives show up in /mnt automatically.

Mix Windows commands with Linux commands. I use this for example to pipe strings.exe, which is UTF-16 aware, with Linux text utils.

I think WSL2 tends to be better at sharing memory (releasing unused memory) with the rest of the system than a dedicated VM.

You can mimic some of this stuff to a degree with a VM, but the built-in convenience factor can't be overlooked, and if you are doing CUDA stuff there isn't a good alternative that I am aware of. You could do PCI passthrough using datacenter-class GPUs and Windows Server, but $$$.
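To make the interop concrete, here's a rough sketch of the kind of thing you can type in a WSL2 shell (file and path names are illustrative):

    ls /mnt/c/Users                          # Windows drives are auto-mounted under /mnt
    explorer.exe .                           # open the current Linux directory in Windows Explorer
    strings.exe some.dll | grep -i version   # pipe a UTF-16-aware Windows tool into Linux text utils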


> I might as well run a proper Linux VM or dual boot.

Obviously you don't do the thing you're writing about. A "proper" Linux VM would incur more work for you and would be less useful. Dual boot would remove your ability to use the computer for activities that need a Windows OS. Running a Windows VM on Linux would take you down a rabbit hole of pain and annoyances, unless your use case for Windows is extremely limited.


I dual boot, but I avoid Windows as much as possible.


I concur. This was my main experience with WSL1 vs. WSL2.

If I'm running Windows, it means that the files and projects that I care about are on the Windows file system. And they need to be there, because my IDE and other GUI apps need files to be on a real file system to work optimally. (A network share to a WSL2 file system would not let the IDE watch for changes, for instance.)

WSL1 was a great way to get a UNIX-style command line, with git, bash, latex etc., for the Windows file system. WSL2 was just too slow for this purpose; commands like "git status" would take multiple seconds on a large codebase.

Now I've switched back to macOS, and the proper UNIX terminal is a great advantage.


Full access to the Windows filesystem.

You can call Windows programs from Linux (e.g. explorer.exe .)


What is a table other than an array of structs?


It’s not that you can’t model data that way (or indeed with structs of arrays), it’s just that the user experience starts to suck. You might want a dataset bigger than RAM, or that you can transparently back by the filesystem, RAM or VRAM. You might want to efficiently index and query the data. You might want to dynamically join and project the data with other arrays of structs. You might want to know when you’re multiplying data of the wrong shapes together. You might want really excellent reflection support. All of this is obviously possible in current languages because that’s where it happens, but it could definitely be easier and feel more of a first class citizen.


Well it could be a struct of arrays.

Nitpicking aside, a nice library for doing “table stuff” without “the whole ass big table framework” would be nice.

It’s not hard to roll this stuff by hand, but again, a nicer way wouldn’t be bad.


The difference is semantics.

What is a paragraph but an array of sentences? What is a sentence but an array of words? What's a word but an array of letters? You can do this all the way down. Eventually you need to assign meaning to things, and when you do, it helps to know what the thing actually is, specifically, because an array of structs can be many things that aren't a table.


I would argue that's about how the data is stored. What I'm trying to express is the idea of the programming language itself supporting high level tabular abstractions/transformations such as grouping, aggregation, joins and so on.


Implementing all of those things is an order of magnitude more complex than any other first-class primitive datatype in most languages, and there's no obvious "one right way" to do it that would fit everyone's use cases. Libraries and standalone databases seem like the way to do it, and that's what we do now.


Sounds a lot like LINQ in .NET (which is usually compatible with ORMs actually querying tables).


Map/filter/reduce are idiomatic Java/Kotlin/Scala.

SELECT thing1, thing2 FROM things WHERE thing2 != 2;

val thingMap = things.filter { it.thing2 != 2 }.map { it.thing1 to it.thing2 }

Then you've got distinct(), sorting methods, take/drop for limits, count/sumOf/average/minOf/maxOf.

There are set operations, so you can do unions and differences, check for presence, etc.

Joins are the hard part, but map() and some lambda work can pull it off.
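For what it's worth, a hand-rolled inner join isn't much code either. A rough sketch with made-up types:

    data class Customer(val id: Int, val name: String)
    data class Order(val id: Int, val customerId: Int, val total: Double)

    // Index the "right" table by key, then look up each row of the "left" table.
    fun innerJoin(orders: List<Order>, customers: List<Customer>): List<Pair<Order, Customer>> {
        val byId = customers.associateBy { it.id }
        return orders.mapNotNull { o -> byId[o.customerId]?.let { c -> o to c } }
    }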


Yeah, that's LINQ+EF. People have hated ORMs for so long (with some justification) that perhaps they've forgotten what the use case is.

(and yes there's special language support for LINQ so it counts as "part of the language" rather than "a library")


Ah, that makes more sense. Thanks for the clarification.

