Hacker News | cdavid's comments

I can believe it is deliberate at the top; I've certainly seen it first-hand in several orgs I've worked at.

My sense is that unless actively managed against, any org big enough to have a finance department and financial planning will work under the assumption of fungibility.


You had to accept some license terms before you could download the VST SDK. When Linux audio started to get "serious" 20 years ago, this was a commonly discussed pain point.

Concretely, it made distributing OSS VST plugins a pain, especially on Linux, where distributions generally want to build packages from source.


Note that this was the VST2 era. VST3 was dual-licensed (commercial or GPLv3), which was an improvement, but only slightly: it excluded open-source software released under GPLv2, and MIT/BSD/whatever-licensed software couldn't use it either (without effectively turning the whole program into GPL-licensed software).


I agree the big deal is tool calling.

But MCP has at least two advantages over CLI tools:

- Tool calling with an LLM combined with structured output is easier to implement as an MCP server than as a CLI for complex interactions, IMO.

- It is more natural to hold state between tool calls in an MCP server than with a CLI.
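To illustrate the second point, here is a minimal sketch in plain Python (not the actual MCP SDK; the class and tool names are made up): because the server process stays alive between tool calls, state accumulates naturally, whereas a one-shot CLI would have to serialize it somewhere between invocations.

```python
class SynthToolServer:
    """Toy stand-in for an MCP server holding a session between tool calls."""

    def __init__(self):
        self.patch = {}  # in-memory state that survives across tool calls

    def set_param(self, name: str, value: float) -> dict:
        """Tool 1: mutate the current patch."""
        self.patch[name] = value
        return {"ok": True, "patch": dict(self.patch)}

    def render(self) -> dict:
        """Tool 2: read the state accumulated by earlier calls."""
        return {"params": len(self.patch), "patch": dict(self.patch)}


server = SynthToolServer()
server.set_param("cutoff", 0.7)      # first tool call
server.set_param("resonance", 0.3)   # second tool call
result = server.render()             # sees both earlier calls
print(result["params"])  # 2
```

With a CLI, each call is a fresh process, so the equivalent flow needs an explicit state file or environment plumbing.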

When I read the OP, I initially wondered if I had indeed bought into the hype. But then I realized that the small demo I built recently to learn about MCP (https://github.com/cournape/text2synth) would have been more difficult to build as a CLI. And I think the demo is representative of neat uses of MCP.


Since the OP is about the EU, it is important to keep in mind that costs per MW are much lower in the EU than in the US (or the UK).

E.g. according to https://www.samdumitriu.com/p/infrastructure-costs-nuclear-e..., the UK/US are at ~£10 million per MW, France ~£4.5 million, and China/Korea/Japan around £2.5 million.

I don't know much about nuclear plants, but I doubt UK plants are much safer in practice than French ones, or even Korean/Japanese ones. I suspect most of the cost difference across countries at similar development levels comes down to regulation. And it is a nice example that sometimes the EU can be better than the US at regulation :) (I don't know how much nuclear regulation is EU- vs nation-based, though).


Maybe I am too mathematically inclined, but this was not easy to understand.

The ELI5 explanation of floating point: it gives you approximately the same accuracy (in terms of bits) independently of scale. Whether your number is much below 1, around 1, or much above 1, you can expect as much precision in the leading bits.

This is the key property, but internalizing it is difficult.
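You can see the property directly (my own illustration, using Python's math.ulp): the spacing between adjacent float64 values, relative to the value itself, stays around 2^-52 whether the number is tiny, near 1, or huge.

```python
import math

# The relative spacing between adjacent float64 values stays ~2**-52
# regardless of magnitude: that is "same accuracy at every scale".
for x in (0.001, 1.0, 1e6):
    rel = math.ulp(x) / x  # ulp = gap to the next representable float
    print(f"{x:>10}: relative spacing {rel:.3e}")
```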


I like "between each power of 2, there are the same number of numbers."

So between 1/2 and 1 there are the same number of numbers as between 1024 and 2048. If you have 1024 numbers between each power of 2, then the spacing is 1/2048 in the first case and 1 in the second.

In reality there are usually:

bfloat16: 128 numbers between each power of 2

float16: 1024 numbers between each power of 2

float32: 2^23 numbers (~8 million) between each power of 2

float64: 2^52 numbers (~4.5 quadrillion) between each power of 2
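You can verify the float64 count directly (my own illustration): adjacent doubles have adjacent bit patterns, so subtracting bit patterns counts the representable values in an interval.

```python
import struct

def bits(x: float) -> int:
    # Reinterpret a float64 as its 64-bit integer bit pattern.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# Number of float64 values in [1, 2) -- and the count is identical
# in [1024, 2048), just as described above.
print(bits(2.0) - bits(1.0) == 2**52)        # True
print(bits(2048.0) - bits(1024.0) == 2**52)  # True
```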


Or, say, you can write any number you want, but it has to be a whole number from 0 to 9, and you can only make the number bigger or smaller by moving the decimal point, and you can only move the decimal point up to 10 spaces. And you can add or remove a negative sign in front.


I am surprised to see these discussions without a single mention of Roon. As a music lover, Roon is software I've happily paid hundreds of dollars for.

While not OSS, Roon 1) can run on Linux, 2) supports large local libraries (I have > 2k albums in FLAC, and it supports much more), 3) has Roon ARC, which lets you listen from your phone anywhere, and 4) has a very good system for linking metadata and recommendations within your library.

The metadata support is truly wonderful: you can easily browse your music like Wikipedia, find music by composer or performer, discover related musicians, etc. I strongly recommend that people serious about music try it out.

I happily replaced Spotify with it a few years ago, and will never go back.


Roon seems great, but the pricing is really steep in my opinion... It costs practically as much as a streaming service, yet you still need to buy your own music.

At least they have a lifetime purchase option, though it costs $830!


It is not cheap, but it is clearly made by people who care about music. In these days when "slop" is so common, for people who can afford it, it is a nice refresher.

Another minor inconvenience is that it is memory hungry for large libraries. In my case, for ~1 TB of FLAC, the Docker container takes 5-6 GB of RAM on my Debian NAS. Limiting it to 4 GB reliably crashed with OOM; at 8 GB I never had an issue.


Big-O complexity makes assumptions that break down in this case. E.g. it "ignores" memory access cost, which seems to be a key factor here.

[edit] I should have said "basic big-O complexity" makes assumptions that break down. You can of course decide to model memory access as part of "big O", which is a mathematical model.
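A toy illustration of the point (my own sketch): the two loops below do the same O(n^2) work on the same data, but they touch memory in different orders, and the column-major traversal is typically slower in practice because it defeats the cache.

```python
import time

n = 1000
matrix = [[1.0] * n for _ in range(n)]

# Same asymptotic cost, different memory access patterns.
t0 = time.perf_counter()
row_sum = sum(matrix[i][j] for i in range(n) for j in range(n))  # row-major
t1 = time.perf_counter()
col_sum = sum(matrix[i][j] for j in range(n) for i in range(n))  # column-major
t2 = time.perf_counter()

print(row_sum == col_sum)  # identical result, identical big-O
print(f"rows: {t1 - t0:.3f}s, cols: {t2 - t1:.3f}s")  # often quite different
```

The effect is much more dramatic in C or with large numpy arrays, where the stride directly determines cache-line utilization.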


I agree the ability to use Python to "script HPC" was a key factor, but by itself it would not have been enough. What really made it dominate is numpy/scipy/matplotlib becoming good enough to replace Matlab 20 years ago, which enabled an explosion of tools on top: pandas, scikit-learn, and the DL stuff, of course.

This is what differentiates Python from other "morally equivalent" scripting languages.


I agree it is confusing if you start from the notation. I personally don't like the partial-derivative-first definition of these concepts, as it all sounds a bit arbitrary.

What made sense to me is to start from the definition of the derivative (the best linear approximation, in some sense); everything else is then about how to represent it. Scalars, vectors, matrices, etc. are all elements of an appropriate vector space, and the derivative always takes the same functional form.

E.g. you want the derivative of f(M)? Just write f(M+h) - f(M), and look for the terms in h, h^2, etc. Apply the chain rule for more complicated cases. This is IMO a much better way to learn it.
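A worked example of this recipe (my own, not from the thread): take f(M) = M^2, with M a square matrix and h a small matrix perturbation.

```latex
f(M+h) - f(M) = (M+h)^2 - M^2 = Mh + hM + h^2
```

The part linear in h is the derivative, Df(M)[h] = Mh + hM. Note it is not 2Mh unless M and h commute, which is exactly the kind of subtlety the expansion-first approach surfaces and the notation-first approach hides.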

As for notation, you use vec/kronecker product for complicated cases: https://janmagnus.nl/papers/JRM093.pdf


I was surprised by the earlier comparison on the omarchy website, because Apple M* chips work really well for data science workloads that don't require a GPU.

It may be explained by integer vs floating-point performance, though I am too lazy to investigate. A weak data point, multiplying an N=6000 matrix by itself with numpy:

  - SER 8 8745, Linux: 280 ms -> 1.53 TFLOP/s (single precision)
  - my M2 MacBook Air: ~180 ms -> ~2.4 TFLOP/s (single precision)

This is 2 minutes of benchmarking on the computers I have. It is not an apples-to-apples comparison (e.g. I use the default numpy BLAS on each platform), but it is not irrelevant to what people will do without much effort. And floating point is what matters for LLMs, not integer computation (which is most likely what the Ruby test suite is bottlenecked by).
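For reference, a sketch of that benchmark (the numbers above used N=6000; a smaller N is used here so it runs quickly, and a matrix product costs about 2*N^3 floating-point operations):

```python
import time

import numpy as np

# Time a single-precision N x N matrix product and convert to TFLOP/s.
n = 2000
a = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
b = a @ a
elapsed = time.perf_counter() - start

tflops = 2 * n**3 / elapsed / 1e12
print(f"N={n}: {elapsed * 1e3:.0f} ms -> {tflops:.2f} TFLOP/s (single precision)")
```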


It's all about the memory bandwidth.

Apple M chips are slower at computation than AMD chips, but they have fast soldered on-package RAM with a wide memory interface, which is very useful for workloads that handle lots of data.

Strix Halo has a 256-bit LPDDR5X interface, twice as wide as a typical desktop chip's, roughly equal to the M4 Pro's and half of the M4 Max's.
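The arithmetic behind that, as a quick sketch (assuming LPDDR5X at 8000 MT/s, the speed Strix Halo platforms typically ship with):

```python
# Peak bandwidth = bus width in bytes * transfer rate.
bus_bytes = 256 / 8          # 256-bit interface -> 32 bytes per transfer
transfers_per_s = 8000e6     # LPDDR5X-8000: 8000 MT/s
bandwidth_gb_s = bus_bytes * transfers_per_s / 1e9
print(bandwidth_gb_s)  # 256.0
```

A typical desktop part with a 128-bit interface at the same speed would land at half that, 128 GB/s.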


You're most likely bottlenecked by memory bandwidth for an LLM.

The AMD AI MAX 395+ gives you 256 GB/s. The M4 gives you 120 GB/s, and the M4 Pro 273 GB/s. The M4 Max: 410 GB/s (14-core CPU/32-core GPU) or 546 GB/s (16-core CPU/40-core GPU).


It’s both. If you’re using any real amount of context, you need compute too.


Yeah, memory bandwidth is often the limitation for floating point operations.

