
> It fixates on one particular basis and it results in a vector space with few applications and it can not explain many of the most important function vector spaces, which are of course the L^p spaces.

Except for just about every relevant application in computer science and physics, where fixating on a representation is the standard.


In physics it is common to work explicitly with the components in a basis (see tensors in relativity or representation theory), but it's also very important to understand how your quantities transform between different bases. It's a trade-off.


Most relevant applications use L^2 spaces, which cannot be defined pointwise.

If you want to talk about applications, then this representation is especially bad, since the intuition it gives is just straight-up false.


FWIW, my favourite textbook in communication theory (Lapidoth, A Foundation in Digital Communication) explicitly calls out this issue of working with equivalence classes of signals, and it chooses to derive most theorems using the tools available when working in the ℒ_2 (square-integrable functions) and ℒ_1 spaces.
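To make the pointwise issue concrete, here is a minimal sketch (my notation, not Lapidoth's): elements of ℒ_2 are equivalence classes of functions equal almost everywhere, so evaluating "a signal" at a single point is not well defined.

    % Functions equal almost everywhere are identified:
    f \sim g \iff \mu(\{x : f(x) \neq g(x)\}) = 0
    L^2(\mu) = \{\, f : \int |f|^2 \, d\mu < \infty \,\} / \sim
    % Example on [0,1]: f = 0 and g = \mathbf{1}_{\{1/2\}} satisfy f \sim g,
    % yet f(1/2) = 0 \neq 1 = g(1/2), so "the value at a point" is not
    % a property of an element of L^2.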


> I own an M4 iPad Pro and can't figure out what to do with even a fraction of the horsepower, given iPadOS's limitations.

Literally everything you do gets the full power of the chips. They finish tasks faster while using less power than previous chips, which lets devices use smaller batteries and thinner designs. A higher ceiling on performance is only one aspect of an upgraded CPU. A lower floor on energy consumed per task is typically much more important for mobile devices.
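Back-of-envelope sketch of that trade-off (all numbers are made-up illustrative figures, not measurements of any Apple chip):

    # Energy per task = average power draw * time to finish.
    # A chip that draws more power but finishes much sooner
    # can still consume less total energy per task.
    old_chip = {"power_w": 5.0, "time_s": 0.10}  # hypothetical previous gen
    new_chip = {"power_w": 7.0, "time_s": 0.05}  # hypothetical faster gen

    def energy_joules(chip):
        return chip["power_w"] * chip["time_s"]

    print(energy_joules(old_chip))  # 0.50 J per task
    print(energy_joules(new_chip))  # 0.35 J per task: faster AND cheaper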


Right, but what if I don't notice the difference between rendering a web page taking 100ms and it taking 50ms? What if I don't notice the difference between video playback consuming 20% of the chip's available compute and it consuming 10%?


I'm pretty sure that users of the announced Blender for iPad port will notice any additional horsepower.


What users?


> but what if I don't notice the difference between rendering a web page taking 100ms and it taking 50ms?

You probably won’t notice this when using the new machine.

For me, it only becomes noticeable when I go back to something slower.

It’s easy to take the new speed as a given.

> What if I don't notice the difference between video playback consuming 20% of the chip's available compute and it consuming 10%?

You would notice it as increased battery life. A CPU that finishes the task faster and more efficiently will get back into low-power mode sooner.


Faster can also mean more efficient for a lot of tasks, because the CPU can idle sooner, so your battery can last longer, or be smaller and lighter.


"Literally everything" doesn't amount to much if I can't actually control the stupid thing.


The difference in usefulness between ChatGPT free and ChatGPT Pro is significant. Turning up compute for each embedded usage of LLM inference will be a valid path forward for years.


That's a JIT. It uses the same compiler infrastructure but swaps out the AoT backend and replaces it with the JIT backend in LLVM. Notably, this blog post is targeting on-device usage, where a custom JIT is not allowed. You can only interpret.
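To illustrate the distinction (a toy sketch, not LLVM's actual interpreter): interpreting means walking the program structure at runtime instead of emitting native code.

    # Toy AST interpreter: evaluates the tree directly, no code generation.
    def interp(node, env):
        op = node[0]
        if op == "num": return node[1]
        if op == "var": return env[node[1]]
        if op == "add": return interp(node[1], env) + interp(node[2], env)
        if op == "mul": return interp(node[1], env) * interp(node[2], env)
        raise ValueError(f"unknown op: {op}")

    # (x + 2) * 3 with x = 5 -> 21
    expr = ("mul", ("add", ("var", "x"), ("num", 2)), ("num", 3))
    print(interp(expr, {"x": 5}))

A JIT would instead translate that tree into machine code and jump to it, which requires memory that is both writable and executable.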


Because the usefulness of an AI model lies in reliably solving a problem, not in being able to solve it given 10,000 tries.

Claude Code is still only a mildly useful tool because it's horrific beyond a certain breadth of scope. If I asked it to solve the same problem 10,000 times, I'm sure I'd get a great answer to significantly more difficult problems, but that doesn't help me, as I'm not capable of scaling myself to check 10,000 answers.
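A quick sketch of the gap between "solvable in 10,000 tries" and "reliable" (the success rate below is an assumption for illustration):

    # If each independent attempt succeeds with probability p, then
    # P(at least one success in k attempts) = 1 - (1 - p)^k.
    p = 0.001        # assumed per-attempt success rate (made up)
    k = 10_000
    print(1 - (1 - p) ** k)  # ~0.99995: almost surely *some* attempt works
    # ...but a human still has to review all k candidates to find it.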


Without my reading an entire novel's worth of text, do they explain why they picked these dates? They have a separate timeline post where the 90th-percentile date for a superhuman coder is later than 2050. Did they just go for shock value and pick the scariest timeline?


The only gripe I have with the tool is that once you've gotten a country right a few times, it zooms in too far. I still had no clue where Eritrea was after getting it right like four times; I just got lucky.

But now that the map only shows me three possible countries I can trivially remember which one it was. Ask me again tomorrow while only showing me the full map and I might guess it's in South America.


Wrong thread?


> I really don't understand what their endgame is here.

To not lose. History is full of stories of incumbents not wanting to cannibalize themselves and dying because of it.


> Nobody up to this day has been able to give a formal mathematical definition of intelligence, let alone a proof that it can be reduced to a computable function.

We can't prove the correctness of most of physics. Should we call that a dead end too?


> Llama 3.1 405B can currently replace junior engineers

lol


Can an LLM join a standup call? Can an LLM create a merge request?

At the moment it looks like an experienced engineer can pressure an LLM into hallucinating junior-level code.


The argument is that, instead of hiring a junior engineer, a senior engineer can simply produce enough output to match what the junior would have produced and then some.

Of course, that means you won't be able to train them up, at least for now. That being said, even if they "only" reach the level of your average software developer, they're already going to have pretty catastrophic effects on the industry.

As for automated fixes, there are agents that _can_ do that, like Devin (https://devin.ai/), but it's still early days and bug-prone. Check back in a year or so.


Not training new workers and relying on senior engineers with tools is short-sighted and foolish.

LLMs seem to be accelerating the trend.


On one hand, I somewhat agree; on the other hand, I think LLMs and similar tooling will allow juniors to punch far above their weight and learn and do things they would never have dreamed of before. As mentioned in another comment, they're the teacher that never gets tired and can answer any question (with the necessary qualifications about correctness, learning the answer but not the reasoning, etc.)

It remains to be seen if juniors can obtain the necessary institutional / "real work" experience from that, but given the number of self-taught programmers I know, I wouldn't rule it out.


I think many people using LLMs are faking it and have no interest in “making it”.

It’s not about learning for most.

Even if a small subset of intelligent and motivated people use the tools to become better programmers, a larger number of people will use them to “cheat”.


Tools are foolish? Like, should we remove all of the other tools that make senior engineers more productive, in favor of hiring more people to do those same tasks? That seems questionable.


Tools are great, but there is a proper order: learn the fundamentals, then progress through the skills and the technology.

Learn to do something manually and then learn the technology.

Do you want engineers who are useless if their calculator breaks or do you want someone who can fall back on pen and paper and get the work done?


Well, what if their pen breaks? Perhaps a good fluid-dynamics engineer needs to be able to make ink from common plants?

I get the argument, it’s just silly. Calculators don’t “break”. I would rather have an engineer who uses highly reliable tools than one who is so obsessed with the lowest levels of the stack that they aren’t as strong at the top.

I’m willing to live with a useless day in the insanely unlikely event that all readily available calculators stop working.


There's an incentive problem: the benefit from training new workers is distributed across all companies, whereas the cost of training them falls on the single company that does it.


Most broken systems have bad incentives.

Companies don’t want to train people ($) because employees with skills and experience become more valuable to other companies, and retention is also expensive.

We are not training AND retaining talent.


> The argument is that, instead of hiring a junior engineer, a senior engineer can simply produce enough output to match what the junior would have produced and then some.

...and that's just as asinine a claim as the original one.


Why? I can say that, in my personal experience, AI has allowed me to work more efficiently as a senior engineer: I can describe the behaviour I want, scan over the generated code and make any necessary fixes much faster than either writing the code myself or having a junior do it.


Plain grift, or are they high on their own supply?


Both? Both is good.

