
The heat pump will pull heat from inside the house? This sounds terrible for efficiency in winter, as you will need to reheat the room


There are systems (like the SANCO2) that use an indoor tank with an outdoor heat pump unit.

> This sounds terrible for efficiency in winter, as you will need to reheat the room

Sure, but lots of people have some point of the year they want cooling.

Even during the heating season it's only worse if you're heating the living space with something _worse_ than what you're using to heat the hot water. If you have a heat pump for room heat then you're moving heat from outside, to in the house, to in the water heater.

If you're heating the room with electric resistance heat, then in the winter it's no different from using an electric resistance water heater (100% efficient).
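A rough back-of-the-envelope sketch of that reasoning (Python, numbers purely illustrative):

  # Heat-pump water heater (HPWH) pulling heat from indoor air,
  # while the room itself is kept warm by some other heater.
  cop_wh = 3.0     # HPWH COP (illustrative)
  cop_room = 1.0   # resistance space heat; try 3.0 for a space heat pump

  e_wh = 1.0                                    # kWh of electricity into the HPWH
  heat_to_water = cop_wh * e_wh                 # 3 kWh delivered to the tank
  heat_taken_from_room = heat_to_water - e_wh   # 2 kWh removed from the room
  e_reheat = heat_taken_from_room / cop_room    # electricity to put that heat back

  effective_cop = heat_to_water / (e_wh + e_reheat)
  print(effective_cop)  # 1.0 with resistance heat, ~1.8 with a COP-3 space heat pump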


Where does it say that? It would be great if it chose inside or outside air based on the target indoor temperature (e.g. cooling indoor air in summer to heat water for washing and showers).


Looking at the pictures, it must pull heat from ambient air or from electricity. There aren't enough tubes to have both interior and exterior heat-exchanging units.


Yeah, this sounds disingenuous. I can also set my heater to 70°C if I want; that does not increase its size...


It is a scalding hazard though. At 70°C skin burns occur in about a second or so.

The thermostatic valve makes it so that the water that comes out of the water heater is at a more reasonable temperature.


I was surprised at how cold "hot" water actually was. I thought it was 60-70, but apparently what feels "hot" is around 45-50. Especially for me, who finds anything beyond my shower's "middle" setting uncomfortably hot, I must be showering with around 40 °C water, which is basically "hot day" hot.


Going below 55 °C tap temperature / 60 °C in the tank is bad though, at least in larger installations; otherwise you risk legionella and other microbial growth [1].

[1] https://www.verbraucherzentrale.sh/pressemeldungen/lebensmit...


Oh definitely, I'm purely talking about "feel".


Then again, I grew up with thermostatic valves in the shower mixer, and a "child safety" latch to restrict it to even less hot water in normal use. I don't see why putting the mixer in the water heater is that different.


It's supposed to be HOT water, not lukewarm... I have my water heater set to 150F because it makes the hot water last longer, especially in the winter when the incoming supply is barely above freezing, but that doesn't make the tank bigger.


I think what this "smart tank" does is mix the super-hot with cold.

If the automatic mixing feature malfunctions and lets the super-hot water through, it could be risky...


Why do you care what size it is though? It's only the capacity that matters. For example a tankless only has a few gallons inside, but that doesn't limit how long of a shower you can take.


I don't understand all the hype for generating SVGs with LLMs. The task is not really useful, doesn't seem that interesting in a single shot as it's really hard and no human could do it (it would be more useful if the model had visual feedback and could correct the result).

And also, since it has become a popular task, companies will add examples to their training sets, so you're just benchmarking who has the better text-to-SVG training set, not the overall quality of the model.


My take is no one really cares about generating SVG, but it's a structured "code" format with very direct visual results. I can't look at 3 piles of code and instantly tell which is best (assuming minimum competence), but I can judge the SVG outputs very easily. As a quick shot it gets a point across faster and allows easier comparison. As a technical comparison it's not so strong, but that's harder to do and judge, and less fun to read.


One of my co-founders lost the SVG of our startup logo, and the designer who helped us was away on vacation. I really wanted to experiment with some logo animations for an upcoming demo, so I decided to take matters into my own hands.

I grabbed a high-quality PNG, gave it to ChatGPT, and managed to recreate the SVG from the image, after quite a bit of prompting and tweaking. But it worked out great!


But isn't this something Inkscape has been able to do forever?


It goes back to Sparks of AGI [0] unless I am mistaken. Can recommend the talk, one that has stayed in the back of my mind since I first saw it two years ago. Personally, I still have major reservations about throwing claims of intelligence or understanding around, but I do agree that SVG code generation can be a very effective way to get a quick, easy-to-present sense of a model's ability to output code from a rather open-ended prompt that needs a high degree of coherence and where a lot of layers depend on and build on each other.

Helps that these are eye catching (literally as the output is visual) and easy to grasp. Same reason a lot of hype is created around the web desktops.

[0] https://youtu.be/qbIk7-JPB2c?si=_TNRrxN-_5FOlfy5&t=1342


It's obviously a pointless benchmark, but it's fun, so people like doing it.


How can you know?


By thinking about what a computer is actually doing & realizing that attributing thought to an arithmetic gadget leads to all sorts of nonsensical consequences, like an arrangement of dominoes & their cascade being a thought. The metaphysics of thinking computers is incoherent & if you study computability theory you'll reach the same conclusion.


I'd say that thoughts and reasoning are two different things, you're moving the goalpost.

But what makes computer hardware fundamentally incompatible with thinking, compared to a brain?


I've already explained it in several places. The burden of proof is on those drawing the equivalence to provide actual evidence for why they believe carbon & silicon are interchangeable & why substrate independence is a valid assumption. I have studied this problem for much longer than many people commenting on this issue & I am telling you that your position is metaphysically incoherent.


Can you explain more? Which things are impossible in Blender?


Spiritually, Blender is to FreeCAD what Gimp is to Inkscape, or what BMP is to SVG. With Blender you're massaging piles of anonymous polygons so they look right aesthetically, while with CAD you're composing geometric primitives to make a precise blueprint for a 3D object that just happens to be rendered with polygons. The former is better for art, while the latter is better for manufacturing.


Are there any open CAD file formats that lay a foundation for describing this kind of 3D data without classic triangles?


A .step or .stp file encodes the model as mathematical shapes, rather than approximating it with polygons, but it doesn't save the entire parametric workflow or history, only the final result. As far as I know, there is no widely adopted file format that also saves this information.


Parent's comparison is pretty great, but it shouldn't be "overdone". It's not really the format that's different/a problem (it's not hard to make a Blender object from a CAD design, the same way an SVG can be rendered to a PNG, and similarly irreversible in both cases); it's the whole design flow.

CAD uses geometry primitives with parameters and exact sizing (e.g. you draw a rectangle of this size, cut a hole into it at this-and-this offset from one of the corners, and extrude the shape into 3D). As mentioned, this can be approximated via geometry nodes, but they are very different in "ideology".


For architecture, there is Industry Foundation Classes (IFC). IFC is a standard for describing buildings. FreeCAD supports this natively. There's a tutorial here: https://yorik.uncreated.net/?blog%2F2025%2F002-nativeifc-tut...

Blender has an extension for IFC called Bonsai. https://extensions.blender.org/add-ons/bonsai/


CAD modellers are good at producing parametric 3D models. You can make use of spreadsheets and constraints to create a piece that can later be changed super easily.

https://en.wikipedia.org/wiki/Constraint_(computer-aided_des...
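For a concrete feel, here's a minimal sketch of that workflow in FreeCAD's Python console (object names, dimensions, and the alias are made up):

  import FreeCAD

  doc = FreeCAD.newDocument("Parametric")

  # A spreadsheet cell drives the dimension; the alias makes it addressable.
  sheet = doc.addObject("Spreadsheet::Sheet", "Params")
  sheet.set("A1", "40")            # length in mm
  sheet.setAlias("A1", "length")

  box = doc.addObject("Part::Box", "Plate")
  box.Width = 20
  box.Height = 5
  # Bind the box length to the spreadsheet cell; editing the cell
  # later updates the solid everywhere it is used.
  box.setExpression("Length", "Params.length")

  doc.recompute()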


> CAD modellers are good at producing parametric 3D models

If that's the only thing they do better than Blender, then it sounds like their days are numbered. There have to be more benefits, right? Blender exposes a pretty wide Python API, loading spreadsheets ends up pretty simple, and together with Geometry Nodes you can even visualize it in a way that makes some sense. Constraints have existed in Blender for a long time too.


They are better at it on a fundamental level. It’s a completely different approach for data representation, offering precision and repeatability which is not possible with Blender's data model.

Blender may well replace CAD apps in the hobbyist 3D printing space, but it will never replace them in industry and professional work. Solid-modeling CAD software commonly features more than just creating mathematically precise digital 3D objects: also planning for CNC machining, FEM analysis, assembly and so on.


> It’s a completely different approach for data representation, offering precision and repeatability which is not possible with Blender's data model.

How exactly? And why not?

You need useful measurements/units, reproducibility, parameters, constraints, and I guess something more? Since Blender can give you those things, it's not impossible in Blender. Want to have 3D objects automatically created from values in CSVs, together with constraints? Blender can already do that today, just as one example.
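Roughly along these lines, as a minimal sketch (the file path and column meanings are made up):

  import bpy
  import csv

  # Create one cube per CSV row, sized and placed by the row's values.
  with open("/tmp/parts.csv", newline="") as f:
      for row in csv.DictReader(f):
          bpy.ops.mesh.primitive_cube_add(
              size=float(row["size"]),
              location=(float(row["x"]), float(row["y"]), float(row["z"])),
          )
          bpy.context.active_object.name = row["name"]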

I don't really mind if Blender has a chance of replacing CAD apps or not, more curious about why exactly people find it so fundamentally impossible for Blender to be a useful alternative, and I have yet to hear any convincing arguments.


An analogy is the difference between vector and bitmap graphics.

CAD programs aren't just a different set of operations on the same data, they use an entirely different representation (b-rep [1] vs Blender's points, vertices, and polygons).

These representations are much more powerful but also much more complex to work with. You typically need a geometric kernel [2] to perform useful operations and even get renderable solids out of them.

So sure, I suppose you could build all of that into Blender. But it's the equivalent of building an entire new complex program into an existing one. It also raises major interoperation issues. These two representations do not easily convert back and forth.

So at that point, you basically have two very different programs in a trenchcoat. So far the ecosystem has evolved towards instead building two different tools that are masters of their respective domains. Perhaps because of the very different complexities inherent in each, perhaps because it makes the handover / conversion from one domain to the other explicit.

1. https://en.m.wikipedia.org/wiki/Boundary_representation

2. https://en.m.wikipedia.org/wiki/Geometric_modeling_kernel


> CAD programs aren't just a different set of operations on the same data, they use an entirely different representation (b-rep [1] vs Blender's points, vertices, and polygons).

So with that in mind, there should be something that is possible to build in CAD, but impossible then to build in Blender?

I know the differences between the two, I understand they're fundamentally different, yet I seem to be able to produce similar results to others using CAD, so I'm curious what results I wouldn't be able to reproduce in Blender.

Any concrete examples I could try out?


Sure. Create a diamond polygon and revolve it around a point.

Blender has methods and tools to _approximate_ doing this. It has a revolve tool... where the key parameter is the number of steps.

This is not a revolution, it's an approximation of a revolution with a bunch of planar parts.

BREP as I understand it allows you to describe the surfaces of this operation precisely and operate further on them (e.g. add a fillet to the top edge).

Ditto for things like circular holes in objects. With blender, you're fundamentally operating on a bunch of triangles. Fundamental and important solid operations must be approximated within that model.

BREP has a much richer set of primitives. This dramatically increases complexity but allows it to precisely model a much larger universe of solids.

(You can kinda rebuild functionality that geometric kernels have with geometry nodes now in blender. This is a lot of work and is not a great user interface compared to CAD programs)
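To make it concrete, here's a rough sketch of the Blender side (run from Blender's Scripting workspace; the plane is just a stand-in for the diamond profile):

  import bpy

  # Build a small profile off-axis, then "revolve" it around Z.
  bpy.ops.mesh.primitive_plane_add(size=0.5, location=(2, 0, 0))
  bpy.ops.object.mode_set(mode='EDIT')
  bpy.ops.mesh.select_all(action='SELECT')
  # steps=32 means the result is 32 planar segments, not a true
  # surface of revolution; a CAD kernel would keep it exact.
  bpy.ops.mesh.spin(steps=32, angle=6.28319, center=(0, 0, 0), axis=(0, 0, 1))
  bpy.ops.object.mode_set(mode='OBJECT')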


I don’t have explanatory knowledge on the matter, sorry.

If you are interested you may look up the difference between solid, surface and mesh modeling. They all have strengths and weaknesses.

Ultimately you have to translate any model into a lossy representation/approximation due to discrete numerical control requirements and so on. However, the gist of it is that with mesh modeling this happens earlier in the design process. Even with procedural and parametric modeling in Blender, you will always encounter issues with approximation and floating-point precision, which are inherent to the data representation.

For 3D printing that often doesn't matter, because the mesh approximation is precise enough. For hobbyists, CAD apps are kinda too niche and bothersome to be worth learning for simple 3D-printing models. The overall versatility of Blender and its basic CAD-like capabilities are much more valuable and rewarding in this space. In the end, you probably massively benefit from learning something like Blender anyway, because it's much better suited than CAD for quickly conceptualizing an idea in 3D. I think CAD works best if the shape and specs of the object are already known. Organic shapes and clay-like deformations, which can't be easily reduced to mathematically defined solid-body functions, are something where Blender will always be better suited than CAD.


>Even with procedural and parametric modeling in Blender, you will always encounter issues with approximation and floating point precision, which are inherent to the data representation.

A common problem people run into with CAD models is importing a STEP file and modeling directly off of geometry in it. They later find out that some face they used as a reference was read by the CAD package as 89.99999994 degrees to another, and discover it's thrown the geometry of everything else in their model subtly off when things aren't lining up the way they should.

And that's with a file that has solid body representation! It's an entire new level of nightmare when you throw meshes into the mix.

The heart of any real CAD package is a geometry kernel[1]. There are really only a handful of them out there; Parasolid is used by a ton of 'big name' packages, for example. This is what takes a series of descriptions of geometry and turns it into clear, repeatable geometry. The power of this isn't just where geometry and dimensions are known. It's when the geometry and dimensions are critical to the function of whatever's being modeled. It's the very core of what these things do. Mesh modeling is fantastic for a lot of things, but it's a very different approach to creating geometry and just isn't a great fit for things like mechanical engineering.

1 - https://en.wikipedia.org/wiki/Geometric_modeling_kernel


> The power of this isn't just where geometry and dimensions are known. It's when the geometry and dimensions are critical to the function of whatever's being modeled.

Yes, but I meant making a case for workflow differences.

CAD is bad at aiding visual thinking and exploration, since you kinda have to be precise and constrain everything. You can pump out a rough idea of an object and edit it much, much faster in Blender.

Sketching on paper, or visualizing in one's mind, is pretty hard for most people when it comes to 3D. CAD is not at all inviting for creative impulses and flow. People who can do this in CAD are probably trained engineers who learned a very disciplined, analytical way to approach problems, people who think in technical drawings.

So, CAD is good at getting a precise and workable digital representation of a "pre-designed" object for further (digital) processing, analysis, assembly and production. I think Blender is better at the early design process, figuring out shapes and relations.


I don't entirely agree there.

In a vacuum for a standalone object, a 3D mesh app like Blender can be useful for brainstorming.

Most of my CAD usage is designing parts that have to fit together with other things. The fixed elements drive the rest of the design. A lot of the work is figuring out "how do I make these two things fit together and be able to move in the ways they need to."

There is still a lot of room for creativity. My workflow is basically "get the basic functionality down as big square blocks, then keep cutting away and refining until you have something that looks like a real product." My designs very rarely end up looking like what they started out as. But the process of getting them down in CAD is exactly what lets me figure out what's actually going to work.

It's a very different workflow, and it's definitely not freeform in the same way as a traditional mesh modeling app, but CAD is for when you have to have those constraints. You can always (and it's not an uncommon pattern) go back and use a mesh modeler to build the industrial design side of things on top once the mechanical modeling is done.

ETA:

I'd also add: I'm not sure "thinking in CAD" comes naturally to anyone; it's a skillset that has to be built.


If you try OpenSCAD-style adding and subtracting of volumes, the syntax is pretty horrific. It is impossible to script objects that way. Quoting Gemini:

  However, implementing a full OpenSCAD-like syntax and robust CSG system from scratch in Blender Python is complex due to Blender's mesh-based nature versus OpenSCAD's mathematical description. Blender's boolean operations on complex meshes can sometimes lead to topological errors.
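For comparison, this is roughly what a single OpenSCAD-style difference() ends up looking like when scripted through Blender's boolean modifier (a hedged sketch; object names are illustrative, and the usual caveats about messy topology apply):

  import bpy

  # Base cube with a cylindrical hole cut through it.
  bpy.ops.mesh.primitive_cube_add(size=2, location=(0, 0, 0))
  base = bpy.context.active_object
  bpy.ops.mesh.primitive_cylinder_add(radius=0.5, depth=4, location=(0, 0, 0))
  cutter = bpy.context.active_object

  mod = base.modifiers.new(name="cut", type='BOOLEAN')
  mod.operation = 'DIFFERENCE'
  mod.object = cutter

  bpy.context.view_layer.objects.active = base
  bpy.ops.object.modifier_apply(modifier=mod.name)
  cutter.hide_set(True)   # the result is still just a mesh, not a solid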


To be fair though, OpenSCAD works best too if you do this during the generative step and not after the fact. I've used it to remix existing STLs so it definitely does work but you really have to watch the areas where two shapes get close to each other, especially if there is a lot of fine detail.


Did I solve all the problems that OpenSCAD might have, compared to FreeCAD? I myself think I did: https://youtu.be/eG5lhLYvihQ?si=yA00IYVU4_Zemdxi


Too many acronyms, what's FE, BFF?


I was asking the same questions.

- FE is short for the Front End (UI)

- BFF is short for Backend For Frontend


Front end, and a backend for a frontend: a layer in which you generally design APIs specific to a page by aggregating multiple other APIs, caching, transforming, etc.
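A toy sketch of the idea (FastAPI + httpx; URLs and field names are made up):

  from fastapi import FastAPI
  import httpx

  app = FastAPI()

  # One endpoint shaped for exactly one page: aggregate, trim, transform.
  @app.get("/bff/profile-page")
  async def profile_page(user_id: str):
      async with httpx.AsyncClient() as client:
          user = (await client.get(f"https://users.internal/{user_id}")).json()
          orders = (await client.get(f"https://orders.internal/?user={user_id}")).json()
      return {"name": user["name"], "recent_orders": orders[:5]}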


Arc browser unifies the tabs and bookmarks in a very clever way.


I'm wondering if you can prompt it to work like this: make minimal changes, and run the tests at each step to make sure the code still works.


This thing can "fix" tests, not code. It just adjusts the tests to match incorrect code. So you need to keep an eye on the test code as well. That sounds crazy, of course. You have to constantly keep in mind that the LLM doesn't understand what it is doing.


If it's chromium based, they will need to remove manifest v2 at some point to stay close to the upstream version.


Possibly in Arc, although Brave also continues to support Manifest v2, so it's possible it will persist in some subset of Chromium-based browsers. And as I said, it ships with the browser and is installed by default; but Orion is not Chromium-based.


Brave supports it right now, which is 2 months after it's been removed upstream.

I strongly suspect they're gonna drop support as soon as the first bigger merge issue happens, along with a heartfelt blog post that "they did everything to support it, but it was just too much for the resources available to them".

I doubt it's gonna take more than 1-2 years (December 2027) for this to happen, but we will see.


Chrome officially supports Manifest V2 extensions until at least June 2025, hidden behind an enterprise flag: https://developer.chrome.com/docs/extensions/develop/migrate...

I expect Brave to easily support it until then and then drop it very quickly as you described.


You know, Google's really playing with fire here. There are enough browser companies running Chrome underneath, to more than equal Google's commitment.

That is, if those companies choose.

If even 80% of them wanted to fork? Not a biggie. And they could still cherry pick commits from the alt fork.


I think you might be underestimating the scope of work that happens on chromium a tad, from Github's "pulse" feature:

"Excluding merges, 684 authors have pushed 3,139 commits to main and 3,866 commits to all branches. On main, 14,924 files have changed and there have been 740,516 additions and 172,682 deletions."

That's stats from last week. Last year Google was apparently responsible for about 95% of contributions. Other than Microsoft (which has the same bad incentives as Google), none of the alt-Chromium browser companies has even 5% of the engineers needed to maintain a real alternative.


Yes, but as I said they can merge in changes. Apparently more than I thought, but still, they can.

Opera has pinch-zoom text-reflow in a chromium backend, and that seems to be substantial, and yet it is (on purpose) kept out of mainline chrome. So they do loads of tracking/merging too.

The scope of work to do a few small features on top of chrome wouldn't be a biggie, compared to the entire project.


How hard would it be to "wrap" the browser in a uBlock-like shell, so that all network requests are filtered through a firewall before they even reach the Chrome application layer?

It might be easier to maintain than an actual extension interface with hooks throughout the code.
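Something like a local filtering proxy, e.g. a mitmproxy addon (a sketch; the blocklist is illustrative):

  from mitmproxy import http

  BLOCKLIST = {"ads.example.com", "tracker.example.net"}

  # Runs for every request before it leaves the machine.
  def request(flow: http.HTTPFlow) -> None:
      if flow.request.pretty_host in BLOCKLIST:
          flow.response = http.Response.make(
              403, b"blocked", {"Content-Type": "text/plain"}
          )

Run it with "mitmproxy -s blocker.py" and point the browser at the proxy.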


I don't think you'd need Manifest V2 for such rudimentary logic.

The reason why uBlock Origin is so powerful is that it works with the DOM, not at the network level, and can use heuristics to determine whether something is an advertisement or not.


I imagined you would use both.


Brave supports uBO blocklists OOTB, no extension needed.

So even when they have to say farewell to Manifest v2 it really doesn't matter, at least in case of privacy (and for some medical) protection.


In addition, since their adblocker isn't an extension and doesn't care about extension APIs, they can do things even Manifest v2 Chrome extensions can't. For example, full-fat uBO can't do CNAME uncloaking on Chromium due to API limitations, but can do it on Firefox which has the APIs. Brave is Chromium-based, but since Shields isn't an extension they've built CNAME uncloaking into it.
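Conceptually, the uncloaking step is just following the DNS CNAME chain before matching against the blocklist; a toy sketch with dnspython (the domains are made up, and real blockers do this inside the network stack rather than with a separate resolver):

  import dns.resolver

  BLOCKED = {"thirdparty-tracker.example"}

  def uncloaked_host(host: str) -> str:
      try:
          answer = dns.resolver.resolve(host, "CNAME")
          return str(answer[0].target).rstrip(".")
      except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
          return host

  def should_block(host: str) -> bool:
      return uncloaked_host(host) in BLOCKED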


This is good info to keep in my back pocket. Thanks!


As does Vivaldi


This is arguably the most compelling reason for people to switch to Brave. If there are smart people over there, they'll make a concerted effort to keep Manifest v2 in their fork.


I don't understand or know a lot about extensions, but what is so incredibly impossible about adding new capabilities to Manifest V3? It's a manifest describing what the addon wants to do and some UX to allow it, right?


It’s not really about the manifest. It’s about the APIs available to extension programmers. Chrome has made the "webRequestBlocking" API unavailable and that’s what’s affecting adblockers. Chrome will eventually remove the code supporting this API, and it is not feasible for downstream to make it available anyway.


Why can’t forks just maintain an independent implementation afterwards?


They could, theoretically. But just imagine what that actually means. Unless you cease merging upstream/the project you've forked, you'll have to resolve all conflicts caused by this divergence.

And that's a lot of work for a multi million LOC project, unless the architecture is specifically made to support such extensions... which isn't the case here.

And freezing your merges indefinitely isn't really viable either for a browser


A quick look at the code gives me the impression that webRequestBlocking is a fairly trivial modification to webRequest, and they seem to be keeping the latter. This leads me to two conclusions: it wouldn't be terribly hard for a fork maintainer to keep webRequestBlocking, and Google's technical excuses for removing it are disingenuous.


> ... and Google's technical excuses for removing it are disingenuous.

That's been the default assumption of pretty much everyone anyway.


That may be true now, but will it still be true when Google next refactors their request code under the assumption that no requirements for a webRequestBlocking API exist?


So go make an LLM manage the fork or something. Everyone keeps telling me they are amazing at code these days. Surely it can do a task like that if that's all it's doing all day.

If not today maybe soon...


Because these aren't really independent browsers but reskins.

Being independent of google requires actually doing the work and not just copying google.


The codebase is huge, sure, but the particular feature is relatively small and (as I assume and as verified by another poster) quite easy to implement: https://news.ycombinator.com/item?id=43204603


I think if a bunch of Chromium forks come together, they can maintain v2 support for quite a while. A fork maintained by a combination of Brave, Opera, Vivaldi, and maybe some of those startup-based browsers can probably keep the most important APIs running for quite some time.

At some point the issues will become too difficult to fix, but none of these companies need to be doing it alone. Adding a separate upstream with some "fuck off Google" fixes for them to base their proprietary browser on seems like a smart thing to do.


You can offload tensors to CPU memory. It will make your model run much slower, but it will work.
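For example, with Hugging Face transformers/accelerate it's roughly this (model name and memory limits are illustrative):

  import torch
  from transformers import AutoModelForCausalLM

  model = AutoModelForCausalLM.from_pretrained(
      "meta-llama/Llama-2-13b-hf",
      torch_dtype=torch.float16,
      device_map="auto",                        # fill the GPU first...
      max_memory={0: "8GiB", "cpu": "48GiB"},   # ...then spill layers to CPU RAM
  )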

