Fascinating - I find the opposite is true. I think about edge cases more and direct the exploration of them. I've found my 35 years of experience tells me where the gaps will be, and I'm usually right. I've been able to build much more complex software than before - not because I didn't know how, but because as one person I couldn't possibly have done it. The process isn't any easier, just faster.
I've also found AI-assisted work is remarkable for implementing algorithmically complex things.
However, one thing I definitely identify with is the trouble sleeping. I am finally able to do a plethora of things I couldn't do before due to the limits of one man typing. But I don't build tools I don't need; I have too little time and too many needs.
If you've worked on a code base built by more than just you, you don't understand it all and you don't have control. Part of being an experienced engineer is knowing how to deal with that effectively at scale.
Future me would have a really high-quality EV with an amazing charging network, clean air and water, a habitable planet for my grandchildren, no domestic political extrajudicial paramilitary surveilling everyone, no megawarehouse detention cities everywhere, no outright ideological warfare against urban areas, etc.
We had a choice between Star Trek future world and Blade Runner+MadMax+Idiocracy, and we predictably chose the one we deserve because memes and a podcast.
They were sued by the current administration and recorded as domestic terrorists, held down and sprayed in the face by irregular paramilitaries with extrajudicial powers, detained without probable cause or charges, investigated by the FBI in the dead of night, placed on no-fly lists, demoted in rank post-retirement, fired, laid off, swatted, sent pizzas in the names of dead relatives, and all the wonderful stuff that's making America great again.
Enterprise, government, and regulated institutions. It's also the de facto standard for programming assistants at most places. They have a better story around compliance, alignment, task-based inference, agentic workflows, etc. Their retail story is meh, but I think their view is to be the AWS of LLMs, while OpenAI can be the retail play and Gemini can be whatever Google does with products.
After having spent a few days with OpenClaw, I have to say it's about the worst software I've ever worked with. Everyone focused on the security flaws, but the software itself is barely coherent. It's like Moltbook wrote OpenClaw wrote Moltbook in some insidious Wiggum loop from hell with no guard rails. The commit rate on the project reflects this.
Here's an example. Agents get exposed to a set of tools, one of which is the file-system tools. These are basically read, write, and edit. The edit requires a replacement syntax. The write function truncates the file. There is no append. These are generally documented as how you work with adding memories: memories are expected to be read, then rewritten, by the LLM. This is watched by a watchdog and vectorized for RAG. Note, however, that you have to read the entire memory in and write it all back out just to append to it through the LLM. Why?
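For what it's worth, the missing primitive is tiny. A minimal sketch of what I mean - the tool name and signature here are invented for illustration, not OpenClaw's actual API:

  // Hypothetical append primitive; name and shape are made up.
  // Today the agent has to read the whole memory file and push it
  // back through the (truncating) write tool just to add one line.
  import { appendFile } from "node:fs/promises";

  async function appendMemory(path: string, entry: string): Promise<void> {
    // O_APPEND semantics: no read-modify-write round trip through the LLM
    await appendFile(path, entry.trimEnd() + "\n", "utf8");
  }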
I rewrote almost all the agent functions and denied the existing ones, because they are deeply flawed and don't do what you need for any specific purpose. The plugin distribution model is a bit weird and inscrutable. Instead they seem to advocate for skills distribution. Those, though, depend on being able to exec arbitrary bash code. Really?
Moltbook itself depends on agents execing curl commands for each operation. Why? Presumably because the plugin distribution model is inscrutable. I wrote plugins for all the Moltbook operations with conveniences, structured memory logs, etc. Agent adherence went through the roof.
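To give a flavor of what those plugins replace - a hedged sketch only, since the route, payload, and names here are invented and the real Moltbook API differs:

  // Instead of the agent shelling out to curl, expose a typed action it
  // can call directly. Endpoint and payload shape are hypothetical.
  type PostResult = { ok: boolean; status: number; body: string };

  async function moltbookPost(baseUrl: string, apiKey: string, text: string): Promise<PostResult> {
    const res = await fetch(`${baseUrl}/api/posts`, {  // made-up route
      method: "POST",
      headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
      body: JSON.stringify({ text }),
    });
    return { ok: res.ok, status: res.status, body: await res.text() };
  }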
Sessions don't seem to work reliably or make sense. Heartbeats randomly stop firing. Despite heartbeats being documented as the canonical model for regular interaction, they were so flaky that I turned them off in favor of cron jobs, decomposing my heartbeat task into prime-number intervals based on relative frequencies (roughly as sketched below). Even so, it seems to randomly inject heartbeat info into the prompting occasionally if you run cron jobs a certain way. And despite being called cron, the jobs somehow don't actually fire reliably or on the prescribed schedule. The web UI is a mess. Configuration management in the UI is baffling. The separation between the major MD files per agent seems not to matter at all, and they're inexplicably organized. Hot loading works except when it doesn't. Logging doesn't log things that should clearly be logged.
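By prime-number intervals I mean something like this - a sketch with invented task names, standing in for however you actually kick off an agent task:

  // Independent timers on pairwise co-prime minute intervals, so tasks
  // drift past each other instead of piling up in one heartbeat tick.
  const MINUTE = 60_000;

  function schedule(name: string, minutes: number, run: (name: string) => Promise<void>) {
    setInterval(() => {
      run(name).catch((err) => console.error(`${name} failed:`, err));
    }, minutes * MINUTE);
  }

  // 7, 11, and 13 only coincide every 7 * 11 * 13 = 1001 minutes.
  schedule("check-inbox", 7, async () => { /* ... */ });
  schedule("refresh-memory-index", 11, async () => { /* ... */ });
  schedule("post-status", 13, async () => { /* ... */ });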
I am down with vibe coding and produce copious amounts of such code myself. But there's an art to producing code worth using, let alone distributing. Entropy and scope need to be rigorously controlled, and things need to ship in a functional state - actually functional, not aspirationally functional. Decisions need to be considered and guidance given. None of this seems to have happened here. Once it gets past a certain level of chaos, IMO, it's unmaintainable, and OpenClaw is way past that point and accelerating. It's probably also a supply-chain party bag.
Basically you concentrate the heat into a high-emissivity, high-temperature material that's facing deep space and is shaded. Radiators get dramatically smaller as temperature goes up because radiation scales as T⁴ (Stefan–Boltzmann). There are many cases in space where you need to radiate heat - see Kerbal Space Program.
"High emissivity, high temperature" sounds good on paper, but to create that temperature gradient within your spacecraft the way you want costs a lot of energy. What you actually do is add a shit load of surface area to your spacecraft, give that whole thing a coating that improves its emissivity, and try your hardest to minimize the thermal gradient from the heat source (the hot part) throughout the radiator. Emissivity isn't going past 1 in that equation, and you're going to have a very hard time getting your radiator to be hotter than your heat source.
Note that KSP is a game that fictionalizes a lot of things, and sizes of solar panels and radiators are one of those things.
I'm not sure I understand why creating the gradient is hard - use a phase-transition heat pump feeding a high-surface-area radiator. The radiator doesn't have to be hotter than the heat source; the radiator just has to be hot. But given that we are talking about a space data center, you can certainly use the heat pump to make the radiator much hotter than any single GPU, and even use energy from the heat cycle to power the pumps; and I imagine in such a data center the power draw of the heat pump would be tiny compared to the GPUs.
To be clear, I'm not advocating KSP as a reality simulator, or claiming that data centers in space aren't totally bonkers. However, the reality is that the hotter the radiator, the smaller the surface area needed for purely radiative dissipation of heat.
I am referring to "using a heat pump to make the radiator hotter than the GPU" as "creating a thermal gradient." No matter the technology, moving heat like this is always pretty expensive in power terms, and the price goes way up if you want the radiator hotter than the thing it's cooling.
Can you point to a terrestrial system similar to what you are proposing? Liquid cooling and phase-change cooling in computers always have a radiator that is cooler than the component being chilled.
You can do this in theory, but it takes so much power that you are better off with modest heat pumping to much bigger passive radiators that are cooler than your silicon (like everything else in space).
Yeah, but the key is that it's not the power draw that's the issue; it's the dissipation of thermal energy through pure radiation. The heat of the radiator is really important because it reduces the required surface area immensely as temperature scales up.
However, the radiators you're discussing are not purely radiative. They transfer most of their heat to some other material, like forced air. This is why they are cooler - they aren't relying on the heat of the material to radiate rapidly enough.
I would note an obvious terrestrial example, though: a home heat pump. The typical radiator is actually hotter than the home itself, and especially hotter than the heads and the material being circulated. Another is any compression-cycle refrigerator, where the coils are much hotter than the refrigerated space. Peltier coolers even more so: you can freeze the nitrogen in the air with a Peltier tower, but the hot surface is intensely hot, and unless you can move the heat away from it rapidly, the Peltier effect collapses. (I went through a period of trying to freeze air at home for fun, so there you go.)
For radiation of heat the equation is

P = εσAT⁴

where:

• P = radiated power
• A = surface area
• T = absolute temperature (Kelvin)
• ε = emissivity
• σ = Stefan–Boltzmann constant
This means radiated power increases with the fourth power of the material's temperature, which is a dramatic amount of variation as it scales: if you can expend the power to double the absolute temperature, the radiator emits 16x the heat, so you can use much lower mass and surface area.
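To put rough numbers on it - my own back-of-envelope, assuming ε = 0.9 and 1 MW of heat to reject:

  // Required radiator area A = P / (ε σ T⁴), from Stefan–Boltzmann
  const SIGMA = 5.670e-8; // W·m⁻²·K⁻⁴
  const area = (P: number, eps: number, T: number) => P / (eps * SIGMA * T ** 4);

  console.log(area(1e6, 0.9, 300));  // ≈ 2420 m² at 300 K
  console.log(area(1e6, 0.9, 600));  // ≈ 151 m²  at 600 K (16x smaller)
  console.log(area(1e6, 0.9, 1200)); // ≈ 9.5 m²  at 1200 K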
This is why space-based nuclear reactors invariably use high-temperature radiators. The ideal radiators are effectively carbon radiators, in that they have nearly perfect emissivity and extraordinarily high temperature tolerances, and even get stronger at very high temperatures. They're just delicate and hard to manufacture. This is very different from conduction-based radiators, where metals are ideal.
Making your radiator hotter than the thing you're pulling heat out of is very, very expensive in energy terms. This is why home AC is so expensive and why nobody uses systems like this to cool computers. All that energy has to come from a solar panel you fly, too, so you're not saving mass by doing this. You're just shifting it from cooling to power. If you need 200W to cool 100W of compute, you're tripling the amount of power you need to do that work.
Also, Peltiers are less energy-efficient than compressors. That is why no home AC uses a Peltier.
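For scale, the thermodynamic floor alone is steep. A back-of-envelope sketch (my numbers, assuming a 350 K chip and a 700 K radiator):

  // Minimum (Carnot) work to pump heat Q from Tc up to Th: W = Q * (Th/Tc - 1)
  const carnotWork = (Q: number, Tc: number, Th: number) => Q * (Th / Tc - 1);

  console.log(carnotWork(100, 350, 700)); // 100 W of work just to lift 100 W of heat
  // Real machines run well below Carnot (say ~40%), so ~250 W in - which is
  // the neighborhood of the "200 W to cool 100 W of compute" figure above.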
As a piece of software it's pretty awful, frankly. Here's an example: memories are written using the write function, but the write function truncates files. There is an edit function, but it requires substitution and can't append. There is no append function.
It also depends heavily on large models. It's not practical so far to run on anything that fits into a 4090, because the tool-calling semantics are complex and the instructions are pretty vaguely grounded by default. It requires a lot of prompt tuning to work even marginally. With clearer semantics in the tools and some fine-tuning, things would probably be better on this front. I've tried a variety of quantized high-tool-following models and it's pretty hit or miss. The protocols around heartbeat and such are unnecessarily complex and agentic when they could more reasonably be imperatively driven. It seems to depend on token burning for life, more or less.
I frequently see sessions getting confused internally, and general flakiness.
It has, however, made me consider what a system like this might look like with a better hierarchical state machine, a management interface, hierarchical graph-based memory, etc. It's too bad I've got a day job and a family at a time like this! It's a fun time in computing, IMO.
Who knows, but ICE agents are terrorizing people with the threat. It's fascinating that the Ministry of Information is trying to redefine "terrorist" to mean something other than people who try to induce mass terror for political gain.
Would be cool if America didn't allow law enforcement to lie to citizens, so we could actually know what is going on and what law enforcement is doing. But I guess police secrets/secret policing/lies are better for an open society.
Along with MAGA supporters who buy pizzas and leave threatening messages for judges and politicians who rule against or oppose Trump after he makes a social media post decrying them. Senator Elissa Slotkin talked about all the death threats she and her family received when the president called her treasonous, along with the five others who reminded military and intelligence members of their oath to the Constitution.
Let's be clear tho - they are betting on two markets with no history or assurance of success, while crapping the bed with their golden goose.
Tax credits and foreign competition are bad enough, but they really could innovate through it. They could even try robots and taxis and see if either floats. Instead it's sink the ships behind them, because Elon is never wrong.
Their only advantage is that they can raise ludicrous capital to spend their way through many years of dumb. But that's probably not enough to salvage the tire fire.
My odds on their humanoid robots are basically zero. China is too far ahead here, and I'm not convinced there's a material market that could develop fast enough. I am a fan of Waymo and use FSD, but I don't believe in their taxi product - they're too far behind Waymo. I don't think they will gain the regulatory traction they'll need, especially after pissing off every Democrat and their mayors and city councils.