Hacker News | leohart's comments

I have found this as well. A CLI outputs text and accepts text input interactively, exactly the interface most conducive to an LLM trained on text.

I believe that as vision/multi-modal models improve, we will see even crazier interaction surfaces.

RE: DuckDB. I have a wonderful time with ChatGPT talking to DuckDB, but I have kept it to an in-memory DB only. Do you set up some system prompt that tells it to keep a DuckDB database locally on disk in the current folder?


> RE: DuckDB. I have a wonderful time with ChatGPT talking to DuckDB, but I have kept it to an in-memory DB only. Do you set up some system prompt that tells it to keep a DuckDB database locally on disk in the current folder?

No, I don't use DuckDB's database format at all. DuckDB for me is more like an engine to work with CSV/Parquet (similar to `jq` for JSON, and `grep` for strings).

Also, I don't use web-based chat (you mentioned ChatGPT) -- all these interactions go through agents like Kiro or Claude Code.

I often have CSVs that are 100s of MBs and there's no way they fit in context, so I tell Opus to use DuckDB to sample data from the CSV. DuckDB works way better than any dedicated CSV tool because it packs a full database engine that can return aggregates, explore the limits of your data (max/min), figure out categorical data levels, etc.

For Parquet, I just point DuckDB to the 100s of GBs of Parquet files in S3 (our data lake), and it's blazing fast at introspecting that data. DuckDB is one of the best Parquet query engines on the planet (imo better than Apache Spark) despite being just a tiny little CLI tool.

One of the use cases is debugging results from an ML model artifact (which is more difficult than debugging code).

For instance, let's say a customer points out a weird result in a particular model prediction. I highlight that weird result and tell Opus to work backwards to trace how the ML model (I provide the training code and inference code) arrived at that number. Surprisingly, Opus 4.6 does a great job using DuckDB to figure out how the input data produced that one weird output. If necessary, Opus will even write temporary Python code to call the inference part of the ML model on a sample to verify assumptions. If the assumptions turn out to be wrong, Opus will change strategies. It's like watching a really smart junior work through the problem systematically.

Even if Opus doesn't end up nailing the actual cause, it gets into the proximity of the real cause and I can figure out the rest (usually it's not the ML model itself, but some anomaly in the input). This has saved me so much time in deep-diving weird results.

Not only that, I can have confidence in the deep-dive because I can run the exact DuckDB SQL to convince myself (and others) of the source of the error, and that it's not something Opus hallucinated. CLI tools are deterministic and transparent that way (unlike MCPs, which are black boxes).


This is the one you want https://www.seeedstudio.com/reTerminal-E1002-p-6533.html?srs...

It's Spectra color, so high-res and beautiful, with a built-in ESP32.


Do you have a recommended hot air station for tinkerer use? I am trying to move up from breadboarding to something closer to a field-deployable proof of concept.


Look up '959D' as a model. They're serviceable hot air stations made by a bunch of different white-label vendors, usually $40-60 on the likes of Amazon and cheaper elsewhere.


I like Zed. It's super fast and seems to be improving constantly.

My issue with it is that when I use it on my codebase, it doesn't like my (probably old-style) eslintrc. So it decides to go ahead and reformat my file on save :(.


You can disable this in your global or project settings, and even per language: https://zed.dev/docs/configuring-zed#format-on-save
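
For example, a settings.json along these lines should stop the reformatting, per the linked docs (the per-language block is optional if you want it off everywhere):

```json
{
  "format_on_save": "off",
  "languages": {
    "JavaScript": { "format_on_save": "off" },
    "TypeScript": { "format_on_save": "off" }
  }
}
```
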


I've seen a lot of similar complaints. I think it would help if, when Zed sees a new codebase, it offered an onboarding questionnaire and applied the settings that can't be auto-detected from the codebase itself: coding styles, linting configuration, etc.


I think this would be pretty trivial to fix with Zed's autoformatter settings.


I would love to love Zed. On paper it's everything I want.

But it's a text editor first, and what I want in a text editor, non-negotiably, is good text editing. Except font rendering is atrocious and broken on Linux (what I use) and on Windows (what my employer forces me to use).

I understand it's an alpha/beta. But still.


I think Monodraw is way better, and at a one-time price of $10 it really is a fantastic piece of software.


Too bad it's only for a not-so-fantastic OS.


While true, I believe the reason Monodraw doesn't expand is that the ASCII/TUI tooling economy is not rewarding. There are few users, and even fewer customers willing to pay for ASCII/TUI tools.

If you look at tooling companies' annual revenues:

- Atlassian: $1.4B
- JetBrains: $400M
- Vercel: $100M
- Supabase: $16M
- Prisma: $10M (maybe?)

we dive into the long tail quickly. I would be very surprised if Monodraw makes more than $1M a year. At that revenue, it's going to be hard to expand into a multi-platform/web strategy (especially if your existing product is platform-locked).


This is a super awesome project. Truly the best of all worlds: a real keyboard, beefy enough compute, and XR glasses with a constant use case.

How is the Xreal One Pro for extended use? My concern is having to put up with a low-res screen as I code away.


It's quite good! I have both the Ones and the One Pros. The Pros are a pretty noticeable leap in clarity and FOV in my experience, and the screen is as readable as any other 1080p monitor.


Can you link to some low-cost FPGA info? I have always been interested in FPGAs as a way to improve run speed for certain code.


sure thing! https://a.co/d/hWkyPAw

search amazon for ICESugar-nano


I have always found Bitwarden to be the best one after trying many alternatives. One thing that stood out is how seamlessly its phone app works with FaceID/fingerprint. Going from logged out to logged in is as smooth as allowing your phone to use biometrics.

Bitwarden seems to be getting updates often as well which I value in a security conscious product.


This is awesome. With this, I would be able to run JS code that my users provide. I have been looking for a way to bundle my users' TypeScript code with a bundler in a sandboxed environment. Any recommendations on ways to run a bundler (webpack/...) in QuickJS?


I don't know about using QJS, but if you want to run a bundler in the browser that sounds like the sort of thing that WebContainers[1] were built for.

[1]: https://webcontainers.io/


Oh? Any links?


