Very fun. You've hidden the controls on the video. Is it because you want it to be more of a game and prevent people (normies at least) from seeking through the video, or is there some other reason?
I built it earlier and also did a Show HN; now I'm going through some of the steps that get recommended to me, such as creating a Product Hunt launch. But I'm struggling a bit with the concept of PH. What is the audience? People into new apps? It all feels a bit desperate to be honest, and this app is just a hobby side project.
So if anyone knows of a good way to get some attention to my useful fun tool, please let me know.
I feel like the audience of the file is more me, the reader, than the LLM.
> Add this file to your AI assistant's system prompt or context to help it avoid
common AI writing patterns.
So if I put this into my LLM's conversation, I'm effectively instructing it to add the file to its own AI assistant's system prompt: the AI assistant's AI assistant.
The alternative is to say:
"Here is a list of common AI tropes for you to avoid"
All the tropes are described for me, so I can understand what AIs do wrong:
> Overuse of "quietly" and similar adverbs to convey subtle importance or understated power.
But this in fact instructs the assistant to start overusing the word 'quietly' rather than stop overusing it.
This is counteracted a bit by the 'avoid the following...' framing, but it means the file is full of contradictions.
Instead you'd need to say:
"Don't overuse 'quietly', use ... instead"
So while this is a great idea and a great list, I feel the execution is muddled by the explanation of what it is. I'd separate the presentation for us, the users of assistants, from the version aimed at the intended consumers, the actual assistants.
I've had Claude rewrite it and put it in this gist:
The source doc and the gist name dozens of specific bad patterns by label (“Negative Parallelism,” “Gerund Fragment Litany,” etc.) and repeat examples of them.
An LLM guide would do better to avoid every one of those labels and examples, since the whole point is not to prime the pattern.
Instead each instruction should describe the positive shape of good writing – what a well-constructed sentence, paragraph, or piece actually looks like.
I completely agree. This is a good list, but a poor prompt.
Also, I sometimes find a sort of Streisand effect: when you tell the LLM to avoid something, it starts doing it more. If you say "don't use delve", the prompt still contains the words "use delve", which, amongst a larger context, seems to get picked up.
I have more success telling the LLM to write in the style of a particular author I like. It seems to activate different linguistic patterns and feel less generic.
Then I make an "editor agent" comb through, looking for tropes and rewording them. Its sole focus is eliminating the tropes, which seems to work better.
Hey HN, I'm Achilleas. I built Decoy because I didn't feel like running Docker every time I needed to test some webhooks. All I really wanted was to return a custom response on localhost to test my application code interacting with external systems.
Decoy is a native Mac app (SwiftUI + Network.framework) that lets you create mock HTTP endpoints. You pick a method, define a response (JSON, HTML, XML, a file, a redirect), and it just works. Incoming requests show up in real time with full headers and body, which is handy for debugging webhooks. It supports parameterized paths like /users/:id and you can group endpoints into projects with subdomain support. Requests are persisted to SQLite immediately and CORS is handled automatically.
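For anyone who hasn't used mock endpoints for webhook testing, the core idea is small enough to sketch. This is a minimal, hypothetical Python stand-in (not Decoy's implementation; the payload and paths are made up): serve a canned JSON response on localhost and log whatever the webhook sends you.

```python
# Hypothetical sketch of a local mock endpoint, NOT Decoy's code:
# return a canned JSON response and log incoming webhook requests.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Made-up payload your application code expects from the "external" system.
CANNED = {"status": "paid", "id": "evt_123"}

class MockHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and log the incoming webhook body for debugging.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("received webhook:", self.path, body[:200])
        # Reply with the canned response, whatever the path.
        payload = json.dumps(CANNED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

def serve(port: int = 8080) -> None:
    """Serve mock responses on localhost until interrupted."""
    HTTPServer(("127.0.0.1", port), MockHandler).serve_forever()
```

Decoy wraps this idea in a UI and adds the parts that are tedious to hand-roll: per-method responses, parameterized paths, request persistence, and CORS.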
To be clear about what it isn't: it doesn't send requests (not a Postman replacement), it's not a proxy, and it's Mac only.
It's $24.99 on the Mac App Store, one-time purchase. I'm an indie developer — no team, just me building tools I want to use myself.
Fair question. The reason I built Decoy is that I wanted a good-looking native macOS app. I hadn't really considered Mockoon, to be honest, but looking at it now I can see a few differences that might be worth weighing when choosing between the two.
Decoy is 10 MB vs Mockoon's 329 MB, and it looks and feels different because it is a native rather than cross-platform app.
Mockoon has lots of great features, such as online mocks and proxies, for which you can pay a monthly fee; or you could have a lighter-weight app that does one thing well, natively, for a one-time purchase.
I also notice that Mockoon calls home even when I don't have an account, probably for some sort of tracking. There is no tracking in Decoy.
So overall I think it comes down to the lightness of the application and the UX.
I see the word 'replication' mentioned quite a few times. Is this managed by pgdog? Would I be able to replace other logical replication setups with pgdog to create a High Availability cluster?
I'll need a bit more info about your use case to answer. We use logical replication to move data between shards, with the intention of creating new shards.
This is managed by PgDog. We are building a lot of tooling here, and a lot of it is configurable and can be used separately. For example, we have a CLI and admin database commands to set up replication streams between databases, irrespective of their sharded status, so it can be used for other purposes as well, like moving tables or entire databases to new hardware. If you keep the stream(s) running, you can effectively keep up-to-date logical replicas.
We don't currently manage DDL replication (CREATE/ALTER/DROP) for logically replicated databases; this is a known limitation that we will address shortly. After all, we don't want users to pause schema migrations during resharding. Once that piece is in, I think you'll be able to run pretty much any kind of long-lived logical replicas for any purpose, including HA.
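For context, the primitive being discussed here is plain Postgres logical replication. A bare-bones setup between two databases looks roughly like this (table and object names are made up; PgDog's CLI and admin commands automate this kind of setup rather than expose it as raw SQL):

```sql
-- On the source database: publish the tables to replicate.
-- (Hypothetical names; not PgDog's actual commands.)
CREATE PUBLICATION shard_move FOR TABLE users, orders;

-- On the destination database: subscribe to the publication.
-- Initial table data is copied, then changes stream continuously,
-- so the subscriber stays an up-to-date logical replica.
CREATE SUBSCRIPTION shard_move_sub
    CONNECTION 'host=source-db dbname=app user=repl'
    PUBLICATION shard_move;
```

Note that stock logical replication has the same DDL gap mentioned above: CREATE/ALTER/DROP are not replicated, so schema changes must be applied on both sides until tooling handles it.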
Heh, I find Codex to be a far, far smarter model than Claude Code.
And there's a good reason the most "famous" vibe coders, including the OpenClaw creator, all moved to Codex: it's just better.
Claude writes a lot more code to do anything: tons of redundant code, repeated code, etc. Codex is the only model I've seen that occasionally removes more code than it writes.
Funnily enough, I've been using Codex 5.3 on maximum thinking for bug hunting and code reviews, and it's been really good at it (it just seems to have a completely different focus than Opus).
I generally don't like the way codex approaches coding itself so I just feed its review comments back in to Claude Code and off we go.
I just created an OpenCode skill where both these models will talk to each other and discuss bug-finding approaches.
In my experience, two different models together work much better than one, which is why this subscription banning is distressing: I won't be able to use a tool that can use both models.
Bro science is rampant in the AI world. Every new model that comes out is the best there ever was, every trick you can think of is the one that makes all the other users unsophisticated, "bro, you are still writing prompts as text? You have to put them into images so the AI can understand them visually as well as textually".
It isn't strange that this is the case, because you'd be equally hard pressed to compare developers at different companies. Great to have you on the team Paul, but wouldn't it be better if we had Harry instead? What if we just tell you to think before you code, would that make a difference?
I don't have a video doorbell so I don't know. What is so great about them? Has it changed your life in a positive way? To those who do have one that is.