
Can't comment on the cost (it varies depending on who you are and what you do in your life), but the benefit is really high.

source: I started from 0 and the very first bounty I submitted (related to a CEX) was valued at $100k+


How long did that take? That is, how long was the zero-to-reward process, and how much time did you spend on it?


Speed of development, developer experience, ready-to-go templates. You should check it out, it's really cool


"Really cool" is not compelling enough for me to decide to build a product against something like this. Here is, at a minimum, I would require to even consider:

- Comparisons against other types of stacks, like Laravel Livewire or Phoenix LiveView

- Performance metrics, stats, benchmarks, whatever. You need to be faster and more robust than the other things out there, or provide some other benefit

- A first-class self-hosted, on-premise install version without a dependency on any cloud provider. Kubernetes Helm charts, Docker Compose stacks, or whatever

- I do actually like that you have a time-windowed source-available license. That alleviates the concern of what happens if you go under

- Jepsen or similar analysis; I need to be sure that whatever consistency guarantees you are advertising hold up


One public comparison on latency is https://db-latency.vercel.app/

For comparisons, you can check out:

https://stack.convex.dev/convex-vs-firebase

https://stack.convex.dev/convex-vs-relational-databases

https://www.convex.dev/compare/supabase

https://www.convex.dev/compare/mongodb

I'll save you more of a marketing pitch, since you seem to have enough of my pitching Convex in the article :) The bullet points at the bottom of the article should be a pretty concise list - I'd call out the reactivity / subscriptions / caching. To learn how all that magic works, check out https://stack.convex.dev/how-convex-works


How does the speed of development of the entire app, not just the backend, compare to Rails + Hotwire or Laravel Livewire?


Author here, and yes, as a disclaimer: I work at Convex. As a caveat to that disclaimer, I pivoted my career to work here because I genuinely believe it moves the industry forward, thanks to the default correctness guarantees along with other things.

To this question: some of the things that accelerate full-stack development:

1. Automatically updates your UI when DB data changes: not just a per-document, one-off subscription, but based on a whole server function's execution, which is deterministic and cached. This holds regardless of whether the changes were made in the current browser tab or by a different user elsewhere. Not having to remember all the places to force a refresh when, e.g., a user updates their profile name makes development way faster. And not only do the Convex client React hooks fire on data changes, the data they return for a page render is all from the same logical timestamp. You don't have to worry about one part of the UI saying that a payment is pending while another says it's been paid.

2. End-to-end types without codegen. When you define a server function, you define the argument validators, which immediately show up on the types for calling it from the frontend. You can iterate on your backend function and frontend types side-by-side without redefining types or codegen in the loop.

3. Automatic retries for database conflicts, as well as retries for mutations fired from the client. Because mutations are deterministic and side-effect-free (other than transactional changes to the DB and scheduler), the client can keep retrying them and guarantee exactly-once execution. And if your mutation hit DB conflicts, it's automatically retried (up to a limit, with backoff), so the client can submit operations without worrying about how to recover on conflict.
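A minimal sketch of that retry idea, assuming a client-side loop with exponential backoff. The names here (`ConflictError`, `runWithRetry`) are illustrative, not Convex's actual client API:

```typescript
// Hypothetical stand-in for a database write-conflict error.
class ConflictError extends Error {}

// Retry a deterministic, side-effect-free mutation on conflict,
// with exponential backoff: 50ms, 100ms, 200ms, ...
async function runWithRetry<T>(
  mutation: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 50,
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await mutation();
    } catch (err) {
      // Non-retryable error, or out of attempts: give up.
      if (!(err instanceof ConflictError) || attempt === maxAttempts - 1) {
        throw err;
      }
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw new Error("unreachable");
}
```

The key property making this safe is that the mutation has no side effects outside the transaction, so re-running it after a conflict cannot double-apply anything.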

There's obv. a bunch more in the article about features like text search and other things out of the box, but those maybe are more conveniences on the backend, since maybe a frontend person wouldn't have been setting up Algolia, etc.
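To make point 1 (the reactive queries) concrete, here's a toy sketch of the idea, not Convex's implementation: a subscription re-runs a whole query function whenever the underlying state changes, and every subscriber is notified after the write commits, so each re-render reads the same post-write snapshot:

```typescript
type Listener<T> = (value: T) => void;

class ReactiveStore<S> {
  private listeners: Array<() => void> = [];
  constructor(private state: S) {}

  // Run `query` against the current state, and re-run it on every write.
  subscribe<T>(query: (s: S) => T, onChange: Listener<T>): () => void {
    const run = () => onChange(query(this.state));
    this.listeners.push(run);
    run(); // deliver the initial result
    return () => {
      this.listeners = this.listeners.filter((l) => l !== run);
    };
  }

  // Subscribers are notified only after the write is applied, so all
  // re-runs observe the same new state (a consistent snapshot).
  write(update: (s: S) => S): void {
    this.state = update(this.state);
    this.listeners.forEach((l) => l());
  }
}
```

A real system additionally caches query results and tracks which data each query read so it only re-runs affected queries; the sketch re-runs everything for simplicity.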


Remix is really cool too! On paper


gpt-4 was indeed trained on gpt-3 instruct series (davinci, specifically). gpt-4 was never a newly trained model


what are you talking about? you are wrong, for the record


They have pretty much admitted that GPT-4 is a bunch of 3.5s in a trenchcoat.


They have not. You probably read "MoE" and some pop article about what that means without having any clue.


If you know better it would be nice of you to provide the correct information, and not just refute things.


gpt-4 is a sparse MoE model with ~1.2T params. this is all public knowledge and immediately precludes the two previous commenters' assertions


worked 100% of the time for me


which software?


This sounds so fictionalized that it may be fake


This post is still the first in the feed


Most likely that was me being slow.


basically, yes. Pinecone? Dead. Azure AI Search? Dead. Qdrant? Dead.


Prompt token cost is still a variable.


Hard to believe OpenAI uses Qdrant when they are backed by Microsoft and thus have Azure Cognitive Search (now "AI" Search)


Cognitive Search is nowhere near as good as a 'pure' vector DB. Behind the scenes, it's a managed Elasticsearch/OpenSearch with some vector search capabilities. The 'AI' implementations I've done with Cognitive Search always boil down to hybrid (vector + FTS) text search.


In the context of RAG, the goal is not to have a pure vector DB but to gather all the relevant data we can for a user's prompt. This is where Cognitive Search and other existing DBs shine, because they offer a combination of search strategies. Hybrid search on Cognitive Search performs both full-text and vector queries in parallel and merges the results, which I find a better approach. Further, MS is rebranding Cognitive Search as Azure AI Search to bring it more in line with the overall Azure AI stack, including Azure OpenAI.


Cognitive Search already does hybrid search (vector + BM25 + custom ML reranking), and they use chunks of 2048 tokens with a custom tokenizer, so it should now be better than most vector DBs. One could probably make something better by using some version of SPLADE instead of BM25, but their secret sauce lies in their custom ML reranking model, which gives them the largest search performance boost.
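For illustration, here's a toy sketch of the hybrid merge step using Reciprocal Rank Fusion (RRF), a common way to combine a keyword ranking with a vector ranking. The scoring functions below are deliberately naive stand-ins; a real system would use BM25 and an ANN index, and possibly a reranker on top:

```typescript
interface Doc { id: string; text: string; embedding: number[] }

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const na = Math.sqrt(a.reduce((s, x) => s + x * x, 0));
  const nb = Math.sqrt(b.reduce((s, x) => s + x * x, 0));
  return dot / (na * nb);
}

function hybridSearch(
  docs: Doc[], queryText: string, queryVec: number[], k = 60,
): string[] {
  // Ranking 1: naive term-overlap count (stand-in for BM25/full-text).
  const terms = queryText.toLowerCase().split(/\s+/);
  const overlap = (d: Doc) =>
    terms.filter((t) => d.text.toLowerCase().includes(t)).length;
  const kwRanked = [...docs].sort((a, b) => overlap(b) - overlap(a));

  // Ranking 2: vector similarity.
  const vecRanked = [...docs].sort(
    (a, b) => cosine(b.embedding, queryVec) - cosine(a.embedding, queryVec),
  );

  // Merge with RRF: each list contributes 1 / (k + rank) per document.
  const score = new Map<string, number>();
  for (const ranked of [kwRanked, vecRanked]) {
    ranked.forEach((d, i) =>
      score.set(d.id, (score.get(d.id) ?? 0) + 1 / (k + i + 1)));
  }
  return [...score.entries()].sort((a, b) => b[1] - a[1]).map(([id]) => id);
}
```

RRF only needs ranks, not comparable scores, which is why it works well for fusing lists produced by very different scoring functions.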


Do you have any experience in AI search to compare it to other products?

I’m genuinely curious to know if it’s any good.


no


I wonder how many were left out

