Show HN: I got tired of switching AI tools, so I built an IDE with 11 of them (hivetechs.io)
13 points by hivetechs 13 hours ago | 13 comments
Each AI has strengths - Claude reasons well, Gemini handles long context, Codex integrates with GitHub. But switching between them means losing context.

Built HiveTechs: one workspace where Claude Code, Gemini CLI, Codex, DROID, and 7 others run in integrated terminals with shared memory.
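
For the terminal side, here is a stripped-down sketch of how one of those CLIs can be hosted in an embedded PTY. This is illustrative only: it assumes node-pty (the usual choice in Electron apps), and the 'claude' binary name is a placeholder for any of the CLIs.

  // Sketch only: spawn an AI CLI inside a pseudo-terminal so its
  // interactive prompts behave as they would in a normal shell.
  import * as pty from 'node-pty';

  const term = pty.spawn('claude', [], {   // placeholder: any of the integrated CLIs
    name: 'xterm-256color',
    cols: 120,
    rows: 30,
    cwd: process.cwd(),
  });

  term.onData((data) => {
    // In the real app this output would be forwarded to the renderer's
    // terminal view; echoing locally keeps the sketch self-contained.
    process.stdout.write(data);
  });

  // Keystrokes from the UI go straight back into the CLI.
  export function writeToTerminal(input: string) {
    term.write(input);
  }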

Also added consensus validation: 3 AIs analyze independently, then a 4th synthesizes their answers.
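
Roughly the shape of that pass, as a simplified sketch (not the exact implementation; it assumes OpenRouter's OpenAI-compatible chat completions endpoint, and the model IDs and prompts are placeholders):

  // Sketch of a consensus pass: N models answer independently,
  // then a separate model synthesizes their answers.
  const OPENROUTER_URL = 'https://openrouter.ai/api/v1/chat/completions';

  async function ask(model: string, prompt: string): Promise<string> {
    const res = await fetch(OPENROUTER_URL, {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ model, messages: [{ role: 'user', content: prompt }] }),
    });
    const json = await res.json();
    return json.choices[0].message.content;
  }

  async function consensus(question: string): Promise<string> {
    // Placeholder model IDs; in practice these come from the user's profiles.
    const analysts = ['anthropic/claude-3.5-sonnet', 'google/gemini-2.5-pro', 'openai/gpt-4o'];
    const answers = await Promise.all(analysts.map((m) => ask(m, question)));

    const synthesisPrompt =
      `Question: ${question}\n\n` +
      answers.map((a, i) => `Analyst ${i + 1}:\n${a}`).join('\n\n') +
      '\n\nSynthesize a single answer, noting where the analysts disagree.';
    return ask('anthropic/claude-3.5-sonnet', synthesisPrompt);  // placeholder synthesizer
  }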

Real IDE with Monaco editor, Git, PTY terminals. Not a wrapper.

Looking for feedback: hivetechs.io





The 'multi-model consensus' feature actually looks very useful! I'm going to give this a go.

A question on OpenRouter - is it just a place to consolidate the various AI models through one billing platform, or does it do more than that? And are the costs slightly more as they take a cut in between?


> is it just a place to consolidate the various AI models through one billing platform, or does it do more than that

You can easily switch models, use the cheapest provider (especially for open models), and you don't have to reach certain "tiers" to unlock higher rate limits like you might on OpenAI's or Anthropic's direct offerings.
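
To make the switching point concrete: the request shape stays identical and only the model string changes, something like the sketch below (the model IDs are just examples, and the provider.sort hint is how I read their provider-routing docs, so treat it as an assumption):

  // Same request body, different "model" string; that's the whole switch.
  async function complete(model: string, prompt: string): Promise<string> {
    const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model,
        messages: [{ role: 'user', content: prompt }],
        // For open models, ask OpenRouter to route to the cheapest provider.
        provider: { sort: 'price' },
      }),
    });
    return (await res.json()).choices[0].message.content;
  }

  // e.g.:
  //   await complete('openai/gpt-4o', 'Review this function...');
  //   await complete('meta-llama/llama-3.1-70b-instruct', 'Review this function...');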

> And are the costs slightly more as they take a cut in between?

About 5%: you buy credits upfront and pay a 5% fee on top. Aside from that you pay the normal listed prices (which have always matched the direct providers AFAIK).
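
As a concrete example: buying $100 of credits costs about $105, and after that a model listed at $3 per million input tokens still bills at $3 per million, same as going direct.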


Note that you also might need to think a little bit about caching: https://openrouter.ai/docs/guides/best-practices/prompt-cach...

Depending on how the context grows, it can matter quite a bit!
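
The practical upshot, as I understand those docs: keep the big, stable part of the prompt first and byte-identical between requests, and for providers that use explicit breakpoints (Anthropic-style) mark it with cache_control. A rough sketch of the message shape (the constants are placeholders):

  // Placeholders for the sketch.
  const LONG_STABLE_PROJECT_CONTEXT = '...repo map, conventions, open files...';
  const latestUserQuestion = 'Why does the memory sync fail offline?';

  // Cache-friendly layout: the stable context goes first and stays
  // byte-identical between calls; only the tail changes per request.
  const messages = [
    {
      role: 'system',
      content: [
        {
          type: 'text',
          text: LONG_STABLE_PROJECT_CONTEXT,
          // Explicit cache breakpoint for Anthropic-style caching via OpenRouter.
          cache_control: { type: 'ephemeral' },
        },
      ],
    },
    { role: 'user', content: latestUserQuestion },
  ];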


Great call-out! Yes, I have tried to follow these to keep Consensus compliant with OpenRouter's prompt-caching best practices.

Appreciate the reply mate, thank you.

What's great about OpenRouter is that you get access to all providers and models, and they do the work of standardizing the interface. Our new HiveTechs Consensus IDE configures 8 profiles for you and your AI conversations, each using its own LLM from OpenRouter, plus unlimited custom profiles: you pick the provider and LLM from a list and name the profile. We also have our own built-in HiveTechs CLI that lets you use any LLM from OpenRouter, updated daily, so the moment a new model drops you can test it without waiting for it to show up in your other favorite apps.
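
Conceptually a profile is just a name tied to an OpenRouter model ID; purely as an illustration (this is not the literal config format), something like:

  // Illustration only, not the actual HiveTechs config format:
  // each named profile points at its own OpenRouter model.
  const profiles: Record<string, string> = {
    'Frontend Reviewer': 'google/gemini-2.5-pro',
    'Backend Refactorer': 'anthropic/claude-3.5-sonnet',
    'Quick Lookup': 'openai/gpt-4o-mini',
  };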

My apologies for the digression, but it reminds me of a post I saw a long time ago where a guy installed all the antivirus/antimalware software he could find on a Windows machine. It started an antivirus civil war and Windows fell into a coma within seconds.

I haven't admined Windows for 16 years, but back in the day I had this conspiracy theory: when you installed antivirus A, you'd find some viruses; then you'd install antivirus B and find some more; but when you went back to antivirus A, you'd find a couple more - that the free versions were installing their own viruses. It might just have been because I was using bootleg copies.

I started using Gemini and see no need for other models.

Hey, I fully understand. A model or CLI like Gemini releases a new version and it seems like a new place to call home. However, in this period of AI growth, each provider's new advancements are a reason to change: today it's Gemini for you, perhaps next week it's Claude or OpenAI. With HiveTechs Consensus you have access to every leading provider at all times, so use the one you love and compare the others any time. You may discover that Gemini excels at frontend while Claude's latest model excels at backend, reducing your development time.

is this a vscode fork? how compatible are existing vscode extensions with this? what is your tech stack?

No, this is not a fork; I built it from scratch. It is not intended to be used with VS Code extensions. It's an Electron app.

  Desktop Framework

  - Electron - Desktop app with main/renderer process architecture
  - TypeScript - Primary language (strict mode)

  Frontend/UI

  - Monaco Editor - VS Code-style code editing
  - HTML/CSS - UI rendering
  - WebSockets - Real-time communication with backend

  Backend Services

  - Node.js - Runtime
  - Express - Memory Service API server
  - SQLite - Local database for memory persistence
  - Cloudflare D1 - Remote sync for memory backup
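
If it helps picture the shared-memory piece, here's a stripped-down sketch of the idea (Express plus SQLite via the better-sqlite3 bindings; the table, routes, and port are simplified placeholders, not the real schema):

  // Stripped-down sketch of a local memory service:
  // an Express API in the backend process, persisting to SQLite.
  import express from 'express';
  import Database from 'better-sqlite3';

  const db = new Database('memory.db');
  db.exec(`CREATE TABLE IF NOT EXISTS memory (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    tool TEXT NOT NULL,        -- which CLI/agent wrote it
    content TEXT NOT NULL,
    created_at TEXT DEFAULT (datetime('now'))
  )`);

  const app = express();
  app.use(express.json());

  // Any integrated tool can append to the shared memory...
  app.post('/memory', (req, res) => {
    const { tool, content } = req.body;
    db.prepare('INSERT INTO memory (tool, content) VALUES (?, ?)').run(tool, content);
    res.status(201).end();
  });

  // ...and any tool can read what the others have stored.
  app.get('/memory', (_req, res) => {
    res.json(db.prepare('SELECT * FROM memory ORDER BY id DESC LIMIT 50').all());
  });

  app.listen(3939);  // port is arbitrary for the sketch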

interesting, did you ever consider building it with tauri?


