higher ROI than anything else I've ever done for my career, by a long shot
with my website [1] I found investors, I was contacted by a highly sought-after Silicon Valley startup, moved to the US and got my visa sponsored, and I even found friends here to get my network going and a professional network much more significant than anything I could ever have on LinkedIn
the only downside is that writing on your blog takes a long time to become clearly worth it (read: years), so most people don't stick to it and never find out -- do it!
I don't think there's anything about YC inherently misaligned with the next startup era — in fact, they're adapting their model accordingly, hedging their bets + investing in harder tech with higher capex and longer feedback loops. It's gonna take a while to see the new strategy play out, much longer than before in fact. That said, agreed with the general point that the model is changing and the old playbook is not working anymore. I published 5000 words last month that try to analyze this trend within an economics framework: https://giansegato.com/essays/dawn-new-startup-era
VC management fees are typically 2-2.5% per year. If a VC fund has $100 million in committed capital, the annual management fee would generally be between $2 million and $2.5 million. It's a lot of money.
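To put the arithmetic in one place, here's a minimal sketch in Python (illustrative numbers; in practice the fee often steps down after the investment period):

    committed = 100_000_000              # hypothetical $100M fund
    fund_life_years = 10
    for rate in (0.02, 0.025):
        annual = committed * rate
        lifetime = annual * fund_life_years   # assumes no fee step-down
        print(f"{rate:.1%} fee: ${annual:,.0f}/yr, ${lifetime:,.0f} over the fund's life")
    # 2.0% fee: $2,000,000/yr, $20,000,000 over the fund's life
    # 2.5% fee: $2,500,000/yr, $25,000,000 over the fund's life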
There's a lot of nuance here. A $100mm fund could be a single guy/gal working from their office, running money in an industry they know, with a little bit of admin support. In that world, $2mm a year in fees is plenty to keep the lights on. Some fund managers I know in this situation don't call the full fee; there may be social considerations / signaling the manager prefers to make. Some spend it all and then some of their own. Absolutely none of them think that the fee is 'retirement money'; they all have eyes on the prize of a 3-5x and carried interest getting them to $100m or so, when they can do whatever thing it is they want to do that got them into running the fund.
On the other hand, a $100mm fund could be a 'contender' fund that wants to raise a large fund, and is in a competitive industry -- it's trying to get on the cap tables that, say, Mayfair gets on to, and so it needs to staff recruiting support, tech help, marketing people. Perhaps it's multi-jurisdiction. In that world, $2mm is way, way too little, and the GPs may well be financing the fund personally through fund 1 and into fund 2, depending on the follow-on raise. They are aiming at running, eventually, $1bn+ per fund in three to four stacked funds, and taking home $1bn after 20 years (or less?) of good effort for each of the original GPs.
These are caricatures, and there's much more than this to the lifecycle of venture funds, but since we're on HN and VC is a big part of the conversation, I think it's good for hackers and founders to understand the counterparties they do business with, and particularly to be able to read the signals of the VCs they talk to, while the VCs are reading the signals of a desperate, out-of-date deck.
> Absolutely none of them think that the fee is 'retirement money'
You are sure you can speak for all of them? There are tons of VCs...
To me the 2% running fee sounds pretty nice, combined with a somewhat low-pressure job compared to many others. Of course it is not nice if your fund doesn't make it, but you are guaranteed a somewhat cushy position for 5-10 years.
You've already spent a lot of time raising the fund, unpaid or paid out of the proceeds from a previous fund first. Then you have to put in a massive effort to find, sort through, and vet investments. Either there are quite a few of you, or you have nightmarish work pressure for several years unless you already have a massive rep (and it's still work).
And most LPs will expect the GPs to have significant skin in the game. E.g. at my previous employer, every staff member was expected to have at a minimum the equivalent of 1x gross yearly salary committed within a few years.
VC salaries are not that great outside the top-tier funds or unless you're one of the GPs.
It could be "retirement money" for a handful of the GPs at the top-tier funds, but they're only in that position in the first place because they have a lengthy track record, and so their past earnings from carry etc. will still dwarf any operating fee from their current fund.
Yep, I'm sure I can speak for all of the $100M-fund, work-out-of-your-home-office types. Or at least >99%. The Venn overlap between "content with believably promising people you'll make money for them", "too cheap to spend on office / marketing because your fake pitch was so good nobody will need it to feel comfortable", "enough executive function to make believable calls on believable companies while doing no sourcing work", and "$2mm for three years until people catch on, but def don't send me to prison" is absolutely zero or very close to it.
Most who run a fund like this do not think of it as low pressure or cushy, regardless of goals. Something I tell my portco CEOs a lot is that as much as they want to raise money, or need money for their company, in general, VCs they are talking to need to write checks even more. Just not bad checks.
I've not lost billions twice, but have definitely lost tens of millions, and worked alongside at least one person who lost a billion once... It's an "interesting" business to be in...
Spending a billion dollars takes a lot of effort (or so I assume; I've "only" spent millions). People will ask annoying questions like "where is the billion dollars coming from, I had no idea you were a billionaire", and ask about AML etc.
Conversely, driving down the value of a company that's already worth billions is "easy": Just publicly demonstrate your willingness to drive the company totally into the ground.
Or if you want speed, and have access to the funds, the super-fast way would be transferring a billion worth of crypto to a random address.
For my part, my biggest "losses" were paper values in startups that failed or didn't get the exits we'd hoped for. There it's also "easy".
It really varies. I worked for a VC for years, and it often takes substantial reputation to be able to demand fees like that. It also takes substantial reputation to be able to get high enough quality inbound dealflow to be able to do so with few people.
E.g. I know of a decent number of funds that size or smaller with a staff in the range of 10, a few with well above that. Even at 2% it's suddenly not so much money then, even less so when you start to factor in costs.
EDIT: You may also sometimes "on paper" have fees like that, but quietly offer discounts etc. to convince investors. On top of that there are often quite substantial requirements for at least senior staff to buy into the fund, which seriously reduce the de facto salary unless the fund also does well enough that it's the carry that matters.
> On top of that there are often quite substantial requirements for at least senior staff to buy into the fund
This is a clearly beneficial requirement, but your point is fair about it leading to 'on-paper' comp looking high. But I'd even go so far as to say that the majority of comp for senior people should be contingent (not sure if that's typical).
Yes, but it's tricky here because often it's upfront. The only reason that wasn't the case for us was that carry was unusually spread out over the team and the buy-in requirement applied to everyone, so the LPs accepted that as long as there was a clear plan in place for everyone to buy in, it was ok.
Note that given salary levels, this means that over the 10-year runtime of the fund, most of us would be giving up nearly 20% of our 10-year aggregate gross salary, most of us within 4-5 years. My gross salary during that period was not much different from my previous job - it was a pretty steep sacrifice for a shot at that carry.
> Yes, but it's tricky here because often it's upfront.
Fair. These sorts of things are usually pretty nuanced.
> it was a pretty steep sacrifice for a shot at that carry.
I totally get that, but it also seems like the ideal balance of interests. Too many obvious failure modes if you don't have enough skin in the game. Of course that works the other way too; the upside in good-to-great cases has to make it make sense.
I mean, I made the choice to join because I saw it as a good option. But even so, it was an unusually risky tradeoff: an effectively low base salary in exchange for a bigger bet on the return. I also certainly think it's understandable that LPs want it that way. The main point is that it's only lucrative if the fund pays out on carry, and you take a high risk on something that might possibly pay out ten years in the future. If it doesn't pay out, you've worked years at a not very high (for tech) salary.
Which is effectively like old-school startups when you think of it… relatively low salaries and a bunch of options that may or may not turn to gold in 10 years.
Sure, and that's fair enough as long as staff gets a big enough stake. And to be clear, we did. Every single person outside the exec team/general partners had an unusually high stake in the total carry. Leaving aside that our main investor buying us out and turning it into a boring corporate wasn't the endgame we had in mind, it was one of the most enjoyable startup experiences I've had (we were a bit of a hybrid, in that while we were operating a single fund, a lot of my work was towards getting tech in place to optimize delivery of subsequent funds).
A lot of startups think it's still ok to pay under the odds when hiring staff who are getting tiny fractions of a percent, though, and at that point, the risk-adjusted value of those options is not worth taking a pay cut for relative to a bigger corporate with somewhat predictable share performance and liquidity.
Is that 2% on cash the VC fund directly contributed, or is it 2% on total funds injected into the business including loans the VC saddles the business with? (Or is that kind of leverage typically only done by private equity funds?)
Whatever the percentage is for a given fund, this is the rate the investors in the fund (the limited partners) pay to the fund managers (the general partners) to manage the fund. It's separate from what the companies they invest in get.
So around 5-10 software engineers (compensation in finance seems mostly similar to SW), not including any other costs like office, etc? That's not a lot of money.
that's not true. you have it written explicitly in the final check. you just don't have to do the mental gymnastics of figuring out how much you'll pay before ordering, which is absolutely absurd
The only "good side" of having price sticker without tax is that taxes can change as they wish and it cannot be used as an excuse to change the price anyway.
It's not that common, but sometimes governments do apply VAT changes, and sellers usually use it to push prices up more, because people are already expecting it. Or if the VAT goes down, they don't reduce the final price by as much.
But overall, as a European, I prefer how we do it.
I don't understand the general tenor and sentiment of the comments here. I've taken dozens of self-driving car rides in SF in the last few months. They were magical, and _they work_.
It blows my mind that the general defeatist tone is "it can't be done", while it's literally happening right now. It took a bajillion dollars and decades of work, but we're past the tipping point now.
Sure, regulation must happen; it's not like a chatbot, where the worst-case scenario for screwing up is being canceled for a week. Lives are on the line. But an outright "it's impossible and must be stopped" is literally against progress.
Yes, Tesla's FSD sucked and they generated a lot of bad reputation, both for themselves and the industry. But FSD v12 (end-to-end ML), their latest release, is leaps and bounds ahead of v11. I only used to use v11 on relatively empty highways, like cross-country road trips.
With v12, I leave it on 95% of the time - their cameras see more than I do, and process things quicker than I can. The onus is still on me to pay attention, and I do. Yes, there will be idiots who don't. But then again, there are idiot drunk drivers as well.
I am beginning to believe that in a year, Tesla v12 will be really really good, and safer on the road than an average human driver. It probably already is. I haven't researched the stats.
But the current state of the art is Waymo - at this point, a Waymo is actually safer than human drivers. People need to take a few rides in them to believe it - it's almost a solved problem to navigate on city roads.
I think this is why they are moving into robotics. They're close enough to solving autonomous driving that they've sign-posted that it's a tractable problem for everyone else. We can reasonably expect the other major manufacturers to catch up and once the technology is widespread and EVs are the norm, Tesla has much less of a competitive edge in their core market.
There will be tipping points where it's consistently better than humans so long as there's someone supervising, and then again when it's better without someone supervising. Beyond that point it's just incremental changes.
I jumped into my co-worker's Tesla to go to lunch. I asked him if he ever uses the self-driving feature. He turned it on, and in less than a second, the car veered directly into the middle turning lane. I watched him yank at the wheel and disable the self-driving mode, explaining, "It's good but sometimes it does that".
It's hard to look at a system where you have "AI" directly causing human deaths and not have a knee-jerk reaction that it can't be done and should be regulated out of existence, even if it's objectively safer than humans. It's an emotional position but as they say it's nigh impossible to reason someone out of a position they didn't reason themselves into.
It's the cars themselves that are dangerous. Whether people or robots are driving them. We used to have sensible regulation to ensure safety, which was unfortunately repealed, resulting in millions of avoidable deaths:
"...this restricted the speed (of horse-less vehicles) to 2mph in towns & 4mph in the country. The Act also required three drivers for each vehicle – two to travel in the vehicle and one to walk ahead carrying the infamous red flag."
What you are describing is essentially anecdata in the grand scheme of things.
Yes, there are absolutely scenarios and situations where FSD is a solved problem. The issue is that, relative to all of the situations that occur across the country (and across the world) daily, the percentage of daily miles driven where FSD can perform flawlessly is likely less than 5% of total miles.
You asserted that, "the percentage of daily miles driven where FSD can perform flawlessly is likely less than 5% of total miles".
If FSD can operate in situations such as the one in the video, you are wrong by at least an order of magnitude. And FSD does not have to perform flawlessly to perform better than humans.
Except that without a lot more information on that vehicle we don't know if it is optimized for a very specific scenario (low speed driving in crowded areas), or if it could perform similarly on a highway at 65MPH during snow or rain.
There are TONS of examples of FSD working well in various scenarios online. Having worked in video analytics/AI for the last 15 years, I have seen all kinds of demo videos that are essentially highlight reels, while the evidence of the product utterly failing is not released.
It's not about if it can be done or not. Waymo is very impressive, and they continue to expand their scope. Good on them. But their sensor suite is very expensive.
BlueCruise is not self-driving. Neither is Tesla's Full Self Driving; despite the name, read their letter to the CA DMV.
Both of those systems will absolutely ignore stationary vehicles in the path of travel when travelling at highway speeds.
World doesn't end at SF borders, nor does it revolve around it.
I can come up with tons of corner cases where I simply won't risk my life and my whole family's just because some tech bro said so on the internet. And you know, the tons of corner cases that I sometimes experience all over the world add up to some major percentage.
By all means be a betatester, but don't force it down the throats of unsuspecting non-tech users who often trust what manufacturers claim.
> [...] I simply won't risk life of me and my whole family just because some tech bro said so on the internet.
There isn't the option to take no risk - there are over a million deaths a year[0] from regular road traffic crashes.
Region-restricted automated taxi services like Waymo are already looking pretty safe compared to human drivers, though there is a lot of selection bias (good conditions, well-mapped US cities, ...).
> Every year the lives of approximately 1.19 million people are cut short as a result of a road traffic crash. Between 20 and 50 million more people suffer non-fatal injuries, with many incurring a disability.
(for disclosure, if anyone is confused about JoeAltmaier's reply: the above link wasn't initially in my original comment - I had edited it in shortly before refreshing and seeing their reply)
Further, those two stats are irreconcilable? There are not enough countries in the world to add up to nearly 2M deaths per year, given that the top rates are in the US and Russia. Who else has as many cars? Even China and India can't contribute much because of the dearth of cars.
This sort of risk analysis is always baffling to me. I live in a big city. It feels like there is some near miss collision to me almost every week. 99.99% of the time it's because of human drivers, not automated systems.
It's virtually guaranteed that in your lifetime you'll have a collision or near miss with a human. And no, it doesn't take "special conditions", it could be a perfectly sunny day and a clear road and someone will do something crazy.
So that's what we put up with on a daily basis, deep threats to human life by human drivers, our safety standards on letting humans drive so low as to be utterly comical. And yet all the handwringing is done over incredibly rare situations where an AI system screws up and its human driver also screws up at oversight.
You don't understand human psychology, nor a few simple facts.
I choose how I behave in various situations. I know I am a way-above-average driver, with most of my kms driven in a rear-wheel-drive BMW; I keep my distances, do defensive driving, etc. I pick scenarios; I choose how I do the 'battles'. If somebody else does something stupid and 'unique', I trust myself way, way more than some 'AI' in beta test - it's not just reaction time but experience, a massive amount of anticipation where I see bad drivers and overtake them before they do something stupid, etc.
Maybe it's emotional, but I am a highly logical person and don't let emotions interfere with decisions much. Still, no. I kept saying "in 10 years", but this goalpost basically keeps moving as time passes, so I've understood it's in the "maybe in my retirement" category and stopped expecting mass adoption earlier.
Surprise me world, I would love that. But I am being realistic, not bullish just because it would be so nice to have robo cars and taxis.
> Six Waymo robotaxis blocked traffic moving onto the Potrero Avenue 101 on-ramp in San Francisco on Tuesday at 9:30 p.m. (...) While routing back to Waymo’s city depot that evening, the first robotaxi in the lineup came across a road closure with traffic cones. The only other path available to the vehicles was to take the freeway, according to a Waymo spokesperson. (...) the company is still only testing on freeways with a human driver in the front seat. (...) After hitting the road closure, the first Waymo vehicle in the lineup then pulled over out of the traffic lane that was blocked by cones, followed by six other Waymo robotaxis.
Also gotta point out, the article uses such a disingenuous way to put it. The Waymo didn't "pull over out of the traffic lane that was blocked by cones"; the car stopped in the lane of travel and put its flashers on, as evidenced by the video at the top of the article.
I think a vicious PR campaign against the people doing the lobbying and the people being lobbied is a good first step in situations like this.
The public should know the names and faces of the people involved, and they should associate negative and petty emotions with them.
Most Canadians have no clue about this bridge or the people like Moroun and his lawyers who are involved. These should be names that people spit on the ground when heard; there should be political cartoons and memes that satirize their faces everywhere. There should be boycotts against his businesses and the businesses of his lawyers and accountants.
Uhm. I don't know. At my company, we're living on Slack (it's remote). There are only a few tactical meetings. Coordination happens async on Slack. It has worked out pretty well so far; we ship tons of stuff continuously. I agree with some other comments here: how much signal-to-noise you get out of Slack mostly boils down to the people.
replit employee here. the team who built this is very small (less than a dozen, including non-eng roles for the go to market), and went from idea to general availability in 8 weeks
That's very impressive. Hats off to them! I don't think this is too out of the ordinary either, though. I'd guess they started off with an LLM from Hugging Face and set up some pipeline to ingest code from Replit repos to finetune the LLM. The ML aspect of this is not terribly hard given that they probably don't need to train an LLM from scratch. Figuring out how to store and serve from Replit repos (or publicly available code bases) is not too difficult. From there it's a matter of productionizing: how to serve the model in real time, figuring out what they want the product to look/feel like, and I suppose that part might take a while. I'd estimate you'd need 1-2 ML engineers, 2 data engineers, 2-3 SWEs, and 1 PM for a minimum viable product.
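To make that concrete, here's a minimal sketch of the kind of finetuning step I have in mind, using the Hugging Face Trainer API. The base model, dataset file, and hyperparameters are all illustrative assumptions on my part, not anything Replit has described:

    # Hypothetical finetuning sketch: adapt a pretrained code LLM to a dump of
    # code files. Model name, data file, and hyperparameters are placeholders.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "Salesforce/codegen-350M-mono"   # any pretrained code LLM
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # one {"text": "<file contents>"} record per source file (hypothetical dump)
    ds = load_dataset("json", data_files="code_dump.jsonl", split="train")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=1024)

    ds = ds.map(tokenize, batched=True, remove_columns=ds.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="code-llm-finetuned",
                               per_device_train_batch_size=4,
                               num_train_epochs=1),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

The hard part is everything around a script like this: data cleaning, evaluation, and serving the result at low enough latency to be usable in an IDE.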
yep, true! however, the devil is in the details. from what i've been told, the big challenge was latency: they worked a lot to bring the latency down to acceptable levels - essentially to be usable in a cloud IDE
iirc the team managed to bring it to a level an order of magnitude lower than off-the-shelf models
8 weeks is impressive for something like that, and it goes to show just how powerful our off-the-shelf tools have become.
I think it's also a bit scary, because 8 weeks is very little time for testing, tuning, and validation of something as opaque as a machine learning model. If it worked right the first time, that's great. But there is still a lot of inherent uncertainty in ML projects. Decision makers need to take that uncertainty into account when planning.
That, or, the 8 weeks only covers the final training runs and the implementation/deployment, and doesn't include time spent developing and tuning proof-of-concept prototype models.
Interesting, sithlord is an anagram for shitlord. While the behavior of the CEO wasn't cool, the issue seems to have been resolved between all involved parties and everyone has moved on - we don't need to bring it up every time repl.it is mentioned.
This is the type of thing where goodwill is burned and it takes time to earn it back. I don't think we just brush it under a rug either. In my opinion, you don't just get to "resolve it" and then everyone forgets about it. For me, future decisions and importantly, actions, will help me personally move past this and "move on" as you say.
Ok, sounds good about it taking time - assuming perfect behavior, how long will it be before you stop referencing the affair whenever an unrelated repl.it story comes up?
I feel like, if there's ANYTHING we have learned in the past decade or two it's that people who defend a company tend to be doing so for the wrong reasons. See Sony or Microsoft, or Apple or Android, etc. Defending a company is just weird.
I look at replit as a tool, run by people. The tool might be cool, but the CEO made a bad decision and now I judge the product on that CEO's actions. There's no definitive time frame or action that just magically makes it better.
But in general, I'll stop thinking about the stupid actions of the CEO when my brain stops reminding me "Oh, no matter how cool this is, the actions of the CEO were incredibly poor." When will that be? No idea, but maybe sometime down the road he does enough good things that I will suddenly stop and think "cool, looking back, he's done enough good that I can probably forget about the poor decision he made and start looking at this again, because he's proven he isn't that one stupid action."
Goodwill is earned, it's not simply given. It's often hard won, but incredibly easy to lose.
I mean, I think it's relevant still. My time to move on from this particular instance might be different than someone else's, and there may be people out there that did not hear about that particular story. Personally, I feel everyone should have the opportunity to make their own decision on what I believe are poor actions made by a company.
We do it every day, whether we realize it or not. As a people we should support the companies that do good, and we should be aware of the companies doing bad. There's room for grey in there, it's not a one size fits all. But if you aren't aware of the bad then you aren't informed in your decision making.
If it bothers you, sorry. But I see it as a pro. I had a really bad experience with Remarkable, and every time it comes up, I point out my experience so that others who might be making their own decision can use it in their decision making when a company performs poorly. I guess by the metric you've provided above, this would be a low-effort comment by your definition.
The CEO's actions are a reflection of the company. I'm not sure I "care more about it" than I am simply aware of their past actions when making decisions on whether to use their product or not.
I'll admit, every time I hear of repl.it mentioned, I think of the time the CEO threatened the intern. The CEO did a huge disservice to himself and the company that day in my mind
Looks like this CEO isn't of good character after all. He looks almost like a jerk when you look at how the story ended. Even in his last email he tried to push his (obviously wrong) point. He never apologized for the things that mattered most; all in all, he only tried to extinguish the social media fire.
Big LOL here! The abstract things are the simplest, yeah! That's why progress in something like math or theoretical physics is made by the dumbest people, in contrast to something like sociology, where you need genius-level intelligence to come up with new ideas. Sure, sure.
But that's of course not everything this dude got completely backwards.
Would explain why replit is the most useless of all the online IDEs: it has no direction, no true value proposition. It's not a good cloud coding environment. It never was a good code snippet playground (actually one of the worst). Now they even require accounts, so the quick code-snippet aspect is also gone. They're also badly positioned in the education space…
Of course I wish them luck!
But I guess they have no chance against something like Gitpod, Github, or OpenShift codespaces, which are light-years ahead.
OK, maybe the exit-strategy is "just" to be visible enough that at some point they get bought by one of the above. (Which doesn't look like the most ethical thing to do ;-)).
To me (programmer in Sweden) the largest single team I've been on was 14 people and that was _very_ large (indeed the largest in the tech department). We actually broke ourselves up into two more informal groups since we thought that was a more manageable team size.
Neat feature, but yeah, very small doesn't seem like < 12 to me either (worked at big tech for a while). A two-pizza team (standard Amazon size) is 8-10; 12 starts to be on the larger side for a single team, but not abnormal. Very small to me would be if a team of 2-4 shipped it. Replit must be much larger than I expected for a startup.
can i get a clarification - when it says "in-browser" i hear "on-device", as in it doesn't call back to replit to get the predictions. i assume that's inaccurate?
for cost/compute purposes i'm wondering how small models have to get in order to run "truly in browser"
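my rough mental model (assumed sizes, nothing replit has published): weight memory is roughly parameter count times bytes per parameter, so quantization matters a lot

    # back-of-envelope: weights only; activations, KV cache, and the runtime
    # itself add more on top, so treat these as lower bounds
    def model_size_mb(params: float, bytes_per_param: float) -> float:
        return params * bytes_per_param / 1e6

    for params in (125e6, 350e6, 1.3e9):
        for label, bytes_pp in (("fp16", 2), ("int8", 1), ("int4", 0.5)):
            print(f"{params/1e6:>6.0f}M params @ {label}: "
                  f"{model_size_mb(params, bytes_pp):>7.0f} MB")
    # e.g. 350M params is ~700 MB in fp16 but ~175 MB at 4-bit, which starts to
    # look like something a browser tab could plausibly hold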