Hacker News

You speak with a passive voice, as if the future is something that happens to you, rather than something that you participate in.


They are not wrong.

The market, meant in a general sense, is stronger than any individual or groups of people. LLMs are here, and already demonstrate enough productive value to make them in high demand for objective reasons (vs. just as a speculation vehicle). They're not going away, nor is larger GenAI. It would take a collapse of technological civilization to turn the tide back now.


The market is a group of people.


And you are a collection of cells, but individual cells (mostly) don’t have the ability to dictate your actions


Yeah, but Jeff Bezos does actually have control over Amazon and can make decisions.


Sort of, kind of. Most decisions you'd see him make would quickly cause his control over Amazon to disappear, without actually improving anything for Amazon workers.

That's one part of the bad mental model of organizations and markets (and thus societies) people have. The people at the top may be richer and more powerful, but they're not actually free to do whatever. They have a role to play in the system they ostensibly "control", but when they deviate too far from what the system expects them to do, they get ejected.

Never mistake the finger pointing at the Moon for the Moon itself. Also, never mistake the person barking orders for the source from which those orders originate.


There is no such thing as "the" system, though. When a government launches a genocide, sure, it's an expression of the system in a sense, but it didn't need to respect the opinions of a majority of actors, and it doesn't mean that "the behavior of the system" is a mere and direct outcome of all the social values at stake, which would presumably provide a strong safeguard against any significant deviation.

Viruses can kill their hosts, and a handful of individuals can have a significantly harmful impact on societies.


A virus that kills its hosts dies out quickly itself. The viruses that thrive, and that we actually have the most problems with, are the ones that spread before manifesting symptoms.

Much like viruses, systems are subject to selection pressures over time. Systems that are too damaging to society make society develop memetic, cultural and legal immunity against them. Systems that let individual members easily kill them are fragile and don't survive either.

Systems that thrive are ones that are mild enough to not cause too much external resistance, and are resilient enough to not allow individuals to accidentally or intentionally break them from within.


Yeah, these decisions just appear out of the aether; they're absolutely not the result of capitalists acting in their self-interest. It's nice to claim: oh, poor little me couldn't possibly have done anything else, I guess I just have to benefit from all the money my decisions give me.


I think you’re agreeing in a way - they are making the decisions that maximise their profits in the existing system (capitalism) and the system is such that it will produce people like this. They can nudge it in their preferred direction but if they were in, say, a frontier economy they’d likely make different decisions.


That. And the aspect I'm trying to emphasize is, those profit-maximizing people are technically free to choose to not profit-maximize, but then the system will happily punish them for it. They can nudge the system, but the system can nudge them back, all the way to ejecting them from whatever role they played in that system so far. And yes, the system is just made of other people.

That's the nature of self-reinforcing feedback loops.


Jeff Bezos is a product of the system, not a driver of it. Bezos, Musk, Zuckerberg, Thiel, etc, are outputs, not inputs.

Their decisions are absolutely constrained by the system's values. They have zero agency in this, and are literally unable to imagine anything different.


It is a fascinating take. One way to measure agency is whether Bezos, Musk, Zuckerberg and Thiel have the power to destroy their creations. With the exception of Bezos (and only because he no longer has full control of his company), the rest could easily topple their creations, suggesting that the system values you refer to are wide enough to allow for actions greater than 'zero agency'.


They may destroy their creations, but that would be a minor blip in an overall system that will keep moving. The destruction of Facebook, Amazon, or SpaceX won't destroy social media, e-commerce or reusable rockets. Previously, the decline of giants like IBM, Apple (first round), Cray and Sun had no lasting impact on PCs, GUIs, supercomputers, servers or any other field they pioneered. Even if OpenAI, Gemini and Anthropic all disappeared immediately, the void would be filled by something else.


Not to mention, organizations don't just blip out of existence. A dying giant leaves behind assets, IP, and people with know-how and experience - all ready to be picked up and stitched back together, to continue doing the same thing under new ownership.


That's actually quite a good high-level measure. However, I'd question your measurement: I doubt that Musk, Zuckerberg and Thiel would actually be able to destroy their creations. SpaceX, Tesla, X, Meta, Palantir - they're all large organizations with many stakeholders, and their founders/chairmen do not have absolute control over them. The result of any of those individuals attempting to destroy their creations is not guaranteed - on the contrary, I'd expect the other stakeholders to quickly intervene to block or pivot any such moves; the organization would survive, and such a move would only make the market lose confidence in the one making it.

There's no total ownership in structures as large as this - neither in companies nor in countries. There are other stakeholders, and then the organization has a mind of its own, and they all react to actions of whoever is nominally "running it".


I don't know about the others, but Zuckerberg can absolutely destroy Meta. He owns a special class of shares that have 10x voting power vs. normal shares, so he personally controls about 60% of the votes. If there was any way of Zuckerberg getting ousted by investors, there's no way they would have let the company lose so much money for so long trying to make VR a thing.
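The dual-class arithmetic is easy to sketch (the share counts below are illustrative round numbers, not Meta's actual cap table):

```python
# Illustrative dual-class voting math; all numbers here are hypothetical.
def voting_share(founder_b_shares, public_a_shares, b_votes_per_share=10):
    """Fraction of total votes controlled by the founder's class-B stake."""
    founder_votes = founder_b_shares * b_votes_per_share
    total_votes = founder_votes + public_a_shares  # class A carries 1 vote/share
    return founder_votes / total_votes

# A founder holding ~13% of shares outstanding, all class B at 10x votes:
print(voting_share(founder_b_shares=13, public_a_shares=87))  # ≈ 0.60
```

So a minority economic stake can still carry a majority of the votes, which is why an ordinary shareholder revolt can't oust him.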


Even then he has a fiduciary duty - if he acts in bad faith (and it is egregious enough), I think it would still be possible to oust him on those grounds.


Even if Tesla is destroyed by Musk, EVs and self-driving cars are here to stay - if not in the USA, then in the rest of the world.


Musk is clearly doing his best to destroy Tesla.


You can also measure agency as the power to destroy other things.


What are you talking about? Of course they have agency. They're using that agency to funnel as much money as possible into their pockets, and away from other people. It's not that they can't imagine anything different; it's that when they do, what they see is a world in which they're not as well off.


That's a very naive take - but also a popular one, because it gives people license to hate those who seem to be better off.

The truth is, no one just acquires power on their own - people with power have it because other people let them, and they can wield this power only for as long, and only in the ways, that others allow. Gaining power doesn't make one more free to exercise it - on the contrary, the more power you have, the more constrained you are by the interests of those who provide you that power.


The CEO might have more control of Amazon than Jeff.


Indeed. Here, a very large one. Now, focus on the dynamics of that group to see my point.

Or much more elaborately, but also exhaustively and to the point: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/.


I'm not going to read that hack, but in either case, the metaphysical monster of the market you're proposing is not what is propping up LLMs. It's the decisions of actors at major tech companies and VCs. These are people, not magical entities. And even still, LLMs aren't profitable.


> I'm not going to read that hack, but in either case, the metaphysical monster of the market you're proposing is not what is propping up LLMs. It's the decisions of actors at major tech companies and VCs. These are people, not magical entities.

Your loss. The article is actually talking about the thing you're saying. And so am I. These are all people, not magical entities, and that is exactly why the near-term future of "AI being the new electricity" is inevitable (short of a total collapse of civilization).

The article spells out the causal mechanism 20 different ways, so I still recommend reading it if the dynamics are not blindingly apparent to you yet.


It can simultaneously be true that people in these positions have less agency than most other people assume, and more than they themselves might think.

Another reply mentions that Bezos can't imagine anything different. If that is so (I am not unwilling to believe a certain lack of imagination tends to exist or emerge in extremely ambitious/successful people) then it's a personal failing, not an inevitable condition of his station, regardless of how much or little agency the enormous machine he sits on top of permits him to wield personally. He certainly doesn't have zero as the commenter claims.

FWIW I have read Scott's article and have tried to convince people of the agency of moloch on this site before. But the fact that impersonal systems have agency doesn't mean you suddenly turn into a human iron filing and lose all your self-direction. It might be convenient for some people to claim this is why they can do no different, and then you need to ask who benefits.


I have a parallel to suggest; I know it's the rhetorical tool of analogous reasoning, but it deeply matches the psychology of the way most people think. Getting to a "certain" number of activated parameters in a model (for some "simple" tasks like summarisation, that can be as low as 1.8 billion) breaches a threshold where the "emergent" behaviour of "reasonable", "contextual", or "lucid" text is achieved. Or, to put this in layman's terms: once your model is "large enough" (and this is quite small compared to the largest models currently in daily use by millions), the generated text goes from gibberish to uncanny valley to lucid text quite quickly.

In the same way, once a certain threshold is reached in the utility of AI (in a similar vein to "once I saw the Internet for the first time, I knew I would just keep using it"), it becomes "inevitable": it becomes a cheaper option than "the way we've always done it", a better option, or some combination of the two.

So, as is very common in technological innovation/revolution, the question isn't whether it will change the way things are done so much as where it will shift the cost curve. How deeply will it displace "the way we've always done it"? How many hand-woven shirts do you own? Joseph-Marie Jacquard wants to know (and King Cnut has metaphorical clogs to sell to the Luddites).


There is an old cliché about stopping the tide coming in. I mean, yeah you can get out there and participate in trying to stop it.

This isn't about fatalism or even pessimism. The tide coming in isn't good or bad. It's more like the refrain from Game of Thrones: Winter is coming. You prepare for it. Your time might be better served finding shelter and warm clothing rather than engaging in a futile attempt to prevent it.


If you believe that there is nobody there inside all this LLM stuff, that it's ultimately hollow and yet that it'll still get used by the sort of people who'll look at most humans and call 'em non-player characters and meme at them, if you believe that you're looking at a collapse of civilization because of this hollowness and what it evokes in people… then you'll be doing that, but I can't blame anybody for engaging in attempts to prevent it.


You are stating a contradictory position: A person who doesn't believe AI can possibly emerge but is actively working to prevent it from emerging. I suggest that such a person is confused beyond help.

edit As an aside, you might want to read Don Quixote [1]

1. https://en.wikipedia.org/wiki/Don_Quixote


The last tide being the blockchain (hype), which was supposed to solve all and everyone's problems about a decade ago already.

How come there even is anything left to solve for LLMs?


The difference between hype and reality is productivity - LLMs are productively used by hundreds of millions of people. Blockchain is useful primarily in the imagination.

It’s just really not comparable.


> productively used

This chart is extremely damning: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

The industry consistently predicts people will do the task quicker with AI. The people who are doing the task predict they'll do it quicker if they can use AI. After doing the task with AI, they predict they did it quicker because they used AI. People who did it without AI predict they could have done it quicker with AI. But they actually measured how long it takes. It turns out, they do it slower if they use AI. This is damning.
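The shape of that result can be sketched with invented numbers (these are not the study's figures; see the METR post for the real data):

```python
# Perceived vs. measured effect of AI assistance on task time.
# All numbers are invented for illustration; the METR study has the real ones.
predicted_time_with_ai = 45.0  # minutes devs *expect* a task to take with AI
time_without_ai = 60.0         # measured time without AI
time_with_ai = 71.4            # measured time with AI (i.e. ~19% slower)

perceived_speedup = time_without_ai / predicted_time_with_ai
actual_speedup = time_without_ai / time_with_ai

print(f"perceived: {perceived_speedup:.2f}x, actual: {actual_speedup:.2f}x")
# perceived: 1.33x, actual: 0.84x
```

The damning part is exactly that gap: the perception stays above 1.0x even after the measurement comes in below it.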

It's a dopamine machine. It makes you feel good, but with no reality behind it and no work to achieve it. It's no different in this regard from (some) hard drugs. A rat with a lever wired to the pleasure center in its brain keeps pressing that lever until it dies of starvation.

(Yes, it's very surprising that you can create this effect without putting chemicals or electrodes in your brain. Social media achieved it first, though.)


No, it's overinvestment.

And I don't see why most people are divided into two groups, or appear to be:

Either it's total shit, or it's the holy cup of truth, here to solve all our problems.

It's neither. It's a tool. Like a shovel, it's good at something. And like a shovel it's bad at other things. E.g. I wouldn't use a shovel to hammer in a nail.

LLMs will NEVER become true AGI. But do they need to? No, of course not!

My biggest problem with LLMs isn't the shit code they produce from time to time, as I am paid to resolve messes, it's the environmental impact of MINDLESSLY using one.

But whatever. People like cults and anti-cults are cults too.


Your concern is the environmental impact? Why pick on LLMs vs Amazon or your local drug store? Or a local restaurant, for that matter?

Do the calculations for how much LLM use is required to equal one hamburger worth of CO2 — or the CO2 of commuting to work in a car.
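A back-of-envelope version of that calculation (every figure below is an assumed, order-of-magnitude public estimate, not a measurement):

```python
# Rough CO2e comparison; all constants are assumptions for illustration only.
BURGER_KG_CO2E = 3.0    # assumed: one beef hamburger
QUERY_KG_CO2E = 0.003   # assumed: one LLM query (~3 g CO2e)
COMMUTE_KG_CO2E = 4.0   # assumed: ~20 km round trip by car

queries_per_burger = BURGER_KG_CO2E / QUERY_KG_CO2E
queries_per_commute = COMMUTE_KG_CO2E / QUERY_KG_CO2E
print(f"{queries_per_burger:.0f} queries ≈ one burger")    # 1000 queries
print(f"{queries_per_commute:.0f} queries ≈ one commute")  # 1333 queries
```

Under these assumptions, you'd need on the order of a thousand queries a day before your LLM use rivals lunch or the drive to work.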

If my daily LLM environmental impact is comparable to my lunch or going to work, it’s really hard to fault, IMO. They aren’t building data centers in the rainforest.


Why do you assume I am not concerned about the other sources of environmental impact?

Of course I don't go around posting everything I am concerned about when we are talking about a specific topic.

You're aware, though, that because of the AI hype, sustainability programs were cut at all major tech firms?


It also correlated with the discovery that voluntary carbon credits weren’t sufficient for their environmental marketing.

If carbon credits were viewed as valid, I’m pretty sure they would have kept the programs.


I broadly agree with your point, but would also draw attention to something I've observed:

> LLMs will NEVER become true AGI. But do they need to? No, or course not!

Everyone disagrees about the meaning of each of the three letters of the initialism "AGI", and also disagrees about the compound whole, often arguing it means something different than the simple meaning of those words separately.

Even on this website, "AGI" means anything from "InstructGPT" (the precursor to ChatGPT) to "Biblical God" — or, even worse than "God" given this is a tech forum, "can solve provably impossible task such as the halting problem".


Well, I go by the definition I was brought up with and am not interested in redefining words all the time.

A true AGI is basically Skynet or the Basilisk ;-)


Most of us do; but if we're all using different definitions, then no communication is possible.


There are two different groups with different perspectives and relationships to the "AI hype"; I think we're talking in circles in this subthread because we're talking about different people.

See https://news.ycombinator.com/item?id=44208831. Quoting myself (sorry):

> For me, one of the Beneficiaries, the hype seems totally warranted. The capability is there, the possibilities are enormous, pace of advancement is staggering, and achieving them is realistic. If it takes a few years longer than the Investor group thinks - that's fine with us; it's only a problem for them.


> it's the environmental impact of MINDLESSLY using one.

Isn't much of that environmental impact currently from the training of the model rather than the usage? Something you could arguably one day just stop doing if you're satisfied with the progress on that front (People won't be any time soon admittedly)

I'm no expert on this front. It's a genuine question based on what i've heard and read.


Overinvestment isn't a bug; it's a feature of capitalism. When the dust settles there'll be a few trillion-dollar pots, and hundreds of billions are being spent to get one of them.

Environmental impacts of the GenAI/LLM ecosystem are highly overrated.


Reminder that the Dutch exist.


"Stopping the tide coming in" is usually a reference to the English king Cnut (or 'Canute') who legendarily made his courtiers carry him to the sea:

> When he was at the height of his ascendancy, he ordered his chair to be placed on the sea-shore as the tide was coming in. Then he said to the rising tide, "You are subject to me, as the land on which I am sitting is mine, and no one has resisted my overlordship with impunity. I command you, therefore, not to rise on to my land, nor to presume to wet the clothing or limbs of your master." But the sea came up as usual, and disrespectfully drenched the king's feet and shins. So jumping back, the king cried, "Let all the world know that the power of kings is empty and worthless, and there is no king worthy of the name save Him by whose will heaven, earth and the sea obey eternal laws."

From https://en.wikipedia.org/wiki/Cnut#The_story_of_Cnut_and_the...


I know what it's a reference to, I'm calling skill issue on King Cnut.


They're not stopping the tide, they are preparing for it - as I suggested. The tide is still happening, it just isn't causing the flooding.

So in that sense we agree. Let's be like the Dutch: let's recognize the coming tide and build defenses against it.


They are kinda literally stopping the tide coming in though. They're preparing for it by blocking it off completely.

That is a thing that humans can do if they want it enough.


> They're preparing for it by blocking it off completely.

No we don't. Quite the opposite. Several dams have been made into movable mechanic contraptions precisely to NOT stop the tide coming in.

A lot of the water management is living with the water, not fighting it. Shore replenishment and strengthening is done by dropping sand in strategic locations and letting the water take care of dumping it in the right spot. Before big dredgers, the tide was used to flush sand out of harbours using big flushing basins. Big canals have been dug for better shipping. Big and small ships sailed and still sail on the waters to trade with the world. A lot of our riches come from the sea and the rivers.

The water is a danger and a tool. It's not stopped, only redirected and often put to good use. Throughout Dutch history, those who worked with the water generally have done well. And similarly, some places really suffered after the water was redirected away from them. Fisher folk lost their livelihoods, cities lost access to trade, some land literally evaporated when it got too dry, a lot of land shrunk when water was removed, biodiversity dropped...

Anyway, if you want to use the Dutch waters as a metaphor for technological innovations, the lesson will not be that the obvious answer is to block it. The lesson will be to accept it, to use it, to gain riches through it: to live with it.


The difference is that right now we're looking at a giant onrushing wave and we're considering maybe building a few dinghies to "ride it out".

Please understand. We're not in a position where we have sophisticated infrastructure to carefully control AI development. We have nothing, and the waves are getting bigger every few months.

You're in a position where you're safe enough (after centuries of labor!) that you can decide to not block some amount of incoming water. That is not where we are at with AI. There is no dike.


> Please understand.

I understand that you're afraid. I'm not. But that's not what I was responding to. I was just pointing out that your comparison to the Dutch does not bolster your argument, but instead supports the opposite view.


I agree that what I said was literally false. I think the comparison to the Dutch still bolsters my view with the added context.

When you understand tides and local ecosystems and have flood level forecasting, you can choose to operate dikes in a way that allows tidal flow while blunting floods. However, we're currently in a position where in the analogy, we have no dike and people are arguing that dikes are impossible and anyway who's to say that the incoming flood won't be good for houses? In that situation, the first thing you need to do is get the incoming masses of water under control, and that's a thing that humans can do and it's the thing you did. (Unless I'm wrong?)

edit: Hang on, isn't Amsterdam below sea level? How is that not blocking tidal flow effectively completely?

My point is just that tides are in the feasible range of human engineering, whether that's a good idea or not. Pragmatic management is not the same thing as unconditional surrender, which the other comment was advocating on basis of infeasibility, which is doubly wrong.


As the other commenter noted, you are simply wrong about that. We control the effects the tide has on us, not the tide itself.

But let me offer you a false dichotomy for the purposes of argument:

1. You spend your efforts preventing the emergence of AI

2. You spend your efforts creating conditions for the harmonious co-existence of AI and humanity

It's your choice.


As things stand, 2 is impossible without 1. There simply is not enough time to figure out safe coexistence. These are not projects of equal difficulty: 1 is enormously easier than 2. And 1 is still a global effort!


You have no evidence for any of your claims (either for "impossibility" or degree of difficulty) and I strongly doubt your rationalization will stand the test of validation in reality.

You are also completely moving the goal posts. My original comment was about the hubris of man to prevent processes that operate at a scale beyond his means. The processes that are driving forward the march towards AI are beyond your ability to stop. And now you are arguing (again, with no evidence) the relative difficulty of slowing it down (a much weaker claim compared to stopping it) vs. contributing to safe co-existence.

But in the interest of finding some common ground let me point out: attempting to slow it down is actually getting on board to my project (although, in a way I think is ineffective). It starts with accepting that it can't be prevented and choosing a way to contribute to safe coexistence by providing enough time to figure it out.


Man's scale is Earth.

You know, I think you have no evidence for any of your claims of "impossibility" either. And I'd argue there's a ton of counterevidence where man, completely ignoring how impossible that's supposed to be, effects change on a global scale.

You're comparing two dissimilar things. On the one hand slowing it down (which contrary to your claim that I'm moving the goalpost, is at sufficient investment effectively equal to stopping it), on the other, "contributing" to safe co-existence, which is trivially achieved by literally doing anything. I'm telling you that if we merely "contribute" to safe co-existence, we all die. The standard, and it really is the standard in any other field, is proving safe coexistence to a reasonable standard. Which should hopefully make clear where the difficulty lies: we have nothing. Even with all the interpretability research, and I'm not slagging interpretability, this field is in its absolute infancy.

"It can't be prevented" simply erases the most important distinction: if we get ASI tomorrow, we're in a fundamentally different position than if we get ASI in 50 years after a heroic global effort to work out safety, interpretability, guidance and morality.


> I'm telling you that if we merely "contribute" to safe co-existence, we all die.

I hear you. I believe you are wrong.

> it really is the standard in any other field, is proving safe coexistence to a reasonable standard

No it isn't. It often becomes the standard after the fact. But pretty much every invention by man didn't go through a committee. Can you provide some counter-examples? Did the Wright brothers prove flight was safe before they got on the first plane? Did the inventors of CRISPR technology "prove" it is safe? Or human cloning? Or nuclear fission? Your very argument rests on the mistakes humans made in the past and the out-sized consequences of making the same kinds of mistakes with AI. Your argument must be: we have to do things differently this time with AI because the stakes are higher.

These are old and boring arguments. I've been watching the less wrong space since it was overcoming bias (and frankly, from before). I've heard all of the arguments they have to make.

But the content of this discussion was on inevitability and how to respond to it. The person I replied to suggested that it was a mistake to see the future as something that happens to us. It was a call to agency. I was pointing out that not all agency is equal, and hubris can lead us to actions that are not productive.

It is also the case that fear, just like hubris, can lead us to actions that aren't productive. But perhaps we should just move on from this discussion.


> prove flight was safe

Flight did not have potentially uncontrollable consequences.

> human cloning

No uncontrollable consequences.

> Nuclear fission

To a reasonable standard, yes! I remind you that there was a concern of atmospheric ignition that was reasonably disproven before the first test.

> CRISPR technology

Tbh they should have, and I fully advocate this standard for any sort of live genomic research as well.

Also, just fwiw. I am not scared of AI. I'm not even particularly scared of dying in a global armageddon (as the song says, "we will all go together when we go", and tbh that's genuinely a relief). I just think, fairly dispassionately, that it's going to happen. You can't explain your disagreements with "my opponents are just emotionally affected."

> Your argument must be: we have to do things differently this time with AI because the stakes are higher.

I don't understand what you're saying here. That is in fact my argument. My whole entire point is just that it's not something beyond our means by any means- we have to do it, and we're capable of doing it, so we should do it.


The year is 1985. Internet is coming. You don't want it to.

Can you stop it?


You can shape it.


Isn't it kind of both?

Did luddites ever have a chance of stopping the industrial revolution?


The Luddites weren't trying to stop the industrial revolution. They were fighting against mass layoffs, against a dramatic lowering of wages, and against the replacement of skilled workers with unskilled ones. Now this reminds me of something, hmmm...


Did the Dutch ever have a chance to stop the massive run up in tulip prices?

It's easy to say what was inevitable when you are looking into the past. Much harder to predict what inevitable future awaits us.


It's interesting that the Dutch actually had more success at stopping the actual tide coming in than controlling a market tide (which was more like a tidal wave I suppose).


One is external, the other exists within. A literal tidal wave is a problem for everyone; a market tide is - by definition - an opportunity to many.


No, but software engineers for example have more power, even in an employer's market, than Luddites.

You can simply spend so much time on meticulously documenting that "AI" (unfortunately!) does not work that it will be quietly abandoned.


Software engineers have less power than we'd like to think; we may be paid a lot relative to the baseline, but for vast majority that's not even in the "rich" range anymore, and more importantly, we're not ones calling the shots - not anymore.

But even if, that presupposes a kind of unity of opinion, committing the exact same sin the article we're discussing is complaining about. Many engineers believe that AI does, in fact, work, and will keep getting better - and will work towards the future you'd like to work against.


The exact same sin? It seems that you don't go off message even once:

https://news.ycombinator.com/item?id=44568811


The article is wrong though :). It's because people make choices, that this future is inevitable - enough people are independently choosing to embrace LLMs because of a real or perceived value. That, as well as the (real and perceived) reasons for it are plain and apparent, so it's not hard to predict where this leads in aggregate.


The Luddites, or at least some of them, threatened employers, factories and/or machinery with physical aggression. They lived in the locations where these industries long remained, though automation certainly made the industry more mobile. Like unions, they used collective bargaining power derived in part from their geographic location and presence among each other.

A Guatemalan or Indian can write code for my boss today... instead of me. Software engineers, despite the cliff in employment and the like, are still rather well paid, and there's plenty of room to undercut and for people to disregard principles - if this is even perceived to be an issue by them at all. If you talk to many IRL... well, it is not in the slightest.


No one will read that documentation. And by the time you finish writing it, the frontier AI models will have improved.


The Luddites were among the precursors to Marx et al.; even a revolution wasn't enough to hold back industrialisation, and even that revolution had a famous example of the exact kind of resource-distribution failure that Marx would have had in mind when writing (Great Famine in Ireland was contemporaneous with the Manifesto, compare with Holodomor).


What? Can you elaborate?


The Dutch have walls/dams that keep the ocean away.


You can fight against the current of society or you can swim in the direction it's pulling you. If you want to fight against it, you can, but you shouldn't expect others to. For some, they can see that it's inevitable because the strength of the movement is greater than the resistance.

It's fair enough to say "you can change the future", but sometimes you can't. You don't have the resources, and often, the will.

The internet was the future, we saw it, some didn't. Cryptocurrencies are the future, some see it, some don't. And using AI is the future too.

Are LLMs the endpoint? Obviously not. But they'll keep getting better, marginally, until there's a breakthrough, or a change, and they'll advance further.

But they won't be going away.


I think it's important not to be too sure about what of the future one is "seeing". It's easy to be confidently wrong, and one can find countless examples and quotes where people made this mistake.

Even if you don't think you can change something, you shouldn't be sure about that. If you care about the outcome, you try things also against the odds and also try to organize such efforts with others.

(I'm puzzled by people who don't see it that way but at the same time don't find VCs and start-ups insanely weird...)


The reality for most people is that at a macro level the future is something that happens to them. They try to participate e.g. through voting, but see no change even on issues a significant majority of 'voters' agree on, regardless of who 'wins' the elections.


What are issues that a significant majority of voters agree on? Polls indicate that everyone wants lower taxes, cleaner environment, higher quality schools, lower crime, etc. But when you dig into the specifics of how to make progress on those issues, suddenly the consensus evaporates.



