CUDA represents over a decade of investment. I left the CUDA toolkit team in 2014 and it was probably around 10 years old back then. You can't build something comparable quickly.
Gives you peace of mind once it is over. It also allows you to prepare contingencies. Our company was only now recovering from last year's layoffs, and now it is suddenly hit again. It would have been much more humane to know in advance not to relax.
If I may suggest an interpretation: different kinds of professionals may see different things. E.g., I'm told the market rate for React devs dropped like a brick, which is not the case for other software engineers.
If you call yourself a React dev, then I'm sorry, you're not a software engineer. That's like calling yourself a Facebook marketer instead of a marketer. Good way to box yourself into a niche that may run out of demand.
> If you can collude on layoffs, that means industry demand for labor is down and salaries are reflecting lowered demand.
No it doesn't. You make a "gentleman's agreement" to do your layoffs and trust your chums to keep to it. Are you arguing that industry collusion is impossible?
Yes, because companies need labor. They aren't going to willingly crash their growth if the economy is booming and other industries are growing, because that would destroy them. It is insanely better to shed labor when the economy is poor than when it is great.
When you shed labor during poor economic conditions, it is not collusion, even if they, hypothetically, "colluded" to do it.
When a bunch of people see that a stock is crashing and decide to pull out together, it is not stock manipulation. It is a rational decision. (And don't nitpick the analogy on things like "buy low sell high", you probably understand the point I'm making.)
Companies need some labour, sure. But they can absolutely do things like hiring 10% more or 10% less, bringing a layoff forward a few months or pushing it back.
> When you shed labor during poor economic conditions, it is not collusion, even if they, hypothetically, "colluded" to do it.
> When a bunch of people see that a stock is crashing and decide to pull out together, it is not stock manipulation. It is a rational decision.
That's backwards logic. If a bunch of people agree to sell a stock at the same time so that none of them are left holding the bag, that is stock manipulation, even if they sold at a time when it was "rational" to sell. The fact that companies can act as a de-facto cartel without the kind of explicit coordination that our current anaemic anti-trust regime might punish is an argument for stronger anti-trust laws, not for ignoring the cases when we do catch them red-handed.
The "silicon valley anti-poaching agreement" is a known case where it happened a few years ago. So it's not too implausible to think that the current round of layoffs might be being coordinated in a similar way.
All you have to do is look at how these things play out. One large company announces they're considering layoffs, which primes everyone else to compile their own lists.
After a month or two one company announces they're going through with layoffs, and that sets off a chain reaction for the rest to execute their own plans as soon as possible. It always happens like this. Everyone just decides now is the time to clean house, all at the same time, and we all decided to do this independently of each other...
The "collusion" is a dog-whistle protocol happening in plain sight. There is no "evidence," no damning email to be found, only behavioral patterns to observe. It's the same playbook every single time.
I don’t know which of you is right, but I think you’re saying different things. They’re saying positions that are actually open right now are paying less, and you’re saying salaries for existing employees are flat. They could be right if the positions which are open are only recruiting those with lower salary expectations.
No, I’m saying pay for new openings is flat. I just went through a job search, the numbers in the level.fyi data match my own job search experience looking at current openings.
- soda is up about 50% for name brand, and generic soda is up around 80%
- diesel is up, what 50%? how much is regular gas in california?
- housing "values" where I am at are up 25-40% in the last two years
- Potato chips are up at least 40%
- Apples are much more expensive
- Restaurants are up at least 25%, some 50-75%
Many of these products went up by these amounts yet also engaged in shrinkflation. The biggest example is Chipotle, which almost doubled the cost of its burritos while also shrinking them by about a third.
Governments are motivated to understate inflation in official statistics. Perhaps the cost of goods for the ultra-rich hasn't really changed much, but for the poor (or the cheap) prices have gone up quite dramatically in two years.
oh god, how will SF SWEs survive with more expensive potato chips and soda. Maybe stop drinking poison and eating processed foods and you'll be fine. A whole rotisserie chicken is only $8.99 at Whole Foods.
boo hoo, how will SWEs survive on a paltry $150k per year for adjusting drop-down menu fonts. The salaries in tech were ridiculous and an insult to other professionals who actually improved the world. Now that 0% interest rates are gone and most tech is mature and cranking up margins by pushing more ads, you don't need as many SWEs. Michelin-chef meals, onsite daycare, $400K salaries, and nap rooms were not a sustainable long-term thing.
I feel like there's a high chance of a layoff in my org, as there were layoffs in other orgs yesterday and mine is one of the major holdouts. I would very much wish for something definite: either for them to say that nothing is currently planned, or that it will happen later this month or something.
What do you have in your history that's sensitive? Keys and passwords should not be in shell history anyway (e.g., I delete them from bash history if I enter one by mistake).
I don't think it's that unusual. What comes to mind immediately is that it's not unusual for me to clone something from a private git repo, where a username+password is needed for permissions. In that case it's possible to put in `git clone http://username:password@example.com` or another git command that interacts with remotes. (To be clear, the "password" is typically a token and not a human-generated string, but it still functions like a password.)
For that example: any reason the server doesn't also run SSH? Then you can use `git clone` in the usual way, with SSH key or certificate authentication.
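A minimal sketch of both alternatives, assuming a placeholder host and repo path (`example.com`, `org/private-repo`):

```sh
# Clone over SSH so no secret ever appears on the command line or in shell history
git clone git@example.com:org/private-repo.git

# Or keep HTTPS, but let a credential helper hold the token instead of embedding it in the URL
git config --global credential.helper store
git clone https://example.com/org/private-repo.git   # prompts for the token once, then remembers it
```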
I use those `!` bash history features all the time, e.g. `!?some_test` to just rerun a test case I ran several months ago. I don't need to sync histories between PCs (they are different enough), but history is important.
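For anyone unfamiliar, a quick sketch of the kind of history expansions being referred to (typed at an interactive bash prompt):

```sh
!!            # rerun the previous command
sudo !!       # rerun the previous command, prefixed with sudo
!git          # rerun the most recent command starting with "git"
!?some_test   # rerun the most recent command containing "some_test"
```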
I am an ex-core contributor to Chromium and Node.js and a current core contributor to gRPC Core/C++.
I am never bothered by build times. There are "interactive builds" (incremental builds I use to rerun related unit tests as I work on code) and "non-interactive builds" (ones I launch and then go get coffee/read email). I have never seen a hardware refresh turn a non-interactive build into an interactive one.
My personal hardware (which I use now and then for a quick fix or code review) is a 5+ year old Intel i7 with 16 GB of memory (I had to add 16 GB when I realized that linking Node.js in WSL requires more memory).
My work laptop is an Intel MacBook Pro with a Touch Bar. I do not think it has any impact on my productivity. What matters is screen size and quality (e.g. resolution, contrast, and sharpness) and storage speed. The build system (e.g. the speed of incremental builds and support for distributed builds) has more impact than any CPU advances. I use Bazel for my personal projects.
Somehow programmers have come to accept that a minuscule change to a single function, one that only results in a few bytes changing in the binary, takes forever to compile and link. Compilation and linking should be basically instantaneous: so fast that you don't even realize there is a compilation step at all.
Sure, release builds with whole-program optimization and other fancy compiler techniques can take longer. That's fine. But the regular compile/debug/test loop can still be instant. For legacy reasons, compilation in systems languages is unbelievably slow, but it doesn't have to be this way.
This is the reason why I often use the tcc compiler for my edit/compile/hot-reload cycle; it is about 8x faster than gcc with -O0 and 20x faster than gcc with -O2.
With tcc, the initial compilation of hostapd takes about 0.7 seconds, and incremental builds are roughly 50 milliseconds.
The only problem is that tcc's diagnostics aren't the best, and sometimes there are mild compatibility issues (usually it is enough to tweak CFLAGS or add some macro definition).
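A sketch of what that loop can look like (file names here are placeholders, not from the original post):

```sh
# near-instant unoptimized build with tcc
tcc -o app main.c util.c

# or compile and run a single test file in one step
tcc -run tests/test_parser.c

# fall back to gcc for the optimized build once the change looks right
gcc -O2 -o app main.c util.c
```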
I mean yeah I've come to accept it because I don't know any different. If you can share some examples of large-scale projects that you can compile to test locally near-instantly - or how we might change existing projects/languages to allow for this - then you will have my attention instead of skepticism.
I am firmly in the test-driven development camp. My test cases build and run interactively. I rarely need to do a full build; CI will make sure I didn't break anything unexpected.
I too come from Blaze and tried to use Bazel for my personal project, which involves a dockerized backend + frontend. The build rules got weird and niche real quick, and I was spending so much time fighting the BUILD files that I started questioning the value over plain old Makefiles. This was 3 years ago; maybe the public ecosystem is better now.
Aren't the M-series screen and storage speed significantly superior to your Intel MBP's? I transitioned from an Intel MBP to an M1 for work and the screen was significantly better (not sure about storage speed; our builds are all on a remote dev machine that is stacked).
When I worked on Chromium there were two major mitigations:
1. Debug builds were split into shared libraries, so only a couple of them had to be rebuilt in the regular dev workflow.
2. They had some magical distributed build that "just worked" for me. I never had to dive into the details.
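For reference, that combination roughly corresponds to GN args like these (the distributed build was presumably Goma; flag names are from memory of that era and may differ in current Chromium):

```sh
# component (shared-library) debug build with distributed compilation
gn gen out/Debug --args='is_debug=true is_component_build=true use_goma=true'
autoninja -C out/Debug chrome
```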
I was working on DevTools so in many cases my changes would touch both browser and renderer. Unit testing was helpful.
Bazel is significantly faster on the M1 than on the i7, even when it doesn't try to recompile the protobuf compiler code, which it still attempts to do regularly.
The Threadripper PRO branding was only introduced 3.5 years ago. The first two generations didn't have any split between workstation parts and enthusiast consumer parts. You must have a first-generation Threadripper, which means it's somewhere between 8 and 16 CPU cores.
If you would not significantly benefit from upgrading, it's only because you already have more CPU performance than you need. Today's CPUs are significantly better than first-generation Zen in performance per clock and raw clock speed, and mainstream consumer desktop platforms can now match the top first-generation Threadripper in CPU core count and total DRAM bandwidth (and soon, DRAM capacity). There's no performance or power metric by which a Threadripper 1950X (not quite 6.5 years old) beats a Ryzen 7950X. And the 7950X also comes in a mobile package that only sacrifices a bit of performance (to fit into fairly chunky "laptops").
I guess I should clarify: I am a Rust and C++ developer blocked on compilation time, but even then, I am not able to justify the cost of upgrading from a 1950X/128GB DDR4 (good guess!) to the 7950X or the X3D variant. It would be faster, but not in a way that would translate to $$$ directly. (Not to mention the inflation in Threadripper prices since AMD stopped playing catch-up.) Performance per watt isn't interesting to me (except for thermals, but Noctua has me covered) because I pay real-time costs and it's not a build farm.
If I had 100% CPU consumption around the clock, I would upgrade in a heartbeat. But I'm working interactively in spurts between hitting CPU walls, and the spurts don't justify the upgrade.
If I were to upgrade it would be for the sake of non-work CPU video encoding or to get PCIe 5.0 for faster model loading to GPU VRAM.
sTR4 workstations are hard to put down! I'll replace mine one day, probably with whichever ASRock Rack Epyc board succeeds the ROMED8-2T with PCIe 5.0.
In the meantime, I wanted something more portable, so I put a 13700K and an RTX 3090 in a Lian Li A4-H2O case with an eDP side panel for a nice mITX build. It only needs one cable for power, and it's as great for VR as it is as a headless host.
Using Daisy UI, great project. Some minor annoyances because it does not include JS. E.g. some popups don't close when you click the button they are tied to, because they stay open based on button focus…
I've been programming since middle school. That would be 30 years. Nothing has really changed much. C++ is incrementally more convenient but fundamentally the same. Code editors are the same. Debuggers are the same. The shell is the same.
I am certain that in 30 years everything will still be the same.
The way I write code was fundamentally altered in the last year by GPT-4 and Copilot. Try having GPT-4 write your code; you won't be so certain about the future of programming afterward, I guarantee it.
I am the same, 35 years. I use GPT-4 every day now. It sure is handy. It speeds up some things. It is a time saver, but it does not seem to be better than me. It is like an OK assistant.
I would agree, not a fundamental or radical improvement yet.
GPT-4 does not produce code that I'm ready to accept. The time it takes to convince it to produce code that I'll accept is significantly larger than the time it takes to write that code myself.
GPT-4 is fine for tasks that are completely foreign to me, like writing a PowerShell script, because I know almost nothing about PowerShell. However, those tasks are rare, and I am generally competent at the things I need to do.
I have free Copilot due to my OSS work. This week I disabled it for C++ because it is chronically incapable of matching brackets. I was wasting too much time fixing the messes.
I use it for TypeScript/React. But it's just a more comprehensive code completion. Incremental.
Uh huh, try GPT-4 and report back. It's a generational leap above Copilot. I use Copilot to autocomplete one-liners and GPT-4 to generate whole methods.
Other than the fact that 30 years ago you were writing a whole shitload more buffer/integer overflows. Hell, that's why we've created numerous languages since then to make it a hell of a lot harder to footgun yourself.
If coding hasn't changed much in 30 years, it may mean you have not changed much in 30 years.
The Soviet Union built a lot of those "designed by the numbers" cities: fixed distances to stores, planned densities of kindergartens and hospitals, and so on. Complete failure. Society changes a lot during the 10-15 years it takes to plan and build such a city. And that's if you ignore the fact that the USSR stopped existing while some of them were not yet finished (I grew up in a district with a ridiculous population but no subway to connect it to the rest of the city and not enough roads for cars).
The Soviet housing buildup was not a complete failure. That's an absurd thing to claim. It was maybe one of the most successful programs in the Soviet Union.
The Soviets had a critical housing shortage after the Civil War and an even worse one after WW2. And even the houses that existed were of pre-WW1 standards of modernity.
Now, I think too strong an embrace of modernism, a lack of small-scale capitalism, and their hatred for pre-Soviet old towns really hurt them. And eventually, too much copying of the US in terms of cars hurt the program as well.
But saying it was a failure is simply inaccurate. From the '60s to the '80s an absurd amount of modern housing was built.