Hacker News | cdavid's comments

It is very likely: the atomic bomb was initially built to defeat Nazi Germany, and the effort that became the Manhattan Project was started before Pearl Harbor. If you read the classic book on the atomic bomb, many of the scientists who worked on it justified their effort by the goal of defeating the Nazis, and many of them had fled Europe.

Once it became clear the Nazis were about to be defeated, some of the scientists discussed sharing the knowledge with all countries. But by that point, the scientists had long lost control over the project.


If I understand correctly what is meant by rank polymorphism, it is not just about speed, but about ergonomics.

Taking examples I am familiar with: it is key that you can add a scalar 1 to a rank-2 array in numpy/matlab without having to explicitly create a rank-2 array of ones, and numpy generalizes that idea (broadcasting). I understand other array programming languages have more advanced/generic versions of broadcasting, but I am not very familiar with them.
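
A minimal sketch of what I mean, assuming numpy (the same idea exists in matlab as implicit expansion):

  import numpy as np

  a = np.arange(6).reshape(2, 3)   # rank-2 array, shape (2, 3)

  # Adding a scalar: no need to build a (2, 3) array of ones by hand.
  b = a + 1

  # Broadcasting generalizes this: a shape-(3,) vector is "stretched"
  # along the missing axis to match the (2, 3) array.
  row = np.array([10, 20, 30])
  c = a + row                      # shape (2, 3)

  # Explicit equivalent that broadcasting lets you avoid:
  assert (b == a + np.ones_like(a)).all()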


Or a good one who cares about keeping their more naive reports from getting burned.

Like any advice, it is contextual. Especially when working for large organizations, the original advice is the right default. If you're leaving because things are bad, it will be a mix of 1) people knew but did not care / could not do anything about it, and 2) people did not know about the specific issues. Younger me thought it was often 2), but actually it is almost always 1).


SVD/eigendecomposition will often boil down to many matmuls (e.g. when using Krylov-based methods such as Arnoldi, Krylov-Schur, etc.), so I would expect TPUs to work well there. GMRES, one method to solve Ax = b, is also based on the Arnoldi decomposition.
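
To make that concrete, here is a rough sketch of the Arnoldi iteration in plain numpy (an illustration, not a production implementation): almost all the work is in the A @ q products, which is exactly what the matmul hardware accelerates.

  import numpy as np

  def arnoldi(A, v0, m):
      # Build an orthonormal basis Q of the Krylov subspace
      # span{v0, A v0, ..., A^(m-1) v0}.
      n = A.shape[0]
      Q = np.zeros((n, m + 1))
      H = np.zeros((m + 1, m))
      Q[:, 0] = v0 / np.linalg.norm(v0)
      for j in range(m):
          w = A @ Q[:, j]                  # dominant cost: one matvec per step
          for i in range(j + 1):           # modified Gram-Schmidt
              H[i, j] = Q[:, i] @ w
              w = w - H[i, j] * Q[:, i]
          H[j + 1, j] = np.linalg.norm(w)
          if H[j + 1, j] < 1e-12:          # "happy breakdown": invariant subspace
              return Q[:, :j + 1], H[:j + 1, :j + 1]
          Q[:, j + 1] = w / H[j + 1, j]
      return Q, H

  # Eigenvalue estimates (Ritz values) come from the small matrix H;
  # GMRES similarly solves a small least-squares problem involving H.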


A/B testing does not have to be about micro-optimization. If done well, it can reduce the risk/cost of trying things. For example, you can A/B test something before investing in a full production implementation. When pushing for ML-based improvements (e.g. a new ranking algorithm), you also want to use it.

This is why the cover of the reference A/B testing book for product development has a hippo: A/B testing helps against just following the Highest Paid Person's Opinion (HiPPO). The practice is of course more complicated, but that is mostly organizational/political.


In my own career I've only ever seen it increase the cost of development.

The vast majority of A/B test results I've seen showed no significant win in one direction or the other, in which case why did we just add six weeks of delay and twice the development work to the feature?

Usually it was because the Highest Paid Person insisted on an A/B test because they weren't confident enough to move on without that safety blanket.

There are other, much cheaper things you can do to de-risk a new feature. Build a quick prototype and run a usability test with 2-3 participants - you get more information for a fraction of the time and cost of an A/B test.


There are cases where A/B testing does not make sense (not enough users to measure anything sensible, etc.). But if the A/B test results were inconclusive, assuming the test was done correctly, then what was the point of launching the underlying feature?
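
To be concrete about "inconclusive": with conversion-style metrics it usually means something like a two-sided test where the effect is within noise. A toy sketch with made-up numbers (not from any real experiment):

  from math import sqrt
  from scipy.stats import norm

  # Hypothetical counts: conversions / users in control (A) and treatment (B).
  conv_a, n_a = 1_200, 50_000
  conv_b, n_b = 1_260, 50_000

  p_a, p_b = conv_a / n_a, conv_b / n_b
  p_pool = (conv_a + conv_b) / (n_a + n_b)
  se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
  z = (p_b - p_a) / se
  p_value = 2 * (1 - norm.cdf(abs(z)))          # two-proportion z-test, two-sided

  print(f"lift: {p_b - p_a:+.4f}, z = {z:.2f}, p = {p_value:.3f}")
  # A p-value well above 0.05 means the experiment could not detect any
  # effect at this sample size, which is what I mean by "inconclusive".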

As for the HiPPO pushing for an A/B test out of a lack of confidence, all I can say is that we have had very different experiences: I've almost always seen the opposite, be it in marketing, search/recommendation, etc.


"not enough users to measure anything sensible" is definitely a big part of it: even for large established companies there are still plenty of less than idler used features that don't have enough activity for that to make sense.

A former employer had developed a strong culture of A/B testing to the point that everyone felt pressure to apply it to every problem.


Well, it is both an easy way to compute in a dataframe-like context and a reactive programming paradigm. Combined, that makes a powerful way to throw together data-driven UIs, albeit one that does not scale (in terms of maintenance, etc.).
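
A toy, pull-based sketch of the reactive part (hypothetical Python, not how any spreadsheet is actually implemented): cells are either values or formulas over other cells, and derived cells reflect edits to their inputs.

  class Cell:
      # A cell holds either a plain value or a formula over other cells.
      def __init__(self, value=None, formula=None):
          self.value, self.formula = value, formula

      def get(self):
          # Formulas are re-evaluated on read, so dependents always
          # reflect the current state of their inputs.
          return self.formula() if self.formula else self.value

  price = Cell(value=10.0)
  qty = Cell(value=3)
  total = Cell(formula=lambda: price.get() * qty.get())

  print(total.get())   # 30.0
  qty.value = 5        # edit an "input cell"
  print(total.get())   # 50.0 -- the derived cell updates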


One of the largest Python codebases in the world, if not the largest, implements similar ideas to model financial instrument pricing: https://calpaterson.com/bank-python.html


The issue with executive mandates likely comes from the context of large corporations. It creates fatigue, even though the underlying technology can be used very effectively to do "real work". It becomes hard for people to see where the tech is genuinely valuable (reducing the cost of testing ideas, accelerating entry into a new area, etc.) versus where it is just a BS generator.

Those are typical dysfunctions of larger companies with weak leadership. They are magnified by a few factors: AI is indistinguishable from magic for non-technical leadership; demos can be slapped together quickly but don't actually work without real investment (which is what leadership wanted to avoid in the first place); and of course the promise to reduce costs.

This happens in parallel with people using it for their own work in a more bottom-up manner. My anecdotal observation is that it is overused for low-value / high-visibility work. E.g. it replaces "writing as proof of work" by reducing the cost of producing bland, low-information documents for middle management, which increases bureaucratic load.


My observation is the latter, but I agree the results fall short of expectations. Business will often want last-minute changes in reporting, not get what they want at the right time because of a lack of analysts, and hope that having "infinite speed" will solve the problem.

But of course the real issue is that if your report metrics change at the last minute, you're unlikely to get a good report. That's a symptom of not thinking much about your metrics.

Also, reports/analyses generally take time because the underlying data are messy, a lot of business knowledge is encoded "out of band", and the data infrastructure is poor. The smarter analytics leaders will use the AI push to invest in those foundations.


Given the context, I am assuming this is about the "behavioural" side of the interview (what most companies call culture fit), and that you are applying to "traditional" companies, i.e. companies that are large enough to have a defined hiring process. This includes all FAANG and the like.

My advice:

  - write down the stories (use cases) before the actual interview
  - for each story, focus on what you learnt / where you succeeded
  - for the really negative ones, focus on the learning
  - for the other ones, focus on the outcomes, mentioning things that worked, maybe some things that did not, and how you handled them
This is the part where you have to play the game and avoid being too transparent. Dwelling too much on the negatives will be seen as a red flag by most hiring managers and recruiters.

