correct - the SVB collapse was pretty bad on its own, but it provided the perfect opportunity to jump on the blame train and spin a bunch of negative narratives.
News(events) = boring but useful
News(events + narratives) = juicy and less useful
This is based on articles that show up on the front page of each source, sampled twice daily; if an article is at the bottom of the front page, its coverage score will be close to zero. That said, feedback about my word choice is taken.
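A loose sketch of the scoring idea described above, assuming a simple linear decay by front-page position (the function name, page size, and decay shape are all my assumptions, not the project's actual implementation):

```python
def coverage_score(position: int, page_size: int = 30) -> float:
    """Score an article by its front-page position (0 = top).

    Articles at the bottom of the page score near zero; anything
    off the front page scores exactly zero. Linear decay is an
    assumption; any monotonically decreasing weight would match
    the description.
    """
    if position >= page_size:
        return 0.0
    return 1.0 - position / page_size
```

Summing these scores over the twice-daily samples would give a per-story coverage total.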
you should just build it and do the recommendations with a heuristic function. Then you can substitute the function with an ML classifier once you have enough data to train on (and time to learn about ML). Don't wait on ML coding tips for this project.
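To illustrate the swap being suggested: keep the scoring function behind a narrow interface so a trained classifier can replace it later without touching the calling code. Everything here (names, the topic-matching rule) is a hypothetical sketch, not the project's design:

```python
from typing import Callable, Dict, List, Set

Article = Dict[str, str]  # e.g. {"title": ..., "topic": ...}

def heuristic_score(article: Article, user_topics: Set[str]) -> float:
    # Hand-written rule: boost articles that match the user's topics.
    return 1.0 if article.get("topic") in user_topics else 0.1

def recommend(articles: List[Article],
              score: Callable[[Article, Set[str]], float],
              user_topics: Set[str]) -> List[Article]:
    # The ranking logic only depends on the `score` callable, so a
    # model's predict function can be dropped in later unchanged.
    return sorted(articles, key=lambda a: score(a, user_topics), reverse=True)
```

Once there's training data, `heuristic_score` can be replaced by a wrapper around a classifier's predicted probability, and `recommend` stays as-is.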
totally possible, you can view the output as a pie chart, compare it to the ideal pie chart, and make changes based on that. My analogy is flexible and quite loose; we're not actually taking partial derivatives here.
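The pie-chart comparison could be made concrete as a per-topic gap between the actual and ideal distributions. This is just one way to read the analogy; the function name and the choice of a simple difference are my assumptions:

```python
from typing import Dict

def distribution_gap(actual: Dict[str, float],
                     ideal: Dict[str, float]) -> Dict[str, float]:
    """Per-topic difference (ideal - actual).

    A positive value means the topic is under-served relative to
    the ideal pie chart and could be boosted; negative means it is
    over-served.
    """
    topics = set(actual) | set(ideal)
    return {t: ideal.get(t, 0.0) - actual.get(t, 0.0) for t in topics}

actual = {"tech": 0.7, "world": 0.2, "science": 0.1}
ideal  = {"tech": 0.4, "world": 0.3, "science": 0.3}
gap = distribution_gap(actual, ideal)
# gap["science"] is positive here, so show more science articles
```

No gradients involved, in keeping with the loose analogy: just look at which slices are too big or too small and nudge accordingly.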
the piece seems too founded on symbolic representation and building a world model, but the vast majority of deployed AI systems use neither, or only a bit of 'knowledge'. His example 'You should eat a banana' should be replaced with 'You went banana crazy'. That highlights the kind of ambiguity we may want to convey to a machine, which any symbolic system would utterly fail at.
He touched on machine learning, but I think the most powerful future systems will be built heavily on data-driven algorithms, which will (and presently do) handle ambiguity and understand what a phrase like that means.
hierarchical doesn't imply that everyone organizes their hierarchies the same way. The vegetable counterexample is moot.
The brain works in many different ways - I'm sure you're not wrong about the graphical structures being intuitive. But there's a reason why outlining is so ubiquitous in daily life and many professions.
"but the machine built for this study actually outperformed the average human on these questions."
A subtle but key statement in the article. If the models are trained for the test, we're essentially looking at a standard machine learning problem, albeit with very modern techniques (word vectors, deep nets, etc). The point is that all of these are optimized towards a goal. In this case, the goal is the IQ test.
This is nowhere close to the kind of intelligence humans have. Candidate objectives you could say humans are 'trained' for might be survival, finding meaning, reproduction, etc. All of these goals are extremely broad and abstract, especially in the context of computers.
I'm not saying this article is sensationalist, but it may be perceived sensationally. This article merely notes a predictable progression in artificial intelligence.
here's the video: https://www.loom.com/share/5e83475be2464778950f7df7e209ac2d