In the area where I live, houses are often built one complete room at a time, over many years. They start out as a single-room shack, and then the owner builds extensions as they have children or money. Often they build a porch, and then decades later wall up the porch and turn it into a room of some kind.
I kind of like this analogy because it does help us reason about the situation. The one-room shack is basically an MVP; a hacky result that just does one thing and probably poorly, but it is useful enough to justify its own existence. The giant mansion built from detailed architectural plans seems like a waterfall process for an enterprise application, doesn't it?
There are many advantages to building a house one room at a time. You get something to house you quickly and cheaply. When you build each extension, you have a very good idea of how it will be most useful because you know your needs well. You are more capable of taking advantage of sales (my neighbor collects construction overstock for free/cheap and starts building something once he has enough quantity to do so). It's more "agile". The resulting houses are beautiful in their own bespoke ways. They last a long time, too.
The downsides are that the services and structure are a hodgepodge of eras and necessity. If you're competent, you can avoid problems in your own work, but you may have to build on shoddy "legacy" work. You spend more of your time in a state of construction, and it may be infeasible to undertake a whole-house project like running ethernet to every room.
It's all tradeoffs. I think it does in many cases make sense to build a house in this way, and it likewise makes sense to build software this way. It depends on the situation.
That's interesting, thanks. As you point out, an important aspect of software, as with building architecture, is that it tends to evolve over time, and that's where the waterfall approach falls down. However, in software at least, it's not actually necessary to trade one extreme for the other, waterfall or agile; one can take benefits from both approaches, blending foresight and forward planning with modular construction.
> There are many advantages to building a house one room at a time.... It's more "agile"... The downsides are that the services and structure are a hodgepodge of eras and necessity... it may be infeasible to undertake a whole-house project like running ethernet to every room.
The thing is that that end result is actually the opposite of agile, being as it is more difficult to change, and this speaks to a broader, perennial problem in software development: requirements change regularly, even deep into project development. Planning a design up front does not mean just fixing a specific set of requirements in stone, but also anticipating the things that may change, even without knowing the specifics of what those changes will be, and designing in a flexible way that can accommodate a broad spectrum of possible futures. A car manufacturer might conceivably branch into making other types of vehicles, plant equipment, and similar things, whereas they are unlikely to ever get into catering (and if they did, that would likely be a separate business and a new piece of software). Responding only to the requirements in front of you right now tends to make the design more rigid rather than less, and almost inevitably leads to big balls of mud and big-bang rewrite projects that fail as often as they succeed. Keep in mind also that most software spends most of its life in maintenance mode, so optimising for the delivery stage is short-sighted at best.
Designing software in the way I'm describing is not easy, but it's definitely possible, and in my opinion it offers a lot more value than might first appear.
The bitter lesson is becoming misunderstood as the world moves on. Unstated yet core to it is that AI researchers were historically attempting to build an understanding of human intelligence. They intended to assemble, piece by piece, a human brain, and thus be able to explain (and fix) our own biological ones, much as can be done with physical simulations of knee joints. Of course, you can also use that knowledge to create useful thinking machines, because you understand it well enough to be able to control it, much as we have many robotic joints.
So, the bitter lesson is based on a disappointment that you're building intelligence without understanding why it works.
Right, like discovering Huygens' principle, or interference, or the integrals/sums over all paths in physics.
The fact that a whole lot of physical phenomena can be explained by a couple of foundational principles does not mean that understanding those core principles automatically endows one with an understanding of how and why materials refract light, along with a plethora of other specific effects... effects worth understanding individually, even if they are still explained in terms of those foundational concepts.
Knowing a complicated set of axioms or postulates enables one to derive theorems from them, but the implied theorem proofs are nonetheless non-trivial and have a value of their own (even though they can be expressed and expanded into a DAG of applications of that "bitterly minimal" axiomatization).
Once enough patterns are correctly modeled by machines, and given enough time to analyze them, people will eventually discover a better account of how and why these things work (beyond the merely abstract knowledge that latent parameters were fitted against a loss function).
In some sense deeper understanding has already come for the simpler models like word2vec, where many papers have analyzed and explained relations between word vectors. This too lagged behind the creation and utilization of word vector embeddings.
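The best-known of those relations is the analogy structure, e.g. vec(king) - vec(man) + vec(woman) landing nearest vec(queen). A toy sketch of just that arithmetic; the vectors below are invented 3-D placeholders, not real learned embeddings:

```csharp
// Toy illustration of word-vector analogy arithmetic. The vectors are made-up
// 3-D placeholders; real word2vec embeddings are learned and much larger.
using System;

class AnalogyDemo
{
    static double Cosine(double[] a, double[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.Sqrt(na) * Math.Sqrt(nb));
    }

    static void Main()
    {
        var king  = new double[] { 0.8, 0.7, 0.1 };
        var man   = new double[] { 0.9, 0.1, 0.1 };
        var woman = new double[] { 0.1, 0.1, 0.9 };
        var queen = new double[] { 0.1, 0.7, 0.8 };

        // king - man + woman, component-wise
        var target = new double[king.Length];
        for (int i = 0; i < target.Length; i++)
            target[i] = king[i] - man[i] + woman[i];

        // With trained embeddings, the target vector scores highest against "queen".
        Console.WriteLine($"cos(target, queen) = {Cosine(target, queen):F3}");
        Console.WriteLine($"cos(target, man)   = {Cosine(target, man):F3}");
    }
}
```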
It is not inconceivable that someday someone observes an analogy between, say, QKV tensors and the triples resulting from graph linearization: think subject, object, predicate. (Even though I hate those triples: try modeling a ternary relation like 2+5=7 with SOP triples; they're really only meant to capture "sky - is - blue" associations. A better type of triple would be player-role-act triples; one can then model ternary relations, but one needs to reify the relation.)
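To make that concrete, here's a rough sketch; the type and field names are invented for illustration and aren't from any particular RDF or graph library:

```csharp
// Sketch: modeling "2 + 5 = 7" with triples. All type/field names are invented.
using System;
using System.Collections.Generic;

// A flat subject-predicate-object triple: fine for "sky - is - blue",
// but a ternary relation has no single subject/object pair to hang it on.
record Spo(string Subject, string Predicate, string Object);

// Reified version: the addition itself becomes a node ("add1"),
// and each participant is attached to it with its role.
record RoleTriple(string Act, string Role, string Player);

class TripleDemo
{
    static void Main()
    {
        // Lossy attempt: the result 7 has nowhere to go.
        var flat = new Spo("2", "plus", "5");
        Console.WriteLine($"{flat.Subject} --{flat.Predicate}--> {flat.Object} (no slot for the 7)");

        // Reified: one node per instance of the relation, one triple per role.
        var reified = new List<RoleTriple>
        {
            new("add1", "operation", "plus"),
            new("add1", "addend",    "2"),
            new("add1", "addend",    "5"),
            new("add1", "sum",       "7"),
        };

        foreach (var t in reified)
            Console.WriteLine($"{t.Act} --{t.Role}--> {t.Player}");
    }
}
```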
Similarly, even without mathematical training, humans display awareness of the concepts of sets, membership, existence, and so on, without a formal system. The chatbots display this awareness too. It's all vague naive set theory. But how are DNNs modeling set theory? That's a paper someday.
Indeed. But the premise of the objection was that it is understandable, and that it's a shame we're not putting such understanding first, before implementing these systems.
If you're right, and it's essentially impossible to understand (and we still want to advance these technologies), we will have to do so in some degree of ignorance anyway.
Thanks, Max! I've personally never been able to wrap my head around BSP; I appreciate your diagrams explaining everything, and I am optimistic about the new engine. It's crazy how much faster it can be; really drives home how much extra stuff Unreal is doing that your game may not need.
It's a very risky industry. It's quite common to sink a ton of money into a game and earn very little back, basically wasting everything. Game series that consistently make revenue are even rarer, so they're extremely coddled while they last, though it is expected that eventually they will die, too. The rare successes cover the costs of the common failures.
I used to believe this, and then I got into the area. It depends on the area, of course, but it turns out that the cost of the house is quite significant. House construction costs are $200-500 per square foot, putting even a medium-sized house at around a quarter of a million dollars. When you look at the costs of empty plots of land versus similar plots of land with houses on them, you'll see that the housed plots cost the same as the empty plots plus the construction cost of a similar-sized house. In the areas where I looked, the cost of the house dominated, such that the land value is about 20% of the total value of the plot. Even the variance that occurs at that level can be further explained by the value of potential future plots of land on the space: a plot of land that has one house but could hold a second (for whatever reason) is more valuable than an otherwise-identical plot that can only support one house.
It has more to do with how game engines are built, which makes embeddability the most important criterion.
Most game engines are written as a large chunk of C++ code that runs each frame's timing and the big subsystems like physics, particles, sound, and the scenegraph. All the important engineers will be dedicated to working on this engine, and it behaves like a framework. The "game logic" is generally considered to be a minority of the code, and because less-technical people generally author it, it gets written in a higher-level "scripting" language.
This creates some serious constraints on that language. It must be possible to call into the scripts from the main engine and the overhead must be low. The scripts often have to make calls into the data structures of the main engine (e.g. for physics queries) and the overhead of that should be low as well. It should also be possible to control the scripting language's GC because the main engine is pretty timing-sensitive. Memory consumption is pretty important as well.
All these requirements point towards two implementations specifically: Lua (and LuaJIT) and Mono. Those two runtimes go out of their way to make themselves easy and fast to embed. A third option, which a lot of engines pick, is to write their own scripting language so they control everything about it. Any other language you can think of (with the possible exception of Haxe) will have some major hurdle that prevents easy embedding. The fact that you can compile multiple languages to Mono bytecode pushes some folks in that direction; if you're planning to write a lot of code in the scripting engine (not all of them do! See: Unreal), that's nice flexibility to have.
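To sketch the shape of that engine/script boundary: the interface and class names below are invented for illustration, and a real engine would back something like IScriptRuntime with Lua/LuaJIT, Mono, or an in-house VM rather than the stand-in here.

```csharp
// Rough sketch of the engine/script boundary. All names are invented.
using System;

// What the engine core needs from an embedded scripting runtime: cheap calls
// in, cheap data access back out, and control over when the GC may run.
interface IScriptRuntime
{
    void Invoke(string function, double arg);   // engine -> script, every frame
    void CollectGarbage(double budgetMs);       // engine decides when GC runs
}

// Stand-in runtime so the sketch runs; a real one would wrap Lua or Mono.
class FakeRuntime : IScriptRuntime
{
    public void Invoke(string function, double arg) =>
        Console.WriteLine($"script {function}({arg:F3}) called");
    public void CollectGarbage(double budgetMs) =>
        Console.WriteLine($"GC step, {budgetMs} ms budget");
}

class EngineLoop
{
    static void Main()
    {
        IScriptRuntime scripts = new FakeRuntime();
        double dt = 1.0 / 60.0;

        // Per-frame loop: engine subsystems (physics, particles, audio) tick
        // first, then the engine calls into script-side game logic. This call
        // happens every frame for many objects, which is why per-call overhead
        // and GC pauses dominate the choice of scripting language.
        for (int frame = 0; frame < 3; frame++)
        {
            scripts.Invoke("on_update", dt);
            scripts.CollectGarbage(budgetMs: 1.0);
        }
    }
}
```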
You're right, and it's got more layers than that. C# does have value types, which are not boxed, and using them judiciously can avoid garbage. However, they are a more recent addition to the language (which started as a lame Java clone), and so the standard library tends to not know about them. Really trivial operations will allocate hundreds of bytes of garbage for no good reason. Example: iterating over a Dictionary. Or, IIRC, getting the current time. They've been cleaning up these functions to not create garbage over time, of course, but it's never fast enough for my taste and leads to some truly awful workarounds.
C# had value types and pointers from the very beginning. These are not a recent addition. The standard library does know about them. However, not until C# 2.0, which introduced generics, were collections able to avoid boxing value types.
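For what changed with generics, a minimal before/after sketch (the element counts are arbitrary):

```csharp
// Boxing before and after generics (C# 2.0). Both lists hold the same ints,
// but the non-generic ArrayList stores them as heap-allocated object boxes.
using System;
using System.Collections;          // ArrayList (pre-generics)
using System.Collections.Generic;  // List<T>

class BoxingDemo
{
    static void Main()
    {
        var boxed = new ArrayList();
        for (int i = 0; i < 1000; i++)
            boxed.Add(i);          // each int is boxed: one small heap object

        var unboxed = new List<int>();
        for (int i = 0; i < 1000; i++)
            unboxed.Add(i);        // stored inline in the list's int[] backing array

        Console.WriteLine($"{boxed.Count} boxed vs {unboxed.Count} unboxed ints");
    }
}
```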
There are some cases where allocations are made when they could have been avoided. Iterating over a dictionary through its IEnumerable interface creates a single boxed IEnumerator object. Async methods, tuples, delegates, and lambda expressions also allocate memory, as do literal strings. It is possible to have struct-based iterators and disposers. There are some recently added mitigations, such as ValueTask, ValueTuple, function pointers, ref structs, and conversions of literals to read-only spans, that eliminate allocations.
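To make the dictionary point concrete: foreach over the concrete Dictionary<K,V> uses its struct enumerator and doesn't allocate; it's only when you enumerate through the interface that the enumerator gets boxed into that single heap object. A minimal sketch:

```csharp
// The same loop body, with and without the boxed enumerator.
using System;
using System.Collections.Generic;

class DictionaryEnumerationDemo
{
    static void Main()
    {
        var scores = new Dictionary<string, int> { ["a"] = 1, ["b"] = 2 };

        // foreach over the concrete Dictionary<K,V> uses its struct
        // Enumerator directly: no heap allocation for the enumerator.
        foreach (var kv in scores)
            Console.WriteLine($"{kv.Key} = {kv.Value}");

        // Going through the interface returns IEnumerator<KeyValuePair<...>>,
        // which boxes that struct: one small allocation per enumeration.
        IEnumerable<KeyValuePair<string, int>> asInterface = scores;
        foreach (var kv in asInterface)
            Console.WriteLine($"{kv.Key} = {kv.Value}");
    }
}
```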
DateTime is a value type and doesn't allocate memory. Getting the current time does not allocate memory.
With the recent additions around ref structs and Span<>, C# provides a lot of type-safe ways to avoid creating garbage. You can always use pointers if need be.
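A small sketch of the style those features enable (this needs a reasonably recent .NET runtime; the buffer size and values are arbitrary):

```csharp
// Stack memory via stackalloc, viewed through Span<char>, no garbage produced
// until we explicitly decide to materialize a string.
using System;

class SpanDemo
{
    static void Main()
    {
        // Scratch buffer on the stack; Span<char> gives it a safe, sliceable view.
        Span<char> buffer = stackalloc char[32];

        int value = 12345;
        // int.TryFormat writes digits directly into the span, no string allocated yet.
        if (value.TryFormat(buffer, out int written))
        {
            ReadOnlySpan<char> digits = buffer.Slice(0, written);
            Console.WriteLine(digits.Length);          // 5, still no heap allocation
            Console.WriteLine(new string(digits));     // allocate only when needed
        }
    }
}
```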