Process Intensity (1987) (erasmatazz.com)
34 points by Kinrany on Sept 20, 2021 | 5 comments


These sorts of hidden gems are why I like to read HN. This makes a lot of sense to me:

"The difference between process and data is profound. Process is abstract where data is tangible. Data is direct, where process is indirect. The difference between data and process is the difference between numbers and equations, between facts and principles, between events and forces, between knowledge and ideas."


I like to say that, just like matter can be regarded as frozen energy, data can be regarded as frozen actions. I would not be surprised if physicists eventually regarded both phenomena as isomorphic.


This is a profound concept. I've sensed it in computing for some time, but was only able to articulate it by understanding the philosophical difference between utility (closely resonant with "purpose") and metaphysical reality (simply what "is").

It's interesting as well because the only kind of efficiency I've really grokked is algorithmic efficiency; data efficiency often gets overlooked because it's abstracted away at the hardware encoding level and bypasses the OS entirely.

Still, this gives me more to think about, since my new life's purpose is to create improved information[1]. I get the feeling that better data could make for better processes, though I'm not sure how that would be implemented beyond the broad concept of "data cleaning" (which is nothing more than enforcing uniformity in a set).

[1]https://stucky.tech/purpose/
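
To illustrate that notion of uniformity, here's a throwaway Python sketch with made-up field names:

    # Hypothetical messy records: the same fact, inconsistent representation.
    raw = [
        {"name": " Ada Lovelace ", "born": "1815"},
        {"name": "ada lovelace",   "born": 1815},
    ]

    def clean(record: dict) -> dict:
        # Force every record into one uniform shape:
        # trimmed, title-cased names and integer years.
        return {
            "name": record["name"].strip().title(),
            "born": int(record["born"]),
        }

    cleaned = {tuple(sorted(clean(r).items())) for r in raw}
    print(cleaned)  # one uniform record instead of two inconsistent ones

Nothing fancy, just forcing every record into the same shape.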


"Process intensity" is a good way to describe the kind of programming work I find most interesting. The standard CRUD application being an example of a low-process intensity software in contemporary context.

Often we can become mesmerized by systems of great scale that deal with large amounts of data. But what's really interesting and valuable is not the amount of data per se, but the things done with that data.

Furthermore, data intensity and process intensity are orthogonal. That means you don't have to work at a company like Google with massive amounts of data to find interesting work. There's process-intensive work to be found at all scales of data intensity. I've even done interesting side-projects processing less than a kilobyte of data.
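
As a hypothetical example of that last point, here's the shape of such a project in Python: the classic SEND + MORE = MONEY cryptarithm is a few dozen bytes of input, yet solving it by brute force churns through nearly two million permutations.

    from itertools import permutations

    # Input data: well under a kilobyte.
    LETTERS = "SENDMORY"  # the distinct letters in SEND + MORE = MONEY

    def solve():
        # Brute force: tiny data, lots of crunching (10P8 = 1,814,400 tries).
        for digits in permutations(range(10), len(LETTERS)):
            d = dict(zip(LETTERS, digits))
            if d["S"] == 0 or d["M"] == 0:
                continue
            send = int("".join(str(d[c]) for c in "SEND"))
            more = int("".join(str(d[c]) for c in "MORE"))
            money = int("".join(str(d[c]) for c in "MONEY"))
            if send + more == money:
                return send, more, money

    print(solve())  # (9567, 1085, 10652)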


"Balance of Power"

I've heard of this 80s game time and time again. Oftentimes it's called the first computer strategy game. A Cold War-era video game about Cold War-era politics, it was immediately relevant back then and reads as a historical lesson today... and it was also one of the major inspirations for 90s-era strategy games.

Maybe one day, I'll put forth the effort to find it and play it. For now, I'm satisfied with an old version of Hearts of Iron 3.

Given how many times this game pops up in 1980s-era computer discussion threads, I feel like I'm missing something each time it's brought up.

-------

In the realm of "process intensity", the crunch-to-bit ratio of Factorio, OpenTTD, and now my current game (Hearts of Iron 3) is pretty high. In Factorio, all mistakes are correctable: just order your bots to deconstruct everything and place it all down again. Once you have your defense automated, you can continuously refine your designs toward more and more optimized results.

Hearts of Iron 3 is similar, though punishing instead. You spend an inordinate amount of time reviewing your commanders' skills and experience, organizing brigades / divisions / corps / armies / groups / theaters, and assigning commanders at each level of the hierarchy. You consider production, and which groups the weapons will go to.

Then you unpause the game, and hope for the best. Hopefully your organization was good and your commanders / troops can do their thing. If not, you restart the game, or accept the results as you get strategically bombed or whatever.



