Central Point Software, the makers of Copy II PC, was one of our customers (we created back-office software, order processing, etc.).
It was a pretty healthy business, not just from the copy-protection breaking but also from the general tools software.
Funny story:
I was at their offices working on a project when they were getting ready to ship out the new version. Their warehouse was connected to the office building and they were producing all of the final copies and loading them on trucks to get sent to the distributors.
In the morning they gave the all-clear for the first wave of trucks to leave; then, about 4 hours later, someone found a bug and they had to call all of the trucks back to the warehouse, unload, re-create clean product, etc.
They did this about 3 times before that version finally made it to the distributors.
Ya, RPG assumed character-based I/O, so it's probably a safe bet that they just ported stuff that ran on IBM character-based terminals and made it run in DOS. (I worked in RPG in the '80s.)
> The cells in your eyes have exactly the same DNA as the cells in your big toe
Is that true?
I know that cells in the brain have significant variability in DNA, but I'm not really aware of how much variability non-neuronal and non-brain cells typically have.
Definitely not. The Art of Software Testing wrote about "module testing" in 1979; a later edition changed it to "unit testing" after "unit test" had become part of the popular lexicon. Perhaps that is what you are thinking of?
Beck is clear that he "rediscovered" the idea. People most certainly understood the importance of tests being isolated from each other in the 1960s. Is that, maybe, what you mean?
"Unit test:
testing, outside of the system, of a part of the system that may have less than a complete function.
Component test:
testing, inside of the system, of parts that have been successfully unit tested.
Integration test:
testing of new components to ensure that the system is growing functionally; in addition, retesting of successive system builds to ensure that they do not cause regression in the capabilities of the system.
Regression test:
testing of the system for unpredictable system failure.
Scaffolding:
coding that replaces not-yet-completed functions..."
I was taught about this in the '80s by a woman who had worked at a company that was more formal about software dev than we were at the small company I worked for.
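For anyone who hasn't seen "scaffolding" in practice, here's a minimal sketch in Python (hypothetical names, not from the original source): a stub stands in for a not-yet-completed function so the rest of the code can be unit tested right now.

    import unittest

    def parse_order(line):
        # Completed function under test: splits "sku,qty" into (sku, int qty).
        sku, qty = line.split(",")
        return sku.strip(), int(qty)

    def lookup_price(sku):
        # Scaffolding: the real price lookup isn't written yet, so this stub
        # returns a canned value to let callers be exercised in the meantime.
        return 9.99

    def order_total(line):
        sku, qty = parse_order(line)
        return qty * lookup_price(sku)

    class OrderTotalTest(unittest.TestCase):
        def test_total_uses_stubbed_price(self):
            # With the stubbed lookup_price, two units cost 2 * 9.99 = 19.98.
            self.assertAlmostEqual(order_total("ABC, 2"), 19.98)

    if __name__ == "__main__":
        unittest.main()

When the real lookup_price ships, the scaffolding is simply deleted and the same test suite keeps running against the real thing.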
> you're trying to twist reality to fit a premise that is just impossible to make true: that estimates of how long it takes to build software are reliable.
It's not binary, it's a continuum.
With experience, it's possible to identify whether a new project or set of tasks is very similar to work done previously (possibly many times) or covers substantial new territory with many unknowns.
The more similar the work is to past work, the higher the chance that reasonably accurate estimates can be created; more tasks in new territory mean more unknowns and lower estimate accuracy. Some people work in areas where new projects are frequently similar to previous projects; others work in areas where that is not the case. I've worked in both.
Paying close attention to the patterns over the years and decades helps to improve the mapping of situation to estimate.
Yes, but where reliability is concerned, a continuum is a problem. You can't say with any certainty where any given thing is on the continuum, or even define its bounds.
This is exactly what makes estimates categorically unreliable. The ones that aren't accurate will surprise you and mess things up.
In that sense, it does compress to being binary. To have a whole organisation work on the premise that estimates are reliable, they all have to be, at least within some pretty tight error bound (a small number of inaccuracies can be absorbed, but at some point the premise becomes de facto negated by inaccuracies).