I’ve been in software for 30 years now. To give you an idea of how old I am, consider that I was a school teacher for four years after graduating from college. A lot has changed, but a lot has stayed the same. Some things may never change.
Scene I
Early in my career, I wrote a tool to help people see deep details on Windows fonts. It was great for figuring out why a font “looked weird” at a certain size or when printed on a certain printer, or for understanding when a font would render from embedded bitmaps vs. rasterized outlines, etc. It was extremely helpful, and extremely targeted.
Then someone emailed me (this was 1996) and told me that the tool was “completely broken”, because try as they might, they couldn’t test kerning - it would only display one character at a time. I rolled my eyes, explained the context and purpose of the tool, and then pointed them to a different tool that could be used to zoom in and evaluate kerning. Was my tool buggy or useless? I guess it depends on who you consider the user, but I don’t think so.
Scene II
Years later, I was working with a set of static analysis tools from Microsoft Research. As someone who read books like Writing Solid Code before I ever did serious programming, I already ran compilers at the highest warning level, and I was excited to see what these new tools would do for our code base.
Sure enough, they found a LOT of bugs. Many of them were security issues (this was ahead of the flurry of security issues in msft projects in 1999 and 2000). The tools - partially because they were new, and partially because our code base was …ugly legacy code - found a fair number of false positives as well, but those were easy to annotate away or fix with a little bit of code organization. In my opinion, the tools were extremely valuable.
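(A quick aside for the curious: here’s a rough sketch of what “annotating away” a false positive can look like. I’m using today’s MSVC SAL annotations as a stand-in for what those research tools accepted back then, and the function and names below are made up purely for illustration.)

    /* Hypothetical helper, annotated so an analyzer can verify callers.
       Without the annotations, a tool can't tell how big `dest` is and
       may flag a possible overflow or an unterminated string. */
    #include <sal.h>
    #include <string.h>

    void CopyName(_Out_writes_z_(count) char *dest, size_t count,
                  _In_z_ const char *src)
    {
        if (count == 0)
            return;                    /* no room to write anything */
        strncpy(dest, src, count - 1); /* truncating copy */
        dest[count - 1] = '\0';        /* always NUL-terminate */
    }

With the contract written down, an analyzer can check every call site instead of guessing - and the warning goes away for the right reason.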
Yet - when we rolled them out to more teams, some complained that the tools were buggy (due to the false positives). Others complained that the tools were slow - which they could be if the setup instructions weren’t followed**. In other words, they dismissed the tools in general based on specific observations.
** The angry weasel of 2023 does, indeed, think that the tool setup should have been automated to avoid the errors these teams saw, and would probably also make sure that some of the false positives were tuned out.
Some of the teams who initially dismissed the tools were also the teams involved in those previously mentioned security issues (and viruses) that plagued msft around the turn of the century.
Interlude
There’s a logical fallacy called “hasty generalization”. It’s sometimes (often?) also called a faulty generalization or the insufficient sample fallacy. An example might go something like this:
Every time I see a Tesla Model 3, the driver is doing something stupid. All Tesla Model 3 drivers are horrible.
Even if Model 3 drivers have cut you off in traffic before, we know logically that a handful of observations can’t prove the claim is true for every driver.
Closely related to this fallacy is confirmation bias - which says (in short) that we see what we want to see. If I think Model 3 drivers are all horrible, now I’m going to watch every Model 3 I see like a hawk and point out their driving errors - even if I’m driving next to them on the shoulder.
Now combine the fallacy and the bias above with the Dunning–Kruger effect (another cognitive bias, where people with limited skill overestimate their ability), and things truly go to shit.
All Tesla Model 3 drivers are horrible. I know because I watch them, and I also know because I’m an expert driver.
<apologies to the Model 3 drivers for the example>
Perspective?
A line I love to ponder for all these scenarios is:
We don’t see things as they are – we see them as we are.
To me, this is how we break from the above fallacies and biases. If I find myself thinking something must be true, or that I must be right, or that I surely am the expert, I try to take a step back and ask myself…
Is it possible that I’m wrong?
If I’m wrong, what reasons could there be for my misunderstanding?
Do I know enough right now to have this strong of an opinion?
Scene III and Beyond
This cycle goes on forever. Something new appears that people find valuable. New people use the thing and don’t find value, so they throw it away. Inevitably, progress wins, everyone matures, and we start over again.
Not quite.
Sometimes people insist on staying in the past. Someone is probably still mad that fonttool.exe doesn’t solve their problem, and I guarantee that someone right now is complaining about a static analysis tool.
All that’s fine - you just have to choose whether you want to move forward, with a critical eye toward change, or dig in and reject change as it comes.
Thanks for reading.
(note, I may skip next week due to DefCon - apologies in advance)
-A