It’s not all about incentives.
Incentives matter. Incentives are one of the key lessons of economics. People don’t just act, they act for reasons. It’s not Brownian buying & selling & inventing & marketing & cheating & conniving & colluding for no reason at all.
But there are limits. There aren’t enough lollipops in the world to make me shoot a stranger (to pick an extreme example). How, then, to talk about incentives sensibly and with awareness of what they can & can’t do?
When I first started focusing on incentives they seemed almost magical. If only I could contrive the right incentives I could get people to do anything. This perspective, of people as machines I tweak, is deeply dysfunctional. It doesn’t work because people are complicated and will do what they want. As a recovering narcissist, though, I’m prone to this kind of thinking.
My working model is that incentives change marginal behavior (margins are another of those key lessons of economics). If I’m on the fence about doing this or that, the incentives I experience will nudge me this or that way.
To take a geeky example (this is Geek Incentives, after all), when programmers are punished for not having tests, they will write tests. Awful, useless, expensive tests. But tests. The incentives mattered. Programmers who wouldn’t have written tests are now writing tests.
But not good tests. Punishment for not writing tests is not sufficient for the writing of good tests. That requires, for those who can’t write good tests already, incentives to learn. The most effective of those incentives are social or intrinsic (because someone is curious or loves learning).
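To make that concrete, here’s a minimal sketch (Python with pytest; the function and names are hypothetical, not from any real project) of the kind of test the punishment produces, next to the kind it doesn’t:

```python
def apply_discount(price, rate):
    return price * (1 - rate)

def test_apply_discount_exists():
    # Written to satisfy "thou shalt have tests": it runs the code,
    # bumps the test count, and can never fail in a useful way.
    assert apply_discount(200, 0.5) is not None

def test_apply_discount_halves_price():
    # The test the punishment didn't buy: it pins down actual behavior
    # and fails when that behavior regresses.
    assert apply_discount(200, 0.5) == 100
```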
Now when seeing a situation and wanting to apply incentives, I think about the marginal behavior that might change as a result. I’m not popping a new piano roll in the player piano. I’m nudging a little & that’s all I can do.
How do you make people aware of bad tests? It's a delicate topic.
Often people think they're doing the right thing because they read it
in a book, and then it's hard to convince them otherwise.
Example 1:
Developers know TDD. They know that a good practice is to test public
methods *only*.
-> Consequence: Devs now start making private/protected functions
public in order to test them, complying with the good practice. This
completely breaks information hiding and pollutes a clean interface
with random methods.
Another driving point here is testability, or "testing drives good
design": if you make private functions public you improve
testability, but you screw up the design (a sketch follows below).
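A minimal sketch of that pattern, assuming Python and pytest (the class and names are hypothetical):

```python
class Invoice:
    def total(self, line_items):
        # Public behavior: sum of discounted line items.
        return sum(self._discounted(price) for price in line_items)

    def _discounted(self, price):
        # Private implementation detail no caller needs to know about.
        return round(price * 0.9)

# The anti-pattern: rename _discounted() to discounted() so it can be
# tested "properly" as a public method -- information hiding is gone and
# the interface now advertises a method no caller ever wanted.

# The alternative: keep it private and pin its effect down through the
# public method it serves.
def test_total_applies_discount():
    assert Invoice().total([100, 200]) == 270
```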
Example 2:
Developers are often proud of good code coverage, e.g. > 80%.
-> Looking into the tests reveals that they include large
system/component/e2e tests in the coverage statistics, or they write
unit tests with no assertions (sketched below).
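A small sketch of what such a test can look like, assuming Python and pytest (hypothetical names):

```python
def parse_order(raw):
    parts = raw.split(",")
    return {"sku": parts[0], "quantity": int(parts[1])}

def test_parse_order_for_coverage():
    # Executes every line of parse_order(), so the coverage report says
    # 100% -- but with no assertion it only fails if the code throws.
    parse_order("ABC,3")

def test_parse_order_with_assertion():
    # Identical coverage, but this one can actually fail when the
    # parsing logic regresses.
    assert parse_order("ABC,3") == {"sku": "ABC", "quantity": 3}
```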
The lesson here is that no mantra is always true: IT ALWAYS DEPENDS!
Great advice from GeePaw Hill:
"It frustrates me greatly to see so many bright people come out with
harsh declarations of what you must always or never do, generally in
programming, and specifically in TDD. It's judgement, folks, and it's
judgement all the way down. Build your judgement, then use it."
https://twitter.com/GeePawHill/status/1416497997379166209
If the most effective incentives are social or intrinsic, for example my own ego, how can I keep myself in check so that I can say 'no' to those incentives and instead look for more meaningful ones?