In a discussion with a leader of an open source project, I learned that they believe the developers know best about all aspects of the user's needs. Further, if the user wants to reconfigure the software in ways the developers think are unsafe, the software should stop the user. We ended up at a familiar place: if the user, in this case me, wants to change the software beyond what the developers intend, then I'm welcome to fork the code and release my own version.
Basically, the developers know best and the users should be stopped from harming themselves. Huh.
A few hours after the conversation, I was thinking about what I don't like about the sentiment and beliefs behind the developers' worldview. First, it's the paternalistic attitude. "Father knows best," to sum it up. I have a real problem with this attitude. I think it comes from a sense of entitlement: the developers want power and control over the users. Way too many people want to be viewed as benevolent dictators, leading the masses to greatness. Second, it's the cult of the developer. If the user cannot fork the code repository, compile their own binaries, and maintain the fork forever, then they are just dumb. This is another dangerous attitude. A person is smart, and they know their own situation better than any developer does.
My Personal Manifesto
- I need the freedom to control my software.
- The software obeys my stated configuration.
- The software can guide me toward preferred choices, but it must allow overrides.
I strongly believe the user should be in control at all times. Increasingly, this is not the case. If I want to stop my software from sharing data with others, or enable features the developers think are dangerous, then the software should obey my wishes. I've purposely stayed away from the requirement of free software. I prefer free and open source software, but I still want better control of my software regardless of licensing. User control and the freedom to configure the software should be absolute. The user shouldn't need extraordinary measures to enforce their configuration desires. To me, extraordinary measures include firewalls, custom DNS settings, ad blocking, or forking the source code.
The user can choose to take the defaults or cede control to the developer. However, the user can change their mind and take back control at any time.
What if we adapted Isaac Asimov's Three Laws of Robotics to software? They could be as follows:
- First law. Software may not injure a human being or, through inaction, allow a human being to come to harm.
- Second law. Software must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Third law. Software must protect its own existence as long as such protection does not conflict with the First or Second Law.
I'm not sure this works for software as it does for robots, but let's keep going for the thought exercise. First, we have to define injury and harm. What if the user wants the software to do something that would potentially cause a crash or loss of content? What if the developer decides the boundary of harm, and the user cannot make the software do it at all, even if it would not harm them in their specific situation? Thankfully, Isaac Asimov himself modified the laws in an old article.
The Laws of Software
Law 1: Software must not be unsafe to use.
Law 2: Software must perform its function efficiently unless this would harm the user.
Law 3: Software must remain intact during its use unless its destruction is required for its use or for safety.
I think this maps better. We need a zeroth law: the user is always right and knows their situation better than anyone else. This law is the only way to override the others. If the user says that a certain configuration is correct, then the software must comply. The software can warn the user, but it should still allow the user to continue.
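As a rough sketch of what "warn, then comply" could look like in practice (all of the names and limits here are invented for illustration, not taken from any real project):

```python
import warnings

# Developer-recommended limits; the user may exceed them (invented values).
RECOMMENDED = {"max_workers": 16, "cache_mb": 512}

def apply_config(user_config):
    """Apply every user setting verbatim. Warn, but never block, when a
    value exceeds the developers' recommendation: the zeroth law says
    the user's stated configuration always wins."""
    applied = {}
    for key, value in user_config.items():
        limit = RECOMMENDED.get(key)
        if limit is not None and value > limit:
            warnings.warn(
                f"{key}={value} exceeds the recommended {limit}; "
                "applying it anyway because you asked for it."
            )
        applied[key] = value  # comply regardless
    return applied
```

A paternalistic design would raise an exception where this sketch merely warns; here the software states its concern and then obeys.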
Let's leave it there. Fundamentally, the zeroth law is the core, and everything else should defer to it.