Whenever custom software is developed, everyone recognizes the final product: the working software. But several OTHER types of software are typically involved, such as
- automated tests
- automated deployments
I want to explain why these types of code are important to producing the final product so that management can invest appropriately in related activities and products.
I work as a software developer for a company that provides health and retirement benefits to the pastors and staff who serve a large religious organization. Our group purchases and leases software for use by our end users, and we often write custom code to adapt it to fit our business. We also write some systems almost completely from scratch. We try to follow the “buy-versus-build” rule as best we can discern it. We have several enterprise software systems, including
- a purchased billing system (customized and integrated with other systems using custom code)
- a purchased CRM system (customized and integrated with other systems using custom code)
- several internal and external web sites, built with custom coding
Although our practices may vary from project to project as appropriate, we usually try to deliver working software often. To meet this goal we do many things, such as
- develop product software
- test product software
- deploy product software
Managers usually understand that our goal is to deliver working solutions that involve product software. They also understand the need to test, though they may not know precisely how that is best done. And they often seem to be a little fuzzy on what is required to deploy working software from one environment to another.
In support of these processes, and in an effort to deliver reliable, deployable software, we write code OTHER than the final product software. Some examples of internal software needed to deliver the final software include
- unit tests
- integration tests
- build programs
- deployment programs
- monitoring packages
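To make the first item on that list concrete, here is a minimal sketch of an automated unit test. The function and its fee rule are hypothetical, invented for illustration rather than taken from our actual billing system. The point is that the test encodes an expectation about product code, so a machine can re-check it on every change:

```python
# Hypothetical product function: charge a 1.5% late fee once an
# invoice is 30 or more days past due. (Illustrative only.)
def late_fee(balance_due: float, days_late: int) -> float:
    if days_late < 30 or balance_due <= 0:
        return 0.0
    return round(balance_due * 0.015, 2)

# The automated test: expectations written as code. Running this on
# every change catches regressions long before a user would.
def test_late_fee():
    assert late_fee(100.0, 45) == 1.50  # fee applies after 30 days
    assert late_fee(100.0, 10) == 0.0   # no fee when paid promptly
    assert late_fee(0.0, 45) == 0.0     # nothing owed, nothing charged

test_late_fee()
```

A test like this takes minutes to write, then repays that time on every subsequent build by verifying the rule automatically.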
Although managers understand the need to test and deploy software, they may not understand the need to write code that automates the testing and deployment process. It takes time to write automated tests. It takes time to write automated deployments. Managers may not see the payback. As they see it, this investment in automated testing and deployment diverts scarce talent from writing the final product. They don’t appreciate the subtle and numerous ways in which automating these practices improves the speed and quality of the overall effort to produce the final product. And when managers fight with us over allocating resources to automate testing and deployment, they usually win, because resource allocation is their role, after all. But the entire team and its users suffer: lack of automated testing leads to defects discovered late, and lack of automated deployment leads to slow, error-prone deployments, less user testing, and defects found after deployment.
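Even a small script illustrates what “automated deployment” buys over hand-copying files. This is a minimal sketch (the artifact name and directory layout are hypothetical, not our actual setup): copy a build artifact to a target environment and verify the copy, so the same checked steps run identically every time:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Fingerprint a file so we can verify the copy is intact."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def deploy(artifact: Path, target_dir: Path) -> Path:
    """Copy the artifact to the target environment and verify it."""
    target_dir.mkdir(parents=True, exist_ok=True)
    destination = target_dir / artifact.name
    shutil.copy2(artifact, destination)
    if sha256_of(destination) != sha256_of(artifact):
        raise RuntimeError(f"deploy verification failed for {destination}")
    return destination

# Example: deploy a (hypothetical) build artifact to a staging directory.
work = Path(tempfile.mkdtemp())
artifact = work / "app.war"
artifact.write_bytes(b"build output")
deployed = deploy(artifact, work / "staging")
```

A real deployment involves far more (configuration, service restarts, rollback), but the principle scales: every step a script performs and verifies is a step a tired human cannot skip or botch.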
Why do managers have trouble understanding this? Managers typically do not understand how developers work to produce the final software product. Software development is still in its infancy and is changing rapidly, so techniques that were considered “best practice” 10 years ago may no longer be in use. Another reason is that managers often have different training, experience, and temperament from analysts, developers, and testers. Regardless of the reasons, collaboration and respect are needed to bridge the gap. If managers listen to developers about the benefits of modern practices and tools, they can help us invest appropriately. Likewise, if developers listen to managers about the challenges they face in trying to meet the competing demands of various customers with limited resources, we will be able to make a more reasoned case. And we may even come to understand that sometimes “better” software is not a better solution if it takes longer or costs more to deliver. After all, the cost/quality equation is negotiable, although it is tricky to get right.
My goal with this article is to raise the visibility of this issue and give developers ideas that will help them communicate with management so that the two groups can work together to balance investment appropriately.
What can you do?
If you don’t understand the costs, benefits, and interactions among practices, how can you convince anyone to invest in those practices? If you are at a loss for words, I can offer you a few ideas below under “Resources”.
Start a life-long dialogue with management
In my first conversation with management, I had some unrealistic expectations. I wanted time and money to automate all testing and deployments, and I wanted it soon. I came away frustrated because my managers did not just grant me carte blanche. The nerve! But they respectfully listened and encouraged continued dialogue. As I reflected, I realized I had been naive, so I adopted a much more patient approach and began a longer-term educational process. That process continues to this day and may never end. I don’t just want to “win the argument” or “get the resources”. I want to help management better understand the modern methods we use to produce quality software, and help them make better investments among the various pieces of the puzzle. In return, by listening to them I have learned many things I did not understand, such as
- the pressure management faces to control costs and deliver quickly
- the effects of the broader economy on project decisions
Be patient with your managers and customers. Be patient with yourself. Don’t get discouraged. Impatience leads to frustration, bitterness, and anger. That interferes with the educational and collaborative environment that will achieve your goals.
I highly recommend Agile Testing by Lisa Crispin and Janet Gregory. With chapters like “Why we want to automate tests and what holds us back”, you are certain to learn not just how to use automation to improve quality and efficiency, but why these practices produce such results.
I also like Kent Beck’s very popular book Extreme Programming Explained, even if you do not think you are interested in the Extreme Programming method or its supporting culture. I would direct your attention to several key practices, including ten-minute build, continuous integration, daily deployment, and test-first programming. In particular, notice the language that describes dependencies among practices. If you can describe how one practice supports another, you can better justify unfamiliar practices by tracing them as supporters of well-recognized practices.
Tracing Practices to Software Qualities they Support
Along those lines, I made a goal map, Tracing Practices to Software Quality, illustrating how various technologies we use support certain practices, which support software quality. It’s a work in progress. It’s not the Rosetta Stone, and I’m not Ptolemy, but it is a fun exercise in tracing how practices reinforce each other. If you don’t agree with it, draw your own. Regardless, I find it is useful to me in explaining why we pursue these various practices.