I’ve been involved in a lot of development projects, and I’ve seen the good, the bad, and the ugly. One of the scariest things that can happen on a software project is losing your developer for some unforeseen reason, and it happens even when you have a big team. Because software projects are typically highly visible, you want to do as much as possible to mitigate that risk.
What if your developer gets hit by a bus? Sounds like hyperbole, right? Not in Houston traffic…
I remember a project where we’d just finished a major release cycle for one of our biggest clients. The application had been developed, tested, and installed, and had been running at the client site for over a month. All of the functionality had been run through the mill and come out looking pretty good. The major bugs were resolved, and even the minor ones were well on the path to being fixed. Tired from long hours and war-room readiness, the whole team was ready for some overdue time off.
The only team members left on site were a senior developer from the project and a handful of cross-trained devs intended to backfill in case something came up. Then the worst happened: the senior dev was in a five-car pile-up coming back from the client site. Thankfully, he was OK, but he was going to be unavailable for a few days while he dealt with doctors, insurance agents, and mechanics. That left the project team with a skeleton crew of developers who had only surface-level experience with the project.
Of course, we suddenly discovered a major bug that was preventing the client’s payroll from going out. With large portions of the team out of state, the person responsible for that functionality on a cruise somewhere in the Gulf of Mexico, and the senior developer sidelined by his accident, we were left scrambling to fix the issue for the customer.
Fortunately, this story has a happy ending. We were able to fix the issue in the code relatively quickly. It was a minor bug arising from a business case that realistically never should have come up, and because the code was clean and conformed to our standard best practices, we were able to quickly add logic to handle that case. Deployment followed the standard procedures documented in our document store, and the client got their payroll out on time. We managed all of this without ever having to approach the client with excuses about a delay due to unavailable personnel.
Still not convinced you need to mitigate risk on software projects? Here’s another one:
We’d been brought in to help out on a number of line-of-business web applications that tracked the movement of product through warehouses. Our client was very happy with their original developer’s work. The app functioned as intended, and when they did find a problem, he was able to fix it almost immediately. He wasn’t a developer by trade, but had picked it up to fill the need at the company. When he moved on, they brought us in to carry the project through to completion.
There wasn’t a lot of documentation, so I was left with the task of searching the client’s dev servers for the project files and piecing together how the application communicated across several different solution files. I was able to figure out the application sufficiently to make the requested behavior change, and copied the project up to the client’s staging server for review.
The client’s response was eye-opening. While they were perfectly happy with the changes, they were confused: it looked like the application had been reverted by several years. After a quick, panicked search, I discovered the truth. The developer had seemed so responsive to the client’s requests because he had abandoned the dev and staging servers entirely and was doing his development work directly against the production server.
Our next task for that client was scouring the various production servers to find the latest version of each individual piece. The process was complicated by the developer’s approach to source control: making backup copies of project files in subdirectories. We had to dig into one project just to find which other project had its directory hard-coded in, and in at least one case, several instances of the same project were referenced by other projects on different servers.
We then created a source control server and backed up each of the client’s projects, recreating their production environment in dev, this time with documentation.
Relying too much on one developer is always a scary proposition. Sometimes, however, it’s unavoidable. It’s just not efficient to have multiple people spun up on a single function at one time. Entrance has some good policies in place to mitigate that risk.
Here are a few ways we mitigate risk on software projects so our clients don’t experience any impact due to unforeseen issues:
- Documentation, Documentation, Documentation
The first and foremost line of defense is documenting everything, and storing it in a standard location and format. In the case of our own developer’s unplanned absence, we were able to quickly locate his deployment instructions and follow them step by step.
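For illustration, a deployment runbook in that standard location might look something like this sketch (the file name, paths, and steps here are hypothetical, not our actual procedure):

```
DEPLOY.md: Payroll Web App (hypothetical example)

1. Prerequisites: VPN access to the client site; deploy credentials from the team vault.
2. Build: check out the release tag and build the Release configuration.
3. Backup: snapshot the production database and the current web root.
4. Deploy: copy the build output to the web server; run any migration scripts in order.
5. Verify: walk the smoke-test checklist; confirm the payroll batch screen loads.
6. Rollback: restore the web root and database snapshots from step 3.
```

The content matters less than the consistency: anyone on the team should know exactly where to find this document and what shape it will take.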
- Standard Development Practices
Every developer prefers to write code a little bit differently, and that isn’t something you can or should control. Much like ordinary writing, though, a developer will write very differently if he’s expecting someone else to read his code. Frequent code reviews and software-enforced formatting keep the developer in the mindset that he’s writing not just for himself and the computer, but for the other humans who will work with the code later. In our case, this meant being able to quickly locate the relevant code, understand what it was doing, and fix the problem.
Code that wasn’t written to be read by others is frequently more difficult to trace than it needs to be, because best practices are often omitted for expediency.
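Code reviews handle the judgment calls; mechanical consistency is better left to tooling. As one common, low-cost sketch of software-enforced formatting (not necessarily the tooling we used on this project), an .editorconfig file checked into the repository is honored automatically by most modern editors and IDEs:

```
# .editorconfig: one shared format for everyone who touches the repo
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
indent_style = space
indent_size = 4
```

Because the file lives with the code, a new or backfilling developer inherits the team’s formatting rules the moment they open the project.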
- Source Control
Source Control is not optional. In addition to providing a backup in case of hardware failure or destructive change, it allows later devs to track the history of a given piece of code. Why and when changes were made is too important to be left to tribal knowledge.
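With history in source control, a developer who inherits the code can answer the why and when without tracking anyone down. In git, for example (the file path and commit hash here are hypothetical), a few everyday commands cover most of it:

```
git log --follow -- src/Payroll/TaxCalculator.cs   # every commit that ever touched this file
git blame src/Payroll/TaxCalculator.cs             # which commit last changed each line
git show 4f2a9c1                                   # the full diff and message for one commit
```

Any mainstream system gives you an equivalent; the point is that the history exists somewhere other than one developer’s memory.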
- Cross Training
When you’re suddenly short a developer, it’s too late to spin someone new up on the application and its stack. In the best case, your familiar people will all be busy working. In the worst case, you won’t have any familiar people left to do the training.
Make sure you’ve got someone who can take over. They don’t need to know every intimate detail, but they do need to be familiar enough to read through the documentation and start fixing problems on short notice.
- Process
Each of the above items only works if it’s performed regularly and consistently. Worse, you frequently find out what isn’t being done only when there’s a problem, and discovering that the safeguards you thought you had aren’t actually in place is the quickest way to turn an emergency into a disaster.
It’s not enough to set up a policy; you’ve got to verify that it’s being followed.
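Verification can start small. Here’s a minimal sketch in shell, assuming projects live under one directory and follow the conventions above (all paths and file names are hypothetical):

```
#!/bin/sh
# Spot-check every project against the policies above; print anything out of compliance.
for repo in /srv/projects/*/; do
  [ -d "$repo/.git" ]           || echo "$repo: not under source control"
  [ -f "$repo/docs/DEPLOY.md" ] || echo "$repo: missing deployment runbook"
  # Lingering uncommitted work is the same smell as developing against production.
  git -C "$repo" status --porcelain 2>/dev/null | grep -q . && echo "$repo: uncommitted changes"
done
```

Run on a schedule, a check like this surfaces drift long before an emergency does.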
In short, it’s a good idea to plan for having people unavailable, and in my experience the best way to do that is to build an organization around sound policies and mutual support. Having a single developer working in isolation may feel cheaper in the short run, but it exposes you to a lot of unnecessary risk.