Feedback

Different skills are required to get software built and deployed, from the marketing people who have the ideas, through the architects and developers who design and build, to the operators who release and manage it in production. If quality is low as work passes down the line, problems build up. Feedback to earlier stages is the key to creating a steady, fast flow of quality software.

As a feature goes through each value-adding phase of the software development life cycle, such as requirements gathering, development, or testing, the quality of the work done is going to vary. The next step in the chain can only produce quality output if it receives sufficiently high-quality input in the first place.

How hard this is to measure depends on which stage you are looking at. You might count the number of bugs logged in a ticketing system to measure the quality of development, but how do you know whether a requirement was good enough?

Many stages will have only anecdotal feedback, which can be difficult to capture as a statistic. People simply have a feeling that something was easy or hard, good or bad. Asking them to explain how they felt, and what they thought the reasons were, can draw out potential issues.

Capturing this sort of feedback may require a workshop or survey, rather than an automated statistic that can be monitored on a dashboard. But there are many things we can measure, if we take the time to build the measurement capability. Here’s a non-exhaustive list of things we might be able to query automatically (a sketch of such a query follows the list):

  • how many bugs were raised?
  • how many live incidents were raised after a deployment to production?
  • was there an increase in tickets from users?
  • how many new security vulnerabilities were discovered?
  • has the performance degraded?
  • how many build failures were there?
  • how many static analysis issues were there?
  • what is the response time for a call on our website?
  • what happens to application performance under load?
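
Most of these numbers can be pulled from tools that are already running. Below is a minimal sketch, assuming a hypothetical issue tracker and CI server that expose JSON REST APIs; the URLs, endpoints, and field names are placeholders for illustration, not any real product’s API:

    # A minimal sketch: count bugs and build failures via hypothetical
    # REST APIs. All URLs, endpoints, and JSON field names below are
    # placeholder assumptions, not a real tracker's or CI server's API.
    import requests

    TRACKER = "https://tracker.example.com"  # hypothetical issue tracker
    CI = "https://ci.example.com"            # hypothetical CI server

    def bugs_raised_since(date: str) -> int:
        """Count bug tickets created since the given ISO date."""
        resp = requests.get(f"{TRACKER}/api/search",
                            params={"query": f"type = Bug AND created >= {date}"})
        resp.raise_for_status()
        return len(resp.json()["issues"])

    def build_failures_since(date: str) -> int:
        """Count failed CI builds since the given ISO date."""
        resp = requests.get(f"{CI}/api/builds", params={"since": date})
        resp.raise_for_status()
        return sum(1 for b in resp.json()["builds"] if b["status"] == "FAILED")

    if __name__ == "__main__":
        since = "2024-01-01"
        print("Bugs raised since", since, ":", bugs_raised_since(since))
        print("Build failures since", since, ":", build_failures_since(since))

Running a handful of such queries on a schedule turns one-off numbers into trends that can be watched on a dashboard.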

Other questions that might help spot problems with the software development life cycle are:

  • how quickly did subject matter experts respond to questions?
  • were the requirements clear, or did they need further explanation and clarification?
  • was the priority clear?
  • did the demo go well?
  • how much re-design was required during development?

One way to gather the above information is to hold a Value Stream Mapping session. To do so, we gather everyone who is part of the value stream, from the idea creators down to the operator who pushed the change into production, into a one- or two-day workshop.

Each phase of the process is discussed, using statistics to back up the discussion where possible. For each phase, can we answer the following three questions:

  • how much time was spent actively working on the feature at that stage?
  • how much elapsed time passed while the feature was at that stage?
  • what was the quality of the output from that stage?

If the quality of output from a stage is low, this needs to be discussed to figure out why. The person or people involved may not be aware that they are causing downstream problems, and these sessions can be a real eye-opener.

Another clear warning sign is elapsed time that is far higher than the value-add time, the time spent actively working on the item. The group should try to agree on actions that will remove the bottleneck. For example, if questions sent by email take too long to answer, perhaps the key individual can sit at a hot desk near the team for a portion of the week so that questions can be put to them directly.
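
The comparison is easy to make concrete: flow efficiency is simply value-add time divided by elapsed time. Here is a minimal sketch of the arithmetic in Python; the stage names and hours are made up for illustration, and in a real session they would come from the group’s own estimates or ticket history:

    # Flow efficiency = value-add time / elapsed time, per stage.
    # All figures are invented for illustration; times are in hours.
    stages = {
        "requirements": {"value_add": 6,  "elapsed": 80},
        "development":  {"value_add": 40, "elapsed": 120},
        "testing":      {"value_add": 10, "elapsed": 200},
        "release":      {"value_add": 2,  "elapsed": 16},
    }

    for name, t in stages.items():
        efficiency = t["value_add"] / t["elapsed"]
        flag = "  <-- worth discussing" if efficiency < 0.10 else ""
        print(f"{name:13s} {t['value_add']:3d}h active / "
              f"{t['elapsed']:3d}h elapsed = {efficiency:5.1%}{flag}")

In this made-up example, testing shows 5% flow efficiency: ten hours of active work spread across more than a week of waiting, exactly the kind of number that should trigger the bottleneck discussion above.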

If a large amount of time is spent at a certain stage, this might be a sign that some process analysis would be useful. It may be possible to streamline the work using some form of Lean analysis. It may also be an indicator that the quality of output from the previous stage was lower than it should have been.


The value stream mapping exercise takes some time, but can expose the real problems in the production line. As developers, we may think that automating the provisioning of environments will solve all our problems, but if we never get clear, clean requirements, we will still have disappointing results.

Developers like to think that their code is perfect and will work the first time, and that if it doesn’t, a few tweaks will get it working. QAs usually find that this is not the case, and they waste time returning code to developers to fix. Talking about this, and putting numbers on the quality in a value stream mapping exercise, sheds light on the problem.

The aim is not to fix all problems, just the one causing the biggest impediment to the flow of new features. Any time spent on something that is not the biggest problem is time wasted.

Actions

  • Identify everyone who was involved in a recent feature or set of features that your team developed and deployed.
  • Approach them or their bosses and ask whether they could take part in a value stream mapping session.
  • Before the session, figure out how much time each item spent at the various stages of the software development life cycle: idea generation, high-level estimation, approval, requirements gathering, team estimation, analysis and design, development, testing, demonstration, user acceptance testing, release. (A sketch of pulling these timings from ticket history follows this list.)
  • Figure out how much value-add time was spent at each stage.
  • If you can, take statistics about how the feature set is behaving in production: how often it’s used and whether the users are satisfied with the results.
  • Consider devising a short survey that asks for feedback about the quality of the material that came into each phase, and send it to the relevant people before the workshop.
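
If your ticketing system records status changes, the elapsed time per stage can often be extracted from ticket history rather than reconstructed from memory. A minimal sketch, assuming you can export each ticket’s transitions as (timestamp, new status) pairs; the statuses and dates below are hypothetical:

    # Elapsed time per stage, derived from a ticket's status-change
    # history. Transitions are (ISO timestamp, status entered) pairs;
    # these values are hypothetical.
    from datetime import datetime
    from itertools import pairwise  # Python 3.10+

    transitions = [
        ("2024-03-01T09:00", "Requirements"),
        ("2024-03-04T10:00", "Development"),
        ("2024-03-12T15:00", "Testing"),
        ("2024-03-20T11:00", "Released"),
    ]

    for (start, status), (end, _next_status) in pairwise(transitions):
        t0 = datetime.fromisoformat(start)
        t1 = datetime.fromisoformat(end)
        days = (t1 - t0).total_seconds() / 86400
        print(f"{status:13s} {days:5.1f} days elapsed")

Value-add time usually cannot be derived this way; that still has to come from asking the people who did the work.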