

I still find that most teams using Scrum or any similar iterative, timebox-based approach are not creating a potentially shippable product increment at the end of each Sprint. At the risk of making a gross generalisation, this is really the number one issue that any such Agile team should be thinking about. The following question, from a recent attendee on one of my training classes, is fairly typical.

John: We have a very mature product that is now 11 years old, so a typical sprint for us is half defects and half enhancements (we are working on a technical debt initiative at the moment). Our clients are local authorities, and one particular client likes to test what we release on Pre-Production first.

At the moment our process is to end a sprint on a Thursday, with the next sprint starting on the Friday morning. Also on the Friday, the testers regression test what was delivered in the previous sprint. We then release it to our client for them to test if they wish, and release to production the following Thursday.

The issues come on the Friday, when the new sprint has started and the developers are working away on it. During regression we can get failures that need fixing, otherwise we cannot promote the release to production, and we need to fix them as soon as possible so the client doesn't find them first. So Friday can become a constant battle: estimating new tickets, and finding tickets to take out of the sprint to make room for the regression failures, which then need hot-fixing to Pre-Production.

You said during the course that regression testing should happen within the sprint. That makes sense, but I can't get my head round how it works with the developers' time. If we have no regression failures, they are sitting there doing nothing, as they can't start new work once everything has been released. And if we have too many regression failures, we have the same problem of adding tickets to the next sprint, just on a Monday instead. I did think we could move all the meetings (Sprint Planning, Retrospective, Review) to the Friday, but then the testers can't attend because they are regression testing, and as soon as they find failures we need the developers.

Jamie: Keen to know how people tackle this. For us, the strategy is greater automation and optimisation of the regression testing, so it has less impact on the sprint. And if you can effectively run the regression tests each time something is 'done', you also minimise the risk of finding large, disruptive issues at the end of the sprint.
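As a rough sketch of what Jamie's "automation" might look like in practice, here is a minimal regression check in Python. The `apply_discount` function and the recorded cases are hypothetical stand-ins for a real system under test; the point is only that each check is cheap enough to run on every commit, so failures surface within the sprint rather than on the Friday after it.

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical system under test: price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Each regression case pins down previously accepted behaviour:
# (inputs, expected output) recorded when the feature was signed off.
REGRESSION_CASES = [
    ((100.0, 0), 100.0),    # no discount
    ((100.0, 25), 75.0),    # simple percentage
    ((19.99, 10), 17.99),   # rounding behaviour must not drift
]

def run_regression() -> list:
    """Re-run every recorded case; an empty list means the suite passed."""
    failures = []
    for args, expected in REGRESSION_CASES:
        actual = apply_discount(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

if __name__ == "__main__":
    failed = run_regression()
    print("PASS" if not failed else f"FAIL: {failed}")
```

In a real codebase this would live in a test framework such as pytest and run in CI on every push, which is what shortens the feedback loop Jamie describes.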

David Hicks: I'd endorse your approach, Jamie. The key is automation, so that the feedback loop from regression testing is as short as possible. As far as the original question goes, it isn't clear whether the "testers" and "developers" mentioned are in the same team. Ideally they would be, in which case each Product Backlog Item (PBI) would not be declared "done" until regression testing has completed and all errors have been fixed. The PBIs should be sliced small enough to be completable within the Sprint, including regression testing (automation will help with this). While a developer is waiting for regression testing to complete, they should do the next most important thing they can. If the "testers" are in a separate team, then fixing the regression failures would become PBIs for the team of "developers". These should be ordered like any other PBI on the Product Backlog, and the work to fix them should be planned, either through Sprint Planning or Kanban-style ad hoc planning.

Tatum: Also, if developers are regularly waiting for tests to complete, that sounds like a good time to run one of your ceremonies, e.g. backlog refinement (grooming).

Mike: I may have misinterpreted the question, but it suggests to me that all of the regression testing is being done at the very end of the Sprint, so everything the Sprint delivered is regression tested in one hit. Could you break out the individual outputs so that each is regression tested as it is made ready, with the scope of the regression growing each time?
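Mike's suggestion can be sketched in a few lines. This is a toy illustration with hypothetical names, not anyone's actual pipeline: each finished item contributes its check to a growing suite, and the whole suite so far is re-run whenever a new item is declared done, so a breaking change is caught while it is at most one item old.

```python
from typing import Callable, List, Tuple

# The growing regression suite: (item name, check) pairs added as items finish.
suite: List[Tuple[str, Callable[[], bool]]] = []

def item_done(name: str, check: Callable[[], bool]) -> List[str]:
    """Register a newly finished item's check, then re-run the whole suite so far.

    Returns the names of any failing checks; an empty list means this
    increment is safe to promote.
    """
    suite.append((name, check))
    return [n for n, c in suite if not c()]

# Example: two items finish in turn. Finishing the second item
# re-runs the first item's check as well.
failures_1 = item_done("PBI-101 login", lambda: 2 + 2 == 4)
failures_2 = item_done("PBI-102 report", lambda: "ok".upper() == "OK")
```

The trade-off is that the suite run per increment grows over the Sprint, which is exactly why the automation Jamie and David describe matters: each individual run has to stay cheap.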