Let’s start with a disclosure: I am a strong proponent of agile software development.
I lean toward the disciplines of Extreme Programming rather than Scrum, but debating the merits of different approaches is not the purpose of this post.
Rather, I am wondering what people think actually matters when it comes to being able to say they “do” agile in the first place.
My partner and I built the brand for my first business on agile principles and used it to compete effectively against companies much larger than our own for the greater part of eight years.
Because I was oriented toward XP, our approach initially had an XP flair, and we gradually blended other disciplines where we found they worked.
The company promotes the same message now, seven years after I moved on, though I am sure its culture, style and market promises have evolved in the meantime.
I was recently interviewed for a study by a large Midwest university on various topics in agile software development. One theme stuck with me (and has nagged at me for many years): the suggestion that there is something it really takes to “do” agile, as if that should matter to me on its own merits.
I can remember loyal XP advocates saying, “If you don’t pair program, you’re not doing XP!” Scrum folks had their own criteria for whether you were really doing Scrum, and I expect other approaches had theirs too.
I thought those conversations had largely faded. Back then the question was which specific methodology camp you belonged to, but from the university interview it began to sound like the ultimate categorical question is now whether you do agile at all… as if doing agile is inherently good and not doing it is inherently bad.
Rather than worrying about whether I was officially “doing” XP, Scrum or any other approach… or even agile at all… I was more concerned with shipping quality software that the market valued highly, as quickly as possible.
A development team or larger department adopts a way of working that includes various practices we can debate, standardize and improve. Whether we settle on TDD, continuous integration and retrospectives or on some other blend of disciplines, the choice should rest on what works and what promises the greatest throughput of value, consistently and repeatedly.
I don’t even want to drum the conversation back up, but this recent interview got me wondering: do you still see those debates, and has your assessment of whether they matter changed?