Quick recap of Scrum Australia 2016

I’ve been attending work conferences for over a decade, but they’ve all been web and design conferences – UX Australia, WebStock, Web Directions.

Scrum Australia was new for me, a conference not so much about what we do but how we work, lead and coach.

Steve Denning – author of several management and leadership books including The Leader’s Guide to Radical Management – spoke about organisational transformation and how several companies, including Microsoft, have made the journey over many years from a few teams adopting agile through to it becoming a top-down mandate.

Steve explained the difference between adopting Scrum practices versus transitioning to an agile mindset; a topic I also discussed in my previous blog post.

Bernd Schiffer from Bold Mover gave us a comprehensive description of what Scrum Masters can (or should) do and how important they are to the smooth and efficient running of Scrum teams. He argued that investing in a full-time Scrum Master is generally better for a team’s overall velocity and return on investment than having someone try to do their normal job while wearing the Scrum Master hat as needed.

Rowan Bunning used the differences between the scalable Scrum frameworks SAFe and LeSS to illustrate the essence and power of Scrum: SAFe suppresses that power by relegating Scrum to a mere low-level cog, whereas LeSS does a far better job of scaling it.

Charles Randles from Suncorp gave an amusing, even heretical, talk about the foolishness and wasted effort of typical management practices that pick the easy stuff like checking timesheets and attendance, optimising the wrong things instead of leading, motivating and creating a space where people can realise their potential.

Andrew Rusling covered the theory of constraints (as explained by Eliyahu M. Goldratt) and gave us 21 experiments that teams can try to improve their velocity: identify where constraints exist by visualising the work, then tweak parameters, starting with cheap fixes, moving on to aligning people, then more expensive fixes like improving tools and the environment, and only if absolutely necessary bringing in additional people.

Mia Horrigan (my boss at Zen Ex Machina) spoke on Confessions of a Scrum Mom. She shared her story as a parent of a soccer-playing kid and how her idea of coaching compared to that of real coaches. Mia drew a parallel with her professional agile coaching: she recognised that she was doing everything for her team as a Scrum Master, and instead needed to take some lessons from elite sports coaches and help her Scrum team grow rather than rely on her to fix everything.

Friday morning was allocated to “Open Space” sessions: self-organised presentations and discussions about various topics in all rooms and corners of the venue. I think at one point there were nearly a dozen concurrent sessions, ranging in size from five up to fifty or more participants.

Here are some of the artefacts of those sessions:

I think the big takeaway for me was being re-energised and more confident about the work that I do in advocating for Scrum and coaching Scrum teams. When you’re the only person in a room who really gets Scrum and everyone else is pushing back you can start to doubt yourself and the methodology.

I feel attending Scrum Australia will give me the momentum to stubbornly persevere with a renewed passion for what I do, and a few more techniques and success stories in my kit. Plus quite a few more books.

Reading list

Books given away by Scrum Australia 2016 sponsors elabor8, sadly not to me:

Focus on the stories, worry about format later

I often get asked by development teams new to Scrum what format to follow when writing user stories. I love their enthusiasm and desire to do it properly but instead of focusing on the format I instead steer them towards just telling good stories.

There are Scrum-specific maturity models, but in the context of user stories I’ll instead look to HCD maturity models, specifically Earthy’s Usability Maturity Model (UMM).

Level B of Earthy’s model (“Considered”) requires that organisations fully or largely meet the criteria that training is delivered so that staff are made aware that:

  • the needs of the end users of the system should be considered when developing or supporting the system
  • end users’ skills, background, and motivation may differ from developers or system support staff

As the executive summary states, progressing to this level of maturity is “a major cultural change”.

When it comes to immature teams writing user stories I don’t want them to get hung up on the format because it’s a distraction. They’re not ready to write what I would consider high-quality stories that describe the people the story is relevant to, what we want to help them accomplish and the nuances of the contexts of use … As a blogger, I want to create a draft post because I have an idea but don’t have time and just need to sketch it out so I can revisit it later.

Those nuances are very important for seeding creative thinking and novel solutions to real challenges that a more simplistic view might overlook. But it’s a stretch to expect teams new to user-centred agile to jump straight to that level.

If we focus on formats such as Connextra then I believe we risk teams settling for doing Scrum by the numbers instead of embracing the full benefit of Scrum or agile generally as Michael Sahota’s slide illustrates:

Doing agile practices does not equal having an agile mindset

In his excellent book Fifty Quick Ideas to Improve Your User Stories, Gojko Adzic implores development teams not to worry about the format and instead focus on conversations, collaboration and exploration rather than simply replacing functional requirements with story cards.

Personally, I quite like the Connextra format but teams who are new to Scrum and often new to user-centred design really struggle with it.
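For reference, the Connextra template reads “As a ⟨role⟩, I want ⟨goal⟩ so that ⟨benefit⟩”. As a purely illustrative sketch (the helper function and example story below are hypothetical, not from any real backlog), here’s how the three parts fit together, and why the role and benefit are where the user-centred nuance lives:

```python
# Illustrative sketch of the three-part Connextra user story template.
# The helper and the example values are hypothetical, purely to show the format.

def connextra_story(role: str, goal: str, benefit: str) -> str:
    """Assemble a user story in the Connextra format."""
    return f"As a {role}, I want {goal} so that {benefit}."

# A richer story names a specific person and their context,
# not just "a user" wrapping a functional requirement.
story = connextra_story(
    role="blogger with a spare five minutes",
    goal="to save a rough draft of a post idea",
    benefit="I can flesh it out later when I have time",
)
print(story)
```

The point isn’t the template itself: a story that fills in “user” for the role and restates the feature as the benefit satisfies the format while telling us nothing.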

Teams unfamiliar with user-centred design and agile should take Steve Blank’s advice and GOOB (Get Out Of the Building) before they bother trying to identify user archetypes in their stories.

As for the all-important third part of the Connextra format, I wouldn’t expect teams below UMM Level C (“Implemented”) to be able to accurately capture the reasons why users might want to perform a certain task or be shown certain information.

I’m happy enough when new teams simply challenge their own assumptions in meetings about how “real people” might benefit or even suffer from the team’s decisions.

I’m delighted when teams start thinking “outside-in” as Dan Olsen describes it in The Lean Product Playbook. It is a major cultural change and I want to see teams explore and adjust to the user-centric agile paradigm without getting bogged down in syntax.

If all your story cards are 3-5 words that’s fine. As long as I’m seeing evidence that you’re thinking about users, thinking about journeys and contexts and usability then we’re all good.

For now.

Don’t get comfortable because we’ve still got a few rungs to climb.

Iterative delivery done well, and poorly

The iterative design and development of a product or service is a concept that seems really hard for many people to grasp.

As a product manager and designated product owner within Scrum teams I’ve learned you can’t just explain it once, you can’t just structure the product backlog to be iterative and expect people to adopt the mindset. It takes a lot of coaching, persuasion and leading by example with new agile teams.

Well-executed iterations in an agile project target specific user journeys, workflows, audience groups or roles.

Rather than building half a thing that kinda works for most people you build part of a thing for specific people so that it works completely from end to end.

Well-orchestrated sprints and user stories plan to iteratively build parts of the product from a user perspective rather than an implementation model perspective.

User stories shouldn’t really care about which screens or pages features and content are implemented on.

A single user story might require modifying several pages or screens so that someone with a specific need or objective can complete a task or locate information spanning several pages and interactions.

I don’t mind developers breaking up stories into discrete actionable tasks to help them deliver it, as long as everyone is committed to delivering value to end users. It doesn’t have to be pretty but it has to be useful and help us learn whether the user story captured the requirement and whether we need to add, modify or remove something else to meet the essence of the story.

Here’s a literal application of Henrik Kniberg’s famous Not Like This, Like This comic to a screen-based software application:

Illustration of delivering value incrementally the bad way page by page versus user story centric

Hopefully it illustrates how poorly-planned incremental design and delivery may not deliver any value until several poorly-written user stories are completed. The correct way to iterate means that a single user story can be tested with the identified user, audience or role for a specific scenario, task or information need.

By a “poor user story” I’m referring to the typical practice of using the Connextra user story format as a container to wrap around a good ol’ functional requirement, for example As a user I want to add a new record to the database.

I don’t do lorem ipsum filler text. I don’t do broken links that don’t go anywhere. If I sit a user in front of a release of an iteratively-developed product they should be able to read all the text, click on all the links and validate our design or disprove our assumptions.

This is why high-fidelity visual concepts and designs can make agile harder: they constrain exploration and learning, they encourage implementation-model approaches rather than user-centric ones, and they make it harder for product owners to keep the team focused on stories from the ground up when the team is working from end-point designs.

This also works for service design but my specialisation is in web and mobile apps so I’ll tag one of my colleagues to explore good iterative design approaches for services.

Facts, theories and information in work groups

Designers and researchers sometimes come across as the least knowledgeable people in the room, who cannot talk authoritatively about anything.

While others will assert “facts” and provide unequivocal answers, the designers and researchers who have been trained to actually work with qualitative and quantitative data will present their conclusions, models and hypotheses with confidence intervals, sample sizes and testing methodologies.

They use the concept of a theory correctly to describe something actually supported by empirical evidence while others interpret ‘theory’ as an off-the-cuff idea with no merit.

Designers will want to test and validate ideas, researchers will want to increase the reliability of their data, while others just ‘know’.

I’m not an academic and I didn’t formally study statistics or research but I am careful not to speak in absolutes. I prefer to talk about ‘assumptions’ rather than facts even though people often have an adverse reaction to that word.

Then there’s the famous phrase that designers are renowned for, “It depends” – there are even t-shirts with it.

I don’t want to bring everyone across to the dark side of research and statistics and I don’t want to insist on people using the correct terms. It would be nice to establish some ground rules about knowledge and information so that a mixed group of researchers, designers, marketing and business people can have a conversation on equal footing.

Such ground rules might prevent the friction caused by one group allegedly knowing things to be true versus those who have a better understanding of information, facts and evidence; I don’t want to start quoting Kant, delve into epistemology, logic or critical thinking.

Do you have any feedback or suggestions of how a group of people might establish such ground rules?

For example:

  • Assumptions are ok, they’re like placeholders that we’ll firm up later if required
  • There are no facts
  • A theory can be better supported by data than a well-known ‘fact’
  • Avoid asserting absolutes

Technical debt, a bad analogy

Technical debt is like sleeping all weekend to recover from working too hard during the week.

When you work overtime, you accrue debt that has to be paid back later … in this case, through sleep.

Nope.

This analogy implies that agile is about producing inferior work in a rush and then going offline to fix up your mistakes and poorly-planned architecture later on. It also implies that it’s okay to work at an unsustainable pace.

The concept of technical debt was invented by Ward Cunningham, who also referred to it as accumulated sludge that needs to be removed.

I like this definition of technical debt:

Technical debt includes those internal things that you choose not to do now, but which will impede future development if left undone.

It’s a conscious decision made by the team to defer work for very deliberate reasons.

Technical debt should be minimised because it’s often better to do things properly the first time rather than revisit it later. Overall it typically takes longer to deliver the product if design and development effort is deferred to a later sprint.

But I don’t think teams should shy away from technical debt and avoid it like the plague. It can be a useful tool for staying focused on what’s important, delivering iterations of the product for learning and feedback before investing the time to make it production-ready.

Future development work as per the above definition can include work done by another team a year or more from now. It’s expected that other developers can work on your code and database later, extend it, build on it, upgrade it.

If you delivered a valuable product that performs well, meets business and user expectations and is reliable and defect free … but other developers can’t work with it because it’s a mess of hard-coded variables, hacked plugins and undocumented database design then you didn’t pay down your technical debt.

As Jeff Atwood says in his 2009 blog post Paying down your technical debt:

I believe that accruing technical debt is unavoidable on any real software project. Sure, you refactor as you go, and incorporate improvements when you can – but it’s impossible to predict exactly how those key decisions you made early on in the project are going to play out.

Shipping a product with technical debt and walking away is not necessarily your fault if you made clear the trade-off: increased velocity and getting functionality out the door to users and clients for testing.

It’s up to the Product Owner to allocate time in sprints for dealing with technical debt just as they do with design spikes, bug fixes and deployment activities.

However, it is important to differentiate between actual technical debt, which Atlassian defines as “the difference between what was promised and what was actually delivered”, and our preference to do a really nice job above and beyond what is adequate.

I think in some cases we spend more time than is necessary on user stories to do them right, losing focus on agile as a source of learning and feedback opportunities and falling back into a mindset of functional requirements that we only get one shot at. Acceptance criteria and the Definition of Done tell us where to draw the line.

Developers should be encouraged and empowered to be proud of what they deliver and not feel nervous about how long it’ll hang together for or dread an email a year from now from someone who is trying to work on their code.

Agile is not about being obsessed with doing the least amount of effort for the most amount of value. It’s not about cutting corners or shipping inferior quality products.

Agile does expect that we won’t know what we’re going to build and how best to build it until it’s finished. It does expect that we plan just for today and revisit those plans once we know more.

Also remember that Scrum is not exclusively applied to web and software development projects and technical debt is not just about code and software architecture.

My issue with high-fidelity visual concept images

In this 12-minute video I try to explain why concepts describe a grab bag of various iterations of a bunch of user stories that rarely describes what the outcome will (or should) be at any one point in time, and why their use as a premature and ill-informed contract with clients about what will be delivered only hurts the client, the team and users.

I found this diagram on LinkedIn showing that proceeding blindly from Point A to Point B even if it’s what the client asked for will likely result in disappointment:

Scrum versus traditional projects showing Scrum zig-zagging in towards the optimal solution

My belief is that regardless of whether a Scrum approach is followed, the delivery of high-fidelity concepts at the start of the project before the build has begun constrains how much the team can zig-zag to home in on the optimal solution … thus negating the benefits of agile.

Transcript of video

So I want to share some thoughts about concepts, and what I’m talking about is the sorts of files you might do in Photoshop or Illustrator. High-fidelity, full colour … what a web site or any sort of product really should look like. And my specialty is in UX design for digital so my thing mainly is websites and mobile apps.

And I want to kind of share some thoughts on concepts; other people may know them as … they could call them mockups, although I tend to think of mockups as being sort of low-fi sketchy wireframes. Or comps is another one that’s often used. But I’m just going to use the term concepts.

So just to give you an illustration a concept is typically a static image. So it’ll often have fixed dimensions, fixed ratio, usually portrait. Let’s just chuck some images in there, and some sort of footer at the bottom.

Ok so that’s the concept image. And typically the client sees this and they go “Yeah I like that but can we just swap this image out for another image”. “Yeah I like that but can we just make this a bit smaller”. And typically you might go through a few iterations with the client and then this is the design spec; this is what we build.

And there’s different variations, different tools people use, but apart from people who go straight to code using tools like FramerJS or Axure RP and outputting as HTML, regardless of the tool we typically still talk about concept images and visual design. High-fidelity visual design as a deliverable.

Now, in an agile methodology, which is what most progressive companies work with these days, the problem with this [points to drawing] is that it defines an increment of an entire screen. Ok? So let’s say we take … each of these elements. Let’s say we’ll take the mast … I’ll just get a different colour here.

So let’s just draw our increment, we’ll say this is “Increment X”. In that, we’ll have an increment of the mast, increment of … and you’ll see what I’m drawing in a second … increment of the hero image and the tagline, increment of some benefits tiles. Increment of the call to action … some case studies. Alright?

And let’s put these on a timeline. Now I’m just going to link these through. It’s going to get a little messy so bear with me.

So we’ve got the mast, and hero image. Call to action … whoops! Doesn’t really matter.

So that’s an increment, and the concept encapsulates that one increment of all those elements.

Ok, so we take that and we decompose it, or whatever you like to call the process of converting this concept back to user stories. And so, in most cases, if you are following a proper emergent backlog approach, the first increment of the home screen would not be all these elements looking like this.

So we’re going to take it back a bit.

So, Increment X. We’re just going to get into algebra here, but let’s take it back and say, alright you’re going to go with a first increment of all these things. And this first increment would go into the backlog as user stories.

So, we’ll build those sequentially, build the first increment of the hero image and whatever purpose that’s supposed to serve for users or the business … your product benefits, your call to action, case studies etc; your first increment. Ok? Great.

Ah, now the problem is … we assume that the next increment, or Increment X, is going to be a fixed number of steps or sprints after that first increment. So we’ll start with the basic and we assume there’ll be like an advanced increment or the half-way step or a more mature version of that. But we know where we want to get to; we’ve already decided that this is where we want to get to.

This increment, X.

Now, for some of them it might be … if you wanted to follow that approach it might be one step to get to that increment, two steps to that one, one for that, three for that, one for that. And so if you were to draw a proper timeline … draw this again … Ok, so our first increments … so we might say “There’s two steps to that one”, three steps to that increment, two to that, three for that one, and two for that one.

So these ones require more work. The product owner in negotiation with business and the team decides those iterations need further work before they can be released or whatever the benefit is, is more important than other features in the backlog. Ok? And that’s fine.

And they should have the power to make those decisions. Well certainly the Product Owner does; that’s their role.

And so they’ve decided to take it a step further with that one. So maybe what was here … is actually here. And once we’ve tested it, reviewed it we’ve decided, actually, what was designed … needs more work. It’s ok but actually we’re going to change how this works; especially when we’re talking about things like responsive design for example.

We don’t like … well, probably the graphic designer didn’t come up with a responsive version; They may have done a mobile version, a desktop version, but what about all the different breakpoints in-between?

You know, if we treat all these things in isolation we come up with a responsive design for the mast separately from a responsive design for the hero image. And so the breakpoints for this could be 600 pixels, for the mast it could be 400 pixels. It just depends on the content, and the imagery.

So the thing with the concept is that it’s already decided where we have to get to, what Increment X is going to be, regardless of how many steps it takes to reach that end point that’s been pre-decided.

So let’s look at what we’ve done here. So in this version the Product Owner has decided that the actual home screen is going to look more like this.

So in this scenario … we won’t get into too much detail here. Maybe we’ve decided that the hero image (and this is really superficial detail, this is not the detail we’d get to in user stories). Sorry, this is not the sort of superficiality I’d get to with user stories, but let’s just say that we’ve decided to make a full-width hero image with overlaid title.

Ok so that’s what we decided in the end, and we’ve already deviated from the concept and the concept now no longer forms a contract with the client because we’ve decided that this is actually a superior product.

Ok, so let’s just say we have the concept. Let’s just encapsulate all these user stories into what we have decided is the product.

This is … The Product. That’s the final increment that was decided by the concept.

And then we back that up and we come up with our first increment. And again we’ll just lump it all together as a single product, and we’ve decided … that’s where we’re going to get to.

Start here at sprint one, and by sprint five … this is where we get to. The problem is, it’s not that simple, because again, the Product Owner making value-based decisions about which user stories to prioritise; and, you know, if we’re doing it properly collecting feedback from stakeholders and users and we change the course of the product as we go.

So instead of being a linear direction to what we originally envisaged we allow the product to evolve and emerge based on new data and feedback. So, maybe it goes this direction or that direction.

And it probably won’t stop there because there’s only so much the designer can envisage. They’ve got a very limited canvas to work with; it’s a static image. It’s based on limited information at the very start of the project.

And so the product probably goes further, and it could go this direction, or this direction and so on.

Obviously not following all these branches at once but you know, there could be parallel prototyping work or multiple teams, but most likely, instead of going this way it goes that way, and then that way, that way. And THIS is the product.

And if evidence-based decision making is used then this product is superior to this product that was envisaged by the team back before the first sprint.

So that’s why I don’t like concept images because they’re portrayed as a contract with the client that says “This is what we’re going to deliver”. And we don’t want to hamstring ourselves into delivering this. When we should be delivering this.

Officeworks as example of poor responsive design

And when I say “poor responsive design” I mean it’s not responsive. Basically a desktop website with a drastically cut-down mobile version, like we used to do years ago.

Before I get stuck into them, I have to state that I like Officeworks – in fact I shopped there twice today – even though they tend to do some silly things.

Let’s look at a search run from a desktop computer:

Officeworks website on a desktop computer

Here’s a list of most of the features, options and attributes available to desktop users:

  • Thumbnails of products
  • Title of products
  • Short description of products
  • Star rating
  • Number of customer reviews
  • Link to read customer reviews
  • Best match search result
  • Price
  • Quantity to add to cart
  • Add to cart button
  • How long delivery would take
  • In-store availability for selected store
  • Availability for Click & Collect
  • Check other stores for availability
  • Filter by category
  • Filter by price, upper and lower bounds
  • Layout filtering
  • Brand filtering
  • Colour filtering
  • FSC certification filtering
  • Paper size filtering
  • Paper weight filtering
  • Num rings filtering
  • Add to list button
  • Switch views from thumbnails grid to list
  • Change from 24 to 100 results per page
  • More info link

And here’s the list of features, options and attributes available to mobile device users:

  • Thumbnails of products
  • Title of products
  • Price

If it’s good enough for desktop users, why not mobile users?

Officeworks website on a mobile device

But we tested it with users!

Image with text showing a small part of something through a cardboard tube that turns out to be an angry bear, as an illustration of how design, user testing and stakeholder feedback can still lead to a shitty product no one wants

I had an alternative sketch of a bear behind bars with the caption “Do you like this bear?” and then the bear without the bars and the caption “Great, so we can release it then” or something along those lines.

I don’t mean to pick on the Australian Bureau of Statistics but they happened to be running a home page beta evaluation at the time, and this is the survey they developed to elicit feedback from users:

Australian Bureau of Statistics beta home page survey

When you have to throw out your backlog and start again

From Agile Product Management With Scrum (2010) by Roman Pichler:

A requirements specification dressed up as a product backlog is like the devil in disguise: It looks neat, pretty, and perfect. It is tempting because it appeals to our old desire to know all the requirements up front. But it has a hidden dark side.

A product backlog that is too detailed and too comprehensive does not support the emergence of requirements. It does not view requirements as fluid and transient but rather as fixed and definite; it freezes all decisions about how customer needs can be satisfied at an early point in time.

A requirements specification disguised as a product backlog is likely to be a symptom of an unhealthy relationship between the product owner and the team. If you encounter such a backlog, see if a product vision is available. If it is, derive a new product backlog from the vision and discard the disguised requirements spec.

You can, of course, choose to plod along, wrestle with the backlog, extract themes, rewrite items as user stories, and struggle to prioritise the backlog.

But it is unlikely to maximise your chances of launching a winning product.

MVP is a misunderstood concept

The Minimum Viable Product (MVP) is a widely misunderstood concept that has resulted in Agile methodologies earning a reputation for delivering products that are uninspired, boring, shallow and that lack the features of more competitive and mature products.

When Eric Ries coined the term Minimum Viable Product he framed it as a learning experience, to test an idea with early adopters. Unfortunately the term is rarely used with that intention and often refers to a collection of features that represent the minimum a development team can get away with shipping.

At what point during construction of a house does it change from being an unusable structure to being a Minimum Viable Product?

Does your assessment of whether a partially-built house is an MVP change if you look at it from the perspective of the builder, the architect, the owner or the future tenant?

Each person in this scenario has different motivations and attitudes; for example the builder might be most concerned with regulations and codes, the owner with cost and the tenant with comfort and liveability.

Would you be happy to move into a house as soon as it reached the lockup stage? Or once power and water are connected? Carpet and flooring?

Many products don’t mature far past the MVP stage. They’re shipped as soon as a core set of essential features have been implemented.

The problem with defining an MVP in terms of features is that it undermines the user-centred approach that the Agile Manifesto promotes.

It is the market that determines whether the product is valuable. Users will assess whether a product is suitable for their needs and demonstrate whether it enables them to accomplish tasks and meet their objectives.

Products that aren’t deemed valuable or useful by consumers are not viable.

This is why I prefer terms like Minimum Desirable Product or Minimum Viable Experience where the emphasis is put firmly back on the people who want to use your product.

The development team cannot define the MVP in the absence of user research, beta testing and data. There must be a feedback loop, a sensitivity to the market and letting that needle swing left and right on the dial before you draw a line in the sand and say “This is our MVP”.

If you’re developing something in a new, uncontested market then it may not take much development effort to settle on a definition for your MVP. If you were building a competitor to Facebook, it might take more than two years of effort to design something suitably competitive; otherwise the market would dismiss it as inferior.

The MVP is a great way to invest only the amount of time and money into a product that will determine whether an idea will sink or swim, and if it isn’t viable then go back to the drawing board. Rip it all down and start again if you have to … but better to do that a few weeks or a few months in than a few years.

If an MVP does pass the market viability test then keep going! Don’t stand still. Stay competitive, prioritise by value, iterate, stay humble, focus on the people, challenge assumptions, be wary of competitors and trends and keep it lean.

Just remember to address your technical debt and don’t keep bolting features onto your MVP. Stay focused on and committed to people over technology, evaluate and re-evaluate against the stories and personae, prioritise by value, refactor the code, refactor the user interface, challenge your assumptions and remember to step back from time to time to ask hard questions about vision, viability and the market.

Diagram depicting sprawling features bolted onto an inadequate core minimum viable product

Also, don’t follow LinkedIn’s example. I don’t even know where to begin with trying to explain their approach to product management.