Category Archives: Lean

What Agile Teams Learn from Toyota about Definition of Ready


Photo: Toyota Manufacturing UK – Assembly, Flickr

Getting stories to “ready” is crazily important for Agile teams. My recent field trip to Toyota made me think more broadly about what ready means.

By “ready” I’m referring to having stories articulated ahead of a sprint commencing. I usually advise software teams to include the following in their “Definition of Ready” (their checklist of what needs to be done before a story can be commenced):

  • Clear story statement
  • Articulated acceptance criteria
  • Reference to a process map (if required)
  • Wireframe (if required)
  • The team understands the story
  • It has the Product Owner thumbs up
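The checklist above can be sketched as a simple gate that runs before a story is pulled into sprint planning. This is a minimal illustrative sketch, not any team's real tooling; the `Story` fields and the `unmet_ready_criteria` helper are my own hypothetical names.

```python
# Hypothetical sketch of a Definition of Ready gate.
# The Story structure and field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Story:
    statement: str = ""
    acceptance_criteria: list = field(default_factory=list)
    needs_process_map: bool = False   # "if required" items from the checklist
    process_map: str = ""
    needs_wireframe: bool = False
    wireframe: str = ""
    team_understands: bool = False
    po_approved: bool = False

def unmet_ready_criteria(story: Story) -> list:
    """Return the Definition of Ready items this story still fails."""
    unmet = []
    if not story.statement:
        unmet.append("clear story statement")
    if not story.acceptance_criteria:
        unmet.append("articulated acceptance criteria")
    if story.needs_process_map and not story.process_map:
        unmet.append("reference to a process map")
    if story.needs_wireframe and not story.wireframe:
        unmet.append("wireframe")
    if not story.team_understands:
        unmet.append("team understands the story")
    if not story.po_approved:
        unmet.append("Product Owner thumbs up")
    return unmet

story = Story(statement="As a buyer I want to filter listings by price",
              acceptance_criteria=["filter persists across pages"],
              team_understands=True, po_approved=False)
print(unmet_ready_criteria(story))  # ['Product Owner thumbs up']
```

The point of the sketch is that "ready" is a conjunction of checks: a story with even one unmet item stays out of the sprint.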

The Toyota equivalent of ready is a self-driving trolley that arrives at the operator’s station just in time to supply, for example, the wheels for the next vehicle on the assembly line. The passage of self-driving trolleys around the factory floor, playing individual music to alert the operator of approach, is in and of itself ridiculously impressive.

It is Toyota’s lean approach to managing the whole supply chain though, with offsite manufacturers producing, packaging and dispatching the various wheels in the correct order, that takes ready to a whole other level.

What can Agile teams learn from this?

Remember that you are part of a system – seek to include your stakeholders in getting to ready

Ask yourself, who in Legal, Product, Marketing, Learning or Architecture needs to understand, or be involved in defining the story before it is considered ready?

Value your suppliers – they are critical to your system

Toyota understands that their success depends on their suppliers. They provide clarity of expectations.

Influence your ecosystem – educate as required

Do all your stakeholders or suppliers understand their role in Agile product development? If not, find a way to explain it to them and support them as they come up to speed.

Listen to your Developers – they understand ready better than anyone

At Toyota, if wheels don’t arrive at the right moment, fitting them cannot be completed within the takt time. Our takt time is a sprint. We should have no lesser sense of urgency than Toyota’s 90 second takt time. Ask Developers ahead of a sprint, “Do you have everything you need to complete this story?”, and listen to their response.

Make your team’s model “ready, set, flow”

Emphasise the connection between ready and overall sprint flow. Encourage Product Owners and Business Analysts to make the workflow of getting stories to ready visible on an Agile wall. Get them talking about these efforts at standup, so everyone in the team feels empowered to flag obstacles to ready.

Agile Maturity Checks – Use with Caution!

Agile maturity check

Just saying it like it is. My favourite Agile maturity measure is the one in the picture above! I like to invite a team starting out in Agile to create a histogram of how they as individuals feel about their comfort level with Agile. I know that Agile maturity checks are loved by management, but I think they’re less useful to teams.

At work I put some time and thought into developing a maturity model that spans Lean system thinking, Lean Startup, Agile and DevOps practices. I modelled it on Spotify’s team health check, which I do feel is really useful. This maturity check was only moderately useful. I suspect that if you do use it, you’ll make an early maturity team feel a bit depressed and inadequate.

My thinking behind this model was to set the bar of early maturity where a non-Agile team taking their first steps might find themselves. My scale ranges all the way up to mature, which is good practice and, I hope, not too aspirational.

It might be most useful to use components of it, if a team discloses a particular area they want to address. The next step might be to use a retro to generate ideas about how they could get from “not yet” to “on the way”.

Take a look at it and use with caution!

Standups
  • Not yet: We don’t run standups.
  • On the way: We run standups sometimes. Standups tend to run over the 15 minute mark. We can’t always see the point of standups.
  • Mature: We run standups every day and stick to the questions: what we did yesterday, what we will do today, and what obstacles we need to resolve. Standups never exceed 15 minutes.

Team rhythm
  • Not yet: As a team member I’m either too busy or not invited to attend all the Agile ceremonies – standup, estimation, sprint planning, showcase and retro.
  • On the way: I mostly attend those ceremonies that I’m invited to – standup, estimation, sprint planning, showcase and retro.
  • Mature: As a team member, I attend and contribute to all Agile ceremonies – standup, estimation, sprint planning, showcase and retro.

Process
  • Not yet: Our way of working is painful. Process gets in the way of us getting stuff done.
  • On the way: We have a way of working that is sometimes easy and sometimes hard. It would be great if our process fit us better.
  • Mature: Our way of working fits us perfectly. We have just enough process to get stuff done easily.

Speed
  • Not yet: We never seem to get done with anything. We keep getting stuck or interrupted. Stories keep getting stuck on dependencies.
  • On the way: We get through stories, but there are frequent blockers or process glitches.
  • Mature: We get stuff done really quickly. No waiting, no delays.

Requirements
  • Not yet: Our Agile stories are place-keepers for the work we do. We use BluePrint to articulate our requirements.
  • On the way: We are using Agile user stories, but our stories are sometimes technical tasks derived from specifications. Our stories are not consistently small, estimable and independent.
  • Mature: Our user stories are not derived from a formal documentation process but are captured as business value. They are always small, estimable and independent. We always include acceptance and test criteria.

Analysis
  • Not yet: Time pressure usually prevents us from having our stories articulated before bringing them into a sprint. We do this during the sprint.
  • On the way: Our aim is to get stories ready for a sprint, but we accept that sometimes the work just needs to be done, so articulation of stories takes place during a sprint.
  • Mature: We have a clear “Definition of Ready”, so we never bring stories into a sprint unless they are fully articulated, including a story statement, acceptance criteria and test criteria.

Sizing
  • Not yet: We aren’t yet sizing our work before bringing it into a sprint.
  • On the way: Several team members get together for sizing. We mostly have stories estimated before we bring them into a sprint.
  • Mature: We have a regular rhythm of story sizing and involve the entire team in this process.

Velocity
  • Not yet: We use the stories in a sprint as a guideline or focus for our work. We never complete the stories we plan to complete in the sprint.
  • On the way: We sometimes finish stories (including testing) in the sprint.
  • Mature: As a team we always finish all the stories (including testing) we commit to in a sprint.

Sprint planning
  • Not yet: We aren’t yet conducting regular sprint planning.
  • On the way: Several team members get together for sprint planning. Sprint planning usually happens, but we usually carry over stories not completed from the previous sprint.
  • Mature: We have a regular rhythm of sprint planning that involves the entire team. We all commit to the work in the upcoming sprint and are confident we can complete it.

Agile wall
  • Not yet: We don’t yet have an Agile story wall.
  • On the way: We have an Agile story wall, but it doesn’t necessarily reflect what we are working on, or what is in JIRA. Our Agile story wall is not hugely relevant for us in standups or our daily work.
  • Mature: Our Agile story wall is our key source of truth about progress in a sprint. Anyone looking at it can see exactly what we are working on and where we are up to. It matches what is in JIRA. We use it both in standups and in our daily work.

Showcasing
  • Not yet: We aren’t yet showcasing.
  • On the way: We showcase when we have something tangible to show stakeholders. This can be anywhere between every second and every fourth sprint.
  • Mature: We have working software to showcase after every sprint. We invite key stakeholders to this showcase and our developers run through what has been built.

Retrospectives & Learning
  • Not yet: We aren’t yet running retrospectives. We never have time to learn anything. We are too busy for retros and cross-team showcases.
  • On the way: Time permitting, we run retrospectives. This doesn’t necessarily happen after every sprint.
  • Mature: We’re learning lots of interesting stuff all the time. We run retros every sprint. We initiate and participate in cross-team showcases.

User experience support
  • Not yet: We are struggling to understand what role User Experience Designers play in defining our product and where they fit into the lifecycle.
  • On the way: We sometimes tap into the skills of User Experience Designers, but often time pressure prevents us from working closely with them.
  • Mature: We have complete clarity on the role of User Experience Designers in our product development. We integrate their work into stories across all product builds.

User experience integration
  • Not yet: We often find we don’t have the time or resources to integrate qualitative (through UX testing) or quantitative (through integration of analytics) customer learnings.
  • On the way: We integrate some qualitative and quantitative customer learnings into what we build, but we rarely have an opportunity to run another iteration to further improve our product.
  • Mature: We integrate qualitative customer learnings (through UX testing) and quantitative customer learnings (through integration of analytics) into every feature/initiative we build and deploy.

Testing
  • Not yet: Testing is owned by the QA/Tester. Functional, non-functional and integration testing is performed at the end of the lifecycle, not necessarily within a sprint.
  • On the way: Testing is shared by QA and Dev. Non-functional and integration testing undergo complete inspection tests. Functional testing is integrated with the build.
  • Mature: Testing is owned by QA, Dev and BA. Non-functional and integration testing undergo complete inspection tests. Functional testing is integrated with the build. Developers practice Test Driven Development (TDD).

Build
  • Not yet: Build is performed manually – custom or repeatable, but still manual. Build is performed infrequently. One member of the team owns the build.
  • On the way: Build is repeatable and automated, but doesn’t happen with maximum frequency.
  • Mature: Build is the responsibility of the team. Functional testing tools (Watir, Selenium, etc.) are integrated as gatekeeper events to the build. Integration tests run against external tools and products.

Releasing
  • Not yet: Releases are extremely difficult. Our preference is to release at the end of the project.
  • On the way: Releasing is tricky, so we aggregate as much work as possible before releasing.
  • Mature: We have a regular rhythm for releasing. It is fast and painless.

Collaboration & Communication
  • Not yet: We consistently find it difficult to get answers to our questions, or to resolve issues in a timely way.
  • On the way: It is sometimes difficult to get someone in our team to help us out, or resolve a problem when we need it.
  • Mature: If we need somebody in the team’s opinion or assistance, we can freely ask that person either at standup or throughout the day and expect a prompt response.

Empowerment – ways of working
  • Not yet: It’s the job of leadership to suggest how we can work better.
  • On the way: It’s sometimes difficult to get traction on improved ways of working, but we give it a go.
  • Mature: We feel empowered to suggest how we can work better. Our suggestions translate to real improvements in the way we work.

Empowerment – product
  • Not yet: We don’t have any understanding of how customers are responding to the product, so we can’t calibrate it.
  • On the way: We sometimes hear feedback about our customers’ responses to our product, but incrementing the product in response to this only sometimes happens.
  • Mature: Our regular cycle is to respond to learnings from customers. They shape our backlog.

Contribution
  • Not yet: We complete our work, but it’s not always clear how the tasks we are performing are critical to the success of the team.
  • On the way: We know that the tasks we are performing are critical to the success of the team, but it doesn’t always feel that way.
  • Mature: Every day we are at work, we know how our work contributes to delivering a great product. We own creating a great product.

Fun & Celebrating Success
  • Not yet: Boooooooring. We do not celebrate successes.
  • On the way: There are some smiles about, but celebrating success isn’t something we really do as a team.
  • Mature: We have great fun working together. We find ways to celebrate our success.
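If a team does pick components of the rubric to work on, the three levels can be tallied into a retro conversation starter. A minimal sketch under my own assumptions: the numeric scoring and the `focus_areas` helper are illustrative, not part of the model itself.

```python
# Hypothetical sketch: turn a team's self-assessment against the
# three-level scale into a shortlist of retro topics.
LEVELS = {"not yet": 0, "on the way": 1, "mature": 2}

def focus_areas(assessment: dict, threshold: int = 1) -> list:
    """Return dimensions at or below the threshold level, lowest first —
    candidates for a retro conversation, not a management scorecard."""
    scored = {dim: LEVELS[level] for dim, level in assessment.items()}
    return sorted((d for d, s in scored.items() if s <= threshold),
                  key=lambda d: scored[d])

team = {"Standups": "mature", "Sizing": "on the way",
        "Releasing": "not yet", "Testing": "on the way"}
print(focus_areas(team))  # ['Releasing', 'Sizing', 'Testing']
```

A next step, as suggested above, is a retro generating ideas for moving one "not yet" dimension to "on the way" rather than chasing the whole grid at once.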

What we can learn from our Lean cousins at Alcoa

Photo: aluminium tin can

A few weeks ago I visited the Alcoa rolled products facility at Point Henry. The facility has been successful in producing high quality product and keeping market share in rolled aluminium of the sort used in the blanks for drink and food cans. A year ago though, predicting that it would be unable to keep pace with mega production facilities rapidly coming online in China, the US parent company decided to close the facility down.

I was expecting my tour to be a somewhat depressing experience, given that we were just seven weeks away from the shutdown. This couldn’t have been further from the truth. I was introduced to a bunch of highly motivated, highly skilled individuals, who not only knew their stuff about Lean, but who had developed a unique and effective approach to quality – one that us folk in the Agile world can learn from.

Alcoa introduced Lean at this plant in 2006. It was a top-down, non-negotiable introduction of the methodology. Lean experts from Toyota were brought in to provide consulting advice, but it was the effort of those on the ground, who worked to transform culture and develop brilliant quality methodologies, that led to the successful outcome.

The cultural change focused on:

  • Developing self-organising teams
  • Getting close to customers to understand quality
  • Reducing waste
  • Encouraging everyone to voice great ideas

It was in this environment of openness to great ideas that the safety team, and then the quality team, embedded an understanding that there are three ways that quality failures occur:

1. Automatic (or skill based)

When we complete a task which we have done many times and with which we are highly familiar, we are effectively on autopilot. If someone were to ask you which red lights you stopped at on your way home from work, you’d most likely be unable to recall, because stopping at red lights is part of your automatic mindset. The quoted figure for individuals failing an automatic task is 1 in 1000.

2. Rule based

When we complete tasks based on a fixed rule, we mostly adhere to the rule. When driving home from work in the early hours of the morning, the rules tell us that we should stop at a stop sign and not treat it as a give way sign. However, the likelihood of an individual always adhering to this rule is lower than for automatic functions. The quoted figure for individuals failing to adhere to rule-based tasks is 1 in 100.

3. Unfamiliar (or knowledge based)

Where an individual is embarking on a new and unfamiliar task, the likelihood of error is obviously much higher. Driving on the other side of the road when travelling overseas is an example of a familiar task suddenly becoming unfamiliar. Coming back to work after a break and finding that something, such as the code base, has changed is another time we are likely to make errors. The likelihood of quality failures in this situation is very high, estimated at between 1 in 2 and 1 in 10.

The teams at Alcoa specifically respond to this by flagging during standup whether the tasks they are embarking on that day are automatic, rule based, or unfamiliar. Not only do they flag this, but the whole team actively identifies the likely error traps and discusses mitigations for them.
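The quoted rates above make the value of this standup flagging easy to quantify. A rough back-of-the-envelope sketch, using the source's 1-in-1000 and 1-in-100 figures and, pessimistically, the worse end of the 1-in-2 to 1-in-10 range; the helper name is my own:

```python
# Quoted per-task error rates from the Alcoa model. The 'unfamiliar'
# figure takes the worse end of the quoted 1-in-2 to 1-in-10 range.
ERROR_RATES = {"automatic": 1 / 1000, "rule based": 1 / 100, "unfamiliar": 1 / 2}

def expected_errors(tasks: list) -> float:
    """Sum of per-task error probabilities for a day's flagged tasks."""
    return sum(ERROR_RATES[mode] for mode in tasks)

# Ten routine tasks vs. the same day with one unflagged unfamiliar task:
routine_day = ["automatic"] * 10
tricky_day = routine_day + ["unfamiliar"]
print(round(expected_errors(routine_day), 3))  # 0.01
print(round(expected_errors(tricky_day), 2))   # 0.51
```

A single unfamiliar task dominates the expected error count for the whole day, which is exactly why it deserves the team's attention at standup.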

This struck me as an incredibly useful model for Agile teams eager to improve their quality. Before embarking on development tasks, the whole team can put their heads together to ask a bunch of questions. Are there any tasks we are embarking on that could adversely affect our code base or architecture? Will what we are doing today impact others in the organisation? What level of preparatory work might be required to reduce defects? Is what we are doing going to generate an unreasonable amount of waste? Have we taken into account any lack of knowledge of the individuals about to embark on this task? Do we have all the information we need to commence this task? What are the likely error traps? How can we mitigate these? I’ve included a screenshot of the Alcoa pre-task checklist.

A bunch of other impressive quality measures also jumped out at me from the visit to Alcoa. During the transformation to Lean, the local implementers immediately recognised resistance to change as the key blocker to success. However, as people came on board with the new way of working, which involved clarity of roles, a huge degree of cross-skilling and the encouragement of continuous improvement, what also emerged were individuals who showed the initiative to lead the cultural change as “Champions”. Leveraging off the initiative of these individuals, quality champions became embedded in every team. It wasn’t their job to take sole responsibility for quality, but to keep abreast of best practice, teach the team, and nurture the initiatives and activities of those in the team. It’s true that in Agile we expect everyone to take responsibility for quality in every phase and every build, but utilising the passion of someone who is a quality champion can help embed ownership of this within a team.

What impressed me most at Alcoa was the understanding that every Operator had about the customer who was buying their product. The guys knew where the rolled aluminium would be used, for what purpose and most importantly what level of defect would be tolerated in that product. In discussion with the Operators, I was reminded of the principle of trade-off sliders often used in Agile inceptions. Achieving a high level of quality is going to require a high level of input. In Agile inceptions and beyond, we aim to reach agreement on a time versus output scale. Perhaps we should also be asking what level of quality can we deliver for the time and cost? Do we need to discover more about our customer, to decide on the level of tolerable defects? Are our developers informed about who our customers are, or is it just the product people who understand this?

When Eric Ries in The Lean Startup talks about early adopters tolerating a higher degree of inconvenience given the often rudimentary nature of first implementations, should we too be asking questions about just how slick, defect free and seamless every implementation needs to be? Perhaps we should be asking – where does this drop sit in the implementation lifecycle, or how does return on investment fit with increasing our defect detection and remediation?

Being a typical Lean environment, there’s heaps more measurement at Alcoa than in the average Agile environment. Every morning, representatives across the entire production process meet in the “8:30 room”. It is of course named “the 8:30 room” because at 8:30am representatives from each team come together after their individual team standups, for a Scrum of Scrums. The 8:30 room is a glassed-in area on the factory floor, which is home to the most massive and the most effective information radiator I have ever seen. Casting my eye over it, I could immediately see where specific teams were up to that day in terms of production, risks and issues. The focus of this communication is to validate dependencies and risks across the whole production process for that specific day. Targets versus actual output are discussed, as are quality and safety issues. Most importantly, with representatives from each team present, blockers can be immediately resolved.

Scrum of Scrums is a practice I’ve seen implemented across various workplaces, but with varying degrees of success. It needs rules and a rhythm, just as standups do. The Scrums of Scrums I’ve been involved in could have benefited from the introduction of more measurement to validate where blockers were affecting other teams. Keeping a greater focus on interdependencies and quality traps is something I’ll be much more likely to do after seeing this in action at Alcoa.

Finally to some cultural quality initiatives at Alcoa that really made my heart sing. I’ve previously blogged about the step in the product lifecycle at Realestate.com.au which is simply called “Thinking”. At Alcoa, “Bright Ideas” are an encouraged part of the quality lifecycle. The fun bit is that each week, the person who has suggested the best bright idea gets to spin a wheel-of-fortune-style wheel with various dollar amounts on it. The winner also gets to nominate the charity to which the money will be donated. Some of these bright ideas go on to become Kaizen, or continuous improvement, initiatives, while others are acknowledged as great ideas, but with lesser business value.

In the same way that flagging potential improvements is acknowledged, staff are also rewarded for “pulling the help chain”. An equivalent of Toyota’s Andon cord, the help chain is the signal to cease production. It’s a fundamental safeguard for protecting output quality. At Alcoa, waste is easily quantifiable: waste has to be re-smelted and go through the whole production process again. Once rolled aluminium is colour coated, re-smelting is even more expensive, as it has to go to a specific smelter that can manage the toxic output of burning off the colour coating.

In Agile we value quality at every stage of our production too. We acknowledge that quality at every step produces a quality end product. How could we do better on this, though, by learning from our Lean cousins not just to identify waste, but to quantify it? If we implement an interface that hasn’t had the eye of a User Experience Designer cast over it, and the interface is not fit for purpose, the cost of this in the development lifecycle can easily be quantified. How much better would we do if we quantified the cost in our projects of not pulling the help chain when we spot code that will potentially break the code base, activities that will introduce technical debt, or implementations that will adversely affect other teams?

Alcoa pre-task briefing checklist