We know that the most useful features of an application are often the ones you only think of once you start using it. We take advantage of this by delivering release-quality software early and often. We plan ahead to meet business commitments, but we don’t bog down in detail that may turn out to be unnecessary. Our number one deliverable is always working software, and we deliver it on a regular, unmissable two-week schedule.
We want you to be constantly involved, and we’ll always be honest with you about how things are going. We’ll demonstrate new features to you as we finish them. We’ll use our wealth of experience to suggest new and better ways of doing things, while always leaving you in control.
We have the discipline to maintain a structured, effective Agile process that maximises flexibility and quality while still allowing you to plan ahead. We’re always improving our process, and we’d love to work with you to develop a world-class project.
User Stories Workshop
We know you’re full of ideas and excitement. So are we. So are the graphic designers, the interaction design experts, the marketing group and the test team.
Let’s get them all into one room and talk about what we’re building.
This is the User Stories Workshop. In advance, we’ll have worked with you to determine the types of people who will use your new site. Each ‘User Role’ will have a representative at the workshop – someone from within that team in your company, or an analyst for external roles. Usually the entire development team is present, along with a Scrum Master acting as a facilitator.
At the workshop we’ll brainstorm everything the new site should do. We’ll write these down in the form of User Stories, often taking the form “As a …, I want to …, so that …”. For example, “As an administrator, I want to download page views as a CSV file, so that I can graph them in Excel”. These describe what we’re building, why we’re building it (perhaps we can find a better way?), and who we’re building it for (so we know who to talk to). The stories are not a specification; they’re just a placeholder for a conversation we’ll have later.
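As an illustration, a story card is just three short answers. The sketch below (the class and names are our own invention, not part of any framework) shows how little structure a story really needs:

```ruby
# Hypothetical sketch: a user story card as a simple value object.
UserStory = Struct.new(:role, :want, :benefit) do
  def to_s
    "As #{role}, I want to #{want}, so that #{benefit}."
  end
end

story = UserStory.new("an administrator",
                      "download page views as a CSV file",
                      "I can graph them in Excel")
puts story
# As an administrator, I want to download page views as a CSV file, so that I can graph them in Excel.
```

The point is the brevity: everything else about the feature stays in the conversation, not on the card.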
By this point, we’ll have dozens of features we’d love to build eventually, but it’s time to decide where we’re going to start. After the workshop, the development team will write an estimate on each of the story cards, using ‘Planning Poker’.
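In Planning Poker, everyone reveals an estimate card simultaneously; when the estimates diverge, the outliers explain their reasoning and the team votes again. A minimal sketch of one round (the deck values and the adjacency rule for consensus are illustrative assumptions, not a fixed standard):

```ruby
# Hypothetical sketch of one Planning Poker round.
DECK = [1, 2, 3, 5, 8, 13, 20, 40].freeze  # story points, roughly Fibonacci

# Consensus when the highest and lowest votes are adjacent cards on the
# deck; otherwise the outliers explain themselves and the team re-votes.
def consensus?(votes)
  lo, hi = votes.minmax
  (DECK.index(hi) - DECK.index(lo)) <= 1
end

consensus?([5, 5, 8])   # => true  (adjacent cards: close enough)
consensus?([2, 13, 5])  # => false (discuss and re-vote)
```

The simultaneous reveal is what matters: it stops the loudest voice in the room from anchoring everyone else’s estimate.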
By now, we hope that you’ve selected a Product Owner – a single person who is knowledgeable about the product we’re building together, and empowered to make any decisions about it. The Product Owner’s responsiveness is critical to the team’s performance, and ideally we’ll sit with them for at least some of each work day.
We understand that, even in the volatile world of software development, and even with significant uncertainties about exactly what we’re building, you need at least a rough idea of when you’ll be able to launch. At the Release Planning meeting, we’ll present the estimates and invite you to place the story cards in order and draw a line where you’d like your first public release.
We’ll revisit the Release Plan throughout the project as your priorities change or the work proceeds more slowly or quickly than estimated. We use risk multipliers so that you can make a firm release commitment.
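The arithmetic behind a risk multiplier is simple: take the raw schedule implied by the estimates and the team’s velocity, then pad it according to how firm the commitment needs to be. The multipliers below are illustrative examples only, not our actual figures:

```ruby
# Illustrative sketch: pad the raw schedule with a risk multiplier before
# committing to a date. Multiplier values here are made-up examples.
RISK_MULTIPLIERS = { "50%" => 1.0, "90%" => 1.8 }  # chance of hitting the date

def committed_iterations(remaining_points, velocity, confidence)
  raw = (remaining_points / velocity.to_f).ceil
  (raw * RISK_MULTIPLIERS.fetch(confidence)).ceil
end

committed_iterations(60, 10, "50%")  # => 6 iterations, at even odds
committed_iterations(60, 10, "90%")  # => 11 iterations, for a firm commitment
```

A date you’re 50% likely to hit is a coin toss; the multiplier is the price of turning it into a promise.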
While agile development lets us stay responsive to changing business goals, changing too much disrupts programming effort. We work in two-week ‘iterations’, which are long enough to be efficient but short enough that we can fully specify the work to be done in each one. We already have a prioritised list of user stories from the release plan, so we work down the list, making sure the whole team understands exactly what needs to be done. This is the first point where we explore all the corners of the story, right down to technical tasks like “resize user’s uploaded photo”. We estimate the time needed for each task to the nearest hour.
This can take several hours, but it means that everybody is clear on what we’re doing and – more importantly – how we’re doing it. We break into pairs for design decisions, or involve the whole team for particularly tricky ones. Because we’ve listed every task that needs to be completed, the team can commit to delivering the work by the end of the two-week period.
To determine how much work to plan, we use the team’s velocity – a measure of how many story points we’re delivering per iteration. This is different for each project. Re-used code or occasional flashes of brilliance speed the team up; delays in communicating with stakeholders, technical problems or difficult constraints can slow us down. We use a moving average, but in the simplest terms, the answer to “How much can we do this iteration?” is “How much did we do last iteration?”
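The moving-average calculation is the whole trick. A sketch (the three-iteration window is an assumption; any small window works):

```ruby
# Sketch: velocity as a moving average of recent iterations' delivered points.
def velocity(delivered_points, window: 3)
  recent = delivered_points.last(window)
  recent.sum / recent.size.to_f
end

history = [18, 22, 17, 21]  # story points delivered, oldest first
velocity(history)           # => 20.0, the average of the last three iterations
velocity([21])              # => 21.0: with one iteration done, last time's number
```

Averaging over a few iterations smooths out the lucky fortnights and the unlucky ones, so the plan tracks the team’s sustainable pace rather than its best day.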
We normally maintain both the Release Plan and the Iteration Plans on large wall displays in our office; we find it’s by far the most effective way to keep the information in front of everyone and always up-to-date. When collaborating with offsite teams on large projects, however, we’re happy to use Google Docs, Mingle, or your company’s existing project management system.
With a fortnight’s work planned in detail, it’s time to write some code. Our technical practices include:
A daily ten-minute ‘stand up’ meeting at 10am sharp. The product owner is welcome to attend.
The ‘burn down’ chart on the wall, showing our progress, any hold ups, and how much ‘slack’ remains in the iteration.
Paired or ‘mobbed’ whiteboard design. Every team member knows how each piece of the site works, and how it all fits together.
Pair programming with rotating pairs. Two sets of eyes on the code improve quality without sacrificing speed. We peer-review all non-trivial code that wasn’t paired.
A ‘wall of wireframes’: we print the designs for the entire site, post them on the wall, then annotate them with post-it notes.
Extensive automated testing, with continuous integration. For particularly complex business logic, we use test-driven development.
Open coding. We use open-source code libraries where appropriate; we choose only maintained, high-quality code which is a good match for our project. We contribute code back to the community where permitted by licensing agreements.
“Done-done” testing. There can be quite a large gap between a programmer saying “it’s done” and a real-life user using the feature in production. We close the gap by testing each completed feature with the Product Owner as we go; it’s not done until the Product Owner says it is. This tight feedback loop avoids the time-consuming change requests and email chains found in other projects.
Continuous refactoring and immediate bug fixing. If we do find bugs, we don’t stash them on a forgotten bug tracker somewhere; we kill them immediately. Allowing code quality to deteriorate is a false economy, so we refactor as we go.
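For the test-driven development mentioned above, the rhythm is: write a failing test that pins down the behaviour, then write the simplest code that passes it. A sketch using Minitest from Ruby’s standard library (the bulk-discount rule is a made-up example, not a real client’s logic):

```ruby
# Sketch of the test-first loop we use for complex business logic.
require "minitest/autorun"

# Written first: the tests pin down the behaviour we want.
class BulkDiscountTest < Minitest::Test
  def test_ten_percent_off_orders_of_ten_or_more
    assert_equal 90.0, discounted_total(unit_price: 10.0, quantity: 10)
  end

  def test_no_discount_below_ten_items
    assert_equal 90.0, discounted_total(unit_price: 10.0, quantity: 9)
  end
end

# Written second: the simplest code that makes the tests pass.
def discounted_total(unit_price:, quantity:)
  total = unit_price * quantity
  quantity >= 10 ? total * 0.9 : total
end
```

Note the two tests deliberately expect the same total from different inputs – the boundary at ten items is exactly where a discount bug would hide, so that’s where the tests look.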
It’s important that we meet our goals, and it’s important that there are no surprises for anyone at the end of the project. Every time we finish an iteration we gather the team, the Product Owner, and any interested stakeholders for a brief demonstration of the finished work. We’ll work through the stories one at a time, demonstrating the acceptance criteria that we agreed on at the start of the iteration, and providing an opportunity for feedback from stakeholders who weren’t involved in the day-to-day development. Any changes or new ideas feed into the product backlog, or the next iteration.
While the Scrum process serves us well, we are constantly adapting it to the challenges of each project. After the demo, we hold a retrospective: a structured meeting designed to produce a few definite, testable changes to our process. While the retrospective is often quite technical, we invite the Product Owner to join us; often the improvements are in communication and planning, rather than code.
After the retrospective, we plan the next iteration. We usually use iterations of constant length, starting on the same day of the week, giving the project a steady heartbeat.
After a few iterations, it’s time to prepare the site for launch. We’ll plan a ‘release sprint’ for performance tuning, user-acceptance testing, final content from the copywriters, and any last-minute tweaks. This is about last-minute changes rather than bug fixes, though; through done-done testing and crisp iterations we avoid having a large and inestimable amount of ‘fix up’ work at the end of the project.
We’ll have asked you to make a decision about hosting early on, so that everything’s set up well before the deadline. We’ll work with your preferred web host, or recommend one of our own. We’ve deployed sites to EngineYard, Brightbox, Slicehost, and high-availability clusters at Claranet. We work with change-control processes for two of our larger clients, and we’re happy to meet your requirements for documentation. If the worst happens, our deployment system allows for rapid rollbacks, including rolling back data migrations.