Agile Development

Your constantly-updated definition of Agile Development and collection of videos and articles.

What is Agile Development?

Agile development is an iterative software-development approach in which self-organized, cross-functional teams frequently reassess circumstances and user needs and adapt their projects accordingly. Scrum teams, which work in short sprints with near-term deliverables and continuously improve quality, show Agile development in action.

Video transcript
  1. 00:00:00 --> 00:00:34

    In order to really understand Agile,  it's important to know what Agile *isn't*. Agile didn't just appear from a vacuum; it was a  reaction to the way software was being written in the '80s and '90s. Back then, most of us did  something called Waterfall. I mean, Waterfall was really the best-case scenario because sometimes  there was absolutely no process at all. But when there *was* a process, it was often Waterfall. And even Waterfall had a lot of different versions, but we're going to look at the most optimistic  one that actually... included some design.

  2. 00:00:34 --> 00:01:03

    When you look at Waterfall, it makes some  sense. Somebody, probably a product manager, comes up with an idea for what  needs to be built and presents some requirements. These could take a lot of different  forms, but frequently they were in something called a product requirements document, or a PRD, or  sometimes a marketing requirements document (MRD). These were frequently *extremely long* documents  with a *lot of detail* about what a product should do. In the best-case scenario, that detail was drawn  from research and an understanding of the needs of 

  3. 00:01:03 --> 00:01:34

    both the user and the business ... and in reality, maybe not quite so much. Once the product managers were done, then we entered the design phase. Now, design was often its own little sort of mini Waterfall, but again, in the best cases, it was a good, solid user-centered design process – one that you're hopefully familiar with. It involved lots of user research to understand users in their context and lots of ideating and iterating and  prototype testing and all of that good stuff. In the worst cases, well, we drew a lot of pictures  of things that would never get built, to be honest.

  4. 00:01:34 --> 00:02:01

    The thing is that with these long cycles, the  requirements gathering and design process could last *a while* – almost always months, sometimes a  year; it depended on how big the product was. And a lot of times this was all before a single  line of code got written. That's because when you look at Waterfall, each little drop-off  is really what's called a *staged gate*. What that means is that after the process happened, the  requirements document or the design or whatever,

  5. 00:02:01 --> 00:02:32

    there would be a review process of the output  before it moved through the gate to the next step. Development couldn't happen before design ended, because the design had to be fully vetted before resources were committed to building. After all, you wouldn't want to spend a lot of money writing code if everything was just going to change, right? Again, this all sounds really reasonable – everybody agrees what we're building before we start building it. Who could be against this? Well, we're not done yet! Since requirements gathering was done *before* the design process, often things would show up in the design phase that invalidated  something from the requirements doc.

  6. 00:02:32 --> 00:03:02

    Maybe something changed in the six months that  product management took to write up 300 pages. Maybe we learned new information in  user research... who can say? Things change – which meant going back to redo the requirement ... and then back again once the requirements were fixed. Also, since design worked *independently* of  engineering most of the time, even after the design phase was "over", it was not  unusual for engineers to send the "finished" designs back with a nasty note saying, "This is a  pipe dream and will take us 400 years to build." or

  7. 00:03:02 --> 00:03:33

    something like that. That's always what it sounded like to me, anyway. The same thing happened with the quality assurance team at the testing step, except they would just file bugs for the engineers to fix. Anyway, what all of this means is that the diagram often looked more like this. And it took a really, *really* long time. In a lot of cases, there wasn't any way to get around this process since at the end of the waterfall, we'd be printing a bunch of CD-ROMs or even putting things directly into hardware, so there was really no going back and fixing stuff later like you can with internet-connected software.

  8. 00:03:33 --> 00:04:03

    The real problem, though, lay  in the *complete separation of the departments*. Engineering often didn't get much input into  the process until requirements were already set, despite the fact that requirements could be  drastically affected by engineering decisions and requirements sometimes changed, even from  the time that the 300-page document was written and approved to when the engineers  finished working on it. In other words, market forces could change or we could learn  things from the users or the engineers would find some really hard problem that couldn't be solved,  and then *everything* had to be changed

  9. 00:04:03 --> 00:04:31

    because it was incredibly hard to change one thing in that  300-page document without everything cascading ... because of *the waterfall* – get it? If something on page 28 changed, it meant that everything from page 29 to 300 *also* needed to change, and that was what we liked to call an enormous nightmare. Let's not even talk about feature creep, which happened  when people from step one realized that they'd forgotten a whole bunch of stuff and tried to  squeeze it in around step three or four, causing

  10. 00:04:31 --> 00:05:03

    the requirements to balloon out of control, and  all of the deadlines – which were ridiculous in the first place – would just get missed. Anyway, is there really any wonder why the engineers of the time might be interested in something a little bit more flexible – something where maybe every single decision didn't get made up front and then changed later; something where we *admit that we don't know everything*, so we're not going to bother trying to specify everything out to the last bit and byte, and we're going to build  some stuff and get *feedback and adjust as we go*. Throw in the fact that around this time websites  and web applications were really starting to take off;

  11. 00:05:03 --> 00:05:36

    and it was getting a lot easier to  get feedback directly from customers and make *continuous changes* rather than having  to chisel everything into stone before packaging it all up and sending it to the store  to sit on shelves – which is how people used to buy software; I mean, not the stone part, but the store part. Obviously, this wasn't everybody's experience of Waterfall. Between you and me, a lot of really good stuff has been built Waterfall-fashion – it's not a horrible system. In fact, most physical products and large civic projects like bridges and skyscrapers are built in a *mostly* Waterfall fashion.

  12. 00:05:36 --> 00:05:48

    But it's also not surprising that a lot of high-level engineers were not huge fans of it and they'd come up with something that was significantly less frustrating for them. And it's *really* not surprising that it would catch on.

“If your goal is to deliver a product that meets a known and unchanging specification, then try a repeatable process. However, if your goal is to deliver a valuable product to a customer within some targeted boundaries, when change and deadlines are significant factors, then reliable Agile processes work better.”

— Jim Highsmith, Software engineer, in his book “Agile Project Management: Creating Innovative Products”


Agile Development—An Antidote to Inflexibility

By the mid-1990s, many software professionals had become frustrated with development processes that micro-managed them and overlooked their needs. Iterative development methods already existed, but as software grew increasingly sophisticated, these developers needed something more dynamic and wanted more freedom to shape their projects. So, they began to shape a streamlined approach that let them stay flexible and respond to design challenges as these emerged. Appropriately, they called it “Agile”. It was their answer to Waterfall project management—the traditional, linear model where teams follow a rigid path of development throughout the product lifecycle.

Around 2000, the software market was transforming from store-bought boxes to downloads, and the rules of engagement were changing with it. In 2001, 17 professionals from backgrounds including adaptive software development and extreme programming met in Utah to work out an alternative approach. This group—later known as the Agile Alliance—wrote the Manifesto for Agile Software Development. Alongside its twelve principles, the manifesto declares four values that sit at the heart of Agile development. They value:

  • Individuals and interactions over processes and tools;

  • Working software over comprehensive documentation;

  • Customer collaboration over contract negotiation; and

  • Responding to change over following a plan.

Agile development’s authors stated that transparency and iteration are essential: clients should remain closely involved throughout projects, which stay “alive” through team cooperation. Although the Waterfall approach has positive aspects, teams find themselves restricted because each development stage and its effects flow directly into the next, and it’s hard for everyone to keep a clear, day-to-day view of the big picture. Moreover, team members tend to stay siloed and blindly pass deliverables on to other teams (a habit called “throwing over the wall”).

By contrast, Agile teams work iteratively and collaboratively, under time-boxed conditions, to produce the right deliverables on tight schedules. Teams stay highly focused on delivering short-term, smaller goals (“chunks”) in sprints that run anywhere from one to four weeks. In daily scrum meetings, internal stakeholders account for the previous day’s actions, stay updated about needs and refine plans. When a sprint ends, teams log their progress and review the development process.

Agile Development and User Experience (UX) Design—Take Care with Two Cultures

Development and UX personnel differ in many ways, especially in how they picture users’ needs. For an organization to apply Agile development successfully, management must create the right environment so that UX, development and other stakeholders can collaborate openly and keep a constant dialogue throughout a project. Another vital consideration is how organizations define UX roles—particularly how much responsibility and influence they expect, and allow, UX team members to have in projects. The Nielsen Norman Group has defined four key principles for organizations to get the most from an Agile–UX cultural combination:

  1. Management must understand and support UX work.

  2. UX professionals should show leadership and take time to reach out to colleagues.

  3. Agile workflows must be flexible enough to accommodate the needs of UX personnel.

  4. UX personnel should be part of the product teams, building respect and rapport with developers.

To support UX work, management must provide sufficient resources for UX research. A business analyst or a solution architect who works closely with UX personnel can help translate ideas to developers. Above all, design should lead development in addressing the right issues in sprints. When UX professionals help teams work with the client’s and users’ requirements in mind, everyone can produce the right deliverables.

Learn More about Agile Development

You can learn more about integrating Agile with UX in these courses:

Agile Methods for UX Design.

UX Management Strategy and Tactics.

This is a revealing piece about UX and Agile practices.

See some powerful considerations for Agile development concerning UX.

Here’s an incisive account about the UX-Agile dynamic, showcasing tool use.

Read the Nielsen-Norman Group’s advice on handling Agile with UX.

What is scrum in agile?

Scrum is a framework within agile methodology that organizes work into short, fixed-length iterations called sprints, typically lasting 2–4 weeks. Teams plan, design, develop, test, and review within each sprint. Key roles include the Scrum Master, Product Owner, and Development Team. Artifacts like the product backlog and sprint backlog, along with events such as daily stand-ups and sprint reviews, drive progress. Scrum promotes collaboration, adaptability, and quick feedback.
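
For readers who think in code, here is a minimal, illustrative sketch of these moving parts in Python. The class names, fields, and example data are hypothetical, not part of any official Scrum definition:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class BacklogItem:
    """One unit of work pulled from the product backlog into a sprint."""
    title: str
    story_points: int
    done: bool = False

@dataclass
class Sprint:
    """A fixed-length iteration; Scrum sprints typically last 2-4 weeks."""
    start: date
    length_weeks: int = 2
    items: list = field(default_factory=list)

    @property
    def end(self) -> date:
        return self.start + timedelta(weeks=self.length_weeks)

    def remaining_points(self) -> int:
        # Roughly what a daily stand-up tracks: how much work is left.
        return sum(item.story_points for item in self.items if not item.done)

# Hypothetical sprint for a job-board product.
sprint = Sprint(start=date(2024, 1, 8), items=[
    BacklogItem("Search jobs by category", 5),
    BacklogItem("Save resume information", 3),
])
sprint.items[1].done = True
print(sprint.end, sprint.remaining_points())  # 2024-01-22 5
```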

What is an epic in agile?

In agile methodology, an epic represents a significant body of work that the team breaks down into smaller user stories for better manageability. It encapsulates a considerable enhancement or feature and typically spans multiple sprints or releases. Just as Google's design sprint process, highlighted in our video, streamlines the creation of designs, epics help teams categorize and prioritize large-scale activities in agile projects.

Video transcript
  1. 00:00:00 --> 00:00:32

    In this video, we'll teach you how to use  Google's design sprint process to create great design faster. In an age of tight resources, companies  are more reluctant than ever to commit  to big design projects without a thorough  understanding of their chances of success. Google has developed a method to make the design process fast and still offer valuable insight. The process developed by  Google Ventures is called a *design sprint*.

  2. 00:00:32 --> 00:01:00

    It focuses on getting insights into *critical  business questions* within a very short timeframe – just five days. The process is based on *design thinking*, so it attempts to gain those insights via *rapid design*, *prototype development* and *user testing*. The design sprint is a five-phase process. Each phase takes approximately one day or eight  hours to perform. So, a sprint can be done in five days. The five phases of Google's design sprint are *understand*,

  3. 00:01:00 --> 00:01:31

    *sketch*, *decide*, *prototype* and *validate*. The design sprint is a *linear process*, but you  are strongly encouraged to *make revisions* based on your first sprint and then *reiterate the prototype and validate phases*. You can also move further back to earlier phases and reiterate from there. In the following, we'll look at each step in more detail, but let's start by taking  a quick look at how to plan a design sprint. To ensure that the design sprint will be  successful, some *planning ahead* is required.

  4. 00:01:31 --> 00:02:04

    Before the sprint begins, here are some things you should consider. You should *write a brief* to state the sprint's goal and bring everyone on the same page. You might also need to collect or *conduct user research* to get insights that can inform the work you do during the sprint. *Consider who should be on the sprint team.* Google's sprint process is designed to be run by teams rather than individuals. That means getting everyone together and ensuring  that they're all aiming in the same direction. The ideal team would include representatives from  all relevant functions and at all levels in the organization.

  5. 00:02:04 --> 00:02:34

    You could also invite external people – for instance, a representative user or stakeholder. You need a *suitable space* for the sprint – typically somewhere bright and quiet, with lots of wall space and enough room for people to move about. You have to *gather supplies* for the sketching and prototyping phases. Typically, you need office supplies like Post-its, blank sheets of paper, color markers, tape, and so on. Finally, you should choose a good *icebreaker exercise* to kick off your design sprint.

  6. 00:02:34 --> 00:03:00

    Some members of your team might not have worked together before, so it's good to start off with an activity to warm people up. Now, let's take a look at each stage in the  execution of the actual sprint. In the understand session, your goal is to *create shared knowledge* about the business problem you're working on. You bring everyone together and unpack all of your  team's knowledge about the problem.

  7. 00:03:00 --> 00:03:32

    *Lightning talks*, where knowledge experts use 10 to 15 minutes to share their knowledge about the problem, are an important part of the understand session. Typical topics for a lightning talk are *business goals*, *insights from user research*, an *overview of competitive products*, *technical opportunities*, and so on. As part of the lightning talk, it can be a good idea to have a presentation by someone from senior management outlining why the problem you're working on is important to the business. Other typical activities of the understand phase are demonstrations of solutions that are already available,

  8. 00:03:32 --> 00:04:01

    a detailed walkthrough of any proposed  solution, creating user journeys and performing user research. In the understand session, it's also  a good idea to be *clear on the metrics of success*. And remember, your metrics should be  useful and not pulled out of thin air. When you do the understand session, it's  important to involve the whole team. Don't let an individual or group dominate the  proceedings. The idea is to ensure everyone is on the same page, and the only way to do that  is if everyone is heard from.

  9. 00:04:01 --> 00:04:30

    Once everyone is on the same page, it's time to split the team up and get them to start working on solutions. Sketch day is an *individual effort*. Everyone is tasked with coming up with a detailed solution to the problem. It's a good idea to do this on paper for two reasons. First, it's quick and it takes no time  to make changes. Second, everybody is able to sketch so they can participate even if they don't know any wireframing tools.

  10. 00:04:30 --> 00:05:05

    For particularly complex, large-scale problem solving, you might want to break up the problem into *manageable chunks* and assign people a chunk rather than the whole problem. The aim of sketch day is to get as many ideas down as possible. If your team is large and you generate a ton of ideas, you might want to allocate an hour at the end of the day to quickly *reduce the number of ideas* to a more manageable number before you go into the third day of the sprint. As you might expect, decide day is all about  *making a decision* about which idea

  11. 00:05:05 --> 00:05:33

    you're going to take to the prototype phase. But there's more to decide day than making a decision. It's also about working out how your solutions  might conflict with your objectives and abilities. You can start the day by quickly  listing any assumptions that you are making about things like budget, users, technology capacity and business drivers. Then it's time to review each idea and look at  the conflicts that it generates. You should have an objective in mind during your review.

  12. 00:05:33 --> 00:06:01

    Would you be looking to take a single great idea forward to prototyping? Or are you going to pick, say, a top  5 and take those forward? You should be looking to constantly *refine* your list and remove ideas that simply aren't feasible early in the process. The entire team takes part in the decision process  by participating in discussions and through voting for ideas. Once you have an idea or ideas you  want to prototype, the last part of decide day is to *create some storyboards for your ideas*.

  13. 00:06:01 --> 00:06:33

    The storyboard should show each interaction with the user in a step-by-step process, and they'll be your specification for your prototype. You might also want to define a *user story*  or two to help beef up the specification. As the title implies, on prototype day you have  a single day to *create a prototype* that your users can test on the final day. First of all, storyboard what you're going to build if you haven't already done that. To build a prototype, you can use any tool of your choice.

  14. 00:06:33 --> 00:07:00

    Just pick one that you master enough for  rapid prototyping. The important thing is, don't attempt to learn a new tool on this day. Just use *whatever you're most comfortable with*. Finally, remember to *leverage the whole team*. Assign tasks and get everyone building or helping with something. Prototyping isn't just for the engineers.  Get people to document, write, review, plan a user test – any activity that contributes towards your end goal.

  15. 00:07:00 --> 00:07:31

    On day 5, you *validate your idea*. The most important part of the validation is to bring in a group of your end users to test your prototype. It's important that the entire team gets to *observe the users interact with the product*, either directly or through watching recordings of your test. A *cognitive walkthrough* or *brief usability test* are great tools to use in this phase. Other good activities to validate your design are to bring in experts and management stakeholders to review your idea.

  16. 00:07:31 --> 00:08:00

    Everyone on the team should make  notes and record what they feel they've learned. You want to take these notes and  summarize them at the end of the day. This should help you decide if  anything needs iterating and improving. At the end of the final day, take some time with  the whole team to reflect on your experience. As Google puts it, there can be three possible  outcomes to a sprint: an *efficient failure* – perhaps your ideas didn't work so well, but you learned a lot in the process

  17. 00:08:00 --> 00:08:30

    and saved your team a lot of time from going down the wrong path; a *flawed success* – perhaps some ideas worked nicely, while others didn't: this gives you insights on what  can be improved and what you could work on next; finally, an *epic win* – the ideas your team had have  shown great promise and seem to work really well. You're ready to move into a more serious  implementation phase. Either way, you can only win! Whether it's by avoiding failure, learning where more work needs to be put in

  18. 00:08:30 --> 00:09:04

    or generating a killer design, you can only emerge from the design *wiser and more experienced*. The added bonus is that the time you've sacrificed  for this is relatively *short*. Though this is a tried-and-tested method by Google, it's also a relatively new concept adapted from Agile methods. It might take a few tries within your organization  to keep the sprint to five days. That's OK. You can work towards delivering faster sprints as  you get more practice. Google design sprints should help you take a process that currently takes months and make it lean and efficient. 

  19. 00:09:04 --> 00:09:31

    It's not a substitute for all design processes,  especially not for very complex products, but it's one that lets you ideate and test  ideas incredibly fast. A highly productive design team working in sprints is more likely to add business value and be recognized for their work within the larger organization. Finally, we encourage you to take a look at Google's Design Sprint Kit to get more ideas and tools for how to run your own design sprint.

While a design sprint focuses on rapid prototyping and validation (as in its "sketch" and "prototype" phases), an epic encompasses broader goals, guiding a team's trajectory over multiple cycles.
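
As a rough, hypothetical sketch (the names and data are invented for illustration), an epic can be modeled as a container of user stories whose scheduling spans several sprints:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    description: str
    sprint: int = 0  # 0 means not yet scheduled into a sprint

@dataclass
class Epic:
    """A large body of work, broken into smaller stories for manageability."""
    name: str
    stories: list = field(default_factory=list)

    def sprints_spanned(self):
        # An epic typically stretches across multiple sprints or releases.
        return sorted({s.sprint for s in self.stories if s.sprint})

epic = Epic("Job-applicant video profiles", stories=[
    UserStory("Record a short video introduction", sprint=4),
    UserStory("Attach the video to an application", sprint=4),
    UserStory("Let employers review video profiles", sprint=5),
])
print(epic.sprints_spanned())  # [4, 5] -- one epic, two sprints
```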

What is a sprint in agile?

In agile development, a sprint is a set period during which specific work must be completed and ready for review. Sprints typically last 2 to 4 weeks, allowing teams to break down larger projects into manageable chunks. 

Video transcript
  1. 00:00:00 --> 00:00:33

    OK, here's a scenario: You're a designer on a product that connects job hunters with great new careers. Your team is currently working on a new screen that will let applicants search by categories. Your job is to figure out what the  categories should be. Well, obviously the easiest way to figure this out is with some research. There are a bunch of different sorts of research that you could do; you could do *quantitative*, for example; you could do some *analysis* maybe of the search terms that people are already typing in

  2. 00:00:33 --> 00:01:00

    and see if there are any obvious categories; you could also do some *competitive research*, maybe check out some other sites and see if there are any common terms; or, of course, you could do the gold standard  and actually *observe some users searching for jobs*. But here's the problem: Those all *take time*. And the engineer who was assigned to the user story wants the answer from you so that they can finish  before the end of the *sprint*, which is on Friday. But you were just designing a story; how are *you* supposed to solve this problem?

  3. 00:01:00 --> 00:01:31

    It turns out this happens quite a lot on agile  teams and it can be *extremely frustrating*. The thing is, this is indicative of a very common  anti-pattern often called "feeding the beast". In this instance – in *all* instances – the engineers are the beast. There are a few ways to handle this. The first requires that stories be treated differently. Researchers and designers could work on their stories *ahead* of engineering, and by the time engineering gets them, the acceptance criteria are all set. In other words, you'd get this story a few weeks early, get to go do your various forms of research,

  4. 00:01:31 --> 00:02:02

    come up with a good solution, and then the story would go to the engineers. That's fine. And it's what happens on a lot of teams who are what we call "agile-ish". But it's not particularly cross-functional or collaborative. Another way of approaching it is to actually involve the engineer you're working with on making the decisions. For example, maybe they can help do some data analysis on the search terms while you do some competitive analysis on the competitors' types. You could both work together to observe some real users. Meanwhile, the engineer could do a bit of exploration to see if there are any technical reasons why certain types of categories would be

  5. 00:02:02 --> 00:02:31

    easier or harder to implement than others. You would be shocked at how often there are weird technical constraints that you do  not find out about until an engineer sees a story, which means it's really much better to get them involved as soon as possible. Of course, this may mean it takes you a little bit longer than a single sprint to get the whole story done; although, again, that can depend on how fast you are or how big the feature is or how long your sprints are. On the upside, you know that any design  will be technically feasible because you're involving the engineering team  in the decision-making process.

  6. 00:02:31 --> 00:03:00

    You also know that the engineer implementing the story  actually understands the user needs and goals. Depending on the design, it may drastically reduce  the number of deliverables you need to create. I mean, if you've already done co-design exercises  and sketched out ideas with somebody, you often don't also need a designer to create pixel-perfect mockups to show them exactly how something works. Will this kind of group research work in every single instance? No; sadly not. It depends on a lot of things, not least the willingness of the engineers to be involved in the research.

  7. 00:03:00 --> 00:03:30

    On some teams, engineers could get penalized if they spent too much time on a story. So, this sort of research would end up being bad for their careers. This is incredibly unfortunate, and if your team is like this, it's honestly worth trying to get it changed because the benefits of doing research together strongly outweigh the slight negatives of maybe moving a bit slower in some cases. In fact, often doing research and design  like this can end up making a team move faster in the long run simply because there's  less miscommunication among team members and less need for complex deliverables to be passed  around.

  8. 00:03:30 --> 00:04:03

    Of course, doing research together doesn't necessarily mean that everybody on a team does all research all together all the time; you're still doing larger, more open-ended studies that take a longer time. And if you've got one researcher and designer and several engineers and a product manager, you're not all going to go to all of the sessions together – that would be a little overwhelming. But be sure to *include* various members of the engineering and product teams and even marketing or sales or customer service or anybody else who has an impact on product decisions *when you're doing your research*. Not everybody will go to every research  session, but having people on your team

  9. 00:04:03 --> 00:04:31

    watch your customer experience first-hand can  really help your team be much more agile by giving everybody the context that they need to make  better decisions about whatever they're working on. Of course, this does tend to work best with smaller  teams, but then that's true whenever you're using agile methodologies. They're all designed around this idea of a truly cross-functional, collaborative, reasonably sized product team that is at least partially autonomous and can make actual decisions about what to build and how to self-organize.

  10. 00:04:31 --> 00:04:39

    If that doesn't describe your team, team research is going to be harder; although it's still not impossible, it just takes some work.

The video highlights one of the challenges of sprints: ensuring the team finalizes all research and decisions on time. For example, when designing a new feature like a search-category screen, decisions on categories must be made promptly to meet the sprint's deadline. This underscores the importance of collaborative, cross-functional teams in agile, where designers, engineers, and researchers work closely to make informed, timely decisions and ensure sprints complete smoothly. Many organizations overcome these challenges by running user experience sprints ahead of implementation sprints.

What is kanban in agile?

In agile methodology, Kanban is a visual tool that helps teams manage workflow effectively. As described in the video, a Kanban board typically consists of columns representing stages of work: things you're going to do, things you're currently doing, and jobs that are already done.

Video transcript
  1. 00:00:00 --> 00:00:32

    You're probably familiar with these;  they're a really good way to track progress on any sort of project, and there are tons of tools to help you build them, although – you know – plain old Post-it Notes and blue tape work just fine. Your Kanban board will have some columns. At the minimum, there will be columns for things you're *going to do*, things you're *currently doing* and things that are *already done*. Sometimes there will be columns for other things like *review* or *release* or something called an *ice box* for things that have been suggested but that aren't ready to be done yet. The idea here is that things move from column to column

  2. 00:00:32 --> 00:01:01

    as they move from planning to in-progress to finished. There are lots of other stages as well. The process set up by your team will determine what columns you use. Sometimes we have multiple projects going on at once. In that case, we might have something called *swim lanes*. These are horizontal lines across the board that let you separate out different projects. So, if your team was building both a website and an app, you might track progress for the website and the app separately but on the same Kanban board. Sometimes design research is tracked in its own swim lane. Again, it varies

  3. 00:01:01 --> 00:01:10

    depending on the needs of the team. Ideally, it allows everybody to see a quick overview of what's currently being done, what's going to be done and what's been finished.

This board can be as simple as Post-it Notes and blue tape or as sophisticated as a digital tool. The fundamental principle is moving tasks from one column to another as they progress. For teams handling multiple projects, Kanban boards may feature swim lanes – horizontal divisions that separate different projects, ensuring clarity and organization. Kanban offers a clear snapshot of work statuses, facilitating smoother project management and better team collaboration.
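
The column mechanics the video describes can be sketched in a few lines of Python; the column names and the task are illustrative only, since each team defines its own columns:

```python
# Columns hold task cards; cards move between columns as work progresses.
board = {"To Do": ["Design search page"], "Doing": [], "Done": []}

def move(task: str, src: str, dst: str) -> None:
    """Move a card from one column to another when its status changes."""
    board[src].remove(task)
    board[dst].append(task)

move("Design search page", "To Do", "Doing")   # work starts
move("Design search page", "Doing", "Done")    # work finishes
print(board)  # {'To Do': [], 'Doing': [], 'Done': ['Design search page']}
```

Swim lanes would simply be one such board per project, displayed together.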

What is a spike in agile?

In agile development, a spike is a time-boxed task to answer a specific question or address uncertainties. It involves research or prototyping to gain the knowledge needed for reducing risks or making informed decisions. Unlike regular user stories that produce shippable product increments, spikes generate knowledge. Once completed, teams can estimate, design, or prioritize better. Typically used in Scrum or Extreme Programming, a spike ensures the team doesn't commit to uncertain tasks, promoting effective planning and higher product quality.
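
As a small, hypothetical sketch (the fields and example question are invented), a spike can be modeled as a time-boxed question whose deliverable is knowledge rather than shippable code:

```python
from dataclasses import dataclass

@dataclass
class Spike:
    """A time-boxed research task; its output is findings, not an increment."""
    question: str
    timebox_days: int
    findings: str = ""  # recorded when the time box expires

spike = Spike(question="Can the search API return results in under 200 ms?",
              timebox_days=2)
# When the time box ends, the spike ends -- finished or not -- and the team
# records what it learned so it can estimate and prioritize with less risk.
spike.findings = "Yes, with caching; without it, typical latency was ~450 ms."
print(spike.findings)
```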

What is a user story in agile?

In Agile development, a user story is a tool used to capture a description of a software feature. In user-centered design (UCD), user stories are written in collaboration with designers who are familiar with users’ needs; outside of UCD, they represent the development team’s best guess. As highlighted in the video, user stories often follow a format like: "As a [type of user], I want [an action] so that [a benefit/value]." For instance, "As a job applicant, I want to save my resume information so I don't have to re-enter it every time I apply for a new job." This approach ensures that developers and designers understand the feature's requirements, the desired outcome, and its significance to the user.
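
As a tiny illustration, the template above can be rendered mechanically; the helper function is hypothetical:

```python
def user_story(user_type: str, action: str, benefit: str) -> str:
    """Render the common 'As a ..., I want ..., so that ...' story format."""
    return f"As a {user_type}, I want {action} so that {benefit}."

print(user_story(
    "job applicant",
    "to save my resume information",
    "I don't have to re-enter it every time I apply for a new job",
))
```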

Video transcript
  1. 00:00:00 --> 00:00:30

    User stories are the actual tasks that are written on the cards or sticky notes in your Kanban board. In Agile methodologies, these are often small  enough that they can be done in a matter of hours or sometimes a day or two. Many teams write them from the perspective of the user. So, you may end up seeing things in the form of: 'As a _____ I would like to _____ so that I can _____.' To fill in those blanks just a little bit,

  2. 00:00:30 --> 00:01:00

    'As a *user type* I would like *some goal or action*, so that *outcome*.' And an actually useful example would be something like: 'As a *job applicant* I would like to save my *resume information*, so *I don't have to re-enter it every time I apply for a new job*.' The idea here is that the engineer who is implementing this user story knows not just what the *description of the feature* is but *what the outcome is and why the user would want that outcome*. Sometimes the user stories are accompanied by various types of design deliverables.

  3. 00:01:00 --> 00:01:10

    Also, some teams don't phrase the user stories in exactly this way. You're going to have to adopt the patterns of your team or work with them to come up with something that works better for you.

In this Master Class webinar, user-centered design expert and IxDF course editor William Hudson will unveil the surprising origins of user stories and show how simple changes in their structure (recasting them as persona stories) can make them much more effective.

Video transcript
  1. 00:00:00 --> 00:00:30

    Despite their promising title, user stories are not as user-centered as you might expect – which might seem a bit surprising, given that they are the main method that most software development approaches, including Agile, use to turn requirements into working systems. Find out what the problems are and how to fix them. There's a whole set of them, and we have some simple solutions and some difficult solutions.

  2. 00:00:30 --> 00:00:40

    It's a very challenging issue, but I'm going to cover all the main points in my Master Class, so please join me.

What is MVP in agile?

In Agile development, an MVP or Minimum Viable Product is a foundational concept aimed at validating a product idea with the least amount of work. As emphasized in the video, the MVP approach, rooted in The Lean Startup methodology, isn't about delivering a stripped-down product version but a functional version designed for learning. 

Video transcript
  1. 00:00:00 --> 00:00:31

    One of the key ways of designing for  experimentation is to build an MVP, or Minimum Viable Product. This is a concept that was first introduced by Frank Robinson back in 2001 and was popularized by Eric Ries and Steve Blank. It's now heavily associated with The Lean Startup methodology, and it's obviously probably one of the most misunderstood concepts that is also wildly popular, which is impressive because there's actually a lot of competition in that category. People seem to have misunderstood what a minimum  viable product is

  2. 00:00:31 --> 00:01:04

    in about as many ways as you can do it, but the most common misunderstanding  is that you can just cut as much stuff out of a feature or product and then ship it and then – I don't know – learn something, I guess. But the thing is, you *can't learn something from  a really bad product*. You just learn that nobody likes really bad things, which is something that we already know. Instead, what we should be doing is designing and building something that *we can actually learn something from*. Specifically, what we normally want to learn from  building an MVP is whether the proposed feature or product *solves a real problem* for a specific type of user.

  3. 00:01:04 --> 00:01:33

    The idea is, if the users we've identified really have a problem that they want solved and they try out our *small* version of the product, that they'll overlook some simple missing features or some small problems because getting that problem solved in this way is so important. The great thing is, if we combine this technique with the one where we share our work with a small group of early users, we can end up getting really good data on the exact things that we're missing. This can sometimes mean that we end up building something far smaller than we thought we'd need in the first place.

The primary goal is understanding whether the product addresses a genuine problem for a targeted user group. Contrary to misconceptions, an MVP is not a mediocre product; it's a tool for gathering insights. By presenting the MVP to a select group of early users, developers can obtain valuable feedback on essential missing elements, often realizing that a more straightforward solution than initially envisioned can adequately meet user needs.

What is Continuous Discovery in agile?

In Agile development, Continuous Discovery is a paradigm shift from the traditional project-based approach, emphasizing the constant infusion of customer input into product decisions. As detailed in the video, instead of setting significant project goals based on early research and then relying on that data throughout, continuous discovery advocates for a relentless focus on gathering customer insights.

Video transcript
  1. 00:00:00 --> 00:00:33

    Software grew up in sort of a project world where *business stakeholders defined* a whole big chunk of work to do; we wrote product requirements; the team built it; we learned way late in the process nobody wanted it. And so, we've seen people move more towards an *Agile mindset* or a *continuous improvement mindset* where we're looking at: How do we work in smaller chunks? How do we get feedback more continuously? So, continuous discovery is really just: How do we continuously *infuse  our decisions* about what we're building

  2. 00:00:33 --> 00:01:02

    with customer input? — instead of thinking about it as a project approach where you do some research up front and then you rely on that research for the rest of the project. I think this switch from project to continuous is one of the hardest ones. So, even when people – let's say they start to adopt Scrum as their sort of Agile methodology and they work in a two-week sprint; really all they're doing, what most teams do, is they make the mistake of

  3. 00:01:02 --> 00:01:31

    'Oh, well, I just take my old Waterfall process and I jam it into a two-week cycle, and so I'm still defining upfront two weeks' worth of work and then my  engineers build it and then I do a little bit of research while they're building,  and I define the next two-week iteration.' The challenge with that is that when we do our  research in a project basis even if it's just a two-week project, is that we do the research on the *big questions*, but we don't do the research on all the little teeny tiny questions that come up as we're building,

  4. 00:01:31 --> 00:02:03

    like, 'What do we call this button?' or 'How do we expose this in the interface?'  or 'How should the data model work?' And the example that I give for this is I feel like anybody on the planet could look at their mobile phone and find a dozen apps that they were excited about, which is why they downloaded it. So, they got the big idea of the app right, but then they never used it, because they *tried to* and they got all those little details wrong, and the app didn't quite work as promised. So, a more continuous discovery process is

  5. 00:02:03 --> 00:02:14

    we have to answer those *big questions* like 'What should we be building?'; we also have to answer those *little questions* as we're building so that we get all the little details  right and people actually use our products.

The aim is not just to answer big questions, such as "What should we be building?" but also to address the minute details that arise during the development process, like button naming or interface design. The risk of neglecting these details is clear: products might align with a broad user need but fail in execution due to overlooked specifics. By adopting a continuous discovery mindset, teams can ensure that both the overarching concept and the granular elements of a product align with user needs and preferences.

What are the information radiators in agile?

Information radiators are prominent tools in agile environments that "radiate" essential data to the team. Essentially, they are big visible charts or displays that provide real-time insights. As mentioned in the video, information radiators can range from dashboards showcasing company metrics, sales numbers, and critical alerts to design artifacts like journey maps or personas posted on walls. 

Video transcript
  1. 00:00:00 --> 00:00:31

    Information radiators are any big thing that  sits in the space with the team and, well... radiates information. (Right there in the name.) They can also be called *big visible charts*. Sometimes they'll be something like a real-time dashboard showing metrics of various sorts. They can be things like company metrics or sales numbers, or they can  be important alerts like an engineering problem. They're meant *to convey information to the team that everybody should know all of the time*. Lots of design artifacts can also be used as  information radiators,

  2. 00:00:31 --> 00:00:56

    like a journey map or a set of personas that are posted on the wall. Those are information radiators. Now, these can be trickier on remote teams since you have to have a place to constantly display things but – you know – for example, a dedicated Slack channel that constantly updates specific metrics or that shows code pull requests. Those would be information radiators. So would a Google Analytics dashboard  that everybody on a team has access to.

They ensure that vital information is consistently accessible to all team members. This concept can be adapted for remote teams with tools like dedicated Slack channels updating specific metrics or Google Analytics dashboards shared with everyone. The main goal is to keep everyone informed and aligned at all times.
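
As a loose sketch of the idea (the metrics and formatting are invented for illustration), an information radiator can be as simple as a script that renders always-visible team metrics, whether on a wall display or posted on a schedule to a shared channel:

```python
metrics = {"Sign-ups today": 412, "Open bugs": 7, "Build": "passing"}

def render_radiator(data: dict) -> str:
    """Render metrics as one big, glanceable block of text."""
    width = max(len(name) for name in data)
    lines = ["=== TEAM DASHBOARD ==="]
    lines += [f"{name.ljust(width)} : {value}" for name, value in data.items()]
    return "\n".join(lines)

print(render_radiator(metrics))
```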

When should agile teams make time for innovation?

Agile teams consistently aim for incremental improvements, as highlighted in the video, which discusses the balance between substantial innovative changes and minor adjustments. 

Video transcript
  1. 00:00:00 --> 00:00:31

    Let's talk about the difference between big,  innovative changes to our product and small, incremental improvements, and the kinds of research  that you might need in order to make these changes. We'll start with the incremental  improvements because that's really the most frequent kinds of changes  that we make as designers and researchers. While we all like to talk about designing things  from scratch or making huge, sweeping changes,

  2. 00:00:31 --> 00:01:00

    the vast majority of people spend a lot of their  time working on existing products and making them a little bit better every day. So, imagine you're building your new job marketplace to connect job seekers with potential employers. The product works. It's out in the real world being used by folks to find jobs every day. It's great! You made a thing that people are using, for money. Now, your product manager is looking at the metrics and they notice that a bunch of people are signing up and looking at jobs but they're not applying for anything.

  3. 00:01:00 --> 00:01:33

    Your job is to figure out why. So, what do you do? You can go ahead and pause the video and think  about it for a minute if you want. There are a lot of different options you could go with here, but at the very least you're going to want to figure out the following things: Where are people stopping  in the process and why are they stopping there? You'll probably want to dig into metrics a bit and  figure out if folks do anything besides just look at jobs. Do they fill out their profile? Do they  look at job details? Do they click the Apply button? And then do they give up at that point? Or do they never actually even get to that point?

  4. 00:01:33 --> 00:02:02

    Once you know where they're giving up, you'll  probably do some simple observational testing of actual users to see what's happening when  they do drop out. You'll probably also want to talk to them about why they're not applying. Maybe you'll find out that they get frustrated because they can't find jobs in their area. Well, that'd be great because that's really easy to fix; if that's the problem, maybe you can try letting them search  for jobs near them. That's an *incremental change*. Now, what do we mean by that? It doesn't necessarily mean that it doesn't have a big impact on metrics.

  5. 00:02:02 --> 00:02:30

    Things like this can be hugely important for your metrics. If you manage to get lots more qualified candidates to apply to jobs, that's a huge win for the employers who are looking for great employees and it doesn't matter that it was just a simple button that you added. But it's not a wildly innovative change.  In fact, it's a pretty standard feature on most job boards, and it's a very small improvement in terms of engineering effort, or at least it should be. If it isn't, there may be something wrong with your engineering department... which is a totally different course.

  6. 00:02:30 --> 00:03:03

    This change is *improving an existing flow*, rather than completely changing how something is done or adding a brand-new feature. OK, now, imagine that you're doing some observational research with your job applicants and you learn that for whatever reason they really don't have very much access to computers or they're not used to typing on a keyboard. This might lead to a very different sort of change than just searching for jobs in their area. Rather than making a small, incremental improvement to a search page, you might have to come up with an entirely  different way for candidates to apply for jobs.

  7. 00:03:03 --> 00:03:31

    Maybe they need to film themselves using their  phone cameras. This is a much larger change; it's *less incremental* since you're probably going to have to change or at least add a major feature to the entire job application process. You'll probably have to change how job seekers get reviewed by potential employers as well since they'll be reviewing videos rather than text resumes – which they might not be used to. This is a big change, but it's still incremental because it's not really changing what the product does.

  8. 00:03:31 --> 00:04:06

    It's just finding a new way to do the thing that it already did. OK, now, let's say that you have the option to do some really deep ethnographic research with some of your potential job applicants. You run some contextual inquiry sessions with them or maybe you run a diary study to understand all of the different jobs that they look at and learn why they are or aren't applying. Maybe in these deeper, more open-ended research sessions, you start to learn that the reason that a lot of potential job applicants drop out is because they just don't have the skills for the necessary jobs.

  9. 00:04:06 --> 00:04:30

    But what could *you* do about that? Well... our only options are either to find different applicants, find more suitable jobs or create some way to train our users in the skills that they need for the kinds of jobs that are available. All of those are really pretty big, risky ventures, but they just might be what we need to do to get more applicants into jobs. These are very big, and a couple of them are fairly innovative changes.

  10. 00:04:30 --> 00:05:05

    If the company pivots into, say, trainings and certifications or assessments, that definitely qualifies as innovation, at least for your product, but *how* does the research change for *finding* each of these sorts of things? Couldn't you have found out that applicants aren't qualified with the same types of research that you used to learn that they wanted to search by location? Maybe. Sometimes we find all sorts of things in  very lightweight usability-type testing, but  *more often* we find bigger, more disruptive  things in deeper kinds of research – things like contextual inquiry, diary studies or longer-term relationships that we build with our customers.

  11. 00:05:05 --> 00:05:30

    Also, bigger, more disruptive changes often require  us to do more in-depth research just to make sure that we're going in the right direction  because the bigger it is the more risky it is. Let's say we ran some simple usability testing on  the application process. That would mean we'd give applicants a task to perform, like find a job and apply to it. What might we learn from that? Well, that's the place where we'd learn if there were any bugs or confusing

  12. 00:05:30 --> 00:06:04

    parts of the system – basically, *can* somebody apply for a job? It takes more of a real conversation with a real user or a potential user to learn why they're not applying for jobs. It's not that one kind of testing is better than the other; it's that you can learn very different things with the different types of testing. Some types of research tend to deliver more in-depth learnings that can lead to big breakthrough changes, while other types of  research tend to lead to smaller, more incremental but still quite useful and impactful changes. Both are extremely useful on agile teams, but you may find that the latter is more common just because many

  13. 00:06:04 --> 00:06:15

    agile teams don't really know how to schedule those big  longer-term types of research studies, while running quick usability testing on existing  software is quite easy and can even often be automated.

While most adjustments are incremental and designed to refine and enhance existing features, occasionally, deeper insights reveal the need for more significant innovation. For instance, realizing that job applicants lack the necessary skills could lead to introducing training modules – a substantial shift. Such innovations often stem from in-depth research methods like contextual inquiries or diary studies. Therefore, agile teams should prioritize innovation when deep research insights reveal substantial gaps or when there's an opportunity to meet user needs in novel ways. Regularly scheduled retrospectives can be an ideal time to discuss and plan for these innovative endeavors.

Where to learn more about agile?

The Agile Methods for UX Design course from the Interaction Design Foundation is an excellent starting point for those eager to dive deep into agile methodologies. This course provides comprehensive insights into agile practices and their application in UX design. Additionally, the Interaction Design Foundation offers many articles, such as this one on agile development, which shed light on various facets of the agile approach. Coupling these resources with real-world practice will deepen your understanding of agile.

Video transcript
  1. 00:00:00 --> 00:00:35

    About a little over a year ago, I got bored  and decided to start a fight on Twitter; you know – as you do; as I do. And I asked Design Twitter,  "What do you hate about agile?" And then just to kind of kick the hornet's nest a little  more, I asked product folks the same thing, and I got a few responses! I got kind of a lot of responses, actually, and some of them were really interesting, and a lot of them were really similar, and the complaints all seemed to fall into these few different categories.

  2. 00:00:35 --> 00:01:01

    But the biggest similarity was that *none of the things* that people were complaining about *seemed particularly agile* to me – at least, I mean they weren't the kind of agile teams that I had been on or led, and I got kind of interested and I ended up having these more in-depth conversations with a bunch of folks. So, I did more research because – you know – *of course* I did; I am a user-centered designer – kind of my thing. For this research, we got people to tell us stories about their teams,

  3. 00:01:01 --> 00:01:34

    and what they considered agile to be, what worked, what didn't, what they loved, what they hated, especially what they hated, because those are always the most fun stories. They're also why I won't be identifying any of the research respondents by name here. They were all promised confidentiality,  and I don't burn my sources. Anyway, I got stories from literally hundreds of people, and again I had more in-depth conversations with lots of them, trying to get sort of a range of folks. I didn't just talk to designers. I talked to product people, engineers, some folks at agencies, big companies, small companies, obviously designers.

  4. 00:01:34 --> 00:02:00

    I even talked to people working in governments. What I *rarely saw* was something that I would describe as *an actually agile team*. I started working with the Interaction Design Foundation to write a course for new designers. And I decided to write a course about designing for agile teams because based on their research it was something that *their* students had been asking for for a bunch, and it turns out the designers I had chatted with weren't the only ones who had trouble figuring out

  5. 00:02:00 --> 00:02:05

    how to design on agile or agile-ish teams, which is not surprising.


Literature on Agile Development

Here's the entire UX literature on Agile Development by the Interaction Design Foundation, collated in one place:

Learn more about Agile Development

Take a deep dive into Agile Development with our course Agile Methods for UX Design.

Agile, in one form or another, has taken over the software development world and is poised to move into almost every other industry. The problem is that a lot of teams and organizations that call themselves “agile” don’t seem to have much in common with each other. This can be extremely confusing to a new team member, especially if you’ve previously worked on an “agile” team that had an entirely different definition of “agility”!

Since the release of the Agile Manifesto in 2001, agile methodologies have become almost unrecognizable in many organizations, even as they have become wildly popular. 

To understand the real-world challenges and best practices to work under the constraints of agile teams, we spoke with hundreds of professionals with experience working in agile environments. This research led us to create Agile Methods for UX Design.

In this course, we aim to show you what true agility is and how closely agile methodologies can map to design. You will learn both the theory and the real-world implementation of agile, its different flavors, and how you can work with different versions of agile teams.

You will learn about the key principles of agile, examples of teams that perform all the agile “rituals” but aren’t actually agile, and examples of teams that skip the rituals but actually embody the spirit.

You’ll learn about agile-specific techniques for research and design, such as designing smaller things, practicing continuous discovery, refactoring designs, and iterating.

You will also walk away with practical advice for working better with your team and improving processes at your company so that you can get some of the benefits of real agility.

This course is aimed at people who already know how to design or research (or who want to work with designers and researchers) but who want to learn how to operate better within a specific environment. There are lots of tools designers use within an agile environment that are no different from tools they’d use anywhere else, and we won’t be covering how to use those tools generally, but we will talk about how agile deliverables can differ from those you’d find in a more traditional UX team. 

Your course instructor is product management and user experience design expert Laura Klein. Laura is the author of Build Better Products and UX for Lean Startups and the co-host of the podcast What is Wrong with UX?

With over 20 years of experience in tech, Laura specializes in helping companies innovate responsibly and improve their product development process, and she especially enjoys working with lean startups and agile development teams.

In this course, you will also hear from industry experts Teresa Torres (Product Discovery Coach at Product Talk), Janna Bastow (CEO and Co-founder of ProdPad) and Adam Thomas (product management strategist and consultant).

All open-source articles on Agile Development


Open Access—Link to us!

We believe in Open Access and the democratization of knowledge. Unfortunately, world-class educational materials such as this page are normally hidden behind paywalls or in expensive textbooks.

If you want this to change, cite this article, link to us, or join us to help us democratize design knowledge!


Cite according to academic standards

Simply copy and paste the text below into your bibliographic reference list, onto your blog, or anywhere else. You can also just hyperlink to this page.

Interaction Design Foundation - IxDF. (2016, June 5). What is Agile Development?. Interaction Design Foundation - IxDF.

New to UX Design? We're Giving You a Free eBook!

The Basics of User Experience Design

Download our free ebook “The Basics of User Experience Design” to learn about core concepts of UX design.

In 9 chapters, we'll cover: conducting user interviews, design thinking, interaction design, mobile UX design, usability, UX research, and many more!
