We are the business

An ex-manager of mine once pointed out that we need to stop talking about “the business.” Doing so gives leverage to those claiming to represent “the business” and limits the influence of the engineering team. His was more a political observation than a call to change our mindset, but ever since then I have noticed how commonplace it is for colleagues to make vague assurances that “the business requested it” or “the business want it like that.”

When someone uses the term “the business” they invoke shadowy high priests with absolute knowledge — but they could be referring to a clueless HiPPO (the highest-paid person’s opinion), an opinionated sales or marketing exec, or equally to any end user of a piece of software. The engineer should have every right to question those requirements and to ask exactly who “the business” is in this scenario.

At Westfield Labs we are very fortunate to work in a ‘digital’ department which combines product, design, end-users and engineering resources as equal collaborators. Most recently we have moved to truly cross-functional teams where the only direction given to us by sponsors and stakeholders is high-level: e.g. bring us more customers and more conversions by focusing on streams x and y. Sure, the product team provide the ultimate direction from a product perspective, but only after close consultation and collaboration with all other relevant parties.

In this scenario it is not hard for an engineer to think of themselves as part of the business, and I positively encourage my team to stop using the term ‘the business’ to refer to others, as it implies that we are not an equal partner. To take it further, I actively encourage my engineers to question and understand business requirements and to shout out if they don’t make sense.

An engineer from a UK online retailer recently told me that his company “think of themselves as a marketing company, not a software company” and see the engineering department as a necessary expense for realising their feature requests. In a business that is so dependent on the quality of the implementation, and on iterative improvements to that implementation, it is naive to think that engineers are not equal partners.

Obviously it’s not so easy when you are working in an agency (and let’s face it, sometimes you are working from a spec and clearly not ‘equal partners’), but any enterprise that wishes to succeed in the digital age will ultimately depend on the quality of its implementation – and on feedback from those doing the implementing. Otherwise it will be made irrelevant by a competitor that does both.

So, I entreat software developers everywhere: let’s stop talking about “the business” and start talking about customers, stakeholders, sponsors, the sales team, marketers, the product team — whatever. Make it clear that we consider ourselves part of ‘the business’.

Iterating vs Dabbling

Agilists are often urging each other to “iterate” — that is, to make small, incremental changes and test each change in the market as you go.

Iterating makes sense, especially when building complex systems. In complex systems, even small changes can have unintended or unexpected effects, and by moving incrementally we give ourselves the best chance of detecting problems early enough to minimise the cost of correction. We’re not just talking about software bugs in the new features here. “Cost of correction” includes addressing misjudged customer requirements, pivoting to reach a better market fit, and fixing regression errors.

We can also iterate without coding and this can make a lot of sense in the very early days of product development. Deming’s “Plan, Do, Study, Act” (PDSA) cycle is helpful even in early concept development. For example, we might have a few iterations of the basic concept development, talking to potential customers and then reflecting back our understanding of their needs, just to make sure we’ve really understood, and are talking about those needs in ways the potential customers recognise.

Iterating

So Iterating is Good and we’ve all got that religion. Excellent.

What we might not have is a common understanding of what “iterate” means. It’s obviously suggestive of a cycle, but are we all using the same steps in the cycle? I’ve specifically called out Deming above because it’s a genuine iterative cycle:

  • Plan: Decide, up front, what question you’re asking and how you’re going to get the answer. If you believe you’re not asking a question then what you’re about to do is probably not an “iteration” of anything so you can stop reading now.
  • Do: Execute the Plan. Measure and gather results.
  • Study: Analyse the data gathered above. Look at what we’ve learned, and whether our original question has been answered.[1]
  • Act: (Action Research calls this “reflect”) Finally, we integrate the learning generated from the Study above into our product, service, processes or plans.

Looking at the four steps above, it might be easy to miss an essential feature: the cycle repeats. That is, you are not iterating if you only do it once.

Dabbling

Apple’s App Store is littered with dabblers.

Dabbling has some shallow similarities to iterating:

  • Small things are done, then released.
  • There’s an apparent attempt to gather feedback.

And then, after that, nothing happens.

There are some particular examples I have in mind but it’s unfair to call them out because there are so many others. On the App Store the telltale sign of a dabble is that there have been only one or two releases, and they were some time ago. There’s nothing really wrong with this kind of development. Perhaps the author just wanted to learn the process, and that they’ve taken things all the way through to a release on the App Store is actually a Really Good Thing from a learning point of view.

On the other hand you can also find apps that have had one or two releases and then been abandoned because they “haven’t been profitable.” It’s easy and relatively cheap to build new apps — but it’s difficult and expensive to build new businesses.

Even large and well-funded projects can catch a touch of the dabbles. Requirements churn, stories that fiddle with colours and fonts, and funding cliff-falls are all potential warning signs that you’ve stopped being serious and have started pretending:

  • Requirements Churn: Are you implementing the same basic functionality but you’re not sure why? Are your non-functional requirements changing rapidly? Are you contemplating switching to Yet Another Framework? Take a moment to check if these changes are going through a proper iteration cycle (PDSA).
  • Polishing the Poo: Looking back at past projects, I see the number of “0.5 point” stories and realise how useless all that was. It added up to a lot of “points” but I suspect little business value was added. There was no measurement of effects in the “Do” step, and no attempt to Study the impact. We’ll simply never know if any of that work was worth it.
  • Budget/Funding Cliff-Falls: Looking back at research projects that I’ve managed, I’ve decided one mistake I made was approving “build a prototype” projects with no funding or plan locked in to evaluate the prototype and develop it further. We certainly intended to do those things. But without a pre-approved budget to actually do them we lost momentum, staff, and potential customer enthusiasm.

Lessons Learned

There’s nothing wrong with dabbling, but dabble consciously. Understand when you’re doing it and why. If you’re iterating, iterate consciously. Call out the specific steps in the Deming/Shewhart cycle and confirm for yourself that the results of this iteration will be used in the next.

Now get back to work.
🙂


Should a team assign points to bugs in a sprint?

I’ve been watching a discussion on a LinkedIn Agile group about whether or not a team should assign points (or estimates generally) to a bug for inclusion in a sprint. I’m going to write about how our teams handled the question and think about it from an ‘agile values’ point of view.

The Question

I’m assuming a Scrum process here, so we’ll be talking about sprints, stories and bugs:

  • A sprint is a fixed-time iteration of design, development and release.
  • A story is an “increment of business value.” In other words, once this story is done, someone will be able to do something useful. It has a “level of effort” estimate, usually in “points.”
  • The velocity is the number of points completed per sprint, usually a weighted average over the last few sprints. It’s used for forward estimates of the work that can be completed (see the sketch after this list).
  • A bug is a defect or unintended behaviour in the system. More generally, a ‘bug’ exists when the system’s behaviour does not meet a customer’s expectations.
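
Since velocity is just arithmetic, a minimal sketch may help make the definition concrete. This is illustrative only: the weighting scheme and all of the numbers below are invented, not a prescribed formula.

```python
# Hypothetical sketch: velocity as a weighted average of recent sprints,
# used for a forward estimate of how many sprints a backlog might take.
import math

def weighted_velocity(recent_points, weights=None):
    """Average points per sprint, weighting newer sprints more heavily.

    recent_points: points completed in each sprint, oldest first.
    weights: optional per-sprint weights; defaults to 1, 2, 3, ... (newest highest).
    """
    if weights is None:
        weights = range(1, len(recent_points) + 1)
    weights = list(weights)
    return sum(p * w for p, w in zip(recent_points, weights)) / sum(weights)

def sprints_remaining(backlog_points, velocity):
    """Forward estimate: whole sprints needed to clear the backlog."""
    return math.ceil(backlog_points / velocity)

velocity = weighted_velocity([18, 22, 17, 21])   # last four sprints, invented figures
print(round(velocity, 1))                        # 19.7 points per sprint
print(sprints_remaining(120, velocity))          # 7 sprints for a 120-point backlog
```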

It should be relatively uncontroversial that if a bug is found in a story during its sprint then there’s no question what happens: the story card is simply “sent back” to the developer to be fixed[1]. The question some teams seem to struggle with is what happens when a defect is found after the story has been accepted as complete, and possibly even pushed to production. Here are some options people talk about when discussing how to handle the situation in an “agile” way[2]:

  1. Take the original card, remove its “complete” status, and move it back into the backlog. Subtract its points from the velocity for that past sprint (this should lower the team’s average velocity, which is generally the intention). A team may want to re-estimate the card, especially if the bug appears particularly difficult or suggests that a deeper architectural issue or misunderstanding of requirements is at fault. However, this is generally avoided in order to maintain a consistent velocity.
  2. Leave the original card, create a new “Bug” card, and add it to the backlog. If the card is urgent, it’s often added to the current sprint immediately. This approach has two variants, which are the subject of the LinkedIn discussion:
    1. The bug card is given an estimate (usually in points). If the bug goes into the current sprint, then either the team is now over-committed and likely to fall short, or the Product Owner is asked which of their stories is the lowest priority and whether it can move to the next sprint. Either way the team’s velocity for that sprint or the next is usually unaffected, because the same number of points are delivered.
    2. The bug card does not have an estimate or points. Teams usually pull it into the current sprint anyway, or agree with the Product Owner that it should wait for the next sprint. In either case, the team’s velocity is likely to fall, because the bug effectively counts for “zero points,” no matter how much work it takes to fix. (The sketch after this list contrasts the two variants.)
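
To make the difference concrete, here is the same hypothetical sprint worked through under both variants. All of the figures are invented; the point is only what the reported velocity shows.

```python
# Hypothetical sprint contrasting the two variants (figures invented).
# The team's real capacity is identical either way; only the reported
# velocity differs.

capacity_points = 20      # what the team can actually deliver this sprint
bug_effort_points = 3     # real effort consumed by the bug fix

# Variant 1: the bug is estimated. A 3-point story slips to the next sprint,
# the bug's 3 points are counted, and the reported velocity is unchanged.
velocity_with_estimated_bug = (capacity_points - bug_effort_points) + bug_effort_points
print(velocity_with_estimated_bug)   # 20 -- the defect is invisible in the numbers

# Variant 2: the bug carries no points. The same effort is spent, but only
# the remaining story points count, so the reported velocity drops.
velocity_with_zero_point_bug = capacity_points - bug_effort_points
print(velocity_with_zero_point_bug)  # 17 -- the drop surfaces the quality problem
```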

Our Answer

Our teams had the same discussions and we came down on the side of not assigning points to bugs. We saw points as a useful measure of how much functionality could be delivered each sprint. A bug was generally a fix to something already delivered. Fixing a bug may deliver “value for the customer,” but really what we were doing was “making good” on an earlier promise to deliver value. If there were a lot of bugs, velocity would drop. That was intended — it surfaced an issue in the development process. The root cause could be a poor choice of software library, too much technical debt, developer capability, or an inadequate QA process. By surfacing the problem, we were on the hook for addressing those issues, and could explain to management what was going on. Looking back though, I can see that in many ways it almost doesn’t matter. What made the difference were the values that underpinned the discussion in the first place:

  • Our highest priority was to satisfy the customer through early and continuous delivery of valuable software. Working software was our primary measure of progress.
  • We wanted sustainable development, so that the entire business team could sustain a constant pace indefinitely.

We chose a process that we felt reflected those values, but you have to consider your own situation and adapt to it. (I might also note that “individuals and interactions” are supposed to be favoured over processes…)

More Light, Less Heat

These discussions sometimes seem to get a bit heated. I’m not sure if that reflects passion or pride, but either way my advice is not to put too much emphasis on the pronouncements of experts. They’re worth listening to, but only you know what’s going on in your organisation and your team and therefore what ought to work best. In one company I worked at, the accountants became aware of “points” and started including them in their reports. They would calculate the “cost per point” and developed an obsessive concern about reducing the cost per point and increasing the points per sprint (velocity). In that kind of environment you’re sorely tempted to assign points to everything. Worse, the team starts to suffer a kind of “points inflation” where the estimates start going up, velocity starts going “up” and “cost per point” starts falling — despite the fact that things are getting worse, and not better.

This makes me want to try an experiment one day: what if developers didn’t estimate level of effort on cards at all? Instead, product owners would estimate how much additional revenue each story would earn the business per annum. The estimates would be in orders of magnitude, for example:

  • Fixing a CSS bug on a particular browser won’t do much, so it’s worth $10 pa.
  • Fixing a CSS bug that affects all browsers might do a bit more. Maybe it makes the site look more professional. Anyway, that might be worth $100 pa.
  • A very small amount of fraud is coming from a particular location. Block it to save $1,000 pa.
  • A small percentage of customers want to save their shopping carts as wish lists. We might estimate this would add $10,000 pa.
  • Expanding wish lists to wedding registries could bring in $100,000 pa.
  • An entire new line of business might bring in $1,000,000 pa.

An alternative would be to use ‘percentage increase’ in revenue instead.

Either way, a team’s velocity would be calculated from business value rather than level of effort (a rough sketch follows the list below). The assumption here is that business value is roughly correlated with level of effort over the medium to long term. My hypothesis is that teams might change the way they do things:

  • Product Owners and developers would be much more focused on delivering business value.
  • Large stories would have to be broken down into smaller increments of business value. A story worth “$1,000,000 pa” won’t fit and so we focus on doing something that will meet some customer needs and bring in some revenue early.
  • It would become much easier to practise simplicity: the art of maximising work not done.
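
Here is a rough sketch of how that value-based scoring might look. The backlog, story names and dollar figures are invented for illustration (they echo the order-of-magnitude examples above), and “value velocity” is simply the sum of the estimates for the stories delivered in a sprint.

```python
# Hypothetical sketch of the experiment: each story carries an
# order-of-magnitude revenue estimate ($ per annum) instead of effort points.

backlog = {
    "Fix CSS bug on one browser": 10,
    "Fix CSS bug on all browsers": 100,
    "Block fraud from one location": 1_000,
    "Save carts as wish lists": 10_000,
    "Expand wish lists to wedding registries": 100_000,
}

# Suppose the team ships these two stories this sprint (chosen arbitrarily).
delivered = ["Block fraud from one location", "Save carts as wish lists"]

# "Velocity" becomes estimated business value delivered per sprint,
# rather than effort points completed.
value_velocity = sum(backlog[story] for story in delivered)
print(f"${value_velocity:,} pa delivered this sprint")   # $11,000 pa
```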

No doubt there’d be unintended consequences too. Still, I’d like to try it. 🙂

Agile Values

When I first started out with Agile software development I went looking for an agile analogue of the “Joel Test.” I wanted some kind of check-list I could run through to ensure that what we were doing included all the “essential” elements of agile development. Although I didn’t realise it at the time, this was because I wasn’t Doing It Right, because I had misunderstood Agile.

The Part I Missed

The Agile Manifesto is surprisingly brief, and the brevity is significant. Right up front we can see that the manifesto is a statement of values:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on
the right, we value the items on the left more.

Note that the manifesto does not say that there is no value in the items on the right. What it does assert though is that the values on the left are given priority over the ones on the right.

Most software engineers are familiar with the need to make trade-offs. It’s one of the things that can make development so difficult. An example of a trade-off is the tension between user-friendliness and security (no, usability is not the same thing as user-friendliness. But I digress…). Generally speaking, the more secure we make a system, the more barriers we place around it, and the less user-friendly it becomes. This is probably what you want in a banking system, but is not so good for your mailing list sign-up page.

The important point in considering such trade-offs is that neither is more important in isolation. The two need to be considered in context and then one explicitly chosen as being more appropriate in that particular situation. We can never truly say “security is always more important than ease of use” because it’s the context, the specific application, that determines importance.

Of course in software development you’re never trading-off just two factors. There can be dozens of considerations that you need to balance. These include robustness, performance (speed), resource utilisation, simplicity, and resilience. Where there is tension between developers on the Right Way to do things, explicitly stating the priority order of these things from the business perspective can help resolve the issue.

But back to the agile values. All values are important (because that’s why we call them “values”), but we often need to prefer one over another. Consider for example the values of honesty and compassion. If a friend gave you a present that you hated, would you choose honesty, and tell them that? Or would you “prioritise” compassion, spare their feelings, and thank them? What if they did it all the time, to everyone? What if it was your partner, and you thought the poor choice reflected their level of care for you?

In any non-trivial project there are considerations, values if you will, that can conflict in this way. In the Agile Manifesto, the authors made this conflict explicit, and stated a preference for certain values over others. And one of those statements places individuals over processes and tools. That’s why there’s no check-list. When you start to become obsessed with the artefacts of agile, you stop being agile.

What I Know Now

The Agile Principles follow directly from its values. For me, the first is the most important:

“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”

(Emphasis added)

Often (not always, but often), conflict within a team, or between teams, is because values are not aligned. The classic example is Development and Operations. Development is all about change — because change is how they create business value. New features, better performance, better customer service. Operations meanwhile is all about stability. That’s how they create business value. That means minimising change. I was once in a meeting with Dave Thomas and the CIO of a company. Dave was there to talk about his experiences introducing Agile to large companies. During the meeting Dave made a comment about change and the CIO said: “Indeed. In fact it’s better to not make any technology changes at all.” Which is not what Dave was saying. But the CIO was filtering through a different list of prioritised values, in an environment where the worst thing ever is to make a change and have it break something.

DevOps is one response to this conflict over values, although I can’t see it helping really if the values question is not addressed up front by the two teams. This can be difficult. I once tried to get a group of managers and researchers to do pair-wise comparisons of values in order to prioritise a list. It didn’t work. They couldn’t see the point, did not accept that there are trade-offs to be made, and so did not want to choose between “Excellent Research” and “Benefit Industry.” Only later, much later, did they start to engage with that trade-off.

Lessons Learned

The “so what” of all this is that Agile is not a methodology. It’s not a process, or a set of tools, or prescribed practices (that’s what “Scrum” is). Agile is a set of values. Making those values explicit is part of the process of achieving genuine management agreement with how the team will work. And if your teams are experiencing conflict internally, or with other teams, look for a possible divergence in the prioritisation of values. Discussing which values are more important in your particular circumstance can go a long way to resolving such conflicts, even if you don’t end up agreeing on the ones that should be preferred.

PS: The Joel Test is perfectly applicable to all development styles; there doesn’t need to be an “agile equivalent.” Go ahead and use it if it will help you make useful software.

The philosophy of Continuous Deployment

Over the past few years my team have been on a journey toward Continuous Deployment. This has involved changing our architecture, our attitude towards testing, and our tolerance of slow build times; but most fundamentally we have noticed it required a change in philosophy – in our attitude toward quality, responsibility and ownership.

In our previous model our mantra was “you release what you test,” and with the right phase gates and environments in place this allowed new releases to flow through the hands of QA and into the responsibility of Production Operations. Despite having a huge body of automated tests, our attempts to make this process faster always foundered:

  1. we could patch additions into the release branch but this delayed the release while QA assured themselves we were ‘ready’

  2. any late patches or bug fixes to the release branch had to be merged back into the Master branch, and were sometimes forgotten, causing regressions in the next release

  3. the longer the delay to the branch release, the more the product team tried to shoehorn extra stuff into the release branch instead of Master, and the vicious cycle continued

  4. meanwhile QA were always absorbed in getting the branch release out, so work on Master was delayed or its quality suffered

A recent pivot allowed us to become far more aggressive in enforcing a strict “no junk in the trunk (master)” policy, and we found that when a team embraces Continuous Deployment, they start to adopt a philosophy of personal responsibility and develop the following behaviours:

  • ownership of services or code bases

  • collaboration with the operations team (devops)

  • making sure their code is well-tested and production ready

  • releasing code in small releasable chunks

  • thinking about backwards compatibility

  • collaborating and communicating with their team and with the business users that their code changes will affect

  • accepting that they cannot live in isolation on a long-running branch without paying the price in conflicts

  • realising that the QA team are not their bug-finding monkeys

  • checking their production logs for errors

  • writing the correct level of automated tests to limit regressions without negatively impacting productivity

  • monitoring their system’s vital statistics (in all environments)

  • deciding when to back out and when to fix forward (and having a plan for this)

  • having a nuanced understanding of risk, quality assurance, testing and release strategies – recognising that some features and bugs can be released immediately while others need in-depth testing

  • understanding that they no longer need a “definition of done,” because their work isn’t done until it’s successfully released into a stable production environment.

All this comes under the umbrella of personal responsibility, and that comes from the knowledge that when a developer chooses to release a piece of code, that code will go to production and they will accept responsibility for it. Along with that responsibility comes the pride and reward of knowing that one’s code is released – it’s live, in the wild, and changing users’ lives (hopefully for the better).

Recently we experimented with adding a traditional formal sign-off from business owners outside the product team to our release process, and the results have been interesting. We were faced with either applying a moratorium to Master while sign-off was approved, or creating a release branch and letting Master flow. Either way, similar issues arose:

  1. Developers are no longer able to release their own code so Master starts to differ significantly from what is in production.

  2. A “one size fits all” approach is used for release sign-off – whether it’s fixing a typo or refactoring your payment system, you still need sign-off, as developers are no longer trusted to weigh the risk themselves.

  3. The QA team and release manager become responsible for the release, not the individual that wrote the code. The theme of personal responsibility is broken: the developer has moved on to a new task.

  4. The release manager starts to manage what goes into the release: features and fixes are forced to languish on branches until the release manager is ready to merge them.

  5. As the amount of code ready to deploy grows, release managers get nervous and need more time to test on the release branch.

  6. More and more time is spent managing ‘the release’ rather than writing and delivering features.

  7. As stakeholders realise that releases take longer, they start to hold up the release for “one last fix” rather than wait another week. A full regression is then required by a nervous release manager and the vicious cycle continues.

  8. When you push out your release it has batches of changes in it. If production is affected, it is far more difficult to work out which change caused the problem.

The end result: your release cycle is slower, your team is less productive and less engaged, your QA team is over-loaded and your releases are potentially more buggy and harder to fix.

Continuous Deployment is not just about releasing code fast – it’s about having a team that takes pride in their work and feels responsibility for the quality, stability and effectiveness of the live product.

Our focus now is on improving visibility, accountability and automation so we can better provide rich, descriptive release notifications to stakeholders and regain the capacity to release from within the team, rather than having to ask permission from an approval body outside the team.

How Agile would have saved project ORCA

Working in the Web Development industry it’s easy to get the impression that everyone involved in software knows about Agile and is at least paying it lip service, even if their processes are not really Agile in practice. Clearly this is not the case. The Unmitigated Disaster Known As Project ORCA describes how Mitt Romney’s team attempted to win the race to get voters out to the polls using new technology but fell at every hurdle. It presents us with a textbook case of everything that can go wrong in software delivery, with (depending on how significant its impact was) potentially election-losing consequences for Mitt Romney and the GOP.

The following are some of the glaring errors made that would have been avoided using Agile/Lean techniques:

  • Putting a mission critical piece of software to its first real use on a day when failure was not an option.
  • Giving themselves no opportunity for feedback or pressure testing and gaining no validated learnings as to whether what they had built would even work – conceptually, functionally or practically
  • Using a top-down approach that ignored the huge amount of skills, knowledge and experience that could have come from the greater team (those on the ground that would have to use the product)
  • Convincing themselves they had a product so great that it was better to keep it under wraps and maintain the element of surprise than it was to allow the product to be pressure-tested by the people that were going to use it

Agile software development recognises that you will not get it right in version one, that a delivery team needs to include its end users, that software improves iteratively and that not only your software but your business model itself must be able to pivot and adapt to the realities on the ground and the feedback you receive.

This is what Jason Fried of 37signals did when he gave his company a month to work on projects of their own choosing.

It sounds radical at first, but if you have dedicated, passionate people it makes a lot of sense. It’s the staff involved in the day-to-day running and building of your business that are most likely to know what your business needs. Fried has given them an opportunity to demonstrate what’s needed and to propose solutions to those needs.

How can you afford to do this? How can you afford not to? Fried argues: “We would never have had such a burst of creative energy had we stuck to business as usual.”

Jason Fried – Why I gave my company a month off

Wonderful short video on how to be a Product Owner – it should be required viewing for all POs, and similarly essential viewing for anyone trying to understand the principles of Agile. Captured in this video is the essence of Agile product development. Watch it once and if you don’t understand it all, watch it again. But most importantly, if you are a Product Owner, note the parts about working closely with the team.

Validated Learnings: How Heroku Pivot and Adapt

My team at Westfield.com are always trying to educate the business sponsors to implement the simplest possible thing – to release the minimum viable product and generate validated learnings, rather than implement gold-plated visions of what marketing folk think people want.

Francis Is shows how successful Heroku were at doing this in Heroku’s Early History.

Heroku’s first offering bore little resemblance to what they finally became, but it allowed them to pivot and adapt: to drop the offerings that no one cared about (online code editors, no deployments) and concentrate on the stuff they did care about (github integration, scalability). Early releases discovered what users really wanted, and mining that seam of ‘want’ resulted in exponential growth and a $212 million cash buy-out within 3 years.

User stories and asking “Five Whys”

We are currently coaching a new team in the ways of Agile and one of the problems we’ve encountered is getting the devs to write cards or stories using the standard Agile story formats. To Agile newbies the syntactic sugar that surrounds a story’s details often seems like a waste of time. For example, a developer knows exactly what he means when he writes:

“Add currency code to Data Warehouse views”

and feels like he is being made to jump through hoops to turn that into:

“In order to differentiate between international sales, we need to update the data warehouse transactional views to show the currency code, so that they can report on this data.”

The reason we favour this (or the “As a [user]…” format) on agile projects is that the story describes what needs to be done and why. This means that any member of the team can understand exactly why a story has been added to the backlog and doesn’t need to get an explanation from the person that wrote the story or drill down into the acceptance criteria to discover this.

The other benefit of this format is that it forces the person writing the story to find out exactly why the requester wants that story. Indeed, in drilling down to the real requirement, the analyst may discover that the real business requirement is not what the requester is asking for at all.

Aslak Hellesoy (the creator of Cucumber) illustrates all this perfectly in the cucumber documentation, where he describes the process of asking the Five Whys to discover the underlying requirements – the why in the story.

(Shamelessly copied directly from the cucumber wiki for your convenience.)

[5:08pm] Luis_Byclosure: I’m having problems applying the “5 Why” rule, to the feature
“login” (imagine an application like youtube)
[5:08pm] Luis_Byclosure: how do you explain the business value of the feature “login”?
[5:09pm] Luis_Byclosure: In order to be recognized among other people, I want to login
in the application (?)
[5:09pm] Luis_Byclosure: why do I want to be recognized among other people?
[5:11pm] aslakhellesoy: Why do people have to log in?
[5:12pm] Luis_Byclosure: I dunno… why?
[5:12pm] aslakhellesoy: I’m asking you
[5:13pm] aslakhellesoy: Why have you decided login is needed?
[5:13pm] Luis_Byclosure: identify users
[5:14pm] aslakhellesoy: Why do you have to identify users?
[5:14pm] Luis_Byclosure: maybe because people like to know who is
publishing what
[5:15pm] aslakhellesoy: Why would anyone want to know who’s publishing what?
[5:17pm] Luis_Byclosure: because if people feel that that content belongs
to someone, then the content is trustworthy
[5:17pm] aslakhellesoy: Why does content have to appear trustworthy?
[5:20pm] Luis_Byclosure: Trustworthy makes people interested in the content and
consequently in the website
[5:20pm] Luis_Byclosure: Why do I want to get people interested in the website?
[5:20pm] aslakhellesoy: 🙂
[5:21pm] aslakhellesoy: Are you selling something there? Or is it just for fun?
[5:21pm] Luis_Byclosure: Because more traffic means more money in ads
[5:21pm] aslakhellesoy: There you go!
[5:22pm] Luis_Byclosure: Why do I want to get more money in ads? Because I want to increase
de revenues.
[5:22pm] Luis_Byclosure: And this is the end, right?
[5:23pm] aslakhellesoy: In order to drive more people to the website and earn more admoney,
authors should have to login,
so that the content can be displayed with the author and appear
more trustworthy.
[5:23pm] aslakhellesoy: Does that make any sense?
[5:25pm] Luis_Byclosure: Yes, I think so
[5:26pm] aslakhellesoy: It’s easier when you have someone clueless (like me) to ask the
stupid why questions
[5:26pm] aslakhellesoy: Now I know why you want login
[5:26pm] Luis_Byclosure: but it is difficult to find the reason for everything
[5:26pm] aslakhellesoy: And if I was the customer I am in better shape to prioritise this
feature among others
[5:29pm] Luis_Byclosure: true!

https://github.com/cucumber/cucumber/wiki/