But is it in production?

Anyone who’s worked in a genuine continuous delivery environment will find it very frustrating when features are completed but not released to production. Abuse of feature toggles can make this worse, blurring the line between what a team considers complete and what has actually been released to a real user. When you finally release a feature with a big delta to the wild, you will likely face bugs and user issues that need to be fixed and polished before the work is really complete, and the bigger the delta, the worse they are likely to be. In the worst case you may be building something that is completely irrelevant to your users, which could have been discovered many sprints earlier.
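
To make the point concrete, here is a minimal sketch of the kind of feature toggle being described; the flag name, rollout mechanism and config shape are purely illustrative, not taken from any real system:

```typescript
// Minimal feature-toggle sketch (hypothetical flag name and config shape).
// The code is deployed to production, but the feature is only "released"
// to a real user once the flag is switched on for their bucket.
type FlagConfig = Record<string, { enabled: boolean; rolloutPercent: number }>;

const flags: FlagConfig = {
  "new-checkout": { enabled: true, rolloutPercent: 10 }, // "dev complete", 10% released
};

function isEnabled(flag: string, userId: string): boolean {
  const config = flags[flag];
  if (!config || !config.enabled) return false;
  // Deterministic bucketing so each user gets a consistent experience.
  const bucket = userId.split("").reduce((sum, ch) => sum + ch.charCodeAt(0), 0) % 100;
  return bucket < config.rolloutPercent;
}

function renderCheckout(userId: string): string {
  // The "complete" code path sits dark behind the flag until rollout hits 100%.
  return isEnabled("new-checkout", userId) ? "new checkout" : "legacy checkout";
}
```

Until that rollout reaches real users, calling the story “done” is exactly the blurring described above.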

Anyway, these thoughts remind me of my time many years ago at Westfield Digital, when we would run a two-week sprint followed by a long cycle of test and regression before the features actually made it into production. The focus was always on how many points were delivered for ‘completed work’ rather than ‘released work’, and I always felt this had an effect on the quality of the work completed each sprint.

The following cartoon is a shameless tribute/rip-off of XKCD that I drew ages ago while thinking about this.

“Dev complete”

Are you iterating?

There has been so much hype and debate about Agile ways of working over the years that sometimes we lose track of the fundamental benefits to the business that can be found in the famed Agile Manifesto.

If you are a business owner wondering why your CTO is recommending you move to Agile, get better at Agile or even embark on an ‘Agile transformation’, you may be asking what tangible benefits moving down the agile path will actually deliver. If so, read on.

There are many benefits to moving toward an agile model: in many cases it will make your software delivery quicker and more cost-effective, and it should help your company work together as a combined team in delivering software. That said, how do you quantify and measure such abstract achievements?

Are you iterating?

It’s actually simple. With an agile process, gone should be the long delivery cycles, the phase 1 and phase 2, the months of unreleased code and delays, the further missed timelines and the eventual disappointment when the software does not deliver what it was supposed to in the first place.

With a well-structured agile implementation you develop your software iteratively. Your delivery team establishes a vision of what they intend to build, but they deliver that software in small increments of one to two weeks, or less. Each iteration should be demonstrable to interested stakeholders as an “increment of business value” and should ideally allow you to discover something about your customers as a result.

Are you improving?

As a sponsor you know the metrics you care about: sales, conversion, CAC, NPS. Your delivery team should also know all about these KPIs: how they are measured, why they are important and the strategies you are using to improve them. Every story (item of work) they deliver should be aimed at improving these metrics, and each showcase should review the effect of the latest work on them.

Ideally, each iteration should deliver a way of measuring the value of your software and its impact on these metrics, but equally the gain may be a more accurate idea of how long the software will take to deliver. At the very least it should allow the delivery team to share working software early with the stakeholders sponsoring the development so they can:

  1. See tangible benefits being achieved
  2. Ensure that there is alignment between the sponsors’ vision and those doing the building
  3. Review agreed metrics to measure the rate of progress

For more on how an agile approach can transform the way your business works through iterative processes, it’s worth reading Eric Ries’ The Lean Startup, where he describes the approach of using a Minimum Viable Product (MVP) to test your hypotheses about your business ideas and projections.

An agile approach is unlikely to magically make your team deliver software faster overnight, but if you are looking for simple metrics by which to judge your team’s adoption of agile, look for the following:

  1. Are the iterations relatively short (less than 2 weeks)?
  2. Do you get to see an increment of business value released at the end of each iteration?
  3. Are the estimates of what can be delivered becoming increasingly accurate each sprint?

Are you releasing to Production?

To achieve any of the benefits outlined above, your software must be released to a production environment so that the changes are in your customers’ hands. If you are not releasing to Production, not only can you not measure the benefit of the changes in terms of impact on your metrics, but you may be seeing an illusion of progress: the test version of the software you are reviewing may have undiscovered bugs or simply not work in the real world. By keeping your iterations short and the scope of each release small, you maximise your opportunity to learn and minimise the likelihood of introducing compounding issues.

Working as described above allows you, as a sponsor, to quickly determine whether the return on investment is positive and to change course if it is not.

This article was originally published on the Pragma.Team blog


Cutting your losses

Originally published at Lux Group Tech Blog

Back in Feb 2017 I wrote about our decision to rebuild our entire architecture. It is established lore in the software community that rewriting software is something you should never do. This is not only because of Joel Spolsky’s excessive influence in software-land but because so many software engineers have hated the system they worked on, only to find that when they rewrote it, they simply didn’t make it that much better.

Screenshot of small part of the legacy Back Office portal: a dazzling array of tabs, icons and drop down lists, most of which are irrelevant to the task at hand

There are some well-established ways of avoiding the re-write from some of the luminaries of the software world but sometimes you have to come to the conclusion that there’s nothing worth saving. At the Lux Group we were faced with a legacy platform with no discernible architecture, no automated tests, an outsourced engineering team and an overwhelming level of tech debt; we saw no real alternative and suggested the following justifications for doing a strategic rebuild (and doing it properly):

A successful rebuild is normally undertaken under the following conditions:

* The company has gained enough experience to understand its business domain, the customers it is serving and the product that it wants to offer.

* The company is well-resourced, able to invest in more experienced engineers and to invest those engineers in building a product or platform for the future.

* The company stakeholders understand that building a product with solid engineering principles, built-in checks and balances and a high-performing team to run this software takes more time and tends to cost more than doing Rapid Application Development (RAD), in the same way that an architect-designed home tends to be far more expensive to build and maintain than a kit home you buy and renovate yourself.

* The stakeholders understand that Software Engineering is Expensive and building things to last makes this even more expensive; therefore the company and team need to be very selective about which features they opt to build.

It is now over a year later and the product, luxuryescapes.com, has been released without salvaging a single line of code from the original system (the CRM aside). We release code every day, we fix bugs before writing new features, and the small amount of tech debt we have is under control. In retrospect we can see that the rebuild project could easily have spiralled out of control; the following are some of the principles we applied to ensure it didn’t.

Strong Product Team

We have a policy within our team: “if a feature doesn’t make sense to you, don’t do it”. This might sound blindingly obvious, but engineers and designers are often asked to work on stories and features that don’t make any sense to them: they cannot see why customers would want the feature or how it benefits the product or the business. A strong Product Team can challenge senior stakeholders and sponsors to get to the root of the problem or design a holistic solution rather than applying band-aid after band-aid. An empowered engineer can expect a good explanation of the strategy underpinning any feature they are asked to deliver.

Architecture without an end state

In Architecture Without an End State, Michael Nygard describes how we should build architectures designed to keep changing along with the personnel and direction of the business. Lux Group is an ambitious company; by deciding to use microservices we embraced the dynamic nature of the business and the ability to scale teams and iterate quickly. We accepted the challenges microservices created (reporting and atomic transactions) and avoided reverting to a monolith design when faced with them.

When we sold some of our businesses, bought others and restructured the company and our business model, we had an architecture, team and process that was setup to embrace this change.

Iterative development and Continuous Delivery

From the first month of development we created an MVP that we treated as a production environment even though it was not live to the public. This MVP contained the ‘spine’ of the product: simple implementations of critical features such as search, add to cart, purchase and pay vendor. We only showcased software that had been released to that environment, and we released aggressively, with a Continuous Deployment approach. We treated every feature as an MVP, releasing early and iterating. When we failed, we demanded a root cause analysis (RCA). When we finally launched the product, we already had the processes and discipline in place to continue the same practice.

Screenshot of Back Office portal on new platform: data is cleanly presented with consideration given to UI

Transparency

Every month we showed progress to the sponsors and we were honest about our setbacks and failures. By being honest, the delivery team did not get caught behind a lie and the sponsors saw the true rate of progress and an opportunity to consider our options: scope, time, resources.

Challenge every requirement

When you do a rebuild you get a fresh start. Conventional wisdom says you will end up rebuilding every feature you already had, so you should stick with what you’ve got, but this was not the case for us. We had many features that had been built up over the years and were no longer particularly relevant. We were keen to highlight that every feature has a cost of ownership: paying an engineering team to build and maintain it, as well as the ongoing operational costs. Often the cost of ownership does not justify the profit the feature will generate. By costing and challenging every requirement you can keep your product tight and the business focused on what it really needs. This is beautifully explained in one of my favourite blog posts, The Tax of New.

What could we have done better?

We thought we had an elegant plan for solving the inevitable reporting problem that comes with the distributed data of a microservice architecture. It turned out this plan didn’t work, and we had to scramble to build a tactical solution while we learnt a lot more about pub/sub, event sourcing and reconstituting distributed data. If you do embark on microservices, make sure you solve this problem as part of your early MVP.
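
For illustration only, a tactical solution along those lines might look roughly like the sketch below: each service publishes domain events, and a reporting consumer folds them into a denormalised read model. The event and field names are hypothetical, not the actual Lux Group implementation:

```typescript
// Hypothetical domain event published by a purchase service.
interface OrderCompleted {
  type: "OrderCompleted";
  orderId: string;
  customerId: string;
  totalCents: number;
  occurredAt: string; // ISO 8601 timestamp
}

// The reporting consumer subscribes to the event stream and folds each
// event into a denormalised read model used for reports and dashboards.
const revenueByDay = new Map<string, number>();

function project(event: OrderCompleted): void {
  const day = event.occurredAt.slice(0, 10); // YYYY-MM-DD
  revenueByDay.set(day, (revenueByDay.get(day) ?? 0) + event.totalCents);
}

// Replaying the full event history re-constitutes the report from scratch,
// which is what makes reporting over distributed data tractable.
function rebuildReport(history: OrderCompleted[]): Map<string, number> {
  revenueByDay.clear();
  history.forEach(project);
  return revenueByDay;
}
```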

Would I do it again?

We can feel confident that no one at the company wishes we had tried to work incrementally with what we had. We now have a high-performing team with a great product and a platform that is being extended to build the next chapter of the Lux Group’s story. Joel Spolsky and Martin Fowler are probably correct in most cases: if you can salvage parts of your system while you rewrite others, you probably should. However, some systems are unsalvageable, so don’t be afraid to start again from scratch.

Should we be building software?

Originally published as Journey of a Tech Stack on Lux Group’s technical blog


A painting of the Tower of Babel under construction

Should we be building software?

Software engineering is difficult and expensive: this is well understood and is reflected in the commoditisation of software into hugely scalable packages (usually cloud-based) that allow us to achieve our business goals without actually writing any code.

Just 10 years ago, if you wanted an e-commerce site you had to hire a developer and purchase hardware to run and manage it. Similarly, if you wanted a brochure site to promote your business or a blog to publish your thoughts, you hired a developer; soon enough you began to realise how expensive software development was. It’s never set and forget: standards change, fashions change, and sites need ongoing maintenance, development and redesigns.

Nowadays if you want a blog you go to Tumblr or Medium, if you want an e-commerce site you go to Shopify, BigCommerce or Magento, and if you want a brochure site you go to Squarespace or WordPress; most companies should avoid writing bespoke software unless it is really necessary.

So, who does need to hire developers (or software engineers, as we prefer to be called these days)? The answer is companies or startups trying to solve a new problem, one that hasn’t yet been fully commoditised. They hire engineers, QAs, product owners, visual designers and user experience experts to create new software that fills a need or solves a problem for their users.

What is the better way to build software?

One of the most popular contemporary approaches to building software for startups is the Lean Startup method, which involves building the smallest increment of business value possible and validating it in the market. It is a relatively modern approach that can be used in companies of all sizes, but it also happens to be what small startups on restricted budgets have done by default for many years, long before Eric Ries had even dreamed of the term “Lean Startup”.

Phase I: The budget startup

At its genesis, a company normally has serious budget constraints and takes the fastest route to building software and delivering value to its customers. Here at Lux Group, this is exactly how our initial technology stack was built: using PHP (the Swiss Army knife of development languages), outsourced engineers and rapid application development, delivering features as quickly as possible, validating them in the marketplace and then moving on to the next feature. Sales figures were the priority, performance and user experience followed close behind, and technical integrity and system design for an uncertain, ever-changing future were well down the list.

In software engineering, Software Entropy increases over time: unless significant time and resources are invested in maintenance, systems eventually become harder to understand and maintain. If a company is lucky enough to still be in business by this point, it may reach Phase II.

Phase II: The strategic rebuild

The world is littered with the corpses of companies that never made it to Phase II. Many of these companies may have over-engineered their tech stack, over-complicating a solution to a problem they did not yet fully understand. Indeed, many of the companies that the Lux Group has purchased, such as Brands Exclusive, Living Social and Pinchme, were far more ambitious with their technical solutions than the Lux Group engineers, and yet were unable to compete in the market, while a simple, stripped-down, lean approach suited us, allowing us to develop quickly, absorb these companies and move on to the next challenge.

Twitter famously rebuilt its platform after discovering that the original Ruby on Rails implementation could not cope with its traffic, replacing it with a queue-based system that served its purpose better. Facebook took a different approach, creating an extremely sophisticated low-level solution to improve the speed of its PHP code without requiring a full rewrite of its application logic.

A strategic rebuild normally comes when a company is in a strong position. For some dominant companies this market strength can arguably be a disadvantage: in the nine years I worked in Westfield’s Digital Labs we pivoted to a different business strategy roughly every two years, often requiring a complete change of stack, programming language and personnel. In a less well-resourced company this wouldn’t even have been a possibility; Westfield Labs is still actively experimenting with Westfield’s role in the digital world.

A rebuild is normally undertaken when the software can no longer provide the return on investment it once did and the company knows that significant investment is required to return it to a healthy state. When technical debt grows high enough, a system becomes technically bankrupt, unable to deliver the feature growth necessary to implement the business strategy.

A successful rebuild is normally undertaken under the following conditions:

  • The company has gained enough experience to understand its business domain, the customers it is serving and the product that it wants to offer.
  • The company is well-resourced, able to invest in more experienced engineers and to invest those engineers in building a product or platform for the future.
  • The company stakeholders understand that building a product with solid engineering principles, built-in checks and balances and a high-performing team to run this software takes more time and tends to cost more than doing Rapid Application Development (RAD), in the same way that an architect-designed home tends to be far more expensive to build and maintain than a kit home you buy and renovate yourself.
  • The stakeholders understand that Software Engineering is Expensive and building things to last makes this even more expensive; therefore the company (and team) need to be very selective about which features they opt to build.

If these principles are understood and adhered to, a strategic rebuild will be far more cost-effective over the long term than a rapid application approach, because the system will be designed to grow and change over time, allowing the company to grow and change with it and protecting the company and its customers from the shock and upset of prolonged downtime while the system is rebuilt yet again.

Continuous Delivery

So far we have talked about rapid application development on a budget and more strategic development at a higher cost. Recent developments in cloud computing, virtualisation and automation have led to companies being able to iterate rapidly, without compromising quality, security or stability.

How?

By breaking applications into smaller components, writing extensive tests and investing in automation, companies are able to release changes to production many times a day, while managing risk to the stability of the system.
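
As a toy illustration of that automation (assumed check names only, not a real pipeline): each small component carries its own automated checks, and a release to production only proceeds when every check passes.

```typescript
// Hypothetical deploy gate: run a component's automated checks and only
// promote the build to production if every one of them passes.
type Check = { name: string; run: () => Promise<boolean> };

const checks: Check[] = [
  { name: "unit tests", run: async () => true },            // stand-ins for real suites
  { name: "contract tests", run: async () => true },
  { name: "smoke test against staging", run: async () => true },
];

async function deployIfGreen(deploy: () => Promise<void>): Promise<void> {
  for (const check of checks) {
    const passed = await check.run();
    if (!passed) {
      console.error(`Release blocked: ${check.name} failed`);
      return; // a failing check stops the release, keeping the blast radius small
    }
  }
  await deploy(); // a small change, released the same day it was written
}
```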

Overheads

This approach comes at a cost: there is a significant overhead in writing tests for the features you deliver. However, not writing the tests means you are not managing your risk and cannot guarantee stability. Dividing applications into smaller components also often makes the path to retrieving data more convoluted, which in turn slows the development of those features.

High-performing team

How do you maintain your ROI given these increased overheads?

Having a more complex and sophisticated architecture means you need a more sophisticated team to run it. Features are expensive to build, so we should not waste time building features of unproven benefit: Pareto’s principle suggests that 80% of the value comes from 20% of the features. Changes should be small and incremental, with each change forming an experiment on your customers, and your team should have the ability to gauge the success of that experiment and pursue the most promising route. The simplest example of this is the A/B tests that many companies run.
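
The mechanics of such an experiment are simple. A bare-bones sketch (variant assignment and metric names are illustrative only):

```typescript
// Bare-bones A/B test: deterministic variant assignment plus conversion tracking.
const results = {
  A: { visitors: 0, conversions: 0 },
  B: { visitors: 0, conversions: 0 },
};

function assignVariant(userId: string): "A" | "B" {
  const hash = userId.split("").reduce((sum, ch) => sum + ch.charCodeAt(0), 0);
  return hash % 2 === 0 ? "A" : "B";
}

function recordVisit(userId: string): void {
  results[assignVariant(userId)].visitors += 1;
}

function recordConversion(userId: string): void {
  results[assignVariant(userId)].conversions += 1;
}

// Comparing conversion rates tells the team which route is the most promising.
function conversionRate(variant: "A" | "B"): number {
  const { visitors, conversions } = results[variant];
  return visitors === 0 ? 0 : conversions / visitors;
}
```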

The delivery team needs to have all the capabilities necessary for delivery: planning, design, user experience, engineering, operations and so on, all working together.

Solve problems rather than build features

Most importantly, the delivery team needs to become expert at solving problems, and the stakeholders or management team need to give the delivery team the autonomy to solve those problems rather than asking the team to deliver ‘features’.

So, should we be re-building software?

Joel Spolsky describes rewriting software from scratch as the worst thing you can ever do, and I agree that in most scenarios this is indeed the case. However, there are exceptions: here at Lux Group we have an opportunity to rebuild a subset of features in a way that simplifies and changes our business requirements, and we believe we satisfy the criteria outlined above for a successful rebuild that hugely improves our capacity to deliver value to the business.

Should we be doing planning poker?

I got a message from an old colleague today with a link to a wiki page on Poker Planning and a question: “Why didn’t we do this?”

My initial reaction was that we did; we did it for years. But on reflection I realised he was right: by the time he’d started working with us we had stopped using planning poker to estimate, or at least we only used it when we really needed it. So why stop? Did we lose our discipline?

The Scrum approach

Planning poker is a great way of using the wisdom of crowds to estimate stories. Points rather than days are normally used to keep the estimation abstract, and these points are then tracked to calculate a team’s velocity and allow for capacity planning. In our early days of agile experimentation we followed the Scrum dogma quite rigidly: the team would gather to examine features, help break them down into stories and use planning poker to come up with estimates; then the Product Owner would come along and select the stories he wanted the team to work on. More than once he commented that he felt he was given the ‘illusion of choice’: due to story size, team capacity and soft dependencies, his options for selecting and ordering the backlog were severely limited.
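
For anyone unfamiliar with the mechanics, the velocity arithmetic behind that capacity planning is trivially simple (a toy example with made-up numbers):

```typescript
// Toy velocity calculation: next sprint's capacity is estimated from the
// points completed in recent sprints (numbers are illustrative only).
const completedPoints = [21, 18, 24]; // last three sprints
const velocity = completedPoints.reduce((sum, p) => sum + p, 0) / completedPoints.length;
console.log(`Plan roughly ${Math.round(velocity)} points for the next sprint`); // ~21
```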

This approach led to a ‘well-groomed’ backlog, but many of the groomed and estimated stories would languish there for long periods, small tokens of wasted time, growing stale and decreasing the signal-to-noise ratio. Furthermore, production bugs sat in their own backlog, prioritised separately and largely ignored by a product owner focused on building ‘the new’ and a team attempting to maintain its velocity.

A Product Owner’s role is to generate the maximum ROI by picking the stories that return the most business value for the effort expended. The reality is often far more subtle than this: while each feature or story is technically an independent release of business value, more often than not it is part of a much bigger picture and really needs to be played in a specific order that the team understands. Backlog grooming, planning poker and sprint-stacking can become a transactional rather than collaborative affair, leading to a poorly maintained and poorly planned product.

Continuous deployment and a high-performing team

By the time the aforementioned colleague joined the team we had restructured our approach. The Product Owner was embedded in the team and story and feature selection had become collaborative; there was no formal backlog grooming, and commitments were agreed by small teams on the basis that the entire team had an in-depth understanding of both the business benefit of each feature and the estimated effort.
The roadmap was understood by the whole team, and any upcoming work had either been spiked already or been analysed by the engineers and collaborated on (with UX and BAs as necessary).

Furthermore, specialisation was respected and ownership of components encouraged: while we all love full-stack engineers, in a small team the reality is that you may only have one real JS expert, one back-end expert and one HTML/CSS expert. Working in iterations of a week at most, with a definition of done of “released to production”, the abstraction of poker-driven velocity becomes irrelevant: it is easier for the team to work out what it can achieve that week, estimating in days if necessary. With the onus on quality and user experience, fixing production bugs is treated as a priority and will often need to be prioritised at short notice, which impacts deliverables; focusing on hard, velocity-based commitments can therefore encourage a compromise on product quality.

So why did we stop Poker Planning?

When a product is well established and development is lean and iterative, the team should be in control of its own destiny, understanding the vision and goals of the business and driving towards them, with a roadmap for guidance if necessary. With the product owner embedded in the team and in constant collaboration with its members, poker and velocity-based capacity planning become irrelevant and the one important question becomes: “What can you (or, even better, we) commit to delivering to production this week?”

With good stakeholders (or good stakeholder management) the sponsors will ask “what goals or improvements have we achieved?” rather than “how many features did we complete?”. With code being released to production on a weekly (or daily) basis, the transparency this approach delivers builds trust between stakeholders, product owners and the delivery team, and removes the need for the transactional approach of poker planning, sprint stacking and velocity tracking.

SOA: An enabler for Continuous Delivery and innovation

Building on my experience at Westfield Labs, this presentation was delivered to the Sydney CTO Summit and explores how implementing a Service Oriented Architecture allowed Westfield Labs to embrace a Lean, Agile approach to product delivery.

The presentation covers how an SOA assisted with:

  • Management of backlog, particularly bugs
  • Managing build times
  • Cross-functional teams
  • Faster iterations
  • The ‘QA Paradox’ of better quality through reducing testers


Data is Money

When I use Google Search, I don’t pay anything. I get a great service but I don’t pay them cash for it.

There’s a popular expression now: “if you’re not paying for it you are not the customer. You are the product.”  I like this expression, because it captures the way I feel about the Google and Facebook (GAF) business model. But while I like it, I no longer agree with it, because it’s not an accurate picture of my relationship with GAF. I am a customer, and I do pay for their services. I just happen to pay in data, instead of dollars.

Data Is Money

Data is now a currency. With data I can buy thousands of apps on the Apple App Store. I can search the web, the world’s academic journals, and millions of photos of cats doing funny things. I can send and receive email. It’s the business model of the Internet, and it has its limitations, but nevertheless it is here.

Data is money. In exchange for creating, and then transferring data to GAF, they give me a web service. Both companies specialise in aggregating all that data and selling it for dollars to advertisers. In effect, there is a data to dollars exchange rate (and, you’ll note, a dollar to data one too).

Data is Money. Not like money, or as good as money. It is money.

Anthony, Brazen Thoughts, 2012

It is money because it is a medium of exchange, a unit of account, and a store of value. Currently, organisations that have a lot of it either “mine” it for information that can be used to design better products or services, or package it directly for sale in another form of currency (eg dollars).

Where it might get interesting is when we start asking to be paid in data directly, instead of in dollars first. There is an example of a musician asking to be paid in data instead of the measly fractions of a cent she gets as a cut from iTunes. It’s not that big a stretch to imagine a supermarket where I pay for my groceries in personal data (making Woolworths an advertising platform as well as a supermarket; they’re halfway there already).

This doesn’t necessarily lead to a world where “everyone is an advertiser” however. The advertising business model exists because we haven’t yet thought of any other way to convert data to dollars, which we want to do, because we need dollars for food. But if we had even one farmer who was willing to supply food in exchange for data…

Now, all we need is a trusted record of exchange of data. I wonder if anyone is working on that?

First Data Bank

Here’s an idea I’d like to see: a data bank.

You “deposit” your data in a bank. You can withdraw it, which means it’s deleted. You can add to it at any time. You can deposit any kind of data you want, and transfer it to other accounts if you choose. The data is yours, in the same sense that money in a bank is “yours”.

The bank “loans” data to borrowers, under strict terms (in essence, the bank doesn’t need to physically transfer anything, or even give direct access to the data… but I digress). The borrowers have that data on loan, and must pay “interest.” The interest takes the form of the insights that they gain from analysing that data. The insights flow back to the data bank, and ultimately the data depositors.

This is a very different business model to GAF.

Money is Data

Money, by the way, is data. This is where I actually started, but I decided to lead this blog post with the conclusion rather than the introduction.

Money is an act of collective imagination. A mass, mutual suspension of disbelief. Money has value because we all believe it has value. This is easy for us, because our government says it’s true, and everyone is acting as if it is. Fairly catastrophic things happen to societies when people stop believing that their money still holds value (or that it will in the future). We call these catastrophic things hyper-inflation, and the collapse of civilisation.

While we have physical manifestations that represent money (coins, notes, bearer bonds, etc), most of our money these days exists purely as data recorded on bank computers. I rarely think about it, but I go about my day secure in the belief that the money in my account is “real.” But it’s not physically real. There’s no vault, no physical ledger, no gold or cash. It’s just flipped bits on a platter in a private cloud.

To access our money, we often use “money avatars”, such as credit and debit cards, gift cards, or cheques. They are avatars in the sense that they are physical manifestations that represent something imaginary, an intangible value. The bank note is not the thing; it just represents the thing. Its value is based on a promise we all believe will be kept. The item itself is near-worthless paper.

Modern avatars pop up in other places too. Service avatars are physical manifestations of intangible service value. My iPhone is a service avatar. The true value of the iPhone is in the intangible apps. My Kindle is a service avatar (“The Kindle is not a product. It is a service” – Bezos).

Maybe I’m just a data avatar. 🙂

TL;DR: Conclusion

Money is data, but more interestingly, data is money. In exactly the same way as a fiat currency, data has an irrational but reliable intangible value and is used in exchange for services.

Could be really interesting times ahead.

Postscript



My 6 Coursera (& MOOC) Study Tips

Calculus I at The Ohio State University

Since I’ve been really quite slack in contributing to this blog I thought I should try and re-boot.

I’ve studied a few courses now on Coursera and completed about half of them. So from my rich well of both success & failure, here are Hisso’s Tips for Actually Completing Courses Online (if you’re just mucking about, these tips won’t apply):

  1. Study one course at a time, no matter how tempting it might be to add “just one more” because it sounds really interesting. As cheap as it is to sign up, the extra courses only serve to distract and overwhelm.
  2. Use the estimated hours. If the course convener has said the course will need 8-10 hours per week they mean it. I’ve found those effort estimates pretty reliable. You might look at the video lectures and think it’s only 1-2 hours a week — but the lecturers have run this before and know that the exercises, peer-assessments and reading actually make up the bulk of the time commitment. So go with their estimate.
  3. Study what’s interesting to you, not your employer. Because presumably you’re having to do this study in your own time, so you’re going to need to rely on intrinsic motivation to finish.
  4. Keep up with the lectures and quizzes. Once you start to fall behind it can be almost impossible to catch up, because the cumulative study time required will easily exceed the spare time you have available. Which leads me to…
  5. Schedule in the Study Time. Ideally this is a regular time (I schedule 7-8 am weekdays) which not only helps keep you on track but also gives you a good idea of the maximum course load you can take on. If you know you only have 5 hours a week then you know not to bother trying to complete an 8-10 hour a week course (without scheduling in additional time).
  6. Find a Friend. If you can find a friend or colleague who will do the course at the same time then your chances of completing are greatly improved.

So there you go. Guaranteed Coursera Course Completion or your money back (har har).


Google is not Crazy


Last month, the New York Times ran an article that raised questions about Google’s recent acquisitions and how they fit into its larger strategy. Some analysts see Google’s acquisition of Nest in a positive light; however, the article noted, not everyone is convinced:

Colin Gillis of BGC Partners is more sceptical. “Do you trust Google’s management as visionaries?” he asked. The analyst questioned the Nest purchase. Making thermostats does not fit in with Google’s core advertising, he said. Neither does robotics.

In my view, this is not necessarily correct. Here’s why.

Google’s Vision vs Business Model

Google defines its mission as: “[…] to organize the world’s information and make it universally accessible and useful.” The pursuit of this mission is clearly visible in Google Search, but also in its academic publication search, e-mail, social network, scanned books and other services. It is perhaps harder to see that mission in driverless cars and “thermostats” (assuming that is what Nest’s business really is). Nevertheless, I think the mission is still there. But today I want to talk about Google’s business model. In terms as simple as its mission, what would you say Google’s business model is? Here’s my take:

Sell precisely targeted advertising.

The mission and the business model are not the same, and that’s OK; they shouldn’t be. The role of the business model is to support the mission. Google’s business model is deceptively simple. Yes, they sell advertising. However, for Google (and Facebook, and maybe Twitter), the point is that the advertising is precisely targeted, thanks to Google’s access to consumer intent through its search engine, Gmail, Google+ and other services.

Limits to Google’s Business Model

There is, however, a limitation to this business model: most of what we do in our lives we do “offline” as far as Google is concerned. Most people, most of the time, buy products in physical stores. We drive cars, catch trains and visit friends and family, and we do these things without necessarily letting Google know about it. If your business model were to sell precisely targeted advertising, and you realised that most consumer activity was happening without you knowing about it, what would you do?

Intermediation Model 2.0

What Google wants to do is intermediate our lives. A lot has been written about dis-intermediation as the defining feature of the current change sweeping the business world, but it would be more accurately described as re-intermediation. In the past, newspapers aggregated consumer eyeballs and sold them to advertisers. Then they were dis-intermediated (and unbundled, and outmanoeuvred, and et hoc genus omne). But that hasn’t meant you get your news directly from journalists; rather, you get it from the new aggregators and intermediaries: Google, Facebook and Twitter.

Similarly, Amazon disrupted the book retailing market and, with its Kindle service, is now in a position to cut out the publishers completely. But again, it hasn’t meant you buy your books directly from authors; you buy them from the new intermediaries: Amazon, Apple and Google. Google is in the intermediation business. By learning everything it possibly can about us, it is able to sell very precisely targeted advertising and effectively mediate consumer access to service providers. And it can make a lot of money doing it.[1]

Google is not “Predicting the Future.”

Let’s revisit the analyst’s reservations about Google:

“Do you trust Google’s management as visionaries?” […] Making thermostats does not fit in with Google’s core advertising [business]. Neither does robotics.

Except, yes they do. They fit because Google’s advertising is precisely targeted based on what it knows about us. What do you think a home thermostat connected to the Internet could tell Google about the people who lived there?

What correlations might Google discover between thermostat settings and, say, disposable income? What happens when the Nest product suite branches out to gather more than just temperature data? What about noise levels? Movement? Air quality? Could a Nest sensor infer the emotional state of a household based on voice intonation? Could it infer what people are watching or listening to based on background noise? Imagine having Shazam running all the time, only it identified not just music but also news stories, movies and TV shows… Might that be interesting to an “advertising” company?

What could a self-driving car tell Google about where people went, and how often they went there? Would the car see interesting events on its travels? What would it hear people inside talk about? Do you think it might want to talk back? What would it say? Anything here interesting to an “advertising” company?

What about robots generally? Who will the robots work for and what will they do? If we delegated service consumption to an automated system that worked for us, would a company that wants to mediate our consumption of services want to know about it? Robotics is key for Google because it offers the potential to break free of the digital-interface straitjacket that is mobile and desktop computing. The robot will be the new service interface. Does anyone really doubt that owning a slice of the new service interface would make any company salivate?

Summary

TL;DR?

  • Google is not in the “advertising” business as traditionally described. It is in the business of service intermediation. They are one of the new service intermediaries.
  • The service intermediation business requires advertising that is precisely targeted for its advertiser customers to get real value. This targeting requires large volumes of data about the target consumers. Google is missing data about our “real-world” interactions, and sensor companies like Nest can fix that for them.
  • The (automated) service interfaces of the future are not the screens and keyboards we know today. “Online services” is going to break out of the constraints of mobile and desktop computing and into the physical world via “robots” and other automated services that can manipulate their physical environment.

Google will be able to learn more, and then mediate our service acquisition, in more places and more often. So, Mr Analyst who asked a rhetorical question that I will answer anyway: no, I don’t think Google’s founders are visionary. I think they are lucky to be where they are, and they are following up that advantage with some very canny business development.


We are the business

An ex-manager of mine once pointed out that we need to stop talking about “the business.” Doing so gives leverage to those claiming to represent “the business” and limits the influence of the engineering team. His was more a political observation than a call to change our mindset, but ever since then I have noticed how commonplace it is for colleagues to make vague assurances that “the business requested it” or “the business wants it like that.”

When someone uses the term “the business” they invoke shadowy high priests with absolute knowledge, but they could be referring to a clueless HiPPO, an opinionated sales or marketing exec, or equally to any end user of a piece of software. The engineer should have every right to question those requirements and to ask exactly who “the business” is in each scenario.

At Westfield Labs we are very fortunate to work in a ‘digital’ department which combines product, design, end-users and engineering resources as equal collaborators. Most recently we have moved to truly cross-functional teams where the only direction given to us by sponsors and stakeholders is high-level: e.g. bring us more customers and more conversions through focusing on streams x and y. Sure, the product team provide the ultimate direction from a product perspective but only after close consultation and collaboration with all other relevant parties.

In this scenario it is not hard for an engineer to think of oneself as part of the business and I positively encourage my team to stop using the term ‘the business’ to refer to others as it implies that we are not an equal partner. To take it further, I actively encourage my engineers to question and understand business requirements and to shout out if they don’t make sense.

Talking recently to an engineer from a UK online retailer, I learnt that his company “think of themselves as a marketing company, not a software company” and see the engineering department as a necessary expense for realising their feature requests. In a business so dependent on the quality of its implementation, and on iterative improvements to that implementation, it is naive to think that engineers are not equal partners.

Obviously it’s not so easy when you are working in an agency (and let’s face it, sometimes you are working from a spec and are clearly not ‘equal partners’), but any enterprise that wishes to succeed in the digital age will ultimately depend on the quality of its implementation and on the feedback from those doing the implementing; otherwise it will be made irrelevant by a competitor that does listen.

So, I entreat software developers everywhere: let’s stop talking about “the business” and start talking about customers, stakeholders, sponsors, the sales team, marketers, the product team, whatever fits. Make it clear that we consider ourselves part of ‘the business’.