Deconstructing ITSM

13 January 2009

Gather requirements and sell achievement? Sell requirements and gather satisfaction!

(Yes, I struggled to come up with a snappy title for this post that wouldn’t sound like marketing speak.)

In October, Paul Glen (re-)published an article at TechRepublic, “Project managers: Stop ‘gathering’ IT requirements”, and Hank Marquis published an article on CIO Update, “Why IT Service Level Management Fails (And How to Fix It)”.

  • In summary, Paul says that, while a failure to agree requirements is the root of many IT project failures, “gathering” requirements is the wrong attitude. As I’ve also found, customers tend not to be very good at articulating their requirements at the outset of a project (not nearly as good as they are at saying “No, that’s not what I wanted” at the end). Secondly, passively receiving requirements puts IT projects squarely within IT’s responsibility. It’s much better to negotiate or even sell requirements – to my mind, this is actually what customers mean when they complain “I thought it was IT’s job to define requirements”.
  • In similarly brutal summary, for which I apologise, Hank says that service level reporting based on well-defined and controlled metrics – like percentage availability and mean time to repair – fails to address what customers want. He advocates dropping all the tightly-controlled (whether actually or theoretically) metrics and focusing on customer satisfaction. He describes the SERVQUAL method for measuring quality in service industries in general. I haven’t looked into SERVQUAL enough to say whether it specifically is valuable, but I strongly agree with his principle that “quality is what customers tell you it is”.

It struck me that these two recommendations go together. Establishing, defending or analysing requirements (call this stage what you will) happens at the beginning of a service’s lifecycle, and service reporting happens at what we like to think of as the end (apart from service retirement or decommissioning).

The traditional, all-too-common approach is to ask users for requirements (or sometimes, especially in ITSM projects, not to ask at all), build the service or function, and then to find that it takes a lot of effort to report on service achievement, often in confrontation with the users. It’s fair to characterise this as passive and reactive. Trying to justify performance when IT thinks service levels are good and the users see service achievement as poor could even be characterised as passive-aggressive.

The approach these two articles, taken together, suggest is to begin by negotiating requirements as a two-way activity in partnership – keeping the customers involved and committed. Then, when the service has been delivered, report on achievement strictly from the customers’ perspective. This may take a humility that many IT departments are not familiar with, and can at least bring on the fear that many know all too well. But when customers are seeing a lack of alignment between IT and business, what could be better than placing their quality first? It’s fair to characterise this approach as proactive, although I do think that word is overused.

Instead of gathering requirements and struggling to sell achievements, sell requirements and gather satisfaction.

A guiding principle could be the SERVQUAL equation that Hank Marquis describes as “the E=mc² of every service industry except IT”:

  • q = p – e, or service quality = perception – expectation

Set the expectation at the requirement negotiating stage; measure the perception and derive the quality during the service lifetime.
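That equation is simple enough to operationalise directly. Here’s a minimal sketch, assuming customers rate expectation and perception on the same scale (the five dimension names are SERVQUAL’s standard ones; the ratings are invented for illustration, not taken from either article):

```python
# Hypothetical 1-7 Likert ratings per SERVQUAL dimension.
expectations = {"reliability": 6.5, "responsiveness": 6.0, "assurance": 5.5,
                "empathy": 5.0, "tangibles": 4.5}
perceptions  = {"reliability": 5.8, "responsiveness": 6.2, "assurance": 5.5,
                "empathy": 4.1, "tangibles": 4.7}

def gap_scores(expectations, perceptions):
    """Per-dimension SERVQUAL gap score: perception minus expectation.
    Negative values mean the service fell short of what was agreed."""
    return {dim: perceptions[dim] - expectations[dim] for dim in expectations}

gaps = gap_scores(expectations, perceptions)
worst = min(gaps, key=gaps.get)  # the dimension most in need of attention
```

The point of keeping the gaps per dimension, rather than averaging immediately, is that the worst dimension tells you where to start a conversation with customers.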

A key way of looking at the difference between the approaches is to consider how quality failures are handled. (Nothing in the “proactive” approach means that quality issues won’t occur any more, of course.)

With the traditional approach, a “service level breach” (in the contract-oriented language of ITIL) manifests as the failure of metrics under IT’s control. The users cannot easily understand how these relate to their service experience, let alone their business objectives. All they can do is complain to IT, or feel obliged to take on part of IT’s role themselves and research best practices and benchmarks. When users quote industry reports on 99.999% availability to the IT department, it’s a sure sign of a failure of alignment – and doesn’t really help anyway. Costs can spiral if IT invests in fixing the metrics rather than in improving the business value of the services.

Worse, all the service levels can be met and the customers still be dissatisfied, because they have no ownership of how the service level metrics relate to their business performance metrics. In this case IT has the appearance of full control but the perception of poor service: again, a lack of alignment between IT and the business.

What happens in the “proactive” approach? Quality failures still occur, but you know about them much more quickly (a “fail-fast” characteristic), because they come from the coalface. (Infrastructure component failures still occur, and should be monitored and managed within IT as is traditionally done – customers should not even be involved in the process of managing the infrastructure.) Moreover, you have much more insight into what to do because the affected customers can immediately tell you how they’re affected and what the impact is. This is valuable input to a change process. (Note that measuring “satisfaction” should not be seen as measuring one number. You need a framework, like SERVQUAL, so that you know which aspect of the customer experience is affected. Like any service reporting, reporting service quality can usefully be done through a dashboard that presents a clear top-level picture but allows drill-down.)
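The reporting idea in that parenthesis – a single clear top-level figure with drill-down into the affected dimension – could be sketched like this (the survey records and field names are invented for illustration; this assumes each response scores one dimension, and follows SERVQUAL’s perception-minus-expectation convention):

```python
from collections import defaultdict
from statistics import mean

# Invented survey responses; each one rates a single quality dimension.
responses = [
    {"dimension": "reliability",    "expectation": 6, "perception": 5},
    {"dimension": "reliability",    "expectation": 7, "perception": 6},
    {"dimension": "responsiveness", "expectation": 6, "perception": 6},
    {"dimension": "empathy",        "expectation": 5, "perception": 3},
]

def quality_dashboard(responses):
    """Roll responses up into one top-level quality figure, keeping the
    per-dimension mean gaps available for drill-down."""
    by_dim = defaultdict(list)
    for r in responses:
        by_dim[r["dimension"]].append(r["perception"] - r["expectation"])
    drill_down = {dim: mean(gaps) for dim, gaps in by_dim.items()}
    return {"overall": mean(drill_down.values()), "drill_down": drill_down}

report = quality_dashboard(responses)
```

The top-level `overall` number is what the dashboard leads with; when it drops, the `drill_down` figures show which aspect of the customer experience moved, which is exactly the input a change process needs.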

Because the quality failure is first expressed as a customer perception drop, it’s much easier to keep customers involved and committed to the success of the service and to participate in the continual improvement activities. This begins to sound like Agile principles (frequent iterations, responding to change instead of following a mammoth plan, working service over voluminous reports) … but that would be a subject for another post.*

* One of the reasons for my lapse in getting this blog going is that, in the several drafts I’ve jotted down in the past few months (see my recycle bin) I keep seeing things connected to the topic I’m writing about that I need to research and write about in the same post, to be comprehensive. Exactly what blogging should not try to be!

1 Comment »

  1. […] And by change management I don’t just mean change control for the operational infrastructure, I mean lifecycle change management all the way from business requirements analysis, which I touched on here. […]

    Pingback by What’s most important in ITSM in 2009? « Deconstructing ITSM — 15 January 2009 @ 19:39 | Reply
