Is the lack of ITSM and ITAM alignment causing application sprawl?

I’ve written before about the negative consequences of the lack of industry alignment between ITIL-focused ITSM functions and the IT Asset Management groups which typically evolved somewhat separately.

A recent CapGemini study of CIOs and IT decision makers concisely illustrated one impact this is having:

  • 48% believe their business has more applications than it needs (up from 34% three years earlier).
  • Only 37% believe the majority of their applications are mission critical.
  • 70% believe at least a fifth of their company’s applications share similar functionality and could be consolidated.

The majority believe a fifth of those applications should be retired or replaced.

This shows a very strong consensus amongst IT leaders: IT is spending too much money and time on too many applications, with too much overlap. And in the rapidly evolving application landscape, this impact is by no means limited to traditional on-premises software: Skyhigh’s 2013 study of cloud service adoption found that enterprise respondents used, on average, well over 500 cloud services (the largest number found in a single organisation was an eye-watering 1,769). [Update for Q1 2015: Skyhigh now puts the average at over 900.]

If we want to remain serious about understanding the business services our IT organisations are managing, overseeing and underpinning, surely we can’t afford to lose track of key assets like these?

How can IT possibly aim to control this sprawl, understand its impact, pinpoint its risks and remove its vulnerabilities, if there is no unified overseeing function? Who is tracking which users are entitled to which services? Who ensures that users are equipped with the right services, and who removes their access once they leave, to ensure both data security and cost control? Who can identify the impact on key services if an application is removed or consolidated?

Concerningly, this does not appear to be high on the agenda in ITSM discussions. We still see two separate threads in the conference ecosystem: ITSM conferences rarely address asset management. Asset management conferences talk about suppliers and infrastructure without putting them in the context of the services they underpin. My own role involves product management of an ITAM system which is part of an ITSM suite, so I attend both sets of conferences, see both parallel tracks, and experience nagging concerns in each case that the other side of the picture is overlooked.

Recent initiatives such as the Pink Think Tank 14 are addressing, in welcome detail, the multi-sourced, multi-vendor evolution of IT service delivery, but there still does not appear to be a detailed focus on the actual assets and software being supplied by those vendors. That’s a gap. Those vendors fill the IT environment with assets, from physical kit through software services to less tangible “assets” like critical people with vital knowledge. All of those things cost money. They may have contractual associations. We may need to know, very quickly, who owns and supports them. And if a supplier is replaced, we need to know what they might take with them.

The harsh reality, as clearly shown by CapGemini’s study, is that CIOs and leaders are asking questions about consolidation that will require a detailed, holistic understanding of what we are actually spending money on, and why it is there.

itSMF UK and the mysterious case of the missing Asset Managers

Something is bothering me.

When I first looked at the agenda for the 2013 itSMF UK conference in November, what stood out for me was a glaring omission: where is the IT Asset Management content?

First, let me state: It’s a really good agenda, full of really interesting speakers, and I will certainly aim to be there. I’ve been privileged to work in the UK ITSM sector for the thick end of two decades, and many of the names on the agenda are people I feel lucky to have worked and interacted with.

If you can, you should definitely go.

However, the lack of any ITAM focus, across more than 40 presentation sessions, is strange. If we want to understand our business services, we have to have a grasp of the assets underpinning them. The nearest this agenda appears to get to that is an interesting-looking session on Supplier Management – important, but only part of the picture, and again, something that doesn’t really work without a good knowledge of what we are actually buying.

It took ITIL a while to come to the realisation that an asset is relevant in more ways than being just a depreciating item on a balance sheet, but version 3 finally got there, and then some:

  • Service Asset (ITIL v3): “Any Capability or Resource of a Service Provider.”
  • Resource (ITIL v3, Service Strategy): “A generic term that includes IT Infrastructure, people, money or anything else that might help to deliver an IT Service. Resources are considered to be Assets of an Organization.”
  • Capability (ITIL v3, Service Strategy): “The ability of an Organization, person, Process, Application, Configuration Item or IT Service to carry out an Activity. Capabilities are intangible Assets of an Organization.”

So… we consider our service-underpinning capabilities and resources to be our assets, but we don’t discuss managing those assets at the premier conference about managing the services? More importantly, we offer nothing to ITAM’s increasingly important practitioners?

As long as ITAM is only discussed at ITAM conferences, and ITSM keeps up the habit of excluding it (this isn’t universal, mind: this presentation by Scott Shaw at Fusion 13 seems to hit the perfect message), then we risk looking disjointed and ineffective to CIOs who depend on the complete picture. To me, that’s pretty worrying.

(Footnote: I did submit a speaker proposal, but this isn’t about my proposal specifically – I’m sure lots of proposals couldn’t make the list.)

Why does reporting get forgotten in ITSM projects?

ITSM initiatives often focus heavily on operational requirements, without paying enough up-front attention to reporting and analytics. This can lead to increased difficulty after go-live, and lost opportunity for optimisation. Big data is a huge and important trend, but don’t forget that a proactive approach to ordinary reporting can be very valuable.

“…users must spend months fighting for a desired report, or hours jockeying Excel spreadsheets to get the data they need. I can only imagine the millions of hours of productive time spent each month by people doing the Excel “hokey pokey” to generate a management report that IT has deemed not worthwhile”

Don’t Forget About “Small Data” – Patrick Gray in TechRepublic

In a previous role, aligning toolsets to processes in support of our organisation’s ITSM transformation, my teammates and I used to offer each other one piece of jokey advice: “Never tell anyone you’re good with Crystal Reports”.

The reason? Our well-established helpdesk, problem and change management tool had become a powerful source of management reports. Process owners and team managers wanted to arrive at meetings armed with knowledge and statistics, and they had learned that my team was a valuable data source.

Unfortunately, we probably made it look easier than it actually was. These reports became a real burden to our team, consuming too much time, at inconvenient times. “I need this report in two hours” often meant two hours of near-panic, delving into data which hadn’t been designed to support the desired end result. We quickly needed to reset expectations. It was an important lesson about reporting.

Years later, I still frequently see this situation occurring in the ITSM community. When ITSM initiatives are established, processes implemented, and toolsets rolled out, it is still uncommon for reporting to be considered in depth at the requirements-gathering stage. Perhaps this is because reporting is not a critical-path item in the implementation: instead, it can be pushed to the post-rollout phase, and worried about later.

One obvious reason why this is a mistake is that many of the things that we might need to report on will require specific data tracking. If, for example, we wish to track average assignment durations as a ticket moves between different teams, then we have to capture the start and end times of each assignment. If we need to report in terms of each team’s actual business hours (perhaps one team works 24/7, while another is 9 to 5), then that matters too. If this data is not explicitly captured in the history of each record, then retrospectively analysing it can be surprisingly difficult, or even impossible.
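
To illustrate why, here is a minimal sketch of that duration calculation – the team names, working windows and record structure are all hypothetical, and a real implementation would also need to handle weekends and holidays. The point is that it only works because each assignment’s start and end were recorded at the time:

```python
from datetime import datetime, timedelta

# Hypothetical assignment history for a single ticket. The start and end
# timestamps of each assignment must have been captured explicitly; they
# cannot reliably be reconstructed after the fact.
assignments = [
    {"team": "Service Desk", "start": datetime(2014, 3, 3, 16, 0),
     "end": datetime(2014, 3, 4, 10, 30)},
    {"team": "Network Ops", "start": datetime(2014, 3, 4, 10, 30),
     "end": datetime(2014, 3, 4, 14, 0)},
]

# Illustrative working windows per team; None means the team works 24/7.
business_hours = {"Service Desk": (9, 17), "Network Ops": None}

def working_seconds(start, end, window):
    """Count only the time falling inside a team's business hours.

    Minute-granularity walk: written for clarity, not speed, and it
    ignores weekends and holidays, which a real implementation wouldn't.
    """
    if window is None:  # a 24/7 team: all elapsed time counts
        return (end - start).total_seconds()
    open_hour, close_hour = window
    total, t = 0, start
    while t < end:
        if open_hour <= t.hour < close_hour:
            total += 60
        t += timedelta(minutes=1)
    return total

for a in assignments:
    secs = working_seconds(a["start"], a["end"], business_hours[a["team"]])
    print(f"{a['team']}: {secs / 3600:.1f} working hours")
    # Service Desk: 2.5 working hours (16:00-17:00, then 09:00-10:30)
    # Network Ops: 3.5 working hours (24/7, so all elapsed time counts)
```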

Consider the lifecycle of a typical ITSM data instance, such as an incident ticket:

Simple representation of an incident ticket in three phases: live, post-live, and archived

Our record effectively moves through three stages:

  • 1: The live stage
    This is the key part of an incident record’s life, in which it is highly important as a piece of data in its own right. At this point, there is an active situation being managed. The attributes of the object define where it is in the process, who owns it, what priority it should take over other work, and what still needs to be done. This phase could be weeks long, near-instantaneous, or anything in between.
  • 2: The post-live stage
    At this point, the ticket is closed, and becomes just one of the many (perhaps hundreds of thousands of) incidents which are no longer actively being worked. Barring a follow-up enquiry, it is unlikely that the incident will ever be opened and inspected by an individual again. However, this does not mean that it has no value. Incidents (and other data) in this phase have little significance in their own individual right – each is simply the anecdotal record of a single scenario – but together they make up a body of statistical data that is, arguably, one of the IT department’s most valuable proactive assets.
  • 3: The archived stage
    We probably don’t want to keep all our data for ever. At some stage, the usefulness of the data for active reporting diminishes, and we move it to a location where it will no longer slow down our queries or take up valuable production storage.

It’s important to remember that our ITSM investment is not just about fighting fires. Consider two statements about parts of the ITIL framework (these happen to be taken from Wikipedia, but each seems a very reasonable statement):

Firstly, for Incident Management:

“The objective of incident management is to restore normal operations as quickly as possible”

And, for Problem Management:

“The problem-management process is intended to reduce the number and severity of incidents and problems on the business”

In each case, the value of our “phase 2” data is considerable. Statistical analysis of the way incidents are managed – the assignment patterns, response times, reassignment counts, first-time closure rates, and so on – helps us to identify the strong and weak links of our incident process in a way that no individual record can. Delving into the actual details of those incidents in a similar way helps us to identify what is actually causing our issues, reinforcing Problem Management.
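
As a concrete illustration, here is a minimal sketch of that kind of “phase 2” analysis. The incident data and field names are invented for the example, not taken from any particular tool’s schema:

```python
from statistics import mean

# Hypothetical extracts from closed incidents; field names are illustrative.
incidents = [
    {"id": "INC001", "category": "email",   "reassignments": 0, "resolution_hours": 1.5},
    {"id": "INC002", "category": "network", "reassignments": 3, "resolution_hours": 30.0},
    {"id": "INC003", "category": "email",   "reassignments": 1, "resolution_hours": 4.0},
    {"id": "INC004", "category": "email",   "reassignments": 0, "resolution_hours": 2.0},
]

# Incident process health: how often do tickets close without bouncing
# between teams, and how long does resolution typically take?
first_time = sum(1 for i in incidents if i["reassignments"] == 0)
print(f"First-time closure rate: {first_time / len(incidents):.0%}")
print(f"Mean resolution time: {mean(i['resolution_hours'] for i in incidents):.1f}h")

# Problem Management input: which categories generate the most incidents?
by_category = {}
for i in incidents:
    by_category[i["category"]] = by_category.get(i["category"], 0) + 1
for category, count in sorted(by_category.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {count} incidents")
```

No single ticket in that list tells us anything useful on its own; the value only appears when the whole body of closed records is analysed together.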

It’s important to remember that this is one of the major objectives of our ITSM systems, and a key basis of the return on our investment. We can avoid missing out on this opportunity by following some core principles:

  • Give output requirements as much prominence as operational requirements, in any project’s scope.
  • Ensure each stakeholder’s individual reporting and analytics needs are understood and accounted for.
  • Identify the data that actually needs to be recorded, and ensure that it gets gathered.
  • Quantify the benefits that we need to get from our analytics, and monitor progress against them after go-live.
  • Ensure that archiving strategies support reporting requirements (one approach is sketched below).
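
On that last point, here is a minimal sketch of one way an archiving strategy can support reporting – assuming a simple relational store with hypothetical table names – by pre-aggregating the statistics we still need before the raw records leave production storage:

```python
import sqlite3

# Minimal sketch of one archiving approach: summarise before you archive.
# The schema and cut-off are hypothetical, not any ITSM product's design.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE incidents (
        id TEXT, closed_month TEXT, reassignments INTEGER, resolution_hours REAL
    );
    CREATE TABLE monthly_summary (
        month TEXT, ticket_count INTEGER, mean_resolution_hours REAL
    );
    INSERT INTO incidents VALUES
        ('INC001', '2014-01', 0, 1.5),
        ('INC002', '2014-01', 3, 30.0),
        ('INC003', '2014-02', 1, 4.0);
""")

# Keep the aggregates that reporting still needs, so trend lines survive
# even after the raw detail has left production storage.
db.execute("""
    INSERT INTO monthly_summary
    SELECT closed_month, COUNT(*), AVG(resolution_hours)
    FROM incidents WHERE closed_month < '2014-02'
    GROUP BY closed_month
""")

# In reality the old rows would be copied to archive storage first;
# the delete here simply stands in for that step.
db.execute("DELETE FROM incidents WHERE closed_month < '2014-02'")

print(db.execute("SELECT * FROM monthly_summary").fetchall())
# -> [('2014-01', 2, 15.75)]
```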

Graphs icon courtesy of RambergMediaImages on Flickr, used under Creative Commons licensing.