Gartner’s London summit message: Make ITAM important!

Gartner’s IT Financial, Procurement and Asset Management Summit rolled into London last week (11th and 12th September 2013), and promptly kicked off on an ominous note, with Stewart Buchanan’s opening keynote warning that certain roles in IT, including that of the IT Asset Manager, risk becoming obsolete.

As the two day event progressed, however, it became increasingly clear that Gartner’s analysts don’t see ITAM as a complete anachronism. It is vital, though, that it evolves with the technology and practices around it. Asset Management needs to become a key strategic tool for the business. For those of us who have been blogging on this theme for some time, and who have witnessed the best ITAM professionals in the industry delivering huge results from this approach, it is great to hear Gartner emphasising it so strongly.

Research Director Victoria Barber stressed the power of a strong “symbiotic relationship” between the Asset Management function and IT’s financial controllers. “Finance needs to understand how it can leverage the data from Asset; Asset Management needs to understand how to support it”.

Barber’s fellow Research Director Patricia Adams described the evolving role of the IT Asset Management team in an increasingly virtualised environment. By Monday morning, she advised, the ITAM team should ensure that it is part of the process for spinning up a virtual machine.

Moving forward, Adams continued, they need to be aware of emerging technologies and prepare for potential adoption. This requires good awareness of what is going on in the business: “You want to make sure the asset team has the skills to work with the config team, to work with the virtualisation team, to understand what those teams are doing”.

As Buchanan concluded in a later session, companies should “use ITAM to continually improve and optimise both IT operations and the business use of IT”.

To this audience, at least, Gartner’s message is an encouraging one.

Why the CIO won’t go the same way as the VP of Electricity – an article at the ITSM Review

A dodo

Commoditisation is, without doubt, a massive and revolutionary trend in IT. In just a handful of years, a huge range of industrialised, cost-effective solutions have created rapid change, so much so that some commentators now predict the end of the corporate IT department altogether.

Info-Tech Research Group’s June 2013 article highlights a comparison made by some between today’s CIO and the “VP of Electricity” role apparently ubiquitous in large organisations at the turn of the last century…

More here at the ITSM Review: http://www.theitsmreview.com/2013/06/cio-vp-electricity/


Why does reporting get forgotten in ITSM projects?

ITSM initiatives often focus heavily on operational requirements, without paying enough up-front attention to reporting and analytics. This can lead to increased difficulty after go-live, and lost opportunity for optimisation. Big data is a huge and important trend, but don’t forget that a proactive approach to ordinary reporting can be very valuable.

“…users must spend months fighting for a desired report, or hours jockeying Excel spreadsheets to get the data they need. I can only imagine the millions of hours of productive time spent each month by people doing the Excel “hokey pokey” each month to generate a management report that IT has deemed not worthwhile”

Don’t Forget About “Small Data” – Patrick Gray in TechRepublic

In a previous role, aligning toolsets to processes in support of our organisation’s ITSM transformation, my teammates and I used to offer each other one piece of jokey advice: “Never tell anyone you’re good with Crystal Reports”.

The reason? Our well established helpdesk, problem and change management tool had become a powerful source of management reports. Process owners and team managers wanted to arrive at meetings armed with knowledge and statistics, and they had learned that my team was a valuable data source.

Unfortunately, we probably made it look easier than it actually was. These reports became a real burden to our team, consuming too much time, at inconvenient times. “I need this report in two hours” often meant two hours of near-panic, delving into data which hadn’t been designed to support the desired end result. We quickly needed to reset expectations. It was an important lesson about reporting.

Years later, I still frequently see this situation occurring in the ITSM community. When ITSM initiatives are established, processes implemented, and toolsets rolled out, it is still uncommon for reporting to be considered in-depth at the requirements gathering stage. Perhaps this is because reporting is not a critical-path item in the implementation: instead, it can be pushed to the post-rollout phase, and worried about later.

One obvious reason why this is a mistake is that many of the things that we might need to report on will require specific data tracking. If, for example, we wish to track average assignment durations as a ticket moves between different teams, then we have to capture the start and end times of each assignment. If we need to report in terms of each team’s actual business hours (perhaps one team works 24/7, while another is 9 to 5), then those working patterns need to be captured too. If this data is not explicitly captured in the history of each record, then retrospectively analysing it can be surprisingly difficult, or even impossible.
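
To make that concrete, here is a minimal sketch, in Python, of the kind of business-hours calculation that is only possible if each assignment’s start and end timestamps, and each team’s working pattern, were captured in the first place. The team names, schedules and field layout are purely illustrative assumptions, not any real tool’s schema:

    from datetime import datetime, timedelta

    # Illustrative working patterns: weekday -> (opening hour, closing hour).
    # A 24/7 team simply works 00:00-24:00 every day of the week.
    TEAM_HOURS = {
        "service_desk": {day: (0, 24) for day in range(7)},   # 24/7
        "app_support": {day: (9, 17) for day in range(5)},    # Mon-Fri, 9 to 5
    }

    def business_hours(team, start, end):
        """Hours of 'team' working time elapsed between two timestamps."""
        total = timedelta()
        day = start.replace(hour=0, minute=0, second=0, microsecond=0)
        while day < end:
            window = TEAM_HOURS[team].get(day.weekday())
            if window:
                opens = day + timedelta(hours=window[0])
                closes = day + timedelta(hours=window[1])
                overlap_start = max(start, opens)
                overlap_end = min(end, closes)
                if overlap_end > overlap_start:
                    total += overlap_end - overlap_start
            day += timedelta(days=1)
        return total.total_seconds() / 3600

    # Friday 3pm to Monday 11am is 4.0 business hours for the 9-to-5 team...
    print(business_hours("app_support",
                         datetime(2013, 6, 7, 15, 0),
                         datetime(2013, 6, 10, 11, 0)))
    # ...but 68.0 hours for the 24/7 team.
    print(business_hours("service_desk",
                         datetime(2013, 6, 7, 15, 0),
                         datetime(2013, 6, 10, 11, 0)))

None of this can be computed after the fact unless the underlying timestamps were recorded when each assignment actually happened.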

Consider the lifecycle of a typical ITSM data instance, such as an incident ticket:

Simple representation of an incident ticket in three phases: live, post-live, and archived

Our record effectively moves through three stages:

  • 1: The live stage
    This is the key part of an incident record’s life, in which it is highly important as a piece of data in its own right. At this point, there is an active situation being managed. The attributes of the object define where it is in the process, who owns it, what priority it should take over other work, and what still needs to be done. This phase could be weeks long, near-instantaneous, or anything in between.
  • 2: The post-live stage
    At this point, the ticket is closed, and becomes just another one of the many (perhaps hundreds of thousands) incidents which are no longer actively being worked. Barring a follow-up enquiry, it is unlikely that the incident will ever be opened and inspected by an individual again. However, this does not mean that it has no value. Incidents (and other data) in this lifecycle phase have little value in their own individual right (they are simply anecdotal records of a single scenario), but together they make up a body of statistical data that is, arguably, one of the IT department’s most valuable proactive assets.
  • 3: The archived stage
    We probably don’t want to keep all our data forever. At some stage, the usefulness of the data for active reporting diminishes, and we move it to a location where it will no longer slow down our queries or take up valuable production storage. A minimal sketch of this step follows the list.
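
On that last point, archiving does not have to mean throwing reporting value away. The sketch below, again in Python with invented field names and an arbitrary two-year retention window, moves old closed incidents out of the live store while keeping a simple monthly aggregate, so that long-term trend reporting survives the move:

    from collections import Counter
    from datetime import datetime, timedelta

    RETENTION = timedelta(days=730)   # illustrative: keep two years in the live store

    def archive_closed_incidents(live_store, archive_store, monthly_counts, now=None):
        """Move closed incidents past the retention window out of the live store,
        retaining monthly closure counts for long-term trend reporting."""
        cutoff = (now or datetime.utcnow()) - RETENTION
        to_move = [i for i in live_store
                   if i["status"] == "closed" and i["closed_at"] < cutoff]
        for incident in to_move:
            monthly_counts[incident["closed_at"].strftime("%Y-%m")] += 1
            archive_store.append(incident)
            live_store.remove(incident)
        return len(to_move)

    # monthly_counts = Counter(); archive_closed_incidents(incidents, archive, monthly_counts)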

It’s important to remember that our ITSM investment is not just about fighting fires. Consider two statements about parts of the ITIL framework (these happen to be taken from Wikipedia, but they each seem to be very reasonable statements):

Firstly, for Incident Management:

“The objective of incident management is to restore normal operations as quickly as possible”

And, for Problem Management:

“The problem-management process is intended to reduce the number and severity of incidents and problems on the business”

In each case, the value of our “phase 2” data is considerable. Statistical analysis of the way incidents are managed – the assignment patterns, response times and reassignment counts, first-time closure rates, etc. – helps us to identify the strong and weak links of our incident process in a way that no individual record can. Delving into the details of those incidents in a similar way helps us to identify what is actually causing our issues, reinforcing Problem Management.
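
As a simple illustration, here is a minimal sketch of the kind of process metric that only this pooled “phase 2” data can provide. It assumes each closed incident carries an assignment history and a first-time-fix flag; the field names are invented for the example rather than taken from any particular tool:

    from statistics import mean

    def incident_process_stats(closed_incidents):
        """Summarise closed-incident data: the signal no single record can give."""
        reassignments = [len(i["assignments"]) - 1 for i in closed_incidents]
        first_time = [i for i in closed_incidents if i["resolved_first_time"]]
        return {
            "incident_count": len(closed_incidents),
            "mean_reassignments": mean(reassignments) if reassignments else 0,
            "first_time_closure_rate": (
                len(first_time) / len(closed_incidents) if closed_incidents else 0),
            "most_reassigned": max(closed_incidents, default=None,
                                   key=lambda i: len(i["assignments"])),
        }

    # Feeding every closed incident from the last quarter into a function like this
    # gives Problem Management a ranked view of where the process is creaking.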

It’s important to remember that this is one of the major objectives of our ITSM systems, and a key basis of the return on our investment. We can avoid missing out on this opportunity by following some core principles:

  • Give output requirements as much prominence as operational requirements, in any project’s scope.
  • Ensure each stakeholder’s individual reporting and analytics needs are understood and accounted for.
  • Identify the data that actually needs to be recorded, and ensure that it gets gathered.
  • Quantify the benefits that we need to get from our analytics, and monitor progress against them after go-live.
  • Ensure that archiving strategies support reporting requirements.

Graphs icon courtesy of RambergMediaImages on Flickr, used under Creative Commons licensing.

Notes from the CITE 2013 Conference in San Francisco

Logo of CITE (Consumerization of IT in the Enterprise)

Last Monday (3rd June 2013) I was fortunate to be able to attend the first of two days at the Consumerization of IT in the Enterprise (CITE) conference at the Marriott Marquis in San Francisco, CA.  This was the conference’s second year, and drew a healthy attendance of delegates, many of them CIOs and CTOs for significant organizations.  Consumerization is here, and IT executives are realizing the importance of embracing it.

My employer, BMC Software, was present as a sponsor, and was demonstrating several products including our new end-user-focused product MyIT.  In addition to some time in the booth, however, I was also able to attend a full day of conference sessions, and with a strong agenda it was often difficult to choose between overlapping meetings.

Some highlights:

Metrics from IT Consumerization’s frontline

IDG Enterprise’s Bob Melk (@bobmelk) presented key findings from his organization’s 2013 report on the consumerization of IT in the enterprise. Some important points from the presentation include:

  • Asked about the top challenges arising from consumerization, the most popular answer, from over 82% of large organizations, was security, followed by privacy and compliance issues (65%) and lack of control (53%).
  • One challenge that was not called out by the majority of organizations was the inability to measure ROI. 69% of large enterprises responded that this was not a top challenge.
  • Within the scope of security, the biggest challenges called out were the difficulty of installing controls on user devices (54% for large enterprises), and the difficulty of integrating devices with existing security systems (44%).
  • Asked if they were confident that they were ready to increase access to consumer technologies in the workplace, only 15% reported that they were “very confident”. 45%, however, responded that they were “somewhat confident”.  Interestingly, this has doubled since the 2011 survey.
  • Productivity is an objective: More than half of the respondents are looking to achieve increased productivity and better employee access to work materials anytime/anywhere.

Cisco – “Not so much the Internet of Things, as the Internet of Everything!”

A fascinating presentation by Cisco’s Marie Hattar (@MarieHattar) pointed out that over 99% of the things that could be connected to the internet still aren’t.  That’s 1.5 trillion things, of which 96.5% are consumer objects. Putting it another way, it’s 200 connectable things per person*.  This, Cisco believe, is a $14.4 trillion market just waiting to be addressed, a case set out in more detail in their white paper here.  We are already in the age of the “Internet of Things”, they argue. The “Internet of Everything” is the next step on the journey.

(*my brilliant colleague Chris Dancy (@ServiceSphere) probably gets close to that number with a single arm, but we should probably place him amongst the leaders on this metric.  You can watch him on this subject at the SDI conference in Birmingham, UK, on 19th June. More details here).

Panel Discussion – The Social Enterprise

In an interesting panel discussion alongside Kevin Jones (@KevinDJones) and Ted Shelton (@tshelton), Tom Petrocelli (@tompetrocelli) of ESG Global argued that the traditional hierarchical organization is changing.  This is a challenge to those who might normally move up the hierarchy, if it is not in their interest for their organizations to transform into a more disparate, networked structure. Social enterprise, according to Petrocelli, is not so much a technical challenge as a management one (edit at 8:16PM BST 10th June 2013: Tom has tweeted me with what I think is a useful addition: “Remember, though collaboration is a management problem and technology isn’t the answer, it is part of the answer”).

“Crapplications”

Brian Katz (@bmkatz) of Sanofi presented an entertaining analysis of good and bad mobile applications.

A very detailed mobile UI application (photo from Brian Katz's presentation at CITE 2013)
Brian Katz presented examples of good and bad mobile UIs. Guess which category this fell into?

There was a strong message too: “If you don’t have a mobile strategy, you don’t have a strategy”. Brian’s view is that organizations should develop their apps on mobile, then bring them to tablets and desktops. Microsoft Word, for instance, has hundreds of features, which would make no sense to a user of an iPad application.

The great HTML5/Native debate

From a mobile applications point of view, one thing that was abundantly clear is that there is still no consensus on the HTML5-versus-Native debate. TradeMonster’s CIO, Sanjib Sahoo (@SahooSanj), made a passionate and solid case for the former. An HTML5 approach enabled them to deploy a trading application more quickly and less expensively than their competitors. Their app is strongly rated by users, and Sanjib spoke of HTML5 being seen as a “great long term strategy”, while acknowledging difficulties such as memory footprint, and the fact that HTML5 is not yet a true cross-platform technology. He also pointed out that the limited data cache available to HTML5 applications compared to truly native applications is not really a problem for real-time trading applications where live data is the key requirement. For other requirements, it’s definitely more of a factor.

How customer feedback will transform ITSM

Five stars

Feedback is a huge part of a consumer’s experience. There is no reason to believe it won’t be one of the major factors in the consumerization of corporate IT.

A search result showing over 40,000 results for a search on the term "bed bugs"
A search of TripAdvisor’s reviews for “bed bugs” yields over 45,000 results!

A decade or so ago, I paid a then-pretty-hefty £100 to stay in a suburban London hotel, close to the venue of a friend’s wedding. It was in a handy location, had a nice-hotel name, and a reassuringly weighty price-tag. In fact, I was looking forward to seeing what my hundred pounds bought in the suburbs.

To this day, that hotel trip remains a firmly-etched and very unpleasant memory. It was, simply, awful. By that, I don’t mean that familiar complacently-mediocre standard that we all encounter once in a while as consumers, but properly, mouldily, thinplasterboardally obnoxious in its awfulness. I would have been scared of a fiery death, except it was too damp. With insufficient time available to turn around and walk away, I had little choice but to hold my breath and stick with it.

Like many of you in ITSM, I am no stranger to hotels. Working for a number of years as a field consultant, travelling most weeks, I began to notice a new trend: the hotels coming up cheapest on the price comparison sites also had the worst reviews on TripAdvisor. Today, it’s easy to see this effect: try searching for three-star hotels in a city like London, sorting by ascending price (the impact is particularly obvious on the late availability clearing sites). Simply, the rubbish hotels are seeing their prices forced to the lowest levels.

Why? Because now, everyone is informed.

TripAdvisor has changed hotel choice forever, providing a wealth of opinion and reviews, detailed photos, and real experiences. Proprietors live in fear of harsh reviews: a study in 2011 by Cone Research concluded that 80% of potential customers would change their mind after reading a bad online review. And, because angry customers tend to seek an outlet for their frustration more often than happy ones, poor service gets disproportionately highlighted.

Today, establishments like suburban London’s Hotel Streptococcus (not its real name, more’s the pity) can’t get away with it any more. They improve, or they die.

In addition to informing consumers, feedback has become a valuable product in its own right. It has pushed aside the old classified directory model, in which an individual business’s profile against its peers was driven by advertising (evidenced by the hurried acquisition of review services by some of the “traditional” yellow-pages-style directories). The key players in the feedback market now benefit from a classic virtuous circle: the more feedback they collect, the more useful their service becomes, and hence the more people begin to both consume and produce their content.

Interestingly, both small and large businesses are benefitting. In the restaurant sector, for example, review sites like Yelp have led to a boost in trade for many independent businesses, while having a less obvious effect on brand-name chains. Michael Luca, in a study for Harvard Business School, observed that for these independents…

“A one-star increase in Yelp rating leads to a 5-9 percent increase in revenue”

We recently renovated our house. One tradesperson in particular, our plasterer, was superb, and I offered to act as a reference for his business. Instead, he asked me to leave a good online review. For small businesses, the online feedback IS the brand.

Reviews Have Driven the Dominance of a few Service Brokers

However, while the impact may be less significant for chain hotels and restaurants (where the reassurance of brand familiarity persists), it has arguably been a huge factor in entrenching dominant brokers of other people’s products and services, such as retailers and online marketplaces.

Amazon is a prime example. The recent 20th anniversary edition of Wired magazine looks back at the rise of online retail, and the almost universal early assumption that consumers would be driven almost entirely by the cheapest prices: comparison sites would be king. As the magazine points out, this has not been the case. The big winners, like Amazon and eBay, have dominated. In each case, reviews (whether of products or of vendors) are a central feature. They draw people to the site. A five star product review backed by a large number of previous customers gives solid confidence to make that purchase. And each buyer may complete the virtuous circle by adding to the ever-increasing pool of feedback.

Feedback will be Huge in IT

For IT, the huge rise of feedback teaches us some very relevant lessons:

  • Positive reviews can break customer habits, shifting them from a familiar and established provider to an upstart alternative

IT has been the “branded chain restaurant” for decades… The familiar choice, offering a degree of certainty if not perfection (Ray Kroc, founder of McDonald’s, advocated that customers didn’t want a perfect burger as much as they wanted burgers that always taste the same). But just as reviews have boosted independent restaurants, they will bolster the best upstart IT providers. Suddenly, the uncertainty is within: how can I be confident in my IT department’s new, unrated cloud storage service, over a five-star-rated public service?

  • A service provider with a review mechanism for services differentiates itself from equivalent providers without one

IT departments are evolving into “brokers of services”, some self-provided, others bought in. Our customers expect to self-select service offerings, and IT even needs to justify its role in being the front-end for that solution. External providers will increasingly provide reviews and feedback, driving engagement and confidence with our customers. IT needs to do the same.

  • Feedback is a huge part of a consumer’s experience. There is no reason to believe it won’t be one of the major factors in consumerization of corporate IT.

How to avoid the most common CMDB mistake

Being lucky enough to work for something of a pioneering ITIL adopter back in the late 1990s, I guess I was a CMDB practitioner earlier than most. Back in those days, there was little “off-the-shelf” CMDB technology, and even automated discovery was in its infancy, with little adoption. But we were driving our ITSM initiative ambitiously, and every year the ITIL auditors would come and score us. We needed a CMDB pretty quickly, and it was clear we needed to build it ourselves.

This was great, because it meant we got to make all the usual CMDB mistakes early. In our case, we only started to get it right the third time.

On our first attempt, we built a tool that wasn’t really up to the task of holding the data we needed to hold.  That was definitely our fault, although ITSM platforms were pretty simplistic back then too.  With no off-the-shelf CMDBs to buy or to benchmark against, we were pretty much on our own, and we got our design wrong. We abandoned that, and tried again.

The next system was much better, but our implementation was marred by a second big error.  This mistake was driven by the received wisdom of its day, inspired by ITIL version 2 itself and its definition of the CMDB as…

“A database that contains all relevant details of each CI and details of the important relationships between CI”

Great, that’s clear. We need everything.

Unfortunately, despite ITIL having long cleared up its story, and technology bringing us clever tricks like data federation, the same fundamental mistake still gets made today.  In fact, it gets made a lot.

This type of failure is characterized by an approach that is founded on the data sources for the CMDB, not the required outputs.

  1. The organization identifies a bunch of data sources, and decides these will be the basis of its CMDB. These might be discovery tools, other management systems, spreadsheets, or more typically a combination of some or all of these things.
     Simple representation of multiple data sources
  2. The organization spends a lot of time, money and effort integrating the data sources into a single data store.
  3. With this hefty new database built, the organization tries to derive some outputs from the new CMDB.  And here, it hits a problem:

Representation of a multi-source CMDB with overlaid output requirements. There are gaps and overlaps

Suddenly, the organization faces the realization that, having made its CMDB out of “everything”, that “everything” is both too much and not enough. It’s too much, because a whole bunch of data is being included, expensively, for which there is no actual end result. At the same time, it’s not enough, because the outputs we actually need are not completely supported by the data in the CMDB anyway, due to two key problems – gaps and overlaps:

Close up representation of the CMDB failing to support its output requirements due to gaps and overlaps

This mistake is devastating to a project, but it’s completely avoidable if some fundamental principles are followed in any CMDB initiative:

  • Focus on the requirements, not the data you happen to own.
  • Identify what data is needed to support those requirements.  This is the data your CMDB needs.
  • If there are overlaps (i.e. items of required CMDB data which could be sourced from more than one place), you need to determine the best source. A good CMDB tool needs to support effective reconciliation of multiple sources, with appropriate priority given to the best source of each item; a simple sketch of this follows the list.
  • If there are gaps, where no obvious source is available, there are two basic choices: Either re-think the requirement, or find a way to get that data.
  • Know how every piece of data is kept accurate. As soon as governance fails, trust is lost in the CMDB. That’s fatal.
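
To make the reconciliation point a little more tangible, here is a minimal sketch of attribute-level reconciliation with per-attribute source priority. The source names, attributes and priority order are invented for the example; a real CMDB product would handle this through its own reconciliation engine rather than hand-written code:

    # For each CI attribute, an ordered list of trusted sources: the first source
    # that actually supplies a value for that attribute wins.
    ATTRIBUTE_PRIORITY = {
        "serial_number": ["discovery", "procurement_spreadsheet"],
        "cost_centre":   ["finance_system", "procurement_spreadsheet"],
        "owner":         ["hr_system", "discovery"],
    }

    def reconcile_ci(records_by_source):
        """records_by_source maps a source name to that source's view of one CI."""
        ci = {}
        for attribute, sources in ATTRIBUTE_PRIORITY.items():
            for source in sources:
                value = records_by_source.get(source, {}).get(attribute)
                if value is not None:
                    ci[attribute] = value
                    break                # best available source wins
            else:
                ci[attribute] = None     # a gap: no source covers this requirement
        return ci

    # Example: discovery knows the serial number, finance knows the cost centre,
    # and nothing yet covers ownership -- a gap to be closed or re-thought.
    print(reconcile_ci({
        "discovery": {"serial_number": "ABC123"},
        "finance_system": {"cost_centre": "CC-42"},
    }))

The important design point is that priority is set per attribute, not per source: discovery might be the best authority for hardware details, while the finance system remains the authority for cost data.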

Of course, as the middle diagram above illustrates, some data sources might actually provide more data than the initial set of requirements demands. For example, automated discovery tools may gather a lot more information than is initially needed. This isn’t necessarily a bad thing: future requirements or investigative data mining might each benefit from this data.

If there’s little extra cost to maintaining this extra data (as might be the case if it’s automatically supplied by discovery tools), then it might be worth hanging on to. If it’s complex, manual, time consuming, and doesn’t support any outputs, then why bother?

Gartner’s 2013 ITAM Repository MarketScope raises the bar for vendors… again

A pole vaulter, silhouetted against the sun, vaults over a high bar
Raising the bar: An athlete pole vaults over a bar, silhouetted against the sun
Gartner’s 2013 MarketScope for the Asset Repository raises the bar again for toolset vendors.

This month, Gartner have released their latest MarketScope for the IT Asset Management repository.

For those of us involved in manufacturing ITAM tools, the MarketScope report is important. It reflects the voice of our customers. It has a very wide audience in the industry. Perhaps most importantly, it shows that standing still is not an option.  Over the last three reports, published in 2010, 2011, and 2013, several big-name vendors have seen their ratings fall. The bar is set higher every year, and rightly so.

IT Asset Management has been undergoing an important and inexorable change over recent years. Having often been unfairly pigeon-holed as custodians of a merely financial or administrative function, smart IT Asset Managers now find themselves in an increasingly vital position in the evolving IT organization. The image of the “custodian of spreadsheets” is crumbling. Gartner Analyst Patricia Adams’s introduction to this new MarketScope report gets straight to the point:

The penetration of new computing paradigms, such as bring your own device (BYOD), software as a service (SaaS), infrastructure as a service (IaaS) and platform as a service (PaaS), into organizations, is forcing the ITAM discipline to adopt a proactive position.

Last year, I was fortunate to have the chance to speak at the Annual Conference and Exhibition of IAITAM, in California. It’s an organization I really enjoy working and interacting with, because the people involved are genuine practitioners: thoughtful, intelligent, and doing this job on a day to day basis. I’d just finished my introduction when somebody put their hand up.

“Before we start, please could you tell us what you mean when you say ‘IT Asset’?”

It’s a great question; in fact it’s absolutely fundamental. And different people will give you different answers. It took the ITIL framework – influencer of so much IT management thinking – more than a decade to acknowledge that IT Assets were anything other than simple items needing basic financial management. My answer to this question was much more in line with ITIL’s evolved definition of the Service Asset: it might include any of the components that underpin the IT organization, whether they’re physical, virtual, software, services, capabilities, contracts, documents, or numerous other items. IT Assets are the vital pieces of IT services.

If it costs money to own, use or maintain; if it could cause a risk or liability; if it’s supported by someone; if we need to know what it’s doing and for whom; if customers would quickly notice if it’s gone… then it’s of significant importance to the IT Asset Manager.  Why? Because it’s probably of significant importance to the executives who depend increasingly on the IT Asset Manager.

One simple example of evolved IT Asset Management: A commercial application might be hosted by a 3rd party, running in a data centre you’ll never see, on virtual instances moving transparently from device to device supported by a reseller, but if you can’t show a software auditor that you have the right to run it in the way that you are running it, the financial consequences can be huge.  To provide that insight, the Asset Manager will need to work with numerous pieces of data, from a diverse set of sources.

The role of Asset Management in the new, service-oriented IT organization, is summed up by Martin Thompson in the influential ITAM Review:

“ITAM should be a proactive function – not just clearing up the mess and ensuring compliance but providing a dashboard of the costs and value of services so the business can change accordingly.”

Asset Managers are having to redefine their roles, and we need to ensure our products grow with them. We need to continue to provide ways to manage mobility, and cloud, and multi-sourcing, and all of the other emerging building blocks of IT organizations. Our tools must integrate widely, gather information from an increasing range of sources, support automated and manual processes, and provide effective feedback, insight and control. Our goal must be continually to enable the Asset Manager to be a vital and trusted source of control and information.

Gartner’s expectations are shaped by the customers, practitioners and executives with whom they speak. In their words, the old image of an ITAM tool as a simple repository of data is evolving “from ‘What do I have?’ to ‘What insight can ITAM provide to improve IT business decisions?’”.

I’m very proud that we have held our “Positive” rating over the last three MarketScope reports in this area. Gartner’s message to ITAM vendors is clear: You have to keep moving and evolving. The bar will continue to rise.

Image courtesy of Sebastian Mary on Flickr, used with thanks under Creative Commons licensing.