Knowing what you DON’T know


I presented an Asset Management breakout session at the BMC Engage conference in Las Vegas today.  The slides are here:

An interesting question came up at the end: What percentage accuracy is good enough, in an IT Asset Management system?  It’s a question that might get many different answers.  Context is important: you might expect a much higher percentage (maybe 98%?) in a datacentre, but it’s not so realistic to achieve that for client devices which are less governable… and more likely to be locked away in forgotten drawers.

However, I think any percentage figure is pretty meaningless without another important detail: a good understanding of what you don’t know. Understanding what makes up the percentage of things that you don’t have accurate data on is arguably just as important as achieving a good positive score.

One of the key points of my presentation is that there has been a rapid broadening of the entities that might be defined as an IT Asset:

The evolution of IT Assets

The digital services of today and the future will likely be underpinned by a broader range of Asset types than ever.  A single service, when triggered, may touch everything from a 30-year-old mainframe to a seconds-old Docker instance. Any or all of those underpinning components may be of importance to the IT Asset Manager. After all, they cost money. They may trigger licensing requirements. They need to be supported. The Service Desk may need to log tickets against them.

The trouble is, not all of the new devices can be identified, discovered and managed in the same way as the old ones.  The “discover and reconcile” approach to Asset data maintenance still works for many Asset types, but we may need a completely different approach for new Asset classes like SaaS services, or volatile container instances.
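
To make that concrete, here’s a minimal sketch of the “discover and reconcile” idea, with made-up asset identifiers. The match percentage is the easy part; the interesting output is the breakdown of what didn’t match – the things we don’t know.

```python
# A minimal "discover and reconcile" sketch. Asset identifiers are invented.
discovered = {"srv-001", "srv-002", "lap-104", "container-9f3a"}  # seen by discovery tools
register   = {"srv-001", "srv-002", "lap-077", "lap-104"}         # the asset register

matched      = discovered & register   # known, and verified by discovery
missing      = register - discovered   # on the books, but not seen anywhere
unrecognised = discovered - register   # seen, but not on the books

print(f"Register accuracy: {len(matched) / len(register):.0%}")
print(f"Not seen by discovery: {sorted(missing)}")
print(f"Discovered but unregistered: {sorted(unrecognised)}")
```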

The IT Asset Manager may not be able to solve all those problems.  They may not even be in a position to have visibility, particularly if IT has lost its overarching governance role over what Assets come into use in the organization (SkyHigh Networks’ most recent Cloud Adoption and Risk Report puts the average number of Cloud Services in use in an enterprise at almost 1100. Does anyone think IT has oversight over all of those, anywhere?).

However, it’s still important to understand and communicate those limitations.  With CIOs increasingly focused on ITAM-dependent data such as the overall cost of running a digital service, any blind spots should be identified, understood, and communicated. It’s professional, it’s helpful, it enables a case to be made for corrective action, and it avoids something that senior IT executives hate: surprises.

Question mark image courtesy Cesar Bojorquez on Flickr. Used under Creative Commons licensing.


Thoughts on #cfgmgmtcamp, and why ITSM needs to take note


Mention the role of “Configuration Manager” at an ITSM conference, and then use the same description at an Infrastructure Management conference, and your respective audiences will visualise completely different jobs*.

So, it was with some curiosity that I arrived this morning for the first day of the Configuration Management Camp in Ghent.

This particular event falls squarely into the infrastructure camp. It’s the realm of very clever people, doing very clever things with software-defined infrastructure. A glance at the conference sponsors makes this very clear: it includes Puppet Labs, Chef, Pivotal, and a number of the other big (or new) names in orchestrated cleverness.

This is not the ITSM flavour of Configuration Management, but today’s conference made it clear that this new technology will become more and more relevant to the ITSM community. In short, ITSM should make itself aware of it.

The tools here have underpinned the growth of many recent household-name startups, including internet “unicorns” like Facebook and Uber which have risen from zero to billions. They’ve enabled rapid, cloud-driven growth in a brand-new way. This new breed of companies has firmly entrenched DevOps methodologies, with ultra-rapid build, test, and release cycles, increasingly driven by packaged, repeatable scripts. Most of this work takes place on cloud-based, open-source software stacks. As a result, there’s not quite as much focus on resource and commercial constraints as we find in big enterprises.

But here’s the crux: methods like this will surely not remain solely the preserve of the startups.  As businesses get deeper into digital, there’s increasing pressure on the CIO from the CEO and CMO to deliver more stuff, more quickly. As the frontline business pushes its demands onto IT, long development and deployment cycles simply won’t be acceptable. And, with equal pressure on IT’s costs and margins, these technologies and methods will become increasingly attractive.

Rapid innovation and deployment is becoming essential to business success: PuppetLabs’ 2014 State of DevOps study recently claimed that strongly performing organizations are deploying code 30 times more often, with 50 times fewer failures. Okay, those numbers come squarely from within the DevOps camp, but they credibly reinforce past analysis such as MIT Sloan’s 2007 classic study on “the alignment trap”. IT can’t just be a ponderous back-end function if a company wants success.

That’s not to say that this conference is providing the answers. I’d argue that Configuration Management Camp is the “bottom up” conference to an ITSM conference’s “top down”.

Some of the toolsets on display here are very granular indeed. Many of the presentations were slides, or live demos, full of slightly arcane command-line-interface text input. We watched individual clusters get provisioned, quickly, impressively, but devoid of context.

However, there was also a sense of an increasing imperative to start connecting those dots: to define the applications and their inter-dependencies, and ultimately, the services. We’ve seen talks today with titles like “Configuring Services, not Systems”. Dianne Mueller of Red Hat OpenShift described new tools which focus on the deployment of applications, rather than points of infrastructure.

I spoke with more than one person today who described DevOps starting to meet “reality” – that is, the day-to-day expectations of an enterprise environment.  There is a feeling of “cutting edge” here, probably justified, but the counterpoint might be that this community tends to see “traditional” IT as slow and clunky.  PuppetLabs founder and CEO Luke Kanies dismissed this: “The enterprise space doesn’t move slowly because they’re stupid or they hate technology. It’s because they have… what’s the word… users”.

One thing that was clear today was that these technologies are only in their infancy. Gartner recently identified Software Defined Infrastructure as one of its key trends for 2015. Forrester have declared that Docker, the rapidly-emerging containerisation technology, will “live up to the hype and rule the cloud”.

And that’s why IT Service Management needs to take note.

We can’t understand the services we provide our customers, if we don’t have a grasp on the underlying infrastructure. We can’t formalise change control without getting the buy-in of the technical experts for whom rapid infrastructure shifts are a command-line prompt away. We can’t help prevent uncontrolled software spend, or inadvertent license breach, if we don’t proactively map our contracts onto the new infrastructure. With change cycles moving from weeks to seconds (it was claimed today, in one session, that Amazon deploys code to production on a sub-one-second interval), established ITSM practices will need to adapt significantly.

So, if it feels like ITSM’s “top-down” and infrastructure configuration management’s “bottom-up” are striving to find a connection point, it also feels like that join has not yet been made. It’s up to IT as a whole to make that connection, because if we don’t do it, we’ll end up repeating the lessons of the past. But faster.

It’s going to be a fun challenge to face. This is exciting stuff.

*ITSM: defender of the CMDB. Infrastructure: server deploying and tweaking wizard. Right?

Mobile ITSM isn’t only about field support: It’s about everyone.


When we built the new Smart IT UX for BMC Remedy, we were determined to allow ALL IT Support Workers to be Mobile. Why? Because everyone can benefit from mobility.

In the short history of enterprise mobility, mobile business applications have generally focused on two specific user groups. The first is the group of users for whom being on the road is the bulk of their job, such as field engineers: they go to a location, perform some tasks, move on to the next place.

The second group is those who might be based at a desk in an office, but who move around through a series of meetings, on and off-site. For these users, the primary purpose of mobility has been continuity of communication (with the weapon of choice having historically been the keyboard-equipped Blackberry).

For most other users, performing most other business tasks, the desktop computer (or desk-based notebook computer) still remained the key delivery mechanism for business applications.

Today, this is an outdated philosophy.

I recently stood in a lift at a customer’s office. There were four people in that elevator, and there were seven smartphones on display.  Okay, two of them were mine (I’m a mobility product manager, after all), but that is still a notable average.

Even in the short moment offered by a journey of just a few floors, those office-based employees found a moment to communicate. Whether that communication was work-based or personal, one-way or two-way, is irrelevant. The point is that the time was being used to perform those tasks in a way that could not have happened just a few years ago.

In the December 2014/January 2015 edition of Fast Company, Larry Erwin, a Business Development Executive with Google, points out:

“When I was a kid growing up back in the ’90s, I was the only kid on my block with a Tandy 1000. Now kids who are 15, 16 years old have a supercomputer in their pocket”

The opportunity for business software tools to take advantage of that new computing power is huge, and growing. The very structure of the traditional office is under pressure, as users become more mobile and more technology enabled. That generation of teenagers will soon enter the workplace having had a completely different, and more universal grounding in technology than we select geeks who owned the Tandy 1000s and Sinclair Spectrums of yesteryear.

Mobility has already become a primary means of service consumption for customers, across a swathe of industries. Consider the process of taking a flight: with many airlines, the entire customer experience has been mobilized. Forrester Research outlined this beautifully in a 2014 illustration charting the timeline of mobile engagement for the airline passenger:

  • -2 Weeks: Book ticket, change reservation
  • -2 Days: Change seat, request upgrade
  • -2 Hours: Check in, check gate, departure time, lounge access
  • Flight: Arrival time, food order, movies, wi-fi, duty free
  • +2 Hours: Ground transport, lost luggage, navigation
  • +2 Days: Mileage status, reward travel, upcoming reservations
  • +2 Weeks: Mileage points earned, customer satisfaction survey
    (Source: Forrester)

Mobility for the consumer is now table stakes. So why not extend this to the people serving those consumers? Mobility, simply, provides great opportunities to enhance the role of the service representative.

When I arrived at a Westin Hotel in Chicago last month, I needed to speak with reception, and joined the line of people at the check-in desk. However, I was approached by a staff member with an iPad. They were quickly able to answer my question. The Starwood Hotels group, he told me, aims to keep its hotel staff on their feet, closer to customers, delivering service in a more dynamic way. Even the group’s CEO, Fritz van Paasschen, has abandoned his desk and PC: a Wall Street Journal article in November 2014 revealed that he works entirely on tablet and smartphone (van Paasschen’s office contains no desk – just a boardroom table and a couch).

In an IT Service Management environment, the case for mobility for field support users has long been clear: the alternative being a hotch-potch of printed dockets, slow communication, and inconvenient (or omitted) retrospective updates to systems of record, back at a field office.

But even in the office, it’s important to realise that good IT service, like all good customer service, combines communication, expertise, initiative and process. Many people involved in that process are not at their desk all day: they may be in meetings, or travelling between sites, or sitting with colleagues.

If those people can only access their support tools from their desk, then gaps appear. Twenty minutes waiting for input from an approver or technical expert could amount to twenty minutes more waiting time for the customer, or even a missed window to communicate with the next person in the chain (and hence an even bigger gap). Mobilising people – properly – fills those gaps, even in the office. And, as the IT department’s customers get more mobile, the best way to support them is often to become more mobile.

When we built the Smart IT interface for BMC Remedy, released in September 2014, this was the philosophy of our mobile approach: ITSM should be mobile for every user, whether they are field support technicians roaming a wide area, or a service desk agent taking a five minute break at the coffee machine.

The tool needed to provide all the features they need, including comprehensive teamwork and assistive features, so that they are never forced to find a desk or wait for the slow boot-up of a traditional PC. We released the tablet version of Smart IT on day one, and the phone version, scheduled to go live in December 2014, has already received a great reception in demonstrations at customer events. As with Smart IT in general, there’s no additional cost over and above a standard Remedy ITSM license.

Our work with our ITSM customers has shown us, and them, that there are huge and real business benefits to a seamless and comprehensive mobile experience. Time not spent in front of a PC no longer needs to be time spent not helping customers.

Properly equipped, an increasingly mobile-focused user base is sure to find those benefits, and that means faster, better outcomes for IT’s customers.

Is the lack of ITSM and ITAM alignment causing application sprawl?


I’ve written before about the negative consequences of the lack of industry alignment between ITIL-focused ITSM functions, and the IT Asset Management groups which typically evolved somewhat separately.

A recent CapGemini study of CIOs and IT decision makers concisely illustrated one impact this is having:

  • 48% believe their business has more applications than it needs (up from 34% over the previous three years).
  • Only 37% believe the majority of their applications are mission critical.
  • 70% believe at least a fifth of their company’s applications share similar functionality and could be consolidated.

The majority believe a fifth of those applications should be retired or replaced.

This shows a very strong consensus amongst IT leaders: IT is spending too much money and time on too many applications, with too much overlap. And in the rapidly evolving application landscape, this impact is by no means limited to traditional on-premise software: Skyhigh’s 2013 study on cloud service adoption found that enterprise respondents used, on average, well over 500 cloud services (the largest number of services found in one organisation was an eye-watering 1769).[Update for Q1 2015: SkyHigh now put the average at over 900]

If we want to remain serious about understanding the business services our IT organizations are managing, overseeing and underpinning, surely we can’t lose track of key assets like this?

How can IT possibly aim to control this sprawl, understand its impact, pinpoint its risks and remove its vulnerabilities, if there is no unified overseeing function? Who is tracking which users are entitled to which services? Who ensures that users are equipped with the right services, and who removes their access once they leave, to ensure both data security and cost control? Who can identify the impact on key services if an application is removed or consolidated?

Concerningly, this does not appear to be high on the agenda in ITSM discussions. We still see two separate threads in the conference ecosystem: ITSM conferences rarely address asset management. Asset management conferences talk about suppliers and infrastructure without putting them in the context of the services they underpin. My own role involves product management of an ITAM system which is part of an ITSM suite, so I attend both sets of conferences, see both parallel tracks, and experience nagging concerns in each case that the other side of the picture is overlooked.

Recent initiatives such as the Pink Think Tank 14 are addressing, in welcome and increasing detail, the multi-sourced, multi-vendor evolution of IT service delivery, but there still does not appear to be a detailed focus on the actual assets and software being supplied by those vendors.  That’s a gap. Those vendors fill the IT environment with assets, from physical kit through software services to less tangible “assets” like critical people with vital knowledge.  All those things cost money. They may have contractual associations. We may need to know, very quickly, who owns and supports them. And if a supplier is replaced, we need to know what they might take with them.

The harsh reality, as clearly shown by CapGemini’s study, is that CIOs and leaders are asking questions about consolidation that will require a detailed, holistic understanding of what we are actually spending money on, and why it is there.

Some initial thoughts on the Service Management Congress

Having gone away on paternity leave for a few weeks (I’m writing this with a sleeping four-week-old stretched along my lap), I initially missed the fuss that came out of the Service Management Fusion13 conference. On returning, an acquaintance in the UK ITSM sector emailed me and suggested I take a look at the Service Management Congress website, and its bold rallying call:

(SM Congress call-to-action graphic)

That’s quite a lot to take in between sleep-deprived nappy changes, so I’m grateful that he also pointed me to some useful and interesting context from prominent ITSM consultant, podcaster and blogger Barclay Rae:

What I didn’t expect was to be involved in a ‘revolution’, but that happened too…
Over the week – and with the support of the organisers – a number of meetings were held with a cross-section of ITSM people who wanted to see change happen and to do something about it – now. A few people were initially invited and others like me simply joined as part of the conversation . The sessions were originally set up with the intention of discussing how to improve or develop the role of the itSMF (especially in the US) – which (with the exception of some great chapters in northern Europe and elsewhere) is perceived to be flagging. The discussion moved on from that to a bigger and more fundamental view of how to fix the industry – perhaps idealistic but certainly with positive intent.

A post on the SM Congress website itself, entitled “Background on the Group Formerly Known as RevNet”, detailed the terms of reference that had been given to the core, invited group who had drawn up this fledgling manifesto:

* To challenge our community of service management professionals to look at things differently and to embrace the future
* To challenge us (itSMF USA, and to a lesser degree, the entire itSMF international community) to improve and stay relevant
* To challenge themselves and explore what should come out of this group – what should come next

This is interesting – a brief to look at things with “a fresh set of eyes”, equivalent in part to the spin-out group described in Clayton M. Christensen’s “The Innovator’s Dilemma”, assembled as an independent, fresh entity to avoid the challenge of responding to disruptive influences from an established, mature and successful market position.

Companies that have tried to develop new capabilities within established organizational units also have a spotty track record, unfortunately. Assembling a beefed-up set of resources as a means of challenging what an organization can do is relatively straightforward… Too often, however, resources such as these are then plugged into fundamentally unchanged processes, and little change results…

A separate organization is required when the mainstream organization’s values would render it incapable of focusing resources on the innovation project.

I’ve signed the pledge. I think the intentions seem very honourable, and the problems identified by the group are real, if somewhat loosely stated. Many of the principles seem spot-on: it’s certainly my view that too much of the information that should help us to drive our industry is hidden behind paywalls and accreditation schemes when it should really be a public resource.  My views aren’t fully formed, but nor by its own acknowledgement are those of the Service Management Congress itself.   It doesn’t seem self-evident to me that this structure will work, but it seems a good thing to explore and develop.  At this stage, I have a few key hopes:

I hope that a broad set of ITSM people are able to feel “ownership”: The initial signers and many of the follow-up pledgers are pretty familiar names within the industry: high-profile bloggers, tweeters, and presenters. It’s an impressive set of names, but we do need to bear in mind Rob England’s astute observation that “there are over two million people with an ITIL certificate. I guess quite a few of them are ITSM practitioners in one form or another – even if they wouldn’t call themselves that – let’s say a million. So a few thousand have read the SMcongress stuff and a few hundred have put their names on it“.  If this is perceived, even if very unfairly, as a talking shop for some “usual suspects”, it won’t get near any critical mass.

I hope we remember that ITSM doesn’t suck!:  There is plenty of room for improvement, but we have great people in this sector, and we’ve built something effective and successful. It needs to grow, and adapt, but that doesn’t mean everything thus far is a mistake.

I hope the approach is outside-in: This is not an “iPad” moment, where (to paraphrase Steve Jobs) we are creating something our customers didn’t even know they wanted. Great practice will come from real life, and there’s plenty of it out there. We can’t design it from scratch in a meeting room. Anyway, I’m a Product Manager, so I have to say this.

I hope that its ideas are genuinely transformative, but I don’t think it needs to create a revolution: ITSM is a mature framework in a rapidly shifting environment. Is ITIL adapting quickly enough to remain a dependable and definitive standard? There are obviously doubts and concerns about that.

My own view is that our customers have become comfortable and familiar with a set of tools and practices and interactions provided by their consumer technology that has set the bar much higher in terms of their expectations for the workplace. Upstart providers like Uber, who I have written about previously, have taken century-old customer interactions and transformed them to the extent that traditional providers face disruption out of their markets.  Internet-enabled cloud services have taken aspects of technology that were completely within IT’s domain, and offered them to anyone with a credit card.  This presents both a danger of irrelevance, and a gulf in governance, and ITSM needs to address those issues urgently.

If our established frameworks can’t do that quickly enough, we need rapid innovation.  But is it realistic to change everything? It feels more pragmatic, initially, to find some great ideas that can fold back into the broader ITSM discipline, bringing genuine improvements without trying to eat the whole elephant in one go.  Our stakeholders, to whom this transformation ultimately has to be sold, won’t accept a message that says “everything changes right now”.

I hope that we don’t just do this:
(XKCD cartoon)
(see also: camel/horse/committee)

I’m looking forward to engaging, and I’m looking forward to watching things develop. It’ll be interesting to revisit this subject in a month or so.

itSMF UK and the mysterious case of the missing Asset Managers


Something is bothering me.

When I first looked at the agenda for the 2013 itSMF UK conference in November, what stood out for me was a glaring omission: where is the IT Asset Management content?

First, let me state: It’s a really good agenda, full of really interesting speakers, and I will certainly aim to be there. I’ve been privileged to work in the UK ITSM sector for the thick end of two decades, and many of the names on the agenda are people I feel lucky to have worked and interacted with.

If you can, you should definitely go.

However, the lack of any ITAM focus, across more than 40 presentation sessions, is strange. If we want to understand our business services, we have to have a grasp on the assets underpinning them. The nearest this agenda appears to get to that is an interesting looking session on Supplier Management – important, but only part of the picture, and again, something that doesn’t really work without a good knowledge of what we are actually buying.

It took ITIL a while to come to the realisation that an asset is relevant in more ways than being just a depreciating item on a balance sheet, but version 3 finally got there, and then some:

“Service Asset”, according to ITIL v3: Any Capability or Resource of a Service Provider.

Resource (ITIL v3): [Service Strategy] A generic term that includes IT Infrastructure, people, money or anything else that might help to deliver an IT Service. Resources are considered to be Assets of an Organization.

Capability (ITIL v3): [Service Strategy] The ability of an Organization, person, Process, Application, Configuration Item or IT Service to carry out an Activity. Capabilities are intangible Assets of an Organization.

So… we consider our service-underpinning capabilities and resources to be our assets, but we don’t discuss managing those assets at the premier conference about managing the services? More importantly, we offer nothing to its increasingly important practitioners?

As long as ITAM is only discussed at ITAM conferences, and ITSM keeps up the habit of excluding it (this isn’t universal, mind: this presentation by Scott Shaw at Fusion 13 seems to hit the perfect message), then we risk looking disjointed and ineffective to CIOs who depend on the complete picture. To me, that’s pretty worrying.

(Footnote: I did submit a speaker proposal, but this isn’t about my proposal specifically – I’m sure lots of proposals couldn’t make the list)

Why does reporting get forgotten in ITSM projects?

ITSM initiatives often focus heavily on operational requirements, without paying enough up-front attention to reporting and analytics. This can lead to increased difficulty after go-live, and lost opportunity for optimisation. Big data is a huge and important trend, but don’t forget that a proactive approach to ordinary reporting can be very valuable.

“…users must spend months fighting for a desired report, or hours jockeying Excel spreadsheets to get the data they need. I can only imagine the millions of hours of productive time spent each month by people doing the Excel “hokey pokey” each month to generate a management report that IT has deemed not worthwhile”

Don’t Forget About “Small Data” – Patrick Gray in TechRepublic

In a previous role, aligning toolsets to processes in support of our organisation’s ITSM transformation, my teammates and I used to offer each other one piece of jokey advice: “Never tell anyone you’re good with Crystal Reports”.

The reason? Our well established helpdesk, problem and change management tool had become a powerful source of management reports. Process owners and team managers wanted to arrive at meetings armed with knowledge and statistics, and they had learned that my team was a valuable data source.

Unfortunately, we probably made it look easier than it actually was. These reports became a real burden to our team, consuming too much time, at inconvenient times. “I need this report in two hours” often meant two hours of near-panic, delving into data which hadn’t been designed to support the desired end result. We quickly needed to reset expectations. It was an important lesson about reporting.

Years later, I still frequently see this situation occurring in the ITSM community. When ITSM initiatives are established, processes implemented, and toolsets rolled out, it is still uncommon for reporting to be considered in-depth at the requirements gathering stage. Perhaps this is because reporting is not a critical-path item in the implementation: instead, it can be pushed to the post-rollout phase, and worried about later.

One obvious reason why this is a mistake is that many of the things that we might need to report on will require specific data tracking. If, for example, we wish to track average assignment durations, as a ticket moves between different teams, then we have to capture the start and end times of each. If we need to report in terms of each team’s actual business hours (perhaps one team works 24/7, while another is 9 to 5), then that’s important too. If this data is not explicitly captured in the history of each record, then retrospectively analysing it can be surprisingly difficult, or even impossible.
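
As a rough sketch of what that capture enables, here’s a small Python example that computes an assignment’s elapsed time within a team’s business hours. The team calendars and timestamps are invented, and it deliberately ignores weekends and time zones; the point is that none of it works unless each assignment’s start and end times were recorded explicitly.

```python
from datetime import datetime, timedelta

# Hypothetical team calendars: opening and closing hour of each team's working day.
BUSINESS_HOURS = {"Service Desk": (9, 17), "Ops": (0, 24)}  # Ops works 24/7

def business_seconds(start, end, team):
    """Seconds between start and end that fall within the team's business hours.
    Ignores weekends, holidays and time zones - this is only a sketch."""
    open_h, close_h = BUSINESS_HOURS[team]
    total = 0
    day = start.replace(hour=0, minute=0, second=0, microsecond=0)
    while day < end:
        window_start = day + timedelta(hours=open_h)
        window_end = day + timedelta(hours=close_h)
        overlap = min(end, window_end) - max(start, window_start)
        total += max(overlap.total_seconds(), 0)
        day += timedelta(days=1)
    return total

# One assignment record: none of this can be reported on unless the start and
# end of each assignment were captured when the ticket changed hands.
assignment = {"team": "Service Desk",
              "start": datetime(2014, 3, 3, 16, 0),
              "end": datetime(2014, 3, 4, 10, 30)}

elapsed = business_seconds(assignment["start"], assignment["end"], assignment["team"])
print(f"{assignment['team']}: {elapsed / 3600:.1f} business hours")  # 2.5
```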

Consider the lifecycle of a typical ITSM data instance, such as an incident ticket:

Simple representation of an incident ticket in three phases: live, post-live, and archived

Our record effectively moves through three stages:

  • 1: The live stage
    This is the key part of an incident record’s life, in which it is highly important as a piece of data in its own right. At this point, there is an active situation being managed. The attributes of the object define where it is in the process, who owns it, what priority it should take over other work, and what still needs to be done. This phase could be weeks long, near-instantaneous, or anything in between.
  • 2: The post-live stage
    At this point, the ticket is closed, and becomes just another one of the many (perhaps hundreds of thousands) incidents which are no longer actively being worked. Barring a follow up enquiry, it is unlikely that the incident will ever be opened and inspected by an individual again. However, this does not mean that it has no value. Incidents (and other data) in this lifecycle phase do not have much significant value in their own individual right (they are simply anecdotal records of a single scenario), but together they make up a body of statistical data that is, arguably, one of the IT department’s most valuable proactive assets.
  • 3: The archived stage
    We probably don’t want to keep all our data for ever. At some stage, the usefulness of the data for active reporting diminishes, and we move it to a location where it will no longer slow down our queries or take up valuable production storage.

It’s important to remember that our ITSM investment is not just about fighting fires. Consider two statements about parts of the ITIL framework (these happen to be taken from Wikipedia, but they each seem to be very reasonable statements):

Firstly, for Incident Management:

“The objective of incident management is to restore normal operations as quickly as possible”

And, for Problem Management:

“The problem-management process is intended to reduce the number and severity of incidents and problems on the business”

In each case, the value of our “phase 2” data is considerable. Statistical analysis of the way incidents are managed – the assignment patterns, response times and reassignment counts, first-time closure rates, etc. – helps us to identify the strong and weak links of our incident process in a way that no individual record can. Delving into the actual details of those incidents in a similar way helps us to identify what is actually causing our issues, reinforcing Problem Management.
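
As a small illustration of that kind of analysis, here’s a Python sketch run over a handful of invented closed-incident records; real figures would of course come from the ITSM system’s own reporting layer.

```python
from collections import Counter

# A handful of invented closed-incident records ("phase 2" data).
closed_incidents = [
    {"id": "INC001", "reassignments": 0, "resolve_mins": 45,  "category": "Email"},
    {"id": "INC002", "reassignments": 3, "resolve_mins": 520, "category": "Network"},
    {"id": "INC003", "reassignments": 1, "resolve_mins": 120, "category": "Network"},
]

# First-time fix rate: incidents resolved without ever being reassigned.
first_time_fix = sum(1 for i in closed_incidents if i["reassignments"] == 0)
print(f"First-time fix rate: {first_time_fix / len(closed_incidents):.0%}")

# Mean resolution time across all closed incidents.
mean_mins = sum(i["resolve_mins"] for i in closed_incidents) / len(closed_incidents)
print(f"Mean resolution time: {mean_mins:.0f} minutes")

# Recurring categories: the raw material for Problem Management.
print(Counter(i["category"] for i in closed_incidents).most_common())
```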

It’s important to remember that this is one of the major objectives of our ITSM systems, and a key basis of the return on our investment. We can avoid missing out on this opportunity by following some core principles:

  • Give output requirements as much prominence as operational requirements, in any project’s scope.
  • Ensure each stakeholder’s individual reporting and analytics needs are understood and accounted for.
  • Identify the data that actually needs to be recorded, and ensure that it gets gathered.
  • Quantify the benefits that we need to get from our analytics, and monitor progress against them after go-live.
  • Ensure that archiving strategies support reporting requirements.

Graphs icon courtesy of RambergMediaImages on Flickr, used under Creative Commons licensing.

Painted into a Corner: Why Software Licensing isn’t getting simpler

It’s not easy being a Software License Manager.

It’s really not easy being a Software License Manager in a company which uses products from one or more of the “usual suspects” among the major software vendors.  Some of the largest have spent recent years creating a licensing puzzle of staggering complexity.

There’s an optimistic school of thought which supposes that the next big change in the software industry – a shift to service-oriented, cloud-based software delivery – will make this particular challenge go away.  But how true is this? To answer the question, we need to take a look back, and understand how we arrived at the current problem.

In short, today’s complexity was driven by the last big industry megatrend: virtualization.

In an old-fashioned datacenter, licensing was pretty straightforward.  You’re running our software on a box?  License that box,  please.  Some boxes have got bigger?  Okay, count the CPUs, thanks. It was nothing that should have been a big issue for an organized Asset Manager with an effective discovery tool.  But as servers started to look a bit less like, well, servers, things changed, and it was a change that became rather dramatic.

The same humming metal boxes were still there in the data center, but the operating system instances they were supporting had become much more difficult to pin down.  Software vendors found themselves in a tricky situation, because suddenly there were plenty of options to tweak the infrastructure to deliver the same amount of software at a lower license cost. This, of course, posed a direct threat to revenues.

The license models had to be changed, and quickly. The result was a new set of metrics, based on assessment of the actual capacity delivered, rather than on direct counting of physical components.

In 2006, in line with a ramping-up of the processor core count in its Power5 server offering, IBM announced its new licensing rules. “We want customers to think in terms of ‘processor value units’ instead of cores”, said their spokesman. A key message was simplification, but that was at best debatable: CPUs and cores can be counted, whereas processor-specific unit costs have to be looked up.  And note the timing: this was not something that arrived with the first Power5 servers. It was well into the lifetime of that particular product line.  Oh, and by the way, older environments like Power4 were brought into the model, too.

And what about the costs?  “This is not a pricing action. We aren’t changing prices,” added the spokesman.

For a vendor, that assertion is important. Changing pricing frameworks is a dangerous game for software companies, even if on paper it looks like a zero-sum game.  The consequences of deviating significantly from the current mean can be severe: the customers whose prices rise tend to leave. Those whose prices drop pay you less.  Balance isn’t enough – you need to make it smooth for every customer.

Of course, virtualization didn’t stand still from August 2006 onwards, and hence neither did the license models.  With customers often using increasingly sophisticated data centers, built on large physical platforms, the actual processing capacity allocated to software might be significantly less than the total capacity of the server farm.  You can’t get away with charging for hundreds of processors where software is perhaps running on a handful of VMs.

So once again, those license models needed to change.  And, as is typical for revisions like this, sub-capacity licensing was achieved through the addition of more details, and more rules.  It was pretty much impossible to make any such change reductive.

This trend has continued:  IBM’s Passport Advantage framework, at the time of writing, has an astonishing  46 different scenarios modelled in its processor value unit counting rules, and this number keeps increasing as new virtualization technologies are released. Most aren’t simple to measure: the Asset Manager needs access to a number of detailed facts and statistics.  Cores, CPUs, capacity caps, partitioning, the ability of VMs to leap from one physical box to another – all of these and more may be involved in the complex calculations. Simply getting hold of the raw data is a big challenge.
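
To illustrate the shape of that calculation (and only the shape: the per-core values and partition data below are invented, not IBM’s actual PVU tables), here’s a small Python sketch of a sub-capacity count. Even this toy version needs the processor type, the physical core count, and the entitled cores of every partition running the product.

```python
# Hypothetical PVU-per-core lookup - NOT IBM's actual table.
PVU_PER_CORE = {"POWER-style": 100, "x86-2-socket": 70}

# One physical host, with the product installed in two partitions (invented data).
host = {
    "processor": "x86-2-socket",
    "physical_cores": 32,
    "partitions": [
        {"name": "vm-db01", "entitled_cores": 4},
        {"name": "vm-db02", "entitled_cores": 6},
    ],
}

rate = PVU_PER_CORE[host["processor"]]
full_capacity = host["physical_cores"] * rate                               # license the whole box
sub_capacity = sum(p["entitled_cores"] for p in host["partitions"]) * rate  # license only entitled cores

# Sub-capacity counting: pay for the cores the partitions can use, never more
# than the full capacity of the physical host.
print(f"PVUs required: {min(sub_capacity, full_capacity)}")
```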

Another problem for the Software Asset Manager is the fact that there is often a significant and annoying lag between the emergence of a new technology, and the revision of software pricing models to fit it. In 2006, Amazon transformed IT infrastructure with their cloud offering. Oracle’s guidelines for applying its licensing rules in that environment only date back to 2008. Until the models are clarified, there’s ambiguity. Afterwards, there are more rules to account for.

(Incidentally, this problem is not limited to server-based software.  A literal interpretation of many desktop applications’ EULAs can be quite frightening for companies using widespread thin-client deployment. You might only have one user licensed to work with a specialist tool, but if they can theoretically open it on all 50,000 devices in the company, a bad-tempered auditor might be within their rights to demand 50,000 licenses.)

License models catch up slowly, and they catch up reactively, only when vendors feel the pressure to change them. This highlights another problem: despite the fine efforts of industry bodies like the SAM Standards Working Group, vendors have not found a way to collaborate.  As the IBM spokesman put it in that initial announcement: “We can’t tell the other vendors how to do their pricing structure”.

As a result, the problem is not just that these license models are complex.  There are also lots of them.  Despite fundamentally measuring the same thing, Oracle’s Processor Core Factors are completely different to IBM’s Processor Value Units.  Each vendor deals with sub-capacity in its own way, not just in terms of counting rules but even in terms of which virtual systems can be costed on this basis. Running stuff in the cloud? There are still endless uncertainties and ambiguities.  Each vendor is playing a constant game of catch-up, and they’re each separately writing their own rules for the game. And meanwhile, their auditors knock on the door more and more.

Customers, of course, want simplification. But the industry is not delivering it. And the key problem is that pricing challenge.  A YouTube video from 2009 shows Microsoft’s Steve Ballmer responding to a customer looking for a simpler set of license models.  An edited transcript is as follows:

Questioner:

Particularly in application virtualization and general virtualization, some of Microsoft’s licensing is full of challenging fine print…

…I would appreciate your thoughts on simplifying the licensing applications and the licensing policies.”

Ballmer:

“I don’t anticipate a big round of simplifying our licenses.  It turns out every time you simplify something, you get rid of something.  And usually what we get rid of, somebody has used to keep their prices down…

…The last round of simplification we did of licensing was six years ago…. it turned out that a lot of the footnotes, a lot of the fine print,  a lot of the caveats, were there because somebody had used them to reduce their costs…

…I know we would all like the goal to be simplification, but I think the goal is simplification without price increase. And our shareholders would like it to be a simplification without price decreases”…

…I’d say we succeeded on simplification, and our customer satisfaction numbers plummeted for two and a half years”.

In engineering circles there is a wise saying: “Strong, light, cheap: Pick any two”. The lesson from the last few years in IT  is that we can apply a similar mantra to software licensing:  Simple, Flexible, Consistently Priced: Pick any two.

Vendors have almost always chosen the latter two.

This brings us to the present day, and the next great trend in the industry. According to IDC’s 2011 Software Licensing and Pricing survey, a significant majority of the new commercial applications brought to market in 2012 will be built for the Cloud. Vendors are seeing declining revenues from perpetual license models, while subscription-based revenue increases. Some commentators view this as a trend that will lead to the simplification of software license management. After all, people are easier to count than dynamic infrastructure… right?

However, for this simplification to occur, the previous pattern has to change, and it’s not showing any sign of doing so.  The IDC survey reported that nearly half of the vendors who are imminently moving to usage-based pricing models still had no means to track that usage. But no tracking will mean no revenue, so we know they’ll need to implement something. Once again, the software industry is in an individual and reactive state, rather than a collaborative one, and that will mean different metrics, multiple data collection tools, and a new set of complex challenges for the software asset manager.

And usage-based pricing is no guarantee of simplicity. A glance at the Platform-as-a-Service sector illustrates this problem neatly. Microsoft’s Azure, announced in 2009 and launched in 2010, promised new flexibility and scalability… and simplicity. But again, flexibility and simplicity don’t seem to be sitting well together.

To work out the price of an Azure service, the Asset Manager needs to understand a huge range of facts, including (but by no means limited to) usage times, usage volumes, and secondary options such as caching (both performance and geographic), messaging and storage.  Got all that?  Good, because now we have to get to grips with the contractual complications: MSDN subscriptions have to be accounted for, along with the impact of any existing Enterprise Agreements. Microsoft recognized the challenge and provided a handy calculator, only to acknowledge that “you will most likely find that the details of your scenario warrant a more comprehensive solution”. Simplicity, Flexibility, Consistent Pricing: Pick any two.
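
To give a feel for that arithmetic, here’s a deliberately simplified Python sketch of a usage-based estimate. The meters and unit rates are invented for illustration, not Microsoft’s actual price list, and a real estimate would still need the contractual layer (MSDN benefits, Enterprise Agreement terms) applied on top.

```python
# Invented unit rates - not a real price list.
RATES = {
    "compute_hours":    0.12,  # per instance-hour
    "storage_gb_month": 0.07,
    "egress_gb":        0.09,
    "cache_gb_month":   0.25,
}

# One month's metered usage for a hypothetical service.
usage = {
    "compute_hours":    2 * 730,  # two instances, all month
    "storage_gb_month": 500,
    "egress_gb":        120,
    "cache_gb_month":   1,
}

estimate = sum(RATES[meter] * quantity for meter, quantity in usage.items())
print(f"Estimated monthly cost: ${estimate:,.2f}")
# ...and that is before any subscription benefits or enterprise
# agreement terms are applied to the raw metered figure.
```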

And, of course, the old models won’t go away either. Even in a service-oriented future, there will still be on-premise IT, particularly amongst the organizations providing those services.

Software vendors have painted themselves into a corner with their license models, and unless they can find a way to break that pattern, we face a real risk that the license management challenge will get even more complex. Entrenched complexity in the on-premise sector will be joined by a new set of challenges in the cloud.

The pattern needs to change. If it doesn’t change, be nice to your Software Asset Manager. They’ll need a coffee.

Ticket Tennis

The game starts when something breaks.

A service is running slowly, and the sounds of a room full of frustration echo down a phone line. Somewhere, business has expensively stopped, amid a mess of lagging screens and pounded keyboards.

The helpdesk technician provides sympathetic reassurance, gathers some detail, thinks for a moment, and passes the issue on. A nice serve, smooth and clean, nothing to trouble the line judges here.

THUD!

And it’s over to the application team.  For a while.

“It’s not us. There’s nothing in the error logs. They’re as clean as a whistle”.

Plant feet, watch the ball…

WHACK!

Linux Server Support. Sure footed and alert, almost nimble (it’s all that dashing around those tight right-angle corners in the data center).  But no, it seems this one’s not for them.

“CPU usage is normal, and there’s plenty of space on the system partition”.

SLICE!

The networks team alertly receive it.  “It can’t be down to us.  Everything’s flowing smoothly, and anyway we degaussed the sockets earlier”. (Bear with me. I was never very good at networking).

“Anyway, it’s slow for us, too. It must be an application problem”.

BIFF!

Back to the application team it goes.   But they’re waiting right at the net.  “Last time this was a RAID problem”, someone offers.

CLOUT!

…and it’s a swift volley to the storage team.

I love describing this situation in a presentation, partly because it’s fun to embellish it with a bit of bouncy time-and-motion.  Mostly, though, it’s because most people in the room (at the very least, those whose glasses of water I’ve not just knocked over) seem to laugh and nod at the familiarity of it all.

Often, something dramatic has to happen to get things fixed. Calls are made, managers are shouted at, and things escalate.  Eventually people are made to sit round the same table, the issue is thrashed out, and finally a bit of co-operation brings a swift resolution.

You see, it turns out that the servers are missing a patch, which is causing new application updates to fail, but they can’t write to the log files because the network isn’t correctly routing around the SAN fabric that was taken down for maintenance which has overrun. It took a group of people, working together, armed with proper information on the interdependent parts of the service, to join the dots.

Would this series of mistakes seem normal in other lines of work?  Okay, it still happens sometimes, but in general most people are very capable of actually getting together to fix problems and make decisions.   Round table meetings, group emails and conference calls are nothing new. When we want to chat about something, it’s easy.  If we want to know who’s able to talk right now, it’s right there in our office communicator tools and on our mobile phones.

It’s hard to explain why so many service management tools remain stuck in a clumsy world of single assignments, opaque availability, and uncoordinated actions.  Big problems don’t get fixed quickly if the normal pattern is to whack them over the net in the hope that they don’t come back.

Fixing stuff needs collaboration, not ticket tennis. I’ve really been enjoying demonstrating the collaboration tools in our latest Service Desk product.  Chat simply makes sense.  Common views of the services we’re providing customers simply make sense.  It demos great, works great, and quite frankly, it all seems rather obvious.

Photo courtesy of MeddyGarnet, licensed under Creative Commons.