We’ve been getting “DevOps vs ITIL” wrong

At DevOps conferences, I’ve observed some very negative sentiment about ITIL and ITSM. In particular, the Change Advisory Board is frequently cited as a symbol of ITSM’s anachronistic bureaucracy. They have a point. Enterprise IT support organisations are seen as slow, siloed structures built around an outdated three-tier application model.

None of this should be a surprise. The Agile Manifesto, effectively the DevOps movement’s Declaration of Independence, explicitly values individuals and interactions over process, and responding to change over rigid structure. The manifesto is the specific antithesis of the traits seen in that negative perception of ITSM.

ITSM commentary on DevOps, meanwhile, is inconsistent, ranging from outright confusion to sheer overconfidence. The complaints of the DevOps community are frequently acknowledged, but they are often waved away on the basis that ITSM is “just a framework”, and hence it should be perfectly possible to fit DevOps within that framework. If that doesn’t work, the framework must have been implemented badly. Again, this is a reasonable point.

But there’s a recurring problem with the debate: it tends to focus primarily on processes: two ITIL processes in particular. ITSM commentators frequently argue that Change Management already supports the notion of automated, pre-approved changes. DevOps “is just mature ITIL Release Management”, stated an opinion piece in the Australian edition of Computerworld (a remarkable assertion, but we’ll come to that later). Some of the more robust sceptics in the DevOps community focus on ITSM’s process silos and their incompatibility with the new agility in software development.

Certainly, the ITSM community has to realise that there is a revolution happening in software production. Here are some statements which are easy to back up with real-world evidence:

  • DevOps methodology fundamentally addresses some of the inefficiencies of old, waterfall-driven processes.
  • Slow, unnecessarily cumbersome processes are expensive in themselves, and they create opportunity costs by stifling innovation.
  • Agile, autonomous teams of developers are unleashing creativity and innovation at a new pace.
  • Small, point releases allow systems to grow in a more resilient way than monolithic releases ever achieved.

Unarguably, the new methodology is highlighting the shortcomings of the old. Can anyone argue today that a Change Advisory Board made of humans, collating verbal assurances from other humans, is preferable to an effective, fully-automated assurance process, seamlessly integrated with the release pipeline?
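
To make that contrast concrete, here’s a minimal sketch of what such an automated gate might look like (the criteria and thresholds are purely illustrative, not any particular organisation’s policy): a change that meets pre-agreed conditions is approved automatically, and anything outside them is escalated to humans.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    """A change arriving at the release pipeline's assurance gate."""
    risk_score: int               # 1 (trivial) to 10 (high risk)
    tests_passed: bool            # did the automated test suite pass?
    touches_regulated_data: bool  # does the change affect regulated systems?
    rollback_plan: bool           # is an automated rollback defined?

def assess(change: ChangeRequest) -> str:
    """Auto-approve low-risk, well-tested changes; escalate the rest.

    The thresholds here are invented for illustration -- in practice
    they would be agreed between the service owner and change authority.
    """
    if not change.tests_passed:
        return "rejected: failing tests"
    if change.touches_regulated_data or change.risk_score > 3:
        return "escalated: needs human review"
    if not change.rollback_plan:
        return "escalated: no automated rollback"
    return "auto-approved"

print(assess(ChangeRequest(risk_score=2, tests_passed=True,
                           touches_regulated_data=False, rollback_plan=True)))
# -> auto-approved
```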

We know, then, that DevOps methods dramatically improve the speed and reliability with which technology change can increase business value. But that’s where the arguments on both sides start to wear thin. What is that business value? How do we identify it, measure it, and assure its delivery?

In my experience, there is little mention of the customer at DevOps events. DevOps is seen, correctly, as a new and improved way to drive business value from software development, but the thinking feels very “bottom up”. ITSM commentators seem to have taken the same starting point: drilling into minutiae of process without really considering the value that ITSM should be looking to bring to the new world.

To highlight why a lack of service context is a problem, let’s take the simple example of frontline support. When developers push out an incremental release to a product, customers start to use it. No matter how robust the testing of that release was, some of those customers will encounter issues that need support (not every issue is caused by a bug that was missed in testing, after all).

The Service Desk will of course try to absorb many of those issues at the first step. That is one of its fundamental aims. To do this effectively, it needs to have reasonable situational awareness of what has been changing. It is not optimal for the Service Desk only to become aware of a change when the calls start coming in. Ideally, they should be armed with knowledge of how to deal with those issues.

No matter how effective the first line of support is, some issues will get to the application team. Those issues will vary, as will the level of pain that each is causing. Triage is required, and that is only possible if there is a clear understanding of the business and customer perspective.

Facing a queue of two tickets, or ten tickets, or one hundred tickets, the application team has to decide what to do first. This is where things start to unravel for an idealistic, full-stack, “you break it, you fix it” DevOps team. Which issues are causing business damage? Which are the most time critical? Which can be deferred? How much time should we spend on this at the cost of moving the product forward? This is the context that ITSM ought to be able to provide.
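
As a rough sketch of the kind of input ITSM could supply (the fields and weights here are entirely hypothetical), triage becomes a simple sorting problem once each ticket carries scoreable business context:

```python
# A minimal triage sketch: rank a ticket queue by business impact.
# The fields and weights are hypothetical; the point is that ranking
# is only possible if the business context exists on each ticket.

tickets = [
    {"id": 101, "users_affected": 300, "blocks_revenue": True,  "workaround": False},
    {"id": 102, "users_affected": 2,   "blocks_revenue": False, "workaround": True},
    {"id": 103, "users_affected": 40,  "blocks_revenue": False, "workaround": False},
]

def business_impact(ticket: dict) -> int:
    score = ticket["users_affected"]
    if ticket["blocks_revenue"]:
        score += 1000            # revenue-blocking issues jump the queue
    if not ticket["workaround"]:
        score += 100             # no workaround means more ongoing pain
    return score

for ticket in sorted(tickets, key=business_impact, reverse=True):
    print(ticket["id"], business_impact(ticket))
# -> 101 first, then 103, then 102
```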

Effective Service Management in any industry starts with a fundamental understanding of the customer. Who are they? What makes them successful? What makes them tell other potential customers how great you are? What annoys them? What hurts them? What will trigger them to ask for a refund? What makes them go elsewhere altogether? And, importantly: what is it that we are obligated to provide them with?

An understanding of our service provision is fundamental to creating and delivering innovative solutions, and supporting them once they are there. This is where ITSM can lead, assist, and benefit the people pushing development forward.

The “S” in ITSM stands for “service”, not process. The heavy focus on process in this discussion (particularly two specific processes, close to the point of deployment) has been a big mistake by both communities. It is wholly incorrect to state that DevOps is predominantly contained within Release and Change Management. Code does not appear spontaneously in a vacuum. A whole set of interconnected events leads to its creation.

I have been in IT for two decades, and the DevOps movement is by far the biggest transformation in software development methodology in that time (I still have the textbooks from my 1990s university Computing course. These twenty-year-old tomes admonish the experimenting “hacker” and urge the systems analyst to complete comprehensive designs before a line of code is written, as if building software were equivalent to constructing a suspension bridge).

The cultural change brought by DevOps involves the whole technology department… the whole enterprise, in fact. Roles change, expectations change. There are questions about how to align processes, governance and support. We need to think about the structure of our teams in a post three-tier world. We need to consider new support methodologies like swarming. We need to thread knowledge management and collaboration through our organizations in innovative new ways.

But the one thing we really must do is to start with the customer.

Knowing what you DON’T know

I presented an Asset Management breakout session at the BMC Engage conference in Las Vegas today.

An interesting question came up at the end: what percentage accuracy is good enough in an IT Asset Management system? It’s a question that might get many different answers. Context is important: you might expect a much higher percentage (maybe 98%?) in a datacentre, but it’s not so realistic to achieve that for client devices, which are less governable… and more likely to be locked away in forgotten drawers.

However, I think any percentage figure is pretty meaningless without another important detail: a good understanding of what you don’t know. Understanding what makes up the percentage of things that you don’t have accurate data on is arguably just as important as achieving a good positive score.
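
A minimal sketch of what that might look like in practice (the categories and numbers are invented for illustration): report the headline accuracy figure, but always alongside a breakdown of the remainder.

```python
# Sketch: report asset data accuracy alongside what you don't know.
# The category names are illustrative; the point is that "94% accurate"
# means little without a breakdown of the remaining 6%.

register = {
    "verified":       4700,   # recently discovered and reconciled
    "stale":           180,   # last seen more than 90 days ago
    "unreachable":      80,   # known devices that discovery cannot see
    "unknown_origin":   40,   # discovered, but matched to no record
}

total = sum(register.values())
accuracy = register["verified"] / total

print(f"Accuracy: {accuracy:.1%}")
for category, count in register.items():
    if category != "verified":
        print(f"  don't know: {category} = {count} ({count / total:.1%})")
```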

One of the key points of my presentation is that there has been a rapid broadening of the entities that might be defined as an IT Asset:

(Figure: The evolution of IT Assets)

The digital services of today and the future will likely be underpinned by a broader range of Asset types than ever.  A single service, when triggered, may touch everything from a 30-year-old mainframe to a seconds-old Docker instance. Any or all of those underpinning components may be of importance to the IT Asset Manager. After all, they cost money. They may trigger licensing requirements. They need to be supported. The Service Desk may need to log tickets against them.

The trouble is, not all of the new devices can be identified, discovered and managed in the same way as the old ones.  The “discover and reconcile” approach to Asset data maintenance still works for many Asset types, but we may need a completely different approach for new Asset classes like SaaS services, or volatile container instances.
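
For traditional Asset classes, “discover and reconcile” can be sketched in a few lines (the single-key matching below is a deliberate simplification; real tools reconcile on multiple weighted identifiers). The difficulty with volatile Asset classes is that the discovered set can change faster than any such cycle runs:

```python
# A simplified "discover and reconcile" pass: match discovered devices
# to the asset register by serial number, and flag the exceptions.
# Real matching uses multiple weighted identifiers, not one key.

register   = {"SN-001": "laptop-alice", "SN-002": "rack-db-01", "SN-003": "printer-3f"}
discovered = {"SN-001", "SN-002", "SN-999"}   # what the network scan found

matched      = discovered & register.keys()   # both sources agree
missing      = register.keys() - discovered   # in register, not seen
unregistered = discovered - register.keys()   # seen, not in register

print("matched:", sorted(matched))
print("not seen (check for theft/disposal):", sorted(missing))
print("untracked (add to register?):", sorted(unregistered))
```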

The IT Asset Manager may not be able to solve all those problems. They may not even be in a position to have visibility, particularly if IT has lost its overarching governance role over what Assets come into use in the organization (SkyHigh Networks’ most recent Cloud Adoption and Risk Report puts the average number of Cloud Services in use in an enterprise at almost 1100. Does anyone think IT has oversight over all of those, anywhere?).

However, it’s still important to understand and communicate those limitations.  With CIOs increasingly focused on ITAM-dependent data such as the overall cost of running a digital service, any blind spots should be identified, understood, and communicated. It’s professional, it’s helpful, it enables a case to be made for corrective action, and it avoids something that senior IT executives hate: surprises.


Mobile ITSM isn’t only about field support: It’s about everyone.

When we built the new Smart IT UX for BMC Remedy, we were determined to allow ALL IT support workers to be mobile. Why? Because everyone can benefit from mobility.

In the short history of enterprise mobility, mobile business applications have generally focused on two specific user groups. The first is the group of users for whom being on the road is the bulk of their job, such as field engineers: they go to a location, perform some tasks, move on to the next place.

The second group is those who might be based at a desk in an office, but who move around through a series of meetings, on- and off-site. For these users, the primary purpose of mobility has been continuity of communication (with the weapon of choice having historically been the keyboard-equipped BlackBerry).

For most other users, performing most other business tasks, the desktop computer (or desk-based notebook computer) remained the key delivery mechanism for business applications.

Today, this is an outdated philosophy.

I recently stood in a lift at a customer’s office. There were four people in that lift, and there were seven smartphones on display. Okay, two of them were mine (I’m a mobility product manager, after all), but that is still a notable average.

Even in the short moment offered by a journey of just a few floors, those office-based employees found a moment to communicate. Whether that communication was work-based or personal, one-way or two-way, is irrelevant. The point is that the time was being used to perform those tasks in a way that could not have happened just a few years ago.

In the December 2014/January 2015 edition of Fast Company, Larry Erwin, a Business Development Executive with Google, points out:

“When I was a kid growing up back in the ’90s, I was the only kid on my block with a Tandy 1000. Now kids who are 15, 16 years old have a supercomputer in their pocket”

The opportunity for business software tools to take advantage of that new computing power is huge, and growing. The very structure of the traditional office is under pressure, as users become more mobile and more technology-enabled. That generation of teenagers will soon enter the workplace having had a completely different, and more universal, grounding in technology than us select geeks who owned the Tandy 1000s and Sinclair Spectrums of yesteryear.

Mobility has already become a primary means of service consumption for customers, across a swathe of industries. Consider the process of taking a flight: with many airlines, the entire customer experience has been mobilized. Forrester Research outlined this beautifully in a 2014 illustration charting the timeline of mobile engagement for the airline passenger:

  • -2 Weeks: Book ticket, change reservation
  • -2 Days: Change seat, request upgrade
  • -2 Hours: Check in, check gate, departure time, lounge access
  • Flight: Arrival time, food order, movies, wi-fi, duty free
  • +2 Hours: Ground transport, lost luggage, navigation
  • +2 Days: Mileage status, reward travel, upcoming reservations
  • +2 Weeks: Mileage points earned, customer satisfaction survey
    (Source: Forrester)

Mobility for the consumer is now table stakes. So why not extend this to the people serving those consumers? Mobility, simply, provides great opportunities to enhance the role of the service representative.

When I arrived at a Westin Hotel in Chicago last month, I needed to speak with reception, and joined the line of people at the check-in desk. However, I was approached by a staff member with an iPad, who was quickly able to answer my question. The Starwood Hotels group, he told me, aims to keep its hotel staff on their feet, closer to customers, delivering service in a more dynamic way. Even the group’s CEO, Frits van Paasschen, has abandoned his desk and PC: a Wall Street Journal article in November 2014 revealed that he works entirely on tablet and smartphone (van Paasschen’s office contains no desk – just a boardroom table and a couch).

In an IT Service Management environment, the case for mobility for field support users has long been clear: the alternative being a hotch-potch of printed dockets, slow communication, and inconvenient (or omitted) retrospective updates to systems of record, back at a field office.

But even in the office, it’s important to realise that good IT service, like all good customer service, combines communication, expertise, initiative and process. Many people involved in that process are not at their desk all day: they may be in meetings, or travelling between sites, or sitting with colleagues.

If those people can only access their support tools from their desk, then gaps appear. Twenty minutes waiting for input from an approver or technical expert could amount to twenty minutes more waiting time for the customer, or even a missed window to communicate with the next person in the chain (and hence an even bigger gap). Mobilising people – properly – fills those gaps, even in the office. And, as the IT department’s customers get more mobile, the best way to support them is often to become more mobile.

When we built the Smart IT interface for BMC Remedy, released in September 2014, this was the philosophy of our mobile approach: ITSM should be mobile for every user, whether they are field support technicians roaming a wide area, or a service desk agent taking a five minute break at the coffee machine.

The tool needed to provide all the features they need, including comprehensive teamwork features and assistiveness, so that they are never forced to find a desk or wait for the slow boot-up of a traditional PC. We released the tablet version of Smart IT on day one, and the phone version, scheduled to be live in December 2014, has already received a great reception in demonstrations at customer events. As with Smart IT in general, there’s no additional cost over and above a standard Remedy ITSM license.

Our work with our ITSM customers has shown us, and them, that there are huge and real business benefits to a seamless and comprehensive mobile experience. Time not spent in front of a PC no longer needs to be time spent not helping customers.

Properly equipped, an increasingly mobile-focused user base is sure to find those benefits, and that means faster, better outcomes for IT’s customers.

Cloud’s impact on ITSM tools: it’s not just about SaaS

The last few years in the ITSM toolset market have been somewhat dominated by the subject of cloud delivery. Business has, of course, rapidly embraced the cloud as an application consumption option. ITSM has been no exception: new entrants and established brands alike have invested either in fully SaaS offerings, or in diversification of their offering to provide a choice between on-premise and cloud delivery models.

However, for the users of those tools, or their customers in the wider organisation using SaaS software, the delivery method alone does not necessarily change much. This is hugely important to remember. If software is consumed via a URL, it does not particularly matter whether the screens and features are served from the company’s own servers, or from a data centre halfway across the country or even the world.  There are often points of benefit for the SaaS end user, of course. But the mechanism alone? It’s a big deal for the buyer, or for the people managing the system, but it might be wholly transparent to everyone else.

It’s important, therefore, to look at what the real differences are to those real-life users: the people whose jobs are constantly underpinned by the applications. Now that we have a solid set of SaaS platforms underpinning ITSM, it seems right to focus on where cloud has already created dramatic user benefits outside the ITSM space. These huge trends show us what is possible:

Autonomy: When an employee stores or shares files using a cloud storage provider like Dropbox, they are detaching them from the traditional corporate infrastructure of hard drives, email, and groupware. When they use their own smartphone or tablet at work, as more than 80% of knowledge workers are doing, they are making a conscious decision to augment their toolset with technology of their own choice, rather than their company’s.

Collectivisation: Cloud applications have the potential to pull broad user groups together in a manner that no closed corporate system can ever hope to do. In the consumer space, this is the key difference between crowdsourced guidance and point expert advice (a battle in which the momentum is only going one way, as evidenced by the disruption of the travel guidebook market by Yelp and TripAdvisor). Aggregated information and real-time interaction are a new and powerful disruption to traditional tools and services, and Cloud is a huge enabler of both.

Communication: Facebook’s impact on social communication has been to close down distances and bring groups of people together in an effortless way. Similarly, Cloud platforms give us new ways to link disparate ITSM actors (whether customers or deliverers) across multiple systems, locations and organizations, without the requirement to build and maintain multiple, expensive ad-hoc paths of communication, and without some of the drawbacks of traditional channels such as email. Service, at least when things get complicated, is a team effort, and slick communication underpins that effort.

Cross-Platformity: Cloud underpinnings have enabled a new generation of applications to work seamlessly across different devices. An employee on a customer visit can use a tool like Evernote to dictate stand-up notes using a smartphone, before editing them on the train home using a tablet, and retrieving them on the laptop in the office the next morning. Nothing needs to be transferred: there is no fiddling with SD Cards or emails.

These are the principles which will change the game for ITSM’s front line service providers, and its customers. Bringing some or all of them together opens up a huge range of possibilities:

  • Integrated service platforms, connecting the customer in new ways to those serving them (think of the “two halves of Uber”, for instance: separate applications for passenger and driver, with powerful linkage between the two for geolocation, payment and feedback).
  • Fully mobilised ITSM, delivering a truly cross platform “Evernote” experience with persistent personal data such as field notes.
  • Easy application linkages, driven by tools like IFTTT and Zapier, opening up powerful but controllable autonomy and user-driven innovation (see the sketch after this list).
  • Integrated community interaction beyond the bounds of the single company instance, enabling knowledge sharing and greater self-help.
  • Highly contextual and assistive features, underpinned by broad learning of user needs and behaviours across large sets of users, and detailed analysis of individual patterns.
  • Open marketplaces for granular services and quick “plug and play” supplier offerings, rapidly consumed and integrated through open cloud-driven toolsets.
  • New collaboration spaces for disparate teams of stakeholders, bringing the right people together in a more effective way, to get the job done.
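
To illustrate the application-linkage idea from the list above (the hook URL and payload fields are hypothetical placeholders), tools like Zapier expose a “catch hook” pattern: an ITSM event is pushed as a plain HTTP POST, and the receiving service decides which workflow to trigger:

```python
import json
import urllib.request

# Hypothetical example: push a "major incident opened" event to a
# Zapier-style catch-hook URL, which could fan out to chat, SMS, etc.
HOOK_URL = "https://hooks.example.com/catch/12345/abcde/"  # placeholder

event = {
    "event": "incident.opened",
    "priority": "P1",
    "service": "online-checkout",
    "summary": "Payment gateway timeouts",
}

request = urllib.request.Request(
    HOOK_URL,
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:  # fires the linked workflow
    print(response.status)
```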

Autonomy, collectivisation, communication, cross-platformity: these are four key principles that are truly making a difference to ITSM. Cloud delivery is just the start.  It is now time to harness the real frontline benefits of this technological revolution.

 


Is the lack of ITSM and ITAM alignment causing application sprawl?

I’ve written before about the negative consequences of the lack of industry alignment between ITIL-focused ITSM functions, and the IT Asset Management groups which typically evolved somewhat separately.

A recent CapGemini study of CIOs and IT decision makers concisely illustrated one impact this is having:

  • 48% believe their business has more applications than it needs (up from 34% over the previous three years).
  • Only 37% believe the majority of their applications are mission critical.
  • 70% believe at least a fifth of their company’s applications share similar functionality and could be consolidated.
  • The majority believe a fifth of those applications should be retired or replaced.

This shows a very strong consensus amongst IT leaders: IT is spending too much money and time on too many applications, with too much overlap. And in the rapidly evolving application landscape, this impact is by no means limited to traditional on-premise software: Skyhigh’s 2013 study on cloud service adoption found that enterprise respondents used, on average, well over 500 cloud services (the largest number of services found in one organisation was an eye-watering 1,769). [Update for Q1 2015: SkyHigh now puts the average at over 900.]

If we want to remain serious about understanding the business services our IT organizations are managing, overseeing and underpinning, surely we can’t lose track of key assets like this?

How can IT possibly aim to control this sprawl, understand its impact, pinpoint its risks and remove its vulnerabilities, if there is no unified overseeing function? Who is tracking which users are entitled to which services? Who ensures that users are equipped with the right services, and who removes their access once they leave, to ensure both data security and cost control? Who can identify the impact on key services if an application is removed or consolidated?
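
As a minimal sketch of one such check (the data sources are hypothetical; in reality they would come from an HR system and each vendor’s admin export), a unified function could routinely cross-reference the employee roster against SaaS account lists to find access that should have been revoked:

```python
# Sketch: find "zombie" SaaS accounts belonging to departed employees.
# In reality the inputs come from an HR system and each vendor's
# admin API or usage export; here they are hard-coded for illustration.

current_employees = {"alice@corp.example", "bob@corp.example"}

saas_accounts = {
    "crm-tool":     {"alice@corp.example", "carol@corp.example"},
    "file-sharing": {"bob@corp.example", "dave@corp.example"},
}

for service, accounts in saas_accounts.items():
    zombies = accounts - current_employees
    for account in sorted(zombies):
        print(f"{service}: revoke {account} (no longer employed)")
```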

Concerningly, this does not appear to be high on the agenda in ITSM discussions. We still see two separate threads in the conference ecosystem: ITSM conferences rarely address asset management. Asset management conferences talk about suppliers and infrastructure without putting them in the context of the services they underpin. My own role involves product management of an ITAM system which is part of an ITSM suite, so I attend both sets of conferences, see both parallel tracks, and experience nagging concerns in each case that the other side of the picture is overlooked.

Recent initiatives such as the Pink Think Tank 14 are addressing, in welcome and increasing detail, the multi-sourced, multi-vendor evolution of IT service delivery, but there still does not appear to be a detailed focus on the actual assets and software being supplied by those vendors. That’s a gap. Those vendors fill the IT environment with assets, from physical kit through software services to less tangible “assets” like critical people with vital knowledge. All those things cost money. They may have contractual associations. We may need to know, very quickly, who owns and supports them. And if a supplier is replaced, we need to know what they might take with them.

The harsh reality, as clearly shown by CapGemini’s study, is that CIOs and leaders are asking questions about consolidation that will require a detailed, holistic understanding of what we are actually spending money on, and why it is there.

This fascinating KPMG survey reveals the software license auditor’s viewpoint

Software licensing audits are a big challenge for IT departments.  65% of respondents to a 2012 Gartner survey reported that they had been audited by at least one software vendor in the past 12 months, a figure which has been on a steady upward trajectory for a number of years.

Often, companies being audited for software compliance will actually deal, at the front-line, with a 3rd party audit provider. One of the big names in this niche is KPMG, whose freely-downloadable November 2013 report, “Is unlicensed software hurting your bottom line?”, provides a very interesting window into the software compliance business.

The report details the results of a survey conducted between February and April 2013, with respondents made up of “31 software companies representing more than 50 percent of the revenue in the software industry”.

Revenue is driving software audits

The survey results show, rather conclusively, a belief in the business value of tackling non-compliance:

  • 52% of companies felt that their losses through unlicensed use of software amounted to more than 10% of their revenue.
  • Almost 90% reported that their compliance program is a source of revenue. For about a tenth, it makes up more than 10% of their overall software revenue.  For roughly half, it is at least 4%.

Compliance audits are increasingly seen as a sales process

  • In more than half of responding organisations, the software compliance function is part of Sales. This is reported as being up from 1 in 3, in an equivalent 2007 survey.
  • In 2007, 47% of compliance teams were part of the Finance department. This figure has plummeted to just 13%.

This shift is not universal, and some companies seem committed to a non-Sales model for their compliance team. A compliance team member from one major software vendor talked to me about the benefit of this to his role: he can tell the customer he is completely independent of the sales function, and is paid no commission or bonus based on audit findings. Many other vendors, however, structure audits as a fully-commissioned role. As the survey points out:

  • Only 20% of companies pay no commission to any individuals involved in the compliance process.
  • In 59% of cases, the commission structure used is the same as the normal sales commission program.

There is further indication of the role of sales in the audit process, in the answers to the question on “settlement philosophy”.  More than half of the respondents reported a preference for using audit findings as leverage in a “forward-looking sales approach”, rather than wanting to seek an immediate financial settlement.

Almost half of vendors select audit targets based on profiling

The biggest single selection reason for a compliance review was nomination by the sales account team (53%), with previous account history in close second place (50%).

Interestingly, however, 47% reported selecting customers for review based on “Data analytics suggesting higher risk of non-compliance”, with 7% stating that random selection is used.  It seems that audits are still a strong likelihood regardless of an organisation’s actual compliance management.

Auditors prefer their own proprietary tools to customers’ SAM tools

There seems to be a distinct lack of regard for Software Asset Management tools. 42% of respondents seek to use their own discovery scripts in the audit process. Only 26% of the vendors stated that they use customers’ SAM tools, and remarkably this is down from 29% in 2007, when one might expect few SAM tools would have been found on customer sites anyway.

This echoes the experience of a number of customers with whom I have previously spoken, and it can be a real source of annoyance. How, some argue, is it fair that license models are so complex that it takes a secretive proprietary script, only available to the auditor, to perform a definitive deployment count?
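
To see why the counting itself is hard, consider a toy sketch of per-core server licensing (the multipliers and minimums below are invented for illustration; real vendor terms vary): the same four “installs” can translate into a very different number of consumed licenses, depending on rules that may live only in the auditor’s script:

```python
# Why "how many installs?" is the easy question: a toy per-core model.
# Multipliers and minimums below are invented; real vendor terms vary
# and are often encoded only in the auditor's proprietary scripts.

servers = [
    {"host": "db-prod-1", "cores": 16, "virtual": False},
    {"host": "db-prod-2", "cores": 16, "virtual": False},
    {"host": "db-test-1", "cores": 4,  "virtual": True},
    {"host": "db-dev-1",  "cores": 2,  "virtual": True},
]

CORE_FACTOR = 0.5        # e.g. a per-core factor for this CPU family
MIN_CORES_VIRTUAL = 4    # virtual machines licensed at a minimum core count

def licenses_required(server: dict) -> float:
    cores = server["cores"]
    if server["virtual"]:
        cores = max(cores, MIN_CORES_VIRTUAL)
    return cores * CORE_FACTOR

naive_count = len(servers)                              # 4 installs
modelled = sum(licenses_required(s) for s in servers)   # 20.0 licenses

print(f"installs: {naive_count}, licenses consumed: {modelled}")
```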

Other observations

  • Software tagging has not been widely adopted: Less than half of respondents do it, or have plans to do so.
  • SaaS reduces the role of the software auditor. Only 15% reported any compliance issues, and more than half don’t even look for them.
  • Few companies seek to build protection against overdeployment into their software. From conversations I have had, most seem to want to encourage wide distribution. Some desktop software was deliberately released in a manner that has encouraged wide, almost viral distribution. In at least one case, an acquisition by a larger company has been the trigger for a significant and aggressive audit program, targeting almost every large company on the assumption that the software is likely to be found there.

Conclusions?

It is very clear from the survey results that many large software vendors have established their compliance program as a significant revenue generator, and with a significant shift of these functions into the sales department, we can probably assume that there is a broad intent to maintain or even grow this role.

Whether this is even compatible with a more collaborative model of software compliance management is highly questionable: the business case for the status quo seems very sound, from the vendor’s point of view. With so many vendors only trusting the discovery scripts used by their auditors, the situation for customers is nearly impossible: how can they verify compliance if the only counting tool is in the hand of the vendor?

The light at the end of the tunnel for many customers may be SaaS: SaaS software tends to be more self-policing, and consumption models are often simpler. However, it brings its own challenges: zombie accounts, decentralised purchasing, and a new set of inconsistent consumption models. Meanwhile, traditionally-hosted software does not go away.