We’ve been getting “DevOps vs ITIL” wrong

At DevOps conferences, I’ve observed some very negative sentiment about ITIL and ITSM. In particular, the Change Advisory Board is frequently cited as a symbol of ITSM’s anachronistic bureaucracy. They have a point. Enterprise IT support organisations are seen as slow, siloed structures built around an outdated three-tier application model.

None of this should be a surprise. The Agile Manifesto, effectively the DevOps movement’s Declaration of Independence, explicitly values individuals and interactions over processes and tools, and responding to change over following a plan. The manifesto is the specific antithesis of the traits seen in that negative perception of ITSM.

ITSM commentary on DevOps, meanwhile, is inconsistent, ranging from outright confusion to sheer overconfidence. The complaints of the DevOps community are frequently acknowledged, but they are often waved away on the basis that ITSM is “just a framework”, and hence it should be perfectly possible to fit DevOps within that framework. If that doesn’t work, the framework must have been implemented badly. Again, this is a reasonable point.

But there’s a recurring problem with the debate: it tends to focus primarily on processes: two ITIL processes in particular. ITSM commentators frequently argue that Change Management already supports the notion of automated, pre-approved changes. DevOps “is just mature ITIL Release Management”, stated an opinion piece in the Australian edition of Computerworld (a remarkable assertion, but we’ll come to that later). Some of the more robust sceptics in the DevOps community focus on ITSM’s process silos and their incompatibility with the new agility in software development.

Certainly, the ITSM community has to realise that there is a revolution happening in software production. Here are some statements which are easy to back up with real-world evidence:

  • DevOps methodology fundamentally addresses some of the inefficiencies of old, waterfall-driven processes.
  • Slow, unnecessarily cumbersome processes are expensive in themselves, and they create opportunity costs by stifling innovation.
  • Agile, autonomous teams of developers are unleashing creativity and innovation at a new pace.
  • Small, incremental point releases let systems grow in a more resilient way than monolithic releases tended to achieve.

Unarguably, the new methodology is highlighting the shortcomings of the old. Can anyone argue today that a Change Advisory Board made of humans, collating verbal assurances from other humans, is preferable to an effective, fully-automated assurance process, seamlessly integrated with the release pipeline?
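What such an automated assurance process might look like is easier to picture with a concrete sketch. The checks, field names and thresholds below are invented for illustration; a real pipeline would draw them from test results and deployment telemetry rather than a hand-built dictionary:

```python
# A minimal sketch of an automated change-assurance gate, of the kind that
# can replace a human CAB review for low-risk changes. The specific checks
# and thresholds are illustrative assumptions, not any tool's actual rules.

def assess_change(change: dict) -> str:
    """Return 'auto-approve' or 'escalate to human review'."""
    checks = [
        change.get("tests_passed", 0.0) >= 0.99,  # automated test pass rate
        change.get("rollback_tested", False),     # a rehearsed rollback exists
        change.get("services_touched", 99) <= 1,  # small blast radius
        not change.get("freeze_window", False),   # no active change freeze
    ]
    return "auto-approve" if all(checks) else "escalate to human review"

release = {"tests_passed": 1.0, "rollback_tested": True,
           "services_touched": 1, "freeze_window": False}
print(assess_change(release))  # prints "auto-approve": no meeting required
```

A change failing any check still reaches a human, so the board’s judgement is reserved for the cases that actually need it.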

We know, then, that DevOps methods dramatically improve the speed and reliability with which technology change can increase business value. But that’s where the arguments on both sides start to wear thin. What is that business value? How do we identify it, measure it, and assure its delivery?

In my experience, there is little mention of the customer at DevOps events. DevOps is seen, correctly, as a new and improved way to drive business value from software development, but the thinking feels very “bottom up”. ITSM commentators seem to have taken the same starting point: drilling into minutiae of process without really considering the value that ITSM should be looking to bring to the new world.

To highlight why a lack of service context is a problem, let’s take the simple example of frontline support. When developers push out an incremental release to a product, customers start to use it. No matter how robust the testing of that release was, some of those customers will encounter issues that need support (not every issue is caused by a bug that was missed in testing, after all).

The Service Desk will of course try to absorb many of those issues at the first step. That is one of its fundamental aims. To do this effectively, it needs to have reasonable situational awareness of what has been changing. It is not optimal for the Service Desk only to become aware of a change when the calls start coming in. Ideally, they should be armed with knowledge of how to deal with those issues.

No matter how effective the first line of support is, some issues will get to the application team. Those issues will vary, as will the level of pain that each is causing. Triage is required, and that is only possible if there is a clear understanding of the business and customer perspective.

Facing a queue of two tickets, or ten tickets, or one hundred tickets, the application team has to decide what to do first. This is where things start to unravel for an idealistic, full-stack, “you break it, you fix it” DevOps team. Which issues are causing business damage? Which are the most time critical? Which can be deferred? How much time should we spend on this stuff at the cost of moving the product forward? This is precisely the context that ITSM ought to be able to provide.
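The classic ITSM answer to that queue is to derive priority from business impact and urgency. The sketch below applies that idea to a hypothetical ticket queue; the tickets and the 1–3 scales are invented for the example, and the crucial point is that the impact and urgency values have to come from somewhere, which is exactly what a customer-aware ITSM function can supply:

```python
# A sketch of the classic ITSM triage model: priority derived from
# business impact and urgency, both rated 1 (low) to 3 (high).
# The tickets and scores here are illustrative, not real data.

def priority(impact: int, urgency: int) -> int:
    """Higher score means deal with it sooner."""
    return impact * urgency

tickets = [
    {"id": 101, "summary": "checkout intermittently fails", "impact": 3, "urgency": 3},
    {"id": 102, "summary": "report export is slow",         "impact": 2, "urgency": 1},
    {"id": 103, "summary": "typo on settings page",         "impact": 1, "urgency": 1},
]

# Work the queue highest-priority first.
queue = sorted(tickets, key=lambda t: priority(t["impact"], t["urgency"]), reverse=True)
for t in queue:
    print(t["id"], priority(t["impact"], t["urgency"]), t["summary"])
```

The arithmetic is trivial; the hard part is knowing, for each ticket, what the customer is actually suffering.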

Effective Service Management in any industry starts with a fundamental understanding of the customer. Who are they? What makes them successful? What makes them tell other potential customers how great you are? What annoys them? What hurts them? What will trigger them to ask for a refund? What makes them go elsewhere altogether? And, importantly: what is it that we are obligated to provide them with?

An understanding of our service provision is fundamental to creating and delivering innovative solutions, and supporting them once they are there. This is where ITSM can lead, assist, and benefit the people pushing development forward.

The “S” in ITSM stands for “service”, not process. The heavy focus on process in this discussion (particularly two specific processes, close to the point of deployment) has been a big mistake by both communities. It is wholly incorrect to state that DevOps is predominantly contained within Release and Change Management. Code does not appear spontaneously in a vacuum. A whole set of interconnected events leads to its creation.

I have been in IT for two decades, and the DevOps movement is by far the biggest transformation in software development methodology in that time (I still have the textbooks from my 1990s university Computing course. These twenty-year-old tomes admonish the experimenting “hacker” and urge the systems analyst to complete comprehensive designs before a line of code is written, as if building software was perhaps equivalent to constructing a suspension bridge).

The cultural change brought by DevOps involves the whole technology department… the whole enterprise, in fact. Roles change, expectations change. There are questions about how to align processes, governance and support. We need to think about the structure of our teams in a post three-tier world. We need to consider new support methodologies like swarming. We need to thread knowledge management and collaboration through our organizations in innovative new ways.

But the one thing we really must do is to start with the customer.

Knowing what you DON’T know

Question Mark

I presented an Asset Management breakout session at the BMC Engage conference in Las Vegas today.  The slides are here:

An interesting question came up at the end: What percentage accuracy is good enough, in an IT Asset Management system?  It’s a question that might get many different answers.  Context is important: you might expect a much higher percentage (maybe 98%?) in a datacentre, but it’s not so realistic to achieve that for client devices which are less governable… and more likely to be locked away in forgotten drawers.

However, I think any percentage figure is pretty meaningless without another important detail: a good understanding of what you don’t know. Understanding what makes up the percentage of things that you don’t have accurate data on is arguably just as important as achieving a good positive score.

One of the key points of my presentation is that there has been a rapid broadening of the entities that might be defined as an IT Asset:

The evolution of IT Assets

The digital services of today and the future will likely be underpinned by a broader range of Asset types than ever.  A single service, when triggered, may touch everything from a 30-year-old mainframe to a seconds-old Docker instance. Any or all of those underpinning components may be of importance to the IT Asset Manager. After all, they cost money. They may trigger licensing requirements. They need to be supported. The Service Desk may need to log tickets against them.

The trouble is, not all of the new devices can be identified, discovered and managed in the same way as the old ones.  The “discover and reconcile” approach to Asset data maintenance still works for many Asset types, but we may need a completely different approach for new Asset classes like SaaS services, or volatile container instances.
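At its core, “discover and reconcile” is a set comparison between what the register believes and what discovery actually saw, plus an honest note of the classes discovery cannot see at all. The sketch below uses invented asset names purely to show the shape of the exercise:

```python
# A bare-bones sketch of "discover and reconcile". Asset names and the
# undiscoverable classes are illustrative assumptions for the example.

register = {"srv-001", "srv-002", "lap-104"}    # what the ITAM system believes exists
discovered = {"srv-001", "lap-104", "lap-207"}  # what the discovery scan actually found
undiscoverable_classes = {"SaaS subscriptions", "short-lived containers"}

matched = register & discovered   # confirmed assets
missing = register - discovered   # known but not seen (off-network? in a drawer?)
unknown = discovered - register   # seen but never registered

print(f"matched {len(matched)}, missing {len(missing)}, unregistered {len(unknown)}")
print("blind spots to report upward:", ", ".join(sorted(undiscoverable_classes)))
```

The last line matters as much as the first: the classes that never appear in either set are exactly the “what you don’t know” that a raw accuracy percentage hides.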

The IT Asset Manager may not be able to solve all those problems.  They may not even be in a position to have visibility, particularly if IT has lost its overarching governance role over what Assets come into use in the organization (SkyHigh Networks’ most recent Cloud Adoption and Risk Report puts the average number of Cloud Services in use in an enterprise at almost 1100. Does anyone think IT has oversight over all of those, anywhere?).

However, it’s still important to understand and communicate those limitations.  With CIOs increasingly focused on ITAM-dependent data such as the overall cost of running a digital service, any blind spots should be identified, understood, and communicated. It’s professional, it’s helpful, it enables a case to be made for corrective action, and it avoids something that senior IT executives hate: surprises.

Question mark image courtesy Cesar Bojorquez on Flickr. Used under Creative Commons licensing.

The Internet of Things has some interesting implications for Software Asset Managers (part 2)

Gartner Projection for Growth in Internet of Things, from 0.9bn in 2009 to 25bn in 2020.

In part one of this blog, I discussed the potential impact of an exponential growth of connected devices, in terms of today’s software licenses.

However, the Internet of Things is already bringing a different licensing challenge. An ever-increasing number of smart devices are being brought into homes, streets and workplaces. As with many rapid technical revolutions, the focus has been on innovation and market growth. Monetisation has been a secondary concern.  Reality, however, catches up in time, and now we are seeing a number of blogs and articles (such as this example from Wired’s Jerome Buvat) debating how IoT vendors might actually start to make some money back.

So why does this matter to Software Asset Managers?

In April 2014, Gartner published a research paper, aimed at IoT vendors, entitled “Licensing and Entitlement Management is One of the Keys to Monetizing the Internet of Things”. Its author, Lawrie Wurster, argued strongly that vendors should see IoT devices not so much as hardware assets, but as platforms for software:

“…to secure additional revenue, manufacturers need to recognize the role that embedded software and applications play in the IoT, and they need to monetize this value”

Gartner point out a number of big advantages for vendors:

  • New offerings can be created with software enhancements, increasing speed to market and removing the expensive retooling of production lines.
  • A single license can bundle hardware, feature offerings and supporting services such as consulting.
  • Vendors can create tiered offerings, enabling the customer to start with basic levels of capability, but with the possibility to purchase more advanced features as they mature.
  • Offerings can be diversified. The vendor can create specific regional offerings, or niche solutions for specialist markets, without needing to manufacture different hardware.

This is not merely analyst speculation, though. It is already happening, and there are already vendors like Flexera helping to enable it. Flexera are a well-known name to software asset managers (and my employer, BMC, works in close partnership with them in the SAM space), but another significant part of their business is the provision of licensing frameworks to vendors.  This year, they co-published a report with IDC which presented some striking findings from a survey of 172 device vendors:

  • 60% of the vendors are already bundling a mixture of device, software and services into licenses, using licensing and entitlement management systems to develop new offerings.
  • 32% already use software to enable upsold options. More than half will be doing this by 2017.
  • 27% already use a pay-per-use model, charging by the amount of consumption of the software on the devices. A further 22% plan to do so by 2017.
While there are clear advantages, both to vendors and consumers, there is a big unspoken challenge here. With licensing comes the difficulty of license management. This is not something that industry has done well even before the smart device revolution: billions of dollars are paid annually by enterprises in compliance settlements.

Many ITAM functions depend heavily on automated discovery of software installed and used on devices. However, today’s discovery tools may not adapt to discovering multiple new classes of IP-connected devices. Even when the devices are visible, it may not be easy to detect which licensed options have been purchased and enabled.

Another big challenge might arise from a lack of centralisation. The growth of smart devices will be seen right across the business: in vehicles, facilities, logistics, manufacturing, even on the very products the company itself is selling. With software, the IT department typically had some oversight, although even this has been eroding (SkyHigh Networks, for example, now put the average number of cloud services in an enterprise at over 900… and it’s likely that a significant number of these were bought outside IT’s line of sight).  Put bluntly: IT may simply have no mandate to control purchasing of licensed devices.

This puts the IT Asset Management function in an interesting position. Software Asset Management and Hardware Asset Management, traditionally seen as two related but separable personas, are going to converge when it comes to smart devices. More widely, businesses may need guidance and support, to learn the lessons from IT’s previous difficulties in this area, and avoid even greater compliance and cost-sprawl problems in future.

The Internet of Things has some interesting implications for Software Asset Managers (part 1)

Gartner Projection for Growth in Internet of Things, from 0.9bn in 2009 to 25bn in 2020.

The phrase “the Internet of Things” is believed to have been coined by Kevin Ashton, then a brand manager at Procter & Gamble, who in a 1999 presentation envisaged an exponential growth of connected devices as supply chain logistics got smarter.

Today, the Internet of Things is seen as one of the most significant technology trends, with analysts predicting that the number of connected, smart devices will grow to tens of billions over the next few years.

Much of this proliferation will happen in the workplace. For Software Asset Managers, this could have significant implications. The Internet of Things will not merely be a corner case for SAM: it could impact some of the biggest contracts, with key vendors like Oracle.

Oracle’s licensing rules are explained, at some length, in their Software Investment Guide. One commonly-used license type is called “Named-User Plus”.  Aimed at “environments where users and/or devices can be easily identified and counted”, the license model is illustrated with the following example:

Forklift-based licensing example from the Oracle Software Investment guide

Here, 15 fixed temperature devices are communicating directly with an Oracle database.  There are also 30 forklifts, each of which is fitted with a transponder which also writes to the database.

In this case, a total of 415 licenses are required: 15 for the temperature sensors, and 400 for the humans operating the forklifts (because “the forklift is not a non-human-operated device”).

In the past, I’ve used this example, only semi-seriously, to illustrate what might happen if the Internet of Things grows at the speed of the pundits’ projections. Recently, the 2015 Gartner Predicts report for the Internet of Things projected an almost 28-fold growth in connected devices from 2009 to 2020.

Gartner Projection for Growth in Internet of Things, from 0.9bn in 2009 to 25bn in 2020.

The year 2009 is rather pertinent, because Oracle’s forklift example seems to have first appeared in the Software Investment Guide in that year (here’s an example at the Internet Archive).

If we crudely apply Gartner’s connected-device growth rate to the number of devices shown in Oracle’s forklift example, there would be well over 1200 connected devices to license by 2020. That is roughly a trebling of the licence count, and hence the cost.
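For anyone who wants to check the back-of-an-envelope arithmetic, here it is. The sketch deliberately repeats the crude simplification the illustration relies on, namely that licence counts scale in step with device counts:

```python
# Crudely scaling Oracle's forklift example by Gartner's projected growth.
devices_2009 = 15 + 30    # temperature sensors + forklifts in the example
licences_2009 = 15 + 400  # the 415 licences in the original illustration
growth = 25.0 / 0.9       # Gartner: 0.9bn devices (2009) -> 25bn (2020), ~28x

devices_2020 = devices_2009 * growth
print(round(devices_2020))                     # 1250 -- "well over 1200"
print(round(devices_2020 / licences_2009, 1))  # 3.0 -- the "trebling"
```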

I have always laughingly acknowledged this as a crude illustration, until I chanced upon a March 2015 Forbes article titled “The Intelligent Forklift in the Age of the Industrial Internet of Things”:

Today’s “smart” forklift includes diagnostics that allow the equipment to signal when it needs to be serviced, speed controls, anti-slip technology that monitors wheel spin and improve traction on slick floors, collision detection, fork speed optimization, and more.

All of a sudden, my deliberately far-fetched example didn’t seem quite so unlikely.

As always, in Software Asset Management, the challenge is unlikely to be simple or contained. Software Asset Managers deal with many vendors, with many license types. Many of those licenses may depend on counts of connected devices. Many contracts pre-date the Internet of Things, which means costing models are outdated. Unfortunately, that’s unlikely to make the consumer any less liable.

In part 2 of this article, we will look at another major challenge already arising from the Internet of Things: the increasing application of software-style license terms to hardware.

Thoughts on #cfgmgmtcamp, and why ITSM needs to take note

Configuration Management Camp logo

Mention the role of “Configuration Manager” at an ITSM conference, and then use the same description at an Infrastructure Management conference, and your respective audiences will visualise completely different jobs*.

So, it was with some curiosity that I arrived this morning for the first day of the Configuration Management Camp in Ghent.

This particular event falls squarely into the infrastructure camp. It’s the realm of very clever people, doing very clever things with software-defined infrastructure. A glance at the conference sponsors makes this very clear: the list includes Puppet Labs, Chef, Pivotal, and a number of the other big (or new) names in orchestrated cleverness.

This is not the ITSM flavour of Configuration Management. However, today’s conference really made it clear that this new technology will become more and more relevant to the ITSM community. In short, ITSM should make itself aware of it.

The tools here have underpinned the growth of many recent household-name startups: including those internet “unicorns” like Facebook and Uber which have risen from zero to billions. They’ve enabled rapid, cloud-driven growth in a brand-new way. This new breed of companies have firmly entrenched DevOps methodologies, with ultra-rapid build, test, and release cycles, increasingly driven by packaged, repeatable scripts. This work primarily takes place on cloud-based open-source software stacks, so there’s not quite as much focus on resource and commercial constraints as we find in big enterprises.

But here’s the crux: methods like this will surely not remain solely the preserve of the startups.  As businesses get deeper into digital, there’s increasing pressure on the CIO from the CEO and CMO, to deliver more stuff, more quickly. As the frontline business pushes its demands onto IT, long development and deployment cycles simply won’t be acceptable. And, with equal pressure on IT’s costs and margins, these technologies and methods will become increasingly attractive.

Rapid innovation and deployment is becoming essential to business success:  Puppet Labs’ 2014 State of DevOps study recently claimed that strongly performing organizations are deploying code 30 times more often, with 50 times fewer failures. Okay, those numbers come from squarely within the DevOps camp, but they credibly reinforce past analysis such as MIT Sloan’s 2007 classic study on “the alignment trap”. IT can’t just be a ponderous back-end function if a company wants success.

That’s not to say that this conference is providing the answers. I’d argue that Configuration Management Camp is the “bottom up” conference to an ITSM conference’s “top down”.

Some of the toolsets on display here are very granular indeed. Many of the presentations were slides, or live demos, full of slightly arcane command-line-interface text input. We watched individual clusters get provisioned, quickly, impressively, but devoid of context.

However, there was also a sense of an increasing imperative to start connecting those dots: to define the applications and their inter-dependencies, and ultimately, the services. We’ve seen talks today with titles like “Configuring Services, not Systems”. Diane Mueller of Red Hat OpenShift described new tools which focus on the deployment of applications, rather than points of infrastructure.

I spoke with more than one person today who described DevOps starting to meet “reality” – that is, the day-to-day expectations of an enterprise environment.  There is a feeling of “cutting edge” here, probably justified, but the counterpoint is that this community tends to see “traditional” IT as slow and clunky.  Puppet Labs founder and CEO, Luke Kanies, dismissed this: “The enterprise space doesn’t move slowly because they’re stupid or they hate technology. It’s because they have… what’s the word… users”.

One thing that was clear today was that these technologies are only in their infancy. Gartner recently identified Software Defined Infrastructure as one of its key trends for 2015. Forrester have declared that Docker, the rapidly-emerging containerisation technology, will “live up to the hype and rule the cloud”.

And that’s why IT Service Management needs to take note.

We can’t understand the services we provide our customers, if we don’t have a grasp on the underlying infrastructure. We can’t formalise change control without getting the buy-in of the technical experts for whom rapid infrastructure shifts are a command-line prompt away. We can’t help prevent uncontrolled software spend, or inadvertent license breach, if we don’t proactively map our contracts onto the new infrastructure. With change cycles moving from weeks to seconds (it was claimed today, in one session, that Amazon deploys code to production on a sub-one-second interval), established ITSM practices will need to adapt significantly.

So, if it feels like ITSM’s “top-down” and infrastructure configuration management’s “bottom-up” are striving to find a connection point, it also feels like that join has not yet been made. It’s up to IT as a whole to make that connection, because if we don’t do it, we’ll end up repeating the lessons of the past. But faster.

It’s going to be a fun challenge to face. This is exciting stuff.

*ITSM: defender of the CMDB. Infrastructure: server deploying and tweaking wizard. Right?

Five years on from the iPad’s launch

iPad, front and rear

On 27th January 2010, Steve Jobs unveiled the iPad. If you have 90 minutes or so to spare, that launch presentation is a fascinating watch.

There was mixed reaction to that initial announcement. Some people called it right. Even days before the launch, Deloitte predicted that a new generation of “NetTabs”, such as Apple’s rumoured new product, would have a “breakout year”.  Other pundits were far more sceptical.  Bill Gates derided it: “It’s a nice reader, but there’s nothing on the iPad I look at and say, ‘Oh, I wish Microsoft had done it.'”

Gates called it wrong, particularly in backing netbooks over tablets. By 2011, tablets were selling twice as many units as netbooks.  (Note to readers in the future: Netbooks were small laptop computers which didn’t have touchscreens or detachable keyboards. Yes, I know!). Tablets, led overwhelmingly by the iPad, put a huge dent in the PC market, leading to its longest decline in history.

One of the most profound changes that the iPad brought was a change in user expectations. Since the advent of the WIMP (Windows-Icons-Menus-Pointer) interface in the 1980s, most business software took the form of a form.  Users interacted with clickable, typeable boxes.

Tablets, without a pointer or a physical keyboard, don’t adapt well to “paperless form” interfaces.  It’s hard to click with a finger in a small box, and it’s not great navigating from field to field, typing into them one by one, either. Some might have expected that hardware would make the shift to accommodate expectations. Instead, the applications themselves changed. The tablet brought in new user experience designs, with great success.

Developers and designers quickly built on the lessons they’d already learned with touchscreen smartphones. They learned to produce tactile apps, making the most of swipes and gestures and movement. The result is arguably much more intuitive and friendly, as anyone who has ever given a modern touchscreen device to a toddler will attest. It’s also a great opportunity to deliver a more productive experience. Applications like Shazam and Uber have taken this to an extreme: a single press on the screen replaces complex and clunky keyboard-and-mouse driven interactions.

This influence is even finding its way back to the desktop. PCs aren’t dead yet, and they are still holding their own as the superior platform for many use cases. But the influence of the iPad is pervasive. There’s a lot less tolerance amongst users for business applications which look like their tax form, with a save button. There is no tolerance at all for that in consumer applications.

Harnessing, and taking advantage of those expectations enables better software with better outcomes.  A tool like Smart IT enables tablet interaction for ITSM, but also carries many of the design and efficiency principles to the desktop.  The result is better productivity, a smarter Service Desk, and happier stakeholders on both sides of the service relationship.

Mobile ITSM isn’t only about field support: It’s about everyone.

iPhone 6

When we built the new Smart IT UX for BMC Remedy, we were determined to allow ALL IT support workers to be mobile. Why? Because everyone can benefit from mobility.

In the short history of enterprise mobility, mobile business applications have generally focused on two specific user groups. The first is the group of users for whom being on the road is the bulk of their job, such as field engineers: they go to a location, perform some tasks, move on to the next place.

The second group is those who might be based at a desk in an office, but who move around through a series of meetings, on and off-site. For these users, the primary purpose of mobility has been continuity of communication (with the weapon of choice having historically been the keyboard-equipped Blackberry).

For most other users, performing most other business tasks, the desktop computer (or desk-based notebook computer) still remained the key delivery mechanism for business applications.

Today, this is an outdated philosophy.

I recently stood in a lift at a customer’s office. There were four people in that lift, and there were seven smartphones on display.  Okay, two of them were mine (I’m a mobility product manager, after all), but that is still a notable average.

Even in the short moment offered by a journey of just a few floors, those office-based employees found a moment to communicate. Whether that communication was work-based or personal, one-way or two-way, is irrelevant. The point is that the time was being used to perform those tasks in a way that could not have happened just a few years ago.

In the December 2014/January 2015 edition of Fast Company, Larry Erwin, a Business Development Executive with Google, points out:

“When I was a kid growing up back in the ’90s, I was the only kid on my block with a Tandy 1000. Now kids who are 15, 16 years old have a supercomputer in their pocket”

The opportunity for business software tools to take advantage of that new computing power is huge, and growing. The very structure of the traditional office is under pressure, as users become more mobile and more technology enabled. That generation of teenagers will soon enter the workplace having had a completely different, and more universal grounding in technology than we select geeks who owned the Tandy 1000s and Sinclair Spectrums of yesteryear.

Mobility has already become a primary means of service consumption for customers, across a swathe of industries. Consider the process of taking a flight: with many airlines, the entire customer experience has been mobilized. Forrester Research outlined this beautifully in a 2014 illustration charting the timeline of mobile engagement for the airline passenger:

  • -2 Weeks: Book ticket, change reservation
  • -2 Days: Change seat, request upgrade
  • -2 Hours: Check in, check gate, departure time, lounge access
  • Flight: Arrival time, food order, movies, wi-fi, duty free
  • +2 Hours: Ground transport, lost luggage, navigation
  • +2 Days: Mileage status, reward travel, upcoming reservations
  • +2 Weeks: Mileage points earned, customer satisfaction survey
    (Source: Forrester)

Mobility for the consumer is now table stakes. So why not extend this to the people serving those consumers? Mobility, simply, provides great opportunities to enhance the role of the service representative.

When I arrived at a Westin Hotel in Chicago last month, I needed to speak with reception, and joined the line of people at the check-in desk. However, I was approached by a staff member with an iPad. They were quickly able to answer my question. The Starwood Hotels group, he told me, aims to keep its hotel staff on their feet, closer to customers, delivering service in a more dynamic way. Even the group’s CEO, Fritz van Paasschen, has abandoned his desk and PC: a Wall Street Journal article in November 2014 revealed that he works entirely on tablet and smartphone (van Paasschen’s office contains no desk – just a boardroom table and a couch).

In an IT Service Management environment, the case for mobility for field support users has long been clear: the alternative being a hotch-potch of printed dockets, slow communication, and inconvenient (or omitted) retrospective updates to systems of record, back at a field office.

But even in the office, it’s important to realise that good IT service, like all good customer service, combines communication, expertise, initiative and process. Many people involved in that process are not at their desk all day: they may be in meetings, or travelling between sites, or sitting with colleagues.

If those people can only access their support tools from their desk, then gaps appear. Twenty minutes waiting for input from an approver or technical expert could amount to twenty minutes more waiting time for the customer, or even a missed window to communicate with the next person in the chain (and hence an even bigger gap). Mobilising people – properly – fills those gaps, even in the office. And, as the IT department’s customers get more mobile, the best way to support them is often to become more mobile.

When we built the Smart IT interface for BMC Remedy, released in September 2014, this was the philosophy of our mobile approach: ITSM should be mobile for every user, whether they are field support technicians roaming a wide area, or a service desk agent taking a five minute break at the coffee machine.

The tool needed to provide all the features users need, including comprehensive teamwork and assistive capabilities, so that they are never forced to find a desk or wait for the slow boot-up of a traditional PC. We released the tablet version of Smart IT on day one, and the phone version, scheduled to go live in December 2014, has already been well received in demonstrations at customer events. As with Smart IT in general, there’s no additional cost over and above a standard Remedy ITSM license.

Our work with our ITSM customers has shown us, and them, that there are huge and real business benefits to a seamless and comprehensive mobile experience. Time not spent in front of a PC no longer needs to be time spent not helping customers.

Properly equipped, an increasingly mobile-focused user base is sure to find those benefits, and that means faster, better outcomes for IT’s customers.

Does SaaS mean the end of audits? The BSA don’t think so.

BSA document cover

In an industry which has struggled with year-on-year rises in the number of vendor-imposed software compliance audits, it can be tempting to see SaaS software, with its subscription pricing models, as a panacea. If we can replace a complex web of installation, site, and user-based licenses with a set of simple subscriptions, won’t that make the compliance challenge much simpler?

Unfortunately, it’s not as straightforward as that. This white paper (pdf, opens in new tab) by industry watchdog BSA – The Software Alliance – explores the breadth of ways in which it is possible to breach the terms and conditions of SaaS software.

A basic SaaS subscription for a simple client application might seem very easy to manage. BSA’s document, however, effectively arms auditors with a checklist of breaches to look for, including:

  • Accessing the service from prohibited geographies.
  • Sharing user accounts.
  • Allowing systems to pose as users.
  • Providing access to non-employees (e.g. contractors) where such access is prohibited.
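To make the "sharing user accounts" item concrete, here is a minimal sketch of how an auditor (or an internal compliance team) might flag suspect accounts from access logs. The log format, the 30-minute window, and the `flag_shared_accounts` function are all hypothetical illustrations, not anything prescribed by BSA's document:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical access log: (username, source IP, timestamp)
log = [
    ("alice", "10.0.0.5", datetime(2015, 3, 2, 9, 0)),
    ("alice", "192.168.1.20", datetime(2015, 3, 2, 9, 5)),
    ("bob", "10.0.0.7", datetime(2015, 3, 2, 9, 10)),
]

def flag_shared_accounts(entries, window_minutes=30):
    """Flag usernames seen from more than one IP within a short window,
    a common heuristic for detecting shared credentials."""
    by_user = defaultdict(list)
    for user, ip, ts in entries:
        by_user[user].append((ts, ip))
    flagged = set()
    for user, events in by_user.items():
        events.sort()
        for (t1, ip1), (t2, ip2) in zip(events, events[1:]):
            if ip1 != ip2 and (t2 - t1).total_seconds() <= window_minutes * 60:
                flagged.add(user)
    return flagged

print(flag_shared_accounts(log))  # alice: two different IPs five minutes apart
```

In practice the same approach extends naturally to the other checklist items, such as matching source IPs against prohibited geographies.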

For companies working with Cloud Service Providers, BSA goes into significant detail on the challenges they may face in remaining compliant with their existing licensing agreements, including intellectual property issues, geographical limitations, and the need to provide auditors with access to Cloud infrastructure environments.

BSA represents many of the most assertive organizations involved in license audits, and this document suggests, firmly, that the challenge of audits will not be disappearing soon.  As the document states, while Cloud-based software “solves some license compliance challenges, it also creates new ones”.

Why customer centricity is an approach, not a dogma

“Customer first” is a much debated philosophy in ITSM. Studies and reports frequently place customer-centricity high on the priority list of CIOs and CTOs. But sceptical commentators argue that IT may be falling victim to its own faddish obsession: we are not the same as some of the most high-profile service innovators such as those in the consumer marketplace, and we have different drivers and limitations.

Some attempts to deliver customer-centricity in ITSM may indeed be truly faddish: driven by fashion or the notion that something is a cool idea, without really delivering a better business result.

However, I’d argue that such actions are no more customer centric than ignoring the customer’s wishes altogether: if service is delivered on the basis of improperly considered ideas, it isn’t destined to be successful, unless we get lucky.

Customer centricity is not just about doing whatever the customer asks. It’s about marshalling the available support resources to deliver service in the most effective manner for the customer. Most importantly, it’s about methodically identifying what that “most effective manner” actually is. The IT industry hasn’t always been very good at that bit.

As an example: One of the most significant areas of debate, conflict and sheer revolution has been the “Bring Your Own Device” phenomenon. Much has been written on the subject, but occasionally a statistic appears which really illustrates the need for a more customer centric approach to corporate IT.

Last year, an APAC-focused survey by VMware, “A New Way of Life”, contained one such gem. The gem was not the survey’s finding that 83% of employees are bringing their own devices to work. This number is not unusual; many similar surveys have produced similar figures. It was a subsequent result which stood out: 41% of those respondents cited “contactability by customers” as a primary driver for their use of non-corporate items.

Let that sink in for a moment: Fully one-third of the overall respondents in this survey stated that they needed to augment the technology their employer is providing them, just to give customers adequate means to contact them (and that is before we even start to ask how many of that group are actually customer facing: the percentage might actually be much higher for the most relevant groups of users).
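The "one-third" figure above is just the two survey percentages multiplied together, which is worth seeing explicitly:

```python
# Survey arithmetic: 83% of respondents bring their own devices to work;
# 41% of that group cite "contactability by customers" as a primary driver.
byod_share = 0.83
contactability_share = 0.41

# Share of ALL respondents bringing devices specifically for customer contact
overall = byod_share * contactability_share
print(round(overall, 2))  # roughly 0.34, i.e. about one-third of everyone surveyed
```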

Surely, then, IT departments need to ask themselves why this is happening.

But this is the problem: We already did that. The IT organization already sat down and thought up the best policies it could: usually making considered judgements built on knowledge and experience, trying to find the best balance between security and customer requirements. But if more than a third of users still need to bring in their own technology to deal with customers, then something went wrong.

Maybe IT really didn’t learn enough because it didn’t get far enough away from its own desk. IT should be asking its customers why this is happening.

Even then, though, simply asking a question may not give us what we need to provide the best solutions. Customers, asked directly, may tell you what they think they need, based on their own frame of reference. Henry Ford, sadly, probably never uttered the quote widely attributed to him, about how customers would simply have asked for “faster horses” had he followed their stated wishes, but it’s still an important point.

Instead, the best way for IT to be customer centric is to leave our desks. We need to stand with our users as they go about their job. We should shadow field support people, sit in customer service call centres, spend a day with a sales rep, observe the warehouse for a while. As technology experts, we know more about the challenges of managing evolving technology in an increasingly complex corporate environment, but we don’t know our customers’ jobs like they do.

IT deeply knows technology, but the customer knows their job most deeply. The foundation for customer-centric support is simply the combination of those two pieces of knowledge.

(Image credit: doctorow on Flickr)

Cloud’s impact on ITSM tools: it’s not just about SaaS

Image of clouds in a deep blue sky

The last few years in the ITSM toolset market have been somewhat dominated by the subject of cloud delivery. Business has, of course, rapidly embraced the cloud as an application consumption option. ITSM has been no exception: new entrants and established brands alike have invested either in fully SaaS offerings, or in diversification of their offering to provide a choice between on-premise and cloud delivery models.

However, for the users of those tools, or their customers in the wider organisation using SaaS software, the delivery method alone does not necessarily change much. This is hugely important to remember. If software is consumed via a URL, it does not particularly matter whether the screens and features are served from the company’s own servers, or from a data centre halfway across the country or even the world.  There are often points of benefit for the SaaS end user, of course. But the mechanism alone? It’s a big deal for the buyer, or for the people managing the system, but it might be wholly transparent to everyone else.

It’s important, therefore, to look at what the real differences are to those real-life users: the people whose jobs are constantly underpinned by the applications. Now that we have a solid set of SaaS platforms underpinning ITSM, it seems right to focus on where cloud has already created dramatic user benefits outside the ITSM space. These huge trends show us what is possible:

Autonomy: When an employee stores or shares files using a cloud storage provider like Dropbox, they are detaching them from the traditional corporate infrastructure of hard drives, email, and groupware. When they use their own smartphone or tablet at work, as more than 80% of knowledge workers are doing, they are making a conscious decision to augment their toolset with technology of their own choice, rather than their company’s.

Collectivisation: Cloud applications have the potential to pull broad user groups together in a manner that no closed corporate system can ever hope to do. In the consumer space, this is the key difference between crowdsourced guidance and point expert advice (a battle in which the momentum is only going one way, as evidenced by numerous examples such as the disruption of the travel guide book market by Yelp and TripAdvisor). Aggregated information and real-time interaction are new and powerful disruptors of traditional tools and services, and Cloud is a huge enabler of both.

Communication: Facebook’s impact on social communication has been to close down distances and seamlessly bring groups of people together in an effortless manner. In a similar manner, Cloud platforms give us new ways to link disparate ITSM actors (whether customers or deliverers) across multiple systems, locations and organizations, without the requirement to build and maintain multiple, expensive ad-hoc paths of communication, and without some of the drawbacks of traditional channels such as email. Service, at least when things get complicated, is a team effort, and slick communication underpins that effort.

Cross-Platformity: Cloud underpinnings have enabled a new generation of applications to work seamlessly across different devices. An employee on a customer visit can use a tool like Evernote to dictate stand-up notes using a smartphone, before editing them on the train home using a tablet, and retrieving them on the laptop in the office the next morning. Nothing needs to be transferred: there is no fiddling with SD Cards or emails.

These are the principles which will change the game for ITSM’s front-line service providers, and its customers. Bringing some or all of them together opens up a huge range of possibilities:

  • Integrated service platforms, connecting the customer in new ways to those serving them (think of the “two halves of Uber”, for instance: separate applications for passenger and driver, with powerful linkage between the two for geolocation, payment and feedback).
  • Fully mobilised ITSM, delivering a truly cross platform “Evernote” experience with persistent personal data such as field notes.
  • Easy application linkages, driven by tools like IFTTT and Zapier, opening up powerful but controllable autonomy and user-driven innovation.
  • Integrated community interaction beyond the bounds of the single company instance, enabling knowledge sharing and greater self-help.
  • Highly contextual and assistive features, underpinned by broad learning of user needs and behaviours across large sets of users, and detailed analysis of individual patterns.
  • Open marketplaces for granular services and quick “plug and play” supplier offerings, rapidly consumed and integrated through open cloud-driven toolsets.
  • New collaboration spaces for disparate teams of stakeholders, bringing the right people together in a more effective way, to get the job done.
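The "easy application linkages" idea can be made concrete with a small sketch. Automation tools like IFTTT and Zapier typically work by receiving a JSON payload over a webhook when an event fires, then routing it onwards. Everything below is hypothetical for illustration: the event name, field names, and `build_ticket_event` helper are not any vendor's actual API:

```python
import json

def build_ticket_event(ticket_id, summary, priority):
    """Build a hypothetical 'ticket created' webhook payload that an
    automation tool could route on to chat, email, or another system."""
    return {
        "event": "ticket.created",
        "ticket": {
            "id": ticket_id,
            "summary": summary,
            "priority": priority,
        },
    }

payload = build_ticket_event("INC0042", "VPN outage in Chicago office", "high")
print(json.dumps(payload, indent=2))
```

The point is the simplicity: once an ITSM platform can emit and consume payloads like this, user-driven linkages between systems stop requiring bespoke integration projects.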

Autonomy, collectivisation, communication, cross-platformity: these are four key principles that are truly making a difference to ITSM. Cloud delivery is just the start.  It is now time to harness the real frontline benefits of this technological revolution.


Cloud image: https://www.flickr.com/photos/aztlek/2357990839.  Used under Creative Commons licensing.