Five years on from the iPad’s launch

iPad, front and rear

On 27th January 2010, Steve Jobs unveiled the iPad. If you have 90 minutes or so to spare, that launch presentation is a fascinating watch.

There was mixed reaction to that initial announcement. Some people called it right. Even days before the launch, Deloitte predicted that a new generation of “NetTabs”, such as Apple’s rumoured new product, would have a “breakout year”. Other pundits were far more sceptical. Bill Gates derided it: “It’s a nice reader, but there’s nothing on the iPad I look at and say, ‘Oh, I wish Microsoft had done it.’”

Gates called it wrong, particularly in backing netbooks over tablets. By 2011, tablets were selling twice as many units as netbooks. (Note to readers in the future: netbooks were small laptop computers which didn’t have touchscreens or detachable keyboards. Yes, I know!) Tablets, led overwhelmingly by the iPad, put a huge dent in the PC market, leading to its longest decline in history.

One of the most profound changes that the iPad brought was a change in user expectations. Since the advent of the WIMP (Windows, Icons, Menus, Pointer) interface in the 1980s, most business software has taken the form of a form: users interact with clickable, typeable boxes.

Tablets, without a pointer or a physical keyboard, don’t adapt well to “paperless form” interfaces. It’s hard to hit a small box accurately with a finger, and navigating from field to field, typing into each one in turn, is no better. Some expected the hardware to shift to accommodate those old expectations. Instead, the applications themselves changed. The tablet brought in new user experience designs, with great success.

Developers and designers quickly built on the lessons they’d already learned with touchscreen smartphones. They learned to produce tactile apps, making the most of swipes, gestures and movement. The result is arguably much more intuitive and friendly, as anyone who has ever handed a modern touchscreen device to a toddler will attest. It’s also an opportunity to deliver a more productive experience. Applications like Shazam and Uber have taken this to an extreme: a single press on the screen replaces complex and clunky keyboard-and-mouse driven interactions.

This influence is even finding its way back to the desktop. PCs aren’t dead yet, and they are still holding their own as the superior platform for many use cases. But the influence of the iPad is pervasive. There’s a lot less tolerance amongst users for business applications which look like a tax form with a save button. In consumer applications, there is no tolerance for that at all.

Harnessing and taking advantage of those expectations enables better software with better outcomes. A tool like Smart IT enables tablet interaction for ITSM, but also carries many of those design and efficiency principles to the desktop. The result is better productivity, a smarter Service Desk, and happier stakeholders on both sides of the service relationship.


Why customer centricity is an approach, not a dogma

“Customer first” is a much-debated philosophy in ITSM. Studies and reports frequently place customer-centricity high on the priority list of CIOs and CTOs. But sceptical commentators argue that IT may be falling victim to its own faddish obsession: we are not the same as the high-profile service innovators of the consumer marketplace, and we have different drivers and limitations.

Some attempts to deliver customer-centricity in ITSM may indeed be truly faddish: driven by fashion or the notion that something is a cool idea, without really delivering a better business result.

However, I’d argue that such actions are no more customer-centric than ignoring the customer’s wishes altogether: a service delivered on the basis of poorly considered ideas will only succeed if we get lucky.

Customer centricity is not just about doing whatever the customer asks. It’s about marshalling the available support resources to deliver service in the most effective manner for the customer. Most importantly, it’s about methodically identifying what that “most effective manner” actually is. The IT industry hasn’t always been very good at that bit.

As an example, one of the most significant areas of debate, conflict and sheer revolution has been the “Bring Your Own Device” phenomenon. Much has been written on the subject, but occasionally a statistic appears which really illustrates the need for a more customer-centric approach to corporate IT.

Last year, an APAC-focused survey by VMware, “A New Way of Life”, contained one such gem. The gem was not the survey’s finding that 83% of employees are bringing their own devices to work; that number is not unusual, and many similar surveys have produced similar figures. It was a subsequent result which stood out: of those who do bring their own devices, 41% cited “contactability by customers” as a primary driver for their use of non-corporate items.

Let that sink in for a moment: 41% of 83% is roughly a third of the whole sample. Fully one-third of the overall respondents in this survey stated that they needed to augment the technology their employer provides, just to give customers adequate means to contact them (and that is before we even ask how many of that group are actually customer-facing: the percentage might be much higher for the most relevant groups of users).

Surely, then, IT departments need to ask themselves why this is happening.

But this is the problem: we already did that. The IT organization already sat down and thought up the best policies it could, usually making considered judgements built on knowledge and experience, trying to find the best balance between security and customer requirements. But if more than a third of users still need to bring in their own technology to deal with customers, then something went wrong.

Maybe IT didn’t learn enough because it didn’t get far enough away from its own desk. IT should be asking its customers why this is happening.

Even then, though, simply asking a question may not give us what we need to provide the best solutions. Customers, asked directly, may tell you what they think they need, based on their own frame of reference. Henry Ford, sadly, probably never said that if he had asked his customers what they wanted, they would have told him “faster horses”, but the quote widely attributed to him still makes an important point.

Instead, the best way for IT to be customer-centric is to leave our desks. We need to stand with our users as they go about their jobs. We should shadow field support people, sit in customer service call centres, spend a day with a sales rep, observe the warehouse for a while. As technology experts, we know more than our customers do about the challenges of managing evolving technology in an increasingly complex corporate environment, but we don’t know their jobs like they do.

IT knows technology deeply, but the customer knows their own job best. The foundation for customer-centric support is simply the combination of those two pieces of knowledge.

(Image credit: doctorow on Flickr)

Tomorrow’s Future Today 2014: The End of IT’s Monopoly on Trust

Tomorrow's Future Today logo

On 17th February 2014 I presented at the Tomorrow’s Future Today 24-hour online conference. The presentation explored the impact of Uber, TripAdvisor, Yelp and other consumer-oriented services on established (and legacy) “providers of trust” such as guidebooks, regulators and establishments. In this context, it discussed the lessons corporate IT can learn from these huge trends.

You can view a recorded presentation, and my slides, here:

It was a real joy to be involved with this conference: it is a tremendous and free resource for the IT and technology field, with some great contributors.

Yale Shuts Down Student Course Selection Tool on Grounds of “Malice”

Yale’s network block screen

Hot on the heels of my recent “Alf’s Zoo” on Trust, here’s a fine example of the phenomenon.

Yale College has blocked a website which two of its students had created. The students, brothers Peter and Harry Yu, had created an alternative version of the prestigious institution’s own course planning tool. “We found that it was really hard to find and compare courses when we first arrived at Yale”, one of the brothers told the media. Yale students are given tremendous flexibility in choosing classes, and the brothers had identified a key frustration with the official solution: a lack of adequate comparison data. Their tool added students’ course evaluation ratings to the class listings – a feature which proved immensely popular.

The university’s response was heavy-handed, to say the least. After quibbling with the students about their product’s original name (a derivation of the official platform’s own title), it went a step further and blocked the site on campus. Students attempting to access the site were confronted instead with a Yale-branded screen, purportedly “to help guard against malicious activity on Yale networks”.

But the brothers’ product, far from being a tool of malice, is a great example of a new generation of tools and technologies aimed at making consumers of services better informed… at least, that is how its target audience sees it. The university sees it differently, using one of its monopolistic powers (its complete control over its own network) to assert sole control of the course selection process. Yale initially justified its actions on the basis that it had not permitted its course evaluation data to be used in this way. That would be a plausible (if mean-spirited) explanation, had the news not subsequently emerged that another student-built tool, a light-hearted random course selector, had also been blocked on the grounds of malice.

Of course, Yale is far from alone in behaving like this.

Alf’s Zoo – The Erosion of IT Trust

“Alf’s Zoo – This week, Jon Hall explains how Uber has changed our view of IT and the world. We no longer trust authorities as much as we trust our peers when it comes to selecting tools and services for work and life. Instead of prescriptive measures issued by so-called experts, we now rely heavily on peer-assisted selections, where we rate the vendor – and the vendor rates us. Imagine what customer reviews have done to online shopping, and ask yourself what IT can do to earn back some of the trust from its stakeholders. Jon provides one of the few concrete examples of how the consumerization of IT impacts the business.”

Why the CIO won’t go the same way as the VP of Electricity – an article at the ITSM Review

A dodo

Commoditisation is, without doubt, a massive and revolutionary trend in IT. In just a handful of years, a huge range of industrialised, cost-effective solutions have created rapid change, so much so that some commentators now predict the end of the corporate IT department altogether.

Info-Tech Research Group’s June 2013 article highlights a comparison made by some between today’s CIO and the “VP of Electricity” role apparently ubiquitous in large organisations at the turn of the last century…

More here at the ITSM Review: http://www.theitsmreview.com/2013/06/cio-vp-electricity/

Image credit

Painted into a Corner: Why Software Licensing isn’t getting simpler

It’s not easy being a Software License Manager.

It’s really not easy being a Software License Manager in a company which uses products from one or more of the “usual suspects” among the major software vendors.  Some of the largest have spent recent years creating a licensing puzzle of staggering complexity.

There’s an optimistic school of thought which supposes that the next big change in the software industry – a shift to service-oriented, cloud-based software delivery – will make this particular challenge go away.  But how true is this? To answer the question, we need to take a look back, and understand how we arrived at the current problem.

In short, today’s complexity was driven by the last big industry megatrend: virtualization.

In an old-fashioned datacenter, licensing was pretty straightforward. You’re running our software on a box? License that box, please. Some boxes have got bigger? Okay, count the CPUs, thanks. It was nothing that should have been a big issue for an organized Asset Manager with an effective discovery tool. But as servers started to look a bit less like, well, servers, things changed, and the change became rather dramatic.

The same humming metal boxes were still there in the data center, but the operating system instances they were supporting had become much more difficult to pin down.  Software vendors found themselves in a tricky situation, because suddenly there were plenty of options to tweak the infrastructure to deliver the same amount of software at a lower license cost. This, of course, posed a direct threat to revenues.

The license models had to be changed, and quickly. The result was a new set of metrics, based on assessment of the actual capacity delivered, rather than on direct counting of physical components.

In 2006, in line with a ramping-up of the processor core count in its Power5 server offering, IBM announced its new licensing rules. “We want customers to think in terms of ‘processor value units’ instead of cores”, said their spokesman. A key message was simplification, but that was at best debatable: CPUs and cores can be counted, whereas processor-specific unit costs have to be looked up. And note the timing: this was not something that arrived with the first Power5 servers; it arrived well into the lifetime of that particular product line. Oh, and by the way, older environments like Power4 were brought into the model, too.
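To make that shift in counting concrete, here is a minimal sketch of a full-capacity, value-unit style calculation. The per-core ratings below are invented for illustration only; the real figures come from the vendor’s published PVU tables.

```python
# Minimal full-capacity sketch: every activated core on the host is licensed,
# weighted by a per-core rating looked up for the processor family.
# The ratings below are invented placeholders, not IBM's published PVU table.

PVU_PER_CORE = {
    "POWER5": 100,        # assumed rating
    "POWER4": 100,        # assumed rating
    "x86_dual_core": 50,  # assumed rating
}

def full_capacity_pvus(processor_family: str, activated_cores: int) -> int:
    """Counting cores is easy; the per-core rating still has to be looked up."""
    return PVU_PER_CORE[processor_family] * activated_cores

print(full_capacity_pvus("POWER5", activated_cores=16))  # 1600 PVUs to license
```

The counting itself is trivial; the point is that the answer now depends on a lookup table the Asset Manager has to track, not just on what the discovery tool can see.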

And what about the costs? “This is not a pricing action. We aren’t changing prices,” added the spokesman.

For a vendor, that assertion is important. Changing pricing frameworks is a dangerous game for software companies, even if on paper it looks like a zero-sum game. The consequences of deviating significantly from current prices can be severe: the customers whose prices rise tend to leave, while those whose prices drop pay you less. Balance isn’t enough – you need to make the transition smooth for every customer.

Of course, virtualization didn’t stand still from August 2006 onwards, and hence neither did the license models.  With customers often using increasingly sophisticated data centers, built on large physical platforms, the actual processing capacity allocated to software might be significantly less than the total capacity of the server farm.  You can’t get away with charging for hundreds of processors where software is perhaps running on a handful of VMs.

So once again, those license models needed to change.  And, as is typical for revisions like this, sub-capacity licensing was achieved through the addition of more details, and more rules.  It was pretty much impossible to make any such change reductive.

This trend has continued:  IBM’s Passport Advantage framework, at the time of writing, has an astonishing  46 different scenarios modelled in its processor value unit counting rules, and this number keeps increasing as new virtualization technologies are released. Most aren’t simple to measure: the Asset Manager needs access to a number of detailed facts and statistics.  Cores, CPUs, capacity caps, partitioning, the ability of VMs to leap from one physical box to another – all of these and more may be involved in the complex calculations. Simply getting hold of the raw data is a big challenge.
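As a deliberately simplified sketch of the sub-capacity idea (the rating, the cap rule and the VM layout below are assumptions for illustration, not IBM’s actual Passport Advantage rules):

```python
# Simplified sub-capacity sketch: license the lower of the virtual capacity
# allocated to the product and the full physical capacity of the host.
# The rating, the cap rule and the VM layout are invented for illustration.

PVU_PER_CORE = 70  # assumed rating for this host type

def sub_capacity_pvus(host_cores: int, vcores_running_product: int) -> int:
    """Count vCPUs allocated to the product, capped at the host's core count."""
    chargeable_cores = min(vcores_running_product, host_cores)
    return chargeable_cores * PVU_PER_CORE

# A 32-core host where the product runs on two VMs with 4 vCPUs each:
print(sub_capacity_pvus(host_cores=32, vcores_running_product=8))  # 560 PVUs
# Full-capacity licensing of the same host would be 32 * 70 = 2,240 PVUs.
```

Even this toy version needs per-VM allocation data as well as the physical core count, which is exactly the data-gathering problem described above.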

Another problem for the Software Asset Manager is the often significant and annoying lag between the emergence of a new technology and the revision of software pricing models to fit it. In 2006, Amazon transformed IT infrastructure with its cloud offering; Oracle’s guidelines for applying its licensing rules in that environment only date back to 2008. Until the models are clarified, there’s ambiguity. Afterwards, there are more rules to account for.

(Incidentally, this problem is not limited to server-based software.  A literal interpretation of many desktop applications’ EULAs can be quite frightening for companies using widespread thin-client deployment. You might only have one user licensed to work with a specialist tool, but if they can theoretically open it on all 50,000 devices in the company, a bad-tempered auditor might be within their rights to demand 50,000 licenses.)

License models catch up slowly, and they catch up reactively, only when vendors feel the pressure to change them. This highlights another problem: despite the fine efforts of industry bodies like the SAM Standards Working Group, vendors have not found a way to collaborate.  As the IBM spokesman put it in that initial announcement: “We can’t tell the other vendors how to do their pricing structure”.

As a result, the problem is not just that these license models are complex. There are also lots of them. Despite fundamentally measuring the same thing, Oracle’s Processor Core Factors are completely different to IBM’s Processor Value Units. Each vendor deals with sub-capacity in its own way, not just in terms of counting rules but even in terms of which virtual systems can be costed on this basis. Running stuff in the cloud? There are still endless uncertainties and ambiguities. Each vendor is playing a constant game of catch-up, and each is separately writing its own rules for the game. And meanwhile, their auditors knock on the door more and more.
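To get a feel for how divergent this counting becomes, consider one 16-core host measured under two invented weighting schemes, one loosely in the style of a per-core factor and one in the style of per-core value units. The numbers below are illustrative assumptions, not either vendor’s published figures.

```python
# The same 16-core host, counted under two invented weighting schemes.
# Neither table reproduces Oracle's core factors or IBM's PVU ratings;
# the point is that the outputs aren't even expressed in the same units.

activated_cores = 16

# Scheme A: fractional per-core factor, producing "processor licences".
core_factor = 0.5                                      # assumed factor
processor_licences = activated_cores * core_factor     # 8.0 processor licences

# Scheme B: per-core value units, producing "value units".
value_units_per_core = 70                              # assumed rating
value_units = activated_cores * value_units_per_core   # 1,120 value units

print(processor_licences, value_units)
```

Two answers, two units, one host: the Asset Manager has to maintain both calculations, and the data feeding them, in parallel.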

Customers, of course, want simplification. But the industry is not delivering it. And the key problem is that pricing challenge.  A YouTube video from 2009 shows Microsoft’s Steve Ballmer responding to a customer looking for a simpler set of license models.  An edited transcript is as follows:

Questioner:

“Particularly in application virtualization and general virtualization, some of Microsoft’s licensing is full of challenging fine print…

…I would appreciate your thoughts on simplifying the licensing applications and the licensing policies.”

Ballmer:

“I don’t anticipate a big round of simplifying our licenses.  It turns out every time you simplify something, you get rid of something.  And usually what we get rid of, somebody has used to keep their prices down…

…The last round of simplification we did of licensing was six years ago… it turned out that a lot of the footnotes, a lot of the fine print, a lot of the caveats, were there because somebody had used them to reduce their costs…

…I know we would all like the goal to be simplification, but I think the goal is simplification without price increase. And our shareholders would like it to be a simplification without price decreases…

…I’d say we succeeded on simplification, and our customer satisfaction numbers plummeted for two and a half years”.

In engineering circles there is a wise saying: “Strong, light, cheap: Pick any two”. The lesson from the last few years in IT  is that we can apply a similar mantra to software licensing:  Simple, Flexible, Consistently Priced: Pick any two.

Vendors have almost always chosen the latter two.

This brings us to the present day, and the next great trend in the industry. According to IDC’s 2011 Software Licensing and Pricing survey, a significant majority of the new commercial applications brought to market in 2012 would be built for the cloud. Vendors are seeing declining revenues from perpetual license models, while subscription-based revenue increases. Some commentators view this as a trend that will lead to the simplification of software license management. After all, people are easier to count than dynamic infrastructure… right?

However, for this simplification to occur, the previous pattern has to change, and it’s not showing any sign of doing so. The IDC survey reported that nearly half of the vendors who were imminently moving to usage-based pricing models still had no means to track that usage. But no tracking will mean no revenue, so we know they’ll need to implement something. Once again, the software industry is acting individually and reactively rather than collaboratively, and that will mean different metrics, multiple data collection tools, and a new set of complex challenges for the software asset manager.

And usage-based pricing is no guarantee of simplicity. A glance at the Platform-as-a-Service sector illustrates this problem neatly. Microsoft’s Azure, announced in 2008 and launched in 2010, promised new flexibility and scalability… and simplicity. But again, flexibility and simplicity don’t seem to be sitting well together.

To work out the price of an Azure service, the Asset Manager needs to understand a huge range of facts, including (but by no means limited to) usage times, usage volumes, and secondary options such as caching (both performance and geographic), messaging and storage. Got all that? Good, because now we have to get to grips with the contractual complications: MSDN subscriptions have to be accounted for, along with the impact of any existing Enterprise Agreements. Microsoft recognized the challenge and provided a handy calculator, only to acknowledge that “you will most likely find that the details of your scenario warrant a more comprehensive solution”. Simple, Flexible, Consistently Priced: Pick any two.
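To see why even a handy calculator struggles, here is a back-of-the-envelope sketch of the kind of arithmetic involved. Every rate below is an invented placeholder; real cloud pricing has many more meters, tiers and agreement-level discounts.

```python
# Back-of-the-envelope cloud cost sketch. Every rate is an invented placeholder;
# a real estimate involves many more meters, tiers and agreement-level discounts.

def monthly_estimate(instance_hours: float, rate_per_hour: float,
                     storage_gb: float, rate_per_gb: float,
                     egress_gb: float, rate_per_egress_gb: float,
                     messages_million: float, rate_per_million: float) -> float:
    compute = instance_hours * rate_per_hour
    storage = storage_gb * rate_per_gb
    bandwidth = egress_gb * rate_per_egress_gb
    messaging = messages_million * rate_per_million
    return compute + storage + bandwidth + messaging

# Two instances running all month, plus storage, egress and messaging:
print(monthly_estimate(instance_hours=2 * 730, rate_per_hour=0.12,
                       storage_gb=500, rate_per_gb=0.07,
                       egress_gb=200, rate_per_egress_gb=0.09,
                       messages_million=10, rate_per_million=1.00))
```

Even this toy model ignores caching options, MSDN benefits and Enterprise Agreement terms, which is precisely the point.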

And, of course, the old models won’t go away either. Even in a service-oriented future, there will still be on-premise IT, particularly amongst the organizations providing those services.

Software vendors have painted themselves into a corner with their license models, and unless they can find a way to break that pattern, we face a real risk that the license management challenge will get even more complex. Entrenched complexity in the on-premise sector will be joined by a new set of challenges in the cloud.

The pattern needs to change. If it doesn’t change, be nice to your Software Asset Manager. They’ll need a coffee.