Congestion charging… in IT?

Congestion Charge sign in London

Does your organization understand the real costs of the congestion suffered by your IT services? Effective management and avoidance of congestion can deliver better service and reduced costs, but some solutions can be tough to sell to customers.

The Externalities of Congestion

In 2009, transport analyst and activist Charles Komanoff published, in an astonishingly detailed spreadsheet, his Balanced Transportation Analysis for New York City.  His aim was to explore the negative external costs caused by the vehicular traffic trying to squeeze into the most congested parts of the city each day.

His conclusion? In the busiest time periods, each car entering the business district generates congestion costs of over $150.

Congestion Costs outlined in Komanoff’s Balanced Transportation Analysis

Komanoff’s spreadsheet can be downloaded directly here. Please be warned: it’s a beast – over three megabytes of extremely complex and intricate analysis. Reuters writer Felix Salmon succinctly stated that “you really need Komanoff himself to walk you through it”.

Komanoff’s work drills into the effect of each vehicle moving into the Manhattan business district at different times of day, analyzing the cascading impact of each vehicle on the other occupants of the city.  The specific delay caused by any given car to any other given vehicle is probably tiny, but the cumulative effect is huge.

The Externalities of Congested IT Services

Komanoff’s city analysis models the financial impact of a delay to each vehicle, including commercial vehicles carrying paid professionals travelling to fulfil charged-for business services.  With uncontrolled access to the city, there is no consideration of the “value” of each journey, and thus high-value traffic (perhaps a delivery of expensive retail goods to an out-of-stock outlet) gets no prioritization over any lower-value journey.

Congested access to IT resources, such as the Service Desk, has equivalent effects.  Imagine a retail unit losing its point-of-sale systems on the Monday morning that HQ staff return from their Christmas vacation.  The shop manager’s frantic call may find itself queued behind dozens of forgotten passwords.  Ten minutes of lost shop business will probably cost far more than a ten-minute delay in unlocking user accounts.

That’s not to say that each password reset isn’t important.  But in a congested situation, each caller is impacted by, and impacts, the other people calling in at the same time.

The dynamics and theory of demand management in call centers have been extensively studied and can be extremely complex (a Google search reveals plentiful studies, often with deep mathematical analysis; this is by no means the most complex example!).

Fortunately, we can illustrate the effects of congestion with a relatively simple model.

Our example has the following components:

  • Four incoming call lines, each manned by an agent
  • A group of fifteen customers, dialling-in at or after 9am, with incidents which each take 4 minutes for the agent to resolve.
  • Calls arriving at discrete 2-minute intervals (this is the main simplification, but for the purposes of this model, it suffices!)
  • A call queuing system which can line up unanswered calls.

When three of our customers call in each 2-minute period, we quickly start to build up a backlog of calls:

With three calls arriving at the start of each two-minute interval, a queue quickly builds.

We’ve got through the callers in a relatively short time (everything is resolved by 09:16). However, that has come at a price: 30 customer-minutes of waiting time.

If we spread out the demand slightly, however, and assume that only two customers call in at the start of each two-minute period, the difference is impressive:

If the arrival rate is slowed to two callers in each time period, no queue develops

Although a few users (customers 3, 7, 11 and 15) get their issues resolved a couple of minutes later in absolute terms, there is no hold time for anyone.  Assuming there are more productive things a user can be doing other than waiting on hold (notwithstanding their outstanding incident), the gains are clear.  In the congestion scenario, the company has lost half an hour of labour, to no significant positive end.
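The two scenarios above can be reproduced with a small first-come-first-served simulation. This is a sketch under the model’s stated assumptions (four agents, four-minute calls, arrivals at two-minute intervals); the function name and structure are illustrative, not part of the original analysis:

```python
import heapq

def simulate(arrivals, agents=4, service=4):
    """Return (total customer-minutes of waiting, last finish time)
    for calls served first-come-first-served by identical agents."""
    free = [0] * agents                       # minute each agent is next free
    heapq.heapify(free)
    total_wait, last_finish = 0, 0
    for t in sorted(arrivals):                # arrival minutes after 09:00
        start = max(t, heapq.heappop(free))   # earliest available agent
        total_wait += start - t
        last_finish = max(last_finish, start + service)
        heapq.heappush(free, start + service)
    return total_wait, last_finish

congested = [t for t in range(0, 10, 2) for _ in range(3)]      # 3 callers/interval
smoothed = [t for t in range(0, 16, 2) for _ in range(2)][:15]  # 2 callers/interval

print(simulate(congested))  # → (30, 16): 30 minutes queued, all done by 09:16
print(simulate(smoothed))   # → (0, 18): no queue at all, done by 09:18
```

The heap simply tracks when each agent next becomes free, so each caller is assigned to the earliest available agent, which is all the model requires.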

Of course, while Komanoff’s analysis is comprehensive, it is one single model and can’t be assumed completely definitive. But it is undeniable that congestion imposes externalities.

Komanoff’s proposed solution involves a number of factors, including:

  • A congestion charge, applying at all times of day with varying rates, for anyone wishing to bring a car into the central area of the city.
  • Variable pricing on some alternative transportation methods such as trains, with very low fares at off-peak times.
  • Completely free bus transport at ALL times.

Congestion management of this kind is nothing new, of course.  London, having failed to capitalize on its one big chance to remodel its ancient street layout, introduced a flat-fare central congestion charge in 2003.  Other cities have followed suit (although proposals in New York have not come to fruition). Peak time rail fares and bridge tolls are familiar concepts in many parts of the world. Telecoms, the holiday industry, and numerous other sectors vary their pricing according to periodic demand.

Congestion Charging in IT?

Presumably, then, we can apply the principles of congestion charging to contested IT resources, implementing a variable cost model to smooth demand? In the case of the Service Desk, this may not always be straightforward, simply because in many cases the billing system is not a straightforward “per call” model. And in any case, how will the customer see such a proposal?

Nobel Laureate William S. Vickrey is often described as “the father of congestion charging”, having originally proposed it for New York in 1952. Addressing the objections to his idea, he said:

“People see it as a tax increase, which I think is a gut reaction. When motorists’ time is considered, it’s really a savings.”

If the customer agrees, then demand-based pricing could indeed be a solution. A higher price at peak times could discourage lower priority calls, while still representing sufficient value to those needing more urgent attention. This model will increasingly be seen for other IT services such as cloud-based infrastructure.

There are still some big challenges, though. Vickrey’s principles included the need to vary prices smoothly over time. If prices suddenly fall at the end of a peak period, this generates spikes in demand which themselves may cause congestion. In fact, as our model shows, the impact can be worse than with no control at all:

If we implement a peak/off-peak pricing system, this can cause spikes. In this case, all but four of the customers wait until a hypothetical cheaper price band starting at 09:08, at which point they all call. There is even more lost time (40 minutes) in the queue than before.

This effect is familiar to many train commuters (the 09:32 train from Reading to London, here in the UK, is the first off-peak service of the morning, and hence one of the most crowded).  However, implementing smooth pricing transitions can be complex and confusing compared to more easily understood fixed price brackets.
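The price-band spike can be checked with the same kind of first-come-first-served arithmetic. This is a hedged sketch: it assumes four agents, four-minute calls, four customers calling at 09:00, and the remaining eleven all calling at 09:08; the exact timing of the four on-time callers is an assumption, not taken from the table:

```python
import heapq

def queue_minutes(arrivals, agents=4, service=4):
    """Total customer-minutes spent waiting, first-come-first-served."""
    free = [0] * agents            # minute each agent is next free
    heapq.heapify(free)
    total = 0
    for t in sorted(arrivals):
        start = max(t, heapq.heappop(free))
        total += start - t
        heapq.heappush(free, start + service)
    return total

spike = [0] * 4 + [8] * 11   # four on-time callers, then the 09:08 rush
print(queue_minutes(spike))  # → 40 customer-minutes: worse than no control
```

All four agents are idle by 09:08, but eleven simultaneous arrivals can only be drained four at a time, so the tail of the rush waits up to eight minutes.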

Amazon’s spot pricing of its EC2 service is an interesting alternative.  In effect, it’s still congestion pricing, but it’s set by the customer, who is able to bid their own price for spare capacity on the Amazon cloud.


Even if the service is not priced in a manner that can be restructured in this way, or if the proposition is not acceptable to the customer, there are still other options.

Just as Komanoff proposes a range of positive and negative inducements to draw people away from the congested peak-time roads, an IT department might consider a range of options, such as:

  • Implementation of a service credits system, where customers are given a positive inducement to access the service at lower demand periods, could enable the provider to enhance the overall service provided, with the savings from congestion reduction passed directly to the consumer.
  • Prioritization of access, whereby critical tasks are fast-tracked ahead of more routine activities.
  • Variable Service Level Agreements, offering faster turnarounds of routine requests at off-peak times. Again, if we can realise Vickrey’s net overall saving, it may be possible to show enhanced overall service without increased overall costs.
  • Customer-driven work scheduling. Apple’s Genius Bar encourages customers to book timeslots in advance. This may result in a longer time to resolution than a first-come-first-served queue, but it also gives the customer the opportunity to choose a specific time that may be more convenient to them anyway. Spare capacity still allows “walk up” service to be provided, but this may involve a wait.
  • Customer self-service solutions such as BMC’s Service Request Management. Frankly, this should be a no-brainer for many organizations. If we have an effective solution which allows customers to both log and fulfil their own requests, we can probably cut a significant number of our 15 customer calls altogether. Self-service systems offer much more parallel management of requests, so if all 15 customers hit our system at once, we’d not expect that to cause any issue.

Of course, there remains the option of spending more to provide a broader capacity, whether this is the expansion of a helpdesk or the widening of roads in a city.  However, when effective congestion management can be shown to provide positive outcomes from unexpanded infrastructure, shouldn’t this be the last resort?

(congestion charge sign photo courtesy of mariodoro on Flickr, used under Creative Commons licensing)

Native mobile apps… an important reminder from XKCD


“If I click ‘no’, I’ve probably given up on everything, so don’t bother taking me to the page I was trying to go to. Just drop me on the homepage. Thanks.”

Native apps can be by far the best way to deliver an application or service, but only if well designed. And websites need to remember that “No” means “No”.  Let’s be careful to differentiate between casual website visitors, and those users of the service who’d really benefit most from a dedicated application.

(Quote from the ever-excellent XKCD.)

When critical IT suppliers fail, the impact can be severe

The collapse of 2e2 is a warning to all IT organizations.

As IT evolves into a multi-sourced, supplier driven model, how many companies understand the risks?

One of the big stories in corporate IT this week has been the troubles of the IT service provider 2e2.  2e2 are a supplier of a range of outsourcing, resourcing and support services. As liquidators moved in, staff were cut and services ceased. It’s horrible for the staff, of course, and I wish everybody all the best. For customers, particularly datacenter customers, the situation is uncertain.

Increasingly, the role of IT within a large organization is to be a broker of services, driven by a range of internal functions and external suppliers. This trend continues to drive growth in the outsourcing market, which Gartner now estimates to be in excess of a quarter of a trillion US dollars.

This hybrid model means that a typical IT service, as understood by its customers and stakeholders, will be dependent on both internal capabilities, and the performance and viability of multiple external suppliers. The collapse of 2e2 is a reminder that suppliers sometimes fail. When this happens, not all organizations are prepared for the consequences.

A failed service can kill a business

A harsh truth:  The failure of a critical business service can kill a profitable multi-billion-dollar company in a matter of months.  It has happened before, and will happen again.

One of the biggest examples of this is the billing system failure that caused Independent Energy to collapse.  A British darling of the dot-com stock market boom, Independent Energy was a new energy supplier, operating a multi-sourced supply chain model to compete with large post-privatization incumbents.

The model was initially a big success.  Floating in 1996 for 15 million pounds, it had risen in value to over 1 billion pounds (approx US$1.6bn at current rates) by 2000.

However, in February of that year, the company was forced to admit that it was facing serious problems billing its customers, due to failings in its systems and processes. Complex dependencies on external companies and previous suppliers were compounded by internal IT issues, and the effect was devastating.

The company simply couldn’t invoice the consumers of its products. The deficit was significant: the company itself stated that it was unable to bill some 30% of the hundreds of millions of pounds outstanding from its customers.

The company itself seemed otherwise to be healthy, and even reported profits in May, but the billing problems continued.  Months later, it was all over:  in September 2000, Independent Energy collapsed with at least 119 million pounds of uncollected debt. The remains of the company were purchased by a competitor for just ten million pounds.

The impact on customers

For 2e2’s customers, the immediate problem is a demand for funding to keep the lights on in the datacenter. The biggest customers are reported to have been asked for GB£40,000 (US$63,000) immediately, with smaller customers receiving letters demanding GB£4,000 (US$6,300). Non-payment means disconnection of service. Worse still, there is the additional threat that if overall funding from all customers is insufficient, operations might shut down regardless:

The Administrator’s letter to 2e2 customers warns that any customers unable to pay the demanded charge will lose all services immediately, and that ALL services may cease if the total required amount is not raised.

But the complications don’t end there. The infrastructure in the 2e2 datacenters is reportedly leased from another supplier, according to an article in the UK IT journal The Register. Customers, the article claims, may face additional payments to cover the outstanding leasing costs for the equipment hosting their data and services.

A key lesson: It’s vital to understand the services you are providing

The events we’ve discussed reinforce the importance of understanding, in detail, your critical IT services. The Service Model is key to this, even in simple examples such as this one:

A sketch model of a simple service-driving application

Even for this simplistic example, we can see a number of critical questions for the organization. Here are just a few:

  • How do I know which equipment in the datacenter belongs to us, which belongs to the customer, and which is leased? Recently, a number of companies experienced devastating flooding as a result of the storm which hit the USA’s Eastern Seaboard. Many are now struggling to identify their losses for insurance purposes. This can cause a serious cashflow hit, as equipment has to be replaced regardless of the fact that payouts are delayed.
  • What happens if our cloud-based archiving provider gets into difficulties? In this situation, the immediate impact on live service may be limited, but in the medium and longer term, how will billing and vital financial record keeping be affected?
  • Our client tool is dependent on a 3rd party platform. What risks arise from that? A few days ago, Oracle released a critical fix which patched 50 major security holes.  Updates like this are nothing unusual, of course.  But there are many examples of major security breaches caused by unpatched platforms (the Information Commissioner’s Office recently cited this error in its assessment of the Sony Playstation Network failure, adding a £250,000 fine to the huge costs already borne by Sony as a result of the collapse). Of course, there are other risks to consider too: How long will the supplier continue to support and maintain the platform, and what might happen if they stop?

The required understanding of a service can only be achieved with effective planning, management and control of the components that make it up. Is this the most critical role of IT Service Management in today’s organization?

ITAM 2015: The evolving role of the IT Asset Manager

In a previous post, we discussed the fact that IT Asset Management is underappreciated by the organizations which depend on it.

That article discussed a framework through which we can measure our performance within ITAM, and build a structured and well-argued case for more investment into the function.  I’ve been lucky enough to meet some of the best IT Asset Management professionals in the business, and have always been inspired by their stories of opportunities found, disasters averted, and millions saved.  ITAM, done properly, is never just a cataloging exercise.

As the evolution of corporate IT continues at a rapid pace, there is a huge opportunity (and a need) for Asset Management to become a critical part of the management of that change.  The role of IT is changing fundamentally: Historically, most IT departments were the primary (or sole) provider of IT to their organizations. Recent years have seen a seismic shift, leaving IT as the broker of a range of services underpinned both by internal resources and external suppliers. As the role of the public cloud expands, this trend will only accelerate.

Here are four ways in which the IT Asset Manager can ensure that their function is right at the heart of the next few years’ evolution and transition in IT:

1: Ensure that ITIL v3’s “Service Asset and Configuration Management” concept becomes a reality

IT Asset Management and IT Service Management have often, if not always, existed with a degree of separation. In Martin Thompson’s survey for the ITAM Review, in late 2011, over half of the respondents reported that ITSM and ITAM existed as completely separate entities.

Despite its huge adoption in IT, previous incarnations of the IT Infrastructure Library (ITIL) framework did not significantly detail IT Asset Management as many practitioners understand it. Indeed, the ITIL version 2 definition of an Asset was somewhat unhelpful:

“Asset”, according to ITIL v2:
“Literally a valuable person or thing that is ‘owned’, assets will often appear on a balance sheet as items to be set against an organization’s liabilities. In IT Service Continuity and in Security Audit and Management, an asset is thought of as an item against which threats and vulnerabilities are identified and calculated in order to carry out a risk assessment. In this sense, it is the asset’s importance in underpinning services that matters rather than its cost”

This narrow definition needs to be read in the context of ITIL v2’s wider focus on the CMDB and Configuration Items, of course, but it still arguably didn’t capture what Asset Managers all over the world were doing for their employers: managing the IT infrastructure supply chain and lifecycle, and understanding the costs, liabilities and risks associated with its ownership.

ITIL version 3 completely rewrites this definition, and goes broad. Very broad:

“Service Asset”, according to ITIL v3:
“Any Capability or Resource of a Service Provider.”

“Resource”, according to ITIL v3:
“[Service Strategy] A generic term that includes IT Infrastructure, people, money or anything else that might help to deliver an IT Service. Resources are considered to be Assets of an Organization.”

“Capability”, according to ITIL v3:
“[Service Strategy] The ability of an Organization, person, Process, Application, Configuration Item or IT Service to carry out an Activity. Capabilities are intangible Assets of an Organization.”

This is really important. IT Asset Management has a huge role to play in enabling the organization to understand the key components of the services it is providing. The building blocks of those services will not just be traditional physical infrastructure, but will be a combination of physical, logical and virtual nodes, some owned internally, some leased, some supplied by external providers, and so forth.

In many cases, it will be possible to choose from a range of such options, and a range of suppliers, to fulfill any given task. Each option will still bear costs, whether up-front, ongoing, or both. There may be a financial contract management context, and potentially software licenses to manage. Support and maintenance details, both internal and external, need to be captured.

In short, it’s all still Asset management, but the IT Asset Manager needs to show the organization that the concept of IT Assets wraps up much more than just pieces of tin.

2: Learn about the core technologies in use in the organization, and the way they are evolving

A good IT Asset Manager needs to have a working understanding of the IT infrastructure on which their organization depends, and, importantly, the key trends changing it. It is useful to monitor information sources such as Gartner’s Top 10 Strategic Technology Trends, and to consider how each major technology shift will impact the IT resources being managed by the Asset Manager.  For example:

Big Data will change the nature of storage hardware and services.  Estimates of the annual growth rate of stored data in the corporate datacenter typically range from 40% to over 70%. With this level of rapid data expansion, technologies will evolve rapidly to cope.  Large monolithic data warehouses are likely to be replaced by multiple systems, linked together with smart control systems and metadata.
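To put those growth rates in perspective, here is a quick compound-growth check (the five-year horizon is an arbitrary illustration, not a figure from the estimates cited):

```python
# Compounding the cited 40%-70% annual growth rates over five years
for rate in (0.40, 0.70):
    factor = (1 + rate) ** 5
    print(f"{rate:.0%} per year → {factor:.1f}x the data in five years")
# → 40% per year → 5.4x the data in five years
# → 70% per year → 14.2x the data in five years
```

Even at the bottom of the estimated range, a datacenter would be holding over five times today’s data within five years, which is why the storage architectures themselves are expected to change rather than simply scale.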

Servers are evolving rapidly in a number of different ways. Dedicated appliance servers, often installed in a complete unit by application service providers, are simple to deploy but may bring new operating systems, software and hardware types into the corporate environment for the first time. With an increasing focus on energy costs, many tasks will be fulfilled by much smaller server technology, using lower powered processors such as ARM cores to deliver perhaps hundreds of servers on a single blade.

An example of a new-generation server device: Boston’s Viridis U2 packs 192 server cores into a single, low-power unit

Software Controlled Networks will do for network infrastructure changes what virtualization has done for servers: they’ll be faster, simpler, and propagated across multiple tiers of infrastructure in single operations. Simply: the network assets underpinning your key services might not be doing the same thing in an hour’s time.

“The Internet of Things” refers to the rapid growth in IP-enabled smart devices.
Gartner now state that over 50% of internet connections are “things” rather than traditional computers. Their analysis continues by predicting that in more than 70% of organizations, a single executive will have management oversight over all internet connected devices. That executive, of course, will usually be the CIO. Those devices? They could be almost anything. From an Asset Management point of view, this could mean anything from managing the support contracts on IP-enabled parking meters to monitoring the Oracle licensing implications of forklift trucks (this is a real example, found in their increasingly labyrinthine Software Investment Guide). IT Asset Management’s scope will go well beyond what many might consider to be IT.

A “thing” on the streets of San Francisco, and on the internet.

3: Be highly cross-functional to find opportunities where others haven’t spotted them

The Asset Manager can’t expect to be an expert in every developing line of data center technology, and every new cloud storage offering. However, by working with each expert team to understand their objectives, strategies, and roadmaps, they can be at the center of an internal network that enables them to find great opportunities.

A real life example is a British medical research charity, working at the frontline of research into disease prevention. The core scientific work they do is right at the cutting edge of big data, and their particular requirements in this regard lead them to some of the largest, fastest and most innovative on-premise data storage and retrieval technologies (Cloud storage is not currently viable for this: “The problem we’d have for active data is the access speed – a single genome is 100Gb – imagine downloading that from Google”).

These core systems are scalable to a point, but they still inevitably reach an end-of-life state. In the case of this research organization, periodic renewals are a standard part of the supply agreement. As their data centre manager told me:

“What they do is sell you a bit of kit that’ll fit your needs, front-loaded with three years’ support costs. After the three years, they re-look at your data needs and suggest a bigger system. Three years on, you’re invariably needing bigger, better, faster.”

With the last major refresh of the equipment, a clever decision was made: instead of simply disposing of, or selling, the now redundant storage equipment, the charity has been able to re-use it internally:

“We use the old one for second-tier data: desktop backups, old data, etc. We got third-party hardware-only support for our old equipment.”

This is a great example of joined-up IT Asset Management. The equipment has already largely been depreciated. The expensive three year, up-front (and hence capital-cost) support has expired, but the equipment can be stood up for less critical applications using much cheaper third party support. It’s more than enough of a solution for the next few years’ requirements for another team in the organization, so an additional purchase for backup storage has been avoided.


4: Become the trusted advisor to IT’s Financial Controller

The IT Asset Manager is uniquely positioned to be able to observe, oversee, manage and influence the make-up of the next-generation, hybrid IT service environment. This should place them right at the heart of the decision support process. The story above is just one example of the way this cross-functional, educated view of the IT environment enables the Asset Manager to help the organization to optimize its assets and reduce unnecessary spend.

This unique oversight is a huge potential asset to the CFO. The Asset Manager should get closely acquainted with the organization’s financial objectives and strategy. Is there an increased drive away from capital spend, and towards subscription-based services? How much is it costing to buy, lease, support, and dispose of IT equipment? What is the organization’s spend on software licensing, and how much would it cost to use the same licensing paradigms if the base infrastructure changes to a newer technology, or to a cloud solution?

A key role for the Asset Manager in this shifting environment is that of decision support.  A broad and informed oversight of the structure of IT services and the financial frameworks in place around them, together with proactive analysis of the impact of planned, anticipated or proposed changes, should enable the Asset Manager to become one of the key sources of information to executive management as they steer the IT organization forwards.

Parking meter photo courtesy of SJSharkTank on Flickr, used under Creative Commons license

Of course there is value in BYOD, and users know where to find it.

A pile of smartphones and tablets

Analysis of the advantages and disadvantages of BYOD has filled countless blogs, articles and reports, but has generally missed the point.

Commentators have sought to answer two questions. Firstly, if we allow our employees to use their own devices, will it save us money?  Secondly, will it make them more productive?

The answer to the first question was widely assumed, early on, to be yes.  An early adopter in the US government sector was the State of Delaware, who initiated a pilot in early 2011. With their Blackberry Enterprise Server reaching end-of-life, the program aimed to replace it altogether, getting all users off the infrastructure by mid-2013, and replacing it with monthly payments to users to cover the costs of working on their own cellular plans:

The State agreed to reimburse a flat amount for an employee using their personal device or cell phone for state business. It was expected that by taking this action the State could stand to save $2.5 million or approximately half of the current wireless expenditure.

The State evaluated the cost of supplying its own Blackberry devices at $80 per month, per user. The highest rate paid to employees using their own devices (for voice and data) is $40 per month.

At face value, this looks like a big saving, but many commentators – and practitioners – don’t see it as typical. One of the most prominent naysayers in this regard has been the Aberdeen Group. In February 2012, Aberdeen published a widely-discussed report which suggested that the overall cost of a BYOD program would actually be notably higher than a traditional, centralized, company-issued smartphone program:

The incremental and difficult-to-track BYOD costs include: the disaggregation of carrier billing; an increase in the number of expense reports filed for employee reimbursement; added burden on IT to manage and secure corporate data on employee devices; increased workload on other operational groups not normally tasked with mobility support; and the increased complexity of the resulting mobile landscape resulting in rising support costs.

Aberdeen reported the average monthly reimbursement paid to BYOD users as $70, higher than the State of Delaware’s $40. And reimbursement is an important term here: to avoid the payments being treated as a “benefit-in-kind”, employees had to submit expense reports showing proof of already-paid mobile bills.

The State had to ensure that it was not providing a stipend, but in fact a reimbursement after the fact… This avoids the issue associated with stipends being taxable under the IRS regulations.

As Aberdeen pointed out, there is a cost to processing those expense reports. They reckon the typical cost of this to be $29. Even with the State’s $40 reimbursement level, that factor alone would wipe out most of the difference in cost compared to that $80 monthly cost of a State-issued Blackberry, and that is before other costs such as Mobile Device Management are accounted for (another US Government pilot, at the Equal Employment Opportunities Commission, reported $10 per month, per device, for their cloud-based MDM solution). Assuming this document is genuine, it’s clearly an important marketing message for Blackberry.
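Pulling these figures together gives a rough per-user monthly comparison. This is a sketch only: the $29 expense-processing cost is Aberdeen’s estimate and the $10 MDM figure comes from the EEOC pilot, so the totals are illustrative rather than universal:

```python
corporate_device = 80    # State-issued Blackberry, per user per month
reimbursement = 40       # Delaware's highest BYOD reimbursement rate
expense_processing = 29  # Aberdeen's estimated cost per expense report
mdm = 10                 # EEOC pilot's cloud MDM cost per device per month

byod_total = reimbursement + expense_processing + mdm
print(byod_total)                    # → 79
print(corporate_device - byod_total) # → 1: the headline saving almost vanishes
```

Even granting Delaware’s unusually low $40 rate, the hidden costs erode nearly all of the apparent $40-per-month saving.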

Of course, there are probably ways to trim many of these costs, and perhaps a reasonable assessment would be that many organizations will be able to find benefits, but others may find it difficult.

So if the financial case is not a slam-dunk, then BYOD needs to be justified with productivity gains.  And this is a big challenge: how do we find quantifiable benefits from a policy of allowing users to work with their own gadgets?

The analysis in this regard has been a mixed bag. The conclusions have often ranged from the subjective to the faintly baffling (such as the argument that BYOD will be a “productivity killer” because employees will no longer log in and work during their international vacations.  Er… perhaps it’s just me, but if I were a shareholder of an organization that felt that productivity depended on employees working from the beach, I’d be pretty concerned).

One of the best pieces of analysis to date has been Forrester’s report, commissioned by Trend Micro, entitled “Key Strategies to Capture and Measure the Value Of Consumerization of IT”:

More than 80% of surveyed enterprises stated that worker productivity increased due to BYOD programs. These productivity benefits are achieved as employees use their mobile devices to communicate with other workers more frequently, from any location, at any time of the day. In addition, nearly 70% of firms increased their bottom line revenues as a result of deploying BYOD programs.

A nice positive message for BYOD there, but there’s arguably a bit of a leap to the conclusion about bottom-line revenue increase. It’s not particularly clear from the report how these gains have resulted directly from a BYOD program. A critic might be justified in asking how believable this conclusion is.

However, when we look at how people use their personal devices through their day, surely it’s perfectly credible to associate productivity and revenue increases to their use of consumer technology at work? Even before the working day has started, if an employee has got to their desk on time, there’s a pretty strong chance this was assisted by their smartphone. The methods, and the applications of choice, will vary from person to person: perhaps they are using satellite road navigation to avoid delays, or smoothly linking public transport options using online timetables, or avoiding queues using electronic ticketing. On top of that, if they’re on the train, they’re probably online, which can mean networking and communication has been going on even before they arrive at the building.

This reduction of friction in the daily commute, as described by BMC’s Jason Frye in two blog posts here and here, is a daily reality for many employees, and it’s indicative of the wider power of harnessing users’ affinity with their own gadgets. But how can this effectively be measured?  It’s difficult, because no two employees will be doing things quite the same – everybody’s journey to work is different. The probability of finding the same collection of transport applications on two employees’ smartphones is near zero, yet the benefits to each individual are obvious.

Equally, every knowledge worker’s approach to their job is different, and the selection of supporting applications and tools available through consumer devices is vast. Employees will find the best tools to help them in their day job, just as they do for their commute.

Now, perhaps we can also see some flaws in the balance-sheet analysis we’ve already discussed. As employees work better with their consumer devices, they rely less on traditional business applications. The global application store is proving to be a much better selector of the best tools than any narrow assessment process, ensuring that the best tools rise to the top. Legacy applications don’t need to be expensively replaced or upgraded in the consumer world: they die out of their own accord and are easily replaced. BYOD, done well, should reduce the cost of providing software, as well as hardware.

Some commentators cite incompatibility between different applications as a potential hindrance to overall productivity, but this misses the point that the consumer ecosystem is proving much better at sharing and collaboration than the business software industry has been. Users expect their content to be able to work with other users’ applications of choice, and providers that miss this point see their products quickly abandoned (imagine how short-lived a blogging tool would be if it dropped support for RSS).

The lesson for business? Trust your employees to find the best tools for themselves. Don’t rely on over-rigid productivity studies that miss the big picture. Don’t over-prescribe; concentrate on the important things: device and data security, and the provision of effective sharing and collaboration tools that join the dots. And ask yourself whether that expense report really needs to cost $29 to process through traditional business systems and processes, when your employees are so seamlessly enabled by their smartphones…

Image courtesy of Blakespot on Flickr, used under Creative Commons license.

Microsoft hike key license price by 15%. How can you offset the rise?

A few days ago, Microsoft (or rather, many of its resellers) announced a 15% price rise for its user-based Client Access License, across a range of applications. The price hike was pretty much immediate, taking effect from 1st December 2012.

The change affects a comprehensive list of applications, so it’s likely that most organizations will be affected (although there are some exceptions, such as the PSA12 agreement in the UK public sector).

Under Microsoft’s client/server licensing system, Client Access Licenses (CALs) are required for every user or device accessing a server.

Customers using these models need to purchase these licenses in addition to the server application licenses themselves (and in fact, some analysts claim that CALs provide up to 80% of license revenue derived from these models).

What’s interesting is that the price rise only affects User-based CALs, not Device-based CALs. Prior to this change, the price of each CAL was typically the same for any given application/option, regardless of type.

This is likely to be a response to a significant industry shift towards user-based licensing, driven to a large extent by the rise of “Bring your own Device” (BYOD). As employees use more and more devices to connect to server-based applications, the Device CAL becomes less and less attractive.

As a result, many customers are shifting to user-based licensing, and with good reason.

15% is a big rise to swallow. However, CAL licensing has often been pretty inefficient. With the burden of proof firmly on the customer, a true-up or audit often results in “precautionary spending”: “You’re not sure how many of your 5,000 users will be using this system, so we’d suggest just buying 5,000 CALs“. This may be compounded by ineffective use of the different licensing options available.

Here are three questions that every Microsoft customer affected by this change should be asking:

Do we know how many of our users actually use the software?
This is the most important question of all. It’s very easy to over-purchase CALs, particularly if you don’t have good data on actual usage. But if you can credibly show that 20% of that user base is not using the software, that could be a huge saving.
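The size of that saving is easy to estimate. A minimal sketch, using the 5,000-user example from earlier and a hypothetical $38 per-user CAL price (real CAL prices vary by application and agreement):

```python
# Avoided CAL spend if usage data credibly shows that 20% of a
# 5,000-user base never touches the application.
# The $38 CAL price is a hypothetical figure for illustration.
users, unused_fraction, cal_price = 5000, 0.20, 38

saving = int(users * unused_fraction * cal_price)
print(saving)  # 38000
```

A five-figure saving for one application, from nothing more than decent usage data.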

Could we save money by using both CAL types?
Microsoft and their resellers typically recommend that companies stick to one type of CAL or the other, for each application. But this is normally based on ease of management, not a specific prohibition of this approach.
But what if your sales force uses lots of mobile devices and laptops, while your warehouse staff only access a small number of shared PCs? It is likely to be far more cost-effective to purchase user CALs for the former group, while licensing the shared PCs with device CALs. The saving may make the additional management overhead very worthwhile.
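The decision per population is just a comparison of the two totals. A sketch, with hypothetical prices (a post-rise user CAL at $38 and an unchanged device CAL at $33; real prices depend on your agreement):

```python
# Choose the cheaper CAL type per user population, rather than one
# type for the whole company. Prices are hypothetical.
USER_CAL = 38    # per-user CAL price after the 15% rise (assumed)
DEVICE_CAL = 33  # per-device CAL price, unchanged (assumed)

def best_cal_cost(users, devices):
    """Return (cost, cal_type) for licensing a group the cheaper way."""
    user_cost = users * USER_CAL
    device_cost = devices * DEVICE_CAL
    return min((user_cost, "user"), (device_cost, "device"))

# Sales force: 200 people with ~3 devices each -> user CALs win.
print(best_cal_cost(200, 600))   # (7600, 'user')
# Warehouse: 150 staff sharing 20 PCs -> device CALs win.
print(best_cal_cost(150, 20))    # (660, 'device')
```

Even with made-up numbers, the pattern is clear: mixed populations rarely suit a single CAL type.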

Do we have a lot of access by non-employee third parties such as contractors?
If so, look into the option of purchasing an External Connector license for the application, rather than individual CALs for those users or their devices.  External Connectors are typically a fixed price option, rather than a per-user CAL, so understand the breakpoints at which they become cost effective.  The exercise is described at the Emma Explains Microsoft Licensing in Depth blog.  Microsoft’s explanation of this license type is here.
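Finding that breakpoint is simple arithmetic: divide the fixed connector price by the per-user CAL price. A sketch with invented figures (External Connector prices vary widely by product):

```python
import math

# Number of external users at which a fixed-price External Connector
# becomes cheaper than buying individual CALs. Prices are hypothetical.
def external_connector_breakpoint(connector_price, cal_price):
    return math.ceil(connector_price / cal_price)

# e.g. a $2,000 connector vs. $40 per-user CALs:
print(external_connector_breakpoint(2000, 40))  # 50 external users
```

Below the breakpoint, buy CALs; above it, the connector pays for itself.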

The good news is that the price hike will usually kick in at most customers’ next renewal. If you have a current volume licensing agreement, the previous prices should still apply until then.

This gives most Software Asset Managers a bit of time to do some thinking. If you can arm your company with the answer to the above questions by the time your next renewal comes around, you could potentially save a significant sum of money, and put a big dent in that unwelcome 15% price hike.

Image courtesy of Howard Lake on Flickr, used under Creative Commons licensing

Let’s work together to fix ITAM’s image problem

Intel Datacenter

Intel Datacenter

This is a long article, but I hope it is an important one. I think IT Asset Management has an image problem, and it’s one that we need to address.

I want to start with a quick story:

Representing BMC Software, I recently had the privilege of speaking at the Annual Conference and Exhibition of the International Association of IT Asset Managers (IAITAM).  I was curious about how well attended my presentation would be. It was up against seven other simultaneous tracks, and the presentation wasn’t about the latest new-fangled technology or hot industry trend. In fact, I was concerned that it might seem a bit dry, even though I felt pretty passionate that it was a message worth presenting.

It turned out that my worries were completely unfounded.  “Benchmarking ITAM; Understand and grow your organization’s Asset Management maturity”  filled the room on day 1, and earned a repeat show on day 2. That was nice after such a long flight. It proved to be as important to the audience as I hoped it would be.

I was even more confident that I’d picked the right topic when, having finished my introduction and my obligatory joke about the weather (I’m British, it was hot, it’s the rules), I asked the first few questions of my audience:

“How many of you are involved in hands-on IT Asset Management?”

Of the fifty or so people present, about 48 hands went up.

“And how many of you feel that if your companies invested more in your function, you could really repay that strongly?”

There were still at least 46 hands in the air.

IT Asset Management is in an interesting position right now.  Gartner’s 2012 Hype Cycle for IT Operations Management placed it at the bottom of the “Trough of Disillusionment”… that deep low point where the hype and expectations have faded.  Looking on the bright side, the only way is up from here.

It’s all a bit strange, because there is a massive role for ITAM right now. Software auditors keep on auditing. Departments keep buying on their own credit cards. Even as we move to a more virtualized, cloud-driven world, there are still flashing boxes to maintain and patch, as well as a host of virtual IT assets which still cost us money to support and license. We need to address BYOD and mobile device management. Cloud doesn’t remove the role of ITAM, it intensifies it.

There are probably many reasons for this image problem, but I want to present an idea that I hope will help us to fix it.

One of the massive drivers of the ITSM market as a whole has been the development of a recognized framework of processes, objectives, and – to an extent – standards. The IT Infrastructure Library, or ITIL, has been a huge success story for the UK’s Office of Government Commerce since its creation in the 1980s.

ITIL gave ITSM a means to define and shape itself, perfectly judging the tipping point between not-enough-substance and too-much-detail.

Many people, however, contend that ITIL never quite got Asset Management. As a discipline, ITAM evolved in different markets at different times, often driven by local policies such as taxation on IT equipment. Some vendors such as France’s Staff&Line go right back to the 1980s. ITIL’s focus on the Configuration Management Database (CMDB) worked for some organizations, but was irrelevant to many people focused solely on the business of managing IT assets in their own right.  ITIL v3’s Service Asset Management is arguably something of an end-around.

However, ITIL came with a whole set of tools, practices and service providers that helped organizations to understand where they currently sat on an ITSM maturity curve, and where they could be. ITIL has an ecosystem – and it’s a really big one.

Time for another story…

In my first role as an IT professional, back in 1997, I worked for a company whose IT department boldly drove a multi-year transformation around ITIL. Each year auditors spoke with ITIL process owners, prodded and poked around the toolsets (this was my part of the story), and rated our progress in each of the ITIL disciplines.

Each year we could demonstrate our progress in Change Management, or Capacity Management, or Configuration Management, or any of the other ITIL disciplines. It told us where we were succeeding and where we needed to pick up. And because this was based on a commonly understood framework, we could also benchmark against other companies and organizations. As the transformation progressed, we started setting the highest benchmark scores in the business. That felt good, and it showed our company what they were getting for their investment.

But at the same time, there was a successful little team, also working with our custom Remedy apps, who were automating the process of asset request, approval and fulfillment.  Sadly, they didn’t really figure in the ITIL assessments, because, well, there was no “Asset Management” discipline defined in ITIL version 2. We all knew how good they were, but the wider audience didn’t hear about them.

Even today, we don’t have a benchmarking structure for IT Asset Management that is widely shared across the industry. There are examples of proprietary frameworks like Microsoft’s SAM Optimization Model, but it seems to me that there is no specific open “ITIL for ITAM”.

This is a real shame, because Benchmarking could be a really strong tool for the IT Asset Manager to win backing from their business. There are many reasons why:

  • Benchmarking helps us to understand where we are today.
  • More importantly, it helps us to show where we could get, how difficult and expensive that might be, and what we’re missing by not being there.

Those two points alone start to show us what a good tool it is for building a case for investment. Furthermore:

  • Asset Management is a very broad topic. If we benchmark each aspect of it in our organizations, we can get a better idea of where our key strengths and weaknesses are, and where we should focus our efforts.
  • Importantly, we can also show what we have achieved. If Asset Management has an image problem, then we need a way to show off our successes.

And then, provided we work to a common framework…

  • Benchmarking gives us an effective way of comparing with our peers, and with the best (and worst!) in the industry.

At the IAITAM conference, and every time I’ve raised this topic with customers since, there has been a really positive response. There seems to be a real hunger for a straightforward and consistent way of ranking ITAM maturity, and using it to reinforce our business cases.

For our presentation at IAITAM, we wanted to have a starting point, so we built one, using some simple benchmarking principles.

First, we came up with a simple scoring system. “1 to 4” or “1 to 5”, it doesn’t really matter, but we went for the former.  Next, we identified what an organization might look like, at a broad ITAM level, at each score. That’s pretty straightforward too:

Asset Maturity – General Scoring Guidelines

  • Level 1: Little or no effective management, process or automation.
  • Level 2: Evidence of established processes and management.  Partial coverage and value realization. Some automation.
  • Level 3: Fully established and comprehensive processes. Centralized data repository. Significant automation.
  • Level 4:  Best-in-class processes, tools and results. Integral part of wider business decision support and strategy.  Extensive automation.

In other words, Level 4 would be off-the-chart, industry leading good. Level 1 would be head-in-the-sand barely started.  Next, we need to tackle that breadth. Asset, as we’ve said, is a broad subject. Software and hardware, datacenter and desktop, etc…

We did this by specifying two broad areas of measurement scope:

  • Structural:  How we do things.  Tools, processes, people, coverage.
  • Value: What we achieve with those things.  Financial effectiveness, compliance, environmental.

Each of these areas can now be divided into sub-categories. For example, on “Coverage” we can now describe in a bit more detail how we’d expect an organization at each level to look:

“Asset Coverage” Scoring Levels

  • Level 1: None, or negligible amount, of the organization’s IT Assets under management
  • Level 2: Key parts of the IT Asset estate under management, but some significant gaps remaining
  • Level 3: Majority of the IT Asset estate is under management, with few gaps
  • Level 4: Entire IT Asset estate under full management by the ITAM function.

This process repeats for each measurement area. Once each is defined, the method of application is up to the user (for example, separate assessments might be appropriate for datacenter assets and laptops/desktops, perhaps with different ranking/weighting for each).
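To show how the pieces fit together, here’s a minimal sketch of the assessment mechanics: score each measurement area from 1 to 4, then combine the scores with per-area weights. The area names and weights below are hypothetical examples, not part of our framework:

```python
# Toy maturity assessment: 1-4 scores per measurement area, combined
# with weights. Areas and weights here are invented for illustration.
scores  = {"coverage": 2, "processes": 3, "tools": 2, "financial": 1}
weights = {"coverage": 0.3, "processes": 0.3, "tools": 0.2, "financial": 0.2}

maturity = sum(scores[area] * weights[area] for area in scores)
print(round(maturity, 2))  # 2.1
```

Run separately for, say, datacenter assets and end-user devices, the same mechanics yield directly comparable scores.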

You can see our initial, work-in-progress take on this at our Communities website at BMC, without needing to log in. We feel that this resource is strongest as a community resource. If it helps IT Asset Managers to build a strong case for investment, then it helps the ITAM sector.

Does this look like something that would be useful to you as an IT Asset Manager, and if so, would you like to be part of the community that builds it out?

Photo from the IntelFreePress Flickr feed and used here under Creative Commons Licensing without implying any endorsement by its creator.

Socialized Media: The shift to mobile

News media websites, always among the most dynamic and widely-read places on the internet, are currently undergoing a design shift that is highly significant to the IT industry as a whole.

Last October, the BBC’s website, ranked by Alexa as the 49th most visited in the world, unveiled its new beta layout:

BBC website layout - new and old
The BBC’s new website layout (left) and its previous incarnation (right). Click for bigger.

It’s interesting to look at the main changes made to the layout:

  • Vertical scrolling was mostly replaced by a side-to-side horizontal motion.
  • The “above the fold” part of the screen… the view presented to users on opening the screen… was optimized to a landscape layout.  This part of the page is filled with the most current and dynamic content.
  • Total vertical real estate was limited to just one more screen’s worth of content.
  • Links are square, large and bold, rather than “traditional” single line HTML text hyperlinks.
  • A prominent “What’s Popular” section appeared.

These design changes, of course, made the site much more tablet friendly.  The portrait layout was perfectly sized to fit a typical tablet screen such as the iPad. Single line links are awkward on a tablet, often needing a very accurate finger jab or a pinch-and-zoom action. In contrast, a big square click area is much more touchscreen friendly. Mobile users are familiar and comfortable with the side-to-side swipe action to move between screens, so the new scrolling method suits them well.  “What’s Popular” wasn’t a brand new concept in news websites, of course, but it’s a very familiar feature to users of mobile products like Apple’s App Store.

It was easy to suppose that the layout had been designed with mobility in mind, and the BBC Homepage Product Manager, James Thornett, confirmed this:

“It shares a design principle that we’ve seen in tablets and mobile phones and we’ve heard from reviewers during testing over the last couple of months that it feels quite natural to them”.

What was really interesting was Thornett’s subsequent statement:

“We’ve checked out the new page on our desktop computers as well as on our iPad 2 and we must say, it looks a little too simplified for the PC, but it suits the size and screen of a tablet device like the iPad perfectly.

I would expect you to see, within the course of the next few weeks, months and years, the rollout of the design front and this kind of interaction and style across all of our sites.”

In other words, we know it’s not what PC users are used to, but we’re going to progress this way anyway.  And that’s not a bad decision, because it’s better to be slightly simple on one device, and optimized for another, than to be very ill-suited to one of them.  It goes a step further than simply providing a “mobile” version of the site, formatted for small telephone screens, and asking tablet users to choose between two bad options.

The BBC seem confident that this is the correct path to take. At present, their sites are still in some degree of transition. The beta layout has become the primary layout for the main BBC site. The BBC news site retains its old desktop layout, while its sport section has a much more mobile-optimized interface:

BBC news and sport layout November 2012
BBC’s current News and Sport layouts. Note that the Sport layout (on the right) is better optimised for tablets and mobile devices than the News layout

Many other websites are undergoing similar transitions, and it can be interesting to hunt for unpublicized “beta” versions. For example, here is the current website of the Guardian newspaper:

Guardian newspaper desktop layout
The current, desktop friendly version of the Guardian Newspaper’s homepage (November 2012)

However, navigating to its largely unpublicised beta version reveals an experimental tablet-friendly view that is much more radical than the BBC’s transformed pages:

The Guardian Beta layout in November 2012
The Guardian’s unpublicised Beta layout in November 2012

The media industry’s transition is still very much in progress, and some media companies are moving faster and more effectively than others. ABC News is already optimised pretty well for mobile devices, with links given reasonable space for jabbing at with a heavy finger. CNN, on the other hand, are trying, but still present huge numbers of tiny links, to vast amounts of content.  Even their Beta tour suggests that they’re struggling to shake this habit:

CNN's Beta site
CNN’s Beta walkthrough. Better sharpen those fingertips.

Tablet sales are carving a huge chunk out of the PC market and will inevitably outsell them, according to Microsoft, Apple, and most other commentators. This is driving a simple but profound change: users want to swoosh and scroll, to click links with their finger rather than a mouse pointer.  They want interfaces that work in portrait and landscape, and align themselves appropriately with the simple rotation of a device. This will become the normal interface, and sites and services which insist on depending on “old” interface components like scrollbars, flat text links, and fiddly drop down menus, will be missing the point entirely.

The Phenomenal Success of Strava

Endurance sports may not be the most obvious place to find a social media revolution. There is no fixed time window for a bike ride:  Some people are limited to weekends; others may grab a spare hour in the early morning, or pack a ride into their lunchtime. For a few, it’s a day job.

For many of us weekend warrior mountain bikers, organized competition, with a mass of participants, is something we might only dabble with once in a while.  Bike riding is typically more about getting out in the sunshine (or, here in southern England, the gloom), burning off a bit of sedentary-career belly, and having some fun. Most miles are ridden pretty much alone or in small groups.

One thing that’s certain, however, is that cyclists are voracious adopters of technology. We love carbon things, and shiny things, and faster things. Technical innovation is a big part of the professional sport, and that element trickles strongly down to the recreational level, at a relatively affordable price compared to other technology-focused sports such as motor racing.

It’s perhaps no surprise, then, that cyclists were very early adopters of recreational GPS devices. Many of us are map geeks, but that still doesn’t mean we want to have to retrieve a soggy scrap of paper from the tree it has just blown into for the third time.

This trend started in 2000 with a mini-revolution, brought about by a key policy change. On May 1st, US President Bill Clinton turned off selective availability, an artificial wobbling error which had deliberately reduced the accuracy of the non-military Global Positioning System signal. For the first time, consumers could fix their location not just to a vague area of a few hundred metres, but right to the very trail they were standing on, walking along, or cycling up.

President Clinton’s move drove the huge success of a generation of cheap, rugged handheld GPS devices like the Garmin Etrex, launched that same year.  As the decade progressed, these gadgets increasingly began to adorn bike handlebars and hiking backpacks.

Garmin's original "Yellow Etrex"
Garmin’s original “Yellow Etrex”, launched in 2000

These gadgets didn’t just bring easier navigation… they brought tracking and logging. Riders keenly compiled their own statistics, and were able to share routes easily with others. A new outdoor-focused software industry sprang up, with companies like Anquet, Memory Map and Tracklogs combining detailed mapping with GPS connectivity to make the best of those basic early devices.

The sophistication of recreational GPS units continued to increase, but it was a trend that would soon be overwhelmed by a new development. In 2007, smartphones such as Nokia’s N95 began to ship with built-in GPS units. Suddenly, people didn’t have to buy a dedicated navigation device to take advantage of GPS navigation. It was right there in their pocket. And while recreational GPS units had shipped by the million, smartphones ship by the hundreds of millions, every year.

From a gadgetry point of view, the trend has pushed Garmin towards a more specialist sporting GPS market, with high-end devices featuring integrated heart-monitors, cadence (pedalling rate) sensors and more. The associated software, meanwhile, made an inevitable shift to the mobile device market, and a flood of applications hit the online stores.

The concept, of course, is pretty simple. Launch the app. Press “Start” at the beginning of a walk, or bike ride, or jog, or swim. Hmm, perhaps not a swim. Put phone in safe pocket. Press “End” at the end. The application measures the GPS log, overlays it with public or commercial mapping data, and calculates metrics such as distance, elevation gain, and time.  It’s all saved to a log of the person’s activities, enabling them to repeat routes, compare previous activities, and view overall achievements and stats.  Most of these apps look pretty similar, and there are plenty of them, as any quick App Store search will reveal.
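Under the hood, the core calculation is simple enough to sketch in a few lines. This is a generic illustration of what such apps compute, not any particular vendor’s implementation; the sample log points are invented:

```python
import math

# What a typical ride-logging app derives from a GPS log: total distance
# (great-circle, via the haversine formula), elevation gain, and elapsed
# time. Points are (lat, lon, elevation_m, unix_time); data is invented.

def haversine_m(p1, p2):
    """Distance in metres between two (lat, lon, ...) points."""
    R = 6371000  # mean Earth radius, metres
    lat1, lon1, lat2, lon2 = map(math.radians, (p1[0], p1[1], p2[0], p2[1]))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def ride_stats(points):
    distance = sum(haversine_m(a, b) for a, b in zip(points, points[1:]))
    climb = sum(max(0, b[2] - a[2]) for a, b in zip(points, points[1:]))
    elapsed = points[-1][3] - points[0][3]
    return distance, climb, elapsed

log = [(51.45, -0.97, 40, 0), (51.46, -0.97, 55, 300), (51.47, -0.96, 50, 600)]
d, c, t = ride_stats(log)
print(round(d), c, t)  # distance (m), climb (15 m), elapsed (600 s)
```

Real apps add smoothing and map-matching on top, but the skeleton is just this: accumulate point-to-point distances and positive elevation deltas along the track.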

This now brings us to Strava.

Strava was publicly launched in 2009 (although there are rides logged dating back to the spring of 2008).  It was the brainchild of two Harvard alumni, Michael Horvath and Mark Gainey.  The concept was pretty standard – Strava is a ride logging system that interfaces with a smartphone’s GPS via a native app, or takes a website upload from recreational GPS devices.

Strava showed from early on that they had some new ideas. They introduced a neat feature called the KOM, or “King of the Mountain”. Named after the prize given to the best mountain climber in professional events such as the Tour de France, KOMs were originally awarded to riders who’d made the fastest ascent of pre-defined climbs.

In August of 2009, they made a huge decision, which would really set them down the path of being a bit different to the crowd. The KOM concept was cleverly expanded, as described in this entry on the Strava Blog:

“Until this release, Strava processed ride data in such a way that it could identify when you had ridden a categorized climb and match it with previous efforts on the same climb. That allowed us to show you the “KOM” standings for categorized climbs, for example. Many of you have suggested that we expand this concept to more than just categorized climbs. The new data model will allow just that. In the coming weeks you will be able to name and compare your effort on any section of trail or road with a previous effort in our database on the same section.”

Now, any rider could create any segment and start to register times on it. That long, fast and rather boring stretch of road on your commute suddenly became a sporting battle waged against a set of otherwise invisible opponents. The Strava leaderboard was born:
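The leaderboard mechanic itself is disarmingly simple: keep each rider’s best time on a segment, and rank. A toy sketch (riders and times invented; Strava’s actual segment-matching is of course far more involved):

```python
# Toy segment leaderboard: each rider's best effort, ranked by time.
# The fastest rider holds the "KOM". All names and times are invented.
efforts = [("Alice", 412), ("Bob", 398), ("Alice", 405), ("Carol", 430)]

best = {}
for rider, seconds in efforts:
    best[rider] = min(seconds, best.get(rider, float("inf")))

leaderboard = sorted(best.items(), key=lambda e: e[1])
print(leaderboard[0])  # ('Bob', 398) -- current KOM holder
```

The clever part wasn’t the ranking, it was letting anyone define the segment in the first place.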

Strava leaderboard for Sandy Hill, Oxfordshire
Part of Strava’s leaderboard for Sandy Hill, near Reading. Yes, that’s me in 3rd, and yes, I want my KOM back.

Strava has differentiated itself by turning what was previously a solo experience into a shared one. The gentle pseudo-competition of competing for KOMs is addictive, fun, and much easier and cheaper than entering and travelling to races. The use of simple social network features like friend lists, chat, and “Kudos” (a simple thumbs up to convey one’s admiration of another rider’s achievements) have built a thriving community.  They’ve cleverly signed up big name riders like the USA’s Taylor Phinney, so users can follow the achievements of the pros (and feel mildly inadequate at the gulf between our abilities and theirs!).

In the process, they’ve motivated a lot of people to ride more, not least through some neat little tricks. We work hard to secure a KOM, and finally get there, only for the “Uh oh!” email to pop into our mailbox a short while later, breaking the bad news to us that we’ve been beaten. Perhaps we’d like to get out there and have another go?

Strava lost KOM email
Uh oh!

Strava is a classic story of a commodity concept being revolutionized by Social Media. In 2011, the influential VeloNews magazine voted Strava their technical innovation of the year (no mean feat in a high-income-demographic sport sector, full of carbon fibre and titanium bling).  Alexa’s site stats show how they have comfortably passed some of 2011’s big names like MapMyRide (who in 2012 have been trying to play catch-up on the leaderboard model). Even the seasonal northern hemisphere winter slump doesn’t significantly dent a very strong growth. I fully expect next summer to see them rocketing northwards.


This growth is impressive particularly because this segment should really have inertia on its side. After building up a set of logs on one site, there’s a strong incentive to stay there, particularly when it’s not always easy to move data to a new site (as noted with some light hearted profanity in articles like this one). Strava is compelling enough to make users walk away and start again.

Importantly, Strava has embedded itself in the consciousness of recreational cyclists. It is THE talked about app on the forums, and appears to be reaching an important critical mass whereby it is normal for hobbyist cyclists to have an account.  Participants are committed and enthusiastic: A recent challenge on the Strava site encouraged riders to attempt a 79 mile ride over one three-day weekend. They got almost 11,000 signups, of whom an amazing 7,000 successfully completed the task.  Strava is a social and motivational phenomenon.

Strava's iphone app
Strava’s latest iPhone app is packed full of social features and content

Socialization is an incredibly powerful, market-changing concept. It’s there to be harnessed: our users now carry better gadgets than our companies ever lent them, and they interact with them in more aspects of their lives than the IT industry ever really imagined they would.

People like to collaborate, compare, and convey their stories and experiences. They like to admire the achievements of others and to learn what is achievable. It’s motivating and it’s fun.

These concepts are a huge disruptor. They have changed sector after sector, and they’ll change ours.