Effective Benchmarks IT Can Use to Gauge Success (January 30, 2009)
In this roundtable discussion, each high-level IT executive was asked to comment on benchmarks that he or she regularly uses to gauge IT success. Some that were mentioned included:
• IT capital expenses as a percentage of revenue
• Users per IT professional
• Successful releases into production
• User perceptions and confidence
• Customer satisfaction
• Willingness to recommend to another client
• Number of tickets
• Time spent on Maintenance of Business (MOB) vs. new development/projects
• The cost to maintain and operate organization systems equipment (MOOSE)
• Percent of work that is rework (within scope)
• Number of customizations in the ERP
• Time to market where IT is the product
No one set of metrics is right for all firms, but developing, incorporating, and tracking metrics over time is an essential management tool.
A popular and effective benchmark is the end-user survey. To make this tool effective, some companies are designing more scientific surveys and engineering questions to capture what the company is doing well, not just what is wrong. Surveys often use a five-point scale; most attendees agreed that expecting a perfect five as an average is unrealistic and that a number slightly below perfect is the most realistic goal. How the number changes over time may be the most important information.
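Since the discussion stressed the trend in survey scores more than any single number, a minimal sketch of that tracking might look like the following. The quarter labels and ratings are invented for illustration; real data would come from the firm's survey tool.

```python
# Hypothetical sketch: tracking how the average survey score moves over time.
# All quarter labels and 1-5 ratings below are invented for illustration.

def average_score(responses):
    """Mean of a list of 1-5 survey ratings."""
    return sum(responses) / len(responses)

# One list of ratings per quarter (fabricated data).
quarters = {
    "2008-Q3": [4, 3, 5, 4, 4],
    "2008-Q4": [4, 4, 5, 4, 3],
    "2009-Q1": [5, 4, 5, 4, 4],
}

averages = {q: average_score(r) for q, r in quarters.items()}
ordered = sorted(averages)  # labels sort chronologically
trend = averages[ordered[-1]] - averages[ordered[0]]

for q in ordered:
    print(f"{q}: {averages[q]:.2f}")
print(f"Change since first quarter: {trend:+.2f}")
```

The point of the sketch is the last line: a slightly-below-perfect average that is rising quarter over quarter is a healthier signal than any one quarter's number.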
It is also important to know which type of user survey is most effective for the company. Which type of customer does the IT department serve: high-level business users or rank-and-file employees? A “5” from executives is more important than one from typical employees, for whom a “3” might be sufficient. One has to weigh the cost of pushing toward a “5” against the benefits.
Many times, when a user’s service ticket is resolved, an automatic survey is generated with multiple-choice answers as well as open-ended questions. It may be important to make sure that the same person does not receive the same survey twice in the same month. Also, when outsourcing IT services, it is best to monitor surveys internally. Assess survey results frequently and report on the overall process at least quarterly to executive management.
Publishing survey results builds accountability with users filling out the surveys; they feel their opinion is valued and are more likely to fill out surveys in the future. Making IT benchmark results transparent in an organization can be very effective for an IT department. Other service departments will have little opportunity to blame their own internal performance failures on “system failures” if the facts say otherwise. This may take the form of incorporating results on the company intranet home page or regularly publishing results in managerial reports.
Free and Inexpensive Resources
Smaller firms may prefer to keep their benchmarks simple, such as tracking the number of complaints or ticket volume, or even forming focus groups. Inexpensive, effective survey tools suggested for small companies include surveymonkey.com as well as the Google Docs Survey tool. There are many useful benchmark resources on the internet. Infotech (www.infotech.com) offers an “IT Budget and Staffing Report” that provides a summary of 60 different benchmarks IT can use in an organization. The 2007-2008 report is currently free on their website.
NOREX (www.norex.net) is a site where you can share IT experiences with a large number of organizations. It hosts regional forums and symposiums and is inexpensive; for about $5,000, anyone in your company can access the site’s vast resources. By comparison, APQC quoted one of our participants $2,500 for just four surveys, with the lowest-cost option at $800. An advantage of the NOREX data was the ability to see what assumptions underlie each benchmark. For example, when reporting capital expenses, is depreciation included? Comparing the ratio of a firm that includes it with one that does not can be grossly misleading.
Other free and inexpensive IT data resources include mining your current vendor list for average IT costs, signing up for services like Gartner research on a trial basis, or searching the internet for companies that have purchased and published Gartner research on their sites already, such as in an industry newsletter.
Tracking the proportion of time an IT department spends on Maintenance of the Business (MOB) is another effective benchmark. How much time goes to maintaining existing IT services, and how much to new work, such as software upgrades or switch replacements? By building dashboards and reviewing them frequently, a company can track this split over time.
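The MOB-versus-new-work split above is simple arithmetic once time is logged against the two buckets. A minimal sketch, assuming hours are tracked per bucket (the figures are invented):

```python
# Hypothetical sketch of the MOB-vs-new-work split, assuming IT time is
# logged in hours against two buckets; all figures are invented.

def mob_percentage(maintenance_hours, project_hours):
    """Share of total IT time spent on Maintenance of Business (MOB)."""
    total = maintenance_hours + project_hours
    return 100.0 * maintenance_hours / total

# e.g., 1,400 maintenance hours vs. 600 project hours in a period
print(f"MOB share: {mob_percentage(1400, 600):.1f}%")  # prints "MOB share: 70.0%"
```

Plotted on a dashboard period over period, a rising MOB share signals that maintenance is crowding out new development.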
The cost to Maintain and Operate Organization Systems Equipment (MOOSE) is a benchmark that totals what it takes to keep current systems running, including salaried IT staff. If your company stopped taking orders, how long could your IT department operate? Quantifying this annual number lets a firm compare its own performance year over year rather than against other companies whose circumstances may differ significantly. This benchmark can help you decide how much to spend on new implementations versus maintaining current operations.
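Because the notes stress comparing MOOSE against your own history rather than against other firms, a year-over-year comparison might be sketched as follows. The dollar figures are invented for illustration.

```python
# Hypothetical sketch: comparing a firm's annual MOOSE total against its
# own earliest recorded year; all dollar figures are invented.

annual_moose = {2006: 4.1e6, 2007: 4.4e6, 2008: 4.2e6}  # dollars per year

base_year = min(annual_moose)
baseline = annual_moose[base_year]

for year in sorted(annual_moose):
    pct = 100.0 * (annual_moose[year] - baseline) / baseline
    print(f"{year}: ${annual_moose[year]:,.0f} ({pct:+.1f}% vs {base_year})")
```

The same history also feeds the MOB trade-off: if MOOSE is flat or falling while revenue grows, more of the budget is free for new implementation.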
Another benchmark relates to the effectiveness of software development. Measure the amount of rework needed after a project concludes (for items that were in scope), or how many times you must return to fix problems (break-fixes). Of course, metrics can work both ways: user-initiated scope creep needs to be measured as well.
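The rework benchmark reduces to a single ratio once effort is tracked. A minimal sketch, assuming in-scope effort and rework effort are both logged in hours (the figures are invented):

```python
# Hypothetical sketch: percent of in-scope work that is rework, assuming
# effort is tracked in hours; the figures are invented for illustration.

def rework_percentage(rework_hours, total_in_scope_hours):
    """Rework as a percent of all in-scope development effort."""
    return 100.0 * rework_hours / total_in_scope_hours

# e.g., 120 hours of rework on a 1,600-hour in-scope project
print(f"Rework: {rework_percentage(120, 1600):.1f}%")  # prints "Rework: 7.5%"
```

A companion count of scope-change requests would capture the user-initiated side of the same problem.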
Other more common benchmarks include total IT expense divided by total company revenue, as well as the number of users supported per IT professional. Regularly monitoring these measurements builds a history, so the company can track information services over time.
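These two ratios are the simplest of the benchmarks discussed. A minimal sketch with invented figures:

```python
# Hypothetical sketch of the two common ratio benchmarks named above;
# the expense, revenue, and headcount figures are invented.

def it_expense_ratio(it_expense, revenue):
    """Total IT expense as a percent of company revenue."""
    return 100.0 * it_expense / revenue

def users_per_it_pro(user_count, it_staff_count):
    """Supported users per IT professional."""
    return user_count / it_staff_count

print(f"IT expense ratio: {it_expense_ratio(2_500_000, 100_000_000):.1f}%")
print(f"Users per IT professional: {users_per_it_pro(900, 12):.0f}")
```

As with the survey scores, the value is in the history: either ratio means little in isolation but is informative tracked year over year.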
Another form of benchmark is to see whether or not ROI projections have panned out. Most firms in the room did not do this very often. Mainly this was done only for the biggest projects. One participant related an instance where no one showed up to the “year later” review. IT can show costs, but to show the benefits on the business side remains elusive.
Some organizations try to capture best practices and measure against them. But these can differ dramatically across organizations. Whereas one organization may think that four software releases per year is a good target, another firm may ship numerous mini-releases of patches, security upgrades, and other minor fixes on an almost continuous basis, especially if the software is delivered as a service online. Distinguishing between what is an effective practice in your company and what is an industry “best practice” is important.
Linking Compensation to Metrics
Many local companies have improved individual and team goals by linking small proportions (anywhere from 2% to 10%) of annual compensation with IT metric outcomes. Other approaches link customer satisfaction surveys to compensation; sometimes a certain number of surveys must be reported before a bonus will be awarded. Companies have reported improvements in IT benchmark measurements after compensation was linked to metrics.
Next Meeting (March 27, 1:00 pm to 4:00 pm)
Several potential topics were discussed for future ITEE meetings, including:
• Data protection & privacy
• Collaboration technology, including fielding a reliable desktop videoconferencing experience
• Changes in the web infrastructure, requirements for Web 2.0 and Web 3.0 in the future (examples: buy online, pick up at store; analytics; web advertising)
The topic we chose for next meeting is The Open Source Software Proposition.
This summary was prepared by Sara Lucas, The University of Akron