Two industry veterans share their insights about using Benchmarks & Best Practices

A catalog executive suffers from no shortage of metrics to watch: from average order value to email inquiry turnaround times to indirect labor costs to the number of calls answered in 20 seconds or less. The real questions, though, are how to use the numbers and whether the metrics are even appropriate to track for your operations. Comparing operations solely on the numbers can be misleading. Is it better to establish a set of best practices and then hold your staff accountable to them?

Donna Loyle, editor in chief of Catalog Success, asked two catalog operations experts for their thoughts on such questions:

Curt Barry, president of F. Curtis Barry & Company, a Richmond, VA-based multichannel operations and fulfillment consultancy. The firm conducts benchmarking and best-practice implementation via ShareGroups in disciplines such as contact centers, fulfillment centers, and forecasting and inventory management.

Liz Kislik, president of Liz Kislik Associates LLC, a Rockville Centre, NY-based company that specializes in management consulting, organizational change, customer service management and employee development.

Barry and Kislik shared their thoughts on the most effective way to use external benchmarks (operations benchmarked against other companies), internal benchmarks (benchmarks set within your own company) and industry-wide best practices in a catalog/e-commerce operation.

Catalog Success: What are the best-case scenarios for using external benchmarks?

Liz Kislik: I think the real issue with external benchmarking is that people just want to find out what other companies are doing and then compare themselves to that. In the process, they’re either trying to identify best practices to follow, or to compare and justify their current practices. Or else they want to take the data to some group within their own organization and shake it up.

Curt Barry: We started doing benchmarking studies back in 1996, and to date we’ve benchmarked a couple of hundred companies. Over the years, we’ve developed a series of benchmarks with our clients. People do want to know what others are doing. But how you define the data and then use them for comparison is the hard part of this.

The No. 1 thing people should do is identify the general areas to focus on for improvement, either to improve customer service and productivity or to reduce costs. A basic tenet of benchmarking is this: You can’t improve something you haven’t measured.

Kislik: When most practitioners see surveys of what businesses are doing, they don’t fully consider the comparability issue.

Barry: I agree that some of those studies I see put out by other consultants or by publishers are really not helpful, because they use averaged or summarized data. Data need to be detail-specific by company, so you can compare them to your company.

We publish detailed data in our studies by company name, so you’re able to see which companies answered the questions and get behind the numbers. That’s the key to benchmarking. If you’re an apparel company, you shouldn’t compare yourself with just any other apparel company. You have to look at AOV [average order value], number of calls answered, number of contact center seats, length of call, number of line items per order, etc. These and other factors are the key drivers of those benchmarks.

Kislik: People tend to look for the big number in the sky. If they find it, they think they’ll then know what to do to achieve it within their own organizations. That’s just not true. Some of what ends up getting benchmarked isn’t sufficiently defined as an activity. The units need to be comparable. For example, look at emails answered per hour. Are the emails closed-ended, meaning they require merely factual answers, such as “When will my order be shipped?” Or are they open-ended questions, for example, “How do I put two components together for this product?”

Barry: When you look at benchmarking data for multiple companies, you’ll want to know which companies are efficient or offer good customer service. Additionally, we’ll often throw out the highs and lows in our benchmarking studies, as they may be unrealistic levels to try to achieve, or they can signal an error in the data. We use a weighted average to represent the group more fairly. For example, if you look just at calls taken per agent per hour, a super-efficient company may have a high number there. Or it may use IVR [interactive voice response] for a high percentage of transactions.
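
To make the trimming and weighting concrete, here is a minimal sketch in Python. All of the company names, rates and call volumes are made up for illustration; the approach simply drops the highest and lowest rates, then weights each remaining company by its call volume so a small shop doesn’t count as much as a large one:

    # Hypothetical data: (company, calls handled per agent per hour, annual call volume)
    companies = [
        ("A", 14.0, 250_000),
        ("B", 11.5, 900_000),
        ("C", 25.0, 40_000),   # suspiciously high: possible IVR effect or data error
        ("D", 12.0, 600_000),
        ("E", 6.0, 30_000),    # suspiciously low
    ]

    # Throw out the highest and lowest rates before averaging.
    trimmed = sorted(companies, key=lambda c: c[1])[1:-1]

    # Weight each remaining company by its call volume.
    total_volume = sum(vol for _, _, vol in trimmed)
    weighted_avg = sum(rate * vol for _, rate, vol in trimmed) / total_volume

    print(f"Trimmed, volume-weighted benchmark: {weighted_avg:.1f} calls/agent/hour")

Here the two outliers never touch the benchmark, and the big call centers pull the average toward their own rates, which is the point of weighting.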

Kislik: The real question, I think, is why you want to do a comparison in the first place. What are you going to use it for? In some cases, it may be better to compare dissimilar companies. This is especially important for companies that have been doing the same things in the same ways for many years and are mired in their own processes. They may be better off asking what’s possible, rather than what’s average.

Barry: That leads me back to my contention that best practices may be more valuable than individual pieces of data. Benchmarking will point to a specific function or area to review for improvement. But detailed analysis of process and implementation of best practices is how you effect change.

Kislik: Plus, the average itself could be a terrible number for, say, performance levels. If a study finds some average on customer service and you’re doing better than that, great. In that case, it doesn’t really matter what everyone else is doing, because you’re doing well.

Or say that the average contact center turnover is 65 percent. I’m just making up a number here. And your turnover is 50 percent. You could say, “Wow! I’m doing well.” But what if those 50 percent of your staffers are leaving because you have a terrible manager, and in a few months some of those who left file lawsuits against you? Still think your lower-than-average turnover rate is commendable?

Barry: The Internet is radically changing these benchmarks, so that has to be taken into consideration here, too. Say you have two $100 million-a-year companies, each with a $100 AOV and two-line orders. But one gets 10 percent of its orders via the Net, and the other gets 50 percent. Most of the ratios will be radically different, including cost per order, cost per contact, call-to-order ratio, etc.
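
Here is a quick back-of-the-envelope sketch in Python of why the order mix moves the ratios. The AOV and revenue figures come from Curt’s example; the per-order handling costs are assumptions made up purely for illustration:

    # $100M in annual sales at a $100 AOV = 1,000,000 orders a year.
    orders = 100_000_000 / 100

    # Assumed handling costs (illustrative only, not from the interview).
    COST_PER_PHONE_ORDER = 4.00  # live agent time, telecom, occupancy
    COST_PER_WEB_ORDER = 0.50    # self-service

    for label, web_share in [("10% via the Net", 0.10), ("50% via the Net", 0.50)]:
        total_cost = (orders * (1 - web_share) * COST_PER_PHONE_ORDER
                      + orders * web_share * COST_PER_WEB_ORDER)
        print(f"{label}: cost per order = ${total_cost / orders:.2f}")

Same AOV, same revenue, yet under these assumptions the cost-per-order benchmark differs by more than a third ($3.65 vs. $2.25), which is Curt’s point about getting behind the numbers.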

Kislik: That’s true. You have to consider your customer population. If you have a lot of older customers, they may be more prone to use the phone than the Net.

CS: Please name some common external benchmarks you see catalogers using.

Barry: We track all kinds of data, including call abandonment rates, percentage of calls answered in 20 seconds or less, email turnaround time, calls answered per hour, emails handled per hour, average call time, call-to-order ratio, calls answered per square foot of the contact center, cost per order, cost of credit, cost per contact, cost per call, percent to net sales for major expenses such as indirect and direct labor, benefits, training, recruiting, telecom, occupancy, and much more.

We try to see how those vary among companies within our ShareGroups. But it’s tough, because the AOV among those ShareGroup companies can range widely, from, say, $45 to $400.

Kislik: Curt’s array of costs to measure is impressive. Companies not participating in this kind of rigorous, ongoing study…well, some companies just can’t get near that level of data; they can’t handle it.

Barry: Companies find real value in capturing and collecting the benchmark data. They often learn many things they didn’t know about their operations, which leads to positive changes in process.

Kislik: Curt has been talking about efficiency and costs. But if companies are looking to increase revenue or boost service levels, they may not want to look specifically at costs. Some companies want to know who’s got the best upsell program, for example, or who’s increasing order value. I’d like to see what percentage of customer inquiries and complaints are resolved on first contact. And from a selling point of view, not just orders per hour, I’d like to see a call-to-order ratio, so you can see how much work the enterprise has to do to get the order.

Barry: The universal agent [a rep who handles phone calls, emails and other contacts interchangeably] part of this equation also is important. It makes it much more difficult to get accurate data about activities and units of work. Half the companies we deal with have universal agents, and your ability to analyze that data is part of the challenge there. What’s key with benchmarking is measuring year to year and season to season.

CS: Are internal benchmarks more helpful than external ones?

Barry: One of the most effective ways to benchmark is against yourself, one season to another, or year to year, against a standard or expectation. External benchmarks give you a general idea of where to zero in.

Internal benchmarking allows you to focus on trends in specific areas and make positive changes where possible. You may have internal benchmarks that say some aspect of your operations is at 90 percent, but if you have no external benchmark of what’s good, you could mislead yourself into thinking 90 percent is good when it may not be. A combination of internal and external benchmarks works best.

One thing you have to do is look at weighted averages. Otherwise, poor performers or superior performers skew the results.

Kislik: That’s really true. Let’s look at, say, labor rates in different parts of the country. Curt, do you have data on that?

Barry: Contact center reps in the Northwest make, on average, $12 to $14 per hour. In the Midwest it’s $7 to $9 per hour. If labor makes up 50 percent of your cost per order, as it does for most catalogers, your costs will be very different depending solely on where you’re located. Looking at cost per call or cost per order without getting behind the data to the labor rates will mislead you.
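
As a rough illustration in Python, using the midpoints of the wage ranges Curt quotes and an assumed, identical productivity figure for both regions:

    # Midpoints of the quoted wage ranges; calls per hour is an assumption.
    NORTHWEST_WAGE = 13.00      # midpoint of $12-$14/hour
    MIDWEST_WAGE = 8.00         # midpoint of $7-$9/hour
    CALLS_PER_AGENT_HOUR = 10   # assumed, identical in both centers

    for region, wage in [("Northwest", NORTHWEST_WAGE), ("Midwest", MIDWEST_WAGE)]:
        labor_per_call = wage / CALLS_PER_AGENT_HOUR
        # If labor is ~50 percent of the total, the full cost is roughly double.
        print(f"{region}: ${labor_per_call:.2f} labor per call, "
              f"~${labor_per_call * 2:.2f} total cost per call")

Identical operations, yet a 60-plus percent gap in cost per call driven entirely by geography.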

CS: When in your view does benchmarking become particularly problematic?

Barry: Financial people often want to know ratios such as percent to net sales. These can be misleading if you make a comparison between companies with widely differing AOVs, conventional vs. automated sortation in the warehouse, and so on. It’s better to focus on units of work: calls or emails answered, cost per order, etc. These can be compared between similar companies.
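
A short Python illustration of the distortion, using the $45 and $400 AOVs mentioned earlier and an assumed $2 cost per call (one call per order, for simplicity):

    # Same assumed $2.00 cost per call, one call per order, at two different AOVs.
    COST_PER_CALL = 2.00  # assumed, for illustration

    for aov in (45, 400):
        pct_of_net_sales = COST_PER_CALL / aov * 100
        print(f"AOV ${aov}: cost per call = {pct_of_net_sales:.1f}% of net sales")

The per-unit cost is identical, but as a percent to net sales one company looks nearly nine times worse than the other (4.4 percent vs. 0.5 percent).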

Kislik: And don’t forget the service levels. Say I’m a b-to-b cataloger, or I sell only high-end merchandise. I have a limited pool of customers; I can’t afford to lose even one. It’s better for me to be inefficient in some cases and spend more time on the phone or make more calls per sale than to lose a customer.

Barry: Benchmarking also becomes problematic when you don’t define the data clearly, when data comparability isn’t possible, or when your business changes dramatically, for example, when you expand your merchandise mix. Say you’ve always sold apparel, but you start selling home décor, too. Your ratios will be different for both internal and external benchmarks.

Kislik: The most problematic part of benchmarking is when people go to sessions at a conference and hear a panel of three practitioners who share some data element. The conference attendee then goes home and says, “We need to get our x to this level that they quoted at the conference.”

Barry: You can’t take someone else’s numbers as your own unless the businesses are identical, and few are.

Kislik: Or he uses the number as a prod to force his operations to some level of performance. That tactic can have very damaging consequences.

CS: Can you give an example?

Kislik: Agent talk time. Sometimes when companies track that, they end up with draconian supervisory techniques. They do all kinds of harsh things to keep people in their seats. But employees will find ways to rebel. They’ll pace themselves so they get lost in calls; their call times get very long, which can lead to lots of calls in queue and bad customer service. They’ll take their breaks while on the phone.

CS: How could someone participate in a benchmarking study?

Barry: You have to start at a level that your organization can support from a data capture and collection perspective. The data have to be something you can gather. Then you build slowly, looking for companies within your same industry. Join a ShareGroup. You have to be willing to share your data and best practices.

Kislik: Businesses must go in in a trusting, sharing, candid mode. The worst way to start is to go in thinking, “I’m going to get the skinny on other companies.” When that happens, other participants detect it and become protective of their own data, so you don’t get the sharing of useful information you want.

Barry: Being trustworthy is at the top of our list for ShareGroup participants. We uninvite people every year because they want to give only, say, 15 answers to a survey that asks 100 questions. That’s not good enough. We’ve had people submit a prior year’s data. We try to build checks and balances into our data validation.