Proprietary Metrics — the Next Big Thing in Talent Management

The idea that you can create a template that will work forever doesn’t happen in any business … There’s some really, really bright people in this business. You can’t do the same thing the same way and be successful for a long period of time. — Billy Beane

I am a strong advocate of what I call “parallel benchmarking,” which is borrowing proven best practices from completely different industries and functions. This article advocates borrowing and adapting to talent management what are known as “proprietary metrics” from the baseball industry. Proprietary metrics get their name because they are so powerful that they are “owned,” and their components are therefore not shared. In baseball there are dozens of proprietary metrics, while in the corporate world of talent management they are surprisingly rare. Corporate examples include Google’s “retention metric” for predicting which employees are about to quit and its “hiring success algorithm” for predicting the characteristics that lead to new-hire success on the job.

Baseball Has the Most Advanced Metric Model to Learn From

You might not know it, but baseball metrics (known as sabermetrics) are literally years ahead of the metric practices in talent management. Most talent metrics are calculated but once a year, and they merely inform the user about last year’s results. In direct contrast, most baseball metrics are provided in real time on the scoreboard for all players and managers to see precisely when they need them, right as they are making a decision. Many baseball metrics are also “predictive talent decision metrics” that accurately guide executives in important talent decisions, including whom to hire, how much to pay, and how long a player will continue to add value. Even “old-school” baseball managers now realize that the use of metrics for talent decisions can result in more productive hires, increased revenue, and significantly more wins.

The Value Gained by Not Sharing Your Metrics

Another critical lesson to learn from baseball relates to the value of sharing or not sharing the details behind your metrics. During the early Moneyball era in baseball, metrics were open and commonly shared by all teams. While this universal “open-source” sharing made it easy for teams to compare their performance against each other, the fact that every team used the same metrics meant that no individual team could gain a competitive advantage. It took a few years, but eventually baseball executives realized that increasing performance above that of your competitors was a critical goal. So in an attempt to develop a competitive advantage in metrics, the best teams and some vendors started to develop what are now known as “proprietary metrics” (examples include WAR, Ultimate Zone Rating, and StatRank).

Proprietary metrics in both baseball and talent management by definition are unique and valuable, so the data used, the methods for collecting the data, and the components in the metric formula are all treated as valuable secrets. This exclusive or limited use allows executives using the proprietary metrics to make better talent decisions than their competitors.

A List of Proprietary Talent Management Metrics to Develop

If you have weak metrics, keeping them secret obviously doesn’t by itself add much value. What is needed are advanced talent decision metrics that provide such measurable insight and value that you want to keep them secret as long as possible. Whether you are a corporation or a vendor, you should be constantly striving to develop these “proprietary metrics” that, when used correctly, significantly improve talent management decisions and results. In order to have a large impact, proprietary metrics in most cases have to be developed in areas where no current metrics exist. Some areas where I suggest that proprietary talent metrics should be developed in the corporate world include:

Recruiting/hiring
  • The factors or algorithm that predicts candidate on-the-job performance and retention
  • A metric that shows what the level of competition for external top talent will be 6 to 12 months into the future

Retention
  • A risk metric that shows which employees have a high probability of quitting within six months
  • A metric that predicts what the turnover rate by manager will be 6 to 12 months into the future

Leadership
  • An algorithm that successfully identifies leadership potential in team members with less than two years at the firm
  • A leadership algorithm that predicts a leader’s success over the next two years based on the actions that they take
  • Calculating in which cases moving and retraining existing workers has a higher return on investment than externally hiring new ones
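To make the retention-risk bullet above concrete, here is a minimal sketch of how such a metric might work. Everything in it is hypothetical: the feature names, weights, and baseline are invented for illustration. In practice the weights would be fit on historical quit/stay data (for example, with logistic regression), and those fitted weights are exactly the component a firm would keep proprietary.

```python
import math

# Hypothetical feature weights. In a real proprietary metric these would be
# fit on historical quit/stay data, and kept secret.
WEIGHTS = {
    "months_since_last_raise": 0.08,
    "declined_promotion": 1.2,
    "manager_rating_drop": 0.9,
    "external_recruiter_contacts": 0.5,
}
BIAS = -3.0  # baseline log-odds of quitting within six months

def quit_risk(employee: dict) -> float:
    """Return the modeled probability that an employee quits within six months."""
    log_odds = BIAS + sum(w * employee.get(k, 0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-log_odds))  # logistic function maps log-odds to [0, 1]

# An employee showing several risk signals scores high; one with none scores low.
high = quit_risk({"months_since_last_raise": 18, "declined_promotion": 1,
                  "manager_rating_drop": 1, "external_recruiter_contacts": 3})
low = quit_risk({})
print(f"high-signal employee: {high:.0%}, no-signal employee: {low:.0%}")
```

The point of the sketch is that the metric's *name* ("retention risk") reveals almost nothing; the data sources and the weights are what would take a competitor years to replicate.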

Productivity/innovation

  • A metrics process that identifies the job-related factors that increase employee productivity and innovation
  • A metric that accurately identifies innovators among candidates and recent hires

Compensation/rewards
  • An algorithm which shows which reward and recognition factors have the greatest impact on improving employee productivity
  • A metric which accurately determines which employees are under or overpaid
  • Predicting how many years into the future an individual employee will remain productive and “worth their salary”

Business case metrics

  • Calculating how much the value of a replacement new hire is above (or below) the value produced by the average current employee.
  • Calculating the increased dollar impact for each percentage increase in new hire on-the-job performance.
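The second business-case bullet can be sketched with simple arithmetic. The revenue and hiring figures below are illustrative assumptions, not benchmarks; plug in your own firm's numbers.

```python
# Illustrative assumptions: average revenue generated per employee per year,
# and the number of external hires made per year.
avg_revenue_per_employee = 250_000
hires_per_year = 400

def dollar_impact(performance_gain_pct: float) -> float:
    """First-year revenue value of raising average new-hire on-the-job
    performance by `performance_gain_pct` percent across all hires."""
    return avg_revenue_per_employee * (performance_gain_pct / 100) * hires_per_year

print(f"${dollar_impact(1):,.0f} per 1% improvement in new-hire performance")
```

Under these assumptions, each one-percent improvement in average new-hire performance is worth $1,000,000 in first-year revenue terms, which is the kind of direct-dollar framing that makes a business-case metric persuasive to executives.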

The proprietary metrics mentioned above might seem far-fetched to many talent management leaders, but some of them are already being used for improving baseball talent decisions.


In a Competitive World, Metrics Must Also Be Continually Improved 

Another lesson to be learned from baseball is that no matter how good your array of metrics is initially, it will eventually be copied and even exceeded by your competitors (as baseball guru Billy Beane stated above).

Keeping your best metrics proprietary will work up to a point. But in order to remain competitive, you must have a process for continually upgrading your talent metrics, so that your organization stays in the lead in understanding the factors that drive current performance and reveal future performance. Next-step metrics for most talent functions start with the development and use of real-time metrics that help managers make decisions based on today’s data. And at some point, talent decision makers will begin to demand predictive metrics that tell you in advance how you must act today in order to ensure superior future results.

The last, and perhaps most important, metric frontier is the development of business-case metrics, which show you the direct value-chain connection between improving talent management results and the subsequent improvement in business results. This last step is essential because nothing increases funding and credibility more than quantifying and showing your direct dollar impact on corporate strategic goals.

What Exactly Should Be Kept Secret and Proprietary?

In baseball, some teams will reveal the name and even the value of their proprietary metrics. But unless you are going to sell them, in the corporate world I wouldn’t reveal even those two facts, because the mere knowledge of a metric’s existence and success will encourage others to develop similar metrics. You should also strive to keep secret the following metric-related components:

  • The data needed to calculate the metric
  • Where, when, and how that data is gathered
  • The elements and their weight in the formula for the metric
  • What constitutes a passing or failing score on the metric
  • Which talent decisions are improved by the metric
  • Common problems involved in using the metric
  • The models developed as a result of the metric
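To illustrate why the formula elements and their weights (the third item above) are worth guarding, consider this hypothetical sketch: two firms publish a metric with the same name, "quality of hire," built from the same three inputs, yet score the same hire differently because their weightings differ. All names and numbers are invented.

```python
# Two firms' versions of a "quality of hire" score over the same inputs.
# The published name is identical; the secret weights are what differ.
def quality_of_hire(perf, retention, ramp_speed, weights):
    w_perf, w_ret, w_ramp = weights
    return w_perf * perf + w_ret * retention + w_ramp * ramp_speed

candidate = dict(perf=0.7, retention=0.9, ramp_speed=0.4)

firm_a = quality_of_hire(**candidate, weights=(0.6, 0.3, 0.1))  # performance-heavy
firm_b = quality_of_hire(**candidate, weights=(0.2, 0.3, 0.5))  # ramp-speed-heavy

print(round(firm_a, 2), round(firm_b, 2))
```

Knowing only that a competitor uses "quality of hire" tells you nothing actionable; reverse-engineering their weights, data sources, and pass/fail thresholds is the hard part, which is why those components belong on the secret list.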

Final Thoughts

After decades of work in metrics, I have found that both corporate recruiting and talent management are literally years behind in the adoption of all forms of advanced metrics. Google, of course, is the lone exception, with a variety of proprietary algorithms and its employee research lab. Google is also internally focused, so it avoids the use of benchmark comparison metrics with other firms.

A handful of ERP and talent management vendors have actually developed some proprietary metrics, but for the most part, I can’t honestly say that I have found them to be worthy of being kept secret. Instead, what is needed are bold corporate talent leaders who are not afraid to study and learn from the types of talent decisions currently made in baseball. Corporate leaders should then proactively identify new talent areas where a metric could explain why things are happening, what will happen in the future, and the correct actions to take advantage of that future, because such metrics would add significant business value by increasing revenue, productivity, and innovation.

Obviously advanced and proprietary metrics are more difficult to develop, but the dollar business impact may be up to five times higher than using existing “copycat” low-value metrics like cost per hire or the number of training hours provided. So the last step is for leaders to stop worrying about benchmark comparisons with other firms and instead to focus on metrics that provide quarterly and year-to-year double-digit improvement in their own talent results.

Some of the Related Conference Sessions at the ERE Recruiting Conference in San Diego:

  • Using Big Data to Drive Measurable Recruiting Results: Getting Past the Hype, Wednesday, April 23, 4:15 p.m.
  • Stop Slowness from Killing Your Recruiting Department, Thursday, April 24, 2 p.m.

Dr. John Sullivan, professor, author, corporate speaker, and advisor, is an internationally known HR thought-leader from the Silicon Valley who specializes in providing bold and high-business-impact talent management solutions.

He’s a prolific author with over 900 articles and 10 books covering all areas of talent management. He has written over a dozen white papers, conducted over 50 webinars and dozens of workshops, and been featured in over 35 videos. He is an engaging corporate speaker who has excited audiences at over 300 corporations and organizations in 30 countries on all six continents. His ideas have appeared in every major business source, including the Wall Street Journal, Fortune, BusinessWeek, Fast Company, CFO, Inc., NY Times, SmartMoney, USA Today, HBR, and the Financial Times. In addition, he writes for the WSJ Experts column. He has been interviewed on CNN, the CBS and ABC nightly news, and NPR, as well as many local TV and radio outlets. Fast Company called him the “Michael Jordan of Hiring,” he has been called “the father of HR metrics,” and SHRM called him “one of the industry’s most respected strategists.” He was selected among HR’s “Top 10 Leading Thinkers” and was ranked No. 8 among the top 25 online influencers in talent management. He served as the chief talent officer of Agilent Technologies, the HP spinoff with 43,000 employees, and he was the CEO of the Business Development Center, a minority business consulting firm in Bakersfield, California. He is currently a professor of management at San Francisco State (1982 – present). His articles can be found all over the Internet and on his popular website. He lives in Pacifica, California.



15 Comments on “Proprietary Metrics — the Next Big Thing in Talent Management”

  1. As David St. Hubbins said, “it’s a fine line between clever and stupid”. Likewise, in many aspects, it’s a fine line between metrics and numerology.

    Numerology is a belief in the divine, mystical or other special relationship between a number and some coinciding events. It has many systems and traditions and beliefs. The term numerologist is also used for those perceived to place excess faith in numerical patterns (and draw scientifically unsound inferences from them), even if those people do not practice traditional numerology. For example, in his 1997 book Numerology: Or What Pythagoras Wrought, mathematician Underwood Dudley uses the term to discuss practitioners of the Elliott wave principle of stock market analysis.

    One might suggest the same sorts of dangers attend to the calculation of an algorithm that predicts candidate on-the-job performance and retention or a metric that shows what the level of competition for external top talent will be 6 to 12 months into the future. We know that these results are mainly emergent; not entirely random, but often not practically predictable either.

    Proprietary metrics would seem to require extra layers of caution and validation; if you could successfully and reliably forecast hiring results and labor market conditions years ahead of time, you would be in the wrong business!

    The power of numbers to describe is great; to predict, rather less so. Pointing that out does not mean measurement and development of metrics is a waste of time.

    Like members of a cargo cult, the mystics of metrics deride caution or criticism; they reject any notion that some events are beyond useful measurement or that some decisions cannot be made by pure quant methods. They paint critics as uninformed and fearful of the truths revealed by these numbers.

    I’m all for good metrics, but I caution against haphazard development and inappropriate use of them.

  2. “The power of numbers to describe is great; to predict, rather less so.”

    Well said. The problem with this approach is it requires investment a lot of companies won’t make, and absent that, understanding a lot of people don’t have. Most such metrics I’ve seen developed by people were nothing more than broad associations, temporary correlations assumed to be causative. What’s more, even when companies outsource these functions they don’t use them correctly, usually by not taking the time to develop models well, or to test their existing workforce to determine just what the hell it is they’re looking for.

    When it comes to metrics, the basics usually are best to go with. WRT recruiting, time to hire and quality of hire, cost of hire, etc., the classics.

    “You can’t beat a classic,” Mr. Christmas.

  3. Thanks Dr. Sullivan.
    The fact that something can be measured doesn’t mean that it should be measured. There’s a tradeoff between the value gained from the information and the cost (real or opportunity) of its acquisition. IMHO, if you asked the recruiting staff of any organization, you’d find very little interest in spending money on a recruiting analyst to develop these metrics and algorithms, or on the additional staff to collect, compile, and analyze them. (The money and resources could be far better spent on what the recruiting staff itself needs.) To restate Richard: a recruiting organization needs to concern itself with putting quality butts in chairs on time and within budget, and attempting to concern itself with more than that decreases recruiting efficiency and increases corporate bloatocracy.



  4. I feel compelled to start out on a positive note in light of the naysayers who have chimed in thus far.

    I would like to offer the viewpoint that it is not only the numbers that enable successful predictions, but also the person or people who conduct the predictive process, who may have a unique sense of intuition about the outcome and thus can decrease the margin of error of a prediction significantly, beyond what any “non-intuitive” scientist could achieve.

    This “intuition” that I propose as a significant factor stated previously here, can only be made probable as a factor in any specific prediction through a scientific determination that the person, or persons making the prediction has achieved a level of Subject Matter Expertise (SME) and/or widely recognized rate of success in such specific SME predictions.

    As for the players in MLB being used as a standard by which any other employees are to be compared, let’s not forget that MLB is a union sport.

    Consequently, any player’s career decisions or statistics must be put in perspective as a “closed-loop,” union-based model which, in my opinion, can never be used in a non-union scenario.

  5. The current US financial situation was essentially created by people who convinced themselves that they could predict the future with numbers, and whose ‘models’ told them they could mitigate the risk of financial loss to the point that they started giving home loans to people with no credit who were in prison. ‘Intuition’ is a nice concept but it’s not quantifiable or verifiable by definition, for the same reason people fall for psychics’ acts; they remember hits and forget misses.

    Most predictive data is really correlative data. People gather information on top performers and project that backward, looking for the same traits in the people they want to hire. This is extremely problematic because you have no idea whether you have really measured something causative or just a chance correlation. Also, their experience in your company, from the date you hired them to the date you tried to quantify the aspects of their character you want to emulate, biases the measurement. If people truly want to try to predict success, they can only measure people before hire, then make the hire blind to the results, and once they know the successes from the failures, try to find the cause using only the testing data gathered prior to the hiring decision. Testing people after you already know they’re a success and then looking for those traits in applicants begs the question; everything about that person’s personality and style can be assumed to be a reason why they succeeded, even if it developed since, and even potentially as a result of, their hire. It’s so prone to bias it’s nearly useless. If you want to do this testing in a truly scientific, meaningful way, it has to be randomized and double blind, and no company will do that.

    Not to mention that this all ignores the elephant in the room, only one person can be hired for a position, but that doesn’t mean only one person could have done that job. So if you take a typical position, say you only use active applicants. 100 people apply. Assume 80% of them have relevant experience. Assume it gets pared down to ten candidates via this or that process. The HM interviews all ten, picks one, they’re hired and they do well.

    That’s still a 90% rejection rate of interviewees alone, based simply on the fact that there’s only one position. Does anyone honestly believe all nine of the others couldn’t do the job as well as the one hired? Or that the remaining 70 applicants with relevant experience, who didn’t get to the interview stage, couldn’t do it? In this scenario the existence of one “winner,” or hire, essentially means someone decided that 99% of the people who applied couldn’t do the job as well as the one who was hired. And we know there’s an error rate in new hires, so these decisions are often incorrect. But more importantly, this means selection criteria other than actually being able to do the job are dominant in hiring. Otherwise hiring would just be a matter of hiring the first person you come across who can do the job, end of process.

    It forces you to ask the question of just what information and criteria people are actually basing their decisions upon. The ability to do the job is not it; rejection rates are too high and driven by the fact that each position is discrete and can only be filled by one person. And if you ask in hindsight what differentiated each hire, even if it’s for a similar position with the same hiring manager, they’ll give you a different answer every time. Nor will any company willingly submit to the double-blind standard necessary to really try to predict what does in fact matter and is a reliable predictor of success. Nor would such testing even make hiring easier or necessarily better, because a scenario of being presented with two or more people with roughly equal probability of success is not impossible, in fact it’s probably very likely, and who do you choose, and why, in such a scenario?

    I’m forced to think the hiring process is nearly random in many instances, and will remain so for the foreseeable future. And even with effective testing, people would be forced to choose one among many, which would mean you still get the same biases and mistakes, maybe minimized but still ever present.

  6. @ Richard: “I’m forced to think the hiring process is nearly random in many instances, and will remain so for the foreseeable future.” Home run! I do not know if this is true, but I have often suggested that a concerted effort by parties only invested in fact-finding (and not nest-feathering) work to determine the real parameters of hiring process validity: aka, coming up with “Generally Accepted Recruiting Practices” (GARPs) to know what does work, doesn’t work, or doesn’t matter.

    I also think this is unlikely to occur; there is too much money and too many careers to be made/kept under the current dysfunctional “Faith-Based Recruiting System” (FBRS). What is a FBRS (“Fibbers”)? It is a recruiting system where whatever the *people who dictate or manage the recruiting system say is the “right way” IS the “right way,” apart from any verified facts or even accumulated anecdotal experience. Meanwhile, let us continue to be paid fiddlers while Rome burns…

    Happy Friday,

    *Even if they have no/little/no recent recruiting experience, and often guided by likewise non-experienced shills, sycophants, and suck-ups playing to their vanity and telling them what they want to hear. I’m glad we don’t have people like that on ERE.

  7. Non-constructive criticism=Whining.

    If you whiners could add whining 2.0 to your resume, I would think you would get an interview…Maybe. Honestly you can’t even do that well.

    I will set my auto discussion robot to ignore should any of you-You know who you are- be on a discussion.

  8. @ Claudio: So far, Martin, Richard, you, and I have commented. I’ve found the comments to be both interesting and informative, with practical, constructive components. Who are you talking about being “whiners”?


  9. @Claudio

    In all honesty that’s the standard reply from someone who either doesn’t understand testing and the pitfalls and biases, or someone who wants to brush them under the rug. I’ve posed all those questions to numerous testing agencies, including Profiles International and Kenexa, both services I’ve used before. None could answer, and admitted they were major problems.

    If you test a top performer at your company against a standard, you have no way of knowing what traits were there when he was hired vs developed on the job, or during his tenure there in some other way. That ‘pattern’ whether it’s a measure of personality or so called hard skills, is not necessarily going to help you hire better people. But it will be marketed as such a tool and used to include and exclude people. Many of the assessments I’ve seen are filled to the brim with out and out Barnum statements. You may as well consult a psychic.

    So, if you don’t understand the need for a double blind protocol and the problem of confirmation bias and other such issues, that’s really your weakness, not anyone else’s. If pointing those issues out is whining, then I assume you have something to sell with regard to testing.

  10. @ Richard: Our readers and we need to remember one of the cardinal principles of recruiting (and other forms of commerce):
    The value in a product, service, or advice is not in its efficacy, but in its salability: caveat emptor.


  11. Gentlemen, I apologize for interpreting your frustration with recruiting as whining. Perhaps I was a bit too fast to judge.

    As I am sure you have experienced after being in your chosen field for a long time, you get a bit tired of the generalized statement of the obvious… recruiting is an imperfect, impersonal, biased and mostly inane practice. No kidding. So is every other business decision out there.

    Now providing ideas and solutions, that is constructive. However, any concept that uses the word blind in it will not get very far with me.

    Let me clarify. I am Not a proponent of Behavioral testing or personality testing or any such snapshot of how a person responds to a line of questioning that they do not appreciate nor believe to be valid.

    On the other hand, every comment we are making now online is part of our permanent record. I am sure that if you were to look at it that way, you may be more careful about how you lash out at things you really don’t understand.

    Group discussions and blogs are an opportunity for people to raise their online sentiment analysis and reputation. These should only be done as you would do any other public conversation and that is to stick to what you know. If you don’t understand something, discuss it privately and do the research.

    For you to publicly, and permanently profess your ignorance on a subject as easily researched as recruiting will get you no support in the world of online reputation, only negative sentiment which will open you up to even more inane scrutiny of those that care enough to dissect every word you have said against you in the court of gainful employment.

    e.g. it offers proof that you don’t understand the politics of the online generation.

    Make sense??

    Now if you want to get better informed of how to get your next dream job, then get on your keyboard and keep digging for the answer. It is out there and you will be a better person for it.

    I have spent the better part of 14 years as an independent professional recruiter researching ways to change a broken system and I am getting closer every day.

    Best of luck in all you do.

  12. “Make sense??”

    Yup, you’re a troll. Not a particularly good one either. Though, I’ve got to hand it to you, several paragraphs with nothing actually said and some vague implied insults aimed at everyone and no one would have kept you going way longer in another forum. If you’re truly going to learn to troll, you need to learn to not pull back so quickly. I’d recommend practicing on 4chan or somewhere similar.

  13. In all honesty, if this is your best response, you really do need practice. You’re too blatant and easy to spot. If you had kept the conversation more on topic and laced some of those implied and direct insults in better, I’m sure you would have inspired many inflammatory responses from more readers. Your approach lacks finesse is all, it’s too easy to spot.
