Why Are Biodata Prediction Systems Better Than Tests?

The Internet is now the medium of choice for delivering the broad range of biodata-based prediction systems offered by e-Selex.com. These systems have several important advantages over traditional knowledge, skill, and ability tests. The following is a brief overview of the key differences between biodata prediction systems and tests, and of the practical and legal advantages those differences create.

Tests measure individual characteristics that may not be job-relevant.


Tests are developed to measure characteristics of a person, such as knowledge, skill or ability. Unfortunately, these personal characteristics are not necessarily related to a job-relevant outcome such as job success. For example, a person's bookkeeping or accounting skill is probably not relevant to success as a firefighter or peace officer.

By contrast, each of our prediction systems is designed to focus first and foremost on a specific outcome (or criterion measure) that an organization wants to impact. Each predictor attempts to capture a portion of that outcome in advance of its occurrence.

Test items are developed for internal consistency, not predictive validity.


Tests are developed through item analyses designed to increase internal consistency. The goal is to make each item a parallel version of every other item, thereby creating a pure measure of the underlying characteristic.

Conversely, items on our predictor scales are developed to have very low internal consistency, but each item contributes criterion-related validity for predicting a specific outcome measure, such as job performance.
Because our predictor items correlate only weakly with one another, each item contributes validity that does not overlap with the validity of the other items, so item validities are largely additive. Because test items are developed to have high internal consistency, whatever criterion-related validity each test item provides overlaps heavily with the validity of the other items and is therefore non-additive. The result is lower overall predictive validity for the test than for our prediction systems against specific outcome criteria.
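
To make the arithmetic concrete, here is a minimal sketch using the standard formula for the correlation of a summed score with a criterion. The item statistics are assumed for illustration only; they are not taken from our validation data.

    # Illustrative only: why weakly intercorrelated items can yield a higher
    # composite validity than highly intercorrelated items. Item statistics
    # below are assumed, not e-Selex validation results.
    import math

    def composite_validity(item_criterion_r, inter_item_r, k=10):
        """Correlation of a k-item sum score with the criterion, assuming every
        item correlates item_criterion_r with the criterion and inter_item_r
        with every other item (all variables standardized)."""
        return (k * item_criterion_r) / math.sqrt(k + k * (k - 1) * inter_item_r)

    # Ten "test-like" items: r = .20 each with the criterion, r = .60 with each other.
    print(round(composite_validity(0.20, 0.60), 2))  # 0.25 -- item validities overlap
    # Ten "biodata-like" items: r = .20 each with the criterion, r = .05 with each other.
    print(round(composite_validity(0.20, 0.05), 2))  # 0.53 -- validities largely add up

With the same item-level validity, the weakly intercorrelated item set produces roughly twice the composite validity, which is the point of the paragraph above.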

With tests, the trick is to find the one right answer.


Test items are written to have a single right answer, and by design they score nearly correct answers as incorrect. In fact, the principal method test developers use to increase an item's difficulty is to make one or more of the wrong answers appear correct while remaining technically incorrect.

Having a single correct answer to each item makes tests highly vulnerable to various types of cheating. This fundamental feature of testing has been a particular problem with the Internet administration of tests.

Again, by contrast, our predictor items are scored using an empirical keying procedure. This procedure generally results in positive scores for more than one option to each item.
The actual relationship between each predictor item and the relevant criterion is often curvilinear, making it extremely difficult to raise one's score by giving false answers.
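
To show the general idea, here is a minimal sketch of one common empirical keying approach, in which each response option is weighted by the mean criterion score of the development-sample respondents who chose it. The items, options, and weights are hypothetical and do not represent our production scoring keys.

    # A simplified sketch of criterion-mean empirical keying. Items, options,
    # and outcome values are hypothetical, not a production e-Selex key.
    from collections import defaultdict

    def build_key(responses, criterion):
        """Weight each (item, option) pair by the mean criterion score of the
        development-sample respondents who chose that option."""
        totals, counts = defaultdict(float), defaultdict(int)
        for answer_sheet, outcome in zip(responses, criterion):
            for item, option in answer_sheet.items():
                totals[(item, option)] += outcome
                counts[(item, option)] += 1
        return {pair: totals[pair] / counts[pair] for pair in totals}

    def score(key, answer_sheet, default=0.0):
        """Sum the empirical weights of an applicant's chosen options."""
        return sum(key.get((item, option), default)
                   for item, option in answer_sheet.items())

    # Development sample: past biodata answers paired with later job performance.
    dev_responses = [{"yrs_supervised": "3-5", "certifications": "2+"},
                     {"yrs_supervised": "0",   "certifications": "0"},
                     {"yrs_supervised": "3-5", "certifications": "0"}]
    dev_criterion = [4.5, 2.0, 3.5]

    key = build_key(dev_responses, dev_criterion)
    print(score(key, {"yrs_supervised": "3-5", "certifications": "2+"}))  # 8.5

Because more than one option can carry a positive weight, and because weights can peak in the middle of a response range, there is no single "right" option for an applicant to guess.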

In fact, our prediction systems differ from tests at the most fundamental level: the basic nature of the task presented to the respondent. With tests, the response task is to choose the right answer from the options. By contrast, our prediction systems ask the applicant to give the answer that is most accurate, truthful, or factually correct in describing previous performance. Our items focus on verifiable outcomes, facts, and achievements. The correct answer is the one that describes the person most accurately, and that answer differs from person to person.

Tests measure peak ability, not typical performance.


Tests are maximum performance tasks. They tell how well a person can perform under circumstances of peak motivation, but do not tell how well a person will perform on a typical, day-to-day basis.

Unlike tests, our prediction systems focus on past and present outcomes of typical performance. We use those past outcomes to predict typical outcomes in the future. This strategy derives directly from the old axiom: The best predictor of future behavior is past behavior.
Evidence of past achievement captures the individual's ability to achieve, and also indicates the necessary motivation to achieve, because both ability and motivation are required for that past achievement to have occurred.

In other words, maximum performance measures such as skill and ability tests may tell the employer whether a person can do a problem-solving task. At best, however, this is only half of the equation. By comparison, past achievement shows both that a person can do, and that a person will do, what is needed to achieve a related successful outcome.

Tests have adverse impact on protected minority groups.


Most employment tests, particularly cognitive ability tests, show large differences in average scores between minority and non-minority groups; in statistical terms, these differences are often a full standard deviation or more. By contrast, our prediction systems often show near-zero differences in average scores for minorities and non-minorities.

Even when differences in average predictor scores are present, they tend to be small, and are almost always less than the difference in average minority and non-minority scores on the criterion itself.
In legal terminology, cognitive ability tests will almost always show adverse impact on protected minority groups, whereas our prediction systems almost never do. A finding of adverse impact alone establishes a prima facie case of employment discrimination.
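
For readers who want the mechanics, here is a simplified sketch of the two checks most often used in adverse impact analysis: the four-fifths (80%) rule applied to selection rates, and the standardized mean difference between group score averages. The applicant counts and scores below are hypothetical, not results from our studies.

    # Simplified adverse-impact checks. All counts and scores below are
    # hypothetical examples, not e-Selex study results.
    import statistics

    def four_fifths_ratio(sel_minority, app_minority, sel_majority, app_majority):
        """Minority selection rate divided by majority selection rate;
        a ratio below 0.80 is the usual signal of adverse impact."""
        return (sel_minority / app_minority) / (sel_majority / app_majority)

    def standardized_difference(majority_scores, minority_scores):
        """Difference in group means expressed in pooled standard-deviation units."""
        n1, n2 = len(majority_scores), len(minority_scores)
        v1, v2 = statistics.variance(majority_scores), statistics.variance(minority_scores)
        pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
        return (statistics.mean(majority_scores) - statistics.mean(minority_scores)) / pooled_sd

    print(four_fifths_ratio(12, 80, 30, 100))  # 0.5 -> below 0.80, adverse impact indicated
    print(four_fifths_ratio(24, 80, 30, 100))  # 1.0 -> no adverse impact signal
    print(round(standardized_difference([70, 80, 90, 100], [60, 70, 80, 90]), 2))  # 0.77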

The employer must then defend the use of the test through evidence of its validity and fairness. Even when the employer prevails, this can be an extremely expensive proposition in legal fees alone.

What's the bottom line?


All in all, these differences translate into substantial practical and legal advantages for Internet administration of our prediction systems. Compared to tests, our predictors are developed to provide predictive validity against specific outcomes such as actual job success. Their factual, verifiable content makes them ideally suited to Internet administration and highly resistant to falsification.
This predictive validity and resistance to falsification, coupled with little if any adverse impact on protected groups, offers a unique combination of usefulness and legal defensibility for the employer.

