Reliability

Reliability in research

In the physical sciences, researchers expect repeat experiments to produce nearly identical results. Ecologists and social scientists, on the other hand, understand that achieving identical results on repeat experiments is practically impossible: complex systems, human behavior and biological organisms are subject to far more random error and variation. While any experimental design must attempt to eliminate confounding variables and natural variation, there will always be some disparity in these disciplines. Reliability and validity are often confused; the terms describe two interrelated but distinct concepts.

This difference is best described with an example. Suppose a researcher devises a new test that measures IQ more quickly than the standard IQ test. If the new test delivers consistent scores when the same person takes it repeatedly, it is reliable; only if those scores also reflect the person's actual intelligence is it valid.

Reliability is a necessary component of validity but, on its own, is not sufficient. A test can be reliable but not valid, whereas a test cannot be valid yet unreliable.

A test that is extremely unreliable is essentially not valid either. A bathroom scale that reports a wildly different weight every time you step on it is unreliable, and therefore cannot be measuring your true weight. There are several methods for assessing the reliability of instruments. In the social sciences and psychology, testing internal reliability is essentially a matter of comparing the instrument with itself.

How can you determine whether each item on an inventory contributes equally to the final score? One technique is the split-half method, which cuts the test into two pieces and compares those pieces with each other.

The test can be split in a few ways: first half against second half, or odd-numbered items against even-numbered items. Note that split-half methods can only be used on tests measuring a single construct — for example, an extroversion subscale on a personality test. The internal consistency test, by contrast, compares two different versions of the same instrument, to ensure that they correlate and ultimately measure the same thing. For example, imagine that an examining board wants to test that its new mathematics exam is reliable, and selects a group of test students.

For each section of the exam, such as calculus, geometry, algebra and trigonometry, they ask two questions, each designed to measure the student's aptitude in that particular area. If there is high internal consistency — that is, if students' scores on the paired questions correlate strongly — the two versions of the exam can be considered equivalent. The test-retest method involves two separate administrations of the same instrument, while internal consistency measures two different versions at the same time. Researchers may use internal consistency to develop two equivalent tests to administer later to the same group.
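A sketch of that check in Python — the scores and topic names below are invented for illustration — correlating the two parallel questions within each section:

```python
import numpy as np

# Scores of six students on paired questions (invented for illustration):
# within each topic, question A and question B are designed to be equivalent.
pairs = {
    "calculus":     ([8, 5, 9, 4, 7, 6], [7, 5, 9, 3, 8, 6]),
    "geometry":     ([6, 7, 4, 8, 5, 9], [6, 6, 5, 8, 4, 9]),
    "algebra":      ([9, 4, 6, 7, 5, 8], [8, 4, 7, 7, 5, 9]),
    "trigonometry": ([5, 8, 7, 6, 9, 4], [6, 8, 6, 6, 9, 5]),
}

for topic, (q_a, q_b) in pairs.items():
    r = np.corrcoef(q_a, q_b)[0, 1]   # high r = the paired questions agree
    print(f"{topic:>12}: r = {r:.2f}")
```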

A statistic called Cronbach's alpha assesses this reliability by, in effect, comparing every possible pair of questions. Luckily, modern statistics packages take care of the details, saving researchers from doing the calculations by hand.
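The formula itself is simple: α = k/(k − 1) · (1 − Σσᵢ² / σₜ²), where k is the number of items, σᵢ² is the variance of each item and σₜ² is the variance of respondents' total scores. A minimal Python sketch, with a score matrix invented for illustration:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a score matrix (rows = respondents, columns = items)."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents answering four items (data invented for the example).
scores = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 2],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # values above ~0.7 are usually deemed acceptable
```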

There are two common ways to establish external reliability: the test-retest method and equivalent-forms testing. The test-retest method is the simpler of the two: the same subjects are tested once and then again at a later date, and the correlation between the two sets of results is measured. One difficulty with this method lies in the time between the tests, because it assumes that nothing has changed in the meantime. If the tests are administered too close together, participants can easily remember the material and score higher on the second round.

But if they are administered too far apart, other variables can enter the picture. To prevent learning or recency effects, researchers may instead administer a second test that is different from but equivalent to the first.

Anyone who has watched American Idol or a cooking competition will understand the principle of inter-rater reliability. One example is the clinical psychology role-play examination, in which students are rated on their performance in a mock session.

Another is the grading of a portfolio of photographic work, or of essays for a competition. Processes that rely on expert rating of performance or skill are subject to their own kind of error, however. Inter-rater reliability is a measure of the agreement, or concordance, between two or more raters in their respective appraisals — that is, how much consensus there is in the scores they give.

The principle is simple: if the judges give broadly similar assessments of the same performance, their ratings show high reliability; if they have wildly different assessments of that performance, their ratings show low reliability. Importantly, reliability is a characteristic of the ratings, not of the performance being rated. In psychometrics, for example, the constructs being measured first need to be isolated before they can be measured. For this reason, extensive research programs always involve comprehensive pre-testing, ensuring that the instruments used are both consistent and valid.

Those in the physical sciences also perform instrumental pre-tests, ensuring that their measuring equipment is calibrated against established standards.

Internal reliability assesses the consistency of results across items within a test. External reliability refers to the consistency of a measure from one occasion of use to another. The split-half method assesses the internal consistency of a test, such as a psychometric test or questionnaire.

It measures the extent to which all parts of the test contribute equally to what is being measured, by comparing the results of one half of the test with the results from the other half. A test can be split in half in several ways, e.g. the first half against the second half, or odd-numbered items against even-numbered items. If the two halves of the test provide similar results, this suggests that the test has internal reliability. The method can also be used to improve a test: any items on separate halves of the test which correlate poorly with each other should be removed or rewritten.
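A minimal sketch of the procedure in Python, with item scores invented for illustration; the half-test correlation is adjusted with the standard Spearman-Brown correction (not mentioned above, but routinely applied) to estimate the reliability of the full-length test:

```python
import numpy as np

# Rows = respondents, columns = items; scores invented for illustration.
scores = np.array([
    [4, 5, 3, 4, 5, 4],
    [2, 3, 2, 3, 2, 3],
    [5, 4, 5, 5, 4, 4],
    [3, 3, 4, 2, 3, 3],
    [1, 2, 1, 2, 2, 1],
])

# Split into odd- and even-numbered items and total each half.
odd_half = scores[:, 0::2].sum(axis=1)
even_half = scores[:, 1::2].sum(axis=1)

# Pearson correlation between the two half-scores.
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown correction: reliability estimate for the full-length test.
r_full = 2 * r_half / (1 + r_half)
print(f"half-test r = {r_half:.2f}, corrected reliability = {r_full:.2f}")
```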

The split-half method is a quick and easy way to establish reliability. However, it is only effective with large questionnaires in which all questions measure the same construct, meaning it is not appropriate for tests that measure different constructs. For example, the Minnesota Multiphasic Personality Inventory has subscales measuring quite different constructs, such as depression, schizophrenia and social introversion.

The split-half method would therefore not be an appropriate way to assess the reliability of this personality test. The test-retest method assesses the external consistency of a test; appropriate examples include questionnaires and psychometric tests. It measures the stability of a test over time.

A typical assessment would involve giving participants the same test on two separate occasions. If the same or similar results are obtained then external reliability is established.
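As a minimal sketch of that correlation check in Python (scores invented for illustration):

```python
import numpy as np

# The same ten participants tested on two separate occasions (scores invented).
first  = np.array([12, 15, 9, 18, 11, 14, 16, 10, 13, 17])
second = np.array([13, 14, 10, 17, 12, 15, 15, 9, 14, 18])

# Pearson r between the two administrations; values near 1 indicate
# stable scores over time, i.e. high external reliability.
r = np.corrcoef(first, second)[0, 1]
print(f"test-retest r = {r:.2f}")
```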

One disadvantage of the test-retest method is that it takes a long time for results to be obtained. The timing of the second test is also important: if the interval is too brief, participants may recall information from the first test, which could bias the results.

Alternatively, if the interval is too long, it is possible that the participants have changed in some important way, which could also bias the results.

Inter-rater reliability refers to the degree to which different raters give consistent estimates of the same behavior. It can be used for interviews; note that it is also called inter-observer reliability when it refers to observational research. Here, researchers observe the same behavior independently (to avoid bias) and compare their data.
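The text above doesn't name a statistic, but agreement between two raters on categorical judgments is commonly summarized with Cohen's kappa, which corrects raw agreement for chance. A minimal Python sketch, with ratings invented for illustration:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters: k = (po - pe) / (1 - pe)."""
    n = len(rater_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, given each rater's own base rates.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two raters scoring the same ten mock sessions (ratings invented).
a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "pass"]
print(f"kappa = {cohen_kappa(a, b):.2f}")
```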

Assessing Reliability


Reliability is something that every scientist, especially in the social sciences and biology, must be aware of. In everyday language the word has a broad meaning; in science it requires a much narrower and more unequivocal definition. Another way of looking at it is as maximizing the inherent repeatability or consistency of an experiment.


If findings from research are replicated consistently, they are reliable. A correlation coefficient can be used to assess the degree of reliability: if a test is reliable, it should show a high positive correlation between administrations.


In order for research data to be of value and of use, they must be both reliable and valid. Reliability has to do with the quality of measurement: in its everyday sense, it is the "consistency" or "repeatability" of your measures.


By definition, reliability is (1) the quality or state of being reliable, and (2) the extent to which an experiment, test, or measuring procedure yields the same results on repeated trials. Reliability, like validity, is a way of assessing the quality of the measurement procedure used to collect data. In order for the results of a study to be considered valid, the measurement procedure must first be reliable.