Were the outcomes measured accurately?
• Researchers used a valid technique to measure the dependent variable. Validity is the ability of a technique to measure what it is intended to measure. Researchers may choose a gold standard measurement (known to have the highest level of validity) or may show how closely their measurement agrees with a criterion measurement. For example, the gold standard for measuring percentage of body fat is dual energy x-ray absorptiometry scanning. Because of its cost and limited availability, a researcher may instead use skinfold measurements taken by trained personnel and report their level of agreement with dual energy x-ray absorptiometry.
• The way in which the measurements were taken must also be consistent. Reliability is the ability to repeat a measurement accurately over and over so that any changes in the data are due to the intervention. There are several potential threats to reliability in a study. There could be changes to the instrument or to how the instrument is used. For instance, the evaluator making the pretest measurement may not follow the same protocol at the posttest, which shows low intratester reliability. If multiple evaluators are used to measure range of motion, differences in how each performs the measurement cause poor intertester reliability. Researchers should analyze and report the reliability of their measurements. There are several statistics for reliability, but, generally, a coefficient of at least .70 is considered acceptable.
• The way in which the tests were set up could also influence their reliability. Participants may become fatigued or bored and not perform their best on physical or mental tasks. When a variable is measured multiple times, participants may experience a learning effect. For example, while performing an agility task, participants may become more proficient through practice with the test. Also, the fact that patients are being observed and monitored closely may change how they respond to treatment. Lastly, measurements taken in experimental settings can produce different results from those taken in a natural setting. For example, if a balance test is administered individually to soccer athletes in a quiet laboratory, their levels of focus, and therefore their scores, may differ from their performance in on-field tests. In this way, highly controlled laboratory studies may not translate to real-world situations.
• Other common threats to reliability include the following:
Hawthorne effect: Subjects respond to the attention given by the researchers rather than to the actual treatments.
John Henry effect: The control group becomes aware of its secondary status and tries to outperform the experimental group.
Placebo effect: Participants improve based on the belief that the treatment will work, even if given a sham treatment.
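The coefficient-of-.70 guideline above can be illustrated with a short sketch. The measurements below are invented for illustration, and a simple Pearson correlation stands in as the reliability statistic; published studies more often report an intraclass correlation coefficient, but the interpretation against a .70 threshold is the same.

```python
import numpy as np

# Hypothetical range-of-motion measurements (degrees) from two evaluators
# measuring the same eight participants; values are illustrative only.
rater_a = np.array([112, 98, 105, 120, 101, 95, 110, 108])
rater_b = np.array([110, 100, 103, 122, 99, 96, 112, 107])

# Pearson correlation used here as a simple intertester reliability coefficient
r = np.corrcoef(rater_a, rater_b)[0, 1]

print(f"reliability coefficient r = {r:.2f}")
print("acceptable" if r >= 0.70 else "questionable")
```

With closely agreeing raters, as here, the coefficient lands well above .70; systematic disagreement between raters would pull it down and flag poor intertester reliability.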
5. Could the results have been due to chance, error, or a confounding variable?
• In the methods and results sections, the researcher should explain how a positive outcome was determined. The P-value threshold most commonly used to determine statistical significance is .05. Higher P-values mean a greater possibility that the results are due to chance. However, statistical significance does not imply clinical significance. The author should also report measures such as confidence intervals and effect sizes to estimate how useful the treatment is likely to be.
• Did the researcher consider all possible explanations for the result besides the treatment? Possibly, some error or a confounding variable influenced the outcome. These limitations should be addressed in the discussion section. Readers should also consider problems the researcher did not by thinking through the sample, type of study, measurements, and treatment protocol. The quality of published research varies widely, and it is important to be critical of anything that seems too good to be true.
• An example of a confounding variable is found in the following scenario. A researcher gave the experimental group a specific weight loss diet to follow and simultaneously asked the control group to eat as they normally would. The amount of physical activity performed by the participants was not controlled or monitored in either group. The experimental group, as expected, lost more weight, but because there was no way to sort out the influence of physical activity, it was difficult to trust that the difference in outcomes was due solely to the diet.
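The distinction between a P-value and an effect size can be sketched with hypothetical numbers. The weight-change data below are invented for illustration, and SciPy's independent-samples t-test stands in for whatever analysis a given study actually uses; Cohen's d is computed by hand as one common effect-size measure.

```python
import numpy as np
from scipy import stats

# Hypothetical weight change (kg) after a diet intervention; illustrative only.
diet_group = np.array([-4.1, -3.5, -5.0, -2.8, -4.6, -3.9, -4.4, -3.2])
control_group = np.array([-1.0, -0.4, -1.8, 0.2, -1.2, -0.7, -1.5, -0.9])

# Independent-samples t-test: is the difference likely due to chance?
t_stat, p_value = stats.ttest_ind(diet_group, control_group)

# Cohen's d with a pooled standard deviation: how large is the difference?
pooled_sd = np.sqrt((diet_group.var(ddof=1) + control_group.var(ddof=1)) / 2)
cohens_d = (diet_group.mean() - control_group.mean()) / pooled_sd

print(f"p = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")
```

A P-value below .05 says only that chance is an unlikely explanation; the effect size (and a confidence interval around it) is what indicates whether the difference is large enough to matter clinically. Neither statistic, of course, can rule out a confounder like the uncontrolled physical activity in the scenario above.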