Open Door Review

Finally, the stability of symptomatic change over the follow-up period may be an issue of concern in its own right. Monitoring of individual patients suggests that a proportion will change their symptom status more than once (e.g. Brown & Kulik, 1977; Shapiro et al., 1995). Reporting of group averages tends to obscure this variability, leading to an over-estimation of longer-term outcomes in clinical practice.

Attrition

All clinical trials will lose patients at various points in treatment; the point at which they are lost will have differing impacts on validity. Early loss from a trial may disrupt the randomisation of treatment, threatening internal validity. Even where there is no differential attrition from treatments, significant attrition could lead to results being applicable only to a sub-group of persistent patients, threatening external validity. Alternatively, attrition rates across treatment conditions may not be random, and may reflect the acceptability of therapies, suggesting that attrition may be an important variable in its own right. Significant levels of attrition will restrict the conclusions that can be drawn from a study, and complicate the reporting of results. A number of statistical solutions to this problem are available to researchers which utilise the last available data-point to estimate the likely bias introduced by the loss of patients (e.g. Flick, 1988; Little & Rubin, 1987). Alternatively, data can be reported on the basis of an "intention-to-treat" sample, including all subjects entered into the trial, as well as presenting separate data for those completing all or a specified length of therapy (e.g. Elkin et al., 1989).

Meta-analysis

In the past 15-20 years, techniques have been developed to enable quantitative review of psychotherapy studies.
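The last-observation-carried-forward approach to attrition described above (substituting each patient's last available data-point for missing later assessments) can be sketched as follows. This is a minimal illustration only; the function name and the score values are invented for the example, not drawn from any of the cited studies:

```python
from typing import Optional

def carry_forward_last_observation(scores: list[Optional[float]]) -> list[Optional[float]]:
    """Fill missing assessments with the patient's last available score.

    `scores` holds one symptom score per assessment point, with None
    marking visits the patient missed after dropping out of the trial.
    """
    filled: list[Optional[float]] = []
    last_seen: Optional[float] = None
    for score in scores:
        if score is not None:
            last_seen = score
        filled.append(last_seen)
    return filled

# A patient assessed three times who then drops out of the trial:
print(carry_forward_last_observation([21.0, 15.0, 12.0, None, None]))
# → [21.0, 15.0, 12.0, 12.0, 12.0]
```

An "intention-to-treat" analysis would then compute group outcomes over every randomised patient's filled-in series, rather than over completers only.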
Meta-analysis is a procedure which enables data from separate studies to be considered collectively through the calculation of an effect size from each investigation (Rosenthal, 1991). Effect sizes are calculated according to the formula:

ES = (M1 - M2) / SD

where M1 = the mean of the treatment group, M2 = the mean of the control group, and SD = the pooled standard deviation. The terms M1 and M2 can stand for the means of any two groups of interest, such as psychotherapy contrasted against a waiting-list control, or equally the comparison of two forms of psychotherapy. Because this technique converts outcome measures to a common metric, individual effect sizes can be pooled. In addition to examining the contribution of main effects such as therapy modality, effect sizes for any variable of interest can be calculated, such as the impact of methodological quality or investigator allegiance on reported outcomes (e.g. Robinson, Berman, & Neimeyer, 1990; Smith, Glass, & Miller, 1980). Effect sizes refer to group differences in standard deviation units on the normal distribution. Their intuitive meaning is made clearer by translating them into percentiles, indicating the degree to which the average treated client is better off than control patients. Thus an effect size of 1.0 corresponds to a result where 84% of the treated group are better off than the average control patient.
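The effect-size formula and its translation into percentiles can be worked through numerically. The sketch below uses the standard normal cumulative distribution (via the error function) for the percentile conversion; the means and standard deviation are illustrative values, not figures from any cited study:

```python
import math

def effect_size(mean_treated: float, mean_control: float, sd_pooled: float) -> float:
    """ES = (M1 - M2) / SD: the group difference in pooled-SD units."""
    return (mean_treated - mean_control) / sd_pooled

def proportion_better_off(es: float) -> float:
    """Share of the treated group scoring above the average control
    patient, assuming normally distributed outcomes (standard normal CDF)."""
    return 0.5 * (1.0 + math.erf(es / math.sqrt(2.0)))

# Illustrative figures: treatment mean 30, control mean 20, pooled SD 10.
es = effect_size(mean_treated=30.0, mean_control=20.0, sd_pooled=10.0)
print(es)                                    # → 1.0
print(round(proportion_better_off(es), 2))   # → 0.84
```

This reproduces the figure quoted in the text: an effect size of 1.0 places 84% of the treated group above the average control patient.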


International Psychoanalytical Association
