Nigerian Medical Journal

Year: 2020  |  Volume: 61  |  Issue: 2  |  Page: 51-54

Common design concepts in randomized controlled trials

Bolaji Emmanuel Egbewale 
 Department of Community Medicine, Ladoke Akintola University of Technology, Ogbomoso, Nigeria

Correspondence Address:
Dr. Bolaji Emmanuel Egbewale
Department of Community Medicine, Faculty of Clinical Sciences, College of Health Sciences, Ladoke Akintola University of Technology, Osogbo


It is well established that Randomized Controlled Trials (RCTs) are the gold standard design in medical investigations, particularly when the aim is to compare medical therapies or the effectiveness of interventions between treatment groups. This design, when carefully followed, provides the highest level of evidence attainable in the measurement of treatment effect. Oftentimes, researchers confuse concepts related to the design of RCTs and thereby jeopardize its benefits. Furthermore, in resource-poor settings, access to educational materials on the design, conduct, and reporting of clinical trials is very limited. This, among other reasons, explains why most studies in such settings are observational in nature and RCTs are not as popular. This review adopted a narrative synthesis approach to aggregate current knowledge scattered across the literature on selected common design concepts in RCTs, so as to elucidate their meaning and demands. Overall, 25 sources, drawn mainly from the PubMed database and including 8 textbooks, were used to examine the following concepts: study population in the RCT setting, primary and secondary outcome measures, single and multicenter trials, pragmatic and explanatory trials, and blinding. Appropriate search terms for each concept were entered into the PubMed database and relevant articles accessed. This review article, intended for educational purposes, could also serve as a guide, especially for new entrants, in the design of RCTs. It is hoped that this educational material will contribute toward maximizing the benefits of this all-important design method.

How to cite this article:
Egbewale BE. Common design concepts in randomized controlled trials.Niger Med J 2020;61:51-54



Indeed, Randomized Controlled Trials (RCTs) offer solutions to some of the issues that have been raised against observational studies; for example, treatment effects identified by observational studies are prone to methodological weaknesses such as selection bias and confounding.[1],[2] However, this assertion is only correct for a well-designed RCT. A clear understanding of design concepts and procedures in a controlled trial is very germane to a successful experiment. Such concepts and procedures are what define and distinguish an RCT from observational studies. A successfully designed RCT requires succinct reporting of how such procedural concepts were carried out at the design stage of the experiment; hence, trial researchers need to be well informed of their demands and requirements. Furthermore, the risk of bias associated with a particular RCT is graded on the basis of such information. This review attempts to aggregate current knowledge scattered across the literature on selected common design concepts in RCTs, so as to elucidate their meaning and demands. It is anticipated that this article will provide helpful hints for educational purposes and also contribute to enhancing the design of future controlled experiments.


A narrative synthesis of selected common concepts in RCTs was adopted in this review. The search for appropriate literature was conducted electronically and, where necessary, manually. Overall, 25 reference materials, including articles and books on the selected concepts, were drawn from different sources: library text materials and, mostly, the PubMed database. Appropriate search terms for each of the concepts to be reviewed were entered into the PubMed database and relevant articles accessed. The following design concepts were considered in this article: study population in the RCT setting, primary and secondary outcome measures, single and multicenter trials, pragmatic and explanatory trials, and blinding.

 Study Population in Randomized Controlled Trial Setting

Patient selection basically hinges on two opposing principles: homogeneity and heterogeneity of the study population. Both have pros and cons. The more homogeneous the population, the narrower the population to which the results apply, and hence the smaller the number of patients needed to detect a given difference; homogeneity therefore favors internal validity. On the other hand, the greater the heterogeneity, the broader the basis for generalizing findings at the end of the study (external validity).[3] In the spirit of a large and simple trial, some authors recommend that eligibility criteria be kept to a minimum.[4],[5] They should not be too restrictive; otherwise, they undermine the external validity of the trial. However, some valid reasons exist for the exclusion of certain participants, for example, a contraindication to the intervention.
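The link between heterogeneity and the number of patients needed can be illustrated with the standard two-sample formula for comparing means; the sketch below and all its numbers are illustrative assumptions, not figures from this article.

```python
from math import ceil

# Illustrative sketch: patients per arm needed to detect the same mean
# difference as the population becomes more heterogeneous (larger sigma).
# Uses the standard normal-approximation two-sample formula.
Z_ALPHA = 1.96    # two-sided significance level of 0.05
Z_BETA = 0.8416   # statistical power of 80%

def n_per_arm(sigma, delta):
    """Patients per arm to detect a mean difference `delta` when the
    outcome has standard deviation `sigma` (normal approximation)."""
    return ceil(2 * (Z_ALPHA + Z_BETA) ** 2 * sigma ** 2 / delta ** 2)

# Same target difference (delta = 5), increasingly heterogeneous populations:
for sigma in (10, 15, 20):
    print(f"sigma={sigma}: {n_per_arm(sigma, delta=5)} patients per arm")
```

Doubling the standard deviation quadruples the required sample size, which is why a homogeneous population allows a smaller trial for the same detectable difference.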

Eligibility criteria for a trial should be clear, specific, and applied before randomization. Trialists should endeavor to avoid exclusion after randomization so as not to disrupt the structure supportive of valid comparison of treatment effect. For the primary analysis, all participants enrolled should be included and analyzed as part of the original group to which they were assigned – intention-to-treat (ITT) analysis. Mishandling of exclusions causes serious methodological difficulties and undermines trial validity.[6] The aim of the trial is to generalize its results to all patients who are like those randomized and treated in the trial. Without a strict set of eligibility criteria, it is more or less impossible to describe the types of patients to which the results of the study can be applied.
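The ITT principle described above can be sketched in a few lines of code; the participant records and outcomes below are invented purely for illustration, and the per-protocol grouping is shown only for contrast.

```python
# Sketch of intention-to-treat (ITT) analysis: participants are analyzed
# in the arm they were randomized to, regardless of whether they actually
# completed or switched treatment. All data below are invented.
participants = [
    # (id, randomized_arm, arm_actually_received, outcome: 1 = success)
    (1, "new_drug", "new_drug", 1),
    (2, "new_drug", "control",  0),   # switched after randomization
    (3, "control",  "control",  1),
    (4, "control",  "new_drug", 0),   # switched after randomization
    (5, "new_drug", "new_drug", 1),
    (6, "control",  "control",  0),
]

def success_rate(group):
    """Proportion of successes among a group of participant records."""
    outcomes = [outcome for *_, outcome in group]
    return sum(outcomes) / len(outcomes)

# ITT: group by the arm assigned at randomization (index 1)
itt_new = [p for p in participants if p[1] == "new_drug"]
itt_ctrl = [p for p in participants if p[1] == "control"]

# Per-protocol, for contrast: group by the arm actually received (index 2)
pp_new = [p for p in participants if p[2] == "new_drug"]
pp_ctrl = [p for p in participants if p[2] == "control"]

print("ITT:", success_rate(itt_new), "vs", success_rate(itt_ctrl))
print("Per-protocol:", success_rate(pp_new), "vs", success_rate(pp_ctrl))
```

The point of the sketch is the grouping rule: the ITT comparison preserves the groups created by randomization, whereas regrouping by treatment actually received reintroduces the self-selection that randomization was meant to remove.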

 Primary and Secondary Outcome Measures

The choice of appropriate outcome measures, which can be primary or secondary, has been identified as one of the most challenging activities in the design of a randomized controlled trial.[7] A primary endpoint is used to address the primary objective of the clinical trial, whereas a secondary endpoint addresses a secondary objective of a study. It is necessary that a clear definition of the two be stated. The choice of the most appropriate outcome measures has implications for the cost of the trial, the sample size, the burden that the trial will place on patients and clinicians taking part, and the likelihood that the result of the trial will influence clinical practice; therefore, whichever outcome is chosen, it is important that it has been properly validated in a representative sample of patients for the disease under study.[8]

Information about the effect of a treatment is often gathered for many variables; thus, there is a temptation to analyze each variable and look to see which differences are significant between groups. It should be noted that such an approach leads to misleading results. Presenting only the most significant results as if they were the only analyses performed has been described as fraudulent.[9] Thus, it has been suggested that the best practice is to decide, in advance of analysis, on the main outcome variable of interest for a particular trial; data may be analyzed for other variables, but these should be considered of secondary importance. The results of such secondary outcome variables should be interpreted cautiously and seen as ideas for further research rather than as definitive results. The major reason for this is that the study might not have been powered to detect differences in these secondary variables. It is also important to note that even when there is more than one major or primary outcome variable, the sample size calculation is usually based on only one variable.[9]
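Why testing many outcomes is misleading can be shown with a quick calculation; the significance level and numbers of outcomes below are assumed for illustration, and the calculation treats the tests as independent.

```python
# Illustration of multiplicity: the chance of at least one spurious
# "significant" result grows quickly with the number of outcomes tested.
alpha = 0.05  # per-test significance level (assumed)

for k in (1, 5, 10, 20):
    # Family-wise error rate for k independent tests at level alpha
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:>2} outcomes tested -> chance of a false positive = {fwer:.2f}")
```

With 20 outcomes tested at the 5% level, the chance of at least one false-positive finding is roughly 64%, which is why a single prespecified primary outcome is recommended.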

 Single and Multi-Center Trials

“Centre,” in a clinical trial sense, refers to an autonomous unit that is involved in the collection, determination, classification, assessment, or analysis of data, or that provides logistical support for the trial.[10] For a trial to be multicenter, it must consist of two or more centers and must involve a common treatment and data collection protocol, with each center receiving and processing study data. Centers are treated as a stratifying variable in a multicenter trial, and as such, patients need to be randomized independently within each center unless there is a central coordinated randomizing service.[10]
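Independent randomization within each center can be sketched as follows; the permuted-block scheme, block size, and center names are illustrative assumptions, not details from the article.

```python
import random

# Sketch of stratified randomization: each center runs its own
# independent allocation stream, here using permuted blocks of size 4
# so that the arms stay balanced within every center.

def permuted_block(arms=("A", "B"), block_size=4, rng=random):
    """One block containing equal numbers of each arm, in random order."""
    block = list(arms) * (block_size // len(arms))
    rng.shuffle(block)
    return block

def center_allocator(rng=random):
    """Yield successive treatment allocations for one center."""
    while True:
        yield from permuted_block(rng=rng)

# Each center (stratum) gets its own independent stream of allocations
centers = {name: center_allocator() for name in ("center_1", "center_2")}

# Allocate three patients arriving at center_1 and two at center_2
for center, n in (("center_1", 3), ("center_2", 2)):
    for patient in range(1, n + 1):
        arm = next(centers[center])
        print(f"{center} patient {patient} -> arm {arm}")
```

Because every block contains equal numbers of each arm, no center can end up with a badly unbalanced allocation, which is what treating center as a stratifying variable is meant to achieve.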

A multicenter study, unlike a single-center study, allows a large number of patients to be recruited in a shorter time, as recruitment can take place in each of the centers at the same time. The results are more generalizable, since the scope of recruitment is generally wider than in a single-center trial and the participants are likely to be more diverse in their attributes. Multicenter trials are critical in trials involving patients with rare presentations or diseases,[7],[11] and such large trials are usually needed when investigating rare conditions. However, when the number of centers is too large, multicenter trials can pose administrative and logistic challenges. In contrast, a single-center trial demonstrates homogeneity of the study population, since patients enrolled for the trial usually come from the same area.[10] It has been noted that the analysis of data collected in multicenter trials poses challenges, because the data from the individual centers must be combined in some way to give an overall evaluation of the differences between the treatments in the trial.[12] However, the practice of pooling all the data and ignoring the centers is not theoretically sound and should be avoided. Ideally, the center variable ought to be accounted for in the analysis and treated as a stratifying variable.

 Pragmatic and Explanatory Trials

Previous authors[13] reported that Schwartz and Lellouch were the earliest authors to publish on the differences between explanatory and pragmatic RCTs. Explanatory RCTs are intended to assess the underlying effect of a therapy under optimal conditions, whereas pragmatic trials are intended to assess the effectiveness to be expected in normal medical practice.[14] In normal medical practice, patients sometimes fail to comply with treatment prescriptions in one way or another: they default, take other treatments that were not prescribed, or do not take the treatment when it is due. Explanatory RCTs are usually associated with treatment efficacy in drug trials and are limited in the generalizability of their results. This is because of the tight inclusion criteria inherent in this trial method, which place artificial constraints on participation that limit the applicability of the findings.

It has, however, been noted that while this is a particular concern for efficacy (explanatory) studies of drugs, it is likely to be less of a problem in quality improvement evaluations, which tend to take the form of pragmatic trials that allow for what obtains in normal medical practice, for example, switching of treatment by the patients. Furthermore, efficacy studies assess differences in effect between two or more conditions under ideal, highly controlled conditions, while effectiveness studies assess differences in effect between two or more conditions when used in normal, real-world clinical circumstances.[13]

As has been observed, a particular treatment approach might be shown to be efficacious yet prove not to be clinically effective.[15] It has been argued that since pragmatic studies aim to test whether an intervention is likely to be effective in routine practice by comparing the new procedure against the current regimen, they are the most useful trial design for developing policy.[16] While the explanatory approach recruits homogeneous populations and aims primarily to further scientific knowledge of, for example, underlying pharmacological effects, a pragmatic trial reflects the variation between patients that occurs in real clinical practice and aims to inform choices between treatments. Pragmatic trials are normally conducted on patients who represent the full spectrum of the population to which the treatment might apply. These patients may vary in compliance, have a number of comorbid conditions, and use other medications.[17],[18] Another important point of difference between the two types of trial, as has been observed,[19] is the use of placebo in explanatory trials. Pragmatic trials would not compare a placebo with an active treatment, since a placebo is never administered in real-life clinical practice; instead, an existing treatment is compared with a new intervention.

Various authors have argued that in a pragmatic trial it is neither necessary nor always desirable for all patients to complete the trial in the group to which they were allocated, so as to have a good representation of the population to which the treatment may apply. However, patients are always analyzed in the group to which they were initially randomized, even if they drop out of the study; application of ITT analysis is considered synonymous with the pragmatic approach.[14],[17] Pragmatic trials are well suited to situations where blinding is difficult or impossible.[15] Roland and Torgerson claim, somewhat controversially, that in pragmatic trials the biases of both clinicians and therapists can be accepted. This, they argue, reflects a normal clinical environment, where the expectations of the patient and the therapist may influence the size of the treatment effect. Even in this circumstance, previous authors[20],[21] have warned that concealment of randomization is still important, as is blinding the assessor of outcomes, to minimize the risk of selection, information, or measurement bias by the researchers. In conclusion, it has been observed that if a pragmatic study shows an intervention to have a beneficial effect, then it has been shown not only that the intervention can work, but also that it does work in real life.[18]


Blinding, in relation to the design method, is a procedure by which groups of individuals involved in a trial are kept unaware of the treatment to which participants are assigned. These groups may include some or all of the participants, trial investigators or assessors, and data analysts. In some situations, the term masking is preferred to blinding to describe the same procedure. For example, masking might be more appropriate in trials that involve participants who have impaired vision, and could be less confusing in trials in which blindness is an outcome.[22] In a trial, knowledge of treatment allocation can introduce subjective bias in both the patient and the investigator. This can influence the reporting, evaluation, and management of data, and the statistical analysis of treatment effect can also be affected.[23],[24] Knowledge of treatment allocation can also affect compliance and the retention of trial participants.

There are some instances in which it may be relatively difficult to achieve blinding. For example, if the new intervention under consideration is a surgical procedure and it is being compared with chemotherapy delivered by tablet, the difference between the two is obvious, and the trial has to be carried out unblinded as far as patients and caregivers are concerned. Such studies are known as open or unblinded. Open or unblinded studies have the advantage of being simple, relatively inexpensive, and a true reflection of clinical practice. Single-blind usually means that one of the three groups of individuals mentioned above remains unaware. In a double-blind trial, participants, investigators, and assessors usually all remain unaware of the intervention assignments throughout the trial.[22] Here, three groups are kept ignorant, so the term double-blind is something of a misnomer; note, though, that in medical research the investigator frequently also acts as the assessor, in which case there are in fact only two groups. Triple-blind usually refers to a double-blind trial that also maintains a blinded data analyst.[23] The standard guideline for reporting RCTs – the Consolidated Standards of Reporting Trials (CONSORT) – requires that investigators not merely use the terms single-blind, double-blind, or triple-blind; authors must state who was blinded and how, and provide information on how the procedure was carried out.[6]


It is important that trial researchers have a clear understanding of the common concepts associated with a successful RCT design. With this understanding, they can assess whether their intellectual and material capacities meet the requirements for the successful design and conduct of controlled trials. This will certainly minimize the rate of incorrectly designed RCTs in future research endeavors. Strict adherence to these common principles would guarantee a successfully designed experiment.


The information provided in this paper represents part of the literature review in my PhD thesis. I would like to acknowledge my PhD supervisors, Professor Julius Sim and Dr. Martyn Lewis of Keele University, United Kingdom, for their support.

Financial support and sponsorship

This review is a subsection of the literature review chapter of my PhD project, which was funded by a research grant from Keele University, United Kingdom.

Conflicts of interest

There are no conflicts of interest.


1Egbewale BE. Statistical issues in controlled clinical trials: A narrative synthesis. Asian Pac J Trop Biomed 2015;5:354-9.
2Kang M, Ragan BG, Park JH. Issues in outcomes research: An overview of randomization techniques for clinical trials. J Athl Train 2008;43:215-21.
3Curtis LM. Clinical Trials Design, Conduct and Analysis. New York: Oxford University Press; 1986.
4Peto R, Pike MC, Armitage P, Breslow NE, Cox DR, Howard SV, et al. Design and analysis of randomized clinical trials requiring prolonged observation of each patient. I. Introduction and design. Br J Cancer 1976;34:585-612.
5Peto R, Pike MC, Armitage P, Breslow NE, Cox DR, Howard SV, et al. Design and analysis of randomized clinical trials requiring prolonged observation of each patient. II. Analysis and examples. Br J Cancer 1977;35:1-39.
6Schulz KF, Grimes DA, Altman DG, Hayes RJ. Blinding and exclusions after allocation in randomised controlled trials: Survey of published parallel group trials in obstetrics and gynaecology. BMJ 1996;312:742-4.
7Wang D, Bakhai A. Clinical Trials: A Practical Guide to Design, Analysis and Reporting. London: Remedica Medical Education and Publishing; 2006.
8Rothwell PM. Responsiveness of outcome measures in randomised controlled trials in neurology. J Neurol Neurosurg Psychiatry 2000;68:274-5.
9Altman DG. Practical Statistics for Medical Research. London: Chapman and Hall; 1991.
10Meinert C. Clinical Trials: Design, Conduct, and Analysis. New York: Oxford University Press; 1986.
11Hedman C, Andersen AR, Olesen J. Multi-centre versus single-centre trials in migraine. Neuroepidemiology 1987;6:190-7.
12Fedorov V, Jones B. The design of multicentre trials. Stat Methods Med Res 2005;14:205-48.
13Alford L. On differences between explanatory and pragmatic clinical trials. N Z J Physiother 2006;35:12-6.
14Cook TH, DeMets DL. Introduction to Statistical Methods for Clinical Trials. Madison: Chapman and Hall/CRC; 2008.
15Helms PJ. Real world pragmatic clinical trials: What are they and what do they tell us? Pediatr Allergy Immunol 2002;13:4-9.
16Eccles M, Grimshaw J, Campbell M, Ramsay C. Research designs for studies evaluating the effectiveness of change and improvement strategies. Qual Saf Health Care 2003;12:47-52.
17Roland M, Torgerson DJ. What are pragmatic trials? BMJ 1998;316:285.
18Godwin M, Ruhland L, Casson I, MacDonald S, Delva D, Birtwhistle R, et al. Pragmatic controlled clinical trials in primary care: The struggle between external and internal validity. BMC Med Res Methodol 2003;3:28.
19Macpherson H. Pragmatic clinical trials. Complement Ther Med 2004;12:136-40.
20Hotopf M. The pragmatic randomized controlled trial. Adv Psychiatr Treat 2002;8:326-33.
21Herbert R, Jamtvedt G, Mead J, Hagen KB. Practical Evidence-based Physiotherapy. London: Butterworth Heinemann; 2005.
22Schulz KF, Chalmers I, Altman DG. The landscape and lexicon of blinding in randomized trials. Ann Intern Med 2002;136:254-9.
23Pocock SJ. Clinical Trials: A Practical Approach. New York: John Wiley; 1983.
24Chow SC, Liu JP. Design and Analysis of Clinical Trials: Concepts and Methodologies. New York: John Wiley; 1998.