The Nordic countries have unique prerequisites for undertaking cutting-edge surgical research, but this opportunity is underutilized. A limited set of strategic initiatives might improve the situation. Leaders in surgical research need a deep understanding of epidemiologic theory and should master a broad repertoire of study designs. Resources – human, financial and other – should focus on large, original studies with a high likelihood of improving clinical practice.

Clinicians often think of epidemiology as distinct from clinical research. As a consequence, epidemiologic methods have been taught chiefly in epidemiology departments and at schools of public health. Many of these institutions, however, have become too isolated from the practice of medicine and the conduct of clinical research. And both camps – epidemiologic and clinical research – have suffered from this mutual isolation. Epidemiology would be fertilized by close interaction with clinical medicine, while offering clinical researchers a powerful toolbox derived from advanced methodologic developments. Epidemiologic principles and methods are not only integral to public health but also highly relevant to clinical research. Even today, this fundamental fact is not adequately appreciated by many clinical researchers.

A SURGEON’S JOURNEY TO EPIDEMIOLOGY

Could epidemiologic methods and clinical epidemiology indeed revolutionize clinical research? Would methodologic rigor, adequate sample size, and skilled statistical analyses allow more rapid progress and quicker implementation of important discoveries? This bold and perhaps naïve idea came to my mind some 40 years ago, and my initial hunch that the answer is yes has only strengthened since. Still a practicing surgeon at the time, my own research forced and encouraged some familiarity with the fundamental principles of epidemiology. And this familiarity truly changed my perspective on my professional performance in the operating room, on the clinical ward, in outpatient departments and emergency units, and in the classroom where I lectured to medical students.

Foremost, my slowly growing familiarity with epidemiologic methodology helped me understand the fundamental prerequisites for causal inference – after all, a successful treatment is little more than a cause of a good outcome. This insight made me increasingly uncertain about the real benefit of our therapeutic interventions and the performance of our diagnostic technologies. This was a time when hip replacement, coronary bypass surgery, breast-conserving surgery, laparoscopic cholecystectomy, kidney transplantation, vascular reconstruction, radical prostatectomy, and extracorporeal shock wave lithotripsy (ESWL) for kidney stones (just to mention a few examples) transformed our work in the operating room – often without support from randomized trials showing that these new technologies added substantial benefit. At the same time, endoscopic examinations, computerized tomography, ultrasound, PET scans, and subsequently magnetic resonance imaging revolutionized our ability to visualize organs and assess bodily functions. Today, the flow of novel therapeutic and diagnostic techniques is even more intense.

As a surgeon I navigated through these years with two competing feelings. One was a growing frustration with how haphazardly clinical methods were used and combined; novel surgical procedures – unlike new drugs, whose approval is strictly regulated – could be introduced overnight, often with no strategy to quantify risks and benefits. As a corollary, decisions influencing the life and health of our patients were based on little, if any, scientific evidence. But another feeling grew too – a fascination with epidemiologic theory and methodology as directly relevant to advancing the evidence base for clinical practice. After 17 years I left the operating room peacefully, permanently, and with no subsequent regret to become a full-time epidemiologist.

Persuading clinicians that methods of extraordinary relevance for their research are readily available in the epidemiologic toolbox can be challenging. Trickier still, 40 years ago, was to find texts that could help a surgical researcher see the light and the opportunities. Today, an abundance of books deals with the theory of epidemiologic research, although fewer accessible texts address methods in clinical research specifically. As a start, I would strongly recommend that anyone who plans to undertake surgical research familiarize themselves with at least one of the following books: “Epidemiology – An Introduction” by Kenneth J. Rothman and co-authors (1), or “Clinical Epidemiology: Principles, Methods, and Applications for Clinical Research” by Diederick E. Grobbee and Arno W. Hoes (2).

I wish that such texts had been available to me 40 years ago. I congratulate all those younger colleagues who now receive a firm and stable helping hand in their necessary endeavor to study a wide variety of clinical phenomena. And I hope that these books will also be read by the growing number of clinicians who need to understand the sophisticated methods used in clinical research.

TWO FUNDAMENTAL CHALLENGES IN SURGICAL RESEARCH

Although it is beyond the scope of this short appetizer, I can only encourage those contemplating cutting-edge research to dive into the literature to broaden their methodologic repertoire and deepen their understanding of the underlying theory. The reward will be lifelong in terms of improved research quality, greater flexibility in choosing the study design best aligned with your research hypotheses, and a deeper understanding of the best published research. For the purposes of this short introduction, I will focus on two fundamental areas of surgical research.

Performance of diagnostic tests
Unlike most surgical research, which assesses outcomes with and without a therapeutic intervention, diagnostic research is cross-sectional: there is no time dimension, and confounding is of no relevance. As we all know, the diagnosis of a disease is rarely based on a single test. Instead, the predominant sequence is history taking and clinical examination, followed by laboratory and imaging tests. We also know that novel, technically sophisticated, and often expensive diagnostic technologies are constantly added to our armamentarium. Hence, the scientific challenge is rarely to assess the properties of one diagnostic test in isolation, but rather the added value of one particular examination. To allow cost-effective use of resources, the optimal order in which diagnostic technologies are used should also be investigated. This increasingly important area of research, hitherto largely neglected, should offer interesting opportunities in all surgical disciplines.

The outcome of diagnostic research is surprisingly simple because sensitivity and specificity capture all essential information about test performance. Positive and negative predictive values may add utility, with one caveat: these measures depend heavily on the disease prevalence among the patients under investigation, which limits their generalizability. When diagnostic test results are continuous rather than dichotomous (a common situation), receiver operating characteristic (ROC) curves can be constructed. Such curves allow more nuanced interpretation based on different combinations of sensitivity and specificity. But they also complicate the task of answering unequivocally whether a diagnostic test provides meaningful added value beyond what has already been achieved.
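
To make the prevalence caveat concrete, here is a minimal sketch (mine, not from any cited study) that applies Bayes’ rule to a hypothetical test with 90% sensitivity and 95% specificity; all numbers are illustrative assumptions.

```python
# Minimal sketch: predictive values, unlike sensitivity and specificity,
# shift with disease prevalence. All values are hypothetical.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Compute PPV and NPV from test properties and prevalence (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    return true_pos / (true_pos + false_pos), true_neg / (true_neg + false_neg)

# The same hypothetical test in a specialist clinic vs. a screening setting:
for prevalence in (0.30, 0.01):
    ppv, npv = predictive_values(0.90, 0.95, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
# prevalence 30%: PPV 88.5%, NPV 95.7%
# prevalence 1%:  PPV 15.4%, NPV 99.9%
```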

Investigating outcomes in surgical research
The central question in the research we are concerned with is whether surgical interventions improve patient outcomes, and whether the benefits outweigh the harms. The relevant outcomes range from postoperative complications and reoperations to recurrence, quality of life, and death. But they all have one fundamental feature in common: person-time. To illustrate, we could ask how many disease-related deaths would be expected if we followed a million operated patients for zero seconds. Conversely, how many deaths would be expected if we followed zero patients for one million years? The answer in both instances is, of course, zero. Neither people nor time alone provides adequate information about the outcome among patients; both must be taken into account. Person-time is the sum of all the time contributed to a study by subjects at risk of the outcome. As a corollary, the fundamental challenge in outcome research is to harvest causal information from a defined set of person-time.
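
As a minimal numerical sketch (with made-up follow-up data, not from any study), the incidence rate is simply the number of events divided by the accumulated person-time:

```python
# Minimal sketch: the incidence rate is events divided by person-time.
# The follow-up data below are made up for illustration.

# (years_followed, had_event) for five hypothetical operated patients
follow_up = [(5.0, False), (2.5, True), (8.0, False), (1.0, True), (3.5, False)]

person_years = sum(years for years, _ in follow_up)         # 20.0 person-years
events = sum(1 for _, had_event in follow_up if had_event)  # 2 events

print(f"{events} events / {person_years} person-years "
      f"= {events / person_years:.2f} per person-year")

# Zero patients or zero follow-up time both yield zero person-time -
# and hence no information - mirroring the thought experiment above.
```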

The randomized controlled trial (RCT) remains the gold standard for maximizing the prerequisites for causal inference in outcome research. In such a study of adequate size, the randomized groups are balanced with regard to all extraneous factors that affect the outcome; the only difference is the intervention under study. Occasionally, clinically useful information can be generated through observational studies of patient cohorts (3). But even seemingly straightforward issues, such as the interpretation of trends in cancer patients’ survival, may be surprisingly complex and uncertain (4). Nevertheless, many – if not most – burning, unanswered clinical questions and related research hypotheses cannot be tested in RCTs because the logistic, ethical, and financial obstacles are insurmountable. A surgical researcher who masters only the theory of RCTs would therefore be severely constrained.

WHY NOT ONLY RANDOMIZED CONTROLLED TRIALS?

So far, my message has been that access to a broad methodologic repertoire is a sine qua non for harvesting causal information from a defined set of person-time. To illustrate my point, let us work through an example. In inflammatory bowel disease (IBD), sulfasalazine has remained first-line therapy for more than half a century. Because chronic inflammation might be a driver of malignant transformation and sulfasalazine has anti-inflammatory properties, the drug might reduce the excess risk of colorectal cancer in patients with IBD. The burning question is then how this plausible, clinically relevant, but never tested hypothesis can be investigated.

Consider first an RCT. Many thousands of patients, preferably with newly diagnosed IBD, would need to be randomized to receive or be denied sulfasalazine; compliance with the randomized assignment would have to be monitored, and the incidence of colorectal cancer ascertained during several decades of follow-up. Such a trial would be unethical, prohibitively expensive, and infeasible. Would a cohort study be an alternative? Because the exposure of interest (treatment with sulfasalazine) is beyond the investigator’s control, information on treatment and numerous potential confounders must be prospectively recorded or abstracted from medical records. But only a small minority – definitely less than 5% – of patients with IBD will develop colorectal cancer, and to achieve adequate statistical power the cohort must encompass many thousands of patients. The cohort study design is therefore highly inefficient and, in the end, not feasible.

One alternative remains, namely a case-control study nested in a large cohort of patients with IBD. This design preserves the validity of the underlying cohort study and provides largely similar statistical power, but reduces the workload by almost two orders of magnitude. The cases are all patients with IBD diagnosed with colorectal cancer. Information about their use of sulfasalazine and relevant confounders is abstracted from medical records and compared with the same information in a properly selected sample of patients with IBD and no colorectal cancer. Based on this information, we can readily quantify any reduction in cancer risk attributable to treatment with sulfasalazine (5).
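
To sketch the analysis such a design permits, the snippet below computes an exposure odds ratio from a 2x2 table; the counts are entirely hypothetical and are not taken from reference 5.

```python
# Minimal sketch of a nested case-control analysis with hypothetical
# counts. Exposure is sulfasalazine use; the odds ratio estimates the
# relative risk of colorectal cancer in treated vs. untreated patients.
import math

exposed_cases, unexposed_cases = 40, 60          # all cases in the cohort
exposed_controls, unexposed_controls = 120, 80   # sampled cancer-free patients

odds_ratio = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Approximate 95% confidence interval (Woolf's method, on the log scale)
se = math.sqrt(1/exposed_cases + 1/unexposed_cases
               + 1/exposed_controls + 1/unexposed_controls)
lower, upper = (math.exp(math.log(odds_ratio) + z * se) for z in (-1.96, 1.96))

print(f"OR = {odds_ratio:.2f} (95% CI {lower:.2f}-{upper:.2f})")
# OR = 0.44 (95% CI 0.27-0.73): consistent with a protective effect.
```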

THE RESEARCH PROCESS

A cookbook prescription of the research process is beyond the scope of this short essay, but key elements are summarized in Box 1. In my experience, the first four components are often treated too haphazardly. Defining an original, clinically relevant, and crystal-clear hypothesis is often a protracted process – and should be allowed to remain so. Likewise, discussion about study design deserves unlimited time and effort (6). Do not be surprised, let alone frustrated, if this initial process lasts for one or a few years. Decisions reflected and justified in the protocol might influence your scholarly work for many years, and you will never regret such an investment.

BOX 1: THE RESEARCH PROCESS

  1. Define precisely a testable hypothesis.
  2. Develop in great detail a study design optimally aligned with the study hypothesis.
  3. Define sample size based on realistic statistical power calculations.
  4. Write and rewrite a detailed study protocol ultimately approved by all collaborators.
  5. Apply for approval from all relevant ethics committees.
  6. Obtain adequate funding to pursue the study without compromising validity or statistical power/sample size.
  7. Design optimal machinery for complete and timely enrollment of eligible patients.
  8. Whenever appropriate, appoint an external advisory committee.
  9. Devote continuous effort to foster perseverance, patience, and collaborative spirit.
  10. Develop and pursue a statistical analysis plan.
  11. Embrace unexpected findings.
  12. Sort out authorship issues transparently and early; appoint first and senior author(s).
  13. Design an inclusive and efficient mechanism for manuscript writing. And devote unlimited efforts to achieve linguistic quality and clarity.
  14. Format your manuscript pedantically to accommodate all journal instructions. Submit your manuscript and keep your fingers crossed.

Regardless of how valid the design is, a study likely becomes uninformative if its statistical power is inadequate. Far too many clinical studies are hopelessly underpowered. Further, enrollment into RCTs typically takes at least twice as long as predicted. Therefore, perseverance and close interaction with collaborators are profoundly important throughout the process of patient enrollment.
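
To give a feel for the arithmetic, here is a minimal sketch of a sample-size calculation for comparing two proportions; the baseline risk, effect size, and error rates are all hypothetical assumptions, not recommendations.

```python
# Minimal sketch: approximate sample size per arm for comparing two
# proportions (normal approximation). All input values are hypothetical.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Patients needed per arm to detect p1 vs. p2 (two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a drop in complication risk from 20% to 15% already requires
# roughly 900 patients per arm - more than many surgical studies enroll:
print(n_per_group(0.20, 0.15))  # -> 906
```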

Finally, access to solid biostatistical support remains an Achilles’ heel in many Nordic academic settings. Such expertise is nevertheless a must. In cutting-edge clinical research, advanced, sophisticated biostatistical analyses are indeed becoming common practice. As a first step, develop and adhere to a solid statistical analysis plan. And refrain from endless exploratory analyses that unavoidably generate significant findings through the play of chance. The majority of published research results are never confirmed by independent investigators, and an estimated 85% of research resources are wasted (7, 8). Hence, it should be our overarching responsibility as surgical researchers not to burden the scientific community with even more noise.

A FEW HUMBLE RECOMMENDATIONS

Prioritize
As a researcher, you work in a context that is brutally competitive, ever-expanding, and global. Unless you get published, read, understood, believed, remembered, and cited, you have likely worked in vain. Sadly, only a small minority of all published work achieves these fundamental goals. Merely expanding your list of publications is, after all, a trivial and transient reward. Indeed, eons of time and Herculean efforts would be saved if we eliminated research that does not expand the realm of human knowledge, is never cited, and has zero impact on clinical practice. By all means protect yourself from becoming a big producer of small things.

Instead, remember that virtually everything remains to be discovered in clinical medicine, with surgery as no exception. The future will be surprised at how little we knew. All research findings are preliminary, and research has no end. If you embrace these prerequisites, there can be no urgency in defining your research hypothesis. Read, ponder, and discuss with critical colleagues. In the end, choose a hypothesis that profoundly stimulates your curiosity, is feasible to investigate, and has a realistic chance of influencing clinical practice. And embrace the fact that you are probably starting a journey that will take a decade, and often longer.

Authorship
Far too often, the research process and the joy of creative human interaction are poisoned by controversies over authorship. Admittedly, publications are the driving force for recognition, fame, and academic promotion, perhaps even salary increases. Hence, it is not at all surprising that human greed comes into play. Nevertheless, authorship should be earned, not bestowed. It has often struck me that among those who argued loudly about their own presence or position in the author list, few, if any, became particularly successful scholars. These colleagues may win one battle or two but lose countless future opportunities because they become unattractive as collaborators.

Having said this, dealing with issues of authorship with fairness and justice is likely to become increasingly complex. Cutting-edge research will increasingly require large teams of investigators representing a variety of complementary expertise. How, then, can we navigate this ever more complex archipelago? I can see no alternative to an open, respectful discourse beginning early rather than late in the research process.

The pleasure of finding things out
Do not devote your short life to scholarly work unless the alternatives are distinctly unattractive. And choose your environment, supervisors, mentors, and collaborators carefully and critically; referees, publication records, and funding situations often provide useful guidance. Remember that creativity requires freedom and tolerance. And do not dismiss unexpected findings; they may be more important than the expected ones and sometimes define the beginning of a journey towards a Nobel Prize! The purpose of science is not to confirm prejudices. And foster all aspects of generosity: with ideas, support, resources, appreciation, and encouragement. If all these aspects are successfully considered, chances are good that from time to time you and your colleagues will enjoy the pleasure of finding things out (9).

Keep digging
Unlike experimental laboratory research – in which hypotheses may be tested and results unfold within a short time period – clinical research is predominantly a drawn-out process. Fundamental discoveries that entail shifts in paradigm and clinical practice often require a series of studies or even life-long efforts. Clinical investigators therefore need to adopt the ethos of the most successful basic researchers: do not give up after your first attempt, but refine, revise, or change your study hypothesis, and perhaps also your study design. And keep digging until your treasure is found.

REFERENCES

  1. Rothman KJ, Huybrechts KF, Murray EJ. Epidemiology – An Introduction. 3rd ed. New York: Oxford University Press; 2024.
  2. Grobbee DE, Hoes AW. Clinical Epidemiology: Principles, Methods, and Applications for Clinical Research. 2nd ed. Burlington, MA: Jones & Bartlett Learning; 2014.
  3. Xu H, Bretthauer M, Fang F, et al. Dramatic improvements in outcome following pancreatoduodenectomy for pancreatic and periampullary cancers. Br J Cancer 2024;131:747-54.
  4. Dickman P, Adami HO. Interpreting trends in cancer patient survival. J Intern Med 2006;260:103-17.
  5. Pinczowski D, Ekbom A, Baron J, et al. Risk factors for colorectal cancer in patients with ulcerative colitis: a case-control study. Gastroenterology 1994;107:117-20.
  6. Kalager M, Adami HO, Lagergren P, et al. Cancer outcomes research – a European challenge: measures of the cancer burden. Mol Oncol 2021;15:3225-41.
  7. Ioannidis JP. Why most published research findings are false. PLoS Med 2005;2:e124.
  8. Ioannidis JP. How to make more published research true. PLoS Med 2014;11:e1001747.
  9. Feynman RP, Robbins J, ed. The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman. Cambridge, MA: Perseus Books; 1999.
