
CPP : Cardiovascular Prevention and Pharmacotherapy



Search results for "Epidemiologic studies" (8 articles)
Special Articles
Methods for Evaluating the Accuracy of Diagnostic Tests
Chi-Yeon Lim
Cardiovasc Prev Pharmacother. 2021;3(1):15-20.   Published online January 31, 2021
  • 1,063 View
  • 23 Download
Abstract PDF
The accuracy of a diagnostic test should be evaluated before it is used in clinical situations. Sensitivity, specificity, and the trade-off between the two need to be considered. Sensitivity and specificity depend on the selected cut-off value, and an appropriate cut-off value can be arrived at by analyzing the receiver operating characteristic (ROC) curve. In actual clinical settings, it is often difficult to obtain an appropriate gold standard for diagnosis; in such cases, agreement between tests needs to be evaluated as well. In this article, we summarize the basic concepts and methods for evaluating the performance of diagnostic tests.
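The calculations described in this abstract can be sketched in a few lines. The marker values, labels, and cut-offs below are hypothetical, chosen only to illustrate how sensitivity, specificity, and the ROC-based cut-off choice (here via Youden's J statistic) fit together; the article itself may use different criteria.

```python
# Hypothetical example: evaluating a continuous diagnostic marker
# against a binary gold standard (1 = diseased, 0 = healthy).

def sens_spec(values, labels, cutoff):
    """Sensitivity and specificity when 'positive' means value >= cutoff."""
    tp = sum(1 for v, y in zip(values, labels) if v >= cutoff and y == 1)
    fn = sum(1 for v, y in zip(values, labels) if v < cutoff and y == 1)
    tn = sum(1 for v, y in zip(values, labels) if v < cutoff and y == 0)
    fp = sum(1 for v, y in zip(values, labels) if v >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Marker values for 4 diseased and 4 healthy subjects (illustrative)
values = [2.1, 3.5, 4.0, 5.2, 1.0, 1.8, 2.5, 3.0]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Choose the cut-off maximizing Youden's J = sensitivity + specificity - 1,
# i.e., the point on the ROC curve farthest above the diagonal.
best = max(sorted(set(values)),
           key=lambda c: sum(sens_spec(values, labels, c)) - 1)
```

Scanning every observed value as a candidate cut-off is exactly how an empirical ROC curve is traced; each cut-off yields one (1 - specificity, sensitivity) point.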
Causal Claims in Health Sciences and Medicine: a Difference-in-Differences Method
Kyoung-Nam Kim
Cardiovasc Prev Pharmacother. 2020;2(3):99-102.   Published online July 31, 2020
  • 957 View
  • 5 Download
Abstract PDF
The difference-in-differences (DID) method is a useful tool to make causal claims using observational data. The key idea is to compare the difference between exposure and control groups before and after an event. The potential outcome of the exposure group during the post-exposure period is estimated by adding the observed outcome change of the control group between the pre- and post-exposure periods to the observed outcome of the exposure group during the pre-exposure period. Because the effect of exposure is evaluated by comparing the observed outcome and potential outcome of the same exposure group, unmeasured potential confounders can be cancelled out by the design. To apply this method appropriately, the difference between the exposure and control groups needs to remain relatively stable had no exposure occurred. Despite the strengths of the DID method, the assumptions, such as parallel trends and proper comparison groups, need to be carefully considered before application. If used properly, this method can be a useful tool for epidemiologists and clinicians to make causal claims with observational data.
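The estimator described in this abstract reduces to simple arithmetic on group means. The numbers below are hypothetical, used only to make the counterfactual construction concrete:

```python
# Hypothetical group means of an outcome before and after an event.
exposure_pre, exposure_post = 10.0, 16.0   # exposure-group means
control_pre, control_post = 8.0, 11.0      # control-group means

# Counterfactual post-period outcome for the exposure group:
# its pre-period outcome plus the change observed in the control
# group (this is where the parallel-trends assumption enters).
counterfactual_post = exposure_pre + (control_post - control_pre)

# DID estimate: observed minus counterfactual for the same group,
# equivalently (16 - 10) - (11 - 8).
did_estimate = exposure_post - counterfactual_post
```

Because the comparison is between the observed and counterfactual outcomes of the *same* exposure group, any time-invariant confounder shifts both terms equally and cancels, as the abstract notes.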
Pragmatic Clinical Trials for Real-World Evidence: Concept and Implementation
Na-Young Jeong, Seon-Ha Kim, Eunsun Lim, Nam-Kyong Choi
Cardiovasc Prev Pharmacother. 2020;2(3):85-98.   Published online July 31, 2020
  • 2,124 View
  • 30 Download
  • 1 Citations
Abstract PDF
The importance of real-world evidence (RWE) has been highlighted in recent years, and the limitations of classical randomized controlled trials, also known as explanatory clinical trials (ECTs), have been emphasized. Post-marketing observational studies have several problems, such as biases and incomparability between patient groups, and RWE can only be obtained after a certain period. Therefore, pragmatic clinical trials (PCTs) have garnered attention as an alternative for obtaining scientifically robust RWE in a relatively short time. PCTs are clinical trials with a pragmatic orientation, the opposite of ECTs, intended to help decision makers by evaluating the effectiveness of interventions in routine clinical practice. The characteristics of PCTs are the inclusion of the various patients seen in clinical practice, recruitment of patients in heterogeneous settings, and comparison with actual clinical treatments rather than a placebo. Thus, the results of PCTs are likely to be generalizable and can have a direct impact on clinical and policy decision-making. This study aimed to describe the characteristics and definitions of PCTs compared with those of ECTs and to highlight the important considerations in the planning process of PCTs. The contents covered in this study will be helpful for performing PCTs to obtain RWE.


Citations to this article as recorded by  
  • A scoping review of the Choice and Partnership Approach in child and adolescent mental health services
    Kathleen Pajer, Carlos Pastrana, William Gardner, Aditi Sivakumar, Ann York
    Journal of Child Health Care.2022; : 136749352210762.     CrossRef
Competing Risk Model in Survival Analysis
Yena Jeon, Won Kee Lee
Cardiovasc Prev Pharmacother. 2020;2(3):77-84.   Published online July 31, 2020
  • 2,066 View
  • 88 Download
Abstract PDF
Survival analysis is primarily used to identify the time-to-event for events of interest. However, subjects may experience several possible outcomes; competing risks occur when other events may affect the incidence rate of the event of interest. In the presence of competing risks, traditional survival analysis such as the Kaplan-Meier method or the Cox proportional hazards regression introduces biases into the estimation of survival probability. In this review, we discuss several methods that can be used to consider competing risks in survival analysis: the cumulative incidence function, the cause-specific hazard function, and Fine and Gray's subdistribution hazard function. We also provide a guide for conducting competing risk analysis using SAS with the bone marrow transplantation dataset presented by Klein and Moeschberger (1997).
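The cumulative incidence function mentioned in this abstract can be sketched with a small nonparametric (Aalen-Johansen style) estimator: at each event time, cause k's CIF gains the overall survival just before that time multiplied by the fraction of at-risk subjects failing from cause k. The toy dataset below is hypothetical (cause 0 denotes censoring):

```python
# Illustrative (time, cause) pairs; cause 0 = censored, 1 and 2 compete.
data = [(1, 1), (2, 2), (3, 1), (4, 0), (5, 2), (6, 1)]

def cumulative_incidence(data, cause):
    """CIF for one cause at the end of follow-up (no tied event times)."""
    data = sorted(data)          # process subjects in time order
    surv = 1.0                   # overall (all-cause) survival S(t-)
    cif = 0.0
    at_risk = len(data)
    for t, c in data:
        if c == cause:
            cif += surv * (1 / at_risk)   # mass assigned to this cause
        if c != 0:
            surv *= 1 - 1 / at_risk       # any event lowers S(t)
        at_risk -= 1                      # subject leaves the risk set
    return cif
```

Note that the two cause-specific CIFs plus the remaining overall survival partition the probability mass, which is precisely why 1 minus a Kaplan-Meier estimate (treating competing events as censored) overstates each cause's incidence.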
From Traditional Statistical Methods to Machine and Deep Learning for Prediction Models
Jun Hyeok Lee, Dae Ryong Kang
Cardiovasc Prev Pharmacother. 2020;2(2):50-55.   Published online April 30, 2020
  • 964 View
  • 11 Download
Abstract PDF
Traditional statistical methods have low accuracy and predictability in the analysis of large amounts of data, and non-linear models cannot be developed with them. Moreover, methods used to analyze data for a single time point exhibit lower performance than those used to analyze data for multiple time points, and the difference in performance increases as the amount of data increases. Using deep learning, it is possible to build a model that reflects all information on repeated measures. A recurrent neural network can be built to develop a predictive model using repeated measures; however, it suffers from long-term dependency and vanishing-gradient problems. Meanwhile, the long short-term memory (LSTM) method can be applied to solve the problems of long-term dependency and vanishing gradients by assigning a fixed weight inside the cell state. Unlike traditional statistical methods, deep learning methods allow researchers to build non-linear models with high accuracy and predictability, using information from multiple time points. However, deep learning models are difficult to interpret, although many methods have recently been developed to do so by weighting time points and variables using attention algorithms, such as ReversE Time AttentIoN (RETAIN). In the future, deep learning methods, as well as traditional statistical methods, will become essential methods for big data analysis.
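The cell state mentioned in this abstract is the key to the LSTM's handling of long-term dependencies. A minimal sketch of one LSTM step for a scalar input is shown below; the weights are hypothetical placeholders (real models learn weight matrices over vector inputs):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step; w maps each gate to (input weight, hidden weight, bias)."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])   # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])   # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2]) # candidate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])   # output gate
    c = f * c_prev + i * g   # additive cell-state path carries long-term info
    h = o * math.tanh(c)     # hidden state passed to the next time point
    return h, c

# Hypothetical shared weights, then a short sequence of repeated measures.
weights = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "g", "o")}
h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.5]:
    h, c = lstm_step(x, h, c, weights)
```

The additive update `c = f * c_prev + i * g` is what mitigates the vanishing gradient: gradients flow through the cell state without being repeatedly squashed by an activation, unlike in a plain recurrent network.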
Basic Concepts of a Mendelian Randomization Approach
Tae-Hwa Go, Dae Ryong Kang
Cardiovasc Prev Pharmacother. 2020;2(1):24-30.   Published online January 31, 2020
  • 1,648 View
  • 32 Download
Abstract PDF
The Mendelian Randomization (MR) approach is a method that enables causal inference in observational studies. There are 3 assumptions that must be satisfied to obtain suitable results: 1) The genetic variant is strongly associated with the exposure, 2) The genetic variant is independent of the outcome, given the exposure and all confounders (measured and unmeasured) of the exposure-outcome association, 3) The genetic variant is independent of factors (measured and unmeasured) that confound the exposure-outcome relationship. This analysis has been used increasingly since 2011, but many researchers still do not know how to perform MR. Here, we introduce the basic concepts, assumptions, and methods of MR analysis to enable better understanding of this approach.
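Under the three assumptions listed in this abstract, a common MR estimator (not necessarily the one this article emphasizes) is the Wald ratio: the variant-outcome association divided by the variant-exposure association. The association estimates below are hypothetical:

```python
# Hypothetical per-allele association estimates for one genetic variant.
beta_gx = 0.20   # variant -> exposure (e.g., change in a biomarker)
beta_gy = 0.05   # variant -> outcome (e.g., log-odds of disease)

# Wald ratio: causal effect of exposure on outcome via this instrument.
wald_ratio = beta_gy / beta_gx

# With several independent variants, per-variant ratios can be pooled
# by inverse-variance weighting (a simple fixed-effect combination).
ratios = [0.25, 0.30, 0.20]            # hypothetical per-variant estimates
weights_iv = [100.0, 50.0, 25.0]       # hypothetical 1/variance weights
ivw = sum(r * w for r, w in zip(ratios, weights_iv)) / sum(weights_iv)
```

The first assumption (a strong variant-exposure association) matters numerically here: a small `beta_gx` denominator makes the ratio unstable, which is the weak-instrument problem.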
Improving Causal Inference in Observational Studies: Interrupted Time Series Design
Kyoung-Nam Kim
Cardiovasc Prev Pharmacother. 2020;2(1):18-23.   Published online January 31, 2020
  • 1,468 View
  • 39 Download
Abstract PDF
Interrupted time series analysis is often used to evaluate the effects of healthcare policies and interventional projects using observational data. It is an epidemiological method based on the assumption that the trend of the pre-intervention time series, had there been no intervention, would have continued in the post-intervention period. The time series during the pre-intervention period is used to model a counterfactual situation without intervention during the post-intervention period. The effects of intervention can be seen in the form of abrupt changes in the outcome level (intercept) due to the intervention and/or changes in the outcome over time (slope) after the intervention. If the effects of intervention are predefined, they can be distinguished and analyzed based on the time series analysis model constructed accordingly. Interrupted time series analysis is generally performed as a pre-post comparison using the intervention series alone. Recently, however, controlled interrupted time series analysis, which uses a control series as well as an intervention series, has also been used. Controlled interrupted time series analysis uses a control series to control potential confounding due to events occurring concurrently with the intervention of interest. Even though interrupted time series analysis is a useful way to assess the effects of intervention using observational data, misleading results can be derived if the conditions for proper application are not met. Before applying the method, it is necessary to make sure that the data conform to the conditions for proper application.
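The level (intercept) and slope changes described in this abstract correspond to the segmented-regression model outcome = b0 + b1*time + b2*post + b3*time_since_intervention. The sketch below generates a noiseless series from hypothetical coefficients so that both changes can be read back exactly from the pre- and post-intervention segments; real analyses fit this model by regression on noisy data:

```python
# Hypothetical true coefficients and intervention time.
b0, b1, b2, b3 = 10.0, 0.5, 3.0, 1.0   # baseline, trend, level change, slope change
t0 = 5                                  # intervention occurs at time 5
times = list(range(10))
y = [b0 + b1 * t + (b2 + b3 * (t - t0) if t >= t0 else 0.0) for t in times]

# On noiseless data the segment slopes are exact first differences.
pre_slope = y[1] - y[0]                   # pre-intervention trend (b1)
post_slope = y[t0 + 1] - y[t0]            # post-intervention trend (b1 + b3)

# Counterfactual at t0: extrapolate the pre-intervention trend.
counterfactual_at_t0 = y[0] + pre_slope * t0
level_change = y[t0] - counterfactual_at_t0   # recovers b2
slope_change = post_slope - pre_slope         # recovers b3
```

The counterfactual extrapolation is exactly where the method's key assumption enters: the pre-intervention trend must be what the series would have followed absent the intervention.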
Improving Causal Inference in Observational Studies: Propensity Score Matching
Min Heui Yu, Dae Ryong Kang
Cardiovasc Prev Pharmacother. 2019;1(2):57-62.   Published online October 31, 2019
  • 1,469 View
  • 24 Download
Abstract PDF
Propensity score matching (PSM) is a useful statistical method to improve causal inference in observational studies, as it improves comparability between the two comparison groups. PSM is based on a "counterfactual" framework, in which the causal effect on study participants (factual) and assumed participants (counterfactual) are compared. All participants are divided into two groups, matched on covariates as closely as possible. The propensity score is used for matching; it reflects the conditional probability that an individual is included in the experimental group, given the covariates. The counterfactuals for the experimental group are matched between groups with characteristics as similar as possible. In this article, we introduce the concept of PSM, PSM methods, limitations, and statistical tools.
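One common PSM variant (not necessarily the only one this article covers) is 1:1 greedy nearest-neighbor matching within a caliper. The sketch below assumes propensity scores have already been estimated, e.g., by logistic regression of treatment on the covariates; all identifiers and scores are hypothetical:

```python
# Hypothetical propensity scores: subject id -> estimated score.
treated = {"t1": 0.30, "t2": 0.55, "t3": 0.72}
control = {"c1": 0.28, "c2": 0.50, "c3": 0.70, "c4": 0.90}

def greedy_match(treated, control, caliper=0.1):
    """1:1 greedy nearest-neighbor matching without replacement."""
    pairs = {}
    available = dict(control)
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        # Nearest remaining control on the propensity score.
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        if abs(available[c_id] - t_ps) <= caliper:   # enforce the caliper
            pairs[t_id] = c_id
            del available[c_id]                      # without replacement
    return pairs

matches = greedy_match(treated, control)
```

After matching, covariate balance between the matched groups should be checked (e.g., with standardized mean differences) before estimating the treatment effect.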
