"Most of Glioma Patients Have Local Recurrence and Their Median Survival Rate Is Reported To
"Most of Glioma Patients Have Local Recurrence and Their Median Survival Rate Is Reported To
1. In response to comment 2:
The individual survival was based on the literature cited on Page 3, Theoretical Framework:
"Most of glioma patients have local recurrence and their median survival rate is reported to
be 16 to 24 weeks (Wong et al, 1999)", so we decided that a patient must have an expected
survival of at least 4 months (that is, 16 weeks) to be considered for this study, as
mentioned in the Selection of Participants section.
Reply: Individual survival can only be estimated via a model. Since individual survival is
mentioned, I asked how it was estimated. The section only mentions "median survival rate"
without reference to how it has been calculated (this is relevant because it may inform how
the data is going to be analysed in the study). If it is calculated based on the population (e.g.
by looking at a Kaplan-Meier graph), then it’s NOT individual survival. So the term
“individual survival” is misleading at best, wrong at worst.
Comment 4: Clinically significant or statistically significant? If the latter, it's not clear how this
decision was obtained formally.
2. In response to comment 4:
It is a clinically significant study, and this was asked during the workshop.
Reply: This doesn't answer the question, and the statement "clinically significant study" is
meaningless on its own; of course you would expect/desire a study to be "clinically significant", but the
way you are going to show evidence of a "clinically significant" difference is by using
statistics. The statistics, on the other hand, may show evidence of a difference, but that
difference may not be clinically significant. The clinician will usually set a "clinically
significant" difference (to be obtained via some exam/procedure/assessment) and then
assess the statistical significance of this difference.
Comment 5: (this isn't really comment 5, it's in the side comments section)
3. In response to comment 5, "it is not clear whether 24 patients is per arm or for both":
I have already mentioned in my Executive Summary (last paragraph) that we will enroll 24
patients, who are to be divided into two arms. That means 12 in each arm!
Reply: in the Executive Summary it is only mentioned that there are two arms, not that the
allocation ratio is equal; that is a different thing, and it does not automatically mean "12 in each
arm"; it could be 10 in one arm and 14 in the other. It is also quite common to imply a 1:1 allocation
ratio and state the per-arm sample size, in which case "24 patients" would be interpreted as 24:24, i.e. 24
per arm.
Only later, at the end of the "Sample and recruitment" section, is there a "divided equally"
comment. The point was that this information about the sample should have appeared
earlier, where the sample size is first mentioned, that is, in the Executive Summary section.
4. As per the assignment brief, "Describe how the sample size was calculated if
appropriate":
I have followed this statement and reviewed and quoted my reasons for choosing 24
patients, as mentioned on Page 6, Sample and recruitment: "Reviewing the
literature on similar recent studies (Y.K. Oh et al., 2014; S.A. Oh et al., 2016;
Shields et al., 2013) and based on the average number of patients per year treated
for glioma using radiotherapy at our centre, we have decided to enroll 24 patients to
this study with an aim to complete the treatment of all participants in one calendar
year."
This is a clinical study and the number of participants is based on the number of
cases we get per year, and it is mentioned that we intend to complete the study in a year's
time.
There are sources quoted as well that used a similar approach, and no sample size
calculation was done in them either. As per the brief, and from the communication
with you during the workshops, you mentioned that if it is a short study, a
sample size calculation may not be possible or may not be required; but I have justified the
number of participants in my assignment.
Reply: a sample size calculation may not be required if, for example, the study
consists of a retrospective analysis of data (although even in these cases, a calculation
of the power for the given sample size would not hurt.) But even if the sample size is
not calculated for some reason, a power analysis may be required (and power
analysis doesn’t necessarily mean “sample size calculation”; it may mean “power
calculation” if the sample size is fixed.)
In your case you are limited to 24 patients, and you are going to assess their
compliance. How do you know whether 24 patients are enough to
show an effect? If you are limited to 24 patients for whatever reason, then you
should know something about the power that you can achieve with 24 patients. It is
quite likely that a sample of 24 patients will have a very low power. But this depends
also on the significance that you find acceptable (it is quite common to choose
alpha=0.05 or 0.01), and most importantly, what is the “clinically significant”
difference. If the difference is very small, you may end up with an unrealistically low
power, which means that the study is not useful. If the difference is large, the
achievable power may be sufficient; it’s usual to require a power of at least 0.8. If
you can’t achieve it, it means you have to either accept a larger “clinically significant”
difference, or increase the alpha. There is a trade-off, and you may well end up in a
situation where the study is simply not doable with a given, fixed, sample size.
Note that the 24 patients in another study may have different characteristics than the
24 patients in your study, so assuming that everything else will be the same is
dangerous. You can refer to other studies if some approximate quantity is needed
(for example, other studies may report the standard deviation of some quantity of
interest, which may then be taken and used in a new study, assuming it’s justified
given the population of interest.) In your case, you have taken the sample size from
a different study, but it doesn’t mean you can stop there. You need to use that piece
of information with your “clinically significant” difference, to derive the achievable
power.
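To make this concrete, here is a minimal sketch of such a power check, using purely hypothetical inputs (a two-arm comparison of compliance proportions, 12 patients per arm, alpha = 0.05, and assumed compliance of 95% vs. 75%; none of these figures come from your proposal):

    # Power achievable with a fixed sample size -- hypothetical inputs, illustration only.
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    p_reduced = 0.95    # assumed compliance in the reduced-margin arm (hypothetical)
    p_standard = 0.75   # assumed compliance in the standard-margin arm (hypothetical)
    alpha = 0.05        # two-sided significance level
    n_per_arm = 12      # fixed by recruitment: 24 patients in total, 1:1 allocation

    effect_size = proportion_effectsize(p_reduced, p_standard)  # Cohen's h
    power = NormalIndPower().power(effect_size=effect_size, nobs1=n_per_arm,
                                   alpha=alpha, ratio=1.0, alternative='two-sided')
    print(f"Achievable power with {n_per_arm} patients per arm: {power:.2f}")

With inputs of this order the achievable power comes out well below the usual 0.8 target, which is exactly the trade-off described above; the point of the exercise is to make that limitation explicit before the data are collected.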
Reply: How do you know whether the >95% compliance in both cohorts is just a fluke? In
particular, considering that the sample will be small, you cannot rely on "clinical
significance" alone; per my previous comment, clinical significance is informed by
statistical significance.
Once you apply a measure to a sample of subjects, that measure automatically
becomes a statistic, simply because you cannot extrapolate it directly to the
whole population. So, suppose that in one cohort the compliance is 97%. It is larger
than 95%, so you may think you are done. But what is the variability of this estimate,
accounting for the small sample size? A confidence interval will need to be
calculated, and the confidence interval around that 97% may be [85%, 100%]; in
this example, assuming a 95% confidence interval, it means that, at the 95% confidence
level, the true compliance could be anything between 85% and 100%.
Would this variation be considered clinically acceptable? Probably not, since your
goal was a >95% compliance.
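As an illustration of how wide such an interval can be, here is a minimal sketch, assuming a purely hypothetical single-arm count of 11 compliant patients out of 12 (these numbers are not from your proposal):

    # 95% confidence intervals for a compliance proportion in a small cohort
    # (hypothetical counts, for illustration only).
    from statsmodels.stats.proportion import proportion_confint

    n_compliant = 11   # hypothetical number of compliant patients in one arm
    n_total = 12       # hypothetical arm size

    for method in ("beta", "wilson"):   # "beta" is the exact Clopper-Pearson interval
        low, high = proportion_confint(n_compliant, n_total, alpha=0.05, method=method)
        print(f"{method:>7s} 95% CI: [{low:.2f}, {high:.2f}] "
              f"(point estimate {n_compliant / n_total:.2f})")

Even if all 12 patients in an arm were compliant (a point estimate of 100%), the exact 95% interval would still extend down to roughly 74%, so a small cohort on its own cannot demonstrate a >95% compliance with any confidence.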
The text on page 6, "Data collection and processing" does mention "to check disease
free survival and compare effects of reduced margins in both cohorts", but it does not
mention which analysis is going to be used to actually do it. Ideally, this analysis
should be the same as the one used to power the study (for example, Cox's model if
the outcome of interest is a survival difference.) Is this a further goal of the study?
If it is, the power calculation needs to account for the fact that you also want to
assess mortality besides compliance. As a general principle, the more information
you want to “extract” from the data, the larger the sample size needs to be; or, if it is
fixed as in your case, you need to use a lower alpha (to avoid false discoveries.) In
turn, this reduction of alpha may well result in a lower overall power. If you look at
the formulas for power calculations, you’ll see that there is a trade-off between
sample size, power, and significance.
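For reference, the standard approximate two-sample formula (for a difference Δ in means with common standard deviation σ, two-sided significance α and power 1−β; a textbook expression, not something taken from your proposal) makes that trade-off explicit:

    n_{\text{per arm}} \;=\; \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\Delta^{2}},
    \qquad \text{equivalently} \qquad
    1-\beta \;=\; \Phi\!\left(\frac{\Delta}{\sigma}\sqrt{\frac{n}{2}} - z_{1-\alpha/2}\right).

Holding n fixed, a smaller Δ (or a smaller α) inevitably lowers the achievable power, which is the situation you are in with 24 patients.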
6. Lastly, to answer "there is no statistics at all":
Reply: Per my previous comments, there is no power calculation given the sample
size, and no mention of any type of formal statistical analysis on the collected data to
statistically assess whether the objectives have been achieved.