WEBVTT
Kind: captions
Language: en

00:00:00.760 --> 00:00:06.590
This video explains the phenomenon of statistical
and methodological myths and urban legends.

00:00:06.590 --> 00:00:11.500
So what are these statistical myths, how
do they emerge, and what's the outcome for

00:00:11.500 --> 00:00:13.280
research and practice?

00:00:13.280 --> 00:00:18.789
Let's start by asking the question how should
I or how do I choose which analysis technique

00:00:18.789 --> 00:00:21.310
or which methods to apply?

00:00:21.310 --> 00:00:26.410
One reasonable-sounding strategy is to take
a look at the journals where you want to publish

00:00:26.410 --> 00:00:31.580
and see what other people are using in those
journals.

00:00:31.580 --> 00:00:37.079
But it turns out that this is a source of methodological
myths and urban legends.

00:00:37.079 --> 00:00:44.840
These are beliefs that are widely held but
are not correct and can lead to suboptimal

00:00:44.840 --> 00:00:48.000
decisions and even incorrect results.

00:00:48.000 --> 00:00:50.440
So how do these myths emerge?

00:00:50.440 --> 00:00:56.030
Typically the way we get new methods into an
applied discipline is that somebody introduces

00:00:56.030 --> 00:01:02.409
an idea in a research methods journal such
as Psychometrika or Econometrica.

00:01:02.409 --> 00:01:08.190
Then somebody in that applied discipline
reads the article in the research methods

00:01:08.190 --> 00:01:15.400
journal, uses the technique, quite likely
misunderstands the technique, and then cites

00:01:15.400 --> 00:01:19.280
the methods journal that they got the technique from.

00:01:19.280 --> 00:01:26.910
So what happens then when the next person
in that journal wants to apply the same technique?

00:01:26.910 --> 00:01:32.640
Do they go to the methods journal and try to understand
the equations and proofs or simulation results,

00:01:32.640 --> 00:01:38.561
or do they just look at the justification
that this empirical paper gave for the technique

00:01:38.561 --> 00:01:42.180
which does not explain how and why it works?

00:01:42.180 --> 00:01:47.160
They will go to the empirical paper instead
of looking at the methods paper.

00:01:47.160 --> 00:01:53.310
And chances are that there's another careless
citation of the idea and the idea becomes

00:01:53.310 --> 00:01:55.380
more misunderstood.

00:01:55.380 --> 00:02:00.930
So this is similar to the broken telephone
game that many people have played as kids.

00:02:00.930 --> 00:02:06.440
If you have ten kids in a row - the first
tells a message to the second one who repeats

00:02:06.440 --> 00:02:12.719
it to the third one, who repeats it to the fourth
one, and then by the time the message

00:02:12.719 --> 00:02:20.239
reaches the tenth kid - it is something completely
different from the original message.

00:02:20.239 --> 00:02:25.750
So these long chains of citation from one
empirical paper to another instead of looking

00:02:25.750 --> 00:02:30.650
at the original source cause confusion and
misunderstanding.

00:02:30.650 --> 00:02:36.359
What's more problematic is that when
we have these two articles here that cite

00:02:36.359 --> 00:02:42.450
the misunderstood idea then people in this
discipline think that they have all the knowledge

00:02:42.450 --> 00:02:48.370
about the technique, and then most of the
other people

00:02:48.370 --> 00:02:54.019
who want to publish in this journal cite these
two papers as evidence that this is how the

00:02:54.019 --> 00:02:56.090
technique is supposed to work.

00:02:56.090 --> 00:03:03.200
Then, through more careless citation, the increasingly
misunderstood idea becomes

00:03:03.200 --> 00:03:08.749
institutionalized in the research practice
so that no one even questions it.

00:03:08.749 --> 00:03:15.459
Once we have ten papers that apply a technique
incorrectly or repeat a claim that is not

00:03:15.459 --> 00:03:21.000
true, then everyone thinks that claim is true
because it has been repeated many times.

00:03:21.000 --> 00:03:28.599
What will happen next is that this misunderstood
idea will be institutionalized in the discipline,

00:03:28.599 --> 00:03:32.709
in the review process, and in doctoral
student teaching.

00:03:32.709 --> 00:03:37.809
When you take an introductory research methods
class quite often those classes will tell

00:03:37.809 --> 00:03:42.349
you that these are the techniques that we
apply in our field and then they show you

00:03:42.349 --> 00:03:48.779
how to apply those techniques using statistical
software, instead of explaining what

00:03:48.779 --> 00:03:54.980
the methods literature says about the
technique. That leads to the current application

00:03:54.980 --> 00:03:59.690
or past application of the technique, instead
of the proven properties of the technique

00:03:59.690 --> 00:04:04.010
or method, driving its future use in the
discipline.

00:04:04.010 --> 00:04:11.200
Then if you have a person who wants to do
the technique right that person runs into

00:04:11.200 --> 00:04:13.319
problems because of the review process.

00:04:13.319 --> 00:04:18.980
So you have a person who wants to apply the technique
- the idea - correctly, cites the methods literature,

00:04:18.980 --> 00:04:21.640
and then submits to this journal.

00:04:21.640 --> 00:04:27.639
The reviewers will say that no, this is how
the technique is applied, citing five different

00:04:27.639 --> 00:04:30.220
- these articles.

00:04:30.220 --> 00:04:35.400
So with the misapplication, the discipline
actually starts to enforce the

00:04:35.400 --> 00:04:37.920
incorrect application of the technique.

00:04:37.920 --> 00:04:41.360
And this is very difficult to break.

00:04:41.360 --> 00:04:45.530
There are a number of articles and books about
this topic.

00:04:45.530 --> 00:04:52.020
One of the leading authors is Vandenberg, and
he has this special issue in Organizational

00:04:52.020 --> 00:04:55.300
Research Methods as well as edited books.

00:04:55.300 --> 00:05:00.580
One good idea, if you want to understand well
the techniques that you apply, is that you

00:05:00.580 --> 00:05:05.660
search the term methodological myths together with
the name of your technique in Google Scholar,

00:05:05.660 --> 00:05:10.130
because this is actually a widely used term
for these misunderstandings.

00:05:10.130 --> 00:05:15.140
So not only do you need to understand how your
techniques are applied and how and why they

00:05:15.140 --> 00:05:20.910
work - it's also useful to understand the
common misunderstandings or misconceptions

00:05:20.910 --> 00:05:26.170
about the technique, and these articles about
methodological myths and urban legends are

00:05:26.170 --> 00:05:27.520
useful in that regard.

00:05:27.520 --> 00:05:33.420
They are typically written in a way that is
fairly readable for applied researchers, instead

00:05:33.420 --> 00:05:38.870
of being like the original hard-core research
methods articles that explain algorithms and

00:05:38.870 --> 00:05:40.030
provide proofs and equations.

00:05:40.030 --> 00:05:42.140
These are fairly easy to read.

00:05:42.140 --> 00:05:44.410
And they are also fairly fun to read, at
least for me.

00:05:44.410 --> 00:05:46.650
So I recommend these texts.

00:05:46.650 --> 00:05:53.140
Let's now take a look at a couple of examples
of methodological myths, and then we will discuss

00:05:53.140 --> 00:05:59.140
how you can avoid spreading these myths in
your own work and when you review work by

00:05:59.140 --> 00:06:00.140
others.

00:06:00.140 --> 00:06:03.890
So this is an article that I reviewed recently.

00:06:03.890 --> 00:06:10.710
And the article was not accepted exactly as it was;
instead, we invited a revision and asked

00:06:10.710 --> 00:06:13.300
the authors to completely redo the analysis.

00:06:13.300 --> 00:06:16.270
So let's see what's going on here.

00:06:16.270 --> 00:06:20.590
The authors had an endogeneity problem, and
this was a nice article in that it actually noted

00:06:20.590 --> 00:06:26.451
that there's an endogeneity problem and tried to do something
about it, and they decided to apply two-stage

00:06:26.451 --> 00:06:27.800
least squares.

00:06:27.800 --> 00:06:32.550
So in the first stage regression analysis
they regressed the endogenous explanatory

00:06:32.550 --> 00:06:39.420
variable X on the instrument, and then in the
second stage regression analysis they took

00:06:39.420 --> 00:06:45.500
the residual from the first-stage regression
analysis and used that in place of X as a predictor

00:06:45.500 --> 00:06:47.550
of the final dependent variable.

00:06:47.550 --> 00:06:48.920
So what's the problem?

00:06:48.920 --> 00:06:52.680
The problem is that this is not how two-stage
least squares works.

00:06:52.680 --> 00:06:58.330
So you don't take the residual from the first-stage
regression analysis; instead, you take

00:06:58.330 --> 00:07:00.740
the fitted value.

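NOTE
A minimal sketch of the correct two-stage least squares procedure in
plain numpy, illustrating the fitted-value step. The simulated data and
variable names are hypothetical, chosen for illustration; this is not
the reviewed paper's code.
import numpy as np
rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                    # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)      # endogenous explanatory variable
y = 0.5 * x + u + rng.normal(size=n)      # outcome; true effect is 0.5
Z = np.column_stack([np.ones(n), z])
gamma, *_ = np.linalg.lstsq(Z, x, rcond=None)   # first stage: x on instrument
x_hat = Z @ gamma                         # keep the FITTED value, not the residual
X2 = np.column_stack([np.ones(n), x_hat])
beta, *_ = np.linalg.lstsq(X2, y, rcond=None)   # second stage: y on fitted x
print(beta[1])                            # close to 0.5, unlike naive OLS of y on x
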
00:07:00.740 --> 00:07:07.430
Where do these researchers come up
with the idea that this is how two-stage least

00:07:07.430 --> 00:07:09.420
squares is supposed to be done?

00:07:09.420 --> 00:07:16.130
They cited two articles that were published
the previous year in the same journal.

00:07:16.130 --> 00:07:17.660
So that's fairly common.

00:07:17.660 --> 00:07:23.870
You cite articles that use the same technique
that you haven't applied before, and it happens

00:07:23.870 --> 00:07:30.500
that these two cited articles, quoted here,
actually explain the two-stage least squares

00:07:30.500 --> 00:07:33.110
procedure incorrectly.

00:07:33.110 --> 00:07:38.360
So what would have happened if that article
had been published as such?

00:07:38.360 --> 00:07:42.730
Then there would have been three research
articles that explain two-stage least squares

00:07:42.730 --> 00:07:48.060
and if a person who doesn't understand the
technique - if they want to know more about

00:07:48.060 --> 00:07:52.630
it - they read the first article, then they'll
look at the explanation in the second article

00:07:52.630 --> 00:07:58.190
that looks the same; if they're still not sure
that this is how two-stage least squares works,

00:07:58.190 --> 00:08:01.180
they will look at the third article that says
the same.

00:08:01.180 --> 00:08:07.730
And all this is a misunderstanding, perhaps
by one or two researchers, that is just repeated

00:08:07.730 --> 00:08:13.810
in the literature and instead of looking at
the original sources or good methods books

00:08:13.810 --> 00:08:19.810
people cut corners and they look at the guidance
provided by the journal that they target.

00:08:19.810 --> 00:08:27.440
So that's one example; to avoid this, it would
be a good idea to justify your choices based

00:08:27.440 --> 00:08:31.830
on methods literature instead of previous
empirical applications.

00:08:31.830 --> 00:08:37.200
This is an example of where analysis results
were clearly incorrect because the technique

00:08:37.200 --> 00:08:38.440
was misapplied.

00:08:38.440 --> 00:08:45.890
The second one is less severe, but this
is perhaps the most widespread methodological

00:08:45.890 --> 00:08:46.890
myth.

00:08:46.890 --> 00:08:54.770
The myth is that coefficient alpha must be
more than 0.7 to be acceptable and that

00:08:54.770 --> 00:09:01.330
Nunnally (1978), in the book Psychometric
Theory, stated so.

00:09:01.330 --> 00:09:04.420
This is an example of this myth in action.

00:09:04.420 --> 00:09:11.110
So we have the 0.7 cut-off being
cited without a page number.

00:09:11.110 --> 00:09:17.210
A citation without a page number is a good
indication that perhaps the author has not

00:09:17.210 --> 00:09:23.480
actually read the book but is citing it out
of habit, because that is what is done in the

00:09:23.480 --> 00:09:24.480
discipline.

00:09:24.480 --> 00:09:25.520
So this is not true.

00:09:25.520 --> 00:09:28.070
Nunnally says nothing of this sort.

00:09:28.070 --> 00:09:30.490
They didn't give a specific cut-off.

00:09:30.490 --> 00:09:36.691
What the book actually says has been written
about in many different places and you can

00:09:36.691 --> 00:09:39.430
also check the book itself.

00:09:39.430 --> 00:09:45.580
The recommendation for reliability values is
that the required value should depend on the context.

00:09:45.580 --> 00:09:50.340
So if you have very early stage research -
you have a new scale that no one has used

00:09:50.340 --> 00:09:54.450
before - then perhaps 0.7 is a good cut-off.

00:09:54.450 --> 00:10:03.140
But if you have more mature research - a more
mature area - and you are more interested in getting

00:10:03.140 --> 00:10:07.070
the magnitudes of the effect right instead
of just checking whether the effect exists

00:10:07.070 --> 00:10:11.570
or not, then you might need something
like 0.9.

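NOTE
For reference, a minimal sketch of how coefficient alpha is computed with
numpy, since the cut-off debate is about this statistic. The simulated
items matrix is hypothetical and only illustrates the formula
alpha = k / (k - 1) * (1 - sum of item variances / variance of sum score).
import numpy as np
def coefficient_alpha(items):
    # items: (n_respondents, k_items) array of scale items
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
rng = np.random.default_rng(1)
true_score = rng.normal(size=(500, 1))
items = true_score + rng.normal(size=(500, 4))   # four noisy items, one construct
print(coefficient_alpha(items))   # whether this is "enough" depends on the context
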
00:10:11.570 --> 00:10:15.380
So Nunnally clearly explains that context
matters.

00:10:15.380 --> 00:10:23.050
But people read this as saying that 0.7 is
the ultimate cut-off that applies in every

00:10:23.050 --> 00:10:27.500
scenario - all scenarios. That's not what
Nunnally says, but that's

00:10:27.500 --> 00:10:28.500
what the myth is:

00:10:28.500 --> 00:10:29.950
0.7 is always enough.

00:10:29.950 --> 00:10:36.260
It is not always enough and Nunnally does
not recommend one cut-off for every scenario.

00:10:36.260 --> 00:10:44.260
A more reasonable strategy for finding a
comparable reliability statistic is to look

00:10:44.260 --> 00:10:49.760
at what other people in your discipline - what
kind of results they have gotten using the

00:10:49.760 --> 00:10:52.760
same scale - what kind of reliability statistics.

00:10:52.760 --> 00:10:57.779
And then compare whether your reliability is
better or worse than in previous applications of

00:10:57.779 --> 00:10:58.779
the scale.

00:10:58.779 --> 00:11:05.839
That is probably a much more relevant reliability
standard than a psychometrics book written

00:11:05.839 --> 00:11:09.380
more than 40 years ago.

00:11:09.380 --> 00:11:13.630
Let's take a look at a third example, and this
is a big one again.

00:11:13.630 --> 00:11:18.010
This is about partial least squares. I have
written some papers about this topic.

00:11:18.010 --> 00:11:24.170
The idea of partial least squares analysis
is that we apply a regression analysis, but

00:11:24.170 --> 00:11:31.290
instead of taking our scale scores
as sums of items, we take weighted sums before

00:11:31.290 --> 00:11:32.740
doing the regression analysis.

00:11:32.740 --> 00:11:39.230
So the partial least squares analysis is essentially
an indicator weighting system for creating

00:11:39.230 --> 00:11:43.779
composite variables - weighted sums - to
be used in regression analysis.

00:11:43.779 --> 00:11:48.970
There are many myths around this technique
and I will focus on one of them.

00:11:48.970 --> 00:11:52.920
And the particular myth that I'm focusing
on is that the way the partial least

00:11:52.920 --> 00:11:57.820
squares algorithm weights the indicators increases
reliability.

00:11:57.820 --> 00:12:03.670
This is stated, for example, in this editorial
in MIS Quarterly, which is the leading information

00:12:03.670 --> 00:12:09.490
systems journal and also an FT50 journal: optimization
of the weights by the partial least squares

00:12:09.490 --> 00:12:13.330
algorithm aims to reduce measurement error.

00:12:13.330 --> 00:12:15.320
That is, improve reliability.

00:12:15.320 --> 00:12:20.630
The problem with this claim is that there
are reasons to believe that it cannot be true

00:12:20.630 --> 00:12:24.399
and there is no evidence for it being true.

00:12:24.399 --> 00:12:30.790
Let's take a look at how we form scale scores
from indicators for regression

00:12:30.790 --> 00:12:31.790
analysis.

00:12:31.790 --> 00:12:36.710
The typical way is that we take a sum, and
there's a problem: when we take a sum

00:12:36.710 --> 00:12:42.880
of the indicators then we will underestimate
the relationship between the variables that

00:12:42.880 --> 00:12:44.880
those indicators represent.

00:12:44.880 --> 00:12:50.770
So here you can see a simulation study that
we did for a paper: we varied the true correlation

00:12:50.770 --> 00:12:56.779
between the things that we measure, and then
we simulated different data sets, and the

00:12:56.779 --> 00:13:04.980
estimates from regression analysis using weighted
sums of scale items are systematically too

00:13:04.980 --> 00:13:05.980
low.

00:13:05.980 --> 00:13:08.470
So they are underestimating the true relationship.

00:13:08.470 --> 00:13:14.560
This is true regardless of whether we take an
equal-weight sum - so just a sum or mean

00:13:14.560 --> 00:13:21.720
of items - or whether we use weights that
are optimized to maximize reliability.

00:13:21.720 --> 00:13:25.480
This is something that you can't do with real
data, but in simulated scenarios you can.

00:13:25.480 --> 00:13:30.560
So even if we have an ideal set of weights
that maximize the reliability in a simulated

00:13:30.560 --> 00:13:36.380
scenario where everything is under our control,
there is no noticeable advantage in reliability

00:13:36.380 --> 00:13:41.800
from weighting indicators based on their
reliability compared to using equal weights

00:13:41.800 --> 00:13:44.460
- weighting each indicator the same.

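NOTE
A minimal simulation sketch of these two points, under assumptions chosen
for illustration (three indicators per construct with mildly unequal error
variances); this is not the paper's actual simulation design.
import numpy as np
rng = np.random.default_rng(2)
n, true_r = 100_000, 0.5
cov = [[1.0, true_r], [true_r, 1.0]]
latents = rng.multivariate_normal([0.0, 0.0], cov, size=n)
noise_sd = np.array([0.9, 1.0, 1.1])             # mildly unequal item error
items_a = latents[:, [0]] + rng.normal(size=(n, 3)) * noise_sd
items_b = latents[:, [1]] + rng.normal(size=(n, 3)) * noise_sd
w = 1.0 / noise_sd**2     # reliability-maximizing weights, knowable only in a simulation
r_equal = np.corrcoef(items_a.sum(axis=1), items_b.sum(axis=1))[0, 1]
r_weighted = np.corrcoef(items_a @ w, items_b @ w)[0, 1]
print(true_r, round(r_equal, 3), round(r_weighted, 3))
# Both composite correlations fall clearly below the true 0.5, and the
# optimally weighted version is barely different from the equal-weight sum.
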
00:13:44.460 --> 00:13:52.170
The claim that weighting more reliable
indicators more heavily than unreliable indicators

00:13:52.170 --> 00:13:57.670
sounds reasonable, but doing so doesn't actually
improve reliability, and there is no evidence

00:13:57.670 --> 00:13:58.770
for it to do so.

00:13:58.770 --> 00:14:05.110
So how did people come to believe that the
partial least squares weights in particular

00:14:05.110 --> 00:14:08.380
would improve reliability in a meaningful
way?

00:14:08.380 --> 00:14:15.440
Let's take a look at the evidence that supports
this belief.

00:14:15.440 --> 00:14:21.430
There are book chapters and articles, mostly
published outside the mainstream research

00:14:21.430 --> 00:14:26.190
methods journals that claim that there is
evidence for this phenomenon.

00:14:26.190 --> 00:14:32.470
For example, the Chin book chapter cited
here says that in their simulation study

00:14:32.470 --> 00:14:37.279
- after applying the partial least squares
weights - the regression results were

00:14:37.279 --> 00:14:42.410
more accurate than with the equal weights
that we normally use.

00:14:42.410 --> 00:14:45.560
Okay so people claim there's evidence for
this.

00:14:45.560 --> 00:14:48.240
What does the evidence actually say?

00:14:48.240 --> 00:14:52.870
Let's take a look at what the partial least
squares weights actually do.

00:14:52.870 --> 00:15:01.779
How the weights work is that they create
a bias away from zero, and if you don't study the

00:15:01.779 --> 00:15:06.970
technique fully, as we would in a research
methods study, but instead, for

00:15:06.970 --> 00:15:14.510
example, simulate correlation values between
0.2 and 0.5, then you can fool yourself

00:15:14.510 --> 00:15:19.899
into thinking that these scale scores from
the partial least squares algorithm are more

00:15:19.899 --> 00:15:27.370
reliable, because this bias away
from zero happens to cancel the bias

00:15:27.370 --> 00:15:30.870
due to the measurement error in this particular
scenario.

00:15:30.870 --> 00:15:37.970
So that is not evidence of reliability; it's
just evidence that in some scenarios one source

00:15:37.970 --> 00:15:41.610
of bias can cancel another source of bias.

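NOTE
A toy two-block sketch of this chance-capitalization effect: a simplified
stand-in for Mode A style weighting, not a full PLS implementation. With a
true correlation of exactly zero, weights that adapt to the sample push the
composite correlation away from zero relative to equal-weight sums.
import numpy as np
rng = np.random.default_rng(3)
def adaptive_composites(a, b, iters=20):
    # Start from equal weights, then repeatedly set each block's weights to
    # the covariance of its items with the other block's composite.
    wa, wb = np.ones(a.shape[1]), np.ones(b.shape[1])
    for _ in range(iters):
        ca, cb = a @ wa, b @ wb
        wa = a.T @ cb / len(cb)
        wb = b.T @ ca / len(ca)
        wa, wb = wa / np.linalg.norm(wa), wb / np.linalg.norm(wb)
    return a @ wa, b @ wb
abs_adaptive, abs_equal = [], []
for _ in range(500):
    a = rng.normal(size=(50, 3))   # two blocks of items, true correlation zero
    b = rng.normal(size=(50, 3))
    ca, cb = adaptive_composites(a, b)
    abs_adaptive.append(abs(np.corrcoef(ca, cb)[0, 1]))
    abs_equal.append(abs(np.corrcoef(a.sum(axis=1), b.sum(axis=1))[0, 1]))
print(np.mean(abs_adaptive), np.mean(abs_equal))   # adaptive weights sit further from zero
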
00:15:41.610 --> 00:15:48.810
Of course, as a routine research practice, relying
on one bias to cancel another one is a really

00:15:48.810 --> 00:15:50.430
bad idea.

00:15:50.430 --> 00:15:56.120
Additionally, if the objective of
your analysis is to check

00:15:56.120 --> 00:16:02.120
whether an effect is non-zero,
then a technique that is biased away from

00:16:02.120 --> 00:16:08.209
zero, so that it hardly ever indicates that your estimate
is close to zero, is probably the worst possible

00:16:08.209 --> 00:16:12.220
thing that you can do in terms of indicator
weighting.

00:16:12.220 --> 00:16:18.810
Of course, the reason people like to use this technique
is that it provides support for the existence

00:16:18.810 --> 00:16:26.810
of effects even if there
is actually no effect, because normally we

00:16:26.810 --> 00:16:32.930
want to demonstrate that our hypotheses are
not rejected by the data.

00:16:32.930 --> 00:16:37.350
So what can we do about these problems - these
statistical and methodological myths and urban

00:16:37.350 --> 00:16:38.350
legends?

00:16:38.350 --> 00:16:45.860
Beyond the articles about
this phenomenon specifically, editors are trying

00:16:45.860 --> 00:16:48.279
to do something.

00:16:48.279 --> 00:16:54.580
For example, in this editorial in the Journal of Operations
Management, Guide and Ketokivi specifically use

00:16:54.580 --> 00:17:00.410
partial least squares as an example and state
that you should always have a basic understanding of

00:17:00.410 --> 00:17:03.110
what your analysis technique does.

00:17:03.110 --> 00:17:08.059
Unfortunately, many research methods courses
focus on how a technique has been applied

00:17:08.059 --> 00:17:13.900
in the past and on how you apply or use
it with SPSS or some other software, instead

00:17:13.900 --> 00:17:19.120
of explaining the basic principles
that the technique is based on.

00:17:19.120 --> 00:17:23.090
You don't have to be a statistician, but you
have to understand the basics: what is the

00:17:23.090 --> 00:17:27.039
principle that allows the technique to work?

00:17:27.039 --> 00:17:37.090
Then another recommendation they give is that
you should never provide a justification

00:17:37.090 --> 00:17:43.110
in the form that expert X has recommended
that technique Y should be used in a particular

00:17:43.110 --> 00:17:44.110
scenario.

00:17:44.110 --> 00:17:45.110
No.

00:17:45.110 --> 00:17:50.809
The methodological choices should be justified
based on methodological evidence.

00:17:50.809 --> 00:17:57.450
For example, if you want to justify using technique
X, then you can say that

00:17:57.450 --> 00:18:03.120
technique X has been proven to be the ideal technique
in this particular scenario.

00:18:03.120 --> 00:18:08.080
By proven we mean that somebody
has written a mathematical proof that, for

00:18:08.080 --> 00:18:13.640
example, regression analysis is unbiased
in certain conditions. You don't necessarily

00:18:13.640 --> 00:18:19.400
have to cite the proof itself, but if a good
methods book says that something has been

00:18:19.400 --> 00:18:23.910
proven then you can cite that textbook as
an example.

00:18:23.910 --> 00:18:31.300
Then another way of justifying your choices
is to point out that simulation evidence - which

00:18:31.300 --> 00:18:38.940
is another way of supporting methodological
claims - shows that technique X works

00:18:38.940 --> 00:18:43.780
well in conditions that are close to your
conditions.

00:18:43.780 --> 00:18:48.690
Never use the justification that expert X
recommends method Y.

00:18:48.690 --> 00:18:54.960
Experts - if they really are experts - will
always provide you the justification

00:18:54.960 --> 00:18:56.260
for the recommendation.

00:18:56.260 --> 00:19:01.290
So explain the justification instead of
saying that someone says so.

00:19:01.290 --> 00:19:06.900
It's also worth thinking, if you
cite an expert, about who counts as an expert.

00:19:06.900 --> 00:19:15.090
If you want to say something about
regression analysis, should you cite an econometrics

00:19:15.090 --> 00:19:20.730
professor who has built their career studying
regression analysis and related techniques,

00:19:20.730 --> 00:19:26.580
or perhaps a marketing professor who has built
their career applying that technique in marketing

00:19:26.580 --> 00:19:27.850
scenarios?

00:19:27.850 --> 00:19:32.160
So which one is the better source to cite?

00:19:32.160 --> 00:19:38.170
Then, never use empirical precedent as justification.

00:19:38.170 --> 00:19:43.290
As demonstrated by the two-stage least squares
example, the fact that someone has done something in

00:19:43.290 --> 00:19:48.990
a past article in the journal where you
publish does not mean that it is the correct

00:19:48.990 --> 00:19:52.090
thing to do, and it's not evidence of the
thing being correct.

00:19:52.090 --> 00:19:53.450
Cite good books.

00:19:53.450 --> 00:19:57.960
Cite articles that appear in research methods journals
such as Organizational Research Methods.

00:19:57.960 --> 00:20:02.210
But the fact that somebody has used something
before is not evidence for that technique

00:20:02.210 --> 00:20:03.290
to be useful.

00:20:03.290 --> 00:20:08.730
It probably correlates with the technique
being useful, but it is not direct evidence.

00:20:08.730 --> 00:20:11.820
Finally always read what you cite.

00:20:11.820 --> 00:20:17.990
So when you cite a book about regression analysis
then you should read that book or at least

00:20:17.990 --> 00:20:20.040
the part that you cite.

00:20:20.040 --> 00:20:22.500
And when you cite, provide the page number.

00:20:22.500 --> 00:20:29.900
It's much more difficult to make careless citations
with a specific page number than to make a careless

00:20:29.900 --> 00:20:36.850
citation to just a book, relying on the hope
that somewhere in the book it says

00:20:36.850 --> 00:20:37.850
so.

00:20:37.850 --> 00:20:43.730
One of my favorite things to complain about
as a reviewer is citations to econometrics books

00:20:43.730 --> 00:20:48.490
such as Greene's 2012 book, which is more than
1,000 pages.

00:20:48.490 --> 00:20:53.780
The authors make a
claim about their methods and then they cite

00:20:53.780 --> 00:20:56.010
Greene's book without the page number.

00:20:56.010 --> 00:21:00.200
When I see that, in my response letter I
tell the authors that they need to add a page

00:21:00.200 --> 00:21:05.970
number for the Greene book, because you can't
possibly expect me to read the full 1,200

00:21:05.970 --> 00:21:10.049
or so pages to check the claims.

00:21:10.049 --> 00:21:13.679
Typically in a revised version the citation
is removed.

00:21:13.679 --> 00:21:17.770
That's indirect evidence that the authors
never actually read the book in the first

00:21:17.770 --> 00:21:18.770
place.

00:21:18.770 --> 00:21:20.990
If they had read it - they could provide a
page number.