Linked to LO25939 - A Search for LO's and Metanoia
Please see below my third post of this series. With this post I finish the
introductory part, and in the next post I will enter the core of the subject.
I.4. "Good Enough" Research Methods
First, let me acknowledge that I got the idea of "good enough" research
methods (in management and organizational studies) from Bruno Bettelheim's
book "A Good Enough Parent" (which borrowed it from D.W. Winnicott's
concept of the "good enough mother"). Writing in 1987, after Spock's books
and the tendency of many parents of the 60's to try to be the "best
parents", Bettelheim advised that people who try to be the "best parents"
sometimes end up being the worst, and that what one must try to be is a
"good enough parent". Great advice, really...
I will try to make a point by recognizing that in management,
organizational and social studies "good enough" methods are in
contradiction with two completely different types of ideas disseminated in
books and articles. On one side, they contradict the "gurus' literature",
which presents some sort of prescription or recipe to follow in order to
be the best manager or consultant in the world, at least for one or two
years (all the books from Tom Peters and the reengineering books and
articles come to mind, but also many articles in scholarly driven
magazines). One of the problems with this kind of literature is that,
contrary to what Karl Popper proposed, it is not "falsifiable" - negative
evidence will not falsify the previous theory; it will only allow the
authors (or others) to write a new book or article on a new or updated
theory.
But a "good enough" method also contradicts the majority of "research"
papers and theses produced by academia, which are mostly derived from the
"positivist epistemology" I have criticized in previous posts. The
majority of those papers are "quantitative", but even the qualitative
ones, in many cases, continue to accept the principles of that
epistemology, as I will try to prove in a minute.
As I have frequently criticized the "positivist epistemology", I must
repeat that I am not criticizing "science" per se, but a positivist view
of science, and especially a positivist view of social sciences and
organizational studies - a view that, by the way, I consider to be
anti-scientific and based on an epistemology of physics that was current
in the XIX century, but was already superseded at the beginning of the XX
century by the theories of relativity and quantum physics, not to speak of
later developments like chaos theory.
In my initial engineering studies, I had the opportunity to study physics
(including special and general relativity and quantum theory) and I think
that I learnt the "scientific method". When, some 10 years later, I
decided to go to university again, this time to study Psychology, I was
astonished by the fact that I had a subject on the "Epistemology of Social
Sciences" where the epistemology of XIX century positivistic physics was
taught as if it were still valid and also applicable to social and
psychological matters. When, some years later, in the 90's, my eldest
daughter studied psychology, the same epistemology was still being taught
to her and the other students. Had she not had the opportunity to study
physics first, she could not have fully understood me when I explained to
her that the "scientific method" she was taught was the XIX century
scientific method and:
(1) it was not in use in physics any more, as after Einstein and Max
Planck (not to speak of Kuhn, Popper or Feyerabend) things
became more complex, and,
(2) there was no scientific evidence that the "control experiments"
of physics could be a good method to study psychology and social
subjects (or management).
As some of you may remember, I have been an Information Systems
professional (mainly a "developer") since the beginning of my professional
life, and later an Information Systems Strategic Planning consultant. Very
early in my professional life I discovered that what I was developing were
not only (nor mainly) "technical systems" but new "organizational systems"
based on new "IS/IT systems", which is quite different. As an ISSP
consultant, again, I was advising clients on the new organizational
systems they could create (based on new IS/IT systems). Anyhow, I could
not consider myself a competent professional without studying
organizations and management, which from that date on I have always
studied on my own, and not in academia. So I have been a subscriber to the
Harvard Business Review (HBR) for more than 20 years and to the Sloan
Management Review (SMR; now MIT/SMR...) for a little less, and I have read
many books on management since then.
"In Search of Excellence" and afterwards
I think that one of the first management books that I read immediately
after publication was Tom Peters and Robert Waterman's "In Search of
Excellence" (1982). I found it a very interesting book. It was based on a
description of real cases, which conveyed a spirit one could emulate, but
no prescriptions, only lessons (from "America's Best-Run Companies"...).
As you know, this was the first management book to become a bestseller.
And with that, it changed the marketplace of management books, creating
the gurus' marketplace and even, eventually, changed the management
discipline, as the world was changing and managers were in search of new
ways of managing, becoming predisposed to accept fad "solutions".
I don't know of any later books from Waterman. But it was probably quite
natural that Tom Peters was the first to profit from that new market.
"Thriving on Chaos: Handbook for a Management Revolution" (1987) is a real
"cookbook", a book of recipes. "Recipes" was the title of part one, and
all the chapter titles are in the form of recipes. In the first part the
program is defined: "there are no excellent companies. The old saying 'if
it is not broken don't touch it' must be revised. My proposal is 'if it is
not broken it is because you haven't examined it carefully enough. But you
must change it!'" (sorry, I am translating back from Portuguese to
English). And if those recipes are not enough, one can always try to
create a "crazy company" or the "pursuit of wow".
The whole history of "gurulogy", introduced by Peters, is based on
exploiting managers' fears and saying that they can be solved with
recipes. Those fears result from the complex times we are living in, and
from the fact that to manage implies taking decisions with insufficient
data, which seems to imply a kind of artistry where no one can be sure of
being successful. Hence a manager will feel more "sure" if he can use the
latest recipes to "engineer" management work, and some will feel even more
comfortable using some sort of magic and ritual.
In some cases the gurus' recipes are presented as based on previous
experiences; in others they are presented as a completely new "creation"
of the authors, based only on their own imagination. But the title or the
advertisements will say clearly "cook with these recipes and you will
become a great cook" (I mean, a great manager or consultant).
The books and articles in this strand have some common characteristics:
- after some reflection, based either on real cases or on conceptual
analysis, a solution, generally in the form of a prescription or recipe,
is presented that a company or a manager shall use to solve
some problems or, in the case of "great theories", all the
problems in the management and organizational arena;
- there is no attempt from the management community to try to falsify
the theory;
- both the authors and the managers or consultants that try to use
the theory will claim success when they feel they had a success,
but will justify all failures by "saving" the theory they have created
or espoused, and claiming that the failures were due to something
else - "someone", namely the CEO, the "system", the "change of
circumstances";
- this will allow them to promote a "revised version" of the same
theory, "more powerful" or "more adapted", or the old recipes
will disappear from the titles and discourses without ever being
falsified;
- the same or another author then chooses a new and catchy name and
writes a different book about a new theory that will solve part or
all of the new problems.
Should any scientist in physics, chemistry, biology, etc. behave in such a
way, he would be completely discredited in a few years. In the management
and organizational disciplines he or she will, on the contrary, become
more and more accepted as being always at the edge of innovation.
One can try the mechanical recipes of the "never ending pursuit of
excellence" or, if that fails, conclude that "crazy times call for crazy
organizations", or believe that "only the paranoid survive". Or, on the
contrary, choose a new guru and a new recipe from the technologically
oriented stream, from "digital something" to "reengineering" (the example
of "reengineering" is very interesting, and maybe I will analyze it more
closely in the sequel to this post).
I will not refer to all the gurus and all the management books and
articles of these two streams - mechanical thinking recipes and "crazy
solutions" with some magic rituals. In real life, the technologically
oriented or mechanical thinking recipes (including mechanical systems
thinking) and the magic thinking can oscillate and replace each other over
time. Those two complementary tendencies can, by the way, also be found in
the "personal counseling" literature - from techniques for being
"assertive" or "full-something" (say NLP, as an example) to magic
solutions (the books on astrology or about "guardian angels" are
examples).
I find it interesting that, although management is considered a science by
many professors, managers and consultants, the dominant books and articles
(even in "quality magazines") belong for the most part to one of those two
categories, and I don't find anyone criticizing this (except perhaps
seldom, as in Argyris's criticism of the concept of empowerment - see
"Empowerment: The Emperor's New Clothes", in HBR, May-June 1998).
The Literature from Scholars
Even if scholars are, in some cases, also the authors of books and
articles of both types referred to previously, there is also a different
type of discourse that can be seen in master's dissertations, Ph.D.
theses, academic research papers presented at "scientific" conferences or
published in reviews distributed mainly within the academic community, and
in summaries and digests of the above published in books or more generally
oriented magazines.
In the domain of management and organizational studies, but also in the
domain of the social sciences, the great majority of this stream is
constituted by quantitative studies.
The quantitative studies in the social domain come directly from the
positivist epistemology I have already commented on, namely from the idea
of applying to human and social affairs the kind of "rigor" and
experimentation that is used in the physical disciplines, in the
laboratory. This implies trying to control all "independent variables" to
be sure that the ones we want to study are the only ones that affect the
dependent variables.
The limits of transposing this method to a different reality can be best
understood in medical studies. In some cases the only way to control all
the variables would be to kill the patient, which is not a very good idea
- if for no other reason, then at least because we can no longer "vary"
the variables one wants to study...
I think the same applies in most social and organizational "research":
unless we kill the organization or social entity we want to study, we
cannot control all the variables. And indeed the majority of the "rigorous
research" in these fields:
- produces useless results;
- is anti-scientific by nature, because methods that are valid in
certain restricted and controlled domains are applied to
other domains where they should be considered wrong. At the very
least, the decision whether they are right or wrong is not a
scientific decision - it is an ideological decision, based on the
positivist paradigm.
Some examples come to mind. First, the placebo effect in medicine and
psychology; also the well-known effect of the "expectations of the
experimenter" on the results of experiments - from students to rats, when
the experimenter "knows" that a group will behave above (or below) the
average, the group will effectively behave above (or below) the average.
In organizational studies the best known example is the "Hawthorne
effect". Everyone knows the story, but only a few take it to its ultimate
consequences... Indeed, the initial "experiment" conducted by the
Hawthorne engineers was a real positivistic experiment. The results: the
productivity of the test group increased when the light was increased but
also when it was reduced; and the productivity of the control group (not
subject to any changes) also increased.
Were the engineers conducting an experiment for a master's dissertation,
the hypothesis being "productivity increases when light conditions
improve", they would have failed their examination after one year of
experiments. Alternatively, they could have "changed the numbers"
(probably no one would have noticed...) or tried a mix of courage and
luck: courage to state as their conclusion that "the method we have used
is not adequate", and luck to have a jury that would accept this - which
is very unlikely.
What is known as the second part of the experiment (after Elton Mayo and
his colleagues from Harvard entered the story) was indeed, in my modest
opinion, Action Research.
It is true that some experiments were tried (some changes were made), but
there was no previous hypothesis and the main conclusions came from open
interviews with the workers. What is known as the Hawthorne effect
("productivity will increase whenever you show interest in the workers")
is not even proved, and the finding that the productivity increase was
caused by a "complex emotional chain reaction" states what we all always
knew about humans, organizations and society.
If the Hawthorne experiment proves anything, it is that the positivistic
approach and quantitative studies are not very useful for studying
organizations.
In recent years, following the early studies of Lewin and others (and
after an interruption of some decades), we have also seen an increase in
qualitative studies, namely Action Research (AR).
In my opinion, AR is a much more effective way to combine intervention in
organizations with research, to produce knowledge about reality, and
namely about the quality of the methods and concepts used during the
intervention. I will not try to define AR more precisely at this point,
nor will I compare the multiple variations in AR methods.
After seeing some studies using Action Research and some books and
articles promoting it, I have nevertheless come to think that there are
many limits to the Action Research practiced in universities.
First, in all the studies I have read, a great part of the "thesis" is
dedicated to "proving that AR is a valid research method" - I would say
10-30% of the pages, and probably much more than that in research time and
reading, hence diverting the effort from the real subject at hand.
Second, this effort to prove that AR is also a good research method is
frequently conducted from the point of view of the positivist
epistemology. Indeed, it is as if the candidate were saying "yes, I accept
the positivist epistemology, but please note that maybe AR is also an
acceptable method". This is due to the fact that the candidate and his
Professor will have to face a jury of people who explicitly or implicitly
accept the dominant positivist methodology; or the author will have to
submit the paper to a board of reviewers of the same mind.
In some AR based methods (for instance, in Checkland's SSM) it is even
suggested to researchers that they must first define clearly the research
hypothesis that they will later try to prove. In my opinion, in Action
Research one must reflect before, during and after the action (or
practice), and frequently the conclusions are different from the
hypothesis one could state before the action - as happened, by the way, in
many scientific discoveries in physics. The researcher must be open to the
"emerging" trends of the situation (and to the "situation back talk", to
use Schon's words) more than trying to prove or test an "a priori"
hypothesis.
I suppose that the need to define the hypothesis in advance is suggested
by professors to their graduate students because they believe that the
natural sciences work this way - which is not always true, and even if it
were, it wouldn't prove anything about the correct methods for a different
subject.
Indeed, the problem is that even in post-graduate studies the evaluation
(or certification) process drives (and limits) the research and learning
processes. Not to speak about all the undergraduate education where
evaluation and certification of the "knowledge transmitted" is almost
always the major concern of most professors.
I would suggest that it could be useful:
- to write a strong argument in favor of AR and have it clearly
accepted, so that future researchers can spend only 10
minutes of their time saying "as XYZ has proved, AR is the
adequate method to study ABC" (XYZ, 200x), and can then
dedicate 100% of their time to the subject at hand;
- to try to prove that the positivist approach and
quantitative studies are generally inadequate where
organizational and social subjects are concerned, so that the
onus of proving the correctness of the research methodology used
should fall on the researchers using a positivist approach.
In Search of "Good Enough Methods"
I am not claiming that I have some "good enough method" ready-made to
present to you as a prescription. And I don't even believe in
prescriptions. Nor am I saying that we shall define those methods in this
list. But I am claiming that, as practitioners, we can have a say in this
matter. And that we are indeed part of it, as we all buy the books and
articles, even when they are only the fad of the month or a cookbook of
recipes. I think that the community interested in management and
organizational problems could do something to modify the described
situation.
The following are some points I would suggest in relation with "good
enough methods" in management, organizational and social studies.
- In academia, or when one has the opportunity to write a review
or to share opinions with others, I would suggest that
we get into the habit of criticizing the epistemological principles
of all studies and criticizing their conclusions if they are trivial,
useless or simply wrong.
- In what concerns books or articles produced outside academia,
we should expect that the authors:
- clarify and make explicit what one can expect from using the methods
proposed;
- state which results would falsify the methods proposed (prove that
they do not work);
- if an author has published a book on a certain subject, in a
second book on the same subject one should expect him
to clearly state what results he or others have obtained that confirm
or disconfirm the claims of the previous book.
- Consultants trying to apply some "method" in organizational
interventions should be expected, at least in significant
projects, to complement their action with some kind of reflective
practice or action research, in such a way that, at the end,
they can produce some general knowledge, namely in relation to
the usefulness and quality of the intervention methods used.
Until managers and consultants collectively try to behave in a way that is
compatible with "reflective practice" and "good enough research", we will
be condemned to management fads and to the dominance of books and articles
that are either "mechanical recipes" or "magic recipes".
I think that as managers, scholars or consultants we are all responsible
for the credibility of the "organizational disciplines". They will be as
good or as bad as we, as practitioners, allow them to be.
"Artur F. Silva" <email@example.com>
Learning-org -- Hosted by Rick Karash <Richard@Karash.com> Public Dialog on Learning Organizations -- <http://www.learning-org.com>
"Learning-org" and the format of our message identifiers (LO1234, etc.) are trademarks of Richard Karash.