Generation of Theory from Qualitative Data: Summary of Chapter VIII (Part II, The Flexible Use of Data)
The second part of the book, named The Flexible Use of Data, consists of Chapters VII and VIII.
Chapter VIII: Theoretical Elaboration of Quantitative Data
Main theme: the generation of theory from quantitative data is discussed in detail.
· So much emphasis is placed on the verification of quantitative data that it becomes difficult to generate new theories from it. Yet some quantitative data has great potential for discovering theory: it can break through the limits of ‘verification’ and ‘preconceived conceptual schemes’ to yield a theory. Researchers, however, are so focused on the ‘rhetoric of verification’ that they present their results merely as ‘plausible suggestions’.
‘The generating capacities of these sociologists and the richness of their research are not given the fullest impetus’ (p. 185).
· Discovery made through quantitative data is treated as a ‘byproduct of the main work’. When a discovery forces itself on the researcher, he writes up his ‘induced hypothesis’ as if it had been thought up in advance, so that it seems to fulfil the requirements of verification. Only in rare cases do we come across ‘purposeful generation of grounded theory… in short papers where a single carefully worked out explanation of a hypothesis is offered’.
· Sociologists need to allow themselves the freedom to use quantitative data flexibly, or they will not be able to generate theory from it. The rules (of sampling, saturation, integration, density of property development, etc.) need to be relaxed, not for accuracy and verification but for generating theory from quantitative data, which will open new avenues of quantitative analysis. In short, rigorous rules for accuracy and verification can be relaxed at strategic points to facilitate the generation of theory.
The authors say that new rules for generating theory will be developed, some from existing ways of collecting data and some from new ones. They believe that there are many styles of quantitative analysis, each with its own set of rules, and their aim is to describe how these styles can be flexibly adapted to generating theory.
Secondary Analysis of Quantitative Data
If the sociologist's purpose is to generate theory, then he/she is more likely to analyze previously collected data (secondary analysis), mainly for two reasons. First, it is easier: the researcher's only responsibility is to generate theory. Second, the tasks of describing the total survey and of theoretical analysis can conflict. Therefore most generation of theory will be based on secondary analysis, and sociologists with a ‘theoretical bent’ prefer it. Comparative analysis also requires secondary analysis, for example when comparing populations from different studies. Even trivial data, such as a market survey on the consumption of products, can have important theoretical relevance.
Limitations of Secondary Analysis
1- It is difficult to pin down the accuracy of findings, because the view is necessarily secondhand, the population is constantly changing, and the data collection procedures may not be known. We need to keep in mind, however, that the problem of accuracy is not as important for generating theory as it is for describing a particular social unit or verifying hypotheses. What matter are the general categories and properties, and the general relations between them, that emerge from the data. These can be applied to many current situations as relevant concepts and hypotheses, regardless of whether the specific descriptions are currently accurate for the population. Secondary analysis is therefore uniquely well suited to the generation of theory but limited for description and verification, for which it is mostly used.
2- Another limitation is the representativeness of the population studied. Accuracy is crucial in description and verification, so the sample should be carefully chosen by random sampling. Secondary analysis of a random sample may introduce systematic and random biases into the secondary study, making its accuracy questionable. It is indeed difficult to answer every who, what, when, where and how question about data collected previously. However, if theory generation is the purpose, then representativeness is not an issue, for two reasons: the direction of a relationship (used to suggest a hypothesis) is assumed until disproved, and theoretical (not statistical) sampling guides the choosing and handling of the data.
3- More important, the scope of the population can be increased by being less concerned about representativeness. The sociologist takes carefully stratified samples from a larger survey sample and cuts down on scope by omitting contaminating influences (a rough sketch of this follows below).
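As a purely illustrative sketch (the file name, column names and filter below are hypothetical assumptions, not taken from the book), stratified subsampling of an existing survey with pandas might look like this:

```python
import pandas as pd

# Previously collected survey data (hypothetical file).
survey = pd.read_csv("large_survey.csv")

# Omit a contaminating influence, e.g. drop a respondent group assumed to
# distort the relationship under study (purely illustrative filter).
survey = survey[survey["respondent_type"] != "pilot"]

# Draw an equal-sized subsample from each stratum of interest.
subsample = (
    survey.groupby("occupation_group", group_keys=False)
          .sample(n=50, random_state=0)
)
```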
Concepts and Indices
A great deal of work has been done in the last decade on the flexible use of concepts and their empirical indices in quantitative analysis. Lazarsfeld has described the process by which concepts are translated into empirical indices. The authors discuss a few points under this heading. When the main objective of survey analysis is theory generation, crude indices are sufficient to indicate the concepts of the theory and to establish relationships between them, which become the basis for suggesting hypotheses.
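As a minimal sketch of what a ‘crude index’ might look like in practice (the survey items and coding below are illustrative assumptions, not the authors' example), a few related items can simply be summed and dichotomised, and the relationship between two such indices used to suggest, not verify, a hypothesis:

```python
import pandas as pd

# Hypothetical 0/1-coded survey items: three assumed to indicate "autonomy",
# two assumed to indicate "satisfaction".
survey = pd.DataFrame({
    "sets_own_hours":     [1, 0, 1, 1, 0, 1, 0, 0],
    "chooses_tasks":      [1, 0, 1, 0, 0, 1, 0, 1],
    "no_close_oversight": [1, 0, 0, 1, 0, 1, 0, 0],
    "satisfied_overall":  [1, 0, 1, 1, 0, 1, 0, 0],
    "would_recommend":    [1, 0, 1, 0, 1, 1, 0, 0],
})

# Crude indices: unweighted sums, dichotomised at the median.
autonomy = survey[["sets_own_hours", "chooses_tasks", "no_close_oversight"]].sum(axis=1)
satisfaction = survey[["satisfied_overall", "would_recommend"]].sum(axis=1)
survey["autonomy_high"] = (autonomy >= autonomy.median()).astype(int)
survey["satisfaction_high"] = (satisfaction >= satisfaction.median()).astype(int)

# The cross-tabulation of the two crude indices suggests a hypothesis.
print(pd.crosstab(survey["autonomy_high"], survey["satisfaction_high"], normalize="index"))
```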
Discovering Hypotheses
In generating theory, preconceived hypotheses are not necessary for correlating or cross-tabulating two variables (runs) with indices of core categories and properties. The rule for generating theory is to be sensitive to all possible theoretical relevance. The reverse is true of preconceived hypotheses in verificational studies: according to verificational rules, data should be collected after the hypothesis has been formulated. For generating theory, data can be collected at any time; mostly it is collected beforehand.
To explore all possible findings for suggesting hypotheses, the analyst may run the core concepts against all remotely relevant items of the questionnaire. At this point a theory of the core indices starts emerging: clusters of items are discovered to be associated with the index. This strategy discovers theory by providing links to be conceptualized and analyzed, and a theory is induced from the general relationships the analyst has found.
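A rough sketch of such a ‘sweep’, assuming the questionnaire items are held as numerically coded columns of a pandas DataFrame (the function name and threshold are illustrative assumptions, not part of the method described in the book):

```python
import pandas as pd

def associated_items(df: pd.DataFrame, core_index: str, threshold: float = 0.3) -> pd.Series:
    """Correlate every other item with the core index and keep the strongest ones."""
    corrs = df.drop(columns=[core_index]).corrwith(df[core_index])
    return corrs[corrs.abs() >= threshold].sort_values(ascending=False)

# e.g. cluster = associated_items(survey, core_index="autonomy_high")
# Each item in the resulting cluster is a candidate link to be conceptualised
# and written up as a suggested (not verified) relationship.
```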
Liberties in Presentation of Data
Quantitative data is usually presented in tabulated form, but this need not be so when generating theory. The relationships on which a grounded theory is based are so numerous that presenting them all would make the report of the theory too bulky. It is also important that peers and laymen are able to understand the theory, so the analyst can take some liberties in presenting and describing the data. The data should not be changed, of course, but not all of it has to be presented and described in detail. No information is distorted; only enough information is presented to show, in the simplest way, the grounded basis of the theory. Data that is not important for the theoretical analysis can be left out.
Theoretical Elaboration
The next step is elaboration analysis: ‘to make three or more variable analyses to saturate categories further by developing their properties and thereby achieving a denser theory’. The discovery of relationships among the indices thus gives the beginning suggestions for a theory, and a theoretical direction and focus for its elaboration. In ‘elaboration’, the structural conditions of two-variable associations are specified, their causes and consequences are sought, possible spurious (false) factors are checked for, and intervening variables are discovered.
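As a minimal sketch of this general logic (the variable names are hypothetical and the code is only an illustration, not the authors' procedure), a two-variable cross-tabulation can be re-run within each level of a third ‘test’ variable:

```python
import pandas as pd

def elaborate(df: pd.DataFrame, x: str, y: str, test: str) -> dict:
    """Cross-tabulate x against y separately within each level of the test variable."""
    return {
        level: pd.crosstab(group[x], group[y], normalize="index")
        for level, group in df.groupby(test)
    }

# e.g. partials = elaborate(survey, x="autonomy_high", y="satisfaction_high", test="seniority")
```

If the original association disappears within every level of the test variable, the relationship may be spurious or mediated; if it persists, the elaboration adds a structural condition to the emerging theory.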
Lazarsfeld has given three ways of ordering the variables in an elaboration analysis: 1- temporal, 2- structural level of complexity, and 3- conceptual generality. However, Lazarsfeld used only the first type of ordering; the other two were merely suggested.
Elaboration analysis is often stopped right at the beginning because survey data is cross-sectional in time: it is difficult for the analyst to establish a clear-cut, factual time order, and there are too many possible temporal relationships among the survey variables.
Elaboration analysis is
stimulating because its findings ‘fit’ the thought patterns of sociological
theory. The analyst can literally speak through tables and infer from his indices
the conceptual level of his talk.
Theoretical Ordering
The theory emerges from the data when the analyst's purpose is to generate theory, because he is then no longer concerned with the temporal ordering required for verification and description. The analyst moves ahead to order his variables theoretically (a new principle of ordering). Lazarsfeld was not able to develop a general theoretical ordering principle because he did not realize their similarity; he failed to understand that temporal sequence can be handled both theoretically and factually.
In generating theory as it emerges, the analyst:
1- discovers two-variable relationships;
2- discovers their elaborations;
3- generates possible further elaborations of a two-variable relationship within the previous relationships;
4- goes through the data to look for indicators of concepts he thinks are theoretically related to his emerging theory;
5- arranges elaboration tables to test his hypotheses (for suggestion or discovery, not verification). Here he is theoretically sampling his data, as directed by his emerging theory.
Consistency and elaboration analyses result in a grounded basis for the theory.
Conclusion: Careful relaxation of rules of quantitative analysis
can generate a theory.
Successive Stages of Building Up to Theory from Quantitative Data
1- Most frequent source of data used for generating theory
2- Indicating categories and properties with the data
3- Discovering hypotheses with conceptual indices
4- Theoretical elaboration of hypotheses
[Figure: The Interrelated Processes of Data Collection, Data Ordering, and Data Analysis to Build Grounded Theory. The cycle runs from (1) Theoretical Sampling through (2) Data Collection, (3) Data Ordering and (4) Data Analysis to (5) Theory Development; if theory saturation has not yet been reached, the analyst returns to theoretical sampling, and once it has, closure is reached.]
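As a schematic sketch of the loop in the figure (all function names below are placeholders standing in for research activities, not part of the authors' method or of any library):

```python
def build_grounded_theory(theory, saturated, sample, collect, order, analyse, integrate):
    """Cycle through sampling, collection, ordering and analysis until the theory is saturated."""
    while not saturated(theory):                  # theory saturation? (decision point in the figure)
        cases = sample(theory)                    # (1) theoretical sampling, directed by the emerging theory
        data = collect(cases)                     # (2) data collection
        ordered = order(data)                     # (3) data ordering
        concepts = analyse(ordered)               # (4) data analysis / coding
        theory = integrate(theory, concepts)      # (5) theory development
    return theory                                 # saturation reached: closure
```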
Within this general framework, data analysis for each case involved generating concepts through the process of coding, which ‘... represents the operations by which data are broken down, conceptualised, and put back together in new ways. It is the central process by which theories are built from data’ (Strauss and Corbin, 1990, p. 57).
Five analytic (and not strictly sequential) phases of grounded theory building were identified: research design, data collection, data ordering, data analysis and literature comparison. Within these phases, nine procedures or steps were followed.
Table 1: The Process of Building Grounded Theory

RESEARCH DESIGN PHASE
Step 1: Review of technical literature
Activity: definition of the research question; definition of a priori constructs.
Rationale: focuses efforts; constrains irrelevant variation and sharpens external validity.
Step 2: Selecting cases
Activity: theoretical, not random, sampling.
Rationale: focuses efforts on theoretically useful cases (e.g., those that test and/or extend theory).

DATA COLLECTION PHASE
Step 3: Develop rigorous data collection protocol
Activity: create a case study database; employ multiple data collection methods.
Rationale: increases reliability; increases construct validity; strengthens grounding of theory by triangulation of evidence; enhances internal validity; gives a synergistic view of evidence.
Step 4: Entering the field
Activity: overlap data collection and analysis; flexible and opportunistic data collection methods.
Rationale: speeds analysis and reveals helpful adjustments to data collection; allows investigators to take advantage of emergent themes and unique case features.

DATA ORDERING PHASE
Step 5: Data ordering
Activity: arraying events chronologically.
Rationale: facilitates easier data analysis; allows examination of processes.

DATA ANALYSIS PHASE
Step 6: Analysing data relating to the first case
Activity: use open coding; use axial coding; use selective coding.
Rationale: develops concepts, categories and properties; develops connections between a category and its sub-categories; integrates categories to build the theoretical framework; all forms of coding enhance internal validity.
Step 7: Theoretical sampling
Activity: literal and theoretical replication across cases.
Rationale: confirms, extends, and sharpens the theoretical framework.
Step 8: Reaching closure
Activity: theoretical saturation when possible.
Rationale: ends the process when marginal improvement becomes small.

LITERATURE COMPARISON PHASE
Step 9: Compare emergent theory with extant literature
Activity: comparisons with conflicting frameworks; comparisons with similar frameworks.
Rationale: improves construct definitions, and therefore internal validity; also improves external validity by establishing the domain to which the study's findings can be generalised.
Source of the figure, quotations and Table 1: Naresh R. Pandit.