
An Exposition of Research Methodology in Management and Social Sciences

Dr Sorab Sadri

Professor of Political Economy and Management Sciences

Director, School of Humanities

JECRC University, Ramchandrapura, Jaipur 303905, Rajasthan

sorab.sadri2010@gmail.com


Abstract. In writing this paper the author has consciously stood apart from his earlier works and attempted to review his own position dispassionately, so that some degree of clarity of thought might emerge in the process. The paper is based on the author's contributions to this subject between 1992 and 2012, which have been used as the basis for several doctoral-level investigations under the author's guidance. Those investigations played a major role in helping the author to crystallize his views, and to these scholars the author's gratitude is unflinchingly extended. Management has been described as being concerned with and based on the science of decision making, while operating from the foundations of the art of decision executing. Research in modern Human Resources Management, especially, is therefore both interesting and challenging, having one foot planted in industrial sociology and industrial psychology and the other in supply chain management and organisational restructuring. Hence the argument of this paper is most relevant to serious research scholars and to those management teachers who wish to pursue rigorous academic research. It is not meant for those in the cut-copy-paste league which, unfortunately, is of late becoming quite prevalent within Indian academia.

 

Keywords: management, human resources, human resources management, research

 

JEL Codes: J24, J53, O15


1. The Philosophical Underpinning

Whereas polemical and descriptive papers have their own domain of importance, this author opines that there is a crying need for papers based on strong empirical research to come out of B-Schools, especially in the area of Human Resources Management. With this need in view the author has, metaphorically speaking, put finger to keyboard to produce this paper, addressed to research scholars in B-Schools in general. Essentially a B-School is not an academic institution per se but one where an environment is created to enable learning in a professional specialisation to take place. Invariably research methodology is one of the subjects the student has to clear, and there are a number of fundamental issues that need to be clarified before embarking on such a course. Let us at the outset clarify that research is not everyone's cup of tea, and that the proportion of B-Schools carrying the words "of management and research" in which actual research takes place is minuscule. Many scholars, and especially corporate managers who try to enter the world of academia, believe that imparting practical information about business and industry through anecdotes, case studies or mere references is enough to educate the student.

This is unfortunately not the case. It is very much like the soft skills, such as oral communication, group discussion, presentation and interview techniques, that can take a B-School student as far as the final interview. That is all. After that it is domain/subject knowledge that will actually land him/her the job. So too with knowledge of the managerial sciences: one simply cannot dispense with theory and yet claim to have knowledge of the subject.

So, beginning with basics, let us pose the question: what is theory? (Cairns) Methodologists in social science, such as this author, maintain that theory is an abstraction of reality that seeks to explain reality. If a theory does not explain reality it is a quasi-theory, a meta-theory, or not a theory at all. What does this mean? It means that the distinction between theory and practice disappears: one cannot say that "such and such a thing will work in theory but not in practice". What is true in practice must be true in theory too; if not, it is not a theory, and that is why the terms tendency and hypothesis are increasingly used to replace theory in advanced economics. (Althusser)

That brings the discussion to paradigm, a term increasingly used in management and social science literature. The Oxford English Dictionary defines the basic meaning of the term paradigm as "a pattern or model, an exemplar". Thomas Kuhn, the great historian of science, gave it its contemporary meaning when he adopted the word to refer to the set of practices that define a scientific discipline at any particular period of time. In his book The Structure of Scientific Revolutions, Kuhn defines a scientific paradigm as "universally recognized scientific achievements that, for a time, provide model problems and solutions for a community of researchers", i.e. (i) what is to be observed and scrutinized; (ii) the kind of questions that are supposed to be asked and probed for answers in relation to this subject; (iii) how these questions are to be structured; (iv) how the results of scientific investigations should be interpreted; and (v) how an experiment is to be conducted, and what equipment is available to conduct it.

How does theory in the natural sciences (physics, chemistry, medicine and engineering) differ from theory in the social sciences (economics, politics, sociology and psychology)? For Karl Raimund Popper the distinction lies in the fact that in the physical sciences a theory can be falsified: one can maintain that it is wrong based on the evidence. In social science one can at best say that a theory is refuted, in that one can disagree based on an interpretation of the evidence. This is essentially because of the mutable (ever-changing) nature of social reality, which allows the same facts to be interpreted differently. Hence interpretation is often person-specific, context-specific and environment-specific, as in the evaluation of an ethical dilemma in the 1999 and 2007 works of Sadri and Jayashree. Such a dilemma arises out of the interpretation of facts and the assumption of truths. Greek scholars like Plato, Aristotle and Xenophon did not differentiate between fact and truth (Copleston); the distinction comes to the fore only later, when Marx responds to the position taken by Comte (Cooper and Schindler, Hubbard, Chisholm).

According to post-19th century scholarship, facts are not necessarily truths. A fact is a thing known to have occurred or to exist, a datum of experience often followed by an explanation; in a way it is a piece of evidence. A truth is more fundamental: it is the quality or state of being. Hence, while facts can be verified through methodology (as Ludwig Wittgenstein and Bertrand Russell held), all truth is relative (as Georg Hegel advocated) except the Almighty, which is the only absolute truth. And, as Emile Durkheim added, nothing in this world is definite except death. This author strongly opines that the researcher must always remember this, and to assist the research scholar a brief paragraph on each of these four great thinkers (Wittgenstein, Russell, Descartes and Hegel) is given in the notes.

This brings the discussion to the hypothesis, which is a proposition made as a basis for reasoning without the assumption of its truth. A hypothesis is a supposition that can only be the starting point of an investigation based on known facts; it has to be validated empirically. Every hypothesis can thus be proved or disproved. Hence when a hypothesis is stated, the null hypothesis must be stated alongside it, their notations conventionally being Ha and Ho. Once a hypothesis has been tested and proved it becomes a theory. (Lehmann, Gupta and Steckel)

The process of converting a hypothesis into theory is fundamental to political economy, is known as praxis, and forms the backbone of research methodology. This process itself has a sequence known as the core methodology or method. It is called the 5D Method, is used in Human Resources Management and Organisational Development, and is named after Sadri and Jayashree, who first used it in 2002. A method is a sequential process, whereas methodology is the science of method (Sadri and Makkar). [That is why the reader will not find a treatment of issues like sampling, validation and testing in this paper: they form part of method, and this discourse is on methodology.]

The first D stands for definition. Here the position of the investigator, the subject to be studied, the ambit of inquiry, the purpose of the study and the limitations of the investigator are stated. This is needed in the interest of clarity and cogency. Definition, moreover, helps one home in on a direction of inquiry and avoids fuzziness. (Russell)

The second D stands for diagnosis. The investigator must diagnose the internal and external environments of business using qualitative and quantitative techniques. Just as a doctor uses the thermometer, the stethoscope and the blood sample test to determine what kind of fever one is afflicted with, so too the investigator uses a set of methods to arrive at a finding. Cross-referencing of data is very important, and so is the correlation of identified and defined variables. (Popper)

The third D stands for design: here the investigator makes the research design, i.e. how the study will proceed. This is the stage from which the pilot study is launched, within a restricted physical domain and with the environment controlled to the best possible extent. (Bottomore)

The fourth stage is called development. Based on the results of the pilot study, the hypothesis is reformulated and the direction of the inquiry is finalized. Here the investigator develops the study itself, and the collection of facts begins, followed by their systematic documentation. (Hubbard)

The fifth stage is called delivery, when the study analyses the facts and arrives at a conclusion which has some social, political, economic or technological significance. The delivery stage is also called actualizing the findings or implementing the intervention. (Berger and Luckmann)

However, Jayashree in 2005 had first spoken of a sixth or hidden D, which stands for data and documentation. So important are data and documentation that they need to be singled out. Many Indian scholars traditionally do not maintain data and often massage existing data; both practices are scientifically untenable. We need to realize that data does not mean numbers alone. It can just as easily be qualitative, as gleaned through participant observation, or be the product of scientifically conducted focused interviews of respondents during an inquiry.

This leads to an examination of the axiom: a widely accepted principle usually used in investigations. Every investigator, consciously or otherwise, must take care that these axioms are followed scrupulously. Not following an axiom creates a fallacy, which quite simply is a mistaken belief based on unsound assumptions, e.g. that the earth is the centre of the universe. (Keynes)

There are, moreover, three fallacies borrowed from economic science and increasingly cited to caution investigators in management as well as in social science research (Weber, Durkheim, Hobbes). These are:

i. The fallacy of composition: what is true of a part is not necessarily true of the whole.

ii. The fallacy of accident: what is true of the whole is not necessarily true of the part.

iii. Post hoc sed non propter hoc: any occurrence after an event is not necessarily because of the event.

In addition, scholars need to take account of two homilies. (a) Correlation is not causation: because two variables are statistically correlated, it does not follow that one causes the other or is caused by it. (b) One must never forget that mathematics is a language, albeit a scientific language, just as music is an artistic language. These days no subject can be excelled in without the use of mathematics, and hence it cannot be wished away as some so-called practical thinkers seem to do. This is particularly true of managers who come to teach HR in B-Schools mistaking it for a soft option. The author opines that mathematics is important and must be both taken and used seriously. Research scholars usually use statistics heavily in their work, and many user-friendly statistical software packages (SPSS and RATS, to name a few) are available, making it easy for a researcher to use statistics. Unfortunately, once again, scholars in HR seem to be falling behind researchers in other management specializations in this regard. Qualitative research does have its merits; it should be used not descriptively but through sustained inquiry, participative research and focused interviews (as in sociology and anthropology).

Statistics must always be used to support an argument or buttress an opinion propounded. It should not be used for its own sake or as a superficial embellishment to give a paper an academic look. Quoting statistical data, as some HR specialists do, does not amount to using statistical tools, and the moot difference must be noted (Sadri and Jayashree). It is here that the words of caution of the great playwright George Bernard Shaw must be remembered when using statistical analysis of data: he said that statistics is akin to a blind man looking in a dark room for a black cat that does not exist.

After all, common sense is neither common nor really sense! Unfortunately, many people who claim to be doing serious research tend either to overuse statistics to the point of stupidity or not to use statistics at all, relying on intuition and thereby missing the wood for the trees. The first characteristic can be found among people who specialise in subjects like econometrics and psychometrics and who confuse statistics as a means with statistics as an end of any investigation. The second tendency can be found among people who claim to know something about media and communications research when all they have done is look at data somewhat logically but often cursorily, without using any statistical instruments. Their innate confusion and inaccuracy spring from the fact that they can never have all the facts at their disposal and yet think they are being objective. Objectivity is like a rainbow: it can only be approximated, and only with the use of scientific methodology. That is all. To take the argument further, it is posited that the very claim to being objective is itself a subjective one!

Management is concerned with people and the production of value. In that respect it is only a factor of production. Management, as was shown above, is also concerned with the science of decision making and the art of decision executing. Hence we use linear logic (a la Descartes) in decision making but dialectical or circular logic (a la Hegel) in decision executing. This understanding is important, especially in dealing with case studies. A case study can be any of these:

i. A case history: a chronological statement of events.

ii. A case analysis: data are given to enable one to draw conclusions therefrom and arrive at a theoretical construct.

iii. A case example: events and data are given to enable one to convert a theoretical construct into practical situations.

Whatever it may be, facilitators and students of management who swear that what is needed is practical knowledge and not theory must never forget the advice of Aldous Huxley: without theory, facts shall continue to fall on the plains of human ignorance. Finally, let the facilitator first and the student later be introduced to praxis (mentioned earlier), which is nothing but the process of converting theory into practice. To press the argument further, reality cannot be understood without praxis, and a scientifically sound research methodology is the basis of arriving at praxis. Without knowledge of research methodology the executive will be taking decisions based purely on enlightened guesswork. He would be no better than a passerby who throws a stone at a tree and hopes that a fruit will fall. His aim must be accurate, and for this to happen the mass (data), magnitude (scope) and direction (goals) must be absolutely clear. The three Ms of research, meaning, method and measurement, then become indispensable instruments for the scholar. Rigour is an indubitable factor in all serious research, and it is often given the slip even by those who claim to hold a doctoral degree in management. That is why the inevitable question is posed during the oral defence of theses: what is your contribution to the corpus of thought? Unless the investigator has been both fair and rigorous, the answer to this question remains vague and often even incomprehensible. In a forthcoming book (2014) this point is elaborated upon.

2. The Quest for Accuracy

Rigour leads to accuracy. Research, especially in the management sciences, is not abstract but empirical, and hence depends heavily on data and documentation (Jayashree's sixth D). Data is first collected, then analysed, and finally a conclusion emanates therefrom. Data comes in many forms and is collected in several ways; in this section we shall take up the major ones that concern the B-School academic and the doctoral scholar in management. It would be proper to sequence the steps involved in research methodology and to highlight the possible pitfalls hampering the quest for accuracy, so that the ensuing discussion becomes meaningful. In the subsequent section this author will outline ways and means to overcome these pitfalls, helping the scholar ensure the veracity of the research process and data. The research process is normally seen as a seven-stage process, outlined hereunder, with the possible pitfalls encountered by the researcher at each stage duly highlighted.

1) Problem formulation: The author opines that stating the research question is too crucial to be taken lightly. During this phase the researcher is expected to formulate the problem statement. This formulation should be preceded by exploratory research and some review of literature to find the gaps in the literature on which the researcher aims to work and which the research is to bridge. The gap in the literature gives birth to the research question and signifies the contribution of the researcher to the existing body of knowledge. If the researcher ignores this exercise, there is a possibility that the work will end up a mere repetition of the same or similar work already done, thus defeating the very purpose of research. Exploratory research that is not done properly can lead to an erroneous statement of the problem.

2) Extensive review of literature: This step involves going extensively through the literature available on the topic. It helps one consolidate the topic and lets the researcher know what body of knowledge exists and what research techniques have previously been used. A critical analysis of the methodologies used also helps him choose appropriate methods for his own research work. An inadequate review of literature may lead to an erroneous choice of topic and methodology; without a comprehensive literature review the scholar will be unable to identify the research gap, and this lacuna will water down the work considerably.

3) Research design: This step involves the preparation of a plan (even a Gantt chart) of how and at what pace the research work will be conducted, and outlines the following: (a) participant involvement; (b) survey design; (c) sampling plan; (d) data sources; (e) statistical analysis tools; and (f) questionnaire design. In short, it is a blueprint of how the research work will proceed. The commonly observed pitfalls at this step are an improper choice of data collection instruments, an erroneous sample choice, an inadequate sample size, improper questionnaire design and a wrong choice of statistical tools. Some researchers tend to use conceptual relationship diagrams, but these diagrams often have in-built identification problems. A lack of validity and reliability of the research instruments in the given context often leads the research to deviate from its proclaimed objectives. This tends to hamper the researcher's quest for accurate data and for correct information and knowledge based on the data analysis.

4) Field work: Here the researcher collects data from the predefined sources using the specified tools. During this phase (1) the medium used for data collection, (2) the sincerity of the respondent and (3) the respondent's ability and willingness to share data tend to affect the accuracy of the data collected. Meticulous notes and catalogued observations are the basic ingredients of proper field work. To avoid untoward bias creeping into the study it is advisable to cross-reference responses and to keep the research question in mind at all times.

5) Data analysis and findings: Unclear knowledge of the analysis tools and their limitations can lead to an inappropriate choice and use of tools, which can adversely affect the accuracy and the predictive or descriptive value of the research project, and may lead to any or all of the fallacies mentioned above. Testing for statistical significance, for instance, is an important component of analysis but is unfortunately also one of the most misunderstood procedures, and this has implications for the interpretation of the analysis. (Hubbard)

6) Interpretation of data and drawing conclusions: It is absolutely crucial that the assumptions made and the constraints (limitations) outlined are adhered to when data is interpreted. If not, the researcher will easily commit either the fallacy of composition or the fallacy of accident. Generally speaking, the fewer the assumptions, the greater the chance of reaching a realistic conclusion.

7) Writing the report: Poor language has more often than not spoilt the quality of otherwise good work. More often than not, the researcher's biases and prejudices also come into play; this mars the quality and authenticity of the report, making it purely ideological and/or journalistic. Presenting the findings in a comprehensible manner is an art and often sways judgement in favour of the researcher, especially when future funding is sought. However, it would be a heinous offence if the researcher were to doctor his/her findings to gain funding and/or peer acceptance. A balanced view based on data and their logical interpretation is the safest option available to the researcher, especially in issues involving people, such as performance management systems and industrial relations in management science, and class and property relations in social science.

The author, in the tradition of Peter Berger, has sketched seven stages of research and hinted at the pitfalls in each case. Needless to say, the researcher needs to take due care to avoid these pitfalls. Some important yet rarely used tools and methodologies that, it is believed, will go a long way towards avoiding them are outlined below. Every researcher must ideally acquaint himself/herself with these tools if he/she is to make a mark and not just be someone 'who also ran'. In the rest of this paper this author shall briefly introduce the techniques a researcher can use to overcome these pitfalls.

3. Soft Systems Methodology

Research into HR issues invariably involves social, cultural, ideological and political angles. Soft Systems Methodology (SSM), as developed by Peter Checkland, is a way of dealing with problem situations in which there is a high social, political and human activity component. This distinguishes SSM from methodologies that deal with hard, more technologically oriented problems. Soft problems, exemplified by most management research problems, are difficult to define and have a large social and political component. Following SSM, when a researcher thinks of soft problems he does not think of problems but of problem situations: he knows that things are not working the way they are expected to, and he wants to find out why and see whether there is anything he can do about it. It is the classic situation of it being not a "problem" but an "opportunity". SSM is divided into seven distinct stages. Parsons and Smelser, in a manner of speaking, used soft systems methodology to posit their systems view of organisations, and many structural functionalists are known to toe this line. The use of soft systems methodology unfortunately remains limited to a few scholars, especially those who have attained an academic pinnacle. The reason is simple: it requires a high level of understanding of the subject being written about. Many writers unfortunately feel that using abstract formulations and abstruse logic is a sign of profundity; nothing could be farther from the truth. Using soft systems methodology helps the writer remain simple (so that his/her thought passes from the writer's mind into the reader's mind) without being simplistic (so that a fundamental concept is not trivialised).

Stage 1: Unstructured problem situation: This stage involves fundamental research into the problem area. It addresses questions such as: Who are the key players? What are their roles? How does the process work now? It takes a review of the problem situation. Essentially, soft systems methodology treats the term 'problem' as inappropriate because it might narrow the view of the situation; the phrase 'the problem situation' is considered more appropriate, since there might be many problems which are not perceived and yet need to be solved.

Stage 2: Problem situation expressed through rich pictures: This stage attempts to express the problem situation through a pictorial representation called a rich picture, on the principle that more knowledge can be communicated visually: a picture is often worth more than a thousand words. The researcher at this stage attempts to sort and present the information in the form of a rich picture along the following parameters: (a) the structure of the organization: factors that do not change easily (e.g. buildings, locations and environment); (b) the processes or transformations carried out within the system, many of which change constantly; and (c) the issues expressed or felt by organizational members (complaints, criticisms, suggestions, endorsements).

The astute researcher employs various formal and informal tools to collect information about the problem situation, e.g. interviews, observation, questionnaires and discussions. The objective here is to present the richest possible picture, collecting as much information as possible about the problem situation and avoiding narrowing the scope of the problem before the situation is well understood. Unstructured tools are therefore preferred by scholars at this stage. The richer the picture, the more explicable the argument.

Stage 3: Root definitions: This stage involves naming the relevant systems, also termed giving the root definition of the relevant systems. The root definition expresses the core purpose of some purposeful activity system. Properly written root definitions provide a much simpler insight into building system models, and they are written in such a way that a model can be built from them. A root definition is expressed as a transformation process that takes some entity as input, changes or transforms that entity, and produces a new form of the entity as output. Producing a root definition is a two-step process: (i) an issue or task is chosen from a rich picture, and (ii) a system is defined to carry out the task or address the issue. Root definitions are usually written as sentences that elaborate a transformation. There are six elements that make up a well-formulated root definition, summed up in the mnemonic CATWOE; a small data-structure sketch follows the list below.

1. Customer: everyone who stands to benefit from a system is considered a customer of the system. If the system involves sacrifices, such as layoffs, then those victims must also be counted as customers. If one considers a metropolitan public transport system, the passengers are the customers. In a B-School, however, the student is not a customer but a product that is purchased by a customer, i.e. the industry or business that employs the graduate.

2. Actor: as Dunlop and Flanders argued in their work on industrial relations systems, the actors perform the activities defined in the system. The organisations and government agencies involved in the passenger transport business are actors.

3. Transformation process: this is shown as the conversion of input to output. The efficient movement of passengers from any part of the city to their desired destinations is an example of a transformation process. If the process is opaque it is referred to as the "black box".

4. Weltanschauung: this is the classic German expression for world view, and it is the world view that makes the transformation process meaningful in context. Taking the example of a road leading from a housing colony to a factory site: the commuting employees would see it as a means of reaching the desired destination without hassle, while the traffic management authorities would see it as a way to avoid traffic congestion. Environmentalists would view the system as a way to reduce pollution effectively, while the IR manager would look closely at the safety and legal hazards of driving along the designated road.

5. Owners: every system has one or a set of proprietors who have the power to start up and shut down the system. This is why decision makers talk of taking ownership of a decision, and it is also why modern writers argue for power to be based at the point of decision making (sales or production). The government, which has the power to shut down the system, would be the owner of the public transport system.

6. Environmental constraints: external elements exist outside the system which it takes as given; internal elements exist within the system and are often overlooked for want of rigour. These constraints include organizational policies as well as legal and ethical matters. They can be endogenous, such as structure, culture and workers' collectivity, or exogenous, such as state policy, market forces and the political environment. The economic and physical geography of the city and the political system are examples of environmental constraints on the metropolitan passenger transport system. In the not so recent past the examples of Singur and Nandigram have highlighted other socio-cultural constraints relating to land acquisition for SEZs, a moot point conveniently missed by the bourgeois press and sometimes not even understood by writers who present papers on SEZs.
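
To make the mnemonic concrete, the following is a minimal sketch in Python; the RootDefinition class and the transport example values are this writer's illustrative assumptions, not part of Checkland's formal apparatus.

from dataclasses import dataclass
from typing import List

@dataclass
class RootDefinition:
    # The six CATWOE elements of a well-formulated root definition.
    customers: List[str]       # who benefits from (or is victimised by) the system
    actors: List[str]          # who performs the activities defined in the system
    transformation: str        # the input-to-output conversion
    weltanschauung: str        # the world view that makes the transformation meaningful
    owners: List[str]          # who can start up or shut down the system
    environment: List[str]     # constraints the system takes as given

# Illustrative root definition for the metropolitan transport example above.
transport = RootDefinition(
    customers=["passengers"],
    actors=["transport operators", "government agencies"],
    transformation="passengers at origin -> passengers moved to destination",
    weltanschauung="efficient public transport reduces congestion and pollution",
    owners=["the government"],
    environment=["city geography", "political system", "fare regulation"],
)
print(transport.transformation)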

CATWOE can be used as a building block for the scholar in deriving the root definition. Although its main use is to analyse root definitions, knowing the CATWOE elements lends clarity and rigour to the scholar's work. Different world views may lead to the formation of different root definitions, and at this point Checkland comes quite alarmingly close to the Ricardo-Marx-Sraffa position that different world views create systems that generate the seeds of their own disintegration. However, Checkland does not take the Ricardo-Marx-Sraffa route, and his argument goes somewhat as follows.

A system has two critical components, viz. transformation and world view, which constitute the essential components of system definition. SSM revolves around how best the transformation can be achieved in accordance with the world view, and employs a process of iterative modelling. This is also referred to in the literature as formal systems thinking.

Stage 4: Building conceptual models: This stage deals with the development of conceptual models based on the root definitions. A conceptual model is a human activity model that strictly conforms to the root definition using the minimum set of activities; formal systems thinking is applied in its development. The Formal System Model serves as a guideline for checking the conceptual model drawn. If a system represents a human activity system then, under the Formal System Model, it is a formal system if and only if it meets the following eight criteria: (a) it must have some mission; (b) it must have a measure of performance; (c) it must have a decision-making process; (d) it must have components which interact with each other such that effects and actions are transmitted through the system; (e) it must be part of a wider system with which it interacts; (f) it must be bounded from the wider system, based on the area where its decision-making process has the power to enforce an action; (g) it must have resources at the disposal of its decision-making process; and (h) it must have either long-term stability or the ability to recover in the event of a disturbance.

Systems as a concept are ubiquitous and exist in the mind of the beholder. The components of a system must therefore themselves be systems, with all the properties of systems (subsystems). The conceptual model is similar to a PERT chart and can be written as a directed graph: nodes in the graph are activities that need to be done, verbs in the root definition are used to describe these activities, and logical dependencies are used to describe the structure of the system (a small sketch of such an activity graph follows). The conceptual model is incomplete unless monitoring standards are set in terms of efficiency, effectiveness and efficacy, and control measures are given. Career planning and performance management systems fall neatly into this description.
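
As an illustration, the sketch below encodes a handful of activities from the transport example as a dependency graph and derives an ordering that respects the logical dependencies; the activity names are this writer's assumptions, not Checkland's.

from graphlib import TopologicalSorter  # Python 3.9+

# Each activity maps to the set of activities it logically depends on.
dependencies = {
    "define routes": set(),
    "schedule vehicles": {"define routes"},
    "move passengers": {"schedule vehicles"},
    "monitor efficiency, effectiveness and efficacy": {"move passengers"},
}

# static_order() yields the activities in an order that honours every dependency.
print(list(TopologicalSorter(dependencies).static_order()))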

Stage 5: Comparison of the conceptual models with the real world: This stage involves comparing the results of stage 4 (the conceptual model) and stage 2 (the rich picture) and locating the similarities and differences, if any, between the two. While the conceptual model presents a possible ideal state, the rich picture is a reflection of reality as perceived by the researcher. This comparison leads to the identification of feasible and desirable changes. The researcher at this point would like to know whether there are ways of improving the situation. More often than not, these comparisons lead to an iteration of steps 2, 3 and 4; after these iterations the desirable changes are identified in step 5. The purpose of the comparison stage is to generate debate about possible changes which might be made within the perceived problem situation. This is akin to stage 3 of the Sadri-Jayashree model, i.e. the design of the system or of the proposed intervention.

Stage 6: Identification and recommendation of feasible and desirable changes: Any model worth its salt must lead the researcher towards his/her final objective. The comparison made in stage 5 leads to the identification of desirable changes and to recommendations on how these changes can be implemented. The outcome of this stage is the creation of a desirable system by contemplating changes in the problem situation, which can be structural, process-related, or attitudinal and behavioural. The researcher is now ready to move from step 4 to step 5 of the Sadri-Jayashree model, i.e. from development to delivery.

Stage 7: Implementation of the changes: The researcher's job at stage 7 is to implement the changes and put the findings and the system into action, like the delivery stage in the Sadri-Jayashree model. When action is taken it may be straightforward; however, other situations may be encountered that impede straightforward activation. The introduction of the action may change the situation, such that although the originally perceived problem has been eliminated, new problems emerge. Often it is recommended that a temporary system be used to carry out the task under the supervision of the analyst, followed by a transition to the operation of the new system. Checkland pointed out that this methodology has in fact not emerged as a once-and-for-all approach to something sharply defined as a problem, but rather to something perceived as a problem. The importance of the feedback loop in any systems model is thus highlighted yet again. This is especially important for persons working in the areas of change management and organisation restructuring.

One important feature of SSM is that it is goal-driven: it focuses on a desirable system and on how to reach it. Checkland indicated that the changes must be systemically desirable, as a result of the insight gained from the selection of root definitions and conceptual model building, and they must also be culturally feasible given the characteristics of the situation, the people in it, their shared experiences and their prejudices. It is hard to find any changes which do not meet both criteria.

Checkland found from one of his case studies that it is important to move quickly and lightly through all the methodological stages, several times if necessary, in order to engineer a bridgeable gap between 'what is' and 'what might be'. Having explained how the problem is to be defined, one can go on to examine other issues which this author feels are germane to the researcher and which he now seeks to address. Sampling is an important aspect of the research design, and one needs to be careful in employing and implementing a sampling design. Many a time, for instance, student feedback on faculty performance is taken only from those who are perceived to be close to the Director of a B-School; the sample is poorly selected and so not representative enough. Hence the feedback is flawed, and any action taken upon it is vitiated.

Sampling techniques: Many a time errors occur in the data on account of a wrong choice of sample. Ideally the sample should be free from any sort of bias, in order to ensure that it truly represents the population. Estimation is based on the sampled data, and these need to be truly representative of the population. The under-estimating and over-estimating elements in the sample should ideally balance each other so that the sample is accurate. This also obviates any possible variation in the measures due to known or unknown influences that cause the sample score to lean in one direction or the other; such variation is referred to as systematic variance, and the sample data, to be accurate, need to be free of it. In order to ensure the precision of the estimate based on the sample, the sample size should be appropriate. More often than not, the sample size is compromised by the researcher in favour of cost considerations, the speed of the research work and ease of work. This invariably has the potential to play havoc with the quality of the data and the subsequent research results.

In order to avoid this pitfall, it is important that the researcher asks the right questions, and gets satisfactory answers to them, while deciding upon the sampling design in terms of (a) relevance, (b) parameters of interest, (c) sampling frame, (d) type of sample, and (e) sample size. It is not just the choice of the sampling method but also its implementation that is critical. For instance, if random sampling is chosen as the sampling method, during implementation of the plan a researcher may be tempted to alter the randomly chosen sample for ease and convenience. The common problems in adhering to the random sampling norm are not-at-home responses (which require additional follow-up effort), the temptation to add elements to the sampling frame, and the substitution of sample elements in disregard of the predetermined decision rules. These should be strictly avoided, especially in respect of issues relating to the mutable nature of social reality (a short sketch of a disciplined random draw follows).
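
By way of illustration, the following is a minimal sketch of a simple random draw from a sampling frame, with a fixed seed acting as the predetermined decision rule; the frame and seed are this writer's invented examples.

import numpy as np

frame = [f"employee_{i:03d}" for i in range(500)]   # the sampling frame
rng = np.random.default_rng(2012)                   # seed fixed in advance: the decision rule
sample = rng.choice(frame, size=50, replace=False)  # no element is substituted or added
print(sorted(sample)[:5])                           # first few sampled units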

There are some myths prevailing about sample size calculations, e.g. that the sample must be large or else it is not representative, or that a sample should necessarily bear some proportional relationship to the size of the population. In reality, a researcher must understand and appreciate that the size of the sample should be a function of the variation in the population parameters under study and of the precision of estimation needed by the researcher. Statistically, sample size determination is a function of the variance of the population parameter under study, the precision level needed (the standard error of the estimate), the confidence level of the estimate, the narrowness of the interval range, and the number of subgroups within the sample: the more demanding these are, the larger the sample required (a short sketch of the textbook calculation follows). So this author strongly urges the researcher to consider all these aspects before the sample size is fixed. Since sampling design is an art as well as a science, all statistical calculations apart (any standard research methodology book explains them elaborately), an experienced hand can always add value, and a quick second opinion never goes to waste. In fact the author recommends this very highly.
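
As a minimal sketch of the point, the standard formula for estimating a population mean to within a margin of error E at a given confidence level is n = (z * sigma / E)^2; the numbers below are illustrative assumptions.

import math
from statistics import NormalDist

def sample_size_for_mean(sigma: float, margin_of_error: float, confidence: float = 0.95) -> int:
    """Required n to estimate a mean within +/- margin_of_error."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-tailed critical value
    return math.ceil((z * sigma / margin_of_error) ** 2)

print(sample_size_for_mean(sigma=10, margin_of_error=2))                   # 97
print(sample_size_for_mean(sigma=10, margin_of_error=1))                   # 385: tighter precision, larger n
print(sample_size_for_mean(sigma=10, margin_of_error=2, confidence=0.99))  # 166: higher confidence, larger n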

Measurement: An ideal study, it is further opined, should be designed and controlled for precise and unambiguous measurement of the variables. Data accuracy can be compromised during the measurement process on account of the following factors, which invariably crop up:

o The respondent's ignorance of the issues in question, coupled with his reluctance to admit it; the central tendency of respondents; and other temporary factors like fatigue and boredom.

o Situational factors that sometimes put strain on the measurement or response-gathering session, affecting data quality.

o A measurer who is not diligent or adequately trained, or who is biased, and can thereby contaminate the data.

o A defective instrument, which can cause distortion in the data.

While a researcher can minimize the first three by ensuring adequate checks in the process, to ensure that error does not creep into the data on account of the fourth he/she needs to ensure a defect-free measurement instrument. The validity and reliability of the instrument used are the two most important means of ensuring this. Many scholars use Cronbach's alpha (a short sketch of its computation follows), and an equal number of scholars use qualitative validation through (a) expert opinion, (b) comparing the collected data with the results of focused interviews, and (c) peer review.
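
For concreteness, the following is a minimal sketch of Cronbach's alpha for a k-item scale, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the five-respondent, three-item scores are invented for illustration.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: a respondents-by-items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scores = np.array([[4, 5, 4],
                   [2, 3, 2],
                   [5, 5, 4],
                   [3, 3, 3],
                   [4, 4, 5]])
print(round(cronbach_alpha(scores), 3))  # values near 1 suggest internal consistency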

It pains the author to see several doctoral studies that have not bothered to validate the instrument used for data collection and that rely purely on subjective perceptions of objective reality for their findings. The brief discussion below will hopefully clarify the point that this author seeks to make.

Validity: This is an indicator of the extent to which an instrument measures what the researcher actually wishes to measure. Although references to many kinds of validity measures can be found in the literature, they can be segregated under two all-encompassing terms: (i) external validity (the ability of the findings to be generalised) and (ii) internal validity (the ability of the instrument to measure what it is supposed to measure). External validity is critical when a researcher intends to extrapolate beyond the sample and inductively generalise to the total population. External validity can be improved if the researcher is mindful of the factors which are ignored during the study yet interact with the variables under study. Internal validity is of three types: (a) content, (b) criterion-related and (c) construct validity.

Content validity is the degree to which the content of the measurement instrument represents the universe of all items relevant to the subject matter of the study. Content validity being judgemental, the researcher should not only use his/her expertise in the area to define the topic carefully and to determine the items to be scaled and the scales to be used, but should also refer the instrument to a panel of experts to judge how well it meets the standards. These experts assess each item in the instrument independently and classify it as essential, useful or not necessary. The response of each panel member on each item is evaluated through the content validity ratio, and the items meeting statistical significance should be retained (a short sketch of the ratio follows).
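
A common formulation of the content validity ratio is Lawshe's, CVR = (n_e - N/2) / (N/2), where n_e is the number of panellists rating an item "essential" and N is the panel size; the sketch below uses invented ratings and is this writer's illustration.

def content_validity_ratio(ratings: list) -> float:
    """ratings: each panellist's verdict on one item."""
    n = len(ratings)
    n_essential = sum(1 for r in ratings if r == "essential")
    return (n_essential - n / 2) / (n / 2)

panel = ["essential", "essential", "useful", "essential", "not necessary"]
print(content_validity_ratio(panel))  # 0.2, on a scale from -1 to +1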

Criterion-related validity is a measure of the degree to which a predictor variable is capable of capturing the relevant aspects of the criterion. This can be either predictive validity or concurrent validity, depending upon whether the researcher wants to use the variable to predict or to describe the behaviour; the essential difference between the two is the time perspective. It is important for the researcher to ensure that the criteria set captures the behaviour and that the variable chosen (predictive or descriptive) captures the essence of the criterion. By carrying out the test and finding the scores on the criterion, the variable and the actual behaviour, the researcher can easily find the degree of correlation between them and so judge criterion-related validity. The statistical tool that comes in handy here is correlation.

Construct validity attempts to identify the underlying constructs being measured and how well the instrument represents them. In judging construct validity, the researcher needs to consider both the theory and the measuring instrument. One can use the convergent validity technique, where the scores on the construct under study are correlated with the outcomes of some pre-developed, established construct if one exists; otherwise the discriminant validity technique is employed, whereby the construct is separated from other constructs available in related theories. The widely used techniques for discriminant validity are factor analysis and multitrait-multimethod analysis.

Reliability: This refers to the degree to which the instrument provides consistent results. It is concerned with estimates of the degree to which a measurement is free of random or unstable error. There are three frequently used perspectives on reliability, based on time and condition: stability, equivalence and internal consistency.

(i) Stability takes care of personal and situational factors and indicates the extent to which the instrument is able to produce consistent results with the same person at different times. A test-retest method is applied to measure the stability of the instrument. Stability scores are often affected by several factors, such as the time between the two tests, the sensitivity of the topic and extraneous factors affecting the respondents' opinions. The researcher's alertness in manoeuvring the time between measurements is critical to ensuring the veracity of the test scores.

(ii) Equivalence is concerned with variations at one point in time among observers or interviewers and among different samples of items being studied. A good way to test the equivalence of measurements by different observers or interviewers is to compare their scores on the same event. Measures of inter-rater reliability can be obtained in such cases, where a panel of judges is involved, by finding the correlation of their observations; the researcher can rank the observations and compute the rank correlation coefficient to judge equivalence. The objective in judging equivalence is to find out how a given set of items will categorize the individual. Variations in responses apart, if a person is classified in the same way by each test, the tests are deemed to have high equivalence. In order to judge equivalence one administers alternative or parallel forms of the test to the same person, either simultaneously or with a certain time gap between the two to avoid the impact of fatigue or boredom on the test result; the scores are then correlated to judge equivalence.

(iii) Internal consistency refers to homogeneity among the items. The split-half technique can be used when there are several similar questions or statements in the instrument: the instrument is administered only once and the results are separated into odd and even items, or randomly split into two sections. The correlation between the results of the two halves indicates the degree of internal consistency. To adjust for the reduced length due to the splitting of the test, the Spearman-Brown formula is used (a short sketch of this computation follows the list). As was alluded to above, there are some tests which do not require the splitting of the instrument and are quite frequently used by researchers: the Kuder-Richardson Formula 20 (KR-20) for dichotomous items and Cronbach's coefficient alpha for multi-item scales.
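
The following is a minimal sketch of split-half reliability with the Spearman-Brown correction, r_sb = 2r / (1 + r), where r is the correlation between the odd-item and even-item half-scores; the data are invented for illustration.

import numpy as np

def split_half_reliability(items: np.ndarray) -> float:
    odd = items[:, 0::2].sum(axis=1)    # each respondent's score on the odd items
    even = items[:, 1::2].sum(axis=1)   # each respondent's score on the even items
    r = np.corrcoef(odd, even)[0, 1]    # correlation between the two halves
    return 2 * r / (1 + r)              # Spearman-Brown full-length estimate

scores = np.array([[4, 5, 4, 3],
                   [2, 3, 2, 2],
                   [5, 5, 4, 5],
                   [3, 3, 3, 4],
                   [4, 4, 5, 4]])
print(round(split_half_reliability(scores), 3))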

Correlation and causation: The fallacy of post hoc sed non propter hoc occurs when correlation between variables is confused with causation. Determining the nature of causation is very difficult. Sometimes a cause and its effect are closely related, spatially, temporally or both, but sometimes they are not. However, most investigators seem inclined to assume that events which are closely connected either spatially or temporally are also connected causally. This problem is commonly known as the difference between correlation and causation: just because two events correlate (are close in time or space) does not mean that one has caused the other. The scientific method demands that for any correlation to be attributed to causation it has to be testable, and one must remember that science forces us to remain open to the possibility that new evidence will change what we know and believe. With enough information, one can justify concluding that a strong correlation between two events is indicative of a causal relationship. The author strongly opines that a researcher should be mindful of the fact that when all reliable evidence points to one conclusion, and no reliable evidence points to anything else, we do not commit the fallacy of confusing correlation with causation by concluding that we have likely identified the cause of the phenomenon in question (a short simulated illustration follows).
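
The simulation below, a sketch on entirely synthetic numbers of this writer's choosing, shows how two variables driven by a common third factor correlate strongly even though neither causes the other.

import numpy as np

rng = np.random.default_rng(42)
confounder = rng.normal(size=1000)  # a lurking common cause, e.g. overall market demand
sales_a = confounder + rng.normal(scale=0.5, size=1000)  # neither series...
sales_b = confounder + rng.normal(scale=0.5, size=1000)  # ...causes the other

r = np.corrcoef(sales_a, sales_b)[0, 1]
print(f"correlation = {r:.2f}")  # about 0.8, with no causal link between the two series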

Significance test: The term significance is one of the most misunderstood terms in statistics. The normal process for testing significance is that the investigator specifies the null (Ho) and alternate (Ha) hypotheses and the level of significance alpha; he then finds the p value from the sample and compares it with the predefined Type I error rate (α). Statistical significance is then established by the p < α criterion: if p < α, a result is deemed statistically significant, and if p > α it is not. A researcher must understand that α and p values are completely different entities with completely different interpretations. The p value has become associated in researchers' minds with the Type I error rate, α; since both concepts are tail-area probabilities, the p value is erroneously interpreted as a frequency-based "observed" Type I error rate, and at the same time as an incorrect (i.e., p < α) measure of evidence against Ho. The p value from Fisher's significance testing procedure measures the probability of encountering an outcome (x) of this magnitude or larger conditional on a true null hypothesis of no effect or relationship, Pr(x | Ho); the significance level, or Type I error α, is the probability of falsely rejecting Ho, while a Type II error, β, is the false acceptance of Ho. The perspective in finding the p value is to gather evidence against the null hypothesis; the perspective in specifying the α value is to minimize error in the decision. The researcher must realise that while α is a predetermined value, p is an observed value: the former takes a decision perspective to minimise error, while the latter gathers evidence against the null hypothesis (a short sketch of the distinction follows). The researcher should be clear about these two perspectives, which have been assimilated into what most books refer to as a theory of statistical inference, and should in particular refrain from using terms such as a roving alpha or very highly significant relationships. This is because Fisher did not speak of the alternative hypothesis when proposing the concept of the p value, while Neyman and Pearson gave the concept of the Type I error, α, and did not speak of the p value in their account of hypothesis testing. Hence it would be unwise to mix oranges and apples.
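
As a minimal sketch of the distinction on synthetic data (the groups and seed are this writer's assumptions), α is fixed before the data are seen while p is computed from the sample:

import numpy as np
from scipy import stats

alpha = 0.05  # Type I error rate fixed in advance (the Neyman-Pearson perspective)

rng = np.random.default_rng(7)
group_a = rng.normal(loc=50, scale=10, size=40)
group_b = rng.normal(loc=55, scale=10, size=40)

# Ho: the two groups have equal means; p measures evidence against Ho (Fisher).
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"p = {p_value:.4f}")
print("reject Ho" if p_value < alpha else "fail to reject Ho")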

Relationship diagrams: Conceptual models are the backbone of empirical and theoretical marketing research. They must be treated with the same care as questionnaire construction, the purification of measurement scales and the design of experiments. More often than not, conceptual models are marred by the problem of identification: sometimes it becomes impossible to "identify" the true relationships in the conceptual model, no matter how much data has been gathered. This inability to uniquely determine the model parameters is the identification problem. The theory incorporated in the conceptual model is then insufficient to identify the structural coefficients, and this leads to erroneous conclusions. To take the example of marketing a newly formulated VRS scheme, the diagram in Fig. 1 is marred by the identification problem: the relationship coefficients between advertising expenditure and word of mouth, and between advertising message and attitude, cannot be determined conclusively, because the variables word of mouth and attitude also interact with each other, making it difficult to estimate the coefficients of the true relationships.

Fig. 1: Problem identification

If the arrow indicating the relationship between advertising message and word of mouth is removed, the diagram becomes free of identification problems.

Fig. 2: Adjusted conceptual diagram without the identification problem

A researcher in the management sciences must thus be mindful of the relationships outlined in the conceptual model and must also ascertain whether they can be truly identified, because every functional area has its own set of goals, its preferences and its agendas. A lack of identification in the conceptual diagram often leads to erroneous measurement tools and consequently to erroneous interpretations. A researcher, in the author's opinion, may employ the following tactics, as suggested by Hess, to overcome this problem:

1. Erase theoretical linkages: The arrows missing from a conceptual model are just as important as the arrows present or the predicted signs of the correlations. Superfluous linkages should be eliminated.

2. Add an exogenous variable: Exogenous variables that "shift" the other equations around but do not affect the equation in question help identify that equation's coefficients.

3. Split exogenous variables: If one has multi-item measures of an exogenous variable, one can create extra movement in some equations by not collapsing all the items into a single scale. This can be done only if suggested by factor analysis.

4. Catalogue missing variables if the model is partially recursive: One needs to think carefully about the variables that are not in each equation. If the list of missing variables for each equation is unique, then one can plausibly assume that the disturbances are independent, making the model fully recursive and thus identified.

4. Conclusion

Hard-core research is often the bane of academicians brought up on the cut-copy-paste work culture; for them real research is anathema, and the gift of the gab and the playing of institutional politics become the sine qua non of their existence. Others openly state that "copying from one book may be plagiarism but copying from many is research". They are quick to offer opinions without having the humility first to listen to the views of others or the patience to think through their own logic, and students innocently ape these wrong traits to their own detriment. In summarily debunking these positions, therefore, this paper began by introducing readers in B-Schools to the philosophical underpinnings of research in the social and management sciences. In the process the author took them through a historical journey of philosophy on the one hand and introduced them to the epistemology of research on the other. Having done that, the author emphasised two things: clarity of definition and scope of inquiry on the one hand, and accuracy of data to facilitate conclusions on the other. In order to facilitate clarity and accuracy, the quest of the researcher was directed towards identifying and avoiding selected pitfalls in research. To that extent the paper is polemical in its first part, instructional in its second, and pedagogical all through.

5. References

[1]           Aaker, David A. and Richard P. Bagozzi (1979), Unobservable Variables in Structural Equation Models with an Application in Industrial Selling, Journal of Marketing Research, 16 (May).

[2] Althusser, Louis (1976): Essays in Self Criticism, London, New Left Books.

[3] Aquinas, St Thomas (1948): Summa Theologica, 3 Vols. (Reprint), New York and Boston, Benziger Brothers Inc.

[4] Austin Cline: http://atheism.about.com/library/FAQs/skepticism/blfaq_fall_correlation.htm

[5]           Bentham Jeremy, (1843): John Bowring (ed.) the Works of Jeremy Bentham Edinburgh, William Tait.

[6]           Berger Peter, (1977): Pyramids of Sacrifice, Penguin, Harmondsworth.

[7]           Berger Peter and Luckmann, Thomas, (1971): The Social Construction of Reality, Penguin Harmondsworth

[8]           Bottomore Tom (1975): Sociology as Social Criticism, London, George Allen and Unwin.

[9]           Cairns, Huntington (1949): Legal Philosophy from Plato to Hegel, Baltimore, Johns Hopkins Press

[10] Checkland, P B (1999): Soft Systems Methodology: a thirty year retrospective, Chichester, John Wiley and Sons Limited.

[11]       Chisholm, R N (1977): The Theory of Knowledge, New Delhi, Prentice Hall.

[12]       Comte, Auguste (1853): The Positive Philosophy Trans Harriet Martineu 2 Volumes, London, Toubner.

[13]       Cooper, Donald R., and Pamela S. Schindler (2006), Marketing Research, New York: McGraw–Hill.

[14]       Copleston, Frederic S J (1946) (1950)(1953): The History of Philosophy in three volumes, New York, Doubleday

[15]       Descartes Rene’ (1965): A Discourse on Method and Other Works, New York, Washington Square Press.

[16]       Durkheim, Emile (1950): The Rules of Sociological Method, Chicago, Chicago Press

[17] Hegel, G W F (1953) (1988): Reason in History (Reprint), London, Macmillan.

[18] Hegel, G W F (1956): The Philosophy of History, New York, Dover Publications.

[19]       Hess James D. (1998), Unidentifiable Relationships in conceptual Marketing Models (quoted elsewhere)

[20]       Hobbes, Thomas (1909): Leviathan, New York, Oxford University Press.

[21]       Hubbard R. (2005), We Don’t Really Know What “Statistical Significance” Means: A Major Educational Failure, Working paper, The Wharton School University of Pennsylvania, Philadelphia, July.

[22]       Huxley, Aldous (1969). In Grover Smith. Letters of Aldous Huxley. London: Chatto and Windus. 

[23]       Jayashree S (2005): What Every MBA Should Know About HRM, Himalaya Publishing House, Mumbai.

[24]       Jayashree S, Sadri S and Dastoor D S (2008) Theory and Practice of Managerial Ethics, 2nd Edition, Jaico Publishing Co., Delhi.

[25]       Jayashree S, Sadri S and Nayak N (2009): A Strategic Approach to Human Resources Management

[26]       Kant, Immanuel (1936): Critique of Pure Reason, (Reprint) London, Everyman’s Library.

[27] Kant, Immanuel (1980): Critique of Practical Reason (Reprint), Indianapolis, Bobbs-Merrill Educational Publishing.

[28]       Keynes, John Maynard (1998). The Collected Writings of John Maynard Keynes (30 Volume Hardback). Cambridge: Cambridge University Press. 

[29] Kuhn, Thomas S. (1996): The Structure of Scientific Revolutions, 3rd Ed., Chicago and London, University of Chicago Press.

[30] Lehmann, Donald R., Sunil Gupta, and Joel H. Steckel (1998): Marketing Research, New York, Addison-Wesley.

[31]       Malhotra, N.K. (2004), Marketing Research: An applied orientation, Upper Saddle River, NJ: Prentice Hall, Fourth Edition.

[32]       Marx, Karl (1961): Economic and Philosophical Manuscripts of 1844, Moscow, Foreign Language Publishing House.

[33] Parsons, Talcott and Smelser, Neil J (1984): Economy and Society: a Study in the Integration of Economic and Social Theory, London and New York, Routledge.

[34]       Patinkin, Don (1987). "Keynes, John Maynard", The New Palgrave: A Dictionary of Economics. v. 2, New York, Macmillan

[35]       Plamenatz, John (1971): Ideology, London, Macmillan.

[36]       Plato (n.d): The Republic, London, Heron Books, and Translated by W H D Rouse, New American Library.

[37]       Plato (1956): The Great Dialogues of Plato, translated by W H D Rouse, Holland, New Amsterdam Library.

[38] Popper, Karl Raimund (1961): The Poverty of Historicism, London, Routledge and Kegan Paul.

[39]       Popper, Karl Raimund (1962): The Open Society and Its Enemies, London Routledge and Kegan Paul.

[40]       Popper, Karl Raimund (1963): Conjectures and Refutations, London, Routledge and Kegan Paul

[41]       Russell, Bertrand (1930): The Conquest of Happiness, London, Routledge.

[42]       Sadri S, Jayashree S and Ajgaonkar M (2002): Geometry of HR, Himalaya Publishing House, Mumbai.

[43]       Sadri S and Jayashree S (2011); Business Ethics and Corporate Governance, Current Publications, Agra.

[44]       Sadri S and Makkar U (eds) (2012) Future Directions in Management, Bharati Publications, Ghaziabad

[45]       Sadri S and Jayashree S (2013): Human Resources Management in Modern India: concepts and cases, Himalaya Publishing House, Mumbai

[46]       Shanker, S., and Shanker, V. A. (1986), Ludwig Wittgenstein: critical assessments. London: Croom Helm.

[47]       Shaw, Bernard (1972). In Dan H. Laurence. Collected Letters, 1898–1910. New York: Dodd, Mead & Company.

[48]       Tull, Donald S. and Del I. Hawkins (1993), Marketing research: Measurement & method, New York: Macmillan, Sixth Edition.

[49]       Voltaire (Francois Marie Arouet) (1964): Philosophic Dictionary, quoted in Lord Morley: Voltaire, London, Foreign Classics for English Readers Series.

[50]       Weber, Max (1933): The Methodology of Social Sciences, New York, the Free Press of Glencoe.

[51]       Weber, Max (1947) The Theory of Social and Economic Organisation, New York, Free Press.

[52] Wittgenstein, Ludwig (1989): Diamond, Cora (ed.), Wittgenstein's Lectures on the Foundations of Mathematics, Chicago, University of Chicago Press.