RM Module 4 Part 3 Errors

Working with the leading research firm KPMG Nunwood, NatWest ran a continuous tracking programme that interviewed 4,000 of its customers, alongside 2,000 customers of NatWest's competitors, each month. Researchers made use of a research method known as the 'critical incident technique' (CIT), which involves research participants telling stories about specific experiences (incidents) related to their use of a product or service. This research generated insights in two important areas. Firstly, NatWest found it had some of the lowest customer satisfaction ratings for any bank telephone service. Secondly, far from being unimportant, telephone banking was often the point of contact where customers were most in need of help. To address these issues, the qualitative research was augmented with quantitative analysis of customer verbatim comments using textual analysis software. The output resulted in a new call model for delivering a high-quality customer experience over the telephone, and a dramatic increase in customer satisfaction.

Examples of multi-method research designs can be criticised for taking too long to undertake, being too expensive and perhaps applying too many techniques that do not offer sufficient additional understanding. Such criticism cannot really be addressed without knowing the value that decision makers may get from this decision support (not just at the end, but at the many stages of research as it unfolds), compared with how much they would have to pay for it. Decision makers can receive interim reports and feed back their ideas to give more focus to the issues and types of participant in subsequent stages. The example also illustrates that researchers can be very creative in their choice of techniques that combine to make up a research design.

Total error

Several potential sources of error can affect a research design, and a good research design attempts to control the various sources of error. Although these errors are discussed in detail in subsequent chapters, it is pertinent at this stage to give brief descriptions. Where the focus of a study is a quantitative measurement, the total error is the variation between the true mean value in the population of the variable of interest and the observed mean value obtained in the marketing research project. For example, the annual average income of a target population may be 85,650, as determined from census information via tax returns, but a marketing research project estimates it at 62,580 based upon a sample survey. As shown in Figure 3.4, the total error (in this case 23,070) is composed of random sampling error and non-sampling error.

Random sampling error

Random sampling error occurs because the particular sample selected is an imperfect representation of the population of interest. It is the variation between the true mean value for the population and the true mean value for the original sample. (Random sampling error is discussed further in Chapters 14 and 15.)
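Written out with explicit symbols, the decomposition in Figure 3.4 is simple arithmetic. The notation below is introduced here for convenience rather than taken from the text: μ is the true population mean, μ_S the true mean of the original sample, and x̄ the observed mean obtained in the project.

```latex
% Decomposition of total error (notation introduced here, not the book's):
%   \mu    : true mean in the population of interest
%   \mu_S  : true mean in the original sample
%   \bar{x}: observed mean obtained in the research project
\underbrace{\mu - \bar{x}}_{\text{total error}}
  = \underbrace{(\mu - \mu_S)}_{\text{random sampling error}}
  + \underbrace{(\mu_S - \bar{x})}_{\text{non-sampling error}},
\qquad \text{e.g. } 85{,}650 - 62{,}580 = 23{,}070.
```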
Non-sampling error

Non-sampling errors can be attributed to sources other than sampling, and may be random or non-random. They result from a variety of reasons, including errors in problem definition, approach, scales, questionnaire design, interviewing methods, and data preparation and analysis. Non-sampling errors consist of non-response errors and response errors.

A non-response error arises when some of the participants included in the sample simply do not respond. The primary causes of non-response are refusals and not-at-homes (see Chapter 15). Non-response will cause the net or resulting sample to be different in size or composition from the original sample. Non-response error is defined as the variation between the true mean value of the variable in the original sample and the true mean value in the net sample.

Response error arises when participants give inaccurate answers, or their answers are mis-recorded or mis-analysed. Response error is defined as the variation between the true mean value of the variable in the net sample and the observed mean value obtained in the marketing research project. Response error is determined not only by the non-response percentage, but also by the difference between participants and those who failed to cooperate, for whatever reason, as response errors can be made by researchers, interviewers or participants. A central question in evaluating response error is whether those who participated in a survey differ from those who did not take part, in characteristics relevant to the content of the survey. (A numerical sketch of how these error components combine follows the lists of error types below.)

Response errors made by the researcher include surrogate information, measurement, population definition, sampling frame and data analysis errors:

• Surrogate information error may be defined as the variation between the information needed for the marketing research problem and the information sought by the researcher. For example, instead of obtaining information on consumer choice of a new brand (needed for the marketing research problem), the researcher obtains information on consumer preferences because the choice process cannot be easily observed.

• Measurement error may be defined as the variation between the information sought and the information generated by the measurement process employed by the researcher. For example, while seeking to measure consumer preferences, the researcher employs a scale that measures perceptions rather than preferences.

• Population definition error may be defined as the variation between the actual population relevant to the problem at hand and the population as defined by the researcher. The problem of appropriately defining the population may be far from trivial, as illustrated by the following example of affluent households. Their number and characteristics varied depending on the definition, underscoring the need to avoid population definition error. Depending upon the way the population of affluent households was defined, the results of the study would have varied markedly.

How affluent is affluent?
The population of affluent households was defined in four different ways in a study:

1. Households with an income of €80,000 or more.
2. The top 20% of households, as measured by income.
3. Households with net worth over €450,000.
4. Households with discretionary income to spend being 30% higher than that of comparable households.

• Sampling frame error may be defined as the variation between the population defined by the researcher and the population as implied by the sampling frame (list) used. For example, a telephone directory used to generate a list of telephone numbers does not accurately represent the population of potential landline consumers, due to unlisted, disconnected and new numbers in service. It also misses out the great number of consumers who choose not to have landlines and exclusively use mobile phones.

• Data analysis error encompasses errors that occur while raw data from questionnaires are transformed into research findings. For example, an inappropriate statistical procedure is used, resulting in incorrect interpretation and findings.

Response errors made by the interviewer include participant selection, questioning, recording and cheating errors:

• Participant selection error occurs when interviewers select participants other than those specified by the sampling design, or in a manner inconsistent with the sampling design.

• Questioning error denotes errors made when asking questions of the participants, or in not probing when more information is needed. For example, while asking questions an interviewer does not use the exact wording or prompts as set out in the questionnaire.

• Recording error arises due to errors in hearing, interpreting and recording the answers given by the participants. For example, a participant indicates a neutral response (undecided) but the interviewer misinterprets that to mean a positive response (would buy the new brand).

• Cheating error arises when the interviewer fabricates answers to a part or the whole of the interview. For example, an interviewer does not ask the sensitive questions related to a participant's debt but later fills in the answers based on personal assessment.

Response errors made by the participant comprise errors of inability and unwillingness:

• Inability error results from the participant's inability to provide accurate answers. Participants may provide inaccurate answers because of unfamiliarity, fatigue, boredom, question format, question content, or because the topic is buried deep in the participant's mind. An example of inability error is where a participant cannot recall the brand of toothpaste he or she purchased four weeks ago.

• Unwillingness error arises from the participant's unwillingness to provide accurate information. Participants may intentionally misreport their answers because of a desire to provide socially acceptable answers, because they cannot see the relevance of the survey and/or a question posed, to avoid embarrassment, or to please the interviewer. For example, to impress the interviewer a participant intentionally says that he or she reads The Economist magazine.
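The mean-based definitions above chain together: total error splits into random sampling error and non-sampling error, and non-sampling error splits into non-response error and response error. The following minimal Python sketch reuses the census and survey figures from the earlier income example, but the intermediate sample means are entirely hypothetical, chosen only to show how the components add up.

```python
# Hypothetical illustration of how the error components defined above combine.
true_population_mean = 85_650   # true mean in the population (from the income example)
original_sample_mean = 83_100   # true mean of everyone drawn into the sample (hypothetical)
net_sample_mean = 78_400        # true mean of those who actually responded (hypothetical)
observed_mean = 62_580          # mean reported by the research project (from the income example)

random_sampling_error = true_population_mean - original_sample_mean
non_response_error = original_sample_mean - net_sample_mean
response_error = net_sample_mean - observed_mean

total_error = true_population_mean - observed_mean
# By construction, the three components sum to the total error.
assert total_error == random_sampling_error + non_response_error + response_error

print(f"random sampling error: {random_sampling_error:>7,}")
print(f"non-response error:    {non_response_error:>7,}")
print(f"response error:        {response_error:>7,}")
print(f"total error:           {total_error:>7,}")
```

In this (hypothetical) breakdown the non-sampling components dwarf the sampling component, which is the pattern the following paragraphs describe.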
These sources of error are discussed in more detail in subsequent chapters; what is important here is that there are many sources of error. In formulating a research design, the researcher should attempt to minimise the total error, not just a particular source. This admonition is warranted by the general tendency among naive researchers to control sampling error by using large samples. Increasing the sample size does decrease sampling error, but it may also increase non-sampling error, e.g. by increasing interviewer errors. Non-sampling error is likely to be more problematic than sampling error. Sampling error can be calculated, whereas many forms of non-sampling error defy estimation. Moreover, non-sampling error has been found to be the major contributor to total error, whereas random sampling error is relatively small in magnitude. The point is that researchers must not lose sight of the impact of total error upon the integrity of their research design and the findings they present. A particular type of error is important only in that it contributes to total error.

Sometimes, researchers deliberately increase a particular type of error to decrease the total error by reducing other errors. For example, suppose that a mail survey is being conducted to determine consumer preferences in purchasing shoes from a chain of specialist shoe shops. A large sample size has been selected to reduce sampling error, and a response rate of 30% may be expected. Given the limited budget for the project, the selection of a large sample size does not allow for follow-up mailings. Past experience, however, indicates that the response rate could be increased to 45% with one follow-up mailing and to 55% with two follow-up mailings. Given the subject of the survey, non-participants are likely to differ from participants in many features. Therefore it may be wise to reduce the sample size to make money available for follow-up mailings. While decreasing the sample size will increase random sampling error, the two follow-up mailings will more than offset this loss by decreasing non-response error.
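A minimal sketch of the budget trade-off being described, under stated assumptions: the total budget, the cost per mailed questionnaire and the sample sizes below are invented purely for illustration, and only the response rates come from the example above.

```python
# Hypothetical budget trade-off for the mail-survey example above.
BUDGET = 10_000.0        # total money available for mailings (hypothetical)
COST_PER_CONTACT = 1.0   # cost of mailing one questionnaire once (hypothetical)

def completed_responses(initial_sample: int, follow_ups: int, response_rate: float) -> float:
    """Expected completed questionnaires, assuming (simplistically) that every
    sampled person is mailed in every wave."""
    mailings = initial_sample * (1 + follow_ups)
    cost = mailings * COST_PER_CONTACT
    if cost > BUDGET:
        raise ValueError(f"Plan over budget: {cost:,.0f} > {BUDGET:,.0f}")
    return initial_sample * response_rate

# Option A: large sample, no follow-ups -> smaller sampling error, high non-response.
plan_a = completed_responses(initial_sample=10_000, follow_ups=0, response_rate=0.30)

# Option B: smaller sample, two follow-ups -> more sampling error, far less non-response.
plan_b = completed_responses(initial_sample=3_300, follow_ups=2, response_rate=0.55)

print(f"Plan A: 10,000 mailed once,   30% respond -> {plan_a:,.0f} completes, 70% non-response")
print(f"Plan B:  3,300 mailed thrice, 55% respond -> {plan_b:,.0f} completes, 45% non-response")
```

In practice, follow-ups are usually sent only to non-respondents, which would let an even larger initial sample fit the same budget; the sketch deliberately keeps the arithmetic as simple as possible to show why a smaller sample with follow-ups can yield less total error.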
