The quality assurance process involves systematic activities that ensure and assess the quality of survey data (Biemer, 2003). High-quality data are suitable for their intended purpose and reflect multifaceted characteristics, including accuracy, timeliness, accessibility, and comparability. The guidelines described in this document represent standardized procedures for the quality assurance of the NCD Mobile Phone Survey data (see Figure 2). Countries are encouraged to supplement these with additional quality assurance activities to ensure that high-quality data are collected.


Figure 2. Quality Assurance Diagram



Quality Assurance: Post-data Collection

This section describes the quality assurance guidelines and procedures applicable to and recommended for the post-data collection phase. The post-data collection phase refers to the stage after all survey data have been collected and aggregated. It begins with the preparation of the analytic data file for data analysis and encompasses preparing the data for sample weight calculations and applying non-response adjustments; assessing the quality of the sampling, sampling error, and weights; and measuring non-response and other non-sampling errors.


Data Preparation and Cleaning for Sample Weight Calculations

This section provides guidelines on merging data files, validation of variables and skip patterns, and the creation of the final disposition codes.

Technical assistance from CDC is available for the use of the following statistical software packages for data management: Epi Info, SAS, SPSS, R, and Stata.


Creating the Master Database File

Merge the data into a single comma-delimited file and remove mobile phone numbers to maintain participant confidentiality. The merged data file will contain the sample information; the aggregated files from all interviews and attempted interviews; and the questionnaire database and data dictionary describing the contents, formats, and structure of the database. A comma-delimited, or CSV, file can be viewed with a Unicode-enabled text editor such as WordPad.
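A minimal sketch of this merge in Python with pandas follows; the file locations and the phone number column name ("mpn") are hypothetical and should be replaced with the country's actual export files and field names.

    # Build the master database file: stack all interview exports and
    # drop the mobile phone number column for confidentiality.
    import glob

    import pandas as pd

    # Read and stack the per-interview export files.
    frames = [pd.read_csv(path) for path in glob.glob("interviews/*.csv")]
    merged = pd.concat(frames, ignore_index=True)

    # Remove mobile phone numbers to maintain participant confidentiality.
    merged = merged.drop(columns=["mpn"])

    # Write the merged master file as comma-delimited (CSV) text.
    merged.to_csv("master_database.csv", index=False)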


Clean and Validate the Merged Data File

Verify that variables have valid values and that skip patterns worked correctly, and check any fields that are unexpectedly blank. While many of these data quality checks are built into the data collection process, it is important to confirm that no errors were left undetected in the software programming by doing the following:

  • Check that the questionnaire skip patterns have worked as specified in the questionnaire. The skip patterns from the final country questionnaire should be verified for all core variables.
  • Check that legitimately skipped items are coded as 888. If any other blank fields exist, output those records to an error file.
  • Check each variable to ensure that invalid values are not present. Use the country’s data dictionary (codebook) to confirm valid values for each variable in the dataset.
    • For example, the only valid responses for Q7 are 1 (daily), 2 (less than daily), 3 (not at all), # (refused), or the missing value code specific to the software used. If preferred, simple frequencies can be run on each variable to check for out-of-range values. Any records with invalid values can be flagged and output to the error file.
  • Verify that all respondent ages range from 18 to 120 and that all respondents reported a sex (see the sketch after this list).
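A minimal sketch of these checks in Python with pandas follows; the column names (q7, age, sex), the valid-value sets, and the file names are hypothetical stand-ins for the fields defined in the country's data dictionary.

    # Validate the merged file: flag invalid values, unexpected blanks,
    # out-of-range ages, and missing sex, and write them to an error file.
    import pandas as pd

    data = pd.read_csv("master_database.csv")
    errors = []

    # Q7 example: valid responses are 1, 2, 3, # (refused), or 888 (valid skip).
    valid_q7 = {"1", "2", "3", "#", "888"}
    errors.append(data[~data["q7"].astype(str).isin(valid_q7)])

    # Unexpectedly blank fields (legitimate skips should already be coded 888).
    errors.append(data[data["q7"].isna()])

    # Respondent ages must range from 18 to 120, and sex must be reported.
    errors.append(data[(data["age"] < 18) | (data["age"] > 120)])
    errors.append(data[data["sex"].isna()])

    # Output all flagged records to a single error file for review.
    pd.concat(errors).drop_duplicates().to_csv("error_file.csv", index=False)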


Assign Final Disposition Codes

The data file will include the final result codes for all the sample cases during data collection. Each mobile phone number (MPN) interview case should have one final result code, and these codes are generally specific to the country or mobile network operator (MNO). Using the final result codes, assign final disposition codes. The final disposition codes are then used to calculate and report response rates and quality assurance measures.

Specifically, final disposition codes are as follows (see Appendix A.1 for conversion of these result codes into final disposition codes for both IVR and SMS modes):


Table 1. Final disposition codes

1.0 Interview 
     1.10 Complete
     1.20 Partial
2.0 Eligible, Non-interview 
     2.20 Non-contact
3.0 Unknown Eligibility, Non-interview 
     3.10 Unknown if housing unit
     3.11 Not attempted or worked
     3.12 Always busy
     3.13 No answer
     3.14 Telephone answering device (don’t know if housing unit)
     3.15 Telecommunication technological barriers (e.g., call blocking)
     3.16 Technical phone problems
     3.161 Ambiguous operator message
     3.90 Other
4.0 Not Eligible 
     4.20 Fax/data line
     4.30 Nonworking/disconnected number
     4.31 Nonworking number
     4.32 Disconnected number
     4.33 Temporarily out of service
     4.44 Pagers
     4.50 Nonresidence
     4.51 Business, government office, other organization
     4.52 Institution
     4.53 Group quarters
     4.54 Person not household resident
     4.70 No eligible respondent
     4.80 Quota filled
     4.90 Other


Refer to Section 8.3 in the Sampling Design Manual for details on assigning the final disposition codes in addition to the calculations for response rates. The following guidelines inform this process:

  • Each case should be assigned one final disposition code for the MPN based on the final MPN result code.
  • Only cases with a disposition code of 1.0 should be included in the final analytic dataset. Therefore, it is essential to assign the disposition codes correctly.
  • Use cross tabs to check all final result codes against their disposition codes to identify any misclassification. If the two codes do not match as they should, there is a problem with the software code used to create the disposition codes (see the sketch below).
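A minimal sketch of this check follows, assuming hypothetical columns result_code (the MNO-specific final result code) and disposition (the assigned final disposition code).

    # Cross-tabulate final result codes against final disposition codes.
    # Each result code should map to exactly one disposition code; counts
    # spread across multiple disposition columns signal a mapping error.
    import pandas as pd

    data = pd.read_csv("master_database.csv")
    print(pd.crosstab(data["result_code"], data["disposition"]))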

Quality Measures: Sampling, Sampling Error, and Sample Weights

This section describes evaluation of the quality of estimates from NCD Mobile Phone Survey samples and shows the effects of sampling naturally occurring clusters and unequal weights on these estimates. Guidelines to assess the performance of the calculated weights are also included.


Weight Calibration Adjustments Among Cells

An important step in producing sample weights involves calibrating the weights to population counts using variables with demonstrated relationships to key study outcomes, called calibration variables (e.g., sex and age, as suggested in the Sampling Design Manual).


Background. Calibration adjusts for differences between the distributions of the sample and the population. Adjustments are applied to all units within deliberately constructed cells. The goal is to increase the weights for population subgroups that are underrepresented and decrease the weights for population subgroups that are overrepresented. The more the calibration adjustments deviate from 1.00 (on either the high or the low side), the greater the potential impact of sample imbalance on the bias of survey estimates.


Producing post-stratification adjustments. Post-stratification calibration involves constructing adjustment cells by cross-classifying the calibration variables. The post-stratification adjustment (PSA) in each adjustment cell is <1.00 if the members of that cell were overrepresented in the sample and >1.00 if they were underrepresented.
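A minimal sketch of this computation follows, assuming hypothetical inputs: a base_weight column on the sample file and a table of population counts for each sex-by-age adjustment cell.

    # Compute post-stratification adjustments (PSAs) for sex-by-age cells.
    import pandas as pd

    sample = pd.read_csv("master_database.csv")
    population = pd.read_csv("population_counts.csv")  # sex, age_group, pop_count

    # Weighted sample total within each cross-classified adjustment cell.
    wt = sample.groupby(["sex", "age_group"], as_index=False)["base_weight"].sum()
    cells = population.merge(wt, on=["sex", "age_group"])

    # PSA > 1.00 where the cell is underrepresented; < 1.00 where overrepresented.
    cells["psa"] = cells["pop_count"] / cells["base_weight"]
    print(cells[["sex", "age_group", "psa"]])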


Reporting post-stratification adjustments. Indicate how each adjustment cell is defined by the variables used for calibration. For each cell, report the value of the post-stratification adjustment and its size relative to 1.00. An optimal table of these values will show all PSAs close to 1.00, with some slightly greater or less than 1.00.


Multiplicative Effect of Variable Sample Weights on the Precision of Survey Estimates

The Sampling Design Manual calls for a design in which selection probabilities vary somewhat because of potential clustering or multiplicity of MPNs, requiring adjustments to the sample weights. The Sampling Design Manual describes the factors that are used to adjust the sampling weights.


Background. Variation in sample weights can increase sampling error in survey estimates and, therefore, lead to larger estimates of variances/standard errors. This multiplicative increase, referred to as MeffWts, depends on the degree of variability in the weights across the observations used to calculate the estimate.


Estimating MeffWts. The simple mean and variance of the weights are needed to compute MeffWts for the data used to produce survey estimates. The value of MeffWts for estimates is computed by first calculating the ratio of the variance and the square of the mean, and then adding one to this ratio.
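In symbols, with \bar{w} the mean and s_w^2 the variance of the weights for the observations used in the estimate:

    \mathrm{MeffWts} = 1 + \frac{s_w^2}{\bar{w}^2}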


Reporting MeffWts. Because MeffWts applies to all estimates within a reporting domain (e.g., the overall population, or subgroups defined by age or sex), it should be reported for all the main population subgroups for which estimates will be reported. This can be done in a table listing the reporting subgroups and the associated values of MeffWts.


Interpreting MeffWts. Interpretation of the value of MeffWts for a reporting domain is the following:

“Variation in sample weights increased the variance of all estimates (from the reporting domain) by a factor of (MeffWts).”


Example. Suppose that for male estimates MeffWts = 3.0:

“Variation in sample weights increased the variance of all estimates from male respondents by a factor of 3.0.”

While MeffWts close to 1.0 is preferred, MeffWts > 2.0 might be viewed as substantial, and weight trimming or truncation strategies may be considered for outliers or extreme weights (Potter, 1988). Trimming the extreme weights can substantially reduce the overall variation in sample weights and can considerably improve the precision of the estimates. Trimming the weights, however, may also introduce some degree of bias in an estimate. The final decision on whether to trim the weights depends on finding a balance between the reduction in variance and the increase in bias, as reflected in the mean square error. If weight trimming reduces MeffWts but does not appreciably change weighted estimates for key study outcome measures, the trimming step may be justifiable.


Overall Design Effect on the Precision of Survey Estimates

Once the questionnaire data have been cleaned and the final sample weights have been calculated, sample data are ready to be reviewed before analysis findings are reported.


Background. The overall design effect for an estimate, or Deffo, is the variance of a survey estimate from a complex sample design divided by the variance of a comparable estimate based on a simple random sample of the same size. For the NCD Mobile Phone Survey, there is only one multiplicative component to Deffo: the multiplicative effect of variable sample weights, MeffWts (see Section 2.2.2).


Estimating Deffo. Deffo is estimate-specific and can be reported directly by some survey analysis software packages, or it can be computed from the estimate and its variance when the estimate is a proportion or rate. Because Deffo will have many values across variables, summarize them with the median, minimum, and maximum values for reporting.
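For example, when the estimate is a proportion p based on n respondents, with design-based variance v, the definition above gives:

    \mathrm{Deff_o} = \frac{v}{p(1 - p)/n}

where p(1 - p)/n is the variance of the same proportion under simple random sampling.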


Reporting Deffo. Estimates of Deffo should be reported for all key study outcomes.


Interpreting Deffo. Interpretation of an estimated value of Deffo is the following:

“The variance of the survey estimate (of the population characteristic), given the NCD Mobile Phone Survey sample design, is Deffo times greater than if simple random sampling had produced the same number of respondents.”


Example. Analysis to estimate the current smoking prevalence rate produces the following from a sample where Deffo = 3.0: “The variance of the survey estimate of current smoking prevalence rate, given the NCD Mobile Phone Survey sample design, is 3.0 times greater than if simple random sampling had produced the same number of respondents.”


Margin of Error for Key Survey Estimates

An estimate’s margin of error (MOE) is one way to report the statistical precision of survey estimates. The NCD Mobile Phone Survey recommends reporting the estimated MOE along with estimates for key survey measures.

The Sampling Design Manual describes the two main features of NCD Mobile Phone Survey samples that will influence the statistical quality of estimates and findings from the data. These features are the selection of population members with unequal probabilities (hence the need to use sample weights in analysis) and the use of stratification.

General background and instructions on how to compute these measures are provided in this section.


Background. Each estimate has its own MOE. MOE is the expected half-width of a confidence interval for an estimate of a key survey measure. MOE is interpreted as how close the estimate is likely to be to the actual value of the survey measure in the population.


Estimating MOE. Although an estimate of MOE is not usually computed by survey analysis software, the information necessary to compute it is usually available. Three things are needed to compute and interpret MOE:

  • The estimate of the survey measure.
  • The estimated standard error (or variance).
  • A specified measure associated with the desired statistical confidence in the value of the estimated MOE.

The level of confidence is usually based on a value (Z) of the standard normal distribution. For example, for a 95% level of confidence, we can use Z = 1.96.

MOE is computed as the product of the desired confidence measure and the standard error of the estimate.
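In symbols, with Z the desired confidence measure and SE the estimated standard error of the estimate:

    \mathrm{MOE} = Z \times \mathrm{SE}

For example, at the 95% level of confidence, MOE = 1.96 × SE.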


Reporting MOE. Key survey estimates and their associated values of MOE should be presented together. This includes overall national estimates of these measures as well as estimates of these measures for all important reporting subgroups (e.g., by sex and age).


Interpretation. When taken with the value of a survey estimate, MOE indicates how close the estimate is likely to be to the actual value in the population.

For example, when using Z = 1.96 to compute estimated MOE, the survey estimate and its value of MOE can be interpreted together as follows:

“We are 95% confident that the estimate, (VALUE OF THE ESTIMATE), is within (VALUE OF ITS MOE) of the corresponding population value.”


Example. Suppose that the reported value of an NCD Mobile Phone Survey estimate is 22.9%, with a standard error of 1.2% that was computed in accordance with the actual sample design in that country. Then MOE = 1.96 × 1.2% ≈ 2.4%:

“We are 95% confident that the estimate, 22.9%, is within 2.4% of the corresponding population value.”


Quality Measures: Coverage, Non-response, and Other Non-sampling Errors

Patterns of Respondent Cutoff Rates

There will likely be some NCD Mobile Phone Survey interviews that are not complete. Respondents may decide they no longer wish to continue the interview and hang up, or a call may be interrupted because of a network issue. In either case, a partially completed interview is an indicator of respondent disengagement, which may reflect respondents’ attitudes toward the survey and, potentially, the quality of the data. The Sampling Design Manual contains a detailed discussion of respondent cutoff/response rates.


Data sources. The data file with final disposition codes should be used for these calculations. A disposition code of 1.0 indicates that the respondent completed at least the demographic questions and one NCD question of the NCD Mobile Phone Survey interview. A disposition code of 3.90 indicates that the respondent did not consent before the demographic questions could be completed.


Calculation. A survey respondent is defined as any selected individual who is assigned a final disposition code of 1.0 or 3.90. Also define a cutoff rate (COR) to be:
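The formula below is a reconstruction consistent with this definition and with the Table 2 categories (I = complete interviews, P = partial interviews, UO = other cases with code 3.90); confirm it against Section 8.3 of the Sampling Design Manual:

    \mathrm{COR} = \frac{P + UO}{I + P + UO}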


Example. Calculating COR

In this example, we use 238,927 as the number of MPNs that need to be dialed, from Section 8.2 in the Sampling Design Manual. If only 20% of the numbers are active, we will reach 47,786 potential respondents. Assuming a 50% eligibility rate (i.e., 23,893 not eligible) and a 30% response rate (16,725 non-interviews), we are left with 7,168 respondents. Suppose the distribution across the final codes is as shown in Table 2.



Table 2. Final Disposition Code Data for Example Survey

                                            Code       No.
Interview                                   1.0      7,168
     Complete (I)                           1.10     2,151
     Partial (P)                            1.20     5,017
Eligible, Non-interview                     2.0      4,516
     Non-contact (NC)                       2.20     4,516
Unknown Eligibility, Non-interview          3.0     12,209
     Unknown if housing unit (UH)           3.10     1,221
     Other (UO)                             3.90    10,988
Not Eligible                                4.0     23,893
Total                                               47,786


The cutoff rate would then be calculated as follows:
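Using the reconstructed formulation above with the counts in Table 2:

    COR = (5,017 + 10,988) / (2,151 + 5,017 + 10,988) = 16,005 / 18,156 ≈ 0.88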

Uses. Values of COR should be computed directly for the sample as a final stage of quality assurance after data collection is completed. Values of COR could also be computed by the following:

  • The week of data collection in which the interview took place
  • Respondent age
  • Respondent sex

Interpretation. Generally, the lower the value of COR, the better. While CORs are useful measures of data collection performance and overall survey quality, they are not the most critical component.


Item Non-response Rates for Fact Sheet Indicator Variables

For the NCD Mobile Phone Survey, the item non-response rate (INRR) is defined as the percentage of respondents who do not answer a specific interview question among all respondents who should have answered that question. INRRs should be computed for all indicators included in the country-specific NCD Mobile Phone Survey Fact Sheet (see Section 5). INRRs are computed as the ratio of the number of respondents for whom an in-scope valid response was not obtained for item x (Mx) to the total number of unit-level respondents (I) minus the number of respondents with a valid skip for item x (Vx):
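    \mathrm{INRR}_x = \frac{M_x}{I - V_x} \times 100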

The total number of unit-level non-respondents for item x (Mx) will be obtained from an unweighted frequency of respondents with missing data for item x, after appropriate cleaning to ensure proper skip patterns were followed. The total number of unit-level respondents (I) will be obtained from the total unweighted frequency of respondents to the sex or age questions, because these variables will have no anticipated blank fields. The total number of respondents with a valid skip for item x (Vx) can be obtained as the frequency of item x with a response of 888. INRRs below 5% are considered low.
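A minimal sketch of this calculation follows, using the hypothetical item column q7 and the sex question for the unit-level respondent count.

    # Compute the item non-response rate (INRR) for one item.
    import pandas as pd

    data = pd.read_csv("master_database.csv")

    i_total = data["sex"].notna().sum()            # unit-level respondents (I)
    v_x = (data["q7"].astype(str) == "888").sum()  # valid skips for item x (Vx)
    m_x = data["q7"].isna().sum()                  # missing responses (Mx)

    inrr = 100 * m_x / (i_total - v_x)
    print(f"INRR for q7: {inrr:.1f}%")             # below 5% is considered low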


Creation of Analytic Data File

After the sample weighting and all quality assurance checks have been completed, a new file should be created containing only cases with an individual-level final disposition code of 1.0. Only these cases are considered respondents to the NCD Mobile Phone Survey.


This new file is called the Analytic Data File and should be used when conducting data analyses to create indicators for the Country Fact Sheet and Country Report.
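A minimal sketch of this step follows, assuming a hypothetical disposition column that holds the major-category final disposition code (1.0, 2.0, 3.0, or 4.0).

    # Create the Analytic Data File: keep only respondents (code 1.0).
    import pandas as pd

    data = pd.read_csv("master_database.csv")
    analytic = data[data["disposition"] == 1.0]
    analytic.to_csv("analytic_data_file.csv", index=False)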