- Levels of Measurement
Three levels of measurement will be used in this study. The nominal scale will be used to classify the sampled children into various demographic sets such as Black, White, and Latino children; boys and girls; teenagers and younger children; mothers and fathers; and breadwinners and non-breadwinners, among other categories (Mitchell & Jolley, 2012). This level of measurement requires that broad, homogenous categories can be established, which are then used to describe multiple variables in the study. While the nominal scale can only offer simple classifications, it is especially suitable for children, who have not yet developed the ability to describe complex emotional effects. Effectively, broad descriptions of emotion such as “feel bad”, “don’t care” and “miss” can be easily categorized using this measurement level. In addition, categories can be assigned arbitrary numbers according to predetermined criteria, which makes it easy for the children to describe their feelings (Usher, Holmes, Lindsay, & Luck, 2003).
The second level will be the ordinal scale, which will be used to rank variables according to the relative degree or size of differences among them. Numbers or weights are assigned to the variables/objects to indicate magnitude. Specifically, this level of measurement would be applied to measure the children’s ages, the length of the parents’ prison sentences, and the extent of the psychological effects using broad, homogenous descriptions (e.g., mild, strong, severe, and chronic) (Kothari, 2009). The group mode and median will be used in the description of the effects and other variables. This level of measurement is important because, for ordinal psychological measurements, statistical measures such as standard deviations and means lack valid interpretations.
Most of the variables to be observed in this study cannot be measured with instruments that make accurate statistical analysis possible, which makes ordinal measurements useful estimations of the variable quantities. Ordinal scales describe a predetermined order of variables, with the scale values deriving from a sorted group of magnitudes (Fulfer, Tyler, & Choi, 2007). The ordinal measures can be converted into definite quantities by use of a predefined Likert scale, which should in turn facilitate the application of many statistical tools to analyze the collected data.
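As a minimal sketch of this conversion, ordinal emotion labels might be mapped onto a five-point Likert scale as follows. The labels and weights used here are illustrative assumptions, not values taken from the study instruments:

```python
# Illustrative sketch: converting ordinal descriptions into Likert
# scores so that standard statistical tools can be applied.
# The labels and weights are hypothetical examples.
LIKERT = {"none": 1, "mild": 2, "moderate": 3, "severe": 4, "chronic": 5}

def to_likert(responses):
    """Convert a list of ordinal labels to numeric Likert scores."""
    return [LIKERT[r] for r in responses]

# Hypothetical responses from four children.
scores = to_likert(["mild", "severe", "mild", "chronic"])
```

Once in numeric form, the scores can be summarized with the median and mode, as the ordinal level of measurement permits.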
Thirdly, the interval scale of measurement will be applied. This builds on the ordinal scale by adding pre-determined, equal intervals between scale values. It will be used to measure variables such as the length of prison visits (in hours), the children’s ages, and the number of visits (Sanders, 2003). Interval measures may be represented by the mode, arithmetic mean, and median, with statistical dispersion assessed in the standard manner, using the range, interquartile range, and standard deviation. This level of measurement lays emphasis on measures of central tendency, which limits it in some ways, but it still has utility (Kothari, 2009).
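The central-tendency and dispersion measures named above can be sketched with Python’s standard library. The visit lengths below are hypothetical data, not study results:

```python
import statistics

# Illustrative sketch: summary statistics for an interval-scaled
# variable (hypothetical visit lengths in hours).
visits = [1.0, 2.0, 2.0, 3.0, 4.0, 6.0]

mean = statistics.mean(visits)
median = statistics.median(visits)
mode = statistics.mode(visits)
stdev = statistics.stdev(visits)               # sample standard deviation
value_range = max(visits) - min(visits)        # range
q1, _, q3 = statistics.quantiles(visits, n=4)  # quartile cut points
iqr = q3 - q1                                  # interquartile range
```

All of these summaries are meaningful only at the interval level; at the ordinal level, only the median, mode, and rank-based spread would be defensible.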
- Validity
- Content Validity- In order to ensure that the instruments used to collect data are suitable for the data type and nature of the research, face and curricular validation will be used. Under the face validity approach, the appropriateness of the instruments will be assessed by surveying lay subjects, members of the public, and child psychologists (Garcia-Berthou & Alcaraz, 2004). The research assistants will approach them and ask whether the questions contained in the questionnaires can accurately measure the emotional effects on children in the given circumstances and across social and economic environments. This approach seeks to establish the logic of the measures used to gauge the emotional effects on the children who visit incarcerated parents (Sanders, 2003). The recommendations from the child psychology experts, lay subjects, and other stakeholders will be applied in the design of the data collection, measurement, and analysis instruments. Curricular validation will be used to assess whether the instruments will lead to the answering of the research questions. Effectively, the research procedures will be assessed according to their ability to answer the research questions, and those that do not will be set aside (Kothari, 2009).
- Empirical Validity- It may be necessary to administer tests in intimidating prison environments, and to ensure that the instruments remain valid there, the results obtained by the instruments will be compared against related measures and predictors of the behavior (Webster, Phalen, & Lichty, 2000). The correlation coefficient between the scores from established measures used to measure or predict the behavior and the results from the proposed study instruments will be assessed to ensure that it is high. While some differences are bound to exist, an index of association will be calculated and used to assess the relationship.
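The index of association referred to here is typically the Pearson product-moment correlation. A minimal sketch, using hypothetical scores rather than study data, might look like this:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: the proposed instrument vs. an established measure.
instrument = [2, 4, 5, 3, 1]
established = [1, 3, 5, 4, 2]
r = pearson_r(instrument, established)
```

A value of r close to 1 would indicate that the proposed instruments track the established measures closely, supporting empirical validity.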
- Construct Validity
The research will compile multiple sources of evidence to establish whether the tests measure the emotional health of children visiting their incarcerated parents. Discriminant and convergent validation will be used to ensure construct validity (Sanders, 2003). The tests will be chosen on the basis of the theoretical evidence that backs their association with the emotional attributes measured by the instruments. The research hypotheses are founded on theories and instruments such as the Hull-Spence theory and Taylor’s Manifest Anxiety Scale, and thus the instruments used to measure the emotional effects should conform to this theoretical evidence (Babbie, 2004). Convergent validation will help establish that two instruments intended to measure the same construct are correlated, while discriminant validation will show that tests of theoretically distinct constructs are not strongly correlated (Mastaglia, Toye, & Kristjanson, 2003).
- Reliability
In order to guarantee the reliability of the data, a pilot study will be conducted to determine (i) the sample size that will ensure that the results are representative and (ii) whether the results can be replicated by other studies. Three separate pilot studies will be carried out to fine-tune the measurement instruments (Garcia-Berthou & Alcaraz, 2004). Further, the test-retest method will be applied twice to determine the stability of the measurement instruments in producing consistent results (Babbie, 2004). In this way, the component of error can be determined through the repetition of the measurement process on the same subjects under (nearly) identical conditions, coupled with a comparison of the results.
The test-retest method does, however, suffer from a problem: it sensitizes the respondents to the nature of the study, which may engender attitude changes among the test subjects (Wilson, Pan, & Schumsky, 2012). In addition, conducting a pilot study on a limited group of people can easily introduce measurement errors that may carry over to the main study. Precision of measurement exists only if the instruments produce as little variation in results across occasions as possible. The pilot study subjects will be interviewed and re-interviewed after a month (under “identical” conditions) to reduce the possibility of memory biasing the second pilot study’s results, which would otherwise produce an artificial lack of variation in the measurements (Webster, Phalen, & Lichty, 2000).
Two more methods will be employed to ensure the reliability of the measurement instruments. The internal consistency of the instruments will be ensured by the conversion of the individual measures in the study into composite measures (Kothari, 2009). The reliability of the measures produced by the instruments will then be determined by evaluating the correlation between separate halves of the sub-tests. This is indicated by the Pearson Product Moment Correlation Coefficient, which will be adjusted using the Spearman-Brown Prediction Formula to estimate the reliability of the full-length test from the correlation between the halves (Mitchell & Jolley, 2012). Cronbach’s Alpha, the mean of the coefficients across all possible split-halves, is an acceptably good measure and will also be calculated to bolster the reliability of the instruments and the results collected.
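The two internal-consistency statistics named above can be sketched as follows. The item scores are hypothetical, and the functions are illustrative implementations of the standard formulas rather than the study’s actual analysis code:

```python
import statistics

def spearman_brown(r_half):
    """Spearman-Brown prediction: reliability of the full-length test
    estimated from the correlation between its two halves."""
    return 2 * r_half / (1 + r_half)

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns, each column
    holding one item's scores across all respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(statistics.pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / statistics.pvariance(totals))

# Hypothetical data: 3 questionnaire items scored for 4 respondents.
items = [
    [2, 4, 3, 5],
    [1, 3, 3, 4],
    [2, 5, 4, 5],
]
alpha = cronbach_alpha(items)
```

Note that the split-half correlation understates the full test’s reliability, which is why the Spearman-Brown correction is applied; Cronbach’s alpha sidesteps the choice of a particular split.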
- Strengths and Weaknesses of a Questionnaire/Interview
The proposed study will employ a questionnaire, as well as phone and face-to-face interviews based on the questionnaire, to collect data. Questionnaires have multiple advantages that make for greater reliability and validity. To begin with, a questionnaire allows the data to be collected in a standardized manner, which bolsters objectivity and certainty as compared to other instruments (Kothari, 2009). This is especially important when gathering data from children because it helps avoid drifting off the subject. Further, data may be collected from an expansive group of children without introducing bias, and since the questionnaire is designed and standardized, it is easier to replicate the results. The ability to replicate the results of the study indicates increased reliability of the instruments. In addition, the fact that questionnaires are tested during design to ensure that they accurately measure the behavior being studied, while remaining the same across multiple studies, implies that both the validity and the reliability of the instrument are high (Babbie, 2004). Finally, the questionnaire can be administered by multiple researchers without adverse effects on the reliability or validity of the instruments.
However, questionnaires suffer from multiple disadvantages that hurt validity and reliability. Firstly, questionnaires collect data after the occurrence of the events under study, which reduces the subjects’ ability to recollect them (Sanders, 2003). Further, standardized closed-ended questions anticipate the research subjects’ responses, while open-ended questions lead to the collection of huge amounts of data that render standardized analysis difficult (Fulfer, Tyler, & Choi, 2007). This in turn affects the validity of the data collected as well as the reliability of questionnaires as a data collection tool. Other disadvantages include the unwillingness of participants to complete the questionnaires; the fact that the subjects are children, which requires the questionnaires to be completed by the research staff through interviews; and the fact that children have short memories and attention spans, which affects their ability to answer lengthy questionnaires (Garcia-Berthou & Alcaraz, 2004).
References
Babbie, E. (2004). The Practice of Social Research, 10th edition. New York: Wadsworth, Thomson Learning Inc.
Fulfer, J., Tyler, J., Choi, N., et al. (2007). Using indirect questions to detect intimate partner violence: The SAFE-T questionnaire. Journal of Interpersonal Violence, 18, 295-309.
Garcia-Berthou, E., & Alcaraz, C. (2004). Incongruence between test statistics and P values in medical papers. BMC Medical Research Methodology, 4, 13.
Kothari, C. R. (2009). Research Methodology: Methods and Techniques. New Delhi: New Age International.
Mastaglia, B., Toye, C., & Kristjanson, L. (2003). Ensuring content validity in instrument development: Challenges and innovative approaches. Contemporary Nurse, 14(3), 281-291.
Mitchell, M., & Jolley, J. (2012). Research Design Explained. New York: Cengage Learning.
Sanders, C. (2003). Application of Colaizzi's method: Interpretation of an auditable decision trail by a novice researcher. Contemporary Nurse, 14(3), 292-302.
Usher, K., Holmes, C., Lindsay, D., & Luck, L. (2003). PRN psychotropic medications: The need for nursing research. Contemporary Nurse, 14(3), 248-257.
Webster, J., Phalen, P., & Lichty, L. (2000). Ratings Analysis: The Theory and Practice of Audience Research. Lawrence Erlbaum Associates.
Wilson, F., Pan, W., & Schumsky, D. (2012). Recalculation of the critical values for Lawshe’s content validity ratio. Measurement and Evaluation in Counseling and Development, 45 (3) , 197-210.