Society today faces a disquieting series of developments affecting practical academic life. Arguably, these developments risk degrading the pivotal role of cognitive endeavor and reasoned evaluative criticism in achieving, maintaining, and enhancing good social justice programs. It is not enough simply to teach the methods and means of doing evaluation; it is vital to convey the idea of evaluation as a realistic intellectual outlook and disposition toward social and political life. Overall, there is a pressing need to understand the place of humanistic evaluation in social justice programs. According to Mulgan, the metrics used in evaluating social justice programs assume that value is objective and discoverable through analysis. Mulgan notes that current metrics conflate three very different roles: managing internal operations, accounting to external stakeholders, and assessing social impact. For Mulgan, the only meaningful concept of value is what emerges from the interaction of injections and withdrawals in a social setting. Arguably, a hybrid approach complicates the model, but it is the best way to begin examining societal matters such as poverty, the environment, and population wellness in a manner that is relatable across the spectrum.
Cultural competence in humanistic evaluation in social justice programs refers to the awareness and appreciation of various social and cultural groups and the ability to communicate and work effectively across those groups in the design, implementation, interpretation, and analysis of an evaluation. Logistically, it means that the interventions, programs, measures, and standards used in an evaluation must be valid and relevant for all groups crucial to the social justice program. At its highest level, cultural competence can be embedded in every phase of the evaluation process. Schwandt (2002) states that cultural competence not only serves the objective of social justice but also furthers data accuracy and validity, through empirical and conceptual strategies for comprehending the differing values, behaviors, belief systems, and social regularities of the cultural groups involved in social justice programs.
Methodology and Analysis
According to Schwandt (2008), methodological decisions are critical in evaluation because they involve diverse considerations. Among the choices an evaluator must make in the evaluation process is whether to use quantitative, qualitative, or mixed-methods data. Schwandt (2008) indicates that qualitative data contain no numerical information; they are characteristically textual and consist of narrative observations. Quantitative data, on the other hand, are numerical. Mixed methods combine associated qualitative and quantitative types of data in a single evaluation; notably, mixed methods strengthen an evaluation (Schwandt, 2008).
Schwandt (2008) notes that the most important rule when deciding on data methods or procedures is that evaluators must select the approach best suited to answering the evaluation questions, given the context of the humanistic evaluation in a social justice program and its stakeholders. Data collection methods commonly used in an evaluation include surveys, tests, direct measures of particular constructs, individual interviews, focus group interviews, observations, and artifacts (Schwandt, 2008).
On occasions when the evaluator is actively collecting data from program stakeholders, the sampling methods must be determined; two frequently used types are purposive and random sampling (Schwandt, 2008). Purposive sampling is when the evaluator justifies the selection of certain people for the sample for definite reasons (Schwandt, 2008). Random sampling, on the contrary, is when an evaluator uses a sampling method that allows chance to determine who is included in the sample; random sampling justifies every selection in itself, since it reduces the potential for bias (Greene, 2005). Greene (2005) indicates the kinds of situations in which purposive and random sampling are each apt. Three other types of sampling, opportunistic sampling, snowball sampling, and convenience sampling, are all possible methods when using qualitative data procedures (Greene, 2005). A minimal sketch of the purposive/random distinction appears below.
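To make the distinction concrete, the following sketch contrasts the two approaches using Python's standard library. The stakeholder roster and the selection criterion are hypothetical illustrations, not drawn from the sources above.

```python
import random

# Hypothetical stakeholder roster for a social justice program evaluation.
stakeholders = [
    {"name": "A", "role": "participant", "years_in_program": 4},
    {"name": "B", "role": "staff", "years_in_program": 1},
    {"name": "C", "role": "participant", "years_in_program": 2},
    {"name": "D", "role": "funder", "years_in_program": 6},
    {"name": "E", "role": "participant", "years_in_program": 5},
]

# Random sampling: chance alone determines inclusion, which reduces
# the potential for selection bias (Greene, 2005).
random_sample = random.sample(stakeholders, k=3)

# Purposive sampling: the evaluator selects cases for definite reasons;
# here (hypothetically) long-tenured participants who can speak to
# program change over time.
purposive_sample = [
    s for s in stakeholders
    if s["role"] == "participant" and s["years_in_program"] >= 4
]

print("Random:", [s["name"] for s in random_sample])
print("Purposive:", [s["name"] for s in purposive_sample])
```

The design point is simply that the random draw can be defended by the procedure itself, whereas the purposive filter must be defended by the stated reasons encoded in the criterion.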
While considering the several other sampling methods available, it is important to understand that no method is categorically excluded from evaluation. Any method may be used with appropriate justification and in the proper context. Likewise, any quantitative, qualitative, or mixed method of data analysis may be necessary in an evaluation. Greene (2005) states that general considerations when determining the methods and analysis to be used in an evaluation include limitations of time, budget, and data. Greene (2005) adds that another basic consideration when designing methods for social justice program evaluation is the evaluation questions themselves: because evaluation questions are supposed to guide and structure the evaluation, it follows that they must also direct the methods and the data collected.
Utilization-Focused Evaluation
Michael Quinn Patton developed Utilization-Focused Evaluation (UFE), an approach founded on the belief that an evaluation should be judged by its usefulness to its intended users. Evaluations must therefore be planned and conducted in ways that enhance the likely utilization both of the findings and of the process itself to inform decisions and improve performance.
Patton notes that there are two fundamental elements of UFE. First, the primary intended users of the evaluation should be unmistakably identified and personally engaged at the initial stages of the evaluation process. Second, evaluators must ensure that the intended uses of the evaluation by these primary intended users guide every other decision made about the evaluation process. Instead of focusing on general and abstract uses and users, UFE concentrates on actual and specific uses and users. The evaluator's job is to facilitate decision making among the people who will use the findings, not to make decisions independently of the intended users. Research on evaluation shows that intended users are far more likely to use evaluations they understand and feel ownership of, and they understand and feel ownership when they are actively involved. Patton states that by actively involving primary intended users, the evaluator prepares the groundwork for use.
Utilization-Focused Evaluation can be used for various types of evaluation, i.e. summative, formative, impact, and process, and can accommodate diverse research designs and kinds of data. The UFE framework is adaptable in many ways depending on the situation and its requirements. The original framework Patton gives for UFE consists of the following steps (a minimal sketch for tracking them appears after the list):
- Building program and organizational readiness for a utilization-focused evaluation, and assessing and enhancing the evaluator's willingness and ability to carry one out.
- Identifying, organizing, and engaging the primary intended users, and conducting a situational analysis jointly with them.
- Identifying and prioritizing the primary intended uses by deciding on priority purposes.
- Focusing on critical evaluation questions and checking that fundamental areas of evaluation inquiry, i.e. implementation, outcome, and attribution questions, are adequately addressed.
- Determining the intervention model of change being evaluated and negotiating appropriate methods to generate credible findings that support deliberate use by intended users.
- Simulating use of the findings: the evaluation's equivalent of a dress rehearsal.
- Gathering data with the prevailing intention to use, and organizing and presenting the data for interpretation and use by the primary intended users.
- Preparing the evaluation report to facilitate use, and disseminating significant findings to expand influence.
- Following up with the primary intended users to facilitate and enhance use, and conducting a metaevaluation of use, i.e. being accountable, learning, and improving.
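Because the process is collaborative and iterative, some evaluators track these steps explicitly. The sketch below is one minimal, hypothetical way to do so in Python; the step wording paraphrases the list above, and the class and field names are illustrative assumptions, not part of Patton's framework.

```python
from dataclasses import dataclass, field

@dataclass
class UfeStep:
    """One step in a utilization-focused evaluation checklist."""
    description: str
    completed: bool = False
    notes: list[str] = field(default_factory=list)

# Step descriptions paraphrase the list above; everything else is illustrative.
checklist = [
    UfeStep("Build program, organizational, and evaluator readiness"),
    UfeStep("Identify, organize, and engage primary intended users"),
    UfeStep("Prioritize intended uses and purposes"),
    UfeStep("Focus critical evaluation questions"),
    UfeStep("Determine the intervention model and negotiate methods"),
    UfeStep("Simulate use of findings (dress rehearsal)"),
    UfeStep("Gather and organize data with intended use in mind"),
    UfeStep("Prepare the report and disseminate findings"),
    UfeStep("Follow up with users and metaevaluate use"),
]

# Mark a step done and record a decision made with the intended users.
checklist[1].completed = True
checklist[1].notes.append("Primary users identified at kickoff meeting")

for step in checklist:
    print(("[x]" if step.completed else "[ ]"), step.description)
```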
Principally, there is a basic, step-by-step logic to UFE; in practice, however, the process is not straightforward. For instance, the evaluator may discover that new users are important midway through the evaluation process, raising new questions amid alternative decisions. Patton notes that there is no necessarily clear and clean distinction between the process of focusing a social justice program's evaluation questions and the process of generating methodological options: questions inform options, and options can inform questions. Patton states that UFE requires active, expert direction and facilitation by a social justice program evaluation facilitator. He further notes that the time and resources available for a social justice program evaluation must be explicitly negotiated, built in from the start, and specified in the agreement.
The fundamentally collaborative nature of UFE demands time and active participation, at every step of the social justice program evaluation process, from those who will use the evaluation results. Notably, additional resources may be required if uses and users are incorporated after the evaluation has begun. The financial resources available for the evaluation must be clearly stated, and they must cover more than analysis and reporting: resources that facilitate use must also be available. While conducting a UFE in social justice programs, the evaluator must give careful consideration to how everything done, from the first step to the last, will affect use.
Evaluator Roles and Strategies in Social Justice Program Evaluation
The foundation and evolution of process-based influence are clearly visible in participatory evaluation models. Conspicuously, stakeholder involvement contributes to process-based social justice program evaluation, since all the parties involved shape the entire process. Stakeholders may express a level of passion for participatory evaluations unlike that seen in other evaluation models. An evaluator may take on a variety of roles to generate information useful to a social justice program for making programmatic alterations. Certainly, whether an evaluation is formative or summative, there is strong consensus that evaluation data are put to use in particular ways when evaluators actively advocate for that use.
According to Patton, utilization-focused evaluation emphasizes use throughout the entire process. Nonetheless, the influence of the evaluation process itself is often an afterthought, rarely integrated into the goals of evaluation strategies. Moreover, the unintended influence of evaluation, according to Patton, can be more impactful than its projected influence. Few evaluation methodologies in social justice programs incorporate a goal of deriving benefit from the evaluation process as a whole; a notable exception is the methodology that involves stakeholders in learning through the evaluative inquiry process. Patton indicates a need to examine the relationship between social justice program learning and process use, and to understand the factors that foster or obstruct the extent to which learning-oriented evaluation approaches result in instrumental and conceptual use.
Even the precise evaluation methodology employed within a social justice program can have an enormous effect on how the evaluation process influences society. The roles evaluators play in a social justice program, their philosophies, and the interpersonal dynamics involved all hold potential for advancing process influence. An internal evaluator uses various evaluator roles to interact with the program's stakeholders, and these roles and interactions add to the organization's willingness to make programmatic adjustments as a result of the evaluation process. The chief objective of evaluation for a social justice program is to assess whether actual outcomes are consistent with the desired social justice trend; notably, the desired outcome is derived from the program's vision and mission statements. To support the society in accomplishing its objective, it is the responsibility of the evaluator to use a collaborative approach in which the beliefs and knowledge of all stakeholders are respected. The evaluator's responsibility is not to influence stakeholders to adopt the evaluator's own perspective on what really matters. Prior to planning the evaluation, the evaluator should engage stakeholders in exercises to clarify their opinions and in dialogue to move toward consensus on what will be evaluated in any specific instance.
Validity and Reliability in Social Justice Program Evaluation
All data used for evaluation purposes must be assessed for quality (Patton). Patton further notes that the reliability and validity of social justice program evaluation findings are crucial considerations in judging the quality of an evaluation. The reliability of findings can be examined through internal consistency coefficients for scales, inter-rater reliability for observations, and the triangulation of methods. Schwandt (2002) states that triangulation of techniques occurs when multiple data points, collected separately through multiple methods, are examined concurrently in an attempt to triangulate the research findings. This method allows a test of consistency, since it identifies the convergent and divergent findings among the multiple data collection methods applied. A minimal sketch of the first two reliability checks follows; the triangulation point is taken up below.
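As one concrete illustration, the sketch below computes two standard reliability statistics in Python with NumPy: Cronbach's alpha as an internal consistency coefficient for a multi-item scale, and Cohen's kappa as an inter-rater reliability measure for coded observations. The survey responses and rater codes are hypothetical; the sources above name these concepts but do not supply data or code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency of a scale: rows = respondents, cols = items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def cohen_kappa(r1, r2) -> float:
    """Chance-corrected agreement between two raters' categorical codes."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    categories = np.union1d(r1, r2)
    p_observed = np.mean(r1 == r2)             # raw agreement rate
    p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical 5-item survey scale answered by six respondents (1-5 Likert).
scale = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
    [4, 4, 4, 5, 4],
    [1, 2, 1, 2, 1],
])
print(f"Cronbach's alpha: {cronbach_alpha(scale):.2f}")

# Hypothetical codes two raters assigned to the same ten observations.
rater_a = ["pos", "neg", "pos", "neu", "pos", "neg", "neu", "pos", "neg", "pos"]
rater_b = ["pos", "neg", "neu", "neu", "pos", "neg", "neu", "pos", "pos", "pos"]
print(f"Cohen's kappa: {cohen_kappa(rater_a, rater_b):.2f}")
```

High alpha suggests the scale items hang together as one construct; kappa above raw agreement matters because it discounts the agreement two raters would reach by chance alone.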
Accordingly, any divergent findings are later used as opportunities for deeper insight into the link between the phenomenon under study and the inquiry approach (Schwandt, 2002). Schwandt (2002) also acknowledges that convergence can serve as evidence for quality criteria critical to internal validity in social justice program evaluation. As such, combining evidence from various sources and methods through triangulation allows evaluators to assess their confidence in the validity of the information obtained; convergence of findings raises confidence that the construct of interest has been captured.
The evaluation of an evaluation, or metaevaluation, is also conducted to improve the quality of evaluation findings by probing for errors or biases in the evaluation process. Evaluators may, for example, inadvertently underrepresent some stakeholder groups in a social justice program or overlook cultural differences that change the meaning of evaluation findings. Notably, a metaevaluation can reveal, and sometimes prevent, such misrepresentations.
Reporting Practices in Humanistic Evaluation in Social Justice Programs
Creating environments conducive to humanistic evaluation practice in social justice programs is an integral part of increasing the possibility that the evaluation process will create change at the societal or individual level. Greene (2005) elaborates that reporting for a program evaluation is typically specified as a section of the evaluation contract and carried out throughout the evaluation process. Reporting may entail presentations, progress-report memos and emails regarding preliminary findings, and the closing evaluation report. Ongoing, reliable communication throughout the evaluation process is an important facet of evaluation use; as such, evaluators place interim reports throughout the evaluation. Significantly, interim reports can be aligned with the evaluation milestones that follow data collection and data analysis, and with program milestones related to the budget cycle or scheduled meetings.
According to Greene (2005), the format of reporting depends on the intended audience and on the number of evaluation reports produced to communicate results to the various reporting groups. A typical evaluation report commences with a summary: an outline of essential findings and recommendations, together with the evidence in place to support its claims. Greene (2005) states that other sections classically incorporated are an introduction to the rationale and audiences of the evaluation, the evaluation objectives, a description of the evaluation, and the evaluation questions; followed by a review of evaluation procedures, findings and results, and conclusions and recommendations. A report should also have an appendix of supporting documents related to data collection, analysis, and interpretation. Even though these are typical, traditionally formatted reports, all of this information can be conveyed through alternative forms depending on the targeted audience and its requirements.
In conclusion, notions of accountability, standards, and quality are ubiquitous in most societies globally; arguably, we would be hard pressed to get through life without appraisals of any kind. The benefits of a humanistic evaluation process in social justice programs are seldom tied to the goals of particular evaluation methodologies, yet the evaluation process is a breakthrough idea that has helped define and influence modern evaluation practice. An elementary caution about taking on the role of analyst in social justice programs is apparent: professionals who become involved in areas in which they are not qualified are practicing unethically, and possibly illegally. An evaluator must regard the social justice program as a living, evolving entity, and must not counsel individuals within the program, as that is the responsibility of licensed professionals. The role of social justice program leaders is to raise and guard against ethical and boundary concerns; evaluators, for instance, have their own beliefs and values and must be skilled at disclosing them. Humanistic evaluation of social justice programs is necessary if society is to progress within this realm.
References
Mulgan, G. (2010). Measuring social value. Stanford Social Innovation Review.
Greene, J. (2005). Evaluators as stewards of the public good. University of Illinois at Urbana-Champaign.
Patton, M. Q. Utilization-focused evaluation. Retrieved from http://www.pol.ulaval.ca/perfeval/upload/publication_195.pdf
Schwandt, T. (2002). Evaluation practice reconsidered. New York: Peter Lang.
Schwandt, T. (2008). Educating for intelligent belief in evaluation. American Journal of Evaluation.