Posted: June 2nd, 2015

Topic: Evaluation Framework


Approach: for impact evaluation, refer to Tables 3.1 and 3.2 in Owen's book, pages 52 and 53.

Goal-Free Evaluation: A Potential Model for the Evaluation of Social Work Programs (Impact Evaluation)

  • This paper introduces goal-free evaluation to social workers, providing a brief history of the approach and discussing several of its key features.
  • As defined in the paper, goal-free evaluation is an approach in which the evaluator consciously avoids learning the declared objectives or goals of the program (Youker 2013). All evaluation is therefore conducted by observing and measuring the program's positive and negative effects, outcomes or impacts on its consumers.
  • This requirement on the evaluator also serves the purpose of the approach: to study all program influences, rather than limiting the enquiry to outcomes that reflect program objectives (Owen 2006).
  • The author notes that focusing on goals and objectives may prevent the evaluator from recognising unintended positive and negative side effects (Youker 2013). Owen (2006) adds that these side effects may be just as important as the intended outcomes.
  • The article presents four principles that govern goal-free evaluation (Youker 2013):
    • Identify relevant effects to examine without referencing goals and objectives.
    • Identify what occurred without the prompting of goals and objectives.
    • Determine if what occurred can logically be attributed to the program or intervention.
    • Determine the degree to which the effects are positive, negative or neutral.
  • The evaluator's ignorance of the stated goals maximises the independence of the approach.
  • In a goal-free evaluation, the balance of power is transferred from management to consumers: the evaluator analyses only the consumers' needs and outcomes, and judges the program on the actual observable outcomes for its consumers (Youker 2013).

References

Youker, BW 2013, ‘Goal-Free Evaluation: A Potential Model for the Evaluation of Social Work Programs’, Social Work Research, vol. 37, no. 4, pp. 432-438.

713 words

What are some of the key issues to note when deciding on and developing tools to collect data using this method (e.g. strengths and limitations)?

Chosen Owen's (2006) form: Impact Evaluation. Approaches: objective-based and needs-based.
Data collection method: use of intervention and control groups

When deciding:
  • This method is decided before the implementation of the program.
  • Data collection is integrated with quantitative methods and analysed to give comparisons at different time marks from the beginning to the end of the implementation (Luo & Dappen 2005).

When developing tools:
  • Data collection is developed during the implementation of the program through mixed methods.

Strengths of the method (Luo & Dappen 2005):
  • Content standards can be used for instructional planning and reporting, to communicate the standards effectively, and to conduct successful professional development.
  • Using mixed methods to collect data increases the possibility of eliciting new ways of looking at the issues.
  • A mixed-method design not only captures the various levels of outcomes but also evaluates the different facets of the action process of the objectives, which improves the implementation of the projects.
  • Integrating the methods gives results from which one can make better and more accurate inferences.

Limitations of the method (Luo & Dappen 2005):
  • Evaluation professionals need broader technical skills: expertise in conceptualising complicated objectives, designing and implementing different methods, and analysing, interpreting and integrating the findings.
  • The approach is time-consuming and more costly.

Ahmadian, L, Salehi, NS & Khajouei, R 2015, 'Evaluation methods used on health information systems (HISs) in Iran and the effects of HISs on Iranian healthcare: a systematic review', International Journal of Medical Informatics, vol. 84, no. 6, pp. 444-453.

Luo, M & Dappen, L 2005, 'Mixed-methods design for an objective-based evaluation of a magnet school assistance project', Evaluation and Program Planning, vol. 28, no. 1, pp. 109-118.

Owen, J 2006, Program Evaluation: Forms and Approaches, 3rd edn, Allen & Unwin, St Leonards, NSW.

Waller, A, Girgis, A, Johnson, C, Mitchell, G, Yates, P, Kristjanson, L, et al. 2010, 'Facilitating needs based cancer care for people with a chronic disease: evaluation of an intervention using a multi-centre interrupted time series design', BMC Palliative Care, vol. 9, p. 2.

Observations

When deciding:
  • As mentioned by Owen (2006) and shown in the evaluation by Luo and Dappen (2005), observational data cannot be absent when collecting data for preordinate research designs.
  • Because observations play an important part in conducting preordinate research designs, the method is decided at the same time as the design, before the implementation of the program.

When developing tools:
  • Periodic visits are conducted to gather information from stakeholders against the given objectives (Luo & Dappen 2005).

Strength of the method:
  • Accompanied by surveys and interviews, observations contribute to improving the process of implementing the standards (Luo & Dappen 2005).

Limitation of the method:
  • Those responsible for observations need to be careful to conduct the method properly (Ahmadian, Salehi & Khajouei 2015).

Interview

When deciding:
  • Similarly to observations, this method is decided before the implementation of the program.

When developing tools:
  • Along with the observations during visits, participants are interviewed to collect information.

Strengths of the method:
  • Interviews are useful for clarifying and interpreting the findings obtained from the statistical analysis, so that improvements can be made in the later stages of the program (Luo & Dappen 2005).
  • The method also helps to create in-depth interaction between evaluators and participants (Luo & Dappen 2005).
  • Interviews avoid a limitation of some other methods, namely participants' differing conceptual understanding, because the questions can be explained to them clearly (Ahmadian, Salehi & Khajouei 2015).

Limitations of the method:
  • Participants in an interview are required to have a certain standard of language skills (Waller et al. 2010).
  • Interviewers also need to be patient and have fluent language skills to process the information properly (Waller et al. 2010).
  • Interviews rely on narrative data and may not reflect the actual behaviour of users (Ahmadian, Salehi & Khajouei 2015).

What I have learnt: At the beginning of my evaluation practice, I saw myself with many blind spots in how to apply theory to practice, and with lots of questions that needed to be answered before constructing an evaluation framework. Through this Sharing Learning Activity, I have taken initial steps on this path and learnt many fascinating lessons from peer-reviewed articles. Clearly, each data collection method has both pros and cons, which requires evaluators to be skilful in choosing and applying methods within an evaluation. Owen (2006) makes the same point: there is no general formula for obtaining information for a given question, so evaluators must have a range of skills, including both collecting and analysing data for evaluation questions. Recent studies suggest that mixed methods can be a good option for evaluators to overcome the weaknesses and build on the strengths of individual methods. Despite the time, higher cost and other limitations involved, I believe that mixed methods, used properly by professional evaluators, can produce an excellent evaluation.

Tutor comment

Hi Tam

I might say this to many of you, but I am varying it a bit depending on the form, approach and methods chosen – so have a look at my comments to others about the other methods. You have control groups, so I am going to focus on this.

You hit the nail on the head in your lessons-learnt section when you said that an evaluation 'requires evaluators to be skilful in choosing and applying methods' and that 'mixed methods can be used properly by professional evaluators to conduct an excellent evaluation'.

Think about defining the terms 'intervention' and 'control group' so they are clearly defined within the method. I would like to point you to the Glossary of Cochrane terms:

http://community.cochrane.org/glossary/5#lettera

Control groups were long considered the gold standard for demonstrating program impact, but there is a possible ethical dilemma in this situation. One problem with using no treatment for the control group is the ethics of withholding an intervention in a 'live' program situation, which can be considered unethical practice.

If the project intervention initially appears to be effective and successful in reaching its goals, should project services continue to be withheld from the control group while data are collected to further prove project effectiveness? Would the answer differ if the project provides life-saving interventions? These types of questions need to be considered if you use a control group method.

You might like to look at this book: Research Methods for Generalist Social Work, 5th edition, by Christine Marlow. Go to page 107 and have a look at the chapter sub-heading 'Ethical issues in program evaluation design: assignment to comparison or control group'. The library should have this book.

Hence the concept of 'do no harm': 'Rights of human subjects: evaluations should be designed and conducted to respect the rights and welfare of human subjects'.

BUT I would also like you to think about the order and design of the methods as part of your evaluation assignment 2: if you were going to proceed with an impact evaluation, the results can be influenced by the order in which the methods are undertaken.

You have it all there. For assignment 2 you need to include the evaluation question, as this will decide the form, the approach and then the order in which you will apply the methods. For example:

  • Do you do observation first and interview second?
  • Or do you do interview first and observation second?
  • How will the control group be used, or will the program intervention be delayed for some participants?
  • And so on.

As you can see, there are two or three different administrative techniques, so you can end up with varying and differing results.

You will need to define the evaluation question to drive the order of the methods. Administration is a key part of an evaluation: what is the rationale for choosing these methods and administrative techniques? This will need to be drawn out in assignment 2.

I hope this has helped. It is very much about what you want to achieve and why: answering your evaluation question will drive the choice of methods and their administrative techniques. As I said above, control groups were long considered the gold standard for demonstrating program impact, but how they are applied and administered in practice is the key.

well done

Reflection 1: Hi Tam, I've chosen to do a reflection on your summary 🙂

I like how you stated in the first sentence what is involved in the form and then went on to list the data collection methods in the table; it reassures me about what is involved in the impact evaluation form.

From reading about your first method, the use of intervention and control groups, I learnt that this is decided upon before implementation of the program, rather than whenever the need to use the method is felt. As I learnt in a previous summary, quantitative and qualitative data can interconnect with each other; is this possible with intervention and control groups? I hope to learn more about this and use these skills in my studies and career. You also mentioned that there are many positives to this method, as you can develop theories, communications and mixed-method designs, but these can only make a significant positive impact if the professional is equipped with the technical skills of interpreting and integrating; I will remember this for future reference.

For the observational method, I like how you referenced quality information under the heading, reinforcing what it is about. I didn't realise that observational data cannot be absent from these research designs, so I will remember this critical information for my studies, along with the fact that observations play an important role in conducting preordinate research designs and, like the first data collection method, are decided upon before a program has started. I also didn't realise how much observations rely on surveys, and I learnt that they improve the process of implementing standards. The limitations were fewer than I expected, consisting only of human observational error.

For your last data collection method, the interview, I already had a brief understanding of what is involved and you have reinforced what I thought I knew. However, I didn't know that interviews avoid the limitation of differing conceptual understandings through questions being explained clearly; this links back to the limitation of needing an expert with strong skills to conduct the interviews, as this can affect the results significantly.

In your summary of what you have learnt, I agree: I now understand that there is a greater need to decide in advance which data collection methods would be most appropriate, rather than switching methods mid-program as the need is felt. You have reinforced a lot of information for me and I have learnt a few extra things from all the data collection method summaries; I will use and apply this information in my future studies and career.

Reflection 2 Hi Tam,

I have chosen to reply to your summary.

After reading your learning and reflection, I have learnt that there are three major data collection methods for impact evaluation: the use of intervention and control groups, observation, and interview. The most interesting point I found in your submission is that 'there is no general formula for obtaining information for a given question so evaluators must have a range of skills, including both collecting and analysing data for evaluation questions' (Owen 2006). In addition, to strengthen the benefits and reduce the weaknesses of the data collection methods for this form, it is fundamentally important to use mixed methods and to decide the order in which those methods are used.

Before reading your summary, I did not have any idea about data collection methods for impact evaluation, especially the use of intervention and control groups. In particular, I learnt that the use of intervention and control groups, developed during the implementation of the evaluation program, aims to 'use content standards for instructional planning and reporting' (Luo & Dappen 2005). I have chosen the Clarificative form for my assignment 2. With this form, observation and interview are two essential methods for data collection, so your information helps me to enhance my knowledge about these two methods and shows me how they are used in different contexts. One key point that I can apply to my assignment 2 is that participants and interviewers need a certain standard of language skills for an interview to yield a range of accurate information.

In short, your summary is informative, useful and interesting. I learnt that evaluators are required to have specific and important skills to appropriately choose and apply tools in practice to achieve the best outcomes for an evaluation program. I now understand when we should decide to choose this form, when we should develop the data collection process, and the strengths and limitations of three different data collection methods. These will be important lessons for my assignment 2.

References

Luo, M & Dappen, L 2005, ‘Mixed-methods design for an objective-based evaluation of a magnet school assistance project’, Evaluation and Program Planning, vol. 28, no. 1, pp. 109-118.

Owen, J 2006, Program evaluation: forms and approaches, 3rd edition, Allen and Unwin, St Leonards, NSW.

Reflection 3

Hey Tam, I’ve chosen your learning and sharing activity as my second practice reflection.

Within your submission on impact evaluation data collection methods, the first concept that was helpful to me was your description of intervention and control groups, more specifically the 'when deciding' section. There you showed that data collected through quantitative methods should be analysed to give comparisons at different time marks (Luo & Dappen 2005). After this, your description of the use of mixed methods showed me that a mixed-method approach can be more effective, as it has the potential to gather more data. This idea of using multiple collection methods carried over into your description of observational data collection, where you stated that combining interviews and surveys with observation improves standards (Luo & Dappen 2005). Additionally, you showed some of the limitations apparent within observation: observers have to be especially careful to eliminate the bias they bring, which you demonstrated an understanding of through your reference to conducting the method properly (Ahmadian, Salehi & Khajouei 2015). Finally, in your account of the third data collection method, the interview, you established another link, stating that both interviews and observations are decided before the implementation of the program.

Because I have also chosen impact evaluation as my form, I found your reflective activity especially useful. With the knowledge I have gained from both your ability to link different methodologies and your baseline understanding of the topic, I feel better equipped to produce a high-quality paper using impact evaluation data collection methods. On top of this, my future career in medicine will be one in which informal observations and interviews will be my main source of information, so this will be imperative for my success.

Reference list

Ahmadian, L, Salehi, NS & Khajouei, R 2015, ‘Evaluation methods used on health information systems (HISs) in Iran and the effects of HISs on Iranian healthcare: A systematic review’, International Journal of Medical Informatics, vol. 84, no. 6, pp. 444-453.

Luo, M & Dappen, L 2005, ‘Mixed-methods design for an objective-based evaluation of a magnet school assistance project’, Evaluation and Program Planning, vol. 28, no. 1, pp. 109-118.

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Workshop 5

Some of the key issues that need to be considered when disseminating evaluation findings

To conduct an evaluation, there are many stages to go through, from the beginning to a satisfactory conclusion. One important stage is the dissemination of findings, which involves strategy and reporting issues and, according to Owen (2006), requires communication skills. It is undoubtedly critical to collect and manage the data effectively to answer the evaluation question at hand. However, to achieve an impact on decision-making, it is also important for evaluators to use techniques for disseminating the findings and to produce a report that transfers information effectively (Owen 2006).

As Osterling and Austin (2008) state, dissemination requires a series of steps to transfer knowledge to a target audience group, for example the distribution of written research materials. Because dissemination is a complicated process, this stage can be influenced by a number of factors, both individual and organisational (Osterling & Austin 2008). Individual barriers include isolation from the knowledge generated by the study, or simply not being aware of the research, while organisational factors include staff and management not supporting the implementation of the study (Osterling & Austin 2008). Communication barriers may include a lack of availability of, or access to, reports, or a study not being presented in an easily understandable fashion (Osterling & Austin 2008). This is why dissemination requires evaluators to be responsive to audience needs during the evaluation process and attentive to the communication strategies used in the study (Owen 2006). The lesson I take from this is the significant role of the relationship between evaluators and target audiences. To influence program stakeholders and keep clients informed about all aspects of the study, evaluation practitioners have to use many techniques skilfully. Owen (2006) also states that it is fundamental for clients to comprehend the evaluation findings and understand their implications. Kang, Anderson and Finnegan (2012) published an article about the evaluation practices of US international NGOs and made some points about the dissemination of evaluation results. They argue that INGO evaluators need to strengthen accountability to beneficiaries, partly by improving the dissemination of evaluation results (Kang, Anderson & Finnegan 2012). Involving beneficiaries may help evaluators learn which program changes beneficiaries are most interested in, and assist in monitoring and measuring how changes are being implemented (Kang, Anderson & Finnegan 2012).
The authors also expressed some disappointment at the relatively limited attention given to the media and the public as targets for inclusion in evaluations, given that the evaluation literature argues for broader participation (Kang, Anderson & Finnegan 2012). Although it is critical to transfer information to program donors and staff members, the essential dissemination of findings to other stakeholders cannot be ignored. Owen (2006) makes the same point about the reporting process: 'the evaluator should bear in mind the sophistication of the audiences and, if possible, ensure that findings from the analyses can be presented in ways that make sense to them'.

Because the dissemination of findings is a complex step, it needs to be prepared and planned in the early stages of the study. Lorig et al. (2005) studied the aids and barriers to the dissemination process, including attributes of the program, administrative factors and organisational factors. Having pinpointed the major hindrances to dissemination, the authors also noted the importance of strong financial and administrative support, with dedicated staff, for the implementation of the program (Lorig et al. 2005). Planning the dissemination of findings at the beginning, and communicating with participants, also proved effective for widespread dissemination (Lorig et al. 2005). In general, from the peer-reviewed articles and Owen's book, I learnt that the dissemination of findings is as important as data collection and management, and that this step influences different stakeholders and target audiences. Because of the sophistication of this stage, it requires evaluation practitioners to apply many techniques effectively and to have skilful communication.

 

 

References

Kang, J, Anderson, SG & Finnegan, D 2012, ‘The evaluation practices of US international NGOs’, Development in Practice, vol. 22, no. 3, pp. 317-333.

Lorig, KR, Hurwicz, M, Sobel, D, Hobbs, M & Ritter, PL 2005, ‘A national dissemination of an evidence-based self-management program: a process evaluation study’, Patient Education and Counseling, vol. 59, no. 1, pp. 69-79.

Osterling, KL & Austin, M 2008, ‘The Dissemination and Utilization of Research for Promoting Evidence-Based Practice’, Journal of Evidence-Based Social Work, vol. 5, no. 1-2, pp. 295-319.

Owen, J 2006, Program Evaluation: Forms and Approaches, 3rd edn, Allen & Unwin, St Leonards, NSW.

739 words

Reflection (1) Hi there

Your summary provides a great description of why dissemination is important. As you describe, the dissemination of information can provide insight and support, and can influence policy and programs. The information that is disseminated needs to reflect these ideas and provide evidence for the rationale behind the dissemination. Determining what information is disseminated, and how, involves considering the needs of the audiences, as you describe. This consideration determines what information should be withheld and when information should be disseminated, led by whichever method would be most effective in creating the greatest influence from the evaluation findings. For my final assignment, these issues must be addressed when deciding what method and form the dissemination of findings will take. It has also shown me the reasoning behind evaluators' actions regarding dissemination: things are seldom done by accident but result from careful consideration.

The content of the data that is disseminated also poses risks, including the inclusion of details about participants, stakeholders or outcomes that may be inappropriate for the audience. These risks must be addressed prior to dissemination. The most important aspect of dissemination is communication: how information is presented determines how effective the dissemination method is at reaching and influencing the target audiences. Findings need to be presented in a way that results in the best understanding for audiences. These issues also need to be considered when completing my final assignment, when deciding on methods for dissemination, keeping in mind which will be most effective at reaching the population and evoking the greatest influence. This also carries over to my future study and career, where consideration of my audience will determine the most effective method for presenting information.

Reflection 2 

Hi there,

Your submission for the topic of dissemination provided a solid backing of information surrounding the communication skills required to appropriately disseminate the information and data collected throughout the evaluation.

I think your points regarding the knowledge requirements of staff, management and participants are quite key. This is something for us all to be aware of, as the dissemination of our findings needs to be legible for all levels of stakeholders, from the boardroom to the community in general.

The lack of availability is something to address when planning for evaluation results to be disseminated. We cannot rely on the information naturally reaching whoever needs it; we must make the communication channels clear.

Your point regarding the relationship between the evaluators and the target audience echoes information put forward in previous submissions. These talked about how important the relationship between the evaluator and the staff, participants and stakeholders can be, especially in the Interactive form and the action research and empowerment approaches in particular (Owen 2006). An established relationship allows the transfer of information to happen more easily, and could increase the potential for any recommendations or suggestions to be taken under advisement.

Other submissions by classmates have also talked about the attributes of a program affecting its dissemination into the wider community. This is probably because the better a program (and its evaluation) sounds, the higher its level of uptake. Much of this links back to your points about communication with participants, staff and external stakeholders, and about planning the dissemination approach at the beginning of the evaluation. These principles can help us build our plans, and can also extend beyond the classroom to our lives, jobs and careers.

Thank you for your input into these forums,

Clare

Owen, J 2006, Program Evaluation: Forms and Approaches, 3rd edn, Allen & Unwin, St Leonards, NSW.

Reflection 3 

Hey Tam, I’ve chosen your learning and sharing activity as my fifth practice reflection.

In your first article, written by Osterling and Austin (2008), you describe the use of steps in dissemination to transfer knowledge to the target audience, the transfer material in this case being written material gained from research. From this, the barriers associated with both individual and organisational factors in dissemination were explained: an individual barrier may be knowledge isolation, while an organisational one may be management and staff not supporting the implementation (Osterling & Austin 2008).

Moving on, you brought your discussion of dissemination back to a core ideal, noting that it is fundamental for clients to comprehend the evaluation findings and understand their implications (Owen 2006). This fundamental understanding of the use of dissemination is imperative to the effectiveness of information distribution, as creating information that nobody will use or need is pointless for both the evaluator and the stakeholders.

Finally, regarding the article by Kang, Anderson and Finnegan (2012), you spoke about the need to consult stakeholders and beneficiaries in order to gain a more insightful understanding both of what knowledge is needed and of where that knowledge needs to be aimed. This is imperative, as you link back to Owen (2006), which states that 'the evaluator should bear in mind the sophistication of the audiences and, if possible, ensure that findings from the analyses can be presented in ways that make sense to them'.

I have gained substantially from this reflective activity on dissemination, as a thorough explanation of the importance of understanding the stakeholders and audience has been presented to me. From this I have gained a better understanding of the importance of the need for, and distribution of, information.

References

Kang, J, Anderson, SG & Finnegan, D 2012, ‘The evaluation practices of US international NGOs’, Development in Practice, vol. 22, no. 3, pp. 317-333.

Lorig, KR, Hurwicz, M, Sobel, D, Hobbs, M & Ritter, PL 2005, ‘A national dissemination of an evidence-based self-management program: a process evaluation study’, Patient Education and Counseling, vol. 59, no. 1, pp. 69-79.

Osterling, KL & Austin, M 2008, ‘The Dissemination and Utilization of Research for Promoting Evidence-Based Practice’, Journal of Evidence-Based Social Work, vol. 5, no. 1-2, pp. 295-319.

Owen, J 2006, Program Evaluation: Forms and Approaches, 3rd edn, Allen & Unwin, St Leonards, NSW.

Reflection 4

hi Tam,

Through your sharing and learning, I gained a better understanding of strategies to disseminate evaluation findings. Furthermore, I saw that dissemination is one of the crucial steps in the evaluation process, as it helps to transfer information about the evaluation program to target audiences effectively.

To begin with, the distribution of written research materials, as you mentioned, can be affected by a wide range of factors, both individual and organisational (Osterling & Austin 2008). Individual aspects include knowledge isolation from the study or not being aware of the research, whilst staff and management are key organisational barriers. A key lesson that emerges for me is that communication skills are a crucial component of dissemination, since they play a vital role in collecting and managing data effectively to answer the evaluation question at hand (Owen 2006). To effectively disseminate the findings and conclusions of an evaluation, researchers should be responsive and take audience needs into consideration. What is more, I learnt that NGO evaluators 'need to strengthen the accountability to beneficiaries partly by improving dissemination of evaluation results' (Kang, Anderson & Finnegan 2012). Lorig et al. (2005) identify some barriers to the dissemination process, including 'attributes of the program, administrative factors and organizational factors', together with solutions. Moreover, financial and administrative support, with dedicated staff, can assist the implementation of the program (Lorig et al. 2005).

Briefly, I have realised the importance of dissemination in the evaluation process and its influence on different stakeholders and target audiences. It needs to be prepared and planned in the early stages of the study because of the complexity of this step (Lorig et al. 2005).

References

Kang, J, Anderson, SG & Finnegan, D 2012, ‘The evaluation practices of US international NGOs’, Development in Practice, vol. 22, no. 3, pp. 317-333.

Lorig, KR, Hurwicz, M, Sobel, D, Hobbs, M & Ritter, PL 2005, ‘A national dissemination of an evidence-based self-management program: a process evaluation study’, Patient Education and Counseling, vol. 59, no. 1, pp. 69-79.

Osterling, KL & Austin, M 2008, ‘The Dissemination and Utilization of Research for Promoting Evidence-Based Practice’, Journal of Evidence-Based Social Work, vol. 5, no. 1-2, pp. 295-319.

Owen, J 2006, Program Evaluation: Forms and Approaches, 3rd edn, Allen & Unwin, St Leonards, NSW.

Tutor comment

Hi Tam

You have picked up on a very good point, the 'relationship between evaluators and target audiences', as it can either enhance or inhibit the dissemination of the evaluation findings.

I totally agree that the 'sophistication of the audiences' adds to the complexity of disseminating to various groups, including stakeholders.

Your learning captures this by highlighting the skill set an evaluator requires.

You have some key points to consider for your assignment two with your evaluand. Well done

A

 
