Just like anything in life, you have to take the good with the bad, and this is true for science fair competitions. While there is much I appreciate about these research events, such as the student-to-student interaction, I feel we need to be careful that the judging mimics what scientists do in real life, and that these events encourage students to emulate how scientists view learning.
Why do we judge student research? Teachers will tell you that research events are wonderful culminating experiences that help students showcase the hard work they have put into their research. It gives them an authentic audience with which to share what they have learned, and that, in and of itself, makes it valuable. So the next question is "Are scientists judged on their work?" The answer is yes. This is why scientists must publish their findings in scientific journals. The scientist's methods, statistical analysis, and conclusions are carefully scrutinized by scientists in the same field of study. Another way in which scientists are "judged" is when they try to procure funding. Again, a team of experts reviews the purpose of the research, the methods, and whether the potential results will address society's needs. Scientists are thus competing against colleagues within their respective fields to design strong research that will answer interesting questions. In this way, student research events do mirror what scientists do in real life.
The next logical question, then, is how closely the judging at a student research event emulates the peer evaluation used in the scientific community. Ask anyone who has hosted a student research event, and they will tell you that procuring judges is one of the biggest challenges. Event organizers must clarify the purpose of the judging process in order to determine who is qualified to judge. Depending on the level of the research, an individual who simply understands the scientific research process might be sufficient; parents of science students, for example, might be able to serve as judges. However, if the content of the research is advanced, parents are less likely to see past the jargon and judge the methods and conclusions with appropriate confidence.
The best judges are individuals from industry, research scientists, and undergraduate and graduate students who have done research themselves, ideally judging work within their own fields of expertise. Unfortunately, it is extremely difficult to get this caliber of judge at these events. Often they are too busy, and the institutions in which they work do not emphasize the importance of volunteering time for activities like student research. Said another way, there is no extrinsic motivation or support from their institutions to participate in such events. Therefore, only those who intrinsically believe in supporting up-and-coming STEM leaders take time out of their schedules to serve as judges. But even if event organizers can recruit highly qualified judges, there is the issue of consistency in how students' research is evaluated. Those in industry or academia often don't understand the level at which high schoolers do research; they may expect too much or too little. In my experience, judges are often impressed with the level at which students work, and the scores are high across the board, making it difficult to determine winners.
The unspoken message students receive about having to "complete" their research is something event organizers need to concern themselves with. Most rubrics leave little room for "unfinished" research. However, I feel this is more an attitude issue than a rubric issue. Those organizing research events should emphasize to teachers and judges that as long as students can give a good explanation of what their data currently say, and of what they still need to collect and discover... THIS is good science. What we should not emphasize is that the research must be completed, with all questions answered. While most teachers do not do this intentionally, the sense of urgency created by a dated event makes students feel like they must have all their questions answered! But in reality, as I always say, "The more you know, the more you know you don't know!"
In my STEM Student Research Handbook, I include a judges' rubric as Appendix F. (Sorry, no download of this file.) I have yet to find a perfect rubric, and every time I judge, I find new items I want to add or delete.
Based on my most recent judging events, I have created a 100-point rubric for you to download. Here is the breakdown:
- Background = 15 points
- Methods and Hypothesis = 15 points
- Results (tables/graphs) = 9 points
- Conclusions = 36 points
- References = 3 points
- Communication Skills (both written and verbal) = 12 points
- Visual Appeal = 9 points
- Professionalism = 1 point
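As a quick sanity check, the category weights above can be tallied to confirm they total 100 points. The short sketch below simply restates the list above in code; the category names are abbreviated for readability:

```python
# Category weights from the 100-point poster rubric listed above.
RUBRIC = {
    "Background": 15,
    "Methods and Hypothesis": 15,
    "Results (tables/graphs)": 9,
    "Conclusions": 36,
    "References": 3,
    "Communication Skills": 12,
    "Visual Appeal": 9,
    "Professionalism": 1,
}

total = sum(RUBRIC.values())
print(total)  # 100
```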
The objective rubric is meant to be used for scoring purposes only. The last page includes a space for judges to share with the student researcher which aspects of the project were done well and where the project could be improved. Judges return the rubric to the event organizer; the last page can then be given directly to the student, or returned to the teacher at the end of the event so that the student can review the comments later.
Download the new STEM Student Research Poster Rubric, written by Sara McCubbins (also an NSTA Press author) and me.
At the last CeMaST High School Research Symposium, we had judges rank the posters to help us make distinctions between them, since so many received high scores. While we did provide guidelines, we did not have judges score with a rubric. Next time, I believe we should use a combination rubric-and-ranking system, so that we have a quantitative rubric score but also force the judges to compare the posters to one another!
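One possible way to combine the two systems is to blend each poster's rubric score with points derived from the judges' forced rankings. The sketch below is purely illustrative, not a method we have used at an event; the 50/50 weighting, the function name, and the sample scores are all hypothetical:

```python
# Illustrative sketch: blend rubric scores with forced rankings.
# Weights and all sample data are hypothetical, not from an actual event.

def combined_scores(rubric_scores, rankings, weight=0.5):
    """rubric_scores: {poster: rubric points out of 100}.
    rankings: one list per judge, posters ordered best to worst.
    Returns {poster: combined score on a 0-100 scale}; higher is better."""
    n = len(rubric_scores)
    # Convert each judge's ranking into points: best poster gets n, worst gets 1.
    rank_points = {poster: 0.0 for poster in rubric_scores}
    for ranking in rankings:
        for position, poster in enumerate(ranking):
            rank_points[poster] += n - position
    for poster in rank_points:
        rank_points[poster] /= len(rankings)  # average across judges
    # Blend: both components are scaled to 0-100 before weighting.
    return {
        poster: weight * rubric_scores[poster]
        + (1 - weight) * (rank_points[poster] / n * 100)
        for poster in rubric_scores
    }

# Hypothetical example: three posters with uniformly high rubric scores.
scores = combined_scores(
    {"A": 95, "B": 93, "C": 96},
    rankings=[["B", "A", "C"], ["B", "C", "A"]],
)
```

In this made-up example, poster B ends up with the top combined score even though it did not have the highest rubric score, which is exactly the kind of distinction the ranking step is meant to surface when rubric scores cluster near the top.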
What judging issues have you had? Why do you think we have students judged at research events? Do you think we are sending the right message to students?
by Dr. Darci J. Harland