My heart sank when Lola came into my class on
the afternoon of the science graduation test and exclaimed,
“Mrs. T., I’m sorry, but I ‘Christmas
tree-ed’ the science test. I just read the first ten
questions and bubbled in the rest.” I knew that Lola had the
knowledge and skills to be successful on the test. Why was she so proud
of not trying on the test? I believe the theory of self-efficacy, the
belief a student has in her own ability to complete the task, can at
least partially explain her attitude and her effort on the test
(Margolis & McCabe, 2006). How can I use my students’
individual self-efficacies toward science topics tested on the
graduation test to provide them with an effective review for the
science graduation test?
Over the last few decades, standardized testing
has taken command of educational systems worldwide: driving curriculum
changes, homogenizing teaching and learning, and quantifying the
inadequacies of U.S. students. Because of this testing, the nation
focuses its attention on the poor performance of American students on
mathematics and science tests (Chen, 2004). At the Covenant College
Educators’ Conference, Howard Gardner explained to the
audience that he “… often had the opportunity to
engage in discussions with the heads of education in many
countries,” and that without exception,
“educational leaders around the world state that their
greatest mission for their country’s educational system is to
have the highest scores on standardized tests” (public
presentation on February 23, 2006). This statement gives a very clear
indication of the importance of the performance of our students on
high-stakes tests. Consequently, providing students with the necessary
knowledge and skills to be successful on standardized tests is an
important aspect of a teacher’s job.
While standardized test scores have been
important to parents, teachers, and administrators for decades, these
scores take on an even more significant role because of current
educational legislation, specifically the Adequate Yearly Progress
(AYP) requirement of the No Child Left Behind (NCLB) legislation (No
Child Left Behind Act of 2001). According to the Georgia Department of
Education website, AYP “is an annual measure of student
participation and achievement on statewide assessments and academic
indicators” (DOE, What is AYP, 2006). At present, AYP determination is based on student performance on the language arts and mathematics high school exit exams: either a school meets AYP or it does not. When a school does not meet AYP, it joins other poor-performing schools on a Needs Improvement list. If a school is on the Needs Improvement list for two consecutive years, it moves to the list of Poor Performing Schools.
However, beginning in 2007, science exit exams
will also factor into determining AYP. The current law requires only that every state give a science test by 2007. The legislation does
not set standards for the science test; therefore, no one knows exactly
how the science test will affect the determination of AYP. Because
students consistently perform poorly on the science portion of the
Georgia High School Graduation Test (GHSGT) throughout the state, there
is much concern over the science scores. Although the metro-area
schools perform much better than the state average, our school, City
High School (CHS), is on the lower end of the metro-area range, with
only 79% of the students passing the test in 2005.
Apparently, the students are not retaining the content from year to year, and the eleventh-grade teachers need to strengthen the review for the test. Although each year the
students’ performances on these tests receive more and more
emphasis, test scores on the science portion of the GHSGT are not significantly improving. While we are
beginning to see a small increase in the science scores, the passing
rate for first-time test takers in science is still
significantly lower than the passing rates for language arts and math.
An instructional gap might exist when students perform lower on
standardized tests than in their individual science classes. With a
failure rate of 21%, it is logical to assume that instructional gaps in
science do exist for the students.
Assuming that an instructional gap exists,
science educators should ask two important questions: (1) Why are
students consistently scoring poorly on the science GHSGT; and (2) What
can teachers do to improve their students’ chances of success
on the science GHSGT? Although interested in solving the issues
inherent in both of the preceding problems, eleventh-grade science
teachers should focus on solving the second question since their
students are the students who will take the GHSGT during the spring of
their junior year. While not responsible for teaching the students all
the content tested, these teachers are their students’ last
hope for an effective review that could improve their chances of
passing the test. Since test scores have not significantly improved
using any of the current methods of teaching the content and reviewing
the material, high school science educators need a new approach.
As the science department chair, my objective
was and continues to be to improve City High School students’
performances on the science portion of the Georgia High School
Graduation Test. Because the GHSGT tests the students on biology,
chemistry, and physics, we have the daunting task of re-teaching the
previous material effectively, while the students are involved in
learning physics for the first time. I
wanted to explore whether allowing students to exercise their
self-efficacies toward science content by choosing their topics is a
more effective teaching strategy than assigning the topics to the
students based on deficiencies identified by the diagnostic test.
Not only do educators want our students to have
the content and skills necessary to succeed on this test, but we also
want the test scores to improve in light of many external factors such
as home environment, work schedules, substance abuse, and pressures to
succeed. One such external factor is the AYP requirement of No Child
Left Behind (No Child Left Behind Act of 2001). The hope is that by
providing students with a more effective review for the science GHSGT,
the instructional gap will shrink and the students will be better
prepared to recall the information on the test, thereby improving
their test scores.
I began to believe that a review unit
incorporating Multiple Intelligences theory (MI) and Problem-Based
Learning strategies (PBL) could provide students with a more effective
review of the content tested on the GHSGT. Consequently, this more effective review would contribute to greater self-efficacy in my
students, building more confidence for their success in taking the
science portion of the Georgia High School Graduation Tests. Howard
Gardner’s MI theory could provide a pathway for students to
learn and retain more content. In this theory, Gardner (1983, 1999) postulates that humans possess eight different intelligences, with different ones being more highly developed in different individuals. Conceivably, a science review unit that
incorporated MI theory would provide students with the opportunity to
use their more highly developed intelligences while reviewing content
for the science graduation test. Following the completion of this unit,
the students should retain more content, build their self-efficacy
toward science, and, consequently, improve their scores.
A second learning strategy that could increase
students’ knowledge and skills is problem-based learning
(PBL). In PBL, students must research information to solve a problem
and produce a product. Since the person doing the work of learning is
the person who retains the most content, the incorporation of PBL into
a review of content for the science graduation test should improve
students’ knowledge base and retention of information
(Jensen, 2000). Again, greater self-efficacy should result, which in
turn should result in improved scores on the GHSGT.
I first implemented this MI- and PBL-based
Graduation Test Review Project as a pilot test during the fall term of
2005. Gains shown by comparing pre-test to post-test results of the
forty-eight students indicated only a 0.9 average increase. As I
pondered over why this increase was so slight, a couple of ideas came
to mind. First, I administered the post-test on December 14, not an
optimal time for adolescent concentration. I have little influence on
the concentration abilities of teenagers the week before winter break.
However, the second idea really caught my attention. As the final step
of the project, the student-teams presented their websites to the
class. As I observed my classes, it struck me that the students were
paying as little attention to their peers as they do to their teachers during lectures. Therefore, I decided to change the way in which the
students share their websites. During spring term, 2006, the students
shared their websites in a Rotating Review museum approach (Kagan,
1992, as cited in Baloche, p. 107). I asked each group to write a
scavenger hunt and a quiz to accompany their website. The class then
spent two days moving from website to website, solving the scavenger
hunt, and completing the quiz. Following these two implementations of
the project, the passing rate of the science GHSGT improved from 79% in
2005, to 83% in 2006.
Because “choice is a major
motivator” (Pintrich & Schunk, 2002, as cited in Margolis
& McCabe, 2006, p. 222), I allowed the students the freedom to
choose their topics and their group members. In choosing their own
topics, I believed that the students were choosing to work with the
content they felt they knew and understood best. This would be a
real-life example of self-efficacy. I am now wondering three things:
(1) Could giving the students control over topics and group dynamics
allow for greater motivation in completing the review project? (2)
Could assigning groups and topics based on deficiencies shown on the
diagnostic test be more effective? (3) Which of these approaches would
influence Lola’s self-efficacy or confidence in her knowledge
of the material enough for her to walk into the science GHSGT and know
that she can pass it?
Therefore, the current research problem was to
determine if a difference exists between allowing the students the
power over the make-up of their groups and topics and assigning the
groups and topics to the students based on the deficiencies shown on
the diagnostic test. For the purposes of this study,
students’ choices of topics represent their confidence in
their content knowledge indicating the students’ efficacies
toward the topics. The real question was which of these teaching strategies would contribute more to my students’ self-efficacies toward the GHSGT in science. Consequently, as we taught
the graduation test review unit in our eight College Prep Physics
classes during fall term, the physics teachers attempted to quantify a
difference between allowing the students to exercise control in
choosing their topics and the assignment of topics to the students by
the teachers. In half of the classes, the students chose their own
groups and topics, while in the other four classes the teachers
assigned the group members and topics based on the students’
performances on a diagnostic pre-test.
As a science teacher and as the science department chair, I definitely feel the pressure to raise the passing rate for the science GHSGT. All science departments of which I
have been a member placed great importance on preparing our students
for the science test. We tried many different approaches and learning
strategies. Yet each year following the test, I would listen to the
testing proctors talk about how quickly the students completed the
test, that they completed the test far more quickly than the math or
language arts tests. The proctors stated their belief that the students
finished the test so quickly because they felt defeated, gave up, and
simply bubbled in random answers. This leads me to believe that, for
the most part, the students did not believe that they could pass this
test; they lacked self-efficacy for this test.
The purpose of this intervention study was to
explore whether students’ self-efficacies toward their
abilities to pass high-stakes tests can be influenced by allowing them more
control over their learning activities. Since the classes were heterogeneous in make-up, the demographic differences and ability levels of my eleventh-grade College Prep Physics students at a large suburban high school in the Southeastern United States represented our school community at large. The
independent variable was the teaching strategy we applied to each
group: students were given the control to choose their own topics and
groups or teachers assigned topics and groups according to previously
identified content weaknesses. The belief is that choice of topic
indicates the content about which the students feel more confident,
i.e. the topics about which they are more self-efficacious. For the
purposes of this study, self-efficacy was generally defined as the
students’ beliefs in their knowledge of the topics tested on
the science GHSGT (Bandura, 1982; Wang, 2001). The dependent variable
was the change in achievement exemplified by comparison of the
students’ scores on the pre-test and post-test.
The purpose of this study was to investigate allowing students to use their own self-efficacies toward science content in a problem-based learning strategy that incorporates multiple intelligences theory, in the context of a review of the high school science curriculum. The purpose of the review
of literature was to place this research in the historical context of
MI, PBL, and self-efficacy, to rationalize the scholarly importance of
the research problem, to quantify the research variables and to support
the methodological choice. Specifically, the review of the related
literature is intended to answer these three questions:
- What is the current research and practice of
MI, PBL, and self-efficacy?
- What are the effects of using MI as an instructional strategy?
- Can self-efficacy, in the form of student
choice, be incorporated into the classroom environment?
Multiple intelligences theory.
First published in Frames of Mind: The Theory of Multiple Intelligences (Gardner, 1983) and refined in Intelligence Reframed: Multiple Intelligences for the 21st Century (Gardner, 1999), Howard Gardner’s theory of multiple intelligences
continues to be an innovative way to consider human intelligence.
Rather than defining intelligence as a single score on a test that equates intelligence with thinking like a scientist, Gardner defines
intelligence as “a biopsychological potential to process
information that can be activated in a cultural setting to solve
problems or create products that are of value in a culture”
(Gardner, 1999, p. 33). To date, Gardner (1999) identifies and defines
eight intelligences. The combination of these eight intelligences makes
up each person’s overall intelligence. Individuals have some
intelligences more highly developed than others. Brief explanations of each of these intelligences follow:
- Linguistic intelligence relates to the
spoken and written word. Most individuals whose careers involve the
spoken and written word have highly developed linguistic intelligence.
- Logical/Mathematical intelligence relates to
a person’s ability to conduct problem-solving activities.
- Spatial intelligence relates to an
individual’s ability to work with patterns.
- Musical intelligence relates to a
person’s affinity for music.
- Bodily-Kinesthetic intelligence relates to the use of the human body, either whole or in parts. These are our athletes, dancers, and surgeons.
- Interpersonal intelligence relates to the
ability of an individual to interact with others.
- Intrapersonal intelligence relates to how
well a person understands himself/herself.
- Naturalist intelligence relates to an
individual’s ability to see patterns in nature and
distinguish between separate species. (Gardner, 1999; Lazear, 2000; Stanford, 2003; Osciak & Milhem, 2001)
The first two intelligences listed above are
the ones most highly valued in our traditional educational system.
Therefore, persons exhibiting strong academic prowess are most often
those with strong linguistic and logical intelligences. These are also
the persons who will score the highest on both paper-and-pencil
intelligence tests and standardized multiple-choice tests. The goal of
incorporating MI theory into the graduation test review unit is to
allow students whose stronger intelligences are not linguistic or
logical to use their naturally stronger intelligences as the method to
learn the material (Gardner, 1999; Lazear, 2000; Stanford, 2003; Osciak & Milhem, 2001). This, in turn, should give them the
knowledge base to score well on the standardized tests.
Educational trainer David Lazear has taken
Gardner’s eight intelligences and provided classroom teachers with a “toolbox” of activities they can incorporate into the classroom to allow students to use their more
dominant intelligences in activities and assessments. These activities
provide practical, easy-to-use resources to take the academically
written definitions of the intelligences and place them into a
user-friendly modality. With this toolbox in hand, an educator can
easily design instruction that incorporates activities for all
intelligences. Likewise, students may choose from the toolbox
activities to complete the tasks given to them by their teachers.
Goodnough (2001) reported on the successes of
one science teacher in incorporating MI theory into his classroom using
a case study method. The teacher, “Dave,” designed
his lessons to include several intelligences at once. He taught the
students about the theory and gave them an MI inventory to help them
identify their own strongest intelligences. He occasionally gave the
students the choice of which intelligence to use to solve a problem. He
used learning activities, such as the writing of raps to teach
vocabulary (activating the linguistic, logical, rhythmic, and
kinesthetic intelligences) and creating skits to demonstrate knowledge
(activating the interpersonal and kinesthetic intelligences), to teach
a six-week unit on astronomy. During this unit, both Goodnough (2001)
and “Dave” reported that the students were more
engaged by these activities and seemed to enjoy the work more. They
reported, however, that the teacher-made test results did not reflect a
significant increase in student performance. On the unit taught using
MI theory, the class average was 68% while the previous class average
was 64% (Goodnough, 2001). However, the test for the MI unit contained more questions requiring higher-level thinking skills than the previous test. It would appear that an increase in the class average on a test with a greater percentage of higher-level thinking questions represents a more significant gain than the four percentage points alone suggest. Multiple intelligences theory provides the
framework for all students to use their innate strengths in the pursuit
of knowledge. By allowing students to play to their strengths,
educators are providing all children with the ability to attain the
skills necessary to be successful both in the classroom and on standardized tests.
Problem-based learning.
“Problem-based learning (PBL) is an instructional strategy in which students actively
resolve complex problems in realistic situations” (Glazer,
2001, paragraph 1). PBL is one of several instructional strategies that
offer students the ability to assemble their own learning. These
strategies are collectively known as constructivism: the philosophy
that gives students the freedom to construct their own understanding
(Haney, 2003). PBL provides students with authentic situations or
problems to solve by working together in collaborative groups to
formulate an answer. These resolutions are most often in the form of
some type of physical product. One example of a textbook that incorporates problem-based learning is the American Chemical Society’s publication, Chemistry in the Community, or ChemCom, written as a series of problems that the students need to solve (Heikkinen, 2002).
For example, the students have to solve the mystery of a large fish
kill in a fictitious town, Riverwood. In order to discover the reason
behind the fish kill, the students must learn the chemistry content
necessary to identify the issues that could affect the fish in this
manner (Heikkinen, 2002). Once learned, the students must place this
content back into the scenario in order to propose a solution to the
problem. This learning strategy is a powerful tool in the hands of an educator.
Problem-based instruction originated for the
training of Canadian medical personnel and has since evolved to become
applicable to a myriad of instructional settings (Gijbels, 2005). PBL
shares many characteristics with project-based learning and learning by
design (Glazer, 2001). All three of these instructional approaches
provide the students with a learning-centered environment with which to
solve authentic problems. The basis of these strategies is that
students will recognize what information they need to solve the problem
(Glazer, 2001). This leads them to assess what information they already
know and what information they need to acquire via research. The
incorporation of this learning strategy into a review for the science
GHSGT will allow the students to focus on reviewing the content
necessary for successful completion of the test. By recognizing and
differentiating the content they already possess from the content they
need to acquire via research, students will be able to review for the
GHSGT in a more effective manner. Since each student’s
strengths and weaknesses are different, the use of problem-based
learning will allow for a more individualized instructional approach.
In problem-based learning, the classroom is no
longer a teacher-centered one; the focus of the instruction and the
classroom has shifted to become student-centric. The teacher serves as
a guide to take the students through the learning cycle, also known as
the PBL tutorial process (Hmelo-Silver, 2004). The steps of this
process, as outlined by Hmelo-Silver, are:
- Presentation of the problem.
- Analysis of the problem.
- Formulation of hypotheses.
- Identification of gaps in knowledge.
- Application of new knowledge. Once the
students address the learning issues, conduct research to resolve those
deficiencies, and gain new knowledge, the students are ready to apply
their new knowledge in evaluating the strengths of each of their
hypotheses. Here the students will design, organize, and implement
their plans for the graduation test review.
- Evaluation of their original hypothesis in
light of their new knowledge. (Hmelo-Silver, 2004)
The steps above show in detail the process of
using PBL as an instructional strategy to produce a more effective
review for the content tested on the science portion of the GHSGT. One
can see that while the teacher serves as a guide during the
instructional phase of this process, she must produce a problem that is
both authentic and challenging--a problem that requires the students to
access the appropriate content in order to find a solution. The teacher
must also provide the scaffolding necessary to support the students in
their research, leading them to the proper information to solve the
problem at hand (Lipsomb & West, 2004). Working along with the
teacher, the media specialist and technology specialist can also help
provide the students with the tools they need to successfully research
pertinent information and produce their websites and handouts.
Ideally, problem-based learning allows students
to direct and control their own learning, as well as allowing educators
to provide more individualized instruction, making the learning more
engaging to the learner. This type of instruction provides for greater
flexibility in the organization of the classroom. I hope that this will
move our classrooms away from the herding mentality. Herding is defined
by Prensky (2005) as “students’ involuntary
assignment to specific classes or groups, not for their benefit but for
ours” (p. 10). With PBL, students should choose their own
group members. Not only could students be given the opportunity to choose group members within their own classroom, but also the opportunity to choose partners from other classrooms as well. This way,
students from one teacher’s class may collaborate with
students in a second teacher’s class in order to produce a
better product. With the advent of the Internet and the communications
revolution that has ensued, students could meet, communicate, and learn
with other students from around the globe (Prensky, 2005).
Although preliminary research studies indicate that problem-based learning “supports students’ self-directed learning” (Moust, 2005, p. 63), at present, few
research studies focus on how well heterogeneous groups of high school
students are actually able to direct their own content learning. Most
of the research studies conducted on problem-based learning activities
involve medical students and gifted students (Hmelo-Silver, 2004, p. 249). While many 9-12 educators believe problem-based learning is a possible solution to lagging student engagement and achievement, most of the research so far does not involve high school
(9-12) students. One limitation to our research base is the lack of solid evidence of the effectiveness of PBL in the K-12 setting.
Student Choice and Self-efficacy.
Self-efficacy is the belief someone has in her
own ability to achieve a goal or to complete a task (Margolis &
McCabe, 2006). The student mentioned above did not believe she
possessed the knowledge and skills necessary to pass the GHSGT in
science. By spending insufficient time
answering the questions on the test, the student was exhibiting her
feelings that she could not succeed on the test, an indicator of low
self-efficacy (Bandura, Barbaranelli, Caprara, & Pastorelli,
1996). Therefore, a change in teaching strategies must occur in an
effort to help the students not only gain content knowledge, but also
gain self-confidence in their knowledge.
No personal belief is held more strongly “than people’s beliefs in their capabilities to exercise control over” themselves and their situations (Bandura, 1993, p. 118). The students themselves will explain
their behavior in rushing through the test and these explanations will
help them maintain control over the situation. “Students with a high sense of efficacy for accomplishing an educational task will participate more readily, work harder, and persist longer when they encounter difficulties than those who doubt their capabilities” (Bandura, 1999, p. 204). Tollefson (2002) explains that individuals with high self-efficacy will persist at a difficult task longer than individuals with low self-efficacy will.
Lola’s lack of persistence and her willingness to give up when completing the graduation test are indicators of her low self-efficacy. In all probability, it is not a lack of ability that
causes these feelings in the students, but rather the beliefs students
have in their abilities and the way those perceptions affect their
emotions (Seifort, 2004).
Therefore, science educators must face the
reality that it is not enough merely to present the material. Teaching
content in a relevant and meaningful manner will enhance the
students’ abilities to internalize the content. One way to
achieve this is to allow the students more control in their own
learning. According to Tollefson (2002), “Control of the
difficulty of the task and the amount of effort needed for a successful
achievement outcome is critical to developing outcome and efficacy
beliefs that promote achievement” (p. 69). By giving
students a measure of control over their instructional tasks, successes
on those tasks should indicate greater content attainment thereby
boosting their beliefs in their abilities to complete the tasks. Once
the students attain and internalize the content, their self-efficacy
toward science will strengthen.
As stated previously, the impetus for this
study was the poor performance of students on the GHSGT in science. In
2005, only 68 percent of the eleventh graders throughout the state of
Georgia met or exceeded the passing score of five hundred (Report Card,
2005, p.3). With the continued pressure of the No Child Left Behind
legislation (No Child Left Behind Act of 2001) and the looming promise
of the addition of the science exit exams to the 2007 Adequate Yearly
Progress determination, the importance of increasing the passing rate
of this test should not be underestimated. Not only will there be no
escaping from the requirement of passing these tests, but also no
escaping the ramifications of these poor performances on schools not
qualifying for AYP.
Since the beginning of the GHSGT in science,
science educators have tried many avenues to help our students review
the content and prepare for the test. However, failure rates remain
high. Hence, teachers need to employ methods that are more effective in
order to help more students pass the test. The purpose of this study was to determine whether the implementation of a graduation test review that integrates the use of multiple intelligence theory and problem-based learning strategies can provide students with the knowledge and skills necessary to be successful on the GHSGT. This research explored the effects of using a modified instructional model that integrates multiple intelligence theory into a problem-based learning unit in order to reach students’ individual intelligence strengths in a situated learning context. Therefore, the objective was to create more effective science review experiences to help students internalize the content at a deeper level, bolster their self-efficacies toward science content and the science GHSGT, and result in increased passing rates for the science graduation test.
Summary of Literature Review
The continuing poor performance of our students
on the GHSGT in science suggests that a change in teaching strategies
is needed. Based on the research, a graduation test review project was
written to incorporate the teaching strategies of Multiple Intelligence
Theory and Problem-Based Learning. MI theory seeks to allow the
students to use their strongest intelligences in both attaining new
content and reviewing content previously learned. The integration of
PBL strategies in the classroom gives the students an authentic
problem, which they must solve. Research indicates that both of these
strategies lead to greater gains in student content acquisition and
student achievement. Research also shows that when students are given some control over their learning, their feelings of being adequate to complete the work, their self-efficacies, increase. Therefore, the
amalgamation of MI theory, PBL strategies, and increased student
control should provide our students with the tools necessary to be
successful on the GHSGT in science.
This research study is based on the belief that
with the implementation of a graduation test review project that
integrates MI theory with PBL, the students would find more relevance
and meaning in the content. With this added relevance, the students
would make greater connections between science content and real-life
situations. These enhanced connections would result in greater
acquisition of content. This, in turn, would augment their
self-efficacies for both science content and the science graduation
test. The evidence of this increased self-efficacy was the increase in
time spent actually taking the test and improved passing rates for the science GHSGT.
If students are given the control to choose their
own topics during the completion of a problem-based learning review
unit for the graduation test in science, their content knowledge is
strengthened, their self-efficacies toward science content increase,
and their scores on the graduation test improve. For the purposes of
this study, the students demonstrated their self-efficacy toward
science in their choices of group members and review topics; the
students chose the content areas in which they felt the most
academically prepared. The evidence of this improvement was the change
in their scores from pre-test to post-test.
Site of Research
This study was conducted at a large, suburban
high school, which serves 3000 students. For the purposes of this
research, the high school was known as City High School. City High
School is located in the suburbs of a major metropolitan area in the
southeastern United States. City is on a 4 x 4-block schedule. On a 4 x
4-block schedule, students receive an entire year’s
instruction during one semester. Each class period, or block, is 93
minutes, with students attending four classes per day. Tables 4.1 and
4.2 illustrate the 2005 demographics for this high school.
Table 4.1 2005 Demographics of City High School delineated by race
Table 4.2 2005 Demographics of City High School based on socioeconomic groups
The students have previously taken and passed
the coursework necessary to be successful on the GHSGT. However,
traditional methods of instruction--lectures, laboratory activities,
and homework assignments--have not resulted in the students’
abilities to pass the GHSGT on their initial attempts. Traditional
review strategies have resulted in few gains in students’
performances on the GHSGT. Table 4.3 illustrates the results of the
2005 administration of the science GHSGT.
Table 4.3 Results of the 2005 Administration of the Science GHSGT
(Table columns: Test Ability Level, Number of Students)
This research study involved approximately 158
students in eight college prep physics classes with four different
teachers. The study included all students taking College Prep Physics
during the fall term of the 2006-2007 school year. All the students
were on the College Prep Diploma track, meaning that they were planning
to go straight from high school to some form of college. The College
Prep Physics classes were heterogeneous in make-up both in demographic
terms and in ability levels. The classes included both eleventh graders,
who had not previously attempted the GHSGT, and twelfth graders, who had
attempted the test at least once.
The classroom groups of students were
convenience groups because they were already intact groups, i.e. the
physics classes (Creswell, 2003). The four classes that comprised the
control group were Mrs. Alpha’s two classes, one of Dr.
Omega’s classes, and my own class. The other four classes made
up the experimental group: Mr. Gamma’s two classes, Dr.
Omega’s second class, and Mrs. Delta’s one class.
Four of the eight classes, and
approximately half of the students, comprised the control group while
the other half made up the experimental group. Since this study tested
the effects of student choice of science content on student success on
the science GHSGT, the teachers assigned the control group members to
groups and assigned their topics. For the experimental group, the
physics teachers allowed the students to choose both their group
members and their topics.
The tables below present the current
demographics of City High School as determined on September 1, 2006.
Table 4.4 below displays the demographics for race and ethnicity, while
Table 4.5 exhibits other non-racial demographics.
Table 4.4 2006 Demographics of City High School delineated by race
Table 4.5 2006 Demographics of City High School based on socioeconomic groups
The independent variable for this study was
student choice as an indicator of the students’
self-efficacies toward science content. The students fell into two
groups. The delineation of these groups was based on students being
allowed to choose their topics (experimental group) and students being
assigned to certain topics based on deficiencies that appear on a
diagnostic pre-test (control group). The dependent variable was the
difference in the scores from pre-test to post-test. Again, the
hypothesis was that if students choose their topics, rather than having
the topics assigned to them, then the difference between their pre-test
and post-test scores would be greater. An improvement from pre-test to
post-test should indicate an increased potential for the
students’ successes on the science GHSGT.
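As a minimal illustration of this gain-score computation (the student identifiers and scores below are invented for the sketch, not the study's data), each student's gain is simply the post-test score minus the pre-test score:

```python
# Hypothetical illustration of the dependent variable: per-student gain
# scores computed as post-test minus pre-test. All identifiers and
# numbers here are invented, not taken from the study.
pre_scores = {"student_a": 25, "student_b": 30, "student_c": 27}
post_scores = {"student_a": 28, "student_b": 29, "student_c": 31}

# The dependent variable: difference in score from pre-test to post-test
gains = {sid: post_scores[sid] - pre_scores[sid] for sid in pre_scores}
print(gains)
```

Under the stated hypothesis, the experimental group's gains would be expected to exceed the control group's gains.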
An Institutional Review Board (IRB) authorization form was completed
and filed with the University prior to the
beginning of the study. It is the responsibility of the IRB to protect
the rights and welfare of human subjects involved in research (Office
of the Vice President for Research, UGA website, 2006). The local
school system in which the study occurred also required the filing of
an IRB prior to the commencement of the study. Once permission to
research was granted, the teachers and students began to work on the project.
The teachers asked their students to complete a
Graduation Test Review project, from which the final product was review
websites. The teachers administered a diagnostic test to the students
as a beginning point for this project. They also surveyed the students
to determine the concepts tested on the GHSGT with which the students
felt most comfortable and least comfortable. Approximately half of the
students chose their group members and content area while the other
half had their groups and topics assigned to them based on deficiencies
shown on the diagnostic pre-test and the students’ own
perceived comfort levels.
In an effort to minimize bias, all the
classes for each physics teacher, except Dr. Omega, were part of the
same research group. For example, both of Mrs. Alpha’s
classes were members of the control group while all of Mr.
Gamma’s classes were members of the experimental group. Since
each teacher only interacted with one type of group, the likelihood of
bias should decrease. In a continuing effort to alleviate bias, every
classroom followed the same procedure. The project procedure follows.
- All students took the pre-test and completed
the initial self-assessment survey.
- All members of the experimental group chose
their own group members and topics.
- All members of the control group were
assigned to their groups and topics.
- All students participated in the project in
the same basic manner.
- pre-test and initial comfort survey
- topic acquisition
- website design and production
- worksheet and quiz production
- website sharing
- All students took the post-test.
Once the data were collected, t-tests were used
to determine if the results showed statistical significance.
“The t-test is used to determine whether significant
differences exist between means” (Cothron et al.,
2000, p. 140).
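As a hedged sketch of this kind of analysis, an independent-samples t-test can be computed with the Python standard library alone; the gain scores and the helper function name below are invented placeholders, not the study's data or method code:

```python
# Hypothetical sketch: a pooled-variance, independent-samples t-test on
# pre-to-post gain scores, using only the Python standard library.
# The score lists are illustrative placeholders, not the study's data.
import math
import statistics

def independent_t(a, b):
    """Return the pooled-variance two-sample t statistic and its df."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return t, na + nb - 2

# Illustrative gain scores (post-test minus pre-test) for each group
control_gains = [1, 0, 2, -1, 1, 0, 3, 1]
experimental_gains = [-2, 0, -1, -3, 1, -1, 0, -2]

t_stat, df = independent_t(control_gains, experimental_gains)
print(round(t_stat, 2), df)
```

The resulting statistic would then be compared against a t distribution with the returned degrees of freedom to obtain a p-value and judge significance at the 0.05 level.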
I filled the role of a participant-observer and
recorded the students’ reactions, facial expressions,
comments, and apparent levels of motivation. Following the completion
of the project, all the students in the eight classes of College Prep
Physics completed a student perception survey. The other physics
teachers also completed a survey concerning their observations and
perceptions about the project and the students’ motivation
during the implementation phase.
The study lasted six class days. During the
first four days, students researched their content areas and designed
and produced their websites, handouts, and quizzes. Because each student
group produced a topic-specific website, the remaining two days of
project time allowed for the sharing of the websites with the other
members of the class. Once the websites were completed, the teachers
published them on the internet to have them available for the students
to use later in the year, for students to show their parents, and as a
community service to provide review materials for other students who
were not involved in the project.
In this project, the student groups produced
websites to help other students review for the science GHSGT. Because
of a lack of website-producing software, the websites were simply
converted PowerPoint presentations of the material. To make more
effective review materials, the websites incorporated at least three
different multiple intelligences and included a worksheet to accompany
the site and a practice quiz on the information. The project required
the students to choose any three of the eight multiple intelligences
to incorporate into their presentations. This could be as simple as
including written information (verbal/linguistic), pictures and
diagrams (visual, spatial), or patterns found in nature (naturalist).
The following screenshots show two example project pages (see Figure
4.1 and Figure 4.2).
Figure 4.1 Example web page
Figure 4.2 Example web page
In order to accomplish this task, the students
required ready access to the mobile laptop carts, the Internet, print
texts, and printers. Once completed, the students shared their websites
with their classmates via a museum/conference set up. Computers were
set up around the room with the students’ websites and the
printed handouts available at each station. The students moved
throughout the room exploring each of the websites, completing the
worksheets and taking the practice quizzes. An evaluation rubric and
their scores on the practice quizzes comprised the final grades for the project.
At the beginning of the project, the physics
teachers administered a diagnostic pre-test to their students. They
also asked their students to rank the nine topics of the project in
decreasing order of comfort with the material. Based on the pre-test
and the self-assessment, Mrs. Alpha and Mrs. Beta assigned their
students to groups and topics. Dr. Omega also assigned groups and
topics in one of her two classes. These assignments were based on the
topics for which the students showed a deficiency and/or with which
they indicated a lack of confidence; this represented the control
group. The other teachers allowed their students to decide the makeup
and topics of their groups based on student choice; this was the
experimental group.
A combination of student surveys, teacher
surveys, personal discussions with other physics teachers, participant
observation, a personal log, and the change in student scores from
pre-test to post-test was used to produce data. The teacher
survey is located in Appendix B. Both surveys included six questions to
which the participants responded. The teacher survey also included four
open-ended, free-response questions to allow them to give additional
information to the researcher. I observed students in the other physics
classes as well as in my own class. Using this combination of
data-gathering techniques gave me a more complete view of both student
motivation and how motivation affects the students’ products.
At the conclusion of the project, the teachers
repeated the procedures from the start of the project. The teachers
administered a post-test to their students. If a statistically
significant difference existed between the pre-test to post-test gains
of the two groups, then a difference between the achievements of the
two groups would also exist.
Parker (1993) defines internal validity as
“the extent to which extraneous variables are
controlled” (p. 130). At least two threats to internal validity
exist for this study: history, or outside events that affect the
results of the study (Griffee, 2004; Parker, 1993) and testing (Parker,
1993). In this study, history might refer to any conversations between
students working on the same review topics in different
teachers’ classrooms. These conversations could result in the
students progressing at a more rapid pace than could occur in the
isolated classrooms alone; ideas from one group could lead to new
discoveries in the other groups that would not have occurred otherwise.
The second threat to internal validity in this study is the
incorporation of the pre- and post-tests. Two different situations could
occur from this method of testing. Either the students remember their
answers to the pre-test and therefore score better because of their
memories or they may become so tired of testing that they may totally
disregard the post-test, actually scoring lower than before (Parker,
1993). Therefore, if the students’ post-test scores are lower
than their pre-test scores, one could infer that they were rebelling
against taking another test.
“External validity threats arise when
experimenters draw incorrect inferences from the sample data to other
persons, other settings, and past or future situations”
(Creswell, 2003, p. 171). Since the sample size is approximately one
hundred fifty students, any results from this research study may be
generalizable to other eleventh grade populations.
As with any research study, limitations of the
research existed. “Bias may be activated by personal
infatuation with any current innovation and the strong belief that this
teaching is effective” (Griffee, 2004, p. 2). My personal
biases existed in the hope for a positive result and in the years of
experience I have as a classroom teacher. I wanted the students to
excel at this project just as I continue to want them to be successful
on the GHSGT. Since this study did not produce evidence that allowing
for student choice made a significant difference in the achievement of
the students, more research is needed to explore this important topic.
Upon reflection on the procedure and outcome of
this project, I wondered if the classroom teacher made a difference in
the outcomes of the study. Could the different teachers have had an
unwitting impact on the students’ performances, on the
post-test in particular? This is also a direction for further research.
Results and Discussion
This study includes three types of data: (1)
changes in scores from the pre-test to the post-test, (2) student and
teacher surveys, and (3) researcher observations. Each of these
modalities was designed to gather enough information to form a
more complete picture of the results of this project.
Prior to analyzing the data, Mr. Gamma
discovered that his post-test data was tainted. He left the post-test
as an assignment for the students while a substitute was in charge of
the class. Upon his return, the students told him that they believed
the post-test was a mistake since they had taken the pre-test.
Therefore, the data from his two classes were excluded from the
analysis. That lowered the number in the experimental group from 158 to
To determine if the differences between the
students’ pre-test scores and their post-test scores were the
result of the experimental treatment, t-tests were conducted. Table 4.6
shows the means and standard deviations for each group. The mean
pre-test score for the experimental group was 27.3 answers correct,
while the mean pre-test score for the control group was 27.41 answers
correct. The experimental group’s mean post-test score was 26.1,
compared to the control group’s mean post-test score of 27.89.
According to the gains between pre-test and post-test, the control
group outperformed the experimental group, with mean gains of 0.42 and
-1.26, respectively.
Table 4.6 Statistics on Means and Standard
Deviations for both groups.
A t-test was conducted to compare the mean
difference between the experimental and control groups. The t-test on
the group difference is t(116) = 1.98, p = 0.17, which is not
significant at the 0.05 level. Therefore, there is no evidence to
indicate that the two groups differ in their performance on the test.
All participants in this research were asked to
complete a survey about their experiences. One hundred fifty-eight
students returned their completed survey forms, which represents 100%
of the population of these eight classes. Because Mr. Gamma’s
surveys were completed on a different date than the post-test, there
was no evidence that the survey results were spoiled. Therefore, the
survey results from all students were analyzed. The student survey can
be found in Appendix C. Four of the five teachers, all but myself,
completed teacher surveys.
Student surveys were examined in two ways.
First, each student’s responses to the survey questions were
cataloged. Means, standard deviations, and t-tests were computed for
this data in an effort to quantify student perceptions. The tables
below illustrate the results from the individual student data.
Table 4.7 displays the means and standard
deviations for each of the six survey questions. The overall mean
response for the control group was 3.52, while the overall mean
response for the experimental group was 3.57. Based on this analysis,
there is no evidence to support an appreciable difference between the
control group and the experimental group.
Table 4.7 Statistics on Means and Standard Deviations for both groups on the Student Survey
(Table columns: Responses to Survey Questions, Mean, Standard Deviation, Standard Error of the Mean)
The t-test on the group difference is t(154) =
-0.389, which is not significant at the 0.05 level. Therefore, there is
no evidence to indicate that the two groups differ in their responses
to the survey.
The student survey data were also analyzed
based on the response levels for each of the six questions. While the
analysis of each individual student’s responses to the
questions does not provide evidence of a significant difference between
the two groups, an analysis of the total number of responses at each
scale level for each question gives a slightly different picture.
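The response-level tally described here can be sketched as follows; the response rows and the helper function are invented examples (coded 1 for strongly disagree through 5 for strongly agree), not the study's survey data:

```python
# Hypothetical sketch of the response-level analysis: count how many
# students chose each Likert level (1-5) for each survey question.
# The response rows below are invented, not the study's data.
from collections import Counter

# Each inner list is one student's answers to the six questions.
responses = [
    [4, 5, 3, 4, 5, 4],
    [3, 4, 2, 5, 4, 3],
    [5, 4, 4, 4, 5, 5],
]

def tally_by_question(rows):
    """Return, for each question, a Counter of scale-level frequencies."""
    return [Counter(col) for col in zip(*rows)]

for q, counts in enumerate(tally_by_question(responses), start=1):
    agree = counts[4] + counts[5]  # "agree" plus "strongly agree"
    print(f"Q{q}: {dict(sorted(counts.items()))}, agree-or-better = {agree}")
```

Tallying by scale level in this way surfaces patterns, such as a large "no opinion" block, that per-student means can hide.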
In Survey Question One, students responded to
the question of their level of enjoyment in participating in the
project. According to the data, the choice group enjoyed this project
more than the assigned group did. Fifty-one of the students in the
experimental group agreed or strongly agreed with this question while
only thirty-seven of the control group students felt the same way. The
most interesting statistic for this question is that thirty-one of the
seventy-nine students surveyed in the control group had no opinion
about their enjoyment of the project.
Survey Question Two queried the students
concerning their feelings of being prepared to take the GHSGT. Both
groups indicated feelings of preparedness for the GHSGT. Forty-seven of
the control group students and 41 of the experimental group students
answered with either agree or strongly agree. This data indicates that
the students’ confidence levels for success on the GHSGT were high.
For Question Three the students were asked to
indicate feelings of improvement in areas where they originally felt
weak. More of the control group students, those students assigned to
the topics by their teachers, felt that they were stronger in these
areas than the experimental group students did. This data indicates that
students assigned to their topics had greater feelings of improvement
than students who chose their topics.
Student responses to Survey Question Four
indicated that the majority of the students in both the control and
experimental groups felt that their website would be beneficial to
themselves and to others as the time for the GHSGT approaches.
Sixty-one students in the control group and 65 of the students in the
experimental group answered with either agree or strongly agree.
The students’ responses to Survey
Question Five provided evidence that they were proud of the work they
completed and of the product, their website, regardless of choosing
their topic or being assigned to it.
In answering Survey Question Six the students
exhibited how their confidence levels had increased from having
completed the project. The data shows that the students in the control
group felt more confident to succeed on the GHSGT than did the students
in the experimental group.
According to the findings, four positive themes emerged:
- The students enjoyed the project.
Eighty-eight students out of the 158 students surveyed indicated this
sentiment: 37 (46.8%) of the control group and 51 (64.6%) of the experimental group.
- The students felt prepared for the
administration of the GHSGT. Forty-seven students from the control
group and 41 students from the experimental group, totaling 88
students, indicated this feeling.
- The students felt their websites were
beneficial to themselves and to others. One hundred twenty-six
students, 61 students from the control group and 65 students from the
experimental group, responded with a positive reaction to this question.
- The students were proud of their work and
their product. Fifty-eight students from the control group and 61
students from the experimental group responded in this manner.
The responses from the teacher survey
questionnaire indicated that the other teachers involved in this
project felt positively about the project and their students’
successes. The open-ended responses from the teachers revealed four major themes:
- Students saw value in the activity, which
resulted in the students being more serious and dedicated to
completing the project.
- The cooperative learning environment was a
beneficial aspect of the project.
- Technology was necessary to the project but
was also its most negative aspect.
- The review unit should come later in the term.
As a participant-observer, I moved into and out of each of the eight CP
Physics classes while the students were working on this project. I
found that virtually all students were engaged in the task and
interested in creating a good product. The students communicated well
in each group whether they chose the group members and topics or their
teachers chose the members and topics.
The students in the control group were
disappointed to discover that their group members and topics were
assigned to them. However, they quickly met with one another, used
their self-efficacies to divide the work among themselves, and went to
work. It was clear during the observations that the students chose to
work on tasks within the groups with which they were most comfortable.
The students with the greater content knowledge and research skills
became responsible for the content, the more artistic students took on
the tasks of designing the website, and the technology-savvy students
produced the PowerPoint presentations and converted them to web pages.
In one group, Mark became the webpage designer and began working on the
page almost immediately. Sally began researching the topic on the
internet, while Matthew discussed the content specific objectives for
the project with his teacher. In another group, Kylie began researching
material, while her partners looked on. At first glance, it seemed that
the other two students were not on task. However, upon further
inspection, I found that they were researching together; Emma then
wrote the worksheet and Bob created the quiz.
This study did not find any evidence indicating
that differences exist between the control group and the experimental
group in terms of performance on the post-test. Based on the data,
allowing the students to use their own beliefs in their competencies in
a given content area, their self-efficacies, to choose their topics and
group members did not result in greater gains from pre-test to
post-test. In this instance, Parker’s (1993) thoughts about
the students’ performance on pre-tests and post-tests may be
correct; at least some of our students may have grown so weary of
testing that they completely disregarded the post-test and actually
scored lower than before. Why was the incidence of this greater in the
experimental group than in the control group?
The student survey results give a different
picture of the students’ feelings and attitudes. While the
majority of students responded favorably to all the questions on the
student survey, two questions provided responses most applicable to the
concept of self-efficacy. The self-confidence levels of the
experimental group compared to the control group led to very different
responses to questions three and six. The students in the control group
felt that they had improved in their weaker areas and that they were
more confident going into the graduation test than the students who
chose their own groups felt. These differences correlate to the
differences between the control group and the experimental group in
their performances on the post-test. The greater feelings of
improvement and confidence resulted in the students in the control
group being less likely to disregard the post-test than the students in
the experimental group were. Not only could these increased feelings of
self-confidence translate into greater self-efficacy in taking and
succeeding on the post-test but also to taking and succeeding on the
GHSGT. This increased self-efficacy toward the GHSGT could mean that
the students will be more persistent in completing the test and less
willing to surrender when they encounter difficult questions on the
test, resulting in increased passing rates on the GHSGT. As this study
provided no evidence that allowing students the freedom to choose their
own cooperative groups made any significant difference in their
achievement on a problem-based learning project, more research is needed.
Baloche, L. A. (1998). The
cooperative classroom: Empowering learning. Upper Saddle
River, NJ: Prentice-Hall, Inc.
Bandura, A. (Ed.). (1999). Self-efficacy
in changing societies. Cambridge, UK: Cambridge University Press.
Bandura, A. (1993). Perceived self-efficacy in
cognitive development and functioning. Educational
Psychologist, 29(2), 117-149.
Bandura, A. (1982). Self-Efficacy mechanism in
human agency. American Psychologist. [Electronic version]. 37(2),
Bandura, A., Barbaranelli, C., Caprara, G.V.,
& Pastorelli, C. (1996). Multifaceted impact of self-efficacy
beliefs on academic functioning. Child Development, 67,
Brophy, J. (1999). Toward a model of the value
aspects of motivation in education: Developing appreciation for
particular learning domains and activities. Educational Psychologist, 34,
Cothron, J. H., Giese, R. N., & Rezba,
R. J. (2000). Students and research: Practical strategies for
science classrooms and competitions (3rd ed.). Dubuque, IA:
Kendall/Hunt Publishing Company.
Chen, L., Benton, B., Cicatelli, E., & Yee, L.
(2004, May/June). Designing and implementing technology collaboration
projects: Lessons learned. [Electronic version]. TechTrends:
Linking Research & Practice to Improve Learning, 48.
GADOE.org (n.d.). What is AYP? No
Child Left Behind Adequate Yearly Progress (AYP)-The Basics.
Retrieved: March 18, 2006, from Georgia Department of Education Web
site: http://www.doe.k12.ga.us/support/plan/nclb /ayp_faq_q7.asp
Gardner, H. (1999). Intelligence
reframed: Multiple intelligences for the 21st century. New
York, NY: Basic Books.
Gardner, H. (2006, February 23). Changing
minds: The art and science of changing our own and other
people’s minds. Public presentation at the 2006
Covenant College Educators’ Conference, Lookout Mountain, GA.
Gijbels, D., van de Watering, G., &
Dochy, F. (2005, February). Integrating assessment tasks in a
problem-based learning environment. [Electronic version].
Assessment & Evaluation in Higher Education, 30.
Glazer, E. (2001). Instructional models for
problem-based inquiry. In M. Orey (Ed.), Emerging perspectives on
learning, teaching, and technology. Retrieved: April 29, 2006 from
Goodnough, K. (2001, April). Multiple
intelligences: a framework for personalizing science curricula. [Electronic
version]. School Science & Mathematics. 101(4).
Governor’s Office of Student
Achievement. (2005) 2004-2005 State of Georgia K-12 Report
Card for State of Georgia in State of Georgia. Page 3.
Retrieved April 29, 2006, from
Griffee, D.T. (2004). Research tips: Validity
and history. [Electronic version]. Journal of Developmental
Education, 28(1), 38.
Haney, J. J., Czerniak, C. M., & Lumpe, A. T.
(2003, December). Constructivist beliefs about the science classroom
learning environment: Perspectives from teachers, administrators,
parents, community members, and students. [Electronic
version]. School Science and Mathematics, 103.
Heikkinen, H. (Ed.). (2002). Chemistry
in the community (4th ed.). New York: W. H. Freeman.
Hmelo-Silver, C. E. (2004, September). Problem-based
learning: What and how do students learn? [Electronic
version]. Educational Psychology Review, 16(3).
Jensen, E. (2000) Brain-based
learning. San Diego, CA: The Brain Store.
Lazear, D. (1999). Eight ways of
knowing: Teaching for multiple intelligences (3rd ed.).
Arlington Heights, IL: Skylight Training and Publishing.
Lipscomb, L., Swanson, J., West, A. (2004).
Scaffolding. In M. Orey (Ed.), Emerging perspectives on
learning, teaching, and technology. Retrieved February 18,
Margolis, H. & McCabe, P.P. (2006)
Improving self-efficacy and motivation: What to do, what to say.
[Electronic version]. Intervention in School and
Clinic. 41 (4), 218-227.
Moust, J., Roebertsen, H., Savelberg, H.,
& De Rijk, A. (2005, March) Revitalising PBL groups: Evaluating
PBL with study teams. [Electronic version]. Education for
No Child Left Behind Act of 2001, 20 U.S.C.
§ 6301 (2002).
Osciak, S. Y., & Milheim, W. D. (2001).
Multiple intelligences and the design of web-based instruction. [Electronic
version]. International Journal of Instructional Media, 28(4).
Parker, R.M. (1993) Threats to the validity of
research. [Electronic version]. Rehabilitation Counseling
Bulletin, 36(3), 130.
Placek, R. (2003). Stats without
stress. (Class handout available from members of the honors
classes at City High School, 1555 Old Peachtree Road, Suwanee, GA).
Prensky, M. (2006). Listen to the natives. Educational
Leadership, 63.
Rumsey, D. (2003). Statistics for
dummies. Hoboken, NJ: Wiley Publishing, Inc.
Seifert, T. (2004) Understanding student
motivation. [Electronic version]. Educational Research, 46(2),
Stanford, P. (2003, November). Multiple
intelligence for every classroom. [Electronic version].
Intervention in School and Clinic, 39(2).
The Office of the Vice President for Research.
(n.d.). The University of Georgia website. Retrieved
July 1, 2006, from http://www.ovpr.uga.edu/hso/guidelines/index.html
Tollefson, N. (2000). Classroom application of
cognitive theories of motivation. [Electronic version]. Educational
Psychology Review, 12.
Wang, S. (2001). Motivation: General overview
of theories. In M. Orey (Ed.), Emerging perspectives on
learning, teaching, and technology. Available Website:
Please complete the
following survey concerning the Graduation Test Review Project you just
completed.
1. I enjoyed this project.
2. I feel this project has
helped to prepare me for the graduation test.
3. I feel that I am stronger
in the areas of content in which I originally felt weak.
4. I feel our website will be
beneficial to our groups and to others who wish to review for the
graduation test.
5. I am proud of our work on this project.
6. My confidence has increased
in regards to taking the graduation test.
Please complete the following survey concerning
the Graduation Test Review Project your students just completed.
1. My students focused on the task of
researching information to build an informative website.
2. My students appeared engaged in their work.
confidence in their knowledge of the topics.
4. My students appeared to be proud of their work.
5. I feel that this time was beneficial
in preparing the students for the graduation test.
6. My students appear to feel confident
about taking and succeeding on the graduation test.
If I had to name the one most positive aspect
of this project, it would be:
If I had to name the one most negative aspect
of this project, it would be:
I would change this project by:
I feel my students have grown in the following
ways because of this project.