The last 15
years have witnessed the emergence of two major players in secondary
education. Increases in the functionality and affordability of
computers, along with the development of the Internet, have prompted
many schools to purchase computers in an effort to integrate
technology. Computers are more prevalent in schools today than ever
before, and Internet use by high school students has increased
dramatically. Within this same time frame, many states have
developed and implemented minimum competency tests or graduation tests
that are considered high-stakes tests because students have to
pass them in order to graduate or move on to the next grade level. The
NCLB Act of 2001 mandates that states monitor growth in learning in
addition to ensuring that students have attained minimum competencies in
core subjects. The advent of these tests and the NCLB Act of 2001 places a
greater emphasis on testing in schools and therefore demands
that teachers make every attempt to help their students succeed.
The time constraints present in education make it important that
teachers find ways to check their students' progress and
develop efficient methods of remediation.
Gwinnett County, a
metropolitan suburb of Atlanta, has started a program that will design
and implement benchmark assessments at 6, 12, 24, and 30 weeks during
the school year. These tests are being designed to ensure that students
have achieved mastery of basic objectives (termed Academic Knowledge
and Skills, or AKS). Along with pre- and posttests, the benchmark
assessments are intended to show student growth within classes. The benchmark
assessments are constructed in the same vein as high-stakes graduation
tests: both are multiple-choice tests aligned with standards
established by the state. Debate about the validity of the tests is
prevalent in the research literature, yet discussions about remediation for
students who do not pass are almost nonexistent. The emerging
technology associated with computers and the Internet is an untapped
tool that could be used to increase test scores and provide an
efficient remediation process. Studying the effects
of this type of remediation when used with benchmark assessments
will allow educators to determine whether the remediation process will be
effective with graduation tests. However, while the benchmark
assessments will provide teachers direction for remediation with their
individual students, feedback from current graduation tests is not
specific to individual students. At least two states are implementing
online testing for graduation tests that will eventually provide
specific information to teachers and thus allow the teacher to direct
remediation (Neugent, 2004; Woodfield, 2003).
The review of the
literature was conducted using Galileo, primarily searching the
Academic Search Premier, ERIC, and Education Abstracts databases.
Using remediation as a search term mostly returned articles dealing
with remedial courses at higher education levels; narrowing the
search to secondary education yielded few results. Tutorials as a
search term yielded more results, but few related to graduation tests.
An abundance of articles was found concerning the validity and effects
of graduation tests. Many articles were also found concerning
the use of computers and technology in the classroom, but again few
related to the remediation of students with failing scores on
graduation tests. References from several articles were used to find
other studies, but several were not available online. The Google search
engine was used to find titles of programs and links to tutorials, of
which there were many. Finally, the author used
personal experience and discussions with colleagues, especially with
reference to the Georgia High School Graduation Test and the current
development of benchmark assessments in Gwinnett County, Georgia.
This literature review was
conducted to provide the author with the necessary information to make
informed decisions with regard to the remediation process to be used by
teachers. Specifically, the review attempted to answer the following questions:
- What types of
remediation have been used with students?
- Have technology-based
remediation tools been effective?
- Has the use of similar
technology been effective in the classroom?
The types of remediation
are broken down into two categories: traditional and
nontraditional. Traditional types of remediation involve
teacher-student contact or student-student contact. These typically
occur within the setting of the school the student attends.
Nontraditional types of remediation involve student-computer contact or
student-Internet-teacher contact. These types of remediation can occur
anywhere students can gain access to the programs. If nontraditional
types of remediation are effective, they should also be used to
supplement or change instruction in the classroom. The author has
attempted to alter instruction with different types of technology
integration but realizes there are more possibilities to be explored.
Thus the review also looked for classroom applications involving
nontraditional approaches. The impetus for the review came from a
desire to improve student achievement on graduation tests. While not a
direct question, investigation into these tests and other related tests
would help shape future research questions.
Traditional types of
remediation in the secondary setting usually involve students spending
more time with staff after regular school hours. The student has the
choice of staying after school, coming to school on Saturdays, or
attending summer school. While some students experience success with
these opportunities, they are being exposed to “the same
learning activities using the same texts and workbooks with the same
ineffective teaching” (Starratt, 2003). Within these
experiences students could be exposed to remediation of basic concepts,
working of example problems, bridging techniques, and student discussion
groups (Mason & Crawley, 1993).
Nontraditional types of
remediation can include software programs or online tutorials. While a
Google search will show several examples of both types, the journal
literature does not provide much research on their effectiveness.
Anderson et al. (1995) detail early
work in designing “cognitive tutors” that attempt
to tutor students with computer programs. The Advanced Computer
Tutoring Project at Carnegie Mellon University appears to be the first
step in using computers as tutors. While designers of these and
subsequent programs and software packages attempt to use constructivist
practices, most applications surveyed fell into the category of
“drill and practice” (Morse, 1991; Nakhleh
et al., 2000; Schneider, 2000a, 2000b; Scott, 2001a, 2001b;
Sherman, 1999; Tillman, 2000; Timmons, 2001). While “drill
and practice” may have a niche in education, such programs do not have
the flexibility to allow students to construct knowledge.
Online tutorials do not
differ much from non-online tutorials; they are, however, more readily
available and less expensive. The Georgia State Department of Education
offers tips and practice tests online that will score student attempts.
Using this service gives students an idea about the type of questions
that will be used, but the knowledge gained is strictly factual and
does not tie together major concepts. Several applications have been
developed to complement postsecondary classes. While students find
them more flexible, there is no research to show that they improve
student test scores (Carswell et al., 2000;
Crowther et al., 2004; Littlejohn et al., 2002).
Nontraditional types of
remediation offer students more flexibility and different instructional
methods not found in their courses. A large number of students can
benefit from this flexibility as they participate in programs outside
of regular school hours or summer school. However, a large percentage of
these students do not have appropriate access to computers and the
Internet away from school, and unfortunately this segment of the population
typically scores lower on standardized tests. Gains in achievement can
be demonstrated with both types of remediation, but tutorial programs
have not succeeded in teaching high-level reasoning (Horwitz, 1999).
In order for students to truly benefit from a remediation program, they
must have appropriate access to interventions that increase their
knowledge base as well as their capability to learn.
Technology Use in the Classroom
Computer applications for
the classroom come in a variety of shapes and sizes, each with its own
twist on teaching. Typically, teachers have looked for openings into
which they could insert an application rather than change their
curriculum. Older applications worked nicely with this model, as they
were mostly objective drilling exercises that augmented lecture.
Current philosophy in education advocates switching to a more
constructivist approach, and likewise most computer applications are
attempting to become constructivist in design (Rodrigues, 2000). While
most computer applications have the capability to increase student
achievement (Christmann & Badgett, 1999), there is support for the idea that
constructivist applications would actually enhance student learning.
Constructivist software would increase student metacognition and
develop lifelong learners (Horwitz, 1999; MacKinnon, 2001; Windschitl,
2000). Use of certain applications and the Internet can easily be
incorporated into project-based learning and problem-based learning
activities (Churach & Fisher, 2001; Cox-Petersen &
Olson, 2000). While exciting and innovative, this approach to teaching
(especially with the use of technology) does not match the format of a
typical graduation test. Teachers face a dilemma when planning a
lesson: do I use tools to make my students better learners, or do I
focus my attention on preparing my students to pass a high-stakes test?
USA Today reported that
25 states have high school graduation tests (2004). The advent of the
NCLB Act of 2001 will likely increase that number by 2010. These tests
are affecting teachers and their teaching practices. In Massachusetts,
teachers are changing their instructional practices to help students
graduate and improve their schools' assessment scores (Vogler, 2002).
Teachers in a successful (high test scores) school district in Ohio
report that the imposition of state tests has increased workload,
created excess stress, and decreased faculty morale (Kubow &
DeBard, 2000). These teachers feel rushed to cover material and feel that
they are teaching to the test. While the tests may focus the curriculum
and provide meaningful incentives, their validity is
questioned by parents and teachers, and they impede instruction that leads to
higher-order thinking skills (Jacob, 2001; Reising, 2000). If the
tests are given to students during their sophomore or junior years,
what does this tell the student about the learning that could be
achieved after the test? Proponents argue that the tests will show
teachers student weaknesses and allow time for remediation. Currently,
remediation efforts are focused more on passing the test than on
improving learning. Most teachers could identify the majority of
students who will fail a graduation test using previous test scores and
course grades. The argument is that these students could be identified
earlier so that interventions could take place before the test rather
than remediation after the test (Gibson, 1997). Early intervention
programs in the primary grades may reduce the large numbers of low
achievers in upper grades (Neal & Kelly, 2002). The reality of
the situation is that the tests are not going away, and teachers and
schools are going to bear the responsibility of improving test scores.
Schools and teachers will also bear the public scrutiny when test
scores do not improve, resulting in a “needs improvement” label.
Upon review of the
literature, the author formulated the following research questions:
- Will classroom
implementation of computer-based programs and Internet-based activities
reduce the number of students who fail benchmark assessments?
- Will use of
nontraditional types of remediation improve benchmark scores of
students who did not pass the initial benchmark?
- Will the use of
technology improve the learning capabilities of students as measured by
benchmark assessments?
The development of the
questions was influenced by the review of the literature and by the
experiences of the author in the fall of 2004. Two benchmark
assessments (6 and 12 week) were given during the fall semester to 10th
grade chemistry students. Students failed the assessments (below 70%) in
large numbers, and traditional remediation helped only a few students.
While using benchmarks seemed to indicate an increase in final exam
scores, grades for the course were lower than in previous years. Based
upon the literature, the remediation process should improve test scores
and course grades.
In order to increase
student performance on benchmark assessments, I created and used review
tools and tutorials with Inspiration, PowerPoint, and the Quia.com
website. The review tools were designed to enhance student knowledge of
core objectives. The data collected demonstrated the effectiveness of
the technology-based tutorials on the scores of students who retook the test.
Using a quasi-experimental design, I collected data from pretests, benchmark
assessments, and retakes of benchmark assessments. These instruments
consisted of multiple-choice tests based on core objectives.
The number of questions on the tests ranged from 20 to 45.
The benchmark tests are produced by the school district and correspond
to the mandated degree objectives.
The participants in this
study attend a suburban high school in the southeastern United States.
The high school has a population of 2,000 students, of which 50% are
Caucasian, 30% Asian, 10% African-American, and 10% Hispanic. All participants
were enrolled in a college preparatory chemistry class. More than 5% of
the participants were taking the class a second time. Only those
students who decided to take the retest of the benchmark are included
in the study.
At the beginning of the
semester, students were given a pretest covering four core objectives.
This test set a baseline of prior knowledge for the students. The
teacher demonstrated the use of PowerPoint and Inspiration during the
course of instruction. These tools were used to make review guides and
tutorials for the core objectives. At the end of the first six weeks,
the first two core objectives had been covered and the first benchmark
assessment was given. Students could opt to retake the benchmark
assessment after they had used the review guides and tutorials created
to review the core objectives. One day after the 6 week benchmark
assessment, all students were taken to a computer lab and directed to
the website Quia.com. Students used teacher-created games and
activities to review the core objectives. Students were encouraged to
try Quia.com at home and at the library during lunch. The teacher also
asked students to create new questions
to be used in the games and activities at Quia.com. The next day,
students used laptops in the classroom to take practice chapter tests
online that were produced by the publisher of the textbook used in the
class. The students worked in pairs as they took the tests and emailed
the results to the teacher. They could retake the online practice tests
until they were satisfied with their grade. One week later, these
students were given an opportunity to retake the first benchmark
assessment. Data were collected that compared the scores of the two
tests. After the retest of the first benchmark had been scored, students
were surveyed and interviewed (see appendix 1) to determine the
effectiveness of the process. The researcher evaluated the suggestions
made by the students, and modifications were made for
remediation of the 12 week benchmark assessment. Data were collected
that compared the scores of the two tests.
Mean differences in
scores were calculated using an Excel spreadsheet. The averages of the
pretests and posttests were compared to determine student gains in
achievement. Rates of improvement were also compared with rates of
improvement from last year's data. However, direct comparison
of the data between the two years presented a problem. The previous year,
students were required to take the retest if they scored less than 70%,
while those who scored higher than 70% were not given the option to
retake the test. In the current year, students had the option to retake
the test regardless of score. Thus, data from students who scored less
than 70% this year were pulled from the main data set to provide a
direct comparison (see appendix 3). Information from the interviews
was categorized into major themes. The survey results were tabulated
to indicate the number of students for each possible response. (see
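The mean-difference calculation described above was done in Excel; the same analysis can be sketched in Python. The score lists below are invented placeholders for illustration, not the actual study data, and the below-70% subset mirrors the direct-comparison subgroup pulled from the main data set.

```python
# Hypothetical sketch of the score analysis described above.
# (The author used an Excel spreadsheet; these scores are invented
# placeholders, not the actual study data.)

def mean(scores):
    """Average of a list of percentage scores."""
    return sum(scores) / len(scores)

def gain(pre, post):
    """Mean difference between paired pretest and posttest averages."""
    return mean(post) - mean(pre)

# Invented example data: paired benchmark and retest scores (percent).
benchmark = [62, 68, 74, 55, 80, 66]
retest    = [70, 72, 78, 64, 79, 75]

overall_gain = gain(benchmark, retest)

# Pull out students who initially scored below 70% for a direct
# comparison with the previous year's required-retake group.
below_70 = [(b, r) for b, r in zip(benchmark, retest) if b < 70]
subgroup_gain = gain([b for b, _ in below_70], [r for _, r in below_70])
```

The same paired structure (one pretest score and one retest score per student) also makes it straightforward to count how many students improved, stayed the same, or declined.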
Role of the Researcher
As the teacher of the
classes involved in the study, I administered the tests, collected
all of the data, and was responsible for the remediation. I also
interviewed the students and distributed the surveys to them.
Results and Discussion
Forty-eight students attempted the
benchmark retest after completing the remediation activities. The average
score on the first benchmark for the 48 students was 67.7%; the average
score of the same 48 students on the retest was 75.2%, a net increase
of 7.5 percentage points. Not every student showed an increase from pretest to
posttest, however: 32 of the 48 increased their scores, while 5
made the same score and 11 showed a decrease in their score. (see
Those students who
initially scored less than 70% showed an increase of 8.7 percentage points.
Although the average for this group was lower than that of the overall group
(69.0% versus 75.2%), there was a slightly higher increase from pretest to
posttest: 17 of 28 students increased their scores, while 4 made the same
score and 7 showed a decrease (see appendix 3). Students
from the current year also showed greater pretest and posttest
averages. Last year's students averaged 54% on the pretest and 64% on
the posttest. While the percentage increase is numerically higher
for the previous year's students, current students had an equivalent increase in
correct responses: the 6 week benchmark consisted of 25 questions, while
the prior benchmark consisted of only 20 questions.
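The test-length adjustment above can be made explicit with a short calculation using the averages reported in the text: a 10-point gain on a 20-question test and a 7.5-point gain on a 25-question test both correspond to roughly two additional correct answers.

```python
# Illustration of the test-length adjustment described above: a larger
# percentage gain on a shorter test can represent about the same number
# of additional correct answers. Inputs are the reported averages.

def extra_correct(pre_pct, post_pct, num_questions):
    """Convert a pretest-to-posttest percentage-point gain into the
    approximate number of additional questions answered correctly."""
    return (post_pct - pre_pct) / 100 * num_questions

prior_year   = extra_correct(54.0, 64.0, 20)   # 10-point gain, 20 questions
current_year = extra_correct(67.7, 75.2, 25)   # 7.5-point gain, 25 questions
# Both gains round to two additional correct answers per student.
```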
The results of the
written survey indicated that 90% of students thought that using the
computers was very helpful or somewhat helpful. However, 75% of the
students spent less than 30 minutes outside of class using the
computer activities, and 85% of the students completed three or fewer of the
activities.
Class discussion of the
benchmark remediation activities varied, and only a few ideas came
up in all the classes. Students unanimously liked the instant feedback
received from the activities, and they preferred teacher-created
remediation tools over activities created by the publisher of
the textbook.
Integrating technology into my classroom has been beneficial to me and my
students. Using computer-based activities has increased the interest
and achievement of my students and promises to make my teaching style
more efficient. However, my classroom is not a controlled environment
designed to prove that the modifications I make increase learning.
There have been many outside influences that affected the design
of this project and therefore should be mentioned. The original
timeline had to be adjusted to incorporate changes in the instructional
calendar and the redesign of the 6 week benchmark assessment. The
method of delivery of the benchmark also changed in the midst of data
collection, as the district made the benchmark assessments online tests.
Students were given the 6 week benchmark as a paper-and-pencil test
with a scannable answer sheet, but those who took the retest did so
online using laptops and computer labs. There was no control group, as
I wanted all my students to have the same opportunities. The only
comparisons made were with respect to last year's students. Students
in this year's study achieved higher averages and equivalent gains in
achievement. In short, while the numbers show the project to be somewhat
successful, I do not feel that definite conclusions should be drawn.
As noted previously in the results and discussion section, the overall average
of the test scores increased. However, it must be pointed out that not every
student who took the retest improved his or her score. The majority
of students did improve their scores, and they also felt positive about
using computers, but it is difficult to state that the remediation activities
are responsible for the improvement. One student felt the improvement
was due to discussing the topics and questions with other students.
Still, the information generated by this project shows that computers
can be an effective aid and that students enjoy the computer-based
activities created by their teacher. This alone is enough to continue
to explore alternative methods of instruction and remediation with
technology.
Anderson, J. R.,
Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995).
Cognitive tutors: Lessons learned. Journal of the Learning
Sciences, 4(2), 167.
Carswell, L., Thomas, P., & Petre, M.
(2000). Distance education via the internet: The student experience. British
Journal of Educational Technology, 31(1), 29-46.
Christmann, E., & Badgett, J. (1999). A
comparative analysis of the effects of computer-assisted instruction on
student achievement in differing science and demographical areas. The
Journal of Computers in Mathematics and Science Teaching, 18(2),
Churach, D., & Fisher, D. L. (2001).
Science students surf the web: Effects on constructivist classroom
environments. The Journal of Computers in Mathematics and
Science Teaching, 20(2), 221-247.
Cox-Petersen, A. M., & Olson, J. K.
(2000). Authentic science learning in the digital age. Learning
and Leading with Technology, 27(6), 32-35, 61.
Crowther, M. S., Keller, C. C., &
Waddoups, G. L. (2004). Improving the quality and effectiveness of
computer-mediated instruction through usability evaluations. British
Journal of Educational Technology, 35(3), 289-303.
Gibson, S. D. (1997). A comparative study of
previous achievement indices for two groups of ninth-grade students:
Those who passed and those who failed sections of the Ohio ninth grade
proficiency test. American Secondary Education, 26,
Horwitz, P. (1999). Hypermodels: Embedding
curriculum and assessment in computer-based manipulatives. Journal
of Education, 181(2), 1.
Jacob, B. A. (2001). Getting tough? The impact
of high school graduation exams. Educational Evaluation
& Policy Analysis, 23(2), 99-121.
Kubow, P. K., & DeBard, R. (2000).
Teacher perceptions of proficiency testing: A winning Ohio suburban
school district expresses itself. American Secondary
Education, 29(2), 16-25.
Littlejohn, A., Suckling, C., &
Campbell, L. (2002). The amazingly patient tutor: Students'
interactions with an online carbohydrate chemistry course. British
Journal of Educational Technology, 33(3), 313-321.
MacKinnon, G. R. (2001). A promising model for
incorporating the computer in science learning. The Journal
of Computers in Mathematics and Science Teaching, 20(4),
Mason, D., & Crawley, F. E. (1993). Remediation,
bridging explanations, worked examples and discussion: Their
effectiveness as teaching strategies in a freshman-level nonscience
major chemistry course. Access ERIC: Full text (143
Reports--Research; 150 Speeches/Meeting Papers). Texas.
Morse, R. H. (1991). Computer uses in secondary
science education. Eric digest.
Nakhleh, M. B., Donovan, W. J., &
Parrill, A. L. (2000). Evaluation of interactive technologies for
chemistry websites: Educational materials for organic chemistry web
site (emoc). The Journal of Computers in Mathematics and
Science Teaching, 19(4), 355-378.
Neal, J. C., & Kelly, P. R. (2002).
Delivering the promise of academic success through late intervention. Reading
& Writing Quarterly, 18, 101-117.
Neugent, L. W. (2004). Getting ready for online
testing. T.H.E. Journal, 31(12), 34, 36.
Reising, R. W. (2000). High school exit exams. The
Clearing House, 74(1), 4-5.
Rodrigues, S. (2000). The interpretive zone
between software designers and a science educator: Grounding
instructional multimedia design in learning theory. Journal
of Research on Computing in Education, 33(1), 1-15.
Schneider, J. (2000a). Science explorer 3.04. T.H.E.
Journal, 27(9), 78.
Schneider, J. (2000b). Science gateways. T.H.E.
Journal, 27(9), 78.
Scott, L. (2001a). Naming chemical compounds. Learning
and Leading with Technology, 29(1), 60-61.
Scott, L. (2001b). Periodic table &
trends. Learning and Leading with Technology, 29(1),
Sherman, G. (1999). Western Harnett High
School, Lillington, North Carolina. PLATO evaluation series. 12.
Starratt, R. J. (2003). Opportunity to learn
and the accountability agenda. Phi Delta Kappan, 85 (4),
Tillman, S. (2000). Britannicaschool.Com. T.H.E.
Journal, 28(3), 73.
Timmons, M. (2001). Hyperchemistry on the web. School
Library Journal, 47(1), 60-61.
USA Today. (2004). High school graduation tests
have little tie to college, report finds. USA Today.
Vogler. (2002). The impact of high-stakes,
state-mandated student performance assessment on teachers'
instructional practices. Education, 123(1), 39.
Windschitl, M. (2000). Supporting the
development of science inquiry skills with special classes of software.
Educational Technology Research and
Development, 48(2), 81-95.
Woodfield, K. (2003). Getting on board with
online testing. T.H.E. Journal, 30(6), 32, 34-37.
1. How helpful was using the
computers to review for the Benchmark assessment?
2. How much time outside
of class did you spend using the computer activities to
study for the Benchmark assessment?
3. How many different
activities did you use?