Kolodzy, C. V. (2007). A Critical Look at the “I CAN Learn” Computer Education Systems, Four Years Later.... Instructional Technology Monographs 3 (2). Retrieved <insert date>, from http://itm.coe.uga.edu/archives/spring2007/ckolodzy.htm.

A Critical Look at the “I CAN Learn” Computer Education Systems, Four Years Later...

by

Carl V. Kolodzy

The University of Georgia


Abstract

This is a study of the use of the “I CAN Learn” (ICL) software at a middle school in the southeastern United States.  Two other nearby middle schools started using the “I CAN Learn” education system in 2002-03, and first-year results from one of those implementations are available through the software vendor’s site.  The study middle school, which opened in 2004-05 and was created from a portion of one of the other two middle schools, added “I CAN Learn” in its second year of operation (2005-06).  Improved standardized test scores were evident at the original middle school in 2002-03.  Has that trend continued?  Has the software helped improve scores at the newer school?  This study examines standardized test results (the Georgia Algebra End of Course Test and the school district’s 7th Grade Math Gateway Test) three to four years after the initial studies of the “I CAN Learn” software.  The results indicate that the trend toward improved scores has not continued and that the use of the software had an impact only in its early years of use.

 


 

Introduction

I have taught Algebra 1 at two of the middle schools whose administrations decided to use the “I CAN Learn” education system to teach algebra and pre-algebra to “at risk” eighth grade students.  When the program was first implemented, there was a marked improvement in the test scores of the students taking the classes, and those students would brag about being chosen for the class.  In more recent years, student attitudes toward the class have not been as positive.  Students who have been moved out of the program have not progressed well in my traditional classroom, and the number of students not performing well on the Gwinnett County Gateway exams has increased.  Has the program lost its effectiveness?

Various teaching techniques have been used at these schools to integrate the “I CAN Learn” software into the classroom environment.  In some cases, the teacher is actively involved in presenting new material.  In other cases, the teacher is there to help students one-on-one with any area in which a student is having difficulty.  There are also cases where the teacher is not involved at all, except to reset computers and make sure the equipment is not being vandalized.  Between these three approaches lie varying combinations of teacher involvement.  There are many positive elements, especially when the teacher sets out an overview of the topic and then works with each student one-on-one.  There are also negative elements associated with this environment, as when the teacher becomes only a caretaker of the equipment.

On the plus side of the continuum regarding software integration, the software gives students control over the pace of instruction and allows them to retake tests until the concepts are mastered.  The classroom is set up with a desk/computer workstation for each student, including headphones, so that each student is essentially in his or her own world, working at his or her own pace, without the interruption of outside events.  This learning environment comes at a cost, with estimates between $200,000 and $300,000 per classroom.  The cost covers not only the computer hardware and software but also the furniture that houses the equipment, built so that students have access only to what is necessary to run the software: headphones, mouse, and keyboard, with restrictions imposed by the login account.  Students cannot connect or disconnect the keyboard, mouse, or headphones; they cannot power the computer on or off; and they have no access to the front panel of the computer.

The problem addressed by this project has two basic elements: first, whether student math test scores are still improving; and second, derived from the first, what may account for either the continued improvement or the decline in test scores.

A number of studies relate computer-based learning to improved math scores.  One study (Schacter, 1999) reviews several key studies (Kulik, 1994; Baker, Gearhart and Herman, 1994; Mann, Shakeshaft, Becker, and Kottkamp, 1999; Sivin-Kachala, 1998; Scardamalia and Bereiter, 1996; Wenglinsky, 1998).  In “Meta-analytic studies of findings on computer-based instruction,” Kulik reviews more than 500 individual research studies of computer-based instruction.  The studies, conducted throughout the 1980s, ranged from elementary instruction through adult education and included a number of studies involving middle school and high school mathematics.  The meta-analysis found that students using computer-based instruction scored higher on standardized tests, learned more in less time, and had more positive attitudes about their classes.

In general, computer-based learning appears to be the solution to educational problems, especially in mathematics.  So what is the downside?

First, the studies cited above (Kulik, 1994, and Sivin-Kachala, 1998) typically were not conducted over a long period of time.  Both were meta-analytic studies.  A meta-analysis gathers information about a number of similar studies and then draws conclusions about the similarity of their results; the two studies mentioned averaged information together in order to reach general conclusions about the hundreds of studies reviewed.  An indication of what may happen over a longer period is found in “Evaluating the Apple Classrooms of Tomorrow” (Baker, Gearhart and Herman, 1994).  That study differed from the Kulik (1994) and Sivin-Kachala (1998) studies in that five schools were studied over a five-year period to assess the impact of interactive technologies on teaching and learning.  The findings indicated that the use of the technologies resulted in new learning experiences requiring higher-level reasoning and problem solving.  However, these findings were not conclusive, since the students’ performance was no better than that of students in similar groups who did not have access to computers.

When one investigates the What Works Clearinghouse, only two computer-based curricula, “I CAN Learn” and “Cognitive Tutor,” are noted as showing evidence of increasing algebra test scores (Cognitive Tutor [R], 2001, and I CAN Learn Mathematics Curriculum, 2004).  Given the great number of computer-based algebra curricula in use since the 1980s, how is it that only two are noted as actually working?  How long have these two curricula been in use?  The cited references indicate that the improvements come from the initial years of implementation of the curriculum in the schools.

If a school district spends $300,000+ for a classroom, will it get its money’s worth out of the system?  Will the district see an improvement for one or two years and then a return to previous test scores, or will it see continuing improvement?

The purpose of this study is to test the hypothesis that the continued use (over multiple years) of computer-based curricula for Algebra and Pre-Algebra leads to the continued improvement of standardized test scores.  The study controls for socio-economic status (SES) and scores on the Iowa Algebra Aptitude Test (IAAT) within three similar Gwinnett County, Georgia middle schools.  The independent variable, use of the “I CAN Learn” computer-based curriculum for Algebra and Pre-Algebra students, is referred to as “I CAN Learn,” and the intervening variables of socio-economic status and IAAT score are statistically controlled in the study.
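
As an illustration of what “statistically controlled” can mean in practice, the sketch below fits an ANCOVA-style regression in which the coefficient on ICL membership is estimated after adjusting for SES and IAAT.  This is a minimal sketch under stated assumptions, not the study’s actual analysis; the column names and data are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical records; column names are illustrative, not the study's variables.
df = pd.DataFrame({
    "score": [452, 488, 431, 510, 467, 495, 449, 478],  # standardized test score
    "icl":   [1, 1, 1, 1, 0, 0, 0, 0],                  # 1 = ICL classroom
    "ses":   [0, 1, 0, 1, 0, 1, 0, 1],                  # e.g., free/reduced lunch flag
    "iaat":  [41, 55, 38, 60, 44, 57, 40, 52],          # IAAT score
})

# ANCOVA-style model: the "icl" coefficient estimates the association between
# ICL use and test scores after adjusting for SES and IAAT.
model = smf.ols("score ~ icl + ses + iaat", data=df).fit()
print(model.params)
```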

 

 

Literature Review

The idea that the use of computer-assisted instruction and/or computer-based learning will result in improved standardized test scores is not a new concept, and the use of computer-assisted instruction for the teaching of mathematics is of particular note. Computer-assisted instruction has a number of definitions listed on the Internet. The University of New Orleans’ glossary defines it as “An aide to teaching in which a computer is utilized to enhance the learning environment by assisting students in gaining a mastery over a specific skill” (University of New Orleans Website). Northeastern Illinois University defines it as “The use of computers to aid in the delivery of instruction in which the system allows for remediation based on answers but not for a change in the underlying program structure” (Northeastern Illinois University Website). Wikipedia’s definition “refers to a system of educational instruction performed almost entirely by computer. Such systems typically incorporate functions, such as: assessing student capabilities with a pre-test, presenting education materials in a navigable form, providing repetitive drills to improve the student’s command of knowledge, providing game-based drills to increase learning enjoyment, assessing student progress with a post-test, routing students through a series of courseware instructional software and recording student scores and progress for later inspection by a courseware instructor” (Wikipedia Website, paragraph 1).

The common thread through these definitions is that computer-assisted instruction is more specific than computer-based learning, which is the use of computers as a key component or tool of the student’s educational environment. Despite this distinction, the two terms appear to be used interchangeably in many articles (see Anohina, 2005, “Analysis of the terminology used in the field of virtual learning”). The term “computer-based instruction” is also used, which appears to be a combination of the two. Computer-assisted instruction is not limited to self-paced instruction or skill-set drilling; it can include these, as well as a full curriculum for a given subject (such as algebra). Components of computer-assisted instruction may include pre-test assessment of the student’s capabilities, presentation of materials in a form the student can navigate between topics, repetitive drills, game-based drills, and post-test assessment of the student’s progress. Computer-based learning is more than the use of computers in the classroom, i.e., using the computer as merely a tool for word processing, creating presentations, or accessing the Internet. It is a more general term than computer-assisted instruction, in that it applies to any structured use of computers in the classroom for teaching purposes.

When I narrow my focus to computer-assisted instruction and computer-based learning for algebra and pre-algebra, I find a great number of individual studies. Innumerable studies have examined the impact of technology on student achievement, and the main element of technology examined is computer-based learning and computer-assisted instruction. One study (Schacter, 1999) reviews a number of key studies (Kulik, 1994; Baker, Gearhart and Herman, 1994; Mann, Shakeshaft, Becker, and Kottkamp, 1999; Sivin-Kachala, 1998; Scardamalia and Bereiter, 1996; Wenglinsky, 1998). Of direct interest concerning the longer-term effects of using a computer-based learning environment are the works of Kulik (1994); Sivin-Kachala (1998); Baker, Gearhart and Herman (1994); and Wenglinsky (1998). These studies include information on the grade level and content area of interest to this study or, in the case of Baker, Gearhart and Herman (1994), include a longitudinal study. The other studies either lack conclusive information or apply to a different domain, i.e., they do not apply to math or do not apply to middle/high school.

Kulik (1994) used a meta-analysis technique in his study, analyzing more than 500 individual research studies of computer-based instruction, many of which involved middle school and high school mathematics. The results indicated that, on average, students using computer-based instruction scored at the 64th percentile on standardized tests, compared to students in control conditions, who scored at the 50th percentile. Students learned more in less time and had more positive attitudes about their classes when the class used computer-based learning or computer-assisted instruction strategies. On the downside, the computers did not have positive effects in every area studied: across all 500 studies there was an average increase, but any given study may not have shown a gain in test performance (Schacter, 1999).
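
For context, the reported shift from the 50th to the 64th percentile can be expressed as an effect size: under a normal model, the 64th percentile sits roughly 0.36 standard deviations above the control mean. A minimal check of this arithmetic:

```python
from scipy.stats import norm

# The 64th percentile of a standard normal corresponds to an effect size
# of about 0.36 SD relative to the 50th-percentile control mean.
effect_size = norm.ppf(0.64)   # inverse CDF of the standard normal
print(round(effect_size, 2))   # ~0.36
```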

Much like Kulik’s study, Sivin-Kachala’s (1998) review covered 219 research studies from 1990 to 1997 and included all areas of learning and ages of learners. What was different is that this study examined the effect of any technology on learning and achievement, not just computer-assisted instruction; computer-assisted instruction would have been included, but items such as calculators would not have shown up in the Kulik study. The positive findings were that “Students in technology rich environments experienced positive effects on achievement in all subject areas,” increased achievement from pre-school through higher education for both regular and special-needs students, and improved attitudes toward learning and self-concept (Schacter, 1999, p. 5). Influences on effectiveness included the specific student population, the software design, the teacher’s role, and the level of access to the technology.

The Apple Classrooms of Tomorrow (ACOT) were evaluated by Baker, Gearhart and Herman (1994). This study was much different from Kulik (1994) and Sivin-Kachala (1998), in that it took place at five school sites across the United States over a five-year period and was designed to assess the impact of interactive technologies on teaching and learning. The positive findings were a little “soft”:  “The ACOT experience appeared to result in new learning experiences requiring higher level reasoning and problem solving…this finding was not conclusive” (Schacter, 1999, p. 5). One other positive finding noted that the program led to changing teacher practices toward more cooperative group work. The negative finding was that “ACOT students performed no better than comparison groups or nationally reported norms who did not have access to computers” (Schacter, 1999, p. 6).

Wenglinsky (1998) conducted a national study on the effects of simulation and higher-order thinking technologies on fourth and eighth graders in mathematics. By controlling for socioeconomic status, class size, and teacher characteristics, the educational outcomes represented the value added by technology for comparable groups. The positive findings were that using simulation and higher-order thinking software produced gains of up to 15 weeks on the National Assessment of Educational Progress (NAEP) for eighth-grade math students; gains of up to 13 weeks above grade level on the NAEP were attributed to classes whose teachers received professional development on computers; and higher-order use of computers was positively related to student achievement for both fourth and eighth grade. On the negative side, playing learning games and developing higher-order thinking led to only three to five weeks of improvement for fourth-grade students, and students using drill-and-practice technologies scored worse on the NAEP than those who did not use them, in both fourth and eighth grade.

The great majority of the studies on the impact of computer-based instruction used test scores to measure the progress of students using it.  However, the areas of improvement noted as “improved attitudes toward learning,” “improved attitudes toward self-concept,” “use of higher level reasoning,” and “use of problem solving strategies” (Sivin-Kachala, 1998, and Baker, Gearhart and Herman, 1994, p. 8) were measured not by testing but by survey. The achievement improvements were measured with researcher-constructed tests, national tests, or standardized tests, and improvement was checked only against whether students were using computer-based instruction or not. In the cases previously noted, this was done in two different ways:

1.      Within the same year, the test scores of randomly grouped students were compared between a group that used computer-based instruction and a group that did not.

2.      Test scores from the year prior to implementing computer-based instruction were compared to test scores for the first year in which computer-based instruction was implemented.

Each approach attempts to minimize the impact of variables that would invalidate the comparisons. In the first case, the assumption is that taking large enough random samples minimizes issues concerning race, sex, socio-economic status (SES), cognitive ability, class size, and so on; it does not account for variations in instructors, the instruction method, or the attitudes of students who are not in the computer-based instruction group. In the second case, the assumption is again that these variables are minimized, and because a great majority of the same teachers are used, variation in instructional methods is also minimized. It does not account for variations in class make-up, which can differ considerably from year to year, especially if the school is in an area of growth that produces major changes in the school’s student profile. Race, sex, SES, cognitive ability, class size, and so on may change quickly in a school district that is experiencing growth.

Because of the effort required to minimize the impact of the variables listed above, very few studies have looked at the impact of computer-based instruction over a longer period. If the study used a control group, it is difficult to replicate year after year, since the school will either implement or drop the computer-based instruction based upon its success or failure. If the study compared achievement with computer-based instruction to achievement without it, this method cannot be carried out over additional years. That leaves the problem of trying to verify that the computer-based instruction is still effective, without the results being confounded by changes in the student make-up (race, sex, SES, cognitive ability, class size, etc.).

One would think that, given the number of studies and the amount of computer-based instruction that has been available, longitudinal studies would exist; yet most of the studies noted previously are from the 1980s and early 1990s. My next step was to look at which computer-based instruction for algebra and pre-algebra available today has been judged effective in raising student test scores. In the Department of Education’s What Works Clearinghouse (What Works Clearinghouse website, 2006), only two computer-based instructional packages were cited as showing improvement in student test scores: I CAN Learn and Cognitive Tutor (I CAN Learn, 2002; Carnegie Software, 2004). Three other algebra curricula, “Connected Mathematics Project,” “The Expert Mathematician,” and “Saxon Mathematics,” were also noted in the What Works Clearinghouse, but these are not computer-based instructional products. Given the great number of computer-based instructional products that are available (and have been available since the 1980s), it is strange that only two products met the criteria. Other widely used curricula were excluded because they failed to meet the WWC evidence standards or lacked any studies providing evidence that could be reviewed. If they did not meet the evidence standards, does that mean they are no longer improving test scores among the students participating in these computer-based programs?  The WWC gives no indication.

Even for the two computer-based learning products cited by the WWC, there does not seem to be evidence that students continue to show the improvement noted originally. A number of news articles covered “I CAN Learn” and “Cognitive Tutor” when they were first introduced (“News briefs,” 1999). However, the recent news about JRL Enterprises, licensor of “I CAN Learn,” has not been as flattering. A school district in Dallas, Texas, has had poor results from the use of “I CAN Learn,” is considering removing the education system from its schools, and wants its money back; JRL has countered that district personnel need more training. This is the most vocal indication that the results touted in the first years of implementation of education systems like “I CAN Learn” may not continue in later years of use.

If computer-based software initially improves test scores, but the improvement wanes after a number of years of use, the question arises as to the actual reasons behind the score improvements. This study is focused more on the “when” than on the “how” and “why.”  The key question is whether the novelty of the curricula, especially with full classroom-based computer software instruction, is what produced the improved scores, rather than the actual method or presentation media. The vendor provides some insight into its software development (I CAN Learn Results; I CAN Learn Education Systems, 2004); however, there is no information beyond the obvious student test results to support its claims. The critical components appear to be the combination of self-paced instruction, which gives students control over their learning, and the availability of an instructor to assist with any problem a student may encounter. This one-on-one time is available because the students are all engaged in the course work. This combination initially produced the improvements noted in the next paragraph (What Works Clearinghouse, 2006).

In the case of the “I CAN Learn” algebra course, the study cited by the What Works Clearinghouse was a randomized control study in Gilmer County, Georgia. The trial included 254 students and indicated a positive effect on math test scores (Georgia CRCT) of 0.41 standard deviations. “I CAN Learn” Algebra follows a five-part format consisting of a pretest, review, lesson presentation, quiz, and cumulative review. The software is designed to be interactive, and the student progresses through the lessons at his or her own pace (“I CAN Learn[R] Mathematics Curriculum. What Works Clearinghouse Intervention Report,” 2004).
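
Running the percentile conversion from the Kulik discussion in the other direction, a 0.41-standard-deviation effect would move a median student to roughly the 66th percentile under the same normal model:

```python
from scipy.stats import norm

# CDF of the standard normal at the reported 0.41 SD effect size.
print(round(norm.cdf(0.41), 2))   # ~0.66, i.e., about the 66th percentile
```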

Cognitive Tutor also uses interactive computer software to help teach algebra (Stylianou & Shapiro, 2002). The curriculum covers eight topics, three of which are aligned with the National Council of Teachers of Mathematics (NCTM) Curriculum and Evaluation Standards. A typical lesson includes time in which the students work at their own pace to develop their problem-solving skills (“Cognitive Tutor[R]. What Works Clearinghouse Intervention Report,” 2001). As with the “I CAN Learn” software, the reasoning behind the improvements is not detailed by the software vendor, but it also seems to stem from the student having some control over the content they are learning.

The study noted by the WWC was a randomized controlled trial of 360 ninth-grade regular education students, compared to students using McDougal Littell’s Heath Algebra 1. The intervention group scored 0.23 standard deviations higher than the comparison group (“Carnegie Learning Offers Free Trial; Report Shows Cognitive Tutor Increases Miami-Dade’s FCAT Scores,” 2004).

Summary

The findings noted in this review point toward an improvement in test scores when students are placed into a computer-based education program. The findings, however, tend to be stated “on average” or “in general”; when specific education software or systems are examined, not all computer-based education results in improved test scores. The findings also seem to concern the change from a no- or low-technology environment to a technology-based educational system. Once a computer-based system has been in place for a period of time (say three to five years), are the gains still there?  The ACOT program, which extended over five years, did not show any marked improvement in testing results. Is that typical, or does that result reflect only the ACOT approach?  The What Works Clearinghouse approved only a very few computer-based programs, based upon the research on the implementation of these programs. Is there something different about these computer-based systems from the other systems?

After reviewing the sources of the computer software for algebra courses shown to improve student test scores, I note that there is little evidence available, except from the producers of the software, of continued improvement in test scores. It is now worth analyzing more recent data to see whether the novelty of the computer software (and of the classroom, in the case of “I CAN Learn”) has worn off. Recent articles have noted that the JRL Enterprises product may not be working as well as planned, and at least one school district is considering removing its “I CAN Learn” classrooms because it is not seeing improvements in student test scores (Murray, 2005, “Officials Freeze ‘I CAN Learn’”).

The study proposed here will try to answer the question of whether “I CAN Learn” computer-based instruction is still effective in improving test scores for algebra and pre-algebra students. The hypothesis is that computer-based instruction for algebra courses continues to improve standardized test scores. Comparison of test scores over the past four years at two different middle schools in Gwinnett County can help to answer this question.

A second question, concerning what leads to the effectiveness (or ineffectiveness) of the computer-based instruction, will be addressed through a survey of present and recent past participants in the classes. The survey will attempt to determine which areas of impact students and instructors felt led to test score improvements. Questions addressing novelty, student control, the availability of additional instructor assistance, format, method of pre-test and post-test, grading, and so on will be included for the students; questions addressing the instructor’s role and method of instruction will be included for the instructors. The hypothesis here is that with computer-based instruction the student becomes more motivated and receives more individual attention, such that improvement in test scores results.

 

Methods

The quantitative research design chosen for this study was a survey combined with analysis of existing data, following the methodology set out by Creswell (2003) for using data collected for a different purpose.  The study set out to determine whether the “I CAN Learn” computer-assisted instruction (ICL) was still producing an increase in student performance in algebra and pre-algebra, compared to students who did not take ICL.  The test results from the first-year implementation indicated an improvement; however, it was not clear whether that improvement continued in succeeding class years.  Whether or not there was continued improvement, it was also of interest to note the parameters that might have affected the student scores.  The study therefore consisted of an analysis of existing standardized test data, a survey of the students who had taken or were currently taking ICL classes, and a survey of the classes to identify parameters that might be affecting the outcome.

The three schools chosen for this study are all within a contiguous geographical area in the southeastern United States.  The school profiles are very similar: one middle school is located within one mile of the second and was created from two elementary schools that formerly fed into the third middle school.  Since two of the schools dropped the use of ICL, data could only be collected from one school.

The test data analysis consisted of examining standardized test data from students at the one middle school.  This school was using ICL for its eighth-grade students who had failed the school district’s Gateway exam in mathematics; these students were taking a pre-algebra curriculum, and their Gateway exam scores were used.  This was a cross-sectional comparison of the effectiveness of pre-algebra with ICL versus pre-algebra without ICL.  The purpose of examining the test scores was to measure the improvement for the pre-algebra students, as ICL had not been used for this purpose prior to the 2005-2006 school year, and to allow a comparison of the effectiveness of ICL over more recent years.  The data were made available by the school with the understanding that it would receive a copy of this study.

The survey consisted of a self-administered mathematical attitude inventory given to the students who used ICL and to their instructors.  Its purpose was to determine the students’ level of motivation, value of mathematics, self-confidence in mathematics, and enjoyment of mathematics within the ICL environment.  The teacher survey focused on the teaching method used within the classroom.  Availability and ease of administration were the key reasons for choosing this method of gathering data on student and instructor attitudes and methods.

The survey administered was a one-time questionnaire, given no sooner than one month into the 2006-2007 school year to all of the students currently in the course.  Twenty (20) students were surveyed for the 2006-2007 school year, along with one instructor from the only school still using ICL.  In addition, a group of students not taking the ICL classes was surveyed to determine whether there was any difference in motivation, confidence, and mathematical attitude between the ICL group and the non-ICL group.  Parts of two classes were included in the non-ICL group, which comprised 24 students.  The students in the regular class had passed the Gateway exam for mathematics the previous year but, due to their Iowa Algebra Aptitude Test (IAAT) scores, were placed in the most basic algebra class for eighth grade.  The intent was to look at students closest in mathematics ability to the ICL students; on average, the regular class would be at a very similar level, with the exception of their success on the school district’s Gateway test.

The student survey chosen was a modified “Attitudes Toward Mathematics Inventory” (ATMI).  The ATMI was developed initially in the 1970s (Fennema-Sherman, 1976) and consisted of up to 49 statements; a 39-statement version was chosen in this case.  Students were asked for their degree of agreement with each statement on a Likert-type scale from 1 to 5.  A copy of the survey questions is attached; the questions shown here were taken from a doctoral thesis (Curtis, 2006).  The attitudinal areas checked by the survey were confidence, motivation, enjoyment, and value.

I chose this survey because of prior investigations of its validity and reliability.  The use of this survey with middle-school mathematics students (Tapia & Marsh, 2000) yielded a Cronbach’s alpha of 0.95, and the survey in the Curtis thesis shows a Cronbach’s alpha in the range of 0.88 to 0.95, depending upon the variable.
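
For readers unfamiliar with the statistic, Cronbach’s alpha estimates the internal consistency of a multi-item scale from the item variances and the variance of the summed score: alpha = k/(k-1) * (1 - sum of item variances / variance of total score).  A minimal sketch of the computation follows; the response matrix is hypothetical, not data from this study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students x k_items) matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 students x 4 Likert items (1-5 scale).
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [3, 3, 3, 4],
    [5, 4, 5, 5],
    [1, 2, 2, 1],
])
print(round(cronbach_alpha(responses), 2))  # high alpha for this consistent toy data
```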

For the instructor survey, I chose the Wisconsin Observation Scale.  This survey consists of twelve questions dealing with active student learning within the classroom, and it allowed a look into the actual role of the instructor in each class.

Relationship of Variable to Survey Question

Variable      Research Question                                                Survey Question Number(s)
Motivation    Does ICL increase motivation?                                    23, 28, 32-34
Confidence    Does ICL increase the student's confidence?                      9-22
Enjoyment     Does ICL increase the students' enjoyment?                       3, 24-27, 29-31, 37, 38
Value         Does ICL increase the value a student applies to mathematics?    1, 2, 4-8, 35, 36, 39

 

The data analysis included a review of the number and type of students who did not return a completed survey.  Contact was made with the instructors to solicit their help in achieving a high percentage of completed surveys.  Response bias was addressed through a response vs. non-response analysis, with late responders treated as typical of non-responders.

 

Results and Discussion

Results:

After the school system approved this research proposal, data were collected from the middle school for the students enrolled in the pre-algebra ICL classes for 2005-06.  Gateway test scores were compiled for the spring of 2005 and 2006 and compared.  The Gateway Math exam is graded on a weighted scoring system with a minimum score of 300 and a maximum score of 800; a score of 480 or higher was necessary for students to pass the exam and be promoted into the 8th grade.  All of the scores compared were for students who had not passed the math exam in 2005, had taken ICL during the 2005-06 school year, and took the exam again in 2006.  Thirty-eight (38) students were in this category; however, due to the transient nature of these students, only 23 were in attendance for both tests.  Student scores improved over the year by an average of 16.22 points, with a standard deviation of 28.26, and 6 students’ scores actually declined over the year.  The average of the 23 scores for 2005 was 450.43; the average for 2006 was 466.65.  The improvement was no better than the improvement made in the 2004-05 school year.  A copy of the comparative test scores is provided in Appendix B.
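
The study reports only descriptive statistics.  As an illustration (not part of the original analysis), a one-sample t-test on the reported gain summaries suggests the 16.22-point mean gain is distinguishable from zero, though this by itself says nothing about ICL versus a comparison condition:

```python
import math
from scipy import stats

mean_gain, sd_gain, n = 16.22, 28.26, 23   # summary statistics reported above
t = mean_gain / (sd_gain / math.sqrt(n))   # one-sample t on the paired gains
p = 2 * stats.t.sf(abs(t), df=n - 1)       # two-sided p-value
print(f"t({n-1}) = {t:.2f}, p = {p:.3f}")  # roughly t(22) = 2.75, p ~ 0.012
```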

To see whether students perceive the ICL class differently from a traditional class, a survey was made of the current ICL classes and two regular math classes.  This survey was completed in March 2007; a copy of the survey results is provided in Appendix A.  Overall, the ICL classes were 0.38 points more positive in their responses than the regular math classes (ICL average score 3.09 vs. regular 2.72; standard deviations 0.46 and 0.37, respectively).  Within each sub-heading:

Comparison of Attitudinal Survey (1 to 5 scoring) – ICL vs. Regular Math

Subscale     ICL Classes   Regular Math Classes   Difference
Motivation   2.61          2.42                   0.19
Confidence   3.00          2.85                   0.15
Enjoyment    2.99          2.50                   0.49
Value        3.57          2.91                   0.67

 

The survey points toward an increase in the value and enjoyment of math when using ICL over the regular math curriculum, which could be one of the main reasons behind the initial increases in scores.  The ICL literature on the vendor’s web site lists motivation and confidence as the main reasons for immediate improvement in scores at traditionally low-scoring schools; it appears that after students have been exposed to computer-based classes, those advantages in motivation and confidence do not continue.
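
As a rough gauge of whether the 0.38-point overall gap is larger than chance, a Welch two-sample t-test can be run on the reported summary statistics.  This is a sketch only, and it assumes the reported standard deviations describe student-level overall scores, which the text leaves ambiguous:

```python
import math
from scipy import stats

# Summary statistics from the survey (assumed to be student-level SDs).
m1, s1, n1 = 3.09, 0.46, 20   # ICL classes
m2, s2, n2 = 2.72, 0.37, 24   # regular math classes

se = math.sqrt(s1**2 / n1 + s2**2 / n2)                       # Welch standard error
t = (m1 - m2) / se
df = se**4 / ((s1**2/n1)**2/(n1-1) + (s2**2/n2)**2/(n2-1))    # Welch-Satterthwaite df
p = 2 * stats.t.sf(abs(t), df)
print(f"t({df:.1f}) = {t:.2f}, p = {p:.3f}")                  # roughly t(36) = 2.90, p ~ 0.006
```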

A survey of the ICL teacher at the middle school was conducted, based upon the Wisconsin Observation Scale.  This scale indicates the amount of active involvement the teacher believes is happening within the class.  Eleven (11) of the twelve questions were answered (question eight, concerning student inquiries as a guide for mathematical investigations, does not apply to the ICL curriculum).  One to four points are assigned to each question; the higher the score, the more collaborative the educational environment.  This particular teacher encourages students within the ICL classes to help other students, as there may be equipment issues as well as mathematical issues to deal with.  The score for this survey was 33 out of a possible 44 points, which would indicate the class is fairly collaborative in nature, with students assisting in the education process.

Discussion: 

The evidence on the first conjecture, whether the ICL computer education system is still producing improvement in student test scores, is not conclusive.  The middle school, using the classes as remediation for students, showed some promise and is continuing the program for an additional year.  The actual test scores do not completely support the middle school’s decision; however, this will be reviewed again after the 2006-07 school year.  Given this result, and the fact that two other schools have dropped the use of ICL because of its lack of effect and the ongoing maintenance costs associated with the software, it is surprising that the software is still being considered for the 2007-08 school year.  Some possible reasons behind the reduced influence of the ICL program may relate to the availability of computer equipment within households at or below the poverty level.  The novelty of using the computer has changed: students have daily access to email, chat rooms, and audio and video files, even in Title 1 schools.  More than mere participation in a class using computers is needed to provide the motivation.

Conclusions

This leads to the survey results.  In prior studies by the vendor (ICL website), motivation and confidence, especially within lower-SES schools, were the main reasons for the initial success of ICL.  The motivation and confidence scores for students in the present ICL classes are not substantially different from those of students in regular math classes.  The enjoyment of the class and the value the students place on mathematics are greater; however, the big gains attributed to increased confidence from not being pre-judged within the mathematics classroom appear to have dissipated.  Again, this may be due to greater access to computer equipment both in and out of school.  Taking the ICL class is definitely no longer seen as a privilege.

Further study is needed to examine the reasons behind the reduced survey scores in motivation and confidence.  Are they related to the type of class, i.e., whether it is offered for regular credit or as an elective credit?  How are the students chosen for the class?  Does this affect their viewpoint on mathematics?

 

References

Anohina, A. (2005). Analysis of the terminology used in the field of virtual learning. Educational Technology & Society, 8 (3), 91-102.

 

Baker, E. L., Gearhart, M., & Herman, J. L. (1994). Evaluating the Apple Classrooms of Tomorrow. In E. L. Baker & H. F. O’Neil, Jr. (Eds.), Technology assessment in education and training. Hillsdale, NJ: Lawrence Erlbaum.

Becker, J. P. E., & Miwa, T. E. (1992). The United States-Japan Seminar on Computer Use in School Mathematics: Proceedings (Honolulu, Hawaii, July 15-19, 1991).

 

Blass, B., et al. (1992). The Use of Computers in the Math Classroom.

 

Brown, F. (2000). Computer-assisted Instruction in Mathematics Can Improve Students' Test Scores: A Study.

 

Button, G. (1997). Algebra without blackboards. Forbes, 160(6), 94-96.

 

Carnegie Learning Offers Free Trial; Report Shows Cognitive Tutor Increases Miami-Dade's FCAT Scores. (2004). T.H.E. Journal, 32(2), 10.

 

Cognitive Tutor[R]. What Works Clearinghouse Intervention Report. (2001). What Works Clearinghouse. Retrieved July 16, 2006 from http://www.whatworks.ed.gov/Intervention.asp?iid=13&tid=03&pg=topic.asp.

 

Creswell, J. W. (2003). Research Design: Qualitative, Quantitative and Mixed Methods Approaches. Sage Publications.

 

Curtis, K. (2006). Improving Student Attitudes: A Study of Mathematics Curriculum Innovation. Doctoral thesis.

 

I CAN Learn[R] Mathematics Curriculum. What Works Clearinghouse Intervention Report. (2004). What Works Clearinghouse. Retrieved from http://www.whatworks.ed.gov/Intervention.asp?iid=14&tid=03&pg=topic.asp.

 

I CAN Learn[R]. Corporate website. Retrieved July 16, 2006 from http://www.icanlearn.com/about/mission.asp

 

Kulik, J. A. (1994). Meta-analytic studies of findings on computer-based instruction. In E. L. Baker & H. F. O'Neil, Jr. (Eds.), Technology assessment in education and training. Hillsdale, NJ: Lawrence Erlbaum.

 

Mann, D., Shakeshaft, C., Becker, J., & Kottkamp, R. (1999). West Virginia's Basic Skills/Computer Education Program: An Analysis of Student Achievement. Santa Monica, CA: Milken Family Foundation.

 

News briefs. (1999). Electronic Education Report, 6(3), 10.

 

Northeastern Illinois University Glossary of Educational Terms. Computer-Assisted Instruction. Retrieved July 2006, from http://www.neiu.edu/~dbehrlic/hrd408/glossary.htm#c

 

Murray, C. (2005, May 12). Officials freeze 'I CAN Learn'. eSchool News. Retrieved from http://www.eschoolnews.com/news/showStory.cfm?ArticleID=5665

 

Scardamalia, M., & Bereiter, C. (1996). Computer support for knowledge-building communities. In T. Koschmann (Ed.), CSCL: Theory and practice of an emerging paradigm. Mahwah, NJ: Erlbaum.

 

Schacter, J. (1999). The Impact of Education Technology on Student Achievement: What the Most Current Research Has to Say. Santa Monica, CA: Milken Exchange on Education Technology.

 

Sivin-Kachala, J. (1998). Report on the effectiveness of technology in schools, 1990-1997. Software Publishers Association.

 

Stylianou, D. A., & Shapiro, L. (2002). Revitalizing algebra: The effect of the use of a cognitive tutor in a remedial course. Journal of Educational Media, 27(3), 147.

 

Tapia, M., & Marsh, G. (2000). Attitudes toward Mathematics Instrument: An Investigation with Middle School Students. Paper presented at the Annual Meeting of the Mid-South Education Research Association, Bowling Green, KY, November 15-18, 2000.

 

Thomas, D. A., & Thomas, R. A. (1999). Discovery Algebra: Graphing Linear Equations. Mathematics Teacher, 92(7), 569-572.

 

University of New Orleans Glossary of Educational Terms. Computer-Assisted Instruction. Retrieved July 2006, from http://alt.uno.edu/glossary.html

 

Wenglinsky, H. (1998). Does it compute? The relationship between educational technology and student achievement in mathematics. Educational Testing Service Policy Information Center.

 

What Works Clearinghouse (2006). Website: http://www.whatworks.ed.gov/

 

Wikipedia. Computer-Assisted Instruction. Retrieved July 2006, from http://en.wikipedia.org/wiki/Computer-Assisted_Instruction

 


 

APPENDIX A: STUDENT SURVEY RESULTS

Average agreement by item (1 = strongly disagree, 5 = strongly agree), ICL classes vs. regular math classes.

Item   ICL    Regular   Diff.   Statement

Motivation (subscale average: ICL 2.61, Regular 2.42, Diff. 0.19)
23     3.05   2.30      0.75    I am confident that I could learn advanced math.
28     2.42   3.05     -0.63    I would like to avoid using math in college.
32     2.26   2.20      0.06    I am willing to take more than the required amount of math.
33     2.85   2.25      0.60    I plan to take as much math as I can during my education.
34     2.44   2.30      0.14    The challenge of math appeals to me.

Confidence (subscale average: ICL 3.00, Regular 2.85, Diff. 0.15)
9      2.47   2.75     -0.28    Math is one of my most dreaded subjects.
10     3.15   3.00      0.15    My mind goes blank and I am unable to think clearly when working with math.
11     2.85   2.95     -0.10    Studying math makes me feel nervous.
12     2.89   2.95     -0.06    Math makes me feel uncomfortable.
13     2.89   3.10     -0.21    I am always under a terrible strain in math class.
14     2.90   3.05     -0.15    When I hear the word math, I have a feeling of dislike.
15     3.33   3.42     -0.09    It makes me nervous to even think about having to do a math problem.
16     3.10   2.76      0.33    Math does not scare me at all.
17     2.90   2.70      0.20    I have a lot of self-confidence when it comes to math.
18     3.10   2.40      0.70    I am able to solve math problems without too much difficulty.
19     3.15   2.55      0.60    I expect to do fairly well in any math class I take.
20     3.18   2.95      0.23    I am always confused in my math class.
21     3.11   2.75      0.36    I feel unsure when attempting math problems.
22     3.00   2.53      0.47    I learn math easily.

Enjoyment (subscale average: ICL 2.99, Regular 2.50, Diff. 0.49)
3      3.20   2.81      0.39    I feel great when I solve a math problem.
24     2.61   2.16      0.45    I have usually enjoyed studying math in school.
25     2.84   2.63      0.21    Math is dull and boring.
26     2.74   2.40      0.34    I like to solve new problems in math.
27     3.20   2.75      0.45    I would prefer to do a math assignment than to write an essay.
29     2.68   2.25      0.43    I really like math.
30     2.32   2.55     -0.23    I am happier in a math class than in any other class.
31     3.29   2.45      0.84    Math is a very interesting subject.
37     3.60   2.55      1.05    I am comfortable expressing my own ideas on how to look for solutions to a difficult problem in math.
38     3.42   2.45      0.97    I am comfortable answering questions in math class.

Value (subscale average: ICL 3.57, Regular 2.91, Diff. 0.67)
1      4.05   3.00      1.05    Math is a very worthwhile and necessary subject.
2      3.95   3.67      0.29    I want to develop my math skills.
4      3.15   2.62      0.53    Math helps develop my mind and teaches me to think.
5      4.10   3.24      0.86    Math is important in everyday life.
6      3.57   3.10      0.48    Math is one of the most important subjects for people to study.
7      3.79   3.05      0.74    College math courses would be very helpful no matter what I decide to study.
8      3.62   3.14      0.48    I can think of many ways that I use math outside of school.
35     2.90   2.19      0.71    I think studying advanced math is useful.
36     2.94   2.90      0.04    I believe studying math helps me with problem solving in other areas.
39     3.67   2.16      1.51    I believe I am good at solving math problems.

Overall
Average              3.09   2.72   0.38
Standard deviation   0.46   0.37


APPENDIX B: STUDENT GATEWAY SCORES

Gateway math scores (passing score = 480). A dash marks a missing score; students missing either year are excluded (NA) from the summary statistics.

Student #   2005     2006     Difference   % Change
1           435      477       42           9.7%
2           --       492       NA           NA
3           --       481       NA           NA
4           450      459        9           2.0%
5           441      454       13           2.9%
6           459      488       29           6.3%
7           459      488       29           6.3%
8           454      481       27           5.9%
9           454      467       13           2.9%
10          446      495       49          11.0%
11          --       488       NA           NA
12          466      485       19           4.1%
13          --       495       NA           NA
14          435      481       46          10.6%
15          463      --        NA           NA
16          --       481       NA           NA
17          470      --        NA           NA
18          --       481       NA           NA
19          477      --        NA           NA
20          386      450       64          16.6%
21          470      --        NA           NA
22          430      481       51          11.9%
23          474      488       14           3.0%
24          470      467       -3          -0.6%
25          --       463       NA           NA
26          474      467       -7          -1.5%
27          474      467       -7          -1.5%
28          459      --        NA           NA
29          446      471       25           5.6%
30          --       459       NA           NA
31          --       428       NA           NA
32          454      404      -50         -11.0%
33          474      481        7           1.5%
34          370      421       51          13.8%
35          466      440      -26          -5.6%
36          477      481        4           0.8%
37          466      440      -26          -5.6%
38          --       421       NA           NA

Summary (n = 23 students with both scores)
Average     450.43   466.65    16.22        3.9%
Max                            64          16.6%
Std. Dev.                      28.26        6.6%