COMPUTER-AIDED SELF-ASSESSMENT
AND INDEPENDENT LEARNING IN HIGHER EDUCATION
Peter Morris and Shane Dowdall
Dept. Computing & Mathematics, Dundalk Institute of Technology, Dundalk, Ireland
Keywords: Mathematics software, Computer-aided assessment, Self-assessment for learning, Independent learning,
Mathematics fundamentals.
Abstract: This paper outlines the process and evaluates the effectiveness of introducing software for self-assessment
and independent learning in a Mathematics module. Many students identify mathematics as a problem area,
and lecturers must maintain standards and meet learning outcomes for their modules. With limited
resources, difficulties arise from increasing student numbers and a more diverse cohort. The aim of this
study was to enable students to learn independently, reduce mathematical anxiety, and improve self-efficacy
and competencies in mathematics, through the use of technology. Software, consisting of visual tutorials
and online assessments, was introduced to a Mathematics module on a first year undergraduate degree
programme. Students could take and retake online assessments given within supervised technology-led
sessions, with their best result recorded. The advantages of this include improved accessibility, alternative
teaching styles, self-paced tutorials, timely automated feedback and self-assessment for learning. The
problems encountered are highlighted and solutions suggested which may have relevance to mathematics
lecturers and learning support units. Our research findings show that the aims of the initiative were broadly
met. Notably, the initiative enabled most students to bridge the gap between their expected and actual level
of mathematical competency, and improved mathematical self-efficacy for identified groups of students.
1 PROJECT OUTLINE AND AIMS
In a diverse student cohort, there is a mixed level of
competency in fundamental subject areas such as
mathematics. The level of mathematical competency
expected by the lecturer is not always met by the
students. Lecturers must maintain standards and
meet the learning outcomes for the modules they
teach, leaving little time to raise students’
competencies in these fundamentals.
The majority of students entering 1st year of the
computing degree programmes in this study have
come directly from secondary school. Mixed with
these are mature students (many of whom have not
studied mathematics for many years), international
students (many of whom have language difficulties)
and students entering from further education.
Pajares (1996a) lists many variables that
contribute to achievement in mathematics including
mathematics self-efficacy and mathematics anxiety.
Self-efficacy refers to the student’s self-belief in
their ability to solve mathematical problems. One
investigation “revealed that there is a strong positive
relationship between mathematics self-efficacy and
achievement in mathematics” (Ayotola and Adedeji,
2009, p. 956), a finding supported by Lent and
Hackett (1987), Hackett (1985) and Pajares (1996b).
Negative past experiences in mathematics can
lead to emotional responses to mathematical
problems, known as mathematical anxiety, which
can reduce the student’s future mathematical
achievement. According to Hoffman (2010)
mathematics anxiety “may impede mathematics
performance irrespective of true ability”. This is
supported by Aiken (1970) and Ashcraft (2002,
2005).
The central question for this inquiry was whether
we could improve the knowledge of mathematical
fundamentals, for a diverse 1st year student cohort,
through the introduction of software that
incorporates self-directed online tutorials and
assessments for learning. To achieve this aim, the
following objectives were outlined:
a. Inform the incoming students of their own
understanding of mathematical fundamentals
b. Enable students who have already achieved a
proficiency to be assessed quickly
c. Build students’ confidence in maths, improve
self-efficacy and reduce mathematical anxiety
d. Improve engagement in mathematics
e. Promote independent learning
f. Offer flexible timing of assessments to students,
enabling students to dedicate more time to the topics
they find most difficult
g. Analyse the effect of the initiative on students
2 METHODOLOGY
Understanding Mathematics, software by Matrix
Multimedia, which includes self-assessment for
learning, was introduced to the students. This
enabled them to learn some mathematical
fundamentals, in their own time, independently of
the lecturer, through visual tutorials and cumulative
assessments.
The approach used by the software is in line with
the suggested methodologies for learning through
multimedia as proposed by Alessi & Trollip (2001,
p. 89). It also adheres to the Contiguity Principle and
Multiple Representation Principle as proposed by
Mayer and Moreno (1998). Demir and Kiliç (2009)
found that using a computer had a positive effect on
maths achievement for some students.
The students had access to the software in a
computer lab, whenever the room was available, and
also during optional weekly 2-hour supervised
sessions. The researchers supervised these sessions
for the purpose of monitoring and recording of
assessments but not to teach the topics. Students
took assessments as often as they wished, with only their best mark used. 20% of the module’s overall
mark was dedicated to the completion of three
cumulative assessments (6% each), and a final
diagnostic test (worth 2%).
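As a simple check, the stated weights sum to the full continuous-assessment allocation:

\[ 3 \times 6\% + 2\% = 20\%. \]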
To facilitate self-study and self-evaluation, the
students chose the pace at which they learned, when
they completed the assessment and the assessment
duration. Assessment for learning was facilitated as
students could also practise the assessments before
being formally assessed. Also, assessments could be
taken many times, and feedback following an
assessment was delivered in private, by the
computer, with no need for results to be revealed to
their peers or their lecturer.
Topics all included an animated introduction,
visual tutorials, worksheets and a final assessment.
Students could jump ahead to any tutorial within the topic, or take the final assessment, at any time.
The assessment was based on that topic’s tutorials
and results were presented on screen immediately.
When students declared that they wished to take
formal assessments, they were supervised taking
final assessments in a formal examination setting.
The students completed a diagnostic test (Test 1)
before the software was introduced, the result of
which informed them of their level of ability in the
topics covered by the software. A second diagnostic
test (Test 2) was administered in the last week of the
semester and the two tests were compared to
evaluate any change that occurred in their
competency.
Two surveys were used in the initiative. The first
survey was to gauge the students’ feelings about
their results in the first diagnostic test and to act as a
motivator for students to engage with the initiative.
The second survey was taken during the final three
weeks of the initiative to capture the students’ views
on the software, the effectiveness of the initiative,
and any perceived changes in levels of mathematical
self-efficacy and maths anxiety.
Students also completed a personal reflective
journal on the initiative. An external independent
evaluator anonymously summarised the journals.
3 FINDINGS AND ANALYSIS
Students who spent their time during supervised
sessions completing the visual tutorials and
practising assessments did not require any
supervision. Queries from students mainly related to
the software, and whether it was correctly marking
their answers. The software provided corrective
feedback, engaging the student in independent
learning and assessment for learning, as described
by Petty (2006) and Hattie (1999). As the feedback
was delivered personally to the individual student it
should be of particular benefit to the “weakest
learners”, as detailed by Black & Wiliam (1998)
and Bennett (1999). As students repeatedly practise
the assessments independently, with feedback
provided, computer-aided self-assessment is
facilitated. As a result, students should become
“realistic judges of their own performance” (Boud,
1995, p. 13).
Some issues, including browser compatibility
and on-campus only licensing agreements, impacted
negatively on the software’s accessibility and the
objective of promoting independent learning.
3.1 Survey 1: Response to Test 1
Survey 1 encouraged students to reflect on their first
Diagnostic Test (Test 1) result and gauged their
motivation to engage with independent learning.
47% of responding students were “surprised with
their result”, with about half of these expecting to
have performed better, and half worse. This result
may indicate levels of mathematical self-efficacy.
Students subsequently rated their placement
within the class. Over half of the students felt they
were “about average”, 30% felt they were “above
average” and less than 20% felt “below average”.
Students were then asked to gauge if they valued
mathematics fundamentals. On a 5-point rating scale
from 1 “very important” to 5 “not at all important”,
31 of 40 students valued mathematics as 1 or 2.
Finally, students rated their motivation for
learning independently. No-one stated that their
result did “not act as an incentive”. 15 of 40
respondents stated it acted “as a big incentive”.
The researchers feel that the combination of
diagnostic test and survey informed students of their
initial mathematics competency and motivated them
to engage in the rest of the initiative.
3.2 Comparison of Diagnostic Tests
The results from Diagnostic Tests 1 and 2 were
analysed. For the 48 students who sat both tests:
1. The average mark rose from 20.1 to 24.52 (out of 36), an increase of approximately 12% of the total marks available.
2. The hypothesis that “the change is due to
chance” was rejected at the 95% level. The 95%
confidence interval for the mean of the differences
is: (2.05, 4.78).
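For reference, this interval corresponds to the standard paired-sample construction, sketched here under the usual assumptions, with the differences taken as Test 2 minus Test 1:

\[ \bar{d} \pm t_{0.025,\,47}\,\frac{s_d}{\sqrt{48}}, \]

where \( \bar{d} \) is the mean of the 48 per-student differences and \( s_d \) their standard deviation.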
The achievement of the 48 students is broken down into four groups, A, B, C and D (Figure 1).
Group A’s (22 students) “difference in marks”
was approximately normally distributed with an
average of 1 mark and a standard deviation of 1.604
marks. A paired t-test rejected the hypothesis that “this improvement was due to chance” at the 95% level. So, we conclude there was an improvement for these students, but only a small one.
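As a rough check (assuming a two-sided test), the reported Group A figures (mean difference 1 mark, standard deviation 1.604 marks, n = 22) imply a paired t-statistic of approximately

\[ t = \frac{\bar{d}}{s_d/\sqrt{n}} = \frac{1}{1.604/\sqrt{22}} \approx 2.92 > t_{0.025,\,21} \approx 2.08, \]

which is consistent with rejecting the no-change hypothesis at the 95% level while the average gain itself remains small.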
Of the students who scored 20 marks or fewer in Test 1, 16 (61.5%), Group B, obtained 20 or more marks in Test 2. While there is
still room for improvement, we might consider this
group to have improved to a competent level.
There were 10 students who scored 20 marks or fewer in both tests. However, 6 of these (Group C) improved by at least 5 marks (out of 36), which might be considered a reasonable improvement even though they did not reach the competency levels of those in Group B.
This leaves 4 students, Group D, who do not
belong in Groups A, B or C. Three of these actually
did worse in Test 2. The initiative did not appear to
help these students.
We note that 38 of the 48 students (Groups A
and B) achieved 20 or more marks in Test 2. So, by
the end of the initiative, nearly 80% of the students
might now be considered competent. Groups A, B
and C constitute nearly 92% of the students.
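These proportions follow directly from the group sizes reported above:

\[ \frac{22 + 16}{48} \approx 79\%, \qquad \frac{22 + 16 + 6}{48} \approx 92\%. \]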
Figure 1: Summary of improvements.
3.3 Student Feedback
40 students completed an online survey during the
last three weeks of the initiative to provide feedback
on the effectiveness of the initiative. Asked whether their understanding of the mathematics topics had improved,
nearly 50% responded that the software helped them
“a small amount”. 38% were more positive. Only
13% stated that they did not improve, or were “not
sure” of any improvement.
63% felt that using the software made them less fearful of mathematics. 15% (6) stated the software “helped a lot”, and 28% (11) that “it has helped” them become less fearful. 34% of responses (14) were negative, which might indicate either that these students did not reduce their anxiety or that they may never have suffered from anxiety.
In response to a question regarding maths self-
efficacy, 29% (12) found that their confidence in
doing maths problems improved “a small amount”.
41% (16) felt that the software “helped” or “helped a lot”, but 25% saw no improvement.
Use of the software outside the scheduled
supervised classes was limited, with 28 students, or
70% of respondents, having used it only once, or
never, outside of class time. Half of respondents (20)
felt they did not have enough access to the software.
On a 5-point rating scale, with 1 as “very easy” and
5 as “very difficult”, 45% of respondents (18) found
the software easy to use (rating 1 or 2), while just 17.5% of respondents (7) found it difficult (rating 4 or 5).
An anonymous reflective journal on the initiative
was completed by the students and independently
evaluated. One student stated “I liked the fact you
could re-do the test as many times as you like...also
the fact that you could practise on your own time…
My mathematical ability was not good but has
improved a lot as the semester progressed.”
3.4 Summary of Findings
From our evaluation, we concluded that there was an
improvement in the mathematical competency of the
students, and particular improvements for some of
the weaker students (Groups B and C).
The use of assessment for learning ensured that
the assessments were not being performed “under
timed, high-stakes conditions” (Ashcraft & Moore,
2009). This should have helped to reduce maths
anxiety. Survey 2 found that students benefited through reduced mathematical anxiety and improved self-efficacy, alongside an improvement in mathematical achievement, a link supported by Pajares (1996a).
4 ACTION IMPLICATIONS
During and following the initiative, the researchers
recorded their reflections on challenges overcome.
The decoupling of the sessions from the module was
considered to be beneficial to the delivery of the
module. There was a positive effect on the students’
feelings towards the module and the degree. There
was a strong positive effect on mature students with
low maths self-efficacy. Attendance at the supervised sessions was at times low. Some students completed assessments for all three topics in a single session, allowing those who were proficient to proceed quickly.
Several recommendations regarding the software can be drawn from the initiative, including ensuring early installation, providing student access from home within a virtual learning environment, and using video tutorials to introduce the software. Future
considerations include embedding the initiative
within the module, or delivering it through a
Mathematics Learning Centre, and expanding to any
1st year programme with a Mathematics module and
to outreach and life-long learning programmes.
REFERENCES
Aiken, L. R., Jr. (1970) Attitudes towards mathematics.
Review of Educational Research, 40, 551-596.
Alessi, S. M. and Trollip, S. R. (2001) Multimedia for
Learning: Methods and Development. 3rd Edition,
Massachusetts: Allyn and Bacon.
Ashcraft, M. H. (2002) Math anxiety: Personal, educational, and cognitive consequences. Current Directions in Psychological Science, 11(5), 181-185.
Ashcraft, M. H. (2005) Math anxiety and its cognitive consequences: A tutorial review. In J. I. D. Campbell (Ed.), The handbook of mathematical cognition (pp. 315-330). NY: Psychology Press.
Ashcraft, M. H. and Moore, A. M. (2009) Mathematical anxiety and the affective drop in performance. Journal of Psychoeducational Assessment, 27(3), 197-205.
Ayotola, A. and Adedeji, T. (2009) The relationship
between mathematics self-efficacy and achievement in
mathematics. Procedia Social and Behavioral
Sciences, 1, 953-957.
Bennett, F. (1999) Computers as Tutors: Solving the
Crisis in Education. Faben Inc.
Black, P. J. and Wiliam, D. (1998) Assessment and Classroom Learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7-74.
Boud, D. (1995) Enhancing Learning Through Self-
Assessment, Kogan Page.
Demir, I. and Kiliç, S. (2009) Effects of computer use on
students’ mathematics achievement in Turkey.
Procedia Social and Behavioral Sciences, 1, 1802-
1804.
Hackett, G. (1985) The role of mathematics self-efficacy
in the choice of mathematics related majors of college
women and men: A path analysis. Journal of Counseling Psychology, 32, 47-56.
Hattie, J. (1999) Influences on student learning. Inaugural
Lecture: Professor of Education University of
Auckland. Retrieved March 30, 2010, from
http://www.geoffpetty.com/downloads/WORD/
Influencesonstudent2C683.pdf
Lent, R. W. and Hackett, G. (1987) Career self-efficacy:
empirical status and future directions. Journal of Vocational Behavior, 30, 347-382.
Mayer, R. E. and Moreno, R. (1998). A Cognitive Theory
of Multimedia Learning: Implications for Design
Principles. ACM SIGCHI conference on Human
Factors in Computing Systems, Los Angeles, CA, 22,
1-10. Retrieved from http://citeseerx.ist.psu.edu/
viewdoc/download?doi=10.1.1.105.5077&rep=rep1&t
ype=pdf
Pajares, F. (1996a) Self-efficacy beliefs and mathematical
problem-solving of gifted students. Contemporary
Educational Psychology, 21, 325-344.
Pajares, F. (1996b) Self-efficacy beliefs in academic
settings. Review of Educational Research, 66, 543-
578.
Petty, G. (2006) Evidence-based teaching: a practical
approach. Cheltenham: Nelson Thornes.