An Intelligent Tutor for High School Chemistry

Learning Engineer
2022.1 - 2022.7
Overview
Independent Study Supervised by Prof. Bruce McLaren
An intelligent tutoring system (ITS) is a computer system that provides stepwise instruction for learners to practice problem-solving skills without human intervention (Han et al., 2019). What differentiates an ITS from other computer-based learning tools is its adaptability: the tutor should respond to learners' real-time actions and give hints or feedback based on their knowledge, or even their affective (Malekzadeh et al., 2015) and metacognitive states (Koedinger et al., 2009).
ITS is still a new concept in China, with very little research on its practical application and evaluation (Zhang & Jia, 2017). To explore the potential of ITS in a different context, this study evaluates the effectiveness of a chemical equilibrium calculation ITS for Chinese high school students. Specifically, the tutor is designed for practicing the "3-row" strategy.
Typically, students need to fill in some of the cells in the 3-row table using the given information, then infer the values in other cells mainly applying the rule that the converting amount is proportional to the coefficients in the chemical equation. If the values cannot be calculated directly, students need to set unknowns and use indirect information to solve the equation. Finally, they will use the values in the table to answer the question about conversion rates, reaction rates, concentration, or amount of a certain substance in the system.
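The bookkeeping behind the 3-row table can be sketched in a few lines of Python. This is a minimal illustration, not tutor code; the reaction N2 + 3H2 → 2NH3 and all numbers below are assumptions for the example.

```python
# Sketch of the 3-row table: given one known converted amount, the converted
# row is filled proportionally to the coefficients, and the final row is
# initial minus converted for reactants, initial plus converted for products.

def three_row(initial, coeffs, known_substance, known_converted, reactants):
    """Fill the converting and final rows of the 3-row table.

    initial / coeffs: dicts keyed by substance
    known_converted:  converted amount of known_substance
    reactants:        substances consumed by the reaction
    """
    ratio = known_converted / coeffs[known_substance]
    converted = {s: ratio * c for s, c in coeffs.items()}  # proportional to coefficients
    final = {s: initial[s] - converted[s] if s in reactants
             else initial[s] + converted[s]
             for s in initial}
    return converted, final

# Example: 1 mol N2 and 3 mol H2 initially; 0.25 mol N2 is converted.
converted, final = three_row(
    initial={"N2": 1.0, "H2": 3.0, "NH3": 0.0},
    coeffs={"N2": 1, "H2": 3, "NH3": 2},
    known_substance="N2", known_converted=0.25,
    reactants={"N2", "H2"},
)
```

From the table the student can then read off whatever the question asks for (conversion rate, concentration, and so on).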
Method
Description
Cognitive Task Analysis (Think-aloud and follow-up interview)
Participants:
1 Expert and 6 Novice Students
Purpose:
To specify the cognitive model of both the ideal solution and the novices’ common mistakes, so the design of the ITS, especially the hints and the feedback, can be aligned with the learning goal and effective for learning.
UI Design & Technical Implementation
Process:
Developed the interfaces using CTAT HTML Editor.
Developed the functions using CTAT nools language.
Deployed the problem set on TutorShop.
Classroom Study
Participants:
68 11th grade students from a public high school in Shenzhen, China
Purposes:
To collect student performance data and observe the learning behaviors.
To collect feedback from students on their experience interacting with the tutor.
Data Analysis
Purposes:
To measure the learning gain by doing a pre-post comparison.
To propose possible redesign plans based on the learners' feedback.

The Learner-Centered Research

The CTAs started with think-aloud instructions; the participants then solved several problems and answered follow-up questions to clarify their thinking on specific steps.
For the main flow of the strategy, the novices did as predicted, using the 3-row tables to solve the problems. What makes a novice different from an expert is that novices are more likely to make minor mistakes due to insufficient fluency and accuracy.
Though the use of the 3-row table makes the solution seem scaffolded, there are many possible variations in the selection of units, the filling order, the setting of unknowns, and the skipping of steps. These all informed the design decisions of the ITS.
Note that the novice pathways are not what all novices do, but the mistakes that novices are likely to make.

UI Design & Technical Implementation

The assets are loaded dynamically for each problem; the number of columns in the table always corresponds to the number of substances in the equation.
Students are required to select the table unit from the drop-down menu on the left before filling in cells in the table. The drop-down menu was placed next to the table following the contiguity principle.
When it is unnecessary to set an unknown in a certain cell, a message tells the student that the information can be found in the problem statement.
When the student sets an unknown for the first time, the tutor generates the correct expression for all other cells accordingly. Then the student needs to use the same letter consistently. When the student writes other expressions, the input is compared algebraically with the tutor-generated correct expression.
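One way to compare a student's expression against the tutor-generated one is to test algebraic equivalence numerically, by evaluating both at random values of the unknown. The sketch below is a hypothetical Python approximation of that check, not the actual nools implementation:

```python
import random

def equivalent(expr_a, expr_b, var="x", trials=20):
    """Heuristically test whether two expressions in one unknown are
    algebraically equivalent by sampling random values of the unknown.
    Illustrative only; uses eval with an empty builtins namespace."""
    for _ in range(trials):
        value = random.uniform(0.1, 10.0)
        ns = {var: value}
        a = eval(expr_a, {"__builtins__": {}}, ns)
        b = eval(expr_b, {"__builtins__": {}}, ns)
        if abs(a - b) > 1e-9:
            return False
    return True
```

For example, `equivalent("2*x", "x + x")` accepts a rewritten but correct expression, while `equivalent("2*x", "3*x")` rejects a coefficient mistake.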
Modeled error 1: the converted value/expression does not follow the coefficient ratio.
E.g., when the student sets the converted amount of N2 as x but incorrectly writes the expression for NH3 as x instead of 2x, the tutor reminds the student to pay attention to the coefficient of NH3.
Modeled error 2: for a reactant, the student incorrectly adds the initial and the converted amount/concentration to get the final amount/concentration.
E.g., when N2 is a reactant but the student writes "1+a" instead of "1-a" as the expression for the final amount of N2, the tutor gives an error message saying that the final amount of a reactant is the initial amount minus the converted amount.
The analogous check, with addition instead of subtraction, applies to the products.
Modeled error 3: the value/expression does not match the table unit the student selected.
E.g., when the student selects "mol/L" as the unit of measure but enters 1, which is the amount of N2 in mol, the tutor gives an error message saying that the input does not correspond to the selected table unit.
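The unit check amounts to comparing the input against the value implied by the chosen unit (concentration = amount / volume). A hedged sketch, with the helper name, substances, and numbers made up for illustration:

```python
# Illustrative check for modeled error 3: the entered value must match the
# selected table unit. Hypothetical helper, not tutor code.

def expected_value(amount_mol, volume_l, unit):
    """Return the value the tutor expects for the chosen table unit."""
    if unit == "mol":
        return amount_mol
    if unit == "mol/L":
        return amount_mol / volume_l  # concentration = amount / volume
    raise ValueError(f"unsupported unit: {unit}")

# Student selects mol/L but types 1 (the amount of N2) for a 2 L vessel:
# the expected value is 0.5 mol/L, so the tutor flags the mismatch.
flagged = 1.0 != expected_value(1.0, 2.0, "mol/L")
```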
Modeled errors specific to different question types.
E.g., when the student is asked to calculate the conversion rate of CO and writes 25 instead of the correct answer 75, the tutor gives an error message saying that the converted amount/concentration, not the final amount/concentration, should be divided by the initial value.
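The 25-versus-75 confusion can be made concrete with a one-line formula; the CO quantities below are assumptions chosen so the numbers match the example above:

```python
def conversion_rate(initial, converted):
    """Conversion rate (%) = converted amount / initial amount * 100."""
    return converted / initial * 100

# Assume 1 mol CO initially, 0.75 mol converted, 0.25 mol remaining.
correct = conversion_rate(1.0, 0.75)   # 75.0: converted / initial
mistake = conversion_rate(1.0, 0.25)   # 25.0: dividing the FINAL amount instead
```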

Classroom Study

To measure the effectiveness of the ITS, the tutor was deployed on TutorShop, and the pre- and post-tests were embedded in the problem set. The problems differed between the two tests but were carefully selected so that the difficulty level was similar and the solutions were analogous.


The problem set was assigned to 78 students as homework. Students were given deidentified accounts and a detailed instruction file on how to log in and find the assignment. After 5 days, 12 students had not finished due to technical issues. An optional follow-up online survey was sent to the students, and 15 gave feedback about their experience interacting with the ITS.

This study has been reviewed and granted approval by the Carnegie Mellon University Institutional Review Board (IRB) under Exempt Review on 5/26/2022, in accordance with 45 CFR 46.104.d.1. The IRB ID of this study is STUDY2021_00000509.

Data Analysis & Results

The transaction data showed that, with the prompt in the instructions, the students were able to use the unknown-setting function. The average pre-test score is 2.75 out of 4, indicating the students had relatively high prior knowledge. The average post-test score is 2.25 out of 4, lower than the pre-test average. The t-test gives p = 0.0166, indicating the post-test results are significantly worse than the pre-test results.
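The pre/post comparison can be reproduced with a paired t statistic, since each student contributes both a pre- and a post-test score. A hedged sketch in plain Python; the score lists are illustrative, not the study's raw data:

```python
from math import sqrt
from statistics import mean

def paired_t(pre, post):
    """Paired t statistic over per-student differences (post - pre)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    d_bar = mean(diffs)
    # Sample standard deviation of the differences
    sd = sqrt(sum((d - d_bar) ** 2 for d in diffs) / (n - 1))
    return d_bar / (sd / sqrt(n))

# Made-up scores for eight students (out of 4), pre then post:
pre = [3, 4, 2, 3, 4, 3, 2, 4]
post = [2, 3, 2, 3, 3, 2, 2, 3]
t = paired_t(pre, post)  # negative t: post-test scores are lower
```

In practice a library routine such as `scipy.stats.ttest_rel` would also return the p-value directly.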

Pre-test score                                 |   0    |   1    |   2    |   3    |   4
Average learning gain (post minus pre)         |  2.40  |  1.28  |  0.14  | -0.96  | -1.50
Average time for the whole problem set (min)   |   65   |   45   |   53   |   60   |   40
Learning gain / time                           | 0.0369 | 0.0284 | 0.0026 | -0.016 | -0.0038

A closer look at the grouped averages shows that the tutor had a more positive effect on low-performing students. Despite the negative overall learning gain, students reported positive attitudes towards the experience. Several possible reasons for the negative learning gain are listed below.
First, the questions in the test problems are asked less straightforwardly than in the practice problems, and the pre- and post-tests contain more detailed descriptions than the practice problems. The practice part of the ITS did not model errors related to reading ability.
Second, students with higher prior knowledge (who got higher pre-test scores) may have been less careful in the post-test. As suggested by previous work, affect plays a crucial role in learning processes, but this aspect was not considered in this study.
Due to the limitations of the remote setting and timing, the students' learning behavior was not closely observed. Future work needs to be done on fixing the technical problems, understanding student difficulties, and redesigning the tutor to enhance learning effectiveness.

The Cognitive Tutor Authoring Tools (CTAT) is a suite created by Carnegie Mellon University for building web-based ITS (Aleven & Sewall, 2010). It enables the development of two types of tutors: cognitive tutors (rule-based tutors) and example-tracing tutors. The latter allows authors to build a tutor by demonstrating the solution paths without coding, though with limited flexibility. Rule-based tutors require programming skills to develop but can achieve a higher level of flexibility (Aleven, 2010). The nools language used to code this kind of tutor follows an IF-THEN format, which aligns with the production rules used to model procedural knowledge in the ACT-R theory. Instead of comparing students' actions against each step of a predefined solution, rule-based tutors first identify the situation in which the student makes an attempt, then generate all possible next steps under the given production rules. If the student's attempt matches any of the predicted steps, it is considered correct.
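The match-then-generate loop described above can be sketched as a toy production system. This is plain Python, not nools syntax, and the example rule and state keys are invented for illustration:

```python
# Toy model-tracing loop: each production is a (condition, generate) pair.
# A student attempt is correct if it matches any next step generated by a
# production whose IF-part holds in the current state.

def predicted_steps(state, productions):
    steps = set()
    for condition, generate in productions:
        if condition(state):                # IF part of the production
            steps.update(generate(state))   # THEN part: possible next steps
    return steps

# Illustrative rule: once a unit is chosen, the initial amount of N2
# may be filled in as the next step.
productions = [
    (lambda s: s.get("unit") == "mol",
     lambda s: {("initial", "N2", 1.0)}),
]

state = {"unit": "mol"}
attempt = ("initial", "N2", 1.0)
correct = attempt in predicted_steps(state, productions)
```

Because all next steps are generated from rules rather than enumerated in advance, the tutor can accept any valid solution path, which is the flexibility advantage noted above.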
TutorShop is a learner management system for intelligent tutoring software built with the Cognitive Tutor Authoring Tools. It is free for students and teachers, and for authors who use it for non-commercial research purposes.

References

Aleven, V. (2010). Rule-based cognitive modeling for intelligent tutoring systems. In Advances in intelligent tutoring systems (pp. 33-62). Springer, Berlin, Heidelberg.
Aleven, V., Mclaren, B. M., Sewall, J., & Koedinger, K. R. (2009). A new paradigm for intelligent tutoring systems: Example-tracing tutors. International Journal of Artificial Intelligence in Education, 19(2), 105-154.
Aleven, V., McLaughlin, E. A., Glenn, R. A., & Koedinger, K. R. (2016). Instruction based on adaptive learning technologies. Handbook of research on learning and instruction, 522-560.
Aleven, V., & Sewall, J. (2010, June). Hands-on introduction to creating intelligent tutoring systems without programming using the cognitive tutor authoring tools (CTAT). In Proceedings of the 9th International Conference of the Learning Sciences-Volume 2 (pp. 511-512).
Cao, J., Yang, T., Lai, I. K. W., & Wu, J. (2021). Student acceptance of intelligent tutoring systems during COVID-19: The effect of political influence. The International Journal of Electrical Engineering & Education, 00207209211003270.
Clark, R. E., Feldon, D. F., van Merriënboer, J. J., Yates, K. A., & Early, S. (2008). Cognitive task analysis. In Handbook of research on educational communications and technology (pp. 577-593). Routledge.
Clark, R. C., & Mayer, R. E. (2016). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning. John Wiley & Sons.
Ceiling effect (statistics). (n.d.). Wikipedia. Retrieved July 14, 2022, from https://en.wikipedia.org/wiki/Ceiling_effect_(statistics)
D’Mello, S. K., & Graesser, A. C. (2014). 31 Feeling, Thinking, and Computing with Affect-Aware Learning Technologies. The Oxford handbook of affective computing, 419.
Han, J., Zhao, W., Jiang, Q., Oubibi, M., & Hu, X. (2019, October). Intelligent tutoring system trends 2006-2018: A literature review. In 2019 Eighth International Conference on Educational Innovation through Technology (EITT) (pp. 153-159). IEEE.
Koedinger, K., Aleven, V., Roll, I., & Baker, R. (2009). In vivo experiments on whether supporting metacognition in intelligent tutoring systems yields robust learning. In Handbook of metacognition in education (pp. 395-424). Routledge.
Koedinger, K. R., & Anderson, J. R. (1993). Effective use of intelligent software in high school math classrooms.
Koedinger, K. R., Corbett, A. T., & Perfetti, C. (2012). The Knowledge‐Learning‐Instruction framework: Bridging the science‐practice chasm to enhance robust student learning. Cognitive science, 36(5), 757-798.
Koedinger, K., Cunningham, K., Skogsholm, A., & Leber, B. (2008, June). An open repository and analysis tools for fine-grained, longitudinal learner data. In Educational data mining 2008.
Malekzadeh, M., Mustafa, M. B., & Lahsasna, A. (2015). A review of emotion regulation in intelligent tutoring systems. Journal of Educational Technology & Society, 18(4), 435-445.
Nesbit, J. C., Adesope, O. O., Liu, Q., & Ma, W. (2014, July). How effective are intelligent tutoring systems in computer science education?. In 2014 IEEE 14th international conference on advanced learning technologies (pp. 99-103). IEEE.
Zhang, B., & Jia, J. (2017, June). Evaluating an intelligent tutoring system for personalized math teaching. In 2017 international symposium on educational technology (ISET) (pp. 126-130). IEEE.
Yang, X. (2021). Making the gaokao no longer unattainable: Breaking through the problem-solving bottleneck of "chemical reaction principles" questions. Chemistry Teaching and Learning (化学教与学), 558(6x), 65.
