Based on our preliminary findings, we visualized the system flow shown below. We learned that the AI-generated metrics were designed to represent the students' engagement level both instantaneously and on average. When the values drop, they call the teachers' or their supervisors' attention to what went wrong in the class, so the teachers can reflect on their practice and improve their skills over time.
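For illustration only: the sketch below shows one way an instantaneous engagement score could be compared against its own rolling average to flag a drop worth reviewing. The actual metric pipeline was not available to us, so the function name, window size, and threshold are assumptions.

```python
# Hypothetical sketch: flag minutes where the instantaneous engagement
# score falls well below the rolling average of the preceding window.
# The real system's metric design is not reproduced here.
from collections import deque

def flag_engagement_drops(scores, window=10, drop_ratio=0.7):
    """Yield (minute, score) pairs where the score falls below
    drop_ratio times the average of the previous `window` minutes."""
    recent = deque(maxlen=window)
    for minute, score in enumerate(scores):
        if len(recent) == window and score < drop_ratio * (sum(recent) / window):
            yield minute, score
        recent.append(score)

# Example: a class where engagement dips around minute 12.
scores = [0.8, 0.82, 0.79, 0.81, 0.8, 0.78, 0.8, 0.83, 0.81, 0.8, 0.79, 0.77, 0.4, 0.45]
print(list(flag_engagement_drops(scores)))  # flags minutes 12 and 13
```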
Interestingly, through the quantitative analysis we saw no strong correlation between the AI-generated metrics and the teachers' years of experience or the students' attitudes towards the teachers (reflected by student survey results). So we formed our research questions accordingly, and selected user-centered research methods that could help us better understand the users' underlying needs and confusions.
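A rough sketch of the kind of correlation check described above; the file path and column names are placeholders, not the project's actual data export.

```python
# Minimal correlation check between the AI-generated metric and two
# teacher-level variables. Column names are illustrative assumptions.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

df = pd.read_csv("teacher_metrics.csv")  # hypothetical export, one row per teacher

for col in ["years_of_experience", "student_attitude_score"]:
    r, p = pearsonr(df["ai_engagement_metric"], df[col])
    rho, p_s = spearmanr(df["ai_engagement_metric"], df[col])
    print(f"{col}: Pearson r={r:.2f} (p={p:.3f}), Spearman rho={rho:.2f} (p={p_s:.3f})")
```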
As the research lead, I led the literature review and the design of the protocols for the survey, the interviews, and the contextual inquiries. I also led the subsequent qualitative analysis: I suggested methods such as constructed response coding, which allowed us to extract the points commonly mentioned by the users and support our insights with strong evidence.
We used affinity diagramming and journey mapping to synthesize our findings, as shown above. From there, we summarized the opportunity space using "How Might We" statements to guide the ideation.
Starting from the opportunity space we defined in the research phase, we brainstormed 7 ideas and presented them as storyboards with brief descriptions to both users and company administrators. Shown below are selected panels from each idea; each original storyboard includes 4 panels.
To further narrow down the focus, we conducted several activities with stakeholders, including a user desirability survey and an administrator rating. We integrated the 7 ideas into 2 larger directions, then selected the one focused on enhancing the current experience without drastically altering user behavior, so we could make an impact more quickly.
The redesign of the system began with specifying the features and building low-fidelity prototypes, and ended with a high-fidelity prototype handed over to the development team at Shonan Seminar.
To evaluate the design, we conducted a usability test using Maze, an asynchronous testing tool. We created imaginary scenarios in which different users were assigned multiple tasks to complete with the prototype. The tool then collects the paths users actually take, compares them with the predefined ideal paths, and generates a success rate and a click heat map.
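Maze computes these metrics for us; the sketch below only illustrates the underlying idea of comparing recorded click paths against a predefined ideal path, using invented screen names.

```python
# Illustrative only: distinguish direct successes (exact match with the
# ideal path) from indirect successes (the user still reaches the goal
# screen by another route). Screen names are made up for the example.
def task_success_rate(recorded_paths, ideal_path):
    """Return (direct_rate, indirect_rate) over all recorded paths."""
    direct = sum(1 for p in recorded_paths if p == ideal_path)
    indirect = sum(1 for p in recorded_paths
                   if p != ideal_path and p and p[-1] == ideal_path[-1])
    return direct / len(recorded_paths), indirect / len(recorded_paths)

ideal = ["dashboard", "class_report", "engagement_detail"]
recorded = [
    ["dashboard", "class_report", "engagement_detail"],              # direct success
    ["dashboard", "settings", "class_report", "engagement_detail"],  # indirect success
    ["dashboard", "settings"],                                       # failure
]
print(task_success_rate(recorded, ideal))  # (0.33..., 0.33...)
```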
From these statistics and the users' subjective feedback, we concluded that the users successfully understood our design intentions.