When Ha Nguyen arrived at Duke University for her first year of college, she struggled at first to find her voice.
She was an international student acclimating to a new place, but she was also getting used to a new way of learning. In Vietnam, instruction had been teacher-centered, but, in her new classes, learning was done through discussion.
“Growing up, teachers talked to the class. We didn’t collaborate a lot,” she says. “I didn’t participate much, I think, for the first couple months of college. But then that level of discussion and collaboration teased out what I could bring to the classroom. That was personally meaningful and impactful for me.”
During her time as an undergraduate, Nguyen also volunteered with children, tutoring and working at summer camps, where she further immersed herself in the kind of open-ended collaborative learning that’s so supportive of knowledge construction for children. Curious about where her interest in emerging technologies like AI could fit in with her unique perspective on collaborative learning, Nguyen has pursued an academic path with implications for education’s future.
As an assistant professor in her first year at the UNC School of Education, Nguyen, Ph.D., brings several research efforts that leverage technology to help students learn. Among those efforts, she’s explored how adding human-centered insights to AI — and teaching students to engage with it in collaborative learning environments — can deepen STEM knowledge. Generative AI tools such as chatbots are becoming part of our everyday experiences as users. Nguyen is working to bring these tools to classrooms responsibly in ways that can have meaningful impact.
Following is a Q&A with Nguyen focused on her research, GenAI’s impact on education, and more.
What has your research focused on up to this point? Where is it going?
I’ve long been fascinated by how we use different technologies and learning analytics, and how we can build programs that support collaborative learning and knowledge construction. When I started my Ph.D., the first project I worked on independently was about how we build scientific models or systems – models where students learn about ecosystems and cause and effect in climate change. Then, how do we help students think beyond the separate components in those systems and consider the relationships between them? Within a computer-based modeling system, there are many opportunities to simulate how changes in one component will affect another. It becomes more of a causal loop.
That led to the thinking that if there is ongoing support as researchers build these computer-based tools, we can provide interventions in real time so students can think about these causal relationships more deeply. I started getting involved with projects leveraging learning analytics. For example, we can capture data on how students interact with the system – where they click, how long they stay on a page, and how they answer questions – and use that to build technological interventions that involve the teachers. A learning dashboard system can track what students are doing on a platform and send teachers information to inform their instruction.
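As a minimal illustration of the kind of interaction data such a dashboard might aggregate, here is a short Python sketch; the event fields, names, and summary logic are hypothetical and not drawn from Nguyen’s actual systems.

```python
# Illustrative sketch only: aggregating raw click/answer events into per-student
# signals a teacher-facing dashboard could surface. All names are hypothetical.
from dataclasses import dataclass
from collections import defaultdict
from typing import Optional

@dataclass
class InteractionEvent:
    student_id: str
    page: str
    seconds_on_page: float
    answered_correctly: Optional[bool]  # None if the event was not a question

def summarize_for_teacher(events: list) -> dict:
    """Roll up time on page and answer attempts per student."""
    summary = defaultdict(lambda: {"time": 0.0, "attempts": 0, "correct": 0})
    for e in events:
        s = summary[e.student_id]
        s["time"] += e.seconds_on_page
        if e.answered_correctly is not None:
            s["attempts"] += 1
            s["correct"] += int(e.answered_correctly)
    return dict(summary)

if __name__ == "__main__":
    log = [
        InteractionEvent("s1", "ecosystem-model", 240.0, None),
        InteractionEvent("s1", "quiz-1", 90.0, False),
        InteractionEvent("s2", "ecosystem-model", 60.0, None),
        InteractionEvent("s2", "quiz-1", 45.0, True),
    ]
    for student, stats in summarize_for_teacher(log).items():
        print(student, stats)
```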
I also looked at how chatbots support collaborative learning in science modeling, and I saw how even small tweaks in how we build the chatbots really influenced how the students responded. For instance, whether the bot introduces itself as an expert versus a peer influenced the way that the students explained their ideas to the chatbot. That had implications for how they engaged in a group and shared what they learned from the scientific modeling activity. This led to my current research agenda. In one project funded by the National Science Foundation (NSF), we’re thinking about introducing authentic roles from the community – people students may encounter in real life, such as a college student, a researcher, or a civil engineer. How would students respond to these different roles, and how would that then influence their science learning? Traditionally, bringing these perspectives into the classroom is logistically challenging and hard to scale. With more capable natural language processing, we can think about additional ways that chatbots interact with students, and this idea of human-computer interaction, natural conversation, and collaboration becomes a much more evident thread.
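In the spirit of the expert-versus-peer and community-role designs described above, here is a minimal sketch of how a chatbot’s introduced role might be varied through a system prompt. The role descriptions are invented for illustration, and send_to_model() is a placeholder for whichever chat API a project actually uses.

```python
# Illustrative only: varying the role a chatbot introduces itself with.
# The role texts are hypothetical; send_to_model() is a placeholder call.
ROLES = {
    "peer": "You are a fellow student thinking through this problem alongside the user.",
    "civil_engineer": "You are a civil engineer who works on flood infrastructure in the students' city.",
    "researcher": "You are a climate researcher who studies local ecosystems.",
}

def build_messages(role_key: str, student_utterance: str) -> list:
    """Prepend a role-defining system message to the student's turn."""
    return [
        {"role": "system", "content": ROLES[role_key]},
        {"role": "user", "content": student_utterance},
    ]

# Example (placeholder call, not a specific vendor API):
# messages = build_messages("civil_engineer", "Why does paving over wetlands make flooding worse?")
# reply = send_to_model(messages)
```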
Can you talk about how GenAI has shifted our approach to education and to research?
Let’s say you’re teaching a math or science lesson, and you have a lot of things that students draw, write, or say as they talk through their thinking. These are really rich sources of insight for formative feedback on student learning. But teachers don’t always have the capacity to process all that data and provide meaningful feedback for students. How do we build on students’ ideas in an authentic way? In another NSF-funded project, we are looking at how we can use multimodal large language models to analyze this data in a way that is not only aligned with science learning objectives but also built on students’ ideas, and then see how students might revise their ideas based on that feedback. How can we scaffold students with AI so they become more versed in not only making claims, but also using evidence and research to support their claims?
The human-AI collaboration, and the process of that collaboration, is key. Through these different projects, we’re also looking at different AI applications in education research, like analyzing the large corpora of data we collect from students and teachers, including qualitative interview transcripts. AI gives us greater capacity to do this work effectively, such as generating codes from open-ended data, but you need this back-and-forth between the researchers and the AI.
When most people are thinking about GenAI, they’re simply thinking about entering a query and receiving a response or an answer. What are we not considering when using these technologies as users?
Across these different projects, I think there needs to be a strong focus on spotlighting human insights – what we call ‘human-in-the-loop’ validation. These models are great and very capable, but sometimes they’re subject to inaccuracy and bias. How do we audit the systems? For example, if we have AI that generates information or simulations related to climate change with conversations from different roles, is it going to generate more stereotypes associated with different demographics? In our research with large language models, we saw that the models tend to exaggerate narratives when it comes to people from more marginalized identities. If you feed it a prompt like, “I am a person with sensory impairment, talk to me about climate change,” it tends to fixate on the disability and focus less on the topic of climate change. Because these models are trained on corpora of human language, they tend to reflect some of those stereotypes or lack certain narratives.
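One simple way to probe the narrative drift described here is to ask the same question with and without an identity framing and compare how much of each response stays on topic. The sketch below is illustrative only: the keyword check is a deliberately crude proxy for a real audit, and ask_model() is a placeholder for an actual model call.

```python
# Illustrative audit sketch: compare on-topic coverage across identity framings.
# ask_model() is a placeholder; a real audit would use human review and richer metrics.
CLIMATE_KEYWORDS = {"climate", "warming", "emissions", "sea level", "ecosystem", "carbon"}

FRAMINGS = [
    "Talk to me about climate change.",
    "I am a person with sensory impairment. Talk to me about climate change.",
]

def on_topic_score(response: str) -> float:
    """Fraction of climate keywords present in the response (crude proxy)."""
    text = response.lower()
    hits = sum(1 for kw in CLIMATE_KEYWORDS if kw in text)
    return hits / len(CLIMATE_KEYWORDS)

def audit(ask_model) -> None:
    """Print an on-topic score per framing so a human reviewer can compare them."""
    for prompt in FRAMINGS:
        response = ask_model(prompt)  # placeholder for the actual model call
        print(f"{on_topic_score(response):.2f}  {prompt}")
```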
How do we embed human values in emerging technologies?
I published a paper recently on what we call value-sensitive design of chatbots to reflect human values. We approach it with a co-design process: Who are the stakeholders in this space? Is it the students, the teachers, the community partners that we want to bring in? Then, we think about bringing those people in and the key needs that they have. We spend a lot of time building, testing, and prototyping the chatbots with them, pulling from their ideas, so that the technology is more sustainable in the sense that it reflects the actual content being used and the needs of these people.
For that project, we co-designed the chatbots with a group of high school teachers, students, and informal educators to elicit what they want to get out of the interactions. We did an activity called metaphor cards, where participants created mental images of who they wanted to interact with – for example, the scientist chatbot is like a flashlight because it is shedding light on the truth. Or, the fisherman chatbot should look like an old man because he’s very wise and tells engaging stories. There’s a lot of focus on human connections – how do we build connections between the students and the bots and then ask follow-up questions? Some of the values we elicited from that process are human connection, identity, trust, ecosystem values, and environmental sustainability. A lot of these more human-centered ideas – sustainability, human connection, mentorship – are not very common in how we typically think about AI. Because we involve humans from the start, we’re able to say: here are the values we elicited from this process. We embed them into the design of the chatbots, develop a set of principles based on these values, and use those principles to evaluate the chatbots. It builds a pipeline from generating the values, to using them in the design, to using them for the evaluation.
We also spend a lot of time thinking about prompting and how we prompt. One of the key things you need to think about is the goal of the interaction, and to be iterative and reflective about how you are working with the tools. We have found that it is a lot more helpful if you are clear about what you’re trying to get at and know how to tune the prompts so that they iteratively improve. It’s not just a one-and-done approach.
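A small sketch of what that iterative tuning can look like in practice: state the goal up front, check whether the output meets it, and refine the prompt rather than accepting the first answer. The goal check and refinement text below are illustrative assumptions, and ask_model() again stands in for a real chat call.

```python
# Illustrative sketch of iterative prompt refinement. ask_model() is a placeholder;
# the goal check is intentionally simple and would be task-specific in practice.
def meets_goal(response: str) -> bool:
    """Toy check: the explanation should mention evidence and stay reasonably short."""
    return "evidence" in response.lower() and len(response.split()) <= 200

def refine_until_goal(ask_model, base_prompt: str, max_rounds: int = 3) -> str:
    prompt = base_prompt
    response = ""
    for _ in range(max_rounds):
        response = ask_model(prompt)  # placeholder for a real chat-completion call
        if meets_goal(response):
            return response
        # Be explicit about what was missing rather than re-sending the same prompt.
        prompt = base_prompt + " Cite specific evidence and keep it under 200 words."
    return response
```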
In higher education, we’ve discussed AI a lot in the past couple of years. Where is the conversation going? Or where should it go?
I want students to be able to use these tools critically. In my classes, if you use a tool, you need to attribute it and justify how it is helpful, how the tool advances your thinking, and what your contribution is versus the AI’s contribution. We need to be very transparent about the collaboration – what are you bringing in, instead of just using AI output? I think it’s also going to change the way that we think about assessment of student learning. Maybe submitting an essay is not going to be as helpful as a more interactive process where students demonstrate their knowledge in multiple ways.
In terms of preparing students studying education, particularly our students in the Master of Arts in Educational Innovation, Technology, and Entrepreneurship (MEITE) program, for the workforce, I think about how we can prepare them for different tracks, whether it’s data analysis, instructional design, or learning analytics. How can AI be part of all of that? How do we prepare them to know what tools are available, how to use a tool, and whether a tool is a good fit?
There are practical implications, like building better tools that collaborate with humans, align with learning, and support students’ thinking. For teachers, collaborating with AI tools could expand their understanding of what quality teaching and learning might look like. And then for researchers: Is it effective, and does it broaden our thinking? Can we use this to be more efficient and to contribute to research insights? I think the teacher, the researcher, and the student can all see practical implications of this work.