Neural networks to find the right content for the right student
Online education sources continue to grow in every field, greatly outpacing the traditional model of textbook publishing and distribution. Students often search through a vast number of sources to find appropriate content at an appropriate knowledge level, which presents several challenges. Among them: the name of a subject does not necessarily appear in a text about that subject – a text about an aspect of quantum mechanics, for instance, may never actually use the term “quantum mechanics.” But longer combinations of words within the text – key phrases – can provide clues to the subject. Peter Brusilovsky, Daqing He, and their team rely on CRC’s hardware to build and train neural networks that identify key phrases to find the right text for the right student.
“Key phrases rest on three concepts: example, function, and application,” explains Brusilovsky, professor in the School of Computing and Information. “What is the phrase? What role does it perform in the text? How does it relate to the subject?”
In the traditional educational paradigm, instructors assign texts and create tests matching the assumed level of the students’ knowledge in stages – Chemistry 101 comes before Chemistry 102. A novice student in a too-advanced course may struggle, which is not only unpleasant but inefficient; material that is too easy may be a waste of time. Online educational sources do not provide that traditional structure. Brusilovsky and He’s team builds tools aimed at helping students find the material they need without wading through irrelevant sources.
Developing these tools is highly computational – the many iterations of building, training, and identifying the best-performing neural networks require a lot of power. The graphics processing unit (GPU) resources at CRC make it possible to incorporate vast amounts of data.
“For our last project, we collected text from a half-million computer science papers,” explains graduate student Rui Meng, co-author on several papers with Brusilovsky and He (like Brusilovsky, a professor of Information Science); graduate student Khushboo Thaker was also a co-author. “Because of the computing power of the GPUs, we were able to generate long phrases illustrating key concepts that don’t appear in the text. This requires processing a huge amount of data. We developed a neural network that can read text like an expert – and we substantially beat all the previous methods for efficiency and accuracy. We just use more data than they do.”
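The ability Meng describes – producing key concepts that never appear verbatim in the text – is what distinguishes generation from simple extraction. A minimal sketch of that distinction, separating gold keyphrases into “present” and “absent” sets (the function name and example are illustrative, not the team’s code):

```python
def split_present_absent(source_text: str, keyphrases: list[str]):
    """Separate keyphrases into those that appear verbatim in the
    source text ("present") and those that do not ("absent").
    Only a generative model can propose the absent ones."""
    text = source_text.lower()
    present = [kp for kp in keyphrases if kp.lower() in text]
    absent = [kp for kp in keyphrases if kp.lower() not in text]
    return present, absent

# A passage about quantum mechanics that never uses the term:
doc = "The wavefunction collapses upon measurement of the particle's spin."
phrases = ["wavefunction", "quantum mechanics", "measurement"]
present, absent = split_present_absent(doc, phrases)
# "quantum mechanics" lands in the absent set even though it names the subject.
```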
The team developed two methods to generate summary phrases from a long text. Building on a sequence-to-sequence (seq2seq) architecture – the model behind Google Translate – the team linked target phrases together into sequences, sorted in the order of their first occurrence in the source text. In the first method, key phrases were constrained to be as distinct from one another as possible. In the second, key phrases shared as much mutual information as possible. The team is now working to broaden the neural network to domains in which it has not been extensively trained.
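The preprocessing step described above – linking target phrases into one sequence ordered by first occurrence – can be sketched in a few lines. This is an illustrative reconstruction, not the team’s code; the `<sep>` delimiter token and the choice to sort absent phrases last are assumptions:

```python
def build_target_sequence(source_text: str, keyphrases: list[str],
                          sep: str = " <sep> ") -> str:
    """Concatenate target keyphrases into a single training sequence for a
    seq2seq model, ordered by first occurrence in the source text.
    Phrases that never occur verbatim sort to the end."""
    text = source_text.lower()

    def first_pos(kp: str) -> int:
        pos = text.find(kp.lower())
        return pos if pos >= 0 else len(text)  # absent phrases go last

    return sep.join(sorted(keyphrases, key=first_pos))

doc = "Attention layers let a transformer weigh context; attention is key."
target = build_target_sequence(
    doc, ["transformer", "attention", "neural machine translation"])
# "attention" precedes "transformer"; the absent phrase comes last.
```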
For a 2018 paper, Brusilovsky and He’s team worked on a server in their lab with a single graphics processing unit, sometimes waiting weeks or more for results. CRC expanded its GPU capacity in 2018 with an additional 10 servers. When the team started working with the CRC GPUs, the added power and speed made a big impact on their productivity.
“We’re grateful for CRC,” says Meng. “We build a generic pre-trained model here, and then train the model on distributed, parallel nodes at CRC that can handle a volume of data like the half-million computer science papers. We can look at many more parameters that influence the model. I learned from scratch on CRC, relying partly on the ticketing help system. The system creates great communication with the consultants, but also communication with other researchers.” CRC’s ticketing system for help queries is open for all CRC users to read, and serves as a forum for users facing similar issues.
“The collaboration CRC creates is important,” says Brusilovsky. “Researchers share expertise, we share problems. Sharing the center creates community. Working together makes life much easier. And we hope to have access to more GPUs too – demand for GPUs for many researchers is only going to increase.”
Brusilovsky and He’s team recently received an NIH grant to study reader comprehension of consumer health texts such as those on WebMD. They are compiling medical libraries with a range of documents, from simple pamphlets to highly technical research papers. The complicated range of medical terms and the diverse knowledge levels of readers make the project a challenge. They hope, in the end, to find ways to raise readers’ knowledge level by matching them with texts at the appropriate level.
Closer to the business of the university, the team is using AI to study student learning behavior. For an online course on information retrieval, students are first quizzed to gauge their level of knowledge of a concept. Key-concept methods are then used to annotate the quiz, predict the probability that an individual student will understand the concept, and produce tests and lessons based on those results.
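To make the prediction step concrete, here is a toy sketch of estimating the probability a student understands a concept from quiz performance, using a logistic function. The formula and parameters are hypothetical illustrations, not the team’s actual model:

```python
import math

def mastery_probability(correct: int, attempted: int,
                        difficulty: float = 0.0) -> float:
    """Illustrative logistic estimate of the probability that a student
    understands a concept, from quiz accuracy and a difficulty offset.
    (Hypothetical formula for illustration only.)"""
    if attempted == 0:
        return 0.5  # no evidence: uninformative prior
    # Map accuracy (0..1) onto a logit; 50% accuracy maps to logit 0.
    logit = (correct / attempted - 0.5) * 6 - difficulty
    return 1 / (1 + math.exp(-logit))

p = mastery_probability(correct=4, attempted=5)  # strong quiz performance
```

A real system would fit such parameters from data and likely track mastery over time (e.g., with Bayesian knowledge tracing), but the shape of the prediction – quiz evidence in, probability out – is the same.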
“We want to know how to select more reliable tests to predict user understanding,” says Brusilovsky. “So we know what they know, and what they need to know.”
AI could create a new educational paradigm, similar to the possibilities presented by applying genomics to medicine, explains Brusilovsky. “We want to assess students’ knowledge using better technology. With that we could create a personalized learning experience for each student.”