Can we trust AI for education?

Pathwright Team



While AI can already answer questions and generate plausibly human-sounding content, it often “hallucinates,” making up facts and data that look correct but are false.

That’s a problem, especially in education!

Before indiscriminately applying AI everywhere in education technology, there are crucial questions to answer that warrant thoughtful and humane consideration. In the Pathwright Labs, we’re exploring:

  • How do we prevent AI from making up facts and data (hallucinating)?
  • Can we prevent AI from going off-topic or regurgitating pre-existing biases from its training data?
  • Can we align AI with our educators’ existing content (books, videos, etc.)?
  • Can AI transform existing documents and content into actionable paths, recommendations, and FAQs?
  • Can we make AI cite its sources when giving answers or suggestions?
  • Can we use AI as a reliable tutor or Teacher’s Assistant for everyday educational tasks?
  • How can we make AI a helpful tool alongside learners and teachers without replacing the human elements that make education an enriching experience?

We’re working on ideas and experiments that address these questions alongside educators we respect and trust in our Labs. If you’re interested in these questions, please join us for our OpenLab event for discussion and an early preview of how we can move forward in this brave new world.

P.S. If you’re a Season Pass holder, you can schedule a personal ProductLab to get an inside look at what we’re working on and discuss how we can help you align and use AI to further your business and educational goals.

