The first AI development at the Studio

As part of a strategic reflection on Minds Studio’s value proposition and positioning in this new era of AI, I’ve been taking the Responsible AI Bootcamp for Edtech practitioners.

In just a couple of weeks, I’ve built and published a tool that helps a literature teacher prepare culturally responsive lesson plans and assignments for any book, tailored to each student’s age and country of origin.

You can play with it here:

It’s still a very basic tool, created primarily to experiment with Dify, an open-source platform designed to simplify and accelerate the development of AI-powered applications.

Being a tinkerer

It’s definitely exciting to play with powerful new tools, but I remain firm in my conviction that technology alone will not change the world; technologists might make it a tiny bit better.

A concept I learned in this course, thanks to Justin Reich’s paper, is that of being a tinkerer. Quoting Reich: “Tinkerers see schools and universities as complex systems that can be improved, but they believe improvements come from many years of incremental changes to existing institutions rather than from wholesale renewal.” This stands in contrast to the charismatic stance, which ascribes tremendous power to new technologies to reinvent education.

Choose your favourite LLM biases

Once you start digging deeper into how LLMs are built, and the potential sources of downstream harm, you realise it’s all a game of choosing priorities, and a very tricky one to play.

Should a model be inclusive? Sure.
Should it be fair? Absolutely.
Should it respect privacy? No doubt.

The problem arises when one choice constrains the others. For instance, more privacy might mean stripping out personal or demographic data, which lowers data quality, especially if we want the LLM to be truly inclusive or to account for historical biases.

The entire process of building an LLM—from data collection to processing, model development, evaluation, postprocessing, and deployment—involves a staggering number of decisions. Balancing them is incredibly difficult.

Adding knowledge on top of an LLM

It’s been fascinating to explore the concept of RAG (Retrieval-Augmented Generation). Since LLMs are trained on broad, albeit massive, datasets, it’s often necessary to provide an additional layer of domain-specific knowledge to improve their performance on particular tasks.

For example, in the tool I mentioned earlier, I added documentation, “knowledge on top” of the LLM (ChatGPT), about cultural differences and widely known K–12 books. This gives the model accurate, relevant material to retrieve and draw on when generating responses.
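To make the pattern concrete, here is a minimal Python sketch of the retrieve-then-generate loop. This is not the actual tool (Dify handles retrieval for me); the knowledge snippets, the word-overlap scorer, and the prompt template below are all simplified placeholders I made up for illustration.

```python
# Minimal RAG sketch: retrieve relevant snippets, then prepend them to the prompt.
# The "knowledge" snippets and the overlap-based scorer are toy placeholders;
# a real setup (e.g. Dify's knowledge feature) uses embeddings and a vector store.

KNOWLEDGE = [
    "In Japan, indirect communication and group harmony shape classroom discussion.",
    "'To Kill a Mockingbird' is commonly taught in grades 8-10 in the US.",
    "In Spain, secondary students often read 'Don Quixote' in adapted editions.",
]

def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words that also appear in the doc."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / max(len(q_words), 1)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k snippets most relevant to the query."""
    return sorted(KNOWLEDGE, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Augment the user's request with the retrieved context."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(query))
    return (
        "Use the following background notes when answering.\n"
        f"Background:\n{context}\n\n"
        f"Task: {query}"
    )

if __name__ == "__main__":
    # The assembled prompt is what would be sent to the LLM (e.g. ChatGPT).
    print(build_prompt(
        "Plan a lesson on 'To Kill a Mockingbird' for 14-year-olds in Spain."
    ))
```

The shape is the whole point: retrieve first, then generate with the retrieved context included in the prompt, so the model answers from your domain knowledge rather than from its training data alone.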

Are we going to be AI-first now?

No.

We are experimenting with AI, but the purpose of the Studio remains unchanged by any technological wave, no matter how big it is.

We remain focused on sparking human curiosity and bringing to life exciting, inspiring communities of learners who regularly practice something they want to learn.

I’ll continue to share learnings from this journey.