A place to share what we are learning about open education and A.I.

Category: Ethics and GenAI

Working with GenAI with Open Principles

As mentioned in our last blog post, the Opterna prototype is now up and undergoing user-experience testing. As we have been out presenting to various groups recently, we have often been asked, “So what is the crosswalk between Open and Opterna?” This is a great invitation to explore how open and open pedagogy have informed the development of Opterna, so let’s dig in!

When we talk about “open” in education, we often think of open textbooks — freely available, openly licensed resources that remove cost barriers for students. But what happens when you bring generative AI into the picture? Opterna offers a case study of trying to embed open principles into an AI-powered learning support experience designed for students and educators. 

Opterna is an AI-powered study companion built to work alongside books published in Pressbooks. At first glance, it might look like any chatbot designed to help students study. However, one of the things we set out to do when co-designing Opterna was to use open as a design philosophy to guide our decision making and allow for a project where “open” has been woven into design and functionality. 

To do this we drew on several established frameworks, including the traditional 5R permissions of OER (retain, reuse, revise, remix, redistribute) made popular by Wiley (2014); Sinkinson’s (2018) values of open pedagogy; Jhangiani’s (2019) 5Rs for open pedagogy; and, more recently, Clarke-Gray’s (2025) New 5Rs of Open, which foregrounds social justice dimensions as captured in the categories represent, resist, repair, refuse, and rise up. With these frameworks providing the backbone, we ran possible feature sets and design decisions for Opterna against Hegarty’s (2015) eight attributes of open pedagogy, to ensure that what was created reflected, as much as possible, the values core to open pedagogy.

All of this translated into actual features of Opterna in several tangible ways. For example, Opterna is built on openly licensed content — 25 textbooks from the B.C. Open Collection by BCcampus — meaning the knowledge base itself honours the open licensing approach and, in doing so, addresses several privacy and trust concerns raised about generative AI. We also reached out to the authors of these textbooks to make sure they were aware of this new use of their work and provided them with an opportunity to ask questions about how their work would inform Opterna, an example of Clarke-Gray’s (2025) concept of repair.

In keeping with the traditional 5 permissions of open educational resources, the AI Study Companion source code is available on GitHub under a highly permissive software license that allows other institutions and developers to adopt and adapt it with minimal restrictions. As part of the feature set of Opterna, we intentionally did not add a log-in feature to ensure no personal information would be collected. Privacy and ethical use considerations were baked into the design from the start, and a participatory design-based action research project accompanies this project, investigating how open GenAI tools can be implemented effectively and responsibly in higher education. 

Opterna embodies open pedagogy through its emphasis on student agency and choice. Learners can select the level at which they engage with content — introductory, intermediate, or advanced — and choose their post-secondary level. They can edit pre-populated prompts, create their own, generate their own examples, and build a personal collection of outputs that can be shared with others. The dialogical chat interface, grounded in Socratic questioning, positions the student as an active co-constructor of knowledge rather than a passive consumer. 

For instructors, Opterna offers a distinct view with different functionality, along with editable sample prompts and multiple output formats including practice questions, flashcards, FAQs, and audio output. This variety reflects Universal Design for Learning (UDL) principles and allows for some customization and choice within the design constraints of the build.  

Through the iterative process of co-creating Opterna with UBC Cloud Innovation Centre (UBC CIC), we developed a draft framework for ethical AI that we used on this project. This framework is centred around values such as trust, human connection, sharing (openness/the commons), access, learner agency, critical engagement, and inter-connectedness.  

[Image: The principles of the Framework for Ethical AI, listed in a circle: Engaging, Trust-worthy, Human-centred, Informed, Critical, Agency, Limits, Access, and Impact. The first letters of each principle spell out “Ethical AI,” which sits at the centre.]
Working Framework for Ethical AI in the Opterna Project

The Opterna project is still unfolding, with research, testing, workshops, and resource development planned through 2028. But it already demonstrates something important: that “open” in the context of AI, similar to open in the context of OERs, isn’t just about licensing or access. It’s about student agency, thoughtful design, community participation, and a willingness to share not just products, but processes.

Written by: Dr. Elizabeth Childs 

Learn more at the BCcampus Open GenAI Project page or visit the blog at opengenai.opened.ca. 

Ethics in GenAI – Considerations to reduce environmental impact 

It is difficult to quantify the environmental impact of generative AI: it extends from the mining of raw materials used to make data-centre hardware to the water used to cool those data centres. But the most obvious resource used is energy. Like any other AI, generative AI uses a large amount of energy in its training stage. According to a 2023 study by Alex de Vries, GPT-3’s training process alone consumed an estimated 1,287 MWh of electricity, nearly equivalent to the annual energy consumption of 120 average American households in 2022. But energy consumption doesn’t end after the training phase. Each time someone prompts an LLM such as ChatGPT, the hardware that processes and performs these operations consumes energy, estimated at at least five times that of a normal web search. With the popularity of LLMs such as ChatGPT, and with generative AI being added into seemingly every application and technology, the number of users is only growing.
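As a quick sanity check on those figures, a minimal sketch in Python; the per-household figure of roughly 10.8 MWh per year is an assumption based on typical U.S. residential consumption, not a number from the study itself:

```python
# Rough sanity check of the training-energy comparison above.
# Assumption: an average American household used about 10.8 MWh
# of electricity in 2022 (approximate U.S. residential average).
gpt3_training_mwh = 1_287        # estimated GPT-3 training energy (de Vries, 2023)
household_mwh_per_year = 10.8    # assumed average annual household consumption

equivalent_households = gpt3_training_mwh / household_mwh_per_year
print(round(equivalent_households))  # ~119, i.e. "nearly 120 households"
```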

With that in mind, what do we do to mitigate the environmental impact of our AI Study Tool? The general answer is to ensure that our tool is energy efficient, and we are currently exploring three ways to do this. 

  1. Using smaller language models: Given that large language models (LLMs) such as ChatGPT consume a lot of energy both in training and after release, an obvious way to reduce energy consumption is to use smaller language models. A small language model (SLM) is distinguished from an LLM by its number of parameters: any language model with fewer than roughly 30 billion parameters is generally considered an SLM. Because an SLM is trained on a smaller dataset and has fewer parameters, it is more energy-efficient, less costly to train (in both time and energy), and has improved latency. An SLM also improves on issues of bias and transparency because, with a smaller dataset, you have more knowledge of and control over what goes into training your language model. We are unsure if our use case will allow us to use an SLM, but we are researching existing open-source models in hopes that we will be able to fine-tune an SLM for our purposes.
  2. Cache-augmented generation (CAG): The most common way to ensure that information is accurate is to check valid sources before providing an answer, and the way language models usually do this is retrieval-augmented generation, or RAG. After receiving a prompt, the language model searches for information about the query to ensure the information is both accurate and up to date before generating an output using the information it has fetched. This step is important for providing accurate information to users and limiting the “hallucinations” we are all warned about. But it means that, on top of processing a prompt and generating a response, we now have the added cost of searching for and processing sources every time the model is prompted. Enter cache-augmented generation, or CAG! Instead of searching through a large database of information (which could be the entire internet) on every query, the model is pre-loaded with the reference data, so the search for reference information is more efficient. CAG suits information that does not change frequently, such as one of our textbooks, and can also ensure the accuracy and validity of the information cited, so it seems perfect for our use case.
  3. Caching generated output: Judicious AI Use to Improve Existing OER by Royce Kimmons, George Veletsianos, and Torrey Trust suggests using caching to improve the energy efficiency of the language model. As we discussed, each prompt uses some amount of energy, and that remains the case even if it’s the same prompt over and over. So, by caching, or storing, some of these generated responses from the language model and returning them when a query is repeated, the tool uses less energy because it does not have to process and generate a new response each time. Further, the authors suggest serving those cached responses to students as OER, which reduces the number of prompts altogether and contributes to improving the equity of generative AI.
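To make the RAG-versus-CAG contrast in point 2 concrete, here is a minimal toy sketch in Python. All names (`TEXTBOOK`, `rag_answer`, `cag_answer`, `PRELOADED_CONTEXT`) are hypothetical illustrations; a real CAG implementation pre-computes the model’s key-value cache over the reference text, which this string-based version only gestures at:

```python
# Toy contrast between RAG and CAG. Hypothetical names throughout;
# a real system would call an actual language model and, for CAG,
# pre-compute the model's key-value cache over the reference text.

TEXTBOOK = {
    "photosynthesis": "Photosynthesis converts light energy into chemical energy.",
    "mitosis": "Mitosis is cell division producing two identical daughter cells.",
}

def rag_answer(query: str) -> str:
    # RAG: search the knowledge base on EVERY query (per-query retrieval cost).
    passage = next((v for k, v in TEXTBOOK.items() if k in query.lower()), "")
    return f"Answer using retrieved passage: {passage}"

# CAG: load the (stable) reference material ONCE, up front.
PRELOADED_CONTEXT = " ".join(TEXTBOOK.values())

def cag_answer(query: str) -> str:
    # No per-query search step; the reference material is already cached in context.
    return f"Answer from preloaded context ({len(PRELOADED_CONTEXT)} chars cached)."

print(rag_answer("Explain photosynthesis"))
print(cag_answer("Explain photosynthesis"))
```

The point of the sketch is only the shape of the two flows: RAG pays a retrieval cost on every prompt, while CAG pays a one-time loading cost for content, such as a textbook, that rarely changes.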
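Point 3, caching generated output, can be sketched with a simple in-memory cache keyed on a normalized prompt. This is a minimal illustration, not our actual implementation: `fake_generate` is a hypothetical stand-in for an expensive model call, and the call counter shows that a repeated prompt skips regeneration entirely:

```python
# Minimal sketch of response caching. `fake_generate` is a hypothetical
# stand-in for an expensive language-model call; in a real tool,
# skipping that call is where the energy saving comes from.
model_calls = 0
_cache: dict[str, str] = {}

def fake_generate(prompt: str) -> str:
    global model_calls
    model_calls += 1  # each call here represents real energy spent
    return f"Generated answer for: {prompt}"

def cached_answer(prompt: str) -> str:
    key = " ".join(prompt.lower().split())  # normalize case and whitespace
    if key not in _cache:                   # only generate on a cache miss
        _cache[key] = fake_generate(prompt)
    return _cache[key]

cached_answer("What is mitosis?")
cached_answer("what is  MITOSIS?")  # normalizes to the same key: cache hit
print(model_calls)  # 1 — the second prompt reused the cached response
```

In practice you would also need a cache-invalidation policy (e.g., expiring entries when the underlying textbook is revised), but the core saving is exactly this: one generation, many reuses.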

Though I’m a little overwhelmed by the sheer volume of information out there regarding energy efficiency in AI, to say nothing of the complex subject of AI ethics in general, I am excited about exploring these three solutions. As we start working with the developers for this project, I am interested in learning how these concepts can be implemented, and how feasible they are, in our AI Study Tool. Researching this project and ensuring it aligns with our values feels like a puzzle I’m trying to solve, and I am enjoying delving into the world of computer science once again and flexing those problem-solving muscles.