Generative AI for learning: Opportunities and challenges – Taiga @ Umeå University

On 20 October 2023, 12:15–13:00, I gave an invited talk as part of the #frAIday series at Taiga @ Umeå University on how generative AI can enhance learning and education.

Below is a video of the talk (35 minutes), and here are the slides.

Here are answers to the questions asked by the audience via chat after the talk (not in the recording). My answers, in italics, follow the questions.

How should we approach the assessment of student writing skills? Traditionally, this has been accomplished through take-home exams and thesis work. However, it now appears that students can seek assistance from AI tools. Should we continue with proctored on-campus examinations, or should we reconsider what it means to be proficient in writing a text?

We should look more at the whole learning process and focus less on the final artifact, working more with continuous examination. Another benefit of this approach is that students get more immediate feedback, which supports deeper learning. Compare this to receiving feedback on a test several weeks after taking it.

Suppose teachers want to continue with reports written outside of a proctored environment. In that case, they need to combine the reports with some other examination method, such as a short oral examination, to judge whether the students know the content they submitted.

Experimentally, they can also try using AI to help them judge if the students know the subject.

Can the result be that we re-evaluate the necessity of learning certain tasks, just like what happened when calculators were introduced? E.g., is there value in knowing how to summarise information or answer a factual question (if a machine can do it better…)? If not, why should we teach it?

The last part is the crucial question here. If we have AI assistants that can answer everything and solve all our tasks for us, then what should the students learn? Well, to judge if an answer is correct, they need to have a basic understanding and knowledge of the area in question. Thus, we should teach in a broader way where students can use AI to help them solve more specific problems in the future.

A comment on the calculator analogy: it is actually not a great analogy to the current situation with AI-assisted learning, as calculators could (initially) only solve fundamental mathematical problems. The AI assistants of 2023 can already solve very advanced problems, often beyond most people’s knowledge.

A question to @all: Does your university have a policy or general recommendations for handling generative AI?

I have looked around and discussed with representatives from other Swedish universities, and most of them are working on some policy regarding AI usage in education and research. At Luleå University of Technology, I have been part of a group working on such a policy document. In May 2023, we released a first draft that you can find here. Incorporating all comments and covering all edge cases has proven to be a time-consuming process. Ultimately, it comes down to definitions: AI and generative AI are hard to define precisely, and the document could also be used as a basis for disciplinary cases around cheating with AI.

The guidelines also need to be updated regularly, as they can quickly become outdated.

If at some point all texts used for AI training will be AI-generated, what will be the quality of that AI?

This has already proven to be a problem for other services trained on public content. Google Translate can no longer be trained on public data alone, because so much of the text online today was itself produced by automatic translation services. We will see the same issue when training AI models in the future, and the training data will have to be selected very carefully.

What happens when AI software meets quantum computing?

The intersection of AI and quantum computing is a subject of significant interest and speculation. Quantum computing has shown promise for efficiently solving complex optimization problems that are often computationally expensive for classical computers.

Quantum computers could drastically speed up certain machine learning algorithms. For some specialized problems, computations that would take a conventional computer an impractically long time could, in principle, be completed in seconds by a quantum computer. Many AI tasks are fundamentally optimization problems.

When we have working quantum computers, they could drastically change machine learning. It is important to note that practical, scalable quantum computing has yet to be achieved; we are still in the early stages of understanding what is possible when these two technologies intersect.

Instead of wondering IF students are using AI, why not just assume that they are and will? Then, with that in mind, we organize our courses and examinations based on this assumption. I am not saying that this will make life easy for teachers, but at least it provides a new point of orientation. Exactly. You said it. We need to evolve our education based on our new reality with AI.

Yes, I strongly agree with you on this. Independently of policies and rules, students will both use AI tools to help them learn and to help them cut corners. We need to adapt our education to a new reality.

You talked about how a student’s mental model of a concept raises the question “am I wrong, or is the AI wrong?”; there is then a tension between learning and misinformation. Is AI literacy enough to address this issue in education, as a way to encourage students to fact-check everything ChatGPT outputs? Or are we humans too impressionable and vulnerable to, for example, confirmation bias, such that false information matching our mental models leads us not to bother checking? Which “force” is stronger?

I believe that all humans are lazy at heart. Humans have always tried to find various tools to make our lives easier. AI changes how much effort we put into the mental process, and just as you indicate in your question, it will, to some extent, lead to us not bothering with fact-checking, and we will accept the answer AI gives us and move on to the next question or task. This is a real challenge when it comes to how we conduct education and what the students should learn.

Quite a few analysts say that only ten or so large companies will write code at the low level needed for “super efficiency”; every other software company will “write” software using natural language that is translated into code by LLMs.

This debate has been ongoing for a long time, and we have seen numerous efforts with both graphical and low-code programming. Groups that do not know how to program very well have successfully created less sophisticated software using low-code approaches. With AI support, this group will surely grow, and we will see more advanced computer programs designed using AI tools.

AI vision will also play an important role here: users can sketch the graphical part of their software, and AI can create programs that realize the sketches.

That is all. Good luck with your AI efforts.
