
Exploring AI's Role in Education with Oxford's Dominik Lukes

March 23, 2025

15 min read

AI has become a buzzword. But how much do we really know about it? Can we fully leverage the new opportunities it brings? To dive deeper into this topic and make the space more transparent for everyone, we’ve launched the Innovantage podcast. In this series of episodes, Sigli’s CBDO Max Golikov talks to AI experts who share their professional opinions on how AI is transforming the world around us.

Our first guest is Dominik Lukes, Assistive Technology Officer at the University of Oxford, who runs the Reading and Writing Innovation Lab. Dominik has been exploring the potential of artificial intelligence since the early 90s, long before the world became familiar with ChatGPT.

In this episode of the Innovantage podcast, Max and Dominik discussed the impact of AI on the education sector and its potential to revolutionize the academic environment. They also touched on the basics of generative AI, how LLMs work, and even the probability of an AI apocalypse.

Check out the full Innovantage episode with Dominik Lukes here:

No time to watch it now? We’ve prepared a short summary for you!

Key terms that you need to know

To begin with, let us briefly explain the main terms related to the topic at hand.

Any AI tool is based on a model, and a model is a set of parameters. These parameters ensure that if you feed something into the model, it gives you something back.

The models used in ChatGPT and similar solutions are models that generate language.

What are LLMs?

But what does it mean when we say “large language models” (LLMs)? What makes them large?

The free version of ChatGPT available at the moment relies on a corpus of roughly half a trillion words, which is an enormous number. As for GPT-4, OpenAI hasn’t revealed precise figures. But when Meta released its large language model Llama 3, it said the model had been pre-trained on over 15 trillion tokens, all collected from publicly available sources.

The bigger the pre-training corpus, the higher the quality you can expect.

A model also needs parameters to work. Small models have around 8 billion parameters, while large models have hundreds of billions.
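To make the notion of a “token” more tangible, here is a minimal sketch using the open-source tiktoken library (our own illustration; this tool was not mentioned in the episode) that shows how a sentence is split into the tokens models are pre-trained on:

```python
# pip install tiktoken
import tiktoken

# cl100k_base is one of the encodings used by recent OpenAI chat models
encoding = tiktoken.get_encoding("cl100k_base")

text = "Large language models are pre-trained on trillions of tokens."
token_ids = encoding.encode(text)                  # a list of integer IDs

print(len(token_ids), "tokens")                    # a token is roughly 3-4 characters of English text
print([encoding.decode([t]) for t in token_ids])   # the individual token strings
```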

Why do you need to pre-train AI models?

An important breakthrough in AI is that it is not necessary to train a model for every single task separately. You can take all those 15 trillion tokens and pre-train a model with some basic cognitive capabilities.

After pre-training comes fine-tuning on top of it, which makes the model do other things. Companies are constantly fine-tuning their models, which is why the models keep changing: something that worked last month may not work this month.

To achieve the desired results, the data used for pre-training has to be clean and carefully selected within the abilities of your algorithm. The model has to be pre-trained and then fine-tuned for a particular purpose, and that is one of the things that will make your solution work better.
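As a rough, purely illustrative sketch of this two-stage idea (a toy Python stand-in of our own, not a real training pipeline), the same update loop runs first over a huge generic corpus and then over a small, carefully curated task dataset:

```python
def update(model: dict, example: str) -> None:
    """Nudge the parameters towards one training example (toy stand-in for a gradient step)."""
    for word in example.split():
        model[word] = model.get(word, 0) + 1

def train(model: dict, corpus: list[str]) -> dict:
    for example in corpus:
        update(model, example)
    return model

generic_corpus = ["the cat sat on the mat", "water boils at 100 degrees"]  # ~15 trillion tokens in reality
task_corpus = ["question: what boils at 100 degrees? answer: water"]       # small, clean, task-specific

model: dict = {}                        # "a model is a set of parameters"
model = train(model, generic_corpus)    # pre-training: broad, general capabilities
model = train(model, task_corpus)       # fine-tuning: specialising for a particular purpose
print(len(model), "parameters")
```

Real fine-tuning adjusts billions of weights rather than word counts, but the shape of the workflow is the same.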

How do AI models work?

The way these models work can be compared to a regression curve, which is essentially a prediction curve. There is an opinion that such models work on frequencies and occurrences, and that is true: what they have inside are weights and relationships.

Dominik compared such models to semantic machines. They are semantic in the sense that they understand relationships between things, but not in the sense of understanding the world outside themselves.
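To give a very rough feel for “prediction from weights and relationships”, here is a toy bigram sketch (our own simplification, vastly simpler than a real LLM): it counts which word tends to follow which and then “predicts” the most likely continuation.

```python
from collections import Counter, defaultdict

# The counts below play the role of the "weights and relationships";
# prediction is simply picking the most likely continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

weights = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    weights[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently seen after the given one."""
    return weights[word].most_common(1)[0][0]

print(predict_next("the"))   # -> "cat" (seen twice after "the")
print(predict_next("cat"))   # -> "sat" or "ate" (each seen once)
```

A real LLM does the same kind of thing with billions of weights over long contexts instead of simple word-pair counts.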

GPT: What is it?

Have you ever wondered what this abbreviation means? These three letters stand for exactly what we have just explained about how such models work.

G is for Generative. It means that the model is capable of generating text.

P is for Pre-trained. It means that the model is pre-trained on a large corpus of data to learn patterns, grammar, facts about the world, and some reasoning abilities.

T is for Transformer. This refers to the underlying architecture of the model used for natural language processing.

AI hallucinations

If you have ever worked with LLMs, you’ve probably noticed that they sometimes provide inaccurate answers or “invent” something that doesn’t really exist. In other words, these models can still hallucinate, despite massive improvements.

This happens because models are trained on data: they learn to make predictions by finding patterns in it. If the data is biased or incomplete, the model may learn incorrect patterns, which results in wrong predictions.

How to make AI work better

Unfortunately, AI models can’t teach us how to communicate with them correctly. And let’s be honest: interacting with AI is not the same as communicating with a person.

You should be ready for a “rollercoaster”: sometimes AI tools will go far beyond your expectations, and sometimes their outputs will disappoint you.

To achieve better results, you should experiment, try different prompts, and develop your own approach to making AI solve your tasks.

Not by ChatGPT alone: AI-powered tools that are used now

[Image: AI Tools Used at Oxford by Dominik Lukes]

When ChatGPT was made publicly available in November 2022, it caused enormous hype almost immediately. Let’s be honest: in the public perception, ChatGPT has become a synonym for generative AI. Nevertheless, that is far from true. Today there is a huge number of tools whose functionality can differ greatly from what ChatGPT offers.

First of all, you can start getting familiar with AI through the so-called Big Four. Apart from ChatGPT by OpenAI, the group also includes:

  • Claude by Anthropic;
  • Gemini (formerly known as Bard) by Google;
  • Copilot by Microsoft.

People take these popular models and use them to build different tools.

For example, Elicit is a tool that can help you with your research: it can search for papers and extract information from them. Of course, you will still need to check the results, but you will get a really good draft.

There are also projects that build on the recently released GPTs feature, which lets people create custom bots within ChatGPT or Copilot.

By using the APIs, it is possible to build solutions outside of these platforms.
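As a minimal illustration of what “building via the APIs” can look like, here is a short sketch with the OpenAI Python SDK (the model name, prompt, and system role are our own placeholder choices; you need your own API key):

```python
# pip install openai  (expects the OPENAI_API_KEY environment variable to be set)
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: use whichever model your account has access to
    messages=[
        {"role": "system", "content": "You are a helpful teaching assistant."},
        {"role": "user", "content": "Explain what a large language model is in two sentences."},
    ],
)

print(response.choices[0].message.content)
```

The same basic pattern (send messages, read back the generated text) underlies many of the tools built on top of these models.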

According to Dominik, we are currently at the stage where everybody is trying to see what AI can do for us now. But we are also starting to explore what it can do for us in the future and what the possibilities are.

A highly respected educational institution like Oxford is also actively exploring the potential of AI, along with the rest of the world. Dominik shared that they are experimenting with ChatGPT, its enterprise version, Copilot integrations, and other innovative AI-powered tools.

For researchers, it is highly important to understand what students think about various solutions, what they find useful, and how they can benefit from the integration of AI into the learning environment.

Dominik also shared his personal thoughts. According to him, Claude is a good tool for educational purposes because it can deal with long context: you can upload an entire academic paper and ask it to provide a summary or to find specific information in the text. This feature sets Claude apart from ChatGPT, and it can be highly helpful not only for students and professors but also for businesses.
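For instance, passing a long document to Claude through Anthropic’s Python SDK and asking for a summary could look roughly like the sketch below (the model name and file are placeholders of our own; this is an illustration, not a workflow from the episode):

```python
# pip install anthropic  (expects the ANTHROPIC_API_KEY environment variable to be set)
import anthropic

client = anthropic.Anthropic()

# Hypothetical local copy of the paper you want summarised
paper_text = open("academic_paper.txt", encoding="utf-8").read()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=500,
    messages=[
        {"role": "user", "content": f"Summarise the key findings of this paper:\n\n{paper_text}"},
    ],
)

print(message.content[0].text)
```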

Homework is dead. But what about education itself?

When it comes to education and the changes that AI has brought to it (and will bring in the future), a lot of people are concerned about how students’ knowledge can still be assessed. And their concern is quite understandable.

For example, take-home assignments used to be a popular format: students received tasks and were asked to complete them at home. Now that so many AI-powered tools are at hand, such tasks can be largely useless.

You can obviously no longer trust that every student who hands in an essay has written it entirely on their own. Composition, spelling, grammar, and other objective points that professors look at can be checked and improved by AI solutions. Of course, these tools are still far from perfect when it comes to research and in-depth analysis, but that is something we have on the horizon.

Some teachers try to use so-called AI checkers that are supposed to detect AI-generated content. Nevertheless, AI experts insist that today there are no reliable tools that can identify such content with 100% precision. Big and small models generate content in different ways, and their outputs depend greatly on the prompts. As a result, we can’t trust the results shown by these checkers.

How AI is integrated into the academic process at Oxford

But how can professors motivate students to learn new materials if even their homework can be done with the help of artificial intelligence?

Professors at Oxford have their own approach to the academic process that could be a good solution for many educational establishments. A big part of the educational activities happens in small groups, which means students have a lot of discussions. So when they submit papers, they also have to talk about them afterward.

As for exams at Oxford, many of them take place in an invigilated environment, so professors can see what the students are using.

Dominik is quite optimistic about the integration of AI into the education process. Though it is too early to speak about mass adoption, its implementation will definitely continue. And the task for both educators and students is to find the best way to use artificial intelligence for their needs.

AI for teachers: How to use it now

Max and Dominik also talked about use cases where teachers can apply AI right now.

Here, Dominik shared one simple principle of working with AI solutions: ask the right tool for the right thing. For example, ChatGPT can be really good at explaining math terms and concepts, but it is really bad at calculating and solving math problems.

Similar patterns can be observed in other disciplines. Language teachers can greatly benefit from AI’s ability to create multiple-choice tests for students about a text or a grammatical feature, and AI copes with such tasks very well.

Nevertheless, if you ask an AI model to create fill-in-the-blank grammar exercises, you shouldn’t have high expectations: it may offer the wrong option or put the gaps in the wrong places. Quite often, if you ask AI to give you an example of a grammar feature, the answer won’t satisfy you. But when AI is simply generating a text for you, it won’t make such mistakes.

AI generation still requires strong human supervision, just like an intern: it can work for you, but you still need to check the results it provides.

Skills for future students to work with AI

The educational environment is changing. How can we get ready for this AI-enriched world? Are there any specific skills that people should try to develop in order to work better with the newly introduced tools?

While answering these questions, Dominik highlighted that it is impossible to name any precise skillset.

However, here’s a list of recommendations from a person who has been working with AI for many years:

  1. Keep exploring.
  2. Keep trying it.
  3. And do not assume that using AI a few times means you have explored the entire frontier of its capabilities.

Maybe in a year or two professionals will identify specific skills you need to know, but not yet. There is no single best tool or best skillset for the academic environment, or for any other space.

AI for disabilities: Can it help people overcome barriers?

Speaking about AI, it is also interesting to note the potential of such tools to improve the quality of life for people with different types of disabilities. And here, it is worth paying attention not only to what such solutions can offer in the educational context but also to how they help with everyday tasks.

Tools such as screen readers or text-to-speech solutions can be highly useful for people with low vision and other visual impairments. It is possible to take any webpage and ask AI to read out what is written or shown there. In other words, even if people can’t read or see something on their own, AI can do it for them. Of course, inaccurate outputs caused by AI hallucinations are still possible, but that is already a great step forward.

AI can also be of great help for those who have problems with writing and typing due to dyslexia or any other issues. In this case, people can rely on speech-to-text features, as well as AI-powered grammar and spelling checkers.

Given this, we can say that artificial intelligence makes many things accessible to people, even things they previously couldn’t do.

Talking about AI’s capacity to expand people’s horizons, Dominik also mentioned that not speaking English is already a huge limitation today. Those who do not know the language are cut off from a huge part of the world, especially when it comes to learning, since a lot of materials are available only in English. Here, too, AI can demonstrate its power: you no longer need to wait until a piece of research is translated into your native language; you can ask AI to do it for you and get a quick result.

And…Is an AI apocalypse inevitable?

Let us be fully honest with you: that is just an eye-catching subheading. While some people try to guess what is going on in GPT’s mind, experts like Dominik already know the answer: nothing. Really, nothing is going on in GPT’s mind until the moment we send a question to the chatbot.

We humans, by contrast, are learning constantly; even when we are sleeping, our brains are changing.

Large language models, as well as other AI-powered tools, can’t think the way we can. They are not exploring the world around them. If there are no requests from users, such models sit quietly, just a blob of numbers on your hard drive. So we can feel completely safe.

Instead of the final word

The AI industry is advancing at an enormous pace. Even a couple of months can bring impressive changes, and half a year feels like a leap into a new era. That’s why it is practically impossible to predict what comes next and when. So let’s wait and see how AI tools evolve in the near future and how education and other spheres are affected by these changes.

Looking for more insights from the world of AI? Follow us on YouTube, like our videos, ask questions in the comments, and do not miss the next episodes of the Innovantage podcast hosted by Max Golikov.

Subscribe to Innovantage YouTube Channel

Check Innovantage Spotify

Listen to Innovantage on Apple Podcasts

