About 25 years ago, Jimmy Wales did something that had never been seen on the internet. He introduced an online encyclopedia to the world and asked strangers to build it together. Today, Wikipedia is the go-to website for over a billion people and is quietly powering much of what artificial intelligence (AI) knows, or at least thinks it knows.
At a time when large language models (LLMs) are proliferating, misinformation is going viral at unprecedented scale, and AI systems are concocting a book's ISBN with absolute confidence, Wales continues to be an optimist. The Wikipedia co-founder is not worried about AI replacing Wikipedia; he is more apprehensive about what happens when the humans who verify, debate, and curate information slowly begin to disappear from the equation.
On the sidelines of the AI Impact Summit 2026, we sat down with the co-founder of one of the internet's great public institutions to discuss truth, trust, and why, in the age of AI, the most radical thing you can do is cite your sources.
Edited excerpts from the conversation with Jimmy Wales, co-founder of Wikipedia.
Wikipedia is built on the idea of free human knowledge. In an age when AI models generate instant answers, what does 'free knowledge' actually mean?
Jimmy Wales: Well, for us, the concept of free knowledge has always meant two things. Free of cost, which the AIs are; well, many of the models are free, but the better ones are usually not. So free of cost, but also free in the sense of being freely licensed to be redistributed, reused, and shared by anyone for any purpose. That hasn't changed. Obviously, the information ecosystem is changing – in some ways for good, in some ways for bad.
AI models are trained on Wikipedia at a massive scale. Does that make Wikipedia, in essence, the backbone of the AI economy or the infrastructure that’s quietly propping it up?
Jimmy Wales: Definitely. For many years I've said that at Wikipedia we feel a really strong and heavy responsibility, because Wikipedia has become part of the infrastructure of the world. Now, LLMs are clearly part of the infrastructure of the AI world. And I think it's really important that human-curated knowledge – "knowledge is human" is actually one of our themes for this year – remains central. It's important in no small part because AIs can have terrible problems with accuracy and facts.
I continue to be amused when I reflect on this: what if you had asked me 25 years ago, at the founding of Wikipedia, what the first AIs would be like? I think we all would have said, ‘Oh, they’ll be completely uncreative, very fact-bound, very matter-of-fact.’ It turns out they’re really bad at facts, and they’re actually quite creative. You can ask them to brainstorm ideas. Because of that, we think that for a very long time, human oversight and curation will be really, really important.
You’ve always maintained that AI should support human editors and not replace them. Where exactly do you draw that line, and who gets to enforce it?
Jimmy Wales: For us, everything is enforced by the community through established processes, along with a great deal of deliberation, discussion, and debate. That work is ongoing, particularly as we think about which kinds of tools are useful and which are not.
I really enjoyed one recent story. A German Wikipedian named Matthias came to me and said, “Jimmy, you should see what’s been happening on German Wikipedia.” He had written a bot, not an AI bot, that scanned German Wikipedia for ISBNs, the international identification numbers for books, and checked them against a database to verify they were correct.
He explained that when he finds an incorrect ISBN, it is usually an obvious human error – a cut-and-paste mistake or a single digit typed incorrectly. But this time, he found ISBNs that didn’t exist at all, and the corresponding book titles weren’t in the database either. Looking more closely, he noticed that several of these entries came from the same editor.
Matthias contacted the person and asked, “What is this? You’re adding ISBNs that don’t exist.” It turned out to be a good-faith mistake. The editor said, “I’m so sorry. I’m new to Wikipedia and wanted to help, but I didn’t know what to do, so I thought I’d add references. I asked ChatGPT for them, and it gave me these. I had no idea it could make up ISBN numbers.”
And yes, it can make up ISBN numbers. In this case, the issue was caught by a volunteer and fixed before it caused any lasting harm. It's a cautionary tale: the output can look convincing, but it isn't reliable enough to trust unchecked. Now, Matthias plans to run his tool across many more languages to look for similar issues.
That's the kind of work the community is doing – figuring out how to use these tools responsibly.
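The article doesn't show how the bot works internally, but the first, purely mechanical step of such a check – validating the ISBN-13 check digit before any database lookup – can be sketched in a few lines of Python. The function name and the example ISBN below are illustrative assumptions, not taken from Matthias's actual tool:

```python
import re

def isbn13_check_digit_ok(isbn: str) -> bool:
    """Return True if the ISBN-13 check digit is consistent.

    ISBN-13 digits are weighted alternately 1, 3, 1, 3, ...;
    the number is well-formed when the weighted sum is divisible by 10.
    """
    digits = re.sub(r"[^0-9]", "", isbn)  # strip hyphens and spaces
    if len(digits) != 13:
        return False
    weighted_sum = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return weighted_sum % 10 == 0

# A correct ISBN passes; a single-digit typo (the usual human error) fails.
print(isbn13_check_digit_ok("978-0-306-40615-7"))  # True
print(isbn13_check_digit_ok("978-0-306-40615-8"))  # False
```

Note that a hallucinated ISBN can still carry a valid check digit, which is why the second step in the story – comparing the number and title against a bibliographic database – is what actually catches invented references rather than mere typos.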
Right at the start of the year, the internet seemed to explode with Moltbook, a platform where AI agents interact with each other. As AI begins citing other AI-generated work, is there a real danger that AI-generated knowledge could override human consensus rather than reflect it?
Jimmy Wales: One of the things I’d say about that is that journalism is more important than ever, because building trust in the information we read still very much relies on humans. You came here to see me, you recorded what we said, you’re a human witness, and you can write it down and say, ‘This is what we talked about; this is what happened.’ In the past, to make things up wholesale was very expensive; you just couldn’t do it easily. But now you can push a button and generate a hundred fake interviews with me, all of which are kind of plausible. So we really do need human journalists doing this work, and I do worry about it.
But not so much in the direct sense of a problem for Wikipedia, because Wikipedia is always about reliable sources. You need a source, and you need quality journalism. I worry when I see news stories about news organisations starting to use AI. I'm like, okay, great, but let's be really careful. There are certain types of uses where I think that's probably fine, and others where I'm like, 'I don't think that's a good idea.' I mean, everybody's seen it – I don't know if it's real or not – but there's a screenshot floating around of a news article where the last sentence says something like, "If you'd like to hear more, would you like me to write another essay for another newspaper?" And it's obvious that some journalist somewhere cut and pasted from a ChatGPT response on a deadline. Obviously that's a funny story, and little things like that will happen. But we really do need to make sure our facts are facts.
Newsrooms are already using AI despite hallucination risks. Do you think something like Wikipedia can be called the last line of defence against AI-driven misinformation?
Jimmy Wales: To some extent, yes. Journalists are the first line of defence, and communities like Wikipedia are the last. We’ve had a problem for years with genuinely fake news: websites that look plausible but aren’t real. That problem is only intensifying now that vast amounts of content can be generated easily.
The Wikipedia community is obsessively focused on sources and quality. Editors constantly debate whether a source is reliable, and they already know the major newspapers and institutions. That makes it hard to fool Wikipedians, even if it’s easy to fool people on social media.
One example I often give is a fake news site called the Denver Guardian. It sounds real, but Wikipedians immediately questioned it, checked, and discovered it was completely fabricated. That’s the last line of defence: slowing down and asking whether something is actually real. I hope the public is developing the same instinct. Just because something spreads quickly online doesn’t make it true. Validation still matters.
How do you convince younger users that human-managed knowledge matters when AI feels smarter, faster, and more confident?
Jimmy Wales: I think the first thing would be to point to reliability – to say, actually, AI can make up completely crazy things. I always enjoy asking AI about my wife, because she’s not a famous person, but she’s known a bit, so she’s the perfect example. The AI thinks it knows who she is, so it just makes things up, and they’re usually plausible. That’s always kind of amusing but also a warning that AI gets things wrong a lot.
And then there's the element of judgement, which is maybe the second thing. There's a difference between indiscriminately spewing out a bunch of facts and the judgement of what's important and what you need to know. I talk about this even around machine translation – it's great that it's gotten a lot better, but you need to think about the context of your reader. Who is reading this? What do they need to know? If you wrote about the most famous cricket player in Hindi, you might not need to explain who he is. But in English, you'd better explain it, because a lot of your readers might not follow cricket. Did you translate the text correctly? Maybe. But did you translate the knowledge context? You really do need that richness of human experience to do that well.
How do you view something like Grokipedia? I wouldn't describe it as a rival or a peer to Wikipedia, but these kinds of engines are emerging and are often mentioned alongside it, often with a subtext of shaping or suppressing certain political narratives. What is your perspective on this?
Jimmy Wales: I use large language models a lot. I'm very fascinated by them because I love technology, and I'm always experimenting. And I know that they are nowhere near good enough to write an encyclopedia. I think when people first start using them, they're like, 'Oh, this is incredible; this is amazing.' But the more obscure the topic you ask about, the worse the hallucination problem gets. So the quality is just not good enough. I'm not particularly worried about that. It's an interesting experiment, but it's far too soon.
As an extension of that, when we have sleek, AI-powered knowledge systems that promise speed, personality, and synthesis, why should users still trust a slower, human-driven encyclopedia?
Jimmy Wales: Well, Wikipedia is really, really fast. I always say, if you hear on the radio that some famous person has died and you go as fast as you can to Wikipedia to be the first person to update it – it’s already there. So we’re pretty fast. But there are also cases where we’re slow, and I think those are interesting as well.
One example I was really proud of: when Michael Jackson died, it took about 10 to 12 hours before the community would allow it on Wikipedia, because the first report came from TMZ – a celebrity tabloid, which is okay, but it is a tabloid and it’s made mistakes in the past, and it wasn’t being confirmed by any major news source. The community treated it as, ‘Well, it’s on TMZ, and there are a lot of rumours online, but it hasn’t been confirmed by any serious news organisation, so we shouldn’t have it.’ I think that’s a good thing. In some cases, we should be a little bit slower, because we don’t know yet. I don’t think AI is necessarily going to be faster. It’s a good question.
India is one of Wikipedia’s largest audiences and editor bases. How critical is India to the future of free knowledge online?
Jimmy Wales: I think it's huge. As a side note, this is my third trip to India this year, and it's only been a month and a half. I love coming to India, and I love the energy. When I was in Kerala, I met with the local Wikipedia community, and it was just very fun. Their excitement about working with schools and libraries, and the different projects they're working on – that kind of energy is fantastic.
Obviously the Indian IT sector is very important to the economy, and the interest in AI is great. I think it's great that the AI Summit is here; it really highlights that. And for us, for Wikipedia, the multilingual nature of everything we do is so important. One of the great side benefits of LLM technology is that machine translation has gotten much better than it used to be. It used to be completely useless, then kind of okay but amusing. It first became decent between economically important language pairs – English-Spanish was one of the earliest. Now it's good across all kinds of languages, and that's really exciting for our community. People want to be able to draw knowledge from different places and integrate it. In a country like India, with a huge number of languages, we have many communities working in English, in their mother tongue, and in other languages.
Is there any concern that AI models, which are mostly trained on English content, could end up marginalising non-English languages?
Jimmy Wales: Yes, definitely. It’s one of the things that’s really interesting. I just saw news about a company here announcing an AI trained on eight Indic languages, and it sounds like a great programme. I can’t really comment on it in detail, but I think that’s hugely important.
The problem of bias in AI is the same as the problem of bias in humans. I might have certain cultural biases I don't even know about, because I'm an English speaker who grew up in America and lives in the UK. But the good news is I can't cause much harm with my own biases, because I can't come into your language and start writing about things I know nothing about. An AI that is actually quite good at writing in all languages might bring in all kinds of weird assumptions and things that are just wrong. We know this kind of bias is a problem, so it's hugely important for people working in AI to ask: what is the body of work we're training on? Is it comprehensive?
If you were to start Wikipedia today, in an AI-first internet era, what would you design differently to protect trust?
Jimmy Wales: I always say I’m a pathological optimist, so I tend to think everything is fine, and I don’t believe I would change very much. But if we were truly starting from scratch today, we would have to spend a lot more time discussing when not to use AI, and there would be a real temptation we’d need to resist.
The best parallel I can think of, and I don’t think I’ve ever said this before, comes from Wikipedia’s early days. The 1911 edition of Encyclopaedia Britannica was in the public domain and legally available for anyone to use. Someone suggested, “Why don’t we just copy all those articles to get started?” People actually began doing that, but we quickly stopped.
When we looked closely, the content simply wasn't very good. You might think, for example, that an article on Aristotle wouldn't need updating – he died more than 2,000 years ago. But in reality, a great deal had changed: scholarship had advanced, archaeological research had progressed, and our understanding had evolved. We realised that copying that material wasn't helpful.
I think the same thing would happen with AI today. You’d start using it and then realise it contains too many errors to be genuinely useful. We would likely go through that learning process, but I don’t think we would fundamentally change how we operate. The fundamentals of knowledge are always going to be the same.