The Decline in Buddhist Discussion Groups

There has been a decline in activity recently at CT. @RobertK has also mentioned that Dhammawheel has experienced less traffic. We believe it is because of AI. These days, we can just ask AI for references or meanings and get instant answers rather than waiting one or two days.

I myself am a heavy user of AI. I also use it for getting ideas and references for some of my Dhamma talks or papers I write for school. I use it for proofreading, and we have AI proofreaders in our CT group that will correct Pāli text too.

We have the commentaries translated fairly well on the Pa-Auk website.

Is this good? Much of the information about Buddhism gets filtered by the discussion groups, which decide what is valid and what is not. The AI knows that PureDhamma is not a reliable source, and it knows the controversies behind that judgement. But there are human elements, mistakes and all, that will be missed.

The leaders of the world believe that devices will eventually just be AI devices. There is a really good explanation of how that will work by Neistat.

Ven. Sujato had a big debate on SuttaCentral. He will lose that war, but he has become a huge influence through SuttaCentral’s indexing engine for suttas. It is always #1 when I search. I own ebook translations of the Wisdom Pubs Nikāyas, but I almost never crack them open. When I need something, I google it. Now I ask Gemini… and ask it to give me the links. I can even ask by subject and then get the links. We still need to check and verify things, but later it will be so good that we won’t do that.

It seems that later on, if a massive conspiracy were to come to fruition, the information could be rewritten once we depend on this. The 1984-style rewriting of history could actually happen. It is interesting to think about how the future will be. There are great things coming, and it only gets better. However, ChatGPT has gone downhill.

If you want Dhamma, I recommend Gemini Pro. Nevertheless, there is also value in reading whole suttas rather than pinpointing the exact example you are looking for.

Recently, the PTS books were published. They are the extended, rolled-out, unabridged versions: poetically arranged and a longer read. Mike Olds believes that the magic comes with the repetitions. He certainly spent a lot of hours producing these books to prove his point. Not even the Pāli versions do this.

All that said, and even with the encouragement of long-form, unabridged reading, there is something nice about learning less and learning it more deeply.
Enjoy the website while it lasts!



In a sense, Bhante Ariyadhammika told me it is good: I don’t get to waste my time (according to him) on forums, since people who have questions can ask Norbu or other AIs. (We also put the suttas, Vinaya, and commentaries, in various combinations, into NotebookLM, which is superb at citations, and even if it hallucinates, we can check the citation to verify.)

On SuttaCentral not using AI: I cannot even promote this AI tool there. So I think it’s a loss on their end. But then, indeed, it’s good to at least have somewhere on the internet that remains AI-free, so that it isn’t AI training on AI slop indefinitely; at least there’s a pure source.

r/Buddhism is also AI-free, and as far as I can see, it’s as active as ever. Same with the SuttaCentral forum.

Now, my personal project is to try to use AI to design a game that uses the Dhamma as its game mechanics and can take an EEG headband as part of the input from the user. I am just coming up with the design to write a paper on; I don’t think I will make the game myself.

How do you go about doing that? I haven’t tried NotebookLM.

I find NotebookLM not as good as Gemini Pro. I’ve tried doing that; it is a weak engine.

Gemini Pro 3 is very good now.

To do what he says… just drop the 4 Nikāya PDFs + Dhp + Dhp Commentary + Jātaka Commentary into NotebookLM and then ask it questions. It was not very good. Less information is better.

There is too much info, though. Maybe it is best to split the commentaries out into a separate notebook for questions. It also does not seem to retain discussions. That part is frustrating for me.

NotebookLM has one good point. It will only look at PDFs that are given to it.

For my studies… I find that a chat window works best for me on the professional tier. Right now, ChatGPT is dropping in quality. Gemini 3 Pro is the new leader. Grok is good too… if you are on the free tier, Grok wins.

Different opinions exist because some people only know the free tier.

The other issue is that Elon Musk gets overwhelmingly bad press, yet Grok is #1 on OpenRouter. Who is OpenRouter? They are a huge central router for changing AI engines on the fly without changing the code. How big? Discourse, the software we are using right now, has OpenRouter as a main use case. We use Grok on OpenRouter for our Discourse.
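To illustrate what “changing AI engines on the fly without changing the code” looks like in practice, here is a minimal sketch against OpenRouter’s OpenAI-compatible API. The model slugs and the environment variable are just illustrative placeholders (check openrouter.ai for the current identifiers); this is not our actual Discourse setup.

```python
# Minimal sketch: the same code, different engines, selected only by the model string.
# Endpoint is OpenRouter's OpenAI-compatible API; model slugs below are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",       # OpenRouter endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],      # your OpenRouter key (illustrative env var)
)

def ask(question: str, model: str) -> str:
    """Send the same question to whichever engine is named; only the string changes."""
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return reply.choices[0].message.content

# Swapping AI engines is just swapping the model slug:
print(ask("Summarize the Dhammapada chapter on heedfulness.", "x-ai/grok-2"))
print(ask("Summarize the Dhammapada chapter on heedfulness.", "google/gemini-pro-1.5"))
```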


Something that we need to be aware of (mentioned to me by Venerable Subhuti) is that by posting here we are ‘training’ the AI, since it uses this content. If we use AI excessively, then it can bring in distortions.
The same happens, of course, with poor content written originally by humans. Care is always needed!

It’s kind of a natural evolution. Post-COVID, with the rise of Zoom services, there was a notable decline in in-person attendance at temples as well. This is also true of Christian churches, as far as I know. AI will naturally displace a lot (but not all) of the things posted on forums too. But by posting here we also help train the AI models, as mentioned above, since a lot of the more niche stuff doesn’t have wide coverage in traditional online sources, and thus the AI goes to forums for niche topics like English-language Classical Theravāda.


Yeah, I’ve never understood the enthusiasm for systems that just feed back statistically probable language based on input that was never checked for accuracy.

My experience using LLMs for coding has only decreased my trust in the models. I can’t tell you how many times I have asked a question and been given a wrong answer that the AI eventually admits it knew was wrong in the first place.

I think the consensus is that it is good for generating ideas, but cannot fully be trusted. The problem will happen when it gets better and people do trust it.

It does democratize the smaller web pages like CT that can add some orthodox perspective.

There is also a “skill” in prompting to get what you need and to get the links (and then testing the links). All in all, I’m confident in saying that it translates Pāḷi better than I will ever be able to myself. Nevertheless, I will still try to learn Pāḷi, probably until I die. We are very lucky that AI has opened up the commentaries for us. But on the downside, that used to be a great source of discussion in the groups: “What does the commentary say about such and such?”


Right. People are treating it as an information source. I know in Sri Lanka that people are treating it as a substitute for googling.

I’m sad to say that I have to agree this is also the case for me. I doubt I will even keep trying as you are. But the fact that I don’t know Pali is the very thing that worries me about using it.

One thing I have found it to be helpful for is seeing what a commentary passage doesn’t say about a sutta. For instance, I was wondering if the commy addressed a certain issue in a sutta. It’s actually a really low threshold to see that the commy in fact said nothing at all about the issue in question. If it had addressed the question I had, then I’d need to get a real translation before I came to any conclusions about it. But I wouldn’t need to waste someone’s time simply to find that the commy wasn’t going to be helpful.

You have to be careful about non-mentions of things in the commentary. Ven. Ñāṇananda, the one who wrote “Nibbāna: The Mind Stilled”, starts off by saying the commentary says nothing about nāma, which is total nonsense! I cannot believe someone educated would say such a thing, but… he is not orthodox. In any case… many commentaries are cumulative within a nikāya; they don’t repeat. We are also so lucky that many commentaries, for those that mention meditation and attainments, refer us to the VSM. You have to understand how the commentary works.


Right. That is a good point. This was something much more narrow. I realize the only conclusion I could draw was “The commentary to this sutta doesn’t mention x”. And the sutta wasn’t in a series of suttas where the explanation would have been found previously.

We are obviously lacking a high-quality annotated edition of the commentaries that would remedy this. My greatest fear is that these LLMs will make such a work less likely to be created, but no less important to have. I’m also afraid that people will soon trust their LLMs more than wise teachers, and that those wise teachers will give up on sharing their wisdom.

This is a worry. I think people do understand, though, that as good as AI is, it is still reliant on and trained on the original input from the teachers (and Commentary/sutta). So teachers should be encouraged to keep sharing, and people should still discuss and ask questions on Dhamma in a live situation.

Bhikkhu Bodhi’s Commentary translations such as the Net Of Views are the gold standard and probably one reason that AI has become as good as it is (using his works to train on).

That’s not been my observation. I find people think it surpasses the knowledge and bias of any real human teacher. I mean if people really are using forums less (and people on forums are reposting AI answers) doesn’t that tell us something?

I’m trying to convince my friend, who will do a PhD, to research the advantages of using AI to create a base and then making modifications to correct the little details. He might just do that. Then with this “research” he can gain a PhD.

AI is useful; I use it professionally and it’s a huge productivity booster. Output still needs to be refined after being generated, but it’s much easier to refine and check the accuracy of generated material than to make it from scratch. The Internet didn’t replace telephones, but it certainly displaced some of their usage.

I trust NotebookLM more, because it cites things. So if I ever doubt the answer, I look up the citation. It does get things wrong sometimes.

Compared with generic AI, the issue is that one might not know whether the AI is purely hallucinating if one’s knowledge is not good enough, since generally AIs do not cite where they got the information from.

Generic AI being trained on forums is seriously risky. Just see the quality of a lot of answers on Reddit; and some forums don’t even have a feedback system of likes or dislikes, or up and down votes, like Dhammawheel, and there could be people with strong wrong views there, unopposed by those with right views who prefer not to quarrel.

Pretend you’re from a Mahāyāna school and ask the generic AI Mahāyāna questions; I doubt it will answer in Classical Theravāda style. Whereas NotebookLM can be customized to whichever sources one wishes.

Nowadays it also keeps the history of the chat. One can also save it as a note and even import notes as a source.

Either way, you need to verify what it says.
NotebookLM is a closed loop on what you give it.
However, give it too much and it gets confused.
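To make that “closed loop” idea concrete, here is a toy sketch of the general pattern (my own illustration, not how NotebookLM is actually built; the endpoint, environment variable, and model slug are placeholders): give the model only your own excerpts, demand citations, and then check that every citation points at a source you actually supplied.

```python
# Toy "closed loop with checkable citations" pattern -- not NotebookLM's real mechanism.
import os
import re
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",        # any OpenAI-compatible endpoint (placeholder)
    api_key=os.environ["OPENROUTER_API_KEY"],
)

# Your own excerpts (e.g. sutta and commentary passages), numbered so they can be cited.
sources = {
    1: "Excerpt from the sutta ...",
    2: "Excerpt from the commentary ...",
}

def ask_grounded(question: str, model: str = "google/gemini-pro-1.5") -> str:
    numbered = "\n".join(f"[{n}] {text}" for n, text in sources.items())
    prompt = (
        "Answer ONLY from the numbered sources below. "
        "Cite every claim as [n]. If the sources do not answer, say so.\n\n"
        f"{numbered}\n\nQuestion: {question}"
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    answer = reply.choices[0].message.content
    # Verification step: every citation must point at a source we actually supplied.
    cited = {int(n) for n in re.findall(r"\[(\d+)\]", answer)}
    assert cited <= set(sources), f"Made-up citation(s): {cited - set(sources)}"
    return answer
```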

However, you can do the same thing with Gemini 3 Pro.
Unless you have access to Gemini 3 Pro, please don’t argue with me about what is better and what is not. It is funny when people use free or inferior tools and then tell me which one is better. I do many useful things with AI for real projects and many use cases, so I know quickly which one is the smartest. Grok is not as smart, but it gives instant answers. Gemini 3 Pro is very smart but a little slow.

Bhante,

I do not know about other AIs, but when it comes to ChatGPT, I do not fully trust it. Once I asked it a question about finding such and such a passage in a sutta, and it gave a fake answer which I knew was fake. It even gave a reference and was totally confident in the blatant lie it was telling. After a few tries, I was able to convince it that it was wrong.

After this, I take what it says with a grain of salt… Or at least, one needs to double-check all the crucial info.

IMHO.