Would you continue using OMLx if there were no forum or chat rooms?

A properly trained AI would be able to parse the entire support forum far better than the current search algorithms do, reduce how often support staff need to tell folks to submit logs, direct them to the relevant posts, etc.

To deny this out of some blind dislike of AI is not reasonable.
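As a concrete (and deliberately tiny) sketch of what "parse the forum better" might mean in practice: retrieval systems embed each post as a vector and rank posts by similarity to a new question. The Python below fakes the embedding with a bag-of-words counter — a real system would use a trained embedding model — and the topic IDs and post texts are invented for illustration:

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a learned embedding: bag-of-words counts.
    # A real LLM-backed system would use a trained embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical forum topics, keyed by made-up topic IDs.
posts = {
    "t1": "kernel panic after update please attach journal logs",
    "t2": "how to change desktop wallpaper in plasma",
    "t3": "system freezes after kernel update logs attached",
}

def related(query, posts, top=2):
    # Rank all topics by similarity to the new post, return the best matches.
    q = embed(query)
    ranked = sorted(posts, key=lambda t: cosine(q, embed(posts[t])), reverse=True)
    return ranked[:top]

print(related("crash after kernel update", posts))  # → ['t3', 't1']
```

The point is only the shape of the pipeline — embed, score, rank; the quality lives entirely in the embedding model.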

…and that differs from humans how, exactly?

I think what you are saying is there needs to be automation that takes the content of a post and scrapes all other topics to find related info. That is really just better queries. You don’t need the overhead of a full blown AI to do that, and often end up with much worse results over time. I agree the search function in Discourse is limited.
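For what it's worth, the "just better queries" idea can be sketched with an ordinary full-text index, no LLM involved. Discourse actually sits on PostgreSQL, so SQLite's FTS5 below is only a stand-in, and the topic titles are made up:

```python
import sqlite3

# Build an in-memory full-text index over some invented forum topics.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE topics USING fts5(title, body)")
conn.executemany(
    "INSERT INTO topics VALUES (?, ?)",
    [
        ("Kernel panic after update", "Please attach journalctl logs."),
        ("Change Plasma wallpaper", "Right-click the desktop."),
        ("Freeze after kernel update", "Same issue, logs attached."),
    ],
)

# Rank topics by BM25 relevance against the text of a new post;
# in FTS5, ORDER BY rank returns the best matches first.
rows = conn.execute(
    "SELECT title FROM topics WHERE topics MATCH ? ORDER BY rank",
    ("kernel update",),
).fetchall()

# Both kernel-related topics match; the wallpaper one does not.
print([r[0] for r in rows])
```

Wiring something like this into the existing stack is the kind of "better query" being described — a plain index lookup, deterministic and cheap to run.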

Because deploying AI is something everyone will have to deal with. You can choose which humans you take advice from; there is no choice if AI is brought in to supplement what the humans should be doing on their own.


Semantics. Either you use LLM tech, or you don’t.

If you don’t, then you have what you have right now. If you do, then you can leverage it.

It’s really not. DB queries, and functions to call them on the existing stack, are more efficient, more controllable, and often faster than bringing in a foreign system to do the same job.

Total BS. I can choose to ignore anyone or anything I like. You are acting as if, were LLM tech implemented, humans would be banned from the space. I don’t think OML would do that; they’d just use it to minimise how much effort the actual humans have to dedicate to support.

I suggest you look into how LLMs actually work.

DB queries are not even close.

I’m really not. If you misunderstand someone, don’t speak for them.

I’m saying humans will leave if they do not want an AI annoying them.


Why would you assume that I haven’t?

Well, you don’t seem to like the idea.

Humans will leave if other humans annoy them as well. All we want is to be serviced as fast, as cheaply, and as effectively as possible. AI can help with that, no matter how much the nay-sayers whine about it.

I don’t. But not because I don’t like you, or anything. I just have worked with databases before. You are free to start a new topic in Development > Packages and features requests outlining how you would do it, though.

In both cases the instigator of the dislike would eventually have to be removed, which brings you back to where you started. It’s not exactly an argument.

This also doesn’t apply to a no-cost product made by a voluntary entity.

Considering the vast majority of AI code is also written by the woke, I’m in no great hurry to adopt it. It’s not a requirement for anyone to live, and you can’t really say you wouldn’t use Matrix because "woke" and then lecture me about being an AI "nay-sayer."


It’s the training data that is provided by the woke, not the actual LLM algorithms — just like with humans. LLMs are no more woke than humans: train humans on woke data and you get woke humans, and the same goes for LLMs.

You are saying that only they can train LLMs, and never us. Such defeatism.

So, if we apply the same logic that necessitated the poll in the first place (multiple sources of information out of sync, and a lack of people to curate them), what leads you to believe we have a multitude of people to properly audit and train an AI so that it scrapes only the forum and serves flat information, without hallucinating or wandering into other kinds of content?

No, I’m saying your logic is flawed and your understanding of software engineering principles is not correct.

I’m also not sure why you are deciding to be confrontational. I guess we both know you won’t be doing this anytime soon. If you were in our chat, you would know we have complainers in there, as well. I just tell them to do the work and stop complaining. That’s basically what I’m telling you now. If you want a better solution, find it and propose it. Otherwise, you are just manipulating your stats on our forum for your own benefit.

Whatever. Humans "hallucinate" (another word for giving the wrong answer) as well.

Please elucidate that comment.

There are neural networks, perceptrons, etc., and the training data they use. Bad training data = bad output, just like with humans.

Now, if you have actual evidence that neural networks and perceptrons are woke in and of themselves, in the absence of training data, please share it.
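A toy perceptron illustrates the point being made here: the learning rule below is fixed, and the only thing that changes between the two runs is the training data — so the learned behaviour changes with it.

```python
# Minimal perceptron. The algorithm (weight-update rule) is identical in
# both runs; only the training data differs.

def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Classic perceptron update: nudge weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Same algorithm, two different training sets: logical OR vs logical AND.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

m_or = train(or_data)
m_and = train(and_data)

# Identical input, different training data, different answer.
print(predict(m_or, (0, 1)), predict(m_and, (0, 1)))  # prints: 1 0
```

Swap the datasets and the same code learns the opposite behaviour; nothing in the algorithm itself encodes either answer.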

I’m saying that if you know how they work and develop them, then you can prove your statement. It’s not up to me to disprove it. That’s a collectivist tactic.

You also neglect to mention the part where the CoC brigade is actively trying to silo off contributors and users they do not agree with.

That is all training data problems, not algorithm problems, just like with humans.

I can tell you that if we don’t have the people to consolidate the info, then we don’t have the people to acquire new knowledge to train LLMs.