How do you usually test AI tools before using them in production?
Very open-ended question.
What are we talking about here?
- AI agents that run on production docs, data, or internal knowledge bases to provide automated help to customers or support agents?
  - HubSpot
  - Zendesk
  - etc.
- AI automation tools that run in various pipelines and tasks?
  - n8n
  - Zapier
  - etc.
- AI agents that run on pull requests to automate code reviews, security analysis, etc.?
  - CodeRabbit
  - CodeAnt AI
  - etc.
- AI agents that run on dev machines or in the company cloud to help with coding and QA?
  - Codex
  - opencode
  - etc.
Something else entirely?
