Stuck or curious?
Browse the FAQs below or check our Documentation.
For unresolved issues, reach us on Discord or at support@batcher.ai.
General questions:
Bulk AI processing platform combining spreadsheet simplicity with enterprise-scale automation capabilities. Ideal for repetitive text generation/analysis tasks at scale.
While ChatGPT handles single interactions, Batcher.ai specializes in mass operations - process 10k chat-style requests simultaneously with spreadsheet-level organization and error handling for bulk jobs.
Processing time depends on row count, model complexity, and context length; typically 2-15 seconds per row using cloud APIs. Parallel processing can handle hundreds of rows per minute.
All processing occurs in isolated environments, and data is retained only for the number of days allowed by your subscribed tier. Your data never trains public models unless explicitly authorized; when requests are routed through external APIs, those providers' own data policies apply.
No, the spreadsheet interface handles the complexity, and formula templates help you define the AI logic.
Multimodal and version control support are planned for the next major update. Submit or vote on upcoming features in the roadmap channel of our Discord server.
Various use cases can be found on the uses-cases page and on our Discord server.
Advanced Features:
The system scales with your current subscription level. Only the Freemium plan has per-job row limits, while Pro+ tiers can handle 100k+ rows with priority queuing.
Yes: use cell references in formulas to create multi-stage workflows, so the output of Row 5 automatically becomes the input for Row 6 (a conceptual sketch follows below).
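Conceptually, a chained workflow simply feeds one stage's output into the next stage's prompt. The Python sketch below illustrates the idea only; `run_model`, the prompts, and the sample ticket are hypothetical placeholders, not Batcher.ai formula syntax.

```python
# Conceptual sketch of a multi-stage chain (NOT Batcher.ai formula syntax):
# each stage's output becomes the next stage's input, the same way a cell
# reference pulls Row 5's result into Row 6's formula.

def run_model(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call -- swap in your provider."""
    return f"<model output for: {prompt!r}>"

raw_ticket = "Customer reports the login page times out on mobile."
summary = run_model(f"Summarize this support ticket: {raw_ticket}")    # stage 1 (Row 5)
category = run_model(f"Assign a category to this summary: {summary}")  # stage 2 (Row 6)
print(category)
```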
Per-cell provider selection, with integrations for OpenAI, Anthropic Claude, and 30+ other LLM providers. You can also add custom endpoints or self-hosted models via Ollama/vLLM.
Yes, integrate it via your Ollama endpoint (a quick endpoint check is sketched below).
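Before pointing Batcher.ai at a self-hosted model, it can help to confirm that your endpoint answers on its own. The sketch below uses Ollama's standard /api/generate route; the host, port, and model name are assumptions, so substitute whatever you actually run.

```python
# Minimal check that a local Ollama endpoint responds before connecting it.
# The URL and model name are examples -- use your own host and a model you
# have already pulled (e.g. `ollama pull llama3`).
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

resp = requests.post(
    OLLAMA_URL,
    json={
        "model": "llama3",                          # any locally pulled model
        "prompt": "Reply with the single word: ready",
        "stream": False,                            # return one JSON object, not a stream
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])                      # the model's text output
```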
Leverage Batcher’s per-cell model control to implement validation: add dedicated "validation columns" that use LLM formulas to audit results (a conceptual sketch follows the list below).
For bulk operations:
- Add a validation column flagging low-confidence outputs (score below 8/10)
- Batch reroute flagged rows to the same or more powerful models
- Batcher’s parallel processing lets you run verification checks across 20K+ entries in minutes
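The Python sketch below illustrates the flag-and-reroute pattern outside the spreadsheet. It is a conceptual example only: the Row structure, the 1-10 scoring, and the threshold of 8 are assumed placeholders rather than Batcher.ai built-ins.

```python
# Illustrative sketch of validate-then-reroute (assumed workflow, not
# Batcher.ai's built-in syntax): score each output, collect rows scoring
# below the threshold, and resubmit them to a stronger model.
from dataclasses import dataclass

@dataclass
class Row:
    prompt: str
    output: str
    score: int  # 1-10 confidence score produced by a validation prompt

THRESHOLD = 8

rows = [
    Row("Translate to English: bonjour", "hello", 10),
    Row("Summarize clause 4 of the contract", "unclear partial answer", 5),
]

# Flag low-confidence rows, as a validation column would.
flagged = [r for r in rows if r.score < THRESHOLD]

# Reroute flagged rows; here we only print, but in practice you would
# resubmit them to the same or a more capable model.
for r in flagged:
    print(f"Re-running with a stronger model: {r.prompt!r}")
```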
Pricing / Customer service / Troubleshooting
Failed rows are flagged without stopping the whole batch. Retry them with adjusted parameters, such as the parallel-processing setting, and check the context length.
The web interface works in any modern browser, with Firefox as the primary focus. For heavy sheets (>10k cells), 16GB+ of RAM is recommended for smooth editing.
24/5 email ticket support (12h response for Tier 1+ during business hours; average response time of 4h).
- 15k tokens of batcher.ai inference per day.
- 100 cells per document.
- 15 processed documents per month.
- 7 days data retention.
Coming soon via PartnerStack.