Integrated with AI
Cloud and on-premise AI, powered by leading Large Language Models (LLMs)
ChronoScan AI – Next‑Level Document Intelligence
ChronoScan Capture Advanced and Enterprise editions integrate the latest generation of AI, running Large Language Models (LLMs) for transformative document automation.
LLMs deliver advanced data extraction, document analysis, automatic content classification, summarization, natural language understanding, and more.
ChronoScan supports three major AI engines for maximum versatility and business fit:

OpenAI ChatGPT
- Integration with the GPT-3.5, GPT-4, and GPT-4o online services.
- Supports context windows of 4K, 16K, 32K, and up to 128K tokens.
- GPT-4 Vision (gpt-4-vision-preview) and GPT-4o support direct image processing (no OCR needed).
- Highly flexible for extracting, classifying, or summarizing structured and unstructured data.
- Scriptable from Visual Basic Script for direct AI-powered snippets.
Pros: No OCR required, faster for image requests.
Cons: Consumes credits, requires internet, and images are processed on OpenAI/Azure servers.
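ChronoScan's own scripting layer is VBScript, but the shape of a direct image request is easy to illustrate. Below is a minimal Python sketch (not ChronoScan's API; the prompt, model name, and image bytes are placeholders) that assembles a GPT-4o chat-completions payload with an embedded document image. The payload is only built locally, nothing is sent:

```python
import base64
import json

def build_vision_request(image_bytes, question, model="gpt-4o"):
    """Assemble an OpenAI chat-completions payload that sends a
    document image directly to the model, skipping OCR entirely."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{encoded}"},
                    },
                ],
            }
        ],
    }

# Example: ask the model to extract invoice fields from a scanned page.
payload = build_vision_request(b"\x89PNG...", "Extract the invoice number and total as JSON.")
print(json.dumps(payload)[:80])
```

In production the payload would be POSTed to the OpenAI (or Azure OpenAI) chat-completions endpoint with your API key.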

Google Gemini
- Works with Gemini Advanced on Google Cloud.
- Enterprise-grade text and image analysis, semantic understanding, and information extraction.
- Multilingual support, entity recognition, and data classification.
- Secure and scalable with Google’s cloud infrastructure.
- Handles images, tables, and formatted text natively.
Pros: No OCR required, faster for image requests.
Cons: Consumes credits, requires internet, and images are processed on Google Cloud servers.
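The same no-OCR pattern applies to Gemini. The sketch below (Python for illustration; the instruction text and MIME type are assumptions) assembles a generateContent-style payload that pairs a text instruction with an inline document image. Again, the request is only built, not sent:

```python
import base64

def build_gemini_request(image_bytes, instruction):
    """Assemble a Gemini generateContent-style payload combining a text
    instruction with an inline document image (no separate OCR step)."""
    return {
        "contents": [
            {
                "parts": [
                    {"text": instruction},
                    {
                        "inline_data": {
                            "mime_type": "image/png",
                            "data": base64.b64encode(image_bytes).decode("ascii"),
                        }
                    },
                ]
            }
        ]
    }

payload = build_gemini_request(b"\x89PNG...", "Classify this document and list key entities.")
print(payload["contents"][0]["parts"][0]["text"])
```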

Local and Open-Source LLMs
- Supports Llama (Meta), Mistral, Gemma, and more via Ollama or similar platforms.
- Run locally or on your private server for full data confidentiality and control.
- Customizable for business-specific needs—ideal for enterprises with strict privacy or compliance requirements.
- Best for small or experimental projects due to current hardware requirements (GPU recommended).
- Low-cost way to experiment and learn about LLM technology in-house.
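To make the local option concrete, here is a hedged Python sketch targeting Ollama's default local endpoint (the model name and prompt are assumptions; actually sending the request would require a running Ollama instance, so the request body is only assembled here):

```python
import json

# Ollama's default local endpoint; no data ever leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_local_request(prompt, model="llama3"):
    """Assemble a request body for a locally hosted model served by
    Ollama. stream=False asks for a single complete response."""
    return {"model": model, "prompt": prompt, "stream": False}

body = build_local_request("Summarize this invoice in one sentence.")
print(json.dumps(body))

# Sending it (requires a running Ollama instance):
# import urllib.request
# req = urllib.request.Request(OLLAMA_URL, data=json.dumps(body).encode("utf-8"),
#                              headers={"Content-Type": "application/json"})
```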
Technical Considerations:
Each LLM has a “context window” (the maximum number of tokens that can be processed at once), e.g. 4K, 16K, 32K, 128K...
More tokens enable richer analysis but require higher compute resources.
Always check your hardware/API limits before running large-scale jobs.
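A quick pre-flight check helps with those limits. The sketch below uses the common rough heuristic of ~4 characters per English-text token (an approximation, not an exact tokenizer) to test whether a document fits a given context window:

```python
def fits_context(text, context_window_tokens, reserved_for_reply=1024):
    """Rough pre-flight check: estimate the prompt's token count with the
    ~4-characters-per-token heuristic and compare it against the model's
    context window, keeping room for the reply."""
    estimated_tokens = len(text) // 4
    return estimated_tokens + reserved_for_reply <= context_window_tokens

# A 40,000-character document against a 16K-token window:
print(fits_context("x" * 40_000, 16_384))  # 10,000 + 1,024 <= 16,384 → True
```

For exact counts, use the tokenizer that matches your chosen model; heuristics like this are only for a first sanity check before a large-scale job.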
Key Benefits
- LLMs drastically reduce the manual effort required for data capture from complex and unstructured documents.
- Achieve advanced content classification, entity extraction, and summarization out of the box.
- Choose between cloud-powered AI (maximum performance/context) or local models (total data privacy and control).