From Draft to Polished Prose: The In-Browser AI Workflow
The AI Suite is built around a simple workflow: take a draft, transform it, then quality-check it. Start in AI Summarizer if you are condensing a long piece into a tight overview, or skip straight to AI Paraphraser if the goal is a rewrite at the same length — different tone, simpler vocabulary, or more detail. The Paraphraser is instruction-tuned, so its Custom mode accepts natural prompts like "Rewrite for a 10-year-old reader" or "Convert to bulleted list." Both tools run small models entirely in your browser; nothing you paste leaves your machine.
After the transformation step, run the result through AI Grammar Checker for sentence-level fixes — typos, agreement, missing articles, doubled words. The grammar model is conservative by design, so it rarely introduces new errors. For style work that goes beyond grammar (sentence length, weak verbs, jargon), the Paraphraser with a Custom instruction is usually a better fit than the Grammar Checker. Pair the grammar check with the Readability Score tool from the SEO suite to find the rougher sections of long documents.
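The readability half of that pairing is easy to sketch. The function below computes a Flesch Reading Ease score, one common readability metric; the exact formula and syllable heuristic the Readability Score tool uses are assumptions here, not confirmed details:

```typescript
// Sketch of a Flesch Reading Ease calculation. The syllable counter is a
// rough heuristic (vowel groups, minus a trailing silent "e").

function countSyllables(word: string): number {
  const w = word.toLowerCase().replace(/[^a-z]/g, "");
  if (w.length <= 3) return 1;
  const groups = w.replace(/e$/, "").match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 1);
}

function fleschReadingEase(text: string): number {
  const sentences = text.split(/[.!?]+/).filter(s => s.trim().length > 0);
  const words = text.split(/\s+/).filter(w => /[a-zA-Z]/.test(w));
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
  const wordsPerSentence = words.length / sentences.length;
  const syllablesPerWord = syllables / words.length;
  // Higher score = easier reading; 60-70 is roughly plain English.
  return 206.835 - 1.015 * wordsPerSentence - 84.6 * syllablesPerWord;
}
```

Long sentences and polysyllabic words drag the score down, which is exactly what flags the rougher sections of a draft.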
For multilingual work, AI Translator covers 10 languages through 20 Helsinki-NLP opus-mt models — one per direction. Each pair downloads on demand (about 35 MB), and only the pairs you actually pick. The translator also serves as a quality sanity check: round-tripping a paragraph from English to a target language and back surfaces phrasing that does not survive translation, which often correlates with phrasing that confuses readers in the original.
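Because opus-mt models are published per direction, a round trip needs two downloads. A minimal sketch of the model selection, assuming the standard Helsinki-NLP naming convention on the Hugging Face Hub:

```typescript
// opus-mt models follow the pattern "Helsinki-NLP/opus-mt-{src}-{tgt}",
// one model per translation direction.
function opusMtModelId(src: string, tgt: string): string {
  return `Helsinki-NLP/opus-mt-${src}-${tgt}`;
}

// A round trip (src -> tgt -> src) requires both directions.
function roundTripModels(src: string, tgt: string): [string, string] {
  return [opusMtModelId(src, tgt), opusMtModelId(tgt, src)];
}
```

For an English-to-German round trip this yields `Helsinki-NLP/opus-mt-en-de` and `Helsinki-NLP/opus-mt-de-en`, roughly 70 MB for the pair at ~35 MB each.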
The three reference tools — AI Prompt Template Builder, LLM Comparison Table, and System Prompt Library — cover the hosted-model side of the workflow. They do not run inference themselves. The Template Builder parameterizes the prompts you reuse with ChatGPT, Claude, or Gemini; the Comparison Table helps you pick the right model for a job; the System Prompt Library gives you tested role prompts. Together they make working with paid model APIs more efficient.
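Template parameterization itself is simple. A minimal sketch, assuming `{{name}}`-style placeholders (the Template Builder's actual syntax may differ):

```typescript
// Fill {{key}} placeholders from a variable map; unknown placeholders
// are left intact so missing values are visible in the output.
function fillTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, key: string) =>
    key in vars ? vars[key] : match
  );
}
```

For example, `fillTemplate("Summarize {{doc}} for a {{audience}} audience.", { doc: "the Q3 report", audience: "technical" })` produces `"Summarize the Q3 report for a technical audience."`, ready to paste into any hosted chat.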
The Image AI group extends the same browser-only, no-upload philosophy to image work. AI Image Upscaler ships two Swin2SR variants — classical 2x for clean source material like line art and screenshots, real-world 4x for compressed photos with noise. AI Background Remover v2 uses BiRefNet lite, a general-purpose dichotomous image segmentation model that handles products, pets, and objects in addition to portraits — a substantial improvement over the older MediaPipe-based tool, which is still available for fast portrait work. AI Image to Prompt uses BLIP captioning to generate a caption, a Stable Diffusion-style prompt, or a conditional caption from a user-supplied prefix. AI Depth Estimator uses the smallest Depth Anything V2 variant (27 MB, Apache 2.0; the larger Base/Large/Giant variants are non-commercial and not used) to produce grayscale depth maps useful for masking, parallax, and displacement work.

The suite covers both directions: when you need privacy and zero API cost, the eight model-powered tools run locally; when you need top-end quality, the reference tools make hosted model usage cleaner.
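The depth maps are plain grayscale images, so the post-processing step is straightforward to sketch. The normalization below assumes the model returns relative depths as a flat array, which is an assumption about the output format rather than a confirmed detail:

```typescript
// Normalize raw relative-depth values into 8-bit grayscale (0-255),
// the form a depth map needs before use as a mask or displacement map.
function depthToGrayscale(depth: number[]): number[] {
  const min = Math.min(...depth);
  const max = Math.max(...depth);
  const range = max - min || 1; // avoid division by zero on flat input
  // Largest depth value maps to white, smallest to black.
  return depth.map(d => Math.round(((d - min) / range) * 255));
}
```

Whether white means near or far depends on the model's convention; for displacement work you may need to invert the map.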
Suite FAQ
Are all 11 AI tools really free?
Yes. Every tool in the AI Suite is free with no signup, no rate limit, and no premium tier. The site is supported by ads on the surrounding pages; the tools themselves are unrestricted.
How much model data downloads on first use?
It depends on which tool. The Prompt Template Builder, LLM Comparison Table, and System Prompt Library download nothing — they are reference tools. Text tools: the Summarizer downloads about 155 MB, the Paraphraser and Grammar Checker about 80 MB each, and the Translator downloads one ~35 MB model per language pair you actually pick. Image tools: the Depth Estimator is 27 MB, the Image Upscaler is 22 MB per variant, the Background Remover is 85 MB, and the Image to Prompt is 280 MB. Everything is cached after first use.
Will my prompts and drafts get sent to a server?
No. The eight model-powered tools (Summarizer, Paraphraser, Grammar, Translator, Image Upscaler, Background Remover v2, Image to Prompt, Depth Estimator) run inference entirely in your browser using transformers.js. The reference tools save data only to your browser's local storage. Nothing in this suite uploads your content to a server, including ours.
Why not just use ChatGPT or Claude for this?
For sensitive content, internal documents, or anything you would not paste into a public chatbot, the privacy story matters. The hosted models from OpenAI and Anthropic are larger and produce more polished output, but they see every word you send. The tools here run small models locally; you trade some output quality for privacy, no API keys, no rate limits, and no per-request cost.
Which browsers support WebGPU acceleration?
Chrome and Edge on desktop fully support WebGPU on Windows, macOS, and Linux. Safari on macOS 15+ and iOS 17+ supports WebGPU on supported hardware. Firefox is rolling out support but it is behind a flag in most channels. Browsers without WebGPU fall back to WebAssembly, which works everywhere but is slower.
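The fallback decision reduces to feature detection. A sketch, where `navigator.gpu` is the real WebGPU entry point but the backend names here are illustrative:

```typescript
type Backend = "webgpu" | "wasm";

// Prefer WebGPU when the browser exposes it; otherwise fall back to
// WebAssembly. `navigator.gpu` exists only where WebGPU is enabled.
function pickBackend(nav: { gpu?: unknown } | undefined): Backend {
  return nav && "gpu" in nav && nav.gpu ? "webgpu" : "wasm";
}

// In a browser:
//   pickBackend(typeof navigator !== "undefined" ? navigator : undefined)
```

Passing the navigator object in (rather than reading the global directly) keeps the check testable outside a browser.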
Can I use these tools offline after the first load?
Mostly yes. Once a model is downloaded and cached, the inference itself works offline. You need a network connection only for the first load of each model, for the page assets (HTML, CSS, fonts), and for the React runtime. Full offline support via a service worker is on the roadmap.
What licenses are the models released under?
All eight model-powered tools use permissive licenses: Apache 2.0 for distilbart (summarizer), flan-t5 (paraphraser), grammar-synthesis (grammar), Helsinki-NLP opus-mt (translator), Swin2SR (upscaler), and Depth Anything V2 Small (depth). MIT for BiRefNet lite (background remover). BSD-3-Clause for BLIP base (image to prompt). All three license families permit commercial use; we credit the authors on each tool page anyway.
Can I use the model output in client work or commercial deliverables?
Yes. Every model in this suite is released under a permissive license that allows commercial use of the output. As with any AI tool, review and edit the output before delivering it, and disclose AI assistance where institutional policies require it. The reference tools (templates, comparison, prompt library) produce no model output — you assemble the prompts and use them with whatever model you choose.
Written by Derek Giordano · Part of Ultimate Design Tools