CSV to SQL Converter (INSERT Statements)
Loading a CSV into a SQL database is one of those tasks that should take ten seconds and usually takes thirty minutes. Each database has its own import command, the column types need to be declared, identifier names need escaping, and one stray apostrophe in a row breaks the whole load. This tool sidesteps all of it: drop the CSV in, pick the dialect, get back a fully escaped CREATE TABLE plus one INSERT per row (or batched) that runs as-is in your SQL client.
How Column Types Are Inferred
Every column is scanned cell-by-cell and the widest observed type wins. A column whose values are all integers becomes INTEGER (or INT in MySQL). Mix in any floats and the column promotes to DOUBLE PRECISION (Postgres) or DOUBLE (MySQL). Any non-numeric value anywhere in the column promotes it to TEXT. A column becomes BOOLEAN only when every value is exactly true or false, matched case-insensitively. The rationale: a column with 999 integers and one stray asterisk is almost certainly a text column with a single dirty value, not a numeric column with a parse error. Making the column TEXT preserves the data without losing the asterisk.
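To make the widening rule concrete, here is a sketch of what the tool might infer for a small file (the filename, table name, and columns below are invented for this illustration, and the PostgreSQL dialect is assumed; the tool's exact output formatting may differ):

```sql
-- products.csv (hypothetical input):
--   sku,price,in_stock,qty
--   1001,9.99,true,5
--   1002,12,FALSE,*
--
-- sku      → integers only              → INTEGER
-- price    → integers plus one float    → DOUBLE PRECISION
-- in_stock → true/false only            → BOOLEAN
-- qty      → integers plus one asterisk → TEXT
CREATE TABLE products (
  "sku" INTEGER,
  "price" DOUBLE PRECISION,
  "in_stock" BOOLEAN,
  "qty" TEXT
);
```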
Why This Beats Hand-Writing the INSERTs
The two parts of writing INSERTs by hand that always go wrong are quoting and identifier escaping. Quoting: a single apostrophe inside any value breaks SQL syntax unless doubled, and copy-pasted data routinely has apostrophes in names, addresses, and free-text columns. This tool doubles every apostrophe automatically. Identifier escaping: column headers from real-world CSVs are full of spaces, hyphens, and Unicode characters that need either double-quotes (Postgres, SQLite) or backticks (MySQL). The tool sanitizes header names to safe snake_case identifiers and emits the correct quoting for the chosen dialect, so the script runs without manual edits.
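As an illustration of both fixes at once, here is what a single row might look like after conversion (the table, headers, and value are made up for this example):

```sql
-- Hypothetical header row:  Order ID,Customer Name
-- Sanitized identifiers:    order_id, customer_name
-- The apostrophe in O'Brien is doubled inside the string literal.
INSERT INTO orders ("order_id", "customer_name")
VALUES (1001, 'O''Brien');

-- The same row in MySQL, where identifiers take backticks instead:
INSERT INTO orders (`order_id`, `customer_name`)
VALUES (1001, 'O''Brien');
```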
Dialect Differences That Matter
PostgreSQL uses BOOLEAN for booleans and double-quoted identifiers ("my column"). MySQL uses TINYINT(1) for booleans and backtick-quoted identifiers (`my column`). SQLite has no native boolean type, so it uses INTEGER, and it accepts either style of identifier quoting. The tool emits the right combination for whichever dialect you pick, plus dialect-specific options: Postgres can append ON CONFLICT DO NOTHING to each INSERT, and MySQL can use INSERT IGNORE. Batched-insert mode emits a single statement with many VALUES tuples, which is significantly faster than one INSERT per row for large imports.
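Side by side, the differences look like this (the one-column table is invented for the comparison):

```sql
-- PostgreSQL: BOOLEAN, double-quoted identifiers, optional conflict clause
CREATE TABLE flags ("is_active" BOOLEAN);
INSERT INTO flags ("is_active") VALUES (TRUE) ON CONFLICT DO NOTHING;

-- MySQL: TINYINT(1), backticks, optional INSERT IGNORE
CREATE TABLE flags (`is_active` TINYINT(1));
INSERT IGNORE INTO flags (`is_active`) VALUES (1);

-- SQLite: INTEGER stands in for boolean; either quoting style parses
CREATE TABLE flags ("is_active" INTEGER);
INSERT INTO flags ("is_active") VALUES (1);

-- Batched mode: one statement, many VALUES tuples
INSERT INTO flags ("is_active") VALUES (1), (0), (1);
```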
Use Cases and Reasonable Limits
Common cases: bulk-loading a one-time data dump into a development database, generating a seed file for a new project, exporting a Google Sheet to a SQL backup, or shipping reference data alongside a schema migration. The tool handles tens of thousands of rows comfortably in-browser. For multi-million-row imports, generate the script here for the schema, then use the database's native bulk loader (COPY in Postgres, LOAD DATA INFILE in MySQL) on the raw CSV instead; at that scale the native loaders are an order of magnitude faster than statement-by-statement INSERTs.
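For reference, the native loaders mentioned above look roughly like this (table and file names are placeholders; the MySQL variant needs local_infile enabled on both client and server):

```sql
-- PostgreSQL: psql's \copy streams a client-side file through COPY
\copy my_table FROM 'data.csv' WITH (FORMAT csv, HEADER true)

-- MySQL
LOAD DATA LOCAL INFILE 'data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
IGNORE 1 LINES;
```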
Common neighbors in this workflow: CSV Cleaner to dedupe and normalize before converting, CSV Viewer to verify the shape, JSON ↔ CSV Converter when the source is JSON instead of CSV, and Mock Data Generator to build a fresh CSV for testing the import.
Built by Derek Giordano · Part of Ultimate Design Tools