
CSV to SQL Converter (INSERT Statements)

Drop a CSV and walk away with a runnable INSERT script

Loading a CSV into a SQL database is one of those tasks that should take ten seconds and usually takes thirty minutes. Each database has its own import command, the column types need to be declared, identifier names need escaping, and one stray apostrophe in a row breaks the whole load. This tool sidesteps all of it: drop the CSV in, pick the dialect, get back a fully escaped CREATE TABLE plus one INSERT per row (or batched) that runs as-is in your SQL client.

How Column Types Are Inferred

Every column is scanned cell-by-cell and the widest observed type wins. A column whose values are all integers becomes INTEGER (or INT in MySQL). Mix in any floats and the column promotes to DOUBLE PRECISION or DOUBLE. Any non-numeric value anywhere in the column promotes it to TEXT. Boolean detection requires every value to be exactly true or false, matched case-insensitively. The rationale: a column with 999 integers and one stray asterisk is almost certainly a text column with a single dirty value, not a numeric column with a parse error, so making the column TEXT preserves the data without losing the asterisk.
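
To make this concrete, here is a hypothetical four-column CSV and the PostgreSQL schema the scan would infer for it (the table name products and the exact formatting are illustrative, not literal tool output):

    id,price,in_stock,name
    1,9.99,true,Widget
    2,12,false,Gizmo
    3,7.5,true,Sprocket

    -- id is all integers; price mixes integers and floats; in_stock is all
    -- true/false; name is non-numeric throughout, so it lands on TEXT.
    CREATE TABLE "products" (
        "id" INTEGER,
        "price" DOUBLE PRECISION,
        "in_stock" BOOLEAN,
        "name" TEXT
    );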

Why This Beats Hand-Writing the INSERTs

The two parts of writing INSERTs by hand that always go wrong are quoting and identifier escaping. Quoting: a single apostrophe inside any value breaks SQL syntax unless doubled, and copy-pasted data routinely has apostrophes in names, addresses, and free-text columns. This tool doubles every apostrophe automatically. Identifier escaping: column headers from real-world CSVs are full of spaces, hyphens, and Unicode characters that need either double-quotes (Postgres, SQLite) or backticks (MySQL). The tool sanitizes header names to safe snake_case identifiers and emits the correct quoting for the chosen dialect, so the script runs without manual edits.
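
As a hypothetical example (not literal tool output), a header of Customer Name and a value of O'Brien would come through in PostgreSQL form as:

    -- "Customer Name" sanitized to customer_name; the apostrophe is doubled
    INSERT INTO "orders" ("order_id", "customer_name")
    VALUES (1042, 'O''Brien');

The MySQL version is identical except that the identifiers are wrapped in backticks instead of double-quotes.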

Dialect Differences That Matter

PostgreSQL uses BOOLEAN for booleans and double-quoted identifiers ("my column"). MySQL uses TINYINT(1) for booleans and backtick-quoted identifiers (`my column`). SQLite uses INTEGER for booleans (no native bool type) and accepts either style of identifier quoting. The tool emits the right combination for whichever dialect you pick, plus dialect-specific options such as Postgres's ON CONFLICT DO NOTHING appended to each INSERT or MySQL's INSERT IGNORE. Batched-insert mode emits a single statement with many VALUES tuples, which is significantly faster than one INSERT per row for large imports.
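
Side by side, a single row with a boolean column might come out like this in each dialect (illustrative; the tool's exact formatting may differ):

    -- PostgreSQL: BOOLEAN literal, optional conflict clause
    INSERT INTO "users" ("id", "active") VALUES (1, TRUE)
    ON CONFLICT DO NOTHING;

    -- MySQL: TINYINT(1) column, so 1/0 literals, optional IGNORE
    INSERT IGNORE INTO `users` (`id`, `active`) VALUES (1, 1);

    -- SQLite: INTEGER stands in for the boolean
    INSERT INTO "users" ("id", "active") VALUES (1, 1);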

Use Cases and Reasonable Limits

Common cases: bulk-loading a one-time data dump into a development database, generating a seed file for a new project, exporting a Google Sheet to a SQL backup, or shipping reference data alongside a schema migration. The tool handles tens of thousands of rows comfortably in-browser. For multi-million-row imports, generate the script here for the schema, then use the database's native bulk-load (COPY in Postgres, LOAD DATA INFILE in MySQL) on the raw CSV instead — you will hit the database's native loader speed, which is an order of magnitude faster than statement-by-statement INSERTs at that scale.
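
For reference, the native loaders look like this (the table name big_table and the file paths are placeholders):

    -- PostgreSQL: server-side COPY; use \copy from psql for a client-side file
    COPY big_table FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER true);

    -- MySQL: add LOCAL after DATA to read the file from the client machine
    LOAD DATA INFILE '/path/to/data.csv'
    INTO TABLE big_table
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    IGNORE 1 LINES;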

Common neighbors in this workflow: CSV Cleaner to dedupe and normalize before converting, CSV Viewer to verify the shape, JSON ↔ CSV Converter when the source is JSON instead of CSV, and Mock Data Generator to build a fresh CSV for testing the import.

Frequently Asked Questions

Does the tool upload my CSV to a server?
No. The CSV is parsed locally by PapaParse 5.4 in your browser, the SQL is generated in memory, and the result is offered as a download or copy-to-clipboard. You can verify in the Network tab that no upload occurs while the tool runs.
Which SQL dialects are supported?
PostgreSQL, MySQL, and SQLite. The differences between them — boolean type, identifier quoting, optional ON CONFLICT clause — are handled automatically. ANSI SQL is also available as a plain-Postgres-style baseline if your target is a less-common database that follows the standard.
How are column types decided?
Each column is scanned cell-by-cell and the widest observed type wins. All-integer column → INTEGER. Mix in any float → floating-point. Any non-numeric value anywhere → TEXT. Exact case-insensitive true/false values give BOOLEAN. The default favors TEXT for ambiguous cases, since a single bad row should not corrupt the load.
Are special characters in the data escaped properly?
Yes. Single quotes in string values are doubled per ANSI SQL. Newlines are preserved literally inside the quoted strings, which all three dialects accept. Backslashes are also emitted literally; Postgres and SQLite take them as-is, but MySQL treats backslashes as escape characters by default, so enable the NO_BACKSLASH_ESCAPES SQL mode if your data contains them. Non-ASCII characters are emitted as-is in UTF-8; make sure your database connection is UTF-8 so they survive the load.
Can I override the output table name?
Yes. The default table name is derived from the CSV filename, sanitized to a safe identifier. You can override it in the options. The identifier is automatically quoted in the dialect-correct way (double-quotes for Postgres/SQLite, backticks for MySQL) so even names with spaces or reserved words like order work.
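As an illustration, even a table named after the reserved word order stays runnable once quoted:
    CREATE TABLE "order" ("id" INTEGER, "total" DOUBLE PRECISION);  -- PostgreSQL/SQLite
    CREATE TABLE `order` (`id` INT, `total` DOUBLE);                -- MySQL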
What about NULL values?
Empty cells in the CSV become the literal NULL in the SQL output, not the string 'NULL'. If your CSV uses placeholders like NA or N/A for null, run it through CSV Cleaner first to normalize those markers, then convert here.
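For example, a hypothetical two-row CSV with empty cells:
    id,name,score
    1,Alice,
    2,,87
converts to:
    INSERT INTO "t" ("id", "name", "score") VALUES (1, 'Alice', NULL);
    INSERT INTO "t" ("id", "name", "score") VALUES (2, NULL, 87);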
What is the practical CSV size ceiling?
There is no hard limit because the work runs in your browser. Files with tens of thousands of rows work well. For multi-million-row loads, use the tool to generate the CREATE TABLE statement, then use your database's native bulk-load command on the CSV directly — statement-by-statement INSERTs are slow at that scale regardless of how they were generated.
Can I get batched inserts instead of one per row?
Yes. The batch mode emits a single INSERT with many VALUES tuples instead of one INSERT per row. This is dramatically faster on large imports because the database parses one statement instead of thousands. The default batch size is 500 rows per statement, which is a safe ceiling for most query-length limits.
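A hypothetical batch of three rows (far below the 500-row default, for illustration):
    INSERT INTO "users" ("id", "name") VALUES
        (1, 'Ada'),
        (2, 'Grace'),
        (3, 'Edsger');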

Built by Derek Giordano · Part of Ultimate Design Tools
