
Generating a MySQL to PostgreSQL full synchronization script

claude3.5 sonnet 20241022(Anthropic)

### Synchronization Overview

This document outlines a full data synchronization script to transfer data from a MySQL database to a PostgreSQL database for the `users` table. The synchronization will involve copying all records from the source table to the target table.

### Synchronization Script

```sql
-- Step 1: Create the target table in PostgreSQL if it doesn't exist
CREATE TABLE IF NOT EXISTS users (
    id BIGINT PRIMARY KEY,
    username VARCHAR(255),
    created_at TIMESTAMP
);

-- Step 2: Insert data from MySQL to PostgreSQL
INSERT INTO users (id, username, created_at)
SELECT id, username, created_at
FROM dblink('host=source_host dbname=source_db user=source_user password=source_password',
            'SELECT id, username, created_at FROM users')
     AS t(id BIGINT, username VARCHAR(255), created_at TIMESTAMP);
```

### Field Mapping Explanation

- **MySQL `BIGINT`** maps to **PostgreSQL `BIGINT`**.
- **MySQL `VARCHAR(255)`** maps to **PostgreSQL `VARCHAR(255)`**.
- **MySQL `DATETIME`** maps to **PostgreSQL `TIMESTAMP`**.

### Notes/Warnings

1. Ensure that the PostgreSQL `dblink` extension is enabled to allow connections to the MySQL database.
2. Replace `source_host`, `source_db`, `source_user`, and `source_password` with actual MySQL connection details.
3. This script assumes that there are no existing records in the PostgreSQL `users` table. If there are existing records, consider using `INSERT ... ON CONFLICT` to handle duplicates based on the primary key.
4. Test the script in a development environment before executing it in production to avoid data loss.
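A caveat on note 1 above: `dblink` speaks only the PostgreSQL wire protocol, so it cannot connect to a MySQL server directly. Reading MySQL from inside PostgreSQL is usually done through a foreign data wrapper such as `mysql_fdw`. The block below is a minimal, untested sketch of that route, reusing the placeholder credentials from the script above; the server name `mysql_src` and foreign table name `mysql_users` are arbitrary, and `ON CONFLICT` is added so re-runs do not fail on the primary key.

```sql
-- Sketch: read the MySQL users table through mysql_fdw instead of dblink.
CREATE EXTENSION IF NOT EXISTS mysql_fdw;

CREATE SERVER mysql_src
    FOREIGN DATA WRAPPER mysql_fdw
    OPTIONS (host 'source_host', port '3306');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER mysql_src
    OPTIONS (username 'source_user', password 'source_password');

CREATE FOREIGN TABLE mysql_users (
    id         BIGINT,
    username   VARCHAR(255),
    created_at TIMESTAMP
) SERVER mysql_src OPTIONS (dbname 'source_db', table_name 'users');

-- Full copy into the local table; skip rows whose id already exists.
INSERT INTO users (id, username, created_at)
SELECT id, username, created_at
FROM mysql_users
ON CONFLICT (id) DO NOTHING;
```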

通义千问2.5-Coder-7B(阿里云百炼)

# Synchronization Overview

This synchronization script is designed to perform a full data copy from a MySQL source database to a PostgreSQL target database. The script will handle the copying of the `users` table, ensuring that all fields and their types are appropriately mapped and compatible between the two databases.

# Synchronization Script

```sql
-- Create target table if it does not exist
CREATE TABLE IF NOT EXISTS users (
    id BIGINT PRIMARY KEY,
    username VARCHAR(255),
    created_at TIMESTAMP
);

-- Full data copy from MySQL to PostgreSQL
INSERT INTO users (id, username, created_at)
SELECT id, username, created_at
FROM mysql_source.users;
```

# Field Mapping Explanation

- **id**: Both MySQL and PostgreSQL use `BIGINT` for this field.
- **username**: Both databases support `VARCHAR(255)` for usernames.
- **created_at**: MySQL uses `DATETIME`, while PostgreSQL uses `TIMESTAMP`. The script automatically handles the conversion during the insert operation.

# Notes/Warnings

1. Ensure that the `mysql_source` schema exists in your PostgreSQL database and contains the `users` table.
2. Verify that the user running the script has the necessary permissions to read from the MySQL source and write to the PostgreSQL target.
3. This script assumes that there are no foreign key constraints or other dependencies that need to be handled separately.
4. For large datasets, consider using batch processing or pagination to avoid overwhelming the system resources.

### Additional Options

If you require additional options such as specific data transformations or handling of certain edge cases, please provide them in the `additionalOptions` object.
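Note 4 above suggests batch processing for large datasets. One way to do that entirely in SQL is keyset-paginated batching. The block below is a rough sketch, not part of the original answer: it assumes `mysql_source.users` is readable from PostgreSQL (for example as a foreign table), that `id` is an increasing primary key, that the target table starts empty, and that the batch size of 10,000 is only a placeholder.

```sql
-- Sketch: copy mysql_source.users into users in keyset-paginated batches.
DO $$
DECLARE
    last_id     BIGINT := 0;
    batch_size  CONSTANT INT := 10000;  -- placeholder batch size
    rows_copied BIGINT;
BEGIN
    LOOP
        INSERT INTO users (id, username, created_at)
        SELECT s.id, s.username, s.created_at
        FROM mysql_source.users AS s
        WHERE s.id > last_id
        ORDER BY s.id
        LIMIT batch_size;

        GET DIAGNOSTICS rows_copied = ROW_COUNT;
        EXIT WHEN rows_copied = 0;  -- nothing left to copy

        -- Advance the keyset cursor; valid because the target started empty
        -- and ids only grow.
        SELECT max(id) INTO last_id FROM users;
    END LOOP;
END $$;
```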

gpt-4.1(OpenAI)

# MySQL → PostgreSQL Full Data Synchronization Plan

## Synchronization Overview

This plan implements a full, structured synchronization of the `users` table from MySQL to PostgreSQL, suitable for initial data migration, periodic full refreshes, and similar scenarios. The mode is a full data copy that completely overwrites the target table; to keep the data consistent, make sure the target table is empty or properly backed up before synchronizing.

---

## Synchronization Script

The script below uses the common "export first, then import" approach, which suits small to medium-sized tables, has good compatibility, and keeps the logic clear. It works in two steps:

### Step 1. Export the data from MySQL as CSV

Assuming you are working on the MySQL server:

```bash
mysqldump -h [MYSQL_HOST] -u [USER] -p[PASSWORD] --fields-terminated-by=',' --fields-optionally-enclosed-by='"' \
  --skip-triggers --no-create-info --compact --compatible=postgresql \
  --tables [DATABASE] users > users_data.csv
```

Or use SELECT ... INTO OUTFILE (if you have the privilege):

```sql
SELECT id, username, DATE_FORMAT(created_at, '%Y-%m-%d %H:%i:%s') AS created_at
INTO OUTFILE '/tmp/users_data.csv'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM users;
```

### Step 2. Import the CSV into PostgreSQL

Assuming you are working on the PostgreSQL server and the `users` table structure is already aligned with MySQL (field names and types must be adapted; see below):

```sql
-- (1) Truncate the target table to achieve a full overwrite
TRUNCATE TABLE users;

-- (2) Import the data (users_data.csv must first be transferred to the corresponding path on the server)
COPY users(id, username, created_at)
FROM '/path/to/users_data.csv'
WITH (FORMAT csv, HEADER false, DELIMITER ',', QUOTE '"', NULL '');
```

---

## Field Mapping Explanation

| MySQL field | Type | PostgreSQL field | Type | Notes |
| ----------- | ------------ | ---------------- | ------------ | -------------------------------------------- |
| id | BIGINT | id | BIGINT | Direct mapping, no conversion needed |
| username | VARCHAR(255) | username | VARCHAR(255) | Direct mapping, no conversion needed |
| created_at | DATETIME | created_at | TIMESTAMP | TIMESTAMP is the recommended PostgreSQL type |

> Note: make sure the field types of the target PostgreSQL table are compatible with MySQL. For DATETIME → TIMESTAMP, an explicit conversion of the string format is recommended.

---

## Notes/Warnings

- If the `users` table already contains data, TRUNCATE TABLE wipes all of it; confirm that overwriting is acceptable before running.
- Use UTF-8 for both the exported and imported files to avoid garbled characters.
- Keep the MySQL DATETIME output format consistent with PostgreSQL TIMESTAMP: "YYYY-MM-DD HH:MI:SS".
- Check access permissions on the import file path; PostgreSQL's COPY statement requires the file to reside on the database server and be readable.
- If the user table carries special constraints such as primary keys or auto-increment columns (AUTO_INCREMENT/SERIAL), keep the structures and sequences on both sides in sync.
- For large tables, consider exporting/importing in batches, or use a dedicated synchronization tool (such as DataX, DTS, or pgloader) to avoid memory pressure.
- For automated, unattended migration, embed the script in a scheduler or operations workflow.

---

If you need an incremental or real-time (CDC) synchronization script, please provide the additional requirement parameters.
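The mapping note above recommends an explicit conversion for `DATETIME` → `TIMESTAMP`. One common way to make the casts explicit (and to tolerate empty fields in the exported CSV) is to load into a text staging table first and cast on insert. The following is only a sketch of that variant, reusing the placeholder path from Step 2; it is not a drop-in replacement for the COPY command above.

```sql
-- Sketch: stage the CSV as text, then cast explicitly into the target table.
CREATE TEMP TABLE users_stage (
    id         TEXT,
    username   TEXT,
    created_at TEXT
);

COPY users_stage (id, username, created_at)
FROM '/path/to/users_data.csv'
WITH (FORMAT csv, HEADER false, DELIMITER ',', QUOTE '"', NULL '');

INSERT INTO users (id, username, created_at)
SELECT id::BIGINT,
       username,
       NULLIF(created_at, '')::TIMESTAMP   -- explicit DATETIME -> TIMESTAMP cast
FROM users_stage;
```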

DeepSeek V3(Deepseek)

# MySQL to PostgreSQL Data Synchronization Plan

## Synchronization Overview

- **Synchronization type**: full synchronization (full data copy)
- **Source database**: MySQL
- **Target database**: PostgreSQL
- **Table to synchronize**: users
- **Strategy**: one-off full-table data copy

## Synchronization Script

### MySQL data export script (SQL)

```sql
-- Run on the MySQL side: full data query
SELECT id, username, created_at FROM users;
```

### PostgreSQL data import script (SQL)

```sql
-- Run on the PostgreSQL side: truncate the target table (optional)
TRUNCATE TABLE users;

-- Run on the PostgreSQL side: insert the full data set
INSERT INTO users (
    id, username, created_at
) VALUES (%s, %s, %s);
```

### Python implementation script (recommended)

```python
import mysql.connector
import psycopg2
from datetime import datetime

# Database connection settings
mysql_config = {
    'host': 'mysql_host',
    'user': 'username',
    'password': 'password',
    'database': 'db_name'
}

pg_config = {
    'host': 'pg_host',
    'user': 'username',
    'password': 'password',
    'database': 'db_name'
}

# Open connections
mysql_conn = mysql.connector.connect(**mysql_config)
pg_conn = psycopg2.connect(**pg_config)

# Run the synchronization
try:
    # Read data from MySQL
    mysql_cursor = mysql_conn.cursor(dictionary=True)
    mysql_cursor.execute("SELECT id, username, created_at FROM users")

    # Write to PostgreSQL
    pg_cursor = pg_conn.cursor()
    pg_cursor.execute("TRUNCATE TABLE users")  # clear the target table

    for row in mysql_cursor:
        pg_cursor.execute(
            "INSERT INTO users (id, username, created_at) VALUES (%s, %s, %s)",
            (row['id'], row['username'], row['created_at'])
        )

    pg_conn.commit()
    print("Synchronization finished, rows affected:", mysql_cursor.rowcount)
finally:
    mysql_cursor.close()
    pg_cursor.close()
    mysql_conn.close()
    pg_conn.close()
```

## Field Mapping Explanation

| MySQL field | PostgreSQL field | Type conversion notes |
|------------|------------------|-----------------------|
| id | id | BIGINT -> BIGINT |
| username | username | VARCHAR(255) -> TEXT (PostgreSQL handles this automatically) |
| created_at | created_at | DATETIME -> TIMESTAMP |

## Notes/Warnings

1. **Performance suggestions**:
   - For large data volumes, add batching logic (insert 1,000 rows at a time)
   - For tables over 1 GB, consider using `COPY` instead of INSERT
2. **Type compatibility**:
   - DATETIME values are converted to TIMESTAMP automatically in PostgreSQL
   - The VARCHAR length limit is not strictly enforced in PostgreSQL
3. **Caveats**:
   - Before running, make sure the target table exists with the correct structure
   - A full synchronization overwrites the existing data in the target table
   - Run large-volume synchronizations during off-peak hours
4. **Extensions**:
   - For incremental synchronization, add a WHERE condition: `WHERE created_at > '<last sync time>'`
   - Consider adding exception handling and a retry mechanism
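The extension note above mentions an incremental variant driven by a `WHERE` filter. Expressed in SQL, it might look like the sketch below; this is only an illustration, assuming the MySQL rows are reachable from PostgreSQL (for example through a `mysql_source.users` foreign table as in the earlier answers) and with `'2024-01-01 00:00:00'` standing in for the recorded last sync time.

```sql
-- Sketch: incremental upsert of rows created since the last sync.
INSERT INTO users (id, username, created_at)
SELECT id, username, created_at
FROM mysql_source.users
WHERE created_at > '2024-01-01 00:00:00'   -- placeholder for the last sync time
ON CONFLICT (id) DO UPDATE
SET username   = EXCLUDED.username,
    created_at = EXCLUDED.created_at;
```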

| Model | Provider | Total response time (s) | Generated tokens | First-token latency (s) | Generation rate (tokens/s) |
|---|---|---|---|---|---|
| claude3.5 sonnet 20241022 | Anthropic | 8.36 (fastest response) | 0 | 1.41 | 0 |
| 通义千问2.5-Coder-7B | 阿里云百炼 | 9.96 | 337 | 0.5 (lowest latency) | 33.85 |
| gpt-4.1 | OpenAI | 16.2 | 942 (most content) | 1.75 | 58.16 (fastest speed) |
| DeepSeek V3 | Deepseek | 40.59 | 751 | 5.74 | 18.5 |