Relational Database Speed Comparison (Performance Benchmarking) with a Python Implementation

https://www.sqlite.org/speed.html compares the speed of SQLite, MySQL, and PostgreSQL. The database versions used there are quite old, but the test methodology remains instructive.

Summary

We ran a series of tests to measure the relative performance of SQLite 2.7.6, PostgreSQL 7.1.3, and MySQL 3.23.41. The following are the general conclusions drawn from these experiments:

  • SQLite 2.7.6 is significantly faster (sometimes as much as 10 or 20 times faster) than the default PostgreSQL 7.1.3 installation on RedHat 7.2 for most common operations.

  • SQLite 2.7.6 is often faster (sometimes more than twice as fast) than MySQL 3.23.41 for most common operations.

  • SQLite does not execute CREATE INDEX or DROP TABLE as fast as the other databases, but this matters little, because those are uncommon operations.

  • SQLite works best when multiple operations are grouped into a single transaction.

Notes on the tests:

  • They did not attempt to measure multi-user performance or the optimization of complex queries involving multiple joins and subqueries.
  • They were run on a relatively small database (about 14 megabytes).

Test Environment

The platform used for these tests was a 1.6GHz Athlon with 1GB of memory and one IDE disk drive. The operating system was RedHat Linux 7.2 with a stock kernel.

The PostgreSQL and MySQL servers used were as delivered by default with RedHat 7.2 (PostgreSQL version 7.1.3 and MySQL version 3.23.41). Note in particular that the default MySQL configuration on RedHat 7.2 does not support transactions. Not having to support transactions gives MySQL a big speed advantage, but SQLite still comes out ahead on most of the tests.

The default PostgreSQL configuration in RedHat 7.3 is unnecessarily conservative (it is designed to work on a machine with 8MB of RAM), and PostgreSQL can be made to run a lot faster with some configuration tuning. Matt Sergeant reported that after tuning his PostgreSQL installation, his results showed PostgreSQL and MySQL running at about the same speed. SQLite was tested in the same configuration in which it appears on the website: compiled with -O6 optimization and with the -DNDEBUG=1 switch, which disables the many "assert()" statements in the SQLite code. The -DNDEBUG=1 compiler option roughly doubles the speed of SQLite.

A simple Tcl script was used to generate and run all of the tests. A copy of this Tcl script can be found in the SQLite source tree in the file tools/speedtest.tcl.

Tests

Test 1: 1000 INSERTs

CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100));
INSERT INTO t1 VALUES(1,13153,'thirteen thousand one hundred fifty three');
INSERT INTO t1 VALUES(2,75560,'seventy five thousand five hundred sixty');
... 995 lines omitted
INSERT INTO t1 VALUES(998,66289,'sixty six thousand two hundred eighty nine');
INSERT INTO t1 VALUES(999,24322,'twenty four thousand three hundred twenty two');
INSERT INTO t1 VALUES(1000,94142,'ninety four thousand one hundred forty two');
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 4.373   |
| MySQL                 | 0.114   |
| SQLite 2.7.6          | 13.061  |
| SQLite 2.7.6 (nosync) | 0.223   |

Because it has no central server to coordinate access, SQLite must close and reopen the database file, and thus invalidate its cache, for each transaction. In this test, each SQL statement is a separate transaction, so the database file has to be opened and closed and the cache flushed 1000 times. In spite of this, the asynchronous version of SQLite is still nearly as fast as MySQL. Notice, however, how much slower the synchronous version is: SQLite calls fsync() after each synchronous transaction to make sure that all data is safely on the disk surface before continuing. For most of the 13 seconds in the synchronous test, SQLite was sitting idle, waiting for disk I/O to complete.
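The "nosync" rows in these tables correspond to running SQLite with synchronous disk writes disabled. As a minimal sketch (not part of the original benchmark), the same trade-off can be reproduced from Python's sqlite3 module with the synchronous pragma; the file name pydb.db matches the test code later in this article:

import sqlite3

# isolation_level=None puts the connection in autocommit mode, so every
# statement is its own transaction, as in Test 1.
conn = sqlite3.connect('pydb.db', isolation_level=None)

# Approximate the "nosync" configuration: skip the fsync() after each
# transaction.  Much faster, but data may be lost on power failure.
conn.execute('PRAGMA synchronous=OFF')

# PRAGMA synchronous=FULL restores the durable default behavior.
conn.close()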

Test 2: 25000 INSERTs in a transaction

BEGIN;
CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100));
INSERT INTO t2 VALUES(1,59672,'fifty nine thousand six hundred seventy two');
... 24997 lines omitted
INSERT INTO t2 VALUES(24999,89569,'eighty nine thousand five hundred sixty nine');
INSERT INTO t2 VALUES(25000,94666,'ninety four thousand six hundred sixty six');
COMMIT;
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 4.900   |
| MySQL                 | 2.184   |
| SQLite 2.7.6          | 0.914   |
| SQLite 2.7.6 (nosync) | 0.757   |

When all of the INSERTs are placed within a single transaction, SQLite no longer has to close and reopen the database between statements or call fsync() until the very end, and it is then much faster than both PostgreSQL and MySQL.
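The difference between Test 1 and Test 2 comes down to how often a COMMIT happens. Below is a minimal sketch of the two styles with Python's sqlite3 module (the table name and row counts here are only illustrative):

import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE t(a INTEGER, b INTEGER)')

# Test 1 style: one transaction per statement (slow when every commit
# has to reach the disk).
for i in range(1000):
    cur.execute('INSERT INTO t VALUES(?, ?)', (i, i * 2))
    conn.commit()

# Test 2 style: one transaction around the whole batch.  The sqlite3
# module opens a transaction implicitly before the first INSERT, and a
# single commit covers all 25000 rows.
cur.executemany('INSERT INTO t VALUES(?, ?)',
                ((i, i * 2) for i in range(1000, 26000)))
conn.commit()
conn.close()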

Test 3: 25000 INSERTs into an indexed table

BEGIN;
CREATE TABLE t3(a INTEGER, b INTEGER, c VARCHAR(100));
CREATE INDEX i3 ON t3(c);
... 24998 lines omitted
INSERT INTO t3 VALUES(24999,88509,'eighty eight thousand five hundred nine');
INSERT INTO t3 VALUES(25000,84791,'eighty four thousand seven hundred ninety one');
COMMIT;
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 8.175   |
| MySQL                 | 3.197   |
| SQLite 2.7.6          | 1.555   |
| SQLite 2.7.6 (nosync) | 1.402   |

There have been reports that SQLite performs poorly on indexed tables; this test was added to refute those rumors. It is true that SQLite is not as fast as the other engines at creating new index entries (see Test 6 below), but its overall speed is still better.

Test 4: 100 SELECTs without an index

BEGIN;
SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<1000;
SELECT count(*), avg(b) FROM t2 WHERE b>=100 AND b<1100;
... 96 lines omitted
SELECT count(*), avg(b) FROM t2 WHERE b>=9800 AND b<10800;
SELECT count(*), avg(b) FROM t2 WHERE b>=9900 AND b<10900;
COMMIT;
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 3.629   |
| MySQL                 | 2.760   |
| SQLite 2.7.6          | 2.494   |
| SQLite 2.7.6 (nosync) | 2.526   |

This test runs 100 queries against a 25000-entry table without an index, so each query requires a full table scan. Earlier versions of SQLite used to be slower than PostgreSQL and MySQL on this test, but recent performance enhancements have made it the fastest of the group.

Test 5: 100 SELECTs on a string comparison

BEGIN;
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%one%';
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%two%';
... 96 lines omitted
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%ninety nine%';
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%one hundred%';
COMMIT;
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 13.409  |
| MySQL                 | 4.640   |
| SQLite 2.7.6          | 3.362   |
| SQLite 2.7.6 (nosync) | 3.372   |

This test still performs 100 full table scans, but it uses string comparisons instead of numeric comparisons. SQLite is more than three times faster than PostgreSQL here and about 30% faster than MySQL.

Test 6: Creating an index

CREATE INDEX i2a ON t2(a);
CREATE INDEX i2b ON t2(b); 
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 0.381   |
| MySQL                 | 0.318   |
| SQLite 2.7.6          | 0.777   |
| SQLite 2.7.6 (nosync) | 0.659   |

SQLite is slower at creating new indexes. This is not a major problem (since new indexes are not created very often), but it is being worked on; hopefully future versions of SQLite will do better here.

Test 7: 5000 SELECTs with an index

SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<100;
SELECT count(*), avg(b) FROM t2 WHERE b>=100 AND b<200;
SELECT count(*), avg(b) FROM t2 WHERE b>=200 AND b<300;
... 4994 lines omitted
SELECT count(*), avg(b) FROM t2 WHERE b>=499700 AND b<499800;
SELECT count(*), avg(b) FROM t2 WHERE b>=499800 AND b<499900;
SELECT count(*), avg(b) FROM t2 WHERE b>=499900 AND b<500000;
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 4.614   |
| MySQL                 | 1.270   |
| SQLite 2.7.6          | 1.121   |
| SQLite 2.7.6 (nosync) | 1.162   |

All three database engines run faster when they have an index to work with, but SQLite is still the fastest.
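For SQLite you can confirm that such queries actually use the index with EXPLAIN QUERY PLAN. Here is a minimal sketch (an in-memory database with the same schema as t2; the exact wording of the plan output varies between SQLite versions):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100))')
conn.execute('CREATE INDEX i2b ON t2(b)')

# EXPLAIN QUERY PLAN reports whether a full scan or an index search is used.
plan = conn.execute(
    'EXPLAIN QUERY PLAN SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<100'
).fetchall()
print(plan)   # the plan text mentions index i2b when the index is applied
conn.close()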

Test 8: 1000 UPDATEs without an index

BEGIN;
UPDATE t1 SET b=b*2 WHERE a>=0 AND a<10;
UPDATE t1 SET b=b*2 WHERE a>=10 AND a<20;
... 996 lines omitted
UPDATE t1 SET b=b*2 WHERE a>=9980 AND a<9990;
UPDATE t1 SET b=b*2 WHERE a>=9990 AND a<10000;
COMMIT;
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 1.739   |
| MySQL                 | 8.410   |
| SQLite 2.7.6          | 0.637   |
| SQLite 2.7.6 (nosync) | 0.638   |

On this test MySQL is consistently five to ten times slower than PostgreSQL and SQLite, which is puzzling, since MySQL is normally a very fast engine. Perhaps the problem has been fixed in later versions of MySQL.

Test 9: 25000 UPDATEs with an index

BEGIN;
UPDATE t2 SET b=468026 WHERE a=1;
UPDATE t2 SET b=121928 WHERE a=2;
... 24996 lines omitted
UPDATE t2 SET b=35065 WHERE a=24999;
UPDATE t2 SET b=347393 WHERE a=25000;
COMMIT;
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 18.797  |
| MySQL                 | 8.134   |
| SQLite 2.7.6          | 3.520   |
| SQLite 2.7.6 (nosync) | 3.104   |

As recently as version 2.7.0, SQLite ran at about the same speed as MySQL on this test, but recent optimizations to SQLite have more than doubled the speed of UPDATEs.

Test 10: 25000 text UPDATEs with an index

BEGIN;
UPDATE t2 SET c='one hundred forty eight thousand three hundred eighty two' WHERE a=1;
UPDATE t2 SET c='three hundred sixty six thousand five hundred two' WHERE a=2;
... 24996 lines omitted
UPDATE t2 SET c='three hundred eighty three thousand ninety nine' WHERE a=24999;
UPDATE t2 SET c='two hundred fifty six thousand eight hundred thirty' WHERE a=25000;
COMMIT;
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 48.133  |
| MySQL                 | 6.982   |
| SQLite 2.7.6          | 2.408   |
| SQLite 2.7.6 (nosync) | 1.725   |

Here, too, version 2.7.0 of SQLite used to run at about the same speed as MySQL, but version 2.7.6 is now more than twice as fast as MySQL and more than twenty times faster than PostgreSQL.

Test 11: INSERTs from a SELECT

BEGIN;
INSERT INTO t1 SELECT b,a,c FROM t2;
INSERT INTO t2 SELECT b,a,c FROM t1;
COMMIT; 
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 61.364  |
| MySQL                 | 1.537   |
| SQLite 2.7.6          | 2.787   |
| SQLite 2.7.6 (nosync) | 1.599   |

Asynchronous SQLite is just a shade slower than MySQL here. (MySQL seems to be especially adept at INSERT...SELECT statements.) The PostgreSQL engine spends most of its time waiting on disk I/O.

Test 12: DELETE without an index

DELETE FROM t2 WHERE c LIKE '%fifty%';
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 1.509   |
| MySQL                 | 0.975   |
| SQLite 2.7.6          | 4.004   |
| SQLite 2.7.6 (nosync) | 0.560   |

The synchronous version of SQLite is the slowest of the group here, but the asynchronous version is the fastest. The difference is the extra time needed to execute fsync().

Test 13: DELETE with an index

DELETE FROM t2 WHERE a>10 AND a<20000; 
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 1.316   |
| MySQL                 | 2.262   |
| SQLite 2.7.6          | 2.068   |
| SQLite 2.7.6 (nosync) | 0.752   |

This time PostgreSQL is faster than MySQL; asynchronous SQLite, however, is faster than both.

Test 14: A big INSERT after a big DELETE

INSERT INTO t2 SELECT * FROM t1; 
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 13.168  |
| MySQL                 | 1.815   |
| SQLite 2.7.6          | 3.210   |
| SQLite 2.7.6 (nosync) | 1.485   |

Older versions of SQLite (prior to version 2.4.0) showed degraded performance after a sequence of DELETEs followed by new INSERTs. As this test demonstrates, that problem has been resolved.

Test 15: A big DELETE followed by many small INSERTs

BEGIN;
DELETE FROM t1;
INSERT INTO t1 VALUES(1,10719,'ten thousand seven hundred nineteen');
... 11997 lines omitted
INSERT INTO t1 VALUES(11999,72836,'seventy two thousand eight hundred thirty six');
INSERT INTO t1 VALUES(12000,64231,'sixty four thousand two hundred thirty one');
COMMIT;
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 4.556   |
| MySQL                 | 1.704   |
| SQLite 2.7.6          | 0.618   |
| SQLite 2.7.6 (nosync) | 0.406   |

SQLite is very good at doing INSERTs within a transaction, and it is much faster than the other databases on this test.

Test 16: DROP TABLE

DROP TABLE t1;
DROP TABLE t2;
DROP TABLE t3; 
  • Results:
| Database              | Seconds |
| --------------------- | ------- |
| PostgreSQL            | 0.135   |
| MySQL                 | 0.015   |
| SQLite 2.7.6          | 0.939   |
| SQLite 2.7.6 (nosync) | 0.254   |

SQLite is slower than the other databases at dropping tables because it has to walk through the database file and delete the records belonging to the table. MySQL and PostgreSQL, by contrast, store each table in a separate file, so they can drop a table simply by deleting that file, which is much faster. Since dropping tables is not a common operation, SQLite taking somewhat longer is not a big problem.

Test Scenarios

  • Empty-database scenario.
  • Business scenario: a database of roughly 26GB; see the data model section of the performance test plan.
  • Performance-testing scenarios: executed through an API or the command line during performance, load, spike, and stress testing.

Test Code

The following code implements the tests in Python with the sqlite3 module and, at the same time, exports the statements to .sql files so that the tests can also be replayed from the command line (a replay sketch follows the code below). Driving our in-house database through its Python API still needs to be added later.
The latest version of the code is kept at https://github.com/china-testing/python-testing-examples/blob/master/db/sqlite_speed.py

import random
import time

from num2word import word  # third-party helper that spells an integer out in English words
import sqlite3


def open_database():
    """Open (or create) the benchmark database file and return a connection."""
    db = 'pydb.db'
    conn = sqlite3.connect(db)
    return conn

def get_conn_and_cursor(name, sql=''):
    """Print the test name, open a connection, and optionally run a setup statement."""
    print("*"*20, name)
    conn = open_database()
    cursor = conn.cursor()
    if sql:
        cursor.execute(sql)
        conn.commit()
    return conn, cursor

def list2file(lists, filename):
    """Write the collected SQL statements to a file, one statement per line."""
    with open(filename, 'w') as f:
        for item in lists:
            f.write(item + "\n")
    

def insert_1000():
    
    sqls = []
    sql = ''' DROP TABLE IF EXISTS t1;'''
    sqls.append(sql)
    conn, cursor = get_conn_and_cursor("Test 1: 1000 INSERTs", sql)
    
    t1 = time.time()   
    sql = ''' CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100));'''
    cursor.execute(sql)
    conn.commit()
    for i in range(1, 1001):
        num = random.randint(1, 100000000)
        num_str = word(num).lower()
        sql = '''INSERT INTO t1 VALUES({}, {},'{}');'''.format(i, num, num_str)
        cursor.execute(sql)
        sqls.append(sql)
        conn.commit()    # one transaction per INSERT, matching Test 1's per-statement commits
    print("*"*10, time.time()-t1)
    list2file(sqls, "1.sql")
    conn.close()
    
def insert_25000_transaction():
    
    sqls = []
    sql = '''DROP TABLE IF EXISTS t2;'''
    sqls.append(sql)
    conn, cursor = get_conn_and_cursor("Test 2: 25000 INSERTs in a transaction", sql)
 
    t1 = time.time()   
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    sql = ''' CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100));'''
    sqls.append(sql)
    cursor.execute(sql)
    for i in range(1, 25001):
        num = random.randint(1, 100000000)
        num_str = word(num).lower()
        sql = '''INSERT INTO t2 VALUES({}, {},'{}');'''.format(i, num, num_str)
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit()  
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "2.sql")
    
def insert_25000_transaction_index():
    
    sqls = []
    sql = '''DROP TABLE IF EXISTS t3;'''
    sqls.append(sql)
    conn, cursor = get_conn_and_cursor("Test 3: 25000 INSERTs into an indexed table", sql)    
 
    t1 = time.time()   
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    sql = ''' CREATE TABLE t3(a INTEGER, b INTEGER, c VARCHAR(100));'''
    sqls.append(sql)
    cursor.execute(sql)
    sql = '''CREATE INDEX i3 ON t3(c);'''
    sqls.append(sql)
    cursor.execute(sql)    
    for i in range(1, 25001):
        num = random.randint(1, 100000000)
        num_str = word(num).lower()
        sql = '''INSERT INTO t3 VALUES({}, {},'{}');'''.format(i, num, num_str)
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit()  
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "3.sql")
    
    
def select_100_without_index():
    
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 4: 100 SELECTs without an index")        
 
    t1 = time.time()   
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    for i in range(100):
        sql = '''SELECT count(*), avg(b) FROM t2 WHERE b>={} AND b<{};'''.format(i*100, i*100+1000)
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit() 
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "4.sql")
    

def select_100_comparison():
    
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 5: 100 SELECTs on a string comparison")
 
    t1 = time.time()   
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    for i in range(1,101):
        sql = '''SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%{}%';'''.format(word(i).lower())
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit() 
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "5.sql")
    
def create_index():
    
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 6: Creating an index")
 
    t1 = time.time()   
    sql = ''' CREATE INDEX i2a ON t2(a);'''
    sqls.append(sql)
    cursor.execute(sql)   
    conn.commit() 
    sql = ''' CREATE INDEX i2b ON t2(b); '''
    sqls.append(sql)
    cursor.execute(sql)   
    conn.commit() 
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "6.sql")
    
def select_5000_with_index():
    
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 7: 5000 SELECTs with an index")
 
    t1 = time.time()   
    for i in range(5000):
        sql = '''SELECT count(*), avg(b) FROM t2 WHERE b>={} AND b<{};'''.format(i*100, i*100+100)
        sqls.append(sql)
        cursor.execute(sql)
        # SELECTs open no write transaction, so no per-row commit is needed
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "7.sql")


def update_1000_without_index():
    
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 8: 1000 UPDATEs without an index")
 
    t1 = time.time()   
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    for i in range(1000):
        sql = '''UPDATE t1 SET b=b*2 WHERE a>={} AND a<{};'''.format(i*10, i*10+10)
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit()   
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "8.sql")
    
def update_25000_with_index():
    
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 9: 25000 UPDATEs with an index")
 
    t1 = time.time()   
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    for i in range(25000):
        sql = '''UPDATE t2 SET b={} WHERE a={};'''.format(random.randint(1, 100000000), i+1)
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit()   
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "9.sql")
    
def update_25000_text_with_index():
    
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 10: 25000 text UPDATEs with an index")
 
    t1 = time.time()   
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    for i in range(25000):
        sql = '''UPDATE t2 SET c='{}' WHERE a={};'''.format(word(random.randint(1, 100000000)).lower(), i+1)
        cursor.execute(sql)
        sqls.append(sql)
    conn.commit()  
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "10.sql")

def insert_from_select():
    
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 11: INSERTs from a SELECT")
 
    t1 = time.time()   
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    sql = '''INSERT INTO t1 SELECT b,a,c FROM t2;'''
    sqls.append(sql)
    cursor.execute(sql)
    sql = '''INSERT INTO t2 SELECT b,a,c FROM t1;'''
    sqls.append(sql)
    cursor.execute(sql)  
    conn.commit() 
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "11.sql")
    

def del_without_index():
    
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 12: DELETE without an index")
 
    t1 = time.time()   
    sql = '''DELETE FROM t2 WHERE c LIKE '%fifty%'; '''
    sqls.append(sql)
    cursor.execute(sql)  
    conn.commit() 
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "12.sql")
    
def del_with_index():
    
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 13: DELETE with an index")
 
    t1 = time.time()   
    sql = '''DELETE FROM t2 WHERE a>10 AND a<20000;  '''
    sqls.append(sql)
    cursor.execute(sql)  
    conn.commit() 
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "13.sql")
    
def big_insert_after_big_del():
    
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 14: A big INSERT after a big DELETE")
 
    t1 = time.time()   
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    sql = '''DELETE FROM t2;'''
    sqls.append(sql)
    cursor.execute(sql)
    sql = '''INSERT INTO t2 SELECT * FROM t1; '''
    sqls.append(sql)
    cursor.execute(sql)  
    conn.commit()
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "14.sql")


def small_insert_after_big_del():
    
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 15: A big DELETE followed by many small INSERTs")
 
    t1 = time.time()   
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    sql = '''DELETE FROM t1;'''
    sqls.append(sql)
    cursor.execute(sql)
    for i in range(1, 12001):
        num = random.randint(1, 100000000)
        num_str = word(num).lower()
        sql = '''INSERT INTO t1 VALUES({}, {},'{}');'''.format(i, num, num_str)
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit() 
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "15.sql")
    
def drop_table():
    
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 16: DROP TABLE")
 
    t1 = time.time()   
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    sql = '''DROP TABLE t1;'''
    sqls.append(sql)
    cursor.execute(sql)
    conn.commit() 
    sql = '''DROP TABLE t2;'''
    sqls.append(sql)
    cursor.execute(sql)
    conn.commit() 
    sql = '''DROP TABLE t3;'''
    sqls.append(sql)
    cursor.execute(sql)
    conn.commit() 
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "16.sql")
    
insert_1000()
insert_25000_transaction()
insert_25000_transaction_index()
select_100_without_index()
select_100_comparison()
create_index()
select_5000_with_index()
update_1000_without_index()
update_25000_with_index()
update_25000_text_with_index()
insert_from_select()
del_without_index()
del_with_index()
big_insert_after_big_del()
small_insert_after_big_del()
drop_table()
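
The exported .sql files can be replayed either through the sqlite3 command-line shell or from Python. Below is a minimal sketch, assuming the sqlite3 CLI is installed and the files from the run above are present (replay.db is just an example target database):

import sqlite3
import subprocess

# Replay an exported script through the sqlite3 command-line shell.
with open('2.sql') as f:
    subprocess.run(['sqlite3', 'replay.db'], stdin=f, check=True)

# Or replay it from Python: executescript() runs the whole file,
# including the BEGIN/COMMIT statements it contains.
conn = sqlite3.connect('replay.db')
with open('2.sql') as f:
    conn.executescript(f.read())
conn.close()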
