
How can we minimize round trips between the app and MySQL?

A question that came up recently while studying MySQL application optimization: how can the app bundle several closely related SQL statements from one piece of business logic into a single batch and send them to the server together? This is not quite the same as using a stored procedure: a stored procedure ties business logic to the server side, and our tests also showed that MySQL stored procedures are noticeably less efficient than Oracle's. So what kind of functionality do we actually need?

Business scenario:

A typical SQL example from accounting logic:

begin;
update t1 set xxx where xxx; # affects two rows
insert into t2 values(); # successfully inserts one row
xxxyyyzzz;
commit;

Note the very strict business requirement: the UPDATE must successfully update exactly two rows AND the INSERT must successfully insert exactly one row.

The optimization goal is therefore to wrap the UPDATE and INSERT into one logical statement: if either statement fails, an error is returned, and whether to roll back is left to the app.

To support this, the MySQL server layer must extend its syntax:

update min_batch_rows=2 t1 set xxx where xxx;
insert min_batch_rows=1 into t2 values();

On the app side, the two statements are sent to the server together (using CLIENT_MULTI_STATEMENTS); execution stops as soon as one statement fails.

1. For a single statement, the min_batch_rows syntax extension is needed: after the statement is processed, the number of affected rows determines whether it is rolled back.
2. For multiple statements, they are sent as one batch; a consecutive run of hinted statements forms a special group that either all succeeds or all fails.

1. Single statement

## min_batch_rows is the minimum number of affected rows; if a statement affects fewer rows than this value, the statement is rolled back.

# two rows with a=11

mysql> select * from t1 where a=11;
+------+
| a    |
+------+
|   11 |
|   11 |
+------+
2 rows in set (3.79 sec)

# with a minimum update count of 2, the rows with a=11 are updated

mysql>  update MIN_BATCH_SIZE=2 t1 set a=a+10 where a=11;
Query OK, 2 rows affected (1.40 sec)
Rows matched: 2  Changed: 2  Warnings: 0

mysql> select * from t1 where a=11;
Empty set (1.18 sec)

mysql> select * from t1 where a=21;
+------+
| a    |
+------+
|   21 |
|   21 |
+------+
2 rows in set (1.94 sec)

# with a minimum update count of 3, the rows with a=21 are not updated, because only two rows would be affected

mysql>  update MIN_BATCH_SIZE=3 t1 set a=a+10 where a=21;
ERROR 1721 (HY000): effected rows are less than specified mini_batch_rows.
mysql> select * from t1 where a=21;
+------+
| a    |
+------+
|   21 |
|   21 |
+------+
2 rows in set (1.90 sec)
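The single-statement behavior above can be emulated client-side. A minimal sketch, using Python's sqlite3 module as a stand-in for MySQL (the proposed MIN_BATCH_SIZE hint does not exist in stock MySQL, and `update_min_rows` is a hypothetical helper): run the UPDATE, compare the affected-row count with the required minimum, and roll back on a shortfall.

```python
import sqlite3

def update_min_rows(conn, sql, params, min_rows):
    """Run one UPDATE; if it affects fewer than min_rows rows,
    roll it back and raise, mimicking the proposed server error."""
    cur = conn.execute(sql, params)
    if cur.rowcount < min_rows:
        conn.rollback()  # note: this undoes the whole open transaction
        raise RuntimeError(
            "effected rows are less than specified min_batch_rows")
    return cur.rowcount

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1(a INT)")
conn.executemany("INSERT INTO t1 VALUES(?)", [(11,), (11,)])
conn.commit()

# min_rows=2: both a=11 rows match, so the update sticks
update_min_rows(conn, "UPDATE t1 SET a=a+10 WHERE a=11", (), 2)
conn.commit()

# min_rows=3: only two rows match, so the update is rolled back
try:
    update_min_rows(conn, "UPDATE t1 SET a=a+10 WHERE a=21", (), 3)
except RuntimeError as e:
    print(e)
```

Unlike the proposed server-side hint, which would roll back only the current statement, this client-side fallback can only roll back the whole open transaction.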

2. Multiple statements

## Statements with min_batch_rows form one logical group: the next statement runs only after the previous one has executed successfully.

mysql> delimiter ||
mysql> truncate table t1;
    -> begin;insert into t1 values(1); insert into t1 values(2);  insert into t1 values(3);commit ||


mysql> select * from t1 ||
+------+
| a    |
+------+
|    1 |
|    2 |
|    3 |
+------+
3 rows in set (0.00 sec)

mysql> begin;
    -> update MIN_BATCH_SIZE=1 t1 set a=a+10 where a=1;
    -> update MIN_BATCH_SIZE=1 t1 set a=a+10 where a=2;
    -> update MIN_BATCH_SIZE=2 t1 set a=a+10 where a=3;
    -> commit ||
Query OK, 0 rows affected (0.00 sec)

Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

ERROR 1721 (HY000): effected rows are less than specified mini_batch_rows.
mysql>

# still the original 1, 2, 3

# Note: querying in the original session shows the modified rows, because the rest of the multi-statement batch was discarded.

delimiter ||
truncate table t1;
begin;insert into t1 values(1); insert into t1 values(2);  insert into t1 values(3);commit ||
    
## SQL:   
delimiter ||
begin;
update MIN_BATCH_SIZE=1 t1 set a=a+10 where a=1;
update MIN_BATCH_SIZE=1 t1 set a=a+10 where a=2;
update MIN_BATCH_SIZE=2 t1 set a=a+10 where a=3; 
commit ||


## SQL:
delimiter ||
begin;
update MIN_BATCH_SIZE=1 t1 set a=a+10 where a=1;
update MIN_BATCH_SIZE=1 t1 set a=a+10 where a=2;
update MIN_BATCH_SIZE=2 t1 set a=a+10 where a=3 ||
update MIN_BATCH_SIZE=1 t1 set a=a+10 where a=3;
commit ||

## SQL:
delimiter ||
begin;
update MIN_BATCH_SIZE=1 t1 set a=a+10 where a=1;
update MIN_BATCH_SIZE=1 t1 set a=a+10 where a=2;
update MIN_BATCH_SIZE=2 t1 set a=a+10 where a=3 ||
update MIN_BATCH_SIZE=1 t1 set a=a+10 where a=3 ||
commit ||

Execution result:

mysql> delimiter ||
mysql> begin;
    -> update MIN_BATCH_SIZE=1 t1 set a=a+10 where a=1;
    -> update MIN_BATCH_SIZE=1 t1 set a=a+10 where a=2;
    -> update MIN_BATCH_SIZE=2 t1 set a=a+10 where a=3 ||
Query OK, 0 rows affected (0.00 sec)

Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

ERROR 1721 (HY000): effected rows are less than specified mini_batch_rows.
mysql> update MIN_BATCH_SIZE=1 t1 set a=a+10 where a=3 ||
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> commit ||
Query OK, 0 rows affected (0.00 sec)

mysql> select * from t1; ||
+------+
| a    |
+------+
|   11 |
|   12 |
|   13 |
+------+
3 rows in set (0.00 sec)
