Reposted from:
I recently needed to process some temporary data locally for statistical analysis. Plain MySQL could do it, but shuttling piles of temporary data in and out of MySQL is tedious. A nicer option is to install a Cygwin environment on the local machine and use awk and other shell tools for ad-hoc processing. This article shows how to implement common SQL operations in awk, and doubles as a simple awk primer.
Some of the awk below could be replaced by more concise and efficient shell commands, or by other languages, but that is outside the scope of this article.
Note: the two test files used throughout, user and consumer, simulate two SQL tables:
user table, fields: id name addr

1 zhangsan hubei
3 lisi tianjin
4 wangmazi guangzhou
2 wangwu beijing

consumer table, fields: id cost date

1 15 20121213
2 20 20121213
3 100 20121213
4 99 20121213
1 25 20121114
2 108 20121114
3 100 20121114
4 66 20121114
1 15 20121213
1 115 20121114

Test environment:

OS version: uname -a
CYGWIN_NT-6.1 june-PC 1.7.9(0.237/5/3) 2011-03-29 10:10 i686 Cygwin
awk version: awk --version
GNU Awk 3.1.8

1. Query all records in a table, filter with a WHERE condition. Keyword: where
select * from user;
awk 1 user

select * from consumer where cost > 100;
awk '$2>100' consumer
2. Deduplicate on a column, or on whole records. Keyword: distinct
select distinct(date) from consumer;
awk '!a[$3]++{print $3}' consumer

select distinct(*) from consumer;
awk '!a[$0]++' consumer
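The `!a[$0]++` idiom is worth a word: a[$0] starts at 0 (false) the first time a line appears, so the negation is true and the default action (print) fires; the post-increment then marks the line as seen. A minimal self-contained sketch:

```shell
# First occurrence: a[$0] is 0, !0 is true -> line printed, then a[$0] becomes 1.
# Later occurrences: !1 is false -> line suppressed.
printf '20121213\n20121114\n20121213\n' | awk '!a[$0]++'
# prints 20121213 then 20121114, each once, in first-seen order
```

Unlike sort -u, this keeps the original order and needs only one pass, at the cost of holding every distinct line in memory.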
3. Output records in sorted order. Keyword: order by
select id from user order by id;
awk '{a[$1]}END{asorti(a);for(i=1;i<=length(a);i++){print a[i]}}' user
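Note that asorti() and length() on an array are gawk extensions, and gawk compares as strings by default, so an id of 10 would sort before 2. A portable sketch that sidesteps both issues by delegating the ordering to sort(1):

```shell
# awk projects the id column; sort -n orders it numerically.
printf '1 zhangsan hubei\n3 lisi tianjin\n4 wangmazi guangzhou\n2 wangwu beijing\n' |
awk '{print $1}' | sort -n
# prints 1 2 3 4, one per line
```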
4. Take the first N records. Keyword: limit
select * from consumer limit 2;
awk 'NR<=2' consumer
awk 'NR>2{exit}1' consumer    # better performance
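The second form is faster on big files because `exit` stops reading the input entirely, while `NR<=2` still scans to the end. A quick sketch, with seq standing in for a large file:

```shell
# 'NR>2{exit}' aborts after line 2; the trailing '1' is the always-true
# pattern whose default action prints each surviving line.
seq 1000000 | awk 'NR>2{exit}1'
# prints 1 and 2, without scanning the remaining lines
```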
5. Group, count, and sum. Keywords: group by, having, sum, count
select id, count(1), sum(cost) from consumer group by id having count(1) > 2;
awk '{a[$1]=a[$1]==""?$2:a[$1]","$2}END{for(i in a){c=split(a[i],b,",");if(c>2){sum=0;for(j in b){sum+=b[j]};print i"\t"c"\t"sum}}}' consumer

Or:
awk '{a[$1]+=$2;b[$1]++}END{for(i in a){if(b[i]>2){print i"\t"b[i]"\t"a[i]}}}' consumer
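The second variant is the idiomatic one: one array accumulates the sum per key, another counts the rows, and the HAVING filter is applied in END. A self-contained sketch with a few inline rows from the consumer data:

```shell
# a[id] accumulates cost, b[id] counts rows; END keeps ids seen more than twice.
printf '1 15\n2 20\n1 25\n1 115\n' |
awk '{a[$1]+=$2; b[$1]++} END{for(i in a) if(b[i]>2) print i, b[i], a[i]}'
# prints: 1 3 155
```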
6. Fuzzy matching. Keyword: like (LIKE does glob-style matching; full regular expressions use REGEXP)
select name from user where name like 'wang%';
awk '$2 ~/^wang/{print $2}' user

select addr from user where addr like '%bei';
awk '/.*bei$/{print $3}' user

select addr from user where addr like '%bei%';
awk '$3 ~/bei/{print $3}' user
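The mapping from LIKE patterns to regexes is mechanical: each literal side of a % becomes an anchor (`'wang%'` → `/^wang/`, `'%bei'` → `/bei$/`, `'%bei%'` → `/bei/`). Matching against a specific field, rather than the whole line, avoids false hits from other columns; a sketch:

```shell
# Anchoring the regex to $3 means only the addr column is tested.
printf '1 zhangsan hubei\n2 wangwu beijing\n' | awk '$3 ~ /^bei/{print $3}'
# prints: beijing   (hubei contains "bei" but does not start with it)
```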
7. Multi-table JOIN queries. Keyword: join
select a.*, b.* from user a inner join consumer b on a.id = b.id and b.id = 2;
awk 'ARGIND==1{a[$1]=$0;next}{if(($1 in a)&&$1==2){print a[$1]"\t"$2"\t"$3}}' user consumer
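ARGIND is a gawk extension; the portable two-file idiom is NR==FNR, which holds only while the first file is being read (FNR resets at each file boundary). A sketch with throwaway files (the /tmp paths are just for the demo):

```shell
# Build a lookup table from the first file, probe it with the second.
printf '1 zhangsan hubei\n2 wangwu beijing\n' > /tmp/user.demo
printf '2 20 20121213\n2 108 20121114\n' > /tmp/consumer.demo
awk 'NR==FNR{a[$1]=$0;next} ($1 in a)&&$1==2{print a[$1]"\t"$2"\t"$3}' \
    /tmp/user.demo /tmp/consumer.demo
# prints the two consumer rows for id 2, each prefixed with the matching user row
```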
8. Stacking multiple tables. Keywords: union, union all
select a.* from user a union all select b.* from user b;
awk 1 user user

select a.* from user a union select b.* from user b;
awk '!a[$0]++' user user
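The difference is the same as in SQL: UNION ALL keeps duplicates, UNION removes them. Listing a file twice makes this visible; a sketch with a throwaway two-line file:

```shell
printf 'x\ny\n' > /tmp/u.demo
awk 1 /tmp/u.demo /tmp/u.demo           # union all: x y x y  (4 lines)
awk '!a[$0]++' /tmp/u.demo /tmp/u.demo  # union:     x y      (2 lines)
```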
9. Random sampling. Keyword: order by rand()
SELECT * FROM consumer ORDER BY RAND() LIMIT 2;
awk 'BEGIN{srand();while(i<2){k=int(rand()*10)+1;if(!(k in a)){a[k];i++}}}(NR in a)' consumer
(the 10 here is the number of rows in consumer)
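Because the row count is hardcoded, the one-liner breaks on a file of any other length. A sketch that avoids this using reservoir sampling (classic Algorithm R, not from the original post): keep the first n lines, then replace a random reservoir slot with probability n/NR:

```shell
seq 100 |
awk -v n=2 'BEGIN{srand()}
  NR<=n {r[NR]=$0; next}                   # fill the reservoir
  {j=int(rand()*NR)+1; if(j<=n) r[j]=$0}   # keep current line with prob n/NR
  END{for(i=1;i<=n;i++) print r[i]}'
# prints 2 lines sampled uniformly from the 100, in one pass
```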
10. Pivoting rows to columns. Keywords: SUM(IF()), WITH ROLLUP
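As a hedged sketch of the technique (my own illustration, not the post's original; the column layout and dates are invented for the demo), a SUM(IF(...)) pivot in awk produces one output row per id with one column per date:

```shell
# Approximates:
#   SELECT id, SUM(IF(date=20121213,cost,0)), SUM(IF(date=20121114,cost,0))
#   FROM consumer GROUP BY id
# using a composite "id,date" array key; +0 turns missing cells into 0.
printf '1 15 20121213\n1 25 20121114\n2 20 20121213\n2 108 20121114\n' |
awk '{s[$1","$3]+=$2}
     END{for(i=1;i<=2;i++) print i, s[i",20121213"]+0, s[i",20121114"]+0}'
# prints: 1 15 25  /  2 20 108
```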
MySQL version / awk version: (the examples for this section were lost in extraction)

11. A small awk application: RTX meal-order statistics

1. Function:
Tally the meal-order information in RTX chat logs, including (and limited to): dish name, person name, head count.

2. Supported operations: order, cancel, modify.

3. Format:
Order: "+1<space>dish name", e.g. "+1 鸡腿" (without the quotes)
Cancel: just "-1", e.g. "-1" (without the quotes)
Modify: same format as ordering; the latest entry per name wins.

4. Usage restrictions and caveats:
(1) The format must be followed exactly, or the tally will be wrong: the dish name and the +1/-1 must be separated by a space, and the line must start with +1/-1.
(2) To order for other people, copy the format, change the name, and send separate messages, in this form:
//代订        // this line is required, and must not start with +, -, or (
(userName)    // must start with ( ; if you order several portions yourself, append a number to your name, e.g. userName1
+1 dish name
"+2 dish name" is not supported, because the final tally is grouped by name.
(3) This awk script needs gawk 4.0 or later; arrays in earlier versions do not support Chinese keys.

Test case:

echo "user(统计测试) 18:30:52
对吧
user(统计测试) 18:30:55
下单了,嗯
user(张三) 18:31:11
+1 西瓜泡方便面
user(统计测试) 18:30:52
对吧 -1 测试
user(统计测试) 18:30:52
// 这是帮人代订的测试,这行一定要,随便写点啥都行 -------------------- 测试代订功能
(代订测试人)
+1 豆腐脑-甜的
。。。。。。。。。
(代订测试人2)
+1 豆腐脑-酸的
user(统计测试) 18:30:55
下单了,嗯
user(李四) 18:31:11
+1 大排
user(李四) 18:31:11
-1
user(统计测试) 18:30:52
对吧
user(统计测试) 18:30:55
下单了,嗯
user(张三) 18:31:11
+1 带鱼
user(王麻子) 18:31:11
+1 大蒜
user(统计测试) 18:30:55
下单了,嗯
user(测试程序) 18:31:11
+1 唐僧肉
user(测试程序1) 18:31:11
+1 带鱼
user(赵六) 18:31:11
+1 大蒜"|\
awk '/\(/{gsub(/.*\(|\).*/,"");name=$0;getline;if(!($0~/^(\+|-)/))next;a[name]=$0}END{for(i in a){split(a[i],b," ");if(b[2]=="")continue;c[b[2]]=c[b[2]]==""?i:c[b[2]]","i};for(i in c){split(c[i],d,",");print i":\t"c[i]"\t"length(d)}}'|column -t
Result:
带鱼:        测试程序1,张三  2
唐僧肉:      测试程序        1
大蒜:        赵六,王麻子     2
豆腐脑-酸的: 代订测试人2     1
豆腐脑-甜的: 代订测试人      1

12. Finding the root parent of an id

echo "1 0
11 1
111 11
1111 111
"|awk '{a[$1]=$2;if($2==0){b[$1]=$1}}END{for(i in a){j=i;c=0;while(a[j]!=0){j=a[j];c++};print i" \t "j" \t"c}}'
Result:
id    rootId  level
1111  1       3
111   1       2
11    1       1
1     1       0
For building and walking parent-child relations between ids, see also the Python version in the following example:
Python data-structure conversion: turning flat tuples into a dictionary tree:
t = (
    (1, -1, 'python'),
    (2, -1, 'ruby'),
    (3, -1, 'php'),
    (4, -1, 'lisp'),
    (5, 1, 'flask'),
    (6, 1, 'django'),
    (7, 1, 'webpy'),
    (8, 2, 'rails'),
    (9, 3, 'zend'),
    (10, 6, 'dblog')
)

# version for unordered fid (Python 2: map returns a list here)
from itertools import groupby
from operator import itemgetter as get
from pprint import pprint

# group by fid
tmp = dict([(k, list(rows)) for k, rows in groupby(sorted(t, key=get(1)), get(1))])

def map_fun(row):
    item = dict(zip(('id', 'fid', 'title'), row))
    if row[0] in tmp:
        item['son'] = find_children(row[0])
    return item

def find_children(parent):
    return map(map_fun, tmp[parent])

pprint(find_children(-1))
t = (
    (1, -1, 'python'),
    (2, -1, 'ruby'),
    (3, -1, 'php'),
    (4, -1, 'lisp'),
    (5, 1, 'flask'),
    (6, 1, 'django'),
    (7, 1, 'webpy'),
    (8, 2, 'rails'),
    (9, 3, 'zend'),
    (10, 6, 'dblog')
)

# version assuming fid is already in order
from pprint import pprint

l = []
entries = {}
for id, fid, title in t:
    entries[id] = entry = {'id': id, 'fid': fid, 'title': title}
    if fid == -1:
        l.append(entry)
    else:
        parent = entries[fid]
        parent.setdefault('son', []).append(entry)
pprint(l)
Recommended reading:
[1] A faster IP-database lookup method, and binary search in awk
[2] q - Text as Data
q is a command line tool that allows direct execution of SQL-like queries on CSVs/TSVs (and any other tabular text files).
q treats ordinary files as database tables, and supports all SQL constructs, such as WHERE, GROUP BY, JOINs etc. It supports automatic column name and column type detection, and provides full support for multiple encodings.

q "SELECT COUNT(*) FROM ./clicks_file.csv WHERE c3 > 32.3"
ps -ef | q -H "SELECT UID,COUNT(*) cnt FROM - GROUP BY UID ORDER BY cnt DESC LIMIT 3"