Python Pitfall Guide (Season 2)

This installment covers a real problem I ran into with jieba. A single service exposes two interfaces, A and B, both of which use jieba for word segmentation, but each needs its own dictionary. By coincidence, the two dictionaries overlap like this:

```
Dictionary A: "干拌面"
Dictionary B: "干拌", "面"
```

When the service started, dictionary A happened to be loaded first, and loading dictionary B afterwards turned out to have no effect:

In interface A:

```python
jieba.load_userdict("A.txt")
```

In interface B:

```python
jieba.load_userdict("B.txt")
```

The result: when segmenting "干拌面", interface B still did not get the split it wanted. In fact, every time jieba initializes you can see the following info messages, which are worth a look:

```
Building prefix dict from the default dictionary ...
Dumping model to file cache /var/folders/hv/kfb7n4lj06590hqxjv6f3dd00000gn/T/jieba.cache
Loading model cost 0.824 seconds.
Prefix dict has been built succesfully.
```

As the log shows, jieba first builds the prefix dict, dumps it to a file cache, and every later load reads the model back from that cache. On top of that, the module-level jieba functions all share a single global Tokenizer, so both load_userdict calls merge their entries into the same prefix dict: the "干拌面" added for interface A is still in effect when interface B segments, which is exactly what caused the problem above.
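To see why the leftover entry from dictionary A changes interface B's output, here is a minimal sketch in plain Python. It is not jieba's real algorithm (jieba builds a DAG over the sentence and picks the maximum-probability path); it uses naive forward maximum matching purely to illustrate that once "干拌面" is in the shared vocabulary, it beats "干拌" + "面":

```python
def forward_max_match(text, vocab, max_len=4):
    """Naive forward maximum matching: at each position, take the
    longest vocabulary word; fall back to a single character."""
    result, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in vocab or j == i + 1:
                result.append(text[i:j])
                i = j
                break
    return result

vocab_b = {"干拌", "面"}        # interface B's own words
merged = vocab_b | {"干拌面"}   # after A's dictionary leaked in

print(forward_max_match("干拌面", vocab_b))   # ['干拌', '面']
print(forward_max_match("干拌面", merged))    # ['干拌面']
```

With a separate vocabulary per interface, each interface gets the segmentation it expects; mixing them into one shared table is what makes B's result drift.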

Here is how I handled it. In interface A:

```python
jieba1 = jieba.Tokenizer(dictionary="A.txt")
```

In interface B:

```python
jieba2 = jieba.Tokenizer(dictionary="B.txt")
```

A full example:

```python
In [1]: import jieba

In [2]: jieba1=jieba.Tokenizer(dictionary="A.txt")

In [3]: jieba2=jieba.Tokenizer(dictionary="B.txt")

In [4]: jieba1.lcut("干拌面")
Building prefix dict from /Users/slade/Desktop/A.txt ...
Dumping model to file cache /var/folders/hv/kfb7n4lj06590hqxjv6f3dd00000gn/T/jieba.u5221c1b70f06b36e44bc519f39715c96.cache
Loading model cost 0.006 seconds.
Prefix dict has been built succesfully.
Out[4]: ['干拌面']

In [5]: jieba2.lcut("干拌面")
Building prefix dict from /Users/slade/Desktop/B.txt ...
Dumping model to file cache /var/folders/hv/kfb7n4lj06590hqxjv6f3dd00000gn/T/jieba.uc4f38d90bf7ce748744ff94fb2863fe4.cache
Loading model cost 0.003 seconds.
Prefix dict has been built succesfully.
Out[5]: ['干拌', '面']
```
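Notice the cache file names in the session above: each Tokenizer built from its own dictionary writes its own cache file, with a hash embedded in the jieba.u&lt;hash&gt;.cache name, so the two instances never clobber each other's cache. A rough sketch of that naming idea, assuming an MD5 of the absolute dictionary path (the exact scheme may vary across jieba versions):

```python
import hashlib
import os
import tempfile

def cache_name_for(dict_path):
    """Derive a per-dictionary cache file name, jieba-style:
    hash the absolute dictionary path so different dictionaries
    never end up sharing one cache file."""
    digest = hashlib.md5(
        os.path.abspath(dict_path).encode("utf-8")).hexdigest()
    return os.path.join(tempfile.gettempdir(), "jieba.u%s.cache" % digest)

# Different dictionaries map to different cache files:
print(cache_name_for("A.txt"))
print(cache_name_for("B.txt"))
```

This is why the two Tokenizer instances above each rebuilt and cached their own prefix dict instead of reusing the shared jieba.cache.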

One more thing to note: if you look at the Tokenizer source, it reads the dictionary with the following routine:

```python
def gen_pfdict(self, f):
    lfreq = {}
    ltotal = 0
    f_name = resolve_filename(f)
    for lineno, line in enumerate(f, 1):
        try:
            line = line.strip().decode('utf-8')
            word, freq = line.split(' ')[:2]
            freq = int(freq)
            lfreq[word] = freq
            ltotal += freq
            for ch in xrange(len(word)):
                wfrag = word[:ch + 1]
                if wfrag not in lfreq:
                    lfreq[wfrag] = 0
        except ValueError:
            raise ValueError(
                'invalid dictionary entry in %s at Line %s: %s' % (f_name, lineno, line))
    f.close()
    return lfreq, ltotal
```

With load_userdict the word frequency can be omitted, but the word, freq = line.split(' ')[:2] line above means a dictionary file passed to Tokenizer must include a frequency column; a line with only a word fails to unpack and raises the ValueError. This may depend on the jieba version; I have not tested other versions.
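To confirm that a missing frequency column is what trips this up, here is a stripped-down rerun of just the split-and-unpack step above (pure Python, no file I/O or decoding):

```python
def parse_entry(line):
    """Mimic gen_pfdict's parsing: a line must carry at least
    'word freq'; a bare word cannot unpack into two values."""
    word, freq = line.strip().split(' ')[:2]
    return word, int(freq)

print(parse_entry("干拌面 1"))    # ('干拌面', 1)

try:
    parse_entry("干拌面")         # only one field -> unpack fails
except ValueError as e:
    print("ValueError:", e)
```

So the frequency in A.txt and B.txt below is not decoration; drop it and Tokenizer refuses the whole file.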

A.txt:

```
干拌面 1
```

B.txt:

```
干拌 1
面 1
```

Feel free to follow my personal blog and Zhihu; for more code, follow my personal GitHub. If you have any questions about algorithms, code, or switching careers, you are welcome to email me.

Generous sponsors can contact me for some excellent algorithm materials.