ImageDataGenerator.flow_from_directory() is widely used and has the advantage of being simple and convenient, but when the dataset is large you have to organize the directory structure and copy data around, which wastes disk space and time.

1. Loading data from a txt file at training time (see: https://blog.csdn.net/u013491950/article/details/88817310); a rough sketch of the idea follows.
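A minimal sketch of the training side (not taken from the linked post): a Python generator reads "image_path label" pairs from a txt file, builds batches with the same preprocessing as the prediction code below, and is passed to model.fit_generator. The txt format, batch size, and class count here are assumptions for illustration.

import numpy as np
from keras.preprocessing import image
from keras.utils import to_categorical

def txt_batch_generator(txt_path, batch_size, num_classes, target_size=(64, 64)):
    # Loop forever, as Keras expects from a training generator.
    while True:
        xs, ys = [], []
        for line in open(txt_path):
            path, label = line.strip().split()              # assumed line format: "path label"
            img = image.load_img(path, target_size=target_size)
            xs.append(image.img_to_array(img) / 255.0)      # same preprocessing as prediction
            ys.append(int(label))
            if len(xs) == batch_size:
                yield np.array(xs), to_categorical(ys, num_classes)
                xs, ys = [], []
        # leftover samples at the end of the file are dropped for simplicity

# hypothetical usage:
# model.fit_generator(txt_batch_generator('./train.txt', 32, 2), steps_per_epoch=100, epochs=10)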

2. Loading data from a txt file at prediction time:

... ...
import numpy as np
from keras.preprocessing import image

def get_gen(img_root_txt, _BS):
    # Read up to _BS image paths from the txt file and build one batch array.
    imgs = []
    files = []
    with open(img_root_txt) as f:
        for n in f:
            _n = n.strip()                                  # drop the trailing newline
            if not _n:
                continue
            files.append(_n)
            img = image.load_img(_n, target_size=(64, 64))  # same size as training
            arr = image.img_to_array(img) / 255.0           # same normalization as training
            imgs.append(np.expand_dims(arr, axis=0))
            if len(imgs) >= _BS:                            # stop once the batch is full
                break
    x = np.concatenate(imgs, axis=0)
    print(x.shape)
    return x, files

gen, files = get_gen('./imgs_root.txt', BS)
xx = model.predict(gen)
for n, m in zip(files, xx):
    if m[0] < m[1]:                                         # keep images predicted as class 1
        print(n, m)
... ...

  Note: the txt file stores image paths, one per line, and the prediction output is the probability of each class. Important: the image preprocessing at prediction time must match the preprocessing used at training time (target size, normalization, etc.); a small sketch of one way to enforce this follows.
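One simple way to keep the two stages consistent is to put the preprocessing in a single helper and call it from both the training generator and get_gen. A minimal sketch, with illustrative names:

import numpy as np
from keras.preprocessing import image

TARGET_SIZE = (64, 64)   # must match the size the model was trained on

def preprocess(img_path):
    # Shared by training and prediction so size and normalization never drift apart.
    img = image.load_img(img_path, target_size=TARGET_SIZE)
    return image.img_to_array(img) / 255.0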