This series is basically written as I go: whatever step I have reached is the step I write up.
Since I recently installed a GPU (GTX 750 Ti) along with CUDA 9.0 and cuDNN 7.1, I want TensorFlow to run on the GPU, which also makes good on an earlier promise.
That is the motivation. Because the current prebuilt TensorFlow packages do not work well with the newer CUDA version, building from source is the only way to go.
The process roughly breaks down into the following steps:
1. Install the dependency libraries (I have already done this, so it is not covered here; see the dependencies in the earlier posts, which are essentially the same)
sudo apt-get install openjdk-8-jdk
The JDK is required by Bazel (a quick way to verify the installation is sketched right after this list).
2. Install Git (skip this step if you already have it)
3. Install Bazel, the build tool for TensorFlow
4. Configure and compile the TensorFlow source
5. Install the resulting package and configure the environment variables
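Before moving on, it is worth confirming that the JDK from step 1 is actually the one on your PATH, since Bazel depends on it. A minimal check, assuming the Ubuntu openjdk-8 package installed above:

# Expect output similar to "openjdk version 1.8.0_..."
java -version
javac -version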
1. Install the dependency libraries
2. Install Git
Use the following to install Git and clone the TensorFlow source:
sudo apt-get install git
git clone --recursive https://github.com/tensorflow/tensorflow
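The clone above leaves you on the master branch. If you want to build a specific release, check out its release branch first; the branch name below (r1.8) is only an example chosen to match the Bazel 0.14.x era shown later, not something fixed by this post:

cd tensorflow
# List the remote release branches, then switch to one (r1.8 here is just an example)
git branch -r | grep 'origin/r'
git checkout r1.8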
3. Install Bazel, the build tool for TensorFlow
This step is somewhat troublesome because Bazel is not available through apt-get.
You therefore have to download it from GitHub first and then install it; the download page is https://github.com/bazelbuild/bazel/releases
Pick the version that matches your TensorFlow release: you need to check which Bazel version your TensorFlow checkout requires, and the exact requirement is recorded in configure.py. For example, my checkout contains the following section:
_TF_BAZELRC_FILENAME = '.tf_configure.bazelrc'
_TF_WORKSPACE_ROOT = ''
_TF_BAZELRC = ''
_TF_CURRENT_BAZEL_VERSION = None
_TF_MIN_BAZEL_VERSION = '0.27.1'
_TF_MAX_BAZEL_VERSION = '1.1.0'
The meaning of each field is clear from its name: _TF_BAZELRC_FILENAME is the configuration file Bazel uses during the build (I have not studied it in detail; https://www.cnblogs.com/shouhuxianjian/p/9416934.html has an explanation), and _TF_MIN_BAZEL_VERSION = '0.27.1' is the minimum required Bazel version.
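If you just want to read those version bounds out of your own checkout without opening the file, a simple grep works. The variable names are the ones shown in the snippet above; older checkouts may record the requirement differently, so treat this as an assumption:

# Print the Bazel version range this TensorFlow checkout expects
grep -n '_TF_MIN_BAZEL_VERSION\|_TF_MAX_BAZEL_VERSION' configure.py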
Then simply install the downloaded .sh installer with sudo:
sudo chmod +x ./bazel*.sh
sudo ./bazel-0.*.sh
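After the installer finishes, it is worth confirming that the bazel binary is on your PATH and reports the version TensorFlow expects:

# Confirm Bazel is found and check its version against configure.py's requirement
which bazel
bazel version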
4. Configure and compile the TensorFlow source
First comes configuration, where you can enable or trim features to fit your own needs. This step is especially tedious because there are a lot of options to answer; my choices were as follows:
jourluohua@jour:~/tools/tensorflow$ ./configure
WARNING: Running Bazel server needs to be killed, because the startup options are different.
You have bazel 0.14.1 installed.
Please specify the location of python. [Default is /usr/bin/python]:

Found possible Python library paths:
  /usr/local/lib/python2.7/dist-packages
  /usr/lib/python2.7/dist-packages
Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages]

Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: Y
jemalloc as malloc support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n
No Google Cloud Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: n
No Hadoop File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: n
No Amazon S3 File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: n
No Apache Kafka Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]: y
XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with GDR support? [y/N]: y
GDR support will be enabled for TensorFlow.

Do you wish to build TensorFlow with VERBS support? [y/N]: y
VERBS support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: N
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: 8

Please specify the location where CUDA 8.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]:

Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

Do you wish to build TensorFlow with TensorRT support? [y/N]: N
No TensorRT support will be enabled for TensorFlow.

Please specify the NCCL version you want to use. [Leave empty to default to NCCL 1.3]:

Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 5.0]

Do you want to use clang as CUDA compiler? [y/N]: N
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:

Do you wish to build TensorFlow with MPI support? [y/N]: N
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
    --config=mkl          # Build with MKL support.
    --config=monolithic   # Config for mostly static monolithic build.
Configuration finished
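The configure script only records the build settings; the compilation itself is driven by Bazel. The post stops at configuration, but for reference the conventional commands for a GPU-enabled TensorFlow 1.x source build look roughly like the following. The target names and the /tmp output path are the usual convention rather than something taken from this post, so treat them as an assumption:

# Compile the pip-package builder with CUDA enabled (this can take several hours)
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

# Generate the .whl into /tmp/tensorflow_pkg, then install it with pip
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
sudo pip install /tmp/tensorflow_pkg/tensorflow-*.whl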