
Prerequisite

sudo apt-get install python3 python3-pip python3-numpy python3-dev \
python3-wheel python3-mock python-is-python3
python3 -m pip install --upgrade pip
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
sudo update-alternatives --install /usr/bin/python-config python-config /usr/bin/python3-config 10

After installing, set python3 as the default:

sudo update-alternatives --set python /usr/bin/python3
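
To verify that python now points to python3:

python --version
update-alternatives --display python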

Required Python packages:

pip install -U --user pip six numpy wheel mock
pip install -U --user keras_applications==1.0.6 --no-deps
pip install -U --user keras_preprocessing==1.0.5 --no-deps
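
A quick sanity check that the packages are importable (module names assumed to match the pip package names):

python3 -c "import six, numpy, mock, keras_applications, keras_preprocessing; print('ok')"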

Build

2.3.0

./configure
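
Some of the prompts can also be pre-answered via environment variables recognized by configure.py, e.g.:

PYTHON_BIN_PATH=$(which python3) TF_NEED_CUDA=0 CC_OPT_FLAGS="-O2" ./configure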

# normal
bazel build --config=opt --config=noaws //tensorflow/tools/pip_package:build_pip_package

# minimal?
bazel build --config=opt --config=noaws --config=nogcp --config=nohdfs --config=nonccl \
//tensorflow/tools/pip_package:build_pip_package

# arm options?
bazel build --config=opt --config=noaws --config=nogcp --config=nohdfs --config=nonccl \
--copt="-mfpu=neon-vfpv4" --copt="-ftree-vectorize" \
//tensorflow/tools/pip_package:build_pip_package

# ??? --copt="-funsafe-math-optimizations" --copt="-fomit-frame-pointer"

bazel-bin/tensorflow/tools/pip_package/build_pip_package ~/temp/tensorflow_pkg

pip uninstall tensorflow
pip install ~/temp/tensorflow_pkg/tensorflow-2.3.0-cp35-cp35m-linux_x86_64.whl --user

Ref: https://www.tensorflow.org/install/install_sources

bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

pip uninstall tensorflow
pip install /tmp/tensorflow_pkg/tensorflow-2.3.0-cp36-cp36m-linux_x86_64.whl --user

Debug build

# bazel build --copt=-O -c dbg -c opt //tensorflow/tools/pip_package:build_pip_package

CC_OPT_FLAGS="-O0 -g" ./configure

bazel build \
--config=noaws --config=nogcp --config=nohdfs --config=nonccl \
-c dbg --strip=never //tensorflow/tools/pip_package:build_pip_package
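
Once the debug wheel built from this is installed, you can run a Python script under gdb to get usable symbols (the script name below is a placeholder):

gdb --args python3 your_tf_script.py
(gdb) run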

For the label_image example program:

bazel build --copt=-O -c dbg -c opt tensorflow/examples/label_image/...
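
To try the binary after the build (the flags and paths below are placeholders):

bazel-bin/tensorflow/examples/label_image/label_image \
  --graph=/path/to/graph.pb --image=/path/to/image.jpg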

Build with CMake ?

Note: tensorflow/contrib/cmake exists only in TF 1.x source trees; contrib was removed in TF 2.x.

cd tensorflow
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ../tensorflow/contrib/cmake
make

Build Tensorflow Mobile JNI ?

Note: //tensorflow/contrib/android also exists only in TF 1.x sources, since contrib was removed in TF 2.x.

This only generates a 32-bit ARM library; an AArch64 build is not covered here.

You need the Android NDK installed; ./configure will ask for its path.

The last option of ./configure defaults to -march=native, but for the Android toolchain you should give something like -O2 that clang understands.
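
For reference, the relevant prompt looks roughly like this (exact wording depends on the TF version); answer with -O2 instead of the default:

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]: -O2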

Build:

bazel build --config=opt //tensorflow/contrib/android:libtensorflow_inference.so \
  --crosstool_top=//external:android/crosstool \
  --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
  --cpu=armeabi-v7a

The output ends up at something like ./bazel-tensorflow/bazel-out/armeabi-v7a-py3-opt/bin/tensorflow/contrib/android/libtensorflow_inference.so
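
To double-check the target architecture (the exact path may differ per build configuration):

file ./bazel-tensorflow/bazel-out/armeabi-v7a-py3-opt/bin/tensorflow/contrib/android/libtensorflow_inference.so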

Purge cache

bazel clean --expunge

TFlite model JSON

flatc -t schema.fbs -- input_model.tflite

This will create input_model.json.
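
schema.fbs ships with the TensorFlow sources; assuming a TF 2.x checkout, the path would be:

flatc -t tensorflow/lite/schema/schema.fbs -- input_model.tflite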

With VirtualEnv

sudo apt-get install python-dev python-virtualenv

cd ~

# for TF2.3.0 as of today
virtualenv --system-site-packages -p python3 venv/tf2_3_0

# enter
source venv/tf2_3_0/bin/activate

# setup
pip install --upgrade pip
pip install --upgrade numpy
pip install --upgrade tensorflow-cpu==2.3.0

# or from build
pip install ~/temp/tensorflow_pkg/tensorflow-2.3.0-cp35-cp35m-linux_x86_64.whl

# 2.3.0, Python 3.6 wheel
pip install /tmp/tensorflow_pkg/tensorflow-2.3.0-cp36-cp36m-linux_x86_64.whl

# exit
deactivate

Re-enter

source venv/tf2_3_0/bin/activate

...

deactivate

Operators

Search for REGISTER_OP in the tensorflow/core folder to find where operators are defined.
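
For example (the grep pattern is just an illustration):

grep -rn 'REGISTER_OP("' tensorflow/core | head -20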

Check

python3 -c 'import tensorflow as tf; print(tf.__version__)' 
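
A further sanity check that the runtime loads and sees your devices:

python3 -c 'import tensorflow as tf; print(tf.config.list_physical_devices())'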

Inspect model

#!/usr/bin/env python

# Dump op type, first output shape and name of every node in a frozen GraphDef (.pb)
import sys
import tensorflow as tf

in_file = sys.argv[1]

# GraphDef and gfile moved out of the root namespace in TF 2.x, so go through tf.compat.v1
with tf.compat.v1.gfile.GFile(in_file, 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.compat.v1.import_graph_def(graph_def, name="")

def shape_str(node):
    # shape of the first output, or [] for nodes without any output shape
    shapes = node.attr['_output_shapes'].list.shape
    if len(shapes) == 0:
        return "[]"
    return str([d.size for d in shapes[0].dim])

# first pass: compute column widths
max_op = 10
max_sh = 10
for node in graph.as_graph_def(add_shapes=True).node:
    max_op = max(max_op, len(node.op))
    max_sh = max(max_sh, len(shape_str(node)))

fmt_str = "{:4d} | {:" + str(max_op) + "s} | {:" + str(max_sh) + "s}"

# second pass: print index, op type, output shape and node name
node_order = 1
for node in graph.as_graph_def(add_shapes=True).node:
    print(fmt_str.format(node_order, node.op, shape_str(node)) + " | " + node.name)
    node_order = node_order + 1
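
Usage, assuming the script above is saved as inspect_pb.py and you have a frozen GraphDef such as frozen_graph.pb:

python3 inspect_pb.py frozen_graph.pb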

