Binary neural networks in LostTech.TensorFlow
As you might have heard, today Apple acquired our Seattle neighbor Xnor.ai for $200M. The company’s main product is a mechanism to run neural networks on low-power devices, and its core is just 50 lines of code. It achieves this efficiency by performing operations en masse on individual bits instead of on the usual 32-bit (and, more recently, 16-bit) floating point numbers.
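The trick itself is easy to illustrate: if weights and activations are constrained to +1 and -1, the signs of 64 values can be packed into a single 64-bit integer, and their dot product collapses into one XOR and one population count. Here is a minimal C# sketch of that idea (the BinaryDot helper and the bit convention are illustrative, not Xnor.ai's actual code; System.Numerics.BitOperations requires .NET Core 3.0 or later):

using System.Numerics;

// Bit i of x and y encodes the sign of value i: 1 -> +1, 0 -> -1.
// Matching bits contribute +1 to the dot product, differing bits contribute -1.
static int BinaryDot(ulong x, ulong y)
{
    int differing = BitOperations.PopCount(x ^ y); // positions where signs differ
    return 64 - 2 * differing; // matches minus mismatches
}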
For the last few months we have been working to bring bitwise operations to Gradient, and yesterday we finally got the first relatively stable build based on the latest TensorFlow from the 1.x family: 1.15 (previous versions of Gradient, up to Preview 6.4, were based on TensorFlow 1.10). The new version brings support for many new features, among them the tf.bitwise and gen_bitwise_ops modules.
In light of the Xnor.ai acquisition news, I decided to publish these bits, together with simple sample code, to a work-in-progress branch, so you can start trying them early. You can view the new sample code for bitwise ops in the Gradient-Samples repository, but it is as easy as:
Tensor xor = tf.bitwise.bitwise_xor(x, y); // 1 bits wherever x and y differ
Tensor bitcount = gen_bitwise_ops.population_count_dyn(xor); // per-element count of set bits
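These two lines are the heart of a binarized dot product: the population count of the XOR is the number of sign mismatches, so for 64 packed sign bits the dot product is 64 - 2 * bitcount. A sketch of that last step, assuming Gradient also exposes tf.cast, tf.constant, tf.multiply, and tf.subtract from the TensorFlow 1.15 API it mirrors (population_count yields uint8, hence the cast):

Tensor mismatches = tf.cast(bitcount, tf.int32); // widen uint8 counts before arithmetic
Tensor dot = tf.subtract(tf.constant(64), tf.multiply(tf.constant(2), mismatches));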
Stay tuned for the official release with TensorFlow 1.15 support. It is coming soon!