Binary neural networks in Gradient

As you might have heard, today Apple acquired our Seattle neighbor Xnor.ai for $200M. The company’s main product is a mechanism for running neural networks on low-power devices, and its core is just 50 lines of code. It achieves this efficiency by performing operations en masse on individual bits instead of the usual 32-bit (and, more recently, 16-bit) floating-point numbers.
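
To see why a single XOR plus a population count can replace a whole row of multiply-adds, here is a minimal self-contained C# sketch (plain .NET, no Gradient; the class name and bit encoding are illustrative):

using System;
using System.Numerics; // BitOperations requires .NET Core 3.0 or later

static class BinaryDot {
    // Encode +1 as bit 1 and -1 as bit 0. For n packed elements,
    // matching bits contribute +1 to the dot product and differing
    // bits contribute -1, so dot(a, b) = n - 2 * popcount(a XOR b).
    public static int Dot(ulong a, ulong b, int n)
        => n - 2 * BitOperations.PopCount(a ^ b);

    public static void Main() {
        // Reading from the least significant bit:
        // a = (+1, +1, -1, +1), b = (+1, -1, +1, +1)
        ulong a = 0b1011;
        ulong b = 0b1101;
        Console.WriteLine(Dot(a, b, n: 4)); // (+1) + (-1) + (-1) + (+1) = 0
    }
}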

For the last few months we have been working to bring bitwise operations to Gradient, and yesterday we finally got the first relatively stable build based on the latest TensorFlow in the 1.x family: 1.15 (previous versions of Gradient, up to Preview 6.4, were based on TensorFlow 1.10). The new version brings support for many new features, among them the tf.bitwise and gen_bitwise_ops modules. In light of the Xnor.ai acquisition news, I decided to publish these bits, along with a simple code sample, to a work-in-progress branch, so you can start trying them early. You can view the new sample code for bitwise ops in the Gradient-Samples repository, but it is as easy as

Tensor xor = tf.bitwise.bitwise_xor(x, y);                    // elementwise XOR: bits set where x and y differ
Tensor bitcount = gen_bitwise_ops.population_count_dyn(xor);  // number of set bits in each element
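
If x and y hold values from {-1, +1} packed one bit per element (1 for +1, 0 for -1), the dot product over n such elements is then n - 2 * bitcount, exactly as in the sketch above: a binarized multiply-accumulate reduces to one XOR and one population count.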

Stay tuned for the official release with TensorFlow 1.15 support. It is coming soon!


What's New in Gradient Preview 6.4?

We released Gradient Preview 6.4 on Oct 15, 2019. It brings several new features and bug fixes:

  • feature: inheriting from TensorFlow classes enables defining custom Keras layers and models (see the sketch after this list)
  • feature: improved automatic conversion of .NET types to TensorFlow
  • feature: fast marshalling from .NET arrays to NumPy arrays
  • bug fix: it is now possible to modify collections belonging to TensorFlow objects
  • bug fix: enumerating TensorFlow collections no longer crashes in multithreaded environments
  • new samples: ResNetBlock and C# or Not
  • train models in an F# Jupyter notebook in your browser, hosted for free by Microsoft Azure
  • preview expiration: extended to March 2020
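
As an illustration of the first feature, here is a minimal sketch of a subclassed Keras model, modeled on the Python Keras subclassing convention. The base class, the call override, and the relu_fn activation property are assumptions about the Gradient surface, not its confirmed API; the ResNetBlock sample shows the real thing.

// Hypothetical sketch only: base class, override name, and signatures
// follow Python Keras conventions and may differ in Gradient.
class TwoLayerNet : Model {
    readonly Dense hidden = new Dense(units: 64, activation: tf.nn.relu_fn);
    readonly Dense logits = new Dense(units: 10);

    // Python Keras subclasses override call(inputs); assumed here as well.
    public override dynamic call(dynamic inputs)
        => this.logits.apply(this.hidden.apply(inputs));
}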

F# Notebook Screenshot
