Bfloat16

from Wikipedia, the free encyclopedia
[Figure: Structure of the bfloat16 floating point format]

bfloat16 (brain floating point, 16 bits) is the name of a floating-point format used in computer systems. It is a binary format with one sign bit, eight exponent bits, and seven mantissa bits. It is therefore a version of the IEEE 754 single-precision (binary32) type, truncated in the mantissa.
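Because bfloat16 keeps the full 8-bit exponent of binary32, conversion amounts to dropping the low 16 bits of the single-precision pattern. A minimal sketch in Python (the function names are illustrative; the round-to-nearest-even step shown here is a common convention for this conversion, not something the text above specifies):

```python
import struct

def float_to_bfloat16_bits(x: float) -> int:
    """Return the 16-bit bfloat16 pattern for x (hypothetical helper)."""
    # Reinterpret the value as its 32-bit IEEE 754 single-precision pattern.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    # bfloat16 keeps the sign bit, all 8 exponent bits, and the top 7
    # mantissa bits. Apply round-to-nearest-even before truncating.
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def bfloat16_bits_to_float(b: int) -> float:
    """Expand a 16-bit bfloat16 pattern back to a Python float."""
    # Widening is exact: pad the mantissa with zero bits.
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x
```

For example, 1.0 has the single-precision pattern 0x3F800000, so its bfloat16 pattern is the upper half, 0x3F80, and widening it back recovers 1.0 exactly; values whose mantissa needs more than 7 bits lose precision in the round trip.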

bfloat16 is used primarily in machine-learning systems, such as Google's TPUs, as well as in certain Intel Xeon processors and Intel FPGAs.

Sources

  1. Available TensorFlow Ops | Cloud TPU | Google Cloud. In: Google Cloud. Retrieved May 23, 2018: "This page lists the TensorFlow Python APIs and graph operators available on Cloud TPU."
  2. Elmar Haußmann: Comparing Google's TPUv2 against Nvidia's V100 on ResNet-50. In: RiseML Blog. April 26, 2018. Archived from the original on April 26, 2018. Retrieved May 23, 2018: "For the Cloud TPU, Google recommended we use the bfloat16 implementation from the official TPU repository with TensorFlow 1.7.0. Both the TPU and GPU implementations make use of mixed-precision computation on the respective architecture and store most tensors with half-precision."
  3. Tensorflow Authors: ResNet-50 using BFloat16 on TPU. In: Google. February 28, 2018. Retrieved May 23, 2018. (Page no longer available.)
  4. Khari Johnson: Intel unveils Nervana Neural Net L-1000 for accelerated AI training. In: VentureBeat. May 23, 2018. Retrieved May 23, 2018: "... Intel will be extending bfloat16 support across our AI product lines, including Intel Xeon processors and Intel FPGAs."
  5. Michael Feldman: Intel Lays Out New Roadmap for AI Portfolio. In: TOP500 Supercomputer Sites. May 23, 2018. Retrieved May 23, 2018: "Intel plans to support this format across all their AI products, including the Xeon and FPGA lines."
  6. Lucian Armasu: Intel To Launch Spring Crest, Its First Neural Network Processor, In 2019. In: Tom's Hardware. May 23, 2018. Retrieved May 23, 2018: "Intel said that the NNP-L1000 would also support bfloat16, a numerical format that's being adopted by all the ML industry players for neural networks. The company will also support bfloat16 in its FPGAs, Xeons, and other ML products. The Nervana NNP-L1000 is scheduled for release in 2019."