Caffe

Caffe
Basic data

Maintainer: Berkeley Artificial Intelligence Research (BAIR)
Developer: Yangqing Jia
Initial release: 2014
Current version: 1.0 (April 18, 2017)
Operating system: Unix-like, macOS, Microsoft Windows
Programming language: C++
Category: Deep learning library
License: BSD license
Available in German: No
Website: caffe.berkeleyvision.org/

Caffe is a software library for deep learning. It was developed by Yangqing Jia during his Ph.D. at the University of California, Berkeley's Vision and Learning Center.

Caffe began as a port of the MATLAB implementation of fast convolutional neural networks (CNNs) to C and C++. It contains numerous algorithms and deep learning architectures for the classification and cluster analysis of image data. CNNs, RNNs (recurrent neural networks), LSTM (long short-term memory) networks, and fully connected neural networks are supported. Caffe can use GPU-based acceleration with cuDNN from Nvidia, allowing up to 60 million images to be processed per day.
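To illustrate how such network architectures are specified, the following is a minimal sketch using the NetSpec helper of Caffe's Python interface; the layer structure and the data source train_lmdb are hypothetical examples, not a network described in this article.

    # Minimal sketch: defining a small CNN with pycaffe's NetSpec.
    # The data source path ('train_lmdb') and layer names are hypothetical.
    import caffe
    from caffe import layers as L, params as P

    n = caffe.NetSpec()

    # Input layer reading images and labels from an LMDB database.
    n.data, n.label = L.Data(batch_size=64, backend=P.Data.LMDB,
                             source='train_lmdb',
                             transform_param=dict(scale=1.0 / 255),
                             ntop=2)

    # Convolution -> pooling -> fully connected -> softmax loss.
    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20,
                            weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2,
                        pool=P.Pooling.MAX)
    n.ip1 = L.InnerProduct(n.pool1, num_output=500,
                           weight_filler=dict(type='xavier'))
    n.relu1 = L.ReLU(n.ip1, in_place=True)
    n.ip2 = L.InnerProduct(n.relu1, num_output=10,
                           weight_filler=dict(type='xavier'))
    n.loss = L.SoftmaxWithLoss(n.ip2, n.label)

    # Serialize to the prototxt format that Caffe consumes.
    print(n.to_proto())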

Python (with NumPy) and MATLAB are provided as the primary programming interfaces. Yahoo has integrated Caffe into Apache Spark (CaffeOnSpark) in order to use deep learning in a distributed manner.
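A minimal sketch of the Python interface loading a trained model and running a GPU-accelerated forward pass follows; the file names deploy.prototxt and model.caffemodel and the blob names data and prob follow common Caffe conventions and are assumptions here, not details from this article.

    # Minimal sketch: running inference with pycaffe on the GPU.
    # File names and blob names ('data', 'prob') are assumptions
    # following common Caffe conventions.
    import numpy as np
    import caffe

    caffe.set_device(0)    # select the first GPU
    caffe.set_mode_gpu()   # enable GPU (cuDNN-backed) computation

    # Load a network in test phase from hypothetical model files.
    net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

    # Feed a single 3x227x227 image (random data as a placeholder).
    net.blobs['data'].reshape(1, 3, 227, 227)
    net.blobs['data'].data[...] = np.random.rand(1, 3, 227, 227)

    # Forward pass; 'prob' holds the softmax class probabilities.
    output = net.forward()
    print('predicted class:', output['prob'][0].argmax())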

Literature

  • Yangqing Jia et al.: Caffe: Convolutional Architecture for Fast Feature Embedding. In: UC Berkeley EECS. Berkeley 2014 (online [PDF]).

References

  1. Release 1.0. April 18, 2017 (retrieved April 23, 2018).
  2. Embedded Vision Alliance: The Caffe Deep Learning Framework: An Interview with the Core Developers. In: Embedded Vision Alliance. Retrieved April 5, 2017.
  3. Evan Shelhamer: Deep Learning for Computer Vision with Caffe and cuDNN. In: Nvidia. Nvidia Corp., October 15, 2014, retrieved April 5, 2017.
  4. Nikhil Ketkar: Deep Learning with Python. A Hands-on Introduction. Apress, 2017, ISBN 978-1-4842-2765-7.
  5. Michael Thomas, Gabriela Motroc: CaffeOnSpark: Yahoo makes deep learning software open source. In: JAXenter. March 1, 2016, retrieved April 5, 2017.