The following table compares notable software frameworks, libraries and computer programs for deep learning.
Deep-learning software by name
Software | Creator | Initial release | Software license[lower-alpha 1] | Open source | Platform | Written in | Interface | OpenMP support | OpenCL support | CUDA support | ROCm support[1] | Automatic differentiation[2] | Has pretrained models | Recurrent nets | Convolutional nets | RBM/DBNs | Parallel execution (multi node) | Actively developed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
BigDL | Jason Dai (Intel) | 2016 | Apache 2.0 | Yes | Apache Spark | Scala | Scala, Python | No | No | Yes | Yes | Yes | ||||||
Caffe | Berkeley Vision and Learning Center | 2013 | BSD | Yes | Linux, macOS, Windows[3] | C++ | Python, MATLAB, C++ | Yes | Under development[4] | Yes | No | Yes | Yes[5] | Yes | Yes | No | ? | No[6] |
Chainer | Preferred Networks | 2015 | BSD | Yes | Linux, macOS | Python | Python | No | No | Yes | No | Yes | Yes | Yes | Yes | No | Yes | No[7] |
Deeplearning4j | Skymind engineering team; Deeplearning4j community; originally Adam Gibson | 2014 | Apache 2.0 | Yes | Linux, macOS, Windows, Android (Cross-platform) | C++, Java | Java, Scala, Clojure, Python (Keras), Kotlin | Yes | No[8] | Yes[9][10] | No | Computational Graph | Yes[11] | Yes | Yes | Yes | Yes[12] | Yes |
Dlib | Davis King | 2002 | Boost Software License | Yes | Cross-platform | C++ | C++, Python | Yes | No | Yes | No | Yes | Yes | No | Yes | Yes | Yes | |
Flux | Mike Innes | 2017 | MIT license | Yes | Linux, macOS, Windows (Cross-platform) | Julia | Julia | Yes | No | Yes | Yes[13] | Yes | Yes | No | Yes | Yes | ||
Intel Data Analytics Acceleration Library | Intel | 2015 | Apache License 2.0 | Yes | Linux, macOS, Windows on Intel CPU[14] | C++, Python, Java | C++, Python, Java[14] | Yes | No | No | No | Yes | No | Yes | Yes | |||
Intel Math Kernel Library 2017[15] and later | Intel | 2017 | Proprietary | No | Linux, macOS, Windows on Intel CPU[16] | C[17] | Yes[18] | No | No | No | Yes | No | Yes[19] | Yes[19] | No |||
Google JAX | Google | 2018 | Apache License 2.0 | Yes | Linux, macOS, Windows | Python | Python | Only on Linux | No | Yes | No | Yes | Yes | |||||
Keras | François Chollet | 2015 | MIT license | Yes | Linux, macOS, Windows | Python | Python, R | Only if using Theano as backend | Can use Theano, TensorFlow or PlaidML as backends | Yes | No | Yes | Yes[20] | Yes | Yes | No[21] | Yes[22] | Yes
MATLAB + Deep Learning Toolbox (formerly Neural Network Toolbox) | MathWorks | 1992 | Proprietary | No | Linux, macOS, Windows | C, C++, Java, MATLAB | MATLAB | No | No | Train with Parallel Computing Toolbox and generate CUDA code with GPU Coder[23] | No | Yes[24] | Yes[25][26] | Yes[25] | Yes[25] | Yes | With Parallel Computing Toolbox[27] | Yes
Microsoft Cognitive Toolkit (CNTK) | Microsoft Research | 2016 | MIT license[28] | Yes | Windows, Linux[29] (macOS via Docker on roadmap) | C++ | Python (Keras), C++, Command line,[30] BrainScript[31] (.NET on roadmap[32]) | Yes[33] | No | Yes | No | Yes | Yes[34] | Yes[35] | Yes[35] | No[36] | Yes[37] | No[38] |
ML.NET | Microsoft | Yes | Windows, Linux, macOS | C#, F# | Yes | |||||||||||||
Apache MXNet | Apache Software Foundation | 2015 | Apache 2.0 | Yes | Linux, macOS, Windows,[39][40] AWS, Android,[41] iOS, JavaScript[42] | Small C++ core library | C++, Python, Julia, MATLAB, JavaScript, Go, R, Scala, Perl, Clojure | Yes | No | Yes | No | Yes[43] | Yes[44] | Yes | Yes | Yes | Yes[45] | No
Neural Designer | Artelnics | 2014 | Proprietary | No | Linux, macOS, Windows | C++ | Graphical user interface | Yes | No | Yes | No | Analytical differentiation | No | No | No | No | Yes | Yes |
OpenNN | Artelnics | 2003 | GNU LGPL | Yes | Cross-platform | C++ | C++ | Yes | No | Yes | No | ? | ? | No | No | No | ? | |
PlaidML | Vertex.AI, Intel | 2017 | Apache 2.0 | Yes | Linux, macOS, Windows | Python, C++, OpenCL | Python, C++ | ? | Some OpenCL ICDs are not recognized | No | No | Yes | Yes | Yes | Yes | Yes | Yes | |
PyTorch | Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan (Facebook) | 2016 | BSD | Yes | Linux, macOS, Windows, Android[46] | Python, C, C++, CUDA | Python, C++, Julia, R[47] | Yes | Via separately maintained package[48][49][50] | Yes | Yes | Yes | Yes | Yes | Yes | Yes[51] | Yes | Yes |
Apache SINGA | Apache Software Foundation | 2015 | Apache 2.0 | Yes | Linux, macOS, Windows | C++ | Python, C++, Java | No | Supported in V1.0 | Yes | No | ? | Yes | Yes | Yes | Yes | Yes | |
TensorFlow | Google Brain | 2015 | Apache 2.0 | Yes | Linux, macOS, Windows,[52][53] Android | C++, Python, CUDA | Python (Keras), C/C++, Java, Go, JavaScript, R,[54] Julia, Swift | No | On roadmap[55] but already with SYCL[56] support | Yes | Yes | Yes[57] | Yes[58] | Yes | Yes | Yes | Yes | Yes |
Theano | Université de Montréal | 2007 | BSD | Yes | Cross-platform | Python | Python (Keras) | Yes | Under development[59] | Yes | No | Yes[60][61] | Through Lasagne's model zoo[62] | Yes | Yes | Yes | Yes[63] | No |
Torch | Ronan Collobert, Koray Kavukcuoglu, Clement Farabet | 2002 | BSD | Yes | Linux, macOS, Windows,[64] Android,[65] iOS | C, Lua | Lua, LuaJIT,[66] C, utility library for C++/OpenCL[67] | Yes | Third party implementations[68][69] | Yes[70][71] | No | Through Twitter's Autograd[72] | Yes[73] | Yes | Yes | Yes | Yes[64] | No |
Wolfram Mathematica 10[74] and later | Wolfram Research | 2014 | Proprietary | No | Windows, macOS, Linux, Cloud computing | C++, Wolfram Language, CUDA | Wolfram Language | Yes | No | Yes | No | Yes | Yes[75] | Yes | Yes | Yes | Yes[76] | Yes |
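Several columns in the table above describe capabilities rather than simple facts; for example, the "Automatic differentiation" column refers to a framework's ability to compute gradients of user-written code.[2] The snippet below is a minimal, illustrative sketch of this capability using PyTorch, one of the frameworks listed; the function and input values are arbitrary and not taken from the cited sources.

```python
# Minimal sketch of reverse-mode automatic differentiation in PyTorch.
# The function f and the input values are arbitrary illustrations.
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)  # record operations on x
y = (x ** 2).sum()                                 # f(x) = x1^2 + x2^2
y.backward()                                       # reverse-mode autodiff
print(x.grad)                                      # tensor([4., 6.]), i.e. 2*x
```

Other frameworks in the table expose the same capability through their own interfaces, for example jax.grad in JAX or tf.GradientTape in TensorFlow.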
Comparison of compatibility of machine learning models
Format name | Design goal | Compatible with other formats | Self-contained DNN Model | Pre-processing and Post-processing | Run-time configuration for tuning & calibration | DNN model interconnect | Common platform |
---|---|---|---|---|---|---|---|
TensorFlow, Keras, Caffe, Torch | Algorithm training | No | No / Separate files in most formats | No | No | No | Yes |
ONNX | Algorithm training | Yes | No / Separate files in most formats | No | No | No | Yes |
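As a brief, illustrative sketch of the interchange role summarized above (assuming the torch and onnxruntime packages are installed; the toy model is hypothetical), a model defined in one framework can be exported to a self-contained ONNX file and executed by an unrelated runtime:

```python
# Illustrative sketch only: export a toy PyTorch model to ONNX and run it
# with ONNX Runtime, independently of the framework that produced it.
import torch
import onnxruntime as ort

model = torch.nn.Linear(4, 2)                    # hypothetical toy model
example = torch.randn(1, 4)                      # example input fixing the shape
torch.onnx.export(model, example, "model.onnx")  # write a self-contained ONNX file

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: example.numpy()})
print(outputs[0].shape)                          # (1, 2)
```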
References
- ↑ "Deep Learning — ROCm 4.5.0 documentation". Archived from the original on 2022-12-05. Retrieved 2022-09-27.
- 1 2 Atilim Gunes Baydin; Barak A. Pearlmutter; Alexey Andreyevich Radul; Jeffrey Mark Siskind (20 February 2015). "Automatic differentiation in machine learning: a survey". arXiv:1502.05767 [cs.LG].
- ↑ "Microsoft/caffe". GitHub. 30 October 2021.
- ↑ "Caffe: a fast open framework for deep learning". July 19, 2019 – via GitHub.
- ↑ "Caffe | Model Zoo". caffe.berkeleyvision.org.
- ↑ GitHub - BVLC/caffe: Caffe: a fast open framework for deep learning., Berkeley Vision and Learning Center, 2019-09-25, retrieved 2019-09-25
- ↑ Preferred Networks Migrates its Deep Learning Research Platform to PyTorch, 2019-12-05, retrieved 2019-12-27
- ↑ "Support for Open CL · Issue #27 · deeplearning4j/nd4j". GitHub.
- ↑ "N-Dimensional Scientific Computing for Java". Archived from the original on 2016-10-16. Retrieved 2016-02-05.
- ↑ "Comparing Top Deep Learning Frameworks". Deeplearning4j. Archived from the original on 2017-11-07. Retrieved 2017-10-31.
- ↑ Chris Nicholson; Adam Gibson. "Deeplearning4j Models". Archived from the original on 2017-02-11. Retrieved 2016-03-02.
- ↑ Deeplearning4j. "Deeplearning4j on Spark". Deeplearning4j. Archived from the original on 2017-07-13. Retrieved 2016-09-01.
{{cite web}}
: CS1 maint: numeric names: authors list (link) - ↑ "Metalhead". FluxML. 29 October 2021.
- 1 2 "Intel® Data Analytics Acceleration Library (Intel® DAAL)". software.intel.com. November 20, 2018.
- ↑ "Intel® Math Kernel Library Release Notes and New Features". Intel.
- ↑ "Intel® Math Kernel Library (Intel® MKL)". software.intel.com. September 11, 2018.
- ↑ "Deep Neural Network Functions". software.intel.com. May 24, 2019.
- ↑ "Using Intel® MKL with Threaded Applications". software.intel.com. June 1, 2017.
- 1 2 "Intel® Xeon Phi™ Delivers Competitive Performance For Deep Learning—And Getting Better Fast". software.intel.com. March 21, 2019.
- ↑ "Applications - Keras Documentation". keras.io.
- ↑ "Is there RBM in Keras? · Issue #461 · keras-team/keras". GitHub.
- ↑ "Does Keras support using multiple GPUs? · Issue #2436 · keras-team/keras". GitHub.
- ↑ "GPU Coder - MATLAB & Simulink". MathWorks. Retrieved 13 November 2017.
- ↑ "Automatic Differentiation Background - MATLAB & Simulink". MathWorks. September 3, 2019. Retrieved November 19, 2019.
- 1 2 3 "Neural Network Toolbox - MATLAB". MathWorks. Retrieved 13 November 2017.
- ↑ "Deep Learning Models - MATLAB & Simulink". MathWorks. Retrieved 13 November 2017.
- ↑ "Parallel Computing Toolbox - MATLAB". MathWorks. Retrieved 13 November 2017.
- ↑ "CNTK/LICENSE.md at master · Microsoft/CNTK · GitHub". GitHub.
- ↑ "Setup CNTK on your machine". GitHub.
- ↑ "CNTK usage overview". GitHub.
- ↑ "BrainScript Network Builder". GitHub.
- ↑ ".NET Support · Issue #960 · Microsoft/CNTK". GitHub.
- ↑ "How to train a model using multiple machines? · Issue #59 · Microsoft/CNTK". GitHub.
- ↑ "Prebuilt models for image classification · Issue #140 · microsoft/CNTK". GitHub.
- 1 2 "CNTK - Computational Network Toolkit". Microsoft Corporation.
- ↑ "Restricted Boltzmann Machine with CNTK #534". GitHub, Inc. 27 May 2016. Retrieved 30 October 2023.
- ↑ "Multiple GPUs and machines". Microsoft Corporation.
- ↑ "Disclaimer". CNTK TEAM. 6 November 2021.
- ↑ "Releases · dmlc/mxnet". Github.
- ↑ "Installation Guide — mxnet documentation". Readthdocs.
- ↑ "MXNet Smart Device". ReadTheDocs. Archived from the original on 2016-09-21. Retrieved 2016-05-19.
- ↑ "MXNet.js". Github. 28 October 2021.
- ↑ "— Redirecting to mxnet.io". mxnet.readthedocs.io.
- ↑ "Model Gallery". GitHub. 29 October 2022.
- ↑ "Run MXNet on Multiple CPU/GPUs with Data Parallel". GitHub.
- ↑ "PyTorch". Dec 17, 2021.
- ↑ "Falbel D, Luraschi J (2023). torch: Tensors and Neural Networks with 'GPU' Acceleration". torch.mlverse.org. Retrieved 2023-11-28.
- ↑ "OpenCL build of pytorch: (in-progress, not useable) - hughperkins/pytorch-coriander". July 14, 2019 – via GitHub.
- ↑ "DLPrimitives/OpenCL out of tree backend for pytorch - artyom-beilis/pytorch_dlprim". Jan 21, 2022 – via GitHub.
- ↑ "OpenCL Support · Issue #488 · pytorch/pytorch". GitHub.
- ↑ "Restricted Boltzmann Machines (RBMs) in PyTorch". GitHub. 14 November 2022.
- ↑ "Install TensorFlow with pip".
- ↑ "TensorFlow 0.12 adds support for Windows".
- ↑ Allaire, J.J.; Kalinowski, T.; Falbel, D.; Eddelbuettel, D.; Yuan, T.; Golding, N. (28 September 2023). "tensorflow: R Interface to 'TensorFlow'". The Comprehensive R Archive Network. Retrieved 30 October 2023.
- ↑ "tensorflow/roadmap.md at master · tensorflow/tensorflow · GitHub". GitHub. January 23, 2017. Retrieved May 21, 2017.
- ↑ "OpenCL support · Issue #22 · tensorflow/tensorflow". GitHub.
- ↑ "TensorFlow". TensorFlow.
- ↑ "Models and examples built with TensorFlow". July 19, 2019 – via GitHub.
- ↑ "Using the GPU — Theano 0.8.2 documentation". Archived from the original on 2017-04-01. Retrieved 2016-01-21.
- ↑ "gradient – Symbolic Differentiation — Theano 1.0.0 documentation". deeplearning.net.
- ↑ "Automatic vs. Symbolic differentiation".
- ↑ "Recipes/modelzoo at master · Lasagne/Recipes · GitHub". GitHub.
- ↑ "Using multiple GPUs — Theano 1.0.0 documentation". deeplearning.net.
- 1 2 "torch/torch7". July 18, 2019 – via GitHub.
- ↑ "GitHub - soumith/torch-android: Torch-7 for Android". GitHub. 13 October 2021.
- ↑ "Torch7: A Matlab-like Environment for Machine Learning" (PDF).
- ↑ "GitHub - jonathantompson/jtorch: An OpenCL Torch Utility Library". GitHub. 18 November 2020.
- ↑ "Cheatsheet". GitHub.
- ↑ "cltorch". GitHub.
- ↑ "Torch CUDA backend". GitHub.
- ↑ "Torch CUDA backend for nn". GitHub.
- ↑ "Autograd automatically differentiates native Torch code: twitter/torch-autograd". July 9, 2019 – via GitHub.
- ↑ "ModelZoo". GitHub.
- ↑ "Launching Mathematica 10". Wolfram.
- ↑ "Wolfram Neural Net Repository of Neural Network Models". resources.wolframcloud.com.
- ↑ "Parallel Computing—Wolfram Language Documentation". reference.wolfram.com.
- ↑ "Deep Learning — ROCm 4.5.0 documentation". Archived from the original on 2022-12-05. Retrieved 2022-09-27.