Abstract
This chapter introduces neural networks and the fundamental concepts that underpin modern deep neural networks. Multilayer perceptrons (MLPs) are presented first, and the equivalence between the simplest MLP (i.e., one with just two fully-connected layers of neurons and linear activation functions) and a multivariate linear regression model is demonstrated. Efficient training of MLPs, and of all other modern deep neural networks, is enabled by the error backpropagation algorithm, which is described next. Subsequently, the chapter surveys the key building blocks used to design and train deep neural networks as powerful universal function approximators, including frequently used activation functions, optimization algorithms, loss/objective functions, regularization strategies, and normalization techniques.
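The equivalence mentioned in the abstract can be illustrated directly: with linear (identity) activations, composing two fully-connected layers yields a single affine map, i.e., a multivariate linear regression model. Below is a minimal NumPy sketch of this, with hypothetical layer sizes (4 inputs, 8 hidden units, 3 outputs) chosen for illustration; it is not code from the chapter itself.

```python
import numpy as np

# Hypothetical two-layer MLP with linear activations:
# y = W2 @ (W1 @ x + b1) + b2
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)  # first fully-connected layer
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)  # second fully-connected layer

x = rng.normal(size=4)  # an arbitrary input vector

# Forward pass through the two layers (no nonlinearity between them).
h = W1 @ x + b1
y_mlp = W2 @ h + b2

# The same map as a single multivariate linear regression model:
# y = W @ x + b, with W = W2 @ W1 and b = W2 @ b1 + b2.
W = W2 @ W1
b = W2 @ b1 + b2
y_lin = W @ x + b

assert np.allclose(y_mlp, y_lin)  # the two formulations agree
```

This is why at least one nonlinear activation function between layers is essential: without it, stacking layers adds no representational power beyond a single linear map.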
| Field | Value |
| --- | --- |
| Original language | English |
| Title of host publication | Medical Image Analysis |
| Publisher | Elsevier Masson s.r.l. |
| Pages | 415-450 |
| Number of pages | 36 |
| ISBN (Electronic) | 9780128136577 |
| ISBN (Print) | 9780128136584 |
| DOIs | |
| Publication status | Published - 2024 |
Keywords
- Activation functions
- Error backpropagation
- Loss functions
- Multilayer perceptron
- Optimization
- Regularization