Breaking the Activation Function Bottleneck through Adaptive Parameterization

Sebastian Flennerhag, Hujun Yin, John Keane, Mark Elliot

Research output: Chapter in Book/Conference proceeding › Conference contribution (peer-reviewed)

Abstract

Standard neural network architectures are non-linear only by virtue of a simple element-wise activation function, making them both brittle and excessively large. In this paper, we consider methods for making the feed-forward layer more flexible while preserving its basic structure. We develop simple drop-in replacements that learn to adapt their parameterization conditional on the input, thereby increasing statistical efficiency significantly. We present an adaptive LSTM that advances the state of the art for the Penn Treebank and WikiText-2 word-modeling tasks while using fewer parameters and converging in less than half as many iterations.
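As a rough illustration of the idea of a feed-forward layer that adapts its parameterization conditional on the input, the sketch below modulates a shared linear layer with per-example scales produced by a small adapter network. This is a minimal, hypothetical PyTorch sketch for intuition only; the class name AdaptiveLinear, the adapter design, and the scaling scheme are assumptions, not the construction used in the paper.

import torch
import torch.nn as nn

class AdaptiveLinear(nn.Module):
    """Hypothetical input-conditioned ("adaptive") feed-forward layer.

    A small adapter network produces per-example scaling factors that
    modulate the inputs and outputs of a shared linear map, so the
    effective parameterization changes with each input.
    """

    def __init__(self, in_features: int, out_features: int, adapter_dim: int = 32):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        # Adapter producing input-dependent scales for the input and output dimensions.
        self.adapter = nn.Sequential(
            nn.Linear(in_features, adapter_dim),
            nn.Tanh(),
            nn.Linear(adapter_dim, in_features + out_features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scales = self.adapter(x)
        in_scale, out_scale = scales.split(
            [self.base.in_features, self.base.out_features], dim=-1
        )
        # Rescale the input before, and the activation after, the shared layer:
        # a per-example reparameterization of the shared weights.
        return out_scale * self.base(in_scale * x)

if __name__ == "__main__":
    layer = AdaptiveLinear(in_features=16, out_features=8)
    h = layer(torch.randn(4, 16))  # batch of 4 examples
    print(h.shape)  # torch.Size([4, 8])

Used as a drop-in replacement for a standard linear layer, such a module keeps the basic feed-forward structure while letting the effective weights vary with the input.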
Original language: English
Title of host publication: Advances in Neural Information Processing Systems 31 (NeurIPS 2018)
Publication status: E-pub ahead of print - 22 Nov 2018
Event: NeurIPS (NIPS) 2018 - Tokyo, Japan
Duration: 2 Dec 2018 - 8 Jan 2019
Conference number: 32
https://nips.cc/

Conference

Conference: NeurIPS (NIPS) 2018
Abbreviated title: NIPS
Country/Territory: Japan
City: Tokyo
Period: 2/12/18 - 8/01/19
Internet address: https://nips.cc/
