Many real-world problems involve data organized in structured forms such as sequences, trees, or graphs. Traditional neural networks are suited to unstructured data of fixed size. Recurrent neural networks (RNNs) and recursive neural networks (RecNNs) have been proposed to extend the applicability of neural network models to structured domains. This seminar focuses in particular on Echo State Networks (ESNs) and, more generally, on Reservoir Computing (RC), a recent paradigm for designing RNNs in a very efficient way. We show that contractivity of the state transition function is a key feature of ESNs and relate it to the Markovian bias of RNN models. We propose four main architectural features that may positively influence ESNs: input variability, multiple time-scale dynamics, interconnections among neurons, and regression in a high-dimensional feature space. Our investigation combines a study of predictive performance on several benchmark problems with a PCA analysis of the reservoir state space.
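To make the role of contractivity concrete, the following is a minimal ESN sketch. All specifics here are illustrative assumptions (reservoir size 100, spectral-radius target 0.9, a toy sine prediction task, ridge-regression readout), not values from the seminar: rescaling the random reservoir matrix so its spectral radius is below 1 makes the tanh state transition contractive, which underlies the echo state property.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the seminar).
n_in, n_res = 1, 100

# Random input and reservoir weight matrices.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))

# Rescale the reservoir so its spectral radius is 0.9 < 1; for tanh
# units this makes the state transition a contraction, yielding the
# echo state property (fading memory of initial conditions).
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Drive the fixed reservoir with an input sequence; collect states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained, here by ridge regression on a
# toy one-step-ahead sine prediction task.
u = np.sin(0.2 * np.arange(300))
X = run_reservoir(u[:-1])   # reservoir states
y = u[1:]                   # next-step targets
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
```

Only `W_out` is learned; the untrained, contractive reservoir supplies the high-dimensional feature space over which the linear regression operates.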
A particularly interesting variant that emerged from our study, the phi-ESN, is presented. Future work will mainly consist in extending our considerations and investigations to tree and graph domains.