Opendata, web and dolomites


Recurrent Neural Networks and Related Machines That Learn Algorithms

Project "AlgoRNN" data sheet

The following table provides information about the project.


Organization address
City: Lugano
Postcode: 6904

Contact info
Title: n.a.
Name: n.a.
Surname: n.a.
Function: n.a.
Email: n.a.
Telephone: n.a.
Fax: n.a.

 Coordinator country: Switzerland [CH]
 Total cost: €2,500,000
 EC max contribution: €2,500,000 (100%)
 Programme: H2020-EU.1.1. (EXCELLENT SCIENCE - European Research Council (ERC))
 Call code: ERC-2016-ADG
 Funding scheme: ERC-ADG
 Starting year: 2017
 Duration: 2017-10-01 to 2022-09-30


Take a look at the project's partnership.

# participants  country  role  EC contrib. [€] 


 Project objective

Recurrent neural networks (RNNs) are general parallel-sequential computers. Some learn their programs or weights. Our supervised Long Short-Term Memory (LSTM) RNNs were the first to win pattern recognition contests, and recently enabled the best known results in speech and handwriting recognition, machine translation, etc. They are now available to billions of users through the world's most valuable public companies, including Google and Apple. Nevertheless, in many real-world tasks RNNs do not yet live up to their full potential. Although universal in theory, in practice they fail to learn important types of algorithms. This ERC project will go far beyond today's best RNNs through novel RNN-like systems that address some of the biggest open RNN problems and hottest RNN research topics: (1) How can RNNs learn to control (through internal spotlights of attention) separate large short-term memory structures, such as sub-networks with fast weights, to improve performance on many natural short-term memory-intensive tasks which are currently hard for RNNs to learn, such as answering detailed questions about recently observed videos? (2) How can such RNN-like systems metalearn entire learning algorithms that outperform the original learning algorithms? (3) How can efficient transfer learning be achieved from one RNN-learned set of problem-solving programs to new RNN programs solving new tasks? In other words, how can one RNN-like system actively learn to exploit algorithmic information contained in the programs running on another? We will test our systems on existing benchmarks, and create new, more challenging multi-task benchmarks. This will be supported by a rather cheap, GPU-based mini-brain for implementing large RNNs.
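The fast-weight idea in point (1) can be illustrated with a toy sketch: a slowly learned network (trained over many sequences) writes into a rapidly changing weight matrix that serves as short-term memory. The code below is a minimal illustration, not the project's actual architecture; the function name, the Hebbian outer-product update, and the parameters `lam` (decay) and `eta` (write strength) are all illustrative assumptions.

```python
import numpy as np

def fast_weight_step(x, h, W_slow, F, lam=0.9, eta=0.5):
    """One step of a toy fast-weight RNN (illustrative sketch).

    W_slow: slowly learned input weights (the part trained by gradient descent).
    F: fast weight matrix acting as short-term memory, rewritten every
       step by a Hebbian outer-product rule with decay factor lam.
    """
    h_new = np.tanh(W_slow @ x + F @ h)             # recurrence runs through the fast weights
    F_new = lam * F + eta * np.outer(h_new, h_new)  # decay old memory, imprint the new state
    return h_new, F_new

# Tiny demo: feed a random sequence through the cell.
rng = np.random.default_rng(0)
W_slow = 0.1 * rng.standard_normal((4, 3))
h, F = np.zeros(4), np.zeros((4, 4))
for _ in range(5):
    h, F = fast_weight_step(rng.standard_normal(3), h, W_slow, F)
```

The point of the design is the separation of timescales: `W_slow` changes only across training, while `F` changes at every time step, giving the network a writable short-term store without enlarging the hidden state.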


List of publications (year, authors and title, venue, last update):

2018: D. Ha, J. Schmidhuber, "Recurrent World Models Facilitate Policy Evolution", NeurIPS 2018 (last update 2019-06-07)
2018: I. Schlag, J. Schmidhuber, "Learning to Reason with Third Order Tensor Products", NeurIPS 2018 (last update 2019-06-07)
2018: A. M. Metelli, M. Papini, F. Faccio, M. Restelli, "Policy Optimization via Importance Sampling", NeurIPS 2018 (last update 2019-06-07)
2019: R. Csordas, J. Schmidhuber, "Improving Differentiable Neural Computers Through Memory Masking, De-allocation, and Link Distribution Sharpness Control", ICLR 2019 (last update 2019-06-07)
2018: L. Kirsch, J. Kunze, D. Barber, "Modular Networks: Learning to Decompose Neural Computation", NeurIPS 2018 (last update 2019-06-07)

Are you the coordinator (or a participant) of this project? Please send me more information about the "ALGORNN" project.

For instance: the website URL (it has not been provided by EU Opendata yet), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as picture files, not embedded in a Word file), the Twitter account, the LinkedIn page, etc.

Send me an email and I will put them on your project's page as soon as possible.

Thanks. Then please add a link to this page on your project's website.

The information about "ALGORNN" is provided by the European Opendata Portal: CORDIS opendata.

More projects from the same programme (H2020-EU.1.1.):

AdjustNet (2020)
Self-Adjusting Networks

Exploiting genome replication to design improved plant growth strategies

DINAMIX (2019)
Real-time diffusion NMR analysis of mixtures