About Me


I am a postdoc at the Institut für Neuroinformatik at the Ruhr-Universität Bochum. Last year, I finished my PhD with Prof. Wolfgang Maass at the Institute for Theoretical Computer Science at Technische Universität Graz. My general research interest is understanding intelligence using machine learning methods that draw heavily on neuroscience and biology. I pursue this by developing mathematical and computational models of learning and memory, built with state-of-the-art machine learning methods.

Before starting my PhD, I was a Software Development Engineer on the DynamoDB team at Amazon.com for a couple of years.

I have a Master's degree in computer science from The University of Texas at Austin, where I worked with Prof. Risto Miikkulainen on using neuroevolution and task decomposition to learn complex tasks. I also worked with Prof. Peter Stone on agents that learn from human demonstrations and rewards.

After finishing my undergraduate degree at IIT Madras, I worked as a Research Assistant with Prof. K Gopinath at the Indian Institute of Science, Bangalore.

Detailed resume available on request.

Link to my Google Scholar Profile


  1. Jain A, Subramoney A, Miikkulainen R. "Task decomposition with neuroevolution in extended predator-prey domain". In: Proceedings of Thirteenth International Conference on the Synthesis and Simulation of Living Systems. East Lansing, MI, USA; 2012. conference (url) (pdf) (bibtex)
  2. Subramoney A. "Evaluating Modular Neuroevolution in Robotic Keepaway Soccer" [Master's thesis]. [Austin, TX]: Department of Computer Science, The University of Texas at Austin; 2012. 54 pages. thesis (url) (pdf) (bibtex)
  3. Petrovici MA, Schmitt S, Klähn J, Stöckel D, Schroeder A, Bellec G, Bill J, Breitwieser O, Bytschok I, Grübl A, Güttler M, Hartel A, Hartmann S, Husmann D, Husmann K, Jeltsch S, Karasenko V, Kleider M, Koke C, Kononov A, Mauch C, Müller E, Müller P, Partzsch J, Pfeil T, Schiefer S, Scholze S, Subramoney A, Thanasoulis V, Vogginger B, Legenstein R, Maass W, Schüffny R, Mayr C, Schemmel J, Meier K. "Pattern representation and recognition with accelerated analog neuromorphic systems". In: 2017 IEEE International Symposium on Circuits and Systems (ISCAS). 2017. p. 1–4. conference (url) (pdf) (bibtex)
  4. Kaiser J, Stal R, Subramoney A, Roennau A, Dillmann R. "Scaling up liquid state machines to predict over address events from dynamic vision sensors". Bioinspiration & Biomimetics. June 2017; journal (url) (pdf) (bibtex)
  5. Bellec* G, Salaj* D, Subramoney* A, Legenstein R, Maass W. "Long short-term memory and Learning-to-learn in networks of spiking neurons". In: Advances in Neural Information Processing Systems 31. Curran Associates, Inc.; 2018. p. 795–805. conference (url) (pdf) (bibtex)
  6. Kaiser* J, Hoff* M, Konle A, Vasquez Tieck JC, Kappel D, Reichard D, Subramoney A, Legenstein R, Roennau A, Maass W, others. "Embodied Synaptic Plasticity With Online Reinforcement Learning". Frontiers in Neurorobotics. 2019;13:81. journal (url) (bibtex)
  7. Yegenoglu* A, Diaz* S, Klijn* W, Peyser* A, Subramoney A, Maass W, Visconti G, Herty M. "Learning to Learn on High Performance Computing". In: Society for Neuroscience Meeting 2019. Jülich Supercomputing Center; 2019. workshop (url) (bibtex)
  8. Subramoney A, Scherr F, Maass W. "Learning to learn motor prediction by networks of spiking neurons". In: Workshop on Robust Artificial Intelligence For Neurorobotics, Edinburgh. 2019. workshop (url) (bibtex)
  9. Subramoney* A, Scherr* F, Bellec* G, Hajek E, Salaj D, Legenstein R, Maass W. "Slow processes of neurons enable a biologically plausible approximation to policy gradient". In: NeurIPS 2019 Workshop: Biological and Artificial Reinforcement Learning. 2019. workshop (url) (pdf) (bibtex)
  10. Bellec* G, Scherr* F, Hajek E, Salaj D, Subramoney A, Legenstein R, Maass W. "Eligibility traces provide a data-inspired alternative to backpropagation through time". In: NeurIPS 2019 Workshop: Real neurons and hidden units. 2019. workshop (url) (bibtex)
  11. Bellec* G, Scherr* F, Subramoney A, Hajek E, Salaj D, Legenstein R, Maass W. "A solution to the learning dilemma for recurrent networks of spiking neurons". Nature Communications. July 2020;11(1):3625. journal (url) (preprint) (bibtex)
  12. Subramoney A. "Biologically plausible learning and meta-learning in recurrent networks of spiking neurons" [PhD thesis]. [Graz, Austria]: Institute for Theoretical Computer Science, Graz University of Technology; 2020. thesis (url) (pdf) (bibtex)
  13. Subramoney A, Scherr F, Maass W. "Reservoirs Learn to Learn". In: Nakajima K, Fischer I, editors. Reservoir Computing: Theory, Physical Implementations, and Applications. Singapore: Springer; 2021. p. 59–76. (Natural Computing Series). bookchapter (url) (preprint) (bibtex)
  14. Subramoney A, Bellec G, Scherr F, Legenstein R, Maass W. "Revisiting the role of synaptic plasticity and network dynamics for fast learning in spiking neural networks". bioRxiv. January 2021; preprint (preprint) (bibtex)
  15. Rao* A, Legenstein* R, Subramoney A, Maass W. "A normative framework for learning top-down predictions through synaptic plasticity in apical dendrites". bioRxiv. March 2021; preprint (preprint) (bibtex)
  16. Salaj* D, Subramoney* A, Kraišniković* C, Bellec G, Legenstein R, Maass W. "Spike Frequency Adaptation Supports Network Computations on Temporally Dispersed Information". eLife. July 2021;10:e65459. journal (url) (preprint) (bibtex)

(*: equal contributions)

Open Source Software