About Me

I am a Lecturer (Assistant Professor) in the Department of Computer Science at Royal Holloway, University of London.

Before that, I was a post-doc at the Institut für Neuroinformatik at Ruhr-Universität Bochum. I completed my PhD with Prof. Wolfgang Maass at the Institute for Theoretical Computer Science, Technische Universität Graz, working on biologically plausible learning and meta-learning in spiking neural networks.

I have a Master's in computer science from the University of Texas at Austin, where I worked with Prof. Risto Miikkulainen on using neuroevolution and task decomposition to learn complex tasks. Right after my Master's, I worked for a couple of years as a Software Development Engineer on the DynamoDB team at Amazon.com.

I received my undergraduate degree from IIT Madras, and right after that I worked as a Research Assistant with Prof. K Gopinath at the Indian Institute of Science, Bangalore.

My Erdős number is 3.

Research Interests

I am broadly interested in learning and intelligence, both algorithmic and biological. My current research focuses on understanding the principles of scalability.

Much of my work draws inspiration from neuroscience and biology in the quest to build better and more general artificial intelligence.

Join Us

If you’re interested in starting a collaboration, don’t hesitate to get in touch.

I’m happy to support your application for externally funded post-doc fellowships to work with me.

If you’re interested in externally funded PhD programs, internships, or remote internships, feel free to email me.

Stay tuned: I’ll update this page regularly as other open positions become available.

Mentoring

I commit a few hours per month to mentoring people from underrepresented groups in academia. If you need guidance on career choices, research directions, PhD applications, or anything else, please book a slot on my calendar.

Publications

Link to my Google Scholar Profile

* denotes equal contributions

  1. Jain A, Subramoney A, Miikkulainen R. "Task decomposition with neuroevolution in extended predator-prey domain". In: Proceedings of the Thirteenth International Conference on the Synthesis and Simulation of Living Systems. East Lansing, MI, USA; 2012. conference (url) (pdf) (bibtex)
  2. Petrovici MA, Schmitt S, Klähn J, Stöckel D, Schroeder A, Bellec G, Bill J, Breitwieser O, Bytschok I, Grübl A, Güttler M, Hartel A, Hartmann S, Husmann D, Husmann K, Jeltsch S, Karasenko V, Kleider M, Koke C, Kononov A, Mauch C, Müller E, Müller P, Partzsch J, Pfeil T, Schiefer S, Scholze S, Subramoney A, Thanasoulis V, Vogginger B, Legenstein R, Maass W, Schüffny R, Mayr C, Schemmel J, Meier K. "Pattern representation and recognition with accelerated analog neuromorphic systems". In: 2017 IEEE International Symposium on Circuits and Systems (ISCAS). 2017. p. 1–4. conference (url) (pdf) (bibtex)
  3. Kaiser J, Stal R, Subramoney A, Roennau A, Dillmann R. "Scaling up liquid state machines to predict over address events from dynamic vision sensors". Bioinspiration & Biomimetics. June 2017; journal (url) (pdf) (bibtex)
  4. Bellec* G, Salaj* D, Subramoney* A, Legenstein R, Maass W. "Long short-term memory and Learning-to-learn in networks of spiking neurons". In: Advances in Neural Information Processing Systems 31. Curran Associates, Inc.; 2018. p. 795–805. conference (url) (pdf) (bibtex)
  5. Kaiser* J, Hoff* M, Konle A, Vasquez Tieck JC, Kappel D, Reichard D, Subramoney A, Legenstein R, Roennau A, Maass W, Dillmann R. "Embodied Synaptic Plasticity With Online Reinforcement Learning". Frontiers in Neurorobotics. 2019;13:81. journal (url) (bibtex)
  6. Subramoney A, Scherr F, Maass W. "Learning to learn motor prediction by networks of spiking neurons". In: Workshop on Robust Artificial Intelligence For Neurorobotics, Edinburgh. 2019. workshop (url) (bibtex)
  7. Bellec* G, Scherr* F, Subramoney A, Hajek E, Salaj D, Legenstein R, Maass W. "A solution to the learning dilemma for recurrent networks of spiking neurons". Nature Communications. July 2020;11(1):3625. journal (url) (preprint) (bibtex)
  8. Subramoney A, Scherr F, Maass W. "Reservoirs Learn to Learn". In: Nakajima K, Fischer I, editors. Reservoir Computing: Theory, Physical Implementations, and Applications. Singapore: Springer; 2021. p. 59–76. (Natural Computing Series). bookchapter (url) (preprint) (bibtex)
  9. Rao* A, Legenstein* R, Subramoney A, Maass W. "A normative framework for learning top-down predictions through synaptic plasticity in apical dendrites". bioRxiv. March 2021; preprint (preprint) (bibtex)
  10. Salaj* D, Subramoney* A, Kraišniković* C, Bellec G, Legenstein R, Maass W. "Spike Frequency Adaptation Supports Network Computations on Temporally Dispersed Information". eLife. July 2021;10:e65459. journal (url) (preprint) (bibtex)
  11. Yegenoglu A, Subramoney A, Hater T, Jimenez-Romero C, Klijn W, Pérez Martín A, van der Vlag M, Herty M, Morrison A, Diaz Pier S. "Exploring parameter and hyper-parameter spaces of neuroscience models on high performance computers with Learning to Learn". Frontiers in Computational Neuroscience. May 2022;16:46. journal (url) (preprint) (bibtex)
  12. Subramoney A, Nazeer KK, Schöne M, Mayr C, Kappel D. "Efficient Recurrent Architectures through Activity Sparsity and Sparse Back-Propagation through Time". In: International Conference on Learning Representations. 2023. conference Spotlight (notable-top-25%) presentation (url) (preprint) (talk) (bibtex)
  13. Subramoney A. "Efficient Real Time Recurrent Learning through Combined Activity and Parameter Sparsity". In: ICLR 2023 Workshop: Sparsity in Neural Networks (SNN). arXiv; 2023. workshop (url) (preprint) (bibtex)
  14. Kappel D, Nazeer KK, Fokam CT, Mayr C, Subramoney A. "Block-local learning with probabilistic latent representations". arXiv; 2023. preprint (preprint) (bibtex)
  15. Grappolini EW, Subramoney A. "Beyond Weights: Deep learning in Spiking Neural Networks with pure synaptic-delay training". In: Proceedings of the 2023 International Conference on Neuromorphic Systems. New York, NY, USA: Association for Computing Machinery; 2023. (ICONS ’23). conference (url) (preprint) (bibtex)
  16. Mukherji R, Schöne M, Nazeer KK, Mayr C, Subramoney A. "Activity Sparsity Complements Weight Sparsity for Efficient RNN Inference". In: NeurIPS 2023 Workshop: ML with New Compute Paradigms (MLNCP). 2023. workshop (preprint) (bibtex)
  17. Nazeer KK, Schöne M, Mukherji R, Vogginger B, Mayr C, Kappel D, Subramoney A. "Language Modeling on a SpiNNaker2 Neuromorphic Chip". In: 2024 IEEE 6th International Conference on AI Circuits and Systems (AICAS). IEEE; 2024. p. 492–6. conference (preprint) (bibtex)
  18. Subramoney A, Bellec G, Scherr F, Legenstein R, Maass W. "Fast learning without synaptic plasticity in spiking neural networks". Scientific Reports. April 2024;14(1):8557. journal (url) (preprint) (bibtex)
  19. Schiewer R, Subramoney A, Wiskott L. "Exploring the limits of Hierarchical World Models in Reinforcement Learning". arXiv preprint arXiv:2406.00483. June 2024; preprint (preprint) (bibtex)
  20. Zhuge J, Mayr C, Subramoney A, Kappel D. "Single Train Multi Deploy on Topology Search Spaces using Kshot-Hypernet". In: 2nd Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ ICML 2024). 2024. workshop (url) (bibtex)
  21. Mukherji R, Schöne M, Nazeer KK, Mayr C, Kappel D, Subramoney A. "Weight Sparsity Complements Activity Sparsity in Neuromorphic Language Models". In: Proceedings of the 2024 International Conference on Neuromorphic Systems; August 2024. conference Full talk/Oral (preprint) (bibtex)
  22. Schöne M, Sushma NM, Zhuge J, Mayr C, Subramoney A, Kappel D. "Scalable Event-by-event Processing of Neuromorphic Sensory Signals With Deep State-Space Models". In: Proceedings of the 2024 International Conference on Neuromorphic Systems; August 2024. conference Best Paper Award (preprint) (bibtex)
  23. Sushma NM, Tian Y, Mestha H, Colombo N, Kappel D, Subramoney A. "State-space models can learn in-context by gradient descent". arXiv preprint arXiv:2410.11687. October 2024; preprint (preprint) (bibtex)
  24. Fokam CT, Nazeer KK, König L, Kappel D, Subramoney A. "Asynchronous Stochastic Gradient Descent with Decoupled Backpropagation and Layer-Wise Updates". arXiv preprint arXiv:2410.05985. October 2024; preprint (preprint) (bibtex)

Theses

  1. Subramoney A. "Evaluating Modular Neuroevolution in Robotic Keepaway Soccer" [Master's thesis]. [Austin, TX]: Department of Computer Science, The University of Texas at Austin; 2012. 54 p. thesis (url) (pdf) (bibtex)
  2. Subramoney A. "Biologically plausible learning and meta-learning in recurrent networks of spiking neurons" [PhD thesis]. [Graz, Austria]: Institute for Theoretical Computer Science, Graz University of Technology; 2020. thesis (url) (pdf) (bibtex)

Recorded talks

Open Source Software

Teaching