7 Comments

Towards Deep Symbolic Reinforcement Learning:

https://arxiv.org/abs/1609.05518

The Future of AI is Self-Organizing and Self-Assembling (w/ Prof. Sebastian Risi)

https://www.youtube.com/watch?v=_7xpGve9QEE

Evolution through Large Models (OpenAI, 2022, 58 pages)

Paper: https://arxiv.org/abs/2206.08896

The Alberta Plan for AI Research

https://arxiv.org/abs/2208.11173

Also, to make reinforcement learning more efficient we must build on the experience of previously trained robots: use pretrained models and distill their experience into a new model.

This is what evolution does.
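A minimal sketch of what that distillation step could look like, assuming a pretrained teacher policy and a randomly initialized student; the network sizes, optimizer settings, and the randomly sampled observations are placeholders rather than anyone's actual setup:

```python
# Hypothetical policy-distillation sketch: the student is trained to match the
# teacher's action distribution on a batch of observations.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, N_ACTIONS = 8, 4  # assumed environment dimensions

teacher = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, N_ACTIONS))
student = nn.Sequential(nn.Linear(OBS_DIM, 32), nn.Tanh(), nn.Linear(32, N_ACTIONS))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(1000):
    obs = torch.randn(128, OBS_DIM)  # stand-in for states visited by the teacher
    with torch.no_grad():
        teacher_logp = F.log_softmax(teacher(obs), dim=-1)
    student_logp = F.log_softmax(student(obs), dim=-1)
    # KL(teacher || student): push the student to reproduce the teacher's behavior.
    loss = F.kl_div(student_logp, teacher_logp, log_target=True, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```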

We also need symbolic reasoning. If the robot understands language, it can receive more guided training. For this we need powerful NLP models capable of writing programs and of coordinating other, more instinctual models.
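A toy sketch of the kind of coordination meant here, where a language model writes or selects a program that sequences lower-level skills; `propose_plan` and the skill stubs below are hypothetical placeholders for a real code-generating model and real controllers:

```python
def propose_plan(instruction: str) -> list[str]:
    # In practice this call would go to a code-writing language model; here it is
    # a hard-coded placeholder so the example stays self-contained.
    return ["locate_object", "grasp", "place"]

# "Instinctual" low-level skills, stubbed out as pure functions on a state dict.
SKILLS = {
    "locate_object": lambda state: {**state, "target": "cup"},
    "grasp":         lambda state: {**state, "holding": state.get("target")},
    "place":         lambda state: {**state, "holding": None, "done": True},
}

def run(instruction: str) -> dict:
    state: dict = {}
    for skill in propose_plan(instruction):  # the language model decides the order
        state = SKILLS[skill](state)         # the instinctual controller executes
    return state

print(run("put the cup on the shelf"))
```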

So here we are still limited by hardware and by global training algorithms that do not scale cheaply.

The main issue is the lack of self-evolvable, life-like, energy-efficient, and cheap AI hardware. AI algorithms and theory are already ahead; we need AI hardware to catch up. Life evolved neural hardware first, from simple nerve cells to the full brain, and algorithms came second.

First, implement an evolvable artificial neuron-like cell that survives by making connections, processing signals, and delivering useful outputs. AI hardware should be able to evolve on its own, from a dumb AI driven by instincts to a higher intellect like Einstein's. If we need to constantly work to improve this AI instead of letting it evolve on its own, then it will die with us and never survive on its own.

So the first priority would be a framework of modular, cell-like artificial networks that can come together and cooperate. Unfortunately, gradient descent plus backprop is not capable of training such an evolvable network of evolvable networks; it does not scale.
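One gradient-free alternative in that spirit is evolution strategies, where a population of mutated networks is scored and the parent parameters move toward the better-scoring mutants, with no backprop anywhere. The sketch below uses a toy fitness function and arbitrary dimensions and rates purely for illustration:

```python
# Minimal evolution-strategies sketch: improve one module's weights by mutation
# and selection alone (assumed toy objective, no backprop).
import numpy as np

rng = np.random.default_rng(0)
IN_DIM, OUT_DIM = 4, 2           # toy "module" shape
POP, SIGMA, LR = 64, 0.1, 0.02   # population size, mutation scale, step size

def fitness(w: np.ndarray) -> float:
    # Toy objective: how well this linear "cell" maps a fixed input to a fixed target.
    x = np.ones(IN_DIM)
    target = np.array([1.0, -1.0])
    return -float(np.sum((w @ x - target) ** 2))

w = rng.normal(size=(OUT_DIM, IN_DIM))  # the module's parameters (no gradients kept)
for gen in range(300):
    noise = rng.normal(size=(POP, OUT_DIM, IN_DIM))           # one mutation per member
    scores = np.array([fitness(w + SIGMA * n) for n in noise])
    ranks = (scores - scores.mean()) / (scores.std() + 1e-8)  # fitness shaping
    # Move the parent toward mutations that scored above average (OpenAI-style ES update).
    w += (LR / (POP * SIGMA)) * np.einsum("p,pij->ij", ranks, noise)

print("final fitness:", fitness(w))
```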
