A Study on the Biological Basis of Relational Learning

Researchers at ML Collective in San Francisco and collaborators at Columbia University have delved into the biological processes governing relational learning. Their study, recently published in Nature Neuroscience, used brain-inspired artificial neural networks to gain insight into how humans and certain animals learn relationships between objects or events.
The principle at the heart of the research is fairly simple: if A is greater than B and B is greater than C, then A is also greater than C. This concept, known as ‘transitive inference,’ is fundamental to our understanding of the world, guiding tasks such as ranking an apple, an orange, and a banana (A > B > C). The researchers used transitive inference tasks to investigate the neural basis of relational learning in humans and nonhuman animals. Using a specific type of brain-inspired artificial neural network, they identified several processes in the brain that might play a role in relational learning. One of the fascinating insights from the study is that when explicit information is given about only some of the relationships between objects, humans and certain animals often grasp the remaining ones, even though those pairings were never shown to them directly.
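To make the task concrete, the sketch below sets up a toy transitive inference experiment. It is a hypothetical illustration, not the authors’ model or code: a small feed-forward network (with assumed one-hot item codes, a single hidden layer, and plain gradient descent) is trained only on adjacent pairs of a ranked list and then queried on a non-adjacent pair it has never seen.

```python
# Toy transitive inference task (illustrative sketch only, not the study's model).
import numpy as np

rng = np.random.default_rng(0)

n_items = 7                      # ranked items: item 0 > item 1 > ... > item 6
items = np.eye(n_items)          # one-hot code for each item

# Training set: only adjacent pairs (0,1), (1,2), ..., presented in both orders.
# Label = 1 if the first item of the pair outranks the second, else 0.
train_pairs, train_labels = [], []
for i in range(n_items - 1):
    train_pairs.append(np.concatenate([items[i], items[i + 1]])); train_labels.append(1)
    train_pairs.append(np.concatenate([items[i + 1], items[i]])); train_labels.append(0)
X = np.array(train_pairs)
y = np.array(train_labels, dtype=float)

# One hidden layer trained with gradient descent on a logistic loss.
n_hidden = 16
W1 = rng.normal(0, 0.5, (X.shape[1], n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(0, 0.5, n_hidden)
b2 = 0.0
lr = 0.5

def forward(x):
    h = np.tanh(x @ W1 + b1)                      # hidden representation
    p = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))      # P(first item outranks second)
    return h, p

for epoch in range(2000):
    h, p = forward(X)
    err = p - y                                   # gradient of logistic loss w.r.t. logits
    w2 -= lr * h.T @ err / len(X)
    b2 -= lr * err.mean()
    dh = np.outer(err, w2) * (1 - h ** 2)         # backpropagate through tanh
    W1 -= lr * X.T @ dh / len(X)
    b1 -= lr * dh.mean(axis=0)

# Query a non-adjacent pair never seen during training, e.g. item 1 vs item 5.
_, p_novel = forward(np.concatenate([items[1], items[5]])[None, :])
print(f"P(item 1 outranks item 5) = {p_novel[0]:.2f}")   # typically well above 0.5
```

Networks of this kind typically rank the novel, untrained pair correctly, which is the behavioral signature of transitive inference; the published study examines how such generalization could arise in the brain, using its own brain-inspired network models rather than this simplified example.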
