Thanks for sharing your work. Without rerunning the experiment, could we take a look at the results on ARC-AGI 2, in particular, which tasks the model was able to solve and which ones it failed?
This approach reminds me of the meta-learning idea Joscha Bach described in his first interview on the Lex Fridman podcast. He describes a neural network as an algorithm that automatically searches for an algorithm that solves the problem. Meta-learning sits one level above that: it searches for an algorithm that discovers a learning algorithm for a given domain, which he claims is closer to how our brains work.
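To make the "learning to learn" idea concrete, here's a minimal first-order MAML-style sketch in Python. Everything in it (the toy linear-regression tasks, the learning rates, the loop structure) is illustrative only, not taken from the article: the outer loop searches for an initialization from which one inner gradient step solves each new task well.

```python
# Minimal sketch of meta-learning (first-order MAML), purely illustrative.
# Outer loop: learn an initialization. Inner loop: ordinary learning per task.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is 1D linear regression y = a*x with a task-specific slope a."""
    a = rng.uniform(0.5, 2.5)
    x = rng.uniform(-1.0, 1.0, size=20)
    return x, a * x

def loss_grad(w, x, y):
    """Squared-error loss and its gradient for the scalar model y_hat = w*x."""
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(err * x)

w_meta = 0.0                     # meta-learned initialization (outer parameter)
inner_lr, outer_lr = 0.5, 0.05

for step in range(2000):
    x, y = sample_task()
    # Inner loop: one gradient step of ordinary learning on this task.
    _, g = loss_grad(w_meta, x, y)
    w_task = w_meta - inner_lr * g
    # Outer loop: nudge the initialization so the *adapted* model does better.
    # (First-order approximation: the second-derivative term of full MAML
    # is dropped, as in FOMAML.)
    _, g_adapted = loss_grad(w_task, x, y)
    w_meta -= outer_lr * g_adapted

print("meta-learned init:", w_meta)  # converges near 1.5, the center of the task family
```

The inner loop is the ordinary "algorithm that searches for an algorithm"; the outer loop is the level above it, shaping the starting point so that inner-loop learning works across the whole task distribution.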
Eerily similar to DSPy's GEPA:
https://arxiv.org/abs/2507.19457
Thank you for sharing this. Brilliantly simple and elegant solution.
These are all arguably instances of Recursive Emergence: https://github.com/Recursive-Emergence/RE/blob/main/chapter_2_mathematical_foundations.md
👍