At great risk of sounding completely ignorant, this approach is basically what I thought the point of machine learning was - cleverly using feedback loops to improve things automatically. The thing that sticks out to me as particularly cool about FunSearch is the use of programs as inputs/outputs and the fact that they managed to automate feedback.
To be clear, my granular understanding here is pretty naive (I'm barely proficient in Python), but when I daydream about things you could solve with machine learning/AI, this is the approach I always think of, and I guess it's how I assumed it already worked: load it up with the best information we currently have, define the desired results as clearly as possible, implement some form of automatic feedback, and let it run iteratively until it produces something better than what you started with.
Is this a case of "well no shit, but actually implementing that effectively is the hard part"? Is the achievement being able to apply it quickly to a wide variety of problems? I guess I'm trying to understand whether this is a novel idea (and if so, which parts are novel), or whether the idea has been around and this is a novel implementation.
The important parts are "how do you change X so that it heads towards the goal", and "how do you do that quickly and efficiently".
Otherwise the description is the same as "select randomly, keep the best, iterate".
The goal is also complex. You might be thinking of "find the most efficient program", but if I understand correctly that's not what's happening here: we're trying to get a program that makes other, unseen programs more efficient. That's hard to define as a goal.
They did remove the worst results from the pool over time; the others were just used as seeds to generate new examples from, instead of starting each function from scratch.
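To make the mechanics concrete, here's a minimal toy sketch of that kind of loop. In actual FunSearch the "mutate" step is an LLM proposing a new program from a sampled parent and the scorer runs the candidate against an automated evaluator; here a "program" is just a list of numbers and mutation is a random tweak, so only the pool mechanics (score, drop the worst, seed new candidates from survivors) are real:

```python
import random

def score(program):
    # Toy stand-in for an automatic evaluator: higher is better.
    # Optimum is every entry equal to 3.
    return -sum((x - 3) ** 2 for x in program)

def mutate(parent):
    # Generate a new candidate from a surviving parent ("seed"),
    # rather than starting from scratch each time.
    child = list(parent)
    i = random.randrange(len(child))
    child[i] += random.uniform(-1, 1)
    return child

def evolve(pool_size=20, generations=200, length=4):
    random.seed(0)  # deterministic for the example
    pool = [[random.uniform(-10, 10) for _ in range(length)]
            for _ in range(pool_size)]
    for _ in range(generations):
        pool.sort(key=score, reverse=True)
        survivors = pool[: pool_size // 2]          # drop the worst half
        children = [mutate(random.choice(survivors))  # seed from survivors
                    for _ in range(pool_size - len(survivors))]
        pool = survivors + children
    return max(pool, key=score)

best = evolve()
print(score(best))  # approaches 0 as entries converge toward 3
```

The LLM's job in the real system is to make that mutation step far smarter than a random tweak, which is a big part of why this works at all.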