Self-learning software that builds itself
Researchers at Lancaster University in England have developed a machine-learning system that can assemble code components into a program to meet goals set by the human developers. The resulting software can continue to learn and reconfigure itself to adapt to changing conditions without human intervention.
The researchers used the design of a web server as a test case for the runtime emergent software, or REx, and found not only that the resulting web server performed efficiently but also that REx came up with some unexpected tricks.
“The machine-learning system found a way of constructing our example web server that we would never have imagined would be the best solution,” said Barry Porter, a lecturer at the university’s School of Computing and Communications. “We've had to then meticulously verify this ourselves by studying all of the source code that it has chosen to compose, and in every case so far we've found that the solution identified by the real-time machine-learning system really was the best option.”
According to Porter, REx consists of three layers. First, a component-based programming language called Dana allows the system to find, select and adapt building blocks of function-specific code. Next, a perception, assembly and learning framework (PAL) configures the components and measures their behaviors. Finally, a learning module uses the collected information to determine the best configuration of the blocks of code.
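REx itself assembles components written in Dana, but the broad assemble-measure-learn loop can be pictured in a few lines of ordinary code. The Python sketch below is only an illustration under assumed names and numbers, not the actual Dana or PAL API: an epsilon-greedy learner tries compositions of hypothetical web server components, measures each one and keeps the fastest.

```python
import random
import time
from itertools import product

# Hypothetical component variants for two "slots" of a web server.
# The names and numbers are illustrative assumptions, not Dana/PAL code:
# the values are simulated latency figures, not real measurements.
CACHE_VARIANTS = {"no_cache": 1.0, "lru_cache": 0.25}      # latency multiplier
HANDLER_VARIANTS = {"threaded": 0.004, "evented": 0.002}   # base latency (seconds)


def measure(config, requests=20):
    """'Perception' step: assemble a configuration and measure its mean
    response time. Real work is only simulated here with sleep()."""
    cache_name, handler_name = config
    per_request = HANDLER_VARIANTS[handler_name] * CACHE_VARIANTS[cache_name]
    start = time.perf_counter()
    for _ in range(requests):
        time.sleep(per_request)
    return (time.perf_counter() - start) / requests


def learn(episodes=20, epsilon=0.2):
    """'Learning' step: epsilon-greedy search over component compositions,
    mostly exploiting the best assembly seen so far, occasionally exploring."""
    configs = list(product(CACHE_VARIANTS, HANDLER_VARIANTS))
    best_seen = {c: float("inf") for c in configs}
    for _ in range(episodes):
        if random.random() < epsilon:
            config = random.choice(configs)              # explore an alternative
        else:
            config = min(best_seen, key=best_seen.get)   # exploit the current best
        best_seen[config] = min(best_seen[config], measure(config))
    return min(best_seen, key=best_seen.get)


if __name__ == "__main__":
    print("best composition found:", learn())
```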
Human coders are still required to specify the goals of a program that REx will build and what constitutes “success.” Yet eliminating the need for human coders altogether is, Porter said, “a key focus of our ongoing work.”
The researchers are also developing ways to set REx’s program development goals using natural language instead of high-level coding, Porter said.
So far, the key criterion for REx’s programs has been narrowly defined performance of a task. But Porter said he foresees much more complex possibilities. “In our ongoing research we're looking at more complex success criteria, including aggressive energy saving or maximizing mean-time-to-failure of hardware,” he said.
The team is looking at scenarios in which the software might colocate more processing on fewer physical machines. That would potentially save energy, but responses to user requests might be slower because each machine is busier, Porter explained. “To help measure these criteria we always need access to corresponding measurements from the software, such as power consumption levels of the computers on which they're running.”
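One way to picture such a criterion is as a weighted score over the kinds of measurements Porter describes. The sketch below is a hypothetical illustration rather than the team’s actual formulation; the function name, weights and sample figures are all assumptions.

```python
# Hypothetical composite success criterion: the function, weights and sample
# figures are illustrative assumptions, not the Lancaster team's actual metric.
def success_score(mean_latency_ms: float, power_watts: float,
                  latency_weight: float = 1.0, power_weight: float = 0.5) -> float:
    """Lower is better: penalize slow responses and high power draw together."""
    return latency_weight * mean_latency_ms + power_weight * power_watts


# Consolidating onto fewer machines saves power but raises latency;
# the chosen weights decide which deployment the learner should prefer.
spread_out = success_score(mean_latency_ms=20.0, power_watts=300.0)     # 170.0
consolidated = success_score(mean_latency_ms=35.0, power_watts=180.0)   # 125.0
print("consolidate" if consolidated < spread_out else "spread out")
```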
If that seems a bit over the top for a simple web server, that’s because Porter’s team is already envisioning REx designing software for such complex tasks as running data centers.
“Software is by far the most complex element within data centers, and it's very hard to know how efficient our current ‘static’ software systems really are across all of the different conditions to which they are subjected over the course of a day,” Porter said. “Using approaches like ours, by having software understand itself on a much deeper level, we get much better insights and automated responses to efficiency issues in these complex systems.”
Porter also sees self-assembling and self-learning software as having potential in many other areas, including robotics. “Machines will need to draw on a large range of possible software behaviors, and their combinations, to achieve goals in the environments that they encounter,” he said.