
A New Approach to Solving the "Hardest of the Hard" Computer Problems



Scientists are already benefiting from a relatively new type of computing that mimics the way the human brain functions. This type of computing is changing the way they approach some of the most difficult information processing problems.


Researchers have discovered a way to make reservoir computing work between 33 and a million times faster, while using far fewer computing resources and far less input data.


In one test of this next-generation reservoir computing, the researchers solved a complex computing problem on a desktop computer in less than a second.


Using today's state-of-the-art technology, the same problem requires a supercomputer to solve and still takes much longer, according to Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University.


"We can perform very complex information processing tasks in a fraction of the time while using significantly fewer computer resources than what reservoir computing can currently accomplish," Gauthier explained. "We can perform very complex information processing tasks in a fraction of the time while using significantly fewer computer resources."


"Moreover, reservoir computing was already a significant improvement over what had previously been possible," says the researcher.


The findings of the study were published in the journal Nature Communications on September 21, 2021.


"Reservoir computing" is a machine learning algorithm that was developed in the early 2000s and is used to solve some of the "hardest of the hard" computing problems, such as forecasting the evolution of dynamical systems that change over time, according to Gauthier. "Reservoir computing" is an acronym that stands for reservoir computing plus optimization.


Dynamical systems, such as the weather, are difficult to predict because even a small change in one condition can have significant consequences later on, according to Gauthier.


A famous example is the "butterfly effect": the metaphorical idea that changes set off by a butterfly flapping its wings can eventually influence the weather weeks later.


Previous research has shown that reservoir computing is well-suited for learning dynamical systems and can provide accurate predictions about how they will behave in the future, according to Gauthier.


It accomplishes this through the use of an artificial neural network, which functions somewhat like a human brain. Scientists feed data from a dynamical system into a "reservoir" of artificial neurons that are randomly connected in a network. The network generates useful output that the scientists can interpret and feed back into the network, building a more and more accurate forecast of how the system will evolve in the future.
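

To give this a concrete shape: a traditional reservoir computer is commonly implemented as an "echo state network." The sketch below is a minimal illustration of that scheme, assuming illustrative sizes, scalings, and function names throughout; it is not the study's code.

```python
import numpy as np

# Minimal echo-state-network sketch. All sizes and scalings here are
# illustrative assumptions, not the configuration used in the study.
rng = np.random.default_rng(seed=0)

n_inputs, n_reservoir = 3, 300
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # keep spectral radius < 1

def run_reservoir(inputs):
    """Drive the fixed random reservoir with an input sequence."""
    r = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        r = np.tanh(W @ r + W_in @ u)              # reservoir state update
        states.append(r)
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Only this linear readout is trained, via ridge regression."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(n_reservoir),
                           states.T @ targets)
```

Only the linear readout is trained; the expense lies in running and storing the large random reservoir itself, which is exactly what the new approach strips away.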


The larger and more complex the system, and the more accurate the scientists want the forecast to be, the bigger the network of artificial neurons has to be and the more computing resources and time are needed to complete the task.


One problem has been that the reservoir of artificial neurons is a "black box," according to Gauthier: scientists have not known exactly what goes on inside it – they only know that it works.


According to Gauthier, the artificial neural networks that are at the heart of reservoir computing are built on mathematical principles.


As a result, he said, "we had mathematicians look at these networks and ask, 'To what extent are all of these pieces in the machinery really needed?'"


In this study, Gauthier and his colleagues investigated that question and discovered that the entire reservoir computing system could be greatly simplified, dramatically reducing both the computing resources and the computing time needed.
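

For readers who want the concrete shape of that simplification: the cited Nature Communications paper formulates next-generation reservoir computing as a nonlinear vector autoregression, in which the feature vector is built directly from a few time-delayed copies of the input and their polynomial products, with no random network at all. A minimal sketch of that construction follows; the delay count and system size are illustrative choices.

```python
import numpy as np

def ngrc_features(window):
    """Build a next-generation reservoir computing feature vector (sketch).

    window : array of shape (k, d) -- the current input plus k-1
             time-delayed copies of a d-variable system.
    Returns [1, linear terms, unique quadratic products]: no random
    reservoir and no nonlinear activation function anywhere.
    """
    lin = window.reshape(-1)                         # k*d linear features
    quad = np.outer(lin, lin)[np.triu_indices(lin.size)]
    return np.concatenate(([1.0], lin, quad))

# For a 3-variable system with two time delays: 1 + 6 + 21 = 28 features,
# the same scale as the "28 neurons" figure quoted later in this article.
print(ngrc_features(np.zeros((2, 3))).size)          # 28
```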


They tested their concept on a weather forecasting task, using a model weather system developed by Edward Lorenz, whose work was instrumental in our understanding of the butterfly effect.
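

The Lorenz system is a set of three coupled differential equations that serves as the standard chaotic benchmark for this kind of forecasting. Below is a sketch of how such test data is typically generated; the parameter values are Lorenz's classic choices, while the time span and sampling step are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz '63 equations with the classic chaotic parameters."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Integrate from an arbitrary starting point; the sampled trajectory
# becomes the training and test data for the forecasting task.
t_eval = np.arange(0.0, 50.0, 0.025)                 # step size assumed
sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-9)
trajectory = sol.y.T                                 # shape (2000, 3)
```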


Compared to today's state-of-the-art reservoir computing, their next-generation reservoir computing was a clear winner on this Lorenz forecasting task. In a relatively simple simulation performed on a desktop computer, the new system was 33 to 163 times faster than the current model, according to the researchers.


However, when it came to achieving high levels of accuracy in the forecast, the next-generation reservoir computing was approximately one million times faster. And, according to Gauthier, the new-generation computing model achieved the same accuracy with the equivalent of only 28 neurons, as opposed to the 4,000 neurons required by the current-generation model.


One important reason for the increase in speed is that the "brain" behind this next generation of reservoir computing requires significantly less warmup and training than the current generation to produce the same results.


Warmup data is input that must be fed into the reservoir computer to prepare it for its actual task.
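

As a concrete illustration of the data budget being described here, the segment sizes below echo the figures quoted in this article; they are not the paper's exact protocol.

```python
import numpy as np

# Placeholder time series standing in for measured data.
data = np.random.default_rng(1).standard_normal((12_000, 3))

# Traditional reservoir computing: thousands of points are consumed
# just to synchronize ("warm up") the reservoir's internal state.
warmup_traditional = data[:10_000]
train_traditional = data[10_000:]

# Next-generation reservoir computing: with k = 2 time delays, only
# one earlier sample is needed before training can begin.
warmup_ngrc = data[:1]
train_ngrc = data[1:401]            # ~400 training points, as cited below
```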


"There is almost no warming time required for our next-generation reservoir computing," Gauthier explained.


"Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that is all data that is lost – information that is not needed for the actual work. We only have to put in one, two, or three data points," he explained.


And when it comes time to train the reservoir computer to make forecasts, the next-generation system requires significantly less data than the current one.


In their test of the Lorenz forecasting task, the researchers found that they could achieve the same results using 400 data points as the current generation achieved using 5,000 data points or more, depending on the accuracy desired.


According to Gauthier, "what's exciting is that this next generation of reservoir computing takes something that was already very good and makes it significantly more efficient."


He and his colleagues intend to build on this work in order to tackle even more difficult computing problems, such as fluid dynamics forecasting.


"It's an incredibly difficult problem to solve," says the researcher. "We want to see if we can speed up the process of solving that problem by using our simplified reservoir computing model," says the research team.
