When crafting an algorithm to solve a problem, we are usually designing rules that transform input data into output data. A programming language is a human-convenient way of specifying those rules; the computer itself needs nothing more than low-level binary instructions to transform input data into the desired output.
Although high-level programming languages have been a great aid to the human intellect in crafting rules that a computer can follow, they come with their own bottlenecks. As we now realise, when a problem's rules cannot be easily specified by hand, we use machine learning to discover models that map the input to the output.
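To make the contrast concrete, here is a minimal sketch in Python (the temperature-conversion task, data values, and helper names are hypothetical illustrations, not taken from the post). The first function is a hand-specified rule; the second recovers essentially the same rule purely from input/output examples, using a tiny linear model fitted by gradient descent.

```python
# Hand-specified rule: we know the transformation and write it down directly.
def fahrenheit(celsius):
    return celsius * 9 / 5 + 32


# Learned rule: we only have input/output examples and let the machine
# discover the mapping, here a linear model y = w*x + b fitted by
# gradient descent on mean squared error.
def fit_linear(xs, ys, lr=0.0001, steps=200000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b


if __name__ == "__main__":
    celsius = [0, 10, 20, 30, 40, 100]
    observed = [fahrenheit(c) for c in celsius]  # pretend these are measurements
    w, b = fit_linear(celsius, observed)
    print("hand-coded rule:  f = c * 1.80 + 32.00")
    print(f"discovered rule:  f = c * {w:.2f} + {b:.2f}")
```

The point of the sketch is the shift in where the rule comes from: in the first case a human writes it, in the second the rule is extracted from data, which is what makes learning attractive for problems where no one can state the rule explicitly.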
Machine learning methods, especially deep learning, have proven to be potent tools for discovering a model, or a set of rules, that maps input to output, and they will continue to augment many algorithm-development activities. But they come with their own bottlenecks: they are not very transparent about how their internal rules determine the output, and they are computationally expensive, requiring very large compute resources to perform even simple operations.
Machine learning, which is based on statistics, might not be the only way to generate rules that map inputs to outputs. In the future it may become possible to generate such rules automatically by other methods, yielding rules that are much simpler to execute and thus far more efficient than current machine learning systems.