Sequence prediction

Sequence prediction is different from other types of supervised learning problems: the sequence imposes an order on the observations that must be preserved when training models and making predictions.

Some examples include:

- weather forecasting
- DNA sequence classification
- image captioning
- language translation

The recurrent connections in LSTMs add memory to the network and allow it to learn the ordered nature of observations within input sequences.

In a sense, this capability unlocks sequence prediction for neural networks and deep learning.
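
As a minimal sketch of what this looks like in practice, here is a small next-step predictor, assuming PyTorch (the hidden size, window length, and class name are illustrative, not taken from any particular reference):

```python
import torch
import torch.nn as nn

class NextStepLSTM(nn.Module):
    """Predict the next value of a univariate sequence
    from a window of previous values."""

    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):               # x: (batch, time, 1)
        out, _ = self.lstm(x)           # out: (batch, time, hidden)
        return self.head(out[:, -1])    # last time step -> next value

# Toy usage: 4 sequences of 10 observed steps each, predicting step 11.
model = NextStepLSTM()
window = torch.randn(4, 10, 1)
prediction = model(window)              # shape: (4, 1)
```

The key point is that the LSTM consumes each window in order, so the ordering constraint above is preserved end to end.
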
A much simpler approach to sequence prediction is sequential prediction, a very fast pattern matching algorithm. It has linear running time and, if implemented as a Folded Pattern Matcher, only needs to visit matching entries. During a search, it is able to find all matches along with their match lengths. Its speed makes it viable for use in a Virtual Guns array; a sketch of the core update loop follows the next two paragraphs.

This algorithm requires that inputs are discrete and bounded, as in Symbolic Pattern Matching. This is not really a problem, since we can use integers, giving us up to 2,147,483,648 values (if we only use non-negative ones). By comparison, a robot's velocity is limited to [-8, 8] units per tick and its turn rate to [-10, 10] degrees per tick.
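
For example, a bounded continuous input such as velocity can be bucketed into a small integer alphabet before being fed to the matcher. A minimal sketch, assuming uniform bins (the bin count and clamping policy are illustrative, not prescribed by the algorithm):

```python
def discretize(value, lo, hi, bins):
    """Map a bounded continuous value to an integer symbol in [0, bins)."""
    value = max(lo, min(hi, value))        # clamp into [lo, hi]
    t = (value - lo) / (hi - lo)           # normalize to [0, 1]
    return min(int(t * bins), bins - 1)    # bucket index

# A robot's velocity in [-8, 8] becomes one of 17 integer symbols.
symbol = discretize(3.7, -8.0, 8.0, 17)
```
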
The main drawback is that the algorithm must traverse the log each time a new value is added in order to update the stored match lengths. This is in contrast to most pattern matching algorithms, which do no traversing until a search is performed. The cost can be alleviated by storing the longest match found during each add, which makes fetching the longest match a simple variable access.
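
Here is a minimal sketch of that scheme in plain Python, using a flat list rather than a true Folded Pattern Matcher, so every entry is visited on each add; the class and method names are illustrative:

```python
class SequentialPredictor:
    """Store, for each log position i, the length of the longest common
    suffix between log[..i] and the whole log, so fetching the longest
    match is a simple variable access."""

    def __init__(self):
        self.log = []          # history of discrete symbols
        self.match_len = []    # stored match length ending at each index
        self.best_index = -1   # end index of the longest match
        self.best_len = 0

    def add(self, symbol):
        n = len(self.log)
        self.log.append(symbol)
        self.match_len.append(0)
        self.best_index, self.best_len = -1, 0
        # The traversal cost noted above: one pass over the log per add,
        # extending or resetting each stored match length.
        for i in range(n - 1, -1, -1):
            if self.log[i] == symbol:
                self.match_len[i] = self.match_len[i - 1] + 1 if i > 0 else 1
            else:
                self.match_len[i] = 0
            if self.match_len[i] > self.best_len:
                self.best_len, self.best_index = self.match_len[i], i

    def predict(self):
        """Replay the symbol that followed the longest historical match."""
        if self.best_index < 0:
            return None
        return self.log[self.best_index + 1]

# Toy usage: the longest match for the current suffix ends at index 2,
# and the symbol that historically followed it was 1.
p = SequentialPredictor()
for s in [0, 1, 0, 1, 0]:
    p.add(s)
print(p.predict())    # -> 1
```
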
