An NLP tokenization algorithm that is a trainable layer for neural networks.


Tokenization Layer

View in Deepnote

This is a concept for a tokenization algorithm implemented as a neural network layer, which trains as part of a model solving some NLP task and so learns the tokens best suited to that task. This is explained further in concept-explained.ipynb.

The tokenization layer takes in text, split by letter and one-hot encoded, and outputs the same text represented by its patterns (i.e. tokens) instead of letters. In other words, the tokenization layer returns a text of 0s, except with each pattern's signature ("out") wherever that pattern occurred in the text. Here's an example:
Warning: Equations have low contrast when viewing on dark background

^ Showing how the tokenization layer takes in text that is split by letter and one-hot encoded

^ Showing how a single neuron of the tokenization layer works

^ Showing how the multiple neurons in a tokenization layer come together to make the layer output
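To make the input format concrete, here is a minimal, hypothetical sketch of "split by letter and one-hot encoded"; the alphabet and function names are illustrative, not the repo's actual code:

```python
import numpy as np

# Illustrative alphabet; the real layer's vocabulary may differ.
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def one_hot_text(text):
    """Split text into letters and one-hot encode each one,
    giving a (text_length, alphabet_size) matrix."""
    idx = {ch: i for i, ch in enumerate(ALPHABET)}
    out = np.zeros((len(text), len(ALPHABET)))
    for pos, ch in enumerate(text):
        out[pos, idx[ch]] = 1.0
    return out

encoded = one_hot_text("the cat")
print(encoded.shape)  # (7, 27): 7 characters, 27-letter alphabet
```

Each row is a single character, and the layer's patterns are matrices of the same width that get matched against windows of this input.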

These patterns (i.e. tokens) should then update through training, and for that we need the derivative of the layer w.r.t. the patterns. To get it, we rewrite the layer using convolutions (from CNNs) and derive from there. With this (and a couple of finer details) we have the tokenization layer.
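The convolution view can be sketched as follows. This is a hard-matching toy in NumPy, purely for intuition: the pattern acts as a convolution filter slid over the one-hot text, and the pattern's "out" signature is written wherever it matches. (The repo's real layer uses soft, differentiable scores so gradients can flow to the patterns; everything here is an illustrative assumption.)

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def one_hot(text):
    idx = {ch: i for i, ch in enumerate(ALPHABET)}
    m = np.zeros((len(text), len(ALPHABET)))
    for i, ch in enumerate(text):
        m[i, idx[ch]] = 1.0
    return m

def detect_pattern(text_oh, pattern_oh, out_signature):
    """Slide the pattern over the one-hot text (a 1-D cross-correlation
    with the pattern as the filter). Where the score is maximal (an exact
    match), emit the pattern's output signature; elsewhere, zeros."""
    n, width = len(text_oh), len(pattern_oh)
    result = np.zeros((n, len(out_signature)))
    for start in range(n - width + 1):
        window = text_oh[start:start + width]
        # correlation score; equals `width` only on an exact match
        score = np.sum(window * pattern_oh)
        if score == width:
            result[start] = out_signature
    return result

text = one_hot("the cat sat")
pattern = one_hot("at")
sig = np.array([0.0, 1.0])  # illustrative "out" vector for this token
hits = detect_pattern(text, pattern, sig)
print(np.nonzero(hits[:, 1])[0])  # [5 9]: positions where "at" begins
```

Replacing the exact-match `if` with a smooth function of the score is what makes the layer trainable, since the derivative w.r.t. the pattern entries then exists everywhere.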

However, as it stands now, the layer is unable to train effectively. At a high level, this is because many possible tokens (e.g. random strings of characters) will never be detected in any text, and because the upstream gradient at the layer is probably impossible for it to follow (as in, the rest of the neural network wants the layer to produce outputs that are impossible for it to output).
Again, for more details refer to concept-explained.ipynb.


If you would like a stable environment to work on this repository, there's Deepnote, which is where most of the work for this repository was done. Just create a new project and integrate with this repository.

If you prefer a local setup instead, here's the requirements.txt:

```
colorama==0.4.4
seaborn==0.11.1
matplotlib==3.3.4
tensorflow==2.5.0
pandas==1.2.4
numpy==1.19.5
nltk==3.6.1
ipython==7.26.0
scikit_learn==0.24.2
```

I also made a pip package for this. You can see the PyPI page here, or just install it with

```
pip install tokenization-layer
```

Additionally, here is the documentation on it (for contributing to the docs, here's the synchronised GitHub repo).
