
Language-Independent Document Clustering


I explored a novel approach to document clustering that is language independent. The crux is that it applies to text in any language: the process stays the same even as the data changes.

Let me note the approach in bullet points:

  1. Build word vectors from the available data.
  2. Build word clusters of the document word vectors.
  3. Find which cluster best represents the document.
  4. Aggregate these clusters (similar clusters go together).
  5. Documents with similar clusters go together.

Now we will walk through the process and code, starting with creating word vectors. But before that, a reminder:

In Natural Language Processing (NLP), 90% of the work is preprocessing. — Some Researcher

So make sure you do all the preprocessing that you need to do on your data. Since this is a POC, I have excluded the part where preprocessing is done.

We will use the gensim Python package to build word vectors. The function below creates the vectors and saves them in binary format.

# Create word vectors
import bz2
from gensim.models import Word2Vec

def create_vectors(filename):
    sentences = []
    with bz2.open(filename, 'rt') as df:
        for line in df:
            sentences.append(" ".join(line.split(",")).split())
    # NOTE: on gensim >= 4.0 the `size` argument is named `vector_size`
    model = Word2Vec(sentences, size=25, window=5, min_count=1, workers=4)
    model.save(filename + ".vec")  # line changed; refer to the code on GitHub
    return model

Once the word vectors are ready, we can move on to the next step, which is a particularly exciting one. I stole it from a blog here. It's very simple, and it works!

Let’s say we have a document with n words. Starting with the first word, we look up its vector in the word2vec model; that word starts the first cluster. Moving forward, each word is assigned either to a new cluster, with probability 1/(1+n), or to an existing cluster. If we decide to assign it to an existing cluster, we pick the most similar one. This approach separates the words into groups that make sense. Take a look:


More examples in this notebook here.

The function below implements the above algorithm, which is also known as the Chinese Restaurant Process.
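The original function lives in the GitHub repo linked below; here is a pure-Python sketch of the process as described, under stated assumptions: toy dict vectors stand in for the word2vec model, a new cluster opens with probability 1/(1+n) as in the text, and "most similar cluster" means highest cosine similarity against the cluster's mean vector.

```python
import math
import random

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def mean(vectors):
    # Element-wise average of a list of equal-length vectors.
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def crp_cluster(words, vectors, rng=random.random):
    """Chinese-Restaurant-Process-style word clustering, as described above:
    each word either opens a new cluster with probability 1/(1+n), or joins
    the existing cluster whose mean vector is most cosine-similar."""
    n = len(words)
    clusters = []  # each cluster is a list of (word, vector) pairs
    for word in words:
        vec = vectors[word]
        if not clusters or rng() < 1.0 / (1.0 + n):
            clusters.append([(word, vec)])  # open a new cluster
        else:
            best = max(clusters,
                       key=lambda c: cosine(vec, mean([v for _, v in c])))
            best.append((word, vec))
    return [[w for w, _ in c] for c in clusters]

toy = {"cat": [1.0, 0.0], "dog": [0.9, 0.1], "car": [0.0, 1.0]}
random.seed(0)
print(crp_cluster(["cat", "dog", "car"], toy))
```

Because cluster creation is random, different seeds produce different partitions; only the assignment-to-most-similar step is deterministic.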

I have put the code on GitHub. Also, there is an accompanying notebook that you can have a look at. https://github.com/kaustubhn/doc_clust

Improvements

A few suggestions to improve the above algorithm:

  1. More data
  2. Better preprocessing
  3. Higher-dimensional vectors (we used 25 dimensions)
  4. A better approach to matching clusters (other than the cosine approach)

Note: This is a very experimental approach; much is still unclear and undefined as of now. I will keep improving and completing it. Your comments are appreciated.


For any questions and inquiries, visit us at https://thinkitive.com/technologies/ai-and-ml.html


Kaustubh

I look after Technology at Thinkitive. Interested in Machine Learning, Deep Learning, IoT, TinyML, and many other areas of application of machine learning.
