
Topic Modeling

Introduction

Another popular text analysis technique is called topic modeling. The ultimate goal of topic modeling is to find various topics that are present in your corpus. Each document in the corpus will be made up of at least one topic, if not multiple topics.

In this notebook, we will walk through Latent Dirichlet Allocation (LDA), one of many topic modeling techniques, which was designed specifically for text data.

To use a topic modeling technique, you need to provide (1) a document-term matrix and (2) the number of topics you would like the algorithm to find.

Once the topic modeling technique is applied, your job as a human is to interpret the results and see if the mix of words in each topic makes sense. If it doesn't, you can try changing the number of topics, the terms in the document-term matrix, the model parameters, or even a different model.
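To make input (1) concrete, here is a minimal, hand-rolled document-term matrix for a toy corpus (pure Python; the names `docs`, `vocab`, and `dtm` are just illustrative, not part of this notebook's pipeline):

```python
# A toy corpus of three "documents"
docs = {
    "doc1": "the king rode to battle",
    "doc2": "the king spoke of duty and soul",
    "doc3": "arrows fell in battle",
}

# Build the vocabulary (all unique terms across the corpus)
vocab = sorted({word for text in docs.values() for word in text.split()})

# Document-term matrix: one row per document, one count per term
dtm = {
    name: [text.split().count(term) for term in vocab]
    for name, text in docs.items()
}

print(vocab)
print(dtm["doc1"])  # counts for doc1, aligned with vocab
```

In practice we let CountVectorizer do this counting for us, but the resulting structure is the same: rows are documents, columns are terms, cells are counts.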

Topic Modeling - Attempt #1 (All Text)

In [1]:
# Let's read in our document-term matrix
import pandas as pd
import pickle

data = pd.read_pickle('dtm_stop.pkl')
data
Out[1]:
abandon abandoned abandoner abandonest abandoneth abandoning abandoningall abandonment abandons abashed ... yuyutsu yuyutsus zanoni zarathustra zeal zealously zenana zisyphus zodiac zone
Adi 25 5 0 2 0 13 0 4 0 2 ... 11 0 0 0 0 0 0 0 1 0
AshramavAsika 2 1 0 0 0 5 0 0 0 0 ... 7 0 0 0 0 0 0 0 0 0
anushAsana 8 9 0 0 0 7 0 1 5 1 ... 3 0 0 0 0 0 0 1 0 0
ashvamedha 1 1 0 0 0 6 0 1 1 0 ... 4 0 0 0 0 0 0 0 0 0
bhISma 11 18 2 0 0 38 0 18 2 0 ... 2 0 0 0 0 0 0 0 0 0
droNa 13 15 0 0 1 36 0 0 0 0 ... 8 1 0 0 1 0 0 0 0 0
karNa 6 8 0 0 0 9 0 1 0 0 ... 10 1 0 0 0 1 0 0 0 0
mahAprasthAnika 4 2 0 0 0 1 0 2 0 0 ... 2 0 0 0 0 0 0 0 0 0
mausalA 0 0 0 0 0 1 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
sabhA 9 4 0 0 0 2 0 1 0 0 ... 0 0 0 0 0 0 1 0 0 0
sauptika 1 0 0 0 0 1 0 0 1 0 ... 0 0 0 0 0 0 0 0 0 0
shalya 8 5 0 0 0 12 0 0 1 0 ... 7 0 0 0 0 0 0 0 0 0
shanti 53 28 0 0 0 77 1 22 12 1 ... 6 0 1 0 3 1 0 0 0 0
strI 4 0 0 0 0 0 0 0 0 0 ... 2 0 0 0 0 0 0 0 0 0
svargArohaNika 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
udyoga 19 13 0 0 0 24 0 9 0 0 ... 5 0 0 0 0 1 0 0 1 0
vana 16 9 0 0 0 12 0 2 0 1 ... 0 0 0 1 0 0 0 0 0 0
virAta 0 0 0 0 0 2 0 0 0 0 ... 0 0 0 0 0 1 0 0 0 1

18 rows × 36812 columns

In [2]:
# Import the necessary modules for LDA with gensim
# Terminal / Anaconda Navigator: conda install -c conda-forge gensim
from gensim import matutils, models
import scipy.sparse

# import logging
# logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
In [3]:
# One of the required inputs is a term-document matrix
tdm = data.transpose()
tdm.head()
Out[3]:
Adi AshramavAsika anushAsana ashvamedha bhISma droNa karNa mahAprasthAnika mausalA sabhA sauptika shalya shanti strI svargArohaNika udyoga vana virAta
abandon 25 2 8 1 11 13 6 4 0 9 1 8 53 4 0 19 16 0
abandoned 5 1 9 1 18 15 8 2 0 4 0 5 28 0 0 13 9 0
abandoner 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0
abandonest 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
abandoneth 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
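The transpose matters because gensim's Sparse2Corpus treats each column as a document by default, so we flip the document-term matrix (documents × terms) into a term-document matrix (terms × documents). A quick shape check on a toy DataFrame (names here are illustrative only):

```python
import pandas as pd

# Toy document-term matrix: 2 documents (rows) x 3 terms (columns)
dtm_small = pd.DataFrame(
    [[1, 0, 2], [0, 3, 1]],
    index=["doc_a", "doc_b"],
    columns=["battle", "king", "arrows"],
)

# After transposing: 3 terms (rows) x 2 documents (columns)
tdm_small = dtm_small.transpose()
print(dtm_small.shape, tdm_small.shape)  # (2, 3) (3, 2)
```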
In [4]:
# We're going to put the term-document matrix into a new gensim format, from df --> sparse matrix --> gensim corpus
sparse_counts = scipy.sparse.csr_matrix(tdm)
corpus = matutils.Sparse2Corpus(sparse_counts)
In [5]:
# Gensim also requires a dictionary of all the terms and their respective locations in the term-document matrix
cv = pickle.load(open("cv_stop.pkl", "rb"))
id2word = dict((v, k) for k, v in cv.vocabulary_.items())
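The inversion works because `CountVectorizer.vocabulary_` maps each term to its column index; flipping it gives gensim the index-to-term lookup that the `id2word` parameter expects. On a hand-made toy vocabulary (not the real `cv_stop.pkl`):

```python
# vocabulary_ maps term -> column index (toy example)
vocabulary_ = {"battle": 0, "king": 1, "arrows": 2}

# Flip it to get index -> term, which gensim's id2word parameter expects
id2word = {index: term for term, index in vocabulary_.items()}

print(id2word[0])  # battle
```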

Now that we have the corpus (term-document matrix) and id2word (dictionary of location: term), we need to specify two other parameters - the number of topics and the number of passes. Let's start the number of topics at 2, see if the results make sense, and increase the number from there.

In [6]:
# Now that we have the corpus (term-document matrix) and id2word (dictionary of location: term),
# we need to specify two other parameters as well - the number of topics and the number of passes
lda = models.LdaModel(corpus=corpus, id2word=id2word, num_topics=2, passes=10)
lda.print_topics()
Out[6]:
[(0,
  '0.014*"battle" + 0.010*"like" + 0.007*"arrows" + 0.007*"shafts" + 0.006*"mighty" + 0.005*"karna" + 0.004*"drona" + 0.004*"car" + 0.004*"arjuna" + 0.004*"steeds"'),
 (1,
  '0.006*"unto" + 0.005*"art" + 0.005*"like" + 0.004*"words" + 0.004*"shall" + 0.003*"brahmanas" + 0.003*"earth" + 0.003*"brahmana" + 0.003*"foremost" + 0.003*"acts"')]
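Each topic in `print_topics()` comes back as a weighted-sum string. If you want the top words programmatically, gensim's `show_topics(formatted=False)` returns them as tuples, or you can parse the string yourself. A small parser, assuming the `weight*"word"` format shown above:

```python
import re

def parse_topic(topic_string):
    '''Parse a gensim topic string like '0.014*"battle" + 0.010*"like"'
    into a list of (word, weight) pairs.'''
    pairs = re.findall(r'([\d.]+)\*"([^"]+)"', topic_string)
    return [(word, float(weight)) for weight, word in pairs]

topic = '0.014*"battle" + 0.010*"like" + 0.007*"arrows"'
print(parse_topic(topic))  # [('battle', 0.014), ('like', 0.01), ('arrows', 0.007)]
```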
In [7]:
# LDA for num_topics = 3
lda = models.LdaModel(corpus=corpus, id2word=id2word, num_topics=3, passes=10)
lda.print_topics()
Out[7]:
[(0,
  '0.015*"battle" + 0.010*"like" + 0.008*"shafts" + 0.007*"arrows" + 0.006*"mighty" + 0.005*"karna" + 0.005*"drona" + 0.005*"car" + 0.005*"steeds" + 0.005*"arjuna"'),
 (1,
  '0.007*"unto" + 0.006*"like" + 0.005*"hath" + 0.004*"words" + 0.004*"sons" + 0.004*"race" + 0.004*"shall" + 0.004*"mighty" + 0.004*"yudhishthira" + 0.003*"earth"'),
 (2,
  '0.007*"art" + 0.005*"acts" + 0.004*"unto" + 0.004*"soul" + 0.004*"creatures" + 0.004*"like" + 0.003*"brahmana" + 0.003*"person" + 0.003*"brahmanas" + 0.003*"possessed"')]
In [8]:
# LDA for num_topics = 4
lda = models.LdaModel(corpus=corpus, id2word=id2word, num_topics=4, passes=10)
lda.print_topics()
Out[8]:
[(0,
  '0.007*"art" + 0.005*"acts" + 0.004*"unto" + 0.004*"soul" + 0.004*"creatures" + 0.004*"like" + 0.004*"brahmana" + 0.004*"person" + 0.003*"possessed" + 0.003*"man"'),
 (1,
  '0.016*"battle" + 0.011*"like" + 0.008*"shafts" + 0.008*"arrows" + 0.007*"mighty" + 0.006*"karna" + 0.005*"drona" + 0.005*"car" + 0.005*"steeds" + 0.005*"arjuna"'),
 (2,
  '0.006*"unto" + 0.006*"like" + 0.005*"words" + 0.004*"hath" + 0.004*"sons" + 0.004*"race" + 0.004*"mighty" + 0.004*"gods" + 0.004*"yudhishthira" + 0.004*"earth"'),
 (3,
  '0.008*"unto" + 0.006*"like" + 0.005*"hath" + 0.005*"shall" + 0.004*"continued" + 0.004*"sons" + 0.004*"vaisampayana" + 0.004*"race" + 0.004*"yudhishthira" + 0.003*"words"')]

These topics aren't looking too great. We've tried modifying our parameters. Let's try modifying our terms list as well.

Topic Modeling - Attempt #2 (Nouns Only)

One popular trick is to look only at terms that are from one part of speech (only nouns, only adjectives, etc.). Check out the UPenn tag set: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html.

In [9]:
# Let's create a function to pull out nouns from a string of text
from nltk import word_tokenize, pos_tag

def nouns(text):
    '''Given a string of text, tokenize the text and pull out only the nouns.'''
    is_noun = lambda pos: pos[:2] == 'NN'
    tokenized = word_tokenize(text)
    all_nouns = [word for (word, pos) in pos_tag(tokenized) if is_noun(pos)] 
    return ' '.join(all_nouns)
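The filter only inspects the first two characters of each Penn Treebank tag, so `NN`, `NNS`, `NNP`, and `NNPS` all count as nouns. The logic can be seen on a hand-tagged sentence (tags supplied manually here, in the same `(word, tag)` form `nltk.pos_tag` returns, to avoid the nltk model download):

```python
# Hand-tagged tokens, as nltk.pos_tag would return them
tagged = [("the", "DT"), ("mighty", "JJ"), ("warriors", "NNS"),
          ("fought", "VBD"), ("Arjuna", "NNP"), ("bravely", "RB")]

# Keep any tag starting with 'NN' (NN, NNS, NNP, NNPS)
is_noun = lambda pos: pos[:2] == "NN"
nouns_only = [word for word, pos in tagged if is_noun(pos)]
print(nouns_only)  # ['warriors', 'Arjuna']
```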
In [10]:
# Read in the cleaned data, before the CountVectorizer step
data_clean = pd.read_pickle('data_clean.pkl')
data_clean
Out[10]:
transcript book_name
Adi the mahabharata of krishnadwaipayana vya... Ādi Parva
AshramavAsika the mahabharata of krishnadwaipayana vya... Sabhā Parva
anushAsana the mahabharata of krishnadwaipayana vya... Vana Parva
ashvamedha the mahabharata of krishnadwaipayana vya... Virāṭa Parva
bhISma the mahabharata of krishnadwaipayana vya... Udyoga Parva
droNa the mahabharata of krishnadwaipayana vya... Bhīṣma Parva
karNa the mahabharata of krishnadwaipayana vya... Droṇa Parva
mahAprasthAnika the mahabharata of krishnadwaipayana vya... Karṇa Parva
mausalA the mahabharata of krishnadwaipayana vya... Śalya Parva
sabhA the mahabharata of krishnadwaipayana vya... Sauptika Parva
sauptika the mahabharata of krishnadwaipayana vya... Strī Parva
shalya the mahabharata of krishnadwaipayana vya... Śanti Parva
shanti the mahabharata of krishnadwaipayana vya... Anuśāsana Parva
strI the mahabharata of krishnadwaipayana vya... Aśvamedha Parva
svargArohaNika the mahabharata of krishnadwaipayana vya... Āśramavāsika Parva
udyoga the mahabharata of krishnadwaipayana vya... Mausala Parva
vana the mahabharata of krishnadwaipayana vya... Mahāpratiṣṭhānika Parva
virAta the mahabharata of krishnadwaipayana vya... Svargārohaṇa Parva
In [11]:
# Apply the nouns function to the transcripts to filter only on nouns
data_nouns = pd.DataFrame(data_clean.transcript.apply(nouns))
data_nouns
Out[11]:
transcript
Adi mahabharata krishnadwaipayana book adi parva p...
AshramavAsika mahabharata krishnadwaipayana book parva prose...
anushAsana mahabharata krishnadwaipayana book anusasana p...
ashvamedha mahabharata krishnadwaipayana book aswamedha p...
bhISma mahabharata krishnadwaipayana book bhishma par...
droNa mahabharata krishnadwaipayana book drona parva...
karNa mahabharata krishnadwaipayana book karnaparva ...
mahAprasthAnika mahabharata krishnadwaipayana book mahaprastha...
mausalA mahabharata krishnadwaipayana book mausalaparv...
sabhA mahabharata krishnadwaipayana book sabha parva...
sauptika mahabharata krishnadwaipayana book sauptikapar...
shalya mahabharata krishnadwaipayana book shalyaparva...
shanti mahabharata krishnadwaipayana book santi parva...
strI mahabharata krishnadwaipayana book striparva p...
svargArohaNika mahabharata krishnadwaipayana book svargarohan...
udyoga mahabharata krishnadwaipayana book parva prose...
vana mahabharata krishnadwaipayana book parva prose...
virAta mahabharata krishnadwaipayana book virata parv...
In [12]:
# Create a new document-term matrix using only nouns
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import CountVectorizer

# Re-add the additional stop words since we are recreating the document-term matrix
add_stop_words = ['like', 'im', 'know', 'just', 'dont', 'thats', 'right', 'people',
                  'youre', 'got', 'gonna', 'time', 'think', 'yeah', 'said']
stop_words = text.ENGLISH_STOP_WORDS.union(add_stop_words)

# Recreate a document-term matrix with only nouns
cvn = CountVectorizer(stop_words=stop_words)
data_cvn = cvn.fit_transform(data_nouns.transcript)
# Note: on scikit-learn >= 1.0, use cvn.get_feature_names_out() instead
data_dtmn = pd.DataFrame(data_cvn.toarray(), columns=cvn.get_feature_names())
data_dtmn.index = data_nouns.index
data_dtmn
Out[12]:
abandon abandoner abandoning abandonment abandons abashment abatement abbreviation abdomen abduct ... yuyudhanas yuyutshu yuyutsu yuyutsus zanoni zarathustra zeal zenana zodiac zone
Adi 0 0 0 3 0 1 0 0 0 0 ... 0 0 8 0 0 0 0 0 1 0
AshramavAsika 0 0 0 0 0 0 0 0 0 0 ... 0 0 6 0 0 0 0 0 0 0
anushAsana 0 0 0 1 1 0 1 1 11 0 ... 0 0 2 0 0 0 0 0 0 0
ashvamedha 0 0 0 1 1 0 0 0 0 0 ... 0 0 1 0 0 0 0 0 0 0
bhISma 1 2 0 15 1 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
droNa 2 0 1 0 0 0 0 0 0 0 ... 3 0 5 1 0 0 1 0 0 0
karNa 0 0 0 1 0 0 0 0 0 0 ... 0 0 5 1 0 0 0 0 0 0
mahAprasthAnika 0 0 0 2 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
mausalA 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
sabhA 1 0 0 1 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 1 0 0
sauptika 0 0 0 0 0 0 0 0 1 0 ... 0 0 0 0 0 0 0 0 0 0
shalya 0 0 0 0 1 0 0 0 0 0 ... 0 0 5 0 0 0 0 0 0 0
shanti 2 0 1 20 3 0 0 0 2 0 ... 0 0 6 0 1 0 3 0 0 0
strI 0 0 0 0 0 0 0 0 1 0 ... 0 0 1 0 0 0 0 0 0 0
svargArohaNika 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
udyoga 1 0 2 9 0 0 0 0 0 1 ... 0 0 2 0 0 0 0 0 1 0
vana 0 0 0 2 0 0 0 0 0 0 ... 0 1 0 0 0 1 0 0 0 0
virAta 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 1

18 rows × 24381 columns

In [13]:
# Create the gensim corpus
corpusn = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_dtmn.transpose()))

# Create the vocabulary dictionary
id2wordn = dict((v, k) for k, v in cvn.vocabulary_.items())
In [14]:
# Let's start with 2 topics
ldan = models.LdaModel(corpus=corpusn, num_topics=2, id2word=id2wordn, passes=10)
ldan.print_topics()
Out[14]:
[(0,
  '0.027*"son" + 0.022*"battle" + 0.010*"arrows" + 0.010*"shafts" + 0.009*"king" + 0.008*"thou" + 0.007*"car" + 0.007*"men" + 0.007*"steeds" + 0.007*"words"'),
 (1,
  '0.012*"thou" + 0.011*"king" + 0.010*"son" + 0.008*"men" + 0.007*"thee" + 0.006*"words" + 0.006*"art" + 0.005*"brahmanas" + 0.005*"earth" + 0.005*"brahmana"')]
In [15]:
# Let's try topics = 3
ldan = models.LdaModel(corpus=corpusn, num_topics=3, id2word=id2wordn, passes=10)
ldan.print_topics()
Out[15]:
[(0,
  '0.010*"thou" + 0.009*"king" + 0.009*"men" + 0.007*"acts" + 0.007*"art" + 0.007*"son" + 0.006*"soul" + 0.006*"creatures" + 0.006*"person" + 0.006*"man"'),
 (1,
  '0.027*"son" + 0.023*"battle" + 0.011*"shafts" + 0.011*"arrows" + 0.008*"king" + 0.008*"thou" + 0.008*"car" + 0.007*"men" + 0.007*"steeds" + 0.007*"warriors"'),
 (2,
  '0.016*"son" + 0.015*"thou" + 0.014*"king" + 0.008*"thee" + 0.008*"words" + 0.007*"men" + 0.007*"sons" + 0.007*"race" + 0.005*"earth" + 0.005*"kings"')]
In [16]:
# Let's try 4 topics
ldan = models.LdaModel(corpus=corpusn, num_topics=4, id2word=id2wordn, passes=10)
ldan.print_topics()
Out[16]:
[(0,
  '0.002*"son" + 0.002*"king" + 0.002*"thou" + 0.001*"men" + 0.001*"battle" + 0.001*"art" + 0.001*"brahmana" + 0.001*"thee" + 0.001*"man" + 0.001*"words"'),
 (1,
  '0.028*"son" + 0.026*"battle" + 0.012*"shafts" + 0.012*"arrows" + 0.008*"car" + 0.008*"king" + 0.008*"steeds" + 0.007*"thou" + 0.007*"warriors" + 0.007*"men"'),
 (2,
  '0.010*"thou" + 0.008*"king" + 0.008*"men" + 0.007*"acts" + 0.007*"art" + 0.007*"son" + 0.007*"soul" + 0.006*"creatures" + 0.006*"person" + 0.006*"man"'),
 (3,
  '0.016*"son" + 0.015*"thou" + 0.014*"king" + 0.008*"thee" + 0.008*"men" + 0.008*"words" + 0.007*"sons" + 0.007*"race" + 0.006*"earth" + 0.005*"gods"')]

Topic Modeling - Attempt #3 (Nouns and Adjectives)

In [17]:
# Let's create a function to pull out nouns and adjectives from a string of text
def nouns_adj(text):
    '''Given a string of text, tokenize the text and pull out only the nouns and adjectives.'''
    is_noun_adj = lambda pos: pos[:2] == 'NN' or pos[:2] == 'JJ'
    tokenized = word_tokenize(text)
    nouns_adj = [word for (word, pos) in pos_tag(tokenized) if is_noun_adj(pos)] 
    return ' '.join(nouns_adj)
In [18]:
# Apply the nouns_adj function to the transcripts to filter on nouns and adjectives
data_nouns_adj = pd.DataFrame(data_clean.transcript.apply(nouns_adj))
data_nouns_adj
Out[18]:
transcript
Adi mahabharata krishnadwaipayana book adi parva e...
AshramavAsika mahabharata krishnadwaipayana book parva engli...
anushAsana mahabharata krishnadwaipayana book anusasana p...
ashvamedha mahabharata krishnadwaipayana book aswamedha p...
bhISma mahabharata krishnadwaipayana book bhishma par...
droNa mahabharata krishnadwaipayana book drona parva...
karNa mahabharata krishnadwaipayana book karnaparva ...
mahAprasthAnika mahabharata krishnadwaipayana book mahaprastha...
mausalA mahabharata krishnadwaipayana book mausalaparv...
sabhA mahabharata krishnadwaipayana book sabha parva...
sauptika mahabharata krishnadwaipayana book sauptikapar...
shalya mahabharata krishnadwaipayana book shalyaparva...
shanti mahabharata krishnadwaipayana book santi parva...
strI mahabharata krishnadwaipayana book striparva e...
svargArohaNika mahabharata krishnadwaipayana book svargarohan...
udyoga mahabharata krishnadwaipayana book udyoga parv...
vana mahabharata krishnadwaipayana book parva engli...
virAta mahabharata krishnadwaipayana book virata parv...
In [19]:
# Create a new document-term matrix using only nouns and adjectives, also remove common words with max_df
cvna = CountVectorizer(stop_words=stop_words, max_df=.8)
data_cvna = cvna.fit_transform(data_nouns_adj.transcript)
# Note: on scikit-learn >= 1.0, use cvna.get_feature_names_out() instead
data_dtmna = pd.DataFrame(data_cvna.toarray(), columns=cvna.get_feature_names())
data_dtmna.index = data_nouns_adj.index
data_dtmna
Out[19]:
abandon abandoned abandoner abandoning abandoningall abandonment abandons abashed abashment abate ... yuyutshu yuyutsu yuyutsus zanoni zarathustra zeal zenana zisyphus zodiac zone
Adi 0 0 0 0 0 4 0 1 1 0 ... 0 10 0 0 0 0 0 0 1 0
AshramavAsika 0 0 0 0 0 0 0 0 0 0 ... 0 6 0 0 0 0 0 0 0 0
anushAsana 0 0 0 0 0 1 1 0 0 1 ... 0 2 0 0 0 0 0 1 0 0
ashvamedha 0 0 0 0 0 1 1 0 0 0 ... 0 1 0 0 0 0 0 0 0 0
bhISma 1 1 2 0 0 17 1 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
droNa 2 0 0 1 0 0 0 0 0 0 ... 0 6 1 0 0 1 0 0 0 0
karNa 0 0 0 0 0 1 0 0 0 0 ... 0 7 1 0 0 0 0 0 0 0
mahAprasthAnika 0 0 0 0 0 2 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
mausalA 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
sabhA 1 0 0 0 0 1 0 0 0 0 ... 0 0 0 0 0 0 1 0 0 0
sauptika 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
shalya 0 0 0 0 0 0 1 0 0 0 ... 0 5 0 0 0 0 0 0 0 0
shanti 3 0 0 1 1 21 3 0 0 0 ... 0 6 0 1 0 3 0 0 0 0
strI 0 0 0 0 0 0 0 0 0 0 ... 0 1 0 0 0 0 0 0 0 0
svargArohaNika 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
udyoga 2 0 0 2 0 9 0 0 0 0 ... 0 2 0 0 0 0 0 0 1 0
vana 1 0 0 0 0 2 0 1 0 0 ... 1 0 0 0 1 0 0 0 0 0
virAta 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 1

18 rows × 28862 columns
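The `max_df=.8` argument drops any term that appears in more than 80% of the documents — corpus-specific "stop words" that no fixed list would catch. A quick demonstration on a toy corpus (assuming scikit-learn is installed; the document strings are illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = [
    "king battle arrows",
    "king duty soul",
    "king forest exile",
]

# 'king' appears in 3/3 documents (100% > 80%), so max_df=0.8 removes it
cv_demo = CountVectorizer(max_df=0.8)
cv_demo.fit(toy_docs)
print(sorted(cv_demo.vocabulary_))  # 'king' is gone
```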

In [20]:
# Create the gensim corpus
corpusna = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_dtmna.transpose()))

# Create the vocabulary dictionary
id2wordna = dict((v, k) for k, v in cvna.vocabulary_.items())
In [21]:
# Let's start with 2 topics
ldana = models.LdaModel(corpus=corpusna, num_topics=2, id2word=id2wordna, passes=10)
ldana.print_topics()
Out[21]:
[(0,
  '0.020*"arrows" + 0.007*"sanjaya" + 0.006*"bull" + 0.005*"carwarrior" + 0.005*"host" + 0.005*"section" + 0.004*"showers" + 0.003*"arrow" + 0.003*"warrior" + 0.003*"viz"'),
 (1,
  '0.006*"section" + 0.004*"vaisampayana" + 0.003*"viz" + 0.003*"brahman" + 0.002*"bull" + 0.002*"commentator" + 0.002*"emancipation" + 0.002*"verse" + 0.002*"parva" + 0.002*"thousand"')]
In [22]:
# Let's try 3 topics
ldana = models.LdaModel(corpus=corpusna, num_topics=3, id2word=id2wordna, passes=10)
ldana.print_topics()
Out[22]:
[(0,
  '0.006*"section" + 0.003*"viz" + 0.003*"commentator" + 0.003*"verse" + 0.003*"emancipation" + 0.003*"brahman" + 0.003*"vaisampayana" + 0.002*"felicity" + 0.002*"bull" + 0.002*"cow"'),
 (1,
  '0.008*"section" + 0.007*"vaisampayana" + 0.005*"parva" + 0.004*"bull" + 0.003*"monarchs" + 0.003*"arrows" + 0.003*"sanjaya" + 0.002*"drupada" + 0.002*"brahman" + 0.002*"tiger"'),
 (2,
  '0.025*"arrows" + 0.008*"sanjaya" + 0.006*"bull" + 0.006*"carwarrior" + 0.005*"host" + 0.005*"showers" + 0.004*"arrow" + 0.004*"combatants" + 0.004*"warrior" + 0.004*"section"')]
In [23]:
# Let's try 4 topics
ldana = models.LdaModel(corpus=corpusna, num_topics=4, id2word=id2wordna, passes=10)
ldana.print_topics()
Out[23]:
[(0,
  '0.017*"arrows" + 0.007*"section" + 0.007*"bull" + 0.007*"sanjaya" + 0.005*"host" + 0.004*"carwarrior" + 0.004*"viz" + 0.004*"panchalas" + 0.003*"vaisampayana" + 0.003*"showers"'),
 (1,
  '0.021*"arrows" + 0.007*"sanjaya" + 0.006*"shalya" + 0.005*"bull" + 0.005*"pancalas" + 0.005*"sutas" + 0.005*"madras" + 0.005*"keen" + 0.005*"carwarrior" + 0.004*"showers"'),
 (2,
  '0.006*"section" + 0.005*"vaisampayana" + 0.005*"parva" + 0.003*"cow" + 0.003*"brahman" + 0.003*"mahadeva" + 0.003*"bull" + 0.002*"thousand" + 0.002*"status" + 0.002*"rakshasa"'),
 (3,
  '0.006*"section" + 0.004*"viz" + 0.003*"verse" + 0.003*"emancipation" + 0.003*"commentator" + 0.003*"brahman" + 0.002*"vaisampayana" + 0.002*"mode" + 0.002*"religious" + 0.002*"qualities"')]

Identify Topics in Each Document

Out of the 9 topic models we looked at, the 4-topic model on nouns and adjectives made the most sense. So let's pull that one down here and run it through more iterations (more passes) to get more fine-tuned topics.

In [ ]:
# Our final LDA model (for now)
ldana = models.LdaModel(corpus=corpusna, num_topics=4, id2word=id2wordna, passes=80)
ldana.print_topics()

These topics look pretty decent; three of the four are readily interpretable. Let's settle on these for now.

  • Topic 1: war narrative
  • Topic 2: kin affinity
  • Topic 3: spirituality
In [ ]:
# Let's take a look at which topics each transcript contains
corpus_transformed = ldana[corpusna]
# Note: the unpacking below assumes gensim returns exactly one dominant
# (topic, probability) pair per document; it will raise a ValueError if
# a document is spread across multiple topics
list(zip([a for [(a, b)] in corpus_transformed], data_dtmna.index))

For a first pass at LDA, these topics kind of make sense to me, so we'll call it a day for now.

Additional Exercises

  1. Try further modifying the parameters of the topic models above and see if you can get better topics.
  2. Create a new topic model that includes terms from a different part of speech and see if you can get better topics.