Imposing a cap on word count in scikit-learn - python

I'm analyzing song lyrics where repetition doesn't necessarily mean higher importance, so I'd like to cap the word count per document. For example, if a word appears n times in a song, where n > threshold, then I would replace n with threshold.
I've checked the CountVectorizer docs, and there are options for min_df and max_df, but these only disregard words based on how many documents they appear in, not on how many times they appear within a single document.
I was thinking of changing the elements of the sparse matrix (say, find all elements > threshold, then replace), but I couldn't find a way to do that either. Thanks in advance!

I don't know of any prebuilt feature in scikit-learn for this, but you can definitely edit your doc-term matrix directly, with numpy.where for example:
x = numpy.where(x < threshold, x, threshold)
where x is your doc-term matrix and threshold is, well, your threshold.
EDIT:
I hadn't realized numpy.where doesn't work on scipy sparse matrices. You can use the find function from scipy.sparse, which returns the row indices, column indices, and values of all nonzero entries in a sparse matrix, to access and modify those values directly:
from scipy.sparse import find

results = find(x > threshold)
for i in range(len(results[0])):
    x[results[0][i], results[1][i]] = threshold
It's significantly less elegant but it works.
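A more vectorized alternative (a sketch, assuming x is the CSR matrix produced by CountVectorizer and threshold is a positive count): since only the stored nonzero entries can exceed the threshold, you can clip the matrix's .data array in place:
import numpy as np

# .data holds only the stored (nonzero) counts, so clipping it caps every
# count at the threshold without touching the sparsity structure
x.data = np.minimum(x.data, threshold)
This avoids the Python loop and keeps the matrix sparse.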

Related

In count vectorizer which axis to use?

I want to create a document-term matrix. In my case it is not documents x words but sentences x words, so the sentences act as the documents. I am applying 'l2' normalization after creating the doc-term matrix.
The term counts are important to me because I will build a summarization using SVD in further steps.
My question is which axis is appropriate for the 'l2' normalization. From my research I understood:
Axis=1: gives me the importance of a word within a sentence (each row, i.e. each sentence, is normalized)
Axis=0: gives me the importance of a word across the whole document (each column, i.e. each term, is normalized)
Even after knowing the theory I am not able to decide which alternative to choose, because the choice will greatly affect my summarization results. So kindly suggest a solution along with the reasoning behind it.
By L2 normalization, do you mean division by the total count?
If you normalize along axis=0, then the value of x_{i,j} is the share of word j that falls in sentence i (division by the global count of that word). This depends on sentence length: longer sentences can repeat some words over and over, so they contribute a lot to the global count and end up with much higher values for those words.
If you normalize along axis=1, then you're asking whether sentences have the same composition of words, since you normalize along the length of each sentence.
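As a concrete illustration (a sketch, not part of the original answer; it assumes a sentences x words count matrix from CountVectorizer), the two options with sklearn.preprocessing.normalize look like this:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import normalize

sentences = ['the cat sat', 'the cat sat on the mat', 'dogs bark']
X = CountVectorizer().fit_transform(sentences)   # sentences x words, sparse

X_rows = normalize(X, norm='l2', axis=1)  # each sentence (row) gets unit L2 norm
X_cols = normalize(X, norm='l2', axis=0)  # each term (column) gets unit L2 norm
Trying both on your own data and inspecting the resulting summaries is the quickest way to see which choice serves your pipeline better.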

What is the output of Spark MLlib LDA topicsMatrix?

The output of LDAModel.topicsMatrix() is unclear to me.
I think I understand the concept of LDA and that each topic is represented by a distribution over terms.
In the LDAModel.describeTopics() it is clear (I think):
The highest sum of likelihoods of the words of a sentence per topic indicates the evidence of that tweet belonging to a topic.
With n topics, the output of describeTopics() is an n times m matrix, where m stands for the size of the vocabulary. The values in this matrix are smaller than or equal to 1.
However in the LDAModel.topicsMatrix(), I have no idea what I am looking at. The same holds when reading the documentation.
The matrix is an m times n matrix; the dimensions have changed, and the values in it are larger than zero (and can, for example, take the value 2, which is not a probability). What are these values? The occurrence count of this word in the topic, perhaps?
How do I use these values do calculate the distance of a sentence to a topic?
I think the matrix is m x n, where m is the number of words (the vocabulary size) and n is the number of topics.
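To relate it back to probabilities (a sketch, not from the original answer; it assumes you have pulled topicsMatrix() into a NumPy array, e.g. via toArray(), with terms as rows and topics as columns, and that the values are unnormalized weights rather than probabilities, as the question observes), normalizing each column gives a per-topic distribution over words:
import numpy as np

# dummy stand-in for the m x n topics matrix: rows = terms, columns = topics,
# entries = unnormalized topic-term weights
m, n = 1000, 10
topics = np.random.rand(m, n) * 5

# each column now sums to 1 and can be read as P(word | topic k)
topic_term_dist = topics / topics.sum(axis=0, keepdims=True)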

Finding k most similar documents

I have some documents and I'd like to find the k documents most similar to a selected document. For the sake of a reproducible example, let's say k is 1 and my documents are these
documents = ['Two roads diverged in a yellow wood,',
             'And sorry I could not travel both',
             'And be one traveler, long I stood',
             'And looked down one as far as I could',
             'To where it bent in the undergrowth']
Then I think what I want to do is the below. (I'm using CountVectorizer for transparency and simplicity, even though maybe later I'd want to use Tf-Idf and a hashing vectorizer.)
from sklearn.feature_extraction.text import CountVectorizer
import numpy as np
vectorizer = CountVectorizer(analyzer='word')
ft = vectorizer.fit_transform(documents)
one_doc = documents[1]
one_doc_code = vectorizer.transform([one_doc])
doc_match = np.matrix(ft) * np.matrix(one_doc_code.transpose())
and now doc_match is a column vector with weights that indicate closeness of match (0 = bad match, 1 = perfect match). But in order to do the multiplication, I (in desperation, in the face of element-wise multiplication) converted to a numpy matrix, so now I have this CSR-format matrix that doesn't have a todense() member (so I can't just look at it, not that that would scale beyond my tiny example).
What I think I want now (but haven't been able to figure out so far) is how to say "what are the indices of the top k elements of doc_match?" (even if k is not 1).
If all you want are the indices in doc_match that have the highest score, you can do:
scores = np.asarray(doc_match).ravel()      # flatten the column vector of scores
sorted_indices = np.argsort(scores)[::-1]   # best match first
top_k_indices = sorted_indices[:k]
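Alternatively (a sketch, not the original poster's code): sklearn's cosine_similarity works directly on the sparse output of CountVectorizer, avoids converting to numpy.matrix, and normalizes for document length at the same time:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

documents = ['Two roads diverged in a yellow wood,',
             'And sorry I could not travel both',
             'And be one traveler, long I stood',
             'And looked down one as far as I could',
             'To where it bent in the undergrowth']
k = 1

vectorizer = CountVectorizer(analyzer='word')
ft = vectorizer.fit_transform(documents)

# similarity of document 1 to every document, as a flat array
sims = cosine_similarity(ft[1], ft).ravel()
sims[1] = -1                          # exclude the document itself
top_k = np.argsort(sims)[::-1][:k]    # indices of the k most similar documents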

How to convert co-occurrence matrix to sparse matrix

I am starting to deal with sparse matrices, so I'm not really proficient on this topic. My problem is, I have a simple co-occurrence matrix from a word list: just a 2-dimensional word-by-word matrix counting how many times a word occurs in the same context as another. The matrix is quite sparse since the corpus is not that big. I want to convert it to a sparse matrix to be able to deal with it better, and eventually do some matrix multiplication afterwards. Here is what I have done until now (only the first part; the rest is just output formatting and cleaning data):
def matrix(from_corpus):
    d = defaultdict(lambda: defaultdict(int))
    heads = set()
    trans = set()
    for text in from_corpus:
        d[text[0]][text[1]] += 1
        heads.add(text[0])
        trans.add(text[1])
    return d, heads, trans
My idea would be to make a new function:
def matrix_to_sparse(d):
    A = sparse.lil_matrix(d)
Does this make any sense? This is, however, not working, and I can't see how to get a sparse matrix out of it. Should I rather work with numpy arrays? What would be the best way to do this? I want to compare many ways of dealing with matrices.
It would be nice if someone could point me in the right direction.
Here's how you construct a document-term matrix A from a set of documents in SciPy's COO format, which is a good tradeoff between ease of use and efficiency(*):
import scipy.sparse

vocabulary = {}  # map terms to column indices
data = []        # values (maybe weights)
row = []         # row (document) indices
col = []         # column (term) indices

for i, doc in enumerate(documents):
    for term in doc:
        # get column index, adding the term to the vocabulary if needed
        j = vocabulary.setdefault(term, len(vocabulary))
        data.append(1)  # uniform weights
        row.append(i)
        col.append(j)

A = scipy.sparse.coo_matrix((data, (row, col)))
Now, to get a cooccurrence matrix:
A.T * A
(ignore the diagonal, which holds co-occurrences of terms with themselves, i.e. squared frequencies).
Alternatively, use some package that does this kind of thing for you, such as Gensim or scikit-learn. (I'm a contributor to both projects, so this might not be unbiased advice.)
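If you would rather keep the nested-dict counts you already have (a sketch, assuming the d, heads, trans returned by the matrix() function in the question), you can pour them straight into a COO word-by-word matrix:
import scipy.sparse

def dict_to_sparse(d, heads, trans):
    # fix an ordering for the rows (head words) and the columns (context words)
    head_index = {w: i for i, w in enumerate(sorted(heads))}
    tran_index = {w: j for j, w in enumerate(sorted(trans))}

    data, row, col = [], [], []
    for head, contexts in d.items():
        for tran, count in contexts.items():
            data.append(count)
            row.append(head_index[head])
            col.append(tran_index[tran])

    shape = (len(head_index), len(tran_index))
    return scipy.sparse.coo_matrix((data, (row, col)), shape=shape)
Convert the result with .tocsr() before doing matrix multiplication, since CSR is the format scipy handles arithmetic in efficiently.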

Fast algorithm to search for pattern within text file

I have an array of doubles, roughly 200,000 rows by 100 columns, and I'm looking for a fast algorithm to find the rows that contain sequences most similar to a given pattern (the pattern can be anywhere from 10 to 100 elements). I'm using python, so the brute force method (code below: looping over each row and starting column index, and computing the Euclidean distance at each point) takes around three minutes.
The numpy.correlate function promises to solve this problem much faster (running over the same dataset in less than 20 seconds). However, it simply computes a sliding dot product of the pattern over the full row, meaning that to compare similarity I'd have to normalize the results first. Normalizing the cross-correlation requires computing the standard deviation of each slice of the data, which instantly negates the speed improvement of using numpy.correlate in the first place.
Is it possible to compute normalized cross-correlation quickly in python? Or will I have to resort to coding the brute force method in C?
import numpy as np

def norm_corr(x, y, mode='valid'):
    ya = np.array(y)
    slices = [x[pos:pos + len(y)] for pos in range(len(x) - len(y) + 1)]
    return [np.linalg.norm(np.array(z) - ya) for z in slices]

similarities = [norm_corr(arr, pointarray) for arr in arraytable]
If your data is in a 2D Numpy array, you can take a 2D slice from it (200000 rows by len(pattern) columns) and compute the norm for all the rows at once. Then slide the window to the right in a for loop.
ROWS = 200000
COLS = 100
PATLEN = 20

# random data for example's sake
a = np.random.rand(ROWS, COLS)
pattern = np.random.rand(PATLEN)

# one column per window start position (there are COLS - PATLEN + 1 of them)
tmp = np.empty([ROWS, COLS - PATLEN + 1])
for i in range(COLS - PATLEN + 1):
    window = a[:, i:i + PATLEN]
    tmp[:, i] = np.sum((window - pattern)**2, axis=1)
result = np.sqrt(tmp)
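If your NumPy is 1.20 or newer, the explicit loop can be avoided entirely (a sketch, not part of the original answer) with sliding_window_view, which exposes every window as a view without copying the data:
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

ROWS, COLS, PATLEN = 200000, 100, 20
a = np.random.rand(ROWS, COLS)
pattern = np.random.rand(PATLEN)

# all windows as a view: shape (ROWS, COLS - PATLEN + 1, PATLEN)
windows = sliding_window_view(a, window_shape=PATLEN, axis=1)

# Euclidean distance of every window to the pattern in one shot
# (the subtraction materializes a ROWS x (COLS-PATLEN+1) x PATLEN temporary,
# so process the rows in chunks if memory is tight)
result = np.linalg.norm(windows - pattern, axis=2)

# row and starting column of the closest match
best_row, best_col = np.unravel_index(np.argmin(result), result.shape)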
