Why is collections.Counter much slower than ''.count?

I have a simple task: to count how many times each letter occurs in a string. I've used a Counter() for it, but on one forum I saw that using dict()/Counter() is much slower than calling string.count() for every letter. I thought Counter would iterate through the string only once, while the string.count() solution has to iterate through it four times (in this case). Why is Counter() so slow?
>>> timeit.timeit('x.count("A");x.count("G");x.count("C");x.count("T")', setup="x='GAAAAAGTCGTAGGGTTCCTTCACTCGAGGAATGCTGCGACAGTAAAGGAGGCCACGTGGTTGAGAGTTCCTAAGCATTCGTATGTACACCCGGACTCGATGCACTCAAACGTGCTTAAGGGTAAAGAAGGTCGAGAGGTATACTGGGGCACTCCCCTTAGAATTATATCTTGGTCAACTACAATATGGATGGAAATTCTAAGCCGAAAACGACCCGCTAGCGGATTGTGTATGTATCACAACGGTTTCGGTTCATACGCAAAATCATCCCATTTCAAGGCCACTCAAGGACATGACGCCGTGCAACTCCGAGGACATCCCTCAGCGATTGATGCAACCTGGTCATCTAATAATCCTTAGAACGGATGTGCCCTCTACTGGGAGAGCCGGCTAGACTGGCATCTCGCGTTGTTCGTACGAGCTCCGGGCGCCCGGGCGGTGTACGTTGATGTACAGCCTAAGAGCTTTCCACCTATGCTACGAACTAATTTCCCGTCCATCGTTCCTCGGACTGAGGTCAAAGTAACCCGGAAGTACATGGATCAGATACACTCACAGTCCCCTTTAATGACTGAGCTGGACGCTATTGATTGCTTTATAAGTGTTATGGTGAACTCGAAGACTTAGCTAGGAATTTCGCTATACCCGGGTAATGAGCTTAATACCTCACAGCATGTACGCTCTGAATATATGTAGCGATGCTAGCGGAACGTAAGCGTGAGCGTTATGCAGGGCTCCGCACCTCGTGGCCACTCGCCCAATGCCCGAGTTTTTGAGCAATGCCATGCCCTCCAGGTGAAGCGTGCTGAATATGTTCCGCCTCCGCACACCTACCCTACGGGCCTTACGCCATAGCTGAGGATACGCGAGTTGGTTAGCGATTACGTCATTCCAGGTGGTCGTTC'", number=10000)
0.07911698750407936
>>> timeit.timeit('Counter(x)', setup="from collections import Counter;x='GAAAAAGTCGTAGGGTTCCTTCACTCGAGGAATGCTGCGACAGTAAAGGAGGCCACGTGGTTGAGAGTTCCTAAGCATTCGTATGTACACCCGGACTCGATGCACTCAAACGTGCTTAAGGGTAAAGAAGGTCGAGAGGTATACTGGGGCACTCCCCTTAGAATTATATCTTGGTCAACTACAATATGGATGGAAATTCTAAGCCGAAAACGACCCGCTAGCGGATTGTGTATGTATCACAACGGTTTCGGTTCATACGCAAAATCATCCCATTTCAAGGCCACTCAAGGACATGACGCCGTGCAACTCCGAGGACATCCCTCAGCGATTGATGCAACCTGGTCATCTAATAATCCTTAGAACGGATGTGCCCTCTACTGGGAGAGCCGGCTAGACTGGCATCTCGCGTTGTTCGTACGAGCTCCGGGCGCCCGGGCGGTGTACGTTGATGTACAGCCTAAGAGCTTTCCACCTATGCTACGAACTAATTTCCCGTCCATCGTTCCTCGGACTGAGGTCAAAGTAACCCGGAAGTACATGGATCAGATACACTCACAGTCCCCTTTAATGACTGAGCTGGACGCTATTGATTGCTTTATAAGTGTTATGGTGAACTCGAAGACTTAGCTAGGAATTTCGCTATACCCGGGTAATGAGCTTAATACCTCACAGCATGTACGCTCTGAATATATGTAGCGATGCTAGCGGAACGTAAGCGTGAGCGTTATGCAGGGCTCCGCACCTCGTGGCCACTCGCCCAATGCCCGAGTTTTTGAGCAATGCCATGCCCTCCAGGTGAAGCGTGCTGAATATGTTCCGCCTCCGCACACCTACCCTACGGGCCTTACGCCATAGCTGAGGATACGCGAGTTGGTTAGCGATTACGTCATTCCAGGTGGTCGTTC'", number=10000)
2.1727447831030844
>>> 2.1727447831030844 / 0.07911698750407936
27.462430656767047

Counter() allows you to count any hashable objects, not just substrings. Both solutions are O(n) time. Your measurements show that the overhead of iterating over and hashing individual characters in Counter() is greater than that of running s.count() four times.
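(Not part of the original answer, but as a sketch of a middle ground: for per-character counts over a small alphabet, you can keep the counting loop in C by calling str.count once per distinct character.)

def char_counts(s):
    # str.count does the O(n) scan in C; Python only loops over the
    # distinct characters (at most 4 for a DNA string like this one)
    return {c: s.count(c) for c in set(s)}

print(char_counts("GATTACA"))  # e.g. {'A': 3, 'G': 1, 'T': 2, 'C': 1}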
Counter() can use a C helper to count elements, but it seems it doesn't special-case strings and uses a general algorithm applicable to any iterable, i.e., processing a single character involves multiple Python C API calls: advance the iterator, get the previous value (a lookup in the hash table), increment the counter, set the new value (another lookup in the hash table):
while (1) {
    key = PyIter_Next(it);
    if (key == NULL)
        break;
    oldval = PyObject_GetItem(mapping, key);
    if (oldval == NULL) {
        if (!PyErr_Occurred() || !PyErr_ExceptionMatches(PyExc_KeyError))
            break;
        PyErr_Clear();
        Py_INCREF(one);
        newval = one;
    } else {
        newval = PyNumber_Add(oldval, one);
        Py_DECREF(oldval);
        if (newval == NULL)
            break;
    }
    if (PyObject_SetItem(mapping, key, newval) == -1)
        break;
    Py_CLEAR(newval);
    Py_DECREF(key);
}
Compare it to FASTSEARCH() overhead for bytestrings:
for (i = 0; i < n; i++)
    if (s[i] == p[0]) {
        count++;
        if (count == maxcount)
            return maxcount;
    }
return count;

The Counter class inherits from dict, while string.count is implemented by the following C code (CPython 3.3):
/* stringlib: count implementation */

#ifndef STRINGLIB_FASTSEARCH_H
#error must include "stringlib/fastsearch.h" before including this module
#endif

Py_LOCAL_INLINE(Py_ssize_t)
STRINGLIB(count)(const STRINGLIB_CHAR* str, Py_ssize_t str_len,
                 const STRINGLIB_CHAR* sub, Py_ssize_t sub_len,
                 Py_ssize_t maxcount)
{
    Py_ssize_t count;

    if (str_len < 0)
        return 0; /* start > len(str) */
    if (sub_len == 0)
        return (str_len < maxcount) ? str_len + 1 : maxcount;

    count = FASTSEARCH(str, str_len, sub, sub_len, maxcount, FAST_COUNT);

    if (count < 0)
        return 0; /* no match */

    return count;
}
Guess which one is faster? :)

Related

How is python's float.__eq__ implemented in the language?

I know that the best way to compare two floats for equality is usually to use math.isclose(float_a, float_b). But I was curious to know how Python does it if you simply do float_a == float_b.
I suppose it's implemented in C, but what is the logic behind it?
Here is the source code for float object comparisons.
It looks super complex, but that complexity is mostly in handling the case where a float is compared to an int (int objects in Python are arbitrary-precision; they aren't C ints wrapped in a Python object).
But for the simple case of float and float:
static PyObject*
float_richcompare(PyObject *v, PyObject *w, int op)
{
    double i, j;
    int r = 0;

    assert(PyFloat_Check(v));
    i = PyFloat_AS_DOUBLE(v);

    /* Switch on the type of w.  Set i and j to doubles to be compared,
     * and op to the richcomp to use.
     */
    if (PyFloat_Check(w))
        j = PyFloat_AS_DOUBLE(w);
So it just extracts two C doubles from the float objects, then (skipping all the int-handling stuff) compares them:
switch (op) {
case Py_EQ:
    r = i == j;
    break;
case Py_NE:
    r = i != j;
    break;
case Py_LE:
    r = i <= j;
    break;
case Py_GE:
    r = i >= j;
    break;
case Py_LT:
    r = i < j;
    break;
case Py_GT:
    r = i > j;
    break;
}
return PyBool_FromLong(r);
It just does a C-level == comparison, ultimately. So it does not do math.isclose(float_a, float_b) under the hood.
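To make the difference concrete (my example, not from the original answer):

import math

# == compares the exact binary doubles, so accumulated rounding
# error makes the first comparison False:
print(0.1 + 0.2 == 0.3)               # False
print(math.isclose(0.1 + 0.2, 0.3))   # True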

Google Coding Challenge Question 2020 : Unspecified Words

I got the following problem for the Google Coding Challenge which happened on 16th August 2020. I tried to solve it but couldn't.
There are N words in a dictionary such that each word is of fixed
length M and consists only of lowercase English letters, that is
('a', 'b', ..., 'z'). A query word is denoted by Q. The length of a
query word is M. Query words contain lowercase English letters, but in
some places, instead of a letter between 'a', 'b', ..., 'z', there is
a '?'. Refer to the Sample Input section to understand this case. The
match count of Q, denoted by match_count(Q), is the count of words
that are in the dictionary and contain the same English letters
(excluding any letter that is in the position of a '?') in the same
positions as the letters in the query word Q. In other words, a word
in the dictionary may contain any letter at the positions of '?', but
the remaining letters must match the query word.
You are given a query word Q and you are required to compute
match_count.
Input Format
The first line contains two space-separated integers N and M denoting the number of words in the dictionary and length of each word
respectively.
The next N lines contain one word each from the dictionary.
The next line contains an integer Q denoting the number of query words for which you have to compute match_count.
The next Q lines contain one query word each.
Output Format: For each query word, print its match_count on a new line.
Constraints
1 <= N <= 5x10^4
1 <= M <= 7
1 <= Q <= 10^5
So, I got 30 minutes for this question, and I wrote the following code, which is incorrect and hence didn't give the expected output.
def Solve(N, M, Words, Q, Query):
    output = []
    count = 0
    for i in range(Q):
        x = Query[i].split('?')
        for k in range(N):
            if x in Words:
                count += 1
            else:
                pass
        output.append(count)
    return output
N, M = map(int, input().split())
Words = []
for _ in range(N):
    Words.append(input())
Q = int(input())
Query = []
for _ in range(Q):
    Query.append(input())
out = Solve(N, M, Words, Q, Query)
for x in out:
    print(x)
Can somebody help me with some pseudocode or algorithm which can solve this problem, please?
I guess my first try would have been to replace the ? with a . in the query, i.e. change ?at to .at, and then use those as regular expressions and match them against all the words in the dictionary, something as simple as this:
import re

for q in queries:
    p = re.compile(q.replace("?", "."))
    print(sum(1 for w in words if p.match(w)))
However, seeing the input sizes as N up to 5x10^4 and Q up to 10^5, this might be too slow, just as any other algorithm comparing all pairs of words and queries.
On the other hand, note that M, the number of letters per word, is constant and rather low. So instead, you could create Mx26 sets of words for all letters in all positions and then get the intersection of those sets.
from collections import defaultdict
from functools import reduce

M = 3
words = ["cat", "map", "bat", "man", "pen"]
queries = ["?at", "ma?", "?a?", "??n"]

sets = defaultdict(set)
for word in words:
    for i, c in enumerate(word):
        sets[i, c].add(word)
all_words = set(words)

for q in queries:
    possible_words = (sets[i, c] for i, c in enumerate(q) if c != "?")
    w = reduce(set.intersection, possible_words, all_words)
    print(q, len(w), w)
In the worst case (a query that has a non-? letter that is common to most or all words in the dictionary) this may still be slow, but should be much faster in filtering down the words than iterating all the words for each query. (Assuming random letters in both words and queries, the set of words for the first letter will contain N/26 words, the intersection for the first two has N/26² words, etc.)
This could probably be improved a bit by taking the different cases into account, e.g. (a) if the query does not contain any ?, just check whether it is in the set (!) of words without creating all those intersections; (b) if the query is all-?, just return the set of all words; and (c) sort the possible-words-sets by size and start the intersection with the smallest sets first to reduce the size of temporarily created sets.
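A minimal sketch of those three special cases, reusing the sets, all_words, and queries from the snippet above (the helper name match_count is mine):

def match_count(q):
    if "?" not in q:
        return int(q in all_words)        # case (a): plain set lookup
    keys = [(i, c) for i, c in enumerate(q) if c != "?"]
    if not keys:
        return len(all_words)             # case (b): all-? query
    # case (c): intersect the smallest candidate sets first
    candidates = sorted((sets[k] for k in keys), key=len)
    w = candidates[0]
    for s in candidates[1:]:
        w = w & s
        if not w:                         # early exit on empty result
            break
    return len(w)

print([match_count(q) for q in queries])  # [2, 2, 4, 2]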
About time complexity: To be honest, I am not sure what time complexity this algorithm has. With N, Q, and M being the number of words, number of queries, and length of words and queries, respectively, creating the initial sets will have complexity O(N*M). After that, the complexity of the queries obviously depends on the number of non-? in the queries (and thus the number of set intersections to create), and the average size of the sets. For queries with zero, one, or M non-? characters, the query will execute in O(M) (evaluating the situation and then a single set/dict lookup), but for queries with two or more non-?-characters, the first set intersections will have on average complexity O(N/26), which strictly speaking is still O(N). (All following intersections will only have to consider N/26², N/26³ etc. elements and are thus negligible.) I don't know how this compares to The Trie Approach and would be very interested if any of the other answers could elaborate on that.
This question can be solved with a trie data structure.
First, add all the words to the trie.
Then, for each query, check whether the word is present in the trie. There's a special condition for '?': if the character is '?', branch to every child and continue with the next character of the word.
I think this approach will work; there's a similar question on LeetCode.
Link : https://leetcode.com/problems/design-add-and-search-words-data-structure/
It should be an O(N) time and space approach, given that M is small and can be considered constant. You might want to look at an implementation of a trie here.
Perform a first pass and store the words in a trie.
Then, for each query, perform a combination of DFS and BFS in the following order:
If you see a '?', perform BFS and add all the children.
For a non-'?', perform DFS on the matching child, which points to the existence of a word.
For further optimization, a suffix tree may also be used as the storage data structure.
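A minimal Python sketch of that BFS-on-'?' / DFS-on-letter idea (my own illustration, not the answerer's code; it relies on all words having the same fixed length M, so no end-of-word flag is needed):

def add_word(trie, word):
    # walk down, creating nested dicts for missing children
    for ch in word:
        trie = trie.setdefault(ch, {})

def match_count(trie, q):
    if not q:
        return 1  # consumed the whole query: one matching word
    first, rest = q[0], q[1:]
    if first == "?":
        # BFS-style: branch into every child
        return sum(match_count(child, rest) for child in trie.values())
    # DFS-style: follow the single matching child, if any
    return match_count(trie[first], rest) if first in trie else 0

trie = {}
for w in ["cat", "map", "bat", "man", "pen"]:
    add_word(trie, w)

print(match_count(trie, "?at"))  # 2 (cat, bat)
print(match_count(trie, "??n"))  # 2 (man, pen)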
You can use a simplified version of a trie since the query string has a pre-defined length: there's no need for an end-of-word flag in the trie node.
#include <bits/stdc++.h>
using namespace std;

typedef struct TrieNode_ {
    struct TrieNode_* nxt[26];
} TrieNode;

void addWord(TrieNode* root, string s) {
    TrieNode* node = root;
    for (int i = 0; i < s.size(); ++i) {
        if (node->nxt[s[i] - 'a'] == NULL) {
            // value-initialize so all child pointers start out NULL
            node->nxt[s[i] - 'a'] = new TrieNode();
        }
        node = node->nxt[s[i] - 'a'];
    }
}

void matchCount(TrieNode* root, string s, int& cnt) {
    if (root == NULL) {
        return;
    }
    if (s.empty()) {
        ++cnt;
        return;
    }
    TrieNode* node = root;
    if (s[0] == '?') {
        for (int i = 0; i < 26; ++i) {
            matchCount(node->nxt[i], s.substr(1), cnt);
        }
    }
    else {
        matchCount(node->nxt[s[0] - 'a'], s.substr(1), cnt);
    }
}

int main() {
    int N, M;
    cin >> N >> M;
    vector<string> s(N);
    TrieNode* root = new TrieNode();
    for (int i = 0; i < N; ++i) {
        cin >> s[i];
        addWord(root, s[i]);
    }
    int Q;
    cin >> Q;
    for (int i = 0; i < Q; ++i) {
        string queryString;
        int cnt = 0;
        cin >> queryString;
        matchCount(root, queryString, cnt);
        cout << cnt << endl;
    }
}
Notes: 1. This code doesn't read the input but instead takes its parameters from the main method.
2. For large inputs, we could use Java 8 streams to parallelize the search and improve performance.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WordSearch {

    private void matchCount(int N, int M, int Q, String[] words, String[] queries) {
        Pattern p = null;
        Matcher m = null;
        int count = 0;
        for (int i = 0; i < Q; i++) {
            p = Pattern.compile(queries[i].replace('?', '.'));
            for (int j = 0; j < N; j++) {
                m = p.matcher(words[j]);
                if (m.find()) {
                    count++;
                }
            }
            System.out.println("For query word '" + queries[i] + "', the count is: " + count);
            count = 0;
        }
        System.out.println("\n");
    }

    public static void main(String[] args) {
        WordSearch ws = new WordSearch();
        int N = 5; int M = 3; int Q = 4;
        String[] w = new String[] {"cat", "map", "bat", "man", "pen"};
        String[] q = new String[] {"?at", "ma?", "?a?", "??n"};
        ws.matchCount(N, M, Q, w, q);

        w = new String[] {"lxzev", "uqqur", "ydfgz"};
        q = new String[] {"?z???", "???i?", "???e?", "???f?", "?z???"};
        N = 3; M = 5; Q = 5;
        ws.matchCount(N, M, Q, w, q);
    }
}
I can think of a kind of trie with BFS for the lookup:
from queue import SimpleQueue

class Node:
    def __init__(self, letter):
        self.letter = letter
        self.children = {}

    @classmethod
    def construct(cls):
        return cls(letter=None)

    def add_word(self, word):
        current = self
        for letter in word:
            if letter not in current.children:
                node = Node(letter)
                current.children[letter] = node
            else:
                node = current.children[letter]
            current = node

    def lookup_word(self, word, m):
        def _lookup_next_letter(_letter, _node):
            if _letter == '?':
                for node in _node.children.values():
                    q.put((node, i))
            elif _letter in _node.children:
                q.put((_node.children[_letter], i))

        q = SimpleQueue()
        count = 0
        i = 0
        current = self
        letter = word[i]
        i += 1
        _lookup_next_letter(letter, current)
        while not q.empty():
            current, i = q.get()
            if i == m:
                count += 1
                continue
            letter = word[i]
            i += 1
            _lookup_next_letter(letter, current)
        return count

    def __eq__(self, other):
        return self.letter == other.letter if isinstance(other, Node) else other

    def __hash__(self):
        return hash(self.letter)
I would create a lookup table for each letter of each word, and then use that table to iterate with. While the lookup table will cost O(NM) memory (or 15 entries in the situation shown), it will allow an easy O(NM) time complexity to be implemented, with a best case O(log N * log M).
The lookup table can be stored in the form of a coordinate plane. Each letter will have an "x" position (the letters index) as well as a "y" position (the words index in the dictionary). This will allow a quick cross reference from the query to look up a letter's position for existence and the word's position for eligibility.
Worst case, this approach has a time complexity O(NM) whereby there must be N iterations, one for each dictionary entry, times M iterations, one for each letter in each entry. In many cases it will skip the lookups though.
A coordinate system is also created, which also has O(NM) spatial complexity.
I'm unfamiliar with Python, so this is written in JavaScript, which was as close as I could come language-wise. Hopefully this at least serves as an example of a possible solution.
In addition, I included a heavily loaded section to use for performance comparisons. It takes about 5 seconds to complete a set with 2000 words and 5000 queries, each at a length of 200.
// Main function running the analysis
function run(dict, qs) {
  // Use a coordinate system for tracking the letter and position
  var coordinates = 'abcdefghijklmnopqrstuvwxyz'.split('').reduce((p, c) => (p[c] = {}, p), {});
  // Populate the system
  for (var i = 0; i < dict.length; i++) {
    // Current word in the given dictionary
    var dword = dict[i];
    // Iterate the letters for tracking
    for (var j = 0; j < dword.length; j++) {
      // Current letter in our current word
      var letter = dword[j];
      // Make sure that there is object existence for assignment
      coordinates[letter][j] = coordinates[letter][j] || {};
      // Note the letter's coordinate by storing its array
      // position (i) as well as its letter position (j)
      coordinates[letter][j][i] = 1;
    }
  }
  // Look up the word letter by letter in our coordinate system
  function match_count(Q) {
    // Create an object which maps from the dictionary indices
    // to a truthy value of 1 for tracking successful matches
    var availLookup = dict.reduce((p, _, i) => (p[i] = 1, p), {});
    // Iterate the letters of Q to check against the coordinate system
    for (var i = 0; i < Q.length; i++) {
      // Current letter in Q
      var letter = Q[i];
      // Skip '?' characters
      if (letter == '?') continue;
      // Look up the existence of "points" in our coordinate system for
      // the current letter
      var points = coordinates[letter];
      // If nothing from the dictionary matches in this position,
      // then there are no matches anywhere and we return a 0
      if (!points || !points[i]) return 0;
      // Iterate the availability truth table made earlier
      // and look up whether any points in our coordinate system
      // are present for the current letter. If they are, the word
      // remains; if not, it is removed from consideration.
      for (var n in availLookup) {
        if (!points[i][n]) delete availLookup[n];
      }
    }
    // Sum the "truthy" 1 values we used earlier to determine the count of
    // matched words
    return Object.values(availLookup).reduce((x, y) => x + y, 0);
  }
  var matches = [];
  for (var i = 0; i < qs.length; i++) {
    matches.push(match_count(qs[i]));
  }
  return matches;
}

document.querySelector('button').onclick = _ => {
  console.clear();
  var d1 = ['cat', 'map', 'bat', 'man', 'pen'];
  var q1 = ['?at', 'ma?', '?a?', '??n'];
  console.log('running...');
  console.log(run(d1, q1));

  var d2 = ['uqqur', 'lxzev', 'ydfgz'];
  var q2 = ['?z???', '???i?', '???e?', '???f?', '?z???'];
  console.log('running...');
  console.log(run(d2, q2));

  // Load it up (try this with other versions to compare efficiency)
  var d3 = [];
  var q3 = [];
  var wordcount = 2000;
  var querycount = 5000;
  var len = 200;
  var alphabet = 'abcdefghijklmnopqrstuvwxyz'.split('');
  for (var i = 0; i < wordcount; i++) {
    var word = "";
    for (var n = 0; n < len; n++) {
      var rand = (Math.random() * 25) | 0;
      word += alphabet[rand];
    }
    d3.push(word);
  }
  for (var i = 0; i < querycount; i++) {
    var qword = d3[(Math.random() * (wordcount - 1)) | 0];
    var query = "";
    for (var n = 0; n < len; n++) {
      var rand = (Math.random() * 100) | 0;
      // occasionally use a random (likely non-matching) letter;
      // otherwise take the source word's letter or a '?'
      if (rand > 98) { query += alphabet[(Math.random() * 25) | 0]; }
      else { query += rand > 75 ? qword[n] : '?'; }
    }
    q3.push(query);
  }
  if (document.querySelector('input').checked) {
    //console.log(d3,q3);
    console.log('running...');
    console.log(run(d3, q3).reduce((x, y) => x + y, 0) + ' matches');
  }
};
<input type=checkbox>Include the ~5 second larger version<br>
<button type=button>run</button>
I don't know Python, but the gist of the naive algorithm looks like this:
# count how many words in the Words list match a single query
def DoQuery(Words, OneQuery):
    count = 0
    # for each word in the Words list
    for word in Words:
        # compare each letter to the query
        match = True
        for j in range(len(word)):
            wordLetter = word[j]
            queryLetter = OneQuery[j]
            # if the letters do not match and are not ?, skip to the next word
            if queryLetter != '?' and queryLetter != wordLetter:
                match = False
                break
        # if we did not skip, the words match; increase the count
        if match:
            count = count + 1
    # we have now checked all the words, return the count
    return count
Of course, this executes the innermost loop around 3.5x10^10 times, which might be too slow. So one would need to read in the dictionary, precompute some sort of shortcut data structure, then use the shortcut to find the answers faster.
One shortcut data structure would be to make a map of possible queries to answers, making the query O(1). There are only 4.47*10^9 possible queries, so this is possibly faster.
A similar shortcut data structure would be to make a trie of possible queries to answers, making the query O(M). There are only 4.47*10^9 possible queries, so this is possibly faster. This is more complex code, but may also be easier to understand for some people.
Another shortcut would be to "assume" each query has exactly one non-question-mark, and make a map of possible queries to subset dictionaries. This would mean you'd still have to run the naive query on the subset dictionary, but it would be ~26x smaller, and thus ~26x faster. You'd also have to convert the real query into one having only a single non-question-mark to look up the subset dictionary in the map, but that should be easy.
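A sketch of the first shortcut (a map from every possible wildcard pattern to its answer, making each query an O(1) lookup); the helper name and sample data are mine. Since M <= 7, each word generates at most 2^7 = 128 patterns:

from itertools import combinations
from collections import Counter

def build_pattern_counts(words, m):
    counts = Counter()
    for word in words:
        # mask every subset of positions with '?'
        for k in range(m + 1):
            for positions in combinations(range(m), k):
                pattern = list(word)
                for p in positions:
                    pattern[p] = "?"
                counts["".join(pattern)] += 1
    return counts

counts = build_pattern_counts(["cat", "map", "bat", "man", "pen"], 3)
print(counts["?at"])  # 2 (cat, bat)
print(counts["??n"])  # 2 (man, pen)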
I think we can use a trie to solve this problem.
Initially, we just add all the strings to the trie, and later, when we get each query, we can check whether it exists in the trie or not.
The only different thing here is the '?', but we can treat it as an all-character match: whenever we detect a '?' in our search string, we look at all the children possible from here and simply do a DFS, searching the word along all possible paths.
Below is the C++ code
#include <bits/stdc++.h>
using namespace std;

class Trie {
public:
    bool isEnd;
    vector<Trie*> children;
    Trie() {
        this->isEnd = false;
        this->children = vector<Trie*>(26, nullptr);
    }
};

Trie* root;

void insert(string& str) {
    int n = str.size(), idx, i = 0;
    Trie* node = root;
    while (i < n) {
        idx = str[i++] - 'a';
        if (node->children[idx] == nullptr) {
            node->children[idx] = new Trie();
        }
        node = node->children[idx];
    }
    node->isEnd = true;
}

int getMatches(int i, string& str, Trie* node) {
    int idx, n = str.size();
    while (i < n) {
        if (str[i] >= 'a' && str[i] <= 'z')
            idx = str[i] - 'a';
        else {
            // '?': branch into every existing child
            int res = 0;
            for (int j = 0; j < 26; j++) {
                if (node->children[j] != nullptr)
                    res += getMatches(i + 1, str, node->children[j]);
            }
            return res;
        }
        if (node->children[idx] == nullptr) return 0;
        node = node->children[idx];
        ++i;
    }
    return node->isEnd ? 1 : 0;
}

int main() {
    int n, m;
    cin >> n >> m;
    string str;
    root = new Trie();
    while (n--) {
        cin >> str;
        insert(str);
    }
    int q;
    cin >> q;
    while (q--) {
        cin >> str;
        cout << (str.size() == m ? getMatches(0, str, root) : 0) << "\n";
    }
}
Can I do it with ASCII values, like this: for the characters in the query word, calculate the sum of their ASCII values. Then, for each word in the dictionary, calculate the ASCII values character by character and check them against the ASCII sum of the query word. For example, for "bat": if the ASCII value of 'b' matches the ASCII sum of the query word, increment the count; otherwise calculate the ASCII value of 'a', add it to that of 'b', check again, and so on; at last, return the count.
How's this approach?
Java Implementation using Trie
import java.util.*;
import java.io.*;
import java.lang.*;

public class Main {

    static class TrieNode {
        TrieNode[] children = new TrieNode[26];
        boolean endOfWord;

        TrieNode() {
            this.endOfWord = false;
            for (int i = 0; i < 26; i++) {
                this.children[i] = null;
            }
        }

        void addWord(String word) {
            // Crawl pointer points to the object in reference
            TrieNode pCrawl = this;
            // Traverse the given word
            for (int i = 0; i < word.length(); i++) {
                int index = word.charAt(i) - 'a';
                if (pCrawl.children[index] == null)
                    pCrawl.children[index] = new TrieNode();
                pCrawl = pCrawl.children[index];
            }
            pCrawl.endOfWord = true;
        }

        public static int ans2 = 0;

        void search(String word, boolean found, String curr_found, int pos) {
            TrieNode pCrawl = this;
            if (pos == word.length()) {
                if (pCrawl.endOfWord) {
                    found = true;
                    ans2++;
                }
                return;
            }
            if (word.charAt(pos) == '?') {
                // Iterate over every letter and proceed further by
                // substituting each character in place of the '?'
                for (int i = 0; i < 26; i++) {
                    if (pCrawl.children[i] != null) {
                        pCrawl.children[i].search(word, found, curr_found + (char) ('a' + i), pos + 1);
                    }
                }
            } else {
                // Check if a pointer at the character position is
                // available, then proceed
                if (pCrawl.children[word.charAt(pos) - 'a'] != null) {
                    pCrawl.children[word.charAt(pos) - 'a']
                            .search(word, found, curr_found + word.charAt(pos), pos + 1);
                }
            }
            return;
        }

        // Utility function for the search operation
        int searchUtil(String word) {
            TrieNode pCrawl = this;
            boolean found = false;
            ans2 = 0;
            pCrawl.search(word, found, "", 0);
            return ans2;
        }
    }

    static int searchPattern(String arr[], int N, String str) {
        // Object of the class TrieNode
        TrieNode obj = new TrieNode();
        for (int i = 0; i < N; i++) {
            obj.addWord(arr[i]);
        }
        // Search pattern
        return obj.searchUtil(str);
    }

    public static void ans(String[] arr, int n, int m, String[] query, int q) {
        for (int i = 0; i < q; i++)
            System.out.println(searchPattern(arr, n, query[i]));
    }

    public static void main(String args[]) {
        Scanner scn = new Scanner(System.in);
        int n = scn.nextInt();
        int m = scn.nextInt();
        String[] arr = new String[n];
        for (int i = 0; i < n; i++) {
            arr[i] = scn.next();
        }
        int q = scn.nextInt();
        String[] query = new String[q];
        for (int i = 0; i < q; i++) {
            query[i] = scn.next();
        }
        ans(arr, n, m, query, q);
    }
}
This is brute force, but a trie is a better implementation.
"""
Input: db whic is a list of words
chk : str to find
"""
def check(db,chk):
seen = collections.defaultdict(list)
for i in db:
for j in range(len(i)):
temp = i[:j] + "?" + i[j+1:]
seen[temp].append(i)
return len(seen[chk])
print check(["cat","bat"], "?at")
Sounds like it was a coding challenge about https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff
Depending on parameters N,M,Q as well as data and query distribution, the "best" algorithm will be different. A simple example, given the query ??? you know the answer — the length of the dictionary — without any computation 😸
In the general case, most likely, it pays to create a search index in advance (that is while reading the dictionary, before any query is seen).
I'd go with this: number the input 0 cat; 1 map; ...
Then build a search index per letter position:

index = [
    {"c": 0b00001, "m": 0b01010, ...},  # first query letter
    {"a": 0b01111, "e": 0b10000},       # second query letter
]

Prepare all = 0b11111 (all bits set) as "matches everything".
Then the query lookup for ?a? is: all & index[1]["a"] & all. †
Afterwards you'll need to count the number of bits set in the result.
The time complexity of a single query is therefore O(N) * (M + O(1)) ‡, which is a decent trade-off.
The entire batch is O(N*M*Q).
Python (as well as ES2020) supports native arbitrary-precision integers, which can be used elegantly as bitmaps, along with native dictionaries, so use them :) However, if the data is sparse, an adaptive or compressed bitmap such as https://pypi.org/project/roaringbitmap may perform better.
† In practice ... & index[1].get("a", 0) & ... in case you hit a blank.
‡ Python data structure time complexity is reported O(...) amortised worst case while in CS O(...) worst case is usually considered. While the difference is subtle, it can bite even experienced developers, see e.g. https://bugs.python.org/issue13703
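A runnable sketch of the whole approach, assuming plain dicts and Python ints as the bitmaps (my illustration; the answer itself only gives the fragments above):

words = ["cat", "map", "bat", "man", "pen"]
M = 3

# index[i][c] is a bitmap of the words having letter c at position i
index = [{} for _ in range(M)]
for bit, word in enumerate(words):
    for i, c in enumerate(word):
        index[i][c] = index[i].get(c, 0) | (1 << bit)

all_bits = (1 << len(words)) - 1  # "matches everything"

def match_count(q):
    result = all_bits
    for i, c in enumerate(q):
        if c != "?":
            result &= index[i].get(c, 0)  # 0 in case we hit a blank
    return bin(result).count("1")  # popcount of the surviving words

print(match_count("?at"))  # 2
print(match_count("??n"))  # 2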
One approach could be to use Python's fnmatch module (for every pattern sum the matches in words):
import fnmatch
names = ['uqqur', 'lxzev', 'ydfgs']
patterns = ['?z???', '???i?', '???e?', '???f?', '?z???']
[sum(fnmatch.fnmatch(name, pattern) for name in names) for pattern in patterns]
# [0, 0, 1, 0, 0]

What is the time complexity of iterating through a deque in Python?

What is the time complexity of iterating, or more precisely each iteration through a deque from the collections library in Python?
An example is this:
elements = deque([1,2,3,4])
for element in elements:
print(element)
Is each iteration a constant O(1) operation? Or does it do a linear O(n) operation to get to the element in each iteration?
There are many resources online about the time complexity of the other deque methods, like appendleft, append, popleft, and pop, but there doesn't seem to be any time complexity information about iterating over a deque.
Thanks!
If your construction is something like:

elements = deque([1,2,3,4])
for i in range(len(elements)):
    print(elements[i])

you are not iterating over the deque; you are iterating over the range object and then indexing into the deque. That makes the whole loop quadratic, since each indexing operation elements[i] is O(n). However, actually iterating over the deque is linear time:

for x in elements:
    print(x)
Here's a quick, empirical test:
import timeit
import pandas as pd
from collections import deque

def build_deque(n):
    return deque(range(n))

def iter_index(d):
    for i in range(len(d)):
        d[i]

def iter_it(d):
    for x in d:
        x

r = range(100, 10001, 100)
index_runs = [timeit.timeit('iter_index(d)', 'from __main__ import build_deque, iter_index, iter_it; d = build_deque({})'.format(n), number=1000) for n in r]
it_runs = [timeit.timeit('iter_it(d)', 'from __main__ import build_deque, iter_index, iter_it; d = build_deque({})'.format(n), number=1000) for n in r]

df = pd.DataFrame({'index': index_runs, 'iter': it_runs}, index=r)
df.plot()
(The resulting plot, omitted here, shows the indexed version's time growing much faster than that of direct iteration.)
Now, we can actually see how the iterator protocol is implemented for deque objects in CPython source code:
First, the deque object itself:
typedef struct BLOCK {
    struct BLOCK *leftlink;
    PyObject *data[BLOCKLEN];
    struct BLOCK *rightlink;
} block;

typedef struct {
    PyObject_VAR_HEAD
    block *leftblock;
    block *rightblock;
    Py_ssize_t leftindex;   /* 0 <= leftindex < BLOCKLEN */
    Py_ssize_t rightindex;  /* 0 <= rightindex < BLOCKLEN */
    size_t state;           /* incremented whenever the indices move */
    Py_ssize_t maxlen;
    PyObject *weakreflist;
} dequeobject;
So, as stated in the comments, a deque is a doubly-linked list of "block" nodes, where a block is essentially an array of python object pointers. Now for the iterator protocol:
typedef struct {
    PyObject_HEAD
    block *b;
    Py_ssize_t index;
    dequeobject *deque;
    size_t state;           /* state when the iterator is created */
    Py_ssize_t counter;     /* number of items remaining for iteration */
} dequeiterobject;

static PyTypeObject dequeiter_type;

static PyObject *
deque_iter(dequeobject *deque)
{
    dequeiterobject *it;

    it = PyObject_GC_New(dequeiterobject, &dequeiter_type);
    if (it == NULL)
        return NULL;
    it->b = deque->leftblock;
    it->index = deque->leftindex;
    Py_INCREF(deque);
    it->deque = deque;
    it->state = deque->state;
    it->counter = Py_SIZE(deque);
    PyObject_GC_Track(it);
    return (PyObject *)it;
}
// ...
static PyObject *
dequeiter_next(dequeiterobject *it)
{
    PyObject *item;

    if (it->deque->state != it->state) {
        it->counter = 0;
        PyErr_SetString(PyExc_RuntimeError,
                        "deque mutated during iteration");
        return NULL;
    }
    if (it->counter == 0)
        return NULL;
    assert (!(it->b == it->deque->rightblock &&
              it->index > it->deque->rightindex));

    item = it->b->data[it->index];
    it->index++;
    it->counter--;
    if (it->index == BLOCKLEN && it->counter > 0) {
        CHECK_NOT_END(it->b->rightlink);
        it->b = it->b->rightlink;
        it->index = 0;
    }
    Py_INCREF(item);
    return item;
}
As you can see, the iterator essentially keeps track of a block pointer, an index into that block, and a counter of the items remaining in the deque. It stops iterating when the counter reaches zero; otherwise it grabs the element at the current index, increments the index, decrements the counter, and takes care of checking whether to move on to the next block. In other words, a deque could be represented as a list-of-lists in Python, e.g. d = [[1,2,3],[4,5,6]], and it iterates like:
for block in d:
    for x in block:
        ...

Transform python yield into c++

I have a piece of Python code I need to port to C++. The algorithm is a recursion that uses yield.
Here is the python function:
def getSubSequences(self, s, minLength=1):
    if len(s) >= minLength:
        for i in range(minLength, len(s) + 1):
            for p in self.getSubSequences(s[i:], 1 if i > 1 else 2):
                yield [s[:i]] + p
    elif not s:
        yield []
and here is my attempt so far
vector< vector<string> > getSubSequences(string number, unsigned int minLength=1) {
    if (number.length() >= minLength) {
        for (unsigned int i=minLength; i<=number.length()+1; i++) {
            string sub = "";
            if (i <= number.length())
                sub = number.substr(i);
            vector< vector<string> > res = getSubSequences(sub, (i > 1 ? 1 : 2));
            vector< vector<string> > container;
            vector<string> tmp;
            tmp.push_back(number.substr(0, i));
            container.push_back(tmp);
            for (unsigned int j=0; j<res.size(); j++) {
                container.push_back(res.at(j));
                return container;
            }
        }
    } else if (number.length() == 0)
        return vector< vector<string> >();
}
Unfortunately I get a segmentation fault when executing it. Is this even the right approach, or is there an easier way to do this? The data structures are not fixed; I just need the same result as I get from the Python code!
The loops in your above code snippets are not equivalent.
The Python code has
for i in range(minLength, len(s) + 1):
The C++ code has
for (unsigned int i=minLength; i<=number.length()+1; i++) {
So the Python loop terminates one iteration sooner than the C++ one.
The question really has nothing to do with yield. In cases like these, I think you should print things out from both implementations and study them. Here, it would have shown that the two algorithms diverge.
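For example (a sketch; self is dropped since the method doesn't use it), materializing the generator's output makes the comparison concrete:

def get_sub_sequences(s, min_length=1):
    if len(s) >= min_length:
        for i in range(min_length, len(s) + 1):
            for p in get_sub_sequences(s[i:], 1 if i > 1 else 2):
                yield [s[:i]] + p
    elif not s:
        yield []

# Print every yielded partition; the C++ port should produce the same rows.
for seq in get_sub_sequences("1234"):
    print(seq)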

Permutation with backtraking from C to Python

I have to write a program that gives all permutations of the n numbers {1,2,3,...,n} using backtracking. I managed to do it in C, and it works very well; here is the code:
#include <stdio.h>

int st[25], n = 4;

int valid(int k)
{
    int i;
    for (i = 1; i <= k - 1; i++)
        if (st[k] == st[i])
            return 0;
    return 1;
}

void bktr(int k)
{
    int i;
    if (k == n + 1)
    {
        for (i = 1; i <= n; i++)
            printf("%d ", st[i]);
        printf("\n");
    }
    else
        for (i = 1; i <= n; i++)
        {
            st[k] = i;
            if (valid(k))
                bktr(k + 1);
        }
}

int main()
{
    bktr(1);
    return 0;
}
Now I have to write it in Python. Here is what I did:
st = []
n = 4

def bktr(k):
    if k == n + 1:
        for i in range(1, n):
            print(st[i])
    else:
        for i in range(1, n):
            st[k] = i
            if valid(k):
                bktr(k + 1)

def valid(k):
    for i in range(1, k - 1):
        if st[k] == st[i]:
            return 0
    return 1

bktr(1)
I get this error:
list assignment index out of range
at st[k]==st[i].
Python has a permutations function in the itertools module:
import itertools
print(list(itertools.permutations([1, 2, 3])))
If you need to write the code yourself (for example if this is homework), here is the issue:
Python lists do not have a predetermined size, so you can't just set e.g. the 10th element to 3. You can only change existing elements or add to the end.
Python lists (and C arrays) also start at 0. This means you have to access the first element with st[0], not st[1].
When you start your program, st has a length of 0; this means you can not assign to st[1], as it is not the end.
If this is confusing, I recommend you use the st.append(element) method instead, which always adds to the end.
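Putting those points together, here is a minimal corrected sketch of the same backtracking idea (my own version, using append/pop on a 0-based list):

n = 4
st = []

def bktr():
    if len(st) == n:
        print(*st)          # a complete permutation
        return
    for i in range(1, n + 1):
        if i not in st:     # replaces the valid() check
            st.append(i)
            bktr()
            st.pop()        # undo the choice and backtrack

bktr()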
If the code is done and works, I recommend you head over to code review stack exchange because there are a lot more things that could be improved.
