I got the following problem for the Google Coding Challenge which happened on 16th August 2020. I tried to solve it but couldn't.
There are N words in a dictionary, each of fixed length M and consisting only of lowercase English letters, that is ('a', 'b', ..., 'z'). A query word is denoted by Q; the length of a query word is also M. Query words contain lowercase English letters, but in some places there is a '?' instead of a letter. Refer to the Sample Input section to understand this case. The match count of Q, denoted by match_count(Q), is the number of words in the dictionary that contain the same English letters in the same positions as the query word Q (ignoring the positions that hold a '?'). In other words, a dictionary word may contain any letter at a '?' position, but the remaining letters must match the query word.
You are given a query word Q and you are required to compute match_count(Q).
Input Format
The first line contains two space-separated integers N and M denoting the number of words in the dictionary and length of each word
respectively.
The next N lines contain one word each from the dictionary.
The next line contains an integer Q denoting the number of query words for which you have to compute match_count.
The next Q lines contain one query word each.
Output Format
For each query word, print its match_count on a new line.
Constraints
1 <= N <= 5 × 10^4
1 <= M <= 7
1 <= Q <= 10^5
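The Sample Input section referred to above isn't reproduced here, so as an illustration, here is a tiny brute-force model of what match_count means, on an assumed example dictionary (the concrete words are my own, not the original sample):

```python
words = ["cat", "map", "bat", "man", "pen"]  # assumed example dictionary, M = 3

def match_count(q):
    # A word matches if every non-'?' query letter equals the word's letter
    # at the same position.
    return sum(all(qc == '?' or qc == wc for qc, wc in zip(q, w)) for w in words)

print(match_count("?at"))  # 'cat' and 'bat' match -> 2
print(match_count("??n"))  # 'man' and 'pen' match -> 2
```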
So, I got 30 minutes for this question, and I could only write the following code, which is incorrect and hence didn't give the expected output:
def Solve(N, M, Words, Q, Query):
    output = []
    count = 0
    for i in range(Q):
        x = Query[i].split('?')
        for k in range(N):
            if x in Words:
                count += 1
        output.append(count)
    return output

N, M = map(int, input().split())
Words = []
for _ in range(N):
    Words.append(input())
Q = int(input())
Query = []
for _ in range(Q):
    Query.append(input())
out = Solve(N, M, Words, Q, Query)
for x in out:
    print(x)
Can somebody help me with some pseudocode or algorithm which can solve this problem, please?
I guess my first try would have been to replace the ? with a . in the query, i.e. change ?at to .at, and then use those as regular expressions and match them against all the words in the dictionary, something as simple as this:
import re
for q in queries:
    p = re.compile(q.replace("?", "."))
    print(sum(1 for w in words if p.match(w)))
However, seeing the input sizes of N up to 5 × 10^4 and Q up to 10^5, this might be too slow, just as any other algorithm comparing all pairs of words and queries.
On the other hand, note that M, the number of letters per word, is constant and rather low. So instead, you could create M × 26 sets of words for all letters in all positions and then get the intersection of those sets.
from collections import defaultdict
from functools import reduce

M = 3
words = ["cat", "map", "bat", "man", "pen"]
queries = ["?at", "ma?", "?a?", "??n"]

sets = defaultdict(set)
for word in words:
    for i, c in enumerate(word):
        sets[i, c].add(word)
all_words = set(words)

for q in queries:
    possible_words = (sets[i, c] for i, c in enumerate(q) if c != "?")
    w = reduce(set.intersection, possible_words, all_words)
    print(q, len(w), w)
In the worst case (a query that has a non-? letter that is common to most or all words in the dictionary) this may still be slow, but should be much faster in filtering down the words than iterating all the words for each query. (Assuming random letters in both words and queries, the set of words for the first letter will contain N/26 words, the intersection for the first two has N/26² words, etc.)
This could probably be improved a bit by taking the different cases into account, e.g. (a) if the query does not contain any ?, just check whether it is in the set (!) of words without creating all those intersections; (b) if the query is all-?, just return the set of all words; and (c) sort the possible-words-sets by size and start the intersection with the smallest sets first to reduce the size of temporarily created sets.
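Those three special cases can be sketched on top of the sets built earlier; this is a minimal sketch assuming the same `sets`/`all_words` construction (the `match_count` wrapper is my own naming):

```python
from collections import defaultdict
from functools import reduce

words = ["cat", "map", "bat", "man", "pen"]

# Same index as before: sets[i, c] holds the words with letter c at position i.
sets = defaultdict(set)
for word in words:
    for i, c in enumerate(word):
        sets[i, c].add(word)
all_words = set(words)

def match_count(q):
    if "?" not in q:              # (a) no wildcards: a single set lookup
        return int(q in all_words)
    keys = [(i, c) for i, c in enumerate(q) if c != "?"]
    if not keys:                  # (b) all-'?': every word matches
        return len(all_words)
    # (c) intersect starting from the smallest candidate set
    candidates = sorted((sets[k] for k in keys), key=len)
    return len(reduce(set.intersection, candidates))
```

For instance, `match_count("?at")` intersects the position sets for (1, 'a') and (2, 't'), smallest first.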
About time complexity: To be honest, I am not sure what time complexity this algorithm has. With N, Q, and M being the number of words, number of queries, and length of words and queries, respectively, creating the initial sets will have complexity O(N*M). After that, the complexity of the queries obviously depends on the number of non-? in the queries (and thus the number of set intersections to create), and the average size of the sets. For queries with zero, one, or M non-? characters, the query will execute in O(M) (evaluating the situation and then a single set/dict lookup), but for queries with two or more non-?-characters, the first set intersections will have on average complexity O(N/26), which strictly speaking is still O(N). (All following intersections will only have to consider N/26², N/26³ etc. elements and are thus negligible.) I don't know how this compares to The Trie Approach and would be very interested if any of the other answers could elaborate on that.
This question can be done with the help of a Trie data structure.
First add all the words to the trie.
Then check whether each query word is present in the trie or not. There is a special condition for '?', so you have to take care of that case as well: if the character is '?', simply branch to every child and continue with the next character of the word.
I think this approach will work; there is a similar question on LeetCode.
Link: https://leetcode.com/problems/design-add-and-search-words-data-structure/
This is an O(N) time and space approach, given that M is small and can be considered constant. You might want to look at an implementation of a Trie here.
Perform a first pass and store the words in a Trie.
Next, for your query, perform a combination of DFS and BFS in the following order:
If you receive a '?', perform BFS and add all the children.
For a non-'?', perform DFS, which should point to the existence of a word.
For further optimization, a suffix tree may also be used as the storage data structure.
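As a rough illustration of the BFS/DFS idea above, here is a minimal Python sketch using nested dicts as trie nodes (the function names are my own; since all words share the same length M, reaching depth M counts as a match):

```python
def build_trie(words):
    # Each node is a dict mapping a letter to its child node.
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
    return root

def count_matches(node, query):
    if node is None:
        return 0
    if not query:
        return 1  # all words share length M, so depth M means a full match
    head, rest = query[0], query[1:]
    if head == "?":
        # '?' branches into every child (the "add all the children" step)
        return sum(count_matches(child, rest) for child in node.values())
    # a fixed letter follows a single edge (the DFS step)
    return count_matches(node.get(head), rest)

root = build_trie(["cat", "map", "bat", "man", "pen"])
print(count_matches(root, "?at"))  # -> 2
```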
You can use a simplified version of a trie, since the query string has a pre-defined length; there is no need for an end-of-word flag in the trie node.
#include <bits/stdc++.h>
using namespace std;

typedef struct TrieNode_ {
    struct TrieNode_* nxt[26] = {};  // value-initialize all children to nullptr
} TrieNode;

void addWord(TrieNode* root, string s) {
    TrieNode* node = root;
    for (int i = 0; i < (int)s.size(); ++i) {
        if (node->nxt[s[i] - 'a'] == NULL) {
            node->nxt[s[i] - 'a'] = new TrieNode;
        }
        node = node->nxt[s[i] - 'a'];
    }
}

void matchCount(TrieNode* root, string s, int& cnt) {
    if (root == NULL) {
        return;
    }
    if (s.empty()) {
        ++cnt;
        return;
    }
    TrieNode* node = root;
    if (s[0] == '?') {
        for (int i = 0; i < 26; ++i) {
            matchCount(node->nxt[i], s.substr(1), cnt);
        }
    }
    else {
        matchCount(node->nxt[s[0] - 'a'], s.substr(1), cnt);
    }
}

int main() {
    int N, M;
    cin >> N >> M;
    vector<string> s(N);
    TrieNode* root = new TrieNode;
    for (int i = 0; i < N; ++i) {
        cin >> s[i];
        addWord(root, s[i]);
    }
    int Q;
    cin >> Q;
    for (int i = 0; i < Q; ++i) {
        string queryString;
        int cnt = 0;
        cin >> queryString;
        matchCount(root, queryString, cnt);
        cout << cnt << endl;
    }
}
Notes: 1. This code doesn't read the input but instead takes params from main method.
2. For large inputs, we could use java 8 streams to parallelize the search process and improve the performance.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WordSearch {
    private void matchCount(int N, int M, int Q, String[] words, String[] queries) {
        Pattern p = null;
        Matcher m = null;
        int count = 0;
        for (int i = 0; i < Q; i++) {
            p = Pattern.compile(queries[i].replace('?', '.'));
            for (int j = 0; j < N; j++) {
                m = p.matcher(words[j]);
                if (m.find()) {
                    count++;
                }
            }
            System.out.println("For query word '" + queries[i] + "', the count is: " + count);
            count = 0;
        }
        System.out.println("\n");
    }

    public static void main(String[] args) {
        WordSearch ws = new WordSearch();
        int N = 5; int M = 3; int Q = 4;
        String[] w = new String[] {"cat", "map", "bat", "man", "pen"};
        String[] q = new String[] {"?at", "ma?", "?a?", "??n"};
        ws.matchCount(N, M, Q, w, q);

        w = new String[] {"uqqur", "lxzev", "ydfgz"};
        q = new String[] {"?z???", "???i?", "???e?", "???f?", "?z???"};
        N = 3; M = 5; Q = 5;
        ws.matchCount(N, M, Q, w, q);
    }
}
I can think of a kind of trie with BFS for the lookup approach:
from queue import SimpleQueue

class Node:
    def __init__(self, letter):
        self.letter = letter
        self.children = {}

    @classmethod
    def construct(cls):
        return cls(letter=None)

    def add_word(self, word):
        current = self
        for letter in word:
            if letter not in current.children:
                node = Node(letter)
                current.children[letter] = node
            else:
                node = current.children[letter]
            current = node

    def lookup_word(self, word, m):
        def _lookup_next_letter(_letter, _node):
            if _letter == '?':
                for node in _node.children.values():
                    q.put((node, i))
            elif _letter in _node.children:
                q.put((_node.children[_letter], i))

        q = SimpleQueue()
        count = 0
        i = 0
        current = self
        letter = word[i]
        i += 1
        _lookup_next_letter(letter, current)
        while not q.empty():
            current, i = q.get()
            if i == m:
                count += 1
                continue
            letter = word[i]
            i += 1
            _lookup_next_letter(letter, current)
        return count

    def __eq__(self, other):
        return self.letter == other.letter if isinstance(other, Node) else NotImplemented

    def __hash__(self):
        return hash(self.letter)
I would create a lookup table for each letter of each word, and then use that table to iterate with. While the lookup table will cost O(NM) memory (or 15 entries in the situation shown), it will allow an easy O(NM) time complexity to be implemented, with a best case O(log N * log M).
The lookup table can be stored in the form of a coordinate plane. Each letter will have an "x" position (the letters index) as well as a "y" position (the words index in the dictionary). This will allow a quick cross reference from the query to look up a letter's position for existence and the word's position for eligibility.
Worst case, this approach has a time complexity O(NM) whereby there must be N iterations, one for each dictionary entry, times M iterations, one for each letter in each entry. In many cases it will skip the lookups though.
A coordinate system is also created, which also has O(NM) spacial complexity.
I'm unfamiliar with Python, so this is written in JavaScript, which was as close as I could come language-wise. Hopefully this at least serves as an example of a possible solution.
In addition, I included a heavily loaded section to use for performance comparisons. It takes about 5 seconds to complete a set with 2000 words and 5000 queries, each at a length of 200.
// Main function running the analysis
function run(dict, qs) {
  // Use a coordinate system for tracking the letter and position
  var coordinates = 'abcdefghijklmnopqrstuvwxyz'.split('').reduce((p, c) => (p[c] = {}, p), {});
  // Populate the system
  for (var i = 0; i < dict.length; i++) {
    // Current word in the given dictionary
    var dword = dict[i];
    // Iterate the letters for tracking
    for (var j = 0; j < dword.length; j++) {
      // Current letter in our current word
      var letter = dword[j];
      // Make sure that there is object existence for assignment
      coordinates[letter][j] = coordinates[letter][j] || {};
      // Note the letter's coordinate by storing its array
      // position (i) as well as its letter position (j)
      coordinates[letter][j][i] = 1;
    }
  }
  // Lookup the word letter by letter in our coordinate system
  function match_count(Q) {
    // Create an array which maps from the dictionary indices
    // to a truthy value of 1 for tracking successful matches
    var availLookup = dict.reduce((p, _, i) => (p[i] = 1, p), {});
    // Iterate the letters of Q to check against the coordinate system
    for (var i = 0; i < Q.length; i++) {
      // Current letter in Q
      var letter = Q[i];
      // Skip '?' characters
      if (letter == '?') continue;
      // Look up the existence of "points" in our coordinate system for
      // the current letter
      var points = coordinates[letter];
      // If nothing from the dictionary matches in this position,
      // then there are no matches anywhere and we return a 0
      if (!points || !points[i]) return 0;
      // Iterate the availability truth table made earlier
      // and look up whether any points in our coordinate system
      // are present for the current letter. If they are, then the word
      // remains, if not, it is removed from consideration.
      for (var n in availLookup) {
        if (!points[i][n]) delete availLookup[n];
      }
    }
    // Sum the "truthy" 1 values we used earlier to determine the count of
    // matched words
    return Object.values(availLookup).reduce((x, y) => x + y, 0);
  }
  var matches = [];
  for (var i = 0; i < qs.length; i++) {
    matches.push(match_count(qs[i]));
  }
  return matches;
}

document.querySelector('button').onclick = _ => {
  console.clear();
  var d1 = ['cat', 'map', 'bat', 'man', 'pen'];
  var q1 = ['?at', 'ma?', '?a?', '??n'];
  console.log('running...');
  console.log(run(d1, q1));
  var d2 = ['uqqur', 'lxzev', 'ydfgz'];
  var q2 = ['?z???', '???i?', '???e?', '???f?', '?z???'];
  console.log('running...');
  console.log(run(d2, q2));
  // Load it up (try this with other versions to compare with efficiency)
  var d3 = [];
  var q3 = [];
  var wordcount = 2000;
  var querycount = 5000;
  var len = 200;
  var alphabet = 'abcdefghijklmnopqrstuvwxyz'.split('');
  for (var i = 0; i < wordcount; i++) {
    var word = "";
    for (var n = 0; n < len; n++) {
      var rand = (Math.random() * 25) | 0;
      word += alphabet[rand];
    }
    d3.push(word);
  }
  for (var i = 0; i < querycount; i++) {
    var qword = d3[(Math.random() * (wordcount - 1)) | 0];
    var query = "";
    for (var n = 0; n < len; n++) {
      var rand = (Math.random() * 100) | 0;
      // fixed: this branch previously appended to `word` instead of `query`
      if (rand > 98) { query += alphabet[(Math.random() * 25) | 0]; }
      else { query += rand > 75 ? qword[n] : '?'; }
    }
    q3.push(query);
  }
  if (document.querySelector('input').checked) {
    //console.log(d3,q3);
    console.log('running...');
    console.log(run(d3, q3).reduce((x, y) => x + y, 0) + ' matches');
  }
};
<input type=checkbox>Include the ~5 second larger version<br>
<button type=button>run</button>
I don't know Python, but the gist of the naive algorithm looks like this:
# count how many words in the Words list match a single query
def DoQuery(Words, OneQuery):
    count = 0
    # for each word in the Words list
    for word in Words:
        # compare each letter to the query
        match = True
        for j in range(len(word)):
            wordLetter = word[j]
            queryLetter = OneQuery[j]
            # if the letters do not match and the query letter is not '?',
            # skip to the next word
            if queryLetter != '?' and queryLetter != wordLetter:
                match = False
                break
        # if we did not skip, the words match; increase the count
        if match:
            count += 1
    # we have now checked all the words, return the count
    return count
Of course, this executes the innermost loop around 3.5 × 10^10 times, which might be too slow. So one would need to read in the dictionary, precompute some sort of shortcut data structure, then use the shortcut to find the answers faster.
One shortcut data structure would be to make a map of possible queries to answers, making each query O(1). There are at most 27^7 ≈ 1.05 × 10^10 possible queries, so this is possibly faster.
A similar shortcut data structure would be to make a trie of possible queries to answers, making each query O(M). This is more complex code, but may also be easier to understand for some people.
Another shortcut would be to "assume" each query has exactly one non-question-mark, and make a map of possible queries to subset dictionaries. This would mean you'd still have to run the naive query on the subset dictionary, but it would be ~26x smaller, and thus ~26x faster. You'd also have to convert the real query into one having only one non-question-mark to look up the subset dictionary in the map, but that should be easy.
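One practical realization of the query-to-answer map idea: rather than enumerating all possible queries, store only the masked variants of words actually in the dictionary (at most N·2^M keys). The names here are my own sketch, on an assumed example dictionary:

```python
from itertools import combinations

words = ["cat", "map", "bat", "man", "pen"]  # example dictionary
M = 3

# For every word, generate every way of replacing a subset of its positions
# with '?', and count how many words produce each pattern.
counts = {}
for word in words:
    for r in range(M + 1):
        for positions in combinations(range(M), r):
            key = "".join("?" if i in positions else c for i, c in enumerate(word))
            counts[key] = counts.get(key, 0) + 1

def match_count(q):
    # Any query that is not a stored pattern matches nothing.
    return counts.get(q, 0)

print(match_count("?at"))  # -> 2
```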
I think we can use a trie to solve this problem.
Initially, we just add all the strings to the trie, and later, for each query, we check whether it exists in the trie or not.
The only thing different here is the '?', but we can treat it as matching any character: whenever we detect a '?' in the search string, we look at all the paths possible from that node and simply do a DFS, searching for the word along each of them.
Below is the C++ code
#include <bits/stdc++.h>
using namespace std;

class Trie {
public:
    bool isEnd;
    vector<Trie*> children;
    Trie() {
        this->isEnd = false;
        this->children = vector<Trie*>(26, nullptr);
    }
};

Trie* root;

void insert(string& str) {
    int n = str.size(), idx, i = 0;
    Trie* node = root;
    while (i < n) {
        idx = str[i++] - 'a';
        if (node->children[idx] == nullptr) {
            node->children[idx] = new Trie();
        }
        node = node->children[idx];
    }
    node->isEnd = true;
}

int getMatches(int i, string& str, Trie* node) {
    int idx, n = str.size();
    while (i < n) {
        if (str[i] >= 'a' && str[i] <= 'z')
            idx = str[i] - 'a';
        else {
            int res = 0;
            for (int j = 0; j < 26; j++) {
                if (node->children[j] != nullptr)
                    res += getMatches(i + 1, str, node->children[j]);
            }
            return res;
        }
        if (node->children[idx] == nullptr) return 0;
        node = node->children[idx];
        ++i;
    }
    return node->isEnd ? 1 : 0;
}

int main() {
    int n, m;
    cin >> n >> m;
    string str;
    root = new Trie();
    while (n--) {
        cin >> str;
        insert(str);
    }
    int q;
    cin >> q;
    while (q--) {
        cin >> str;
        cout << ((int)str.size() == m ? getMatches(0, str, root) : 0) << "\n";
    }
}
Can I do it with ASCII values? Like this:
For the characters in the query word, calculate the sum of the ASCII values.
For the words in the dictionary, calculate the ASCII values character-wise and check them against the ASCII sum of the query word. For example, for "bat": if the ASCII value of 'b' matches the ASCII sum of the query word, increment the count; otherwise calculate the ASCII value of 'a', add it to that of 'b', check against the query sum, and so on; at last return the count.
How's this approach?
Java Implementation using Trie
import java.util.*;
import java.io.*;
import java.lang.*;

public class Main {
    static class TrieNode {
        TrieNode[] children = new TrieNode[26];
        boolean endOfWord;

        TrieNode() {
            this.endOfWord = false;
            for (int i = 0; i < 26; i++) {
                this.children[i] = null;
            }
        }

        void addWord(String word) {
            // Crawl pointer points to the object in reference
            TrieNode pCrawl = this;
            // Traverse the given word
            for (int i = 0; i < word.length(); i++) {
                int index = word.charAt(i) - 'a';
                if (pCrawl.children[index] == null)
                    pCrawl.children[index] = new TrieNode();
                pCrawl = pCrawl.children[index];
            }
            pCrawl.endOfWord = true;
        }

        public static int ans2 = 0;

        void search(String word, boolean found, String curr_found, int pos) {
            TrieNode pCrawl = this;
            if (pos == word.length()) {
                if (pCrawl.endOfWord) {
                    found = true;
                    ans2++;
                }
                return;
            }
            if (word.charAt(pos) == '?') {
                // Iterate over every letter and proceed further by replacing
                // the character in place of '?'
                for (int i = 0; i < 26; i++) {
                    if (pCrawl.children[i] != null) {
                        pCrawl.children[i].search(word, found, curr_found + (char) ('a' + i), pos + 1);
                    }
                }
            } else {
                // Check if the pointer at the character position is available,
                // then proceed
                if (pCrawl.children[word.charAt(pos) - 'a'] != null) {
                    pCrawl.children[word.charAt(pos) - 'a']
                            .search(word, found, curr_found + word.charAt(pos), pos + 1);
                }
            }
            return;
        }

        // Utility function for the search operation
        int searchUtil(String word) {
            TrieNode pCrawl = this;
            boolean found = false;
            ans2 = 0;
            pCrawl.search(word, found, "", 0);
            return ans2;
        }
    }

    static int searchPattern(String arr[], int N, String str) {
        // Object of the class TrieNode
        TrieNode obj = new TrieNode();
        for (int i = 0; i < N; i++) {
            obj.addWord(arr[i]);
        }
        // Search pattern
        return obj.searchUtil(str);
    }

    public static void ans(String[] arr, int n, int m, String[] query, int q) {
        for (int i = 0; i < q; i++)
            System.out.println(searchPattern(arr, n, query[i]));
    }

    public static void main(String args[]) {
        Scanner scn = new Scanner(System.in);
        int n = scn.nextInt();
        int m = scn.nextInt();
        String[] arr = new String[n];
        for (int i = 0; i < n; i++) {
            arr[i] = scn.next();
        }
        int q = scn.nextInt();
        String[] query = new String[q];
        for (int i = 0; i < q; i++) {
            query[i] = scn.next();
        }
        ans(arr, n, m, query, q);
    }
}
This is brute force, but a Trie is a better implementation.
import collections

def check(db, chk):
    """
    Input: db, which is a list of words
           chk: the pattern string to find
    """
    seen = collections.defaultdict(list)
    for i in db:
        for j in range(len(i)):
            temp = i[:j] + "?" + i[j+1:]
            seen[temp].append(i)
    return len(seen[chk])

print(check(["cat", "bat"], "?at"))
Sounds like it was a coding challenge about https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff
Depending on the parameters N, M, Q, as well as the data and query distribution, the "best" algorithm will be different. A simple example: given the query ???, you know the answer (the length of the dictionary) without any computation 😸
In the general case, most likely, it pays to create a search index in advance (that is while reading the dictionary, before any query is seen).
I'd go with this: number the input 0 cat; 1 map; ...
Then build a search index per letter position:
index = [
    {"c": 0b00001, "m": 0b00010, ...},  # first query letter
    {"a": 0b01111, "e": 0b10000},       # second query letter
]
Prepare all = 0b11111 (all bits set) as "matches everything".
Then query lookup: ?a? ⇒ all & index[1]["a"] & all. †
Afterwards you'll need to count the number of bits set in the result.
The time complexity of a single query is therefore M dictionary lookups plus bitwise ANDs over N-bit masks, i.e. O(N*M) ‡, which is a decent trade-off.
The entire batch is O(N*M*Q).
Python (as well as ES2020) supports native arbitrary-precision integers, which can be elegantly used for bitmaps, as well as native dictionaries; use them :) However, if the data is sparse, an adaptive or compressed bitmap such as https://pypi.org/project/roaringbitmap may perform better.
† In practice ... & index[1].get("a", 0) & ... in case you hit a blank.
‡ Python data structure time complexity is reported O(...) amortised worst case while in CS O(...) worst case is usually considered. While the difference is subtle, it can bite even experienced developers, see e.g. https://bugs.python.org/issue13703
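A runnable sketch of this bitmap index, using Python's arbitrary-precision integers as the bitmaps (variable names are my own, example dictionary assumed):

```python
from collections import defaultdict

words = ["cat", "map", "bat", "man", "pen"]  # numbered 0..4

# index[pos][letter] is a bitmap of the words having `letter` at `pos`.
index = [defaultdict(int) for _ in range(len(words[0]))]
for i, w in enumerate(words):
    for pos, ch in enumerate(w):
        index[pos][ch] |= 1 << i

all_words = (1 << len(words)) - 1  # all bits set: "matches everything"

def match_count(q):
    mask = all_words
    for pos, ch in enumerate(q):
        if ch != "?":
            mask &= index[pos].get(ch, 0)  # .get covers hitting a blank
    return bin(mask).count("1")            # population count

print(match_count("?a?"))  # words 0..3 have 'a' in the middle -> 4
```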
One approach could be to use Python's fnmatch module (for every pattern sum the matches in words):
import fnmatch
names = ['uqqur', 'lxzev', 'ydfgs']
patterns = ['?z???', '???i?', '???e?', '???f?', '?z???']
[sum(fnmatch.fnmatch(name, pattern) for name in names) for pattern in patterns]
# [0, 0, 1, 0, 0]
Related
I'm trying to convert a Python algorithm from this Stack Overflow answer to split a string without spaces into words to C#.
Unfortunately I don't know anything about Python so the translation is proving very difficult.
The lines I don't understand are:
wordcost = dict((k, log((i+1)*log(len(words)))) for i,k in enumerate(words)) <= THIS LINE
and
def best_match(i):
    candidates = enumerate(reversed(cost[max(0, i-maxword):i]))
    return min((c + wordcost.get(s[i-k-1:i], 9e999), k+1) for k,c in candidates) <= THIS LINE
It looks as though best_match(i) it should return a Tuple<>. What is the equivalent in C#?
Here is the full Python script:
from math import log

# Build a cost dictionary, assuming Zipf's law and cost = -math.log(probability).
words = open("words-by-frequency.txt").read().split()
wordcost = dict((k, log((i+1)*log(len(words)))) for i,k in enumerate(words))
maxword = max(len(x) for x in words)

def infer_spaces(s):
    """Uses dynamic programming to infer the location of spaces in a string
    without spaces."""

    # Find the best match for the i first characters, assuming cost has
    # been built for the i-1 first characters.
    # Returns a pair (match_cost, match_length).
    def best_match(i):
        candidates = enumerate(reversed(cost[max(0, i-maxword):i]))
        return min((c + wordcost.get(s[i-k-1:i], 9e999), k+1) for k,c in candidates)

    # Build the cost array.
    cost = [0]
    for i in range(1,len(s)+1):
        c,k = best_match(i)
        cost.append(c)

    # Backtrack to recover the minimal-cost string.
    out = []
    i = len(s)
    while i>0:
        c,k = best_match(i)
        assert c == cost[i]
        out.append(s[i-k:i])
        i -= k

    return " ".join(reversed(out))
I found that algorithm interesting, so here is my translation:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;

class WordSplitter {
    private readonly Dictionary<string, double> _wordCosts;
    private readonly int _maxWordLength;

    public WordSplitter(string freqFilePath) {
        // words = open("words-by-frequency.txt").read().split()
        var words = File.ReadAllLines(freqFilePath);
        // wordcost = dict((k, log((i+1)*log(len(words)))) for i,k in enumerate(words))
        _wordCosts = words.Select((k, i) => new { Key = k, Value = Math.Log((i + 1) * Math.Log(words.Length)) }).ToDictionary(c => c.Key, c => c.Value);
        // maxword = max(len(x) for x in words)
        _maxWordLength = words.Select(c => c.Length).Max();
    }

    public string InferSpaces(string target) {
        // cost = [0]
        var costs = new List<double>() { 0 };
        foreach (var i in Enumerable.Range(1, target.Length)) {
            var (c, k) = BestMatch(i);
            costs.Add(c);
        }

        var output = new List<string>();
        int len = target.Length;
        while (len > 0) {
            var (c, k) = BestMatch(len);
            Debug.Assert(k > 0);
            Debug.Assert(c == costs[len]);
            // use Substring if your compiler version doesn't support slicing,
            // but pay attention that Substring's second argument is a length, not an end index
            output.Add(target[(len - k)..len]);
            len -= k;
        }
        output.Reverse();
        return String.Join(" ", output);

        (double cost, int length) BestMatch(int i) {
            var start = Math.Max(0, i - _maxWordLength);
            // GetRange's second argument is a length
            var x = costs.GetRange(start, i - start);
            x.Reverse();
            // now, this part is easier to comprehend if it's expanded a bit;
            // you can do it in a cryptic way too, like in Python, if you like
            (double cost, int length)? result = null;
            for (int k = 0; k < x.Count; k++) {
                var c = x[k];
                var sub = target[(i - k - 1)..i];
                var cost = c + (_wordCosts.ContainsKey(sub) ? _wordCosts[sub] : 9e99); // 9e99 is just some big number; 9e999 is outside of double range in C#, so use a smaller one
                // save minimal cost
                if (result == null || result.Value.cost > cost)
                    result = (cost, k + 1);
            }
            // return minimal cost
            return result.Value;
        }
    }
}
Usage:
var splitter = new WordSplitter(@"C:\tmp\words.txt");
var result = splitter.InferSpaces("thumbgreenappleactiveassignmentweeklymetaphor");
I have been given a set S of n integers, and have to print the size of a maximal subset S' of S where the sum of any 2 numbers in S' is not evenly divisible by k.
Input Format
The first line contains 2 space-separated integers, n and k, respectively.
The second line contains n space-separated integers describing the unique values of the set.
My Code :
import sys
n,k = raw_input().strip().split(' ')
n,k = [int(n),int(k)]
a = map(int,raw_input().strip().split(' '))
count = 0
for i in range(len(a)):
    for j in range(len(a)):
        if (a[i]+a[j])%k != 0:
            count = count+1
print count
Input:
4 3
1 7 2 4
Expected Output:
3
My Output:
10
What am i doing wrong? Anyone?
You can solve it in O(n) time using the following approach:
L = [0]*k                      # L[r] = how many numbers have remainder r mod k
for x in a:
    L[x % k] += 1

res = 0
for i in range(k//2+1):
    if i == 0 or k == i*2:
        # remainders 0 and k/2 pair up with themselves:
        # at most one such element can be kept
        res += bool(L[i])
    else:
        # remainders i and k-i sum to a multiple of k:
        # keep the larger of the two groups
        res += max(L[i], L[k-i])
print(res)
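As a sanity check, the same approach wrapped in a function (my own wrapper) and run on the question's sample input (n=4, k=3, a=[1, 7, 2, 4]):

```python
def max_subset_size(a, k):
    L = [0] * k                   # counts of each remainder mod k
    for x in a:
        L[x % k] += 1
    res = 0
    for i in range(k // 2 + 1):
        if i == 0 or k == i * 2:  # self-pairing remainder classes
            res += bool(L[i])
        else:                     # keep the larger of classes i and k-i
            res += max(L[i], L[k - i])
    return res

print(max_subset_size([1, 7, 2, 4], 3))  # -> 3, the expected output
```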
Yes, an O(n) solution for this problem is very much possible. As planetp rightly pointed out, it's pretty much the same solution I have coded in Java. Added comments for better understanding.
import java.io.*;
import java.util.*;

public class Solution {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        int k = in.nextInt();
        int[] arr = new int[k];
        Arrays.fill(arr, 0);
        // Count the remainders. This assumes there are no duplicates (the
        // problem says the values are unique); store them in a list instead
        // if duplicates are possible, only then would you get different results.
        for (int i = 0; i < n; i++)
            arr[in.nextInt() % k] += 1;
        int res = 0;
        for (int i = 0; i <= (k / 2); i++) {
            if (i == 0 || k == i * 2) {
                // If a number is divisible by k we can keep only one, and if a
                // number is exactly half of k we can likewise keep only one.
                // Rationale: if a and b are both divisible by k, then a+b is
                // also divisible by k; similarly, if c%k == k/2 and we have
                // more than one such number, any pair of them sums to a
                // multiple of k. Hence we restrict these classes to one value each.
                if (arr[i] != 0)
                    res += 1;
            }
            else {
                // Simply figure out which remainder class is bigger: if
                // arr[i] > arr[k-i], take the bigger one. E.g. with k=4 and
                // numbers 1,3,5,7,9,13,17: arr[1]=4 and arr[3]=2, so pick
                // arr[1], because 1,5,13,17 can be kept together.
                int p = arr[i];
                int q = arr[k - i];
                if (p >= q)
                    res += p;
                else
                    res += q;
            }
        }
        System.out.println(res);
    }
}
# given k, n and a as per your input.
# Will return 0 directly if n == 1
def maxsize(k, n, a):
    import itertools
    while n > 1:
        sets = itertools.combinations(a, n)
        for set_ in sets:
            if all((u+v) % k for (u, v) in itertools.combinations(set_, 2)):
                return n
        n -= 1
    return 0
Java solution
import java.io.*;
import java.util.*;

public class Solution {
    static PrintStream out = System.out;

    public static void main(String[] args) {
        /* Enter your code here. Read input from STDIN. Print output to STDOUT. Your class should be named Solution. */
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        int k = in.nextInt();
        int[] A = new int[n];
        for (int i = 0; i < n; i++) {
            A[i] = in.nextInt();
        }
        int[] R = new int[k];
        for (int i = 0; i < n; i++)
            R[A[i] % k] += 1;
        int res = 0;
        for (int i = 0; i < k / 2 + 1; i++) {
            if (i == 0 || k == i * 2)
                res += (R[i] != 0) ? 1 : 0;
            else
                res += Math.max(R[i], R[k - i]);
        }
        out.println(res);
    }
}
I am looking for an algorithm that takes a string and splits it into a certain number of parts. These parts shall contain complete words (so whitespace is used to split the string), and the parts shall be of nearly the same length, or contain the longest possible parts.
I know it is not that hard to code a function that does what I want, but I wonder whether there is a well-proven and fast algorithm for this purpose?
edit:
To clarify my question I'll describe you the problem I am trying to solve.
I generate images with a fixed width. Into these images I write user names using GD and FreeType in PHP. Since I have a fixed width, I want to split the names into 2 or 3 lines if they don't fit into one.
In order to fill as much space as possible, I want to split the names in a way that each line contains as many words as possible. By this I mean that each line should hold as many words as necessary to keep its length close to the average line length of the whole text block. So if there are one long word and two short words, the two short words should stand on one line if that makes all lines about equally long.
(Then I compute the text block width using 1, 2 or 3 lines, and if it fits into my image I render it. Only if there are 3 lines and it still won't fit do I decrease the font size until everything fits.)
Example:
This is a long text
should be display something like that:
This is a
long text
or:
This is
a long
text
but not:
This
is a long
text
and also not:
This is a long
text
Hope I could explain clearer what I am looking for.
If you're talking about line-breaking, take a look at Dynamic Line Breaking, which gives a Dynamic Programming solution to divide words into lines.
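A minimal sketch of such a dynamic-programming split, assuming a cost of squared deviation from the ideal average line length (this formulation and the names are my own, not the linked article's code):

```python
from functools import lru_cache

def split_lines(text, k):
    """Split `text` into `k` lines of words, minimizing the summed squared
    deviation of each line's length from the ideal average length."""
    words = text.split()
    n = len(words)
    ideal = len(" ".join(words)) / k  # ideal average line length

    def line_len(i, j):
        # length of words[i:j] joined by single spaces
        return sum(len(w) for w in words[i:j]) + (j - i - 1)

    @lru_cache(maxsize=None)
    def best(i, lines):
        # best (cost, split) of words[i:] into `lines` lines
        if lines == 1:
            return (line_len(i, n) - ideal) ** 2, (words[i:],)
        best_cost, best_split = float("inf"), None
        # leave at least one word for each remaining line
        for j in range(i + 1, n - lines + 2):
            head_cost = (line_len(i, j) - ideal) ** 2
            tail_cost, tail_split = best(j, lines - 1)
            if head_cost + tail_cost < best_cost:
                best_cost = head_cost + tail_cost
                best_split = (words[i:j],) + tail_split
        return best_cost, best_split

    return [" ".join(line) for line in best(0, k)[1]]

print(split_lines("This is a long text", 2))  # ['This is a', 'long text']
```

On the question's example this reproduces the desired splits: two lines give "This is a" / "long text", three give "This is" / "a long" / "text".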
I don't know about proven, but it seems like the simplest and most efficient solution would be to divide the length of the string by N and then find the whitespace closest to each split location (you'll want to search both forward and backward).
The code below seems to work, though there are plenty of error conditions that it doesn't handle. It should run in O(n), where n is the number of parts you want.
class Program
{
    static void Main(string[] args)
    {
        var s = "This is a string for testing purposes. It will be split into 3 parts";
        var p = s.Length / 3;
        var w1 = 0;
        var w2 = FindClosestWordIndex(s, p);
        var w3 = FindClosestWordIndex(s, p * 2);
        Console.WriteLine(string.Format("1: {0}", s.Substring(w1, w2 - w1).Trim()));
        Console.WriteLine(string.Format("2: {0}", s.Substring(w2, w3 - w2).Trim()));
        Console.WriteLine(string.Format("3: {0}", s.Substring(w3).Trim()));
        Console.ReadKey();
    }

    public static int FindClosestWordIndex(string s, int startIndex)
    {
        int wordAfterIndex = -1;
        int wordBeforeIndex = -1;
        for (int i = startIndex; i < s.Length; i++)
        {
            if (s[i] == ' ')
            {
                wordAfterIndex = i;
                break;
            }
        }
        for (int i = startIndex; i >= 0; i--)
        {
            if (s[i] == ' ')
            {
                wordBeforeIndex = i;
                break;
            }
        }
        if (wordAfterIndex - startIndex <= startIndex - wordBeforeIndex)
            return wordAfterIndex;
        else
            return wordBeforeIndex;
    }
}
The output for this is:
1: This is a string for
2: testing purposes. It will
3: be split into 3 parts
Again, following Brian's answer, I made a PHP version of his code:
// Input text
$txt = "This is a really long string that should be broken up onto lines of about the same number of characters.";

// Number of lines
$numLines = 3;

/* Do it, result comes as an array: */
$aResult = splitLinesByClosestWhitespace($txt, $numLines);

/* Output result: */
if ($aResult)
{
    for ($x = 1; $x <= $numLines; $x++)
        echo "Line ".$x.": ".$aResult[$x]."<br>";
} else {
    echo "Not enough spaces to generate the lines!";
}

/**********************/

/**
 * Splits a string into multiple lines of the closest possible same length,
 * using the closest whitespaces
 * @param string $txt String to split
 * @param integer $numLines Number of lines
 * @return array|false
 */
function splitLinesByClosestWhitespace($txt, $numLines)
{
    $p = intval( strlen($txt) / $numLines );
    $aTxtIndx = array();
    $aTxt = array();

    // Check we have enough whitespaces to generate the number of lines
    $wsCount = count( explode(" ", $txt) ) - 1;
    if ($wsCount < $numLines)
        return false;

    // Get the indexes
    for ($x = 1; $x <= $numLines; $x++)
    {
        $aTxtIndx[$x] = FindClosestWordIndex($txt, $p * ($x - 1));
    }

    // Do the split (note: substr takes a length, not an end index)
    for ($x = 1; $x <= $numLines; $x++)
    {
        if ($x != $numLines)
            $aTxt[$x] = trim( substr($txt, $aTxtIndx[$x], $aTxtIndx[$x + 1] - $aTxtIndx[$x]) );
        else
            $aTxt[$x] = trim( substr($txt, $aTxtIndx[$x]) );
    }

    return $aTxt;
}

/**
 * Finds the closest word to a string index
 * @param string $s String to search
 * @param integer $startIndex Index at which to find the closest word
 * @return integer
 */
function FindClosestWordIndex($s, $startIndex)
{
    $wordAfterIndex = 0;
    $wordBeforeIndex = 0;

    for ($i = $startIndex; $i < strlen($s); $i++)
    {
        if ($s[$i] == ' ')
        {
            $wordAfterIndex = $i;
            break;
        }
    }

    for ($i = $startIndex; $i >= 0; $i--)
    {
        if ($s[$i] == ' ')
        {
            $wordBeforeIndex = $i;
            break;
        }
    }

    if ($wordAfterIndex - $startIndex <= $startIndex - $wordBeforeIndex)
        return $wordAfterIndex;
    else
        return $wordBeforeIndex;
}
Partitioning into equal sizes is NP-Complete
Working Python code:
Wrap.py - breaks paragraphs into lines, attempting to avoid short lines.
SMAWK.py - does the same thing in O(n).
Both are by David Eppstein.
The way word-wrap is usually implemented is to place as many words as possible onto one line, and break to the next when there is no more room. This assumes, of course, that you have a maximum width in mind.
Regardless of what algorithm you use, keep in mind that unless you are working with a fixed-width font, you want to work with the physical width of the word, not the number of letters.
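The greedy approach described above can be sketched like this (a minimal version using character counts rather than physical widths, and ignoring words longer than the maximum width; the function name is my own):

```python
def greedy_wrap(words, max_width):
    """Place as many words as fit on each line, then break to the next."""
    lines = []
    current = []   # words on the line being built
    length = 0     # current line length in characters
    for word in words:
        # +1 for the separating space when the line already has words
        extra = len(word) + (1 if current else 0)
        if length + extra > max_width and current:
            lines.append(" ".join(current))
            current, length = [], 0
            extra = len(word)
        current.append(word)
        length += extra
    if current:
        lines.append(" ".join(current))
    return lines

print(greedy_wrap("This is a long text".split(), 14))
# ['This is a long', 'text']
```

Note that with a width of 14 this produces exactly the "This is a long / text" layout the question wants to avoid, which illustrates why greedy wrapping alone doesn't balance line lengths.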
Following Brian's answer, I made a JavaScript version of his code: http://jsfiddle.net/gmoz22/CPGY2/.
// Input text
var txt = "This is a really long string that should be broken up onto lines of about the same number of characters.";

// Number of lines
var numLines = 3;

/* Do it, result comes as an array: */
var aResult = splitLinesByClosestWhitespace(txt, numLines);

/* Output result: */
if (aResult)
{
    for (var x = 1; x <= numLines; x++)
        document.write( "Line " + x + ": " + aResult[x] + "<br>" );
} else {
    document.write("Not enough spaces to generate the lines!");
}

/**********************/

// Original algorithm by http://stackoverflow.com/questions/2381525/algorithm-split-a-string-into-n-parts-using-whitespaces-so-all-parts-have-nearl/2381772#2381772, rewritten for JavaScript by Steve Oziel

/**
 * Trims a string for older browsers.
 * Added only if trim() is not already available on the prototype object,
 * since overriding it is a huge performance hit (generally recommended when extending native objects)
 */
if (!String.prototype.trim)
{
    String.prototype.trim = function(){ return this.replace(/^\s+|\s+$/g, ''); };
}

/**
 * Splits a string into multiple lines of the closest possible same length,
 * using the closest whitespaces
 * @param {string} txt String to split
 * @param {integer} numLines Number of lines
 * @returns {Array}
 */
function splitLinesByClosestWhitespace(txt, numLines)
{
    var p = parseInt(txt.length / numLines);
    var aTxtIndx = [];
    var aTxt = [];

    // Check we have enough whitespaces to generate the number of lines
    var wsCount = txt.split(" ").length - 1;
    if (wsCount < numLines)
        return false;

    // Get the indexes
    for (var x = 1; x <= numLines; x++)
    {
        aTxtIndx[x] = FindClosestWordIndex(txt, p * (x - 1));
    }

    // Do the split
    for (var x = 1; x <= numLines; x++)
    {
        if (x != numLines)
            aTxt[x] = txt.slice(aTxtIndx[x], aTxtIndx[x + 1]).trim();
        else
            aTxt[x] = txt.slice(aTxtIndx[x]).trim();
    }

    return aTxt;
}

/**
 * Finds the closest word to a string index
 * @param {string} s String to search
 * @param {integer} startIndex Index at which to find the closest word
 * @returns {integer}
 */
function FindClosestWordIndex(s, startIndex)
{
    var wordAfterIndex = 0;
    var wordBeforeIndex = 0;

    for (var i = startIndex; i < s.length; i++)
    {
        if (s[i] == ' ')
        {
            wordAfterIndex = i;
            break;
        }
    }

    for (var i = startIndex; i >= 0; i--)
    {
        if (s[i] == ' ')
        {
            wordBeforeIndex = i;
            break;
        }
    }

    if (wordAfterIndex - startIndex <= startIndex - wordBeforeIndex)
        return wordAfterIndex;
    else
        return wordBeforeIndex;
}
It works fine as long as the number of desired lines is not too close to the number of whitespaces.
In the example I gave there are 19 whitespaces, and it starts to break when you ask for 17, 18 or 19 lines.
Edits welcome!
I have a simple task: to count how many times each letter occurs in a string. I used a Counter() for it, but on one forum I saw that using dict()/Counter() is much slower than calling string.count() for every letter. I thought Counter() would iterate through the string only once, while the string.count() solution has to iterate through it four times (in this case). Why is Counter() so slow?
>>> timeit.timeit('x.count("A");x.count("G");x.count("C");x.count("T")', setup="x='GAAAAAGTCGTAGGGTTCCTTCACTCGAGGAATGCTGCGACAGTAAAGGAGGCCACGTGGTTGAGAGTTCCTAAGCATTCGTATGTACACCCGGACTCGATGCACTCAAACGTGCTTAAGGGTAAAGAAGGTCGAGAGGTATACTGGGGCACTCCCCTTAGAATTATATCTTGGTCAACTACAATATGGATGGAAATTCTAAGCCGAAAACGACCCGCTAGCGGATTGTGTATGTATCACAACGGTTTCGGTTCATACGCAAAATCATCCCATTTCAAGGCCACTCAAGGACATGACGCCGTGCAACTCCGAGGACATCCCTCAGCGATTGATGCAACCTGGTCATCTAATAATCCTTAGAACGGATGTGCCCTCTACTGGGAGAGCCGGCTAGACTGGCATCTCGCGTTGTTCGTACGAGCTCCGGGCGCCCGGGCGGTGTACGTTGATGTACAGCCTAAGAGCTTTCCACCTATGCTACGAACTAATTTCCCGTCCATCGTTCCTCGGACTGAGGTCAAAGTAACCCGGAAGTACATGGATCAGATACACTCACAGTCCCCTTTAATGACTGAGCTGGACGCTATTGATTGCTTTATAAGTGTTATGGTGAACTCGAAGACTTAGCTAGGAATTTCGCTATACCCGGGTAATGAGCTTAATACCTCACAGCATGTACGCTCTGAATATATGTAGCGATGCTAGCGGAACGTAAGCGTGAGCGTTATGCAGGGCTCCGCACCTCGTGGCCACTCGCCCAATGCCCGAGTTTTTGAGCAATGCCATGCCCTCCAGGTGAAGCGTGCTGAATATGTTCCGCCTCCGCACACCTACCCTACGGGCCTTACGCCATAGCTGAGGATACGCGAGTTGGTTAGCGATTACGTCATTCCAGGTGGTCGTTC'", number=10000)
0.07911698750407936
>>> timeit.timeit('Counter(x)', setup="from collections import Counter;x='GAAAAAGTCGTAGGGTTCCTTCACTCGAGGAATGCTGCGACAGTAAAGGAGGCCACGTGGTTGAGAGTTCCTAAGCATTCGTATGTACACCCGGACTCGATGCACTCAAACGTGCTTAAGGGTAAAGAAGGTCGAGAGGTATACTGGGGCACTCCCCTTAGAATTATATCTTGGTCAACTACAATATGGATGGAAATTCTAAGCCGAAAACGACCCGCTAGCGGATTGTGTATGTATCACAACGGTTTCGGTTCATACGCAAAATCATCCCATTTCAAGGCCACTCAAGGACATGACGCCGTGCAACTCCGAGGACATCCCTCAGCGATTGATGCAACCTGGTCATCTAATAATCCTTAGAACGGATGTGCCCTCTACTGGGAGAGCCGGCTAGACTGGCATCTCGCGTTGTTCGTACGAGCTCCGGGCGCCCGGGCGGTGTACGTTGATGTACAGCCTAAGAGCTTTCCACCTATGCTACGAACTAATTTCCCGTCCATCGTTCCTCGGACTGAGGTCAAAGTAACCCGGAAGTACATGGATCAGATACACTCACAGTCCCCTTTAATGACTGAGCTGGACGCTATTGATTGCTTTATAAGTGTTATGGTGAACTCGAAGACTTAGCTAGGAATTTCGCTATACCCGGGTAATGAGCTTAATACCTCACAGCATGTACGCTCTGAATATATGTAGCGATGCTAGCGGAACGTAAGCGTGAGCGTTATGCAGGGCTCCGCACCTCGTGGCCACTCGCCCAATGCCCGAGTTTTTGAGCAATGCCATGCCCTCCAGGTGAAGCGTGCTGAATATGTTCCGCCTCCGCACACCTACCCTACGGGCCTTACGCCATAGCTGAGGATACGCGAGTTGGTTAGCGATTACGTCATTCCAGGTGGTCGTTC'", number=10000)
2.1727447831030844
>>> 2.1727447831030844 / 0.07911698750407936
27.462430656767047
>>>
Counter() allows you to count any hashable objects, not just substrings. Both solutions are O(n) time. Your measurements show that the overhead of iterating over and hashing individual characters in Counter() is greater than running s.count() four times.
Counter() can use a C helper to count elements, but it seems it doesn't special-case strings and uses a general algorithm applicable to any iterable, i.e., processing a single character involves multiple Python C API calls: advance the iterator, get the previous value (a lookup in the hash table), increment the counter, set the new value (another hash-table lookup):
while (1) {
    key = PyIter_Next(it);
    if (key == NULL)
        break;
    oldval = PyObject_GetItem(mapping, key);
    if (oldval == NULL) {
        if (!PyErr_Occurred() || !PyErr_ExceptionMatches(PyExc_KeyError))
            break;
        PyErr_Clear();
        Py_INCREF(one);
        newval = one;
    } else {
        newval = PyNumber_Add(oldval, one);
        Py_DECREF(oldval);
        if (newval == NULL)
            break;
    }
    if (PyObject_SetItem(mapping, key, newval) == -1)
        break;
    Py_CLEAR(newval);
    Py_DECREF(key);
}
Compare it to FASTSEARCH() overhead for bytestrings:
for (i = 0; i < n; i++)
    if (s[i] == p[0]) {
        count++;
        if (count == maxcount)
            return maxcount;
    }
return count;
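To confirm that the two approaches agree on the result and only differ in speed, a quick sanity check (the sample string is my own):

```python
from collections import Counter

s = "GATTACA"
# one fast C-level scan of the string per distinct character
by_count = {c: s.count(c) for c in set(s)}
# one Python-level pass, hashing and dict-updating every character
by_counter = Counter(s)

print(by_counter)   # Counter({'A': 3, 'T': 2, 'G': 1, 'C': 1})
assert by_count == by_counter
```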
The Counter class inherits from dict, while string.count is backed by the following C implementation (CPython 3.3):
/* stringlib: count implementation */

#ifndef STRINGLIB_FASTSEARCH_H
#error must include "stringlib/fastsearch.h" before including this module
#endif

Py_LOCAL_INLINE(Py_ssize_t)
STRINGLIB(count)(const STRINGLIB_CHAR* str, Py_ssize_t str_len,
                 const STRINGLIB_CHAR* sub, Py_ssize_t sub_len,
                 Py_ssize_t maxcount)
{
    Py_ssize_t count;

    if (str_len < 0)
        return 0; /* start > len(str) */
    if (sub_len == 0)
        return (str_len < maxcount) ? str_len + 1 : maxcount;

    count = FASTSEARCH(str, str_len, sub, sub_len, maxcount, FAST_COUNT);

    if (count < 0)
        return 0; /* no match */

    return count;
}
Guess which one is faster? :)
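If you want to reproduce the comparison on your own machine (absolute numbers will vary by Python version and hardware; the sample string here is my own), a minimal benchmark looks like this:

```python
import timeit
from collections import Counter

s = "GATTACA" * 1000

# four C-level scans of the string
t_count = timeit.timeit(
    lambda: (s.count("A"), s.count("C"), s.count("G"), s.count("T")),
    number=1000)
# one Python-level pass over every character
t_counter = timeit.timeit(lambda: Counter(s), number=1000)

print("str.count x4: %.3fs   Counter: %.3fs" % (t_count, t_counter))
```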
I have to write a program that generates all permutations of the numbers {1, 2, ..., n} using backtracking. I managed to do it in C, and it works very well; here is the code:
int st[25], n = 4;

int valid(int k)
{
    int i;
    for (i = 1; i <= k - 1; i++)
        if (st[k] == st[i])
            return 0;
    return 1;
}

void bktr(int k)
{
    int i;
    if (k == n + 1)
    {
        for (i = 1; i <= n; i++)
            printf("%d ", st[i]);
        printf("\n");
    }
    else
        for (i = 1; i <= n; i++)
        {
            st[k] = i;
            if (valid(k))
                bktr(k + 1);
        }
}

int main()
{
    bktr(1);
    return 0;
}
Now I have to write it in Python. Here is what I did:
st = []
n = 4

def bktr(k):
    if k == n + 1:
        for i in range(1, n):
            print(st[i])
    else:
        for i in range(1, n):
            st[k] = i
            if valid(k):
                bktr(k + 1)

def valid(k):
    for i in range(1, k - 1):
        if st[k] == st[i]:
            return 0
    return 1

bktr(1)
I get this error:
list assignment index out of range
at st[k]==st[i].
Python has a "permutations" function in the itertools module:
import itertools
itertools.permutations([1,2,3])
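For example, the permutations are yielded as tuples, in lexicographic order when the input is sorted:

```python
import itertools

for p in itertools.permutations([1, 2, 3]):
    print(p)
# (1, 2, 3)
# (1, 3, 2)
# (2, 1, 3)
# (2, 3, 1)
# (3, 1, 2)
# (3, 2, 1)
```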
If you need to write the code yourself (for example if this is homework), here is the issue:
Python lists do not have a predetermined size, so you can't just set e.g. the 10th element to 3. You can only change existing elements or add to the end.
Python lists (and C arrays) also start at 0. This means you have to access the first element with st[0], not st[1].
When you start your program, st has a length of 0; this means you cannot assign to st[1], because that position does not exist yet.
If this is confusing, I recommend you use the st.append(element) method instead, which always adds to the end.
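Putting those points together, a sketch of the same backtracking approach adapted to Python lists (using append/pop instead of fixed indices; the parameter names are my own) could look like this:

```python
def bktr(st, n, out):
    # st holds the current partial permutation
    if len(st) == n:
        out.append(st[:])        # complete permutation: store a copy
        return
    for i in range(1, n + 1):
        if i not in st:          # replaces the valid() check
            st.append(i)         # choose
            bktr(st, n, out)
            st.pop()             # un-choose (backtrack)

results = []
bktr([], 3, results)
print(results)
# [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
```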
If the code is done and works, I recommend you head over to code review stack exchange because there are a lot more things that could be improved.