Fastest sorted string concatenation - python

What is the fastest and most efficient way to do this:
word = "dinosaur"
newWord = word[0] + ''.join(sorted(word[1:]))
output:
"dainorsu"
Thoughts:
Would something such as converting the word to an array increase performance? I read somewhere that arrays have less overhead, due to all of their elements being the same data type, compared to a string.
Basically I want to sort everything after the first character in the string as fast as possible. If memory is saved, that would also be a plus. The problem I am trying to solve has a time limit, so I am trying to be as fast as possible. I don't know much about Python efficiency under the hood, so if you explain why a given method is fast as well, that would be AWESOME!

Here's how I'd approach it.
Create an array of size 26 (assuming only lowercase letters are used). Then iterate through each character in the string: for an 'a', increment index 0 of the array; for a 'b', increment index 1; and so on. Once you've scanned the whole string (which takes O(n)), you can reconstruct the sorted result by repeating 'a' array[0] times, 'b' array[1] times, and so on. This is a counting sort.
This approach beats comparison-based sorting algorithms like quicksort, which have complexity O(n log n).
EDIT: Finally, you'd also want to reassemble the final string efficiently. Repeated concatenation with the + operator can be inefficient in some languages, so use an efficient string builder; in Python, collect the pieces in a list and ''.join them once.
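A minimal sketch of that counting sort in Python (assuming, as above, lowercase ASCII letters only):

def sort_after_first(word):
    # Count occurrences of each letter after the first character.
    counts = [0] * 26
    for ch in word[1:]:
        counts[ord(ch) - ord('a')] += 1
    # Rebuild in one pass over the 26 buckets; join once at the end.
    pieces = [word[0]]
    for i, n in enumerate(counts):
        if n:
            pieces.append(chr(ord('a') + i) * n)
    return ''.join(pieces)

print(sort_after_first("dinosaur"))  # dainorsu

That said, for short words the original ''.join(sorted(word[1:])) is hard to beat in CPython, because sorted runs in C while the counting loop runs in interpreted Python; the counting approach mainly pays off for long strings over a small alphabet.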

Related

Code complexity Explanation | Powerset Generation

I am trying to understand the difference/similarity in complexity between 2 ways of writing code to generate a powerset:
def powerset(s, i, cur):
    if i == len(s):
        print(cur)
        return
    powerset(s, i + 1, cur + s[i])  # string addition is also possibly O(n^2) here?
    powerset(s, i + 1, cur)

powerset("abc", 0, "")
Output:
['abc', 'ab', 'ac', 'a', 'bc', 'b', 'c', '']
This goes into recursion with 2 choices at each step (adding s[i] or not), creating 2 branches and leading to 2^n leaves; adding each result to the array/printing it is another O(n), leading to O(n*2^n).
Thinking about it in terms of branches^depth also gives O(2^n).
The space complexity for this should be O(n), considering the max depth of the tree goes up to n by the above logic?
And with this:
s = "abc"
res = [""]
for i in s:
    res += [j + i for j in res]
I get the same output.
But here, I see 2 for loops, plus the additional complexity of creating the strings -- which is possibly O(n^2) in Python -- leading to a possible O(n^4), as opposed to O(n*2^n) in the solution above.
Space complexity here seems to me to be O(n), since we are reserving space just for the output and no additional space, so overall O(1)?
Is my understanding of these solutions' time and space complexity correct? I was under the impression that computing a powerset is O(2^n), but I figured maybe this is a more optimized solution? (Even though the second solution seems more naive.)
https://stackoverflow.com/questions/34008010/is-the-time-complexity-of-iterative-string-append-actually-on2-or-on
Here, they suggest using arrays to avoid the `O(n^2)` complexity of string concatenation.
It's clear that both of those solutions involve creating 2^len(s) strings, which are all the subsets of s. The amount of time that takes is O(2^len(s) * T), where T is the time to create and process each string. The processing time is impossible to know without seeing the code which consumes the subsets; if the processing consists only of printing, then it's presumably O(len(s)), since printing has to be done character by character.
I think the core of your question is how much time constructing the string takes.
It's often said that constructing a string by repeated concatenation is O(n²), since the entire string needs to be copied in order to perform each concatenation. And that's basically true (although Python has some tricks up its sleeve which can optimise in certain cases). But that's assuming that you're constructing the string from scratch, tossing away the intermediate results as soon as they're no longer necessary.
In this case, however, the intermediate results are also part of the desired result. And you only copy the prefixes implicitly, when you append the next character. That means that the additional cost of producing each subset is O(n), not O(n²), because the cost of producing the immediate prefix was already accounted for. That's true in both of the solutions you provide.
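As a sanity check on that accounting: each of the n characters appears in exactly 2^(n-1) of the 2^n subsets, so the total number of characters across all subsets is n*2^(n-1), which is Θ(n*2^n). A quick verification with itertools (the string here is arbitrary):

from itertools import combinations

s = "abcdefghij"
n = len(s)
# Total characters across all 2**n subsets of s.
total = sum(len(''.join(c))
            for r in range(n + 1)
            for c in combinations(s, r))
print(total, n * 2 ** (n - 1))  # both values are 5120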

Compare strings in a list to another list of strings: pairwise string comparison vs check existence in set [duplicate]

This question already has answers here:
Can hash tables really be O(1)?
I'm comparing a list of strings words_to_lookup to a list of strings word_list and for every match I'll do something.
I wonder whether direct string comparison or checking existence in a set will be more time efficient.
String comparison is O(n)
for w in word_list:
    for s in words_to_lookup:
        if s == w:
            pass  # do something
Checking existence in a set is O(1) but getting the hash of the string (probably) takes O(n).
w_set = set(word_list)
for s in words_to_lookup:
    if s in w_set:
        pass  # do something
So is string comparison an efficient approach? If the set approach is faster, how?
EDIT:
I was not thinking clearly when I first posted the question; thank you for the comments. I find it hard to convert my real problem into a concise one suited for online discussion. Now that I've made the edit, the answer is obvious: the first approach is O(n^2) and the second approach is O(n).
My real question should have been this: Can hash tables really be O(1)?
And the answer is:
The hash function however does not have to be O(m) - it can be O(1). Unlike a cryptographic hash, a hash function for use in a dictionary does not have to look at every bit in the input in order to calculate the hash. Implementations are free to look at only a fixed number of bits.
If you only need to search once, your first approach is more efficient.
One advantage of constructing a set is the following: if you need to search against the same set many times, you only need to build the set once.
In other words, suppose you have N words in the dictionary (dictionary_list) and you have a list of M words that you want to look up (words_to_lookup). If you go with the set approach, the complexity is O(N+M). If you don't build a set, the complexity is O(N*M) because you may have to go over the whole dictionary of N words for each of the M words that you are looking up.
For this problem, the following code is the more efficient approach.
w_set = set(dictionary_list)
for w in words_to_lookup:
    if w in w_set:
        pass  # do something
EDIT
Ok. Now I see what you mean. In that case, the set version is definitely better. Note, you can also do:
for s in words_to_lookup:
    if s in word_list:
        pass  # do something
That's the same thing as your set way, but the running time of the "in" operator will be worse.
list - Average: O(n)
set/dict - Average: O(1), Worst: O(n)
So the set way is probably best.
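A rough micro-benchmark makes the gap concrete (a sketch; exact numbers depend on your machine and Python version):

import timeit

word_list = [f"word{i}" for i in range(10_000)]
word_set = set(word_list)
needle = "word9999"  # near the end of the list, close to its worst case

print(timeit.timeit(lambda: needle in word_list, number=1_000))  # O(n) scans
print(timeit.timeit(lambda: needle in word_set, number=1_000))   # O(1) average lookups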

Constructing an object that's greater than any string

In Python 3, I have a list of strings and would find it useful to be able to append a sentinel that would compare greater than all elements in the list.
Is there a straightforward way to construct such an object?
I can define a class, possibly subclassing str, but it feels like there ought to be a simpler way.
For this to be useful in simplifying my algorithm, I need to do this ahead of time, before I know what the strings contained in the list are going to be (and so it can't be a function of those strings).
This is kind of a naïve answer, but when you're dealing with numbers and need a sentinel value for comparison purposes, it's not uncommon to use the largest (or smallest) number that a specific number type can hold.
Python strings are compared lexicographically, so to create a "max string", you'd simply need to create a long string of the "max char":
# 1114111 (0x10FFFF) is the highest code point that chr accepts
MAX_CHAR = chr(1114111)
# One million is entirely arbitrary here.
# Ideally it would be 1 + the length of the longest string you'll compare against.
MAX_STRING = MAX_CHAR * int(1e6)
Unless there are weird corner cases that I'm not aware of, MAX_STRING should now compare greater than any other string (other than itself), provided that it's long enough.
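A quick check of the intended behaviour (the sample words are arbitrary):

words = ["zebra", "apple", "ångström"]
assert all(MAX_STRING > w for w in words)
words.append(MAX_STRING)
assert max(words) == MAX_STRING  # the sentinel sorts last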

What is the time complexity of the below code in python?

Below is code to remove recurring characters from a string in Python. I would like to know the time complexity of this code, more specifically of the line if string_1[i] not in char_found: (searching in a list).
Also, if possible, can this be explained in terms of the space allocated by a list?
def remove_recorring_char(string_1):
    result = ""
    char_found = []
    for i in range(0, len(string_1)):
        if string_1[i] not in char_found:
            char_found.append(string_1[i])
            result = result + string_1[i]
    return result

if __name__ == "__main__":
    print(remove_recorring_char("aabbbcc"))
    print(remove_recorring_char("chdsgdsgggsggsjddaaxcvcj"))
if string_1[i] not in char_found:
This line does two things:
First, it accesses string_1[i]. That takes constant time, because strings are basically just arrays of characters.
Then it searches in a list char_found, comparing that character string_1[i] to each element until one matches. That takes (worst-case) linear time in the length of char_found. And, since char_found could (worst-case) be all of the characters in string_1[:i], that's linear in the length of string_1.
So, this line is O(N).
And of course this line is inside an outer loop that's even more obviously O(N): for i in range(0,len(string_1)):. So, that combination of the two is O(N**2).
Even if you fix that membership test to be constant time, you also do result = result + string_1[i] inside the loop. String concatenation is worst-case linear in the length of the string. Recent versions of CPython and PyPy have some optimizations, so it's sometimes amortized constant time, like appending to a list, but Python the language doesn't guarantee those optimizations. And result is, worst-case, also as long as string_1. So the whole thing is still O(N**2), unless your interpreter is extra nice.
You could reduce the whole thing to O(N) by making two small changes.
First, use a set rather than a list for char_found. Searching a set, and adding to it, are both amortized constant-time operations.
Second, use a list rather than a str for result, then do result = ''.join(result) at the end. Appending to a list is amortized constant-time. Converting a list back to a string is of course linear time, but you're not doing it inside your loops, so that's fine.
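Putting both changes together, a sketch of the O(N) version (keeping the original function name):

def remove_recorring_char(string_1):
    char_found = set()  # O(1) average-case membership tests
    result = []         # append characters, join once at the end
    for ch in string_1:
        if ch not in char_found:
            char_found.add(ch)
            result.append(ch)
    return ''.join(result)

print(remove_recorring_char("aabbbcc"))  # abc

On Python 3.7+, where dicts preserve insertion order, ''.join(dict.fromkeys(string_1)) does the same thing in one line.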

Python slicing with wrapping from negative to positive numbers

Say I have a string "abcde", and want to get "deab". How can I get "deab" using string slicing?
I've tried using string[-2:1], but this gives me an empty result ''.
The project I am working on makes splitting this up into [-2:] and [:2] difficult, hence the question. Thanks!
You can simulate wrapping by doubling the string:
(string * 2)[3:7]
This is not necessarily much less efficient than taking two slices the usual way: it creates only one temporary string (the doubled one) instead of two, but it obviously requires quite a bit more space.
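If you need this in more than one place, the doubling trick generalizes to a small helper (wrap_slice is an illustrative name, not a built-in, and it assumes length <= len(s)):

def wrap_slice(s, start, length):
    # Normalize start into [0, len(s)), then slice the doubled string.
    start %= len(s)
    return (s * 2)[start:start + length]

print(wrap_slice("abcde", -2, 4))  # deab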
You may want this:
s[-2:] + s[:2]
