I want to create a bash script that skips by 1000 on each loop iteration, up to 2M. I'm stuck here:
for i in {1..2000000} ; do
done;
For example:
first loop: offset=0
second loop: offset=1000
third loop: offset=2000
...and so on until 2M.
I tried a few ways with no success. A Python solution would be welcome too.
How can I do that?
Use a while loop:
i=0
while [ $i -lt 2000000 ]
do
echo offset=$i
i=$(($i+1000))
done
What you want is the C-style for loop:
for ((i = 0; i <= 2000000; i += 1000)); do
    echo "offset=$i"
done
bash does support a brace expansion operator that lets you generate sequences with strides greater than one (support appears to have been added in 4.0, although there is no mention in the release notes):
for i in {0..2000000..1000}
However, the C-style loop is preferable because it generates the values of i lazily, rather than creating the entire sequence in memory before starting the iteration. Unless you are generating absolutely enormous sequences, this will not usually be an issue, but you might notice a short delay while the sequence is generated.
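For completeness, here is the brace-expansion spelling as a full runnable loop (bash 4.0 or later, as noted above; remember the whole sequence is expanded in memory before the first iteration runs):

```shell
#!/usr/bin/env bash
# Brace expansion with a step: bash expands all 2001 values (0, 1000, ..., 2000000)
# up front, then iterates over them.
for i in {0..2000000..1000}; do
    echo "offset=$i"
done
```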
name = str(input("enter the string:"))
count = 0
for x in name:
    if x.isupper():
        count = count + 1
print("The number of capital letters found in the string is:", count)
How can I rewrite this code without a for loop so that it does the same thing?
Since this seems like a homework problem, it's probably not appropriate to just post an answer. To give you some hints:
you could re-write the for loop as a while loop that uses a counter
you could re-write the for loop as a while loop that pops characters off of name one-at-a-time, and terminates when name is empty
you could use a list comprehension with a filter to get just the upper-case characters, and report the length of the resulting list
you could write a recursive function
you could use filter the same way you would use a list comprehension
you could use sum, as suggested in comments above
you could use functools.reduce (or just reduce if you're using a geriatric python interpreter)
if you're feeling really perverse, you could use regular expressions
Along with probably a dozen other ways that I'm not thinking of now...
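For illustration only (the input value is hypothetical, in place of the original input() call), the filter-based hint might look like:

```python
name = "Hello World"  # hypothetical example input instead of input()

# filter() applies str.isupper to each character and keeps the ones that
# return True; len() of the resulting list is the count -- no for statement.
count = len(list(filter(str.isupper, name)))
print("The number of capital letters found in the string is:", count)
```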
I have a list to iterate through; the first iteration took 3 minutes 40 seconds, and the result was a bunch of generated images being saved to the hard disk. Would it make sense to split the list into 2 or 3 parts and apply multithreading in this case?
You can't write to a hard disk in parallel, so using Threading/Multiprocessing wouldn't show any time improvements and would most likely add overhead.
If it's Python that's slowing you down and not your disk write speed, then it might be worth looking into the built-in map function, if you're using Python 3.
https://docs.python.org/3/library/functions.html#map
Otherwise you'd need to look at using a faster language like C:
https://docs.python.org/2/c-api/index.html
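As a rough sketch of the map suggestion above (generate_image is a hypothetical stand-in for the real image-generation function):

```python
# Hypothetical stand-in for the actual image-generation step.
def generate_image(item):
    return f"image-{item}.png"

items = ["a", "b", "c"]

# map() applies the function to every item lazily; list() forces the iteration.
results = list(map(generate_image, items))
print(results)
```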
I have a loop like this:
for i in range(0, 500):
But the later iterations take more time. I want to split my loop into, for instance, 5 steps: in the first step I want to run the first 100, and in the last, 401 to 500. But I don't want to write this loop five times.
Is there a short way to do this kind of progressive run?
Just create a loop inside a loop:
for s in range(0, 500, 100):
    for i in range(s, s + 100):
        ...
Since in Python indices start at 0 and range excludes its last number, this covers 0-99, 100-199, ..., 400-499.
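Put together as a runnable sketch (pass stands in for the real per-index work):

```python
# Outer loop walks the chunk starts: 0, 100, 200, 300, 400.
for s in range(0, 500, 100):
    # Inner loop covers the 100 indices of this chunk.
    for i in range(s, s + 100):
        pass  # real work on index i goes here
    print(f"finished chunk {s}-{s + 99}")
```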
If time is what you are trying to trim, use xrange(); it is MUCH faster, especially when dealing with large numbers, because it doesn't build the whole list in memory first:
for i in xrange(500):
Edit: This is for Python 2.x, not 3.x!
I have a big file with some format. I want to substitute some items in the string with others; the string is long (like a big XML document with no formatting).
I know where they are; I could locate each of them with a regular expression, but I wonder which method is best: the easiest, and better still the most efficient one.
format/% already searches the string for parameter placeholders internally. Since those operators are implemented in C, you're not going to beat their performance with Python code, even if your search-and-replace workload is somewhat simpler. See "Faster alternatives to numpy.argmax/argmin which is slow" for a glance at C-versus-Python relative performance.
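A sketch of the %-placeholder approach described above (the template and values are made up for illustration):

```python
# Template with named %-placeholders in place of the items to substitute.
template = "<user><name>%(name)s</name><age>%(age)d</age></user>"

# A single % operation substitutes every placeholder in one pass over the string.
result = template % {"name": "Alice", "age": 30}
print(result)  # <user><name>Alice</name><age>30</age></user>
```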
Does syntax choice change Big O? Or perhaps the speed at which a program runs? I am going to use Python as an example.
If I had the list comprehension code:
new_list = [expression(i) for i in old_list if filter(i)]
Would it run any differently than:
new_list = []
for i in old_list:
    if filter(i):
        new_list.append(expression(i))
Do these pieces of code have anything different in them? Would one be considered faster than the other? Why or why not?
Big-O says nothing about syntax choice in a programming language. It is only useful as a tool to compare algorithms.
Syntax choices can change the fixed cost of each iteration. Your specific sample has different fixed execution cost per iteration and so the speed of execution will differ.
In Python you could use the timeit module to compare execution speed of two ways to implement the same algorithm, and you could use the dis module to analyse what bytecode will be executed for each alternative 'spelling', informing you how much work the Python interpreter will do for each iteration.
For the specific example, the list comprehension will be faster because it does less work in bytecode; the extra lookup and invocation of the .append() method on each iteration in the second example are what slow it down.
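A minimal timeit comparison of the two spellings (absolute numbers will vary by machine; str and i % 2 stand in for expression and filter):

```python
import timeit

setup = "old_list = list(range(1000))"

# Time the list comprehension spelling.
comp = timeit.timeit(
    "[str(i) for i in old_list if i % 2]", setup=setup, number=1000
)

# Time the explicit-loop spelling with repeated .append() lookups.
loop = timeit.timeit(
    "new_list = []\n"
    "for i in old_list:\n"
    "    if i % 2:\n"
    "        new_list.append(str(i))",
    setup=setup,
    number=1000,
)

print(f"comprehension: {comp:.3f}s  explicit loop: {loop:.3f}s")
```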