Does Python automatically terminate or raise an error for an infinite "for" loop?
For example, would
list = [1]
for number in list:
    print(number + 1)
    list.append(number + 1)
cause the program to self-terminate or error?
I have tested it a bit, going as far as around 10,000.
No, the program will run forever. It may crash because your call stack is too big (if you use recursion somewhere) or because you used too much memory, though. Your example will eventually crash.
while True:
    pass
will run "forever" with no issues, however.
That program will end, but it will be a very long time until it does. Eventually, you'll run the system out of memory for your list and the program will crash with an error.
It won't unless you specify a range or a limit. The way you currently have it, it'll keep going until you get a memory error.
Also, don't use list as a variable name, since it shadows the built-in list type in Python.
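For example, a bounded sketch of the same loop (assuming you only want values up to 10,000, and renamed so the built-in isn't shadowed):

numbers = [1]
for number in numbers:
    print(number + 1)
    if number + 1 <= 10000:  # assumed cutoff; stop growing the list here
        numbers.append(number + 1)

This terminates because the loop stops appending once the cutoff is reached, so the iteration eventually exhausts the list.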
I'm running a large Python3.7 script using PyCharm and interfaced by Django that parses txt files line by line and processes the text. It gets stuck at a certain point on one particularly large file and I can't for the life of me figure out why. Once it gets stuck, the memory that PyCharm uses according to Task Manager runs up to 100% of available over the course of 5-10 seconds and I have to manually stop the execution (memory usage is low when it runs on other files and before the execution stops on the large file).
I've narrowed the issue down to the following loop:
i = 0
for line in line_list:
    label_tmp = self.get_label(line)  # note: self because this is all contained in a class
    if label_tmp in target_list:
        index_dict[i] = line
    i += 1
    print(i)  # this is only here for diagnostic purposes for this issue
This works perfectly for a handful of files that I've tested it on, but on the problem file it will stop on the 2494th iteration (i.e. when i=2494). It does this even when I delete the 2494th line of the file, or when I delete the first 10 lines of the file - so this rules out a bug on any particular line of the file - it will stop running regardless of what is in the 2494th line.
I built self.get_label() to produce a log file since it is a large function. After playing around, I've begun to suspect that it will stop running after a certain number of actions no matter what. For example I added the following dummy lines to the beginning of self.get_label():
log.write('Check1\n')
log.write('Check2\n')
log.write('Check3\n')
log.write('Check4\n')
On the 2494th iteration, the last entry in the log file is "Check2". If I make some tweaks to the function it will stop at "Check4"; if I make other tweaks it will stop at iteration 2493, but at "Check1", or even make it all the way to the end of the function.
I thought the problem might have something to do with memory from the log file, but even when I comment out the log lines the code still stops on the 2494th line (once again, irrespective of the text that's actually contained in that line) or the 2493rd line, depending on the changes that I make.
No matter what I do, execution stops, then memory used according to Task Manager runs up to 100%. It's important to note that the memory DOES NOT increase substantially until AFTER the execution gets stuck.
Does anyone have any ideas what might be causing this? I don't see anything wrong with the code and the fact that it stops executing after a certain number of actions indicates that I'm hitting some sort of fundamental limit that I'm not aware of.
Can you try using sys.getsizeof? Something must be happening to that dict that increases memory like crazy. Something else to try is running it from your regular terminal/cmd instead of PyCharm. Otherwise, I'd want to see a little bit more of the code.
Also, instead of using i += 1, you can enumerate your for loop.
for i, line in enumerate(line_list):
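For instance, the whole loop could look like this (just a sketch reusing the names from your snippet, with a sys.getsizeof diagnostic added):

import sys

for i, line in enumerate(line_list):
    label_tmp = self.get_label(line)
    if label_tmp in target_list:
        index_dict[i] = line
    print(i, sys.getsizeof(index_dict))  # diagnostic: watch the dict's (shallow) size as it grows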
Hopefully some of that helps.
(Sorry, not enough rep to comment)
Just wanted to provide the solution months after asking. As most experienced coders probably know, the write() function only adds the output to a buffer, so if an infinite loop occurs before the buffer can clear (it only clears once every few lines, depending on the size of the buffer), any lines still in the buffer won't be written to the file. This made it appear to be a different type of issue: I thought the problem was ~20-30 lines before the actual flawed line, and the buffer cleared on different lines depending on how I changed the code, which explains why the log file ended at different points when unrelated changes were made. When I replaced "write" with "print" I was able to identify the exact line in the code that caused the loop.
To avoid a situation like this, I recommend making a custom "write_to_file" function that includes a "flush" so that every single line is written to the log file immediately. I also added other kinds of protection to that custom "write_to_file" function, such as not writing if the file exceeds a certain size.
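A minimal sketch of such a helper (the name write_to_file and the 10 MB cap are just illustrative):

def write_to_file(log, text, max_bytes=10000000):
    # protection: skip writing once the log grows past the size cap
    if log.tell() > max_bytes:
        return
    log.write(text + '\n')
    log.flush()  # flush the buffer immediately so no lines are lost if the program hangs

Calling write_to_file(log, 'Check1') instead of log.write('Check1\n') means the log reflects exactly how far execution got.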
I've used multi-threading a lot, while appending to the same list from different threads. Everything worked fine.
However, I'm getting a problem with list appending when there are around 70 threads or more. Appending in the last thread gets stuck for about 5 minutes (the processor is not occupied at this time, maybe 10%, so I would say it's not a hardware problem). Then the append completes successfully.
At this link, it says that list appending is thread-safe.
My question is: Can list appending ever become not thread safe?
Don't ask for code or the like. I just need a simple yes or no to my question. And if yes, kindly provide suggestions for fixing it.
List appending in Python is thread-safe.
For more details: what kinds of global value mutation are thread safe
Your last thread probably gets stuck for some other reason, for example memory allocation.
My first step in debugging the hang would be to use strace to trace the syscalls.
You can also use gdb to print all threads' call stacks. Here is a wiki page: https://wiki.python.org/moin/DebuggingWithGdb
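As a quick sanity check, here is a minimal sketch showing that concurrent appends don't lose items (the thread and item counts are arbitrary):

import threading

shared = []

def worker(n):
    for i in range(1000):
        shared.append((n, i))  # list.append is atomic under the GIL, so no lock is needed

threads = [threading.Thread(target=worker, args=(n,)) for n in range(70)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared))  # always 70000: no appends are lost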
I have a Python program with a Python-wrapped C++ core. It's parallelized, as it is computationally very expensive, and I'm currently running it remotely on a server on Ubuntu 16.04.
The problem I'm experiencing is that at a certain number of cycles (say 2000) in my test case, it freezes abruptly without giving error messages or anything. I tracked down the part of the code where it stops, and it is a Python function which doesn't contain any for loop (so I assume it's not stuck in a loop). I tried simply commenting that function out of the code, as it does only minor calculations, and now, at the exact same number of cycles, it gets stuck a little further ahead, this time inside the C++ part. I'm starting to suspect some memory problem related to the server.
Running htop from the terminal when the code is stuck, I can see that the cores involved in the computation are fully loaded, as if they were busy with some unknown calculation. Moreover, the memory used by the process (at least once it is already stuck) is not exhausted, so it may not be a RAM problem either.
I also tried drastically reducing the amount of output written at every cycle (which, I admit, was considerable in size), but nothing changed. With the optimal number of processors it takes about 20 minutes to reach the critical point of 2000 cycles, so the problem is not easily reproducible.
This is the first time I'm experiencing this sort of problem. Is there anything else I can do to pinpoint the issue?
Thanks in advance for any answer.
Here is something you could try.
Write code which checks which iteration is taking place and stores all the variables at the start of the 2000th iteration.
Then use that saved variable set to run the iteration again.
It won't solve your problem, but it will greatly reduce the testing time, and consequently the time it takes you to find the problem.
If it is definitely a memory issue, the restarted code will not get stuck at cycle 2000 (that's where you start) but at around 4000, once another 2000 cycles' worth of memory has accumulated.
Then you can monitor the memory at the 2000th iteration and replicate the conditions from there.
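A minimal sketch of that idea using pickle (step() and state are hypothetical stand-ins for your simulation's update function and variables):

import pickle

CHECKPOINT = 2000  # the cycle where the freeze starts

for cycle in range(n_cycles):
    if cycle == CHECKPOINT:
        with open('checkpoint.pkl', 'wb') as f:
            pickle.dump(state, f)  # save everything needed to restart from here
    state = step(state)  # hypothetical: run one cycle of the computation

# later, to reproduce quickly, reload and continue from cycle 2000:
# with open('checkpoint.pkl', 'rb') as f:
#     state = pickle.load(f)

Note that pickle can't serialize objects living inside the C++ core; for those you'd need whatever save/load mechanism the wrapper exposes.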
I'm just learning Python and I have tried this simple loop, based on Learn Python The Hard Way. With my basic understanding, this should keep printing "Hello", one letter at a time, at the same position. This seems to be the case, but the printing is not fluid: it doesn't spend the same amount of time on each character; some go very fast, and then it seems to get stuck for one or two seconds on one.
Can you explain why?
while True:
    for i in ["H","e","l","l","o"]:
        print "%s\r" % i,
You are running an infinite loop with very little work done in it, most of that work being printing. The bottleneck of such an application is how fast your output can be rendered by your running environment (your console).
There are various buffers involved, and the system can also schedule other processes and therefore pause your app for a few cycles.
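You can see this by flushing explicitly and adding a fixed delay (a sketch; works under both Python 2 and 3):

import sys
import time

while True:
    for ch in ["H", "e", "l", "l", "o"]:
        sys.stdout.write("%s\r" % ch)
        sys.stdout.flush()  # force the character out now instead of waiting for the buffer
        time.sleep(0.2)     # fixed delay, so each character stays visible equally long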
I have code in Python that makes the Python interpreter crash randomly. I have tried to isolate the source of the problem, but I am still investigating. While searching the net for problems that could make the interpreter crash, I stumbled upon this:
import ctypes

def crash():
    '''\
    crash the Python interpreter...
    '''
    i = ctypes.c_char('a')
    j = ctypes.pointer(i)
    c = 0
    while True:
        j[c] = 'a'
        c += 1
    j
http://wiki.python.org/moin/CrashingPython
Since I am using ctypes, I think the problem could be related to the way ctypes is used. So I am trying to understand why that code can make the Python interpreter crash; understanding it would help me investigate the problem in my own ctypes code.
Can anybody explain this?
Help would be appreciated.
It makes a pointer to memory that's likely to be unwritable, and writes to it.
The numerical value of 'a' is very small, and very low memory addresses are typically not writable, causing a crash when you try to write to them.
Should the initial write succeed, it keeps trying successive addresses until it finds one that isn't writable. Not all memory addresses are writable, so it's bound to crash eventually.
(Why it doesn't simply start at address zero I don't know - that's a bit odd. Perhaps ctypes explicitly protects against that?)
The problem seems to be that you're writing to successive memory locations indefinitely, so at some point the address accessed will be unwritable and the program will crash.
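For contrast, the bounds-respecting way to get writable memory from ctypes (a sketch in Python 3 syntax):

import ctypes

buf = ctypes.create_string_buffer(10)  # allocate 10 writable bytes
for c in range(len(buf) - 1):          # stay inside the allocation; the last byte stays NUL
    buf[c] = b'a'
print(buf.value)                       # b'aaaaaaaaa'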