Python starts extremely slowly the first time after I reboot Windows

I apologize for not having a reproducible example. My problem is with a large base of proprietary code and I don't have an extract that shows the same behavior. Even better, it isn't my software and I know about 2% of how it works.
Simply put, the Python program I'm dealing with takes about 80 seconds to complete its entire setup and reach the point where all its Flask code is running and the web server it creates is up and able to respond to requests. BUT -- that's only the first time I run it on Windows after rebooting. On subsequent runs of the script, it takes more like 10 seconds.
And the nutty part is, in a workgroup of 10 people, mine is the only computer that has the problem.
Things I can say:
1. Python 2.7.11, Windows 7, Git Bash version 2.9.0.windows.1.
2. It doesn't appear to matter whether I invoke the Python program from the Git Bash command line or the Windows command line.
3. However, in Git Bash, typing "python" hangs with no response until I hit Ctrl-C, whereas "winpty python" opens an interactive Python session as it should. I mention this because for a while I thought my main problem was related to the Git Bash shell bug (https://stackoverflow.com/a/32599341/5593532), but point 2 above would seem to contradict this. No such weirdness occurs when invoking a bare interactive Python session from the Windows command line.
I've had trouble getting meaningful profiling output, partly because of multi-threading or child processes or something. And the web server doesn't have an exit event per se, so I can only stop it by hitting Ctrl-C in the command-line window where I ran the script, which seems to kill the part of the process that would save the profiling data.
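One hedged workaround for the Ctrl-C problem, assuming Ctrl-C reaches the main thread as a KeyboardInterrupt rather than hard-killing the process: register an atexit hook near the top of the script so the profile data gets written no matter how the server stops. The filename startup.prof is arbitrary, and this only captures the process it runs in, not any children:

import atexit
import cProfile

profiler = cProfile.Profile()
profiler.enable()

def _dump_profile():
    # Runs at interpreter exit, including after a KeyboardInterrupt
    # has propagated out of the main thread.
    profiler.disable()
    profiler.dump_stats("startup.prof")

atexit.register(_dump_profile)

# ... the rest of the startup / Flask code runs below this point ...

The saved file can then be inspected offline with pstats.Stats("startup.prof").sort_stats("cumulative").print_stats(20).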
From the fragmentary profiling info I was able to produce (with gratitude to https://ymichael.com/2014/03/08/profiling-python-with-cprofile.html), I suspect something weird is happening while importing a large number of packages, perhaps especially alembic and/or werkzeug (and maybe even sqlalchemy). The profiling output didn't show much tottime in those packages, but it did show rather a lot of cumtime there.
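Since the suspicion centers on imports, here is a minimal sketch to time them directly (Python 2.7 predates the -X importtime flag, so measure by hand; run this before anything else has imported these packages):

import time

for name in ("alembic", "werkzeug", "sqlalchemy"):
    t0 = time.time()
    __import__(name)  # same effect as "import <name>" for timing purposes
    print("%-12s %.2fs" % (name, time.time() - t0))

A big number on the first post-reboot run that shrinks on later runs would confirm the imports are where the time goes.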
My sys.path inside Python doesn't seem meaningfully different from anyone else's nearby. I might have one or two different items in the list, or three .egg files on the path where they have only one, but it's mostly the same list in the same order. So much for the idea that Python is spending a long time hunting for package locations and then reusing that information on later runs.
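To rule that out mechanically rather than by eyeballing, each machine can dump its path one entry per line and the outputs can be diffed:

import sys
print("\n".join(sys.path))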
I've got PyCharm Community Edition able to run the script and its associated junk in IDE mode, with breakpoints and all that jazz, so I can follow execution to a degree, in case that would help answer a noteworthy question you might raise for me.
Anyone got a wild notion what's up? (he asked quite unreasonably)

Related

PyCharm Debugger Stuck on "Collecting Data"

So I installed the free version of PyCharm Professional last week and have been encountering a problem where, while debugging code on a remote server, when I try to display variables it simply says "collecting data", and if I then try to continue debugging, PyCharm breaks.
I have been researching solutions: I have "Gevent compatible" enabled, and I have tried all three variable-loading policies: Synchronously, Asynchronously, and On Demand.
I should also note that I am running into a problem where the debugger skips all my breakpoints, and I have to restart my server connection to get the breakpoints to hit (and sometimes it takes a couple of tries).
I know that it is entirely possible to see the variables that are "collecting data": a co-worker who recommended PyCharm has no such problem, and there was one run where I was able to see the variables, but when I re-ran the commands (with absolutely no changes), I was back at square one.
I've been going through the PyCharm forums, and it seems this has been a recurring issue for a handful of years now. But knowing that it worked once for me, and that it works for my coworker, am I simply missing something?
Just recently my PyCharm has started behaving this way as well. I researched and tried the same solutions you did, to no avail. On certain projects it simply hangs forever on "collecting data" - projects that used to work, where the code hasn't changed. Please let me know if you find anything else; I will keep researching and testing as well.
EDIT: FWIW, in my particular case I isolated the cause (at least I think so). I had a very large DataFrame in memory, and if this DF is not in memory, the debugger does not hang. None of my watches were explicitly on this DF, but I guess the debugger needed to inspect it on break and the object was just too big somehow. Note that it hung even with the variables loading policy set to On Demand, so the debugger must still automatically investigate all variables somehow.
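If anyone else hits this, here is a sketch of the shape of the workaround that follows from that observation; the frame size and names are made up, and the point is simply to derive what you need from the big frame and drop the reference before the region you want to debug:

import numpy as np
import pandas as pd

def build_frame():
    # Stand-in for the real data load; roughly 160 MB of floats.
    return pd.DataFrame(np.random.rand(10 ** 6, 20))

df = build_frame()
summary = df.describe()   # keep only the small derived result
del df                    # nothing huge left for the debugger to enumerate

# Set breakpoints below this point; only "summary" is in scope.
print(summary)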

External executable crashes when being launched from Python script

I am currently getting an issue with an external executable crashing when it is launched from a Python script. So far I have tried various subprocess calls, as well as more rudimentary methods such as os.system and os.startfile.
Now, the exe doesn't have this issue when I call it normally from the command line or by double-clicking it in an Explorer window. I've looked around to see if other people have had a similar problem. As far as I can tell, the closest likely cause is that the child process hangs because its I/O exceeds the ~65 KB pipe buffer. So I've tried using Popen without PIPEs, and I have also pointed stdout and stdin at temporary files to try to alleviate the problem. Unfortunately none of this has worked.
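For reference, the temp-file redirection described above looks something like this (the executable name and arguments are placeholders); writing the child's output to real files sidesteps the classic deadlock where a full (~64 KB) PIPE buffer blocks the child:

import subprocess

with open("tool_stdout.log", "wb") as out, open("tool_stderr.log", "wb") as err:
    proc = subprocess.Popen(
        ["tool.exe", "--config", "job.xml"],  # placeholder command line
        stdout=out,
        stderr=err,
    )
    returncode = proc.wait()  # block until the exe finishes

print("exit code: %d" % returncode)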
What I eventually want to do is autorun this executable several times, with the variations between runs supplied by XMLs. Everything else is pretty much in place, including the XML modifications the executable requires. I have also tested the XML-modification portion of the code as a standalone script, to make sure that isn't the issue.
Due to the nature of the script, I am a bit reluctant to put any actual code up on the net, as the company I work for is rather strict about sharing code. I would ask my colleagues, but unfortunately I'm the only one here who has actually used Python.
Any help would be much appreciated.
Thanks.
As I've not had any response, I've gone down a different route with this. Rather than relying on the subprocess module to call the exe, I have moved that logic out into a batch file. The XMLs are still modified by the Python script, and most of the logic is still handled in the script. It's not what I would ideally have liked, but it will have to do.
Thanks to anybody who gave this some thought and at least tried to look for an alternative, even though nobody answered.

Kernel crashes when increasing iterations

I am running a Python script using Spyder 2.3.9. It's a fairly large script, and when I run it through 300x600 iterations (a loop nested inside another loop), everything appears to work fine and takes approximately 40 minutes. But when I increase that to 500x600 iterations, after 2 hours the output yields:
It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console.
I've been trying to go through the code but don't see anything that might be causing this in particular. I am using Python 2.7.12 64-bit, Qt 4.8.7, PyQt4 (API v2) 4.11.4 (Anaconda2-4.0.0-MacOSX-x86_64).
I'm not entirely sure what additional information is pertinent, but if you have any suggestions or questions, I'd be happy to read them.
https://github.com/spyder-ide/spyder/issues/3114
It seems this issue has already been opened on their GitHub; judging by the repo's track record, it should be addressed soon.
Some possible solutions:
1. If possible, modify your script for faster convergence; very often, for most practical purposes, the incremental value of iterations beyond a certain point is negligible (see the sketch after this list).
2. An upgrade or downgrade of the Spyder environment may help.
3. Check your local firewall for blocked connections to 127.0.0.1 from pythonw.exe.
4. If nothing works, try using Spyder on Ubuntu.
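On the first point, here is a minimal early-stopping sketch; the inner computation is a stand-in for whatever your loops actually do, and the names and tolerance are made up for illustration. Instead of always running the full 500x600 grid, it stops once an extra outer iteration no longer changes the result meaningfully:

def run(max_outer=500, max_inner=600, tol=1e-4):
    prev = None
    total = 0.0
    for i in range(max_outer):
        total = 0.0
        for j in range(max_inner):
            total += 1.0 / ((i + 1) * (j + 1))   # stand-in for the real work
        # Stop early once another outer pass changes almost nothing.
        if prev is not None and abs(total - prev) < tol:
            print("converged after %d outer iterations" % (i + 1))
            return total
        prev = total
    return total

print(run())

Cutting iterations this way also trims any per-iteration state you accumulate, which matters if the kernel is dying from memory exhaustion.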

Why doesn't GASP work when I run it from IDLE?

I made this script:
from gasp import *

begin_graphics()                 # open the drawing window
Circle((200, 200), 60)           # circle centered at (200, 200), radius 60
Line((100, 400), (580, 200))     # line from (100, 400) to (580, 200)
Box((400, 350), 120, 100)        # 120x100 box anchored at (400, 350)
update_when('key_pressed')       # block until a key is pressed
end_graphics()                   # close the window
When I start it from a terminal, it works perfectly. When I run it from IDLE, it doesn't work; I get no answer (the shell prompt (>>>) disappears but nothing happens).
In general, you can't run GUI apps in the embedded Python interpreter in IDLE, unless the library you're using is designed to integrate with IDLE. Or, worse, it may work on one machine and not on another. I'll explain why below, but first just take that on faith.
As far as I can tell, gasp's documentation doesn't address this, but similar libraries either warn you that they may not work in IDLE (easygui, early versions of graphics, etc.) or come with special instructions for how to use them in IDLE (e.g., later versions of graphics).
Now, maybe gasp should be designed to integrate with IDLE, given that it's specifically designed for novices, and many of those novices will be using the IDE that comes built in with Python. Or maybe not. But, even if that should be true, that's something for gasp to deal with. File a bug or feature request, but you'll need some way to keep working until someone gets around to writing the code.
The simplest solution here is to use a different IDE, one that runs its interactive Python interpreter in a completely separate process, exactly the same as you get when running it yourself in the terminal. There are lots of good options out there that are at least free-as-in-beer for non-commercial use (PyCharm, Komodo, Eclipse PyDev, emacs with your favorite collection of packages, etc.). Although Stack Overflow is not a good place for advice on picking the best one for you (if googling isn't sufficient, try asking on a mailing list or forum), almost any of them will work.
Another option: instead of using an interpreter built into an IDE, you might want to consider running an enhanced interpreter environment (like ipython-gtk, or emacs with a smaller set of packages) alongside your IDE. Of course they'll no longer be tightly integrated (the "I" in "IDE"), but in my experience, even working in an environment where the whole team uses PyCharm or PyDev, I still end up doing most of my interactive testing in ipython; you may find you prefer that as well. Or you may not, but give it a try and see.
So, why is there a problem in the first place?
First, if you don't understand what an "event loop" or "runloop" or "mainloop" is, please read either Why your GUI app freezes or the Wikipedia page or some other introduction to the idea.
Normally, when you run the interactive Python interpreter (e.g., by typing python at the bash or C: prompt in your terminal), it runs in its own process. So, it can start up a runloop and never return (until you quit), and the terminal won't get in your way.
But when you run the interactive Python interpreter inside IDLE, it's actually running in the same process as IDLE, which has its own runloop. If you start up a runloop and never return, IDLE's runloop doesn't get to run. That means it doesn't get to respond to events from the OS, like "refresh your window" or "prepare for a new window to open", so from the user's (your) point of view, IDLE and your app are both just frozen.
One way to get around this in your code is to spawn another thread for your runloop, instead of taking over the main thread. (This doesn't work with all GUI libraries, but it works with some. This is how graphics solved the problem.) Another way is to spawn a whole new child process to run your GUI. (This works with all GUI libraries, but it's a lot more work: now you have to deal with inter-process communication. One of the novice-friendly matplotlib wrappers does this.) Finally, you can integrate your runloop with IDLE's Tkinter runloop by having one drive the other. (Not all GUI libraries can be driven this way, but Tkinter's can, and IDLE can be monkeypatched to work this way; graphics used to do this.) But none of these is even remotely simple. And they're probably things that gasp itself should be doing, not your code.
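To make the second approach concrete, here is a minimal sketch you could run from inside IDLE; it assumes the gasp script above has been saved as draw_shapes.py in the current directory (the filename is made up for illustration):

import subprocess
import sys

# Launch the drawing script in its own interpreter process, so it gets
# its own mainloop and leaves IDLE's runloop alone.
subprocess.Popen([sys.executable, "draw_shapes.py"])

No inter-process communication happens here, which is exactly why it stays simple; the moment the IDLE side needs to talk to the GUI, you're into pipes or sockets, the hard part alluded to above.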

How to limit Sublime Text 3 build time and memory for Python code?

Good day.
I'm using Sublime Text 3 with its built-in Python build system and the REPL console.
And of course I make a lot of errors in my code; sometimes I get stuck in an infinite loop or with extremely heavy arguments.
I've looked for a way to limit the memory size and running time for builds in Sublime Text 3, but I couldn't find anything useful.
Any suggestion would be highly appreciated!
There is no way to specify this in .sublime-build files: all they do is call the Python interpreter installed on your system and have it run your source code. If you feel a build is taking too long, you can always go to the Tools menu and select Cancel Build. Otherwise, you will need to add error- and time-checking functionality to your code yourself. Python will fail with a MemoryError under some circumstances (read the link for more info), so you can always watch for that in your output.
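To make the time-checking suggestion concrete, here is a minimal sketch; TIME_BUDGET and the loop body are placeholders for your own code:

import time

TIME_BUDGET = 30  # seconds; pick whatever limit suits the build
start = time.time()

for i in range(10 ** 9):  # stand-in for a possibly runaway loop
    # ... real work goes here ...
    if time.time() - start > TIME_BUDGET:
        raise RuntimeError("aborting: exceeded %d-second budget" % TIME_BUDGET)

Since the build just runs the interpreter, the RuntimeError shows up in Sublime's output panel like any other traceback.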
