Earlier this year, I made a Python 3.7.0 script that modifies my Steam profile picture. It worked great, provided I copied and pasted the sessionid, steamLogin, and steamLoginSecure cookies from my browser.
Recently, I wanted to automate obtaining these cookies, so I added a simple function that uses ValvePython to log in and return them:
from steam.client import SteamClient

def GetCookies():
    # Log in interactively (prompts for credentials/2FA on the console)
    # and pull the web session cookies from the client.
    client = SteamClient()
    client.cli_login()
    webCookies = client.get_web_session_cookies()
    if webCookies is None:
        raise Exception("Unable to get web session. Try again later.")
    return {"steamLogin": webCookies.get("steamLogin"),
            "steamLoginSecure": webCookies.get("steamLoginSecure"),
            "sessionid": str(client.session_id),
            "steamid": str(client.steam_id)}
But now there's a problem: the script restarts when it hits one of the two urlopen calls later in the script. I didn't have this issue before adding this function; it only fails on urlopen calls that come after the function is called, and the specific line it fails on varies.
In IDLE, it just says "RESTART: Shell", with no errors that I can see or catch. From cmd, the window simply closes without printing anything first. When I run the script in an online debugger, it works without fail.
What could cause the script to restart without an error? Is something in GetCookies() too much for the interpreter to handle? How can I go about figuring out why the script is restarting?
Any help would be appreciated.
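A silent restart with no traceback usually means the interpreter itself crashed (a hard fault in a native extension, for instance; ValvePython runs on gevent/greenlet, which involves C-level stack switching) rather than a Python exception being raised. The standard faulthandler module can surface such crashes; a minimal sketch, with an illustrative log filename:

import faulthandler

# The cmd window closes before stderr can be read, so write any crash
# report to a file instead. The file object must stay open for the
# lifetime of the process.
crash_log = open("crash.log", "w")
faulthandler.enable(file=crash_log, all_threads=True)

If the script dies again and the cause is a hard fault, crash.log should contain the traceback of the thread that took the interpreter down.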
Related
I have a script that will be running 24/7 as a service, and I'm currently using the logging module to log to files.
The issue is that I only get logs when my script stops, which isn't what I'm looking for, because I need to check in real time whether the script is running correctly.
For example, let's say I have some simple code like this:
while True:
    info_logger.info('This is an info message')
    time.sleep(10)
The info_logger is set up earlier, using
handler = logging.FileHandler('filepath')
info_logger = logging.getLogger('info_logger')
info_logger.setLevel(logging.INFO)
info_logger.addHandler(handler)
The logger works perfectly fine once the script ends, but I want to collect logs while the script is running. Is there a way to do this using logging? Thanks in advance.
I believe that you need to change the line info_logger = logger.getLogger('info_logger') to info_logger = logging.getLogger('info_logger'). When I ran the code on my machine with that change, it successfully wrote to the file while the script was running.
Okay, I think it has to do with PyCharm not updating files in real time.
When I use 'Reload from Disk' on the file, it is up to date.
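For what it's worth, logging.FileHandler flushes after every record, so the file on disk does update in real time; it is only the IDE's view of the file that lags. A self-contained version of the setup above (the log path and the timestamp format are illustrative additions) that can be left running while the file is tailed outside the IDE:

import logging
import time

handler = logging.FileHandler('info.log')  # illustrative path
handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))

info_logger = logging.getLogger('info_logger')
info_logger.setLevel(logging.INFO)
info_logger.addHandler(handler)

while True:
    info_logger.info('This is an info message')  # flushed to disk immediately
    time.sleep(10)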
I have a simple app running on Windows Server 2012 using IIS. It is used to run an R script underneath (it's a very complicated script, and rewriting it in Python is not an option). This is the code:
try:
    output = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='utf-8')
except:
    return jsonify({'output': 'Error!', 'returncode': '0', 'error': 'Unknown Error'})
else:
    error = str(output.stderr)
    return jsonify({'output': str(output.stdout), 'returncode': str(output.returncode), 'error': error})
It is called via AJAX and runs fine most of the time, but sometimes it results in an "Internal Server Error".
Now the interesting part: the above error is not caught by the except clause, it is not logged in the Flask error.log, and the underlying R script does everything it's meant to do. So, in short, everything works as it should, yet it throws an Internal Server Error for no apparent reason. This is very annoying, as the user gets an error despite everything working fine.
I have already tried removing the try/except, and also using "except Exception as err", but neither logs any errors.
This is how error.log is set up. It works for other parts of the application without any issues.
import logging
from flask import Flask

app = Flask(__name__, static_url_path="", static_folder="static")
errHandler = logging.FileHandler('errors.log')
errHandler.setLevel(logging.ERROR)
app.logger.addHandler(errHandler)
Any ideas how I can catch this error so that I can try to debug it?
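One way to narrow this down is a catch-all error handler: if it never fires, the 500 is being generated outside Flask (for example, by the web server in front of it). A minimal sketch, assuming the app and errHandler setup above (the handler function name is illustrative):

from flask import jsonify

@app.errorhandler(Exception)
def handle_unhandled(err):
    # Logs through the errHandler configured above; a 500 that never
    # reaches this function never reached Flask at all.
    app.logger.exception("Unhandled exception: %s", err)
    return jsonify({'output': 'Error!', 'error': str(err)}), 500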
UPDATE
I've noticed that the Internal Server Error is returned after around 1.5 minutes. When I changed the R script to a simple 10-second wait, it worked flawlessly, so it seems to be a timeout issue.
I have set the timeout to 180 s on both subprocess and AJAX, but it didn't help. Is there any other place I should look?
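For reference, a subprocess-level timeout looks like the sketch below (the command string is illustrative, not the production code); note that it only limits the child process itself and cannot catch a timeout imposed by whatever sits in front of Flask:

import subprocess

cmd = "Rscript model.R"  # illustrative command

try:
    output = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, encoding='utf-8',
                            timeout=180)
except subprocess.TimeoutExpired:
    # Raised only if the child itself exceeds 180 s; a proxy or server
    # timeout in front of the app never reaches this handler.
    pass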
UPDATE 2
I've taken AJAX out of the equation and used a standard hyperlink to the page that runs the subprocess. It still gives an Internal Server Error after about 1.5 minutes. I've also changed the R script to wait 2 minutes, and the script itself finishes without any issues (30 s after I get the error).
I was looking in completely the wrong place. The issue was caused by the FastCGI Activity Timeout setting in IIS Manager: the timeout was only 70 s, while the subprocess took much longer to finish. Increasing the Activity Timeout resolved the issue.
Sorry if this has been asked before; I couldn't find anything related to this for Python and Selenium.
I'm running a Selenium script on a particular website, but after staying on the site too long, the site asks me to log in again. This event gives me a stale element reference error midway through the run, but all I have to do is enter a password, press Enter, and it brings me back to exactly where I was (i.e., where the script was running when it was prompted for login details).
My question is: is it possible to create an event listener that listens for this particular error THROUGHOUT the entire script and then runs a function? Obviously I know about error handling, but to my knowledge that doesn't work continuously... unless I wrap my entire script in a try...except block, which I'd rather not do.
Are there any options? Am I missing something obvious?
Here is what I've tried so far with the simplegui package, but I received a CannotSendRequest(self.__state) error upon running.
timer = simplegui.create_timer(2000, check)
timer.start()
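A common pattern (not from the original post) is to wrap each individual Selenium action in a small retry decorator that catches the stale-element error and runs a recovery routine, rather than one giant try...except around the whole script. A minimal sketch, where do_login is a hypothetical function that re-enters the password:

import functools
from selenium.common.exceptions import StaleElementReferenceException

def retry_on_stale(recover, attempts=2):
    # Wrap a Selenium action: on a stale-element error, run the
    # recovery routine and retry; re-raise on the final attempt.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except StaleElementReferenceException:
                    if attempt == attempts - 1:
                        raise
                    recover()
        return wrapper
    return decorator

# Usage (hypothetical):
# @retry_on_stale(recover=do_login)
# def click_next(driver): ...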
Original problem
I am creating an API using Express that queries a SQLite DB and outputs the result as a PDF using the html-pdf module.
The problem is that certain queries might take a long time to process, so I would like to decouple the actual query call from the Node server where Express is running; otherwise the API might slow down when several clients run heavy queries.
My idea was to decouple the execution of the SQLite query and run it in a Python script instead. This script can then be called from the API, avoiding the use of Node to query the DB.
Current problem
After quickly creating a Python script that runs a SQLite query and calling it from my API using child_process.spawn(), I found that Express seems to get an exit-code signal as soon as the Python script starts executing the query.
To confirm this, I created a simple Python script that just sleeps between printing two messages, and the problem was isolated.
To reproduce this behavior, you can create a Python script like this:
print("test 1")
sleep(1)
print("test 2)
Then call it from Express like this:

router.get('/async', function(req, res, next) {
    var python = child_process.spawn('python3', ['script.py']);
    var output = "";
    python.stdout.on('data', function(data){
        output += data;
        console.log(output);
    });
    python.on('close', function(code){
        if (code !== 0) {
            return res.status(200).send(code);
        }
        return res.status(200).send(output);
    });
});
If you then run the Express server and do a GET /async, you will get a "1" as the exit code.
However, if you comment out the sleep(1) line, the server successfully returns
test 1
test 2
as the response.
You can even trigger this using sleep(0).
I have tried flushing stdout before the sleep; I have also tried piping the result instead of using .on('close'); and I have tried the -u option when calling python (to use unbuffered streams).
None of this has worked, so I'm guessing there's some mechanism baked into Express that closes the request as soon as the spawned process sleeps OR finishes (instead of only when it finishes).
I also found this answer related to using child_process.fork() but I'm not sure if this would have a different behavior or not and this one is very similar to my issue but has no answer.
Main question
So my question is: why does the Python script send an exit signal when calling sleep() (or, in the case of my query script, when running cursor.execute(query))?
If my supposition is correct that Express closes the request when a spawned process sleeps, is this avoidable?
One potential solution I found suggested using ZeroRPC, but I don't see how that would make Express keep the connection open.
The only other option I can think of is something like Kue: my Express API would only need to respond with some sort of job ID, Kue would actually spawn the Python script and wait for its response, and I could fetch the result via another API endpoint.
Is there something I'm missing?
Edit:
AllTheTime's comment is correct regarding the sleep issue. After I added from time import sleep, it worked. However, my sqlite script is still not working.
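For reference, the working version of the repro script after that fix. Without the import, the script dies with a NameError on the sleep(1) line (even sleep(0) triggers it), and an unhandled exception makes python3 exit with status 1, which is exactly the exit code Express saw:

from time import sleep

print("test 1")
sleep(1)
print("test 2")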
As it turns out, AllTheTime was indeed correct.
The problem was that my Python script loaded a config.json file via a relative path, which worked when the script was called from the console because the working directory was the script's directory.
However, when calling it from Node, the relative path no longer resolved correctly.
After fixing the path, it worked exactly as expected.
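A common way to make such a path robust, regardless of the caller's working directory, is to resolve it against the script's own location (a sketch; the helper name is illustrative, config.json is from the post):

import json
import os

SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))

def load_config():
    # Resolve config.json relative to this file, not the caller's cwd.
    with open(os.path.join(SCRIPT_DIR, "config.json")) as f:
        return json.load(f)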
I have a Python script which is working fine so far. However, the program does not exit properly. I can step through to the end in the debugger, but the program keeps running.
main.main() does a lot of stuff: it downloads (HTTP, FTP, SFTP, ...) some CSV files from a data provider, converts the data into a standardized file format, and loads everything into the database.
This works fine. However, the program does not exit. How can I find out where the program is "waiting"?
There is more than one provider; the script terminates correctly for all providers except one (an SFTP download, for which I'm using paramiko).
if __name__ == "__main__":
    main.log = main.log2both
    filestoconvert = []
    #filestoconvert = glob.glob(r'C:\Data\Feed\ProviderName\download\*.csv')
    main.main(['ProviderName'], ['download', 'convert', 'load'], filestoconvert)
I'm happy for any thoughts and ideas!
If your program does not terminate, it most likely means a thread is still running.
To list all running threads, you can use:
threading.enumerate()
This function lists all Thread objects that are currently alive (see the documentation).
If that is not enough, you might need a bit of scripting along with this function (see the documentation):
sys._current_frames()
So, to print the stack trace of every alive thread, you would do something like:
import sys, traceback, threading

thread_names = {t.ident: t.name for t in threading.enumerate()}
for thread_id, frame in sys._current_frames().items():  # .iteritems() is Python 2 only
    print("Thread %s:" % thread_names.get(thread_id, thread_id))
    traceback.print_stack(frame)
    print()
Good luck !
You can invoke the Python debugger for a script.py with
python -m pdb script.py
You will find the pdb commands at http://docs.python.org/library/pdb.html#debugger-commands
You'd better use GDB, which lets you pinpoint hung processes, much like jstack does in Java.
This question is 10 years old, but I'll post my solution for anyone with a similar issue with a non-finishing Python script like mine.
In my case, the debugging process didn't help. All debugging output showed only one thread. But the suggestion by @JC Plessis that some work must still be going on helped me find the cause.
I was using Selenium with the Chrome driver, and I was finishing the Selenium session after closing the only open tab with
driver.close()
But later I changed the code to use a headless browser, and the Selenium driver was no longer closed after driver.close(); the Python script was stuck indefinitely. It turns out that the right way to shut down the Selenium driver is actually
driver.quit()
That solved the problem, and the script finally finishes again.
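A minimal pattern that guarantees this cleanup even if the script fails midway (a sketch; the headless Chrome setup and URL are illustrative):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")
    # ... the rest of the script ...
finally:
    driver.quit()  # shuts down the browser and the driver process, unlike close()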
You can use sys.settrace to pinpoint which function blocks. Then you can use pdb to step through it.
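A minimal sketch of that approach: install a trace function that logs every call, run the program, and the last line printed before the hang points at the blocking function.

import sys

def tracer(frame, event, arg):
    # Print every function call with its source location.
    if event == "call":
        code = frame.f_code
        print("calling %s (%s:%d)" % (code.co_name, code.co_filename, frame.f_lineno))
    return tracer

sys.settrace(tracer)
# ... run the program; the last "calling ..." line precedes the hang.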