I have a script that repeats every 10 seconds, but I can't test it for long because my PowerShell keeps hanging and the script just stops for no apparent reason (the process is still running, but it stops producing output). Is there a way to test the code, or to run it safely without it being interrupted? I tried to search, but it seems that a framework like unittest would just crash along with my code because of the Windows shell if I ran it for, say, a day; it usually hangs a few hours after I start testing manually.
The code is something like this:
import time
import requests

while True:
    getting = requests.get(some_url)
    result = getting.json()
    posting = requests.post(another_url, headers=headers, json=result)
    time.sleep(10)
Thank you for your help.
After testing and experimenting, it seems to be resource mismanagement by Windows when I use PowerShell, cmd.exe, or the default Python IDE. So, in case someone wants to test their code for a prolonged period of time, I recommend using PyCharm: it has been running for more than a day for me, which suggests PyCharm has better management in this specific area.
For more details check the comments under the question itself.
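If you have to stay with PowerShell or cmd.exe, one workaround worth trying is to make the loop independent of console output by logging to a file instead of printing, so a stalled console window cannot block it. A minimal sketch (some_url, another_url, and headers are the placeholders from the question; poller.log is an arbitrary file name):
import logging
import time
import requests

# Write results to a file so the loop does not depend on the console at all.
logging.basicConfig(filename='poller.log', level=logging.INFO)

while True:
    getting = requests.get(some_url)
    result = getting.json()
    posting = requests.post(another_url, headers=headers, json=result)
    logging.info('posted, status %s', posting.status_code)
    time.sleep(10)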
Related
I am facing a problem with a Python script getting killed. I had always used this script with no problem at all until two days ago; then it started to print the string 'killed', without any change in the code, before aborting the execution.
Other people have tried to run the same code on their systems and it works fine, as it used to do for me until two days ago.
I have read some similar old questions, and I gather the problem could be an out-of-memory issue caused by bad memory management in my code. That sounds a little strange to me, since it used to work perfectly until a few days ago and the problem appears on my system only.
Do you have any idea on how to inspect the problem and find a possible solution, please?
Python version: Python 2.7.14+
System: Scientific Linux CERN 7
In your case, it's highly probable that your script reached some limit on the amount of resources it is allowed to use, and that limit depends on your OS and other parameters. Are you running something else alongside the script? Are there many open files, etc.?
The most likely cause of such an error is excessive memory use, which forces the system to play it safe and kill the process when further allocations start failing. Maybe you can print, in parallel, the total memory you're using to get a glimpse of what's happening, since the information you've given is not enough to diagnose it:
import os, psutil
process = psutil.Process(os.getpid())
then, for Python 3:
print(process.memory_info().rss)
or, for Python 2.7 (tested):
print(process.memory_info()[0])
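For a long-running script you could log this periodically, for example once per loop iteration (a minimal sketch; the 10-second interval is arbitrary):
import os
import time
import psutil

process = psutil.Process(os.getpid())

while True:
    # rss is the resident set size: the physical memory the process currently uses.
    rss_mib = process.memory_info().rss / (1024.0 * 1024.0)
    print('memory usage: %.1f MiB' % rss_mib)
    time.sleep(10)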
When making changes to larger modules, this is my current (inefficient) process:
Make needed change to code
Run the program to test (using pdb: python3 -m pdb path/to/script.py)
Program will throw an error
Fix error/create an exception
Run again
New error appears
Rinse and repeat
The data processing module I'm working on has many steps, and rerunning everything after each code change to make sure there are no errors takes a long time and is frustrating. It's also obviously an inefficient way to develop a program, but I don't know of an alternative.
What advice do you have so that I don't have to run, and wait for, my whole data processing pipeline just to find what the next error will be? Is there any way to make changes to the code and resume execution from just before the point where the last error appeared?
You could write unit tests for every module and every step. Basically, the idea is "create fake data to pass to each step and check that the result after the step is what you want", automated of course (see the sketch below).
Check the internet to learn about testing in general and testing in Python.
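For instance, a minimal sketch with the standard unittest module, where clean_rows is a hypothetical pipeline step that drops empty records:
import unittest

# Hypothetical pipeline step: drop empty records from a list of rows.
def clean_rows(rows):
    return [row for row in rows if row]

class TestCleanRows(unittest.TestCase):
    def test_drops_empty_rows(self):
        fake_data = [{'id': 1}, {}, {'id': 2}]
        self.assertEqual(clean_rows(fake_data), [{'id': 1}, {'id': 2}])

if __name__ == '__main__':
    unittest.main()
Once each step has a test like this, you can rerun just the tests for the step you changed instead of the whole pipeline.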
I am running some scientific experiments that take a lot of time. Everything is written in Python 3 and I use the Anaconda Command Prompt to activate the Python scripts. I am working on Windows.
Right now, an experiment is running. I want to execute the next experiment as soon as the current one is finished; however, I will not be near this computer when that happens. Is there a way to execute a Python script with, say, a 4-hour delay so I do not waste a night of precious computation time?
Potentially, adding a long sleep call to my main Python script could do the trick, but I was wondering if any of you has a more elegant solution to the problem.
There is a way with the Windows Task Scheduler; see the following:
https://www.youtube.com/watch?v=n2Cr_YRQk7o
When you set the trigger, set it as you like (in 4 hours).
While sleep could be a dirty workaround, you could also make use of the built-in Task Scheduler of Windows, as XtoR stated.
You could also start the other script at the end of your current one by inserting the following bit of code into the first script:
import subprocess

# Popen returns immediately, so the second script is launched without blocking
# the rest of the first one; call proc.wait() if you need to wait for it.
proc = subprocess.Popen([path_to_python_executable, 'path_to_second_script'])
Personally I'm predisposed towards writing a quick wrapper script:
import subprocess
import sys

# We're just providing the Python path here. Make sure to change it according to your system settings.
python_path = r'C:\Python\python.exe'

# Here we specify the scripts you want to run.
run_script_one = r'C:\Path_to_first_script.py'
run_script_two = r'C:\Path_to_second_script.py'

# subprocess.call blocks until each script finishes (and returns its exit code),
# so the second script only starts once the first one has completed.
return_code = subprocess.call([python_path, run_script_one])
return_code = subprocess.call([python_path, run_script_two])

sys.exit(0)
I am writing a python script to get some basic system stats. I am using psutil for most of it and it is working fine except for one thing that I need.
I'd like to log the average cpu wait time at the moment.
From the top output, it would be in the CPU section under %wa.
I can't seem to find how to get that in psutil. Does anyone know how to get it? I am about to go down a road I really don't want to go down....
That entire CPU row is rather nice, since it totals to 100 and it is easy to log and plot.
Thanks in advance.
%wa is giving you the iowait of the CPU. If you are using times = psutil.cpu_times() or times = psutil.cpu_times_percent(), then it is available as times.iowait on the returned value (assuming you are on a Linux system).
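A minimal sketch (again assuming Linux, where the iowait field is present):
import psutil

# Percentages over a one-second sampling window; iowait is the share of time
# the CPU spent waiting for I/O to complete, like %wa in top.
times = psutil.cpu_times_percent(interval=1)
print('iowait: %.1f%%' % times.iowait)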
This is a follow-up to that question.
Basically, I have a Python script which should start another program (.exe) via a timer after some 2-6 hours. Everything works fine as long as I test it with a short countdown, or as long as the computer is "active" (= user input beforehand, screen on) before the timer expires, or as long as no other program is running at the same time (an Excel VBA script in my case).
On Windows 7, with long countdowns and with Excel running, the external program just doesn't open. There aren't any error messages, and any other (Python-internal) commands AFTER that point are executed as they should.
I'm using the x = subprocess.Popen([program, args], flags) command and have tried almost all possible flags (shell, bufsize, creationflags, stdout, etc.) and alternatives (call), but it always behaves as described above.
Now I have noticed similar behaviour when trying to open the external program via VBA, so I don't think it's a Python-specific but a Windows-specific problem. Additionally, I tried it on another PC with Windows Vista and there it surprisingly works (both 64-bit, if that matters).
I already tried increasing the process priority, preventing the idle state via SetThreadExecutionState, and disabling all energy-saving features I'm aware of, but nothing has changed so far.
Does anyone have an idea? Many thanks, I'm slowly getting frustrated...
After taking into account the problem stated here, I think a viable alternative would be to use many short pauses instead of one long pause, so the program stays active; the trade-off is some extra CPU time spent on the frequent wakeups.
import time

# Block for `sec` seconds using many short pauses instead of one long sleep.
def wait(sec, sleeptime=0):
    endsecs = time.time() + sec
    while True:
        if endsecs <= time.time():
            return None
        if sleeptime != 0:
            time.sleep(sleeptime)
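Usage would look something like this (a 4-hour pause, waking once per second; both numbers are arbitrary):
wait(4 * 60 * 60, sleeptime=1)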
Just a guess, nothing certain, no time to verify.