I'm writing a little routine to track my sleep cycle. Usually when I wake up I turn on my PC within minutes, so reading out when the system turns on and when it shuts down would be great. This program does the same thing: https://www.neuber.com/free/pctime/
I tried googling for a library or function that can read these system events, but most of the results are about turning the PC on and off from the command line, so my question is:
What would be the best way to get the time the pc turns on and off?
Thanks
If you're on Linux (I'll assume systemd here), you could write a service that executes code on startup and shutdown. That code would write the current timestamp to a CSV file, along with an indicator "startup" or "shutdown".
Here's a Python 3 script that takes the type of timestamp to log ("startup" or "shutdown") as its first argument and appends it to "updownlog.csv":
#!/usr/bin/env python3
import os
import sys
import time


def main():
    # Consider using an absolute path here: when systemd runs the script,
    # the working directory will not be your home directory.
    logfile = "updownlog.csv"
    write_header = False
    if len(sys.argv) != 2:
        sys.exit("Error: script takes exactly one argument")
    if sys.argv[1] != "shutdown" and sys.argv[1] != "startup":
        sys.exit("Error: first argument should be 'startup' or 'shutdown'")
    typ = sys.argv[1]
    if not os.path.exists(logfile):
        write_header = True
    with open(logfile, "a") as f:
        now = time.time()
        if write_header:
            f.write("type,timestamp\n")
        f.write("{},{}\n".format(typ, now))


if __name__ == "__main__":
    main()
Next, you'll need to create the systemd service that triggers this script. I'm shamelessly copying a solution offered in an answer on Unix & Linux SE: all credit to "John 9631"! If you still use an init.d-based system, there are great answers in that thread, too.
So, create the service file for your logging:
vim /etc/systemd/system/log_start_stop.service
and copy in the file content:
[Unit]
Description=Log startup and shutdown times
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/home/Sungod3k/log.py startup
ExecStop=/home/Sungod3k/log.py shutdown
[Install]
WantedBy=multi-user.target
Then make the script executable (chmod +x /home/Sungod3k/log.py) and enable the service with the command:
systemctl enable log_start_stop
Granted, this won't yet tell you whether you have a sleep deficit, so you'll need to do some post-processing, e.g. with Python or R, or even awk.
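For example, a minimal post-processing sketch in Python (it assumes the CSV format written by the script above and that startup/shutdown rows appear in matching order):
import csv
from datetime import datetime

def sessions(path="updownlog.csv"):
    """Pair up startup/shutdown rows and return (boot time, hours on)."""
    start, result = None, []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = float(row["timestamp"])
            if row["type"] == "startup":
                start = ts
            elif row["type"] == "shutdown" and start is not None:
                result.append((datetime.fromtimestamp(start), (ts - start) / 3600.0))
                start = None
    return result

for boot, hours in sessions():
    print("{:%Y-%m-%d %H:%M}  on for {:.1f} h".format(boot, hours))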
Related
('The first script' takes input from the user and 'the second script' sends the notification.)
I have been trying to restart a Python script from another one, but I couldn't get it to work after trying a few methods. I developed a reminder app that notifies the user when a time previously set by the user arrives. It works on Linux and consists of two Python scripts. The first one takes the input given by the user to schedule a task, for example "Call the boss at 12:30 pm"; Linux will then show the notification at 12:30 pm. The second one checks the scheduled tasks and sends the notification when the time comes.
In the first script, I am trying to restart the second script whenever the user adds a new task, because the second script needs to re-read the tasks in order to notify about the new one. I also want the first script to terminate itself once it has launched the second script, while the second script keeps running. In the first script I tried these commands:
os.system(f"pkill -f {path2}")
os.system(f"python {path2}")
These didn't work.
I also want to run the second script at OS startup.
Summary:
1. I want to restart a Python script from another one, and the first one should terminate once the second one is running.
2. I want to run the second script at OS startup.
The repository for my reminder app is here.
About 1:
Assuming the name of the other script is 2.py (changeable in the code below), this worked pretty well for me:
1.py:
import subprocess
import os
import time

OTHER_SCRIPT_NAME = "2.py"

# Search for the process running 2.py (filter out the grep itself)
process_outputs = subprocess.getoutput("ps aux | grep " + OTHER_SCRIPT_NAME + " | grep -v grep")
wanted_process_info = process_outputs.split("\n")[0]  # Getting the first line only
splitted_process_info = wanted_process_info.split(" ")  # Splitting the string
splitted_process_info = [x for x in splitted_process_info if x != '']  # Removing empty items
pid = splitted_process_info[1]  # PID is the second item in the ps output
os.system("kill -9 " + str(pid))  # Killing the other process

exit()

time.sleep(1000)  # Will not be reached because exit() was called above
2.py:
import time
time.sleep(100)
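The snippet above only kills the old 2.py; if you also want to relaunch it and then end 1.py (which is what the question asks for), a rough sketch you could append to 1.py, assuming python3 is on PATH and 2.py sits in the current directory:
import subprocess

OTHER_SCRIPT_NAME = "2.py"  # same name as used above

# Start a fresh 2.py in its own session so it keeps running after 1.py exits,
# then terminate 1.py.
subprocess.Popen(["python3", OTHER_SCRIPT_NAME], start_new_session=True)
raise SystemExit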
About 2:
In Linux, you can execute scripts on startup by adding them to the /etc/rc.local file.
Just launch your script from rc.local and you are good to go:
/etc/rc.local:
python /path/to/your/script.py &
How can I create a command line interface that persists as a background process and only executes commands when specific commands are entered? The following is pseudo code:
def startup():
    # costly startup that loads objects into memory once in a background process
    ...

def tear_down():
    # shut down; generally not needed, as main is acting like a service
    ...

def main():
    startup()
    # pseudo code that checks shell input for a match; after execution releases back to shell
    while True:
        usr_in = get_current_bash_command()
        if usr_in == "option a":
            # blocking action that releases control back to shell when complete
            ...
        if usr_in == "option b":
            # something else with handling like option a
            ...
        if usr_in == "quit":
            # shuts down background process; will not be used frequently
            tear_down()
            break
    print("service has been shut down. Bye!")

if __name__ == "__main__":
    # run non-blocking in background, only executing code if usr_in matches commands:
    main()
Note what this is not:
a typical example of argparse or click, which runs a (blocking) Python process until all commands are completed
a series of one-off scripts for each command; the commands need to use objects in memory that were instantiated once in the background process by startup()
a completely different shell, like ipython; I'd like to integrate this with a standard shell, e.g. bash.
I am familiar with click, argparse, subprocess, etc., and as far as I can tell they accept a single command and block until completion. I specifically seek something that interacts with a background process so that the expensive startup (loading objects into memory) is handled only once. I have looked at both the python-daemon and service packages, but I'm unsure whether these are the right tools for the job either.
How can I accomplish this? I feel as though I don't know what to google to get the answer I need...
this is just one way you might do this... (there are other ways as well, this is just a pretty simple method)
you could use a web server as your long-running process (you would then turn it into a service using SysV init or upstart)
hardworker.py
import time
import json

import flask

app = flask.Flask(__name__)

@app.route("/command1")
def do_something():
    time.sleep(20)
    return json.dumps({"result": "OK"})

@app.route("/command2")
def do_something_else():
    time.sleep(26)
    return json.dumps({"result": "OK", "reason": "Updated"})

if __name__ == "__main__":
    app.run(port=5000)
then your client could just make simple http requests against your "service"
thin_client.sh
#!/bin/bash
if [[ $1 == "command1" ]]; then
    curl http://localhost:5000/command1
elif [[ $1 == "command2" ]]; then
    curl http://localhost:5000/command2
fi
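If you'd rather keep the client in Python too, roughly the same thing with the third-party requests library (assuming it's installed; the curl version above works just as well):
import sys
import requests  # third-party; pip install requests

COMMANDS = {
    "command1": "http://localhost:5000/command1",
    "command2": "http://localhost:5000/command2",
}

if __name__ == "__main__":
    cmd = sys.argv[1] if len(sys.argv) > 1 else ""
    if cmd not in COMMANDS:
        sys.exit("usage: thin_client.py command1|command2")
    print(requests.get(COMMANDS[cmd]).text)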
So I'm trying to create a Python backend script for an Electron app. I want to be able to continually pass system inputs to the Python file and have it run a function whenever the input changes. I don't want to have to run the Python script each time, as it takes a few seconds to load modules and data and just slows down the app.
I can't find a good description anywhere of how to do this.
import sys

sysArg = []

def sysPrint(sysArgs):
    print(sysArgs[1:])

while True:
    if sys.argv != sysArg:
        sysPrint(sys.argv)
        sysArg = sys.argv
This doesn't work for me, and a "while True" loop doesn't feel very safe CPU-wise either.
I'm also thinking that sys.argv might not be the right choice, as it is perhaps only populated when the Python script is first called?
If I'm not misunderstanding your question, you can just redirect the program's output to a file and use nohup to run it as a background process,
like this:
nohup some_program &> output.log &
And then you can handle the output file like this:
import time

def tail(f):
    f.seek(0, 2)  # jump to the end of the file
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.1)
            continue
        yield line

if __name__ == '__main__':
    output_file = open("output.log", "r")
    for line in tail(output_file):
        print(line)
I have a script that collects data from the streaming API. I'm randomly getting an error that I believe is coming from Twitter's end for whatever reason. It doesn't happen at a specific time; I've seen it as early as 10 minutes after starting my script, and other times after 2 hours.
My question is: how do I create another script (outside the running one) that can detect whether it terminated with an error and then restart it after a delay?
I did some searching and most results were about using bash on Linux, but I'm on Windows. Other suggestions were to use Windows Task Scheduler, but that can only be set for a known time.
I came across the following code:
import os, sys, time

def main():
    print("AutoRes is starting")
    executable = sys.executable
    args = sys.argv[:]
    args.insert(0, sys.executable)
    time.sleep(1)
    print("Respawning")
    os.execvp(executable, args)

if __name__ == "__main__":
    main()
If I'm not mistaken, that runs inside the script itself, correct? The issue with that is that my script is currently collecting data and I can't terminate it to edit it.
How about this?
from os import system
from time import sleep

while True:  # manually terminate when you want to stop streaming
    system('python streamer.py')
    sleep(300)  # sleep for 5 minutes
Meanwhile, when something goes wrong in streamer.py, end it from there by invoking sys.exit(1).
Make sure this and streamer.py are in the same directory.
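A rough sketch of the sys.exit(1) part inside streamer.py (stream_tweets() is just a placeholder for your actual streaming loop):
import sys

def stream_tweets():
    # placeholder for your actual streaming-API loop
    ...

if __name__ == "__main__":
    try:
        stream_tweets()
    except Exception as exc:
        # log whatever you need, then exit so the wrapper loop respawns us
        print("stream died: {}".format(exc), file=sys.stderr)
        sys.exit(1)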
I am using the CherryPy framework to run my Python code on a server, but the process stops working when the load increases.
Every time this happens I have to go and start the Python code manually. Is there any way I can use Gunicorn with CherryPy so that Gunicorn restarts the code automatically when it stops working?
Any other solution will also work; I just want to make sure that the Python program does not stay down.
I use a cron job that checks memory usage every few minutes and restarts CherryPy when it exceeds 500 MB, so that the web host doesn't complain to me with emails. Something on my server doesn't release memory when a function ends as it should, so this is a pragmatic workaround.
This hack may seem weird because I reset it using an HTTP request, but that's because I spent hours trying to figure out how to do it from Bash and gave up. It works.
CRON PART
*/2 * * * * /usr/local/bin/python2.7 /home/{mypath}/cron_reset_cp.py > $HOME/cron.log 2>&1
And code inside cron_reset_cp.py...
# cron job for resetting cherrypy /cp/ when it exceeds 500 MB
import os

# assuming the job starts in /home/my_username/
os.chdir('/home/my_username/cp/')
import mem

C = mem.MemoryMonitor('my_username')  # this function adds up all the memory
memory = int(float(C.usage()))
if memory > 500:  # MB
    # Tried: pid = os.getpid() (current process = the cron job) -- that approach did not work for me.
    import urllib2
    cp = urllib2.urlopen('http://myserver.com/cp?reset={password}')
Then I added this function to reset CherryPy via cron, or after a GitHub update, from any browser (assuming only I know the {password}).
The reset url would be http://myserver.com/cp?reset={password}
def index(self, **kw):
    if kw.get('reset') == '{password}':
        cherrypy.engine.restart()
        ip = cherrypy.request.headers["X-Forwarded-For"]  # get client ip
        return 'CherryPy RESETTING for duty, sir! requested by ' + str(ip)
The MemoryMonitor part is from here:
How to get current CPU and RAM usage in Python?
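If you don't want to pull in that MemoryMonitor helper, a rough equivalent using the third-party psutil package (assuming it's installed) could look like this:
import psutil  # third-party; pip install psutil

def total_memory_mb(username):
    """Rough total resident memory (MB) of all processes owned by username."""
    total = 0
    for proc in psutil.process_iter(attrs=["username", "memory_info"]):
        try:
            if proc.info["username"] == username:
                total += proc.info["memory_info"].rss
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return total / (1024 * 1024)

print(total_memory_mb("my_username"))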
Python offers several error-handling strategies to control flow. A simple try/except statement can catch the exception raised when, say, memory overflows, the load spikes, or any number of other issues make your code stall (hard to tell without the actual code).
In the except clause, you can release any memory you allocated and restart your process.
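A hedged sketch of that pattern (run_server() and cleanup() are placeholders for your own CherryPy start-up and clean-up code):
import time

def run_server():
    ...  # placeholder: start CherryPy / do the actual work here

def cleanup():
    ...  # placeholder: release whatever the failed run left behind

while True:
    try:
        run_server()
        break            # clean exit: leave the loop
    except Exception:
        cleanup()        # free what we can, wait a bit, then go around again
        time.sleep(5)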
Depending on your OS, try the following logic:
Write your PID to /path/pid.file using os.getpid()
Create a service script that connects to your web port
Try to fetch data
If no data was received, kill the PID found in /path/pid.file
Restart the script
Your main script:
import os

# write this process's PID where the service script can find it
with open('/path/to/pidfile.pid', 'w') as fh:
    fh.write(str(os.getpid()))

# ... execute your code as per normal ...
service.py script:
from os import kill
from signal import SIGTERM
from socket import socket

s = socket()
try:
    s.connect(('127.0.0.1', 80))
    s.send(b'GET / HTTP/1.1\r\n\r\n')
    received = len(s.recv(8192))
    s.close()
except Exception:
    received = 0

if received <= 0:
    with open('/path/to/pidfile.pid', 'rb') as fh:
        kill(int(fh.read()), SIGTERM)
And have a cronjob (execute in a console):
export EDITOR=nano; crontab -e
Now you're in the text-editor editing your cronjobs, write the following two lines at the bottom:
*/2 * * * * cd /path/to/ && python service.py
*/5 * * * * cd /path/to/ && python script.py
Press Ctrl+X and when asked to save changes, write Y and enjoy.
Also, instead of restarting your script from within service.py, I'd suggest that service.py only kills the PID found in /path/to/pidfile.pid and you let your OS handle starting the script back up when the PID is missing; Linux at least has very nifty features for this.
Best practice Ubuntu
It's generally considered best practice to use the system's service tooling (service apache2 status, for instance); the service scripts let you reload, stop, start, and get job states, and so on.
Check out: http://upstart.ubuntu.com/cookbook/#service-job
Also check the service scripts of the other applications: don't just use them as skeletons, but make sure your application follows the same logic.
Perhaps you need supervisord to monitor your Gunicorn process and restart it when necessary:
http://www.onurguzel.com/managing-gunicorn-processes-with-supervisor/