I'd like to keep my Python program running even while my computer is asleep; it should not stop when the system sleeps.
I am using Ubuntu.
My file is only on my computer, not online.
Please help me achieve this.
If you need more information, please leave a comment!
Short Answer -
No, you can't.
Details -
Sleep mode is a power-saving state similar to pausing a DVD movie: all applications on the computer are suspended, and there is no setting to change this. If you put the machine to sleep, every program is suspended. Sleep and hibernation both save the state of your desktop (which programs are open, which files are being accessed) to RAM or to the hard drive, respectively, and the computer is then put into a low-power state.
I am using Python 3 on Windows 7.
I recently made a Python keylogger. It saves the keylogs to a text file as I type, and upon pressing the WINDOWS key it sends the text from that file to my Gmail account using smtplib.
I have to start the Python file manually, and that gets quite boring!
My question is: is there any way to run that keylogger script on startup (without manually putting it in the startup folder -- I want the script to do everything itself), and then to close the script as soon as the user presses the shutdown button (delaying the shutdown somehow)?
The reason I want this is that I believe a keylogger must be hidden from the user, not to mention hidden from the antivirus ;)
I have tested this with the task scheduler, but it only takes time parameters (i.e. from 5:00 to 7:00), not startup and shutdown events.
If you need more information to solve this question, I will gladly provide it!
Thanks in advance.
I create an executable using py2exe in Python.
I was looking at this post, but unfortunately the answers were superficial.
The first solution uses Tendo, but that limits the application to one instance per user, and my app is used in a Windows Server environment where 20+ users are logged in at a time.
The second suggestion, listening on a defined port, doesn't come with examples of how it could be accomplished.
So I decided to use mutexes to prevent my app from running multiple times.
I currently use this code for mutexes, but it doesn't detect mutexes between applications and services.
This post shows how to accomplish that with mutexes, but not how it's done in Python.
How could I use mutexes to enforce a single instance of the program machine-wide on Windows (not just one instance per user), with detection between applications and services?
I'm not sure why you'd have to use mutexes for this purpose on Windows; there's a much simpler option: a good old lockfile.
If all you want is to make sure that only a single instance of the app runs, you could do something like the following.
Windows helps you here, since you can't delete a file while it's open in another process. So (code untested):
import os
import sys
import tempfile

tempdir = tempfile.gettempdir()
lockfile = os.path.join(tempdir, 'myapp.lock')
try:
    if os.path.isfile(lockfile):
        os.unlink(lockfile)
except WindowsError:
    # 'WindowsError: [Error 32] The process cannot access the file
    # because it is being used by another process' -- there's an
    # instance already running
    sys.exit(0)
with open(lockfile, 'wb') as lockfileobj:
    # run your app's main here
    main()
os.unlink(lockfile)
The with block ensures the file is open while your main runs and is closed when main finishes; os.unlink then removes the lockfile.
If another instance tries to start up, it exits on the WindowsError exception (it would be good to check its numeric code, though, to be sure it's precisely the case of the file already being open).
The above is a rough solution; a niftier one would use enter/exit functionality to delete the lockfile if main exits for any reason. Explanation here: http://effbot.org/zone/python-with-statement.htm
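One way to sketch that enter/exit idea (my own elaboration, not from the original answer; the lockfile name is made up) is a small context manager that always removes the lockfile, even if main raises:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def single_instance_lock(name='myapp.lock'):
    """Hold a lockfile open for the duration of the with block."""
    path = os.path.join(tempfile.gettempdir(), name)
    lockfileobj = open(path, 'wb')
    try:
        yield path
    finally:
        # runs on normal exit *and* on exceptions
        lockfileobj.close()
        os.unlink(path)
```

Used as `with single_instance_lock(): main()`, the lockfile is guaranteed to disappear however main terminates, so a stale lockfile can only be left behind by a hard kill of the process.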
I have a web-crawling Python script that has been running in a terminal for several hours, continuously populating my database. It has several nested for loops. For some reason I need to restart my computer and continue the script from exactly where I left off. Is it possible to preserve the pointer state and resume the previously running script in the terminal?
I am looking for a solution that works without altering the Python script. Modifying the code is a lower priority, as that would mean relaunching the program and reinvesting the time.
Update:
Thanks for the VM suggestion; I'll take that. For the sake of completeness, what generic modifications should be made to a script to make it pausable and resumable?
Update2:
Porting to a VM works fine. I have also modified the script to make it fail-safe against network failures. The code is written below.
You might try suspending your computer, or running in a virtual machine that you can subsequently suspend. But since your script works with network connections, chances are it won't resume from the point you left once you bring the system back up. Suspending and restoring a computer, or saving and restoring a virtual machine, means you need to re-establish the network connection. This is true for any element external to your system, and the network is one of them. If you are on a dynamic network, there is a good chance you will get a new IP on the next boot, and the network state you were previously working with will be void.
If you are planning to modify the script, there are a few things to keep in mind.
Add serializing and deserializing capabilities. Python has the pickle module (and the faster cPickle) for this.
Add restart points. The best way is to save the state at regular intervals and, when restarting the script, resume from the last saved state after re-establishing all the transient elements like the network connection.
This would not be an easy task, so expect to invest a considerable amount of time :-)
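A minimal sketch of those two points (the checkpoint filename and the shape of the state dict are my own assumptions, not from the question):

```python
import os
import pickle

CHECKPOINT = 'crawler_state.pkl'

def save_state(state, path=CHECKPOINT):
    # write atomically: dump to a temp file, then rename over the old one,
    # so a crash mid-write never corrupts the last good checkpoint
    tmp = path + '.tmp'
    with open(tmp, 'wb') as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def load_state(path=CHECKPOINT):
    # resume from the last checkpoint, or start fresh
    if os.path.isfile(path):
        with open(path, 'rb') as f:
            return pickle.load(f)
    return {'pending_urls': [], 'done': 0}
```

Call save_state every N iterations of the outermost loop; on startup, call load_state and skip the work it already records.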
Note:
On second thought, there is one alternative to changing your script: you can try a cloud virtualization solution like Amazon EC2.
I ported my script to a VM and launched it from there. However, there were network connection glitches after resuming from hibernation. Here's how I solved it by tweaking the Python script:
import logging
import socket
import time

socket.setdefaulttimeout(30)   # set timeout in secs
maxretry = 10                  # set max retries
sleeptime_between_retry = 1    # waiting time between retries
erroroccured = 0
while True:
    try:
        # urllib2, parse and myurl are defined elsewhere in the script
        domroot = parse(urllib2.urlopen(myurl)).getroot()
    except Exception:
        erroroccured += 1
        if erroroccured > maxretry:
            logging.info("Maximum retries reached. Quitting this leg.")
            break
        time.sleep(sleeptime_between_retry)
        logging.info("Network error occurred. Retrying, attempt %d..." % erroroccured)
        continue
    finally:
        # common code to execute after the try or except block, if any
        pass
    break
This modification made my script resilient to network failures.
As others have commented, unless you are running your script in a virtual machine that can be suspended, you will need to modify it to track its state.
Since you're populating a database with your data, I suggest using it to track the script's progress (store the latest URL parsed, keep a list of pending URLs, etc.).
If the script is terminated abruptly, you don't have to worry about saving its state: database transactions come to the rescue, and only the data you've committed will be saved.
When the script is restarted, only the data for the URLs you completely processed will be stored, and it can resume by picking up the next URL from the database.
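A sketch of that idea with sqlite3 (the table name and columns are my own invention; an in-memory database stands in for the real one):

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # use a file path in real use
conn.execute("CREATE TABLE IF NOT EXISTS urls "
             "(url TEXT PRIMARY KEY, done INTEGER DEFAULT 0)")

def enqueue(urls):
    # INSERT OR IGNORE makes re-enqueueing after a restart harmless
    conn.executemany("INSERT OR IGNORE INTO urls (url) VALUES (?)",
                     [(u,) for u in urls])
    conn.commit()

def next_pending():
    # on restart, simply ask the database what is left to do
    row = conn.execute("SELECT url FROM urls WHERE done = 0 "
                       "ORDER BY url LIMIT 1").fetchone()
    return row[0] if row else None

def mark_done(url):
    # commit only once the URL is fully processed, so a crash
    # mid-processing leaves it pending for the next run
    conn.execute("UPDATE urls SET done = 1 WHERE url = ?", (url,))
    conn.commit()
```

The crawl loop becomes: fetch next_pending(), process it, mark_done(); when next_pending() returns None, the work is finished.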
If this problem is important enough to warrant this kind of investment, you could run the script in a virtual machine. When you need to shut down, suspend the virtual machine, then shut down the computer. When you want to start again, start the computer, then resume your virtual machine.
WinPDB is a Python debugger that supports remote debugging. I have never used it, and I don't know whether remote-debugging a running process requires modifying the script (it very likely does, otherwise it would be a security issue); but if remote debugging without modifying the script is possible, you may be able to dump the current state of the script to a file and figure out later how to load it. I don't think it would work, though.
Typically, transcoding one of my hour-long audio recording sessions to an MP3 file takes twenty-odd minutes.
I want a Python script to execute a series of steps when the OS X application GarageBand finishes writing that MP3 file.
What are the best ways in Python to detect that an external application has finished writing data to a file and closed it? I read about kqueue and epoll, but since I have no background in OS event detection and couldn't find a good example, I am asking for one here.
The code I am using right now does the following, and I am looking for something more elegant:
import time

while True:
    try:
        today_file = open("todays_recording.mp3", "r")
        my_custom_function_to_process_file(today_file)
        break  # processed successfully, stop polling
    except IOError:
        print "File not ready yet... continuing to wait"
        time.sleep(5)  # avoid busy-looping
You could popen lsof and filter by either the process or file you're interested in...
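If shelling out to lsof feels heavy, another simple heuristic (my suggestion, not part of the answer above) is to wait until the file stops growing for a while, which is a rough sign that the writer has finished:

```python
import os
import time

def wait_until_stable(path, quiet_period=2.0, poll=0.5):
    """Block until `path` exists and its size has been unchanged
    for `quiet_period` seconds; return the final size."""
    last_size = -1
    stable_since = None
    while True:
        if os.path.isfile(path):
            size = os.path.getsize(path)
            if size == last_size:
                if stable_since is None:
                    stable_since = time.time()
                elif time.time() - stable_since >= quiet_period:
                    return size
            else:
                # file grew (or appeared): reset the quiet-period clock
                last_size = size
                stable_since = None
        time.sleep(poll)
```

This is only a heuristic: a transcoder that pauses longer than quiet_period mid-write would fool it, so pick a quiet_period comfortably larger than any expected stall.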