I'm using a python script named vpngate.py.
This script takes a country as a parameter, so when I launch my program I do: python vpngate.py korea (for instance). It changes my IP address by finding an available IP address in the specified country.
When I launch my program, my IP is changed (while the program is running).
When I stop my program, I get my old IP back. And it's only after stopping my program that I want to relaunch it in order to get a new IP, and so on.
I read that I can import exit from the sys package to stop my program. But my question is: how do I programmatically launch my program every 2 minutes, for instance? I should add that I'm on Windows (so the cron solution doesn't work), and it's really important that the script runs in the background, because I have other scripts running in parallel. I hope I was clear enough. Thank you in advance.
Have you tried putting the IP-setting code in a function and calling it from a python script that uses time.sleep() to time the executions?
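A sketch of that idea, adapted to the stop-and-relaunch workflow from the question: start vpngate.py as a child process, let it run for two minutes, terminate it (which restores the old IP), then start it again for a fresh IP. The script name and country argument come from the question; relaunch_loop and its cycles parameter are illustrative additions, not an existing API.

```python
# Sketch of a relaunch loop: start vpngate.py, let it run for two
# minutes, stop it (the old IP comes back), then start it again for a
# fresh IP. The cycles parameter only exists so the loop can be bounded;
# in real use it runs forever.
import subprocess
import sys
import time

def relaunch_loop(cmd, run_seconds=120, cycles=None):
    """Start cmd, wait run_seconds, terminate it, and repeat."""
    done = 0
    while cycles is None or done < cycles:
        proc = subprocess.Popen(cmd)
        time.sleep(run_seconds)
        proc.terminate()  # stop the program, restoring the old IP
        proc.wait()       # make sure it is fully gone before relaunching
        done += 1
    return done

if __name__ == "__main__":
    relaunch_loop([sys.executable, "vpngate.py", "korea"])
```

On Windows you can start this with pythonw instead of python so it runs without a console window, alongside your other scripts.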
I'm writing a script that (basically) controls some motors from Raspberry Pi GPIO pins. I've been getting it right for a while now, and when I boot the Pi and run the program manually it works just fine. I also have a portion of the code that allows me to use my phone to connect via bluetooth and send some data to control the motors. That also works fine on a manual run of the program.
Now I am trying to make the program start automatically on boot, as this will eventually go in a larger machine (boat) and I won't be hooking a monitor etc. to it. I'm currently doing this through cron jobs with the @reboot tag. Looks like this:
# This enables GPIO (as far as I know). The program fails without this command being run first.
@reboot sudo pigpiod
# This runs the python program. The ampersand backgrounds the process because it should run continuously.
@reboot python3 /home/pi/Desktop/BoatBrain.py &
# This lets me connect my phone over bluetooth. The python program has
# a portion that takes data from that connection. The ampersand backgrounds the process,
# which seems like the right thing to do, since it looks like it blocks other things.
# That is also why it is at the end of the cron table.
@reboot sudo rfcomm watch hci0 &
When I reboot, the jobs all run, and I can connect my phone, so it must have passed the line executing the python script, but the servo I have connected just jitters in place uncontrollably. Let me restate that when I take the cronjobs away and run this manually, the program works correctly with few to no jitters, so it doesn't feel like an electrical problem...
If you need any more information please let me know and I'll be happy to provide it. I have a tendency to leave things out without realizing XD
Thanks!
Did you add anything to ~/.profile? That might be why it works when you invoke the commands yourself. If so, create a file (e.g. sudo vi /etc/profile.d/servo.sh) with the same couple of lines you added to ~/.profile. Then the system will have those on reboot.
Also, you could put all three commands in one shell script and just put that script in the crontab; then the script can make sure they start in order. You could also have the cron job write its output to a logfile and see what it says. You can also check when cron ran by looking in /var/log/syslog.
Oh, also for testing, you can change @reboot to a start time like 10 * * * * (minute 10 of every hour), so you can get the cron jobs working without having to reboot. Then later, change it back to @reboot to try it with a reboot.
Either something is missing that your login has (.profile); or the commands are starting too quickly or at the same time and need to start in a controlled order; or the system isn't completely ready yet, but I doubt that last one.
I have a python script that checks the temperature every 24 hours. Is there a way to leave it running if I shut the computer down or log off?
Shutdown - no.
Logoff - potentially, yes.
If you want the script to automatically start when you turn the computer back on, you can add the script to your startup folder (Windows) or schedule it (Windows Task Scheduler, cron job, systemd timer).
If you really want a temperature tracker that is permanently available, you can use a low-power solution like a Raspberry Pi rather than leaving your PC on.
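If you go the scheduling route, the script itself can be reduced to a run-once job that logs one reading and exits, leaving the every-24-hours part to the scheduler. A minimal sketch, where read_temperature and the log filename are placeholders rather than real APIs:

```python
# A run-once temperature check meant to be invoked by a scheduler
# (startup folder, Task Scheduler, cron, systemd timer) rather than
# sleeping for 24 hours itself. read_temperature is a placeholder for
# however you actually read the sensor; the filename is arbitrary.
import datetime

def read_temperature():
    return 21.5  # placeholder value; replace with a real sensor/API call

def make_log_line(temp, when=None):
    when = when or datetime.datetime.now()
    return f"{when:%Y-%m-%d %H:%M} {temp:.1f}C"

if __name__ == "__main__":
    with open("temperature.log", "a") as f:
        f.write(make_log_line(read_temperature()) + "\n")
```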
The best way to accomplish this is to have your program run on some type of server that your computer can connect to. A server could be anything from a Raspberry Pi to an old disused computer, a web server, or a cloud server. You would have to build a program that can be accessed from your computer; how you access it will depend on the server and on the way you build your program.
Doing things this way means your script will always be able to check the temperature because it will be running on a system that stays on.
Scripts are unable to run while your computer is powered off. What operating system are you running? How are you collecting the temperature? It is hard to give much more help without this information.
One thing I might suggest is powering on the system remotely at a scheduled time, using another networked machine.
You can take a look at the following pages
http://www.wikihow.com/Automatically-Turn-on-a-Computer-at-a-Specified-Time
http://lifehacker.com/5831504/how-can-i-start-and-shut-down-my-computer-automatically-every-morning
Additionally, once it turns on, you can use a cron job to execute your python code via a console command: python yourfile.py
I have a python script that collects data from a few connected sensors. This script runs in the background from when the Pi starts. The script works fine, but at irregular intervals it falls into the 'Sl' state. If I restart the Pi it works again for a few days, but then it happens again.
Is there a way to monitor the state of the script (kill it and start it again if this happens), or any idea why this happens?
You have a few options (somewhat related):
1. Run your script as per normal, but have another script (bash works well) that checks the state of your script. If it's stalled, kill it and restart it. This second script can be called from a regular cron job.
2. Change your python script into a Linux service (see here for an example), and either monitor this service with a second script (similar to 1), OR do a service restart at regular intervals with a cron job.
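The first option can be sketched in Python as well as bash. A hedged example, assuming a POSIX system: a regular cron job runs this watchdog, which records the child's PID in a file and restarts the sensor script if that PID is dead. The names sensor_logger.py and /tmp/sensor.pid are hypothetical. Note this only covers the script dying; detecting a stalled-but-alive process would need an extra liveness check, such as the age of the data file it writes.

```python
# Watchdog sketch (POSIX only): cron runs this every few minutes; if the
# PID recorded last time is dead, the sensor script is restarted.
import os
import subprocess
import sys

def is_running(pid):
    """Return True if a process with this PID exists (POSIX)."""
    try:
        os.kill(pid, 0)  # signal 0 checks existence without killing
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # exists, but owned by another user

def ensure_running(pid_file, cmd):
    """Restart cmd if the PID in pid_file is dead; return the live PID."""
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
        if is_running(pid):
            return pid  # still alive, nothing to do
    except (FileNotFoundError, ValueError):
        pass  # no record yet, or a corrupt one
    proc = subprocess.Popen(cmd)
    with open(pid_file, "w") as f:
        f.write(str(proc.pid))
    return proc.pid

if __name__ == "__main__":
    ensure_running("/tmp/sensor.pid", ["python3", "sensor_logger.py"])
```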
I have a simple python script to send data from a Windows 7 box to a remote computer via SFTP. The script is set to continuously send a single file every 5 minutes. This all works fine, but I'm worried about the off chance that the process stops or fails and the customer doesn't notice the data files have stopped coming in. I've found several ways to monitor python processes in an Ubuntu/Unix environment but nothing for Windows.
If there are no other mitigating factors in your design or requirements, my suggestion would be to simplify the script so that it doesn't do the polling; it simply sends the file when invoked, and use Windows Scheduler to invoke the script on whatever schedule you need. By relying on a core Windows service, you can factor that complexity out of your script.
You can check out restartme; the following link shows how to use it:
http://www.howtogeek.com/130665/quickly-and-automatically-restart-a-windows-program-when-it-crashes/
I have a python script and am wondering: is there any way to ensure that the script runs continuously on a remote computer? For example, if the script crashes for whatever reason, is there a way to start it up automatically instead of having to remote desktop in? Are there any other factors I have to be aware of? The script will be running on a Windows machine.
Many ways. In the case of Windows, even a simple looping batch file would probably do: just have it start the script in a loop, so whenever the script crashes it returns to the shell and is restarted.
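The same looping idea can also be written in Python, in case you'd rather keep everything in one language. supervise, the target script name, and the restart delay are illustrative choices, not a standard API:

```python
# A Python take on the looping-batch-file idea: restart the target script
# whenever it exits with an error. max_restarts exists only so the loop
# can be bounded (use None in practice, which loops forever).
import subprocess
import sys
import time

def supervise(cmd, max_restarts=None, delay=5):
    """Run cmd, restarting after each non-zero exit; return restart count."""
    restarts = 0
    while max_restarts is None or restarts <= max_restarts:
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return restarts  # clean exit: stop supervising
        restarts += 1
        time.sleep(delay)  # brief pause so a crash loop doesn't spin
    return restarts

if __name__ == "__main__":
    supervise([sys.executable, "your_script.py"])  # hypothetical target
```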
Maybe you can use XML-RPC to call functions and pass data. Some time ago I did something like what you're asking by using SimpleXMLRPCServer and xmlrpc.client. There are examples of simple configurations in the docs.
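A minimal sketch of that setup using only the standard library. ping is a made-up health-check function name, and server and client would normally run on different machines; they share one process here just to keep the example self-contained:

```python
# Minimal XML-RPC health check: expose a ping() function on the machine
# running the script, and call it from elsewhere to confirm it is alive.
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Port 0 asks the OS for any free port; use a fixed port in practice.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda: "alive", "ping")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy(f"http://127.0.0.1:{port}")
print(proxy.ping())  # prints: alive
```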
Depends on what you mean by "crash". If it's just exceptions and the like, you can catch everything and restart your process from within itself. If it's more than that, one possibility is to run it as a daemon spawned from a separate python process that acts as a supervisor. I'd recommend supervisord, but that's UNIX-only. You could clone a subset of its functionality, though.