I have GUI code written in PyQt in main.py that I want to start up automatically after startx starts.
I've already configured my BeagleBone (Debian) to run startx on power up.
I initially included the following in /etc/X11/xinitrc:
#/usr/bin/python3 /root/PyQt/main.py
This worked perfectly until I deleted some files from /root to create space on my BeagleBone. I'm not sure exactly what I deleted (mostly log files), but I might also have deleted the .XAuthority and .bash_profile files and the .config and .dbus folders.
Ever since then, it hasn't been autostarting my main.py on boot. Even now, after a new .XAuthority, .bash_profile, etc. have been created, it still isn't auto-starting my program.
Is there a way to fix this? Or another way to autostart main.py?
Note: I'm running Debian with LXQt on my BeagleBone.
Run sudo nano /etc/rc.local, then add the command at the end of the file, right before exit 0. Something like:
python /path/to/file/main.py &
exit 0
Remember to add & at the end of the command so that it runs in the background (as a daemon) and won't block the OS from booting if it contains an infinite loop.
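For reference, here is a minimal sketch of what the whole /etc/rc.local could look like with that line added (the interpreter and script path are taken from the question above; the log redirection is just an assumption to make boot-time failures easier to debug):

#!/bin/sh -e
# rc.local - executed at the end of boot.
# Launch the script in the background so boot is not blocked,
# and keep its output in a log file for troubleshooting.
/usr/bin/python3 /root/PyQt/main.py > /var/log/main_py.log 2>&1 &
exit 0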
Related
I need to start a python program when the system boots. It must run in the background (forever) such that opening a terminal session and closing it does not affect the program.
I have demonstrated that by using tmux this can be done manually from a terminal session. Can the equivalent be done from a script that is run at bootup?
Then where does one put that script so that it will be run on bootup?
Create a startup script that runs on boot and launches the desired Python program in the background.
Here are the steps:
Create a shell script that launches the Python program in the background:
#!/bin/sh
python /path/to/your/python/program.py &
Make the shell script executable:
chmod +x /path/to/your/script.sh
Add the script to the startup applications:
On Ubuntu, this can be done by going to the Startup Applications program and adding the script.
On other systems, you may need to add the script to the appropriate startup folder, such as /etc/rc.d/ or /etc/init.d/.
After these steps, the Python program should start automatically on boot and run in the background.
It appears that in addition to putting a script that starts the program in /etc/init.d, one also has to put a link in /etc/rc2.d with
sudo ln -s /etc/init.d/scriptname.sh
sudo mv scriptname.sh S01scriptname.sh
The S01 prefix was just copied from the naming of the other files in /etc/rc2.d.
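Putting those two steps together (a sketch only; it assumes the start-up script has already been placed in /etc/init.d and made executable):

cd /etc/rc2.d
sudo ln -s /etc/init.d/scriptname.sh S01scriptname.sh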
I have a python script which will install an application:
os.system("path/to/my.exe /VERYSILENT")
When I do this, for example, I would install Git.
Later on, the application will call:
os.system("git --version")
which fails because it doesn't know what git is.
From what it looks like, the system variables etc. are all grabbed when you import os, so could I somehow re-import os right after installing the application and then carry on?
My desired end state is to refresh CMD, similar to how you would close a terminal and open a new one.
Sub-shells (as in os.system(..)) cannot affect the execution environment of the parent process (it would be a huge security hole). You can update the permanent user environment with e.g. PowerShell ([environment]::SetEnvironmentVariable($key, $val, "User")). Any processes started afterwards will see the new environment variable (this is why you need to close your cmd window and start a new one).
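A minimal sketch of that idea in Python (assuming Windows and Python 3; refresh_path is a hypothetical helper, it only refreshes PATH, and it reads the standard machine and user environment keys from the registry):

import os
import winreg

def refresh_path():
    # Hypothetical helper: re-read PATH from the registry so that child
    # processes started later (os.system, subprocess) see entries written
    # by an installer after this script started.
    parts = []
    for root, subkey in (
        (winreg.HKEY_LOCAL_MACHINE,
         r"SYSTEM\CurrentControlSet\Control\Session Manager\Environment"),
        (winreg.HKEY_CURRENT_USER, r"Environment"),
    ):
        with winreg.OpenKey(root, subkey) as key:
            try:
                value, _ = winreg.QueryValueEx(key, "Path")
            except FileNotFoundError:
                continue
            # Expand placeholders such as %SystemRoot% that REG_EXPAND_SZ
            # values may contain.
            parts.append(winreg.ExpandEnvironmentStrings(value))
    os.environ["PATH"] = os.pathsep.join(parts)

os.system("path/to/my.exe /VERYSILENT")  # the installer call from the question
refresh_path()
os.system("git --version")               # now resolved via the refreshed PATH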
I have a Python script that should open my Linux terminal, browser, file manager and text editor on system startup. I decided crontab is a suitable way to run the script automatically. Unfortunately, it didn't go well; nothing happened when I rebooted my laptop. So I captured the output of the script to a file in order to get some clues. It seems my script is only partially executed. I use Debian 8 (Jessie), and here's my Python script:
#!/usr/bin/env python3
import subprocess
import webbrowser

def action():
    subprocess.call('gnome-terminal')
    subprocess.call('subl')
    subprocess.call(('xdg-open', '/home/fin/Documents/Learning'))
    webbrowser.open('https://reddit.com/r/python')

if __name__ == '__main__':
    action()
here's the entry in my crontab file:
@reboot python3 /home/fin/Labs/my-cheatcodes/src/dsktp_startup_script/dsktp_startup_script.py > capture_report.txt
Here's the content of the capture_report.txt file (I trimmed several lines since it was too long; it only prints my folder structure, which seems to come from the 'xdg-open' line in the Python script):
Directory list of /home/fin/Documents/Learning/
Type Format Sort
[Tree ] [Standard] [By Name] [Update]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
/
... the rest of my directory structure goes here
I have no other clue about what could possibly be going wrong here. I'd really appreciate your advice. Thanks.
No, cron is not suitable for this. The cron daemon has no connection to your user's desktop session, which will not be running at system startup, anyway.
My recommendation would be to hook into your desktop environment's login scripts, which are responsible for starting various desktop services for you when you log in anyway, and which are easily extended with your own scripts.
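For example, most desktop environments honor XDG autostart entries, so (a sketch; the script path is the one from the question, and the filename is arbitrary) a file like ~/.config/autostart/dsktp_startup_script.desktop could contain:

[Desktop Entry]
Type=Application
Name=Desktop startup script
Exec=python3 /home/fin/Labs/my-cheatcodes/src/dsktp_startup_script/dsktp_startup_script.py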
I'd do as tripleee suggested, but your job might be failing because it requires an X session, since you're trying to open a browser. You should put export DISPLAY=:0; after the schedule in your cronjob, as in
@reboot export DISPLAY=:0; python3 /home/fin/Labs/my-cheatcodes/src/dsktp_startup_script/dsktp_startup_script.py > capture_report.txt
If this doesn't work, you could try replacing :0 with the output of echo $DISPLAY in a graphical terminal.
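As an extra debugging aid (my assumption, not part of the original suggestion), redirecting stderr into the same file makes any failure visible in capture_report.txt as well:

@reboot export DISPLAY=:0; python3 /home/fin/Labs/my-cheatcodes/src/dsktp_startup_script/dsktp_startup_script.py > capture_report.txt 2>&1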
Ran into an additional issue with supervisord.
Centos 6.5
supervisor
python 2.6 installed with the OS
python 2.7 installed in /usr/local/bin
supervisord program settings
[program:inf_svr]
process_name=inf_svr%(process_num)s
directory=/opt/inf_api/
environment=USER=root,PYTHONPATH=/usr/local/bin/
command=python2.7 /opt/inf_api/inf_server.py --port=%(process_num)s
startsecs=2
user=root
autostart=true
autorestart=true
numprocs=4
numprocs_start=8080
stderr_logfile = /var/log/supervisord/tornado-stderr.log
stdout_logfile = /var/log/supervisord/tornado-stdout.log
I can run inf_server.py with:
python2.7 inf_server.py --port=8080
with no problems.
I made sure the files were executable (that was my problem before).
Any thoughts?
UPDATE:
I can't even get it to launch a basic Python script without failing.
I started by commenting out the old program, adding a new one, and then putting in:
command=python /opt/inf_api/test.py
where test.py just writes something to the screen and to a file. Fails with exit status 0.
So I started adding back in the location of python (after discovering it with 'which python')
environment=PYTHONPATH=/usr/bin
Tried putting the path in single quotes, tried adding USER=root to the environment, tried adding
directory=opt/inf_api/
tried adding
user=root
All the same thing: exit status 0. Nothing seems to be added to any log files either, except what I'm seeing from supervisord's debug output.
Man, I am at a loss.
This turns out to be an issue with how supervisord catches error messages from Python. As in, it isn't. I'm running it to launch a tornado app that calls a second Python file so it can spawn n instances of tornado servers. If there are errors in that second Python app, supervisord isn't catching them and saving them to the log files. I tried all manner of methods but ended up having to catch them myself with try: except: and save them to my own log files. Probably good practice anyway, but talk about a roundabout way of going about it.
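Roughly what that workaround looks like (a sketch only; the start-up code and the log path are placeholders, not the actual inf_server.py):

import logging
import traceback

logging.basicConfig(filename='/var/log/supervisord/inf_server_errors.log',
                    level=logging.DEBUG)

def main():
    # placeholder for the real tornado start-up code
    ...

if __name__ == '__main__':
    try:
        main()
    except Exception:
        # supervisord wasn't capturing these tracebacks, so log them ourselves
        logging.error(traceback.format_exc())
        raise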
Not sure if anyone has run into this, but I'll take suggestions for troubleshooting and/or alternative methods.
I have a Windows 2008 server on which I am running several scheduled tasks. One of those tasks is a python script that uses pscp to log into a linux box, checks for new files and if there is anything new, copies them down to a local directory on the C: drive. I've put some logging into the script at key points as well and I'm using logging.basicConfig(level=DEBUG).
I built the command using a variable, command = 'pscp -pw xxxx name@ip:/ c:\local_dir' and then I use subprocess.call(command) to execute the command.
Now here's the weird part. If I run the script manually from the command line, it works fine. New files are downloaded and processed. However, if the Task Scheduler runs the script, no new files are downloaded. The script is running under the same user, yet yields different results.
According to the log files created by the script and on the linux box, the script successfully logs into the linux box. However, no files are downloaded despite there being new files. Again, when I run it via the command line, files are downloaded.
Any ideas? suggestions, alternative methods?
Thanks.
You can use the windows Task Scheduler, but make sure the "optional" field "Start In" is filled in.
In the Task Scheduler app, add an action that specifies your Python file to run ("doSomeWork") and fill in the Start in (optional) input with the directory that contains the file. So, for example, if you have a Python file in:
C:\pythonProject\doSomeWork.py
You would enter:
Program/Script: doSomeWork.py
Start in (optional): C:\pythonProject
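Relatedly, the likely underlying cause is that Task Scheduler starts the script with a different working directory; making the script resolve paths relative to its own location sidesteps that entirely (a sketch; the "downloads" folder name is just an illustration):

import os

# Resolve paths relative to the script file itself instead of whatever
# working directory Task Scheduler happened to use.
script_dir = os.path.dirname(os.path.abspath(__file__))
local_dir = os.path.join(script_dir, "downloads")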
I had the same issue when trying to open an MS Access database on a Linux VM. Running the script at the Windows 7 command prompt worked but running it in Task Scheduler didn't. With Task Scheduler it would find the database and verify it existed but wouldn't return the tables within it.
The solution was to have Task Scheduler run cmd as the Program/Script with the arguments /c python C:\path\to\script.py (under Add arguments (optional)).
I can't tell you why this works but it solved my problem.
I'm having a similar issue. In testing I found that any type of call with subprocess stops the python script when run in task scheduler but works fine when run on the command line.
import subprocess
print('Start')
test = subprocess.check_output(["dir"], shell=True)
print('First call finished')
When run on command line this outputs:
Start
First call finished
When run from task scheduler the output is:
Start
In order to get the output from task scheduler I run the python script from a batch file as follows:
python test.py >> log.txt
I run the script through the batch file both on command line and through task scheduler.
Brad's answer is right. subprocess needs the shell context to work, and Task Scheduler can launch Python without that. Another way to do it is to make a batch file, launched by Task Scheduler, that calls python c:\path\to\script.py etc. The only difference is that if the script has a call to os.getcwd(), with the batch file you will always get the folder where the script is, but you get something else when the call to cmd is made from Task Scheduler.
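A sketch of such a batch file (paths are placeholders), which also pins the working directory explicitly and captures output for troubleshooting:

cd /d C:\path\to
python C:\path\to\script.py >> C:\path\to\log.txt 2>&1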
Last edit - start
After experimenting: if you put the full path to the Python executable there, it works without highest privileges (i.e., without running as admin). Meaning task settings like this:
program: "C:\Program Files\Python37\python.exe"
arguments: "D:\folder\folder\python script.py"
I have no idea why, but it works even if the script uses subprocess and multiple threads.
Last edit - end
What I did is change the task settings: I checked Run with highest privileges, and the task started to work perfectly while running python [script path].
But keep in mind that the window title then always contains "Administrator: " at the beginning.
P.S. Thanks guys for pointing out that subprocess is a problem. It made me think of task settings.
I had a similar problem where one script ran from Windows Task Scheduler and another one didn't.
Running cmd with python [script path] didn't work for me on Windows 8.1 Embedded x64. Not sure why; probably because of spaces in the path and an issue with quotes.
Hope my answer helps someone. ;)
Create a batch file, add your Python script to it, and then schedule that batch file. It will work.
Example: suppose your Python script is c:\abhishek\script\merun.py
First you have to go to the directory with the cd command, so your batch file would be like:
cd c:\abhishek\script
python merun.py
It worked for me.
Just leaving this for posterity: a similar issue I faced was resolved by using the UNC path (\\10.x.xx.xx\Folder\xxx) everywhere in my .bat and .py scripts instead of the letter assigned to the drive (K:\Folder\xxx).
I had this issue before. I was able to run the task manually in Windows Task Scheduler, but not automatically. I remembered that there was a change in the time made by another user; maybe this change made Task Scheduler error out, I am not sure. Therefore, I created another task with a different name for the same script, and the script ran automatically. Try creating a test task running the same script. Hopefully that works!
For an Anaconda installation of Python on Windows, the solution below worked for me.
Create a batch file with the following:
"C:\Users\username\Anaconda3\condabin\activate" && python "script.py" &&
deactivate
Set up a task to run this batch file.