Supervisord (exit status 1; not expected), CentOS, Python

Ran into an additional issue with Supervisord.
CentOS 6.5
supervisor
Python 2.6 installed with the OS
Python 2.7 installed in /usr/local/bin
Supervisord program settings:
[program:inf_svr]
process_name=inf_svr%(process_num)s
directory=/opt/inf_api/
environment=USER=root,PYTHONPATH=/usr/local/bin/
command=python2.7 /opt/inf_api/inf_server.py --port=%(process_num)s
startsecs=2
user=root
autostart=true
autorestart=true
numprocs=4
numprocs_start=8080
stderr_logfile = /var/log/supervisord/tornado-stderr.log
stdout_logfile = /var/log/supervisord/tornado-stdout.log
I can run inf_server.py with:
python2.7 inf_server.py --port=8080
with no problems.
I made sure the files were executable (that was my problem before).
Any thoughts?
UPDATE:
I can't get it to even launch a basic Python script without failing.
I started by commenting out the old program, adding a new one, and then putting in:
command=python /opt/inf_api/test.py
where test.py just writes something to the screen and to a file. It fails with exit status 0.
So I started adding back in the location of python (after discovering it with 'which python'):
environment=PYTHONPATH=/usr/bin
Tried putting the path in single quotes, tried adding USER=root to the environment, tried adding
directory=opt/inf_api/
tried adding
user=root
All the same thing: exit status 0. Nothing seems to be added to any log files either, except what I'm seeing from the debug output of supervisord.
Man, I am at a loss.

This turns out to be an issue with how Supervisord catches error messages from Python. As in, it doesn't. I'm running it to launch a Tornado app, which calls a second Python file so it can spawn n instances of Tornado servers. If there are errors in that second Python file, Supervisord doesn't catch them or save them to the log files. I tried all manner of methods but ended up having to catch them myself with try/except and write them to my own log files. Probably good practice anyway, but talk about a roundabout way of going about it.

Related

automatically starting GUI program on startx

I have GUI code written in PyQt in main.py that I want to start up automatically after startx starts.
I've already configured my beaglebone (Debian) to run startx on power up.
I initially included the following in /etc/x11/xinitrc:
#/usr/bin/python3 /root/PyQt/main.py
This worked perfectly until I deleted some files from /root to create space on my beaglebone. I'm not sure what exactly I deleted (mostly log files) but I might have also deleted the .XAuthority, .bash_profile, .config folder, .dbus folder.
Ever since then, it hasn't been autostarting my main.py on boot. Even now, after new .XAuthority, .bash_profile, etc have been created, it still isn't auto-starting my program.
Is there a way to fix this? Or another way to autostart main.py?
Note: I'm running Debian on my beaglebone and lxqt.
sudo nano /etc/rc.local, then add the command at the end of the file, right before exit 0. Something like:
python /path/to/file/main.py &
exit 0
Remember to add & at the end of the command so it runs in the background and won't stop the OS from booting if the script has an infinite loop in it.

how to properly run Python script with crontab on every system startup

I have a Python script that should open my Linux terminal, browser, file manager and text editor on system startup. I decided crontab was a suitable way to run the script automatically. Unfortunately, it didn't go well: nothing happened when I rebooted my laptop. So I captured the output of the script to a file in order to get some clues. It seems my script is only partially executed. I use Debian 8 (Jessie), and here's my Python script:
#!/usr/bin/env python3
import subprocess
import webbrowser

def action():
    subprocess.call('gnome-terminal')
    subprocess.call('subl')
    subprocess.call(('xdg-open', '/home/fin/Documents/Learning'))
    webbrowser.open('https://reddit.com/r/python')

if __name__ == '__main__':
    action()
here's the entry in my crontab file:
@reboot python3 /home/fin/Labs/my-cheatcodes/src/dsktp_startup_script/dsktp_startup_script.py > capture_report.txt
Here's the content of the capture_report.txt file (I trimmed several lines since it's too long; it only prints my folder structures, which seem to come from the 'xdg-open' line in the Python script):
Directory list of /home/fin/Documents/Learning/
Type Format Sort
[Tree ] [Standard] [By Name] [Update]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
/
... the rest of my dir structures goes here
I have no other clue what could possibly be going wrong here. I really appreciate your advice, guys. Thanks.
No, cron is not suitable for this. The cron daemon has no connection to your user's desktop session, which will not be running at system startup, anyway.
My recommendation would be to hook into your desktop environment's login scripts, which are responsible for starting various desktop services for you when you log in, anyway, and easily extended with your own scripts.
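The login-script hook described above can be sketched as an XDG autostart entry, which most desktop environments (including LXQt) read from ~/.config/autostart/. The filename is illustrative; the Exec path is taken from the question:

```ini
# ~/.config/autostart/pyqt-main.desktop (illustrative filename)
[Desktop Entry]
Type=Application
Name=PyQt main.py
Exec=/usr/bin/python3 /root/PyQt/main.py
```

Unlike cron, this runs only after the desktop session is up, so DISPLAY and the rest of the session environment are already available.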
I'd do as tripleee suggested, but your job might be failing because it requires an X session, since you're trying to open a browser. You should put export DISPLAY=:0; after the schedule in your cronjob, as in
@reboot export DISPLAY=:0; python3 /home/fin/Labs/my-cheatcodes/src/dsktp_startup_script/dsktp_startup_script.py > capture_report.txt
If this doesn't work, you could try replacing :0 with the output of echo $DISPLAY in a graphical terminal.

Status Information: 127, plugin may not be installed

So, the title pretty much says it. I'm trying to customize several plugins for nagios and several of them have to be in python.
I'm running Centos 6.5, Python 2.6.6, and Nagios Core 3.5.1
I've installed Nagios and Python using the yum repository, and everything works when run from the command line, even as the nagios user. I can get bash scripts to run from the Nagios system just fine, but even wrapping the Python in a bash script doesn't work. Whatever I run, even something as simple as
echo `/usr/bin/python --version`
returns an empty or null string.
It also apparently exits with status zero (even when the code should have produced something else) no matter what I do. This problem appears to be specific to Python and not to have anything to do with basic permissions. It might have something to do with ACLs, though I have no idea what. Does anyone have any ideas about what might be going wrong?
Nagios can display some really odd behaviors when things exit with an unknown status and no output. It turns out that a good first debugging step is to try adding something like
<command> 2>&1
or
echo `<command> 2>&1`
to your plugin call to check what stderr is telling you.
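For reference, a Nagios plugin is expected to print a single status line on stdout and exit with a conventional code. A minimal sketch, where the metric and thresholds are hypothetical:

```python
#!/usr/bin/env python
import sys

# Conventional Nagios plugin exit codes.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check(value, warn=80, crit=90):
    """Classify a metric against warning/critical thresholds."""
    if value >= crit:
        return CRITICAL, "CRITICAL - value is %d" % value
    if value >= warn:
        return WARNING, "WARNING - value is %d" % value
    return OK, "OK - value is %d" % value

code, message = check(75)  # hypothetical metric value
print(message)             # Nagios reads this single line of output
# sys.exit(code) would then report the status to Nagios
```

A status of 127 specifically means the shell could not execute the command at all, which is why checking stderr with 2>&1, as above, is the first step.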

Problems running python script by windows task scheduler that does pscp

Not sure if anyone has run into this, but I'll take suggestions for troubleshooting and/or alternative methods.
I have a Windows 2008 server on which I am running several scheduled tasks. One of those tasks is a python script that uses pscp to log into a linux box, checks for new files and if there is anything new, copies them down to a local directory on the C: drive. I've put some logging into the script at key points as well and I'm using logging.basicConfig(level=DEBUG).
I built the command using a variable, command = 'pscp -pw xxxx name@ip:/ c:\local_dir', and then I use subprocess.call(command) to execute it.
Now here's the weird part. If I run the script manually from the command line, it works fine. New files are downloaded and processed. However, if the Task Scheduler runs the script, no new files are downloaded. The script is running under the same user, yet yields different results.
According to the log files created by the script and on the linux box, the script successfully logs into the linux box. However, no files are downloaded despite there being new files. Again, when I run it via the command line, files are downloaded.
Any ideas? suggestions, alternative methods?
Thanks.
You can use the windows Task Scheduler, but make sure the "optional" field "Start In" is filled in.
In the Task Scheduler app, add an action that specifies your Python file to run "doSomeWork" and fill in the Start in (optional) input with the directory that contains the file. So, for example, if you have a Python file in:
C:\pythonProject\doSomeWork.py
You would enter:
Program/Script: doSomeWork.py
Start in (optional): C:\pythonProject
I had the same issue when trying to open an MS Access database on a Linux VM. Running the script at the Windows 7 command prompt worked but running it in Task Scheduler didn't. With Task Scheduler it would find the database and verify it existed but wouldn't return the tables within it.
The solution was to have Task Scheduler run cmd as the Program/Script with the arguments /c python C:\path\to\script.py (under Add arguments (optional)).
I can't tell you why this works but it solved my problem.
I'm having a similar issue. In testing I found that any type of call with subprocess stops the python script when run in task scheduler but works fine when run on the command line.
import subprocess
print('Start')
test = subprocess.check_output(["dir"], shell=True)
print('First call finished')
When run on command line this outputs:
Start
First call finished
When run from task scheduler the output is:
Start
In order to get the output from task scheduler I run the python script from a batch file as follows:
python test.py >> log.txt
I run the script through the batch file both on command line and through task scheduler.
Brad's answer is right. Subprocess needs the shell context to work, and Task Scheduler can launch Python without that. Another way to do it is to make a batch file, launched by Task Scheduler, that calls python c:\path\to\script.py etc. The one difference: if the script calls os.getcwd(), you always get the directory where the script is when running it directly, but you get something else when cmd is invoked from Task Scheduler.
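The underlying fix in most of these answers is the same: remove any dependence on Task Scheduler's working directory and PATH. A rough sketch, using sys.executable as a stand-in for a full interpreter path such as C:\Python37\python.exe and a -c one-liner as a stand-in for the scheduled script:

```python
import subprocess
import sys

# Invoke the interpreter by absolute path and pin the working directory,
# so behavior no longer depends on how the process was launched.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getcwd())"],
    cwd=".",             # in a real task: the script's own directory
    capture_output=True,
    text=True,
)
print(result.returncode)
```

Passing the command as a list rather than a single string also sidesteps the quoting problems with spaces in paths mentioned below.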
Last edit - start
After experimenting: if you put the full path to the Python program there, it works without highest privileges (as admin). Meaning task settings like this:
program: "C:\Program Files\Python37\python.exe"
arguments: "D:\folder\folder\python script.py"
I have no idea why, but it works even if the script uses subprocess and multiple threads.
Last edit - end
What I did is change the task settings: I checked Run with highest privileges, and the task started to work perfectly while running python [script path].
But keep in mind that the title contains "Administrator: " at the beginning... always...
P.S. Thanks, guys, for pointing out that subprocess is a problem. It made me think of the task settings.
I had a similar problem where one script ran from Windows Task Scheduler and another one didn't.
Running cmd with python [script path] didn't work for me on Windows 8.1 Embedded x64. Not sure why. Probably because of spaces in the path and an issue with quotes.
Hope my answer helps someone. ;)
Create a batch file, add your Python script to it, and then schedule that batch file. It will work.
Example: suppose your Python script is in the folder c:\abhishek\script\merun.py
First you have to change to that directory with the cd command, so your batch file would be like:
cd c:\abhishek\script
python merun.py
It worked for me.
Just leaving this for posterity: a similar issue I faced was resolved by using the UNC path (\\10.x.xx.xx\Folder\xxx) everywhere in my .bat and .py scripts instead of the letter assigned to the drive (K:\Folder\xxx).
I had this issue before. I was able to run the task manually in Windows Task Scheduler, but not automatically. I remembered that there was a change in the time made by another user, maybe this change made the task scheduler to error out. I am not sure. Therefore, I created another task with a different name, for the same script, and the script worked automatically. Try to create a test task running the same script. Hopefully that works!
For an Anaconda installation of Python on Windows, the solution below worked for me.
Create a batch file with:
"C:\Users\username\Anaconda3\condabin\activate" && python "script.py" &&
deactivate
Set up a task to run this batch file.

Python repeatedly causing a crash when executing daemons

I'm not a programmer or anything like that, but I recently installed some Python scripts and I'm struggling to execute them as part of a workflow on startup. I run an Automator workflow that executes these three separate commands:
python /Applications/Sick-Beard/Sickbeard.py --daemon
python /Applications/CouchPotatoServer/CouchPotato.py --daemon
python /Applications/Headphones/Headphones.py --daemon
I can run these commands in a terminal window without any problem. However, when executing the commands as a single workflow, or even as three separate ones in quick succession, it always yields the following error.
I'm at my wits' end. Your help is very much appreciated. The same error occurs when I create .plist LaunchAgents.
Thanks in advance; your help is more than appreciated. I had to put this on Pastebin because I kept getting formatting errors.
http://pastebin.com/qb65Mc53
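For reference, a per-command LaunchAgent might look roughly like this (the label is hypothetical; the paths are from the question). Note that under launchd the --daemon flag is usually dropped, since launchd expects to manage the process in the foreground itself:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.sickbeard</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/python</string>
        <string>/Applications/Sick-Beard/Sickbeard.py</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```

A script that forks into the background while launchd is supervising it can be killed or relaunched repeatedly, which may explain crashes that only appear outside an interactive terminal.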
