Finding output of watch [script.py] command that I already ran - python

A couple of days ago I ran a script using watch:
watch -c -n 300 python3 some_script.py
I didn't think I would want to look back at the output, so I didn't redirect it to a separate log.txt file.
Is there any way I can look the output up?
The watch command is still running and the session is active.
I wasn't able to find any solution for this, but maybe I didn't look hard enough. If so, sorry.
Best regards,
matmakonen

Use an external command, which is tee.
So your final command should be:
watch -c -n 300 python3 some_script.py | tee log.txt

As @RamanSailopal mentioned, there are no historic logs saved. But if you want to capture the output to a file in future runs, you do not necessarily need external packages. You can use &>, >>, or >.
See here and here for more details.
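If you control some_script.py itself, another option for future runs is to have the script write its own log file with the standard logging module, independent of how watch invokes it. A minimal sketch (the file name and message are only placeholders):

import logging

logging.basicConfig(
    filename="log.txt",               # hypothetical path for the log file
    level=logging.INFO,
    format="%(asctime)s %(message)s",
)
logging.info("run finished")          # each watch invocation appends a line here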

Related

Python Check Permissions?

Please note, I'm NOT asking about which user is running it, but rather whether they ran it as a sudo command or not.
I have a Python script which I want to require the user to run like this:
sudo python3 script.py
and not like this:
python3 script.py
How can I verify at run time whether it was run correctly or not (and if not, end the program using sys.exit())?
Check out this topic:
What is the best way for checking if the user of a script has root-like privileges?
It has a bunch of solutions that you might find helpful.
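For reference, the usual check looks something like the sketch below. os.geteuid() == 0 covers both sudo python3 script.py and a real root login; if you strictly need to know that sudo was used, sudo also exports environment variables such as SUDO_UID (the error message here is made up):

import os
import sys

# Require root-like privileges (covers "sudo python3 script.py" and a root login).
if os.geteuid() != 0:
    sys.exit("Please run this script with sudo: sudo python3 script.py")

# If it matters that sudo specifically was used (not a plain root shell),
# check the environment variables sudo sets:
launched_via_sudo = "SUDO_UID" in os.environ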

strange problem with running bash-scripts from python in docker

I have a Python script which runs bash scripts via the subprocess library. I need to collect stdout and stderr to files, so I have a wrapper like:
def execute_chell_script(stage_name, script):
    subprocess.check_output('{} &>logs/{}'.format(script, stage_name), shell=True)
And it works correctly when I launch my Python script on a Mac. But if I launch it in a Docker container (FROM ubuntu:18.04) I can't see any log files. I can fix it if I use bash -c 'command &>log_file' instead of just command &>log_file inside subprocess.check_output(...). But it looks like too much magic.
I thought about the default shell of the user which launches the Python script (it's root), but cat /etc/passwd shows root ... /bin/bash.
It would be nice if someone could explain to me what happened. And maybe I can add some lines to the Dockerfile so that the same Python script works both inside and outside the Docker container?
As the OP reported in a comment that this fixed their problem, I'm posting it as an answer so they can accept it.
Using check_output when you don't expect any output is odd, and requiring shell=True here is misdirected. (The &> redirection is a bashism: shell=True runs the command with /bin/sh, which is bash on macOS but dash on Ubuntu, and dash parses command &>file as "run command in the background, then truncate file".) You want:
with open(os.path.join('logs', stage_name), 'w') as output:
    subprocess.run([script], stdout=output, stderr=output)
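For completeness, the whole wrapper might then look like the sketch below (it keeps the original function name and assumes the logs directory may not exist yet):

import os
import subprocess

def execute_chell_script(stage_name, script):
    # Write both stdout and stderr of the script to logs/<stage_name>,
    # with no shell involved at all.
    os.makedirs('logs', exist_ok=True)
    with open(os.path.join('logs', stage_name), 'w') as output:
        subprocess.run([script], stdout=output, stderr=output, check=True)

check=True preserves the error-raising behaviour of the original check_output call if the script exits with a non-zero status.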

Running a python script on a linux machine after every few minutes, only if its not already running

I have a Python script which has to run once every 3 minutes. The process it starts can run for more than 10 minutes, though, so I need to make sure that if it's already running, it isn't started again. I need to do this without any database involvement.
The approach I have used is a cron job, via the following entry in the crontab:
*/3 * * * * sudo ps aux|grep -v grep|grep "python XMLProcessor.py" || cd /home/ubuntu/git/perpule-python-subscriber; sudo python XMLProcessor.py
It runs smoothly. But the problem is that once in a while, even after the process ends, the command sudo ps aux|grep -v grep|grep "python XMLProcessor.py" still produces output, because of which the Python script doesn't run.
Please suggest a better approach or rectify the one I'm using. All suggestions are appreciated.
The approach you are using has some problems, as ps can report unexpected things. Are you sure that your Python program has a unique name? Could there be race conditions (it could run the program twice)?
A typical way of doing this is to touch a file at the start of the process and remove it at the end:
if [ ! -f .working ]; then
    touch .working && \
    python do_something.py && \
    rm .working
fi
Then you can check if it's working by checking for that file. However, there are multiple problems with this approach: what happens if the process crashes? Should you remove the touched file? Is it possible to remove it for every possible crash? Then you need to add timeouts, and it starts getting complicated.
The proper and safer solution, then, is to use some sort of server or tool that checks whether your job is running and, if not, runs it. I have used luigi to do something similar and it integrates quite well with Python code, so you could give it a try.
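As a middle ground between the touch-file approach and a full scheduler, you can let the kernel do the bookkeeping: an fcntl.flock lock on a file is released automatically when the process exits, even after a crash, so there is no stale-file problem. A minimal sketch (the lock path is arbitrary and the sleep only stands in for the real work of XMLProcessor.py):

import fcntl
import sys
import time

lock_file = open("/tmp/xml_processor.lock", "w")   # hypothetical lock path
try:
    # Non-blocking exclusive lock: fails immediately if another instance holds it.
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit(0)   # a previous run is still in progress; skip this cycle

time.sleep(1)     # placeholder for the long-running job

# No cleanup needed: the lock disappears with the process, crash or not.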

Python open default terminal, execute commands, keep open, AND then allow user-input

I want to open a terminal from a Python script (not one marked as executable, but actually doing python3 myscript.py to run it), have the terminal run commands, and then keep the terminal open and let the user type commands into it.
EDIT (as suggested): I primarily need this for Linux (I'm using Xubuntu, Ubuntu and the like). It would be really nice to know Windows 7/8 and Mac methods too, since I'd like a cross-platform solution in the long run. Input for any system would be appreciated, however.
Just so people know some useful stuff pertaining to this, here's some code that may be difficult to come up with without some research. This doesn't allow user input, but it does keep the window open. The code is specifically for Linux:
import subprocess, shlex
myFilePathString = "/home/asdf asdf/file.py"
params = shlex.split('x-terminal-emulator -e bash -c "python3 \'' + myFilePathString + '\'; echo \'(Press any key to exit the terminal emulator.)\'; read -n 1 -s"')
subprocess.call(params)
To open it with the Python interpreter running afterward, which is about as good as, if not better than, what I'm looking for, try this:
import subprocess, shlex
myFilePathString = "/home/asdf asdf/file.py"
params = shlex.split('x-terminal-emulator -e bash -c "python3 -i \'' + myFilePathString + '\'"')
subprocess.call(params)
I say these examples may take some time to come up with because passing parameters to bash, which is being opened within another command, can be problematic without taking a few steps. Plus, you need to know to use quotes in the right places, or else, for example, if there's a space in your file path, you'll have problems and might not know why.
EDIT: For clarity (and part of the answer), I found out that there's a standard way to do this in Windows:
cmd /K [whatever your commands are]
So, if you don't know what I mean, try that and see what happens. Here's the URL where I found the information: http://ss64.com/nt/cmd.html
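Launching that from Python would presumably look something like the snippet below (an untested, Windows-only sketch; the path is made up, and CREATE_NEW_CONSOLE asks for a fresh console window so the cmd /K prompt stays open and interactive afterwards):

import subprocess

myFilePathString = r"C:\Users\asdf asdf\file.py"   # hypothetical path
subprocess.call(
    'cmd /K python "{}"'.format(myFilePathString),
    creationflags=subprocess.CREATE_NEW_CONSOLE,
)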

nohup not logging in nohup.out

I am running a Python script under web2py and want to log its output. I am using the following command:
nohup python /var/www/web2py/web2py.py -S cloud -M -N -R applications/cloud/private/process.py >>/var/log/web2pyserver.log 2>&1 &
The process is running but it is not logging to the file. I have tried without nohup as well, but it is still the same.
The default logging of nohup to nohup.out is also not working.
Any suggestion what might be going wrong?
Nothing to worry about. The Python process, along with nohup, was writing to the file in a buffered (batch) fashion, and I could see the output only after quite some time rather than instantaneously.
nohup will try to create the file in the current directory. Can you create a file in the folder you are running it from?
If you've got commas in your print statements, there's a good chance it's due to buffering. You can put a sys call (I forget which) in your code, or when you run it under nohup, just add the -u option and you'll disable std(in|out|err) buffering.
Don't worry about this; it is because of the buffering mechanism. Running your Python script with the -u flag will solve the problem:
nohup python -u code.py > code.log &
or just
nohup python -u code.py &
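If you would rather not rely on the -u flag, the same effect can be had from inside the script: print accepts a flush argument since Python 3.3, and on 3.7+ you can switch stdout to line buffering once at startup. A small sketch:

import sys
import time

# Option 1: flush explicitly on the prints you care about.
for i in range(3):
    print("heartbeat", i, flush=True)   # bypasses stdout buffering per call
    time.sleep(1)

# Option 2 (Python 3.7+): make stdout line-buffered for the whole run.
# sys.stdout.reconfigure(line_buffering=True)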
