I have an embedded system on which I run code live. Every time I want to run code, I start two scripts in two different terminals: "run1.sh" and "run2.sh". I can see the output of those scripts in my terminals (and I want to keep it that way).
Now I want to make a Python script that starts those two scripts in two different terminals, and I still want to see their output. I also want to feed a password from the Python script to the terminals, since the scripts run under sudo. I've played a lot with subprocess and pipes, but I've never achieved all of the above requirements simultaneously. How can these requirements be met?
I'm using Ubuntu, btw (so I have gnome-terminal).
Update: I was probably not clear in my question, but this has to happen inside a Python script. It is not for my convenience; it's part of an integration process. The code of the script will be part of a larger Python program, so the whole point of the question is how to do it in Python.
Based on the new information you added, I've created a small Python script which launches two terminals, each with its own output:
Main script:
mortiz@florida:~/Documents/projects/python/split_python_execution$ cat split_pythonstuff.py
#!/usr/bin/python3
import subprocess
subprocess.call(['gnome-terminal', '-x', 'python', '/home/mortiz/Documents/projects/python/split_python_execution/script1.py'])
subprocess.call(['gnome-terminal', '-x', 'python', '/home/mortiz/Documents/projects/python/split_python_execution/script2.py'])
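For the original run1.sh / run2.sh use case, a minimal sketch would look like this (the paths are placeholders; note that newer gnome-terminal releases deprecate -x in favour of --):
#!/usr/bin/python3
import subprocess
# Launch each shell script in its own gnome-terminal window;
# each window keeps showing that script's output.
subprocess.call(['gnome-terminal', '--', 'bash', '/path/to/run1.sh'])
subprocess.call(['gnome-terminal', '--', 'bash', '/path/to/run2.sh'])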
Script 1:
mortiz@florida:~/Documents/projects/python/split_python_execution$ cat script1.py
#!/usr/bin/python3
while True:
    print('script 1')
Script 2:
mortiz@florida:~/Documents/projects/python/split_python_execution$ cat script2.py
#!/usr/bin/python3
while True:
    print('script 2')
From here I guess you can develop anything you want.
UPDATE: About sudo
Sudoers is a great way of controlling which things can be executed by specific users providing passwords or not.
If you add this line to /etc/sudoers, there's no need for a password when you run your command with sudo:
<YOUR_USER> ALL = NOPASSWD : /usr/bin/python <SCRIPT.py>
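For example (user name and script path are just placeholders):
mortiz ALL = NOPASSWD : /usr/bin/python /home/mortiz/script1.py
After that, sudo /usr/bin/python /home/mortiz/script1.py will run without prompting for a password.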
As far as I understand your question, you have the password stored inside the script. There's no need to do that, and it's bad practice; sudoers would be a better way.
Anyway, if you want to do it in an insecure way, then refer to this question and place it before the commands in the scripts provided in this answer.
The approach from the linked question works:
echo -e "mypassword\n" | sudo -S python test.py
You only need to apply that to the previous code.
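For example, a sketch that combines this with the terminal-launching code above (password, paths, and script names are placeholders, and this remains insecure):
#!/usr/bin/python3
import subprocess
# Each terminal feeds the password to sudo -S on stdin, then runs the script.
cmd = 'echo "mypassword" | sudo -S python /path/to/script1.py'
subprocess.call(['gnome-terminal', '--', 'bash', '-c', cmd])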
You could install Terminator and configure one profile per terminal to run any script you want.
I have a default template that loads 3 terminals and runs 3 different commands/scripts:
When I load that profile, the first terminal moves me to my projects directory and lists it, the next one runs df -h to show the available space, and the lower one shows my IP configuration.
This approach would save you lots of programming and it's quite easy.
UPDATE: It will run any command available to your terminal: bash, zsh, python, etc. If the script is local to your machine:
python <your_script_1> # first terminal profile
python <your_script_2> # second terminal profile
both would be executed "at the same time".
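If you prefer to keep the commands in Terminator's own configuration, a minimal profile sketch in ~/.config/terminator/config might look like this (profile name, script path, and exact layout are my own assumptions):
[profiles]
  [[script1]]
    use_custom_command = True
    custom_command = python /path/to/your_script_1.py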
If your scripts live on a remote target machine, simply create a bash script that uses ssh with a private key to connect to the remote machine and run the script; the result is the same in both scenarios.
EDIT: The best thing is setting colors and transparency for each terminal, so you can enjoy the penguin's selfie while you work.
On a Raspberry Pi I'm using a camera app with a web interface, and I wanted to add LED lighting by adding a NeoPixel. I have successfully done this and can now turn it on and off by running two Python scripts.
Explanation and question:
I have a python script in /usr/local/bin that is executable.
It is owned by 'root root'.
I have a shell script in /var/www/html/macros that is executable and has to run the Python script in /usr/local/bin.
The shell script is owned by 'www-data'.
When I manually run the python file, it executes the script.
When I manually run the shell script, it executes the python script.
When I run the shell script by clicking a button on my webpage, it seems to execute the shell script correctly; however, it looks like it doesn't execute the Python script.
What can I do to fix this?
I'm not that experienced with permissions, but I want to emphasize that this is a closed system that does not contain any sensitive information, so safety/best practice is not a concern. I just want to make this work.
I'm not an expert in this area, but I believe that to access /usr/local/bin/ you need root privileges, which explains why you're having success but Apache isn't.
Rather than give Apache root permissions, it's best to simply remove the requirement from the individual file you want to execute. This can be accomplished with:
$ cd /usr/local/bin
$ sudo chmod 777 your_script.py
Now, after 11 hours and a group of people thinking along, we found a solution to the problem.
The problem turned out to be that the web interface can only execute things as 'www-data', while the NeoPixel library that the Python script depends on must be executed as sudo/root.
These two factors mean that there will never be a direct way of getting the scripts to work together.
However, the idea emerged to use some sort of pipe.
A brilliant user suggested using sshpass. This allows passing a command to ssh and having it essentially be executed as the root user.
The data from the web interface is relayed via sshpass, which successfully runs the needed scripts with the needed privileges.
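As a rough sketch of the idea (host, password, and script path are all hypothetical, and storing a password like this is insecure):
import subprocess
# www-data relays the request to root over ssh on the same machine;
# sshpass supplies the password non-interactively.
subprocess.run(['sshpass', '-p', 'PASSWORD', 'ssh', 'root@localhost',
                'python3', '/usr/local/bin/neopixel_on.py'])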
Special thanks to Minty Trebor and Falcounet from the RRF for LPC/STM Discord!
I've created an app that runs on Cygwin and opens some new shells, running a Python script in each of them. The problem started when I wanted to have control over the new shells and kill them at will. After a lot of digging I decided to use the following command:
subprocess.run('mintty.exe -t {} -h always -e {} &'.format(app_name, run_app_cmd), shell = True)
and later, when I want to kill it, use:
subprocess.run('kill -2 {}'.format(apps[app].shell_pid), shell = True)
It worked pretty well until I realized that a LOT of the time the new terminal gets stuck and doesn't respond, and I don't like it. I did some more digging and found that while I thought the Python in the current mintty executes the command and opens the new terminal, what actually happens is that the Windows host opens the new mintty (the PPID of the new terminal is 1), so the signal probably gets lost somewhere on the Windows side.
The reason I want each of the scripts in a separate terminal is that each of them has a lot of output, and I want them in different windows.
Now, after all this explanation: is there any way to prevent this? I don't want these hangs to become a part of my life...
I would like to create a simple Python program that will concurrently execute 2 independent scripts. For now, the two scripts just print a sequence of numbers but my intention is to use this program to concurrently run a few Twitter streaming programs in the future.
I suspect I need to use subprocess.Popen, but I cannot quite get my head around what arguments I should put in there. There was a similar question on Stack Overflow, but the code provided there (pasted below) doesn't print anything. I would appreciate your help.
My files are:
thread1.py
thread2.py
import subprocess
subprocess.Popen(['screen', './thread1.py'])
subprocess.Popen(['screen', './thread2.py'])
Use supervisord
supervisord is a process control system built just for the purpose of running multiple command-line scripts.
It features:
multiple controlled processes
autorestarting of failed runs
logging of stdout and stderr output
starting scripts in order (using priorities)
a command-line utility to view the latest log output and to stop, start, or restart the processes
This solution works only on *nix based systems, it is not available on Windows.
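A minimal supervisord.conf sketch for the two scripts might look like this (program names, paths, and log locations are hypothetical):
[program:thread1]
command=/usr/bin/python /path/to/thread1.py
autorestart=true
stdout_logfile=/var/log/thread1.log

[program:thread2]
command=/usr/bin/python /path/to/thread2.py
autorestart=true
stdout_logfile=/var/log/thread2.log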
As wanderlust mentioned, why do you want to do it this way and not via the Linux command line?
Otherwise, the solution you posted is doing what it is meant to do, i.e., you are effectively doing this at the command line:
screen ./thread1.py
screen ./thread2.py
Each call opens a screen session and runs the program with its output inside that session, so you will not see the output on your terminal directly. To troubleshoot your output, just execute the scripts without the screen call:
import subprocess
subprocess.Popen(['./thread1.py'])
subprocess.Popen(['./thread2.py'])
Content of thread1.py:
#!/usr/bin/env python
def countToTen():
    for i in range(10):
        print(i)
countToTen()
Content of thread2.py:
#!/usr/bin/env python
def countToHundreds():
    for i in range(10):
        print(i*100)
countToHundreds()
Then don't forget to do this on the command line:
chmod u+x thread*.py
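If you do want to keep the screen approach, here is a hedged sketch using detached, named sessions (the session names are my own choice), so you can later reattach with screen -r thread1 to see the output:
import subprocess
# -dmS starts each script in a detached screen session with a name,
# so its output can be inspected later by reattaching.
subprocess.Popen(['screen', '-dmS', 'thread1', './thread1.py'])
subprocess.Popen(['screen', '-dmS', 'thread2', './thread2.py'])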
You can also just open several Command Prompt windows to run several Python programs at once - just run one in each of them:
In each Command Prompt window, go to the correct directory (such as C:/Python27) and then type 'python YourCodeNo1.py' in one Command Prompt window, 'python YourCodeNo2.py' in the next one, etc.
I'm currently running 3 programs at once this way, without slowing any of them down.
I want to open a terminal from a Python script (not one marked as executable, but actually run with python3 myscript.py), have the terminal run commands, and then keep the terminal open and let the user type commands into it.
EDIT (as suggested): I primarily need this for Linux (I'm using Xubuntu, Ubuntu and the like). It would be really nice to know Windows 7/8 and Mac methods too, since I'd like a cross-platform solution in the long run. Input for any system would be appreciated, however.
Just so people know some useful stuff pertaining to this, here's some code that may be difficult to come up with without some research. This doesn't allow user input, but it does keep the window open. The code is specifically for Linux:
import subprocess, shlex
myFilePathString = "/home/asdf asdf/file.py"
params = shlex.split('x-terminal-emulator -e bash -c "python3 \'' + myFilePathString + '\'; echo \'(Press any key to exit the terminal emulator.)\'; read -n 1 -s"')
subprocess.call(params)
To open it with the Python interpreter running afterward, which is about as good as, if not better than, what I'm looking for, try this:
import subprocess, shlex
myFilePathString = "/home/asdf asdf/file.py"
params = shlex.split('x-terminal-emulator -e bash -c "python3 -i \'' + myFilePathString + '\'"')
subprocess.call(params)
I say these examples may take some time to come up with because passing parameters to bash, which is itself being opened within another command, can be problematic without taking a few steps. Plus, you need to use quotes in the right places, or else, for example, if there's a space in your file path, you'll have problems and might not know why.
EDIT: For clarity (and part of the answer), I found out that there's a standard way to do this in Windows:
cmd /K [whatever your commands are]
So, if you don't know what I mean, try that and see what happens. Here's the URL where I found the information: http://ss64.com/nt/cmd.html
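As a hedged Python sketch of that Windows approach (the script name is hypothetical): start opens a new Command Prompt window, and /K keeps it open after the command finishes.
import subprocess
# 'start' opens a new Command Prompt window; cmd /K runs the command
# and then keeps the window open for the user to type into.
subprocess.call('start cmd /K python myscript.py', shell=True)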
I am developing a FUSE filesystem in Python. The problem is that after mounting the filesystem I have no access to stdin/stdout/stderr from my FUSE script. I don't see anything, not even tracebacks. I am launching pdb like this:
import pdb
pdb.Pdb(None, open('pdb.in', 'r'), open('pdb.out', 'w')).set_trace()
Everything works, but it's very inconvenient. I want to make pdb.in and pdb.out fifo files, but I don't know how to connect them correctly. Ideally I want to type commands and see output in one terminal, but I'd be happy even with two terminals (typing commands in one and seeing output in the other). Questions:
1) Is there a better/other way to run pdb without stdin/stdout?
2) How can I redirect stdin to the pdb.in fifo (everything I type must go to pdb.in)? How can I redirect pdb.out to stdout (I had strange errors with "cat pdb.out", but maybe I'm misunderstanding something)?
OK. Exactly what I want has been done in http://pypi.python.org/pypi/rpdb/0.1.1.
Before starting the Python app:
mkfifo pdb.in
mkfifo pdb.out
Then, when pdb is called, you can interact with it using these two cat commands, one running in the background:
cat pdb.out & cat > pdb.in
Note that readline support (i.e. the up arrow) does not work.
I just ran into a similar issue in a much simpler use case:
debugging a simple Python program run from the command line with a file piped into sys.stdin, meaning there was no way to use the console for pdb.
I ended up solving it by using wdb.
Quick rundown for my use case. In the shell, install both the wdb server and the wdb client:
pip install wdb.server wdb
Now launch the wdb server with:
wdb.server.py
Now you can navigate to localhost:1984 with your browser and see an interface listing all Python programs running. The wdb project page above has instructions on what you can do if you want to debug any of these running programs.
As for a program under your control, you can debug it from the start with:
wdb myscript.py --script=args < and/stdin/redirection
Or, in your code, you can do:
import wdb; wdb.set_trace()
This will pop up an interface in your browser (if local) showing the traced program.
Or you can navigate to the wdb.server.py port to see all ongoing debugging sessions on top of the list of running Python programs, which you can then use to access the specific debugging session you want.
Notice that the commands for navigating the code during the trace are different from the standard pdb ones; for example, to step into a function you use .s instead of s, and to step over you use .n instead of n. See the wdb README in the link above for details.