I just found out that I need a quick-and-dirty demo for tomorrow. I'm working with a robot that uses ROS, and we have some packages that make it move in a simple pattern. I want to start all the necessary nodes with one command. The command lines I would need to run--all in separate terminals--are:
roscore
rviz
roslaunch [blank move_base map]
roslaunch [package] [movement script]
rqt_graph
All of these programs run indefinitely--e.g., roscore is a server that coordinates the other nodes. I can't just use "&" to string them together into one line. They each require a dedicated terminal window/process. How can I do that in bash or Python?
Note: I realize it would probably be better to use a custom ROS launch file, but I don't have time.
You can launch your programs via the terminal emulator binaries themselves to get a new terminal for each. The exact invocation depends on the terminal you use. With konsole you can run:
konsole -e command [args]
...
With gnome-terminal you do:
gnome-terminal -e command [args] &
With xterm:
xterm -e command [args] &
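Applied to the original ROS case, a minimal Python sketch (assuming gnome-terminal; the bracketed roslaunch arguments are still placeholders to fill in):
#!/usr/bin/env python3
# Sketch: spawn each long-running ROS command in its own terminal window.
import subprocess

commands = [
    'roscore',
    'rviz',
    'roslaunch [blank move_base map]',        # placeholder: fill in for real use
    'roslaunch [package] [movement script]',  # placeholder
    'rqt_graph',
]

for cmd in commands:
    # Each Popen returns immediately; every terminal window keeps running.
    subprocess.Popen(['gnome-terminal', '-x', 'bash', '-c', cmd])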
You may also want to refer to a similar thread: Run multiple .sh scripts from one .sh script? CentOS
I'm very new to coding and software, so please stick with me. I am trying to execute a command in my Raspberry Pi terminal via a Python script, and I want to be able to run this script from the Desktop. The command to execute is:
(rpi-deep-pantilt-env) pi@raspberrypi:~/rpi-deep-pantilt $ rpi-deep-pantilt detect
So as you can see, I need to cd into rpi-deep-pantilt, then activate my virtual environment, then run the command, all via the Python script.
A simple shell script to do what you ask:
#!/bin/sh
# enter the project directory, activate the venv, and run the tool
cd "$HOME"/rpi-deep-pantilt
. ./rpi-deep-pantilt-env/bin/activate
./rpi-deep-pantilt detect "$@"
Most or all of this is probably unnecessary. I guess you could run
#!/bin/sh
d="$HOME"/rpi-deep-pantilt
exec "$d"/rpi-deep-pantilt-env/bin/python "$d"/rpi-deep-pantilt detect "$@"
though if your Python script has hardcoded file paths which require it to run in a specific directory, that's a bug which will prevent this from working.
The "$#" says to pass on any command-line arguments, so if you saved this script as pant, running pant blind mice will pass on the arguments blind and mice to your Python script. (Of course, if it doesn't accept additional command-line arguments after detect, this is unimportant, but I'd still pass them on so you can generate an error message, rather than have them be ignored as if they were not there.)
I need to run commands in the command prompt, but they only work when the command prompt is started at a particular location in the system. I need the following commands to run in a Python script:
import os
os.system("set OMP_NUM_THREADS=2")
os.system("explorer.exe /e,::{20D04FE0-3AEA-1069-A2D8-08002B30309D}"#
os.system("cd C:\CFD\crit_vel_01_02")
os.system("mpiexec -n 9 FDS crit_vel_01_02.fds")
os.system("PAUSE")
The system does not recognise the command
os.system("mpiexec -n 9 FDS crit_vel_01_02.fds")
unless this is run in the command shell which is installed on installation of the program "fds", which is a fire dynamics simulator. I appreciate this seems quite specific to the program, but I am assuming there is some generic way that Python can run a command shell from a different location/with different settings.
The shortcut to the command prompt is called CMDfds and is installed in:
"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\FDS6"
In the shortcut's properties, the target on the Shortcut tab is:
"C:\Windows\System32\cmd.exe /k fdsinit"
I'm not sure it will work, but you can give subprocess.run with shell=True a try.
If shell is True, the specified command will be executed through the shell. This can be useful if you are using Python primarily for the enhanced control flow it offers over most system shells and still want convenient access to other shell features such as shell pipes, filename wildcards, environment variable expansion, and expansion of ~ to a user’s home directory.
Also try running the Python script from the fds command shell; it seems fdsinit initializes things in that shell.
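A sketch of what that could look like for your case, folding the working directory, the environment variable, and (assuming it is needed) the fdsinit step into a single shell:
import os
import subprocess

# Give the child its own copy of the environment with OMP_NUM_THREADS set;
# a separate os.system("set ...") call would die along with its own shell.
env = os.environ.copy()
env['OMP_NUM_THREADS'] = '2'

# One shell, one working directory. If the FDS prompt really depends on
# fdsinit (as the shortcut's "cmd.exe /k fdsinit" target suggests), chain it:
subprocess.run(
    'fdsinit && mpiexec -n 9 FDS crit_vel_01_02.fds',
    shell=True,
    cwd=r'C:\CFD\crit_vel_01_02',
    env=env,
)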
The trouble with running programs via system commands is that they often run in a different shell environment. To prevent problems arising from this, it's a good idea to use absolute paths. In your case:
os.system("mpiexec -n 9 FDS crit_vel_01_02.fds")
should be changed to:
os.system("/absolute/path/to/mpiexec -n 9 FDS crit_vel_01_02.fds")
I have an embedded system on which I run code live. Every time I want to run code, I start two scripts in two different terminals: "run1.sh" and "run2.sh". I can see the output of those scripts in my terminals (and I want to keep it that way).
Now I want to make a Python script that starts those two scripts in two different terminals, and I still want to see their output. I also want to supply a password from the Python script to the terminals, since the scripts run under sudo. I've played a lot with subprocess and pipes, but I've never achieved all of the above requirements simultaneously. How can these requirements be met?
I'm using Ubuntu btw (so I have gnome terminal)
Update: I was probably not clear in my question, but this has to be inside a Python script. It is not for my convenience; it's part of an integration process. The code will be part of a larger Python program, so the whole point of the question is how to do it in Python.
Based on your new information, I've created a small Python script which will launch two terminals and show their output separately:
Main script:
mortiz@florida:~/Documents/projects/python/split_python_execution$ cat split_pythonstuff.py
#!/usr/bin/python3
import subprocess
subprocess.call(['gnome-terminal', '-x', 'python', '/home/mortiz/Documents/projects/python/split_python_execution/script1.py'])
subprocess.call(['gnome-terminal', '-x', 'python', '/home/mortiz/Documents/projects/python/split_python_execution/script2.py'])
Script 1:
mortiz@florida:~/Documents/projects/python/split_python_execution$ cat script1.py
#!/usr/bin/python3
while True:
    print('script 1')
Script 2:
mortiz@florida:~/Documents/projects/python/split_python_execution$ cat script2.py
#!/usr/bin/python3
while True:
    print('script 2')
From here I guess you can develop anything you want.
UPDATE: About sudo
Sudoers is a great way of controlling which things specific users can execute, with or without providing a password.
If you add this line to /etc/sudoers, no password is needed when you run your command with sudo:
<YOUR_USER> ALL = NOPASSWD : /usr/bin/python <SCRIPT.py>
As far as I understand your question, you have the password stored inside the script. There's no need to do that, and it's bad practice; sudoers would be a better way.
Anyway, if you want to do it in an insecure way, then refer to this question and place the technique before the commands in the scripts provided in this answer.
The linked approach works (the 15 below is just the test script's output):
echo -e "mypassword\n" | sudo -S python test.py
15
You only need to implement that in the previous code.
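For illustration only, here is how that insecure variant could be wired into the launcher above (the hardcoded password and paths are placeholders; the sudoers route is still the better choice):
import subprocess

password = 'mypassword'  # insecure: hardcoded only for illustration
script = '/home/mortiz/Documents/projects/python/split_python_execution/script1.py'

# sudo -S reads the password from stdin instead of the terminal.
cmd = 'echo "{}" | sudo -S python {}'.format(password, script)
subprocess.call(['gnome-terminal', '-x', 'bash', '-c', cmd])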
You could install Terminator and configure one profile per terminal to run any script you want.
I have a default template which will load three terminals and run three different commands or scripts if you want to.
When I load that profile, the first terminal moves me to my projects directory and lists them, the next one runs df -h to show the available space, and the lower one shows my IP configuration.
This way would save you lots of programming and it's quite easy.
UPDATE: It will run any command (bash, zsh, python, etc.) available from your terminal. If the script is local on your machine:
python <your_script_1> # first terminal profile
python <your_script_2> # second terminal profile
both would be executed "at the same time".
If your scripts are remote on the target machine, simply create a script that uses ssh with a private key to connect to the remote machine and run the script there; the result is the same in both scenarios.
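A sketch of the remote case, with a hypothetical host, key, and script path:
import subprocess

# Open a terminal that runs the remote script over ssh; output stays visible
# in the window, just like the local case.
subprocess.call([
    'gnome-terminal', '-x',
    'ssh', '-i', '/home/user/.ssh/id_rsa', 'user@target-machine',
    'python3 /remote/path/run1.py',
])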
EDIT: The best thing is setting colors and transparency for each terminal, so you can enjoy the penguin's selfie while you work.
I'm trying to run a Python script using Notepadqq in Ubuntu, but when I try running my script via the Run command, it pops up a window that says
Special Placeholders
with options to save, OK, and cancel.
I think it should be something like this:
/usr/bin/python3 %path%
This window allows you to have several previously configured commands. For example, you can create one command to execute the default python2.7 (/usr/bin/python2.7), another one to execute with python3 (/usr/bin/python3), or perhaps a specific virtual environment python (let's say /home/py3env/bin/python).
Natively, notepadqq won't know you want to execute the file with Python. In the blank space, write the following command (edit the python path if needed):
gnome-terminal -x sh -c '/usr/bin/python3 %path%'
After that, click the "Save" button. You will be asked to enter a name for the command (e.g. "run default python3"). After this, you can run your python scripts by clicking on your named command.
If you need the terminal to be kept open after the command is executed, you can edit your terminal preferences. Open a terminal and follow this path:
(Edit -> Preferences -> go to your profile -> "Command" tab -> "When command exits" -> "Hold the terminal open")
I use two "Run" shortcuts.
This runs the code and holds the xterm open so I can see the output. Handy if there's an error.
/usr/bin/xterm -hold -e /usr/bin/python3 %path%
And to auto close the xterm.
/usr/bin/xterm -e /usr/bin/python3 %path%
Make sure to save your file, because this works with the file on disk at %path%.
Also you can change /usr/bin/xterm to point to whatever term you like.
Running the code depends on the console you want to use. Since the question asks for an Ubuntu solution, I'd suggest gnome-terminal.
gnome-terminal -e "python3 %path%"
The last line, input(), prevents the window from closing so the user can view the output; a minimal sample script is shown below. I've tried this solution with Ubuntu 20.04.
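For instance (test.py is just an assumed name):
# test.py -- a trivial script run via: gnome-terminal -e "python3 %path%"
print('Hello from gnome-terminal')
input()  # keeps the window open until the user presses Enter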
I have the virtualenv created and installed. I have also installed the jsnapy tool inside my virtualenv.
This is the script that we are using:
Filename : venv.py
import os
os.system('/bin/bash --rcfile ~/TestAutomation/End2EndAutomation/bin/activate')
os.system('End2EndAutomation/bin/jsnapy')
ubuntu@server:~/TestAutomation$ python venv.py
(End2EndAutomation) ubuntu@sdno-server:~/TestAutomation$ ^C
What we need to know is how we can get into the virtualenv, run a command, and deactivate it, all from a Python script.
[EDIT1]
I used the code given in the comment. It just enters the virtual env. When I issue exit, it runs the jsnapy command.
ubuntu@server:~/TestAutomation$ python venv.py
(End2EndAutomation) ubuntu@server:~/TestAutomation$ exit
exit
usage:
This tool enables you to capture and audit runtime environment of
networked devices running the Junos operating system (Junos OS)
Tool to capture snapshots and compare them
It supports four subcommands:
--snap, --check, --snapcheck, --diff
1. Take snapshot:
jsnapy --snap pre_snapfile -f main_configfil
Each call to os.system() creates a new shell instance, which terminates when the call returns. To run all the commands in one shell instance, you could put all your commands inside a single bash script and call that from os.system():
run.sh
source ~/TestAutomation/End2EndAutomation/bin/activate
End2EndAutomation/bin/jsnapy
deactivate
Python
os.system('bash run.sh')
(Note that os.system() runs commands with /bin/sh, which has no source builtin, so the script is invoked with bash instead.)
Alternatively, you could write a multiline bash command, as long as it's all in one os.system() call, as sketched below.
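A sketch of that variant, run from ~/TestAutomation as in the question:
import os

# One bash instance for the whole sequence, so the activation carries over.
os.system('bash -c "'
          'source ~/TestAutomation/End2EndAutomation/bin/activate; '
          'End2EndAutomation/bin/jsnapy; '
          'deactivate"')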
Two successive calls to os.system() will create two independent processes, one after the other. The second will run when the first finishes. Any effects of commands executed in the first process will have been forgotten and flushed when the second runs.
You want to run the activation and the command which needs to be run in the virtualenv in the same process, i.e. the same single shell instance.
To do that, you can use bash -c '...' to run a sequence of commands. See below.
However, a better solution is to simply activate the virtual environment from within Python itself:
import os, subprocess

# execfile is Python 2; on Python 3, use exec(open(p).read(), dict(__file__=p))
p = os.path.expanduser('~/TestAutomation/End2EndAutomation/bin/activate_this.py')
execfile(p, dict(__file__=p))
subprocess.check_call(['./End2EndAutomation/bin/jsnapy'])
For completeness, here is the Bash solution, with comments.
import subprocess
subprocess.check_call(['bash', '-c', """
# Activate the virtualenv, then run jsnapy inside the same shell instance.
. ~/TestAutomation/End2EndAutomation/bin/activate
./End2EndAutomation/bin/jsnapy"""])
Preferring subprocess over os.system is recommended even in the os.system documentation.
There is no need to explicitly deactivate; when the bash command finishes, that will implicitly also deactivate the virtual environment.
The --rcfile trick is a nice idea, but it doesn't work when the shell you are calling isn't interactive.