Running Salome script without graphics - python

I exported a script from Salome (a dump), and I want to run it in python (I'm doing some geometric operations and I don't need any graphics). So I removed all the graphics commands, but when I try to launch my python file, python cannot find the salome libraries.
I tried exporting the salome path ('install_path'/appli_V6_5_0p1/bin/salome/) in PYTHONPATH and LD_LIBRARY_PATH, but it still doesn't work.
I would also like to know if it's possible to use only the geompy library without salome, and if so, how I can install only the geompy library. (I need to run some geompy scripts on a UAV with only 8 GB of memory, so the less I install, the better.)

I had similar wishes to yours, but after much searching I concluded that what we both want to do is not completely possible.
In order to run a salome script on the command line without the GUI, use
salome -t python script.py
or simply
salome -t script.py
In order to run a salome script you must call it using the salome executable. It seems that you cannot use the salome libraries (by importing them into a python script that is then called with python script.py) without the compiled program. The executables that salome uses contain much of what the platform needs to do its job.
This frustrated me for a long time, but I found a workaround; for a simple example, if you have a salome script you can call the salome executable from within another python program with
os.system("salome -t python script.py")
But now you have a problem: salome does not automatically kill the session, so if you run the above command a number of times your system will become clogged with multiple running salome instances. These can be killed manually by running killSalome.py, found in your salome installation folder. But beware! This will kill all instances of salome running on your computer, which will be a problem if you are running multiple model-generation scripts at once or if you also have the salome GUI open.
Obviously, a better way is for your script to kill each specific instance of salome after it has been used. The following is one method (the exact paths to the executable etc. will need to change depending on your installation):
import subprocess

# Make a subprocess call to the salome executable and store the used port in a text file:
subprocess.call('/salomedirectory/bin/runAppli -t python script.py --ns-port-log=/absolute/path/salomePort.txt', shell=True)
# Read the port number back from the text file:
port_file = open('/absolute/path/salomePort.txt', 'r')
killPort = int(port_file.readline())
port_file.close()
# Kill the session with the specified port:
subprocess.call('/salomedirectory/bin/salome/killSalomeWithPort.py %s' % killPort, shell=True)
EDIT: Typo correction to the python os command.
EDIT2: I recently found that this method runs into problems when the port log file (here "salomePort.txt", though it can be named arbitrarily) is given with only its relative path. It seems that giving its full, absolute path is necessary for this to work.
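Putting those steps together, a minimal reusable sketch might look like this (the installation paths and the port-log location are placeholders, as above):
import subprocess

def run_salome_headless(script, port_log='/absolute/path/salomePort.txt'):
    # Run the script without the GUI; salome records the session port in port_log.
    subprocess.call(
        '/salomedirectory/bin/runAppli -t python %s --ns-port-log=%s'
        % (script, port_log), shell=True)
    # Read the port back and kill only this session, leaving any others untouched.
    with open(port_log) as f:
        port = int(f.readline())
    subprocess.call(
        '/salomedirectory/bin/salome/killSalomeWithPort.py %s' % port,
        shell=True)

run_salome_headless('script.py')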

If you are working with salome on the windows platform, use the following:
salome_folder\WORK>run_salome.bat -t script_file.py

According to the Salome FAQ:
To launch SALOME without GUI, use the "runSalome -t" command: only the necessary servers are launched (without GUI). To start an interactive Python console then (for example, to be able to load TUI scripts), you will need to use the --pinter option.
To launch Salome with only chosen modules:
To launch a group of chosen SALOME modules, use the command "runSalome --modules=XXX,YYY", where XXX and YYY are the modules' names. You can use the -h option to display the help of the runSalome script.
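For example, a text-mode session with only two modules loaded might be started like this (GEOM and SMESH are placeholder module names; check runSalome -h for the exact options your version supports):
runSalome -t --modules=GEOM,SMESH script.py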

Related

How to run a Python script from Apache on Raspberry Pi?

So, on a Raspberry Pi I'm using a camera app with a web interface, and I wanted to add LED lighting by adding a neopixel. I have successfully done this and can now turn it on and off by running two python scripts.
Explanation and question:
I have a python script in /usr/local/bin that is executable.
It is owned by 'root root'.
I have a shell script in /var/www/html/macros that is executable and has to run the python script in /usr/local/bin.
The shell script is owned by 'www-data'
When I manually run the python file, it executes the script.
When I manually run the shell script, it executes the python script.
When I run the shell script by clicking a button on my webpage, the shell script itself seems to execute correctly, but it looks like it doesn't execute the python script.
What can I do to fix this?
I'm not that experienced with permissions, but I want to emphasize that this is a closed system that does not contain any sensitive information, so safety/best practice is not a concern. I just want to make this work.
I'm not an expert in this area, but I believe you need root privileges to access /usr/local/bin/, which explains why you're having success but Apache isn't.
Rather than giving Apache root permissions, it's best to simply remove the requirement from the individual file you want to execute. This can be accomplished by:
$ cd /usr/local/bin
$ sudo chmod 777 your_script.py
Now, after 11 hours and a group of people thinking along, we found a solution to the problem.
The problem turned out to be that the Web interface can only execute as 'www-data', and the NeoPixel library that the python script depends on needs to be executed as sudo/root.
These two factors make it so that there will never be a direct way of getting the scripts to work together.
However, the idea emerged to use some sort of pipe.
A brilliant user suggested that I use sshpass. This makes it possible to pass a password to ssh and essentially have the command executed as the root user.
The data from the web interface is relayed through sshpass, and this successfully runs the needed scripts with the needed privileges.
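As a sketch, the shell script called by the web interface could contain something like the following (the user, password, and script path are hypothetical; sshpass must be installed, and the default pi user on Raspberry Pi OS can sudo without a password):
#!/bin/bash
# Log in to this same machine over ssh as a sudo-capable user and run the
# LED script with root privileges.
sshpass -p 'raspberry' ssh -o StrictHostKeyChecking=no pi@localhost 'sudo python3 /usr/local/bin/led_on.py'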
Special thanks to Minty Trebor and Falcounet from the RRF for LPC/STM Discord!

How to prefix file with custom commands when debugging in PyCharm?

This question is very similar to this one but for PyCharm.
I need to use aws-vault to access AWS resources in my script, but this seems to be impossible to accomplish in PyCharm's debugging mode. It gives the ability to enter a script path, parameters, and environment variables, and there is also the external tools functionality, but none of these work.
Here is the format that works in shell:
aws-vault exec ${AWS_PROFILE} -- script.py
I thought I had almost arrived at a solution by using external tools and setting the program to "aws-vault" and its arguments to "exec your-profile -- $FilePath$", but PyCharm wants to run the script in $FilePath$, finish, and only after completion run the debugged script (which is the same one as the one inserted by $FilePath$).
What would work for my case is running the needed script in debug mode in conjunction with the external tool, so that the script goes into the arguments of the external tool and runs as one command.
There are ways to deal with this by launching PyCharm from the command line with aws-vault as a prefix, or by editing its .desktop file and writing the prefix directly into the Exec field, but then the app needs to be restarted whenever the AWS profile has to be changed.
Any ideas would be appreciated, thanks.
I was able to do this by installing the EnvFile plugin in PyCharm. This plugin can read in a .env file when starting a process. Basically, I did the following:
1. Create a script that generates a .env file, envfile.env, and name the script generate.sh.
2. This generate.sh script is a shell script that basically does aws-vault exec $AWS_PROFILE -- env | grep AWS_ > envfile.env, so all the aws creds end up in envfile.env (a sketch is shown below). Optionally add other environment variables if you need them.
3. Execute the command above at least once.
4. Install the EnvFile plugin in PyCharm.
5. In the Run configuration, a new tab appears called 'EnvFile'. In this tab, enable EnvFile and add the generated envfile.env (see the previous step).
6. Go to Tools / External Tools and create an external tool for generate.sh. This way you can execute the script from PyCharm.
7. Again in the Run configuration, add a Before Launch step that executes the External Tool generate.sh.
Warning: the temporary aws creds are stored in plain text in envfile.env.
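A minimal sketch of what generate.sh might contain (the output path is a placeholder; AWS_PROFILE must be set in the environment):
#!/bin/bash
# Dump the temporary AWS credentials issued by aws-vault into envfile.env.
aws-vault exec "$AWS_PROFILE" -- env | grep AWS_ > /absolute/path/envfile.env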

Start two shell sudo scripts in two different terminals from python3

I have an embedded system on which I run code live. Every time I want to run code, I start two scripts in two different terminals: "run1.sh" and "run2.sh". I can see the output of those scripts in my terminals (and I want to keep it that way).
Now I want to make a python script that starts those two scripts in two different terminals, while still seeing their output. I also want the python script to feed a password to the terminals, since the scripts run with sudo. I've played a lot with subprocess and PIPEs, but I've never achieved all of the above requirements simultaneously. How can these requirements be met?
I'm using Ubuntu, btw (so I have gnome-terminal).
Update: I was probably not clear in my question, but this has to happen from inside a python script. It is not for my convenience; it's part of an integration process. The code will be part of a larger python program, so the whole point of the question is how to do it in python.
Based on the new information you added, I've created a small python script which launches the two terminals with their output shown separately:
Main script:
mortiz@florida:~/Documents/projects/python/split_python_execution$ cat split_pythonstuff.py
#!/usr/bin/python3
import subprocess
subprocess.call(['gnome-terminal', '-x', 'python', '/home/mortiz/Documents/projects/python/split_python_execution/script1.py'])
subprocess.call(['gnome-terminal', '-x', 'python', '/home/mortiz/Documents/projects/python/split_python_execution/script2.py'])
Script 1:
mortiz@florida:~/Documents/projects/python/split_python_execution$ cat script1.py
#!/usr/bin/python3
while True:
    print('script 1')
Script 2:
mortiz@florida:~/Documents/projects/python/split_python_execution$ cat script2.py
#!/usr/bin/python3
while True:
    print('script 2')
From here I guess you can develop anything you want.
UPDATE: About sudo
Sudoers is a great way of controlling which things specific users may execute, with or without providing a password.
If you add this line to /etc/sudoers, no password is needed when you run your command with sudo:
<YOUR_USER> ALL = NOPASSWD : /usr/bin/python <SCRIPT.py>
As I understand your question, you have the password stored inside the script. There's no need to do that, and it's bad practice; sudoers would be a better way.
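For instance, with the user and script paths used earlier in this answer, the sudoers line might look like this (the values are illustrative; adjust them to your system):
mortiz ALL = NOPASSWD : /usr/bin/python /home/mortiz/Documents/projects/python/split_python_execution/script1.py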
Anyway, if you want to do it in an insecure way, then refer to this question and place the trick before the commands in the scripts provided in this answer.
The approach linked there works:
echo -e "mypassword\n" | sudo -S python test.py
You only need to apply that to the previous code.
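For example, a sketch combining the two (the password and script paths are placeholders, and as noted above this is insecure; prefer sudoers):
import subprocess

# Open each script in its own gnome-terminal window, feeding the sudo
# password on stdin via sudo -S (insecure: the password sits in the code).
for script in ('/path/to/run1.sh', '/path/to/run2.sh'):
    subprocess.call([
        'gnome-terminal', '-x', 'bash', '-c',
        "echo 'mypassword' | sudo -S bash %s; exec bash" % script,
    ])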
You could install Terminator and configure one profile per terminal to run any script you want.
I have a default template which loads 3 terminals and runs 3 different commands/scripts if you want it to:
When I load that profile, the first terminal moves me to my projects dir and lists them, the next one runs df -h to show the space available, and the lower one shows my ip configuration.
This way saves you lots of programming and it's quite easy.
UPDATE: It will run any command (bash, zsh, python, etc.) available in your terminal. If the script is local on your machine:
python <your_script_1> # first terminal profile
python <your_script_2> # second terminal profile
Both would be executed "at the same time".
If your scripts are remote on the target machine, simply create a bash script that uses ssh with a private key to connect to the remote machine and then runs the script; the result is the same in both scenarios.
EDIT: The best thing is setting colors and transparency for each terminal, so you can enjoy the penguin's selfie while you work.

Run my Python script at login on linux

I have a Python script, and I want it to be autostarted at every login. It's on a linux system. I followed a guide that explains it is enough to create a .desktop file in ~/.config/autostart/*.desktop and write:
[Desktop Entry]
Name=MyApp
Type=Application
Exec=python3 ~/.myapp/myapp
Terminal=false
I tried rebooting several times, but the program doesn't execute, even though it appears as active in the list of startup applications in my lxde environment.
Often things like the ~ (tilde) aren't evaluated when placed in a config file. Try using the complete path instead (/home/user/… instead of ~/…) and see if this works. If this works, you can try to use $HOME instead ($HOME/…) to make this more portable and abstract.
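For example, assuming your username is user (adjust the path to your own home directory), the entry would become:
[Desktop Entry]
Name=MyApp
Type=Application
Exec=python3 /home/user/.myapp/myapp
Terminal=false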
If you want to run your script on terminal login, place it in /etc/profile.d/.
For KDE (at least KDE 5) you can add applications to autorun in System Settings > Startup and Shutdown > Autostart (either *.desktop files or scripts); this adds links to ~/.config/autostart.
You can achieve this by adding the line python /home/user/program.py to your .bashrc file. It will be invoked each time you log in to your system.

launch external shell python instance in shell from python

I'd like to call a separate non-child python program from a python script and have it run externally in a new shell instance. The original python script doesn't need to be aware of the instance it launches, shouldn't block while the launched process is running, and shouldn't care if it dies. This is what I have tried, which returns no error but seems to do nothing...
import subprocess
python_path = '/usr/bin/python'
args = [python_path, '&']
p = subprocess.Popen(args, shell=True)
What should I be doing differently?
EDIT
The reason for doing this is that I have an application with a built-in version of python, and I have written some python tools that should be run separately alongside this application. There is no assurance that the user will have python installed on their system beyond the built-in version I'm using. Because of this, I can get the python binary path from the built-in version programmatically, and I'd like to launch an external instance of that built-in python. This eliminates the need for the user to install python themselves. So, in essence, I need a simple way to programmatically call an external python script using my currently running version of python.
I don't need to catch any output in the original program; in fact, once launched, I'd like it to have nothing to do with the original program.
EDIT 2
So it seems my original question was very unclear; here are more details (I think I was trying to oversimplify the question):
I'm running OSX, but the code should also work on windows machines.
The main application that has a built-in version of CPython is a compiled c++ application that ships with a python framework it uses at runtime. You can launch this embedded version of python by doing this in a Terminal window on OSX:
/my_main_app/Contents/Frameworks/Python.framework/Versions/2.7/bin/python
From my main application I'd like to run a command in the embedded python that launches an external copy of a python script using the above python version, just like I would with the following command in a Terminal window. The newly launched orphan process should have its own Terminal window so the user can interact with it.
/my_main_app/Contents/Frameworks/Python.framework/Versions/2.7/bin/python my_python_script
I would like the child python instance not to block the main application, and I'd like it to have its own terminal window so the user can interact with it. The main application doesn't need to be aware of the child in any way once it's launched. The only reason I'm doing this is to automate launching an external application in a Terminal for the user.
If you're trying to launch a new terminal window to run a new Python in (which isn't what your question asks for, but from a comment it sounds like it's what you actually want):
You can't. At least not in a general-purpose, cross-platform way.
Python is just a command-line program that runs with whatever stdin/stdout/stderr it's given. If those happen to be from a terminal, then it's running in a terminal. It doesn't know anything about the terminal beyond that.
If you need to do this for some specific platform and some specific terminal program—e.g., Terminal.app on OS X, iTerm on OS X, the "DOS prompt" on Windows, gnome-terminal on any X11 system, etc.—that's generally doable, but the way to do it is by launching or scripting the terminal program and telling it to open a new window and run Python in that window. And, needless to say, they all have completely different ways of doing that.
And even then, it's not going to be possible in all cases. For example, if you ssh in to a remote machine and run Python on that machine, there is no way it can reach back to your machine and open a new terminal window.
On most platforms that have multiple possible terminals, you can write some heuristic code that figures out which terminal you're currently running under by just walking os.getppid() until you find something that looks like a terminal you know how to deal with (and if you get to init/launchd/etc. without finding one, then you weren't running in a terminal).
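For the OS X case in the question, one common trick is to write a tiny wrapper script and hand it to Terminal.app via open; here is a sketch under that assumption (the wrapper name is hypothetical, the paths are the ones from the question):
import os
import stat
import subprocess
import tempfile

# Write a small wrapper so Terminal.app has a single executable file to run.
wrapper = os.path.join(tempfile.gettempdir(), 'launch_my_script.command')
with open(wrapper, 'w') as f:
    f.write('#!/bin/sh\n')
    f.write('/my_main_app/Contents/Frameworks/Python.framework/Versions/2.7/bin/python my_python_script\n')
os.chmod(wrapper, os.stat(wrapper).st_mode | stat.S_IEXEC)

# 'open -a Terminal <file>' asks Terminal.app to run the file in a new
# window and returns immediately, so the caller is not blocked.
subprocess.call(['open', '-a', 'Terminal', wrapper])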
The problem is that you're running Python with the argument &. Python has no idea what to do with that. It's like typing this at the shell:
/usr/bin/python '&'
In fact, if you pay attention, you're almost certainly getting something like this through your stderr:
python: can't open file '&': [Errno 2] No such file or directory
… which is exactly what you'd get from doing the equivalent at the shell.
What you presumably wanted was the equivalent of this shell command:
/usr/bin/python &
But the & there isn't an argument at all, it's part of sh syntax. The subprocess module doesn't know anything about sh syntax, and you're telling it not to use a shell, so there's nobody to interpret that &.
You could tell subprocess to use a shell, so it can do this for you:
cmdline = '{} &'.format(python_path)
p = subprocess.Popen(cmdline, shell=True)
But really, there's no good reason to. Just opening a subprocess and not calling communicate or wait on it already effectively "puts it in the background", just like & does on the shell. So:
args = [python_path]
p = subprocess.Popen(args)
This will start a new Python interpreter that sits there running in the background, trying to use the same stdin/stdout/stderr as your parent. I'm not sure why you want that, but it's the same thing that using & in the shell would have done.
Actually, I think there might be a solution to your problem; I found a useful solution in another question here.
This way subprocess.Popen starts a new python shell instance and runs the second script from there. It worked perfectly for me on Windows 10.
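The linked answer isn't reproduced here, but a common way to do this on Windows is a sketch like the following (the script name is a placeholder; start is a cmd builtin, hence shell=True):
import subprocess

# 'start' opens a new console window; 'cmd /k' keeps that window open
# after the script finishes so the user can interact with it.
subprocess.Popen('start cmd /k python my_python_script.py', shell=True)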
You can try using the screen command.
With this command, a new shell instance is created and the current instance keeps running in the background.
$ screen; python script1.py
After running the above command, a new shell prompt appears where we can run another script while script1.py runs in the background.
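If you'd rather launch the script directly into a detached session, GNU screen's -dmS flags do that, and -r reattaches to it:
$ screen -dmS script1 python script1.py
$ screen -r script1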
Hope it helps.
