Some subshell problems with python3 - python

Well, I'm trying to use a python3 script to manage my aliases on Mac OS X. At first I put all the alias commands in a single file and tried to use the code below to turn these aliases on and off:
def enable_alias(self):
    alias_controller = AliasListControl()  # just a simple class to handle the single file path and other unimportant things
    os.popen('cp ~/.bash_aliases ~/.bash_aliases.bak')
    os.popen('cat ' + alias_controller.path + ' >> ~/.bash_aliases')
    os.system('source ~/.bash_aliases')

def disable_alias(self):
    os.popen('mv ~/.bash_aliases.bak ~/.bash_aliases')
    os.popen('source ~/.bash_aliases')  # maybe I should call some other unalias commands there
As you can see, there is a problem. When the script reaches os.system('source ~/.bash_aliases'), it opens a subshell and executes the command there, so the source operation only takes effect in the subshell, not in the parent shell; then the command finishes and the subshell is closed. This means everything os.system('source ~/.bash_aliases') does is in vain.

It doesn't address your process problem, but an alternative is to put your commands either into shell scripts or into function definitions that are defined in your ~/.bash_profile.
For example, as a script:
Create the file enable_alias.sh:
#!/bin/bash
filename=$1
cp ~/.bash_aliases ~/.bash_aliases.bak
# If you use `cat` here, your aliases file will keep getting longer and longer
# with repeated definitions... I think you want to use > not >>
cp /path/to/"$filename".txt ~/.bash_aliases
source ~/.bash_aliases
Put this file somewhere in a folder in your PATH and make it executable. Then run it as:
enable_alias.sh A
...where your file of settings etc. is called A.txt. The first argument ($1, captured as $filename) supplies the file name.
Or alternatively, you could do it as a function, and add that definition to your .bash_profile. (Functions can also take $1 when called.)
disable_alias() {
    mv ~/.bash_aliases.bak ~/.bash_aliases
    source ~/.bash_aliases
}
As you say, it might be a good idea to put unalias commands into your .bash_aliases file as well. It might also be simpler to keep each set of aliases in its own file, A.txt, B.txt, etc., and just cp A.txt ~/.bash_aliases with the enable command, not using a disable command at all (disabling is then equivalent to enabling file B.txt, for example).
Just some thoughts on another approach that is more 'bash-like'...

I'm not familiar with OS/X, but I am familiar with bash, so I'll take a shot at this.
First, look into Python's shutil module and/or subprocess module; os.system and os.popen are no longer the best way of doing these things.
Second, don't source a script from a subshell that's going to go away immediately afterward. Instead, add something like:
source ~/.bash_aliases
in your ~/.bashrc, so that it'll get run every time a new bash is started.
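For the first point, here is a minimal sketch of the enable/disable logic using shutil instead of os.popen (paths taken from the question; per the second point, the sourcing still has to happen in the parent shell):
import os
import shutil

ALIASES = os.path.expanduser('~/.bash_aliases')
BACKUP = ALIASES + '.bak'

def enable_alias(alias_file):
    shutil.copy(ALIASES, BACKUP)      # cp ~/.bash_aliases ~/.bash_aliases.bak
    shutil.copy(alias_file, ALIASES)  # overwrite (>) rather than append (>>)

def disable_alias():
    shutil.move(BACKUP, ALIASES)      # mv ~/.bash_aliases.bak ~/.bash_aliases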

Related

Setting up build environment using source file inside bash script does not work

So I have a build environment that gets set up when I do a . setup or source setup.
There are a bunch of commands (let's say command abc and command xyz) that become available once the setup file is sourced, and I need to use them inside a bash script. So I have to do something like this:
#!/bin/bash
cd build_dir
. setup
command abc && command xyz
And then, on calling my script, I expect command abc and command xyz to have been executed.
But instead I see an error saying command abc not found.
The setup environment is complex enough that I wouldn't want to bog my script down by adding all the commands and env variables manually; I'd rather ditch the script completely.
Why is this happening, and is there any way of doing this with either shell scripting or python?
command abc tells the shell to ignore any alias or function named abc and instead search for an executable file named abc in the directories listed in $PATH. Obviously there is no such file on your system.
So, you need to know where your file abc is located. Then you have two possibilities: Either add this directory to your PATH,
PATH=$PATH:/foo/bar/abcdir
or you don't use command, but execute your program using the explicit path, i.e.
/foo/bar/abcdir/abc && command xyz
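If abc and xyz really are functions defined by the sourced setup file, a third option is to drop command and run the whole sequence inside a single shell process, which you can also drive from Python. A sketch, reusing the names from the question (note that a non-interactive bash runs functions fine but does not expand aliases by default):
import subprocess

# One bash process sources the setup file and then runs both commands,
# so whatever `setup` defines is still in scope when abc and xyz run.
subprocess.call(['bash', '-c', 'cd build_dir && . setup && abc && xyz'])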

Change directory from python script for calling shell

I would like to build a python script which can manipulate the state of its calling bash shell, starting with its working directory.
With os.chdir or os.system("cd ..") you only change the interpreter's (or a subshell's) working directory, but how can I apply the command's changes to the script's caller?
Thank you for any hint!
You can't do that directly from python, as a child process can never change the environment of its parent process.
But you can create a shell script that you source from your shell, i.e. it runs in the same process, and in that script, you'll call python and use its output as the name of the directory to cd to:
/home/choroba $ cat 1.sh
cd "$(python -c 'print ".."')"
/home/choroba $ . 1.sh
/home $
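The same pattern scales past a one-liner; e.g., a hypothetical pickdir.py that prints the directory the calling shell should cd into, wired up exactly like the example above with cd "$(python pickdir.py)":
# pickdir.py - print the directory the calling shell should cd into.
# Hypothetical rule: climb to the nearest ancestor that contains a .git directory.
import os

path = os.getcwd()
while path != os.path.dirname(path) and not os.path.isdir(os.path.join(path, '.git')):
    path = os.path.dirname(path)
print(path)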

Shell/Terminal: Execute a command for all files in a directory using their absolute path

I'm trying to execute a command for each file in a directory but while using their absolute path (such as /home/richi/mydir/myfile.py) instead of their relative path (such as myfile.py).
In other words, I want to execute a command on files in a directory based on their absolute path - similar to for file in *.py; do thecommand -a "$file"; done but not quite.
I'm asking this because I'm trying to implement a Travis CI script running in an Ubuntu 14.04 environment which will install and use pyminifier to recursively minify all the Python code files in a directory.
Please note that what I'm asking may be similar to this post, but it's not.
Since you're on a standard Linux distro with a full userland, you can just use the realpath command:
Print the resolved absolute file name…
For example:
$ pwd
/home/abarnert/src/test
$ touch 1
$ realpath 1
/home/abarnert/src/test/1
That's it.
If you don't know how to use that from within bash, you can call a subcommand using $(…) syntax:
$ echo $(realpath 1)
/home/abarnert/src/test/1
Of course you want to pass it the value of the variable file, but that's just as easy:
$ file=1
$ echo $(realpath "$file")
/home/abarnert/src/test/1
I'm assuming you're using bash here. With a different sh-style shell, things will be different; with tcsh or zsh or fish or something, it may be even more different.
A really old userland, or a really stripped-down one (e.g., for an embedded system), might not include realpath. In that case, you can use readlink -f, since the GNU version, as usual, adds everything including a couple of kitchen sinks, and can serve as a realpath substitute.
Or, if worst comes to worst, Python has come with a realpath function since 2.2:
$(python -c 'import os,sys; print(os.path.realpath(sys.argv[1]))' "$file")
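Putting it together for the use case in the question, here is a sketch that walks a tree and hands each .py file's absolute path to a command (pyminifier is the tool the question names; the exact flags it needs are an assumption to fill in):
import os
import subprocess

for root, dirs, files in os.walk('.'):
    for name in files:
        if name.endswith('.py'):
            abs_path = os.path.realpath(os.path.join(root, name))
            # Hypothetical invocation; substitute whatever flags you need.
            subprocess.call(['pyminifier', abs_path])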

Write a makefile with one rule for many targets?

It may seem a very simple question, but I could not find any way to fix it.
My intention is to convert every ".ui" file into a ".py" file by invoking the pyuic4 command (from PyQt). I tried to manage this with a very short makefile:
%.py: %.ui
	pyuic4 $< --output $@
That's all I need at the moment.
The makefile is named "Makefile" and located in the folder where "make" is invoked from, and so are the ".ui" files. "pyuic4(.bat)" is in the system's path (Windows 7), and so are the Unix utilities of which "make" is a part.
When running "make" from the Windows console, it says:
make: *** No targets. Stop.
Invoking pyuic4 from the command line with explicit file names works.
I know I could specify each target file on its own, but if possible I want to avoid this.
Any ideas?
As per kasterma's comment, you need to tell make which target to build, but you've only provided a pattern rule. This can be done in the following way.
UIFILES := $(wildcard *.ui)
PYFILES := $(UIFILES:.ui=.py)

.PHONY: all
all: $(PYFILES)

%.py: %.ui
	pyuic4 $< --output $@
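If you'd rather not fight make at all, a rough Python equivalent of the rule above (a sketch: it rebuilds unconditionally, with none of make's timestamp checking; on Windows you may need 'pyuic4.bat' or shell=True, as discussed below):
import glob
import subprocess

for ui in glob.glob('*.ui'):
    py = ui[:-3] + '.py'  # strip the '.ui' suffix, add '.py'
    subprocess.call(['pyuic4', ui, '--output', py])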
As you are obviously using GNU Make syntax, I would advise you to write your rule like this:
UIFILES = $(wildcard *.ui)

.PHONY: ui2py
ui2py: $(UIFILES)
	@for uifile in $(UIFILES); do \
		pyuic4 $$uifile --output $${uifile%.ui}.py; \
	done
Although the problem could be solved by either perror's or eriktous's solution, I'm now going a third way, as mentioned by eriktous: simply invoking the pyuic4 command from a shell script. It runs quite fast, and even if the output results in identical files, no harm is done for the source code control.
I encountered a second point, which may have distracted me. The pyuic4 command is really named pyuic4.bat, which is a "batch file" in Windows, similar to shell scripts in a Linux/Unix environment; similar, but not identical. If a batch file is invoked from another batch file, it should be invoked with a leading "call" statement to prevent termination of the calling batch after the first invocation.
If I have three files (the @ sign is to prevent the command from being listed during execution)
D:\Projekte\test>type main.cmd
@sub1
@sub2
D:\Projekte\test>type sub1.cmd
@echo This is sub 1
D:\Projekte\test>type sub2.cmd
@echo This is sub 2
... the result is just
D:\Projekte\test>main
This is sub 1
So my "solution" for this very small thing is a simple batch file called "update.cmd", which may be expanded by copies of this line:
call pyuic4 mainwindow.ui --output mainwindow.py
That's not what I initially wanted, but it works for me.
But anyway, thanks for your help :-)

Why can't environmental variables set in python persist?

I was hoping to write a python script to create some appropriate environment variables by running the script in whatever directory I'll be executing some simulation code in, and I've read that I can't write a script to make these env vars persist in the Mac OS terminal. So two things:
Is this true?
and
It seems like it would be a useful thing to do; why isn't it possible in general?
You can't do it from python, but some clever bash tricks can do something similar. The basic reasoning is this: environment variables exist in a per-process memory space. When a new process is created with fork() it inherits its parent's environment variables. When you set an environment variable in your shell (e.g. bash) like this:
export VAR="foo"
What you're doing is telling bash to set the variable VAR in its process space to "foo". When you run a program, bash uses fork() and then exec() to run the program, so anything you run from bash inherits the bash environment variables.
Now, suppose you want to create a bash command that sets some environment variable DATA with content from a file in your current directory called ".data". First, you need to have a command to get the data out of the file:
cat .data
That prints the data. Now, we want to create a bash command to set that data in an environment variable:
export DATA=`cat .data`
That command takes the contents of .data and puts it in the environment variable DATA. Now, if you put that inside an alias command, you have a bash command that sets your environment variable:
alias set-data='export DATA=$(cat .data)'
You can put that alias command inside the .bashrc or .bash_profile files in your home directory to have that command available in any new bash shell you start.
One workaround is to output export commands and have the parent shell evaluate them:
thescript.py:
import pipes
import random
r = random.randint(1,100)
print("export BLAHBLAH=%s" % (pipes.quote(str(r))))
..and the bash alias (the same can be done in most shells.. even tcsh!):
alias setblahblahenv='eval "$(python thescript.py)"'
Usage:
$ echo $BLAHBLAH
$ setblahblahenv
$ echo $BLAHBLAH
72
You can output any arbitrary shell code, including multiple commands like:
export BLAHBLAH=23 SECONDENVVAR='something else' && echo 'everything worked'
Just remember to be careful about escaping any dynamically created output (the pipes.quote function is good for this; in Python 3 it lives at shlex.quote).
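For example, a small sketch of quoting a value that contains a space before splicing it into shell code (shlex.quote is the Python 3 home of pipes.quote):
import shlex

value = "something else"  # contains a space, so it must be quoted
print("export SECONDENVVAR=%s && echo 'everything worked'" % shlex.quote(value))
# prints: export SECONDENVVAR='something else' && echo 'everything worked'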
If you set environment variables within a python script (or any other script or program), it won't affect the parent shell.
Edit clarification:
So the answer to your question is yes, it is true.
You can however export from within a shell script and source it by using the dot invocation
in fooexport.sh
export FOO="bar"
at the command prompt
$ . ./fooexport.sh
$ echo $FOO
bar
It's not generally possible. The new process created for python cannot affect its parent process' environment. Neither can the parent affect the child, but the parent gets to setup the child's environment as part of new process creation.
Perhaps you can set them in .bashrc, .profile or the equivalent "runs on login" or "runs on every new terminal session" script in MacOS.
You can also have python start the simulation program with the desired environment. (use the env parameter to subprocess.Popen (http://docs.python.org/library/subprocess.html) )
import subprocess, os

os.chdir('/home/you/desired/directory')
subprocess.Popen(['desired_program_cmd', 'args', ...], env=dict(SOMEVAR='a_value'))
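Note that passing env this way replaces the child's entire environment; to extend the current one instead, build the mapping from it, e.g. env=dict(os.environ, SOMEVAR='a_value').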
Or you could have python write out a shell script like this to a file with a .sh extension:
export SOMEVAR=a_value
cd /home/you/desired/directory
./desired_program_cmd
and then chmod +x it and run it from anywhere.
What I like to do is use /usr/bin/env in a shell script to "wrap" my command line when I find myself in similar situations:
#!/bin/bash
/usr/bin/env NAME1="VALUE1" NAME2="VALUE2" "$@"
So let's call this script "myappenv". I put it in my $HOME/bin directory which I have in my $PATH.
Now I can invoke any command using that environment by simply prepending "myappenv" as such:
myappenv dosometask -xyz
Other posted solutions work too, but this is my personal preference. One advantage is that the environment is transient, so if I'm working in the shell only the command I invoke is affected by the altered environment.
Modified version based on new comments
#!/bin/bash
/usr/bin/env G4WORKDIR="$PWD" "$@"
You could wrap this all up in an alias too. I prefer the wrapper script approach since I tend to have other environment prep in there too, which makes it easier for me to maintain.
Building on Benson's answer, the best workaround is to create a simple bash function that preserves the arguments:
upsert-env-var() { eval "$(python upsert_env_var.py "$@")"; }
You can do whatever you want in your python script with the arguments. To simply add a variable, use something like:
import os
import sys

var = sys.argv[1]
val = sys.argv[2]
if os.environ.get(var, None):
    print("export %s=%s:%s" % (var, val, os.environ[var]))
else:
    print("export %s=%s" % (var, val))
Usage:
upsert-env-var VAR VAL
As others have pointed out, the reason this doesn't work is that environment variables live in a per-process memory space and thus die when the Python process exits.
They point out that a solution to this is to define an alias in .bashrc to do what you want, such as this:
alias export_my_program='export MY_VAR=$(my_program)'
However, there's another (a tad hacky) method which does not require you to modify .bashrc, nor does it require you to have my_program in $PATH (or to specify the full path to it in the alias). The idea is to run the program in Python if it is invoked normally (./my_program), but in Bash if it is sourced (source my_program). (Using source on a script does not spawn a new process and thus does not kill environment variables created within.) You can do that as follows:
my_program.py:
#!/usr/bin/env python3
_UNUSED_VAR=0
_UNUSED_VAR=0 \
<< _UNUSED_VAR
#=======================
# Bash code starts here
#=======================
'''
_UNUSED_VAR
export MY_VAR=`$(dirname $0)/my_program.py`
echo $MY_VAR
return
'''
#=========================
# Python code starts here
#=========================
print('Hello environment!')
Running this in Python (./my_program.py), the first 3 lines will not do anything useful and the triple-quotes will comment out the Bash code, allowing Python to run normally without any syntax errors from Bash.
Sourcing this in bash (source my_program.py), the heredoc (<< _UNUSED_VAR) is a hack used to "comment out" the first-triple quote, which would otherwise be a syntax error. The script returns before reaching the second triple-quote, avoiding another syntax error. The export assigns the result of running my_program.py in Python from the correct directory (given by $(dirname $0)) to the environment variable MY_VAR. echo $MY_VAR prints the result on the command-line.
Example usage:
$ source my_program.py
Hello environment!
$ echo $MY_VAR
Hello environment!
However, if run normally, the script will still do everything it did before, except exporting the environment variable:
$ ./my_program.py
Hello environment!
$ echo $MY_VAR
<-- Empty line
As noted by other authors, the memory is thrown away when the Python process exits. But during the python process, you can edit the running environment. For example:
>>> os.environ["foo"] = "bar"
>>> import subprocess
>>> subprocess.call(["printenv", "foo"])
bar
0
>>> os.environ["foo"] = "foo"
>>> subprocess.call(["printenv", "foo"])
foo
0
