How do you refresh System path/variables after installing programs? - python

I have a Python script which installs an application:
os.system("path/to/my.exe /VERYSILENT")
For example, I might install Git this way.
Later on, the script will call:
os.system("git --version")
which fails because the shell doesn't know what git is.
From what I can tell, the system variables are all grabbed when you import os, so after installing the application could I just reimport os somehow and then carry on?
My desired end state is to refresh CMD, similar to how you would close a terminal and open a new one.

Sub-shells (as in os.system(..)) cannot affect the execution environment of the parent process (it would be a huge security hole if they could). You can update the permanent user environment with e.g. PowerShell ([environment]::SetEnvironmentVariable($key, $val, "User")). Any process started afterwards will see the new environment variable (this is why you need to close your cmd window and start a new one).
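A minimal sketch of that idea from Python, assuming Windows and that the installer wrote the new entry to the per-user PATH in the registry (HKCU\Environment) - the variable names and the assumption about where the installer writes are illustrative, not guaranteed:

import os
import subprocess
import winreg

# Persist a variable for future processes (same effect as the
# PowerShell call above).
subprocess.run(
    ["powershell", "-Command",
     "[Environment]::SetEnvironmentVariable('MY_VAR', 'my_value', 'User')"],
    check=True)

# Re-read the *user* PATH from the registry; the installer may have
# appended to it after this process started, so our inherited copy
# is stale. (Raises FileNotFoundError if no user Path value exists.)
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Environment") as key:
    user_path, _ = winreg.QueryValueEx(key, "Path")

# Give the child a refreshed environment instead of our stale one.
env = os.environ.copy()
env["PATH"] = user_path + os.pathsep + env["PATH"]
subprocess.run(["git", "--version"], env=env, check=True)

Installers that write to the system PATH store it under HKLM instead, so treat this as a sketch to adapt rather than a drop-in fix.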

Related

How do I add a directory to system environment variable using python script [duplicate]

From what I've read, any changes to the environment variables in a Python instance are only available within that instance, and disappear once the instance is closed. Is there any way to make them stick by committing them to the system?
The reason I need to do this is because at the studio where I work, tools like Maya rely heavily on environment variables to configure paths across multiple platforms.
My test code is
import os
os.environ['FAKE'] = 'C:\\'
Opening another instance of Python and requesting os.environ['FAKE'] yields a KeyError.
NOTE: Portability will be an issue, but the small API I'm writing will be able to check OS version and trigger different commands if necessary.
That said, I've gone the route of using the Windows registry technique and will simply write alternative methods that will call shell scripts on other platforms as they become requirements.
You can use SETX at the command line.
By default these actions go on the USER environment variables.
To set and modify SYSTEM variables, use the /M flag:
import os

env_var = "BUILD_NUMBER"
env_val = "3.1.3.3.7"
# /M writes to the machine (system) environment, which requires an
# elevated (administrator) prompt.
os.system("SETX {0} {1} /M".format(env_var, env_val))
make them stick by committing them to the system?
I think you are a bit confused here. There is no 'system' environment. Each process has its own environment as part of its memory. A process can only change its own environment, and it can set the initial environment for processes it creates.
If you really do think you need to set environment variables for the system, you will need to change them in the location they are initially loaded from, like the registry on Windows or your shell configuration file on Linux.
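On Windows that location is HKEY_CURRENT_USER\Environment for per-user variables (or the corresponding HKLM key for machine-wide ones). A hedged sketch of writing there directly with the standard library and then broadcasting WM_SETTINGCHANGE so newly launched programs pick the change up - the broadcast is the usual recipe for this, not something specific to this answer:

import ctypes
import winreg

# Write the variable to the per-user environment key.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Environment", 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "FAKE", 0, winreg.REG_SZ, "C:\\")

# Tell running applications (notably Explorer) that the environment
# changed, so programs launched afterwards inherit the new value.
HWND_BROADCAST = 0xFFFF
WM_SETTINGCHANGE = 0x001A
SMTO_ABORTIFHUNG = 0x0002
ctypes.windll.user32.SendMessageTimeoutW(
    HWND_BROADCAST, WM_SETTINGCHANGE, 0, "Environment",
    SMTO_ABORTIFHUNG, 5000, None)

Already-running processes (including the shell you launched the script from) keep their old copies.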
Under Windows it's possible for you to make changes to environment variables persistent via the registry with this recipe, though it seems like overkill.
To echo Brian's question, what are you trying to accomplish? There is probably an easier way.
It seems there is a simpler solution for Windows:
import subprocess

# Passing an argument list together with shell=True is a common
# pitfall; call setx directly instead.
subprocess.call(['setx', 'Hello', 'World!'])
I don't believe you can do this; there are two workarounds I can think of.
The os.putenv function sets the environment for processes you subsequently start with os.system, popen, etc. Depending on what you're trying to do, perhaps you could have one master Python instance that sets the variable and then spawns new instances (see the sketch below).
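A minimal sketch of that master-instance idea (the variable name is taken from the question; the child is just a throwaway interpreter):

import os
import subprocess
import sys

# Set the variable in the master process; every child we spawn
# inherits a copy of this environment.
os.environ['FAKE'] = 'C:\\'

# The child sees the value even though nothing "system-wide" was
# changed.
subprocess.call(
    [sys.executable, '-c', "import os; print(os.environ['FAKE'])"])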
You could run a shell script or batch file to set it for you, but that becomes much less portable. See this article:
http://code.activestate.com/recipes/159462/
Think about it this way.
You're not setting shell environment variables.
You're spawning a subshell with some given environment variable settings; this subshell runs your application with the modified environment.
According to this discussion, you cannot do it. What are you trying to accomplish?
You are forking a new process, and you cannot change the environment of the parent process, just as you cannot do so when you start a new shell process from the shell.
You might want to try the Python Win32 Extensions (PyWin32), developed by Mark Hammond, which are included in ActivePython (or can be installed separately). You can learn how to perform many Windows-related tasks in Hammond's and Robinson's book.
Using PyWin32 to access Windows COM objects, a Python program can use the Environment property (a collection of environment variables) of the WScript.Shell object.
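A small read-only sketch of that, assuming pywin32 is installed; the calling convention for the collection is the usual dynamic-dispatch pattern, so treat it as illustrative:

import win32com.client

shell = win32com.client.Dispatch("WScript.Shell")

# WshShell.Environment is a collection keyed by category:
# "System", "User", "Volatile" or "Process".
user_env = shell.Environment("User")
print(user_env("TEMP"))  # look up one variable in the user scope

# ExpandEnvironmentStrings resolves %VAR% references against the
# current process environment.
print(shell.ExpandEnvironmentStrings("%PATH%"))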
Try py-setenv, which will allow you to set variables via the registry:
python -m pip install py-setenv
From within Python? No, it can't be done!
If you are not bound to Python, you should consider using shell scripts (sh, bash, etc.). The "source" command allows you to run a script that modifies the environment, and the changes will "stick" in the shell you sourced the script in. What's going on here is that the shell executes the script directly rather than creating a sub-process to execute it.
This will be quite portable - you can use Cygwin on Windows to do this.
In case someone might need this info. I realize this was asked 7 years ago, but even I forget how sometimes.
Yes, there is a way to make them "stick" in Windows. Go to Control Panel > System > Advanced system settings; when the System Properties window opens you should see a button for Environment Variables (the exact path is a little different depending on which version of Windows you're using).
Click that button and the Environment Variables window will open. It has two panes; the top one should be "User variables for yourusername". Choose "New", then simply set the variable. For instance, one of mine is "Database_Password = mypassword".
Then in your app you can access it like this (note that "pass" is a reserved word in Python, so use another name):
import os
password = os.environ.get('Database_Password')

Setting env variable from a Python script

In my build (I'm using Linux) I need to call a Python script and set some env variables. I need these variables to stay set even after I exit the script. I am able to set them using os.environ within the script, but whenever I exit the script and check for the env variable from the terminal (echo $myenv), I get nothing.
I am new to Python and did quite a bit of googling to figure this out. However, I am not quite sure it's possible. I tried using subprocess:
subprocess.call('setenv myenv 4s3', shell=True)
I also tried using os.system:
os.system("setenv myenv 4s3")
So far, I haven't succeeded.
You cannot set environment variables from a child process and have them be visible in the parent process. Every process gets its own copy of the environment, and changes do not propagate upwards.
What you could do is have the Python script print the settings it wants to change and have the outside shell execute the appropriate commands.
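A hedged sketch of that pattern (the script name and variable are illustrative): the Python script only prints shell commands, and the parent shell evaluates them.

# set_env.py - print shell commands instead of trying to mutate the
# parent's environment directly (which a child process cannot do).
print('export myenv=4s3')

Then, in the shell that should receive the variable:
eval "$(python set_env.py)"
echo $myenv   # -> 4s3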
Maybe you can find some Python equivalent of C's vfork.
When you vfork, both processes share memory space, so you might be able to overwrite environment variables in the parent process from the child process.
Warning: vfork has many security issues and is therefore not recommended. Only use it if you are desperate.

VirtualEnv initialized from a bash script

I am trying to write what should be a super simple bash script: basically, activate a virtualenv and then change to the working directory. It's a task I do a lot, and condensing it to one command just made sense.
Basically ...
#!/bin/bash
source /usr/local/turbogears/pps_beta/bin/activate
cd /usr/local/turbogears/pps_beta/src
However, when it runs it just dumps me back to the shell: I am still in the directory I ran the script from, and the environment isn't activated.
All you need to do is run your script with the source command. This is because the cd command is local to the shell that runs it. When you run a script directly, a new shell is executed, which terminates when it reaches the script's end of file. By using the source command you tell your current shell to execute the script's instructions directly.
The effect of cd is local to the shell running the current script, which ends when you fall off the end of the file.
What you are trying to do is not "super simple", because you want to override this behavior.
Look at exec for replacing the current process with the process of your choice.
For feeding commands into an interactive Bash, look at the --rcfile option.
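If you would rather stay in Python, the analogous trick is os.exec*, which replaces the current process instead of spawning a child. A hedged sketch using the paths from the question; note that setting VIRTUAL_ENV and PATH only approximates what the activate script does:

import os

# Prepare the environment, then replace this process with an
# interactive shell that inherits it.
os.environ["VIRTUAL_ENV"] = "/usr/local/turbogears/pps_beta"
os.environ["PATH"] = ("/usr/local/turbogears/pps_beta/bin" +
                      os.pathsep + os.environ["PATH"])
os.chdir("/usr/local/turbogears/pps_beta/src")
os.execvp("bash", ["bash"])  # never returns if successful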
I imagine you wish your script to be dynamic; however, as a quick fix when working on a new system, I create an alias.
For example, say the env is called 'py1', located at ~/envs/py1/, with a repository location at ~/proj/py1/:
alias py1='source ~/envs/py1/bin/activate; cd ~/proj/py1/'
You can now access your project and virtualenv by typing py1 from anywhere in the CLI.
I know this is nowhere near ideal and violates DRY and many other programming principles. It is just a quick and dirty way of getting your env and project accessible without having to set up the variables.
I know that I'm late to the game here, but may I suggest using virtualenvwrapper? It provides a nice bash hook that appears to do exactly what you want.
Check out this tutorial: http://blog.fruiapps.com/2012/06/An-introductory-tutorial-to-python-virtualenv-and-virtualenvwrapper

Using python scripts in subversion hooks on windows

My main goal is to get this up and running.
My hook gets called when I commit with TortoiseSVN, but it always exits when it reaches this line: Python "%~dp0trac-post-commit-hook.py" -p "%TRAC_ENV%" -r "%REV%" || EXIT 5
If I replace the call to the Python script with any simple Python script, it still doesn't work, so I'm assuming it is a problem with the call to Python and not the script itself.
I have tried setting the PYTHON_PATH variable and also setting %PATH% to include Python.
I have trac up and running so Python is working on the server itself.
Here is some background info:
Python is installed on the Windows server and the script is called from a local machine, so
IF NOT EXIST %TRAC_ENV% EXIT 3
and
SET PYTHON_PATH=X:\Python26
IF NOT EXIST %PYTHON_PATH% EXIT 4
fail unless I point them at the mapped network drives (that is, at the X: and Y: drives, not the C: and E: drives).
Python scripts can be called from the command line anywhere on the server, regardless of the drive, so the PATH variable should be set correctly.
It appears to be an issue with calling Python scripts externally, but I'm not sure how to go about changing the permissions for this.
Thanks in advance.
Take the following things into account:
Network drive mappings and subst mappings are user specific. Make sure the drives exist for the user account under which the svn server is running.
Subversion hook scripts are run without any environment variables being set, for security reasons - not even %PATH%. Call the Python executable with an absolute path, e.g. c:\python25\python.exe.

How can I make a fake "active session" for gconf?

I've automated my Ubuntu installation - I've got Python code that runs automatically (after a clean install, but before the first user login - it's in a temporary /etc/init.d/ script) that sets up everything from Apache & its configuration to my personal Gnome preferences. It's the latter that's giving me trouble.
This worked fine in Ubuntu 8.04 (Hardy), but when I use this with 8.10 (Intrepid), the first time I try to access gconf, I get this exception:
Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://www.gnome.org/projects/gconf/ for information. (Details - 1: Not running within active session)
Yes, right, there's no Gnome session when this is running, because the user hasn't logged in yet - however, this worked before; this appears to be new with Intrepid's Gnome (2.24?).
Short of modifying gconf's XML files directly, is there a way to make some sort of proxy Gnome session? Or, any other suggestions?
(More details: this is python code that runs as root, but setuid's & setgid's to be me before setting my preferences using the "gconf" module from the python-gconf package.)
I can reproduce this by installing GConf 2.24 on my machine. GConf 2.22 works fine, but 2.24 breaks it.
GConf is failing to launch because D-Bus is not running. Manually spawning D-Bus and the GConf daemon makes this work again.
I tried to spawn the D-Bus session bus by doing the following:
import dbus
dummy_bus = dbus.SessionBus()
...but got this:
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.Spawn.ExecFailed: dbus-launch failed to autolaunch D-Bus session: Autolaunch error: X11 initialization failed.
Weird. It looks like it doesn't want to come up if X isn't running. To work around that, start dbus-launch manually (e.g. via subprocess, so that you can capture its output):
$ dbus-launch
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-eAmT3q94u0,guid=c250f62d3c4739dcc9a12d48490fc268
DBUS_SESSION_BUS_PID=15836
You'll need to parse that output somehow and inject the values into environment variables (os.environ works well for this, since changes to it are inherited by child processes). For my testing, I just used the shell and set the environment variables manually with export DBUS_SESSION_BUS_ADDRESS=blahblah..., etc.
Next, you need to launch gconftool-2 --spawn with those environment variables you received from dbus-launch. This will launch the GConf daemon. If the D-Bus environment vars are not set, the daemon will not launch.
Then, run your GConf code. Provided you set the D-Bus session bus environment variables for your own script, you will now be able to communicate with the GConf daemon.
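A hedged sketch of the whole dance in Python; the parsing is naive and assumes the two-line VAR=value output format shown above:

import os
import subprocess

# Start a session bus; dbus-launch prints VAR=value lines and exits,
# leaving the daemon running.
output = subprocess.check_output(["dbus-launch"]).decode()
for line in output.splitlines():
    name, _, value = line.partition("=")  # split on the first '='
    os.environ[name] = value

# With the D-Bus variables set, the GConf daemon can be spawned.
subprocess.check_call(["gconftool-2", "--spawn"])

# GConf code run from this process (or its children) should now be
# able to reach the daemon.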
I know it's complicated.
gconftool-2 provides a --direct option that enables you to set GConf variables without needing to communicate with the server, but I haven't been able to find an equivalent option for the Python bindings (short of outputting XML manually).
Edit: For future reference, if anybody wants to run dbus-launch from within a normal bash script (as opposed to a Python script, as this thread is discussing), it is quite easy to retrieve the session bus address for use within the script:
#!/bin/bash
eval `dbus-launch --sh-syntax`
export DBUS_SESSION_BUS_ADDRESS
export DBUS_SESSION_BUS_PID
do_other_stuff_here
Well, I think I understand the question. It looks like your script just needs to start the dbus daemon, or make sure it's started. I believe "session" here refers to a D-Bus session (here is some evidence), not a Gnome session. D-Bus and gconf both run fine without Gnome.
Either way, faking an "active session" sounds like a pretty bad idea. It would only look for it if it needed it.
Perhaps we could see the script in a pastebin? I should have really seen it before making any comment.
Thanks, Ali & Jeremy - both your answers were a big help. I'm still working on this (though I've stopped for the evening).
First, I took the hint from Ali and tried part of Jeremy's suggestion: I used dbus-launch to run "gconftool-2 --spawn". It didn't work for me; I now understand why (thanks, Jeremy) - I was trying to use gconf from within the same Python program that was launching dbus and gconftool, but its environment didn't have the environment variables - duh.
I set that strategy aside when I noticed gconftool-2's --direct option; internally, gconftool-2 uses an API that isn't exposed by the gconf Python bindings. So, I modified python-gconf to expose the extra method, and once that builds (I had some unrelated problems getting it to work), we'll see if that fixes things - if it doesn't (and maybe even if it does, because building those bindings seems to build all of Gnome!), I'll find a better way to manage the environment variables in the first strategy.
(I'll add another answer here tomorrow either way)
And it's the next day: I ran into a little trouble with my modified python-gconf, which inspired me to try Jeremy's simpler idea, which worked fine - before doing the first gconf operation, I simply ran "dbus-launch", parsed the resulting name-value pairs, and added them directly to python's environment. Having done that, I ran "gconftool-2 --spawn". Problem solved.
