How can I make a fake "active session" for gconf? - python

I've automated my Ubuntu installation - I've got Python code that runs automatically (after a clean install, but before the first user login - it's in a temporary /etc/init.d/ script) that sets up everything from Apache & its configuration to my personal Gnome preferences. It's the latter that's giving me trouble.
This worked fine in Ubuntu 8.04 (Hardy), but when I use this with 8.10 (Intrepid), the first time I try to access gconf, I get this exception:
Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://www.gnome.org/projects/gconf/ for information. (Details - 1: Not running within active session)
Yes, right, there's no Gnome session when this is running, because the user hasn't logged in yet - however, this worked before; this appears to be new with Intrepid's Gnome (2.24?).
Short of modifying the gconf's XML files directly, is there a way to make some sort of proxy Gnome session? Or, any other suggestions?
(More details: this is python code that runs as root, but setuid's & setgid's to be me before setting my preferences using the "gconf" module from the python-gconf package.)

I can reproduce this by installing GConf 2.24 on my machine. GConf 2.22 works fine, but 2.24 breaks it.
GConf is failing to launch because D-Bus is not running. Manually spawning D-Bus and the GConf daemon makes this work again.
I tried to spawn the D-Bus session bus by doing the following:
import dbus
dummy_bus = dbus.SessionBus()
...but got this:
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.Spawn.ExecFailed: dbus-launch failed to autolaunch D-Bus session: Autolaunch error: X11 initialization failed.
Weird. Looks like it doesn't like to come up if X isn't running. To work around that, start dbus-launch manually (IIRC use the os.system() call):
$ dbus-launch
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-eAmT3q94u0,guid=c250f62d3c4739dcc9a12d48490fc268
DBUS_SESSION_BUS_PID=15836
You'll need to parse that output and inject the name/value pairs into your environment variables (you'll probably want to use os.putenv or os.environ). For my testing, I just used the shell and set the environment vars manually with export DBUS_SESSION_BUS_ADDRESS=blahblah..., etc.
Next, you need to launch gconftool-2 --spawn with those environment variables you received from dbus-launch. This will launch the GConf daemon. If the D-Bus environment vars are not set, the daemon will not launch.
Then, run your GConf code. Provided you set the D-Bus session bus environment variables for your own script, you will now be able to communicate with the GConf daemon.
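Roughly, the whole dance from Python might look something like this (just a sketch, I haven't tested this exact code against that stack; the GConf key at the end is only an example):
import os
import subprocess

# 1. Start a session bus and capture the NAME=VALUE pairs it prints.
proc = subprocess.Popen(['dbus-launch'], stdout=subprocess.PIPE)
output, _ = proc.communicate()
for line in output.decode().splitlines():
    name, _, value = line.partition('=')
    os.environ[name] = value   # DBUS_SESSION_BUS_ADDRESS, DBUS_SESSION_BUS_PID

# 2. Spawn the GConf daemon; it only starts if the bus address is in the environment.
subprocess.call(['gconftool-2', '--spawn'])

# 3. Now the python-gconf bindings should be able to reach the daemon.
import gconf
client = gconf.client_get_default()
client.set_bool('/apps/example/some_pref', True)   # example key only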
I know it's complicated.
gconftool-2 provides a --direct option that enables you to set GConf variables without needing to communicate with the server, but I haven't been able to find an equivalent option for the Python bindings (short of outputting XML manually).
Edit: For future reference, if anybody wants to run dbus-launch from within a normal bash script (as opposed to a Python script, as this thread is discussing), it is quite easy to retrieve the session bus address for use within the script:
#!/bin/bash
eval `dbus-launch --sh-syntax`
export DBUS_SESSION_BUS_ADDRESS
export DBUS_SESSION_BUS_PID
do_other_stuff_here

Well, I think I understand the question. It looks like your script just needs to start the dbus daemon, or make sure it's started. I believe "session" here refers to a D-Bus session (here is some evidence), not a Gnome session. D-Bus and gconf both run fine without Gnome.
Either way, faking an "active session" sounds like a pretty bad idea - GConf only looks for one because it actually needs it.
Perhaps we could see the script in a pastebin? I really should have seen it before making any comment.

Thanks, Ali & Jeremy - both your answers were a big help. I'm still working on this (though I've stopped for the evening).
First, I took the hint from Ali and was trying part of Jeremy's suggestion: I was using dbus-launch to run "gconftool-2 --spawn". It didn't work for me; I now understand why (thx, Jeremy) -- I was trying to use gconf from within the same python program that was launching dbus & gconftool, but its environment didn't have the environment variables - duh.
I set that strategy aside when I noticed gconftool-2's --direct option; internally, gconftool-2 is using an API that isn't exposed by the gconf python bindings. So, I modified python-gconf to expose the extra method, and once that builds (I had some unrelated problems getting this to work), we'll see if that fixes things - if it doesn't (and maybe if it does, because building those bindings seems to build all of gnome!), I'll find a better way to manage the environment variables in that first strategy.
(I'll add another answer here tomorrow either way)
And it's the next day: I ran into a little trouble with my modified python-gconf, which inspired me to try Jeremy's simpler idea, which worked fine - before doing the first gconf operation, I simply ran "dbus-launch", parsed the resulting name-value pairs, and added them directly to python's environment. Having done that, I ran "gconftool-2 --spawn". Problem solved.

How do I add a directory to system environment variable using python script [duplicate]

From what I've read, any changes to the environment variables in a Python instance are only available within that instance, and disappear once the instance is closed. Is there any way to make them stick by committing them to the system?
The reason I need to do this is because at the studio where I work, tools like Maya rely heavily on environment variables to configure paths across multiple platforms.
My test code is
import os
os.environ['FAKE'] = 'C:\\'
Opening another instance of Python and requesting os.environ['FAKE'] yields a KeyError.
NOTE: Portability will be an issue, but the small API I'm writing will be able to check OS version and trigger different commands if necessary.
That said, I've gone the route of using the Windows registry technique and will simply write alternative methods that will call shell scripts on other platforms as they become requirements.
You can use SETX at the command line.
By default SETX writes to the USER environment variables. To set or modify SYSTEM variables, use the /M flag:
import os
env_var = "BUILD_NUMBER"
env_val = "3.1.3.3.7"
# /M writes to the SYSTEM variables and needs an elevated (administrator) prompt
os.system("SETX {0} {1} /M".format(env_var, env_val))
"make them stick by committing them to the system?"
I think you are a bit confused here. There is no 'system' environment. Each process has its own environment as part of its memory. A process can only change its own environment, and it can set the initial environment of the processes it creates.
If you really do think you need to set environment variables for the whole system, you will need to change them in the place they are initially loaded from, such as the registry on Windows or your shell configuration files on Linux.
Under Windows it's possible for you to make changes to environment variables persistent via the registry with this recipe, though it seems like overkill.
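If you do go the registry route, the core of it is just a write to HKEY_CURRENT_USER\Environment - a minimal sketch with the standard winreg module (the variable name is the example from the question; already-running programs won't see the change until a WM_SETTINGCHANGE broadcast or a fresh login):
import winreg  # named _winreg on Python 2

# Persist a per-user environment variable by writing it to the registry.
key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, 'Environment', 0, winreg.KEY_SET_VALUE)
winreg.SetValueEx(key, 'FAKE', 0, winreg.REG_SZ, 'C:\\')  # visible to new processes
winreg.CloseKey(key)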
To echo Brian's question, what are you trying to accomplish? There is probably an easier way.
It seems there is a simpler solution for Windows:
import subprocess
subprocess.call(['setx', 'Hello', 'World!'], shell=True)
I don't believe you can do this; there are two work-arounds I can think of.
Setting a variable with os.putenv (or os.environ) only affects your own process and the processes you start from it, i.e. with os.system, popen, etc. Depending on what you're trying to do, perhaps you could have one master Python instance that sets the variable and then spawns new instances.
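A rough sketch of that master-instance idea (FAKE is just the example variable from the question):
import os
import subprocess
import sys

# The parent sets the variable in its own environment...
os.environ['FAKE'] = 'C:\\'
# ...and every child it spawns inherits a copy of it.
subprocess.call([sys.executable, '-c', "import os; print(os.environ['FAKE'])"])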
You could run a shell script or batch file to set it for you, but that becomes much less portable. See this article:
http://code.activestate.com/recipes/159462/
Think about it this way.
You're not setting shell environment variables.
You're spawning a subshell with some given environment variable settings; this subshell runs your application with the modified environment.
According to this discussion, you cannot do it. What are you trying to accomplish?
You are forking a new process, and it cannot change the environment of its parent process - just as you cannot change the parent shell's environment by starting a new shell process from the shell.
You might want to try Python Win32 Extensions, developed by Mark Hammond, which is included in ActivePython (or can be installed separately). You can learn how to perform many Windows-related tasks in Hammond's and Robinson's book.
Using PyWin32 to access Windows COM objects, a Python program can use the Environment property (a collection of environment variables) of the WScript.Shell object.
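For instance, something along these lines (a sketch; only reading is shown here - the 'User' and 'System' collections are the ones backed by the registry, so writes to them persist):
import win32com.client  # from the Python Win32 Extensions (pywin32)

shell = win32com.client.Dispatch('WScript.Shell')
user_env = shell.Environment('User')   # also 'System', 'Process', 'Volatile'
print(user_env('TEMP'))                # read a per-user variable (TEMP is usually defined there)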
Try py-setenv, which will allow you to set variables via the registry:
python -m pip install py-setenv
From within Python? No, it can't be done!
If you are not bound to Python, you should consider using shell scripts (sh, bash, etc.). The "source" command allows you to run a script that modifies the environment, and the changes "stick" in the shell you sourced the script from, because the shell executes the script directly rather than creating a sub-process to execute it.
This is quite portable - you can use Cygwin on Windows to do it.
In case someone needs this info: I realize this was asked 7 years ago, but even I forget how sometimes.
Yes, there is a way to make them "stick" in Windows. Go to Control Panel, System, Advanced system settings; when the System Properties window opens you should see a button for Environment Variables. (The exact path to get there differs a little depending on which OS version you're using - google it.)
Click that button and the Environment Variables window will open. It has two panes; the top one should be "User variables for yourusername". Choose "New", then simply set the variable. For instance, one of mine is "Database_Password = mypassword".
Then in your app you can access it like this: import os, then os.environ.get('Database_Password'), e.g. password = os.environ.get('Database_Password').

Connecting Spyder to Docker Containers

I've started working at a company that develops code using docker containers, which I have had no experience with thus far. The nature of my work is Data Science-y, and so I find Spyder to be an invaluable tool for such work.
I would like to connect spyder to the docker containers that are being used by my colleagues, but I am not sure how to, or if this is even possible. I was not able to find helpful material on this that I could comprehend.
I considered ditching Spyder in favor of VS Code, as it has the ability to connect to docker containers. But my best attempts at trying to recreate Spyder's functionality in VS Code were only partially successful.
Given the popularity of both Spyder and Docker, I thought this would be a straightforward thing to do. Anyway, I would greatly appreciate any information you might have on this topic. I suppose I could consider other IDEs if you are aware of any that can do this. The key features I need are an interactive Python environment that lets me run scripts inside the container, keeps the variables around after a script runs so I can use them to find where things are going wrong and easily create plots, and ideally also gives me access to a debugger like Spyder's.
I obviously don't want to bloat the Dockerfile by installing Spyder inside the container; I'd like something that runs on the outside but can connect into the container and use the Python environment defined there.
The following two links were not helpful to me:
Connect Spyder to a console in a docker container on a remote host
Connecting Spyder to Remote Jupyter Notebook in a Docker Container
I'm not sure anyone cares, but I can briefly describe my solution. You can set up ssh in the docker container and then use Spyder's "connect to an existing kernel" feature to use Spyder as a sort of frontend for an ipython kernel running in the docker container. This feature allows you to connect to remote ipython kernels through ssh. It's a bit of an annoyance to set up, so if you can get by with just ipython sessions and ipdb inside the container, that's probably the way to go. But if you're really stuck trying to debug something and want the full Spyder frontend, this works reliably, in my limited experience.
If you do try this, just note that some versions of Spyder seem to simply crash anytime you try to connect to any existing kernel, in a docker container or not. So if the latest version doesn't work, try some other ones...
EDIT: I'm now especially not sure anyone cares, but I've abandoned Spyder since posting this question. It took too long to actually start getting work done when trying to rely on Spyder inside docker, which itself was buggy in its ability to connect to remote kernels and with respect to its debugger functionality. I got inconsistent experiences on Windows/Ubuntu. Instead, if I don't need to do any data visualizations, I just use ipython in the container.
If I need to do data visualizations, I usually do whatever I need to do in docker to get the data I need saved to a file. Then, I have a conda environment that I use to analyze the data in the file. Even outside of docker I've stopped using Spyder. Instead I write my files in vim, run them at the command line, and use breakpoint() in my code for debugging; I set export PYTHONBREAKPOINT=ipdb.set_trace or IPython.embed, depending on whether I want ipdb to step through the code or just need to try some code out at a particular spot in the program, in an interactive shell with all the program's variables defined at that point. In fact, you can launch an ipython shell from ipdb by issuing from IPython import embed; embed(). This has been working well for me, and may work for others depending on the nature of the work/company. It is very freeing not having to rely on bulky tools like Spyder.
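For anyone curious, a minimal sketch of that breakpoint()/PYTHONBREAKPOINT workflow (ipdb and IPython have to be installed for those hooks to resolve; the file name is just an example):
# buggy_script.py
# Run e.g.:
#   PYTHONBREAKPOINT=ipdb.set_trace python buggy_script.py   # step through with ipdb
#   PYTHONBREAKPOINT=IPython.embed python buggy_script.py    # interactive shell at that spot
#   PYTHONBREAKPOINT=0 python buggy_script.py                # breakpoints ignored

def compute(values):
    total = sum(values)
    breakpoint()   # Python 3.7+; calls whatever callable PYTHONBREAKPOINT names
    return total / len(values)

if __name__ == '__main__':
    print(compute([1, 2, 3, 4]))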

How to debug a python script launched by a third party app

I'm using Linux Eclipse (pydev) as IDE to develop python scripts that are launched by an application written in C++. I can debug the python script without problems in the IDE, but the environment is not real (the C++ program sends and receives messages through the stdin/stdout and it's a complex communication channel that I can't fully reproduce writing the messages by hand).
Until now I was using log messages to debug (poor man's debug) but it's getting too complex. When I do something similar in PHP I can just leave xdebug listening and add breakpoints in Netbeans. Very neat and easy. Is it possible to do something like that in Python 3.X (with Eclipse or other IDE)?
NOTE: I know there is a Pydev / Attach to Process functionality, but it doesn't work. Always fails to attach.
NOTE2: There is also a built-in breakpoint() in Python 3.7, but it hooks into a debugger and that also fails; the IDE never gets control.
After some research, this is the best option I have found. Without any other solution provided, I post it just in case anyone has the same problem.
Python has an integrated debugger: pdb. It works as a module, which means you can't use it if you don't control the window the script runs in (i.e. if you don't launch the script yourself).
To solve this, some coders have created modules that add a layer on top of pdb. I have tried a few, and the easiest and still visually interesting one is rpudb (but also have a look at this).
To install it:
pip3 install https://github.com/msbrogli/rpudb/archive/master.zip
(if you install it using the plain pip3 install rpudb command, it will install an old version that only works with Python 2)
Then, you use it just adding an import and a function call:
import rpudb
.....
rpudb.set_trace('127.0.0.1', 4444)
.....
Launch the program and it will stop at the set_trace call. To debug it (and continue), open a terminal and launch telnet like this:
telnet 127.0.0.1 4444
You will have a visual debugger in front of you, with the advantage that you can debug not only local programs but also remote ones (just change the IP).
I was able to attach PyCharm to a running Python process and use breakpoints with PyCharm's attach-to-process feature.
I created a bash script which execs a Python script; it should work the same with C++.

Run pdb without stdin/stdout using FIFO

I am developing a FUSE filesystem in Python. The problem is that after mounting the filesystem I have no access to stdin/stdout/stderr from my FUSE script. I don't see anything, not even tracebacks. I am trying to launch pdb like this:
import pdb
pdb.Pdb(None, open('pdb.in', 'r'), open('pdb.out', 'w')).set_trace()
This works, but it is very inconvenient. I want to make pdb.in and pdb.out FIFO files but don't know how to connect them correctly. Ideally I want to type commands and see output in one terminal, but I'd be happy even with two terminals (typing commands in one and seeing output in another). Questions:
1) Is there a better/other way to run pdb without stdin/stdout?
2) How can I redirect stdin to the pdb.in FIFO (everything I type must go to pdb.in)? How can I redirect pdb.out to stdout? (I had strange errors with "cat pdb.out", but maybe I don't understand something.)
OK - exactly what I want has been done in http://pypi.python.org/pypi/rpdb/0.1.1 .
Before starting the python app
mkfifo pdb.in
mkfifo pdb.out
Then when pdb is called you can interact with it using these two cat commands, one running in the background
cat pdb.out & cat > pdb.in
Note that readline support (i.e. the up arrow) does not work.
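If you'd rather stick with plain pdb instead of rpdb, the snippet from the question can point straight at those FIFOs - a sketch (note that each open() blocks until the matching cat attaches on the other side):
import pdb

# Assumes `mkfifo pdb.in pdb.out` was run beforehand and the
# `cat pdb.out & cat > pdb.in` pair is running in another terminal.
pdb.Pdb(stdin=open('pdb.in', 'r'), stdout=open('pdb.out', 'w')).set_trace()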
I just ran into a similar issue in a much simpler use-case: debugging a simple Python program run from the command line with a file piped into sys.stdin, meaning there was no way to use the console for pdb.
I ended up solving it by using wdb.
Quick rundown for my use-case. In the shell, install both the wdb server and the wdb client:
pip install wdb.server wdb
Now launch the wdb server with:
wdb.server.py
Now you can navigate to localhost:1984 with your browser and see an interface listing all Python programs running. The wdb project page above has instructions on what you can do if you want to debug any of these running programs.
As for a program under your control, you can debug it from the start with:
wdb myscript.py --script=args < and/stdin/redirection
Or, in your code, you can do:
import wdb; wdb.set_trace()
This will pop up an interface in your browser (if local) showing the traced program.
Or you can navigate to the wdb.server.py port to see all ongoing debugging sessions on top of the list of running Python programs, which you can then use to access the specific debugging session you want.
Notice that the commands for navigating the code during the trace are different from the standard pdb ones, for example, to step into a function you use .s instead of s and to step over use .n instead of n. See the wdb README in the link above for details.

Python IDE on Linux Console

This may sound strange, but I need a better way to build Python scripts than opening a file with nano/vi, changing something, quitting the editor, and typing in python script.py, over and over again.
I need to build the script on a webserver without any GUI. Any ideas how I can improve my workflow?
put this line in your .vimrc file:
:map <F2> :w\|!python %<CR>
now hitting <F2> will save and run your python script
You should give the screen utility a look. While it's not an IDE, it is a kind of window manager on the terminal -- i.e. you can have multiple windows and switch between them, which makes tasks like this in particular much easier.
You can execute shell commands from within vim.
Using emacs with python-mode you can execute the script with C-c C-c
You can try IPython. Using its edit command will bring up your editor (nano/vim/etc.); you write your script, and then on exiting you're returned to the IPython prompt and the script is automatically executed.
When working with Vim on the console, I have found that using "tabs" in Vim, instead of having multiple Vim instances suspended in the background, makes handling multiple files in Vim more efficient. It takes a bit of getting used to, but it works really well.
You could run XVNC over ssh, which is actually passably responsive for doing this sort of thing and gets you a windowing GUI. I've done this quite effectively over really asthmatic Jetstart DSL services in New Zealand (128K up/ 128K down =8^P) and it's certainly responsive enough for gvim and xterm windows. Another option would be screen, which lets you have multiple textual sessions open and switch between them.
There are actually two questions here. The first asks for a console IDE for Python, and the second asks for a better dev/test/deploy workflow.
While there are many ways to write Python code in the console, I find a combination of screen, vim and python/ipython the best, as they are usually available on most servers. If you are doing long sessions, I find emacs + python-mode typically involves less typing.
For a better workflow, I would suggest setting up a development environment. You can easily set up a Linux VM on your desktop/laptop these days - there isn't an excuse not to, even if it's only for hobby projects. That opens up a much larger selection of IDEs available to you, such as:
GUI versions of VI and friends
Remote file editing with tramp and testing locally with python-mode inside Emacs
http://www.netbeans.org
and of course http://eclipse.org with the PyDev plugin
I would also set up an SCM to keep track of changes, so that you can do better QA and use it to deploy tested changes onto the server.
For example I use Mercurial for my pet projects and I simply tag my repo when it's ready and update the production server to the tag when I deploy. On the devbox, I do:
(hack hack hack, test test test)
hg ci -m 'comment'
hg tag
hg push
Then I jump onto the server and do the following when I deploy:
hg update
restart service/webserver as needed
Well, apart from using one of the more capable console editors (Emacs or vi come to mind), why do you have to edit it on the web server itself? Just edit it remotely if constant FTP/WebDAV transfer seems too cumbersome.
Emacs has Tramp Mode; gedit on Linux and BBEdit on the Mac support remote editing too, as do quite a large number of other editors. In that case you would just edit it on a more capable desktop and restart the script from a shell window.
For what it's worth, VIM alone can do the same tasks as previously posted. I have had the same problem with testing Python from the command line.
My solution was to use the screen command. I split the screen vertically, run Python in one shell instance, and usually edit Python code with VIM on the second screen.
Command to install screen:
sudo apt-get install screen
The screen package has a bit of a learning curve, but there isn't any mystery if you can remember the "Ctrl-A ?" command that contains all knowledge.
No GUI is required!