Use PYTHONSTARTUP to interactively test a Python file in the interpreter

I want to establish a standard script file that is imported into python at startup using the PYTHONSTARTUP environment variable. Additionally, I want to be able to conveniently reload the same script file after modifying it in an external editor, to test its behavior after the modification.
I created a ~/.pythonrc.py file and set it as PYTHONSTARTUP:
import os
import imp

def load_wb():
    _cwd = os.getcwd()
    os.chdir(os.path.join(os.getenv('HOME'), 'Skripte'))
    import workbench
    imp.reload(workbench)
    os.chdir(_cwd)

load_wb()
This is my very minimal script file for the start:
def dull_function():
    print('Not doing much...')

print('Workbench loaded.')
When I launch Python 3.1.2, .pythonrc.py is executed successfully and workbench.py is imported, but dull_function does not appear in the global namespace or in a local one. What do I have to do differently?

Move the import statement outside the function. You're basically importing the workbench module into the function scope, not the global scope (Try calling workbench.dull_function from inside load_wb to see for yourself).
In other words, change your code to:
import os
import imp
import workbench

def load_wb():
    _cwd = os.getcwd()
    os.chdir(os.path.join(os.getenv('HOME'), 'Skripte'))
    imp.reload(workbench)
    os.chdir(_cwd)

load_wb()
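With that change, a quick interactive check might look like this (a sketch):
>>> workbench.dull_function()
Not doing much...
>>> load_wb()  # after editing workbench.py in the external editor
Workbench loaded.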

Not really solving your immediate problem, but you might appreciate using the IPython shell for testing in that case. Using the autoimport functionality, you can mark a module for (re)loading on each executed line if needed.
That means you can %aimport workbench and then every time you run some_function_Im_testing(), workbench will be reloaded if it changed. Just add the autoimport line to the configuration file for IPython and you're done.
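As a sketch of what that configuration could look like, assuming a recent IPython where %aimport comes from the autoreload extension, the relevant lines in ipython_config.py would be roughly:
c.InteractiveShellApp.extensions = ['autoreload']
c.InteractiveShellApp.exec_lines = [
    '%autoreload 1',       # reload only %aimport-ed modules before each execution
    '%aimport workbench',  # mark workbench for automatic reloading
]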

Related

Cannot set pythonpath environment variable from python

I'm trying to debug a project that has a lot of additional libraries added to PYTHONPATH at runtime before launching the python file.
I was not able to add those commands in the tasks.json file prior to debugging the Python file in Visual Studio Code (see the post Visual Studio Code unable to set env variable paths prior to debugging python file), so I'm just adding them via an os.system("..") call.
I'm only showing 1 of the libraries added below:
# Standard library imports
import os
import sys
os.system("SET PYTHONPATH=D:\\project\\calibration\\pylibrary\\camera")
# Pylibrary imports
from camera import capture
When I debug, it fails on line from camera import capture with:
Exception has occurred: ModuleNotFoundError
No module named 'camera'
File "D:\project\main.py", line 12, in <module>
from camera.capture import capture
I also tried os.environ['PYTHONPATH'] = "D:\\project\\pylibrary\\camera" and I still get the same error.
Why is it not remembering the pythonpath while running the script?
How else can I define the pythonpath while running Visual Studio Code and debugging the project file?
I know I can add the PYTHONPATH to the environment variables in Windows, but that loads too many libraries; I want the path to apply only while the Python script is executing.
Thanks
Using os.system() won't work because it starts a new cmd.exe shell and sets the environment variable in that shell; that doesn't affect the environment of the Python process itself. Assigning to os.environ['PYTHONPATH'] won't work either, because by that point your Python process has already cached the value, if any, of that variable in sys.path. The solution is to append to sys.path directly:
import sys
sys.path.append(r"D:\project\calibration\pylibrary\camera")
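Note that the append has to happen before any import that relies on it. A minimal sketch of the top of main.py, using the path from the question:
# Standard library imports
import os
import sys

# extend the module search path for this process only;
# this must run before any import that needs it
sys.path.append(r"D:\project\calibration\pylibrary\camera")

# Pylibrary imports
from camera import capture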

executing standalone fabric script by calling it by its name, without the .py extension

I have a fabric script called fwp.py that I run without calling it through fab, by using:
if __name__ == '__main__':
    # imports for standalone mode only
    import sys
    import fabric.main
    fabric.main.main(fabfile_locations=[__file__])
The thing is, I then have to invoke the script as fwp.py. I'd like to rename it to fwp to be able to call it as fwp. But doing that results in:
Fatal error: Couldn't find any fabfiles!
Is there a way to make Python/Fabric import this file, despite the lack of a ".py" extension?
To reiterate and clarify:
I'm not using the "fab" utility (e.g. as fab task task:parameter); just calling my script as fwp.py task task:parameter, and would like to be able to call it as fwp task task:parameter.
Update
It's not a duplicate of this question. The question is not "How to run a stand-alone fabric script?", but "How to do so with a script that lacks a .py extension?"
EDIT: Original answer corrected
The fabric.main.main() function automatically adds .py to the end of supplied fabfile locations (see https://github.com/fabric/fabric/blob/master/fabric/main.py#L93). Unfortunately that function also uses Python's import machinery to load the file so it has to look like a module or package. Without reimplementing much of the fabric.main module I don't think it will be possible. You could try monkey-patching both fabric.main.find_fabfiles and fabric.main.load_fabfiles to make it work.
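Alternatively, here is a workaround sketch that avoids monkey-patching: have the extensionless script copy itself to a temporary .py file and point Fabric at the copy. fabric.main.main() and its fabfile_locations argument are as used elsewhere in this thread; the copying logic is illustrative, not a definitive fix.
#!/usr/bin/env python
from fabric.api import *
import fabric.main

def do():
    local('echo "Hello World"')

if __name__ == '__main__':
    import os, shutil, tempfile
    # copy this extensionless script to a temporary .py file so that
    # Fabric's import machinery can load it as a module
    tmpdir = tempfile.mkdtemp()
    tmpfile = os.path.join(tmpdir, 'fwp.py')
    shutil.copy(__file__, tmpfile)
    try:
        fabric.main.main(fabfile_locations=[tmpfile])
    finally:
        shutil.rmtree(tmpdir)
When Fabric imports the copy, __name__ is no longer '__main__', so the block doesn't recurse.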
Original answer (wrong)
I can get this to work unaltered on a freshly installed fabric package. The following executes with the filename fwp and executable permission, on Fabric 1.10.1 and Python 2.7. I would just try upgrading fabric.
#!/usr/bin/env python
from fabric.api import *
import fabric.main

def do():
    local('echo "Hello World"')

if __name__ == '__main__':
    fabric.main.main(fabfile_locations=[__file__])
Output:
$ ./fwp do
Hello World
Done

Python does not show code changes from imported file

I am using a Linux Python shell, and each time I make changes to the imported file I need to restart the shell (I tried reimporting the file, but the changes were not reflected).
I have a definition in a file called handlers.py
def testme():
    print "Hello I am here"
I import the file in the python shell
>>> import handlers as a
>>> a.testme()
Hello I am here
I then change the print statement to "Hello I am there" and reimport handlers, but it does not show the change. Why not?
Using Python 2.7 with Mint 17.1
You need to explicitly reload the module, as in:
import lib  # first import
# later ...
import imp
imp.reload(lib)  # lib being the module which was imported before
Note that the imp module is pending deprecation in favor of importlib; as of Python 3.4 one should use importlib.reload.
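In Python 3.4+ that looks like this (a minimal sketch):
import importlib
import handlers

importlib.reload(handlers)  # re-executes handlers.py and updates the module in place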
You should use reload every time you make a change, then import again:
reload(handlers)
import handlers as a
As an alternative to reload, you can use watchdog. A simple program using watchdog can monitor directories and log the events generated as files change; a sketch follows the platform list below.
From the website:
From the website
Supported Platforms
Linux 2.6 (inotify)
Mac OS X (FSEvents, kqueue)
FreeBSD/BSD (kqueue)
Windows (ReadDirectoryChangesW with I/O completion ports; ReadDirectoryChangesW worker threads)
OS-independent (polling the disk for directory snapshots and comparing them periodically; slow and not recommended)
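A minimal sketch of such a program, assuming watchdog is installed (pip install watchdog); it watches the current directory and prints modification events:
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class PrintChanges(FileSystemEventHandler):
    def on_modified(self, event):
        # called by the observer thread whenever a watched file changes
        print('%s was modified' % event.src_path)

observer = Observer()
observer.schedule(PrintChanges(), path='.', recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()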

import local python module in HTCondor

This concerns the importing of my own python modules in a HTCondor job.
Suppose mymodule.py is the module I want to import, saved in a directory called XDIR.
In another directory called YDIR, I have written a file called xImport.py:
#!/usr/bin/env python
import os
import sys
print sys.path
import numpy
import mymodule
and a condor submit file:
executable = xImport.py
getenv = True
universe = Vanilla
output = xImport.out
error = xImport.error
log = xImport.log
queue 1
The result of submitting this is that sys.path is printed to xImport.out, and it shows XDIR. But xImport.error contains an ImportError saying 'No module named mymodule'. So it seems that the path to mymodule is in sys.path, but Python does not find it. I'd also like to mention that the error message says the ImportError originates from the file
/mnt/novowhatsit/YDIR/xImport.py
and not YDIR/xImport.py.
How can I edit the above files to import mymodule.py?
When condor runs your process, it creates a directory on that machine (usually on a local hard drive) and sets it as the working directory. That's probably the issue you are seeing: if XDIR is local to the machine where you run condor_submit, then its contents don't exist on the remote machine where xImport.py is running.
Try using the submit-file transfer_input_files mechanism (see http://research.cs.wisc.edu/htcondor/manual/v7.6/2_5Submitting_Job.html) to copy mymodule.py to the remote machines.
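A sketch of the submit file with file transfer enabled; transfer_input_files and should_transfer_files are standard submit commands, but the path to mymodule.py is an assumption and has to match your setup:
executable = xImport.py
getenv = True
universe = vanilla
output = xImport.out
error = xImport.error
log = xImport.log
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
# copy the module into the job's scratch directory on the execute machine
transfer_input_files = /path/to/XDIR/mymodule.py
queue 1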

Pydevd with virtual code or (source provider)

We have Python source code stored in a SQL database; the code is assembled into a virtual Python module and can be executed.
We want to debug these modules, but then of course the Eclipse debugger host doesn't know where to find their source code.
Is there a way to provide pydevd with the location of the source code, even if that means writing the files to disk?
Write it to disk and, when compiling, pass that filename for the code (when you're not in debug mode, just don't write it and pass '<string>' as the filename).
See the example below:
from tempfile import mktemp

my_code = '''
a = 10
print a
'''

tmp_filename = mktemp('.py', 'temp_file_')
with open(tmp_filename, 'w') as f:
    f.write(my_code)

obj = compile(my_code, tmp_filename, 'exec')
exec obj  # Place breakpoint here: when stepping in it should get to the code.
You need to add the module to PYTHONPATH in the Eclipse project settings and import it using a standard Python import. The PyDev debugger should then find it without any problems.
