This concerns importing my own Python modules in an HTCondor job.
Suppose 'mymodule.py' is the module I want to import, saved in a directory called XDIR.
In another directory called YDIR, I have written a file called xImport.py:
#!/usr/bin/env python
import os
import sys
print sys.path
import numpy
import mymodule
and a condor submit file:
executable = xImport.py
getenv = True
universe = Vanilla
output = xImport.out
error = xImport.error
log = xImport.log
queue 1
The result of submitting this is that, in xImport.out, sys.path is printed out, showing XDIR. But in xImport.error, there is an ImportError saying 'No module named mymodule'. So it seems that the path to mymodule is in sys.path, but Python does not find it. I'd also like to mention that the error message says the ImportError originates from the file
/mnt/novowhatsit/YDIR/xImport.py
and not YDIR/xImport.py.
How can I edit the above files to import mymodule.py?
When Condor runs your process, it creates a directory on that machine (usually on a local hard drive) and sets that as the working directory. That's probably the issue you are seeing: if XDIR is local to the machine where you run condor_submit, then its contents don't exist on the remote machine where xImport.py is running.
Try using the transfer_input_files mechanism in your submit file (see http://research.cs.wisc.edu/htcondor/manual/v7.6/2_5Submitting_Job.html) to copy mymodule.py to the remote machines.
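For example, adding lines like these to the submit description (a sketch; the absolute path to XDIR is an assumption) stages mymodule.py into the job's scratch directory on the execute machine, next to the running script:
transfer_input_files = /path/to/XDIR/mymodule.py
should_transfer_files = YES
when_to_transfer_output = ON_EXIT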
I have a Python file that I tested locally using VS Code and the shell. The file contains relative imports and worked as intended on my local machine.
After uploading the associated files to Colab I did the following:
py_file_location = '/content/gdrive/content/etc'
os.chdir(py_file_location)
# to verify whether I got the correct path
!python3
>>> import os
>>> os.getcwd()
output: '/content/gdrive/MyDrive/content/etc'
However, when I run the file I get the following error:
ImportError: attempted relative import with no known parent package
Why is that? Using the same file system and similar shell commands, the file worked locally.
A solution that worked for me was to turn my relative paths into absolute ones, so instead of using:
directory = pathlib.Path(__file__).parent
sys.path.append(str(directory.parent))
sys.path.append(str(directory.parent.parent.parent))
__package__ = directory.name
I needed to make the path absolute by using resolve():
import pathlib
import sys

directory = pathlib.Path(__file__).resolve().parent
sys.path.append(str(directory.parent))
sys.path.append(str(directory.parent.parent.parent))
__package__ = directory.name
It is however still not fully clear to me why this is required when running on Colab but not when running locally.
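A plausible explanation: when the interpreter is launched with a relative script path (which can happen after an os.chdir in a notebook), __file__ is itself relative, and chaining .parent on a relative path collapses to '.' instead of walking up the real directory tree. A minimal sketch of the difference (paths are illustrative):
import pathlib

p = pathlib.Path('etc/run.py')    # what a relative __file__ can look like
print(p.parent)                   # etc
print(p.parent.parent)            # .  (the walk upward stops here)
print(p.resolve().parent.parent)  # an absolute path, e.g. /content/gdrive/MyDrive/content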
I am trying to import modules while running my main Python script, using a smaller setup.py script. However, the importlib call importlib.util.spec_from_file_location(name, location) doesn't appear to be detecting my small Python script. Presumably I'm not filling in the name or location fields correctly.
Example Script A (setup.py):
import os
import pandas as pd
print("success!") # So I can see it has run.
Example Script B (my_script.py):
import importlib
setup_path = ("/home/solebay/My Project Name/")
start_up_script = importlib.util.spec_from_file_location("setup.py", setup_path)
module = importlib.util.module_from_spec(start_up_script)
Running the above snippet returns:
AttributeError: 'NoneType' object has no attribute 'loader'
I subsequently investigated by running type(start_up_script); the result it gives is <class 'NoneType'>.
The paths are correct. I verified this by running the following:
"/home/solebay/My Project Name/"
sudo python3 "/home/solebay/My Project Name/setup.py"
These printed the messages 'Is a directory' and 'success!' respectively.
Note: Maurice Meyer succeeded in answering my main question, so I have marked it as correct. However, I have not achieved my main goal, namely importing modules via another script. So if that is your aim, this question might not be appropriate for you.
The location argument passed to spec_from_file_location has to be the full path to the python script:
import importlib.util
spec = importlib.util.spec_from_file_location(
    name='something__else',  # name is not related to the file, it's the module name!
    location='/tmp/solebay/My Project Name/setup.py'  # full path to the script
)
my_mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(my_mod)
print(my_mod)
Out:
success!
<module 'something__else' from '/tmp/solebay/My Project Name/setup.py'>
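As a follow-up on the goal of importing modules via another script: once the spec is valid, the loaded module can also be registered in sys.modules so other code in the process can import it by name (a sketch following the standard importlib recipe; the module name 'setup_module' is arbitrary):
import importlib.util
import sys

spec = importlib.util.spec_from_file_location(
    name='setup_module',
    location='/home/solebay/My Project Name/setup.py',
)
module = importlib.util.module_from_spec(spec)
sys.modules['setup_module'] = module  # register before executing the module body
spec.loader.exec_module(module)

import setup_module  # now resolvable by name anywhere in the process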
The main module sits within the runner package and executes code in the other packages. The main module can also update the other packages, and when that happens I want to reload them in order to pick up the new functions/modules that were added to those packages.
Project Structure
|--runner
|----main.py
|--core
|----module_1.py
|--configurations
|--utils
Reloading after such an update doesn't work, though. I tried the following:
importlib.reload - it only reloads a single module, and applying it recursively over sys.modules didn't add the new modules to the import tree. Example: if after the update "core" received a new module "module_new.py" that is imported in "module_1.py", it is not recognized after the reload.
IPython.lib.deepreload - it didn't work either.
I've been stuck with this issue for some time, and haven't found any working solution yet.
Suggestions? Thanks
I fixed the issue by restarting the entire program from an outer execution script that loops while the program exits with a dedicated code (exit code 2: update required):
do
{
    $process = Start-Process python -ArgumentList $CommandLine -Verb RunAs -PassThru -WindowStyle Minimized -Wait
} while ($process.ExitCode -eq 2)
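On the Python side, the program only needs to exit with code 2 when it detects that an update happened (a sketch; updates_available is a hypothetical placeholder for the project's own check):
import sys

def updates_available():
    return False  # hypothetical: replace with the real update-detection logic

# ... normal program run ...
if updates_available():
    sys.exit(2)  # exit code 2 tells the outer loop to relaunch the program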
An import statement reloads a module if it is no longer present in the sys.modules dict:
import sys

# import some standard (non-updatable) modules
import numpy as np

# save the set of non-reloadable modules on the first run,
# and delete reloadable modules on subsequent runs
if 'init_modules' not in globals():
    init_modules = set(sys.modules.keys())
else:
    for m in list(sys.modules.keys()):
        if m not in init_modules:
            del sys.modules[m]

# import reloadable packages and modules; because they were just removed
# from sys.modules, Python re-executes them from disk, picking up any
# newly added submodules
import MyPackage
I'm trying to debug a project that has a lot of additional libraries added to PYTHONPATH at runtime before launching the python file.
I was not able to add those commands with a tasks.json file prior to debugging the Python file in Visual Studio Code (see the post Visual Studio Code unable to set env variable paths prior to debugging python file), so I'm just adding them via an os.system("..") call.
I'm only showing 1 of the libraries added below:
# Standard library imports
import os
import sys
os.system("SET PYTHONPATH=D:\\project\\calibration\\pylibrary\\camera")
# Pylibrary imports
from camera import capture
When I debug, it fails on the line from camera import capture with:
Exception has occurred: ModuleNotFoundError
No module named 'camera'
File "D:\project\main.py", line 12, in <module>
from camera.capture import capture
I also tried
os.environ['PYTHONPATH']="D:\\project\\pylibrary\\camera" and I still get the same error
Why is it not remembering the PYTHONPATH while running the script?
How else can I define the PYTHONPATH while running Visual Studio Code and debugging the project file?
I know I can add the path to the env variables in Windows, but that loads too many libraries and I want the path remembered only while the Python script is executed.
Thanks
Using os.system() won't work because it starts a new cmd.exe shell and sets the env var in that shell only; that doesn't affect the env vars of the Python process that is already running. Assigning to os.environ['PYTHONPATH'] won't work either, because at that point your Python process has already read that env var and cached its value, if any, into the sys.path variable. The solution is to append to sys.path directly:
import sys
sys.path.append(r"D:\project\calibration\pylibrary\camera")
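The append just has to run before the import that needs it, in place of the os.system call (a sketch assuming the layout from the question):
import sys

# extend the module search path for this process only
sys.path.append(r"D:\project\calibration\pylibrary\camera")

# Pylibrary imports (now resolvable)
from camera import capture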
I want to establish a standard script file that is imported into python at startup using the PYTHONSTARTUP environment variable. Additionally, I want to be able to conveniently reload the same script file after modifying it in an external editor, to test its behavior after the modification.
I created a ~/.pythonrc.py file and set it as PYTHONSTARTUP:
import os
import imp

def load_wb():
    _cwd = os.getcwd()
    os.chdir(os.path.join(os.getenv('HOME'), 'Skripte'))
    import workbench
    imp.reload(workbench)
    os.chdir(_cwd)

load_wb()
This is my very minimal script file for the start:
def dull_function():
    print('Not doing much...')

print('Workbench loaded.')
When I launch Python 3.1.2, .pythonrc is successfully executed and workbench.py is imported, but dull_function does not appear in the global namespace or in a local one. What do I have to do differently?
Move the import statement outside the function. You're basically importing the workbench module into the function scope, not the global scope (try calling workbench.dull_function from inside load_wb to see for yourself).
In other words, change your code to:
import os
import imp

import workbench

def load_wb():
    _cwd = os.getcwd()
    os.chdir(os.path.join(os.getenv('HOME'), 'Skripte'))
    imp.reload(workbench)
    os.chdir(_cwd)

load_wb()
Not really solving your immediate problem, but you might appreciate using the IPython shell for testing in that case. Using its autoreload functionality, you can mark a module for (re)loading before each executed line if needed.
That means you can %aimport workbench, and then every time you run some_function_Im_testing(), workbench will be reloaded if it changed. Just add the autoreload lines to the configuration file for IPython and you're done.
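In current IPython versions this is the autoreload extension (a sketch; mode 1 reloads only the modules explicitly marked with %aimport):
%load_ext autoreload
%autoreload 1
%aimport workbench

workbench.dull_function()  # reloaded automatically if workbench.py changed on disk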