Trouble with modules while embedding python in C++ - python

I'm making a .dll for an .exe program and embedding Python in it. It worked fine with this simple .py script:
from time import *
##import OptimRestriction
def test_callsign(b):
    ...(simple script)
    return
What I did was copy the .py program and the Dll and Lib folders into the xxx.exe folder, as described here.
But as soon as I uncomment the import of OptimRestriction, the debug session crashes. First it starts loading symbols when the thread that initializes and handles Python is started: _ctypes.pyd, _sockets.pyd, sst.pyd, harshlib.pyd, unicodedata.pyd, tkinter.pyd, all of them modules that OptimRestriction does not use.
The error given after the debug crashes is:
Unhandled exception at 0x1E0AA0C5 (python27.dll) in xxx.exe: 0xC0000005: Access violation reading location 0x00000004.
And it breaks on the _callthreadstart function.
OptimRestriction is a long program that imports a lot of modules (that are also in the .exe folder). Here's the list of its imports:
from GrafFunc import *
from LogFunc import *
from DinamicaFunc import *
from Dinamica2 import *
from CDR import *
...
import sys
import cProfile
What it seems to me is that the thread takes too long to start because the debugger spends a long time loading those files, and so it fails. Am I correct? And if so, why is it loading those files if OptimRestriction and its imports don't use them?
Edit: New information. It crashes saying "Can't import module" on this line:
pModule = PyImport_Import(pName);

After long hours of reducing the problem to a few lines of code, I found out that the problem was in the lines where some of the modules opened .txt files to read.
I thought that keeping the .txt files in the same folder as the .py programs was the correct thing to do, but it turns out I needed to copy them to the folder of my C++ plugin. (I think that's because, while I'm debugging, the working directory is changed to my plug-in folder, since Python's reading and writing is done to/from there.)
Problem solved!
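A defensive way to avoid this whole class of problem is to resolve data files relative to the module's own location rather than the process's working directory, which an embedding host or debugger may have changed. A minimal sketch (the read_data helper and file name are illustrative, not from the original code):

```python
import os

# Directory containing this .py file, independent of os.getcwd(),
# which an embedding host application may have changed.
MODULE_DIR = os.path.dirname(os.path.abspath(__file__))

def read_data(name):
    # Look for a data file sitting next to the .py files,
    # not in whatever the current working directory happens to be.
    path = os.path.join(MODULE_DIR, name)
    with open(path) as f:
        return f.read()
```

With this pattern the .txt files can stay next to the .py programs, regardless of where the host .exe runs from.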

Related

Python: Why is this import failing with a FileNotFoundError despite other files in the same folder importing correctly?

I'm currently dealing with a strange bug when running a Python script from SSIS/a SQL Server Agent Job after promoting to a production server. This issue didn't arise in the dev/staging environments, only after promoting to production. When importing several other files from the same folder the main script is located in, no error is raised, but on the final import I'm getting an error of the form 'FileNotFoundError(2, 'No such file or directory')'; the relevant portions of the script look something like this:
(The issues arise before loading the config file for the application which parameterizes the logging location, hence the ugly fallback logging here)
#!Python3.8
import sys
import os
sys.path.append(site-packages path)
from xyz import obj1, obj2
from abc import obj3
# etc.

# Additional try/except block to figure out specific error for this import
try:
    import script_in_question
except Exception as e:
    with open(fallback log path, 'w') as f:
        f.write(f"{sys.argv}")
        f.write(f"Could not load script_in_question: {repr(e)} \n {os.getcwd()}")
        f.write(f"\n {repr(e)}")
Weirdly, when logging sys.path, I can see the folder containing both the successfully imported scripts and the script in question; os.getcwd() reports the correct working directory, and this error cropped up whether I was importing the script directly or using notation like 'from script_in_question import main_function'.
I've tried switching between from script_in_question import main_function and import script_in_question, and explicitly changing to the correct working directory.
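When an import fails like this, it can help to ask the import machinery directly where (or whether) it can locate the module, rather than inferring from sys.path and the working directory. A small diagnostic sketch (the diagnose helper is illustrative, not from the original script):

```python
import importlib.util
import os
import sys

def diagnose(mod_name):
    # Ask the import system where it would find mod_name.
    # find_spec returns None when the module cannot be located at all,
    # which distinguishes "not found" from "found but failed during import".
    spec = importlib.util.find_spec(mod_name)
    print("cwd:", os.getcwd())
    print("sys.path has", len(sys.path), "entries")
    print(mod_name, "->", spec.origin if spec else "NOT FOUND")
    return spec.origin if spec else None
```

If find_spec locates the file, the FileNotFoundError is likely raised by code running inside the imported module (e.g. it opens a file with a relative path at import time), not by the import itself.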

python calling function from another file while using import from the main file

I'm trying to use multiple files in my programming and I ran into a problem.
I have two files: main.py, nu.py
The main file is:
import numpy
import nu

def numarray():
    numpy.array(some code goes here)
    nu.createarray()
The nu file is:
def createarray():
    numpy.array(some code goes here)
When I run main I get an error:
File "D:\python\nu.py", line 2, in createarray
numpy.array(some code goes here)
NameError: name 'numpy' is not defined
numpy is just an example; I'm using about six imports.
As far as I can see, I have to import all modules in all files, but that creates a problem where certain modules can't be loaded twice; it just hangs.
What am I doing wrong, and how can I properly import functions from another file while using modules imported from the main file?
I hope I explained it well.
Thanks for helping!
I have years of Python experience and importing from other files is still a headache...
The problem here is that you are not importing numpy in "nu.py".
But as you say, sometimes it's a little annoying having to import all libraries in all files.
One last thing: how do you get the error that a module cannot be imported twice? Can you give me an example?
In each separate Python script, if you are using a module within it you need to import it to get access. So you will need to 'import numpy' in your nu.py script, like below.
If possible, try to keep the use of a module within one script so you don't have to import the same module in multiple places, although this won't always be appropriate.
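A minimal corrected nu.py, assuming numpy is installed. Note that this does not load numpy twice: after the first import anywhere in the process, the module is cached in sys.modules, and subsequent imports just bind the existing module object.

```python
# nu.py: each module imports the names it uses itself.
import numpy  # nu.py's own reference; cached in sys.modules after first load

def createarray():
    # numpy here is resolved in nu.py's own namespace,
    # independent of what main.py imported
    return numpy.array([1, 2, 3])
```

If imports appear to "hang" when repeated, that is usually a circular import (main.py imports nu.py which imports main.py), not the same module being loaded twice.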

How to make a module reload in python after the script is compiled?

The basic idea involved:
I am trying to make an application where students can write code
related to a specific problem(say to check if the number is even) The
code given by the student is then checked by the application by
comparing the output given by the user's code with the correct output
given by the correct code which is already present in the application.
The basic version of the project I am working on:
An application in which you can write a python script (in tkinter text
box). The contents of the text box are first stored in a test_it.py
file. This file is then imported (on the click of a button) by the
application. The function present in test_it.py is then called to
get the output of the code(by the user).
The problem:
Since I am "importing" the contents of test_it.py , therefore,
during the runtime of the application the user can test his script
only once. The reason is that python will import the test_it.py
file only once. So even after saving the new script of the user in
test_it.py, it won't be available to the application.
The solution:
Reload test_it.py every time when the button to test the script is clicked.
The actual problem:
While this works perfectly when I run the application from the script,
this method fails for the compiled/executable (.exe) version of the file (which is expected, since during compilation all the imported modules are
compiled too, so modifying them later will not work).
The question:
I want my test_it.py file to be reloaded even after compiling the application.
If you would like to test a working version of the application yourself, you will find it here.
Problem summary:
A test_it.py program is running and has a predicate available, e.g. is_odd().
Every few minutes, a newly written file containing a revised is_odd() predicate becomes available,
and test_it wishes to feed a test vector to the revised predicate.
There are several straightforward solutions.
Don't load the predicate in the current process at all. Serialize the test vector, send it to a newly forked child which computes and serializes results, and examine those results.
Typically eval is evil, but here you might want that, or exec.
Replace current process with a newly initialized interpreter: https://docs.python.org/3/library/os.html#os.execl
Go the memory leak route. Use a counter to assign each new file a unique module name, manipulate source file to match, and load that. As a bonus, this makes it easy to diff current results against previous results.
Reload: from importlib import reload
Even for the bundled application, imports work the standard way. That means whenever an import is encountered, the interpreter will try to find the corresponding module. You can make your test_it.py module discoverable by appending its containing directory to sys.path. The import test_it should be dynamic, e.g. inside a function, so that it won't be discovered by PyInstaller (and so PyInstaller won't attempt to bundle it with the application).
Consider the following example script, where the app data is stored inside a temporary directory which hosts the test_it.py module:
import importlib
import os
import sys
import tempfile

def main():
    with tempfile.TemporaryDirectory() as td:
        f_name = os.path.join(td, 'test_it.py')
        with open(f_name, 'w') as fh:  # write the code
            fh.write('foo = 1')
        sys.path.append(td)  # make available for import
        import test_it
        print(f'{test_it.foo=}')
        with open(f_name, 'w') as fh:  # update the code
            fh.write('foo = 2')
        importlib.reload(test_it)
        print(f'{test_it.foo=}')

main()
The key is to check whether the program is running as an .exe and, if so, add the .exe's path to sys.path.
File program.py:
import time
import sys
import os
import msvcrt
import importlib

if getattr(sys, 'frozen', False):
    # This is an .exe, so change the current working dir
    # to the exe file's directory:
    app_path = os.path.dirname(sys.executable)
    print(' Add .exe path to sys.path: ' + app_path)
    sys.path.append(app_path)
    os.chdir(app_path)

test_it = importlib.import_module('test_it')

def main():
    global test_it
    try:
        print(' Start')
        while True:
            if not msvcrt.kbhit(): continue
            key = msvcrt.getch()
            if key in b'rR':
                print(' Reload module')
                del sys.modules['test_it']
                del test_it
                test_it = importlib.import_module('test_it')
            elif key in b'tT':
                print(' Run test')
                test_it.test_func()
            time.sleep(0.001)
    except KeyboardInterrupt:
        print(' Exit')

if __name__ == '__main__': main()
File test_it.py:
def test_func():
    print('Hi')
Create an .exe file:
pyinstaller --onefile --clean program.py
Copy test_it.py to the dist folder and it's done.
Press t in program window to run test_func. Edit test_it.py then press r to reload module and press t again to see changes.
Maybe the solution is to use the code module:
import code

# get the source from the file as a string
src_code = open('test_it.py').read()
# compile the source
compiled_code = code.compile_command(source=src_code, symbol='exec')
# run the code
eval(compiled_code)  # or exec(compiled_code)

Can I prevent FreeCad from caching Python files?

I am trying to learn Python scripting for FreeCad.
In the folder "C:/p/Freecad/0.18/ZillmannTest" I have
2 files:
Macro1.py and
FCadHelper.py
The content of Macro1.py is as follows:
############
import sys
sys.path.append("C:/p/Freecad/0.18/ZillmannTest")
from FCadHelper import *
helper = FCadHelper()
helper.startDocument('TestKopf')
helper.addBody('TestKopfBody')
helper.addSketch('TestSketch')
####################
I can start Macro1.py from FreeCad Macro menu
But when I have an error in FCadHelper.py and correct it,
FreeCad does not load the changed file FCadHelper.py;
it keeps using the old (cached?) version of the file.
To use the changed file I have to terminate FreeCad and
start it again, which is annoying.
Is there a way to stop FreeCad from caching this file?
As I plan to create a Class library of similar files,
the problem will then be even greater than now.
Kind regards
You can try importlib.reload:
https://docs.python.org/3/library/importlib.html#importlib.reload
It is a little tricky sometimes, but it will work in your case.
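A self-contained sketch of what importlib.reload does in this situation; a temporary directory stands in for "C:/p/Freecad/0.18/ZillmannTest", and a VERSION constant stands in for the contents of FCadHelper.py so the reload is observable (in the real macro you would call importlib.reload(FCadHelper) before "from FCadHelper import *"):

```python
import importlib
import os
import sys
import tempfile

# Stand-in for the FreeCad script folder and helper module.
work_dir = tempfile.mkdtemp()
helper_path = os.path.join(work_dir, "FCadHelper.py")
with open(helper_path, "w") as f:
    f.write("VERSION = 1\n")

sys.path.append(work_dir)
import FCadHelper
print(FCadHelper.VERSION)   # 1

# Simulate editing the file to fix an error...
with open(helper_path, "w") as f:
    f.write("VERSION = 2  # edited\n")

# ...then refresh the cached module without restarting the host app:
importlib.reload(FCadHelper)
print(FCadHelper.VERSION)   # 2
```

Note that "from FCadHelper import *" copies names into the macro's namespace, so after a reload you must re-run that import line too, or the macro will keep using the old copies of the names.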

Object not working properly when called from child module

Hello generous SO'ers,
This is a somewhat complicated question, but hopefully relevant to the more general use of global objects from a child-module.
I am using some commercial software that provides a python library for interfacing with their application through TCP. (I can't post the code for their library I don't think.)
I am having an issue with calling an object from a child module, that I think is more generally related to global variables or some such. Basically, the object's state is as expected when the child-module is in the same directory as all the other modules (including the module that creates the object).
But when I move the offending child module into a subfolder, it can still access the object but the state appears to have been altered, and the object's connection to the commercial app doesn't work anymore.
Following some advice from this question on global vars, I have organized my module's files as so:
scriptfile.py
pyFIMM/
__init__.py # imports all the other files
__globals.py # creates the connection object used in most other modules
__pyfimm.py # main module functions, such as pyFIMM.connect()
__Waveguide.py # there are many of these files with various classes and functions
(...other files...)
PhotonDesignLib/
__init__.py # imports all files in this folder
pdPythonLib.py # the commercial library
proprietary/
__init__.py # imports all files in this folder
University.py # <-- The offending child-module with issues
pyFIMM/__init__.py imports the sub-files like so:
from __globals import * # import global vars & create FimmWave connection object `fimm`
from __pyfimm import * # import the main module
from __Waveguide import *
(...import the other files...)
from proprietary import * # imports the subfolder containing `University.py`
The __init__.py's in the subfolders "PhotonDesignLib" & "proprietary" both cause all files in those subfolders to be imported, so, for example, scriptfile.py would access my proprietary files as: import pyFIMM.proprietary.University. This is accomplished via this hint, coded as follows in proprietary/__init__.py:
import os, glob
__all__ = [ os.path.basename(f)[:-3] for f in glob.glob(os.path.dirname(__file__)+"/*.py")]
(Numerous coders from a few different institutions will have their own proprietary code, so we can share the base code but keep our proprietary files/functions to ourselves this way, without having to change any base code/import statements. I now realize that, for the more static PhotonDesignLib folder, this is overkill.)
The file __globals.py creates the object I need to use to communicate with their commercial app, with this code (this is all the code in this file):
import PhotonDesignLib.pdPythonLib as pd # the commercial lib/object
global fimm
fimm = pd.pdApp() # <- - this is the offending global object
All of my sub-modules contain a from __globals import * statement, and are able to access the object fimm without specifically declaring it as a global var, without any issue.
So I run scriptfile.py, which has an import statement like from pyFIMM import *.
Most importantly, scriptfile.py initiates the TCP connection to the application via fimm.connect() right at the top, before issuing any commands that require the communication, and all the other modules call fimm.Exec(<commands for app>) in various routines, which has been working swimmingly well: the fimm object has so far been accessible to all modules, and keeps its connection state without issue.
The issue I am running into is that the file proprietary/University.py can only successfully use the fimm object when it's placed in the pyFIMM root-level directory (i.e. the same folder as __globals.py etc.). But when University.py is imported from within the proprietary sub-folder, it gives me an "application not initialized" error when I use the fimm object, as if the object had been overwritten or re-initialized or something. The object still exists; it just isn't maintaining its connection state when called by this sub-module. (I've checked that it's not reinitialized in another module.)
If, after the script fails in proprietary/University.py, I use the console to send a command eg. pyFimm.fimm.Exec(<command to app>), it communicates just fine!
I set proprietary/University.py to print a dir(fimm) as a test right at the beginning, which works fine and looks like the fimm object exists as expected, but a subsequent call in the same file to fimm.Exec() indicates that the object's state is not correct, returning the "application not initialized" error.
This almost looks like there are two fimm objects - one that the main python console (and pyFIMM modules) see, which works great, and another that proprietary/University.py sees which doesn't know that we called fimm.connect() already. Again, if I put University.py in the main module folder "pyFIMM" it works fine - the fimm.Exec() calls operate as expected!
FYI proprietary/University.py imports the __globals.py file as so:
import sys, os, inspect
ScriptDir = inspect.currentframe().f_code.co_filename # get path to this module file
(ParentDir , tail) = os.path.split(ScriptDir) # split off top-level directory from path
(ParentDir , tail) = os.path.split(ParentDir) # split off top-level directory from path
sys.path.append(ParentDir) # add ParentDir to the python search path
from __globals import * # import global vars & FimmWave connection object
global fimm # This line makes no difference, was just trying it.
(FYI, Somewhere on SO it was stated that inspect was better than __file__, hence the code above.)
Why do you think having the sub-module in a sub-folder causes the object to lose its state?
I suspect the issue is either the way I instruct University.py to import __globals.py or the "import all files in this folder" method I used in proprietary/__init__.py. But I have little idea how to fix it!
Thank you for looking at this question, and thanks in advance for your insightful comments.
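One classic cause of exactly this symptom (offered as a hypothesis, since the full code isn't shown): the same source file gets imported twice under two different module names, producing two independent module objects. Because University.py appends the pyFIMM directory to sys.path and then does "from __globals import *" by its bare name, the top-level __globals can be a second, freshly initialized copy of pyFIMM.__globals, with its own never-connected fimm. A self-contained sketch of that double-import effect, using stand-in names pkg/globmod:

```python
import os
import sys
import tempfile

# Build a tiny package on disk: pkg/__init__.py and pkg/globmod.py
# (globmod stands in for __globals.py).
td = tempfile.mkdtemp()
pkg_dir = os.path.join(td, "pkg")
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "globmod.py"), "w") as f:
    f.write("state = {'connected': False}\n")

sys.path.append(td)
import pkg.globmod
pkg.globmod.state['connected'] = True   # like calling fimm.connect()

# Now do what University.py effectively does: put the package directory
# itself on sys.path and import the module by its bare name.
sys.path.append(pkg_dir)
import globmod

print(globmod is pkg.globmod)        # False: two distinct module objects
print(globmod.state['connected'])    # False: a fresh, unconnected copy
```

If this is the cause, the fix is to import the shared state under one canonical name everywhere (e.g. from pyFIMM's package, via a relative import) instead of manipulating sys.path inside University.py.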
