Blessed / Curses controls don't work with PyInstaller: missing vtwin10 terminal definition

I have a very simple Python program that uses 'Blessed'. It works fine with the Win10 Python interpreter, but reports an error when packaged with Pyinstaller, and terminal control codes are ignored. Here's the code:
from blessed import Terminal
t = Terminal()
print(t.bright_green('Hello world'))
The string 'Hello world' is supposed to display on the console in bright green. Pyinstaller completes with no errors, and when I run the .exe, I get the message:
terminal.py:222: UserWarning: Failed to setupterm(kind='vtwin10'): Could not find terminal vtwin10
and then 'Hello world' is displayed in default terminal color.
It looks like Pyinstaller isn't including something in the build that the interpreter finds without issue. I found a vtwin10.py file in my Anaconda3 installation folder at:
C:\Anaconda3\Lib\site-packages\jinxed\terminfo
I looked at the referenced error in the blessed library's terminal.py file. Here's the code:
try:
    curses.setupterm(self._kind, self._init_descriptor)
except curses.error as err:
    warnings.warn('Failed to setupterm(kind={0!r}): {1}'
                  .format(self._kind, err))
So it looks like self._kind is being set to 'vtwin10'. There is a conditional import in terminal.py that looks like this:
if platform.system() == 'Windows':
    import jinxed as curses  # pylint: disable=import-error
    HAS_TTY = True
(I get the humor.) It looks like the jinxed package is being imported explicitly in the code, and replaces the curses package. But somehow the vtwin10 definition is missing.
I found setupterm() in jinxed and dug deeper to find where that error message is coming from. It's in this code:
try:
    self.terminfo = importlib.import_module('jinxed.terminfo.%s' % term.replace('-', '_'))
except ImportError:
    raise error('Could not find terminal %s' % term)
This is where I get stuck. It looks like this code is unable to find the vtwin10.py file in the jinxed library. Does anyone know how to force Pyinstaller to include the vtwin10 terminal definition for curses? I'm guessing this is the problem.
Many thanks.

For now you will only need to specify jinxed.terminfo.vtwin10 and jinxed.terminfo.ansicon on Windows, but if you want it to be more dynamic, PyInstaller spec files are executable Python, so you can simply look up all of the terminfo modules dynamically:
import pkgutil
import jinxed.terminfo

hiddenimports = [mod.name for mod in pkgutil.iter_modules(jinxed.terminfo.__path__, 'jinxed.terminfo.')]
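A sketch of how that list is wired into the spec file (this assumes the test.spec that PyInstaller generates for test.py; only the hiddenimports argument is shown, everything else in the generated Analysis call stays as PyInstaller wrote it):

# test.spec (fragment) -- build with: pyinstaller test.spec
# The pkgutil lookup above sits near the top of the spec; its result is then
# passed to the generated Analysis object via the hiddenimports argument.
a = Analysis(
    ['test.py'],
    hiddenimports=hiddenimports,
    # ... remaining arguments exactly as PyInstaller generated them ...
)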

Finally figured this out. In the jinxed library, the code line:
importlib.import_module('jinxed.terminfo.%s' % term.replace('-', '_'))
dynamically loads a module by name. PyInstaller's static analysis can't detect dynamically imported modules, so the module has to be named explicitly with the --hidden-import option. The syntax is as follows:
pyinstaller --hidden-import=jinxed.terminfo.vtwin10 --onefile test.py
The program now works just like it does in the interpreter. It works, but I'm concerned this breaks whatever platform independence jinxed was supposed to have: force-importing the vtwin10.py module covers Win10 platforms, but jinxed is written to detect the Windows platform at runtime and then dynamically load the required terminfo module, and there are a number of them in the jinxed.terminfo directory. Wildcards for --hidden-import don't work, so the only option is to pass --hidden-import for every file in the jinxed.terminfo folder.
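A less manual route, if your PyInstaller version supports it: PyInstaller's hook machinery can collect every submodule of a package for you, so a one-line hook file replaces the long list of --hidden-import flags. A minimal sketch, assuming the hook directory is passed with --additional-hooks-dir:

# hook-jinxed.py  (put this in a folder passed via --additional-hooks-dir)
# collect_submodules is part of PyInstaller's hook utilities; it returns the
# names of all submodules of jinxed.terminfo so they are bundled as hidden imports.
from PyInstaller.utils.hooks import collect_submodules

hiddenimports = collect_submodules('jinxed.terminfo')

Newer PyInstaller releases (4.0 and later) also expose this directly on the command line, e.g. pyinstaller --collect-submodules=jinxed.terminfo --onefile test.py.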

Related

Fix "Fatal Python error: Py_Initialize: can't initialize sys standard streams"

I have two python environments with different versions running in parallel. When I execute a python script (test2.py) from one python environment in the other python environment, I get the following error:
Fatal Python error: Py_Initialize: can't initialize sys standard streams
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\GMS_VENV_PYTHON\lib\io.py", line 52, in <module>
  File "C:\ProgramData\Miniconda3\envs\GMS_VENV_PYTHON\lib\abc.py", line 147
    print(f"Class: {cls.__module__}.{cls.__qualname__}", file=file)
                                                       ^
SyntaxError: invalid syntax
So my setup is this:
Python 3.7
(test.py)
    │
    │   Python 3.5.6
    ├───────────────────────────────┐
    ┆                               │
    ┆                        execute test2.py
    ┆                               │
    ┆                               🗲 Error
How can I fix this?
For dm-script-people: How can I execute a module with a different python version in Digital Micrograph?
Details
I have two python files.
File 1 (test.py):
# execute in Digital Micrograph
import os
import subprocess

command = ['C:\\ProgramData\\Miniconda3\\envs\\legacy\\python.exe',
           os.path.join(os.getcwd(), 'test2.py')]
print(*command)

result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print("Subprocess result: '{}', '{}'".format(result.stdout.decode("utf-8"), result.stderr.decode("utf-8")))
and File 2 (test2.py)
# only executable in python 3.5.6
print("Hi")
in the same directory. test.py is executing test2.py with a different python version (python 3.5.6, legacy environment).
My Python script (test.py) is running in the Python interpreter of a third-party program (Digital Micrograph). This program installs a Miniconda Python environment called GMS_VENV_PYTHON (Python version 3.7.x), which can be seen in the traceback above. The legacy Miniconda environment is used only for running test2.py (from test.py) with Python version 3.5.6.
When I run test.py from the command line (also in the conda GMS_VENV_PYTHON environment), I get the expected output from test2.py in test.py. When I run the exact same file in Digital Micrograph, I get the response
Subprocess result: '', 'Fatal Python error: Py_Initialize: can't initialize sys standard streams
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\GMS_VENV_PYTHON\lib\io.py", line 52, in <module>
  File "C:\ProgramData\Miniconda3\envs\GMS_VENV_PYTHON\lib\abc.py", line 147
    print(f"Class: {cls.__module__}.{cls.__qualname__}", file=file)
                                                       ^
SyntaxError: invalid syntax
'
This tells me the following (I guess):
test2.py is being invoked, since this is the error output of the subprocess call, so the subprocess.run() call itself seems to work fine.
The paths point into the GMS_VENV_PYTHON environment, which is wrong in this case; since test2.py is being executed, they should point into the legacy environment.
There is a SyntaxError because an f-string (literal string interpolation) is used, and f-strings were introduced in Python 3.6. So the executing Python version is older than 3.6, which means the legacy Python environment is being used.
test2.py imports neither io nor abc (I don't know what to conclude here; are those modules loaded by default when executing Python? See the quick check below.)
So I guess this means that the standard modules are being loaded (probably because the interpreter always loads them at startup) from the wrong location.
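A quick way to confirm that these modules are pulled in at interpreter startup (a minimal sketch; the file name is arbitrary):

# startup_modules_check.py -- io and abc appear in sys.modules before any user
# code runs, because Py_Initialize sets up the standard streams via the io
# module, which itself imports abc (exactly the chain shown in the traceback).
import sys

for name in ("io", "abc"):
    print(name, "preloaded at startup:", name in sys.modules)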
How can I fix this? (See What I've tried > PATH for more details)
What I've tried so far
Encoding
I came across the post "Fatal Python error: Py_Initialize: can't initialize sys standard streams LookupError: unknown encoding: 65001", telling me that there might be problems with the encoding. I know that Digital Micrograph internally uses ISO 8859-1. I tried to use python -X utf8 and python -X utf8=0 (test2.py doesn't care about UTF-8, it is ASCII only) as shown below, but neither of them worked.
command = ['C:\\ProgramData\\Miniconda3\\envs\\legacy\\python.exe',
           "-X", "utf8=0",
           os.path.join(os.getcwd(), 'test2.py')]
PATH
As far as I can tell, I think this is the problem. The answer "https://stackoverflow.com/a/31877629/5934316" to the post "PyCharm: Py_Initialize: can't initialize sys standard streams" suggests changing the PYTHONPATH.
So to specify my question:
Is this the way to go?
How can I set the PYTHONPATH for only the subprocess (while executing python with other libraries in the main thread)?
Is there a better way to have two different python versions at the same time?
Thank you for your help.
Background
I am currently writing a program for operating an electron microscope. I need the "environment" (the graphical interface, the help tools, but also the hardware access) from Digital Micrograph, so there is no way around using it. And Digital Micrograph only supports Python 3.7.
On the other hand, I need an external module which is only available for Python 3.5.6. There is also no way around using this module, since it controls other hardware.
Both rely on Python C modules. Since they are already compiled, there is no way to check whether they work with other versions. They also control highly sensitive apertures, where one does not want to change code. In short: I need two Python versions in parallel.
I was actually quite close. The problem is that Python imports modules from the wrong location; in my case they were imported from another Python installation because of a wrong path. Modifying PYTHONPATH (together with PYTHONHOME and PATH) for the subprocess environment, following "https://stackoverflow.com/a/4453495/5934316", works for my example:
import os
import subprocess

my_env = os.environ.copy()
my_env["PYTHONHOME"] = "C:\\ProgramData\\Miniconda3\\envs\\legacy"
my_env["PYTHONPATH"] = "C:\\ProgramData\\Miniconda3\\envs\\legacy;"
my_env["PATH"] = my_env["PATH"].replace("C:\\ProgramData\\Miniconda3\\envs\\GMS_VENV_PYTHON",
                                        "C:\\ProgramData\\Miniconda3\\envs\\legacy")

# path: the directory that contains test2.py (e.g. os.getcwd(), as in test.py above)
command = ["C:\\ProgramData\\Miniconda3\\envs\\legacy\\python.exe",
           os.path.join(path, "test2.py")]

result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=my_env)
For Digital Micrograph users: the Python environment path is saved in the global tags under "Private:Python:Python Path", so the hard-coded GMS_VENV_PYTHON path above can be replaced with the value of that tag:
import DigitalMicrograph as DM

# ...

success, gms_venv = DM.GetPersistentTagGroup().GetTagAsString("Private:Python:Python Path")
if not success:
    raise KeyError("Python path is not set.")

my_env["PATH"] = my_env["PATH"].replace(gms_venv, "C:\\ProgramData\\Miniconda3\\envs\\legacy")
I had set PYTHONPATH to "D:\ProgramData\Anaconda3" (my base Anaconda environment) in the Windows system environment variables. I found that even after switching to another environment, Python still imported the base packages from "D:\ProgramData\Anaconda3", i.e. it picked up the wrong base packages because of that system-wide setting.
So I deleted PYTHONPATH from the Windows system environment variables, and that fixed it.

PyInstaller - Program returns -1 on another computer

I am using PyInstaller to compile my program to an .exe in Windows,
I use the normal line: pyinstaller file.py --onefile
Everything looks like it works: while executing, PyInstaller mentions some WARNINGs but still reports that it finished successfully.
So then I execute my program file.exe and it works perfectly.
The surprise comes when I go to another computer and try to run it...
It starts running, but when it reaches a certain point it returns:
Traceback: TypeError: 'NoneType' object has no attribute '__getitem__' files returned -1
The compilation warnings are:
missing module named 'Carbon.File'.FSGetResourceForkName - imported by 'Carbon.File', plistlib
missing module named 'Carbon.File'.FSRef - imported by 'Carbon.File', plistlib
...170 more...
Well, so now comes the question and sorry about the "long" post:
I am not using Carbon, math.cos, etc. (from the warnings) in my program; I don't even know what Carbon is, it is just "imported by ...". How can I keep these from being imported so that the program will work on all computers?
If my program works on my computer but not on another one, I guess I am leaving something out of the imports?
I have searched for answers and of course there are some, but what I have found is really specific, for example: pyInstaller: Import Error. What do you do when your imports are missing 170 modules?
Thanks!
Edit:
Full Traceback:
Code:
def check_actived(connection):
    sql_query = """SELECT enabled FROM login """
    connection.execute(sql_query)
    result = connection.fetchone()
    return result[0]
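The error in the traceback matches the last line here: fetchone() returns None when the query yields no row (for example, if the database the .exe expects is missing or empty on the other computer), and None[0] raises exactly TypeError: 'NoneType' object has no attribute '__getitem__'. A minimal defensive sketch (the guard and the error message are illustrative, not part of the original program):

def check_actived(connection):
    sql_query = """SELECT enabled FROM login """
    connection.execute(sql_query)
    result = connection.fetchone()
    if result is None:
        # No row came back: the database on this machine has no 'login' entry.
        raise RuntimeError("No row returned for 'SELECT enabled FROM login'")
    return result[0]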

How to check if all modules imported by a Python script are installed without running the script?

I would like to check that all modules imported by a script are installed before I actually run the script, because the script is quite complex and usually runs for many hours. Also, it may import different modules depending on the options passed to it, so just running it once may not check everything. I wouldn't like to run this script on a new system for a few hours only to see it fail before completion because of a missing module.
Apparently, pyflakes and pychecker are not helpful here, correct me if I'm wrong. I can do something like this:
$ python -c "$(cat *.py|grep import|sed 's/^\s\+//g'|tr '\n' ';')"
but it's not very robust, it will break if the word 'import' appears in a string, for example.
So, how can I do this task properly?
You could use ModuleFinder from the standard library module modulefinder.
Using the example from the docs:
from modulefinder import ModuleFinder

finder = ModuleFinder()
finder.run_script('bacon.py')

print('Loaded modules:')
for name, mod in finder.modules.items():
    print('%s: ' % name, end='')
    print(','.join(list(mod.globalnames.keys())[:3]))

print('-' * 50)
print('Modules not imported:')
print('\n'.join(finder.badmodules.keys()))
You could write a test.py that just contains all the possible imports for example:
import these
import are
import some
import modules
Run it, and if there are any problems, Python will let you know.
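If you would rather not import (and thus execute) the modules at all, a related approach is to check each name with importlib.util.find_spec, which searches for a module without running its code. A minimal sketch (the module names are stand-ins for whatever your script may import):

# check_imports.py -- report which of the listed modules cannot be found,
# without importing (executing) any of them.
import importlib.util

required = ["numpy", "requests", "some_private_module"]  # hypothetical names

missing = [name for name in required
           if importlib.util.find_spec(name) is None]

if missing:
    print("Missing modules:", ", ".join(missing))
else:
    print("All required modules were found.")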

python scripts issue (no module named ...) when starting in rc.local

I'm facing a strange issue, and after a couple of hours of research I'm looking for help / an explanation.
It's quite simple: I wrote a CGI server in Python and I'm working with some libs, including pynetlinux for instance.
When I start the script from a terminal as any user, it works fine: no bugs, no dependency issues. But when I try to start it using a script in rc.local, the following line produces an error.
import sys, cgi, pynetlinux, logging
It produces the following error:
Traceback (most recent call last):
  File "/var/simkiosk/cgi-bin/load_config.py", line 3, in <module>
    import cgi, sys, json, pynetlinux, loggin
ImportError: No module named pynetlinux
Other dependencies produce similar issues. I suspect a few things, like the user executing the script in rc.local (normally root), and I have tried some things found on the web without success.
Can somebody help me?
Thanks in advance.
Regards,
Ollie314
First of all, you need to make sure the module you want to import is installed properly. You can check whether its name shows up in the output of pip list.
Then, in a python shell, check what the paths are where Python is looking for modules:
import sys
sys.path
In my case, the output is:
['', '/usr/lib/python3.4', '/usr/lib/python3.4/plat-x86_64-linux-gnu', '/usr/lib/python3.4/lib-dynload', '/usr/local/lib/python3.4/dist-packages', '/usr/lib/python3/dist-packages']
Finally, export those paths through the PYTHONPATH variable in /etc/rc.local (PATH only controls where executables are looked up; Python's module search path is built from PYTHONPATH and sys.path). Here is an example of my rc.local:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing
export PYTHONPATH="$PYTHONPATH:/usr/lib/python3.4:/usr/lib/python3.4/plat-x86_64-linux-gnu:/usr/lib/python3.4/lib-dynload:/usr/local/lib/python3.4/dist-packages:/usr/lib/python3/dist-packages"
# Do stuff
exit 0
The path where your modules are installed is probably normally set up by .bashrc or something similar, and .bashrc doesn't get sourced when the shell is not interactive. /etc/profile is one place where you can put system-wide path changes. Depending on the Linux version/distro, it may use /etc/profile.d/, in which case /etc/profile runs all the scripts in /etc/profile.d; add a new shell script there with execute permissions and a .sh extension.
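To see what the non-interactive rc.local environment actually hands the script, a small diagnostic at the top of the CGI server can help (a minimal sketch; the logging setup is illustrative):

# Log which interpreter and module search path the startup environment provides.
import os
import sys
import logging

logging.basicConfig(level=logging.INFO)
logging.info("interpreter: %s", sys.executable)
logging.info("sys.path: %s", sys.path)
logging.info("PYTHONPATH: %s", os.environ.get("PYTHONPATH", "<not set>"))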

Pydevd with virtual code or (source provider)

We have Python source code stored in a SQL database; the code is assembled into a virtual Python module and can be executed.
We want to debug these modules, but then of course the Eclipse debugger host doesn't know where to find the source code for them.
Is there a way to provide pydevd with the location of the source code, even if that means writing the files to disk?
Write it to disk and, when compiling, pass that filename for the code (and, when you're not in debug mode, just don't write it and pass '<string>' as the filename).
See the example below:
from tempfile import mktemp

my_code = '''
a = 10
print(a)
'''

# mktemp only generates a unique file name; the file itself is created by open() below.
tmp_filename = mktemp('.py', 'temp_file_')

with open(tmp_filename, 'w') as f:
    f.write(my_code)

obj = compile(my_code, tmp_filename, 'exec')
exec(obj)  # Place a breakpoint here: when stepping in, it should get to the code.
You need to add the module to the PYTHONPATH in the Eclipse project settings and import it using a standard Python import. Then the PyDev debugger should find it without any problems.
