I have successfully frozen a Python-based GUI script using Py2app, but I run into trouble using this app on Mac. The app is supposed to send arguments/parameters to Clustal, a terminal-based application, but instead it fails with non-zero exit status 127: '/bin/sh: clustal: command not found'.
I found that my frozen app can send shell commands successfully when I execute the same app from Frozen_apl.app > Contents > MacOS > Frozen_apl (which is a UNIX executable file).
Why do these shell commands get blocked when they are passed directly from the app? How can I get around this problem?
Note: Clustal is properly installed and its path is properly set. I am on OS X 10.9. I have the same script frozen for Ubuntu and Windows, and those versions work just fine.
[Based on the discussion in comments] This isn't a problem with the arguments, it's due to the spawned shell not being able to find the clustal executable. I'm not sure why this is, since it's in /usr/local/bin/clustal, and since /usr/local/bin is in OS X's default PATH (it's listed in /etc/paths). Using the full path to the executable worked, so it appears the frozen app is spawning a shell with a non-default PATH.
Including the full path (/usr/local/bin/clustal) in the frozen app isn't really an optimal solution; it'd be better to figure out how to get a normal PATH in the spawned shell. But I'm not familiar enough with Py2app to know how to do this. (JeeYem: please give the workaround you came up with in a comment or another answer.)
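In the meantime, here is a minimal sketch of the env-override idea, assuming the app shells out via subprocess (the clustal arguments below are placeholders, not the real ones):

import os
import subprocess

# Prepend /usr/local/bin to whatever PATH the frozen app inherited, so the
# spawned shell can find clustal.
env = dict(os.environ)
env['PATH'] = '/usr/local/bin:' + env.get('PATH', '')
subprocess.check_call(['clustal', '-infile=example.fasta'], env=env)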
Quoting from Py2app 0.6.4 minor feature release:
Issue #15: py2app now has an option to emulate the shell environment you get by opening a window in the Terminal.
Usage: python setup.py py2app --emulate-shell-environment
This option is experimental, it is far from certain that the implementation works on all systems.
Using this option with Py2app solved the problem of blocked communication between the Py2app-frozen app and the OS X shell.
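For reference, the same option can presumably be set from setup.py as well; this is only a sketch, assuming the options-dict key mirrors the command-line flag (and Frozen_apl.py is a hypothetical entry-point name):

from setuptools import setup

setup(
    app=['Frozen_apl.py'],  # hypothetical entry point
    options={'py2app': {'emulate_shell_environment': True}},  # assumed key name
    setup_requires=['py2app'],
)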
Related
I'm using Linux Eclipse (pydev) as IDE to develop python scripts that are launched by an application written in C++. I can debug the python script without problems in the IDE, but the environment is not real (the C++ program sends and receives messages through the stdin/stdout and it's a complex communication channel that I can't fully reproduce writing the messages by hand).
Until now I was using log messages to debug (poor man's debug) but it's getting too complex. When I do something similar in PHP I can just leave xdebug listening and add breakpoints in Netbeans. Very neat and easy. Is it possible to do something like that in Python 3.X (with Eclipse or other IDE)?
NOTE: I know there is a PyDev "Attach to Process" functionality, but it doesn't work; it always fails to attach.
NOTE 2: There is also the built-in breakpoint() in Python 3.7, but it hands off to a debugger and that also fails; the IDE never gets control.
After some research, this is the best option I have found. Since nobody has provided another solution, I'm posting it in case anyone has the same problem.
Python has an integrated debugger: pdb. It works as a module, but you can't use it if you don't control the terminal window (i.e. if you aren't the one launching the script).
To solve this, some coders have created modules that add a layer on top of pdb. I have tried a few, and the easiest and most visually appealing one is rpudb (but have a look at this one as well).
To install it:
pip3 install https://github.com/msbrogli/rpudb/archive/master.zip
(if you install it with the plain pip3 install rpudb command, you will get an old version that only works with Python 2)
Then, you use it just adding an import and a function call:
import rpudb
.....
rpudb.set_trace('127.0.0.1', 4444)
.....
Launch the program and it will stop at the set_trace call. To debug it (and continue), open a terminal and start a telnet session like this:
telnet 127.0.0.1 4444
You will have a visual debugger in front of you, with the advantage that you can debug not only local programs but also remote ones (just change the IP).
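To tie the pieces together, here is a self-contained sketch (the script body, address, and port are arbitrary):

import rpudb

def compute(values):
    # deliberately trivial work to step through in the debugger
    total = 0
    for v in values:
        total += v * v
    return total

rpudb.set_trace('127.0.0.1', 4444)  # execution pauses here until telnet connects
print(compute([1, 2, 3]))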
I was able to attach PyCharm to a running Python process and use breakpoints via PyCharm's attach-to-process feature.
I created a bash script that execs a Python script; it should work the same with C++.
I'm asking for help today because I'm new to Tkinter and PyInstaller (and Python in general) and I'm having trouble with them.
I have a simple app using sqlite and tkinter, with PyInstaller to compile all of this into an executable program; the entry point of my program is a file named main.py.
This file pulls in all the dependencies (the sqlite module for Python, tkinter, and my other files such as classes, etc.).
I made a very simple interface, with a 'Hello World' in a tkinter label and a button to go to page 2, which displays 'page2' (also in a label), just to see whether I can make it all run and compile all of these pieces together.
I can run it through my shell with python main.py and everything works fine.
But when I run PyInstaller on my Linux machine and start the resulting program, nothing appears: my database.db (SQLite database file) is created, but no interface shows up like when I run it from my shell. Things get even worse on Windows where, once I have my .exe, it just opens a shell and crashes after a few seconds, without even creating the database.
What I did was create a 'log file' in which I write the steps of the program.
Only the first two prints are written to my log file (on Linux), so I think it crashes when I try to create the window.
If any of you have an idea about what I'm doing wrong, I would really appreciate the help :)
General
From the PyInstaller manual:
Before you attempt to bundle to one file, make sure your app works correctly when bundled to one folder. It is much easier to diagnose problems in one-folder mode.
As the comments suggested, use a catch-all try/except block to log all exceptions to a file. That is probably the best way to see what is really happening. Make sure that the logfile is created in an existing location where you have the necessary permissions.
I would suggest taking advantage of the built-in logging module instead of creating your own. It can, for example, automatically record which file each log line came from.
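A rough sketch of both suggestions combined, assuming Python 3's tkinter and an arbitrary log file name (app.log):

import logging
import tkinter as tk

# Log to a file in a location you can write to; the format records which
# module emitted each line.
logging.basicConfig(filename='app.log', level=logging.DEBUG,
                    format='%(asctime)s %(module)s %(levelname)s: %(message)s')

try:
    logging.info('creating the main window')
    root = tk.Tk()
    tk.Label(root, text='Hello World').pack()
    root.mainloop()
except Exception:
    logging.exception('unhandled exception')  # writes the full traceback to app.log
    raise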
IMHO, it is probable that the failures on Linux and ms-windows have completely different causes. You should probably treat them as different issues.
Linux
When you use single file mode, that file is unpacked into a temporary folder, probably somewhere in /tmp. Some Linux distributions mount the /tmp filesystem with the noexec flag. This is incompatible with PyInstaller.
ms-windows
On Windows, there are basically two different Pythons: python.exe and pythonw.exe. It is one of the quirks of Windows that this is necessary. The latter is for GUI programs such as tkinter programs; a tkinter script should not show a cmd window. So I'm guessing that PyInstaller runs your script with python.exe instead of pythonw.exe. From the manual:
By default the bootloader creates a command-line console (a terminal window in GNU/Linux and Mac OS, a command window in Windows). It gives this window to the Python interpreter for its standard input and output. Your script’s use of print and input() are directed here. Error messages from Python and default logging output also appear in the console window.
An option for Windows and Mac OS is to tell PyInstaller to not provide a console window. The bootloader starts Python with no target for standard output or input. Do this when your script has a graphical interface for user input and can properly report its own diagnostics.
As noted in the CPython tutorial Appendix, for Windows a file extension of .pyw suppresses the console window that normally appears. Likewise, a console window will not be provided when using a myscript.pyw script with PyInstaller.
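Putting the two excerpts together, a reasonable first build to try is one-folder mode without a console window, something like:

pyinstaller --onedir --windowed main.py

(--windowed, also spelled --noconsole, matters on Windows and Mac OS; it has no effect on GNU/Linux.)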
Also, on Windows it can matter which Python distribution you're using. I used to be a fan of Anaconda, but lately I've come to prefer the python.org version because it gives me fewer headaches. With Anaconda Python I had the problem that tkinter programs would not launch without showing a cmd window, whatever I tried. Only switching to python.org Python solved that problem.
I am developing a simple standalone, graphical application in python. My development has been done on linux but I would like to distribute the application cross-platform.
I have a launcher script which checks a bunch of environment variables, sets various configuration options, and then calls the application with what amounts to python main.py (specifically os.system('python main.py %s' % (arg1, arg2...))).
On OS X (without X11), the launcher script crashed with an error like Could not run application, need access to screen. A very quick google search later, the script was working locally by replacing python main.py with pythonw main.py.
My question is, what is the best way to write the launcher script so that it can do the right thing across platforms and not crash? Note that this question is not asking how to determine what platform I am on. The solution "check to see if I am on OS X, and if so invoke pythonw instead" is what I have done for now, but it seems like a somewhat hacky fix because it depends on understanding the details of the windowing system (which could easily break sometime in the future) and I wonder if there is a cleaner way.
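For concreteness, the stopgap described above looks roughly like this (main.py and extra_args stand in for the real entry point and options):

import sys
import subprocess

# On OS X fall back to pythonw so the GUI process is allowed to access the
# screen; everywhere else plain python is fine.
extra_args = []  # e.g. values derived from the environment variables
interpreter = 'pythonw' if sys.platform == 'darwin' else 'python'
subprocess.call([interpreter, 'main.py'] + extra_args)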
This question does not yet have a satisfactory answer.
If you save the file as main.pyw, it should run the script without opening up a new cmd/terminal.
Then you can run it as python main.pyw
Firstly, you should always use .pyw for GUIs.
Secondly, you could convert it to .exe if you want people without python to be able to use your program. The process is simple. The hardest part is downloading one of these:
for Python 2.x: py2exe
for Python 3.x: cx_Freeze
You can simply google instructions on how to use them if you decide to go down that path.
Also, if you're using message boxes in your GUI, they won't work. You will have to create windows/toplevels instead.
I'd like to call a separate non-child Python program from a Python script and have it run externally in a new shell instance. The original Python script doesn't need to be aware of the instance it launches, it shouldn't block while the launched process is running, and it shouldn't care if it dies. This is what I have tried, which returns no error but seems to do nothing...
import subprocess
python_path = '/usr/bin/python'
args = [python_path, '&']
p = subprocess.Popen(args, shell=True)
What should I be doing differently?
EDIT
The reason for doing this is that I have an application with a built-in version of Python, and I have written some Python tools that should be run separately alongside this application, but there is no assurance that the user will have Python installed on their system outside of the built-in version I'm using. Because of this, I can get the Python binary path from the built-in version programmatically, and I'd like to launch an external instance of that built-in Python. This eliminates the need for the user to install Python themselves. So, in essence, I need a simple way to programmatically call an external Python script using my currently running version of Python.
I don't need to capture any output in the original program; in fact, once launched, I'd like it to have nothing to do with the original program.
EDIT 2
It seems that my original question was very unclear, so here are more details; I think I was trying to oversimplify the question:
I'm running OS X, but the code should also work on Windows machines.
The main application that has a built-in version of CPython is a compiled C++ application that ships with a Python framework it uses at runtime. You can launch this embedded version of Python by running the following in a Terminal window on OS X:
/my_main_app/Contents/Frameworks/Python.framework/Versions/2.7/bin/python
From my main application I'd like to be able to run a command, in the version of Python embedded in the main app, that launches an external copy of a Python script using that same Python, just as I would with the following command in a Terminal window. The newly launched orphan process should have its own Terminal window so the user can interact with it.
/my_main_app/Contents/Frameworks/Python.framework/Versions/2.7/bin/python my_python_script
I would like the child Python instance not to block the main application, and I'd like it to have its own Terminal window so the user can interact with it. The main application doesn't need to be aware of the child in any way once it's launched. The only reason I'm doing this is to automate launching an external application in a Terminal for the user.
If you're trying to launch a new terminal window to run a new Python in (which isn't what your question asks for, but from a comment it sounds like it's what you actually want):
You can't. At least not in a general-purpose, cross-platform way.
Python is just a command-line program that runs with whatever stdin/stdout/stderr it's given. If those happen to be from a terminal, then it's running in a terminal. It doesn't know anything about the terminal beyond that.
If you need to do this for some specific platform and some specific terminal program—e.g., Terminal.app on OS X, iTerm on OS X, the "DOS prompt" on Windows, gnome-terminal on any X11 system, etc.—that's generally doable, but the way to do it is by launching or scripting the terminal program and telling it to open a new window and run Python in that window. And, needless to say, they all have completely different ways of doing that.
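For the Terminal.app case specifically, one commonly suggested shape is to hand an executable script to open(1), which runs it in a new Terminal window; this is only a sketch, and the script path is a placeholder (the file must be executable with a suitable #! line):

import subprocess
import sys

if sys.platform == 'darwin':
    # Ask Terminal.app to open (and run) the executable script in a new window.
    subprocess.Popen(['open', '-a', 'Terminal', '/path/to/my_python_script'])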
And even then, it's not going to be possible in all cases. For example, if you ssh in to a remote machine and run Python on that machine, there is no way it can reach back to your machine and open a new terminal window.
On most platforms that have multiple possible terminals, you can write some heuristic code that figures out which terminal you're currently running under by just walking os.getppid() until you find something that looks like a terminal you know how to deal with (and if you get to init/launchd/etc. without finding one, then you weren't running in a terminal).
The problem is that you're running Python with the argument &. Python has no idea what to do with that. It's like typing this at the shell:
/usr/bin/python '&'
In fact, if you pay attention, you're almost certainly getting something like this through your stderr:
python: can't open file '&': [Errno 2] No such file or directory
… which is exactly what you'd get from doing the equivalent at the shell.
What you presumably wanted was the equivalent of this shell command:
/usr/bin/python &
But the & there isn't an argument at all, it's part of sh syntax. The subprocess module doesn't know anything about sh syntax, and you're telling it not to use a shell, so there's nobody to interpret that &.
You could tell subprocess to use a shell, so it can do this for you:
cmdline = '{} &'.format(python_path)
p = subprocess.Popen(cmdline, shell=True)
But really, there's no good reason to. Just opening a subprocess and not calling communicate or wait on it already effectively "puts it in the background", just like & does on the shell. So:
args = [python_path]
p = subprocess.Popen(args)
This will start a new Python interpreter that sits there running in the background, trying to use the same stdin/stdout/stderr as your parent. I'm not sure why you want that, but it's the same thing that using & in the shell would have done.
Actually, I think there might be a solution to your problem; I found a useful answer at another question here.
That way subprocess.Popen starts a new Python shell instance and runs the second script from there. It worked perfectly for me on Windows 10.
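On Windows, that kind of solution usually boils down to giving the child process its own console window; the sketch below is an assumption about what the linked answer does, with my_python_script.py as a placeholder:

import subprocess
import sys

# CREATE_NEW_CONSOLE is Windows-only; the child gets its own console window
# and Popen returns immediately, so the parent does not block on it.
subprocess.Popen([sys.executable, 'my_python_script.py'],
                 creationflags=subprocess.CREATE_NEW_CONSOLE)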
You can try using the screen command.
With this command a new shell instance is created and the current instance runs in the background.
# screen; python script1.py
After running the above command, a new shell prompt appears where we can run another script, while script1.py keeps running in the background.
Hope it helps.
So this is an unusual one, and perhaps I am simply missing the obvious, but I have the following Python code that creates a PowerShell script and runs it.
import os
import subprocess

# Create the PowerShell file (closing it so the content is flushed to disk)
with open("getKey.ps1", "w") as f:
    f.write('$c = Get-BitlockerVolume -MountPoint C:\n')
    f.write('$c.KeyProtector[1].RecoveryPassword | Out-File C:\\Temp\\recovery.key\n')

# Invoke Script
startPS = subprocess.Popen([r'C:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe',
                            '-ExecutionPolicy', 'Unrestricted', './getKey.ps1'], cwd=os.getcwd())
result = startPS.wait()
When this is run, it gives me the following error:
The term 'Get-BitlockerVolume' is not recognized as the name of a cmdlet, function, script file, or operable program.
However, if I then go and manually run the generated script, it works perfectly. To add to the oddity, if I run the same command exactly as above, i.e.:
C:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Unrestricted ./getKey.ps1
it also works exactly as expected.
Clearly, the above error is a PowerShell error, so it is successfully running the script. It almost seems like PowerShell somehow knows that this is being run from Python and has some restricted library of commands when a script is run from a particular source. I grant that that idea makes no real sense, but it's certainly how things appear.
I don't think this is a permissions issue, because when you run the same command from an unelevated powershell prompt, you get an Access is denied type error, rather than a command doesn't exist kind of error.
Anyway, any help would be greatly appreciated!
Edits
Edit: New evidence to help figure this out:
It's definitely an issue of cmdlets not being loaded properly. If I programmatically run a script to dump the list of all available commands to a text file, it is only about two-thirds as big as when I do the same from a PowerShell prompt directly.
I bet Python is running as a 32-bit process on 64-bit Windows. In this case, you'll end up running 32-bit PowerShell, which in practice is a Bad Thing since many PowerShell modules depend on native binaries that may not have 32-bit equivalents. I hit this with IIS Manager commandlets--the commandlets themselves are registered in 32-bit PowerShell, but the underlying COM objects they rely on are not.
If you need to run 64-bit PowerShell from a 32-bit process, specify the path as %SystemRoot%\SysNative\WindowsPowerShell\v1.0\PowerShell.exe instead of System32.
System32 is actually virtualized for 32-bit processes and refers to the 32-bit binaries in %SystemRoot%\SysWow64. This is why your paths (and PSMODULEPATH) will look the same, but aren't. (SysNative is also a virtualized path that only exists in virtualized 32-bit processes.)
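A sketch of how that path selection might look from Python (the 64-bit-Windows check below is a common heuristic, not something from the original answer):

import os
import struct
import subprocess

running_32bit = struct.calcsize('P') * 8 == 32        # pointer size of this Python
on_64bit_windows = 'PROGRAMFILES(X86)' in os.environ  # common heuristic

# From a 32-bit process, SysNative reaches the real 64-bit PowerShell.
ps_dir = 'SysNative' if (running_32bit and on_64bit_windows) else 'System32'
ps_exe = os.path.expandvars(r'%SystemRoot%\{}\WindowsPowerShell\v1.0\powershell.exe'.format(ps_dir))

startPS = subprocess.Popen([ps_exe, '-ExecutionPolicy', 'Unrestricted', './getKey.ps1'],
                           cwd=os.getcwd())
startPS.wait()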
Adding to what #jbsmith said in the comment, also check to make sure that the environment variable PowerShell relies on to know where its modules are is populated correctly when Python starts the process.
%PSMODULEPATH% is the environment variable in question, and it works the same way the %PATH% variable does: multiple directories separated by ;. Based on the behavior you describe, it seems you are using PowerShell 3.0 and cmdlet autoloading is in effect.
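One quick way to check this from Python is to compare the module path a PowerShell started by Python reports with the one you see in a normal prompt; a small sketch:

import os
import subprocess

print(os.environ.get('PSMODULEPATH'))  # what any child process will inherit
# Ask the spawned PowerShell what it actually sees.
subprocess.call(['powershell', '-NoProfile', '-Command', '$env:PSModulePath'])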
The solution here, Run a powershell script from python that uses Web-Administration module, got me the cmdlet I needed; however, there are still missing cmdlets even when using this method. I'm still at a loss as to why some are loaded and others are not, but for the time being my script does what I need it to, and I can't spend any more time figuring it out.
For reference, here is the code that worked for me:
import os
import subprocess

startPS = subprocess.Popen([r'C:\Windows\sysnative\cmd.exe', '/c', 'powershell',
                            '-ExecutionPolicy', 'Unrestricted', './getKey.ps1'], cwd=os.getcwd())
I had the same issue, and it was simply that the BitLocker feature was not installed, hence the module wasn't present.
I fixed it by installing the Bitlocker feature:
Windows Server:
Install-WindowsFeature BitLocker -IncludeAllSubFeature -IncludeManagementTools -Restart
Windows Desktop:
Enable-WindowsOptionalFeature -Online -FeatureName BitLocker -All