I have Python 3 installed on my system and a path to the executable has been added to PATH. When I enter python in Windows PowerShell (Win 8.1) it runs fine, but I'd like to use PowerShell ISE for its advanced features. However, running python in PowerShell ISE crashes with the following log:
python : Python 3.4.3 (v3.4.3:9b73f1c3e601, Feb 24 2015, 22:43:06) [MSC v.1600 32 bit (Intel)] on win32
At line:1 char:1
+ python
+ ~~~~~~
+ CategoryInfo : NotSpecified: (Python 3.4.3 (v...ntel)] on win32:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
Type "help", "copyright", "credits" or "license" for more information.
>>>
(sorry, the output was originally partly in German)
I then can't enter anything and have to Ctrl+C to get back to PowerShell.
What might be the issue here?
PowerShell ISE isn't meant for running typical interactive console programs such as python.exe. It hides the console window and redirects stdout to a pipe. To see this in practice run the following in ISE:
python.exe -i -c "import ctypes; ctypes.windll.user32.ShowWindow(ctypes.windll.kernel32.GetConsoleWindow(), 5)"
Enter text in the console window, and you'll see the input echoed in the console, but output gets piped to ISE.
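Another quick check (a small sketch you can run in the same ISE session) is to ask Python whether its standard streams are attached to a console; because ISE pipes them, this should report False:
python.exe -c "import sys; print(sys.stdout.isatty(), sys.stderr.isatty())"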
Here's a brief overview of Windows console applications. powershell.exe, cmd.exe, and python.exe are all console applications that function as clients of a console server (or host) process, conhost.exe. The host process creates the window and runs the typical GUI event loop. When you run python.exe from a GUI application, such as explorer.exe, Windows executes a new instance of conhost.exe, which creates a new console window. When you run python.exe from another console application, such as powershell.exe, the default behavior is to inherit the console of the parent.
The console API communicates with the attached console host. Many of the functions, such as WriteConsole, require a handle to a console input or screen buffer. If you're attached to a console, the special file CONIN$ represents the input buffer, CONOUT$ is the current screen buffer, and CON can refer to either depending on whether it's opened for reading or writing. (You may have seen a command in cmd.exe such as copy con somefile.txt.)
A Windows process has three fields used for standard I/O handles. For a console process StandardInput defaults to a handle for CONIN$, and StandardOutput and StandardError default to handles for CONOUT$. The C runtime library opens these as the standard FILE streams stdin, stdout, and stderr using file descriptors 0, 1, and 2. When starting a process any of the standard handles can instead be set to an open file or pipe.
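For example, a process that is attached to a console can bypass a redirected stdout and write straight to the screen buffer by opening CONOUT$ directly (a minimal sketch, Windows-only; under ISE the text lands in the hidden console window rather than the ISE pane):
# Only works while attached to a console (hidden or not); the text goes to
# the console screen buffer, not to whatever stdout has been redirected to.
with open("CONOUT$", "w") as con:
    con.write("hello, console\n")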
While a process can attach to only one console at a time, multiple processes can be attached to a single console. However, usually only one process is active. In the case of powershell.exe, for example, after running python.exe its main thread is waiting in the background for python.exe to exit. (Note that this execution model fails badly if in python.exe you start another interactive console process and then exit, since now both the shell and the child process compete for access to the console.)
Related
I am trying to run code in the Python interpreter from a Python script (on Windows, using the terminal built into VS Code), but I can't make anything work. I have spent a lot of time using subprocess, and have also tried the os module, but the issue with those is that they cannot run code in the interpreter. So I can make them start the interpreter, and I can enter code myself, which my script can get the result of (stdout and stderr), but it cannot enter code into the interpreter. I have tried running multiple commands in a row, using \n\r in the commands, and a few other attempts, but it always runs the second command/line after I manually quit() the interpreter. I have tried almost all of the functions from the subprocess module, and have tried numerous configurations for stdin, stdout, and stderr.
So, my question is: how can I have a script enter code into the interpreter?
It would also be nice to collect the results in real time, so my script does not have to start and quit an instance of the interpreter every time it wants the results, but that is not a priority.
Example of the issue with the os module (but the issue is more or less the same with subprocess):
My code:
import os
print(os.popen("python").read())
print(os.popen("1 + 1").read())
Result:
Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> 1 + 2 #entered by me
>>> quit() #entered by me
3 #what the print statement returns
'1' is not recognized as an internal or external command,
operable program or batch file.
P.S. I am aware there is another question about this issue, but the only answer it has does not work for me. (When using the module it suggests, Python cannot find the module even after I installed it.)
EDIT: my code with subprocess:
import subprocess as sp
c = sp.Popen("python", text=True, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
c.stdin.write("1 + 1")
c.stdin.close()
print(c.stdout.read())
Use the subprocess module like this:
import sys
import subprocess
p = subprocess.run(sys.executable, text=True, capture_output=True,
                   input='print(1+1)')
print(p.stdout)
print(p.stderr)
If you want to reuse a single child process, you have to implement a client and server system. One easy method is to implement a remote call with multiprocessing.Manager. See the example in the documentation.
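For illustration, here is a minimal sketch of that idea (the class names, port, and authkey are placeholders of mine, not taken from the documentation): one long-lived process keeps the interpreter alive and exposes an evaluation service, and your script connects to it whenever it wants a result.
# server.py -- keeps one long-lived interpreter around and serves eval requests
from multiprocessing.managers import BaseManager

class EvalService:
    def run(self, expression):
        # Evaluated inside the server process
        return eval(expression)

class EvalManager(BaseManager):
    pass

if __name__ == "__main__":
    EvalManager.register("eval_service", callable=EvalService)
    manager = EvalManager(address=("127.0.0.1", 50000), authkey=b"secret")
    manager.get_server().serve_forever()

# client.py -- connects to the running server and asks it to evaluate code
from multiprocessing.managers import BaseManager

class EvalManager(BaseManager):
    pass

EvalManager.register("eval_service")
manager = EvalManager(address=("127.0.0.1", 50000), authkey=b"secret")
manager.connect()
service = manager.eval_service()
print(service.run("1 + 1"))  # prints 2
The advantage over re-spawning python each time is that the server process stays alive, so results come back without paying the interpreter startup cost on every call.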
As a side note, I don't recommend these if you don't have a good reason for spawning a child process, such as sandboxing an execution environment. Just use eval() in the parent process, because the child process will do the same work as what will be done by eval() if it has been done by the parent process.
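In the simplest case that just means evaluating the expression directly in your own process (trivial sketch):
result = eval("1 + 1")  # runs in the current (parent) process
print(result)  # 2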
A few days ago I was getting issues trying to run TensorFlow models with CUDA enabled, and for a long time I couldn't resolve them, largely because PyCharm displayed the completely unhelpful message
"Process finished with exit code -1073740791 (0xC0000409)"
I then launched VS Code and ran the same code in PowerShell and got a nice and extensive error message which allowed me to resolve all issues within half an hour (same when running in cmd too). Other outputs are also somewhat different. So this makes me assume that PyCharm runs scripts in some type of its own terminal rather than relying on cmd or PowerShell.
Does anyone know if this is the case?
So this makes me assume that PyCharm runs scripts in some type of its own terminal rather than relying on cmd or PowerShell.
Not necessarily. Just because PyCharm displays a custom error message doesn't mean it doesn't rely on cmd. Here's a Python script that simulates what PyCharm does using cmd:
# So we can launch a process from Python
import subprocess as sp

# Launches a Python process for a specific file.
# `stdout=sp.DEVNULL` and `stderr=sp.DEVNULL` suppress the process output,
# so that's why there is no detailed error message
child = sp.Popen(["python", "file.py"], stdout=sp.DEVNULL, stderr=sp.DEVNULL)

# Wait for the Python process to finish
child.communicate()

# The process exit code:
# if the process finishes without errors, it'll be 0,
# otherwise it'll be a "random"-looking value
exit_code = child.returncode

# Displays to stdout the completely unhelpful message
print(f"Process finished with exit code {exit_code} ({hex(exit_code)})")
Either way, here's what PyCharm says:
Initially, the terminal emulator runs with your default system shell, but it supports many other shells such as Windows PowerShell, Command Prompt cmd.exe, sh, bash, zsh, csh, and so on.
I've been using Wing IDE for python programming and I am trying to switch to Eclipse, PyDev.
When I run my code in Wing IDE, after finishing the execution the console goes right back to the interactive shell and I can continue on testing, but I don't know how to do this in Eclipse.
I'm not sure if I am describing my problem properly so I'll use an example:
Let's say I had a simple source code that looked like this (e.g. test.py):
print("hello")
When I run this in Wing IDE by clicking that green arrow, the console would look like this after execution:
Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)]
Type "help", "copyright", "credits" or "license" for more information.
[evaluate untitled-1.py]
hello
>>>>
And I can keep doing whatever on the shell and it would know my code (defined functions etc.).
But when I do the same thing in Eclipse, the console would simply look like this:
hello
and I have to click "Remove All Terminated Launches" button to go back to the shell.
Can this be done in Eclipse?
What you want to use is the interactive console in PyDev (not the regular output when you do a run).
To use it do: Ctrl+Alt+Enter.
Note that if you're in the middle of a debug session, you can also use the debug session console to interact with the program.
It can also be created from the UI via the Open Console dropdown in the console view.
From what I know, we can open multiple consoles of a particular type in Eclipse.
Whenever we run a script within PyDev, it opens a new console to which it prints the output from the script (including error output). However, this is just a new console added to the list of already open consoles, so you can switch back to a previously opened console by using the Display Selected Console option within the console view (refer here for a list of all the available console options).
What does this mean?
You can open a new Python interpreter console using the Open Console option within the Eclipse Console view. You can define your methods and play with the interpreter within that console. You now run a Python script that is open within the PyDev editor. A new console gets opened, in which you see the output from the script (including error output). Now if you want to go back to the interactive console, you simply choose the Python interpreter console that you opened previously from the Display Selected Console option.
Personally, I like this design, wherein the output from your script does not mingle and mess up with your experimental sojourns on the Python console. This in turn results in a crisp, clear and concise view of what is happening within the various Python environments.
Hope this bit of information helps.
psexec is installed in the system32 directory, and at the Windows CMD line or in PowerShell it is able to execute a remote bat file on another server (which in turn executes an SSIS package, and the data is verified as loaded).
I'm attempting to build this into a Python script executed locally, but when I run the following line in a Python shell a CMD window is opened and what looks like the classic 'psexec is not recognized as an internal or external command' error appears (but the CMD window closes so quickly that I'm not 100% sure).
The following is executed unsuccessfully in Python:
import os
os.system(r"psexec.exe \servername\ d:\gis\gis_data\gps\gps_data_sql\importgpsdata.bat")
The following is executed successfully at the Windows CMD line:
psexec.exe \\servername d:\gis\gis_data\gps\gps_data_sql\importgpsdata.bat
d:\etc. being the location of the remote bat to be executed.
For a simple bat execution, I don't think subprocess is required. I have also tried providing the explicit location of psexec.exe, with no luck either.
I'm just at a loss as to why psexec will execute just fine at the command line but not in the python shell.
I expect that this is due to the file system redirector. For a 32 bit process on a 64 bit system, that will redirect references to system32 to SysWOW64.
You have a 64 bit system, and are running 32 bit Python. When you invoke psexec from cmd.exe it finds psexec because cmd.exe is a 64 bit process, and so not subject to redirection. Likewise for PowerShell. But your 32 bit Python cannot see into the 64 bit system directory. So it cannot find psexec.
You also tried to execute C:\Windows\system32\psexec and that failed in the same way. For exactly the same reason. The redirector means that to a 32 bit process that path actually refers to C:\Windows\SysWOW64\psexec.
Test out this hypothesis by invoking C:\Windows\Sysnative\psexec.exe. That should work from your 32 bit Python because it uses the Sysnative alias that allows 32 bit processes to see into the 64 bit system directory.
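To try that from Python, a sketch along the lines of the original call (same remote .bat path, only the psexec path changed; Sysnative is only visible to 32 bit processes on 64 bit Windows) would be:
import os
os.system(r"C:\Windows\Sysnative\psexec.exe \\servername d:\gis\gis_data\gps\gps_data_sql\importgpsdata.bat")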
Any long term solution should involve putting psexec somewhere else. Remember that the system directory belongs to the system and you should not be modifying its contents. I suggest that you create a dedicated folder for such utilities, and add that directory to your PATH.
I just want to see the state of the process. Is it possible to attach a console to the process so I can invoke functions inside it and see some of its global variables?
Ideally the process should keep running without being affected (of course, performance may drop a little bit).
This will interrupt your process (unless you start it in a thread), but you can use the code module to start a Python console:
import code
code.interact()
This will block until the user exits the interactive console by executing exit().
The code module is available in at least Python v2.6, probably others.
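Note that code.interact() starts the console with a fresh namespace by default; if you want to poke at your program's own variables, pass them in explicitly (a small sketch, counter being just an illustrative name):
import code

counter = 42  # some program state you want to inspect

# Make the current globals and locals visible inside the console
code.interact(local=dict(globals(), **locals()))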
I tend to use this approach in combination with signals for my Linux work (for Windows, see below). I slap this at the top of my Python scripts:
import code
import signal
signal.signal(signal.SIGUSR2, lambda sig, frame: code.interact())
And then trigger it from a shell with kill -SIGUSR2 <PID>, where <PID> is the process ID. The process then stops whatever it is doing and presents a console:
Python 2.6.2 (r262:71600, Oct 9 2009, 17:53:52)
[GCC 3.4.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>>
Generally from there I'll load the server-side component of a remote debugger like the excellent WinPDB.
Windows is not a POSIX-compliant OS, and so does not provide the same signals as Linux. However, Python v2.2 and above expose a Windows-specific signal SIGBREAK (triggered by pressing CTRL+Pause/Break). This does not interfere with normal CTRL+C (SIGINT) operation, and so is a handy alternative.
Therefore a portable, but slightly ugly, version of the above is:
import code
import signal
signal.signal(
    vars(signal).get("SIGBREAK") or vars(signal).get("SIGUSR2"),
    lambda sig, frame: code.interact()
)
Advantages of this approach:
No external modules (all standard Python stuff)
Barely consumes any resources until triggered (2x import)
Here's the code I use in my production environment which will load the server-side of WinPDB (if available) and fall back to opening a Python console.
# Break into a Python console upon SIGUSR1 (Linux) or SIGBREAK (Windows:
# CTRL+Pause/Break). To be included in all production code, just in case.
def debug_signal_handler(signal, frame):
    del signal
    del frame

    try:
        import rpdb2
        print
        print
        print "Starting embedded RPDB2 debugger. Password is 'foobar'"
        print
        print
        rpdb2.start_embedded_debugger("foobar", True, True)
        rpdb2.setbreak(depth=1)
        return
    except StandardError:
        pass

    try:
        import code
        code.interact()
    except StandardError as ex:
        print "%r, returning to normal program flow" % ex

import signal
try:
    signal.signal(
        vars(signal).get("SIGBREAK") or vars(signal).get("SIGUSR1"),
        debug_signal_handler
    )
except ValueError:
    # Typically: ValueError: signal only works in main thread
    pass
If you have access to the program's source-code, you can add this functionality relatively easily.
See Recipe 576515: Debugging a running python process by interrupting and providing an interactive prompt (Python)
To quote:
This provides code to allow any python program which uses it to be interrupted at the current point, and communicated with via a normal python interactive console. This allows the locals, globals and associated program state to be investigated, as well as calling arbitrary functions and classes.
To use, a process should import the module, and call listen() at any point during startup. To interrupt this process, the script can be run directly, giving the process Id of the process to debug as the parameter.
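A usage sketch, assuming you saved the recipe code locally as debug_listen.py (a hypothetical filename; use whatever name you gave it):
# In the program you want to inspect, once during startup:
import debug_listen
debug_listen.listen()

# Later, from a shell, interrupt that process by its PID:
#   python debug_listen.py <pid>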
Another implementation of roughly the same concept is provided by rconsole. From the documentation:
rconsole is a remote Python console with auto completion, which can be used to inspect and modify the namespace of a running script.
To invoke in a script do:
from rfoo.utils import rconsole
rconsole.spawn_server()
To attach from a shell do:
$ rconsole
Security note: The rconsole listener started with spawn_server() will accept any local connection and may therefore be insecure to use in shared hosting or similar environments!
Use pyrasite-shell. I can't believe it works so well, but it does. "Give it a pid, get a shell".
$ sudo pip install pyrasite
$ echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope # If YAMA activated, see below.
$ pyrasite-shell 16262
Pyrasite Shell 2.0
Connected to 'python my_script.py'
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> globals()
>>> print(db_session)
>>> run_some_local_function()
>>> some_existing_local_variable = 'new value'
This launches the python shell with access to the globals() and locals() variables of that running python process, and other wonderful things.
I've only tested this personally on Ubuntu, but it seems to cater for OS X too.
Adapted from this answer.
Note: The line switching off the ptrace_scope property is only necessary for kernels/systems that have been built with CONFIG_SECURITY_YAMA on. Take care messing with ptrace_scope in sensitive environments because it could introduce certain security vulnerabilities. See here for details.
Why not simply use the pdb module? It allows you to stop a script, inspect the values of elements, and execute the code line by line. And since it is built upon the Python interpreter, it also provides the features of the classic interpreter. To use it, just put these two lines in your code where you wish to stop and inspect it:
import pdb
pdb.set_trace()
Another possibility, without adding stuff to the python scripts, is described here:
https://wiki.python.org/moin/DebuggingWithGdb
Unfortunately, this solution also requires some forethought, at least to the extent that you need to be using a version of python with debugging symbols in it.
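For reference, the gdb route described there looks roughly like this (assuming a python build with debugging symbols and the python-gdb extensions available):
$ gdb python <pid>   # attach to the running interpreter
(gdb) py-bt          # Python-level backtrace
(gdb) py-locals      # locals of the current Python frame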
pdb_attach worked well for us for attaching the Python debugger to a long-running process.
The author describes it as follows:
This package was made in response to frustration over debugging long running processes. Wouldn't it be nice to just attach pdb to a running python program and see what's going on? Well that's exactly what pdb-attach does.
Set it up as follows in your main module:
import pdb_attach
pdb_attach.listen(50000) # Listen on port 50000.
When the program is running, attach to it by calling pdb_attach from the command line with the PID of the program and the port passed to pdb_attach.listen():
$ python -m pdb_attach <PID> 50000
(Pdb) # Interact with pdb as you normally would
You can use my project madbg. It is a python debugger that allows you to attach to a running python program and debug it in your current terminal. It is similar to pyrasite and pyringe, but supports python3, doesn't require gdb, and uses IPython for the debugger (which means pdb with colors and autocomplete).
For example, to see where your script is stuck, you could run:
madbg attach <pid>
After that you will have a pdb shell, in which you can invoke functions and inspect variables.
Using PyCharm, I was getting a failure to connect to the process on Ubuntu. The fix for this is to disable YAMA. For more info see askubuntu
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope