Is there a way to run the main script with pypy3, but have an import, say helper.py, be executed/interpreted by regular Python? And vice versa?
To clarify, let's say I have main.py that I want to execute with pypy3. That script imports helper, and I want the entire script in helper.py to be executed with python3, or vice versa. I was wondering if there's something like import pyximport; pyximport.install(), where the import is then compiled and basically works/acts differently from main.py. Currently, I would run pypy3 main.py and, within main.py, use subprocess.Popen to execute python helper.py, passing an object or results back through stdout/a pipe. I'm curious if there are other ways I could do this.
Yes, I know you might ask why I would even bother doing this. I am currently considering it because iterating over a file with Python on Windows is much faster than iterating over a file line by line with pypy3. I know they are trying to update/fix this, but since it is not yet fixed, I was wondering what I could do. On Linux, pypy3 works great, even for iterating over a file.
Another scenario could be when a library is not supported in pypy3 yet, so you would still want to execute that script with python3, but for the other parts of the program you may want to use pypy3 to gain some performance. I hope this question is clear.
Subprocess seems like the right way to go. There are, however, more human-friendly libraries for managing subprocesses that you could look at, such as:
Delegator
Envoy
Pexpect
This feels like an interesting experiment to provide fallback support for libraries or functions that are not supported in one runtime environment but can be executed in another supported environment, while still retaining the linear flow of execution of the program.
How you would scale this is an entirely different question.
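As a minimal sketch of the subprocess approach described in the question (the helper.py name, its JSON-on-stdout contract, and the python3 executable name are all assumptions):

# main.py -- run with pypy3
import json
import subprocess
import sys

def run_helper(path):
    # Launch the helper under CPython (assumes "python3" is on PATH)
    # and capture whatever it prints to stdout.
    proc = subprocess.run(
        ["python3", "helper.py", path],
        capture_output=True, text=True, check=True,
    )
    # helper.py is expected to print a single JSON object as its result.
    return json.loads(proc.stdout)

if __name__ == "__main__":
    print(run_helper(sys.argv[1]))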
I want to know because, as far as I know, Python code is slower because it is an interpreted language and needs to do some extra work at runtime, so it is slower. But if I compiled it, would it be more performant, or would it be the same? And what would the difference between them be, other than performance? And when should I compile the code: when I need no other dependency on the Python interpreter, for larger projects, or something else?
I wanted to ask because there are a lot of people new to programming using Python, and that would be very useful information to know.
When you use py2exe, that basically creates a big zip file that contains your source code, the modules you need, and a copy of the Python interpreter. When you run it, it unzips into a miniature self-contained Python environment and runs the interpreter. There is no difference in performance.
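For reference, a classic minimal py2exe setup script looks roughly like this (main.py is a placeholder for your own entry script):

# setup.py -- minimal py2exe configuration
from distutils.core import setup
import py2exe  # registers the py2exe command with distutils

setup(console=['main.py'])  # build a console executable from main.py

Running python setup.py py2exe then produces the bundled program in a dist/ directory.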
I've written a 40k-line program in Python 3. Now I need to use a module throughout my program called pytan, which will add some functionality I need. The problem is that pytan is written in Python 2.
So is it possible to switch the interpreter to Python 2.7 inside one script that is called by another running in Python 3?
What's the best way to handle this situation?
You cannot "switch the interpreter to python 2.7". You're either using one or the other. Your choices are effectively:
Come up with an alternative that doesn't require the pytan module.
Modify the pytan module so that it runs under Python 3.
Modify your code so that it runs under Python 2.
Isolate the code that requires pytan such that you can run it as a subprocess under the Python 2 interpreter (a sketch is shown after this list). There are a number of problems with this solution:
It requires people to have two versions of Python installed.
It will complicate things like syntax highlighting in your editor.
It will complicate testing.
It may require some form of IPC (pipes, sockets, files, etc...) between your main code and your python 2 subprocess (this isn't terrible, but it's a chunk of additional complexity that wouldn't be necessary if you could make one of the other options work).
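A minimal sketch of that last option, assuming a hypothetical pytan_worker.py and a python2 executable on PATH, using JSON over stdin/stdout as the IPC:

# In the Python 3 program:
import json
import subprocess

def call_pytan_worker(request):
    # Hand the request to the Python 2 worker and read its JSON reply.
    # pytan_worker.py is assumed to read a JSON request from stdin, call
    # into pytan, and write a JSON reply to stdout.
    proc = subprocess.run(
        ["python2", "pytan_worker.py"],
        input=json.dumps(request),
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)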
I've been searching a lot for this problem, but I didn't find any valuable answer.
I want to make a script (let's say it is a library) that runs some functions at reboot. Inside my library, there will be a function like:
def randomfunction():
    print("randomtext")
After loading this function, every call to randomfunction() from any Python run (I will use the .py files as CGI scripts) should return "randomtext".
Is that possible, or am I missing something?
It works in the Python IDLE if I use exec, but I want this exec to live at the system level. This would be for a Linux OS.
Don't you need some kind of Interprocess Communication for this?
Might be worth taking a look at these docs: Python IPC
Also, this SO post might help you. I think it offers a solution to what you are looking for.
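For instance, here is one minimal IPC sketch using the standard library's multiprocessing.connection (the port, the authkey, and starting the server at reboot are assumptions, not a complete solution):

# server.py -- long-running process started at reboot; it keeps
# randomfunction() loaded and answers requests from other Python runs.
from multiprocessing.connection import Listener

def randomfunction():
    return "randomtext"

listener = Listener(('localhost', 6000), authkey=b'secret')
while True:
    conn = listener.accept()            # a CGI script connects
    if conn.recv() == 'randomfunction':
        conn.send(randomfunction())     # reply with the result
    conn.close()

# Client side, e.g. inside a CGI script:
#   from multiprocessing.connection import Client
#   conn = Client(('localhost', 6000), authkey=b'secret')
#   conn.send('randomfunction')
#   print(conn.recv())                  # prints "randomtext"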
I want to make "common" script which I will use in all my sconscripts
This script must use some scons functions like Object() or SharedObject()
Is there any scons file that i can import or maybe another useful hack.
Im new to python and scons.
I have done exactly what you are explaining. If your SConstruct and/or SConscript scripts simply import your common Python code, then there is nothing special you have to do, except import the appropriate SCons modules in your Python code.
If, on the other hand, you have a Python script from which you want to invoke SCons (as opposed to launching scons from the command line), then much more effort will be needed. I originally looked into doing this, but later decided it wasn't worth the effort.
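A minimal sketch of the first approach (the site_scons/common_build.py module name and the helper function are my own placeholders): keep the shared code as a plain Python module and pass the construction environment into it.

# site_scons/common_build.py -- shared helpers importable from any SConscript
# (SCons adds site_scons/ to sys.path automatically).
def build_shared_objects(env, sources):
    # Use the environment's builders, e.g. SharedObject(), on each source.
    return [env.SharedObject(src) for src in sources]

# In an SConscript:
#   Import('env')                 # environment exported by the SConstruct
#   import common_build
#   objs = common_build.build_shared_objects(env, ['foo.c', 'bar.c'])
#   env.SharedLibrary('foobar', objs)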
I have been writing command-line Python scripts for a while, but recently I felt really frustrated with speed.
I'm not necessarily talking about processing speed, dispatching tasks, or other command-line-tool-specific processes (that is usually a design/implementation problem), but rather about simply running a tool to get a help menu or display minimal information.
As an example, Mercurial is at around 0.080 s and Git is at 0.030 s.
I have looked into Mercurial's source code (it is Python after all) but the answer to have a fast-responding script still eludes me.
I think imports, and how you manage them, are a big reason for the initial slowdown. But is there a best practice for fast-acting, fast-responding command-line scripts in Python?
A single Python script that imports os and optparse and executes main() to parse some argument options takes 0.160 s on my machine just to display the help menu...
This is 5 times slower than just running git!
Edit:
I shouldn't have mentioned git, as it is written in C. But the Mercurial part still stands, and no, .pyc files don't feel like a big improvement (to me at least).
Edit 2:
Although lazy imports are key to the speedups in Mercurial, the key to slowness in regular Python scripts is having auto-generated entry-point scripts that use pkg_resources, like:
from pkg_resources import load_entry_point
If you have manually generated scripts that don't use pkg_resources you should see at least 2x speed increases.
However! Be warned that pkg_resources does provide a nice way of handling version dependencies, so be aware that not using it basically means possible version conflicts.
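For illustration, a hand-written launcher that skips pkg_resources might look roughly like this (mytool.cli and its main() are placeholders for your own package):

#!/usr/bin/env python
# bin/mytool -- hand-written console script; it imports the package directly
# instead of going through pkg_resources' load_entry_point().
import sys
from mytool.cli import main

if __name__ == '__main__':
    sys.exit(main())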
In addition to compiling the Python files, Mercurial modifies importing to be on demand which does indeed reduce the start-up time. It sets __builtin__.__import__ to its own import function in the demandimport module.
If you look at the hg script in /usr/lib/ (or wherever it is on your machine), you can see this for yourself in the following lines:
try:
    from mercurial import demandimport; demandimport.enable()
except ImportError:
    import sys
    sys.stderr.write("abort: couldn't find mercurial libraries in [%s]\n" %
                     ' '.join(sys.path))
    sys.stderr.write("(check your install and PYTHONPATH)\n")
    sys.exit(-1)
If you change the demandimport line to pass, you will find that the start-up time increases substantially. On my machine, it seems to roughly double.
I recommend studying demandimport.py to see how to apply a similar technique in your own projects.
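For example, here is a minimal sketch of the same idea (this is not Mercurial's implementation, just the general technique): a proxy that defers the real import until an attribute is first accessed.

# lazyimport.py -- import-on-first-use proxy
import importlib

class LazyModule(object):
    def __init__(self, name):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        # Only runs for attributes of the wrapped module; the real import
        # happens here, once, on first use.
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

# Usage: the cost of importing json is only paid if this path is taken.
json = LazyModule('json')

def dump_if_verbose(obj, verbose=False):
    if verbose:
        return json.dumps(obj)   # triggers the real import on first call
    return None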
P.S. Git, as I'm sure you know, is written in C so I'm not surprised that it has a fast start-up time.
I am sorry, but it is certainly not the 0.08 seconds that is bothering you. Although you don't say it, it feels like you are running an "outer" shell (or other-language) script that is calling several hundred Python scripts inside a loop.
That is the only way these start-up times could make any difference. So either you are withholding this crucial information in your question, or your father is this guy.
So, assuming you have an external script that launches on the order of hundreds of Python processes: write that external script in Python, import whatever Python stuff you need in the same process, and run it from there. That way you cut out the interpreter start-up and the module imports for each script execution.
That applies even to Mercurial, for example. You can import "mercurial" and the appropriate submodules and call functions inside them that perform the same actions as the equivalent command-line arguments.
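As a minimal sketch of that idea (mytool.process_file is a placeholder, not Mercurial's API): import once and loop in-process instead of launching a Python process per file.

# driver.py -- one interpreter start-up and one set of imports, instead of
# spawning a new Python process for every file.
import glob
from mytool import process_file   # placeholder; imported once, up front

def main():
    for path in glob.glob('data/*.txt'):
        process_file(path)        # plain function call, no subprocess

if __name__ == '__main__':
    main()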