exec() python command stops the whole execution - python

I am trying to run a script that sequentially changes some parameters in a config file (MET_config_EEv40.cfg) and runs a script ('IS_MET_EEv40_RAW.py') that retrieves these new config parameters:
import os
import sys
import configparser

config_filename = os.getcwd() + '/MET_config_EEv40.cfg'
parser = configparser.ConfigParser()
parser.read('MET_config_EEv40.cfg')
parser.set('RAW', 'product', 'ERA')
parser.set('RAW', 'gee_product', 'ECMWF/ERA5_LAND/HOURLY')
parser.set('RAW', 'indicator', 'PRCP')
parser.set('RAW', 'resolution', '11110')
with open('MET_config_EEv40.cfg', 'w') as configfile:
    parser.write(configfile)

## execute file
os.system(exec(open('IS_MET_EEv40_RAW.py').read()))
#exec(open('IS_MET_EEv40_RAW.py').read())
print('I am here')
After this execution, I get the output of my script as expected:
Period of Reference: 2005 - 2019
Area of Interest: /InfoSequia/GIS/ink/shp_basin_wgs84.shp
Raw data is up to date. No new dates available in raw data
Press any key to continue . . .
But it never prints the final line, I am here, which means the whole program terminates after the script executes. That is not what I want: I would like to change some other config parameters and run the script again.
That output is shown because of these lines of the code:
if (delta.days <= 1):
    sys.exit('Raw data is up to date. No new dates available in raw data')
So could it be that sys.exit() is ending both processes? Any ideas for replacing sys.exit() inside the code to avoid this?
I'm executing this file from a .bat file that contains the following:
@echo OFF
docker exec container python MET/PRCPmain.py
pause

exec(source, globals=None, locals=None, /) does
Execute the given source in the context of globals and locals.
So
import sys
exec("sys.exit(0)")
print("after")
is the same as writing
import sys
sys.exit(0)
print("after")
which obviously terminates and does not print after.
exec has an optional globals argument which you can use to provide your own alternative to sys, for example:
class MySys:
    def exit(self, *args):
        pass

exec("sys.exit(0)", {"sys": MySys()})
print("after")
which does output
after
as it uses exit from the MySys instance. If your code makes use of other things from sys and you want them to work normally, you would need methods mimicking those sys functions in the MySys class.
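Alternatively, since sys.exit() simply raises SystemExit, a minimal sketch (assuming you only need the outer program to survive, not to sandbox the child script) is to catch that exception around the exec call:

try:
    exec(open('IS_MET_EEv40_RAW.py').read())
except SystemExit as e:
    # sys.exit() in the child script lands here instead of killing us
    print('Script stopped early:', e)
print('I am here')  # now reached even when the child calls sys.exit()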

Related

How to compile multiple subprocess python files into single .exe file using pyinstaller

I have a similar question to this one: Similar Question.
I have a GUI where the user can input information, and the other scripts use some of that information to run. I have 4 different scripts, one for each button. I run them as subprocesses so that the main GUI doesn't act up or say that it's not responding. This is an example of what I have, since the code is really long (I used PAGE to generate the GUI).
###Main.py#####
import os
import subprocess
import sys

def resource_path(relative_path):
    # I got this from another post to include images, but I'm also using it to include the scripts
    try:
        # PyInstaller creates a temp folder and stores path in _MEIPASS
        base_path = sys._MEIPASS
    except Exception:
        base_path = os.path.abspath(".")
    return os.path.join(base_path, relative_path)
class aclass:
    def get_info(self):
        global ModelNumber, Serial, SpecFile, dateprint, Oper, outputfolder
        ModelNumber = self.Model.get()
        Serial = self.SerialNumber.get()
        outputfolder = self.TEntry2.get()
        SpecFile = self.Spec_File.get()
        return ModelNumber, Serial, SpecFile, outputfolder

    def First(self):
        aclass.get_info(self)  # Where I use the resource_path function
        First_proc = subprocess.Popen([sys.executable, resource_path('first.py'),
                                       str(ModelNumber), str(Serial), str(path), str(outputfolder)])
        First_proc.wait()
#####First.py#####
import sys
import numpy as np
import scipy
from main import aclass

ModelNumber = sys.argv[1]
Serial = sys.argv[2]
path = sys.argv[3]
path_save = sys.argv[4]
and this goes on for my second, third, and fourth scripts.
In my spec file, I added:
a.datas += [('first.py', 'C:\\path\\to\\script\\first.py', 'DATA')]
a.datas += [('main.py', 'C:\\path\\to\\script\\main.py', 'DATA')]
This compiles and it works, but when I try to convert it to an .exe, it crashes because it can't import first.py properly, along with its own libraries (numpy, scipy, etc.). I've tried adding it to a.datas, and runtime_hooks=['first.py'] in the spec file, and I can't get it to work. Any ideas? I'm not sure if it's giving me this error because it is a subprocess.
Assuming you can't restructure your app so this isn't necessary (e.g., by using multiprocessing instead of subprocess), there are three solutions:
Ensure that the .exe contains the scripts as an (executable) zipfile—or just use pkg_resources—and copy the script out to a temporary directory so you can run it from there.
Write a multi-entrypoint wrapper script that can be run as your main program, and also run as each script—because, while you can't run a script out of the packed exe, you can import a module out of it.
Using pkg_resources again, write a wrapper that runs the script by loading it as a string and running it with exec instead.
The second one is probably the cleanest, but it is a bit of work. And, while we could rely on setuptools entry points to do some of the work, trying to explain how to do this is much harder than explaining how to do it manually,1 so I'm going to do the latter.
Let's say your code looked like this:
# main.py
import subprocess
import sys
spam, eggs = sys.argv[1], sys.argv[2]
subprocess.run([sys.executable, 'vikings.py', spam])
subprocess.run([sys.executable, 'waitress.py', spam, eggs])
# vikings.py
import sys
print(' '.join(['spam'] * int(sys.argv[1])))
# waitress.py
import sys
import time
spam, eggs = int(sys.argv[1]), int(sys.argv[2])
if eggs > spam:
    print("You can't have more eggs than spam!")
    sys.exit(2)
print("Frying...")
time.sleep(2)
raise Exception("This sketch is getting too silly!")
So, you run it like this:
$ python3 main.py 3 4
spam spam spam
You can't have more eggs than spam!
We want to reorganize it so there's a script that looks at the command-line arguments to decide what to import. Here's the smallest change to do that:
# main.py
import subprocess
import sys
if sys.argv[1][:2] == '--':
    script = sys.argv[1][2:]
    if script == 'vikings':
        import vikings
        vikings.run(*sys.argv[2:])
    elif script == 'waitress':
        import waitress
        waitress.run(*sys.argv[2:])
    else:
        raise Exception(f'Unknown script {script}')
else:
    spam, eggs = sys.argv[1], sys.argv[2]
    subprocess.run([sys.executable, __file__, '--vikings', spam])
    subprocess.run([sys.executable, __file__, '--waitress', spam, eggs])
# vikings.py
def run(spam):
    print(' '.join(['spam'] * int(spam)))
# waitress.py
import sys
import time
def run(spam, eggs):
    spam, eggs = int(spam), int(eggs)
    if eggs > spam:
        print("You can't have more eggs than spam!")
        sys.exit(2)
    print("Frying...")
    time.sleep(2)
    raise Exception("This sketch is getting too silly!")
And now:
$ python3 main.py 3 4
spam spam spam
You can't have more eggs than spam!
A few changes you might want to consider in real life:
DRY: We have the same three lines of code copied and pasted for each script, and we have to type each script name three times. You can just use something like __import__(sys.argv[1][2:]).run(*sys.argv[2:]) with appropriate error handling.
Use argparse instead of this hacky special casing for the first argument. If you're already sending non-trivial arguments to the scripts, you're probably already using argparse or an alternative anyway.
Add an if __name__ == '__main__': block to each script that just calls run(*sys.argv[1:]), so that during development you can still run the scripts directly to test them.
I didn't do any of these because they'd obscure the idea for this trivial example, but a combined sketch follows below.
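For concreteness, a hedged sketch of main.py applying all three suggestions at once (untested; vikings and waitress are the example modules above):

# main.py (sketch: DRY dispatch + argparse + runnable scripts)
import argparse
import subprocess
import sys

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--script', help='run as the named script instead of as main')
    parser.add_argument('args', nargs='*')
    ns = parser.parse_args()
    if ns.script:
        # DRY dispatch: import the named module and call its run()
        __import__(ns.script).run(*ns.args)
    else:
        spam, eggs = ns.args
        subprocess.run([sys.executable, __file__, '--script', 'vikings', spam])
        subprocess.run([sys.executable, __file__, '--script', 'waitress', spam, eggs])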
1 The documentation is great as a refresher if you've already done it, but as a tutorial and explanatory rationale, not so much. And trying to write the tutorial that the brilliant PyPA guys haven't been able to come up with for years… that's probably beyond the scope of an SO answer.

Write python code that executes python scripts

I have a Python file (sql_script.py) with some methods to add/modify data into a SQL database, say
import_data_into_specifications_table
import_data_into_linkage_table
truncate_linkage_table
....(do_other_stuff_on_db)
connect_db
Sometimes I have to call only one of the methods, other times several of them.
Until now what I did was modify the main method according to what I needed to do:
if __name__ == '__main__':
    conn = connect_db()
    import_data_into_specifications_table(conn=conn)
    import_data_into_linkage_table(conn=conn)
    conn.close()
But I find it a bad practice, as I always have to remember to remove the main before committing the code.
A possible option could be to write an external python file, say launch_sql_script.py, in which I write all possible combinations of methods I have to run, say:
def import_spec_and_linkage():
    conn = connect_db()
    import_data_into_specifications_table(conn=conn)
    import_data_into_linkage_table(conn=conn)
    conn.close()
...
if __name__ == '__main__':
    import_spec_and_linkage()
It can be useful to version this file, but I will still need to modify the main code according to what I need to do.
Do you think this is a good practice? Do you have any other suggestions?
The simplest way is to use the program-arguments mechanism: describe the intended action as arguments when executing the script.
Take a peek at sys.argv.
Here is the scratch:
def meow():
    print("Meow!")

def bark():
    print("Bark!")

def moo():
    print("Moo!")

actions = {
    "meow": meow,
    "bark": bark,
    "moo": moo,
}

from sys import argv
actions[argv[1]]()
If you're going to parse sophisticated program arguments, check out the argparse module.
Option 1: Separate them into individual scripts and run each from command line
# import_data_into_specifications_table.py
if __name__ == '__main__':
    conn = connect_db()  # import from a shared file
    import_data_into_specifications_table(conn=conn)
# in bash
$ python import_data_into_specifications_table.py
Option 2: Write one file that parses command line arguments
# my_sql_script.py
if __name__ == '__main__':
    conn = connect_db()
    if args.spec_table:  # use argparse to get these
        import_data_into_specifications_table(conn=conn)
    if args.linkage_table:
        import_data_into_linkage_table(conn=conn)
    ...
# in bash
$ python my_sql_script.py --spec_table --linkage_table
I would favour option 2 if the order of the operations doesn't matter or is always constant. If there are many permutations, I would go with option 1.
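To fill in the "use argparse to get these" comment, a minimal sketch of option 2 might look like this (untested; function names as in the question):

# my_sql_script.py (sketch)
import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--spec_table', action='store_true')
    parser.add_argument('--linkage_table', action='store_true')
    args = parser.parse_args()

    conn = connect_db()
    if args.spec_table:
        import_data_into_specifications_table(conn=conn)
    if args.linkage_table:
        import_data_into_linkage_table(conn=conn)
    conn.close()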

Why are environment variables empty in Flask apps?

I have a flask app (my_app) that calls a function in a different file (my_function):
my_app.py:
from flask import Flask, render_template
from my_functions import my_function

app = Flask(__name__)

@app.route('/')
def index():
    my_function()
    return render_template('index.html')
my_functions.py:
def my_function():
    try:
        import my_lib
    except ImportError:
        print("my_lib not found in system!")
    # do stuff...

if __name__ == "__main__":
    my_function()
When I execute my_functions.py directly (i.e., python my_functions.py), "my_lib" is imported without error; however, when I execute the Flask app (i.e., python my_app.py), I get an import error for "my_lib".
When I print the LD_LIBRARY_PATH variable at the beginning of each file:
print(os.environ['LD_LIBRARY_PATH'])
I get the correct value when calling my_functions.py, but no value (empty) when calling my_app.py. Trying to set this value at the beginning of my_app.py has no effect:
os.environ['LD_LIBRARY_PATH'] = '/usr/local/lib'
Questions:
(1) Why is 'LD_LIBRARY_PATH' empty when called within the Flask app?
(2) How do I set it?
Any help appreciated.
LD_LIBRARY_PATH is cleared when executing the flask app, likely for security reasons as Mike suggested.
To get around this, I use subprocess to make a call directly to an executable:
import subprocess
call_str = "executable_name -arg1 arg1_value -arg2 arg2_value"
subprocess.call(call_str, shell=True, stderr=subprocess.STDOUT)
Ideally the program should be able to use the python bindings, but for now calling the executable works.
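If the executable itself needs LD_LIBRARY_PATH, a hedged alternative (untested; the path and executable name are illustrative) is to pass an explicit environment to the child instead of relying on the inherited one:

import os
import subprocess

# Copy the current environment and add the library path the child needs
env = dict(os.environ, LD_LIBRARY_PATH='/usr/local/lib')
subprocess.call(['executable_name', '-arg1', 'arg1_value', '-arg2', 'arg2_value'], env=env)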

WLST commands in child script

Currently I am working on jython script that can execute an external jython script. The executed (external) jython script must run in the same weblogic session where my script runs in order to be able to cancel the session (in case of error) or activate it. The weblogic connection is being created in my script and the called script has to use the created (with ‘startEdit()’) session.
I found a hybrid solution but perhaps it can be done better.
The executed script in the working solution:
import wlst_operations as wl
print 'Start'
wl.cd('/')
print wl.cmo
wl.cd('/Servers/AdminServer')
print wl.cmo
wl.cd('/JDBCSystemResources/pasDataSource/JDBCResource/pasDataSource/JDBCConnectionPoolParams/pasDataSource')
print wl.cmo
wl.cmo.setInitialCapacity(6)
The wlst_operations jython was taken from http://www.javamonamour.org/2013/08/wlst-nameerror-cd.html.
As you can see, an object-like reference ('wl.') must be put in front of each WLST command…
The output is fine:
[MBeanServerInvocationHandler]com.bea:Name=gintdev1,Type=Domain
[MBeanServerInvocationHandler]com.bea:Name=AdminServer,Type=Server
[MBeanServerInvocationHandler]com.bea:Name=pasDataSource,Type=weblogic.j2ee.descriptor.wl.JDBCConnectionPoolParamsBean,Parent=[gintdev1]/JDBCSystemResources[pasDataSource],Path=JDBCResource[pasDataSource]/JDBCConnectionPoolParams
When I don’t use the object reference:
from wlstModule import *
print 'user defined'
cd('/')
print cmo
cd('/Servers/AdminServer')
print cmo
cd('/JDBCSystemResources/pasDataSource/JDBCResource/pasDataSource/JDBCConnectionPoolParams/pasDataSource')
print cmo
cmo.setInitialCapacity(6)
Then the output is:
[MBeanServerInvocationHandler]com.bea:Name=gintdev1,Type=Domain
[MBeanServerInvocationHandler]com.bea:Name=gintdev1,Type=Domain
[MBeanServerInvocationHandler]com.bea:Name=gintdev1,Type=Domain
Problem invoking WLST - Traceback (innermost last):
File "/tmp/lv30083/./orchestrator.py", line 83, in ?
File "/tmp/lv30083/./orchestrator.py", line 66, in main
File "/tmp/lv30083/./utils/orch_wl.py", line 55, in execute
File "user_defined_script_incorrect.py", line 11, in ?
AttributeError: setInitialCapacity
i.e. the cd commands are executed (no error is raised) but it just doesn't jump to the datasource…
My script is
import orch_logging
import sys
from wlstModule import *

class WeblogicManager(object):
    def connect_to_server(self, p_ssl, p_domainName, p_userConfigFile, p_userKeyFile):
        logger = orch_logging.Logger()
        logger.info('Trying to connect to the node manager. domainName=' + p_domainName + ',userConfigFile=' + p_userConfigFile + ',ssl=' + p_ssl + ',p_userKeyFile=' + p_userKeyFile)
        try:
            connect(domainName=p_domainName, userConfigFile=p_userConfigFile, userKeyFile=p_userKeyFile)
            return True
        except:
            logger.error("Error while trying to connect to node manager!")
            return False

    def startEdit(self):
        edit()
        startEdit()

    def activate(self):
        activate()

    def undo(self):
        cancelEdit('y')

    def disconnect(self):
        disconnect()

    def execute(self, path):
        execfile(path)
Is there any way to use the WLST commands without using the 'wl.' reference in front of them?
Thanks,
V.
I had to modify my script. All the operations are now in one context (in the context of my script)
import sys
from wlstModule import *
from weblogic.management.scripting.utils import WLSTUtil

# needed to execute normal (without object reference) WLST commands in child scripts
origPrompt = sys.ps1
theInterpreter = WLSTUtil.ensureInterpreter()
WLSTUtil.ensureWLCtx(theInterpreter)
execfile(WLSTUtil.getWLSTScriptPath())
execfile(WLSTUtil.getOfflineWLSTScriptPath())
exec(WLSTUtil.getOfflineWLSTScriptForModule())
execfile(WLSTUtil.getWLSTCommonModulePath())
theInterpreter = None
sys.ps1 = origPrompt

modules = WLSTUtil.getWLSTModules()
for mods in modules:
    execfile(mods.getAbsolutePath())
wlstPrompt = "false"

class WeblogicManager(object):
    ...
    def execute(self, path):
        execfile(path)
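A hypothetical usage sketch of this class (argument values are illustrative; the child script run via execute() can then call cd()/cmo directly):

mgr = WeblogicManager()
if mgr.connect_to_server('false', 'gintdev1', 'user.cfg', 'user.key'):
    mgr.startEdit()
    mgr.execute('user_defined_script.py')  # runs in the same WLST edit session
    mgr.activate()
    mgr.disconnect()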

How do I make PyCharm show code coverage of programs that use multiple processes?

Let's say I create this simple module and call it MyModule.py:
import threading
import multiprocessing
import time

def workerThreaded():
    print 'thread working...'
    time.sleep(2)
    print 'thread complete'

def workerProcessed():
    print 'process working...'
    time.sleep(2)
    print 'process complete'

def main():
    workerThread = threading.Thread(target=workerThreaded)
    workerThread.start()
    workerProcess = multiprocessing.Process(target=workerProcessed)
    workerProcess.start()
    workerThread.join()
    workerProcess.join()

if __name__ == '__main__':
    main()
And then I throw this together to unit test it:
import unittest
import MyModule

class MyModuleTester(unittest.TestCase):
    def testMyModule(self):
        MyModule.main()

unittest.main()
(I know this isn't a good unit test because it doesn't actually TEST it, it just runs it, but that's not relevant to my question)
If I run this unit test in PyCharm with code coverage, then it only shows the code inside the workerThreaded() and main() functions as being covered, even though it clearly covers the workerProcessed() function as well.
How do I get PyCharm to include code that was started in a new process in its code coverage? Also, how can I get it to include the if __name__ == '__main__': block as well?
I'm running PyCharm 2.7.3, as well as Python 2.7.3.
Coverage.py can measure code run in subprocesses, details are at http://nedbatchelder.com/code/coverage/subprocess.html
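In outline, the documented approach is to let each process write its own data file and merge them afterwards. A minimal sketch, assuming coverage.py's subprocess support as described at that link:

# sitecustomize.py (must be importable by every process); a sketch
# prerequisite 1: .coveragerc contains
#     [run]
#     parallel = True
# prerequisite 2: the COVERAGE_PROCESS_START environment variable holds the
#     path to that .coveragerc before any subprocess starts
import coverage
coverage.process_startup()  # no-op unless COVERAGE_PROCESS_START is set
# afterwards, run `coverage combine` to merge the per-process data files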
I managed to make it work with subprocesses; not sure if this will work with threads or with Python 2.
Create a .coveragerc file in your project root:
[run]
concurrency=multiprocessing
Create a sitecustomize.py file in your project root:
import atexit
from glob import glob
import os
from functools import partial
from shutil import copyfile
from tempfile import mktemp

def combine_coverage(coverage_pattern, xml_pattern, old_coverage, old_xml):
    from coverage.cmdline import main

    # Find newly created coverage files
    coverage_files = [file for file in glob(coverage_pattern) if file not in old_coverage]
    xml_files = [file for file in glob(xml_pattern) if file not in old_xml]

    if not coverage_files:
        raise Exception("No coverage files generated!")
    if not xml_files:
        raise Exception("No coverage xml file generated!")

    # Combine all coverage files
    main(["combine", *coverage_files])
    # Convert them to xml
    main(["xml"])
    # Copy combined xml file over PyCharm generated one
    copyfile('coverage.xml', xml_files[0])
    os.remove('coverage.xml')

def enable_coverage():
    import coverage

    # Enable subprocess monitoring by providing rc file and enable coverage collecting
    os.environ['COVERAGE_PROCESS_START'] = os.path.join(os.path.dirname(__file__), '.coveragerc')
    coverage.process_startup()

    # Get current coverage files so we can process only newly created ones
    temp_root = os.path.dirname(mktemp())
    coverage_pattern = '%s/pycharm-coverage*.coverage*' % temp_root
    xml_pattern = '%s/pycharm-coverage*.xml' % temp_root
    old_coverage = glob(coverage_pattern)
    old_xml = glob(xml_pattern)

    # Register atexit handler to collect coverage files when python is shutting down
    atexit.register(partial(combine_coverage, coverage_pattern, xml_pattern, old_coverage, old_xml))

if os.getenv('PYCHARM_RUN_COVERAGE'):
    enable_coverage()
This basically detects if the code is running under PyCharm Coverage and collects newly generated coverage files. There are multiple files: one for the main process and one for each subprocess. So we need to combine them with "coverage combine", then convert them to xml with "coverage xml", and copy the resulting file over PyCharm's generated xml file.
Note that if you kill the child process in your tests, coverage.py will not write the data file.
It does not require anything else; just hit the "Run unittests with Coverage" button in PyCharm.
That's it.
