I'm trying to analyze complex Python code. For some specific reasons, I decided to use tracing instead of a debugger.
The first thing I tried was:
python -m trace -t /path/to/file.py
The output of this command turned out to be completely useless. The reason is that a thread was running in the background, doing some work every 0.1 s, and this generated thousands of lines of tracing information. The useful information (the reaction to my interaction with the app's GUI) scrolled away in a blink.
For this reason, I decided to limit the scope of tracing. I did the following.
The original code:
def caller():
    print("I'm the caller.")
    callee()

def callee():
    print("Callee here.")
Code modified for tracing:
import trace

tracer = trace.Trace(
    count=0,
    trace=1,
)

def caller():
    print("I'm the caller.")
    tracer.runfunc(callee)

def callee():
    print("Callee here.")
Now I launched the program, and the tracer did not generate any output. I was hoping this would provide complete tracing information, but only for the limited scope of callee().
No success with tracer.run() either.
What I was able to do: when I set count=1, I could catch the coverage data with tracer.results() and write it to a file. But even in this case, no tracing information was generated.
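For reference, a minimal sketch of the coverage path that did work for me (the write_results arguments here are my assumptions, not the exact call I used):

import trace

tracer = trace.Trace(count=1, trace=0)
tracer.runfunc(callee)

# Collect the coverage data and write the .cover files to coverdir.
results = tracer.results()
results.write_results(show_missing=True, coverdir='.')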
Am I doing anything wrong?
I've got a LibreOffice python script that uses serial IO. On my systems, opening a serial port is a very slow process (around 1 second), so I'd like to keep the serial port open, and just send stuff as required.
But LibreOffice Python apparently reloads the Python framework every time a call is made, unlike most Python setups, where the process is persistent and top-level code in a module runs only once, when the module is imported.
Is there a way in LibreOffice python to persist objects between calls?
SerialObject = None

def return_global():
    return str(SerialObject)  # always returns "None"

def init_serial_object():
    SerialObject = True
It looks like there is a simple bug. Add global and then it works.
However, it may be that the real problem is your setup. Here is a working example. Put the code in a file called inctest.py under $LIBREOFFICE_USER_DIR/Scripts/python/pythonpath/incmod/.
def init_serial_object():
    global SerialObject
    SerialObject = True

def return_global():
    return str(SerialObject)

def moduleVersion():
    return "2.0"  # change to verify that this is the most recently updated code
The code to call it should be located in the user profile directory (that is, location=user).
from incmod import inctest

inctest.init_serial_object()
msgbox("{}: {}".format(
    inctest.moduleVersion(), inctest.return_global()))
Run the script by going to Tools > Macros > Run Macro and find it under My Macros.
Be aware that inctest.py will not always get reloaded. There are two ways to reload it: restart LibreOffice, or force Python to reload it by doing del sys.modules[mod], as sketched below.
The moduleVersion() function is not necessary but helps you see when the module is getting reloaded — make changes to that line and then see whether the output changes.
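For example, a minimal sketch of forcing the reload (the module key is an assumption; inspect sys.modules to find the exact name on your system):

import sys

# Hypothetical module key; adjust to your actual package.module path.
mod = 'incmod.inctest'
if mod in sys.modules:
    del sys.modules[mod]

from incmod import inctest  # the next import re-executes the module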
When executing a Python script in Abaqus/CAE, errors, warnings, and other messages are printed in the dedicated message window, and callback functions can further customise message behaviour.
During a noGUI script execution, all messages are suppressed, even ones that would be really helpful to see, such as "Keyword import not supported." or "Job exited with error(s)."
The Job object has no return values to check for completion.
If the job, or job import fails early enough there are no files generated to check for warning messages.
Callback functions are not supported:
Messaging commands are available only if Abaqus/CAE is run interactively using the GUI.
My main problem with the 'usual' way of just calling abaqus job=input is that I need to iterate on the model as it generates results, and I am trying to keep everything in one instance of Python (or rather, Abaqus' Python) so that I can move objects around and update the model easily without constantly re-reading 30000-line input files.
As it stands my main loop looks like this:
import os
import sys
import abaqus
import abaqusConstants as abqc

my_db = abaqus.Mdb(pathname='models.cae')
my_model = my_db.ModelFromInputFile('Model-1', './inpfiles/import_me.inp')
# do some checks on the model to see if all the keywords were imported;
# manually read them in if not.

for i in [1]:  # future iteration loop
    my_job = my_db.Job(name='Job-1', model='Model-1', scratch='./work')
    my_job.submit()
    my_job.waitForCompletion()
    # check results, update model
Is there any way of gaining access to the abaqus messages that are usually relegated to the message dialog box?
Update:
After some experimenting, simply setting sys.stdout = sys.__stdout__ does the trick for one layer of code, i.e. the warning messages delivered by the ModelFromInputFile function. However, the output stream from the actual job submission is still not printed to the screen... Partial success?
Using this answer on redirecting output also did not bear fruit, since sys.stdout as defined by Abaqus does not have a fileno attribute.
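For reference, the partial workaround boils down to this (a sketch of what I ran before calling ModelFromInputFile):

import sys

# Abaqus/CAE in noGUI mode replaces sys.stdout with its own object;
# restoring the real stdout surfaced the ModelFromInputFile warnings,
# but the job-submission messages still did not appear.
sys.stdout = sys.__stdout__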
Call a function in a running program
I am fairly new to programming and recently decided that I want to expand my Python knowledge and practice a bit. For that reason I decided to create a little weather station with a Raspberry Pi.
The program I am currently creating takes the output from a thermometer, parses it, and writes it into a database. At the moment, the program is started every minute; after completing the aforementioned procedure, the program ends and all the instances get deleted (is that how you say it?).
I think that restarting it every minute wastes resources and loses time, so I want my program to run all the time and be accessible via command-line commands (bash).
For example I have the class Thermometer:
class Thermometer():
    def measure(self):
        # do stuff
        # return stuff
        pass
An instance of this class is created like this:
if __name__ == "__main__":
    thermo = Thermometer()
    while True:
        pass
Is it possible that I could call the function measure like this?:
sudo python3 < thermo.measure()
Or is there another way to achieve this, or is my approach completely wrong? Also, how would one describe this problem? I tried searching for "Python call function from outside" or "Python call function from bash", but I didn't find anything useful except "Call a function from a running process" on Stack Overflow, and that seems to be for the wrong Python version.
You can find my code here: github Jocomol/weatherstation, or, if you don't trust the link, go to GitHub and search for "Jocomol/weatherstation".
Code review
As I already mentioned, I am quite new to Python and programming itself, so I make a lot of mistakes and write useless code that could be done differently. So I am thankful for any feedback I can get on my code. If you guys could take a look at my code and point out places that aren't optimal or where I did something completely useless, I would be very happy.
I would like users of my program to be able to define custom scripts in python without breaking the program. I am looking at something like this:
def call(script):
    code = "access modification code" + script
    exec(code)
Where "access modification code" defines a scope such that script can only access variables it instantiates itself. Is it possible to do this or something with similar functionality, such as creating a new python environment with its own scope and then receiving output from it?
Thank you for your time :)
Clarification Edit
"I want to prevent both active attacks and accidental interaction with the program variables outside the users script (hence hiding all globals). The user scripts are intended to be small and inputted as text. The return of the user script needs to be immediate, as though it were native to the program."
There are two separate problems you want to prevent in this scenario:
Prevent an outside attacker from running arbitrary code in the context of the OS user executing your program. This means preventing arbitrary code execution and privilege escalation.
Prevent a legitimate user of your program from changing the program's behavior in unintended ways.
For the first problem, you need to make sure that the source of the python code you execute is the user. You shouldn't accept input from a socket, or from a file that other users can write to. You need to make sure, somehow, that it was the user who is running the program that provided the input.
Actual solutions will depend on your OS, but you may want to consider setting up restrictive file permissions, if you're storing the code in a file.
Don't ignore or downplay this problem, or your users will fall victim to virus/malware/hackers thanks to your program.
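For instance, on POSIX systems a sketch of the file-permission step might look like this (the path is a placeholder):

import os
import stat

# Placeholder path; use wherever you store the user's code.
USER_CODE_FILE = "/home/user/user_code_file.py"

# Owner read/write only (mode 0o600), so other users cannot modify the file.
os.chmod(USER_CODE_FILE, stat.S_IRUSR | stat.S_IWUSR)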
The way you solve the second problem depends on what exactly constitutes intended behaviour in your program. If you're happy with outputting simple data structures, you can run your user-inputted code in a separate process, and pass the result over a pipe in a serialization format such as JSON or YAML.
Here's a very simple example:
# Remember to set restrictive file permissions on this file. This is OS-dependent.
USER_CODE_FILE = "/home/user/user_code_file.py"
# Absolute path to the python binary (executable).
PYTHON_PATH = "/usr/bin/python"

import subprocess
import json

user_code = '''
import json
my_data = {"a": [1, 2, 3]}
print(json.dumps(my_data))
'''

with open(USER_CODE_FILE, "w") as f:
    f.write(user_code)

user_result_str = subprocess.check_output([PYTHON_PATH, USER_CODE_FILE])
user_result = json.loads(user_result_str)

print(user_result)
This is a fairly simple solution, and it has significant overhead. Don't use this if you need to run the user code many times in a short period of time.
Ultimately, this solution is only effective against unsophisticated attackers (users). Generally speaking, there's no way to protect any user process from the user itself - nor would it make much sense.
If you really want more assurance, and want to mitigate the first problem too, you should run the process as a separate ("guest") user. This, again, is OS-dependent; a sketch follows.
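A sketch of that idea, assuming a dedicated account named sandboxuser exists and sudo is configured to allow running the interpreter as it:

import subprocess

# Hypothetical guest account; it must exist, and sudo must permit
# running commands as that user. PYTHON_PATH and USER_CODE_FILE are
# as defined in the example above.
user_result_str = subprocess.check_output(
    ["sudo", "-u", "sandboxuser", PYTHON_PATH, USER_CODE_FILE])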
Finally, a warning: avoid exec and eval to the best of your abilities. They don't protect either your program or the user against the inputted code. There's a lot of information about this on the web; just search for "python secure eval".
I wrote ActionScript and JavaScript, where adding a callback to invoke a piece of code is pretty normal in everyday life.
But when it comes to Python, it seems not quite such an easy job. I can hardly see things written in callback style. I mean a real callback, not a fake one; here's a fake callback example:
Given a list of files to download, you can write:
urls = []

def downloadfile(url, callback):
    # download the file
    callback()

def downloadNext():
    if urls:
        downloadfile(urls.pop(), downloadNext)

downloadNext()
This works, but it would eventually hit the maximum recursion limit, while a real callback won't.
A real callback, as far as I understand, cannot come from the program itself; it must come from physics, like a CPU clock or some hardware IO state change. This raises an interrupt to the CPU; the CPU interrupts the current flow of execution and checks whether the runtime registered any code for this interrupt, and if so, runs it. The OS wraps this up as a signal or event or something else and finally passes it to the application. (If I'm wrong, please point it out.) This avoids piling the call stack up to overflow; otherwise you drop into infinite recursion.
There is something like coroutines in Python to handle multiple tasks, but they must be used very carefully: if any one routine blocks, all tasks are blocked.
There are third-party libs like Twisted or gevent, but they seem very troublesome to get and install, are platform-limited, and are not well supported in Python 3; that's not good for writing and distributing a simple app.
multiprocessing is heavy and only works on Linux.
threading, because of the GIL, should never be the first choice, and it seems a pseudo one anyway.
Why doesn't Python provide an implementation in the standard library? And is there another easy way to get the real callback I want?
Your example code is just a complicated way of sequentially downloading all files.
If you really want to do asynchronous downloading, using a multiprocessing.Pool, especially its Pool.map_async member function, is the best way to go (see the sketch at the end of this answer). Note that this uses callbacks.
According to the documentation for multiprocessing:
"It runs on both Unix and Windows."
But it is true that multiprocessing on MS Windows has some extra restrictions.
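A minimal sketch of what that can look like (download_file and the URLs are placeholders, not a real download implementation):

from multiprocessing import Pool

def download_file(url):
    # fetch the file here; the return value is collected by the pool
    return url

def on_done(results):
    # callback: invoked once in the parent process with the list of results
    print("finished:", results)

if __name__ == "__main__":
    urls = ["http://example.com/a", "http://example.com/b"]
    with Pool(4) as pool:
        pool.map_async(download_file, urls, callback=on_done).wait()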