What does the phrase "created at runtime" mean? - Python

My supervisor asked me the other day if a log file in a Python script is created at runtime or not. Is that another way of asking: is a new log file created every time the script is run? Because to me everything happens "at runtime" and it can be no other way. How else could it be?

The phrase is usually used when you cannot figure out the content of some variable/data by just analyzing the code and other associated files (such as configs). The easiest example I can think of is:
x = 'hi'
print(x)
versus
x = input()
print(x)
In the second example "x is created at runtime". (Or more precisely, the data which the name x points to is dynamic and can be different in each execution.)

Yeah, I think you got it right! However, if you would, for example, only append to an already existing log file, it would not be considered "created at runtime".

Here is the lifecycle of a program (source: Wikipedia).
You can create the log file before run time (just create a new file in your project), or let the program create one during run time if the file does not already exist. The extreme case is to keep all the log data in memory during execution and only create the file at the very end, but I think that's a really bad idea XD...
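For instance, a minimal sketch using Python's logging module (the file name app.log is an assumption): the file handler creates the log file at run time if it is missing, and appends otherwise.
import logging

# The file handler creates app.log at run time if it does not already exist;
# filemode='a' appends to an existing file instead of truncating it.
logging.basicConfig(filename='app.log', filemode='a', level=logging.INFO)
logging.info('This line is written during execution.')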


Save and load variables using shelve

I'm trying to implement code that will generate a lot of scenarios. These scenarios will afterwards be executed by 3 different simulators.
I'm saving my scenarios to a file using shelve. My code is like this:
import shelve

def save_variables(I, T, R, C, lambd, K, iteraction):
    filename = '/folder/shelve_{}.out'.format(iteraction)
    my_shelf = shelve.open(filename, 'n')
    for key in dir():
        try:
            my_shelf[key] = locals()[key]
        except TypeError:
            pass
    my_shelf.close()
and I'm loading the data in each simulator like this:
my_shelf = shelve.open(filename)
for key in my_shelf:
    globals()[key] = my_shelf[key]
my_shelf.close()
This part works great. My problem is:
If I run all the code together in the same terminal (for example, first the scenario_generator, then simulator_1, then simulator_2, and finally simulator_3, in the same line of execution), the code works great.
But if I run only the scenario_generator in one terminal and start each simulator in 3 other terminals, I receive the following error:
Number of arguments: 2 arguments.
Argument List: on
Iteraction 0
Traceback (most recent call last):
File "main.py", line 53, in <module>
onets.onets(i)
File "/Users/simulator_1.py", line 31, in simulator_1
n = [[0 for x in range(I+1)] for y in range(T+1)]
NameError: name 'T' is not defined
I understand from this error that it was not possible to read the data saved by shelve, yet the files were in the folder. Does someone know how I can fix this problem?
P.S.: It is important to me to execute this way (separately) because I want to start the 3 simulators at the same time, in different terminals; running them in parallel will save a lot of execution time.
Thanks, everyone.
Without being sure, I suspect you are not storing all the variables you think you are in the shelve.
When the steps are together in a single script, if some values are not written to the shelve, you won't notice, because you explicitly silence the TypeError exception.
This is not recommended, as errors go unnoticed. You probably did it exactly to avoid some error that was happening.
Because the scripts are together in the same file, they share the same global scope.
When you read the values back into the global scope and some values are missing, you are probably covered by the original values in that same global scope.
So you miss some values, but when you need them, they are already there in the global scope to be read.
With separate files there is no longer a common global scope, so any missing value will cause an error.
Remove the pass and make the except clause print the offending key to check for missing keys (see the sketch below).
Other than that, I see no problem in generating file data to read later from different processes/scripts as you described (given there is no cache/sync trouble and you open the files for reading only).
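For example, a debugging version of the saving function could look like this (a sketch of the function from the question, with the silent pass replaced by a report):
import shelve

def save_variables(I, T, R, C, lambd, K, iteraction):
    filename = '/folder/shelve_{}.out'.format(iteraction)
    my_shelf = shelve.open(filename, 'n')
    for key in dir():
        try:
            my_shelf[key] = locals()[key]
        except TypeError as exc:
            # Report what could not be shelved instead of hiding it
            print('Could not shelve {}: {}'.format(key, exc))
    my_shelf.close()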

How to call a def in Python?

I made this small program:
I want to know how to call it automatically, so that when I open the .py file it shows up immediately.
Please understand that I am a beginner in Python.
The right way to do this is to add the following statement at the end of the file:
if __name__ == "__main__":
table_par_7()
Explanation
This will ensure that if you open the file directly (and thus make it the main file), the function will run, but if another Python file imports this file (so this file isn't the main one), it won't run.
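For context, the whole file might then look like this (a minimal sketch; the body of table_par_7 is an assumption, since the original program isn't shown, though the name suggests it prints the 7 times table):
def table_par_7():
    # Assumed body: print the multiplication table of 7
    for i in range(1, 11):
        print('7 x {} = {}'.format(i, 7 * i))

if __name__ == "__main__":
    table_par_7()  # runs only when the file is executed directly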
You can call it like this:
# Add this line at the end of your code
table_par_7()
If you mean: (1) when you run the .py file, how do you call it? Then the answer is: you will have to write the name of the function and press ENTER to execute it.
(2) When you open the .py file from any folder, is it possible to print the final result? Then the answer is a big NO. This is because def is just a keyword that creates a function; a function does not have any property by which it executes on its own. It must be called explicitly.

Python variable value from separate file?

I have a Python script that runs 24 hours a day.
A module from this script uses variable values that I wish to change from time to time, without having to stop the script, edit the module file, and then launch the script again (I need to avoid interruptions as much as I can).
I thought about storing the variables in a separate file, and the module would, when needed, fetch the new values from the file and use them.
Pickle seemed like a solution, but it is not human-readable and therefore not easily changeable. Maybe a JSON file, or another .py file I import over and over again?
Another advantage of doing so, for me, is that in case of interruption (e.g. a server restart), I can resume the script with the latest variable values if I load them from a separate file.
Is there a recommended way of doing such things ?
Something along the lines of:
# variables.py:
variable1 = 10
variable2 = 25

# main file:
import time

while True:
    import variables  # hoping this re-reads the file each time
    print('Sum:', variables.variable1 + variables.variable2)
    time.sleep(60)
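Note that a plain import inside the loop won't pick up edits to variables.py: Python caches modules after the first import. You would need importlib.reload (a minimal sketch, reusing the variables module from the question):
import importlib
import time
import variables

while True:
    importlib.reload(variables)  # re-executes variables.py, picking up any edits
    print('Sum:', variables.variable1 + variables.variable2)
    time.sleep(60)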
An easy way to maintain a text file with variables would be the YAML format. This answer explains how to use it, basically:
import yaml

stream = open("vars.yaml", "r")
docs = yaml.load_all(stream, Loader=yaml.SafeLoader)  # SafeLoader avoids constructing arbitrary objects
If you have more than a few variables, it may be good to check the file's modification time, and only re-load the variables when the file has changed.
import os
last_updated = os.path.getmtime('vars.yaml')
Finally, since you want to avoid interruption of the script, it may be good to have the script catch any errors in the YAML file and warn the user, instead of just throwing an exception and dying. But also remember that "errors should never pass silently". The best approach here depends on your use case.
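Putting the pieces together, a reload loop might look like this (a sketch; vars.yaml and the variable names are assumptions carried over from above):
import os
import time
import yaml

VARS_FILE = "vars.yaml"
last_updated = 0.0
variables = {}

while True:
    try:
        mtime = os.path.getmtime(VARS_FILE)
        if mtime > last_updated:  # only re-read when the file has changed
            with open(VARS_FILE, "r") as stream:
                variables = yaml.safe_load(stream) or {}
            last_updated = mtime
    except (OSError, yaml.YAMLError) as exc:
        print('Could not reload variables:', exc)  # warn instead of dying
    print('Sum:', variables.get('variable1', 0) + variables.get('variable2', 0))
    time.sleep(60)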

How to detect a .lock file in a geodatabase

I am very new to Python, but I have written a simple Python script tool for automating the process of updating mosaic datasets at my job. The tool runs great, but sometimes I get the dreaded 9999999 error, or "the geodatabase already exists", when I try to overwrite the data.
The file structure is c:\users\my.name\projects\ImageryMosaic\Alachua_2014\Alachua_2014_mosaic.gdb. After some research, I determined that the lock was being placed on the FGDB whenever I opened the newly created mosaic dataset inside the FGDB to check for errors after running the tool. I would like to be able to overwrite the data instead of having to delete it, so I am setting arcpy.env.overwriteOutput in my script. This works fine unless I open the dataset after running the tool. Since other people will be using this tool, I don't want them scratching their heads for hours like me, so it would be nice if the script tool could look for the presence of a .lock file in the geodatabase. That way I could at least provide a statement in the script as to why the tool failed, in lieu of the unhelpful 9999999 error. I know about arcpy.TestSchemaLock, but I don't think that will work in this case, since I am not trying to place a lock, and I want to overwrite the FGDB, not edit it.
Late, but the function below will check for lock files in a given (gdb) path.
import glob

def lockFileExist(path=None):
    # Return True if any .lock file sits directly inside the given .gdb folder.
    if path is None:
        raise ValueError("Invalid path!")
    full_file_paths = glob.glob(path + "\\*.lock")
    return len(full_file_paths) > 0

if lockFileExist(r"D:\sample.gdb"):
    print("Lock file found in gdb. Aborting...")
else:
    print("No lock files found! Ready for processing...")

"Unknown object" error when trying to capture artwork from a pict file and embed it into a track

I'm trying to capture artwork from a pict file and embed it into a track on iTunes using python appscript.
I did something like this:
imFile = open('/Users/kartikaiyer/temp.pict','r')
data = imFile.read()
it = app('iTunes')
sel = it.current_track.get()
sel.artworks[0].data_.set(data[513:])
I get an error OSERROR: -1731
MESSAGE: Unknown object
Similar applescript code looks like this:
tell application "iTunes"
set the_artwork to read (POSIX file "/Users/kartikaiyer/temp.pict") from 513 as picture
set data of artwork 1 of current track to the_artwork
end tell
I tried using ASTranslate but it never instantiates the_artwork and then throws an error when there is a reference to the_artwork.
This is an older question, but since I was having trouble doing this same thing now, I thought I'd post my solution in case someone else might benefit.
import appscript

selected = appscript.app('iTunes').selection.get()
for t in selected:
    myArt = open('/path/to/image.jpg', 'rb')  # binary mode for image data
    data = myArt.read()
    t.artworks[1].data_.set(data)  # no header to strip for JPEG data, but artworks are one-indexed, as noted in the other answer
    myArt.close()
Hope this helps.
At a quick guess, Appscript references, like AppleScript references, use 1-indexing, not zero-indexing like Python lists. So you probably need to write:
it.current_track.artworks[1].data_.set(...)
(Incidentally, the extra get command in your original script is unnecessary, though harmless in this case.)
As for ASTranslate, you need to enable the 'Send events to app' checkbox if you want it to actually send commands to applications and scripting additions and receive their results. As a rule, it's best to leave this option disabled so that you don't have any unfortunate accidents when translating potentially destructive commands such as set or delete; only enable it when you really need it, and be careful what code you run when you do.
The read command is part of Scripting Additions, which ASTranslate doesn't translate out of the box. Use ASDictionary to create a glue for Scripting Additions by clicking "Choose Installed Scripting Additions" under the Dictionary menu and then selecting "Scripting Additions" from the list.
