The following code is written in the morph.py file:
with open("morph.py", "r+") as f:
old = f.read() # read everything in the file
f.seek(0,2) # rewind
f.write("1") # write the new line before
a="BAD"
a1="Worked"
print a
The idea is that the morph.py file will be rewritten, and the text "Worked" will be printed.
This is not the case; I think it has to do with how the Python interpreter loads files. The only explanation that makes sense is that the whole file is loaded first and then run.
Can somebody shed some light? Is it even possible to have self-morphing code in Python?
Partially related question:
Self decompressing and executing code in python
Not in the way you're trying to do it.
Before Python starts executing any piece of code, it compiles it into a bytecode representation, which is much faster to execute than reading line-by-line. This means that after Python has compiled the file, no changes to the file will be reflected in currently-running code.
However, you can manually load code from strings by using compile, exec, or eval. You can use this to create a program that is passed its own source code, alters and returns it, and executes the modified source code.
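For illustration, a minimal sketch of that idea (the source string and the replacement rule here are invented for the example): the source is altered while it is still a plain string, then compiled and executed.
source = 'a = "BAD"\na1 = "Worked"\nprint(a)'
# Alter the source before it is ever compiled:
modified = source.replace("print(a)", "print(a1)")
# compile() turns the string into a code object; exec() runs it:
code = compile(modified, "<morphed>", "exec")
exec(code)  # prints: Worked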
When I run the file the first time it outputs:
BAD
When I run it a second time it outputs:
Worked
Any subsequent times it will give an error:
... name 'a11' is not defined
When you run python on a file, it loads the file, compiles it to bytecode, and then executes that bytecode. By the time your code modifies the file, the compilation has already happened, so the change has no effect on the running program.
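A rough illustration of that pipeline, using a stand-in source string and the standard dis module to inspect the compiled result:
import dis
src = 'x = 1 + 2\nprint(x)'                # 1. the "loaded" source
code = compile(src, "<example>", "exec")   # 2. compiled to a bytecode object
dis.dis(code)  # inspect the bytecode; changing src now has no effect on code
exec(code)     # 3. execute the already-compiled bytecode (prints 3)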
Related
When I run the code below, I get different results when I run it as a .py file compared to when I run it as a .exe built with PyInstaller.
import win32com.client
import os

ConfigMacroName = "test.xls"
xl = win32com.client.Dispatch("Excel.Application")
Configmacrowb = xl.Workbooks.Open(os.getcwd() + "\\Completed\\" + ConfigMacroName)
SlotPlansheet = Configmacrowb.Sheets("SlotPlan")
Header = SlotPlansheet.Rows(1)
SOcol = Header.Find('SO', LookAt=1).Column  # LookAt=1 is equivalent to LookAt:=xlWhole in Excel VBA
SOlinecol = Header.Find('SO Line').Column
print("SO is " + str(SOcol) + "\nSo line is " + str(SOlinecol))
SlotPlansheet = None
Configmacrowb.Close(False)
Configmacrowb = None
xl.Quit()
xl = None
[Screenshots omitted: the Excel input, the output from the .py run, and the output from the .exe run.]
The output from the .py file is the correct output I need. If I run the .exe, there will be duplicate values, since both variables will refer to column B. As a temporary workaround I can just loop through the header and check each cell.
But I'm using the Find() function a lot, so I don't know whether my other programs are also affected by this inconsistency.
Try changing the object creation line to:
xl = win32com.client.gencache.EnsureDispatch('Excel.Application')
In my experience, the win32com.client.Dispatch() function can sometimes cause issues in that it does not guarantee the same result every time it runs. The caller doesn't know if they have an early- or late-bound object. If you have no cached makepy files then you will get a late-bound IDispatch automation interface, but if win32com finds an early-bound interface then it will use it (even if it wasn't your programme that created it). Hence code that ran fine previously may stop working.
Unless you have a good reason to be indifferent, I think it is better to be explicit and choose win32com.client.gencache.EnsureDispatch() or win32com.client.dynamic.Dispatch() for early- or late-binding respectively. I generally choose the EnsureDispatch() route, as it is quicker, enforces case-sensitivity, and gives access to any constants in the type library (eg win32com.client.constants.xlWhole) rather than rely on 'magic' integers.
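For instance, a sketch of the question's lookup using explicit early binding (this assumes Excel and pywin32 are installed; the workbook path and sheet name are taken from the question):
import os
import win32com.client
from win32com.client import constants

# Explicit early binding: generates (or reuses) the makepy wrapper.
xl = win32com.client.gencache.EnsureDispatch("Excel.Application")
wb = xl.Workbooks.Open(os.getcwd() + "\\Completed\\test.xls")
header = wb.Sheets("SlotPlan").Rows(1)

# With early binding, the type library's named constants are available,
# so the 'magic' integer 1 can be replaced by constants.xlWhole:
so_col = header.Find("SO", LookAt=constants.xlWhole).Column
print("SO is " + str(so_col))

wb.Close(False)
xl.Quit()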
Also, in the past, I have experienced odd behaviour around indexing (eg this SO question), and this was cured by deleting any gencache wrappers (see below).
Add this line to your debug code:
print('Cache directory:', win32com.client.gencache.GetGeneratePath())
This will tell you where the gencache early-binding Python files are being generated, and where win32com.client.Dispatch() will look for cached wrapper files when attempting early binding. If you want to clear the cache of generated files, just delete the contents of this directory. It will be interesting to see whether the OP's two routes use the same directory.
I had this script working for me, before I decided to rewrite everything and make it portable.
Without delving too much into the details, there's a central Bash script which calls 5 other Bash scripts in their own respective folders. I have no intention of porting to Windows anytime soon; for now this is just for Linux.
The execution path of the central Bash script is:
dos.1/1-init.sh dos.1/
dos.2/1-trace-to-file.sh dos.2/ dos.1/
dos.3/1-recognize-categories.sh dos.3/
dos.4/1-ping-in-groups.sh dos.4/ dos.3/
dos.5/init.sh dos.5/ dos.4/
I run with ./init.sh
Before the script was 'portable' I was using explicit file paths inside each respective script. All was well and good. The program itself is a combination of Bash and Python, and writes to files in one directory, so that they can be manipulated in various ways, before being read back into different parts of the program.
I understand that the fastest way to do this would be to write a monolithic Python script, using subprocess calls for the Bash side of things... However, I am doing it this way to ease maintenance, and (before I started making it 'portable') it was lightning fast.
My issue now is this: each time I have to read text into Python (either from SQL or from file) there's always this added garbage. Up until this point, I have been using sed, awk and Python's .rstrip() function to manage this... Which is all well and good, but this one damn function will not play nice... And I feel there must be a better way.
In bash I call it with:
prog_dir=$1
data_dir=$2
$prog_dir/2fast-ping.py $data_dir/group0.txt > $prog_dir/group0_averages.txt
$prog_dir/2fast-ping.py $data_dir/group1.txt > $prog_dir/group1_averages.txt
...
Now I know that I could write to file from within Python, but in this instance I have other reasons not to.
The issue is that when the 2fast-ping.py script is run, it reads the text file in with commas and a newline char. I have vigorously checked and can confirm that the group#.txt files 100% do not contain commas. Here's the Python:
import sys
import subprocess
import select
from concurrent.futures import ThreadPoolExecutor

filename = sys.argv[1]
with open(filename, "r") as f:
    ips = [line.rstrip('\n') for line in f]
print(ips)
The script goes on to do some work on the IPs afterwards, but this is the painful part. If I call the script directly from the CLI: ./2fast-ping.py ../dos.3/group0.txt, the text is processed PROPERLY and the subsequent instructions actually function. But when called from the first init script, the program basically sh*ts itself because each line is read in with commas. It works until the point where it starts to use the processed info, then:
<actual IP would be here>
ping: ('##.###.###.###',): Name or service not known
Of course, the issue is the ('',). But Python is adding that in, and I don't know how to stop it :(
Any ideas?
The Python code was okay; I was just passing an additional / with the argument :(
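For anyone hitting the same thing, a defensive variant of the snippet above: normalizing the incoming path with the standard os.path.normpath means a stray extra "/" from the calling script can't produce an odd-looking filename.
import os
import sys

filename = os.path.normpath(sys.argv[1])  # e.g. '../dos.3//group0.txt' -> '../dos.3/group0.txt'
with open(filename) as f:
    ips = [line.rstrip('\n') for line in f]
print(ips)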
I am trying to write a program that initializes a dictionary with web scraping, and then serializes the dictionary using pickle so that the program doesn't need to scrape after the first time it is run. My issue is that after calling pickle.dump(someDict, dic_file), no data is written to the file and the program actually terminates. The code I am using is below:
import pickle
from os import getcwd
from pathlib import Path

if not Path(getcwd() + "\\dic.pickle").is_file():
    someDict = funcToScrapeDict()
    with open("dic.pickle", "wb") as dic_file:
        pickle.dump(someDict, dic_file)
else:
    with open("dic.pickle", "rb") as dic_file:
        someDict = pickle.load(dic_file)

<<lots more code here processing someDict>>
So if the pickle file already exists it will just jump to unpickling.
I know that my scraping function works inside the loop because I test printed it immediately before calling pickle.dump(someDict, dic_file), so termination happens immediately after that call with no bytes written to file (though the file is created) and no error messages.
I am on Windows 10, using Python 3.7.1.
I also increased the recursion limit because of a previous runtime error, and tried using absolute paths, with no luck.
[EDIT] Also worth noting that I tried this exact implementation outside of the scope of my problem with just a manually created dictionary of equal size (280) and it worked fine.
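For reference, a sketch of that sanity check (a hand-built dictionary of the same size round-trips fine, which suggests the difference lies in the scraped dictionary's contents):
import pickle

test_dict = {"key%d" % i: "value%d" % i for i in range(280)}

with open("test.pickle", "wb") as f:
    pickle.dump(test_dict, f)

with open("test.pickle", "rb") as f:
    assert pickle.load(f) == test_dict  # round-trip succeeds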
I made this small program:
I want to know how to call it automatically, so that when I open the .py file the result shows up immediately.
Please understand that I am a beginner in Python.
The right way to do this is to add the following statement at the end of the file:
if __name__ == "__main__":
    table_par_7()
Explanation
This ensures that if you run the file directly (which makes it the main file), the function will run, but if another Python file imports this file (so this file isn't the main one), it won't run.
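Putting it together, a minimal sketch; the body of table_par_7 wasn't shown in the question, so the 7-times table below is only a guess for illustration:
# Hypothetical stand-in for the original function, whose body wasn't shown.
def table_par_7():
    for i in range(1, 11):
        print("7 x %d = %d" % (i, 7 * i))

# Runs when the file is executed directly, but not when it is imported.
if __name__ == "__main__":
    table_par_7()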
You can call it like this:
# Add this line at the end of your code
table_par_7()
If you mean: (1) when you run the .py file, how do you call the function? Then the answer is that you have to type the name of the function and press ENTER to execute it.
(2) When you open the .py file from any folder, is it possible to print the final result automatically? Then the answer is a big NO, not without an explicit call. def is just a keyword that creates the function; a function has no property that makes it execute on its own, so it must be called explicitly.
I am working on an extendscript code (Adobe After Effects - but it is basically just javascript) which needs to iterate over tens of thousands of file names on a server. This is extremely slow in extendscript but I can accomplish what I need to in just a few seconds with python, which is my preferred language anyway. So I would like to run a python file and return an array back into extendscript. I'm able to run my python file and pass an argument (the root folder) by creating and executing a batch file, but how would pass the result (an array) back into extendscript? I suppose I could write out a .csv and read this back in but that seems a bit "hacky".
In After Effects you can use the "system" object's callSystem() method. This gives you access to the system's shell so you can run any script from the code. So, you can write your python script that echos or prints the array and that is essentially what is returned by the system.callSystem() method. It's a synchronous call, so it has to complete before the next line in ExtendScript executes.
The actual code might be something like:
var stdOut = system.callSystem("python my-python-script.py")
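The Python side then only has to print the data. A sketch of what my-python-script.py could look like (the one-path-per-line output format is just one reasonable convention, and the root folder argument is assumed to be appended to the command above): everything printed goes to stdout, comes back as the string returned by system.callSystem(), and can be split on "\n" in ExtendScript to rebuild the array.
import os
import sys

root = sys.argv[1]  # root folder passed in on the command line
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        print(os.path.join(dirpath, name))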