I am very new to Python, but I have written a simple Python script tool to automate the process of updating mosaic datasets at my job. The tool runs great, but sometimes I get the dreaded 9999999 error, or "the geodatabase already exists", when I try to overwrite the data.
The file structure is c:\users\my.name\projects\ImageryMosaic\Alachua_2014\Alachua_2014_mosaic.gdb. After some research, I determined that the lock was being placed on the FGDB whenever I opened the newly created mosaic dataset inside the FGDB to check for errors after running the tool. I would like to be able to overwrite the data instead of having to delete it, so I am using arcpy.env.overwriteOutput in my script. This works fine unless I open the dataset after running the tool. Since other people will be using this tool, I don't want them scratching their heads for hours like I did, so it would be nice if the script tool could look for the presence of a .lock file in the geodatabase. That way I could at least provide a message in the script explaining why the tool failed, in lieu of the unhelpful 9999999 error. I know about arcpy.TestSchemaLock, but I don't think that will work in this case, since I am not trying to place a lock and I want to overwrite the FGDB, not edit it.
Late, but the function below will check for lock files in a given (gdb) path.
import glob
import os


def lockFileExist(path=None):
    """Return True if the given file geodatabase contains any .lock files."""
    if path is None:
        raise Exception("Invalid path!")
    # Any *.lock file inside the .gdb folder means another process holds a lock.
    return len(glob.glob(os.path.join(path, "*.lock"))) > 0


if lockFileExist(r"D:\sample.gdb"):
    print "Lock file found in gdb. Aborting..."
else:
    print "No lock files found. Ready for processing..."
My supervisor asked me the other day if a log file in a Python script is created at runtime or not. Is that another way of asking: is a new log file created every time the script is run? Because to me everything happens "at runtime" and it can be no other way. How else could it be?
The phrase is usually used when you cannot figure out the content of some variable/data by just analyzing the code and other associated files (such as configs). The easiest example I can think of is
x = 'hi'
print(x)
versus
x = input()
print(x)
In the second example "x is created at runtime". (Or more precisely, the data which the name x points to is dynamic and can be different in each execution.)
Yeah, I think you got it right! However, if you, for example, only append to an already existing log file, it would not be considered "created at runtime".
Here is the lifecycle of a program (source: Wikipedia).
You can create the log file before run time (just create a new file in your project), or let the program create one during run time if the file does not already exist. The extreme case is to keep all the log data in memory during execution and only write the file at the very end, but I think that's a really bad idea...
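To make this concrete with Python's standard logging module (a minimal sketch; the file names are placeholders): FileHandler opens, and therefore creates, the file as soon as the handler is constructed, unless you pass delay=True, in which case the file only appears once the first record is written.

import logging

# Created as soon as the handler is constructed (still at run time, but during setup).
eager = logging.FileHandler("run.log")

# Created only when the first record is actually written.
lazy = logging.FileHandler("debug.log", delay=True)

log = logging.getLogger("example")
log.addHandler(lazy)
log.warning("this message triggers the creation of debug.log")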
When I run the code below, I get different results when I run it as a .py compared to when I run it as a .exe built with PyInstaller.
import win32com.client
import os
ConfigMacroName = "test.xls"
xl=win32com.client.Dispatch("Excel.Application")
Configmacrowb = xl.Workbooks.Open(os.getcwd() + "\\Completed\\" + ConfigMacroName)
SlotPlansheet = Configmacrowb.Sheets("SlotPlan")
Header = SlotPlansheet.Rows(1)
SOcol = Header.Find('SO', LookAt=1).Column #I used LookAt=1 which is equivalent to LookAt:=xlWhole in excel VBA
SOlinecol = Header.Find('SO Line').Column
print("SO is " + str(SOcol) + "\nSo line is " + str(SOlinecol))
SlotPlansheet = None
Configmacrowb.Close(False)
Configmacrowb = None
xl.Quit()
xl = None
The Excel input:
The output from the .py:
The output from the .exe:
The output from the .py file is the correct output I need. If I run the .exe, the two variables end up as duplicates, since both refer to column B. As a temporary solution I can just loop through the header and check each cell.
But I'm using the Find() function a lot, so I don't know whether my other programs are also affected by this inconsistency.
Try changing the object creation line to:
xl = win32com.client.gencache.EnsureDispatch('Excel.Application')
In my experience, the win32com.client.Dispatch() function can sometimes cause issues in that it does not guarantee the same result every time it runs. The caller doesn't know if they have an early- or late-bound object. If you have no cached makepy files then you will get a late-bound IDispatch automation interface, but if win32com finds an early-bound interface then it will use it (even if it wasn't your programme that created it). Hence code that ran fine previously may stop working.
Unless you have a good reason to be indifferent, I think it is better to be explicit and choose win32com.client.gencache.EnsureDispatch() or win32com.client.dynamic.Dispatch() for early- or late-binding respectively. I generally choose the EnsureDispatch() route, as it is quicker, enforces case sensitivity, and gives access to the constants in the type library (eg win32com.client.constants.xlWhole) rather than relying on 'magic' integers.
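Applied to the code in the question, the change might look roughly like this (a sketch only, keeping the question's workbook name and sheet layout):

import os
import win32com.client
from win32com.client import constants

xl = win32com.client.gencache.EnsureDispatch("Excel.Application")
Configmacrowb = xl.Workbooks.Open(os.path.join(os.getcwd(), "Completed", "test.xls"))
Header = Configmacrowb.Sheets("SlotPlan").Rows(1)

# xlWhole now comes from the generated type library instead of the magic number 1.
SOcol = Header.Find('SO', LookAt=constants.xlWhole).Column
SOlinecol = Header.Find('SO Line', LookAt=constants.xlWhole).Column
print("SO is " + str(SOcol) + "\nSo line is " + str(SOlinecol))

Configmacrowb.Close(False)
xl.Quit()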
Also, in the past, I have experienced odd behaviour around indexing (eg this SO question), and this was cured by deleting any gencache wrappers (see below).
Add this line to your debug code:
print('Cache directory:', win32com.client.gencache.GetGeneratePath())
This will tell you where the gencache early-binding Python files are being generated, and where win32com.client.Dispatch() will look for any cached wrapper files to attempt early binding. If you want to clear the cache of generated files, just delete the contents of this directory. It will be interesting to see if the OP's two routes have the same directory.
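If you prefer to clear the cache from code rather than deleting the folder by hand, something along these lines should work (a sketch; run it in a separate session before the script that uses Excel, since win32com recreates the directory on demand):

import shutil
import win32com.client.gencache

cache_dir = win32com.client.gencache.GetGeneratePath()
print('Deleting cached wrappers in ' + cache_dir)

# Remove the generated makepy wrappers; EnsureDispatch() rebuilds them as needed.
shutil.rmtree(cache_dir, ignore_errors=True)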
I'm trying to figure the root cause of an issue in a single-threaded Python program that essentially goes like this (heavily simplified):
# Before running
os.remove(path)
# While running
if os.path.isfile(path):
    with open(path) as fd:
        ...
I'm essentially seeing erratic behavior where isfile (which uses stat, itself calling GetFileAttributesExA under the hood in Python 2.7, see here) can return True when the file doesn't exist, causing the next open call to fail.
Since path is on an SMB3 network share, I suspect caching behavior of some kind. Is it possible that GetFileAttributesExA returns stale information?
Reducing SMB client caching from the default (10s) to 0s seems to make the issue disappear:
Set-SmbClientConfiguration -DirectoryCacheLifetime 0 -FileInfoCacheLifetime 0
(Note: The correct fix here is to try opening the file and catch the exception, of course, but I'm puzzled by the issue and would like to understand the root cause.)
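For reference, the try/except version mentioned in the note would look something like this (a sketch; the UNC path is a placeholder for the real path variable):

import errno

path = r"\\server\share\some_file.txt"  # placeholder

try:
    f = open(path)
except IOError as e:
    # ENOENT is the authoritative "file does not exist" answer, regardless of
    # what the SMB metadata cache told isfile().
    if e.errno != errno.ENOENT:
        raise
else:
    with f:
        data = f.read()  # ...process the file...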
I have a program where I am creating a multitude of LaTeX files one by one. It is important when creating these LaTeX files to check that they can actually compile to a .pdf without error.
To do so it uses subprocess.call(['pdflatex', '-halt-on-error', tex_file_name]).
This returns 0 on a successful compile from .tex to .pdf, and 1 otherwise.
The problem I am having is that the only circumstance under which this line of code does not do what I expect is when py.test runs it. If I run the code from an interpreter, or run the script from the command line, it works; under py.test it doesn't.
When py.test errors, it leaves behind a log file created by pdflatex, which has this error in it:
{c:/texlive/2012/texmf-var/fonts/map/pdftex/updmap/pdftex.map
!pdfTeX error: pdflatex.exe (file c:/texlive/2012/texmf-var/fonts/map/pdftex/up
dmap/pdftex.map): fflush() failed (Bad file descriptor)
==> Fatal error occurred, no output PDF file produced!
I am hazarding a guess here that py.test is doing something with the .tex file prior to pdflatex being able to compile it. But I don't know what.
Temporary files and directories are talked about in the py.test docs. I don't know if they are relevant to my problem, but I have only played around with them briefly.
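For reference, wiring the tmpdir fixture from those docs into a test would look roughly like this (a sketch only; the file name and contents are placeholders, and I don't know whether it has any effect on the pdflatex problem):

import subprocess

def test_compile_in_tmpdir(tmpdir):
    # tmpdir is a unique, per-test temporary directory provided by py.test.
    tex_file = tmpdir.join("test.tex")
    tex_file.write("\\documentclass{article}\\begin{document}hi\\end{document}")
    retcode = subprocess.call(['pdflatex', '-halt-on-error', str(tex_file)],
                              cwd=str(tmpdir))
    assert retcode == 0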
In case you want to look at the code, a test case looks like this:
from a import Foo
from b import Tree
from latex_tester import latex_tester
def test_Foo():
    q1 = Foo()
    latex_tester(Tree(1, q1))
and latex_tester looks like this:
import uuid
import os
import subprocess
def latex_tester(tree):
    """Test whether the latex for a tree can be compiled to a PDF."""
    full_path = r'some_path'
    uid = str(uuid.uuid1())
    file_name = os.path.join(full_path, 'test' + uid + '.tex')
    with open(file_name, 'w') as f:
        _write_tree(f, tree)
    retcode = subprocess.call(['pdflatex', '-halt-on-error', file_name])
    if retcode != 0:
        raise RuntimeError("This latex could not be compiled.")
Oddly enough, using 'xelatex' instead of 'pdflatex' makes things work as normal.
For any future readers - I have TeXworks installed, which presumably installed both of these tools. I don't know whether xelatex influences the final PDF produced; it seems to be producing a good .pdf.
Anyway, I posted this answer to my own question since nothing else seems to be coming and it certainly solved my problem.
I had exactly the same problem.
I'm using C# as the programming language for creating the .tex documents, and it crashed during pdflatex when I included an image in the PDF.
And it worked if I started it manually...
Error:
pdfTeX warning: pdflatex
!pdfTeX error: pdflatex (file <linktoFile>/file.pdf): fflush() failed (Bad file descriptor)
Unfortunately xelatex didn't work either, so I searched and eventually stumbled upon the cause.
Basically the error happened for me at this line:
tex = tex.Replace("\uFFFD\uFFFDMEMNAME\uFFFD\uFFFD", user.Surname);
Here user.Surname was null.
When the tex string is saved to the file, it mysteriously carries that null along, and pdflatex crashes somewhere completely different. If, on the other hand, you start pdflatex on the same file again manually, the null is gone and it works.
The whole mess went away when I entered a Surname, and it now works from the program.
Maybe this will help someone with the same problem.
Edit: OK, I could swear that the way I'd tested it showed that the getcwd was also causing the exception, but now it appears it's just the file creation. When I move the try-except block there, it actually does catch it like you'd think it would. So chalk that up to user error.
Original Question:
I have a script I'm working on that I want to be able to drop a file onto, so it runs with that file as an argument. I checked in this question, and I already have the mentioned registry keys (apparently the Python 2.6 installer takes care of that). However, it's throwing an exception that I can't catch. Running it from the console works correctly, but when I drop a file on it, it throws an exception and then closes the console window. I tried to have it redirect standard error to a file, but it threw the exception before the redirection occurred in the script. With a little testing and some quick eyesight, I saw that it was throwing an IOError when I tried to create the file to write the error to.
import sys
import os

#os.chdir("C:/Python26/Projects/arguments")
try:
    print sys.argv
    raw_input()
    os.getcwd()
except Exception, e:
    print str(sys.argv) + '\n'  # sys.argv is a list, so convert it before concatenating
    print e
    f = open("./myfile.txt", "w")
If I run this from the console with any or no arguments, it behaves as one would expect. If I run it by dropping a file on it, for instance test.txt, it runs, prints the arguments correctly, then when os.getcwd() is called, it throws the exception, and does not perform any of the stuff from the except: block, making it difficult for me to find any way to actually get the exception text to stay on screen. If I uncomment the os.chdir(), the script doesn't fail. If I move that line to within the except block, it's never executed.
I'm guessing that running it by dropping a file on it (which, according to the other linked question, uses WSH) is somehow messing up its permissions or the cwd, but I don't know how to work around it.
Seeing as this is probably not Python related, but a Windows problem (I for one could not reproduce the error given your code), I'd suggest attaching a debugger to the Python interpreter when it is started. Since you start the interpreter implicitly by a drag&drop action, you need to configure Windows to auto-attach a debugger each time Python starts. If I remember correctly, this article has the needed information to do that (you can substitute another debugger if you are not using Visual Studio).
Apart from that, I would take a snapshot with ProcMon while dragging a file onto your script, to get an idea of what is going on.
As pointed out in my edit above, the errors were caused by the working directory changing to C:\windows\system32, where the script isn't allowed to create files. I don't know how to get it to not change the working directory when started that way, but I was able to work around it like this:
if len(sys.argv) == 1:
    files = [filename for filename in os.listdir(os.getcwd())
             if filename.endswith(".txt")]
else:
    files = [filename for filename in sys.argv[1:]]
Fixing the working directory can be handled this way, I guess:
exepath = sys.argv[0]
os.chdir(exepath[:exepath.rfind('\\')])
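A slightly more robust way to express the same thing is to let os.path do the splitting, which also copes with forward slashes in sys.argv[0]:

import os
import sys

# Same effect as the rfind('\\') slice above, but independent of the path separator used.
os.chdir(os.path.dirname(os.path.abspath(sys.argv[0])))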