Python - Re-run subprocess call when it fails

I have a script that calls a subprocess (speedtest-cli).
The script seems to randomly fail with the following error message:-
ERROR: timed out
ERROR: 'speedtest-cli --share' failed (exit code 1).
Retrieving speedtest.net configuration... Cannot retrieve speedtest configuration
Traceback (most recent call last):
File "/home/steve/speedtest_dev.py", line 80, in <module>
data[1] = data[1].strip("'") ##Finish date and time
IndexError: list index out of range
As far as I can tell it looks like there are two errors in here:-
a) Speedtest-cli fails by timing out
ERROR: timed out
ERROR: 'speedtest-cli --share' failed (exit code 1).
and
b) The data strip then fails as one would expect because there is no data.
Traceback (most recent call last):
File "/home/steve/speedtest_dev.py", line 80, in <module>
data[1] = data[1].strip("'") ##Finish date and time
IndexError: list index out of range
I would like to catch the 1st error if possible and re-run the subprocess after an interval (60 seconds?).
I have tried creating a function:-
def run_speedtest():
    outfile = open(dataFile, "w+")
    subprocess.call(["/home/steve/speedtest-cli-extras/speedtest-csv", "--share"], stdout=outfile)
    outfile.close()
and then using a try statement like:-
try:
    run_speedtest()
except:
    print("1st attempt failed") #for testing only
    time.sleep(60)
    run_speedtest()
For some reason I only manage to run the first part of this, and when it errors out the except statement doesn't seem to run. The script then does this:-
#Separate Values from csv string
with open(dataFile, "r+") as f:
    data = f.read()
    data = data.strip()
    data = data.replace("\t", "|")
    f.seek(0)
    f.write(data)
    f.truncate()  # needs parentheses to actually truncate
    f.close()
#Open file and process
with open(dataFile, "r") as g:
    data = g.read()
    data = data.split("|")
and then writes to a database and sends an email when one of the parameters is less than a defined value.
It all works fine unless the initial run_speedtest() fails.
Any help would be appreciated.

I had the same issue after upgrading to a recent Python version, and the solution can be:
export PYTHONHTTPSVERIFY=0
python your_script
It works for me, though.
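A further note on the retry logic in the question: subprocess.call does not raise an exception when the command fails - it simply returns the child's exit code - so the except clause never fires on a speedtest timeout. A minimal retry sketch built on the return code (dataFile and the speedtest-csv path are taken from the question):
import subprocess
import time

dataFile = "speedtest.csv"  # placeholder; defined elsewhere in the question's script

def run_speedtest():
    with open(dataFile, "w+") as outfile:
        # call() returns the exit code instead of raising on failure
        return subprocess.call(
            ["/home/steve/speedtest-cli-extras/speedtest-csv", "--share"],
            stdout=outfile,
        )

# Retry up to three times, waiting 60 seconds between attempts.
for attempt in range(3):
    if run_speedtest() == 0:
        break
    print("Attempt %d failed, retrying in 60 seconds..." % (attempt + 1))
    time.sleep(60)
Alternatively, subprocess.run(..., check=True) raises CalledProcessError on a non-zero exit, which a try/except like the one in the question could then catch.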

Related

How do you import json files to Blender?

I am following Chris P's 'Visualize Real-world JSON Data in Blender (3D Chart Animation Nodes Tutorial)' on YouTube, but I seem to have got stuck at the first hurdle of importing the data. I have followed his instructions completely and am unsure why the script keeps failing. I have attached his script, my script, my file location, my error message and a snapshot of his video. I am on Windows OS, he is on Linux; I'm not sure if that makes a difference. Here is the link to the video: https://www.youtube.com/watch?v=0aRjInmibSw&t=1055s (the timestamp for his code is 6 min).
FILE NAME: Export.json
MY FILE LOCATION: C:\Users\Jordan\Downloads
MY CODE:
import json
with open(r'C:/Users/Jordan/Downloads/Export.json','r') as f:
    j=json.load(f)
print (j)
MY ERROR MESSAGE:
Traceback (most recent call last):
File "D:\Mixed Graphs\Blender json\3D Charts.blend\My Script", line 3, in <module>
OSError: [Errno 22] Invalid argument: '/C:/Users/Jordan/Downloads/Export'
Error: Python script failed, check the message in the system console
HIS CODE:
import json
with open('/Home/chris/downloads/tutorial1.json') as f:
    j = json.load(f)
print (j)
Your problem seems to be that you're using "/" (slash) instead of "\" (backslash) on Windows.
In addition, you need to write "\\", as a single backslash signals an escape of the next character.
The fix therefore should be:
import json
with open('C:\\Users\\Jordan\\Downloads\\Export.json', 'r') as f:
    j = json.load(f)
print (j)
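As an aside (not part of the original answer): a raw string needs only single backslashes, because the r prefix disables escape processing, so the following is equivalent:
import json

# r'' keeps the backslashes literal, so they are not doubled
with open(r'C:\Users\Jordan\Downloads\Export.json', 'r') as f:
    j = json.load(f)
print (j)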

Python Read Excel file with Error - "We Found a problem with some content..."

Here is my problem. We have an Excel-based report where business users enter comments into two separate fields, as well as selecting a code from a drop-down. We then have a manual process that collects those files and pushes the comments and codes to a Snowflake table to be used in various reports.
I am trying to improve the process with a Python script that will collect the files, copy them to a staging_folder location, then read in the data from the sheet, append it all together, do some cleanup and push to Snowflake. The plan is that this would be completely automated - but this is where we run into issues.
Initial step works perfectly. I have a loop that grabs the files based on the previous business day date, copies them to a staging folder. There are typically 32 files each day.
Next step reads those files to append to a dataframe. Here is the function that is loading the Excel files in my Python script.
from itertools import islice
import glob

import pandas as pd
from openpyxl import load_workbook

def load_files():
    file_list = glob.glob(file_path + r'\*')
    df = pd.DataFrame()
    print("Importing data to Pandas DF...")
    for file in file_list:
        try:
            wb = load_workbook(file)
            ws = wb["Daily Outs"]
            data = ws.values
            cols = next(data)[1:]
            data = list(data)
            idx = [r[0] for r in data]
            data = (islice(r, 1, None) for r in data)
            data_1 = pd.DataFrame(data, index=idx, columns=cols)
            df = df.append(data_1, sort=False)
            print(file + " Imported to Df...")
        except Exception as e:
            print("Error: " + e + " When attempting to open file: " + file)
            # error_notify(e)
    print(df.head(10))
    return df
The problem is when we have files that have some sort of corruption. The files when opened manually will show an error like the one below.
I thought with my try, except code above this would catch an error like this and alert me with the error_notify(e) function. However, we get a result where the Python script crashes with an error like this: zipfile.BadZipFile: File is not a zip file
During handling of the above exception, another exception occurred.
There is more to the error, but I only copied & pasted this part in some communication with some folks in the office. It is impossible to replicate the error on our own - I have no idea how the files get corrupted in this way - except that there are multiple people accessing the files throughout the day.
The way to make the file readable is completely manual - we must open the file, get that error, hit yes, and save the file over the existing one, then re-launch the script. But since the try/except isn't catching the error and alerting us to the failure, we have to run the script manually to see if it works or not.
Two questions - am I doing something incorrect in my try/except block? I am admittedly weak in error catching, so my first thought is there is more I can do there to make that work. Secondly, is there a Python way to get past that error in the Excel workbook files?
Here is the error text:
Traceback (most recent call last):
File "G:/Replenishment/Reporting/00 - I&A Replenishment/02 - Service
Level/Daily Outs Comment Capture/Python/daily_outs_missed_files.py", line 48, in load_files
wb = load_workbook(file)
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\reader\excel.py", line 314, in load_workbook
data_only, keep_links)
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\reader\excel.py", line 124, in init
self.archive = _validate_archive(fn)
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\reader\excel.py", line 96, in _validate_archive
archive = ZipFile(filename, 'r')
File "C:\ProgramData\Anaconda3\lib\zipfile.py", line 1222, in init
self._RealGetContents()
File "C:\ProgramData\Anaconda3\lib\zipfile.py", line 1289, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "G:/Replenishment/Reporting/00 - I&A Replenishment/02 - Service Level/Daily Outs Comment Capture/Python/daily_outs_missed_files.py", line 123, in <module>
main()
File "G:/Replenishment/Reporting/00 - I&A Replenishment/02 - Service Level/Daily Outs Comment Capture/Python/daily_outs_missed_files.py", line 86, in main
df_output = df_clean()
File "G:/Replenishment/Reporting/00 - I&A Replenishment/02 - Service Level/Daily Outs Comment Capture/Python/daily_outs_missed_files.py", line 68, in df_clean
df = load_files()
File "G:/Replenishment/Reporting/00 - I&A Replenishment/02 - Service Level/Daily Outs Comment Capture/Python/daily_outs_missed_files.py", line 61, in load_files
print("Error: " + e + " When attempting to open file: " + file)
TypeError: can only concatenate str (not "BadZipFile") to str
Your try/except code looks correct. All user-defined exceptions in Python should be classes based on Exception. See BaseException and Exception in the Python documentation:
"Exception (..) All user-defined exceptions should also be derived from this class" - see also the exception class hierarchy tree at the end of that section of the Python docs.
If your Python script "crashes", it means one of the library procedures throws an exception which is not based on the Exception class, something that "should not" happen. You could look at the traceback and try catching the offending exception type separately, or find what part of the source code and which library is the cause, fix it and submit a PR. Here are two examples of a good and a bad way of deriving your own exceptions:
class MyBadError(BaseException):
    """
    my bad exception, do not make yours that way
    """
    pass
instead of the recommended
class MyGoodError(Exception):
    """
    exception based on Exception
    """
    pass
Where and what exactly fails is still a bit of a mystery, but the problem with your exception from the traceback is not new; see the zipfile.BadZipFile issue in the pandas discussion. Note that xlrd, used by pandas to read Excel workbook data, is currently "no-maintainer-ware" (see the declaration about xlrd from the authors), and in case of any issues the recommendation is to use openpyxl instead or fix any issues yourself (the pandas maintainers are doing a Pontius Pilate on that, but happily use xlrd as a dependency). I suggest you catch BadZipFile as a special, known corruption error separately from all other exceptions; see the Python error-handling tutorial for example code (you have probably already seen it, this is for other readers). If that does not work, I can trace it in the source code of your libraries / Python modules to the exact offending section and find the culprit, if you reach out directly.
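To make that suggestion concrete, here is a minimal sketch (not the poster's code) of the try/except inside load_files with BadZipFile caught separately. Converting the exception with str(e) also avoids the "TypeError: can only concatenate str" shown at the end of the traceback, which is what actually crashed the script before error_notify could run:
import zipfile
from openpyxl import load_workbook

try:
    wb = load_workbook(file)
except zipfile.BadZipFile as e:
    # Known corruption case: the workbook is not a valid zip archive.
    print("Corrupt workbook " + file + ": " + str(e))
    # error_notify(e)
except Exception as e:
    # str(e) is required; concatenating the exception object itself
    # raises the TypeError seen in the question.
    print("Error: " + str(e) + " when attempting to open file: " + file)
    # error_notify(e)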

Problems with pickle python

I recently made a program that uses an external document with pickle. But when it tries to load the file with pickle, I get this error (the file already exists, but it also fails when the file doesn't exist):
python3.6 check-graph_amazon.py
a
b
g
URL to follow www.amazon.com
Product to follow Pool_table
h
i
[' www.amazon.com', ' Pool_table', []]
p
Traceback (most recent call last):
File "check-graph_amazon.py", line 17, in <module>
tab_simple = pickle.load(doc_simple)
io.UnsupportedOperation: read
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "check-graph_amazon.py", line 42, in <module>
pickle.dump(tab_simple, 'simple_data.dat')
TypeError: file must have a 'write' attribute
Here is the code:
import pickle5 as pickle
#import os
try:
    print("a")
    with open('simple_data.dat', 'rb') as doc_simple:
        print("b")
        tab_simple = pickle.load(doc_simple)
        print("c")
        print(tab_simple)
        print("d")
        URL = tab_simple[0]
        produit_nom = tab_simple[1]
        tous_jours = tab_simple[2]
        print("f")
except:
    print("g")
    URL = str(input("URL to follow"))
    produit_nom = str(input("Product to follow"))
    with open('simple_data.dat', 'wb+') as doc_simple:
        print("h")
        #os.system('chmod +x simple_data.dat')
        tab_simple = []
        tab_simple.append(URL)
        tab_simple.append(produit_nom)
        tab_simple.append([])
        print(tab_simple)
        print("c'est le 2")
    print("p")
    pickle.dump(tab_simple, 'simple_data.dat')
    print("q")
The prints are here to show which lines are executed. The os.system call is there to allow writing to the file, but the error persists.
I don't understand why it says the document doesn't have a write attribute, because I opened it in write mode. And I don't understand the first error either, where it can't load the file.
If it helps, the goal of this script is to initialise the program, with a try. It tries to open the document in read mode in the try part and then sets the variables. If the document doesn't exist (because the program is launched for the first time), it goes into the except part and creates the document, before writing information to it.
I hope you will have some clue; I'm open to changing the architecture of the code if you have a better way to do the initialisation the first time the program is launched.
Thank you in advance, and sorry if the code isn't well formatted, I'm a beginner with this website.
Quote from the docs for pickle.dump:
pickle.dump(obj, file, protocol=None, *, fix_imports=True)
Write a pickled representation of obj to the open file object file. ...
...
The file argument must have a write() method that accepts a single bytes argument. It can thus be an on-disk file opened for binary writing, an io.BytesIO instance, or any other custom object that meets this interface.
So, you should pass to this function a file object, not a file name, like this:
with open("simple_data.dat", "wb"): as File:
pickle.dump(tab_simple, File)
Yeah, in your case the file has already been opened, so you should write to doc_simple.
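Applied to the code in the question, the end of the except branch becomes (a sketch - the dump moves inside the existing with block so it writes to the already-open file object):
with open('simple_data.dat', 'wb+') as doc_simple:
    tab_simple = [URL, produit_nom, []]
    # pass the open file object, not the file name
    pickle.dump(tab_simple, doc_simple)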

AttributeError: 'module' object has no attribute 'openFile'

I'm trying to figure out an error that occurred while using Python. I'm trying to use the module detektspikes.py, freely distributed by the klustakwik team.
I'm having trouble with errors that occur when it is run.
Error log:
Exiting directory C:\Users\user\Downloads\klusta-team-spikedetekt-82bcf06\klusta-team-spikedetekt-82bcf06\scripts_1
Traceback (most recent call last):
File "C:\Users\user\Downloads\klusta-team-spikedetekt-82bcf06\klusta-team-spikedetekt-82bcf06\scripts\detektspikes.py", line 82, in <module>
spike_detection_job(raw_data_files, probe_file, output_dir, output_name)
File "C:\Python27\lib\site-packages\spikedetekt\core.py", line 86, in spike_detection_job
probe, max_spikes)
File "C:\Python27\lib\site-packages\spikedetekt\core.py", line 115, in spike_detection_from_raw_data
h5s[n] = tables.openFile(filename, 'w')
AttributeError: 'module' object has no attribute 'openFile'
I guess the problem is in core.py.
Core.py:
"""
Filter, detect, extract from raw data.
"""
### Detect spikes. For each detected spike, send it to spike writer, which
### writes it to a spk file. List of times is small (memorywise) so we just
### store the list and write it later.
np.savetxt("dat_channels.txt", Channels_dat, fmt="%i")
# Create HDF5 files
h5s = {}
h5s_filenames = {}
for n in ['main', 'waves']:
    filename = basename+'.'+n+'.h5'
    h5s[n] = tables.openFile(filename, 'w')
    h5s_filenames[n] = filename
for n in ['raw', 'high', 'low']:
    if Parameters['RECORD_'+n.upper()]:
        filename = basename+'.'+n+'.h5'
        h5s[n] = tables.openFile(filename, 'w')
        h5s_filenames[n] = filename
main_h5 = h5s['main']
# Shanks groups
shanks_group = {}
shank_group = {}
shank_table = {}
for k in ['main', 'waves']:
    h5 = h5s[k]
    shanks_group[k] = h5.createGroup('/', 'shanks')
    for i in probe.shanks_set:
I would be pleased to be kindly helped!
The problem is that this code was written for a very old version of PyTables and tries to access a method that no longer exists in the tables module. See here: http://www.pytables.org/MIGRATING_TO_3.x.html
If you want to run the script you'd have to run it with an old version of PyTables, or update the lines that use openFile to use open_file instead. There may be other incompatibilities that I'm not aware of, though.
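If editing core.py is acceptable, a small compatibility shim can cover both spellings (a sketch; tables.open_file is the PyTables 3.x name for the old openFile):
import tables

# Prefer the PyTables 3.x name and fall back to the legacy 2.x camelCase name.
open_h5 = getattr(tables, "open_file", None) or tables.openFile

h5 = open_h5("example.h5", "w")  # "example.h5" is a placeholder filename
h5.close()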

Python: EOFError using cPickle while running a class instance

This is the code snippet causing the problem:
if str(sys.argv[2]) + '.pickle' in os.listdir(os.curdir):  #os.path.isfile(str(sys.argv[2]) + '.pickle'):
    path = sys.argv[2] + '.pickle'
    #print path
    instance = cPickle.load(open(str(path)))
This is the traceback:
Traceback (most recent call last):
File "parent_cls.py", line 92, in <module>
instance = cPickle.load(open(str(path)))
EOFError
If this keeps happening because file.close() is not performed or some other ridiculous mistake, please let me know if there is a way to access the pickle file using subprocess. Thanks.
UPDATE: Another thing I noticed: the check for whether filename.pickle exists using the if condition is actually creating filename.pickle, although it wasn't there at first.
I don't want to create it, only to check its existence. Is this some other problem?
Open it in binary mode:
open(str(path), 'rb')
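For context (a sketch; cPickle is the Python 2 module name): reading a pickle in text mode can truncate or mangle the binary stream, especially on Windows, which typically surfaces as EOFError. A with block also guarantees the file gets closed:
import cPickle

with open(str(path), 'rb') as f:  # binary mode; text mode can corrupt the stream
    instance = cPickle.load(f)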
