Use python struct.error to stop a function? - python

I'm trying to write a small function for my program (Python) which writes some data into a CSV file. The input data comes from another file. Both files are already open!
The code for the actual read and write process is:
while aux != '':
    data = f.read(4)
    data = unpack('I', data)
    data = list(data)
    writer.writerow(data)
else:
    print('done')
This code works fine so far, but sometimes my input data doesn't have 4 bytes left at the end for the last read, so it gives me the error "struct.error: unpack requires a string argument of length 4".
This is totally fine for me, I don't mind some data loss at the end, but this error stops my whole program.
Is there a way to stop the function and return to the main program if this error occurs? Or just stop the while loop and go on with the "else:" part?

This should do it:
try:
    data = unpack('I', data)
except struct.error as err:
    print(err)
This way you'll know when there is a problem, but program execution will continue.
Do read up on Python error handling for the full story.
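Applied to the loop in the question, the same idea lets the function stop cleanly when fewer than 4 bytes remain. A minimal sketch, assuming f, writer and aux are set up as in the question (note that break skips a while loop's else clause, so the "done" message is printed after the loop instead):

import struct

while aux != '':
    chunk = f.read(4)
    if len(chunk) < 4:              # not enough bytes left for one unsigned int
        break
    try:
        writer.writerow(list(struct.unpack('I', chunk)))
    except struct.error as err:     # any other malformed data: report and stop
        print(err)
        break
print('done')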

Related

json.load raising "json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)" even with an "anti_empty" condition

I already posted about my problem and I thought it was solved, but after a while the error came back, so I'm going to explain my program from the beginning.
I have a JSON file containing values that are permanently updated by another program. I want an overlay that displays those values, which means I have to open and read my JSON file every second (or more often) with the "after()" method. (I'm using tkinter for my overlay.)
When I run my tkinter window without the other program that updates the values, everything works perfectly: I can update a value manually and it is updated on the overlay.
When I run both programs together, after some amount of time I get the empty JSON error, sometimes after 5 minutes, sometimes after 45 minutes; it's random.
I tried the following approaches.
Attempt 1:
def is_json():
    with open('my_json') as my_file:
        myjson = my_file.read()
    try:
        json_object = json.loads(myjson)
    except json.JSONDecodeError as e:
        return False
    return True

if is_json():
    with open('my_json') as my_file:
        data = json.load(my_file)
else:
    time.sleep(0.1)
Attempt 2:
while True:
    if os.path.getsize("/my_json") > 0:
        with open('my_json') as my_file:
            myjson = my_file.read()
    else:
        time.sleep(0.2)
I tried another one, but I don't want to code it again: a function that allowed one program to read/write the JSON only during "even" seconds while the other one could only do it during "odd" seconds.
I tried all this to avoid concurrent access, because I think that's my problem, but none of those solutions worked.
You should return the parsed JSON in the same function that has the try/except. Otherwise, the file could change between calling is_json() and json.load().
def json_load_retry(filename, sleeptime=0.1):
    while True:
        with open(filename) as f:
            try:
                return json.load(f)
            except json.JSONDecodeError:
                time.sleep(sleeptime)

myjson = json_load_retry('my_json', sleeptime=0.2)
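If you also control the program that writes the JSON, the race can be avoided entirely by writing atomically: dump to a temporary file in the same directory and then rename it over the real one, so a reader never sees a half-written or empty file. A small sketch of that idea (write_json_atomically and payload are illustrative names, not from the question):

import json
import os
import tempfile

def write_json_atomically(filename, payload):
    # Dump into a temp file next to the target, then replace the target in
    # one step, so a concurrent reader always sees a complete JSON document.
    directory = os.path.dirname(os.path.abspath(filename))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix='.tmp')
    with os.fdopen(fd, 'w') as tmp:
        json.dump(payload, tmp)
    os.replace(tmp_path, filename)  # atomic replace on the same filesystem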

How to continue a loop after catching exception in try ... except

I am reading a big file in chunks and I am doing some operations on each of the chunks. While reading one of them I got the following error message:
pandas.errors.ParserError: Error tokenizing data. C error: Expected 26 fields in line 15929977, saw 118
which means that one of my file lines doesn't follow the same format as the others. I thought I could just omit this chunk, but I couldn't find a way to do it. I tried a try/except block as follows:
data = pd.read_table('ny_data_file.txt', sep=',',
                     header=0, encoding='latin1', chunksize=5000)
try:
    for chunk in data:
        # operations
except pandas.errors.ParserError:
    # Here is my problem
The problem is that if a chunk is not parsed correctly, my code jumps straight to the exception handler and never gets back into the for loop, whereas what I would like is to skip that chunk and move on to the next one, performing the operations inside the loop on it.
I have checked Stack Overflow but I couldn't find anything similar where the try was performed on the for loop. Any help would be appreciated.
UPDATE:
I have tried to do as suggested in the comments:
try:
    for chunk in data:
        # operations
except pandas.errors.ParserError:
    # continue/pass/handle error
But it is still not catching the exception because, as said, the exception is raised when getting the chunk out of my data, not when doing operations with it.
The way you use try/except makes it skip the entire for loop if an exception is caught inside it. If you want to skip only one iteration, you need to write the try/except inside the loop, like so:
for chunk in data:
    try:
        # operations
    except pandas.errors.ParserError as e:
        # inform the user of the error
        print("Error encountered while parsing chunk {}".format(chunk))
        print(e)
I understood that you get the exception in the operations part. If that is the case, you should just continue:
for chunk in data:
    try:
        # operations
    except pandas.errors.ParserError:
        continue
I am not sure where the exception is thrown. Maybe adding the full stack trace would help. If the error is thrown by the read_table() call, maybe you could try this:
try:
    data = pd.read_table('ny_data_file.txt', sep=',',
                         header=0, encoding='latin1', chunksize=5000)
except pandas.errors.ParserError:
    pass

for chunk in data:
    # operations
As suggested by @JonClements, what solved my problem was to use error_bad_lines=False in pd.read_csv, so it simply skipped the lines causing trouble and let me execute the rest of the for loop.
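For reference, a minimal sketch of that accepted approach (error_bad_lines=False was the option available at the time; pandas 1.3 and later spell it on_bad_lines='skip'; the per-chunk operation here is only a placeholder):

import pandas as pd

reader = pd.read_table('ny_data_file.txt', sep=',', header=0,
                       encoding='latin1', chunksize=5000,
                       error_bad_lines=False)  # pandas >= 1.3: on_bad_lines='skip'

for chunk in reader:
    # placeholder for the real per-chunk operations
    print(chunk.shape)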

JSON File I/O : Extra Data Error

I am currently learning Python. For a small project, I am writing a script to dump and load JSON extracted from the web. The file needs to be constantly updated after pulling the data each time, and for that I have written the following code.
with open(os.path.join(d, fname), 'a+') as f:
    try:
        f.seek(0)
        t = json.load(f)
        for i in t:
            tmp[i] = t[i]
    except Exception as e:
        print(e, "New File ", fname, " is created in ", d)
    f.truncate()
    json.dump(tmp, f)
I have put a try/except block in since the first time this program runs, the file has no data written to it.
When I run the script it works as expected, but when I run the same script a fourth time, it gives an "Extra data" exception:
Extra data: line 1 column 29245 (char 29244) New File TSLA_dann is created in 2017-12-20
I am not sure how another dictionary ends up being written to the file. Please guide me.
It is nearly impossible to write another JSON object with such code, but your code is not good: you mix try, open, seek and truncate, and the file mode choice may be wrong. Let me teach you a little to make it much better:
try should cover only what can raise the error.
seek(0) is not needed right after open.
open(x, 'a+') means append to the end, I think (it can be the reason for the error).
Use spaces.
Be patient.
The problem is probably the 'a+' mode, but in any case, clean up the code :)
Believe me, I write 250,000-line programs without problems.
Here is cleaner code for you as a good example; it should work (I have not tested it - fix it if a letter is missing, or just run it):
# read
file_path = os.path.join(d, fname)
with open(file_path, 'r') as f:  # 'r' means read and can be omitted
    try:
        t = json.load(f)
    except Exception as e:
        print('%s %s' % (e, file_path))
        t = {}  # nothing to merge on the first run / unreadable file
for i in t:
    tmp[i] = t[i]

# write
with open(file_path, 'w') as f:
    json.dump(tmp, f)
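To illustrate why the answer suspects the 'a+' mode: dumping into a file opened for appending places a second JSON object after the first, and a later json.load then fails with "Extra data" at the position where the first object ends. A small throwaway demo (demo.json is just an example file name):

import json

for run in range(2):
    with open('demo.json', 'a+') as f:  # append mode: each dump lands at the end
        json.dump({'run': run}, f)

with open('demo.json') as f:
    json.load(f)  # fails with "Extra data" because two objects are concatenated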

EOF Error in python Hackerrank

Trying to solve a problem, but the HackerRank compiler keeps throwing an EOFError while parsing; I don't know where I am going wrong.
#!/usr/bin/python
b = []
b = raw_input().split()
c = []
d = []
a = raw_input()
c = a.split()
f = b[1]
l = int(b[1])
if len(c) == int(b[0]):
    for i in range(l, len(c)):
        d.append(c[i])
        #print c[i]
    for i in range(int(f)):
        d.append(c[i])
        #print c[i]
    for j in range(len(d)):
        print d[j],
I also tried try/except to solve it, but then I get no input.
try:
    a = input()
    c = a.split()
except EOFError:
    a = ""
The input format is two space-separated integers at the beginning and then the array.
The traceback is:
Traceback (most recent call last):
  File "solution.py", line 4, in <module>
    b=raw_input().split()
EOFError: EOF when reading a line
There are several ways to handle the EOF error.
1. Catch the exception:
while True:
    try:
        value = raw_input()
        do_stuff(value)  # next line was found
    except EOFError:
        break  # end of file reached
2. Check the input content:
while True:
    value = raw_input()
    if value != "":
        do_stuff(value)  # next line was found
    else:
        break
3. Use sys.stdin.readlines() to convert the input lines into a list, and then use a for-each loop. A more detailed explanation is in "Why does standard input() cause an EOF error".
import sys

# Read input and assemble Phone Book
n = int(input())
phoneBook = {}
for i in range(n):
    contact = input().split(' ')
    phoneBook[contact[0]] = contact[1]

# Process Queries
lines = sys.stdin.readlines()  # convert lines to list
for i in lines:
    name = i.strip()
    if name in phoneBook:
        print(name + '=' + str(phoneBook[name]))
    else:
        print('Not found')
I faced the same issue. This is what I noticed. I haven't seen your "main" function, but HackerRank already reads in all the data for us; we do not have to read in anything. For example, for a function def doSomething(a, b): both a and b, whether an array or just an integer, are read in for us. We just have to focus on our main code without worrying about reading. Also, at the end, make sure your function returns something, otherwise you will get another error. HackerRank takes care of printing the final output too. Their code samples and FAQs are a bit misleading. This was my observation on my test; yours could be different.
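In other words, for those exercises the visible part of the solution is usually just a function body; a rough sketch of that pattern (doSomething and its logic are purely illustrative, not from an actual problem):

def doSomething(a, b):
    # a and b are parsed and passed in by HackerRank's hidden driver code,
    # so the function should not call input()/raw_input() itself.
    return sum(a) + b  # return the result rather than printing it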
It's because your function is expecting input, but it was not provided. Provide a custom input and try to compile it. It should work.
I don't know why, but providing a custom input and compiling got me in, and it passed all cases without my changing anything.
There is some code hidden below the main visible code in HackerRank.
You need to expand that code (note the line number where you got the error and check that line after expanding); the hidden code is valid, and you need to make the visible code at the top match it.
In my case there was something like this:
regex_integer_in_range = r"___________" # Do not delete 'r'.
regex_alternating_repetitive_digit_pair = r"__________" # Do not delete 'r'.
I just filled in the blanks as below and it worked fine with the given hidden code:
regex_integer_in_range = r"^[0-9][\d]{5}$" # Do not delete 'r'.
regex_alternating_repetitive_digit_pair = r"(\d)(?=\d\1)" # Do not delete 'r'.
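For context, the hidden driver for that kind of exercise typically applies the two patterns along these lines (an illustrative reconstruction, not the exact hidden code):

import re

P = input()
# Valid when the whole string matches the integer-in-range pattern and it
# contains fewer than two alternating repetitive digit pairs.
print(bool(re.match(regex_integer_in_range, P))
      and len(re.findall(regex_alternating_repetitive_digit_pair, P)) < 2)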

What is this JSON Decoder piece of code doing?

I have been using this piece of code:
def read_text_files(filename):
    # Creates a JSON decoder
    decoder = json.JSONDecoder()
    with open(filename, 'r') as inputfile:
        # Returns the next item in the input file, strips whitespace from it and saves it in line
        line = next(inputfile).strip()
        while line:
            try:
                # Returns a 2-tuple of the Python representation of the data and the index where the data ended
                obj, index = decoder.raw_decode(line)
                # Yield the decoded object
                yield obj
                # Remove the already scanned part of the line
                line = line[index:]
            except ValueError:
                line += next(inputfile).strip()
                if not line:
                    line += next(inputfile).strip()
                global count
                count += 1
                print str(count)
all_files = glob.glob('Documents/*')
for filename in all_files:
    for data in read_text_files(filename):
        rawTweet = data['text']
        print 'Here'
It reads in a JSON file and decodes it. However, what I realise is that when I place the count and print statements inside the ValueError handler, I'm losing almost half of the documents being scanned here - they never make it back to the main method.
Could somebody explain to me exactly what the try statement is doing and why I'm losing documents in the except part? Is it due to bad JSON?
Edit: Including more code
Currently, with the code posted, the machine prints:
"Here"
2
3 etc...
199
Here
200
Here (alternating like this until)...
803
804
805 etc...
1200
Is this happening because some of the JSON is corrupt? Is it because some of the documents are duplicates (and some definitely are)?
Edit 2:
Interesting: deleting
line = next(inputfile).strip()
while line:
and replacing it with:
for line in inputfile:
appears to have fixed the problem. Is there a reason for this?
The try statement specifies a block of statements for which exceptions are handled by the following except blocks (only one in your case).
My impression is that with your modifications you are triggering a second exception inside the exception handler itself. That makes control go to a higher-level exception handler, possibly outside the function read_text_files. If no exception occurs in the exception handler, the loop can continue.
Please check that count exists and has been initialized with an integer value (say 0).
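For completeness, a sketch of how the generator looks with Edit 2 applied, iterating the file directly so the except block never calls next() (the call that can itself fail at end of file); count is initialized to 0 as suggested, and the buffered variable name is just illustrative:

import json

count = 0  # module-level counter, initialized as the answer suggests

def read_text_files(filename):
    global count
    decoder = json.JSONDecoder()
    buffered = ''
    with open(filename, 'r') as inputfile:
        for line in inputfile:            # Edit 2: iterate the file directly
            buffered += line.strip()
            while buffered:
                try:
                    obj, index = decoder.raw_decode(buffered)
                except ValueError:
                    count += 1            # incomplete JSON: wait for the next line
                    break
                yield obj
                buffered = buffered[index:].lstrip()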
