My program uses a csv file to load up some initialization info.
I have a config file that loads data from this csv file. I am building a web app as part of this program, and the config file is accessed from various points in the entire application.
This program must operate cross-platform.
Problem: Depending on who calls the config file and where the caller lives in the file tree, the csv operation is throwing IOError errors. The csv data isn't even accessed, but on import of the config file, the csv read portion executes anyway.
The code below is rife with Band-Aids...
# print os.getcwd()
try:
    with open('value_addresses.csv') as file:  # located in code folder; used extensively below
        reader = csv.reader(file)
        lines = [l for l in reader]
except IOError:
    try:
        with open('_code/value_addresses.csv') as file:
            reader = csv.reader(file)
            lines = [l for l in reader]
    except IOError:
        with open('../_code/value_addresses.csv') as file:
            reader = csv.reader(file)
            lines = [l for l in reader]
I would refactor the common code into a function.
def read_file(path):
    with open(path) as file:
        reader = csv.reader(file)
        lines = [l for l in reader]
    return lines

try:
    lines = read_file('value_addresses.csv')
except IOError:
    try:
        lines = read_file('_code/value_addresses.csv')
    except IOError:
        lines = read_file('../_code/value_addresses.csv')
You can simplify this further by searching a list of possible directories for value_addresses.csv.
This is how I would do it:
from os.path import join
import csv

def myopen(filename, possible_dirs, mode):
    for dir in possible_dirs:
        try:
            return open(join(dir, filename), mode)
        except IOError:
            pass
    raise IOError('File not found.')

with myopen('value_addresses.csv', ['.', '_code', '../_code'], 'r') as file:
    reader = csv.reader(file)
    lines = [l for l in reader]
That said, it may be worth looking into the specific IOError you are getting; either way, this is the general approach I would take.
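Another option worth considering (a sketch of my own, not from the answers above): resolve the path relative to the config module itself via `__file__`, so it no longer depends on the caller's working directory at all. The `read_addresses` name and the overridable `directory` parameter are illustrative additions:

```python
import csv
import os

def read_addresses(directory=None, filename='value_addresses.csv'):
    """Read the CSV relative to this module's directory by default.

    Assumption: value_addresses.csv normally sits next to the config module;
    building the path from __file__ makes the lookup independent of the
    caller's current working directory.
    """
    if directory is None:
        directory = os.path.dirname(os.path.abspath(__file__))
    with open(os.path.join(directory, filename), newline='') as f:
        return list(csv.reader(f))
```

Because the path is anchored to the module rather than to `os.getcwd()`, the same import works no matter where in the file tree the caller lives, which also holds cross-platform.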
Related
I am trying to create a small game for fun, and I want to save and load scores from previous runs. I started a test file to mess around and figure out how pickling works. I have a pickle file containing a small set of numbers. How do I add numbers to the pickle file and save it for the next run?
Currently I have it like this:
new_score = 9
filename = "scoreTest.pk"

outfile = open(filename, 'wb')
infile = open(filename, 'rb')

with infile as f:
    scores = pickle.load(f)
    scores.add(new_score)
    pickle.dump(scores, outfile)
When I run it like this I get this error:
EOFError: Ran out of input
If someone could please tell me what is wrong and how to do it correctly, that would be great. Apologies for any suboptimal code; I'm new to coding.
You are trying to juggle a reader and a writer on the same file at the same time. The open(filename, 'wb') for the writer truncates the file, so there is no data left for the reader. You should only open the file when you actually need it, and it's better to write to a temporary file and then rename it: if something goes wrong mid-write, you haven't lost your data.
import pickle
import os

new_score = 9
filename = "scoreTest.pk"
tmp_filename = "scoreTest.tmp"

try:
    with open(filename, 'rb') as infile:
        scores = pickle.load(infile)
except (IOError, EOFError) as e:
    scores = default  # whatever that is

scores.add(new_score)

with open(tmp_filename, 'wb') as outfile:
    pickle.dump(scores, outfile)
os.rename(tmp_filename, filename)
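Putting that together, here is a minimal runnable sketch of the whole load-or-default, modify, write-atomically cycle. The `add_score` helper and the assumption that scores live in a set (matching the `.add()` call in the question) are mine; I use `os.replace` rather than `os.rename` because it also overwrites an existing file on Windows:

```python
import os
import pickle

def add_score(filename, new_score):
    """Load existing scores (or start fresh), add one, write atomically."""
    try:
        with open(filename, 'rb') as infile:
            scores = pickle.load(infile)
    except (IOError, EOFError):
        scores = set()  # assumption: scores are kept in a set, matching .add()
    scores.add(new_score)
    # Dump to a temporary file first, then swap it into place, so a crash
    # mid-write cannot destroy the existing score file.
    tmp = filename + '.tmp'
    with open(tmp, 'wb') as outfile:
        pickle.dump(scores, outfile)
    os.replace(tmp, filename)  # os.replace overwrites on Windows too
    return scores
```

Run it once per game over: the first call creates the file, and each later call loads, extends, and rewrites it.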
From what I understand, the only way to edit an object in a pickle file is to unpickle each object, edit the desired object, and repickle everything back into the original file.
This is what I tried doing:
pickleWrite = open(fileName, 'wb')
pickleRead = open(fileName, 'rb')

# unpickle objects and put them in dataList
dataList = list()
try:
    while True:
        dataList.append(pickle.load(pickleRead))
except EOFError:
    pass

# change desired pickle object
dataList[0] = some change

# clear pickle file
pickleWrite.truncate(0)

# repickle each item in dataList
for data in dataList:
    pickle.dump(data, fileName)
For some reason, this leaves a large number of unknown symbols at the front of the file, making it unpicklable.
Error when we try to unpickle:
_pickle.UnpicklingError: invalid load key, '\x00'.
I would suggest avoiding multiple open handles to the same file like this.
Instead, you can try:
# Read the contents
with open(filename, 'rb') as file:
    dataList = pickle.load(file)

# ... do something to dataList ...

# Overwrite the pickle file
with open(filename, 'wb') as file:
    pickle.dump(dataList, file)
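As a minimal sketch of that read-modify-rewrite pattern (assuming the file holds a single pickled list; the `edit_pickled_list` helper name is mine, for illustration):

```python
import pickle

def edit_pickled_list(filename, index, value):
    """Read the whole pickled list, change one entry, overwrite the file."""
    # Open for reading first, and let the with-block close the handle
    # before the file is reopened for writing.
    with open(filename, 'rb') as f:
        data_list = pickle.load(f)
    data_list[index] = value
    with open(filename, 'wb') as f:
        pickle.dump(data_list, f)
    return data_list
```

Storing everything as one list and dumping it in a single `pickle.dump` call also avoids the loop of repeated `pickle.load` calls until `EOFError` that the question needed.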
I have written a script that parses a web page and saves the data of interest in a CSV file. Before I open the data and use it in a second script, I check whether the data file exists; if not, I run the parser script first. The odd behaviour of the second script is that it can detect that there is no file, the file then gets created, but when it is read for the first time it comes back empty (the else branch). I tried adding a delay with the time.sleep() method, but it does not help. The file explorer clearly shows that the file is not empty, yet on the first run the script sees it as empty. On subsequent runs the script sees the file and reads its content correctly.
Maybe you have some explanation for this behaviour.
def open_file():
    # TARGET_DIR and URL are global variables.
    all_lines = []
    try:
        current_file = codecs.open(TARGET_DIR, 'r', 'utf-8')
    except FileNotFoundError:
        procesed_data = parse_site(URL)
        save_parsed(procesed_data)
        compare_parsed()
        open_file()
    else:
        time.sleep(10)
        data = csv.reader(current_file, delimiter=';')
        for row in data:
            all_lines.append(row)
        current_file.close()
    return all_lines
You have some recursion going on there.
Another way to do it—assuming I understand correctly—is this:
import os

def open_file():
    # TARGET_DIR and URL are global variables.
    all_lines = []
    # If the file is not there, make it.
    if not os.path.isfile(TARGET_DIR):
        procesed_data = parse_site(URL)
        save_parsed(procesed_data)
        compare_parsed()
    # Here I am assuming the file has been created.
    current_file = codecs.open(TARGET_DIR, 'r', 'utf-8')
    data = csv.reader(current_file, delimiter=';')
    for row in data:
        all_lines.append(row)
    current_file.close()
    return all_lines
You should return the result of your internal open_file call, or just open the file in your except block:
def open_file():
    # TARGET_DIR and URL are hopefully constants
    try:
        current_file = codecs.open(TARGET_DIR, 'r', 'utf-8')
    except FileNotFoundError:
        procesed_data = parse_site(URL)
        save_parsed(procesed_data)
        compare_parsed()
        current_file = codecs.open(TARGET_DIR, 'r', 'utf-8')
    data = csv.reader(current_file, delimiter=';')
    all_lines = list(data)
    current_file.close()
    return all_lines
Using Python 3 on a Windows machine:
I have a function to take a list of lists and open it as a csv file using my default application (excel). Despite closing the file after writing, I get a 'locked for editing' message when excel starts.
def opencsv(data):
    """saves a list of lists as a csv and opens"""
    import tempfile
    import os
    import csv
    handle, fn = tempfile.mkstemp(suffix='.csv')
    with open(fn, "w", encoding='utf8', errors='surrogateescape', \
              newline='') as f:
        writer = csv.writer(f)
        for row in data:
            try:
                writer.writerow(row)
            except Exception as e:
                print('Error in writing row:', e)
        f.close()
    url = 'file://' + fn.replace(os.path.sep, '/')
    os.startfile(fn)

opencsv([['d1', 'd2'], ['d3', 'd4', 'd5']])
How can I fix this?
Answer from swstephe's input:
The issue is that mkstemp opens the file and associates it with an os handle. In my original code I was not closing this file properly. See below for updated code.
def opencsv(data):
    """saves a list of lists as a csv and opens"""
    import tempfile
    import os
    import csv
    handle, fn = tempfile.mkstemp(suffix='.csv')
    with os.fdopen(handle, "w", encoding='utf8', errors='surrogateescape', \
                   newline='') as f:
        writer = csv.writer(f)
        for row in data:
            try:
                writer.writerow(row)
            except Exception as e:
                print('Error in writing row:', e)
    print(fn)
    os.startfile(fn)
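An alternative sketch (my suggestion, not part of the answer above) uses `tempfile.NamedTemporaryFile` with `delete=False`, which hands you a regular file object and avoids juggling the raw OS-level handle entirely:

```python
import csv
import tempfile

def opencsv_alt(data):
    """Write rows to a named temp CSV and return its path.

    The with-block closes the file before it is handed to another program,
    so Excel no longer sees it as locked for editing.
    """
    with tempfile.NamedTemporaryFile('w', suffix='.csv', delete=False,
                                     encoding='utf8', newline='') as f:
        csv.writer(f).writerows(data)
        fn = f.name
    return fn  # the caller can then launch it, e.g. os.startfile(fn) on Windows
```

Note this sketch writes all rows with one `writerows` call; if the per-row error handling from the original is important, keep the loop with its try/except instead.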
I have written this code
import os
import csv
import time

class upload_CSV:

    def __init__(self):
        coup = []
        self.coup = coup
        self.csv_name = 'Coup.csv'

    def loader(self, rw):
        with open(self.csv_name, 'rb') as csv_file:
            reader = csv.reader(csv_file, delimiter=',')
            for row in reader:
                self.coup.append(row[rw])
            self.coup = self.coup[1:]
            csv_file.flush()
            csv_file.close()
        return self.coup

    def update(self, rw, message):
        #try:
        with open(self.csv_name, 'rb') as csv_file1:
            reader = csv.reader(csv_file1, delimiter=',')
            csv_file1.flush()  # To clean the register for reuse
            csv_file1.close()
        #except Exception as ex:
        #    error = 'An error occured loading data in the reader'
        #    #raise
        #    return ex
        os.remove(self.csv_name)
        writer = csv.writer(open(self.csv_name, 'wb'))
        for row in reader:
            if row[rw] == message:
                print str(row), ' has been removed'
            else:
                writer.writerow(row)
        return message
I am trying to read the content of a CSV into a list first. Once I get the relevant data, I need to go back to my CSV and rewrite it without that record. I keep getting this single error
Line 27 in update
with open(csv_name,'rb')as csvfile1:
Python: IOError: [Errno 2] No such file or directory 'Coup.csv'
when I call the update function.
I have looked at this question, Python: IOError: [Errno 2] No such file or directory, but nothing seems to work. It's as if the first function has a lock on the file. Any help would be appreciated.
It would be enormously helpful if we saw the traceback to know exactly what line is producing the error...but here is a start...
First, you have two spots in your code where you are working with a filename that expects to only be available in the current directory. That is one possible point of failure in your code if you run it outside the directory containing the file:
self.csv_name = 'Coup.csv'
...
with open(self.csv_name, 'rb') as csv_file:
    ...
with open('Coup.csv~', 'rb') as csv_file1:
    ...
And then, you are also referring to a variable that won't exist:
def update(self, rw, message):
    ...
    # self.csv_name? or csv_file1?
    os.remove(csv_name)
    writer = csv.writer(open(csv_name, 'wb'))
Also, how can you be sure this temp file will exist? Is it guaranteed? I normally wouldn't recommend relying on a system-temporary file.
with open('Coup.csv~','rb') as csv_file1:
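For what it's worth, here is one way the update method could be restructured so that nothing iterates a reader whose file has already been closed. This is a Python 3 sketch of my own (the original code is Python 2), reading every row into memory first and then rewriting through a temporary file plus `os.replace`, so the original CSV survives a crash mid-write:

```python
import csv
import os

def update(csv_name, rw, message):
    """Rewrite csv_name without the rows whose column rw equals message."""
    # Read everything into a list while the file is open; after the
    # with-block the handle is closed and nothing touches it again.
    with open(csv_name, newline='') as f:
        rows = list(csv.reader(f))
    kept = [row for row in rows if row[rw] != message]
    # Write the survivors to a temp file, then swap it into place.
    tmp = csv_name + '.tmp'
    with open(tmp, 'w', newline='') as f:
        csv.writer(f).writerows(kept)
    os.replace(tmp, csv_name)  # overwrites the original atomically-ish
    return kept
```

This also sidesteps the `os.remove` / reopen dance: the original file is only replaced once the new content is fully written.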