I get the following error with my code:

Traceback (most recent call last):
  File "C:\Users\XXX\Sentiment Analysis-vader.py", line 34, in <module>
    f.printer()
  File "C:\Users\XXX\Sentiment Analysis-vader.py", line 18, in printer
    with csv.reader(open('analyse_' + str(bloombergcode) + '.csv', 'r'), delimiter=",", quotechar='|') as q2:
AttributeError: __enter__

Process finished with exit code 1
I used the following code:

import csv
from nltk.sentiment.vader import SentimentIntensityAnalyzer

class VaderSentiment:
    def __init__(self, bloomcode):
        self.bloomcode = bloomcode

    def print_sentiment_scores(self, sentence):
        self.sentence = sentence
        analyser = SentimentIntensityAnalyzer()
        snt = analyser.polarity_scores(self.sentence)
        print("{:-<40} {}".format(self.sentence, str(snt)))

    def printer(self):
        bloombergcode = self.bloomcode
        with csv.reader(open('analyse_' + str(bloombergcode) + '.csv', 'r'), delimiter=",", quotechar='|') as q2:
            for line in q2:
                for field in line:
                    print_sentiment_scores(field)

for code in ('AAPL', 'NFLX'):
    f = VaderSentiment(code)
    f.printer()
    time.sleep(1)
I have already looked at similar questions (e.g. "Python Json with returns AttributeError: __enter__"), but those solutions do not fix my problem.
Does anyone see what is wrong?
You're not using csv.reader correctly: the reader object it returns is not a context manager, so it cannot be placed inside a with statement.
Try doing it the same way as in the usage example from the documentation:
with open('analyse_' + str(bloombergcode) + '.csv', 'r') as csv_file:
    q2 = csv.reader(csv_file, delimiter=',', quotechar='|')
    for line in q2:
        # ...rest of your code...
Wrap open in the with statement instead (open does return a context manager, and this is the recommended way to use it), then create the csv.reader inside the block.
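Applied to the class in the question, the printer method could be restructured along these lines. This is only a sketch: the VADER analyser is replaced by a plain print so the example stays self-contained, and print_sentiment_scores is assumed to be called via self (the original snippet omits the self):

```python
import csv

class VaderSentiment:
    def __init__(self, bloomcode):
        self.bloomcode = bloomcode

    def print_sentiment_scores(self, sentence):
        # Placeholder for the SentimentIntensityAnalyzer call in the question.
        print(sentence)

    def printer(self):
        filename = 'analyse_' + str(self.bloomcode) + '.csv'
        # open() supports the with statement; the csv.reader is created inside it.
        with open(filename, 'r', newline='') as csv_file:
            q2 = csv.reader(csv_file, delimiter=',', quotechar='|')
            for line in q2:
                for field in line:
                    self.print_sentiment_scores(field)
```

The file stays open for exactly the duration of the loop, and the reader is an ordinary iterator rather than a context manager.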
Related
While trying to append a line to a CSV file I get the following error message and can't understand what is wrong:

TypeError: '_csv.writer' object is not callable

This snippet works on its own in a notebook (ipynb), but not when I run the entire script. This is my code; thanks in advance!
import os
from csv import writer

curr_game = 'Catan'
curr_game_filename = curr_game + '.csv'
path = r'D:\Python\Ratings\History'

with open(os.path.join(path, curr_game_filename), 'a', newline='') as gamehistory:
    listtoappend = ['Playername', 'Eva', 'Louis', 'Jean']
    writer_object = writer(gamehistory)
    writer_object.writerow(listtoappend)
    gamehistory.close()
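The snippet as posted runs on its own. One common cause of this exact error, hypothetical here since the rest of the script is not shown, is that the name writer has been rebound somewhere in the larger script (easy to do across notebook cells), so the second call hits a _csv.writer instance instead of the factory function:

```python
from csv import writer

with open('demo.csv', 'w', newline='') as fh:
    writer = writer(fh)          # rebinds the name 'writer' to a _csv.writer instance
    writer.writerow(['a', 'b'])  # still works on this pass

# On a later pass (or a re-run cell), 'writer' is no longer the factory function:
with open('demo.csv', 'a', newline='') as fh:
    try:
        w = writer(fh)           # TypeError: '_csv.writer' object is not callable
    except TypeError as e:
        print(e)
```

Using a distinct name such as writer_object, as the snippet above already does, or importing the module as csv and calling csv.writer, avoids the collision.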
I am trying to write some output to a CSV file line by line.
Here is what I tried:
import csv
import datetime

today = datetime.datetime.now().date()
filter = "eventTimestamp ge {}".format(today)
select = ",".join([
    "eventTimestamp",
    "eventName",
    "operationName",
    "resourceGroupName",
])
activity_logs = client.activity_logs.list(
    filter=filter,
    select=select
)

with open(r"C:\scripts\logs.csv", 'w', newline='') as f:
    for log in activity_logs:
        result = " ".join([
            str(log.event_timestamp),
            str(log.resource_group_name),
            log.event_name.localized_value,
            log.operation_name.localized_value
        ])
        f.writerow(result)
It's throwing this error:

AttributeError: '_io.TextIOWrapper' object has no attribute 'writerow'

How can I fix this error? Is there another module I should use?
This:

with open(r"C:\scripts\logs.csv", 'w', newline='') as f:

creates only a plain text file handle. You need to create a csv.writer from f and then you can use writerow, that is:
import csv

...

with open(r"C:\scripts\logs.csv", 'w', newline='') as f:
    writer = csv.writer(f)
    for log in activity_logs:
        result = (str(log.event_timestamp),
                  str(log.resource_group_name),
                  log.event_name.localized_value,
                  log.operation_name.localized_value)
        writer.writerow(result)
You might find the usage examples in the csv article at PyMOTW-3 useful.
The error is coming from the line:

f.writerow(result)

and it is telling you that the f object, a plain text file handle, has no method named writerow.
As Jannes has commented, use the write function instead:
f.write(result)
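Note that unlike a csv.writer's writerow, the file object's write does not append a line terminator, so with this approach you need to add the newline yourself. A small sketch with made-up log strings standing in for the Azure values:

```python
# Hypothetical pre-joined log lines, as produced by the " ".join(...) above.
lines = [
    "2021-01-01 rg1 Started Create",
    "2021-01-02 rg2 Stopped Delete",
]

with open('logs.txt', 'w') as f:
    for result in lines:
        f.write(result + '\n')   # write() emits the string verbatim; add the newline

with open('logs.txt') as f:
    print(f.read())
```

Without the explicit '\n', all the records would run together on a single line.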
csv.writer is required when you're trying to write to a CSV file. The code can then be:
import csv
import datetime

today = datetime.datetime.now().date()
filter = "eventTimestamp ge {}".format(today)
select = ",".join([
    "eventTimestamp",
    "eventName",
    "operationName",
    "resourceGroupName",
])
activity_logs = client.activity_logs.list(
    filter=filter,
    select=select
)

with open(r"C:\scripts\logs.csv", 'w', newline='') as file:
    f = csv.writer(file)
    for log in activity_logs:
        result = (str(log.event_timestamp),
                  str(log.resource_group_name),
                  log.event_name.localized_value,
                  log.operation_name.localized_value)
        f.writerow(result)
When the csv.writer is created after opening the CSV file, the code works without the TextIOWrapper error.
I want to use the result like a list or dict, but I get this error when I try. How can I use the csv_reader object outside the function, like a list?
import csv

def get_data(file):
    with open(file, 'r', encoding="ISO-8859-1") as csv_file:
        csv_reader = csv.DictReader(csv_file, delimiter=',')
        return csv_reader

for i in get_data('spam.csv'):
    print(i)
Traceback (most recent call last):
  File "test1.py", line 10, in <module>
    for i in get_data('spam.csv'):
  File "/home/indianic/anaconda3/lib/python3.7/csv.py", line 111, in __next__
    self.fieldnames
  File "/home/indianic/anaconda3/lib/python3.7/csv.py", line 98, in fieldnames
    self._fieldnames = next(self.reader)
ValueError: I/O operation on closed file.
Since you are using with open() to open the file, the file is closed as soon as the function returns, so you no longer have access to it.
To keep the file usable, you can take either of the following two approaches.
Yield each row (making the function a generator) instead of returning the reader:
import csv

def get_data(file):
    with open(file, 'r', encoding="ISO-8859-1") as csv_file:
        csv_reader = csv.DictReader(csv_file, delimiter=',')
        for row in csv_reader:
            yield row

for i in get_data('spam.csv'):
    print(i)
Open the file outside of the function
import csv

def main():
    csv_file = open('spam.csv', 'r', encoding="ISO-8859-1")
    for i in get_data(csv_file):
        print(i)
    csv_file.close()

def get_data(file):
    csv_reader = csv.DictReader(file, delimiter=',')
    return csv_reader

main()
The with open construct opens the file at the start of the block and closes it again after the last statement in the block.
You may use this code:
def get_data(file):
    csv_reader = csv.DictReader(file, delimiter=',')
    return csv_reader
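With this version the caller is responsible for opening the file and for consuming the reader while the file is still open. A usage sketch, where the sample file is created first so the snippet runs on its own (the filename 'spam.csv' follows the question):

```python
import csv

# Create a tiny sample file so the sketch runs on its own.
with open('spam.csv', 'w', encoding='ISO-8859-1') as fh:
    fh.write('label,text\nham,hello\nspam,win a prize\n')

def get_data(file):
    # The caller passes an already-open file object.
    return csv.DictReader(file, delimiter=',')

# Consume the reader before the with block closes the file.
with open('spam.csv', 'r', encoding='ISO-8859-1') as csv_file:
    rows = [row for row in get_data(csv_file)]

print(rows[0]['label'])
```

Materialising the rows into a list (as above) means the data survives after the file is closed; iterating the reader lazily after the with block would fail with the same "I/O operation on closed file" error.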
You are opening your file with a context manager:

with open(...) as csv_file:

within which the file is open.
However, as soon as you exit the context manager, the file is closed automatically. That is what happens here: after your return statement you are no longer inside the context manager's scope.
So all the operations you need to perform should be done inside the with open(...) context.
You can try:

import csv

def get_data(file):
    with open(file, 'r', encoding="ISO-8859-1") as csv_file:
        result = []
        csv_reader = csv.DictReader(csv_file, delimiter=',')
        for i in csv_reader:
            result.append(i)
        return result
The following code I wrote will run one iteration with no problems. However, I want it to loop through all of the values of x (in this case there are 8). After the first pass through the loop, the second iteration fails on this line (t = f[x]['master_int']):

Traceback (most recent call last):
  File "Hd5_to_KML_test.py", line 16, in <module>
    t = f[x]['master_int']
TypeError: '_io.TextIOWrapper' object is not subscriptable

So it only outputs results (a .csv file and a .kml file) for BEAM0000. I expected it to loop through and output the two files for all 8 beams. What am I missing? Why won't it loop through the other beams?
import h5py
import numpy as np
import csv
import simplekml
import argparse

parser = argparse.ArgumentParser(description='Creating a KML from an HD5 file')
parser.add_argument('HD5file', type=str)
args = parser.parse_args()
HD5file = args.HD5file

f = h5py.File(HD5file, 'r')
beamlist = []
for x in f:
    t = f[x]['master_int']
    for i in range(0, len(t), 1000):
        time = f[x]['master_int'][i]
        geolat = f[x]['geolocation']['lat_ph_bin0'][i]
        geolon = f[x]['geolocation']['lon_ph_bin0'][i]
        beamlist.append([time, geolat, geolon])
    file = x + '.csv'
    with open(file, 'w') as f:
        wr = csv.writer(f)
        wr.writerows(beamlist)
    inputfile = csv.reader(open(file, 'r'))
    kml = simplekml.Kml()
    for row in inputfile:
        kml.newpoint(name=row[0], coords=[(row[2], row[1])])
    kml.save(file + '.kml')
When you use the context manager here:

with open(file, 'w') as f:

it rebinds the name f to the open file object. On the next loop iteration, f[x] calls __getitem__ on that text file object instead of the h5py File, which raises a TypeError.
Replace this block:

with open(file, 'w') as f:
    wr = csv.writer(f)
    wr.writerows(beamlist)

with something like:

with open(file, 'w') as fileobj:
    wr = csv.writer(fileobj)
    wr.writerows(beamlist)
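The rebinding can be reproduced in isolation, without h5py. A minimal sketch, where a plain dict stands in for the subscriptable File object:

```python
f = {'BEAM0000': [1, 2, 3]}       # f starts out subscriptable, like the h5py File
print(f['BEAM0000'][0])

with open('demo.txt', 'w') as f:  # 'as f' rebinds the name f to the file handle
    f.write('hello')

# After the with block, f is still the (now closed) file object, not the dict:
try:
    f['BEAM0000']
except TypeError as e:
    print(e)                      # '_io.TextIOWrapper' object is not subscriptable
```

A with statement closes the file on exit but does not restore the previous binding of the name, which is why the second loop iteration fails.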
I am currently experiencing an issue while reading big files with Python 2.7 [GCC 4.9] on Ubuntu 14.04 LTS, 32-bit. I read other posts on the same topic, such as Reading a large file in python , and tried to follow their advice, but I still obtain MemoryErrors.
The file I am attempting to read is not that big (~425MB), so first I tried a naive block of code like:
import sys

data = []
isFirstLine = True
lineNumber = 0

print "Reading input file \"" + sys.argv[1] + "\"..."
with open(sys.argv[1], 'r') as fp:
    for x in fp:
        print "Now reading line #" + str(lineNumber) + "..."
        if isFirstLine:
            keys = [y.replace('\"', '') for y in x.rstrip().split(',')]
            isFirstLine = False
        else:
            data.append(x.rstrip().split(','))
        lineNumber += 1
The code above crashes around line #3202 (of 3228), with output:
Now reading line #3200...
Now reading line #3201...
Now reading line #3202...
Segmentation fault (core dumped)
I tried invoking gc.collect() after reading every line, but I got the same error (and the code became slower). Then, following some indications I found here on StackOverflow, I tried numpy.loadtxt():
data = numpy.loadtxt(sys.argv[1], skiprows=1, delimiter=',')
This time, I got a slightly more verbose error:
Traceback (most recent call last):
  File "plot-memory-efficient.py", line 360, in <module>
    if __name__ == "__main__" : main()
  File "plot-memory-efficient.py", line 40, in main
    data = numpy.loadtxt(sys.argv[1], skiprows=1, delimiter=',')
  File "/usr/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 856, in loadtxt
    X = np.array(X, dtype)
MemoryError
So, I am under the impression that something is not right. What am I missing? Thanks in advance for your help!
UPDATE
Following hd1's answer below, I tried the csv module and it worked. However, I think there is something important that I had overlooked: I was parsing each line and storing the values as strings. Using csv like this still causes some errors:
with open(sys.argv[1], 'r') as fp:
    reader = csv.reader(fp)
    # get the header
    keys = reader.next()
    for line in reader:
        print "Now reading line #" + str(lineNumber) + "..."
        data.append(line)
        lineNumber += 1
But storing the values as floats solves the issue!

with open(sys.argv[1], 'r') as fp:
    reader = csv.reader(fp)
    # get the header
    keys = reader.next()
    for line in reader:
        print "Now reading line #" + str(lineNumber) + "..."
        floatLine = [float(x) for x in line]
        data.append(floatLine)
        lineNumber += 1
So, another issue might be connected with the data structures.
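The difference the update hints at can be illustrated directly: a Python list of small string objects carries far more per-element overhead than numeric storage, float objects are smaller, and the standard-library array module (or a numpy array with an explicit dtype) packs the values as raw C doubles. A rough sketch (Python 3 syntax; exact sizes are CPython- and platform-dependent):

```python
import sys
from array import array

values = ['3.14159'] * 1000           # strings, as parsed from the CSV
floats = [float(x) for x in values]   # boxed Python float objects
packed = array('d', floats)           # contiguous C doubles, 8 bytes each

# Per-element cost: string objects > float objects > packed doubles.
print(sys.getsizeof(values[0]), sys.getsizeof(floats[0]), packed.itemsize)
```

On a 32-bit interpreter, shaving tens of bytes per field across millions of fields is the difference between fitting in the address space and a MemoryError.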
numpy's loadtxt method is known to be memory-inefficient, which may explain the MemoryError from your second attempt. As for reading the file yourself, why not use the csv module:
import csv
import sys

data = []
isFirstLine = True
lineNumber = 0

print "Reading input file \"" + sys.argv[1] + "\"..."
with open(sys.argv[1], 'r') as fp:
    reader = csv.reader(fp)
    reader.next()  # skip the header row
    for line in reader:
        pass
        # line is a list of the comma-delimited fields on this row