I am a Python beginner trying to figure out why the second 'for' loop doesn't work in the following script: I only get output from the first 'for' loop, and nothing from the second one. I have pasted my script and the CSV data below.
It would be helpful if you could tell me why this happens and how to make the second 'for' loop work as well.
My SCRIPT:
import csv
file = "data.csv"
fh = open(file, 'rb')
read = csv.DictReader(fh)
for e in read:
    print(e['a'])
for e in read:
    print(e['b'])
"data.csv":
a,b,c
tree,bough,trunk
animal,leg,trunk
fish,fin,body
The csv reader is an iterator over the file. Once you have gone through it, you have read to the end of the file, so there is nothing more to read. If you need to go through it again, you can seek to the beginning of the file:
fh.seek(0)
This will reset the file to the beginning so you can read it again. Depending on the code, it may also be necessary to skip the field name header:
next(fh)
This is necessary for your code, since the DictReader consumed that line the first time around to determine the field names, and it's not going to do that again. It may not be necessary for other uses of csv.
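Putting the two together for the script in the question, the second pass becomes:
fh.seek(0)   # rewind to the start of the file
next(fh)     # skip the a,b,c header line; the DictReader already knows the field names
for e in read:
    print(e['b'])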
If the file isn't too big and you need to do several things with the data, you could also just read the whole thing into a list:
data = list(read)
Then you can do what you want with data.
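For example, both of the question's loops then work against the same list:
for e in data:
    print(e['a'])
for e in data:
    print(e['b'])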
I have written a small function that takes the path of a CSV file, reads it, and returns a list of dicts in one go; you can then loop through the list very easily.
import csv

def read_csv_data(path):
    """
    Reads the CSV at the given path and returns a list of dicts,
    one per row, mapping column names to values.
    """
    with open(path, newline='') as f:
        data = csv.reader(f)
        # Read the column names from the first line of the file
        fields = next(data)
        data_lines = []
        for row in data:
            items = dict(zip(fields, row))
            data_lines.append(items)
        return data_lines
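For example, with the question's data.csv:
for row in read_csv_data('data.csv'):
    print(row['a'], row['b'])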
I was refactoring some code for my program and made a mistake somewhere in the process. I am reading and writing .csv files.
At the beginning of my program I iterate through a .csv file in order to find which data from the file I need.
with open(csvPath, mode='r') as inputFile:
    csvReader = csv.reader(inputFile)
    potentialVals = []
    paramVals = {}
    for row in csvReader:
        if row[3] == "Parameter":
            continue
        # Increment values in dict
        if row[3] not in paramVals:
            paramVals[row[3]] = 1
        else:
            paramVals[row[3]] += 1
This iterates and works fine; the for loop gets me every row in the .csv file. I then perform some calculations and iterate through the same .csv file again later, selecting data to write to a new .csv file. My problem is here: when I go to iterate through a second time, it only gives me the first row of the .csv file, and nothing else.
# Write all of the information to our new csv file
with open(outputPath, mode='w') as outputFile:
    csvWriter = csv.writer(outputFile, delimiter=',', quotechar='"', quoting=csv.QUOTE_ALL)
    inputFile.seek(0)
    rowNum = 0
    for row in csvReader:
        print(row)
Where the print statement is, it only prints the first line of the .csv file and then exits the for loop. I'm not really sure what is causing this. I thought it might have been the
inputFile.seek(0)
but even when I opened a 2nd reader, the problem persisted. This for loop was working before I refactored it; all the other code is the same except the for loop I'm having trouble with.
Edit: I thought maybe it was a variable instance error, so I tried renaming my variables instead of reusing them, but the issue persisted. Going to try a new file instance now.
Edit 2: Okay, this is interesting: when I look at the line_num value for my reader object (when I open a new one instead of using .seek), it outputs 1, so I am at the beginning of my file. And len(list(csvReader)) is 229703, which shows that the .csv is fully there, so I'm still not sure why it won't read anything besides the first row.
Edit 3: As a hail-mary attempt, I tried creating a deep copy of the .csv file and iterating through that, but got the same results. I also tried an entirely separate .csv file and still only got one row. I guess that eliminates a file issue; the information is there, but something is preventing it from being read.
Edit 4: Here is where I'm currently at with the same issue. I might just have to rewrite this method completely, but I'm going to lunch so I won't be able to respond actively for now. Thank you for the help so far!
# TODO: BUG HERE
with open(csvPath, mode='r') as inputFile2:
    csvReader2 = csv.reader(inputFile2)
    ...
    for row2 in csvReader2:
        print("CSV Line Num: " + str(csvReader2.line_num))
        print("CSV Index: " + str(rowNum))
        print("CSV Length: " + str(len(list(csvReader2))))
        print("CSV Row: " + str(row2))
Also, in case it helps, here is csvPath:
nameOfInput = input("Please enter the file you'd like to convert: ")
csvPath = os.path.dirname(os.path.realpath(nameOfInput))
csvPath = os.path.join(csvPath, nameOfInput)
If you read the documentation carefully, it says the csv reader is just a parser, and all the heavy lifting is done by the underlying file object.
In your case, you are trying to read from a closed file in the second iteration, and that is why it isn't working: the first with block closed inputFile when it exited.
For the csv reader to work, you need an underlying object which supports the iterator protocol and returns a string each time its __next__() method is called; file objects and list objects are both suitable.
Link to the documentation: https://docs.python.org/3/library/csv.html
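A minimal sketch of the fix (assuming the same csvPath and outputPath as in the question) is to do the second read inside its own with block. Also note that calling len(list(csvReader2)) inside the loop, as in Edit 4, consumes the rest of the reader by itself, which is why only the first row prints there:
with open(csvPath, mode='r', newline='') as inputFile, \
        open(outputPath, mode='w', newline='') as outputFile:
    csvReader = csv.reader(inputFile)
    csvWriter = csv.writer(outputFile, delimiter=',', quotechar='"', quoting=csv.QUOTE_ALL)
    for row in csvReader:
        # filter rows as needed, then write them out
        csvWriter.writerow(row)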
The syntax below works for me when using a list. My list is now so long that it is impractical to keep it in the program itself. Could I instead have a csv file of one single column, where I could access element i, something like MyCSVfile(i)?
import math

i = 0
while i < len(MyFile.csv):
    # Access element i in MyFile.csv
    i += 1
If so, what is the syntax change? (If it helps, the .csv file is in the same folder as the .py program.)
Note: I do not need to change the .csv file, just read it. Preferably I would also like to get its length, as you can with len(list).
To read a csv file, you need code like this:
import csv

with open("mycsvfile.csv", newline='') as f:
    reader = csv.reader(f)
    for line in reader:
        do_something(line)
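Since the question also asks for indexed access and a length, here is a minimal sketch that reads a one-column file into a plain list (the file name is assumed):
import csv

with open('MyFile.csv', newline='') as f:
    column = [row[0] for row in csv.reader(f)]  # one value per line

print(len(column))  # the length, just like len(list)
print(column[2])    # element i, here i = 2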
I want to generate a log file in which I have to print two lists for about 50 input files, so there are approximately 100 lists reported in the log file. I tried using pickle.dump, but it adds some strange characters at the beginning of each value. It also writes each value on a different line, and the enclosing brackets are not shown.
Here is a sample output from some test code.
import pickle

x = [1, 2, 3, 4]
fp = open('log.csv', 'w')
pickle.dump(x, fp)
fp.close()
output:
(lp0
I1
aI2
aI3
aI4
a.
I want my log file to report:
list 1 is: [1,2,3,4]
If you want your log file to be readable, you are approaching it the wrong way by using pickle, which "implements binary protocols", i.e. its output is not human-readable.
To get what you want, replace the line
pickle.dump(x, fp)
with
fp.write('list 1 is: ')
fp.write(str(x))
This requires minimal change to the rest of your code. However, good practice would suggest restructuring it in a cleaner style.
pickle is for storing objects in a form which you could use to recreate the original object. If all you want to do is create a log message, the builtin __str__ method is sufficient.
x = [1, 2, 3, 4]
with open('log.csv', 'w') as fp:
    print('list 1 is: {}'.format(x), file=fp)
Python's pickle is used to serialize objects, which is basically a way that an object and its hierarchy can be stored on your computer for later use.
If your goal is to write data to a csv file, then read the csv file back and output what it contains, read on.
Writing To A CSV File
import csv

data = [1, 2, 3, 4]
with open('yourFile.csv', 'w', newline='') as myFile:
    writer = csv.writer(myFile)
    writer.writerow(data)
The function writerow() writes each element of an iterable (each element of your list, in this case) to its own column of a single row. You can run through each of your lists and write it to its own row in this way. If you want to write multiple rows at once, check out the method writerows(), shown in the sketch below.
The with block closes the file and ensures everything is saved to disk when it exits.
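For example, a minimal writerows() sketch with the same file name, writing one row per inner list:
rows = [[1, 2, 3, 4], [5, 6, 7, 8]]
with open('yourFile.csv', 'w', newline='') as myFile:
    writer = csv.writer(myFile)
    writer.writerows(rows)  # both rows written in one call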
Reading A CSV File
import csv
with open('example.csv', newline='') as File:
    reader = csv.reader(File)
    for row in reader:
        print(row)
This will run through all the rows in your csv file and print each one to the console.
print("writing text to file")
prompt = '>'
data = [input(prompt) for i in range(3)]
with open('textfile.txt', 'w') as testfile:
testfile.write("\n".join(data))
with open('textfile.txt', 'r') as testfile:
print (testfile.read())
data = [line.strip('\n') for line in testfile]
data2 = testfile.readlines()
print(data)
print(data2)
After learning how to read and write text files, I have been trying to use
for line in textfile
but to no avail. In my code above, both data and data2 print as empty lists, which makes me think I am doing something really wrong. Before this I could get testfile.readlines() to work, but I was never able to use a for loop; for some reason it wouldn't even enter the loop (even a standard for loop outside of a list comprehension).
Does anyone have any ideas what I am doing incorrectly? I could not find anyone else who has this problem.
When you called
print(testfile.read())
that moved the file pointer to the end of the file. You need to bring it back to the beginning by calling
testfile.seek(0)
after that, so that the next file-reading method can read from the beginning again. Likewise, after the list comprehension assignment to data, you will need to do the same so that data2 can be populated.
The first thing you do is print(testfile.read()), which reads the entire contents of the file. After that, any further read will return nothing. You need to seek back to the beginning of the file:
testfile.seek(0)
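Applied to the question's code, a minimal sketch:
with open('textfile.txt', 'r') as testfile:
    print(testfile.read())
    testfile.seek(0)  # rewind so the comprehension sees the whole file
    data = [line.strip('\n') for line in testfile]
    testfile.seek(0)  # rewind again so readlines() does too
    data2 = testfile.readlines()
    print(data)
    print(data2)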
I've created a very simple piece of code to read tweets in JSON format from text files, determine whether they contain an id and coordinates, and if so, write these attributes to a csv file. This is the code:
f = csv.writer(open('GeotaggedTweets/ListOfTweets.csv', 'wb+'))
all_files = glob.glob('SampleTweets/*.txt')

for filename in all_files:
    with open(filename, 'r') as file:
        data = simplejson.load(file)
        if 'text' and 'coordinates' in data:
            f.writerow([data['id'], data['geo']['coordinates']])
I've been having some difficulties, but with the help of the excellent JSON Lint website I have realised my mistake: I have multiple JSON objects in each file, and from what I read these need to be separated by commas, with square brackets added at the start and end of the file.
How can I achieve this? I've seen some examples online where each individual line is read and the brackets are added to the first and last lines, but as I load the whole file at once I'm not entirely sure how to do this.
You have a file that either contains too many newlines (in the JSON values themselves) or too few (no newlines between the tweets at all).
You can still repair this by using some creative re-stitching. The following generator function should do it:
import json

def read_objects(filename):
    decoder = json.JSONDecoder()
    with open(filename, 'r') as inputfile:
        line = next(inputfile, '').strip()
        while line:
            try:
                obj, index = decoder.raw_decode(line)
                yield obj
                # Drop the decoded object plus any separating whitespace
                line = line[index:].lstrip()
            except ValueError:
                # Assume we didn't have a complete object yet
                more = next(inputfile, '')
                if not more:
                    break  # end of file with an incomplete object left over
                line += more.strip()
            if not line:
                line = next(inputfile, '').strip()
This should be able to read all your JSON objects in sequence:
for filename in all_files:
    for data in read_objects(filename):
        if 'text' in data and 'coordinates' in data:
            f.writerow([data['id'], data['geo']['coordinates']])
It is otherwise fine to have multiple JSON strings written to one file, but you need to make sure the entries are clearly separated somehow. Writing JSON entries without embedded newlines and putting a newline between them, for example, ensures you can later read them back one by one and process them sequentially without this much hassle.
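A minimal sketch of that write-then-read pattern (the file name and tweet data here are made up for illustration):
import json

tweets = [{'id': 1, 'geo': {'coordinates': [51.5, -0.13]}},
          {'id': 2, 'geo': {'coordinates': [40.7, -74.0]}}]

# Write one compact JSON document per line, newline-separated
with open('tweets.json', 'w') as out:
    for tweet in tweets:
        out.write(json.dumps(tweet) + '\n')

# Each line is now a complete JSON document, so reading back is trivial
with open('tweets.json') as infile:
    for line in infile:
        data = json.loads(line)
        print(data['id'], data['geo']['coordinates'])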