Trouble with Python order of operations/loop - python

I have some code that is meant to convert CSV files into tab delimited files. My problem is that I cannot figure out how to write the correct values in the correct order. Here is my code:
for file in import_dir:
    data = csv.reader(open(file))
    fields = data.next()
    new_file = export_dir + os.path.basename(file)
    tab_file = open(new_file, 'a+')
    for row in data:
        items = zip(fields, row)
        item = {}
        for (name, value) in items:
            item[name] = value.strip()
        tab_file.write(item['name']+'\t'+item['order_num']...)
        tab_file.write('\n'+item['amt_due']+'\t'+item['due_date']...)
Now, since both my write statements are in the for row in data loop, my headers are being written multiple times over. If I outdent the first write statement, I'll have an obvious formatting error. If I move the second write statement above the first and then outdent, my data will be out of order. What can I do to make sure that the first write statement gets written once as a header, and the second gets written for each line in the CSV file? How do I extract the first 'write' statement outside of the loop without breaking the dictionary? Thanks!

The csv module contains methods for writing as well as reading, making this pretty trivial:
import csv

with open("test.csv") as file, open("test_tab.csv", "w") as out:
    reader = csv.reader(file)
    writer = csv.writer(out, dialect=csv.excel_tab)
    for row in reader:
        writer.writerow(row)
No need to do it all yourself. Note my use of the with statement, which should always be used when working with files in Python.
Edit: Naturally, if you want to select specific values, you can do that easily enough. You appear to be making your own dictionary to select the values - again, the csv module provides DictReader to do that for you:
import csv

with open("test.csv") as file, open("test_tab.csv", "w") as out:
    reader = csv.DictReader(file)
    writer = csv.writer(out, dialect=csv.excel_tab)
    for row in reader:
        writer.writerow([row["name"], row["order_num"], ...])
As kirelagin points out in the comments, writer.writerows() could also be used, here with a generator expression:
writer.writerows([row["name"], row["order_num"], ...] for row in reader)

Extract the code that writes the headers outside the main loop, in such a way that it only gets written exactly once at the beginning.
Also, consider using the CSV module for writing CSV files (not just for reading), don't reinvent the wheel!
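A minimal sketch of that idea, using hypothetical column names ('name', 'order_num') and in-memory files for brevity: the header line is written once, before the row loop, and one data line is written per row.

```python
import csv
import io

# Hypothetical input; in the question this would be the CSV file being converted.
src = io.StringIO("name,order_num\nAlice,1\nBob,2\n")
out = io.StringIO()

reader = csv.DictReader(src)
out.write("name\torder_num\n")  # header: written exactly once, outside the loop
for row in reader:
    # data line: written once per CSV row
    out.write(row["name"] + "\t" + row["order_num"] + "\n")

print(out.getvalue())
```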

Ok, so I figured it out, but it's not the most elegant solution. Basically, I just ran the first loop, wrote to the file, then ran it a second time and appended the results. See my code below. I would love any input on a better way to accomplish what I've done here. Thanks!
for file in import_dir:
    data = csv.reader(open(file))
    fields = data.next()
    new_file = export_dir + os.path.basename(file)
    tab_file = open(new_file, 'a+')
    for row in data:
        items = zip(fields, row)
        item = {}
        for (name, value) in items:
            item[name] = value.strip()
        tab_file.write(item['name']+'\t'+item['order_num']...)
    tab_file.close()

for file in import_dir:
    data = csv.reader(open(file))
    fields = data.next()
    new_file = export_dir + os.path.basename(file)
    tab_file = open(new_file, 'a+')
    for row in data:
        items = zip(fields, row)
        item = {}
        for (name, value) in items:
            item[name] = value.strip()
        tab_file.write('\n'+item['amt_due']+'\t'+item['due_date']...)
    tab_file.close()

Related

How to find the size of a csv and still be able to iterate on the reader object [duplicate]

I am probably making a stupid mistake, but I can't find where it is. I want to count the number of lines in my csv file. I wrote this, and it obviously isn't working: I get row_count = 0 when it should be 400. Cheers.
f = open(adresse,"r")
reader = csv.reader(f,delimiter = ",")
data = [l for l in reader]
row_count = sum(1 for row in reader)
print row_count
with open(adresse, "r") as f:
    reader = csv.reader(f, delimiter=",")
    data = list(reader)
    row_count = len(data)
You are trying to read the file twice, when the file pointer has already reached the end of file after saving the data list.
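The exhausted-reader behaviour can be demonstrated with an in-memory file: once list(reader) has consumed everything, a second pass over the same reader yields nothing unless the underlying file object is rewound with seek(0) first.

```python
import csv
import io

# In-memory stand-in for the CSV file: header plus two data rows.
f = io.StringIO("a,b\n1,2\n3,4\n")
reader = csv.reader(f)

data = list(reader)                      # consumes the whole file
empty_count = sum(1 for row in reader)   # reader is exhausted: yields 0 rows

f.seek(0)                                # rewind the underlying file object
row_count = sum(1 for row in csv.reader(f))

print(len(data), empty_count, row_count)
```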
First you have to open the file with open:
input_file = open("nameOfFile.csv", "r+")
Then use csv.reader to read the csv:
reader_file = csv.reader(input_file)
Finally, you can get the number of rows with len:
value = len(list(reader_file))
The complete code is:
input_file = open("nameOfFile.csv","r+")
reader_file = csv.reader(input_file)
value = len(list(reader_file))
Remember that if you want to reuse the csv file, you have to call input_file.seek(0), because building a list from reader_file reads the whole file, and the pointer's position in the file changes.
If you are working with python3 and have pandas library installed you can go with
import pandas as pd
results = pd.read_csv('f.csv')
print(len(results))
I would consider using a generator. It would do the job and keep you safe from a MemoryError of any kind:
def generator_count_file_rows(input_file):
    for row in open(input_file, 'r'):
        yield row
And then
count = 0
for row in generator_count_file_rows('very_large_set.csv'):
    count += 1
The important stuff is hidden in the comments section of the solution marked correct. Re-sharing Erdős-Bacon's solution here for better visibility.
Why? Because it saves a lot of memory without having to create a list. So I think it is better to do it this way:
def read_raw_csv(file_name):
    with open(file_name, 'r') as file:
        csvreader = csv.reader(file)
        # count number of rows
        entry_count = sum(1 for row in csvreader)
        print(entry_count - 1)  # -1 is for discarding the header row
Check out this link for more info.
# with built-in libraries
from csv import reader
opened_file = open('f.csv')
read_file = reader(opened_file)
apps_data = list(read_file)
rowcount = len(apps_data)  # which includes the header row
print("Total rows including header: " + str(rowcount))
Simply open the csv file in Notepad++. It shows the total row count in a jiffy. :)
Or
in the cmd prompt, provide the file path and key in the command:
find /c /v "some meaningless string" Filename.csv

Correlate data from two CSVs and write the data to the first CSV using Python

I'm having trouble figuring out where to dive in on this personal project and I was hoping this community could help me create a Python script to deal with this data.
I have a CSV file that contains a list of meals fed to dogs at an animal rescue, associated by with the kennel number:
Source CSV - mealsandtreats.csv
blank_column,Kennel_Number,Species,Food,Meal_ID
,1,Dog,Meal,11.2
,5,Dog,Meal,45.2
,3,Dog,Meal,21.4
,4,Dog,Meal,17
,2,Dog,Meal,11.2
,4,Dog,Meal,21.4
,6,Dog,Meal,17
,2,Dog,Meal,45.2
I have a second CSV file that provides a key which maps the meals to what treats come with the meal:
Meal to Treat Key - MealsToTreatsKey.csv
Meals_fed,Treats_fed
10.1,2.4
11.2,2.4
13.5,3
15.6,3.2
17,3.2
20.1,5.1
21.4,5.2
35.7,7.7
45.2,7.9
I need to take every meal type (e.g., dropping duplicate entries) that was delivered from table 1, find the associated treat type, and then create an individual entry for every time a treat was served to a specific kennel. The final result should look something like this:
Result CSV - mealsandtreats.csv
blank_column,Kennel_Number,Species,Food,Meal_ID
,1,Dog,Meal,11.2
,5,Dog,Meal,45.2
,3,Dog,Meal,21.4
,4,Dog,Meal,17
,2,Dog,Meal,11.2
,4,Dog,Meal,21.4
,6,Dog,Meal,17
,2,Dog,Meal,45.2
,1,Dog,Treat,2.4
,5,Dog,Treat,7.9
,3,Dog,Treat,5.2
,4,Dog,Treat,3.2
,1,Dog,Treat,2.4
,4,Dog,Treat,5.2
Would prefer to do this with the csv module and not Pandas, but I'm open to using Pandas if necessary.
I have a bit of code so far just opening the CSVs, but I'm really stuck on where to go next:
import csv
with open('./meals/results/foodToTreats.csv', 'r') as t1, \
        open('./results/food.csv', 'r') as t2:
    key = t1.readlines()
    map = t2.readlines()

with open('./results/food.csv', 'w') as outFileF:
    for line in map:
        if line not in key:
            outFileF.write(line)

with open('./results/foodandtreats.csv', 'w') as outFileFT:
    for line in map:
        if line not in key:
            outFileFT.write(line)
So basically I just need to take every treat entry in the 2nd sheet, search for matching associated food entries in the 1st sheet, look up the kennel number associated with that entry and then write it to the 1st sheet.
Giving it my best shot in pseudo code, something like:
for x in column 0,y:
    y,1 = Z
    food = x
    treat = y
    kennel_number = z
when x,z:
    writerows('', {'kennel_number"}, 'species', '{food/treat}', {'meal_id"})
Update: Here is the exact code I'm using, thanks to @wwii. I'm seeing a minor bug:
import csv
import collections

treats = {}
with open('mealsToTreatsKey.csv') as f2:
    for line in f2:
        meal, treat = line.strip().split(',')
        treats[meal] = treat

new_items = set()
Treat = collections.namedtuple('Treat', ['blank_column', 'Kennel_Number', 'Species', 'Food', 'Meal_ID'])
with open('foodandtreats.csv') as f1:
    reader = csv.DictReader(f1)
    for row in reader:
        row['Food'] = 'Treat'
        row['Meal_ID'] = treats[row['Meal_ID']]
        new_items.add(Treat(**row))
    fieldnames = reader.fieldnames

with open('foodandtreats.csv', 'a') as f1:
    writer = csv.DictWriter(f1, fieldnames)
    for row in new_items:
        writer.writerow(row._asdict())
This works perfectly except for one small bug. The first new row written isn't starting on its own line:
Make a dictionary mapping meals to treats
treats = {}
with open(treatfile) as f2:
    for line in f2:
        meal, treat = line.strip().split(',')
        treats[meal] = treat
Iterate over the meal file and create set of new entries. Use namedtuples for the new items.
import collections

new_items = set()
Treat = collections.namedtuple('Treat', ['blank_column', 'Kennel_Number', 'Species', 'Food', 'Meal_ID'])
with open(mealfile) as f1:
    reader = csv.DictReader(f1)
    for row in reader:
        row['Food'] = 'Treat'
        row['Meal_ID'] = treats[row['Meal_ID']]
        new_items.add(Treat(**row))
    fieldnames = reader.fieldnames
Open the meal file (again) for appending and write the new entries
with open(mealfile, 'a') as f1:
    writer = csv.DictWriter(f1, fieldnames)
    for row in new_items:
        writer.writerow(row._asdict())
If the meals file does not end with a newline character, you will need to add one before writing the new treat lines. Since you have control of the files you should just make sure it always ends in a blank line.
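The check described above can be sketched as follows (the file name is hypothetical): before appending, look at the file's last byte and add a newline if one is missing, so the first appended row starts on its own line.

```python
import csv
import os

# Hypothetical demo file that deliberately lacks a trailing newline.
path = "meals_demo.csv"
with open(path, "w") as f:
    f.write("a,b\n1,2")                  # note: no newline after the last row

# Ensure the file ends with a newline before appending.
with open(path, "rb+") as f:
    f.seek(-1, os.SEEK_END)              # position on the last byte
    if f.read(1) != b"\n":
        f.write(b"\n")                   # appended rows now start on a new line

with open(path, "a", newline="") as f:
    csv.writer(f).writerow(["3", "4"])

with open(path) as f:
    content = f.read()
print(content)
```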

Extracting variable names and data from csv file using Python

I have a csv file that has each line formatted with the line name followed by 11 pieces of data. Here is an example of a line.
CW1,0,-0.38,2.04,1.34,0.76,1.07,0.98,0.81,0.92,0.70,0.64
There are 12 lines in total, each with a unique name and data.
What I would like to do is extract the first cell from each line and use that to name the corresponding data, either as a variable equal to a list containing that line's data, or maybe as a dictionary, with the first cell being the key.
I am new to working with inputting files, so the farthest I have gotten is to read the file in using the stock solution in the documentation
import csv
path = r'data.csv'
with open(path, 'rb') as csvFile:
    reader = csv.reader(csvFile, delimiter=' ')
    for row in reader:
        print(row[0])
I am failing to figure out how to assign each row to a new variable, especially when I am not sure what the variable names will be (this is because the csv file will be created by a user other than myself).
The destination for this data is a tool that I have written. It accepts lists as input such as...
CW1 = [0,-0.38,2.04,1.34,0.76,1.07,0.98,0.81,0.92,0.70,0.64]
so this would be the ideal end solution. If it is easier, and considered better to have the output of the file read be in another format, I can certainly re-write my tool to work with that data type.
As Scironic said in their answer, it is best to use a dict for this sort of thing.
However, be aware that dict objects do not have any "order" - the order of the rows will be lost if you use one. If this is a problem, you can use an OrderedDict instead (which is just what it sounds like: a dict that "remembers" the order of its contents):
import csv
from collections import OrderedDict as od

data = od()  # ordered dict object remembers the order in the csv file
with open(path, 'rb') as csvFile:
    reader = csv.reader(csvFile, delimiter=',')  # the sample data is comma-delimited
    for row in reader:
        data[row[0]] = row[1:]  # slice the row into 0 (first item) and 1: (remaining)
Now if you go looping through your data object, the contents will be in the same order as in the csv file:
for d in data.values():
    myspecialtool(*d)
You need to use a dict for these kinds of things (dynamic variables):
import csv

path = r'data.csv'
data = {}
with open(path, 'rb') as csvFile:
    reader = csv.reader(csvFile, delimiter=',')
    for row in reader:
        data[row[0]] = row[1:]
dicts are especially useful for dynamic variables and are the best method to store things like this. To access a value you just need:
data['CW1']
This solution also means that if you add any extra rows in with new names, you won't have to change anything.
If you are desperate to have the variable names in the global namespace and not within a dict, use exec (N.B. IF ANY OF THIS USES INPUT FROM OUTSIDE SOURCES, USING EXEC/EVAL CAN BE HIGHLY DANGEROUS (rm * level) SO MAKE SURE ALL INPUT IS CONTROLLED AND UNDERSTOOD BY YOURSELF).
with open(path, 'rb') as csvFile:
    reader = csv.reader(csvFile, delimiter=',')
    for row in reader:
        exec("{} = {}".format(row[0], row[1:]))
In python, you can use slicing: row[1:] will contain the row, except the first element, so you could do:
>>> d={}
>>> with open("f") as f:
... c = csv.reader(f, delimiter=',')
... for r in c:
... d[r[0]]=map(int,r[1:])
...
>>> d
{'var1': [1, 3, 1], 'var2': [3, 0, -1]}
Regarding variable variables, check How do I do variable variables in Python? or How to get a variable name as a string in Python?. I would stick to dictionary though.
An alternative to using the proper csv library could be as follows:
path = r'data.csv'
csvRows = open(path, "r").readlines()
dataRows = [[float(col) for col in row.rstrip("\n").split(",")[1:]] for row in csvRows]
for dataRow in dataRows:  # where dataRow is a list of numbers
    print dataRow
You could then call your function where the print statement is.
This reads the whole file in and produces a list of lines with trailing newlines. It then removes each newline and splits each row into a list of strings, skipping the initial column and calling float() on each entry, resulting in a list of lists. How well this suits you depends on how important the first column is.

Add Dictionary to a list python

I am new to python and I'm trying to create a csv parsing script.
I pass rows from the csv to a list but what currently troubles me is that I need to add the first header line as a dictionary in each item.
def parse_csv(datafile):
    data = []
    with open(datafile, "r") as f:
        next(f)  # skip header line
        for line in f:
            splitLine = line.strip(',')
            rowL = splitLine.rstrip('\n')  # remove the newline char
            data.append(rowL)
        pprint(data)
    return data
If the 1st header line has the keys (e.g. Title, Name, etc.), how am I going to map them to each stripped element?
e.g {'Dict1': 'data1', 'Dict2': 'data2' }
This may be considered duplicate but tried various ways from similar posts but none worked properly on my case.
I strongly recommend using the provided csv library. It will save you a lot of time and effort. Here is what you want to do:
import csv

data = []
with open(datafile, 'r') as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        data.append(row)
        print(row['Title'], row['Name'])
In this example each row is actually a python dictionary.
@GeorgiDimitrov is certainly right that the proper approach is to use the csv module from the standard library, but, if you're doing this only for self-instruction purposes, then...:
def parse_csv(datafile):
    data = []
    with open(datafile, "r") as f:
        headers = next(f).strip().split(',')
        for line in f:
            splitLine = line.strip().split(',')
            dd = dict(zip(headers, splitLine))
            data.append(dd)
        pprint(data)
    return data
This will not properly deal with quoted/escaped commas, &c -- all subtleties that are definitely best left to the csv module:-).
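The quoted-comma pitfall is easy to demonstrate with a hypothetical line: str.split(',') cuts inside a quoted field, while the csv module parses it correctly.

```python
import csv
import io

# Hypothetical row whose second field contains an embedded, quoted comma.
line = 'CW1,"1,234",0.5'

naive = line.split(',')                        # splits inside the quotes: 4 pieces
proper = next(csv.reader(io.StringIO(line)))   # 3 fields, quoting handled

print(naive)
print(proper)
```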

Reading from a file into a list and have each element go through the program

Sorry if the title is confusing. What I want to do is have a text file (keywords.txt) read and then split into a list. So basically, let's say the file contained "iphone, keys, wallet, pen, folder"; I would want the list to be [iphone, keys, wallet, pen, folder].
Is there any way to set one variable to work for each element? Say the variable is query. Is there any way for query to be each of the elements so it can go through the program and work for each element? Below is the code I have; it obviously doesn't work, but that is what I want to happen if possible.
The reason I want to do it for each is because eventually the script will write a new text file for each of the elements and name it based on what the element is, and the only way I know how to do that is by having one variable.
data = [line.strip() for line in open('keywords.txt', 'r')]
try:
    query = sys.argv[1]
except IndexError:
    query = item in data
Here is the rest of the code that I will be performing. It will take what is in the list that is created and create a new textfile and a csv file.
newFile = open("%s.txt" % query, 'w').write(txt.encode('utf8'))
with open("%s.txt" % query, 'rb') as input_file:
    reader = csv.reader(input_file, delimiter='\n', quoting=csv.QUOTE_NONE)
    with open("%s.csv" % query, 'wb') as output_file:
        writer = csv.writer(output_file)
        for row in reader:
            writer.writerow(row)
Turn the query value taken from the command line into a list instead, then loop over the query list:
try:
    query = [sys.argv[1]]
except IndexError:
    query = data

for q in query:
    # do something with q
def process_keywords_in_file(file_name):
    with open(file_name) as f:
        for line in f:
            process(line.strip())

def process(keyword):
    # your code
if you want to write a new file with the name of the keyword:
with open('%s.txt' % keyword, 'w') as fw:
    fw.write('content')
