CSV Sample
So I have a csv file (sample in link above), with variable names in row 7 and values in row 8. The variables all have units after them, and the values are just numbers, like this:
Velocity (ft/s) Volumetric (Mgal/d) Mass Flow (klb/d) Sound Speed (ft/s)
-0.121 1.232 1.4533434 1.233423
There are a lot more variables, but basically I need some way to search the csv file for the specific unit groups and then append the associated values to a list. For example, search for the text "(ft/s)" and then make a dictionary with Velocity and Sound Speed as keys and their associated values. I am unable to do this because the csv is formatted like an Excel spreadsheet, and each cell contains the whole variable name with its unit.
In the end I will have a dictionary for each unit group. I need to do it this way because the unit groups change with each csv file generated (ft/s becomes m/s, for instance). I also can't use an Excel reader, because that doesn't work in IronPython.
You can use the csv module to read the appropriate lines into lists. defaultdict is a good choice for data aggregation, and variable names and units can easily be separated by splitting on '('.
import csv
import collections

with open(csv_file_name) as fp:
    reader = csv.reader(fp)
    for k in range(6):           # skip 6 lines
        next(reader)
    varnames = next(reader)      # 7th line
    values = next(reader)        # 8th line
    groups = collections.defaultdict(dict)
    for i, (col, value) in enumerate(zip(varnames, values)):
        if i < 2:                # skip the first two columns
            continue
        name, units = map(str.strip, col.strip(')').split('(', 1))
        groups[units][name] = float(value)
Edit: added the code to skip first two columns
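Assuming the real file has two leading columns (say, a date and a time stamp) before the measurements, groups ends up keyed by unit: groups['ft/s'] would hold the Velocity and Sound Speed entries, groups['Mgal/d'] the Volumetric entry, and so on.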
I'll help with the part I think you're stuck on, which is extracting the units from the category. Given your data, your best bet may be to use a regex; the following should work:
import re

f = open('data.csv')
# I assume the first row has the header you listed in your question
header = f.readline().split(',')  # since you said it's a csv
for item in header:
    print re.search(r'\(.+\)', item).group()
    print re.sub(r'\(.+\)', '', item)
That should print the following for you:
(ft/s)
Velocity
(Mgal/d)
Volumetric
(klb/d)
Mass Flow
(ft/s)
Sound Speed
You can modify the above to store these in a list, then iterate through them to find duplicates and merge the appropriate strings to dictionaries or whatnot.
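If it helps, here is a rough sketch of how the two ideas (skipping down to the 7th/8th rows and pulling the unit out with a regex) could be combined. It's hedged: it assumes every header cell that matters ends with a parenthesized unit, and columns without one are simply skipped.

import csv
import re
from collections import defaultdict

groups = defaultdict(dict)   # unit -> {variable name: value}

with open('data.csv') as f:
    reader = csv.reader(f)
    for _ in range(6):                 # skip down to the 7th line
        next(reader)
    header = next(reader)              # variable names with units
    values = next(reader)              # the numbers
    for col, value in zip(header, values):
        match = re.search(r'\((.+)\)', col)
        if not match:
            continue                   # no unit in this column, skip it
        unit = match.group(1)
        name = re.sub(r'\(.+\)', '', col).strip()
        groups[unit][name] = float(value)

# groups['ft/s'] would then hold Velocity and Sound Speed, for example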
I have the header name of a column from a series of massive csv files with 50+ fields. Across the files, the index of the column I need is not always the same.
I have written code that finds the index number of the column in each file. Now I'd like to add only this column as the key in a dictionary where the value counts the number of unique strings in this column.
Because these csv files are massive and I'm trying to follow best practices for efficient data engineering, I'm looking for a solution that uses minimal memory. Every solution I find for reading a csv into a dictionary involves loading all of the data in the csv into the dictionary, and I don't think that is necessary. It seems the best solution would read in only the data from this one column and use that column for the dictionary keys.
So, let's take this as sample data:
FOODS;CALS
"PIZZA";600
"PIZZA";600
"BURGERS";500
"PIZZA";600
"PASTA";400
"PIZZA";600
"SALAD";100
"CHICKEN WINGS";300
"PIZZA";600
"PIZZA";600
The result I want:
food_dict = {'PIZZA': 6, 'PASTA': 1, 'BURGERS': 1, 'SALAD': 1, 'CHICKEN WINGS': 1}
Now let's say that I want the data from only the FOODS column; in this case, I have stored its index in the variable food_index.
Here's what I have tried, the problem being that the columns are not always in the same index location across the different files, so this solution won't work:
from itertools import islice

food_dict = {}

with open(input_data_txt, "r") as file:
    # This enables skipping the header line.
    skipped = islice(file, 1, None)
    for i, line in enumerate(skipped, 2):
        try:
            food, cals = line.split(";")
        except ValueError:
            pass
        if food not in food_dict:
            food_dict[food] = 1
        else:
            food_dict[food] += 1
This solution works for only this sample -- but only if I know the location of the columns ahead of time -- and again, a reminder that I have upwards of 50 columns and the index position of the column I need is different across files.
Is it possible to do this? Again, built-ins only -- no Pandas or Numpy or other such packages.
The important part here is that you do not skip the header line! You need to split that line and find the indices of the columns you need! Since you know the column headers for the information you need, put those into a reference list:
wanted_headers = ["FOODS", "RECYCLING"]

with open(input_data_txt, "r") as infile:
    header = infile.readline().strip().split(';')
    wanted_cols = [header.index(label) for label in wanted_headers if label in header]
    # wanted_cols is now a list of column numbers you want
    for line in infile.readlines():  # Iterate through remaining file
        fields = line.strip().split(';')
        data = [fields[col] for col in wanted_cols]
You now have the data in the same order as your existing headers; you can match it up or rearrange as needed.
Does that solve your blocking point? I've left plenty of implementation for you ...
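If it helps to see it end to end, here is a hedged sketch combining the header lookup with the counting. It assumes the file is semicolon-delimited with a header row, as in your sample, and that the column is literally named FOODS:

wanted_header = "FOODS"
food_dict = {}

with open(input_data_txt, "r") as infile:
    header = infile.readline().strip().split(';')
    food_col = header.index(wanted_header)   # position varies from file to file
    for line in infile:
        fields = line.strip().split(';')
        if len(fields) <= food_col:
            continue                         # skip blank or short lines
        food = fields[food_col].strip('"')
        food_dict[food] = food_dict.get(food, 0) + 1

# food_dict -> {'PIZZA': 6, 'BURGERS': 1, 'PASTA': 1, 'SALAD': 1, 'CHICKEN WINGS': 1}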
Use Counter and csv:
from collections import Counter
import csv

with open(filename) as f:
    reader = csv.reader(f)
    next(reader, None)  # skips header
    histogram = Counter(line[0] for line in reader)
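Since the sample file is semicolon-delimited and the column position varies across files, a hedged adaptation could look the header index up first (the column name FOODS is assumed from the sample):

from collections import Counter
import csv

with open(filename) as f:
    reader = csv.reader(f, delimiter=';')
    header = next(reader)
    food_index = header.index("FOODS")   # find the column wherever it sits
    histogram = Counter(row[food_index] for row in reader if row)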
I am attempting to use the CSV module of python to modify a CSV file. The file represents a stock and lists (as columns) the date, open price, high price, low price, close price, and volume for the day. What I would like to do is create multiple new columns by performing algebra on the existing data. For instance, I would like to create a column for the percentage from the open price to the high price for any given day and another for the percentage change from yesterday's close to today's close (no end in sight here, as of now thinking of about 10 columns to add).
Is there a compact way to do this? As of now, I am opening the original file and reading the values of interest into lists, then writing the modified values to a temp file using those lists, then writing a new file with a for loop that combines the rows from each file, and finally writing the entire contents of that new file back onto the original csv, as I would like to keep the csv's name (ticker.csv).
Hopefully I have made my issue clear. If you would like any clarification or further details, please do not hesitate.
edit: I have included a snippet of the code for one function below. The function seeks to create a new column that has the percent change from yesterday's close to today's close.
def add_col_pchange(ticker):
    """
    Add column with percent change in closing price.
    """
    original = open('file1', 'rb')
    reader = csv.reader(original)
    reader.next()
    close = list()
    for row in reader:
        # build list of close values; entries from top to bottom are reverse chronological
        # index 4 corresponds to "Close" column
        close.append(float(row[4]))
    original.close()
    new = open('file2', 'wb')
    writer = csv.writer(new)
    writer.writerow(["Percent Change"])
    pchange = list()
    for i in range(0, len(close) - 1):
        x = (close[i] - close[i+1]) / close[i+1]
        pchange.append(x)
    new.close()
    # open original and new csv's as read, write out to some new file.
    # later, copy that entire file to original csv in order to maintain
    # original csv's name and include new data
Hope this helps
import csv

def add_col_pchange(ticker):
    """
    Add column with percent change in closing price.
    """
    # always use with to transparently manage opening/closing files
    with open('ticker.csv', 'rb') as original:
        spam = csv.reader(original)
        headers = spam.next()  # get header row
        # get all of the data at one time, then transpose it using zip
        data = zip(*[row for row in spam])
    # build list of close values; entries from top to bottom are reverse chronological
    # index 4 corresponds to "Close" column
    close = data[4]  # the 5th column has close values
    # use map to process the whole column at one time
    f_pchange = lambda close0, close1: 100 * (float(close0) - float(close1)) / float(close1)
    Ndays = len(close)  # length of table
    pchange = map(f_pchange, close[:-1], close[1:])  # list of percent changes
    pchange = tuple(pchange) + (None,)  # no previous close for the last (oldest) day
    headers.append("Percent Change")  # add column name to headers
    data.append(pchange)
    data = zip(*data)  # transpose back to rows
    with open('ticker.csv', 'wb') as new:
        spam = csv.writer(new)
        spam.writerow(headers)  # write headers
        for row in data:
            spam.writerow(row)
    # open original and new csv's as read, write out to some new file.
    # later, copy that entire file to original csv in order to maintain
    # original csv's name and include new data
You should also check out numpy: you could use loadtxt() and vector math. But @lightalchemist is right, pandas was designed just for this.
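For illustration, a minimal pandas sketch (hedged: it assumes ticker.csv has a header row with a Close column and that rows run newest to oldest, as described in the question):

import pandas as pd

df = pd.read_csv('ticker.csv')
# rows are reverse chronological, so compare each close with the next (older) row
df['Percent Change'] = df['Close'].pct_change(periods=-1) * 100
df.to_csv('ticker.csv', index=False)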
I am trying to print random values from a csv for a given column name/index (my second day in the Python world :) ).
I have so far managed to write the following -
#!/usr/bin/python
import csv     # This will help us read csv formatted files.
import random  # This gives us the random methods.

load_file = open('<filename>', "rb")
reader = csv.reader(load_file)  # The reader will put each line of the
                                # csv file into a list of columns.
for row in reader:
    from random import shuffle
    shuffle(row[2])
    print row[2]
load_file.close()
It is printing shuffled (random) values from the third column in the file.
Objectives:
- Define the number of values: 1000, 2000, 50000, etc.
- The values are highly skewed; how do I ensure a uniform distribution? E.g. if the column has mostly 0s and a few 1s, I want to see both values in the output for any sample size.
- Write this into a file (not urgent at this point).
I am using python 2.6.6
Here is an unrelated example to show you how the shuffle and pop methods can be used:
from random import shuffle

a = [1, 2, 3, 4, 5]
shuffle(a)
print a
# [5, 1, 4, 2, 3]
print a.pop()
# 3
print a
# [5, 1, 4, 2]
The pop method without any arguments deletes the last element from a list and then returns it. However, since you are shuffling the list beforehand, you will get a random sequence every time.
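Applied to your column, a rough sketch might be the following (N is an illustrative name for the sample size you want; data stands for the list of third-column values you already read from the csv):

from random import shuffle

N = 1000                 # illustrative sample size
values = list(data)      # e.g. the third-column values read from the csv
shuffle(values)
sample = [values.pop() for _ in range(min(N, len(values)))]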
From what I understand, you want to do this:
Read a CSV file with an unknown number of rows;
Gather all the items in a given column, say column 2;
Choose at random one row from that column.
If that is correct, it is fairly easy to do.
Suppose we have a CSV file like so:
1,2,3,4
5,6,7,8
9,10,11,12
13,14,15,16
Usually you would deal with a CSV file row by row. Since you want all the data from a column, you need to read the entire file before you have a set of data you can work with since the total number of rows is not known.
Here is a way:
import csv

col = 2

with open(fn, 'r') as f:
    reader = csv.reader(f)
    data = [row[col] for row in reader]

print data
# ['3', '7', '11', '15']
Then if you want a single random number out of that list, use random.choice(data)
If you want to shuffle all the items in that column, use random.shuffle(data) then print it as a column using something like print '\n'.join(data) if all the elements of data are strings.
Thanks #dawg, #sshashank124 and others -
here is the code -
#!/usr/bin/python
import csv     # This will help us read csv formatted files.
import random  # random methods

col = 2
with open('<filename>', 'r') as f:
    reader = csv.reader(f)
    data = [row[col] for row in reader]

from random import shuffle
shuffle(data)
print '\n'.join(data[:100])
It is giving me output in the form of a column.
I am going to try to write it as a function and add other features next. I might start a separate thread for that.
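As a hedged sketch of what such a function might look like (the function and parameter names here are made up, and a plain comma-separated file is assumed), with an optional per-value cap to deal with the skew mentioned above:

import csv
import random

def sample_column(filename, col, sample_size):
    """Return up to sample_size randomly ordered values from one column."""
    with open(filename, 'r') as f:
        data = [row[col] for row in csv.reader(f) if row]
    random.shuffle(data)
    return data[:sample_size]

def balanced_sample(filename, col, per_value):
    """Cap each distinct value at per_value occurrences so rare values still show up."""
    with open(filename, 'r') as f:
        data = [row[col] for row in csv.reader(f) if row]
    counts = {}
    sample = []
    for v in data:
        counts[v] = counts.get(v, 0) + 1
        if counts[v] <= per_value:
            sample.append(v)
    random.shuffle(sample)
    return sample

# e.g. print '\n'.join(sample_column('<filename>', 2, 1000))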
I asked a question about two hours ago regarding the reading and writing of data from a website. I've spent the last two hours since then trying to find a way to read the maximum date value from column 'A' of the output, comparing that value to the refreshed website data, and appending any new data to the csv file without overriding the old ones or creating duplicates.
The code that is currently 100% working is this:
import requests
symbol = "mtgoxUSD"
url = 'http://api.bitcoincharts.com/v1/trades.csv?symbol={}'.format(symbol)
data = requests.get(url)
with open("trades_{}.csv".format(symbol), "r+") as f:
f.write(data.text)
I've tried various ways of finding the maximum value of column 'A'. I've tried a bunch of different ways of using "Dict" and other methods of sorting/finding max, and even using pandas and numpy libs. None of which seem to work. Could someone point me in the direction of a decent way to find the maximum of a column from the .csv file? Thanks!
if you have it in a pandas DataFrame, you can get the max of any column like this:
>>> max(data['time'])
'2012-01-18 15:52:26'
where data is the variable name for the DataFrame and time is the name of the column
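A minimal sketch of getting there from the saved file (hedged: it assumes the first row of the csv is a header that includes a time column; pass header=None and explicit names to read_csv if it is not):

import pandas as pd

data = pd.read_csv("trades_mtgoxUSD.csv")
print(data['time'].max())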
I'll give you two answers, one that just returns the max value, and one that returns the row from the CSV that includes the max value.
import csv
import operator as op
import requests

symbol = "mtgoxUSD"
url = 'http://api.bitcoincharts.com/v1/trades.csv?symbol={}'.format(symbol)
csv_file = "trades_{}.csv".format(symbol)

data = requests.get(url)
with open(csv_file, "w") as f:
    f.write(data.text)

with open(csv_file) as f:
    next(f)  # discard first row from file -- see notes
    max_value = max(row[0] for row in csv.reader(f))

with open(csv_file) as f:
    next(f)  # discard first row from file -- see notes
    max_row = max(csv.reader(f), key=op.itemgetter(0))
Notes:
max() can directly consume an iterator, and csv.reader() gives us an iterator, so we can just pass that in. I'm assuming you might need to throw away a header line so I showed how to do that. If you had multiple header lines to discard, you might want to use islice() from the itertools module.
In the first one, we use a "generator expression" to select a single value from each row, and find the max. This is very similar to a "list comprehension" but it doesn't build a whole list, it just lets us iterate over the resulting values. Then max() consumes the iterable and we get the max value.
max() can use a key= argument where you specify a "key function". It will use the key function to get a value and use that value to figure the max... but the value returned by max() will be the unmodified original value (in this case, a row value from the CSV). In this case, the key function is manufactured for you by operator.itemgetter()... you pass in which column you want, and operator.itemgetter() builds a function for you that gets that column.
The resulting function is the equivalent of:
def get_col_0(row):
    return row[0]

max_row = max(csv.reader(f), key=get_col_0)
Or, people will use lambda for this:
max_row = max(csv.reader(f), key=lambda row: row[0])
But I think operator.itemgetter() is convenient and nice to read. And it's fast.
I showed saving the data in a file, then pulling from the file again. If you want to go through the data without saving it anywhere, you just need to iterate over it by lines.
Perhaps something like:
text = data.text
rows = [line.split(',') for line in text.split("\n") if line]
rows.pop(0) # get rid of first row from data
max_value = max(row[0] for row in rows)
max_row = max(rows, key=op.itemgetter(0))
I don't know which column you want... column "A" might be column 0 so I used 0 in the above. Replace the column number as you like.
It seems like something like this should work:
import requests
import csv

symbol = "mtgoxUSD"
url = 'http://api.bitcoincharts.com/v1/trades.csv?symbol={}'.format(symbol)
data = requests.get(url)

with open("trades_{}.csv".format(symbol), "r+") as f:
    all_values = list(csv.reader(f))
    max_value = max([int(row[2]) for row in all_values[1:]])
    # (write out the value?)
EDITS: I used "row[2]" because that was the sample column I was taking max of in my csv. Also, I had to strip off the column headers, which were all text, which was why I looked at "all_values[1:]" from the second row to the end of the file.
The long (winded) version:
I'm gathering research data using Python. My initial parsing is ugly (but functional) code which gives me some basic information and turns my raw data into a format suitable for heavy duty statistical analysis using SPSS. However, every time I modify the experiment, I have to dive into the analysis code.
For a typical experiment, I'll have 30 files, each for a unique user. Field count is fixed for each experiment (but can vary from one to another 10-20). Files are typically 700-1000 records long with a header row. Record format is tab separated (see sample which is 4 integers, 3 strings, and 10 floats).
I need to sort my list into categories. In a 1000 line file, I could have 4-256 categories. Rather than trying to pre-determine how many categories each file has, I'm using the code below to count them. The integers at the beginning of each line dictate what category the float values in the row correspond to. Integer combinations can be modified by the string values to produce wildly different results, and multiple combinations can sometimes be lumped together.
Once they're in categories, number crunching begins. I get statistical info (mean, sd, etc. for each category for each file).
The essentials:
I need to parse data like the sample below into categories. Categories are combos of the non-floats in each record. I'm also trying to come up with a dynamic (graphical) way to associate column combinations with categories. I will make a new post for this.
I'm looking for suggestions on how to do both.
# data is a list of tab separated records
# fields is a list of my field names

# get a list of fieldtypes via gettype on our first row
# gettype is a function to get type from string without changing data
fieldtype = [gettype(n) for n in data[1].split('\t')]
# get the indexes for fields that aren't floats
mask = [i for i, field in enumerate(fieldtype) if field != "float"]
# for each row of data [skipping first and last empty lists] we split (on tabs)
# and take the ith element of that split where i is taken from the list mask
# which tells us which fields are not floats
records = [[row.split('\t')[i] for i in mask] for row in data[1:-1]]
# we now get a unique set of combos
# since set doesn't happily take a list of lists, we join each row of values
# together in a comma separated string. So we end up with a list of strings.
uniquerecs = set([",".join(row) for row in records])

print len(uniquerecs)
quit()

def gettype(s):
    try:
        int(s)
        return "int"
    except ValueError:
        pass
    try:
        float(s)
        return "float"
    except ValueError:
        return "string"
Sample Data:
field0 field1 field2 field3 field4 field5 field6 field7 field8 field9 field10 field11 field12 field13 field14 field15
10 0 2 1 Right Right Right 5.76765674196 0.0310912272139 0.0573603238282 0.0582901376612 0.0648936500524 0.0655294305058 0.0720571099855 0.0748289246137 0.446033755751
3 1 3 0 Left Left Right 8.00982745764 0.0313840132052 0.0576521406854 0.0585844966069 0.0644905497442 0.0653386429438 0.0712603578765 0.0740345755708 0.2641076191
5 19 1 0 Right Left Left 4.69440026591 0.0313852052224 0.0583165354345 0.0592403274967 0.0659404609478 0.0666070804916 0.0715314027001 0.0743022054775 0.465994962101
3 1 4 2 Left Right Left 9.58648184552 0.0303649003017 0.0571579895338 0.0580911765412 0.0634304670863 0.0640132919609 0.0702920967445 0.0730697946335 0.556525293
9 0 0 7 Left Left Left 7.65374257547 0.030318719717 0.0568551744109 0.0577785415066 0.0640577002605 0.0647226582655 0.0711459854908 0.0739256050784 1.23421547397
Not sure if I understand your question, but here are a few thoughts:
For parsing the data files, you usually use the Python csv module.
For categorizing the data you could use a defaultdict with the non-float fields joined as a key for the dict. Example:
from collections import defaultdict
import csv

reader = csv.reader(open('data.file', 'rb'), delimiter='\t')
data_of_category = defaultdict(list)
lines = [line for line in reader]
mask = [i for i, n in enumerate(lines[1]) if gettype(n) != "float"]
for line in lines[1:]:
    category = ','.join([line[i] for i in mask])
    data_of_category[category].append(line)
This way you don't have to determine the categories in advance, and you can process the data in one pass.
And I didn't understand the part about "a dynamic (graphical) way to associate column combinations with categories".
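For the number-crunching step mentioned in the question, here is a hedged sketch of per-category statistics built on top of data_of_category. It reuses gettype() and the lines list from the code above and assumes the float columns are the ones to summarize:

import math

def mean(values):
    return sum(values) / len(values)

def stdev(values):
    m = mean(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))

float_cols = [i for i, n in enumerate(lines[1]) if gettype(n) == "float"]
for category, rows in data_of_category.items():
    for col in float_cols:
        column = [float(row[col]) for row in rows]
        sd = stdev(column) if len(column) > 1 else 0.0
        print category, col, mean(column), sd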
For at least part of your question, have a look at Named Tuples
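A tiny illustration of the idea, with field names and values taken from the sample data above (a sketch only):

from collections import namedtuple

Record = namedtuple('Record', ['field0', 'field1', 'field4'])
r = Record(10, 0, 'Right')
print r.field4   # columns become readable attributes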
Step 1: Use something like csv.DictReader to turn the text file into an iterable of rows.
Step 2: Turn that into a dict of first entry: rest of entries.
with open("...", "rb") as data_file:
lines = csv.Reader(data_file, some_custom_dialect)
categories = {line[0]: line[1:] for line in lines}
Step 3: Iterate over the items() of the data and do something with each line.
for category, line in categories.items():
    do_stats_to_line(line)
Some useful answers already but I'll throw mine in as well. Key points:
Use the csv module
Use collections.namedtuple for each row
Group the rows using a tuple of int field values as the key
If your source rows are sorted by the keys (the integer column values), you could use itertools.groupby. This would likely reduce memory consumption. Given your example data, and the fact that your files contain >= 1000 rows, this is probably not an issue to worry about.
import csv
from collections import defaultdict, namedtuple

def coerce_to_type(value):
    _types = (int, float)
    for _type in _types:
        try:
            return _type(value)
        except ValueError:
            continue
    return value

def parse_row(row):
    return [coerce_to_type(field) for field in row]

with open(datafile) as srcfile:
    data = csv.reader(srcfile, delimiter='\t')
    ## Read headers, create namedtuple
    headers = srcfile.next().strip().split('\t')
    datarow = namedtuple('datarow', headers)
    ## Wrap with parser and namedtuple
    data = (parse_row(row) for row in data)
    data = (datarow(*row) for row in data)
    ## Group by the leading integer columns
    grouped_rows = defaultdict(list)
    for row in data:
        integer_fields = [field for field in row if isinstance(field, int)]
        grouped_rows[tuple(integer_fields)].append(row)

## DO SOMETHING INTERESTING WITH THE GROUPS
import pprint
pprint.pprint(dict(grouped_rows))
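If the rows really were pre-sorted by those integer key columns, the itertools.groupby variant mentioned above might look roughly like this (a sketch only; data stands for the generator of namedtuples built in the code above, used in place of the defaultdict loop):

from itertools import groupby

def int_key(row):
    return tuple(field for field in row if isinstance(field, int))

for key, rows in groupby(data, key=int_key):
    rows = list(rows)
    print key, len(rows)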
EDIT You may find the code at https://gist.github.com/985882 useful.