I have two Excel files: IDList.csv and Database.csv. IDList contains a list of 300 ID numbers that I want to filter out of the Database, which contains 2000 entries (leaving 1700 entries in the Database).
I tried writing a for loop (for each ID in the IDList, filter out that ID in Database.csv) but am having some trouble with the filter function. I am using Pyvot (http://packages.python.org/Pyvot/tutorial.html). I get a syntax error: Python/Pyvot doesn't like my syntax for xl.filter, but I can't figure out how to correct it. This is what the documentation says:
xl.tools.filter(func, range)
Filters rows or columns by applying func to the given range. func is called for each value in the range. If it returns False, the corresponding row / column is hidden. Otherwise, the row / column is made visible.
range must be a row or column vector. If it is a row vector, columns are hidden, and vice versa.
Note that, to unhide rows / columns, range must include hidden cells. For example, to unhide a range:
xl.filter(lambda v: True, some_vector.including_hidden)
And here's my code:
import xl
IDList = xl.Workbook("IDList.xls").get("A1:A200").get()
for i in range(1,301):
    xl.filter(!=IDList[i-1], "A1:A2000")
How can I filter a column in Database.csv using criteria in IDList.csv? I am open to solutions in Python or an Excel VBA macro, although I prefer Python.
import csv

with open("IDList.csv", newline='') as inf:
    incsv = csv.reader(inf)
    not_wanted = set(row[0] for row in incsv)

with open("Database.csv", newline='') as inf, open("FilteredDatabase.csv", "w", newline='') as outf:
    incsv = csv.reader(inf)
    outcsv = csv.writer(outf)
    outcsv.writerows(row for row in incsv if row[0] not in not_wanted)
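The same set-based idea can be sketched on in-memory data; the IDs and rows below are made up for illustration:

```python
import csv
import io

# Hypothetical stand-ins for IDList.csv and Database.csv contents
id_list_csv = "101\n103\n"
database_csv = "101,alice\n102,bob\n103,carol\n104,dave\n"

# Build the set of IDs to drop, then keep only rows whose first
# column is not in that set
not_wanted = {row[0] for row in csv.reader(io.StringIO(id_list_csv))}
kept = [row for row in csv.reader(io.StringIO(database_csv))
        if row[0] not in not_wanted]
```

Using a set makes each membership test O(1), so the whole filter is a single pass over the database rather than one pass per ID.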
I'm using PyQt5 and want to compare values from a csv file with values input by the user through QLineEdit(). Then, if the values are the same, I want to get the whole row imported to a QTableWidget.
The csv file contains 3 different columns, with width values, height values and thickness values.
I've tried this to solve the first problem:
import csv

with open('csvTest.csv') as file:
    reader = csv.reader(file)
    for row in reader:
        if row[0] == self.widthTextbox.text() or row[1] == self.heightTextbox.text() or row[2] == self.thickTextbox.text():
            print("Found: {}".format(row))
This didn't work, and I know that using "or" is problematic because I want this to act like a filter: if the user only inputs one of the three attributes he'll get some rows, if he inputs two he'll get fewer rows, and if he inputs all three he will get even fewer. But using "or" lets through any line that satisfies any single condition.
The second problem is, if this worked, I'd like to make the number of rows in the table equal to the number of rows that passed through the filter, using something like self.tableWidget.setRowCount('''number of rows found''').
Finally, the last issue would be to make the QTableWidget rows identical to the ones that the filter found.
To solve the first and second issues, this could be a way:

import csv
from collections import Counter

rows_finded = []
with open('csvTest.csv') as file:
    reader = csv.reader(file)
    for row in reader:
        values = [self.widthTextbox.text(), self.heightTextbox.text(), self.thickTextbox.text()]
        if Counter(values) == Counter(row):
            rows_finded.append(row)

self.tableWidget.setRowCount(len(rows_finded))
To solve the last issue (source: Python - PyQt - QTable Widget - adding rows):

for i, row in enumerate(rows_finded):
    for j, col in enumerate(row):
        item = QTableWidgetItem(col)
        self.tableWidget.setItem(i, j, item)
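Note the Counter comparison above requires every column to match. If the goal is the filter behavior described in the question, where an empty textbox imposes no constraint, the matching logic can be sketched without Qt; the `matches` helper and the sample rows here are hypothetical:

```python
def matches(row, filters):
    # filters is a (width, height, thickness) tuple of strings;
    # an empty string means "no constraint on this column"
    return all(f == "" or f == col for f, col in zip(filters, row))

rows = [["10", "20", "3"], ["10", "25", "3"], ["12", "20", "4"]]
one_filter = [r for r in rows if matches(r, ("10", "", ""))]     # width only
two_filters = [r for r in rows if matches(r, ("10", "25", ""))]  # width and height
```

In the PyQt code, each `row` would come from `csv.reader` and the filter tuple from the three textboxes; `setRowCount(len(filtered))` then sizes the table before the setItem loop fills it.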
CSV Sample
So I have a csv file (sample in link above), with variable names in row 7 and values in row 8. The variables all have units after them, and the values are just numbers like this:
Velocity (ft/s)  Volumetric (Mgal/d)  Mass Flow (klb/d)  Sound Speed (ft/s)
0.121            1.232                1.4533434          1.233423
There are a lot more variables, but basically I need some way to search the csv file for the specific unit groups, and then append the value associated with each one to a list. For example, search for the text "(ft/s)", and then make a dictionary with Velocity and Sound Speed as keys and their associated values. I am unable to do this because the csv is formatted like an Excel spreadsheet, and each cell contains the whole variable name with its unit.
In the end I will have a dictionary for each unit group, and I need to do it this way because in each generated csv file the unit groups change (ft/s becomes m/s). I also can't use an Excel reader, because it doesn't work in IronPython.
You can use the csv module to read the appropriate lines into lists. defaultdict is a good choice for data aggregation, while variable names and units can be easily separated by splitting on '('.
import csv
import collections

with open(csv_file_name) as fp:
    reader = csv.reader(fp)
    for k in range(6):  # skip 6 lines
        next(reader)
    varnames = next(reader)  # 7th line
    values = next(reader)    # 8th line

groups = collections.defaultdict(dict)
for i, (col, value) in enumerate(zip(varnames, values)):
    if i < 2:  # skip the first two columns
        continue
    name, units = map(str.strip, col.strip(')').split('(', 1))
    groups[units][name] = float(value)
Edit: added the code to skip first two columns
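To see what the grouping produces, the same split logic can be run on hypothetical header and value rows (the column names and numbers below are made up):

```python
import collections

# Hypothetical header/value rows as the csv module would return them;
# the first two columns are skipped, as in the answer above
varnames = ["Time", "ID", "Velocity (ft/s)", "Sound Speed (ft/s)", "Mass Flow (klb/d)"]
values = ["0", "1", "0.121", "1.233423", "1.4533434"]

groups = collections.defaultdict(dict)
for i, (col, value) in enumerate(zip(varnames, values)):
    if i < 2:
        continue
    name, units = map(str.strip, col.strip(')').split('(', 1))
    groups[units][name] = float(value)
```

Both ft/s columns land in one inner dictionary keyed by variable name, which is exactly the per-unit-group structure the question asks for.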
I'll help with the part I think you're stuck on, which is trying to extract the units from the category. Given your data, your best bet may be to use regex; the following should work:
import re

# I assume the first row has the header you listed in your question
with open('data.csv') as f:
    header = f.readline().split(',')  # since you said it's a csv

for item in header:
    print(re.search(r'\(.+\)', item).group())
    print(re.sub(r'\(.+\)', '', item))
That should print the following for you:
(ft/s)
Velocity
(Mgal/d)
Volumetric
(klb/d)
Mass Flow
(ft/s)
Sound Speed
You can modify the above to store these in a list, then iterate through them to find duplicates and merge the appropriate strings to dictionaries or whatnot.
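A sketch of that list-then-merge step, using a capturing group so the parentheses are stripped from the unit; the header strings and values here are hypothetical:

```python
import re

header = ["Velocity (ft/s)", "Volumetric (Mgal/d)", "Mass Flow (klb/d)", "Sound Speed (ft/s)"]
values = [0.121, 1.232, 1.4533434, 1.233423]

by_unit = {}
for item, value in zip(header, values):
    unit = re.search(r'\((.+)\)', item).group(1)  # e.g. 'ft/s', without parens
    name = re.sub(r'\(.+\)', '', item).strip()    # e.g. 'Velocity'
    by_unit.setdefault(unit, {})[name] = value
```

Duplicate units merge automatically because `setdefault` reuses the existing inner dictionary for a unit that has already been seen.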
I have a CSV file read with Python, and I need to find the average for each row and put it in a list. The problem is that the average should ignore null values in each row; to be precise, the divisor should exclude null entries. In the example below, the average of A is 7 and B should be 67.3.
The Python standard csv library should work here.
Reading a file yields rows as lists of column entries, i.e. [[row0column0, row0column1, ...], ..., [rowNcolumn0, rowNcolumn1]]
I think this code sample should provide a good framework...
import csv

columns_to_avg = [1, 2]  # a list of the indexes of the columns you
                         # want to avg. In this case, 1 and 2.

sums = {i: 0.0 for i in columns_to_avg}
counts = {i: 0 for i in columns_to_avg}

with open('example.csv', newline='') as csvfile:
    csvreader = csv.reader(csvfile)
    for row in csvreader:
        # 'row' is just a list of column-organized entries
        for i, column in enumerate(row):
            # Check if this column has a value that is not "null"
            # and if it's a column we want to average!
            if column != "null" and i in columns_to_avg:
                sums[i] += float(column)  # Convert string to number
                counts[i] += 1

# Calculate final averages for each column, dividing by the count
# of non-null entries only
averages = {i: sums[i] / counts[i] for i in columns_to_avg if counts[i]}

modified from https://docs.python.org/3/library/csv.html
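Note the question actually asks for per-row averages. A minimal sketch of that variant, using made-up inline data chosen to mirror the A = 7, B = 67.3 example:

```python
import csv
import io

# Hypothetical file contents; "null" marks missing cells
sample = "A,2,null,12,null\nB,100,34,null,68\n"

row_averages = {}
for row in csv.reader(io.StringIO(sample)):
    nums = [float(v) for v in row[1:] if v != "null"]
    # the divisor is len(nums), so null entries are excluded from it
    row_averages[row[0]] = sum(nums) / len(nums)
```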
I want to compare each row of a CSV file with itself and every other row within a column. For example, if the column values are like this:
Value_1
Value_2
Value_3
The code should pick Value_1 and compare it with Value_1 (yes, with itself too), Value_2 and then with Value_3. Then it should pick up Value_2 and compare it with Value_1, Value_2, Value_3, and so on.
I've written the following code for this purpose:

import csv

csvfile = r"c:\temp\temp.csv"
with open(csvfile, newline='') as f:
    reader = csv.reader(f, delimiter=',')
    for row in reader:
        for compare_row in reader:
            if row == compare_row:
                print(row, 'is equal to', compare_row)
            else:
                print(row, 'is not equal to', compare_row)
The code gives the following output:
['Value_1'] is not equal to ['Value_2']
['Value_1'] is not equal to ['Value_3']
The code compares Value_1 to Value_2 and Value_3 and then stops; the outer loop never picks up Value_2 or Value_3. In short, the outer loop iterates over only the first row of the CSV file before stopping.
Also, I can't compare Value_1 to itself using this code. Any suggestions for the solution?
I would have suggested loading the CSV into memory but this is not an option considering the size.
Instead, think of it like a SQL statement: for every row in the left table you want to match it against values in the right table. So you scan through the left table once and re-scan the right table for each left row until the left has reached EOF.
import csv

with open(csvfile, newline='') as f_left:
    reader_left = csv.reader(f_left, delimiter=',')
    with open(csvfile, newline='') as f_right:
        reader_right = csv.reader(f_right, delimiter=',')
        for row in reader_left:
            for compare_row in reader_right:
                if row == compare_row:
                    print(row, 'is equal to', compare_row)
                else:
                    print(row, 'is not equal to', compare_row)
            f_right.seek(0)  # rewind so the next outer row scans from the top
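The rewind pattern can be demonstrated in memory, with io.StringIO standing in for the two file handles (the two-row data is made up):

```python
import csv
import io

left = io.StringIO("a\nb\n")
right = io.StringIO("a\nb\n")

comparisons = []
for row in csv.reader(left):
    for compare_row in csv.reader(right):
        comparisons.append((row, compare_row, row == compare_row))
    right.seek(0)  # rewind so the next outer row scans from the top
```

Every outer row is compared against every right-hand row, including itself, giving 2 x 2 = 4 comparisons here.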
Try the built-in Python package itertools:
from itertools import product

with open("abcTest.txt") as inputFile:
    aList = inputFile.read().split("\n")

aProduct = product(aList, aList)

for aElem, bElem in aProduct:
    if aElem == bElem:
        print(aElem, 'is equal to', bElem)
    else:
        print(aElem, 'is not equal to', bElem)
The problem you are facing is called a Cartesian product, where we need to compare each row of data with itself and every other row.
If you read from the source multiple times, it will cause a significant performance issue when the file is big.
Alternatively, you can store the data in a list and iterate over it multiple times, but this also carries a performance overhead.
The itertools package is useful in this case as it is optimized for these kinds of problems.
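The Cartesian-product behavior is easy to verify on a small list (the values below are hypothetical):

```python
from itertools import product

values = ["Value_1", "Value_2", "Value_3"]
pairs = list(product(values, values))  # 3 * 3 = 9 ordered pairs
# each value is paired with itself exactly once
equal_pairs = [(a, b) for a, b in pairs if a == b]
```

This matches the question's requirement: Value_1 is compared with itself, then Value_2, then Value_3, and so on for each value.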
I asked a question about two hours ago regarding the reading and writing of data from a website. I've spent the last two hours since then trying to find a way to read the maximum date value from column 'A' of the output, comparing that value to the refreshed website data, and appending any new data to the csv file without overriding the old ones or creating duplicates.
The code that is currently 100% working is this:
import requests
symbol = "mtgoxUSD"
url = 'http://api.bitcoincharts.com/v1/trades.csv?symbol={}'.format(symbol)
data = requests.get(url)
with open("trades_{}.csv".format(symbol), "r+") as f:
    f.write(data.text)
I've tried various ways of finding the maximum value of column 'A'. I've tried a bunch of different ways of using "Dict" and other methods of sorting/finding max, and even using pandas and numpy libs. None of which seem to work. Could someone point me in the direction of a decent way to find the maximum of a column from the .csv file? Thanks!
if you have it in a pandas DataFrame, you can get the max of any column like this:
>>> max(data['time'])
'2012-01-18 15:52:26'
where data is the variable name for the DataFrame and time is the name of the column.
I'll give you two answers, one that just returns the max value, and one that returns the row from the CSV that includes the max value.
import csv
import operator as op
import requests

symbol = "mtgoxUSD"
url = 'http://api.bitcoincharts.com/v1/trades.csv?symbol={}'.format(symbol)
csv_file = "trades_{}.csv".format(symbol)

data = requests.get(url)
with open(csv_file, "w") as f:
    f.write(data.text)

with open(csv_file) as f:
    next(f)  # discard first row from file -- see notes
    max_value = max(row[0] for row in csv.reader(f))

with open(csv_file) as f:
    next(f)  # discard first row from file -- see notes
    max_row = max(csv.reader(f), key=op.itemgetter(0))
Notes:
max() can directly consume an iterator, and csv.reader() gives us an iterator, so we can just pass that in. I'm assuming you might need to throw away a header line so I showed how to do that. If you had multiple header lines to discard, you might want to use islice() from the itertools module.
In the first one, we use a "generator expression" to select a single value from each row, and find the max. This is very similar to a "list comprehension" but it doesn't build a whole list, it just lets us iterate over the resulting values. Then max() consumes the iterable and we get the max value.
max() can use a key= argument where you specify a "key function". It will use the key function to get a value and use that value to figure the max... but the value returned by max() will be the unmodified original value (in this case, a row value from the CSV). In this case, the key function is manufactured for you by operator.itemgetter()... you pass in which column you want, and operator.itemgetter() builds a function for you that gets that column.
The resulting function is the equivalent of:
def get_col_0(row):
    return row[0]

max_row = max(csv.reader(f), key=get_col_0)
Or, people will use lambda for this:
max_row = max(csv.reader(f), key=lambda row: row[0])
But I think operator.itemgetter() is convenient and nice to read. And it's fast.
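A quick illustration of itemgetter as a key function; the sample rows are made up, and note that string columns compare lexicographically, so convert to numbers in the key if you need a numeric max:

```python
from operator import itemgetter

rows = [["3", "a"], ["9", "b"], ["5", "c"]]
get_col_0 = itemgetter(0)  # equivalent to lambda row: row[0]
max_row = max(rows, key=get_col_0)  # compares the strings "3", "9", "5"
```

For numeric comparison you would use something like key=lambda row: float(row[0]) instead.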
I showed saving the data in a file, then pulling from the file again. If you want to go through the data without saving it anywhere, you just need to iterate over it by lines.
Perhaps something like:
text = data.text
rows = [line.split(',') for line in text.split("\n") if line]
rows.pop(0) # get rid of first row from data
max_value = max(row[0] for row in rows)
max_row = max(rows, key=op.itemgetter(0))
I don't know which column you want... column "A" might be column 0 so I used 0 in the above. Replace the column number as you like.
It seems like something like this should work:
import requests
import csv

symbol = "mtgoxUSD"
url = 'http://api.bitcoincharts.com/v1/trades.csv?symbol={}'.format(symbol)
data = requests.get(url)

with open("trades_{}.csv".format(symbol), "r+") as f:
    all_values = list(csv.reader(f))

max_value = max([int(row[2]) for row in all_values[1:]])
# (write-out-the-value?)
EDITS: I used "row[2]" because that was the sample column I was taking max of in my csv. Also, I had to strip off the column headers, which were all text, which was why I looked at "all_values[1:]" from the second row to the end of the file.