Python: change rows in txt to columns [closed]

I have txt file like this:
Name | Class | Points
--------------------------
Name1 | (2) | 30
Name1 | (3) | 50
Name1 | (5) | 15
Name2 | (1) | 25
Name2 | (3) | 88
Name2 | (4) | 3
Classes range from 1 to 100.
I would like to change this table into-
| (1) | (2) | (3) | ...
Name1 | .. | 30 | 50 |
Name2 | 25 | .. | 88 |
So far I have a file with the header and the names, but I can't figure out how to put the data in the proper place, in the proper column.
f = open("file.txt", "r")
classes = set()
names = set()
for line in f:
line = line.split("\t")
if line[1] == "Class":
continue
else:
classes.add(int(line[1]))
names.add(line[0])
sorted(classes)
print(classes)
with open("new_file.txt", "a") as file:
for i in classes:
file.write(f"\t{i}")
for j in names:
file.write(f"\n{j}")

You can use a two-dimensional structure, so that you can access the data by [row][col]. In your case, since you want to change rows to columns, access it by [col][row] instead.
In Python, defaultdict is the answer.
Here's the sample code:
import collections

# use defaultdict to store data accessed by [row][col]
data = collections.defaultdict(dict)

# read data
columns = set()
with open('file.txt', 'r') as f:
    for line in f:
        fields = line.split()
        if len(fields) != 3:  # skip separator lines such as "-----"
            continue
        row, col, value = fields
        if col == 'Class':  # skip the header row
            continue
        data[row][col] = value
        columns.add(col)

# sort columns and rows
columns = sorted(columns)
rows = sorted(data.keys())

# write data
with open('new_file.txt', 'w') as f:
    header = '\t' + '\t'.join(columns) + '\n'
    lines = [header]
    for row in rows:
        parts = [row]
        for col in columns:
            v = data[row].get(col)
            if v:
                parts.append(v)
            else:
                parts.append('..')
        lines.append('\t'.join(parts) + '\n')
    f.writelines(lines)
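
With the sample file above (and assuming tab-separated input), new_file.txt should come out along these lines:

	(1)	(2)	(3)	(4)	(5)
Name1	..	30	50	..	15
Name2	25	..	88	3	..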

Related

If the same string is in the first column of a sorted dataframe take the rows associated with the unique value and create new dataframes

What I am looking to do is: for every email address that is the same, take the corresponding rows with that same email, create new dataframes from them, and then send an email with the row information to the email address in column 1.
| email              | Acct # | Acct Status |
| ------------------ | ------ | ----------- |
| janedoe#gmail.com  | 1230   | Closed      |
| janedoe#gmail.com  | 2546   | Closed      |
| janedoe#gmail.com  | 2468   | Closed      |
| janedoe#gmail.com  | 7896   | Closed      |
| michaeldoe#aol.com | 4565   | Closed      |
| michaeldoe#aol.com | 9686   | Closed      |
| jackdoe#aol.com    | 4656   | Closed      |
I tried something along the lines of converting the dataframe into a list by using groupby but I am stuck:
df_list = [x for _, x in df.groupby('email')]
I am not sure how you want to store your data or what you want to do with it. I've chosen to store the output in a Python dictionary with the email contact as the key and all their various accounts and their status as the value. You can use a combination of groupby and drop_duplicates to extract and form the information you want.
df_grouped = df.groupby('email').groups
df_contacts = df.drop_duplicates(subset=['email'])
result = {}  # dictionary for results
for item in df_contacts['email']:
    rows = df_grouped[item].tolist()
    my_data = []
    for x in rows:
        info = df[['Acct #', 'Acct Status']].iloc[x].values
        my_data.append(info.tolist())
    result[item] = my_data
Then you can use the data as required. For example:
for i, j in result.items():
    print('Send email to ', i, ' with their account info as follows')
    for z in j:
        print('Account : ', z[0], ' Status :', z[1])
If for some reason you really want the resulting data to go in separate DataFrames then this could be in a Dictionary of DataFrames as follows:
dx = {}
for i, j in result.items():
    dfx = pd.DataFrame(result[i])
    dfx.columns = ['Acct', 'Acct Status']
    dx[i] = dfx
print(dx['janedoe#gmail.com'])  # as an example of accessing the data
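For what it's worth, a shorter route, if all you need is one sub-DataFrame per address, is to let groupby hand you the groups directly. This is a sketch with a small assumed frame standing in for the real df:

import pandas as pd

# Sample rows standing in for the question's data
df = pd.DataFrame({
    'email': ['janedoe#gmail.com', 'janedoe#gmail.com', 'michaeldoe#aol.com'],
    'Acct #': [1230, 2546, 4565],
    'Acct Status': ['Closed', 'Closed', 'Closed'],
})

# One sub-DataFrame per email address, built in a single pass
frames_by_email = {email: group for email, group in df.groupby('email')}
print(frames_by_email['janedoe#gmail.com'])  # all rows for one contact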

TypeError: '_csv.reader' object is not subscriptable and days passed [duplicate]

I'm trying to parse through a csv file and extract the data from only specific columns.
Example csv:
ID | Name | Address | City | State | Zip | Phone | OPEID | IPEDS |
10 | C... | 130 W.. | Mo.. | AL... | 3.. | 334.. | 01023 | 10063 |
I'm trying to capture only specific columns, say ID, Name, Zip and Phone.
Code I've looked at has led me to believe I can refer to a specific column by its corresponding number, i.e. Name would correspond to 2, and iterating through each row using row[2] would produce all the items in column 2. Only it doesn't.
Here's what I've done so far:
import sys, argparse, csv
from settings import *

# command arguments
parser = argparse.ArgumentParser(description='csv to postgres',
                                 fromfile_prefix_chars="#")
parser.add_argument('file', help='csv file to import', action='store')
args = parser.parse_args()
csv_file = args.file

# open csv file
with open(csv_file, 'rb') as csvfile:
    # get number of columns
    for line in csvfile.readlines():
        array = line.split(',')
        first_item = array[0]

    num_columns = len(array)
    csvfile.seek(0)

    reader = csv.reader(csvfile, delimiter=' ')
    included_cols = [1, 2, 6, 7]

    for row in reader:
        content = list(row[i] for i in included_cols)
        print content
I'm expecting this to print out only the specific columns I want for each row, except it doesn't; I get the last column only.
The only way you would be getting the last column from this code is if you don't include your print statement in your for loop.
This is most likely the end of your code:
for row in reader:
    content = list(row[i] for i in included_cols)
print content
You want it to be this:
for row in reader:
    content = list(row[i] for i in included_cols)
    print content
Now that we have covered your mistake, I would like to take this time to introduce you to the pandas module.
Pandas is spectacular for dealing with csv files, and the following code would be all you need to read a csv and save an entire column into a variable:
import pandas as pd
df = pd.read_csv(csv_file)
saved_column = df.column_name #you can also use df['column_name']
So if you wanted to save all of the info in your column Names into a variable, this is all you need to do:
names = df.Names
It's a great module and I suggest you look into it. If for some reason your print statement was in the for loop and it was still only printing out the last column, that shouldn't happen; let me know if my assumption was wrong. Your posted code has a lot of indentation errors, so it was hard to know what was supposed to be where. Hope this was helpful!
import csv
from collections import defaultdict

columns = defaultdict(list)  # each value in each column is appended to a list

with open('file.txt') as f:
    reader = csv.DictReader(f)  # read rows into a dictionary format
    for row in reader:  # read a row as {column1: value1, column2: value2,...}
        for (k, v) in row.items():  # go over each column name and value
            columns[k].append(v)  # append the value into the appropriate list,
                                  # based on column name k

print(columns['name'])
print(columns['phone'])
print(columns['street'])
With a file like
name,phone,street
Bob,0893,32 Silly
James,000,400 McHilly
Smithers,4442,23 Looped St.
Will output
>>>
['Bob', 'James', 'Smithers']
['0893', '000', '4442']
['32 Silly', '400 McHilly', '23 Looped St.']
Or alternatively, if you want numerical indexing for the columns:

with open('file.txt') as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    for row in reader:
        for (i, v) in enumerate(row):
            columns[i].append(v)

print(columns[0])
>>>
['Bob', 'James', 'Smithers']
To change the delimiter, add delimiter=" " to the appropriate instantiation, i.e. reader = csv.reader(f, delimiter=" ")
Use pandas:
import pandas as pd
my_csv = pd.read_csv(filename)
column = my_csv.column_name
# you can also use my_csv['column_name']
Discard unneeded columns at parse time:
my_filtered_csv = pd.read_csv(filename, usecols=['col1', 'col3', 'col7'])
P.S. I'm just aggregating what others have said in a simple manner. The actual answers are taken from here and here.
You can use numpy.loadtxt(filename). For example, if this is your database .csv:
ID | Name | Address | City | State | Zip | Phone | OPEID | IPEDS |
10 | Adam | 130 W.. | Mo.. | AL... | 3.. | 334.. | 01023 | 10063 |
10 | Carl | 130 W.. | Mo.. | AL... | 3.. | 334.. | 01023 | 10063 |
10 | Adolf | 130 W.. | Mo.. | AL... | 3.. | 334.. | 01023 | 10063 |
10 | Den | 130 W.. | Mo.. | AL... | 3.. | 334.. | 01023 | 10063 |
And you want the Name column:
import numpy as np
b = np.loadtxt(r'filepath\name.csv', dtype=str, delimiter='|', skiprows=1, usecols=(1,))
>>> b
array([' Adam ', ' Carl ', ' Adolf ', ' Den '],
dtype='|S7')
More easily, you can use genfromtxt:
b = np.genfromtxt(r'filepath\name.csv', delimiter='|', names=True, dtype=None)
>>> b['Name']
array([' Adam ', ' Carl ', ' Adolf ', ' Den '],
dtype='|S7')
With pandas you can use read_csv with usecols parameter:
df = pd.read_csv(filename, usecols=['col1', 'col3', 'col7'])
Example:
import pandas as pd
import io
s = '''
total_bill,tip,sex,smoker,day,time,size
16.99,1.01,Female,No,Sun,Dinner,2
10.34,1.66,Male,No,Sun,Dinner,3
21.01,3.5,Male,No,Sun,Dinner,3
'''
df = pd.read_csv(io.StringIO(s), usecols=['total_bill', 'day', 'size'])
print(df)
total_bill day size
0 16.99 Sun 2
1 10.34 Sun 3
2 21.01 Sun 3
Context: For this type of work you should use the amazing Python petl library. That will save you a lot of work and potential frustration from doing things 'manually' with the standard csv module. AFAIK, the only people who still use the csv module are those who have not yet discovered better tools for working with tabular data (pandas, petl, etc.), which is fine, but if you plan to work with a lot of data in your career from various strange sources, learning something like petl is one of the best investments you can make. Getting started should only take 30 minutes after you've done pip install petl. The documentation is excellent.
Answer: Let's say you have the first table in a csv file (you can also load directly from the database using petl). Then you would simply load it and do the following.
from petl import fromcsv, look, cut, tocsv

# Load the table
table1 = fromcsv('table1.csv')
# Alter the columns
table2 = cut(table1, 'Song_Name', 'Artist_ID')
# Have a quick look to make sure things are ok. Prints a nicely formatted table to your console
print(look(table2))
# Save to new file
tocsv(table2, 'new.csv')
I think there is an easier way
import pandas as pd
dataset = pd.read_csv('table1.csv')
ftCol = dataset.iloc[:, 0].values
So here, in iloc[:, 0], the ':' means all rows and 0 is the position of the column.
In the example below, the ID column would be selected:
ID | Name | Address | City | State | Zip | Phone | OPEID | IPEDS |
10 | C... | 130 W.. | Mo.. | AL... | 3.. | 334.. | 01023 | 10063 |
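A quick sketch against that sample, assuming the pipe-separated layout shown and an assumed file name:

import pandas as pd

dataset = pd.read_csv('database.csv', sep='|', skipinitialspace=True)
ids = dataset.iloc[:, 0].values  # position 0 -> the ID column
print(ids)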
import pandas as pd
csv_file = pd.read_csv("file.csv")
column_val_list = csv_file.column_name._ndarray_values
Thanks to the way you can index and subset a pandas dataframe, a very easy way to extract a single column from a csv file into a variable is:
myVar = pd.read_csv('YourPath', sep = ",")['ColumnName']
A few things to consider:
The snippet above will produce a pandas Series, not a DataFrame.
The suggestion from ayhan with usecols will also be faster if speed is an issue.
Testing the two different approaches using %timeit on a 2122 KB sized csv file yields 22.8 ms for the usecols approach and 53 ms for my suggested approach.
And don't forget import pandas as pd
If you need to process the columns separately, I like to destructure the columns with the zip(*iterable) pattern (effectively "unzip"). So for your example:
ids, names, zips, phones = zip(*(
    (row[1], row[2], row[6], row[7])
    for row in reader
))
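For context, here is a self-contained sketch of how that pattern slots in; 'data.csv' and the column positions (reusing the question's included_cols = [1, 2, 6, 7]) are assumptions:

import csv

with open('data.csv', newline='') as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    ids, names, zips, phones = zip(*(
        (row[1], row[2], row[6], row[7])
        for row in reader
    ))

print(ids)  # now one tuple per column instead of one list per row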
import pandas as pd
dataset = pd.read_csv('Train.csv')
X = dataset.iloc[:, 1:-1].values
y = dataset.iloc[:, -1].values
X is a bunch of columns; use it if you want to read more than one column.
y is a single column; use it to read one column.
[:, 1:-1] means [row_index : to_row_index, column_index : to_column_index].
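To make the slicing concrete, a tiny sketch with an assumed frame:

import pandas as pd

df = pd.DataFrame({'id': [1, 2], 'f1': [10, 20], 'f2': [30, 40], 'label': [0, 1]})
X = df.iloc[:, 1:-1].values  # all rows, columns 1 through second-to-last (f1, f2)
y = df.iloc[:, -1].values    # all rows, last column (label)
print(X)
print(y)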
SAMPLE.CSV
a, 1, +
b, 2, -
c, 3, *
d, 4, /
import pandas as pd

column_names = ["Letter", "Number", "Symbol"]
df = pd.read_csv("sample.csv", names=column_names)
print(df)
OUTPUT
Letter Number Symbol
0 a 1 +
1 b 2 -
2 c 3 *
3 d 4 /
letters = df.Letter.to_list()
print(letters)
OUTPUT
['a', 'b', 'c', 'd']
import csv

with open('input.csv', encoding='utf-8-sig') as csv_file:
    # skip the first line so DictReader builds its fieldnames from the next one
    next(csv_file)
    reader = csv.DictReader(csv_file)
    Time_col = {'Time': []}
    for record in reader:
        Time_col['Time'].append(record['Time'])
print(Time_col)
From CSV File Reading and Writing you can import csv and use this code:

import csv

with open('names.csv', newline='') as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        print(row['first_name'], row['last_name'])
To fetch the column names, instead of using readlines() it is better to use readline(), to avoid looping over and reading the complete file into an array.

with open(csv_file, 'rb') as csvfile:
    # read only the header row
    line = csvfile.readline()
    first_item = line.split(',')
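Building on that, a sketch that maps the wanted header names to positions instead of hard-coding included_cols (the names here follow the question's sample and are assumptions):

import csv

with open('data.csv', newline='') as csvfile:
    reader = csv.reader(csvfile)
    header = next(reader)  # consume only the header row
    wanted = [header.index(name) for name in ('ID', 'Name', 'Zip', 'Phone')]
    for row in reader:
        print([row[i] for i in wanted])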

string manipulation, data wrangling, regex

I have a .txt file of 3 million rows. The file contains data that looks like this:
# RSYNC: 0 1 1 0 512 0
#$SOA 5m localhost. hostmaster.localhost. 1906022338 1h 10m 5d 1s
# random_number_ofspaces_before_this text $TTL 60s
#more random information
:127.0.1.2:https://www.spamhaus.org/query/domain/$
test
:127.0.1.2:https://www.spamhaus.org/query/domain/$
.0-0m5tk.com
.0-1-hub.com
.zzzy1129.cn
:127.0.1.4:https://www.spamhaus.org/query/domain/$
.0-il.ml
.005verf-desj.com
.01accesfunds.com
In the above data, there is a code associated with all domains listed beneath it.
I want to turn the above data into a format that can be loaded into a HiveQL/SQL. The HiveQL table should look like:
+--------------------+--------------+-------------+-----------------------------------------------------+
| domain_name | period_count | parsed_code | raw_code |
+--------------------+--------------+-------------+-----------------------------------------------------+
| test | 0 | 127.0.1.2 | :127.0.1.2:https://www.spamhaus.org/query/domain/$ |
| .0-0m5tk.com | 2 | 127.0.1.2 | :127.0.1.2:https://www.spamhaus.org/query/domain/$ |
| .0-1-hub.com | 2 | 127.0.1.2 | :127.0.1.2:https://www.spamhaus.org/query/domain/$ |
| .zzzy1129.cn | 2 | 127.0.1.2 | :127.0.1.2:https://www.spamhaus.org/query/domain/$ |
| .0-il.ml | 2 | 127.0.1.4 | :127.0.1.4:https://www.spamhaus.org/query/domain/$ |
| .005verf-desj.com | 2 | 127.0.1.4 | :127.0.1.4:https://www.spamhaus.org/query/domain/$ |
| .01accesfunds.com | 2 | 127.0.1.4 | :127.0.1.4:https://www.spamhaus.org/query/domain/$ |
+--------------------+--------------+-------------+-----------------------------------------------------+
Please note that I do not want the vertical bars in any output; they are just there to make the above look like a table.
I'm guessing that creating a HiveQL table like the above will involve converting the .txt into a .csv or a Pandas data frame. If creating a .csv, then the .csv would probably look like:
domain_name,period_count,parsed_code,raw_code
test,0,127.0.1.2,:127.0.1.2:https://www.spamhaus.org/query/domain/$
.0-0m5tk.com,2,127.0.1.2,:127.0.1.2:https://www.spamhaus.org/query/domain/$
.0-1-hub.com,2,127.0.1.2,:127.0.1.2:https://www.spamhaus.org/query/domain/$
.zzzy1129.cn,2,127.0.1.2,:127.0.1.2:https://www.spamhaus.org/query/domain/$
.0-il.ml,2,127.0.1.4,:127.0.1.4:https://www.spamhaus.org/query/domain/$
.005verf-desj.com,2,127.0.1.4,:127.0.1.4:https://www.spamhaus.org/query/domain/$
.01accesfunds.com,2,127.0.1.4,:127.0.1.4:https://www.spamhaus.org/query/domain/$
I'd be interested in a Python solution, but lack familiarity with the packages and functions necessary to complete the above data wrangling steps. I'm looking for a complete solution, or code tidbits to construct my own solution. I'm guessing regular expressions will be needed to identify the "category" or "code" line in the raw data. They always start with ":127.0.1." I'd also like to parse the code out to create a parsed_code column, and a period_count column that counts the number of periods in the domain_name string. For testing purposes, please create a .txt of the sample data I have provided at the beginning of this post.
Regardless of how you want to format the output in the end, I suppose the first step is to separate the domain_name and code. That part is pure Python:
rows = []
code = None
parsed_code = None
with open('input.txt', 'r') as f:
    for line in f:
        line = line.rstrip('\n')
        if line.startswith(':127'):
            code = line
            parsed_code = line.split(':')[1]
            continue
        if line.startswith('#'):
            continue
        period_count = line.count('.')
        rows.append((line, period_count, parsed_code, code))
Just for illustration, you can use pandas to format the data nicely as tables, which might help if you want to pipe this to SQL, but it's not absolutely necessary. Post-processing of strings is also quite straightforward in pandas.
import pandas as pd
df = pd.DataFrame(rows, columns=['domain_name', 'period_count', 'parsed_code', 'raw_code'])
print (df)
prints this:
domain_name period_count parsed_code raw_code
0 test 0 127.0.1.2 :127.0.1.2:https://www.spamhaus.org/query/doma...
1 .0-0m5tk.com 2 127.0.1.2 :127.0.1.2:https://www.spamhaus.org/query/doma...
2 .0-1-hub.com 2 127.0.1.2 :127.0.1.2:https://www.spamhaus.org/query/doma...
3 .zzzy1129.cn 2 127.0.1.2 :127.0.1.2:https://www.spamhaus.org/query/doma...
4 .0-il.ml 2 127.0.1.4 :127.0.1.4:https://www.spamhaus.org/query/doma...
5 .005verf-desj.com 2 127.0.1.4 :127.0.1.4:https://www.spamhaus.org/query/doma...
6 .01accesfunds.com 2 127.0.1.4 :127.0.1.4:https://www.spamhaus.org/query/doma...
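Post-processing like this is where pandas pays off; for instance, parsed_code could just as well be derived from raw_code after the fact (a one-line sketch using the vectorized string methods):

# ':127.0.1.2:https://...' -> '127.0.1.2'
df['parsed_code'] = df['raw_code'].str.split(':').str[1]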
You can do all of this with the Python standard library.
HEADER = "domain_name | code"
# Open files
with open("input.txt") as f_in, open("output.txt", "w") as f_out:
# Write header
print(HEADER, file=f_out)
print("-" * len(HEADER), file=f_out)
# Parse file and output in correct format
code = None
for line in f_in:
if line.startswith("#"):
# Ignore comments
continue
if line.endswith("$"):
# Store line as the current "code"
code = line
else:
# Write these domain_name entries into the
# output file separated by ' | '
print(line, code, sep=" | ", file=f_out)

Compare two columns in each of 200,000 rows in an Excel document with Python

I am trying to compare the following data:
|text_col|corr_acc|
+--------+--------+
|Car123 |xxx1 |
|Car234 |xxx2 |
|Car123 |xxx1 |
|Car456 |xxx3 |
|Car234 |xxx2 |
|Car123 |xxx5 |
If text_col in the first row (Car123) can be found in any of the other rows (for example in row 3), then corr_acc must be compared.
If corr_acc is the same for each of those rows, then this must be written to a new list called match.
If not, then both values of corr_acc must be added to a list called no_match, along with the original value.
The end results for the no_match list would look something like this:
|text_col |corr_acc|Result |
+---------+--------+---------+
|Car123 |xxx1 |xxx1,xxx5|
|Car234 |xxx2 | |
|Car123 |xxx1 |xxx1,xxx5|
|Car456 |xxx3 | |
|Car234 |xxx2 | |
|Car123 |xxx5 |xxx1,xxx5|
I have the following code that worked for me but is too slow (need to compare 200 000 rows):
import openpyxl

wb = openpyxl.load_workbook('D:\\peter\\Book3.xlsx')
sheet = wb['2018']
list_match = []
list_no_match = []
for i in range(2, len(sheet['A']) + 1):
    text_col_c_1 = str(sheet.cell(row=i, column=3).value)
    corr_acc_1 = str(sheet.cell(row=i, column=11).value)
    for j in range(2 + i, len(sheet['A']) + 1):
        text_col_c_2 = str(sheet.cell(row=j, column=3).value)
        corr_acc_2 = str(sheet.cell(row=j, column=11).value)
        if text_col_c_1 == text_col_c_2:
            if corr_acc_1 == corr_acc_2:
                list_match.append(text_col_c_1 + "," + corr_acc_1 + "," + corr_acc_2 + "\n")
            else:
                list_no_match.append(text_col_c_1 + "," + corr_acc_1 + "," + corr_acc_2 + "\n")
        else:
            # list_no_match.append(text_col_c_1 + "," + corr_acc_1 + "\n")
            pass

F = open("d:\\peter\\match_list.txt", "w")
for each in list_match:
    F.write(each)
F.close()

F = open("d:\\peter\\no_match_list.txt", "w")
for each in list_no_match:
    F.write(each)
F.close()
How can I improve the speed of my code?
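One common speed-up, sketched below under assumptions about the layout (text_col in column C, corr_acc in column K of sheet '2018'): read the sheet once with pandas and group in memory, instead of re-reading cells in an O(n^2) loop.

import pandas as pd

# Read only the two relevant columns once (positions 2 and 10 = columns C and K)
df = pd.read_excel('Book3.xlsx', sheet_name='2018', usecols=[2, 10], dtype=str)
df.columns = ['text_col', 'corr_acc']

# For each text_col key, join up all distinct corr_acc values seen anywhere
joined = df.groupby('text_col')['corr_acc'].transform(
    lambda s: ','.join(sorted(set(s))))

# Only keys with more than one distinct corr_acc count as a mismatch
distinct = df.groupby('text_col')['corr_acc'].transform('nunique')
df['Result'] = joined.where(distinct > 1, '')
print(df)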

python csv | If row = x, print column containing x

I am new to Python programming, so pardon me if I make any mistakes. I am writing a Python script to read a csv file and print the required cell of a column when the row contains the information I am looking for.
| A | B | C
---|----|---|---
1 | Re | Mg| 23
---|----|---|---
2 | Ra | Fe| 90
For example, I check column C for a value between 20 and 24. Then, if the condition passes, it should return cell A1 (Re) as the result.
At the moment, I only have the following, and I have no idea how to proceed from here:
f = open('imageResults.csv', 'rU')
for line in f:
    cells = line.split(",")
    if cells[2] >= 20 and cells[2] <= 24:
        pass  # no idea how to proceed from here
f.close()
This might contain the answer to my question but I can't seem to make it work.
UPDATE
If the first row is a header, how do I get it to work? I wanted to change the condition to a string, but that doesn't work if I want to search for a range of values.
| A | B | C
---|----|---|---
1 |Name|Lat|Ref
---|----|---|---
2 | Re | Mg| 23
---|----|---|---
3 | Ra | Fe| 90
You should use a csv reader. It's built into Python, so there are no dependencies to install. Then you need to tell Python that the third column is an integer. Something like this will do it:
import csv
with open('data.csv', 'rb') as f:
    for line in csv.reader(f):
        if 20 <= int(line[2]) <= 24:
            print(line)
With this data in data.csv:
Re,Mg,23
Ra,Fe,90
Ha,Ns,50
Ku,Rt,20
the output will be:
$ python script.py
['Re', 'Mg', '23']
['Ku', 'Rt', '20']
Update:
If in the [first] row, there is a header, how do i get it to work?
There's csv.DictReader which is for that. Indeed it is safer to work with DictReader, especially when the order of the columns might change or you insert a column before the third column. Given this data in data.csv
Name,Lat,Ref
Re,Mg,23
Ra,Fe,90
Ha,Ns,50
Ku,Rt,20
Then the Python script is:
import csv
with open('data.csv', 'rb') as f:
    for line in csv.DictReader(f):
        if 20 <= int(line['Ref']) <= 24:
            print(line)
P.S. Welcome to Python. It's a good language for learning to program.
