read txt file input and add values to two arrays - python

A B C D
2 4 5 6
4 5 3 7
3 6 7 8
I want to read the A, B, C column values into a 3 x 3 array and the D column into a separate 3 x 1 array.

A simple brute-force method:

a33 = [[], [], []]
a31 = []
with open('dat.txt') as f:
    next(f)  # skip the "A B C D" header row
    for ln in f:
        a, b, c, d = ln.split()
        a33[0].append(a)  # += a would also work, but only because each value is a single character
        a33[1].append(b)
        a33[2].append(c)
        a31.append(d)
print(a33)
print(a31)

Output:

[['2', '4', '3'], ['4', '5', '6'], ['5', '3', '7']]
['6', '7', '8']
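Since split() yields strings, a natural follow-up is converting to numbers. A minimal sketch over the a33/a31 lists built above, assuming you want ints:

a33_int = [[int(x) for x in row] for row in a33]  # [[2, 4, 3], [4, 5, 6], [5, 3, 7]]
a31_int = [int(x) for x in a31]                   # [6, 7, 8]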

An alternative with numpy:

import numpy as np

# Read the data from a file
with open('data.txt') as file:
    lines = file.readlines()

# Drop the header row
raw_data = lines[1:]

# Now fetch all the data
data_abc = []
data_d = []
for line in raw_data:
    values = line.split()
    data_abc.append(values[:3])
    data_d.append(values[3])

# Convert to matrix (note: the entries are still strings; use .astype(float) if you need numbers)
data_abc = np.asmatrix(data_abc)
data_d = np.asmatrix(data_d)

# Display the result
print('Data A B C:', data_abc)
print('Data D:', data_d)
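If the file is numeric apart from the header, numpy can also do the reading and slicing in one step. A minimal sketch, assuming the whitespace-delimited layout shown above and that plain arrays (rather than np.matrix) are acceptable:

import numpy as np

data = np.loadtxt('data.txt', skiprows=1)  # skip the 'A B C D' header; result is a (3, 4) float array
data_abc = data[:, :3]   # columns A, B, C -> shape (3, 3)
data_d = data[:, 3:]     # column D        -> shape (3, 1)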


Insert String values into 2d array/matrix

I have a string of 1s and 0s that I need to insert into a 4-by-4 matrix that I can then use for other things.
This is my attempt at it:
b = '0110000101101000'
m = [[], [], [], []]
for i in range(4):
    for j in range(4):
        m[i].append(b[i * j])
But where I expected to get
[['0', '1', '1', '0'], ['0', '0', '0', '1'], ['0', '1', '1', '0'], ['1', '0', '0', '0']]
I got [['0', '0', '0', '0'], ['0', '1', '1', '0'], ['0', '1', '0', '0'], ['0', '0', '0', '1']].
Could someone point me in the right direction here?
Get paper and a pencil and write a table of what you have now vs what you want:
i j i*j desired
0 0 0 0
0 1 0 1
0 2 0 2
0 3 0 3
1 0 0 4
1 1 1 5
... up to i=3, j=3
Now you can see that i * j is not the correct index in b. Can you see what the desired index formula is?
I'd agree with @John Zwinck that you can easily figure it out, but if you hate math, simply do
counter = 0
for i in range(4):
    for j in range(4):
        m[i].append(b[counter])
        counter += 1  # keep track of the overall iterations
otherwise you have to find where the current row starts (i * columns) and add the current column index:
m[i].append(b[i * 4 + j]) # i * 4 gives the overall index of the 0th element of the current row
Here is a hint: range(4) starts from 0 and ends at 3.
See the python documentation: https://docs.python.org/3.9/library/stdtypes.html#typesseq
First of all, the rule for converting coordinates to an index is index = row * NRO_COLS + col, so you should use i * 4 + j.
Second, you can use list comprehension:
m = [[b[i * 4 + j] for j in range(4)] for i in range(4)]
then, it can be rewritten as:
m = [[b[i + j] for j in range(4)] for i in range(0, len(b), 4)]
or
m = [list(b[i:i+4]) for i in range(0, len(b), 4)]
another alternative is to use numpy, which is a great library, especially for handling multidimensional arrays:
import numpy as np
m = np.array(list(b)).reshape(4,4)
or:
print(np.array(list(b)).reshape(4, -1))
print(np.array(list(b)).reshape(-1, 4))
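For what it's worth, the index-formula, slicing, and numpy variants above all produce the same 4x4 layout; a quick sanity check, assuming the same b as in the question:

import numpy as np

b = '0110000101101000'
m_list = [list(b[i:i+4]) for i in range(0, len(b), 4)]
m_np = np.array(list(b)).reshape(4, 4)
print(m_list == m_np.tolist())  # True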

how to perform a line by line merge for multiple files without using pandas merge which reads all data frames to memory

I want to merge multiple files with a single file (f1.txt), matching on two columns. I can do it in pandas, but pandas reads everything into memory, which can get big really fast. I'm thinking a line-by-line read will avoid loading everything into memory; pandas is also not an option now. How do I perform the merge while filling in null for cells where a match with f1.txt does not occur?
Here, I used a dictionary, which I am not sure will hold in memory, and I also can't find a way to add null where there is no match in the other files with f1.txt. There could be as many as 1000 different files. Time does not matter as long as I do not read everything into memory.
FILES (tab-delimited)
f1.txt
A B num val scol
1 a1 1000 2 3
2 a2 456 7 2
3 a3 23 2 7
4 a4 800 7 3
5 a5 10 8 7
a1.txt
A B num val scol fcol dcol
1 a1 1000 2 3 0.2 0.77
2 a2 456 7 2 0.3 0.4
3 a3 23 2 7 0.5 0.6
4 a4 800 7 3 0.003 0.088
a2.txt
A B num val scol fcol2 dcol1
2 a2 456 7 2 0.7 0.8
4 a4 800 7 3 0.9 0.01
5 a5 10 8 7 0.03 0.07
Current Code

import os
import csv

m1 = os.getcwd() + '/f1.txt'
files_to_compare = [i for i in os.listdir('dir')]
dictionary = dict()
dictionary1 = dict()
with open(m1, 'rt') as a:
    reader1 = csv.reader(a, delimiter='\t')
    for x in files_to_compare:
        with open(os.getcwd() + '/dir/' + x, 'rt') as b:
            reader2 = csv.reader(b, delimiter='\t')
            for row1 in list(reader1):
                dictionary[row1[0]] = list()
                dictionary1[row1[0]] = list(row1)
            for row2 in list(reader2):
                try:
                    dictionary[row2[0]].append(row2[5:])
                except KeyError:
                    pass
print(dictionary)
print(dictionary1)
What I am trying to achieve is similar to using: df.merge(df1, on=['A','B'], how='left').fillna('null')
current result
{'A': [['fcol1', 'dcol1'], ['fcol', 'dcol']], '1': [['0.2', '0.77']], '2': [['0.7', '0.8'], ['0.3', '0.4']], '3': [['0.5', '0.6']], '4': [['0.9', '0.01'], ['0.003', '0.088']], '5': [['0.03', '0.07']]}
{'A': ['A', 'B', 'num', 'val', 'scol'], '1': ['1', 'a1', '1000', '2', '3'], '2': ['2', 'a2', '456', '7', '2'], '3': ['3', 'a3', '23', '2', '7'], '4': ['4', 'a4', '800', '7', '3'], '5': ['5', 'a5', '10', '8', '7']}
Desired result
{'A': [['fcol1', 'dcol1'], ['fcol', 'dcol']], '1': [['0.2', '0.77'],['null', 'null']], '2': [['0.7', '0.8'], ['0.3', '0.4']], '3': [['0.5', '0.6'],['null', 'null']], '4': [['0.9', '0.01'], ['0.003', '0.088']], '5': [['null', 'null'],['0.03', '0.07']]}
{'A': ['A', 'B', 'num', 'val', 'scol'], '1': ['1', 'a1', '1000', '2', '3'], '2': ['2', 'a2', '456', '7', '2'], '3': ['3', 'a3', '23', '2', '7'], '4': ['4', 'a4', '800', '7', '3'], '5': ['5', 'a5', '10', '8', '7']}
My final intent is to write the dictionary to a text file. I do not know how much memory will be used or if it will even fit in memory. if there is a better way without using pandas, that will be nice else how do I make dictionary work?
DASK ATTEMPT:

import dask.dataframe as dd

directory = 'input_dir/'
first_file = dd.read_csv('f1.txt', sep='\t')
df = dd.read_csv(directory + '*.txt', sep='\t')
df2 = dd.merge(first_file, df, on=['A', 'B'])

I kept getting ValueError: Metadata mismatch found in 'from_delayed'
+--------+-------+----------+
| column | Found | Expected |
+--------+-------+----------+
| fcol   | int64 | float64  |
+--------+-------+----------+
I googled and found similar complaints but could not fix it, which is why I decided to try this approach. I checked my files and all dtypes seem to be consistent. My version of dask is 2.9.1.
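As an aside, that particular ValueError usually means dask inferred different dtypes for the same column in different files (fcol read as int64 in one file and float64 in another, exactly as the table says). A commonly suggested workaround, untested against these files, is to pin the dtype explicitly when reading:

import dask.dataframe as dd

# force a consistent dtype for the offending column across all input files
df = dd.read_csv('input_dir/*.txt', sep='\t', dtype={'fcol': 'float64'})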
If you want a hand-made solution, you can look at heapq.merge and itertools.groupby. This assumes your files are sorted by the first two columns (the key).
I made a simple example that merges and groups the files and produces two output files instead of dictionaries, so (almost) nothing is stored in memory; everything is read from and written to disk:
from heapq import merge
from itertools import groupby

first_file_name = 'f1.txt'
other_files = ['a1.txt', 'a2.txt']

def get_lines(filename):
    with open(filename, 'r') as f_in:
        for line in f_in:
            yield [filename, *line.strip().split()]

def get_values(lines):
    for line in lines:
        yield line
    while True:
        yield ['null']

opened_files = [get_lines(f) for f in [first_file_name] + other_files]

# save headers
headers = [next(f) for f in opened_files]

with open('out1.txt', 'w') as out1, open('out2.txt', 'w') as out2:
    # print headers to files
    print(*headers[0][1:6], sep='\t', file=out1)
    new_header = []
    for h in headers[1:]:
        new_header.extend(h[6:])
    print(*(['ID'] + new_header), sep='\t', file=out2)

    for v, g in groupby(merge(*opened_files, key=lambda k: (k[1], k[2])), lambda k: (k[1], k[2])):
        lines = [*g]
        print(*lines[0][1:6], sep='\t', file=out1)

        out_line = [lines[0][1]]
        iter_lines = get_values(lines[1:])
        current_line = next(iter_lines)
        for current_file in other_files:
            if current_line[0] == current_file:
                out_line.extend(current_line[6:])
                current_line = next(iter_lines)
            else:
                out_line.extend(['null', 'null'])

        print(*out_line, sep='\t', file=out2)
Produces two files:
out1.txt:
A B num val scol
1 a1 1000 2 3
2 a2 456 7 2
3 a3 23 2 7
4 a4 800 7 3
5 a5 10 8 7
out2.txt:
ID fcol dcol fcol2 dcol1
1 0.2 0.77 null null
2 0.3 0.4 0.7 0.8
3 0.5 0.6 null null
4 0.003 0.088 0.9 0.01
5 null null 0.03 0.07
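Note that heapq.merge only works if every input is already sorted by the key. If your files are not, here is a small sketch for pre-sorting each one on disk first (it does load one file at a time into memory, which may or may not be acceptable at your sizes; keys are compared as strings, matching how the merge above compares them):

def sort_file(filename):
    with open(filename) as f:
        header = next(f)
        rows = sorted(f, key=lambda ln: tuple(ln.split('\t')[:2]))  # sort by columns A, B
    with open(filename, 'w') as f:
        f.write(header)
        f.writelines(rows)

for name in ['f1.txt', 'a1.txt', 'a2.txt']:
    sort_file(name)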

How do I Store input data into multiple matrices in Python?

I have a text file with multiple matrices like this:
4 5 1
4 1 5
1 2 3
[space]
4 8 9
7 5 6
7 4 5
[space]
2 1 3
5 8 9
4 5 6
I want to read this input file in python and store it in multiple matrices like:
matrixA = [...] # first matrix
matrixB = [...] # second matrix
so on. I know how to read external files in Python, but I don't know how to divide this input file into multiple matrices. How can I do this?
Thank you
You can write code like this:

all_matrices = []  # holds matrixA, matrixB, ...
matrix = []        # holds the current matrix
with open('file.txt', 'r') as f:
    for line in f:
        values = line.split()
        if values:  # if the line contains numbers
            matrix.append(values)
        else:       # a blank line ends the current matrix
            all_matrices.append(matrix)
            matrix = []
if matrix:  # don't forget the last matrix if the file doesn't end with a blank line
    all_matrices.append(matrix)
# do whatever you want with all_matrices ...
I am sure the algorithm could be optimized somewhere, but the answer I found is quite simple:

file = open('matrix_list.txt').read()  # read the whole file
matrix_list = file.split("\n\n")       # split the file into a list of matrices
for i, m in enumerate(matrix_list):
    matrix_list[i] = m.split("\n")     # split each matrix into rows
    for j, r in enumerate(matrix_list[i]):
        matrix_list[i][j] = r.split()  # split each row into values
This will result in the following format:
[[['4', '5', '1'], ['4', '1', '5'], ['1', '2', '3']], [['4', '8', '9'], ['7', '5', '6'], ['7', '4', '5']], [['2', '1', '3'], ['5', '8', '9'], ['4', '5', '6']]]
Examples of how to use the list:
print(matrix_list) #prints all matrices
print(matrix_list[0]) #prints the first matrix
print(matrix_list[0][1]) #prints the second row of the first matrix
print(matrix_list[0][1][2]) #prints the value from the second row and third column of the first matrix
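If you need numbers rather than strings, either answer's result converts with one more pass. A small sketch over the matrix_list built above:

matrices = [[[int(v) for v in row] for row in m] for m in matrix_list]
print(matrices[0])  # [[4, 5, 1], [4, 1, 5], [1, 2, 3]]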

How to edit the *.txt or *.dat file information in Python?

I am a complete beginner in Python and have the following 'problem'. I would be glad if you could help me)
I have a *.dat file (let's name it file-1, first row is just a headline which I use only here to mark the columns) which looks like:
1 2 3 4 5 6
6 5 -1000 "" "" ""
6 5 -1000 "" "" ""
6 5 -1000 "" "" ""
6 5 -1000 "" "" ""
6 5 -1000 "" "" ""
6 5 -1000 "" "" ""
6 5 -1000 "" "" ""
I need it to be like (file-1 (converted)):
6 5 1 -1000
6 5 1 -1000
6 5 2 -1000
6 5 3 -1000
6 5 3 -1000
6 5 3 -1000
6 5 3 -1000
So, file-1 has 9 rows (7 with information and 2 empty) and 6 columns, and I have to do the following:
1. Delete the last 3 columns in file-1.
2. Add 1 new column between columns 2 and 3.
3. The value of this new column should be increased by 1 (like '+= 1') after passing an empty line.
4. Delete all the empty lines. The result is represented as 'file-1 (converted)'.
I've tried to do this but got stuck. For now I am at the level of:

import sys
import csv

with open("file-1.dat", "r", newline="") as f:
    sys.stdout = open('%s2 (converted).txt' % f.name, 'a')
    incsv = csv.reader(f, delimiter="\t")
    for row in incsv:
        if len(row) == 6:
            i = 0
            row = row[0:3]
            row.insert(2, i)
            print(row)
and it looks like:
['6', '5', 0, '-1000']
['6', '5', 0, '-1000']
['6', '5', 0, '-1000']
['6', '5', 0, '-1000']
['6', '5', 0, '-1000']
['6', '5', 0, '-1000']
['6', '5', 0, '-1000']
I don't yet know how to change the 0 to 1, 2 and so on, so that it could look like:
['6', '5', 0, '-1000']
['6', '5', 0, '-1000']
['6', '5', 1, '-1000']
['6', '5', 2, '-1000']
['6', '5', 2, '-1000']
['6', '5', 2, '-1000']
['6', '5', 2, '-1000']
And the result should be like the 'file-1 (converted)' file.
P.S. All the examples are simplified, real file has a lot of rows and I don't know where the empty lines appear.
P.P.S. Sorry for such a long post, hope, it makes sense. Ask, suggest - I would be really glad to see other opinions) Thank you.
Seems like you're almost there; you're just inserting i = 0 every time instead of the count of empty rows. Try something like:

with open("file-1.dat", "r", newline="") as f:
    sys.stdout = open('%s2 (converted).txt' % f.name, 'a')
    incsv = csv.reader(f, delimiter="\t")
    empties = 0  # init empty-row counter
    for row in incsv:
        if len(row) == 6:
            row = row[0:3]
            row.insert(2, empties)  # insert the number of empty rows seen so far
            print(row)
        else:
            empties += 1  # if the row is empty, increase the counter
This is a bit different, without using the csv module. Hope this helps. :)

import sys

count = 0
with open("file-1.dat", "r") as f:
    sys.stdout = open('%s2 (converted).txt' % f.name, 'a')
    for line in f:
        converted_line = line.split()[:-3]  # split each line and remove the last 3 columns
        if not converted_line:  # if the list/line is empty
            count += 1          # increase count but DO NOT print/write to file
        else:
            converted_line.insert(2, str(count))  # insert between the 2nd and 3rd columns
            print('\t'.join(converted_line))      # join with tabs and print
You need to increment i on every empty line:

import sys
import csv

with open("file-1.dat", "r") as f:
    sys.stdout = open('%s2 (converted).txt' % f.name, 'a')
    incsv = csv.reader(f, delimiter="\t")
    incsv.next()  # ignore the first line
    i = 0
    for row in incsv:
        if len(row) == 6:
            row = row[0:3]
            row.insert(2, i)
            print(row)
        elif len(row) == 0:
            i += 1

Also, I couldn't execute your code on my machine (with Python 2.7.6), so I changed it to run with Python 2.x.
Edit: I see it does run with Python 3.x.
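A side note on all three answers: reassigning sys.stdout works, but writing through an explicit file handle is harder to get wrong and leaves console printing intact. A sketch of the same logic under that assumption, using csv for both reading and writing:

import csv

count = 0
with open('file-1.dat', newline='') as f_in, \
        open('file-1 (converted).txt', 'w', newline='') as f_out:
    incsv = csv.reader(f_in, delimiter='\t')
    outcsv = csv.writer(f_out, delimiter='\t')
    for row in incsv:
        if len(row) == 6:
            outcsv.writerow(row[:2] + [count] + [row[2]])  # drop the last 3 columns, insert the counter
        elif len(row) == 0:
            count += 1  # blank line: bump the counter, write nothing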

Python, How to use line from file as key in dictionary, use next line for value?

I have a file like this:
Ben
0 1 5 2 0 1 0 1
Tim
3 2 1 5 4 0 0 1
I would like to make a dictionary that looks like this:
{Ben: 0 1 5 2 0 1 0 1, Tim : 3 2 1 5 4 0 0 1}
so I was thinking something like:
for line in file:
    dict[line] = line + 1
but you can't iterate through a file like that, so how would I go about
doing this?
This might be what you want:
dict_data = {}
with open('data.txt') as f:
    for key in f:
        dict_data[key.strip()] = next(f).split()
print(dict_data)
Output:
{'Tim': ['3', '2', '1', '5', '4', '0', '0', '1'], 'Ben': ['0', '1', '5', '2', '0', '1', '0', '1']}
Discussion
The for loop assumes each line is a key, we will read the next line in the body of the loop
key.strip() will turn 'Tim\n' to 'Tim'
f.next() reads and returns the next line -- the line after the key line
f.next().split() therefore splitting that line into a list
dict_data[key.strip()] = ... will do something like: dict_data['Tim'] = [ ... ]
Update
Thank to Blckknght for the pointer. I changed f.next() to next(f)
Update 2
If you want to turn the list into a list of integers instead of string, then instead of:
dict_data[key.strip()] = next(f).split()
Do this:
dict_data[key.strip()] = [int(i) for i in next(f).split()]
state = 0
d = {}
for line in file:
    if state == 0:
        key = line.strip()
        state = 1
    elif state == 1:
        d[key] = line.split()
        state = 0
I think the easiest approach is to first load the full file with file.readlines(), which loads the whole file and returns a list of the lines. Then you can create your dictionary with a comprehension:
lines = my_file.readlines()
my_dict = dict(lines[i:i+2] for i in range(0, len(lines), 2))
For your example file, this will give my_dict the contents:
{"Ben\n": "0 1 5 2 0 1 0 1\n", "Tim\n": "3 2 1 5 4 0 0 1\n"}
An alternative approach would be to use a while loop that reads two lines at a time:
my_dict = {}
while True:
    name = file.readline().strip()
    if not name:  # detect the end of the file, where readline returns ""
        break
    numbers = [int(n) for n in file.readline().split()]
    my_dict[name] = numbers
This approach lets you do more processing of the lines than the comprehension in the earlier version, such as stripping newlines and splitting the line of numbers into a list of actual int objects.
The result for the example file would be:
{"Ben": [0, 1, 5, 2, 0, 1, 0, 1], "Tim": [3, 2, 1, 5, 4, 0, 0, 1]}
