I've been struggling to get something to work for the following text file format.
My overall goal is to extract the values for one of the variable names throughout the entire text file. For example, I want all the values for the [b] rows and [d] rows, then put them in a normal numpy array and run calculations.
Here is what the data file looks like:
[SECTION1a]
[a] 1424457484310
[b] 5313402937
[c] 873348378938
[d] 882992596992
[e] 14957596088
[SECTION1b]
243 62 184 145 250 180 106 208 248 87 186 137 127 204 18 142 37 67 36 72 48 204 255 30 243 78 44 121 112 139 76 71 131 50 118 10 42 8 67 4 98 110 37 5 208 104 56 55 225 56 0 102 0 21 0 156 0 174 255 171 0 42 0 233 0 50 0 254 0 245 255 110
[END SECTION1]
[SECTION2a]
[a] 1424457484310
[b] 5313402937
[c] 873348378938
[d] 882992596992
[e] 14957596088
[SECTION2b]
243 62 184 145 250 180 106 208 248 87 186 137 127 204 18 142 37 67 36 72 48 204 255 30 243 78 44 121 112 139 76 71 131 50 118 10 42 8 67 4 98 110 37 5 208 104 56 55 225 56 0 102 0 21 0 156 0 174 255 171 0 42 0 233 0 50 0 254 0 245 255 110
[END SECTION2]
That pattern continues for N sections.
Currently I read the file and put it into two columns:
import numpy as np
from easygui import fileopenbox

filename_load = fileopenbox(msg=None, title='Load Data File',
                            default="Z:\*",
                            filetypes=None)
col1_data = np.genfromtxt(filename_load, skip_header=1, dtype=None,
                          usecols=(0,), usemask=True, invalid_raise=False)
col2_data = np.genfromtxt(filename_load, skip_header=1, dtype=None,
                          usecols=(1,), usemask=True, invalid_raise=False)
I was then going to use where to find the indices of the values I wanted, and then make a new array of those values:
arr_index = np.where(col1_data == '[b]')
new_array = col2_data[arr_index]
The problem with that is that I end up with arrays of two different sizes because of the odd file format, so the data in the array won't match up properly with the right variable name.
I have tried a few other alternatives and get stuck because of the odd text file format and how to read it into Python.
I'm not sure if I should stay on this track and, if so, how to address the problem, or try a totally different approach.
Thanks in advance!
A possible solution is to sort your data into a hierarchy of OrderedDict() dictionaries:
from collections import OrderedDict
import re
ss = """[SECTION1a]
[a] 1424457484310
[b] 5313402937
[c] 873348378938
[d] 882992596992
[e] 14957596088
[SECTION1b]
243 62 184 145 250 180 106 208 248 87 186 137 127 204 18 142 37 67 36 72 48 204 255 30 243 78 44 121 112 139 76 71 131 50 118 10 42 8 67 4 98 110 37 5 208 104 56 55 225 56 0 102 0 21 0 156 0 174 255 171 0 42 0 233 0 50 0 254 0 245 255 110
[END SECTION1]
[SECTION2a]
[a] 1424457484310
[b] 5313402937
[c] 873348378938
[d] 882992596992
[e] 14957596088
[SECTION2b]
243 62 184 145 250 180 106 208 248 87 186 137 127 204 18 142 37 67 36 72 48 204 255 30 243 78 44 121 112 139 76 71 131 50 118 10 42 8 67 4 98 110 37 5 208 104 56 55 225 56 0 102 0 21 0 156 0 174 255 171 0 42 0 233 0 50 0 254 0 245 255 110
[END SECTION2]"""
# regular expressions for matching SECTIONs
p1 = re.compile(r"^\[SECTION[0-9]+a\]")
p2 = re.compile(r"^\[SECTION[0-9]+b\]")
p3 = re.compile(r"^\[END SECTION[0-9]+\]")
def parse(ss):
    """ Make hierarchical dict from string """
    ll, l_cnt = ss.splitlines(), 0
    d = OrderedDict()
    while l_cnt < len(ll):  # iterate through lines
        l = ll[l_cnt].strip()
        if p1.match(l):  # new sub dict for [SECTION*a]
            dd, nn = OrderedDict(), l[1:-1]
            l_cnt += 1
            while (p2.match(ll[l_cnt].strip()) is None and
                   p3.match(ll[l_cnt].strip()) is None):
                ww = ll[l_cnt].split()
                dd[ww[0][1:-1]] = int(ww[1])
                l_cnt += 1
            d[nn] = dd
        elif p2.match(l):  # array of ints for [SECTION*b]
            d[l[1:-1]] = [int(w) for w in ll[l_cnt+1].split()]
            l_cnt += 2
        elif p3.match(l):
            l_cnt += 1
        else:  # skip any line that does not match a known marker
            l_cnt += 1
    return d
dd = parse(ss)
Note that you can get much more robust code if you use an existing parsing tool (e.g., Parsley).
To retrieve '[c]' from all sections, do:
print("All entries for [c]: ", end="")
cc = [d['c'] for s,d in dd.items() if s.endswith('a')]
print(", ".join(["{}".format(c) for c in cc]))
# Gives: All entries for [c]: 873348378938, 873348378938
Or you could traverse the whole dictionary:
def print_recdicts(d, tbw=0):
    """ Print the hierarchical dict """
    for k, v in d.items():
        if type(v) is OrderedDict:
            print(" "*tbw + "* {}:".format(k))
            print_recdicts(v, tbw+2)
        else:
            print(" "*tbw + "* {}: {}".format(k, v))
print_recdicts(dd)
# Gives:
# * SECTION1a:
# * a: 1424457484310
# * b: 5313402937
# ...
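Since the original goal was a NumPy array, the same comprehension used above for [c] can feed np.array directly (a small sketch, assuming you want the scalar [b] and [d] entries from each section):
import numpy as np

# every [b] and every [d] value, in section order, ready for calculations
b_vals = np.array([d['b'] for s, d in dd.items() if s.endswith('a')])
d_vals = np.array([d['d'] for s, d in dd.items() if s.endswith('a')])
print(b_vals.mean(), d_vals.sum())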
The following should do it. It uses a running store (tally) to cope with missing values, then writes the state out when hitting the end marker.
import re
import numpy as np
filename = "yourfilenamehere.txt"
# [e] 14957596088
match_line_re = re.compile(r"^\[([a-z])\]\W(\d*)")
result = {
    'b': [],
    'd': [],
}
tally_empty = dict(zip(result.keys(), [np.nan] * len(result)))
tally = tally_empty.copy()  # copy, so the empty template is never mutated
with open(filename, 'r') as f:
    for line in f:
        if line.startswith('[END SECTION'):
            # Write accumulated data to the lists
            for k, v in tally.items():
                result[k].append(v)
            tally = tally_empty.copy()  # start a fresh tally for the next section
        else:
            # Map the items using regex
            m = match_line_re.search(line)
            if m:
                k, v = m.group(1), m.group(2)
                print(k, v)
                if k in tally:
                    tally[k] = int(v)  # store as a number, not a string
b = np.array(result['b'])
d = np.array(result['d'])
Note, whatever keys are in the result dict definition will be in the output.
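The resulting arrays can then go straight into whatever calculation you need (a quick usage sketch; the nan-aware reductions are used in case a section was missing a key and contributed NaN):
# sections missing a key contribute NaN, so prefer the nan-aware reductions
print(np.nanmean(b), np.nanmax(d))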
Related
I want to loop through a dataset and replace specific column values with one and the same [value]
The whole dataset has 91164 rows.
I need to replace vec_red, vec_green, vec_blue with new_data.
new_data has a shape of (91164,) and its positions line up with the index of my dataframe.
For example, the last item is
This 1 needs to be the value in val_red, val_blue, val_green.
So I want to loop through the whole dataframe and replace the values in columns 3 to 5.
What I have is:
label_idx = 0
for i in range(321):
    for j in range(284):
        (sth here) = new_data[label_idx]
        label_idx += 1
The case here is that I am updating my pixel values after filtration. Thank you.
The shape of 91164 is the result of multiplying 321 * 284. These are my pixel values in an RGB image.
Looping over rows of a dataframe is a code smell. If the 3 columns must receive the same values, you can do it in one single operation:
df[['vec_red', 'vec_green', 'vec_blue']] = np.transpose(
    np.array([new_data, new_data, new_data]))
Demo:
import numpy as np
import pandas as pd

np.random.seed(0)
nx = 284
ny = 321
df = pd.DataFrame({'x_indices': [i for j in range(ny) for i in range(nx)],
                   'y_indices': [j for j in range(ny) for i in range(nx)],
                   'vec_red': np.random.randint(0, 256, nx * ny),
                   'vec_green': np.random.randint(0, 256, nx * ny),
                   'vec_blue': np.random.randint(0, 256, nx * ny)
                   })
new_data = np.random.randint(0, 256, nx * ny)
print(df)
print(new_data)
df[['vec_red', 'vec_green', 'vec_blue']] = np.transpose(
    np.array([new_data, new_data, new_data]))
print(df)
It gives as expected:
x_indices y_indices vec_red vec_green vec_blue
0 0 0 172 167 100
1 1 0 47 92 124
2 2 0 117 65 174
3 3 0 192 249 72
4 4 0 67 108 144
... ... ... ... ... ...
91159 279 320 16 162 42
91160 280 320 142 169 145
91161 281 320 225 81 143
91162 282 320 106 93 68
91163 283 320 85 65 130
[91164 rows x 5 columns]
[ 32 48 245 ... 26 66 58]
x_indices y_indices vec_red vec_green vec_blue
0 0 0 32 32 32
1 1 0 48 48 48
2 2 0 245 245 245
3 3 0 6 6 6
4 4 0 178 178 178
... ... ... ... ... ...
91159 279 320 27 27 27
91160 280 320 118 118 118
91161 281 320 26 26 26
91162 282 320 66 66 66
91163 283 320 58 58 58
[91164 rows x 5 columns]
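As a side note, if you would rather not repeat new_data three times by hand, stacking it column-wise is an equivalent one-liner (a small sketch of the same assignment):
# same effect: stack the single column three times into a (91164, 3) block
df[['vec_red', 'vec_green', 'vec_blue']] = np.column_stack([new_data] * 3)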
I have a dataframe A with its index and columns labelled 0 to F (0-15) in hex.
0 1 2 3 4 5 6 7 8 9 A B C D E F
0 99 124 119 123 242 107 111 197 48 1 103 43 254 215 171 118
1 202 130 201 125 250 89 71 240 173 212 162 175 156 164 114 192
2 183 253 147 38 54 63 247 204 52 165 229 241 113 216 49 21
3 4 199 35 195 24 150 5 154 7 18 128 226 235 39 178 117
4 9 131 44 26 27 110 90 160 82 59 214 179 41 227 47 132
5 83 209 0 237 32 252 177 91 106 203 190 57 74 76 88 207
6 208 239 170 251 67 77 51 133 69 249 2 127 80 60 159 168
7 81 163 64 143 146 157 56 245 188 182 218 33 16 255 243 210
8 205 12 19 236 95 151 68 23 196 167 126 61 100 93 25 115
9 96 129 79 220 34 42 144 136 70 238 184 20 222 94 11 219
A 224 50 58 10 73 6 36 92 194 211 172 98 145 149 228 121
B 231 200 55 109 141 213 78 169 108 86 244 234 101 122 174 8
C 186 120 37 46 28 166 180 198 232 221 116 31 75 189 139 138
D 112 62 181 102 72 3 246 14 97 53 87 185 134 193 29 158
E 225 248 152 17 105 217 142 148 155 30 135 233 206 85 40 223
F 140 161 137 13 191 230 66 104 65 153 45 15 176 84 187 22
I built dataframe A like this:
df_sbox=pd.DataFrame(from_a_2d_nparray)
df_sbox.index = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 'A', 'B', 'C', 'D', 'E', 'F']
df_sbox.columns = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 'A', 'B', 'C', 'D', 'E', 'F']
I want to select A where index == 0-F and column == 0-F and assign it to a 2D matrix.
What can I use to select A where "index == 0-F and column == 0-F" in one statement?
You can use hex with pandas.DataFrame.loc:
num1 = 10  # row 'A' in hex
num2 = 3   # column 3
# labels 0-9 are plain ints in your frame, so only convert to a hex letter above 9
row = hex(num1)[2:].upper() if num1 > 9 else num1
col = hex(num2)[2:].upper() if num2 > 9 else num2
df.loc[row, col]
# 10
Explanation
You can use python built-in function hex to get the hex representation of an integer:
hex(12)
#0xc
Since we are not interested in the first two characters, we can omit them slicing the str:
hex(12)[2:] #from index 2 onwards
#c
Since the dataframe uses uppercase for its indices and columns, we can use str.upper to match them:
hex(12)[2:].upper()
#'C'
Additional
You can also get the upper-case hex representation using the Standard Format Specifiers:
"{:X}".format(43)
#2B
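To answer the "assign it to a 2D matrix" part: since you want every row and every column anyway, you can pull the whole table out in one statement (a small sketch; .to_numpy() assumes a reasonably recent pandas, otherwise .values does the same thing):
import numpy as np

# the full 16x16 table as a plain 2D numpy array
sbox_matrix = df_sbox.to_numpy()
print(sbox_matrix.shape)  # (16, 16)

# or, as an explicit label-based selection of every row and column
sbox_matrix = df_sbox.loc[:, :].to_numpy()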
I have this dataframe:
Timestamp DATA0 DATA1 DATA2 DATA3 DATA4 DATA5 DATA6 DATA7
0 1.478196e+09 219 128 220 27 141 193 95 50
1 1.478196e+09 95 237 27 121 90 194 232 137
2 1.478196e+09 193 22 103 217 138 195 153 172
3 1.478196e+09 181 120 186 73 120 239 121 218
4 1.478196e+09 70 194 36 16 81 129 95 217
... ... ... ... ... ... ... ... ... ...
242 1.478198e+09 15 133 112 2 236 81 94 252
243 1.478198e+09 0 123 163 160 13 156 145 32
244 1.478198e+09 83 147 61 61 33 199 147 110
245 1.478198e+09 172 95 87 220 226 99 108 176
246 1.478198e+09 123 240 180 145 132 213 47 60
I need to create temporal features like this:
Timestamp DATA0 DATA1 DATA2 DATA3 DATA4 DATA5 DATA6 DATA7
0 1.478196e+09 219 128 220 27 141 193 95 50
1 1.478196e+09 95 237 27 121 90 194 232 137
2 1.478196e+09 193 22 103 217 138 195 153 172
3 1.478196e+09 181 120 186 73 120 239 121 218
4 1.478196e+09 70 194 36 16 81 129 95 217
Timestamp DATA0 DATA1 DATA2 DATA3 DATA4 DATA5 DATA6 DATA7
1 1.478196e+09 95 237 27 121 90 194 232 137
2 1.478196e+09 193 22 103 217 138 195 153 172
3 1.478196e+09 181 120 186 73 120 239 121 218
4 1.478196e+09 70 194 36 16 81 129 95 217
5 1.478196e+09 121 69 111 204 134 92 51 190
Timestamp DATA0 DATA1 DATA2 DATA3 DATA4 DATA5 DATA6 DATA7
2 1.478196e+09 193 22 103 217 138 195 153 172
3 1.478196e+09 181 120 186 73 120 239 121 218
4 1.478196e+09 70 194 36 16 81 129 95 217
5 1.478196e+09 121 69 111 204 134 92 51 190
6 1.478196e+09 199 132 39 197 159 242 153 104
How can I do this automatically? What structure should I use, and what functions?
I was told that the dataframe should become an array of arrays, but it's not very clear to me.
If I understand it correctly, you want e.g. a list of dataframes, where each dataframe is a progressing slice of the original frame. This example would give you a list of dataframes:
import pandas as pd
# dummy dataframe
df = pd.DataFrame({'col_1': range(10), 'col_2': range(10)})
# returns slices of size slice_length with step size 1
slice_length = 5
lst = [df.iloc[i:i+slice_length, :] for i in range(df.shape[0] - slice_length)]
Please note that you are duplicating a lot of data and thus increasing memory usage. If you merely have to perform an operation on subsequent slices, it is better to loop over the dataframe and apply your function. Even better, if possible, you should try to vectorize your operation, as this will likely make a huge difference in performance.
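For instance, if the per-slice operation is something like a mean or a sum, pandas can compute it over a rolling window without materialising any slices (a hedged sketch; window=5 stands in for whatever slice length you need):
# rolling mean over every window of 5 consecutive rows, computed column by column
window_means = df.rolling(window=5).mean()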
EDIT: saving the slices to file:
If you're only interested in saving the slices to file (e.g. in a csv), you don't need to first create a list of all slices (with the associated memory usage). Instead, loop over the slices (by looping over the starting indices that define each slice), and save each slice to file.
slice_length = 5
# loop over indices (i.e. slices)
for idx_from in range(df.shape[0] - slice_length):
    # create the slice and write to file
    df.iloc[idx_from: idx_from + slice_length, :].to_csv(
        f'slice_starting_idx_{idx_from}.csv', sep=';', index=False)
Hi, I have tried this, which might match your expectations, based on indexes:
import numpy as np
import pandas as pd

x = np.array([[8,9],[2,3],[9,10],[25,78],[56,67],[56,67],[72,12],[98,24],
              [8,9],[2,3],[9,10],[25,78],[56,67],[56,67],[72,12],[98,24]])
df = pd.DataFrame(np.reshape(x, (16, 2)), columns=['Col1', 'Col2'])
print(df)
print("**********************************")

count = df['Col1'].count()  # number of rows in the dataframe
i = 0       # start index for each iteration
n = 4       # end index for each iteration
count2 = 3  # important: for 4 rows per slice set this to 4-1, i.e. 3; for 5 rows, 5-1, i.e. 4

while count != 0:      # loop until count reaches 0
    df1 = df[i:n]      # first iteration i=0, n=4 (four rows); then i=4, n=8, and so on
    if i > 0:
        print(df1.set_index(np.arange(i - count2, n - count2)))
        count2 = count2 + 3  # increment count2 so the index runs 0-3, then 1-4, and so on
    else:
        print(df1.set_index(np.arange(i, n)))
    i = n
    count = count - 4
    n = n + 4
First output of the DataFrame:
Col1 Col2
0 8 9
1 2 3
2 9 10
3 25 78
4 56 67
5 56 67
6 72 12
7 98 24
8 8 9
9 2 3
10 9 10
11 25 78
12 56 67
13 56 67
14 72 12
15 98 24
Final Output:
Col1 Col2
0 8 9
1 2 3
2 9 10
3 25 78
Col1 Col2
1 56 67
2 56 67
3 72 12
4 98 24
Col1 Col2
2 8 9
3 2 3
4 9 10
5 25 78
Col1 Col2
3 56 67
4 56 67
5 72 12
6 98 24
Note: I am also new to Python; there may be shorter ways to achieve the expected output.
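For completeness, NumPy itself can produce all the windows in one call, without any loop (a sketch, assuming NumPy >= 1.20 for sliding_window_view):
import numpy as np

# every length-4 window over the rows of df, produced in one call
windows = np.lib.stride_tricks.sliding_window_view(df.to_numpy(), 4, axis=0)
# the window axis is appended last, so move it next to the leading axis
windows = windows.transpose(0, 2, 1)   # shape (n_windows, 4, n_columns)
print(windows.shape)                   # (13, 4, 2) for the 16-row example above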
I have 2 arrays of shape (128,). I want the elementwise difference between them.
for idx, x in enumerate(test):
    if idx == 0:
        print(test[idx])
        print()
        print(library[idx])
        print()
        print(np.abs(np.subtract(library[idx], test[idx])))
output:
[186 3 172 80 187 120 127 172 96 213 103 107 137 119 33 53 54 113
200 78 140 234 77 94 151 64 199 218 170 73 152 73 0 5 121 42
0 106 166 80 115 220 56 66 194 187 51 132 55 73 150 83 91 204
108 58 183 0 32 240 255 55 151 255 189 153 77 89 42 176 204 170
93 117 194 195 59 204 149 55 111 255 218 48 72 171 122 163 255 155
198 179 69 173 108 0 0 176 249 214 193 255 106 116 0 47 255 255
255 255 210 175 67 0 95 120 21 158 0 72 120 255 121 208 255 0
61 255]
[189 0 178 72 177 124 123 167 81 235 110 123 139 107 39 54 34 102
195 59 156 255 66 112 161 65 180 236 181 69 142 82 0 0 152 38
0 102 146 86 117 230 59 77 220 182 44 121 63 59 146 41 92 213
146 70 184 0 0 255 255 42 165 255 245 152 114 88 63 138 255 158
96 141 221 201 47 191 179 42 156 255 237 7 136 168 133 142 254 164
236 250 56 202 141 0 0 197 255 184 212 255 108 133 0 7 255 255
255 255 243 197 74 0 50 143 24 175 0 74 101 255 121 207 255 0
146 255]
[ 3 253 6 248 246 4 252 251 241 22 7 16 2 244 6 1 236 245
251 237 16 21 245 18 10 1 237 18 11 252 246 9 0 251 31 252
0 252 236 6 2 10 3 11 26 251 249 245 8 242 252 214 1 9
38 12 1 0 224 15 0 243 14 0 56 255 37 255 21 218 51 244
3 24 27 6 244 243 30 243 45 0 19 215 64 253 11 235 255 9
38 71 243 29 33 0 0 21 6 226 19 0 2 17 0 216 0 0
0 0 33 22 7 0 211 23 3 17 0 2 237 0 0 255 0 0
85 0]
So, reading it: the last array printed out is the difference between the first two arrays.
189 - 186 is 3
3 - 0 is 3 (not 253)
I must be missing something trivial.
I'd rather not zip and subtract the values as I have a ton of data.
Your arrays probably have dtype uint8; they cannot hold values outside the interval [0, 256), and subtracting 3 from 0 wraps around to 253. The absolute value of 253 is still 253.
Use a different dtype, or restructure your computation to avoid hitting the limits of the dtype you're using.
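For example, casting to a wider signed dtype before subtracting avoids the wrap-around (a minimal sketch; int16 comfortably holds any uint8 value and any negative intermediate result):
import numpy as np

# cast to a signed type wide enough for negative intermediate results
diff = np.abs(library.astype(np.int16) - test.astype(np.int16))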
You can simply subtract two numpy arrays like this; it is an element-wise operation:
>>> test = np.array([1, 2, 3])
>>> library = np.array([1, 1, 1])
>>> np.abs(library - test)
array([0, 1, 2])
So I have webscraped data from an MLB betting site aggregator and have my data points in two lists. The first list is all of the teams; it is formatted so that teamlist[1] and teamlist[2] are playing each other, then teamlist[3] and teamlist[4] play each other, and so on. Each row index is a team, and each column index is a betting site.
site1|site2|site3|site4|...
team1
team2
team3
team4
...
This outlines the general form.
I have figured out the pattern each betting odd needs to follow, but I cannot figure out how to input them properly.
I apologize, I do not have the reputation to post the actual image, so I must use a link instead. It outlines the structure I need to index; the data points are the indices each value needs to go to. As you can see, df[0,0] = moneylines[0] and df[0,1] = moneylines[1]. My primary issue is that once I make it through the first two rows (which are done in the same loop) and it tries to go to the third row, it reindexes over the first two rows.
Here is the code I am currently using to populate the DataFrame. moneylines is the list of betting odds I am trying to populate the dataframe with, and teams is the row index:
ctr = 0
for t in range(0, int(len(teams)/2)):
    for m in range(14):
        df.ix[m, t] = moneylines[ctr]
        df.ix[m, t+1] = moneylines[ctr+1]
        ctr = ctr + 2
Please let me know if there is anything else I can include to help solve this question.
Your issue is due to your first for loop. You increment it one by one, so:
first loop:
t = 0
you fill line 0 and line 1
then
t = 1
you fill line 1 and line 2
and so on...
Instead of:
for t in range(0, int(len(teams)/2)):
you should use:
for t in range(0, len(teams), 2)
NB: You can also multiply t by 2 in the index, but it's not as logical as using the above solution.
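Putting it together, the corrected fill loop would look roughly like this (a sketch that keeps your original row/column orientation; .iloc is substituted for the deprecated .ix):
ctr = 0
for t in range(0, len(teams), 2):       # one pair of teams per iteration
    for m in range(14):                 # 14 betting sites
        df.iloc[m, t] = moneylines[ctr]          # odds for the first team of the matchup
        df.iloc[m, t + 1] = moneylines[ctr + 1]  # odds for the second team
        ctr += 2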
I hope it helps,
I'm posting an alternative to looping over the values of a dataframe, which you can avoid pretty easily here, because doing so loses the efficiency boost of using a dataframe in the first place.
It's not entirely clear to me what the formatting of your starting data is, but if, say, you have a series s with values 0 through 195:
s = pd.Series(range(196))
Then, using numpy.reshape you could get the pairings:
>>> s.values.reshape((len(s)//2, 2))
array([[ 0, 1],
[ 2, 3],
[ 4, 5],
...,
[190, 191],
[192, 193],
[194, 195]])
And using it again you could get the desired output:
>>> pd.DataFrame(s.values.reshape((len(s)//2, 2)).T.reshape((len(s)//14, 14))).sort_values(0)
0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 0 2 4 6 8 10 12 14 16 18 20 22 24 26
7 1 3 5 7 9 11 13 15 17 19 21 23 25 27
1 28 30 32 34 36 38 40 42 44 46 48 50 52 54
8 29 31 33 35 37 39 41 43 45 47 49 51 53 55
2 56 58 60 62 64 66 68 70 72 74 76 78 80 82
9 57 59 61 63 65 67 69 71 73 75 77 79 81 83
3 84 86 88 90 92 94 96 98 100 102 104 106 108 110
10 85 87 89 91 93 95 97 99 101 103 105 107 109 111
4 112 114 116 118 120 122 124 126 128 130 132 134 136 138
11 113 115 117 119 121 123 125 127 129 131 133 135 137 139
5 140 142 144 146 148 150 152 154 156 158 160 162 164 166
12 141 143 145 147 149 151 153 155 157 159 161 163 165 167
6 168 170 172 174 176 178 180 182 184 186 188 190 192 194
13 169 171 173 175 177 179 181 183 185 187 189 191 193 195