I'm trying to read a binary file which is composed of several matrices of floats, each one preceded and followed by a single int. The MATLAB code to achieve this is the following:
fid1 = fopen(fname1,'r');
for i = 1:xx
    Rstart = fread(fid1,1,'int32');      % read blank at the beginning
    ZZ1 = fread(fid1,[Nx Ny],'real*4');  % read z
    Rend = fread(fid1,1,'int32');        % read blank at the end
end
As you can see, each matrix size is Nx by Ny. Rstart and Rend are just dummy values. ZZ1 is the matrix I'm interested in.
I am trying to do the same in Python with the following:
Rstart = np.fromfile(fname1,dtype='int32',count=1)
ZZ1 = np.fromfile(fname1,dtype='float32',count=Ny1*Nx1).reshape(Ny1,Nx1)
Rend = np.fromfile(fname1,dtype='int32',count=1)
Then, I have to iterate to read the subsequent matrices, but np.fromfile, when given a filename, always starts reading from the beginning of the file; it doesn't retain a file position between calls.
Another option:
with open(fname1,'rb') as f:
    ZZ1 = np.memmap(f, dtype='float32', mode='r', offset=4, shape=(Ny1,Nx1))
    plt.pcolor(ZZ1)
This works fine for the first array, but it doesn't read the next matrices. Any idea how I can do this?
I searched for similar questions but didn't find a suitable answer.
Thanks
The cleanest way to read all your matrices in a single vectorized statement is to use a structured array:
dtype = [('start', np.int32), ('ZZ', np.float32, (Ny1, Nx1)), ('end', np.int32)]
with open(fname1, 'rb') as fh:
    data = np.fromfile(fh, dtype)

print(data['ZZ'])
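If it helps to see how the record dtype lines up with the file layout, here is a small self-check with made-up dimensions (Ny1 = 2, Nx1 = 3) and a scratch file name:

import numpy as np

# write two fake records in the int32 / float32-matrix / int32 layout,
# then read them back with the same structured dtype
Ny1, Nx1 = 2, 3
dtype = [('start', np.int32), ('ZZ', np.float32, (Ny1, Nx1)), ('end', np.int32)]

records = np.zeros(2, dtype=dtype)
records['ZZ'] = np.arange(2 * Ny1 * Nx1, dtype=np.float32).reshape(2, Ny1, Nx1)
records.tofile('test.bin')

data = np.fromfile('test.bin', dtype=dtype)
print(data['ZZ'].shape)  # (2, 2, 3)

Each element of data is one complete record, so data['ZZ'][i] is the i-th matrix.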
There are two solutions to this problem.
The first one:
for i in range(x):
    ZZ1 = np.memmap(fname1, dtype='float32', mode='r',
                    offset=4 + 8*i + Nx1*Ny1*4*i, shape=(Ny1,Nx1))
where i is the index of the matrix you want. Each record occupies 8 + Nx1*Ny1*4 bytes (two 4-byte markers plus the float data), and the first matrix starts after a 4-byte marker, which is where the offset expression comes from.
The second one:
fid = open(fname1,'rb')
for i in range(x):
    Rstart = np.fromfile(fid, dtype='int32', count=1)
    ZZ1 = np.fromfile(fid, dtype='float32', count=Ny1*Nx1).reshape(Ny1,Nx1)
    Rend = np.fromfile(fid, dtype='int32', count=1)
So, as morningsun points out, np.fromfile can receive a file object as an argument and will keep track of the file position between calls. Notice that you must open the file in binary mode ('rb').
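Because the file position is preserved between calls, you can, for example, collect every matrix into a single 3-D array. A minimal sketch, reusing x, Ny1, Nx1 and fname1 from the question:

import numpy as np

matrices = np.empty((x, Ny1, Nx1), dtype=np.float32)
with open(fname1, 'rb') as fid:
    for i in range(x):
        np.fromfile(fid, dtype='int32', count=1)     # skip the leading marker
        matrices[i] = np.fromfile(fid, dtype='float32',
                                  count=Ny1 * Nx1).reshape(Ny1, Nx1)
        np.fromfile(fid, dtype='int32', count=1)     # skip the trailing marker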
Related
After appending data to an empty array and verifying its shape, I use tofile to save it. When I read it back with fromfile, the shape is much larger (4x).
# Create an empty array
rx_data = np.empty((0,), dtype='int16')

# File name to write data to
file_name = "Test_file.bin"

# Generate two sine waves and add them to rx_data in interleaved fashion
for i in range(100):
    I = np.sin(2*3.14*(i/100))
    rx_data = np.append(rx_data, I)
    Q = np.cos(2*3.14*(i/100))
    rx_data = np.append(rx_data, Q)

print(rx_data.shape)

# Write array to .bin file
rx_data.tofile(file_name)

# Read back data from the file
s_interleaved = np.fromfile(file_name, dtype=np.int16)
print(s_interleaved.shape)
The above code prints (200,) before the array is saved, but (800,) when the array is read back in. Why does this happen?
The problem is that rx_data has a float64 data type before you save it. This is because I and Q are float64 values, so when you use np.append, the result is promoted to be compatible with them. Reading the file back as int16 then misinterprets the bytes: each 8-byte float64 is split into four 2-byte int16 values, which is exactly why the shape is 4x larger.
Also, populating an array using np.append is an anti-pattern in numpy. Appending is common with standard Python lists, but with numpy it is often better to create an array of the shape and data type you need up front, then fill in the values in the for loop. This is better because you only need to create the array once; with np.append, you create a new copy of the whole array every time you call it.
rx_data = np.empty(200, dtype="float64")
for i in range(200):
    rx_data[i] = np.sin(...)
Here is the code from the question, fixed so it works: the file is read back as float64, the dtype that was actually written. But this pattern should be avoided in general; prefer pre-allocating an array and filling in the values in the for loop.
# Create an empty array (np.append will immediately promote it to float64)
rx_data = np.empty((0,), dtype='int16')

# File name to write data to
file_name = "Test_file.bin"

# Generate two sine waves and add them to rx_data in interleaved fashion
for i in range(100):
    I = np.sin(2*3.14*(i/100))
    rx_data = np.append(rx_data, I)
    Q = np.cos(2*3.14*(i/100))
    rx_data = np.append(rx_data, Q)

print(rx_data.shape)

# Write array to .bin file
print("rx_data data type:", rx_data.dtype)  # float64
rx_data.tofile(file_name)

# Read back data from the file, using the dtype that was actually written
s_interleaved = np.fromfile(file_name, dtype=np.float64)
print(s_interleaved.shape)

# Test that the arrays are equal
np.allclose(rx_data, s_interleaved)  # True
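For completeness, a sketch of the same I/Q example written with pre-allocation instead of np.append (np.pi swapped in for 3.14):

import numpy as np

n = 100
rx_data = np.empty(2 * n, dtype=np.float64)  # allocated once, filled in place
for i in range(n):
    rx_data[2*i] = np.sin(2 * np.pi * (i / n))      # I sample
    rx_data[2*i + 1] = np.cos(2 * np.pi * (i / n))  # Q sample

rx_data.tofile("Test_file.bin")
readback = np.fromfile("Test_file.bin", dtype=np.float64)
print(readback.shape)  # (200,)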
I am relatively new to Python. As part of my astronomy project work, I have to deal with binary files (which, of course, are also new to me). I was given a binary file and a Python code which reads data from it. I was then asked by my professor to understand how the code works on the binary file. I spent a couple of days trying to figure it out, but nothing helped. Can anyone here help me with the code?
# Read the binary opacity file
f = open(file, "rb")
# read file dimension sizes
a = np.fromfile(f, dtype=np.int32, count=16)
NX, NY, NZ = a[1], a[4], a[7]
# read the time and time step
time, time_step = np.fromfile(f, dtype=np.float64, count=2)
# number of iterations
nite = np.fromfile(f, dtype=np.int32, count=1)
# radius array
trash = np.fromfile(f, dtype=np.float64, count=1)
rad = np.fromfile(f, dtype=np.float64, count=a[1])
# phi array
trash = np.fromfile(f, dtype=np.float64, count=1)
phi = np.fromfile(f, dtype=np.float64, count=a[4])
# close the file
f.close()
As far as I know, the binary file contains several parameters (e.g. radius, phi, sound speed, radiation energy) and many values for each. The above code extracts the values of two of those parameters, radius and phi, from the binary file. Both radius and phi have more than 100 values. The program works, but I am not able to understand how it works. Any help would be appreciated.
The binary file is essentially just a long list of continuous data; you need to tell np.fromfile() both where to look and what type of data to expect.
Perhaps it's easiest to understand if you create your own file:
import numpy as np
with open('numpy_testfile', 'w+b') as f:
    ## we create a "header" line, which collects the lengths of all relevant arrays;
    ## you can then use this header line to tell np.fromfile() *how long* the arrays are
    dimensions = np.array([0,10,0,0,10,0,3,10], dtype=np.int32)
    dimensions.tofile(f)  ## write to file

    a = np.arange(0,10,1)  ## some fake data, length 10
    a.tofile(f)            ## write to file
    print(a.dtype)

    b = np.arange(30,40,1)  ## more fake data, length 10
    b.tofile(f)             ## write to file
    print(b.dtype)

    ## more interesting data, this time of type float, length 3
    c = np.array([3.14,4.22,55.0], dtype=np.float64)
    c.tofile(f)  ## write to file
    print(c.dtype)

    a.tofile(f)  ## just for fun, let's write "a" again
with open('numpy_testfile', 'rb') as f:
    ### what's important to know about this step is that
    ### numpy is "seeking" through the file automatically, i.e. it considers
    ### the first count=8 items, then the next count=10, and so on,
    ### as "continuous data"
    dim = np.fromfile(f, dtype=np.int32, count=8)
    print(dim)  ## our header line: [ 0 10  0  0 10  0  3 10]

    a = np.fromfile(f, dtype=np.int64, count=dim[1])  ## read the dim[1]=10 numbers
    b = np.fromfile(f, dtype=np.int64, count=dim[4])  ## and the next 10

    ## now it's dim[6]=3 values, and the dtype is float64
    c = np.fromfile(f, dtype=np.float64, count=dim[6])

    ## read "the rest", unspecified length; let's hope it's all int64, actually!
    d = np.fromfile(f, dtype=np.int64)

print(a)
print(b)
print(c)
print(d)
Addendum: the numpy documentation is quite explicit when it comes to discouraging the use of np.tofile() and np.fromfile():
Do not rely on the combination of tofile and fromfile for data storage, as the binary files generated are not platform independent. In particular, no byte-order or data-type information is saved. Data can be stored in the platform independent .npy format using save and load instead.
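For comparison, a minimal sketch of the recommended .npy route, where dtype, shape and byte order are stored in the file header so nothing has to be guessed on read:

import numpy as np

a = np.arange(10)
np.save('numpy_testfile.npy', a)   # header records dtype and shape
b = np.load('numpy_testfile.npy')
print(np.array_equal(a, b))        # True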
Personal side note: if you spent a couple of days trying to understand this code, don't feel discouraged about learning Python; we all start somewhere. I'd suggest being honest with your professor about the obstacles you've hit (if this comes up in conversation), as she/he should be able to correctly assess where you're at when it comes to programming. :-)
from astropy.io import ascii
data = ascii.read('/directory/filename')
column1data = data['name_of_column_1']
column2data = data['name_of_column_2']
etc.
column1data is now an array of all the values under that header; note that the column name is passed as a string.
I use this method to import SourceExtractor .dat files, which are in ASCII format.
I believe this is a more elegant way to import data from ASCII files.
In python, using the OpenCV library, I need to create some polylines. The example code for the polylines method shows:
cv2.polylines(img,[pts],True,(0,255,255))
I have all the 'pts' laid out in a text file in the format:
x1,y1,x2,y2,x3,y3,x4,y4
x1,y1,x2,y2,x3,y3,x4,y4
x1,y1,x2,y2,x3,y3,x4,y4
How can I read this file and provide the data to the [pts] variable in the method call?
I've tried the np.array(csv.reader(...)) method as well as a few others I've found examples of. I can successfully read the file, but it's not in the format the polylines method wants. (I am a newbie when it comes to Python; if this were C++ or Java, it wouldn't be a problem.)
I would try to use numpy to read the csv as an array.
from numpy import genfromtxt
p = genfromtxt('myfile.csv', delimiter=',')
cv2.polylines(img,p,True,(0,255,255))
You may have to pass a dtype argument to genfromtxt if you need to coerce the data to a specific format.
https://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html
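For the row format in the question (four x,y pairs per line), the coercion step might look like this; the blank canvas is only there to make the sketch self-contained, and cv2.polylines needs integer point coordinates:

import numpy as np
from numpy import genfromtxt
import cv2

# read the rows, then turn each row of 8 numbers into 4 (x, y) points
pts = genfromtxt('myfile.csv', delimiter=',')
pts = pts.reshape(-1, 4, 2).astype(np.int32)

img = np.zeros((512, 512, 3), dtype=np.uint8)  # hypothetical blank canvas
cv2.polylines(img, list(pts), True, (0, 255, 255))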
In case you know it is a fixed number of items in each row:
import csv

with open('myfile.csv') as csvfile:
    rows = csv.reader(csvfile)
    res = list(zip(*rows))

print(res)
I know it's not pretty and there is probably a MUCH BETTER way to do this, but it works. That being said, if someone could show me a better way, it would be much appreciated.
pointlist = []
with open(args["slots"]) as f:
    data = f.read().split()

for row in data:
    tmp = []
    col = row.split(";")
    for points in col:
        xy = points.split(",")
        tmp += [[int(pt) for pt in xy]]
    pointlist += [tmp]

slots = np.asarray(pointlist)
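One possible tidier variant of the same parsing, for what it's worth (a sketch assuming the same input format of whitespace-separated rows, semicolon-separated points, and comma-separated coordinates, with args["slots"] as above):

import numpy as np

with open(args["slots"]) as f:
    slots = np.asarray([[[int(v) for v in pt.split(',')] for pt in row.split(';')]
                        for row in f.read().split()])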
You might need to draw each polyline individually (to expand on #Chris's answer):
from numpy import genfromtxt

lines = genfromtxt('myfile.csv', delimiter=',', dtype=np.int32)
for line in lines:
    cv2.polylines(img, [line.reshape((-1, 2))], True, (0,255,255))
This is a .dat file.
In MATLAB, I can use this code to read it:
lonlatfile='NOM_ITG_2288_2288(0E0N)_LE.dat';
f=fopen(lonlatfile,'r');
lat_fy=fread(f,[2288*2288,1],'float32');
lon_fy=fread(f,[2288*2288,1],'float32')+86.5;
lon=reshape(lon_fy,2288,2288);
lat=reshape(lat_fy,2288,2288);
Here are some results from MATLAB:
[screenshot of the resulting lon/lat values]
How can I do the same in Python and get the same result?
PS: My code is this:
def fromfileskip(fid, shape, counts, skip, dtype):
    """
    fid : file object, should be an open binary file.
    shape : tuple of ints, the desired shape of each data block.
            For a 2d array with xdim, ydim = 3000, 2000 and xdim the fastest
            dimension, shape = (2000, 3000).
    counts : int, number of times to read a data block.
    skip : int, number of bytes to skip between reads.
    dtype : np.dtype object, type of each binary element.
    """
    data = np.zeros((counts,) + shape, dtype=dtype)
    for c in range(counts):
        block = np.fromfile(fid, dtype=dtype, count=np.prod(shape))
        data[c] = block.reshape(shape)
        fid.seek(fid.tell() + skip)
    return data
fid = open(r'NOM_ITG_2288_2288(0E0N)_LE.dat','rb')
data = fromfileskip(fid,(2288,2288),1,0,np.float32)
loncenter = 86.5 #Footpoint of FY2E
latcenter = 0
lon2e = data+loncenter
lat2e = data+latcenter
Lon = lon2e.reshape(2288,2288)
Lat = lat2e.reshape(2288,2288)
But, the result is different from that of Matlab.
You should be able to translate the code directly into Python with little change:
lonlatfile = 'NOM_ITG_2288_2288(0E0N)_LE.dat'
with open(lonlatfile, 'rb') as f:
    lat_fy = np.fromfile(f, count=2288*2288, dtype='float32')
    lon_fy = np.fromfile(f, count=2288*2288, dtype='float32') + 86.5
lon = lon_fy.reshape([2288, 2288], order='F')
lat = lat_fy.reshape([2288, 2288], order='F')
Normally the numpy reshape would be transposed compared to the MATLAB result, due to the different index orders. The order='F' part makes sure the final output has the same layout as the MATLAB version. It is optional; if you keep the different index order in mind, you can leave it off.
The with open() as f: opens the file in a safe manner, making sure it is closed again when you are done even if the program has an error or is cancelled for whatever reason. Strictly speaking it is not needed, but you really should always use it when opening a file.
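If the index-order point is unfamiliar, this small comparison shows what order='F' changes:

import numpy as np

v = np.arange(6)
print(v.reshape(2, 3))             # C order, rows filled first:  [[0 1 2], [3 4 5]]
print(v.reshape(2, 3, order='F'))  # MATLAB order, columns first: [[0 2 4], [1 3 5]]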
I'm having a lot of trouble writing to and reading from a binary file when working with an n-by-1 vector that has been written to the file using struct.pack. The file structure looks like this (given an argument data that is of type numpy.array):
test.file
---------
[format_code = 3] : 4 bytes (the code 3 means a vector) - fid.write(struct.pack('i',3))
[rows]            : 4 bytes - fid.write(struct.pack('i',sz[0])), where sz = data.shape
[cols]            : 4 bytes - fid.write(struct.pack('i',sz[1]))
[data]            : type double = 8 bytes * (rows * cols)
Unfortunately, since these files are mostly written by MATLAB, where I have a working class that reads and writes these fields, I can't write only the number of rows (I need the number of columns as well, even if it is just 1).
I've tried a few ways to pack data, none of which have worked when trying to unpack it (assume I've opened my file denoted by fid in 'rb'/'wb' and have done some error checking):
# write data
sz = data.shape
datalen=8*sz[0]*sz[1]
fid.write(struct.pack('i',3)) # format code
fid.write(struct.pack('i',sz[0])) # rows
fid.write(struct.pack('i',sz[1])) # columns
### write attempt ###
for i in xrange(sz[0]):
    for j in xrange(sz[1]):
        fid.write(struct.pack('d',float(data[i][j])))  # write in 'C' convention, so we transpose
### read attempt ###
format_code = struct.unpack('i',fid.read(struct.calcsize('i')))[0]
rows = struct.unpack('i',fid.read(struct.calcsize('i')))[0]
cols = struct.unpack('i',fid.read(struct.calcsize('i')))[0]
out_datalen = 8 * rows * cols # size of structure
output_data=numpy.array(struct.unpack('%dd' % out_datalen,fid.read(datalen)),dtype=float)
So far, when reading, my output just seems to have been multiplied by random things; I don't know what's happening.
I found another similar question, and so I wrote my data as such:
fid.write(struct.pack('%sd' % len(data), *data))
However, when reading it back using:
numpy.array(struct.unpack('%sd' % out_datalen,fid.read(datalen)),dtype=float)
I get nothing in my array.
Similarly, just doing:
fid.write(struct.pack('%dd' % datalen, *data))
and reading it back with:
numpy.array(struct.unpack('%dd' % out_datalen,fid.read(datalen)),dtype=float)
also gives me an empty array. How can I fix this?
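For reference, a minimal symmetric writer/reader for the layout described above (a sketch, not guaranteed to match the MATLAB class byte-for-byte). The detail that is easy to trip on: the count in the struct format string is the number of doubles, rows * cols, while the number of bytes to read is 8 * rows * cols:

import struct
import numpy as np

def write_vector(fid, data):
    rows, cols = data.shape
    fid.write(struct.pack('iii', 3, rows, cols))                  # header: code, rows, cols
    fid.write(struct.pack('%dd' % (rows * cols), *data.ravel()))  # the doubles

def read_vector(fid):
    format_code, rows, cols = struct.unpack('iii', fid.read(12))
    n = rows * cols                        # count of doubles, NOT bytes
    values = struct.unpack('%dd' % n, fid.read(8 * n))
    return np.array(values).reshape(rows, cols)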