I have a 2D array of int values that I want to convert into an image.
The 2D array is generated randomly with values between 1 and 3, taking into account what the neighboring ints in the array are. I want to map 1, 2, 3 to R, G, B in an image to better see what the outcome of the generator is.
What is the best way to do this?
I would use the matplotlib library. Just use plt.imshow or plt.pcolormesh (the second one is technically better for discrete values). The default colormap is pretty close to RGB in this case, but you could use another colormap if you wanted to. For example:
import numpy as np
import matplotlib.pyplot as plt
# Creating random 1-3 data in a 2D array
data = np.random.randint(1,4,[100,150])
plt.pcolormesh(data)
I'm using IPython with %matplotlib inline; you might need to call plt.show() to get the figure to draw if you are not using IPython.
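If you want 1, 2, 3 to map literally to red, green and blue, you can make the mapping explicit with a ListedColormap; here is a minimal sketch (the three colour names are just my assumption of what you want):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Creating random 1-3 data in a 2D array
data = np.random.randint(1, 4, [100, 150])

# 1 -> red, 2 -> green, 3 -> blue
cmap = ListedColormap(['red', 'green', 'blue'])
plt.pcolormesh(data, cmap=cmap, vmin=1, vmax=3)
plt.show()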
I have a 3D image stored in a NumPy array, fet_img. Its size is (400, 400, 74).
I want to access the 74 2D images separately, each of size (400, 400).
I would expect that this would do the trick:
fet_img[:][:][0]
However, when I print the shape of this, I get (400,74)
I tried
fet_img[0][:][:]
and
fet_img[:][0][:]
but the shape of all three of these is (400, 74)...
I'm overlooking something, but I can't quite figure out what.
Note: I'm running this from a local Jupyter notebook and all values are dtype('float64'), if that matters at all.
You should use fet_img[:, :, 0] instead.
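The reason fet_img[:][:][0] does not do what you expect is that it is three separate indexing operations: fet_img[:] is just a view of the whole array, so the final [0] selects the first row along axis 0 and you end up with a (400, 74) slice. Indexing with a single tuple selects along the third axis. A quick sketch (using a zero array as a stand-in for your loaded volume):
import numpy as np

fet_img = np.zeros((400, 400, 74))      # stand-in for the real data

print(fet_img[:][:][0].shape)           # (400, 74)  -- [:] is a view, [0] takes the first row
print(fet_img[:, :, 0].shape)           # (400, 400) -- one 2D slice along the third axis

# all 74 slices, one at a time
for k in range(fet_img.shape[2]):
    slice_k = fet_img[:, :, k]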
I have to translate some code from Octave to Python. Among many things, the program does something like this:
load_image = imread('image.bmp')
which, as you can see, is a bitmap. If I then do
size(load_image) it prints (1200,1600,3), which is fine, but when I do:
load_image
it prints what looks like a one-dimensional array, which does not make any sense to me. My question is how Octave interprets these values, because I have to load the same image in OpenCV and I couldn't find a way to do it.
Thanks.
What you have is a 3D array in Octave. The third dimension holds the RGB values for each pixel, while the first two dimensions are the rows and columns respectively. However, when you print it you see all the values in the array, and hence it looks like a 1D array.
Try something like this and look at the output:
load_image(:,:,i)
The i stands for the colour channel (R, G or B) of your image. If you want to plot your 3D image in 2D using matplotlib or similar, you need to do the same.
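For the OpenCV side, a minimal sketch of loading the same bitmap in Python (note that cv2.imread returns the channels in BGR order, whereas Octave's imread gives RGB; the filename is just the one from your question):
import cv2

load_image = cv2.imread('image.bmp')     # numpy array of shape (1200, 1600, 3), BGR order
print(load_image.shape)

blue = load_image[:, :, 0]               # Python equivalent of load_image(:,:,i) in Octave,
green = load_image[:, :, 1]              # keeping the BGR channel order in mind
red = load_image[:, :, 2]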
Is it possible for matplotlib only update the newest point to the figure instead of re-draw the whole figure?
For example: this may be the fastest way for dynamic plotting
initiate:
from matplotlib.figure import Figure

fig1 = Figure(figsize=(8.0, 8.0), dpi=100)
axes1 = fig1.add_subplot(111)
line1, = axes1.plot([], [], animated=True)
when new data is coming:
line1.set_data(new_xarray,new_yarray)
axes1.draw_artist(line1)
fig1.canvas.update()
fig1.canvas.flush_events()
But this will re-draw the whole figure! I'm wondering whether something like this is possible:
when new data is coming:
axes1.draw_only_last_point(new_x,new_y)
update_the_canvas()
It would only add this new point (new_x, new_y) to the axes instead of re-drawing every point.
And if you know which graphics library for Python can do that, please answer or comment, thank you so much!
Really appreciate your help!
Is redrawing the entire figure the only problem, i.e. is it OK to redraw the line itself as long as the rest of the figure is unchanged? Is the data known beforehand?
If the answers to those questions are NO and YES, respectively, then it might be worth looking into the animation class for matplotlib. One example where the data is known beforehand, but the points are plotted one by one, is this example. In that example, the figure is redrawn if the newest point is outside of the current x-limits. If you know the range of your data, you can avoid that by setting the limits beforehand.
You might also want to look at this answer, the animation examples list, or the animation documentation.
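As a rough sketch of what that looks like in code, assuming you know the axis limits up front (the sine data here is just a placeholder):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
ax.set_xlim(0, 100)                 # fixed limits, so the background never needs redrawing
ax.set_ylim(-1.5, 1.5)
line, = ax.plot([], [], animated=True)

xs, ys = [], []

def update(frame):
    # append the newest point and push it to the artist; with blit=True
    # only the line is redrawn, not the whole figure
    xs.append(frame)
    ys.append(np.sin(frame / 10.0))
    line.set_data(xs, ys)
    return line,

ani = FuncAnimation(fig, update, frames=range(100), blit=True, interval=50)
plt.show()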
This is my (so far limited) experience.
I started some months ago with Python (2.x) and OpenCV (2.4.13) as a graphics library. I found in my first project that OpenCV for Python works with numpy structures, much like matplotlib, and (with slight differences) they can work together.
I had to update some pixels when a condition was met. I first did my processing on images with OpenCV, obtaining a 2D numpy array, like a matrix.
The trick is: OpenCV mainly thinks about its input as images, in terms of X as width first, then Y as height. The numpy structure wants rows and columns, which in fact is Y before X.
With this in mind I updated the image matrix A pixel by pixel and plotted it again with a colormap:
import matplotlib.pyplot as plt
import cv2
A = cv2.imread('your_image.png',0) # 0 means grayscale
# now you loaded an image in a numpy array A
# for every new (x, y) pixel you need to change:
#     A[y, x] = new_pixel_intensity_value   # note: row (y) first, then column (x)
plot = plt.imshow(A, 'CMRmap')
plt.show()
If you want images again, consider use this
import matplotlib.image as mpimg
#previous code
mpimg.imsave("newA.png", A)
If you want to work with colours, remember that colour images are Y-by-X-by-3 numpy arrays (rows, columns, channels), but matplotlib expects RGB as the channel order, while OpenCV works with BGR order. So
C = cv2.imread('colour_reference.png',1) # 1 means BGR
A[y,x,0] = newRedvalue = C[y,x][2]
A[y,x,1] = newGreenvalue = C[y,x][1]
A[y,x,2] = newBluevalue = C[y,x][0]
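If you only need to fix the channel order rather than copy each channel by hand, cv2.cvtColor does the swap in one call; a small sketch (same reference image as above):
import cv2
import matplotlib.pyplot as plt

C = cv2.imread('colour_reference.png', 1)     # 1 means BGR
C_rgb = cv2.cvtColor(C, cv2.COLOR_BGR2RGB)    # reorder channels for matplotlib
plt.imshow(C_rgb)
plt.show()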
I hope this will help you in some way
Consider the following code
import matplotlib.pyplot as plt
import numpy as np
time=np.arange(-100,100,1)
val =np.sin(time/10.)
time=-1.0*time
plt.figure()
plt.plot(time,val)
plt.xlim([70,-70])
plt.savefig('test.pdf')
when I open the pdf in Inkscape, I can select (with F2) all of the data; it's just invisible outside of the specified xlim interval:
The problem seems to be the line
time=-1.0*time
If I omit this line, everything works perfectly. I have no idea why this is. I often need such transformations because I deal with paleo-climate data, which are sometimes given in years B.C. and years A.D., respectively.
The problem I see with this behavior is that someone could in principle extract the data outside the range which I want to show.
Does anyone have a clue how to solve this problem (other than slicing the arrays before plotting)?
I use matplotlib 1.1.1rc2
You can mask your array when plotting according to the limits you choose. Yes, this also requires changes to the code, but maybe not as extensive as you might fear. Here's an updated version of your example:
import matplotlib.pyplot as plt
import numpy as np
time=np.arange(-100,100,1)
val =np.sin(time/10.)
time=-1.0*time
plt.figure()
# store the x-limits in variables for easy reuse
XMIN = -70.0
XMAX = 70.0
plt.plot(np.ma.masked_outside(time,XMIN,XMAX),val)
plt.xlim([XMIN,XMAX])
plt.savefig('test.pdf')
The key change is using np.ma.masked_outside for your x-axis value (note: the order of XMIN and XMAX in the mask-command is not important).
That way, you don't have to change the array time if you wanted to use other parts of it later on.
When I checked with inkscape, no data outside of the plot was highlighted.
I have obtained the detrended data from the following python code:
Detrended_Data = signal.detrend(Original_Data)
Is there a function in Python with which "Original_Data" can be reconstructed from "Detrended_Data" and some "correction factor"?
Are you referring to scipy.signal.detrend? If so, the answer is no -- there is no (and can never be an) un-detrend function. detrend maps many arrays to the same array. For example,
import numpy as np
import scipy.signal as signal
t = np.linspace(0, 5, 100)
assert np.allclose(signal.detrend(t), signal.detrend(2*t))
If there were an undetrend function, it would have to map signal.detrend(t) back to t, and also map signal.detrend(2*t) back to 2*t. That's impossible, since signal.detrend(t) is the same array as signal.detrend(2*t).
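What you can do is keep the removed trend yourself at the time you call detrend, and add it back later. That "correction factor" is just the difference between the two arrays; a minimal sketch (the linear-plus-sine data here is only an example):
import numpy as np
import scipy.signal as signal

t = np.linspace(0, 5, 100)
Original_Data = 2 * t + np.sin(t)             # made-up data with a linear trend

Detrended_Data = signal.detrend(Original_Data)
trend = Original_Data - Detrended_Data        # save this at detrend time

reconstructed = Detrended_Data + trend
assert np.allclose(reconstructed, Original_Data)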
I guess you could use numpy to fit and re-add a trend to your data. That wouldn't properly give you the original data back, but it would make it less 'noisy'.
Read this question, as it goes into much more detail on this.