How to use np.transpose() [closed] - python

I am trying to understand how to use np.transpose() properly.
I have a NumPy array of shape (4, 28, 28, 8, 8). This is 4 images of shape (224, 224) that I have viewed as blocks, which gives the shape above.
I would like to revert back to (4, 224, 224). I feel the best way to go about it is to use np.transpose() and reshape(), but I am hitting a roadblock as to how to revert back correctly.
Help. Please and thank you.
EDIT: (4, 224, 224) is 4 images of shape (224, 224); the first dimension depends on how many images I load, so it could be 4 or 1000. I used listdir() to load the images and resized each one to (224, 224) while loading. I am going to perform operations on the shape (4, 28, 28, 8, 8), which is the 4 images of shape (224, 224) broken into a (28, 28) grid of (8, 8) blocks. I got this shape by using view_as_blocks provided by scikit-image. Once I have performed the operations that require that shape, I must revert back to (4, 224, 224). That is where I am stuck.
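For reference, a minimal sketch of how such a shape can arise, assuming view_as_blocks is applied image by image with 8x8 blocks (the array contents here are just placeholders):

import numpy as np
from skimage.util import view_as_blocks

images = np.zeros((4, 224, 224))  # 4 placeholder images of shape (224, 224)
# Tile each image into non-overlapping 8x8 blocks and stack the results.
blocks = np.stack([view_as_blocks(img, block_shape=(8, 8)) for img in images])
print(blocks.shape)  # (4, 28, 28, 8, 8)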

What you need is probably:
b = np.swapaxes(a, 3, 2).reshape(4, 28*8, 28*8)
a being your array.
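A minimal sketch of the full round trip, assuming the (4, 28, 28, 8, 8) array holds non-overlapping 8x8 blocks in the view_as_blocks layout described above:

import numpy as np

a = np.arange(4 * 224 * 224).reshape(4, 224, 224)

# Break each image into a 28x28 grid of 8x8 blocks; axes become
# (image, block_row, block_col, row_in_block, col_in_block).
blocks = a.reshape(4, 28, 8, 28, 8).swapaxes(2, 3)  # shape (4, 28, 28, 8, 8)

# ... per-block operations would happen here ...

# Revert: move the within-block rows back next to their block rows, then merge.
restored = blocks.swapaxes(3, 2).reshape(4, 28 * 8, 28 * 8)  # shape (4, 224, 224)
print(np.array_equal(a, restored))  # True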

Related

Element wise multiplication using Numpy [Python] [closed]

I'm trying to get the element-wise multiplication using the normal *, and I tried np.multiply(); both give weird answers.
Now (1-y) is (100,) and np.log(1-sigmoid(np.dot(X,theta))) is (100,1), so when I multiply them element-wise it should give (100,1); but it gives me a (100,100) matrix (all values highlighted blue).
Here is my original function if it can help.
Can anyone help me find the source of the error here?
I'm not 100% sure why Python does this, but the way to solve it would be to apply np.reshape((1-y), (100,1)) and then apply np.multiply(). In general it's better to reshape your arrays and give them an explicit second dimension.
EDIT: This explains how NumPy does the broadcasting when using arrays of dimensions (n,) and (n,1).
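A small sketch of the broadcasting behaviour in question (the shapes (100,) and (100,1) are taken from the post; the values are placeholders):

import numpy as np

a = np.ones(100)        # shape (100,), like (1 - y)
b = np.ones((100, 1))   # shape (100, 1), like the log term

# Broadcasting aligns trailing axes: (100,) is treated as (1, 100), so
# (1, 100) * (100, 1) expands to a full (100, 100) outer product.
print((a * b).shape)                  # (100, 100)

# Giving the 1-D array an explicit second dimension restores element-wise behaviour.
print((a.reshape(100, 1) * b).shape)  # (100, 1)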

Detecting a watermark using opencv [closed]

I want to detect a watermark in an image using OpenCV.
Particularly, I want a rectangular box around the watermark, if present.
Can you please help me out with the python code?
The solution will depend on the actual image content (which needs to be preserved) and on the watermark itself, but in these kinds of problems the following sequence of steps is usually followed (a rough sketch is given after the list):
Converting the image to grayscale (cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
Applying morphological filtering (erosion, dilation)
Taking the difference of this output from the actual image
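A rough sketch of that pipeline, assuming an input file named watermarked.jpg; the kernel size and threshold below are illustrative guesses, not tuned values:

import cv2
import numpy as np

img = cv2.imread("watermarked.jpg")           # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Erosion followed by dilation suppresses thin watermark strokes
# while keeping larger image structures mostly intact.
kernel = np.ones((5, 5), np.uint8)
cleaned = cv2.dilate(cv2.erode(gray, kernel), kernel)

# The difference against the original highlights what the filtering removed,
# i.e. watermark candidates.
diff = cv2.absdiff(gray, cleaned)

# Threshold the difference and draw a box around the largest candidate region.
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)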

How do I load a 14,000 image data-set into a variable without running out of memory? [closed]

I'm trying to make a function to load a large image data-set of 14,000 images into a variable but I'm running into memory (RAM) issues.
What I'm trying to make is something like a cifar100.load_data function but it's not working out for me.
The function I defined looks like this:
import os
import cv2
import numpy as np

def load_data():
    trn_x_names = os.listdir('data/train_x')
    trn_y_names = os.listdir('data/train_y')
    trn_x_list = []
    trn_y_list = []
    for image in trn_x_names:
        img = cv2.imread('data/train_x/%s' % image)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        trn_x_list.append(img)
    for image in trn_y_names:
        img = cv2.imread('data/train_y/%s' % image)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        trn_y_list.append(img)
    x_train = np.array(trn_x_list)
    y_train = np.array(trn_y_list)
    return x_train, y_train
I first load all the images one by one, appending them to the corresponding lists, and at the end I convert those lists to NumPy arrays and return them. But along the way I ran into RAM issues, as it consumed 100% of my RAM.
You need to read in your images in batches as opposed to loading the entire data set into memory. If you are using TensorFlow, use ImageDataGenerator.flow_from_directory; the documentation is here. If your data is not organized into subdirectories, then you will need to create a Python generator that reads in the data in batches; you can see how to build such a generator here. Set the batch size to a value, say 30, that will not fill up your memory.
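A minimal generator sketch, assuming flat 'data/train_x' and 'data/train_y' folders as in the question (the batch size and the training call at the end are illustrative):

import os
import cv2
import numpy as np

def batch_generator(x_dir, y_dir, batch_size=30):
    x_names = sorted(os.listdir(x_dir))
    y_names = sorted(os.listdir(y_dir))
    for start in range(0, len(x_names), batch_size):
        batch_x, batch_y = [], []
        for x_name, y_name in zip(x_names[start:start + batch_size],
                                  y_names[start:start + batch_size]):
            batch_x.append(cv2.cvtColor(cv2.imread(os.path.join(x_dir, x_name)),
                                        cv2.COLOR_BGR2RGB))
            batch_y.append(cv2.cvtColor(cv2.imread(os.path.join(y_dir, y_name)),
                                        cv2.COLOR_BGR2RGB))
        # Only one batch is held in memory at a time.
        yield np.array(batch_x), np.array(batch_y)

# Example usage: iterate over batches instead of holding all 14,000 images at once.
# for x_batch, y_batch in batch_generator('data/train_x', 'data/train_y'):
#     model.train_on_batch(x_batch, y_batch)  # hypothetical training step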

normalize a matrix along one specific dimension [closed]

I have a matrix of shape [1000,500], and I would like to normalize the matrix along the second dimension. Is the following implementation right?
def norm(x):
    return (x - np.mean(x)) / (np.std(x) + 1e-7)

for row_id in range(datamatrix.shape[0]):
    datamatrix[row_id, :] = norm(datamatrix[row_id, :])
Your implementation does normalize each row independently, which is normalization along the second dimension (axis 1; note that NumPy numbers axes from 0). You don't need to include the colon, as it's implicit that you want the whole row.
Do remember to use a float32 dtype for your datamatrix as opposed to an integer dtype, as NumPy doesn't do automatic typecasting when you assign the result back into the rows.
A more efficient or cleaner implementation might be to use sklearn.preprocessing.normalize.
But be aware that you're using standard-score (z-score) normalization, which assumes your dataset is normally distributed.
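For completeness, a vectorized sketch of the same per-row standardization, assuming a float matrix of shape (1000, 500) as in the question:

import numpy as np

datamatrix = np.random.rand(1000, 500).astype(np.float32)

row_mean = datamatrix.mean(axis=1, keepdims=True)  # shape (1000, 1)
row_std = datamatrix.std(axis=1, keepdims=True)    # shape (1000, 1)
normalized = (datamatrix - row_mean) / (row_std + 1e-7)

# Each row now has (approximately) zero mean and unit standard deviation.
print(normalized.mean(axis=1)[:3], normalized.std(axis=1)[:3])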

How to create very high resolution bitmaps in python? [closed]

Currently, as far as I know, the maximum resolution that can be created using Pillow's Image functions is 3000x3000, but I need to create an image with a resolution of 10000x10000 or more programmatically. How can I do that?
If you didn't get what I meant, please comment rather than closing this question and I will write a more detailed question.
The only ways to create a pixel-perfect SVG from a bitmap are to use <rect/> elements for each pixel (or block of same-colored pixels), or to use an <image> element to reference your bitmap. In neither case will you end up reducing the file size.
A vector format like SVG is not well-suited to representing hand-tweaked pixels. You likely want to use a bitmap format that supports lossless compression, such as PNG. If file size is of critical importance, you may wish to use a tool like OptiPNG to ensure that your PNG files are as small as possible.
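As a side note, a quick sketch suggesting that Pillow itself can create bitmaps well beyond 3000x3000 (the 10000x10000 size is taken from the question; the drawing is just a placeholder, and available memory is the practical limit, since a 10000x10000 RGB image needs roughly 300 MB):

from PIL import Image, ImageDraw

img = Image.new("RGB", (10000, 10000), color="white")
draw = ImageDraw.Draw(img)
draw.ellipse((1000, 1000, 9000, 9000), outline="black", width=10)

# Save losslessly as PNG; a tool like OptiPNG can then shrink the file further.
img.save("big.png", optimize=True)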
