Iterate over a TensorFlow placeholder - Python

docArray is a placeholder used to build the TensorFlow graph. The graph builds properly, but when the data is fed using feed_dict in the session, the variable length does not get adjusted dynamically. The following is the code snippet.
lContext = tf.zeros((100,1), dtype=tf.float64)
rContext = tf.zeros((100,1), dtype=tf.float64)
for i in range(1, docArray.shape[1].value):
    j = docArrayShape - 1 - i
    lContext = tf.concat([lContext, somefun1()], 1)
    rContext = tf.concat([somefun2(), rContext], 1)
X = tf.concat([lContext, docArray, rContext], axis=0)
When this code is used as the forward pass, an error comes up when docArray is initialised as
docArray = tf.placeholder(tf.float64, [100, None])
If I instead initialise docArray with a random fixed shape and then feed the real docArray data of shape (100 x N), where N is the number of words in a document, I get an error during training when concatenating, as lContext and docArray will have different shapes.
The size of a sample document is not fixed.
Thanks in advance for the help.

Since you have not mentioned the sizes of the variables at the time of concatenation, it is difficult to say exactly where it is going wrong. But in general, for concatenation to take place, the tensors being concatenated must have the same dtype and the same dimensions along all axes except the axis along which they are concatenated.
For example,
Not allowed: (different dtype)
x = tf.placeholder(tf.float32, (100, None))
y = tf.placeholder(tf.float64, (100, None))
z = tf.concat((x,y), axis = 0)
Not allowed: (shape of 100 and 200 mismatch)
x = tf.placeholder(tf.float32, (100, None))
y = tf.placeholder(tf.float32, (200, None))
z = tf.concat((x,y), axis = 1)
Allowed: (same dtype, and the non-concatenation axis matches)
x = tf.placeholder(tf.float32, (100, 300))
y = tf.placeholder(tf.float32, (200, 300))
z = tf.concat((x,y), axis = 0)
In the above example, if you use None as in the other examples, the graph will still build, but at run time the None dimensions have to resolve to matching shapes.
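To make that run-time rule concrete, here is a minimal sketch (assuming TensorFlow 1.x, to match the placeholders above): the graph builds with None dimensions, but the fed arrays must agree on the non-concatenated axis.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, (100, None))
y = tf.placeholder(tf.float32, (100, None))
z = tf.concat((x, y), axis=1)  # concatenating along the None axis

with tf.Session() as sess:
    out = sess.run(z, feed_dict={x: np.zeros((100, 5), np.float32),
                                 y: np.zeros((100, 7), np.float32)})
    print(out.shape)  # (100, 12) -- works because axis 0 matches
    # feeding y with shape (90, 7) instead would fail at run time,
    # because axis 0 (100 vs 90) no longer matches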

Related

Understanding fancy einsum equation

I was reading about attention and came across this equation:
import einops
from fancy_einsum import einsum
import torch
x = torch.rand((200, 10, 768))
y = torch.rand((20, 768, 64))
res = einsum("batch query_pos d_model, n_heads d_model d_head -> batch query_pos n_heads d_head", x, y)
And I am not able to understand the underlying operations that give the result res
I thought it might be matmul and tried this:
import torch
x_ = x.unsqueeze(dim = 2).unsqueeze(dim = 2)
y_ = torch.broadcast_to(y, (1, 1, 20, 768, 64))
res2 = x_ @ y_
res2 = res2.squeeze(dim = -2)
(res == res2).all() # Prints False
But that does not seem to be right.
Any help regarding this is greatly appreciated
So whenever you use einsum, it is best to think about the meaning of the dimensions. Basically we perform a multiplication between the two inputs in this case. The signature passed to einsum shows which dimensions will be preserved and which ones will be "summed away". I simplified the signature with single letters here:
res = einsum("b q m, n m h -> b q n h", x, y)
We can read from this that both x and y have three dimensions. Furthermore, both have a dimension called m, and this does not appear in the output, so we can conclude that it gets "summed away". So for each entry of the output we have the following formula. For simplicity I reused the dimension names as indices, so for every b, q, n, h we get
res[b,q,n,h] = Σ_m  x[b,q,m] * y[n,m,h]   (sum over the shared dimension m)
Doing this with any function other than einsum is usually more cumbersome. First we need to reorder and unsqueeze the dimensions so that they are compatible for multiplication; we can do the following (the shapes are annotated in the comment):
# (b, q, m, n, h)  =  (b, q, m, 1, 1)  *  (m, n, h)
product = x[:, :, :, None, None] * y.permute([1, 0, 2])
Due to the broadcasting rules, the second (y-) term will implicitly get the required leading dummy dimensions.
Then we can "sum away" the dimension m:
res = product.sum(dim=2) # (b,q,n,h)
So you can interpret that as a matrix multiplication if you want, or just as a scalar product, but of course with many "batch" dimensions.
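If it helps, here is a small sanity check of the equivalence described above, using the shapes from the question and the standard torch.einsum (which accepts the same single-letter signature):
import torch

x = torch.rand(200, 10, 768)   # (b, q, m)
y = torch.rand(20, 768, 64)    # (n, m, h)

res_einsum = torch.einsum("bqm,nmh->bqnh", x, y)

# same computation spelled out with broadcasting and an explicit sum over m
product = x[:, :, :, None, None] * y.permute(1, 0, 2)   # (b, q, m, n, h)
res_manual = product.sum(dim=2)                          # (b, q, n, h)

print(torch.allclose(res_einsum, res_manual, atol=1e-5))  # True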

Unable to loop through Nested Loop (Python)

I am attempting to do a nested loop in order to find the mean-squared for a variety of different sized distributions. I keep getting an error that reads: "ValueError: could not broadcast input array from shape (0,) into shape (1000,)".
I am a beginner coder so I know this may be trivial for some...
My code:
#%% Initialize variables.
rng = np.random.default_rng()
rand = rng.random
num_steps = 1000
num_walks = 1000
x_step = np.zeros((num_steps, num_walks))
y_step = np.zeros((num_steps, num_walks))
x_final = np.zeros((1, num_walks))
y_final = np.zeros((1, num_walks))
displacement = np.zeros((num_walks, 1))
mean_squared_displacement = np.zeros(10)
#%% Find the mean-squared displacement for a variety of step numbers.
step_variation = np.linspace(0, 10000, 11)
for n in range(np.size(step_variation)-1):
    for m in range(num_walks):
        x_step[:,m] = np.cumsum(2*(rand(int(step_variation[n]))<.5)-1) # ERROR APPEARS ON THIS LINE
        y_step[:,m] = np.cumsum(2*(rand(int(step_variation[n]))<.5)-1)
        x_final[0,m] = x_step[-1,m]
        y_final[0,m] = y_step[-1,m]
        displacement[m,0] = np.sqrt(x_final[0,m]**2 + y_final[0,m]**2)
    mean_squared_displacement[n] = np.mean(displacement[m,0]**2)
What steps did you take to debug this? Any? Or did you just throw your hands up in despair, not understanding what the error means?
Did you examine the problem line? Test pieces in it?
x_step[:,m] = np.cumsum(2*(rand(int(step_variation[n]))<.5)-1)
The first value of step_variation is 0 (from linspace). rand(0) produces a (0,) shape array. The rest of that expression is thus also (0,) shape.
In [13]: rand(0)
Out[13]: array([], dtype=float64)
x_step is (1000,1000), so x_step[:,m] is (1000,) shape. The error tells us/you that it can't put a (0,) (no values) array into that (1000,) shape slot.
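A minimal standalone reproduction of what this describes (made-up snippet, plain NumPy):
import numpy as np

rng = np.random.default_rng()
x_step = np.zeros((1000, 1000))      # same layout as in the question
empty = rng.random(0)                # what rand(int(step_variation[0])) returns: shape (0,)
x_step[:, 0] = np.cumsum(2*(empty < .5) - 1)
# ValueError: could not broadcast input array from shape (0,) into shape (1000,)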

How to use pose heatmap for GAN conditioning?

I would like to ask a question about building a pose-conditioned StyleGAN in PyTorch. My intent here is to generate images of human models only in conditioned poses (based on 17x64x64 pose heatmaps). Assuming that the generator adjustments are already more or less finished, how can I include pose conditioning in the discriminator?
We can use the Discriminator class from https://github.com/NVlabs/stylegan2-ada-pytorch/blob/main/training/networks.py as an example: here, in the forward() method of DiscriminatorEpilogue, a simple label-based conditioning is applied.
def forward(self, x, img, cmap, force_fp32=False):
    misc.assert_shape(x, [None, self.in_channels, self.resolution, self.resolution])  # [NCHW]
    # Here, cmap is just a simple class label mapping. In my case, cmap would include
    # a 17-channel pose heatmap from a certain source image.
    _ = force_fp32  # unused
    dtype = torch.float32
    memory_format = torch.contiguous_format

    # FromRGB.
    x = x.to(dtype=dtype, memory_format=memory_format)
    if self.architecture == 'skip':
        misc.assert_shape(img, [None, self.img_channels, self.resolution, self.resolution])
        img = img.to(dtype=dtype, memory_format=memory_format)
        x = x + self.fromrgb(img)

    # Main layers.
    if self.mbstd is not None:
        x = self.mbstd(x)
    x = self.conv(x)
    x = self.fc(x.flatten(1))
    x = self.out(x)

    # Conditioning.
    if self.cmap_dim > 0:
        misc.assert_shape(cmap, [None, self.cmap_dim])
        x = (x * cmap).sum(dim=1, keepdim=True) * (1 / np.sqrt(self.cmap_dim))

    assert x.dtype == dtype
    return x
How could I adjust this code to accommodate my problem, with heatmap dimensions being [batch_size, 17, 64, 64]? I thought about flattening the heatmap, but that would lose the spatial information. Another option would be to extract a heatmap xmap from the image and calculate some form of distance between xmap and gmap (some form of pixel-wise MAE?). However, I struggle to imagine how to combine such a result with the base output x for the purpose of conditioning.

ValueError: cannot reshape array of size 300 into shape (100,100,3)

I'm struggling to reshape my image, which has dimensions (100,100,3). The total array for all images has shape (3267, 100, 3).
def get_batch(batch_size, s="train"):
    """Create batch of n pairs, half same class, half different class"""
    if s == 'train':
        X = Xtrain
        X = X.reshape(-1,100,100,3)
        #X= X.reshape(-1,20,105,105)
        categories = train_classes
    else:
        X = Xval
        X = X.reshape(-1,100,100,3)
        categories = val_classes
    n_classes, n_examples, w, h, chan = X.shape
    print(n_classes)
    print(type(n_classes))
    print(n_classes.shape)
    # randomly sample several classes to use in the batch
    categories = rng.choice(n_classes, size=(batch_size,), replace=False)
    # initialize 2 empty arrays for the input image batch
    pairs = [np.zeros((batch_size, h, w, 1)) for i in range(2)]
    # initialize vector for the targets
    targets = np.zeros((batch_size,))
    # make one half of it '1's, so 2nd half of batch has same class
    targets[batch_size//2:] = 1
    for i in range(batch_size):
        category = categories[i]
        idx_1 = rng.randint(0, n_examples)
        pairs[0][i,:,:,:] = X[category, idx_1].reshape(w, h, chan)
        idx_2 = rng.randint(0, n_examples)
        # pick images of same class for 1st half, different for 2nd
        if i >= batch_size // 2:
            category_2 = category
        else:
            # add a random number to the category modulo n classes to ensure 2nd image has a different category
            category_2 = (category + rng.randint(1, n_classes)) % n_classes
        pairs[1][i,:,:,:] = X[category_2, idx_2].reshape(w, h, 1)
    return pairs, targets
However, at the line pairs[0][i,:,:,:] = X[category, idx_1].reshape(w, h, chan) I always get the error that an array of size 300 cannot be reshaped into (100,100,3). I honestly don't see why this should be a problem...
Can anybody help me out?
You want to reshape an array of 300 values into (100,100,3). That cannot work, because 100*100*3 = 30000 and 30000 is not equal to 300; you can only reshape if the output shape has the same number of values as the input.
I suggest you use (10,10,3) instead, because 10*10*3 = 300.
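To make the size rule concrete, a tiny example with a made-up array (plain NumPy):
import numpy as np

a = np.arange(300)                 # 300 elements
print(a.reshape(10, 10, 3).shape)  # (10, 10, 3) -- works, 10*10*3 == 300
# a.reshape(100, 100, 3)           # ValueError: cannot reshape array of size 300 into shape (100,100,3)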

Alternatives to using numpy reshape((-1,1))

I find myself reshaping 1D vectors way too many times. I wonder if this is because I'm doing something wrong, or because it is an inherent fault of numpy.
Why can't numpy infer, when it gets an object of shape (400,), that it should be transformed to (400,1)? And why do so many numpy operations result in removing the axis completely?
e.g.
def predict(Theta1, Theta2, X):
    m = X.shape[0]
    X = np.c_[np.ones(m), X]
    hidden = sigmoid(X @ Theta1.T)
    hidden = np.c_[np.ones(m), hidden]
    output = sigmoid(hidden @ Theta2.T)
    result = np.argmax(output, axis=1) + 1  # removes the 2nd axis - (400,)
    return result.reshape((-1, 1))  # re-add the axis - (400,1)
pred = predict(Theta1, Theta2, X)
print(np.mean(pred == y))
If I don't reshape the result in the last line, I get funky behavior when comparing pred (400,) with y (400,1), as sketched below.
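For what it's worth, that funky behavior is standard NumPy broadcasting: comparing a (400,) array with a (400,1) array broadcasts to a (400,400) boolean matrix, so np.mean runs over 160000 comparisons. A minimal illustration with made-up arrays:
import numpy as np

pred = np.arange(400)               # shape (400,)
y = np.arange(400).reshape(-1, 1)   # shape (400, 1)

print((pred == y).shape)            # (400, 400) -- broadcast comparison
print((pred == y.ravel()).shape)    # (400,)     -- element-wise, as intended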
You can use
np.array_split(data, s)
knowing that the new dimensions will have length s (the data shape is s * s).
The newer NumPy version (1.22) added an optional keepdims argument to argmax. Source: here.
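A small sketch of that option (requires NumPy >= 1.22; the array here is a hypothetical stand-in for the network output):
import numpy as np

output = np.random.rand(400, 10)
print(np.argmax(output, axis=1).shape)                 # (400,)  -- axis removed
print(np.argmax(output, axis=1, keepdims=True).shape)  # (400, 1) -- axis kept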
