Creating images from a string of random functions - python

I've rewritten a bit of what was done here in an attempt to produce the images without using recursion. While I can get what appears to be the correct string of random functions, I am unable to get the correct output arrays to build the image.
You'll notice I've put the xVar function first in the random functions because it will operate on an empty argument list and give me back values. This is similar to what the original code does, except that it (by recursion) uses the value 0 to pick out one of three functions that operate on empty argument lists. My thinking is that the results are passed back in so that functions such as np.sin will work.
I think the issue might lie in my use of argument unpacking, func(*testlist); perhaps I'm using it incorrectly.
import numpy as np, random
from PIL import Image

width, height = 256, 256
xArray = np.linspace(0.0, 1.0, width).reshape((1, width, 1))
yArray = np.linspace(0.0, 1.0, height).reshape((height, 1, 1))

def xVar(): return xArray
def yVar(): return yArray
def safeDivide(a, b): return np.divide(a, np.maximum(b, 0.001))

def add(x, y):
    added = np.add(x, y)
    return added

def Color():
    randColorarray = np.array([random.random(), random.random(), random.random()]).reshape((1, 1, 3))
    return randColorarray

# def circle(x, y):
#     circles = (x - 100) ** 2 + (y - 100) ** 2
#     return circles

functions = (Color, xVar, yVar, np.sin, np.multiply, safeDivide)
depth = 5

def functionArray(depth=0):
    FunctList = []
    FunctList.append(xVar)
    for x in range(depth):
        func = random.choice(functions)
        FunctList.append(func)
    return FunctList

def ImageBuilder():
    FunctionList = functionArray(depth)
    testlist = []
    for func in FunctionList:
        values = func(*testlist)
    return values

vals = ImageBuilder()

repetitions = (int(xArray / vals.shape[0]), int(yArray / vals.shape[1]), int(3 / vals.shape[2]))
img = np.tile(vals, repetitions)

# Convert to 8-bit, send to PIL and save
img8Bit = np.uint8(np.rint(img.clip(0.0, 1.0) * 255.0))
Image.fromarray(img8Bit).save('Images/' + '.png', "PNG")
Depending on which random function is chosen, I'll either get
values = func(*testlist)
ValueError: invalid number of arguments
or
TypeError: safeDivide() missing 2 required positional arguments: 'a' and 'b'
Note, however, that the linked program does not get a safeDivide error even though a and b are not being explicitly passed in (and the same goes for np.multiply).
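For reference, here is a rough, untested sketch of the kind of composition I think I need, building on the definitions above (the handling of zero-argument and two-argument functions is just a guess on my part):
import inspect

def build_image(depth=5):
    result = xVar()  # start from something that takes no arguments
    for _ in range(depth):
        func = random.choice(functions)
        # numpy ufuncs expose their arity as .nin; plain Python functions via inspect
        n_args = getattr(func, 'nin', None)
        if n_args is None:
            n_args = len(inspect.signature(func).parameters)
        if n_args == 0:
            result = func() * result          # e.g. Color: blend the new leaf in
        elif n_args == 1:
            result = func(result)             # e.g. np.sin
        else:
            leaf = random.choice((xVar, yVar, Color))()
            result = func(result, leaf)       # e.g. np.multiply, safeDivide
    return result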
Thanks for any help.

Related

Image values with rgb2gray python

I'm a beginner in image processing.
I work with an RGB image, image.shape = (4512, 3000, 3)
I saw the value of the first pixel: image[0][0] = [210 213 220]
When I use the rgb2gray function the result is rgb2gray(image[0][0]) = 0.8347733333333334
But I saw that the relation used by the function is Y = 0.2125 * R + 0.7154 * G + 0.0721 * B. I did the calculation; I should get Y = im[0,0,0] * 0.2125 + im[0,0,1] * 0.7154 + im[0,0,2] * 0.0721 = 212.8672
It seems my result is 212.8672 / 255 = 0.8347733333333334
Why is the result between 0 and 1 and not between 0 and 255?
I assume you are using scikit-image's rgb2gray. In that case, you can see in the code from https://github.com/scikit-image/scikit-image/blob/main/skimage/color/colorconv.py that every color conversion in the color module starts with the _prepare_colorarray method, which converts to floating point representation.
def _prepare_colorarray(arr, force_copy=False, *, channel_axis=-1):
    """Check the shape of the array and convert it to
    floating point representation.
    """
    arr = np.asanyarray(arr)

    if arr.shape[channel_axis] != 3:
        msg = (f'the input array must have size 3 along `channel_axis`, '
               f'got {arr.shape}')
        raise ValueError(msg)

    float_dtype = _supported_float_type(arr.dtype)
    if float_dtype == np.float32:
        _func = dtype.img_as_float32
    else:
        _func = dtype.img_as_float64
    return _func(arr, force_copy=force_copy)
The module does (thankfully) support 8-bit int representation as an input, but converts the image array to float representation and uses that representation all along.
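If you want the values back on the familiar 0-255 scale, you can rescale after the conversion. A minimal sketch, assuming scikit-image and the image array from the question:
import numpy as np
from skimage.color import rgb2gray
from skimage.util import img_as_ubyte

gray = rgb2gray(image)        # float64, values in [0, 1]
gray_u8 = img_as_ubyte(gray)  # rescaled back to uint8 in [0, 255]
# equivalently: np.rint(gray * 255).astype(np.uint8)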

Multiple return using scipy.odeint method in Python

I am trying to use the scipy.odeint() method to solve a second-order differential equation.
I can do that for a single value of the constant k that appears in the function.
But I want to try this solution for many values of k.
To do so, I put the values that I want in a list k, and going through a loop I want to plug these values in as arguments for the final solution.
However, I am getting an error
error: Extra arguments must be in a tuple
import numpy as np
from scipy.integrate import odeint

### Code with a single value of K. THAT WORKS FINE!!!! ###
k = 1                  #attributes to be changed
t = [0.1, 0.2, 0.3]    #Data
init = [45, 0]         #initial values

#Function to apply an integration
def f(init, t, args=(k,)):
    dOdt = init[1]
    dwdt = -np.cos(init[0]) + k*dOdt
    return [dOdt, dwdt]

#integrating function that returns a list of 2D numpy arrays
zCH = odeint(f, init, t)

################################################################
### Code that DOES NOT WORK! ###
k = [1, 2, 3]          #attributes to be changed
t = [0.1, 0.2, 0.3]    #Data
init = [45, 0]         #initial values

#Function to apply an integration
def f(init, t, args=(k,)):
    dOdt = init[1]
    dwdt = -np.cos(init[0]) + k*dOdt
    return [dOdt, dwdt]

solutions = []
for i in k:
    #integrating function that returns a list of 2D numpy arrays
    zCH = odeint(f, init, t, (k[i-1]))
    solutions.append(zCH)
It has to do with the way you are passing k into your function f().
The following changes the value of k on each iteration
k_list = [1, 2, 3]     #attributes to be changed
t = [0.1, 0.2, 0.3]    #Data
init = [45, 0]         #initial values

#Function to apply an integration; k is picked up from the enclosing scope on each call
def f(init, t):
    dOdt = init[1]
    dwdt = -np.cos(init[0]) + k*dOdt
    return [dOdt, dwdt]

solutions = []
for k in k_list:
    #integrating function that returns a list of 2D numpy arrays
    zCH = odeint(f, init, t)
    solutions.append(zCH)
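Alternatively, k can be passed explicitly through odeint's args parameter, which must be a tuple (that is what the original "Extra arguments must be in a tuple" error is about). A minimal sketch:
import numpy as np
from scipy.integrate import odeint

def f(state, t, k):
    dOdt = state[1]
    dwdt = -np.cos(state[0]) + k * dOdt
    return [dOdt, dwdt]

k_list = [1, 2, 3]
t = [0.1, 0.2, 0.3]
init = [45, 0]

# one solution array per value of k
solutions = [odeint(f, init, t, args=(k,)) for k in k_list]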

Quick pixel manipulation with Pillow and/or NumPy

I'm trying to improve the speed of my image manipulation as it's been too slow for actual use.
What I need to do is apply a complex transformation to the colour of every pixel in an image. The manipulation is basically applying a vector transform like T(r, g, b, a) => (r * x, g * x, b * y, a) or, in layman's terms, multiplying the Red and Green values by one constant, doing a different multiplication for Blue, and keeping Alpha. But I also need to manipulate it differently if the RGB colour falls under some specific colours; in those cases the pixels must follow a dictionary/transformation table mapping RGB => newRGB, again keeping alpha.
The algorithm would be:
for each pixel in image:
    if pixel[r, g, b] in special:
        return special[pixel[r, g, b]] + pixel[a]
    else:
        return T(pixel)
It's simple but speed has been sub-optimal. I believe there's some way using numpy vectors, but I could not find how.
Important details about the implementation:
I don't care about the original buffer/image (manipulation can be in place)
I can use wxPython, Pillow and NumPy
Order or dimension of the array is not important as long as the buffer keeps the length
The buffer is obtained from a wxPython Bitmap, and special and (RG|B)_pal are transformation tables; the end result will become a wxPython Bitmap too. They're obtained like this:
# buffer
bitmap = wx.Bitmap  # it's valid wxBitmap here, this is just to let you know it exists
buff = bytearray(bitmap.GetWidth() * bitmap.GetHeight() * 4)
bitmap.CopyToBuffer(buff, wx.BitmapBufferFormat_RGBA)

self.RG_mult = 0.75
self.B_mult = 0.83

self.RG_pal = []
self.B_pal = []
for i in range(0, 256):
    self.RG_pal.append(int(i * self.RG_mult))
    self.B_pal.append(int(i * self.B_mult))

self.special = {
    # RGB: new_RGB
    # Implementation specific for the fastest access
    # with buffer keys are 24bit numbers, with PIL keys are tuples
}
Implementations I tried include direct buffer manipulation:
for x in range(0, bitmap.GetWidth() * bitmap.GetHeight()):
    index = x * 4
    r = buf[index]
    g = buf[index + 1]
    b = buf[index + 2]
    rgb = buf[index:index + 3]
    if rgb in self.special:
        special = self.special[rgb]
        buf[index] = special[0]
        buf[index + 1] = special[1]
        buf[index + 2] = special[2]
    else:
        buf[index] = self.RG_pal[r]
        buf[index + 1] = self.RG_pal[g]
        buf[index + 2] = self.B_pal[b]
Use Pillow with getdata():
pil = Image.frombuffer("RGBA", (bitmap.GetWidth(), bitmap.GetHeight()), buf)
pil_buf = []
for colour in pil.getdata():
    colour_idx = colour[0:3]
    if colour_idx in self.special:
        special = self.special[colour_idx]
        pil_buf.append((
            special[0],
            special[1],
            special[2],
            colour[3],
        ))
    else:
        pil_buf.append((
            self.RG_pal[colour[0]],
            self.RG_pal[colour[1]],
            self.B_pal[colour[2]],
            colour[3],
        ))
pil.putdata(pil_buf)
buf = pil.tobytes()
Pillow with point() and getdata() (the fastest I achieved, more than twice as fast as the others):
pil = Image.frombuffer("RGBA", (bitmap.GetWidth(), bitmap.GetHeight()), buf)
r, g, b, a = pil.split()
r = r.point(lambda r: r * self.RG_mult)
g = g.point(lambda g: g * self.RG_mult)
b = b.point(lambda b: b * self.B_mult)
pil = Image.merge("RGBA", (r, g, b, a))

i = 0
for colour in pil.getdata():
    colour_idx = colour[0:3]
    if colour_idx in self.special:
        special = self.special[colour_idx]
        pil.putpixel(
            (i % bitmap.GetWidth(), i // bitmap.GetWidth()),
            (
                special[0],
                special[1],
                special[2],
                colour[3],
            )
        )
    i += 1
buf = pil.tobytes()
I also tried working with numpy.where but I could not get it to work. With numpy.apply_along_axis it worked, but the performance was terrible. In other numpy attempts I could not access the RGB values together, only as separate bands.
Pure Numpy Version
This first optimization relies on the fact that one probably has far fewer special colors than pixels. I use numpy to do all the inner loops. This works well with images of up to 1 MP. If you have multiple images I'd recommend the parallel approach.
Let's define a test case:
import requests
from io import BytesIO
from PIL import Image
import numpy as np
# Load some image, so we have the same
response = requests.get("https://upload.wikimedia.org/wikipedia/commons/4/41/Rick_Astley_Dallas.jpg")
# Make areas of known color
img = Image.open(BytesIO(response.content)).rotate(10, expand=True).rotate(-10,expand=True, fillcolor=(255,255,255)).convert('RGBA')
print("height: %d, width: %d (%.2f MP)"%(img.height, img.width, img.width*img.height/10e6))
height: 5034, width: 5792 (2.92 MP)
Define our special colors
specials = {
    (4, 1, 6): (255, 255, 255),
    (0, 0, 0): (255, 0, 255),
    (255, 255, 255): (0, 255, 0)
}
Algorithm
def transform_map(img, specials, R_factor, G_factor, B_factor):
    # Your transform
    def transform(x, a):
        a *= x
        return a.clip(0, 255).astype(np.uint8)

    # Convert to array
    img_array = np.asarray(img)

    # Extract channels
    R = img_array.T[0]
    G = img_array.T[1]
    B = img_array.T[2]
    A = img_array.T[3]

    # Find special colors
    # First, calculate a unique hash
    color_hashes = (R + 2**8 * G + 2**16 * B)

    # Find indices of special colors
    special_idxs = []
    for k, v in specials.items():
        key_arr = np.array(list(k))
        val_arr = np.array(list(v))
        spec_hash = key_arr[0] + 2**8 * key_arr[1] + 2**16 * key_arr[2]
        special_idxs.append(
            {
                'mask': np.where(np.isin(color_hashes, spec_hash)),
                'value': val_arr
            }
        )

    # Apply transform to whole image
    R = transform(R, R_factor)
    G = transform(G, G_factor)
    B = transform(B, B_factor)

    # Replace values where special colors were found
    for idx in special_idxs:
        R[idx['mask']] = idx['value'][0]
        G[idx['mask']] = idx['value'][1]
        B[idx['mask']] = idx['value'][2]

    return Image.fromarray(np.array([R, G, B, A]).T, mode='RGBA')
And finally some benchmarks on an Intel Core i5-6300U @ 2.40GHz:
import time
times = []
for i in range(10):
    t0 = time.time()
    # Test
    transform_map(img, specials, 1.2, .9, 1.2)
    #
    t1 = time.time()
    times.append(t1 - t0)
np.round(times, 2)
print('average run time: %.2f +/-%.2f' % (np.mean(times), np.std(times)))
average run time: 9.72 +/-0.91
EDIT Parallelization
With the same setup as above, we can get a 2x speed increase on large images. (Small ones are faster without numba)
from numba import njit, prange
from numba.core import types
from numba.typed import Dict

# Map dict of special colors or transform over array of pixel values
@njit(parallel=True, locals={'px_hash': types.uint32})
def check_and_transform(img_array, d, T):
    # Save shape for later
    shape = img_array.shape

    # Flatten image for 1-d iteration
    img_array_flat = img_array.reshape(-1, 3).copy()
    N = img_array_flat.shape[0]

    # Replace or map
    for i in prange(N):
        px_hash = np.uint32(0)
        px_hash += img_array_flat[i, 0]
        px_hash += types.uint32(2**8) * img_array_flat[i, 1]
        px_hash += types.uint32(2**16) * img_array_flat[i, 2]

        try:
            img_array_flat[i] = d[px_hash]
        except Exception:
            img_array_flat[i] = (img_array_flat[i] * T).astype(np.uint8)

    # return image
    return img_array_flat.reshape(shape)
# Wrapper for function above
def map_or_transform_jit(image: Image, specials: dict, T: np.ndarray):
    # assemble numba typed dict
    d = Dict.empty(
        key_type=types.uint32,
        value_type=types.uint8[:],
    )
    for k, v in specials.items():
        k = types.uint32(k[0] + 2**8 * k[1] + 2**16 * k[2])
        v = np.array(v, dtype=np.uint8)
        d[k] = v

    # get rgb channels
    img_arr = np.array(image)
    rgb = img_arr[:, :, :3].copy()
    img_shape = img_arr.shape

    # apply map
    rgb = check_and_transform(rgb, d, T)

    # set color channels
    img_arr[:, :, :3] = rgb
    return Image.fromarray(img_arr, mode='RGBA')
# Benchmark
import time
times = []
for i in range(10):
    t0 = time.time()
    # Test
    test_img = map_or_transform_jit(img, specials, np.array([1, .5, .5]))
    #
    t1 = time.time()
    times.append(t1 - t0)
np.round(times, 2)
print('average run time: %.2f +/- %.2f' % (np.mean(times), np.std(times)))
test_img
average run time: 3.76 +/- 0.08

Is it possible to convert this numpy function to tensorflow?

I have a function that takes a [32, 32, 3] tensor, and outputs a [256,256,3] tensor.
Specifically, the function interprets the smaller array as if it was a .svg file, and 'renders' it to a 256x256 array as a canvas using this algorithm
For an explanation of WHY I would want to do this, see This question
The function behaves exactly as intended, until I try to include it in the training loop of a GAN. The current error I'm seeing is:
NotImplementedError: Cannot convert a symbolic Tensor (mul:0) to a numpy array.
A lot of other answers to similar errors seem to boil down to "You need to re-write the function using tensorflow, not numpy"
Here's the working code using numpy - is it possible to re-write it to exclusively use tensorflow functions?
def convert_to_bitmap(input_tensor, target, j):
    # implied conversion to nparray - the tensorflow docs seem to indicate this is okay,
    # but the error is thrown here when training
    array = input_tensor
    outputArray = target
    output = target

    for i in range(32):
        col = float(array[i,0,j])
        if ((float(array[i,0,0]))+(float(array[i,0,1]))+(float(array[i,0,2]))/3) < 0:
            continue

        # slice only the red channel from the i line, multiply by 255
        red_array = array[i,:,0]*255
        # slice only the green channel, multiply by 255
        green_array = array[i,:,1]*255

        # combine and flatten them
        combined_array = np.dstack((red_array, green_array)).flatten()

        # remove the first two and last two indices of the combined array
        index = [0,1,62,63]
        clipped_array = np.delete(combined_array, index)

        # filter array to remove values less than 0
        filtered = clipped_array > 0
        filtered_array = clipped_array[filtered]

        # check array has an even number of values, delete the last index if it doesn't
        if len(filtered_array) % 2 == 0:
            pass
        else:
            filtered_array = np.delete(filtered_array, -1)

        # convert into a set of tuples
        l = filtered_array.tolist()
        t = list(zip(l, l[1:] + l[:1]))
        if not t:
            continue

        output = fill_polygon(t, outputArray, col)
    return(output)
The 'fill polygon' function is copied from the 'mahotas' library:
def fill_polygon(polygon, canvas, color):
    if not len(polygon):
        return
    min_y = min(int(y) for y,x in polygon)
    max_y = max(int(y) for y,x in polygon)
    polygon = [(float(y),float(x)) for y,x in polygon]
    if max_y < canvas.shape[0]:
        max_y += 1
    for y in range(min_y, max_y):
        nodes = []
        j = -1
        for i,p in enumerate(polygon):
            pj = polygon[j]
            if p[0] < y and pj[0] >= y or pj[0] < y and p[0] >= y:
                dy = pj[0] - p[0]
                if dy:
                    nodes.append( (p[1] + (y-p[0])/(pj[0]-p[0])*(pj[1]-p[1])) )
            elif p[0] == y:
                nodes.append(p[1])
            j = i
        nodes.sort()
        for n,nn in zip(nodes[::2], nodes[1::2]):
            nn += 1
            canvas[y, int(n):int(nn)] = color
    return(canvas)
NOTE: I'm not trying to get someone to convert the whole thing for me! There are some functions that are pretty obvious (tf.stack instead of np.dstack), but others that I don't even know how to start, like the last few lines of the fill_polygon function above.
Yes, you can actually do this: you can wrap a Python function in something called tf.py_function. It's a Python wrapper, but it's extremely slow compared to plain TensorFlow. TensorFlow (and CUDA, for example) is so fast largely because of vectorization, meaning you can rewrite most of the loops in terms of mathematical tensor operations, which are very fast.
In general:
If you want to use custom code as a custom layer, I would recommend rethinking the algebra behind those loops and trying to express it differently. If it's just preprocessing before training starts, you can use TensorFlow, but doing the same with numpy and other libraries is easier.
To your main question: yes, it's possible, but it's better not to use loops. TensorFlow has a built-in loop optimizer, but then you have to use tf.while_loop(), and that's annoying (maybe just for me). I only skimmed your code, but it looks like you should be able to vectorize it quite well using the standard TensorFlow vocabulary. If you want it fast, I mean really fast with GPU support, write everything in TensorFlow, not a 50/50 mix with tf.convert_to_tensor(), because then it gets slow again: you end up switching between the GPU, the CPU, the plain Python interpreter and the TensorFlow low-level API. Hope I could help you at least a bit.
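For completeness, the tf.py_function escape hatch would look roughly like this. It is only a sketch: it assumes the convert_to_bitmap function from the question is in scope, and it runs the numpy code eagerly on the CPU, so it stays slow and provides no gradients.
import tensorflow as tf

def convert_to_bitmap_tf(input_tensor, target, j):
    # Inside tf.py_function the arguments arrive as eager tensors,
    # so .numpy() is available and the numpy version can run unchanged.
    def _np_wrapper(a, b, c):
        return convert_to_bitmap(a.numpy(), b.numpy(), int(c))

    return tf.py_function(func=_np_wrapper,
                          inp=[input_tensor, target, j],
                          Tout=tf.float32)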
This code 'works', in that it only uses tensorflow functions, and does allow the model to train when used in a training loop:
def convert_image(x):
    # split off the first column of the generator output, and store it for later (remove the 'colours' column)
    colours_column = tf.slice(img_to_convert, tf.constant([0,0,0], dtype=tf.int32), tf.constant([32,1,3], dtype=tf.int32))

    # split off the rest of the data, only keeping R + G, and discarding B
    image_data_red = tf.slice(img_to_convert, tf.constant([0,1,0], dtype=tf.int32), tf.constant([32,31,1], dtype=tf.int32))
    image_data_green = tf.slice(img_to_convert, tf.constant([0,1,1], dtype=tf.int32), tf.constant([32,31,1], dtype=tf.int32))

    # roll each row by 1 position, and make two more 2D tensors
    rolled_red = tf.roll(image_data_red, shift=-1, axis=0)
    rolled_green = tf.roll(image_data_green, shift=-1, axis=0)

    # remove all values where either the red OR green channels are 0
    zeroes = tf.constant(0, dtype=tf.float32)
    # this is for the 'count_nonzero' command
    boolean_red_data = tf.not_equal(image_data_red, zeroes)
    boolean_green_data = tf.not_equal(image_data_green, zeroes)
    initial_data_mask = tf.logical_and(boolean_red_data, boolean_green_data)

    # count non-zero values per row and flatten it
    count = tf.math.count_nonzero(initial_data_mask, 1)
    count_flat = tf.reshape(count, [-1])

    flat_red = tf.reshape(image_data_red, [-1])
    flat_green = tf.reshape(image_data_green, [-1])
    boolean_red = tf.math.logical_not(tf.equal(flat_red, tf.zeros_like(flat_red)))
    boolean_green = tf.math.logical_not(tf.equal(flat_green, tf.zeros_like(flat_red)))
    mask = tf.logical_and(boolean_red, boolean_green)
    flat_red_without_zero = tf.boolean_mask(flat_red, mask)
    flat_green_without_zero = tf.boolean_mask(flat_green, mask)

    # create a ragged tensor
    X0_ragged = tf.RaggedTensor.from_row_lengths(values=flat_red_without_zero, row_lengths=count_flat)
    Y0_ragged = tf.RaggedTensor.from_row_lengths(values=flat_green_without_zero, row_lengths=count_flat)

    # do the same for the rolled version
    rolled_data_mask = tf.roll(initial_data_mask, shift=-1, axis=1)
    flat_rolled_red = tf.reshape(rolled_red, [-1])
    flat_rolled_green = tf.reshape(rolled_green, [-1])
    # from SO "shift zeros to the end"
    boolean_rolled_red = tf.math.logical_not(tf.equal(flat_rolled_red, tf.zeros_like(flat_rolled_red)))
    boolean_rolled_green = tf.math.logical_not(tf.equal(flat_rolled_green, tf.zeros_like(flat_rolled_red)))
    rolled_mask = tf.logical_and(boolean_rolled_red, boolean_rolled_green)
    flat_rolled_red_without_zero = tf.boolean_mask(flat_rolled_red, rolled_mask)
    flat_rolled_green_without_zero = tf.boolean_mask(flat_rolled_green, rolled_mask)

    # create a ragged tensor
    X1_ragged = tf.RaggedTensor.from_row_lengths(values=flat_rolled_red_without_zero, row_lengths=count_flat)
    Y1_ragged = tf.RaggedTensor.from_row_lengths(values=flat_rolled_green_without_zero, row_lengths=count_flat)

    # available outputs for future use are:
    X0 = X0_ragged.to_tensor(default_value=0.)
    Y0 = Y0_ragged.to_tensor(default_value=0.)
    X1 = X1_ragged.to_tensor(default_value=0.)
    Y1 = Y1_ragged.to_tensor(default_value=0.)

    # Example tensor cel (replace with (x))
    P = tf.cast(x, dtype=tf.float32)

    # split out P.x and P.y, and fill a ragged tensor to the same shape as Rx
    Px_value = tf.cast(x, dtype=tf.float32) - tf.cast((tf.math.floor(x/255)*255), dtype=tf.float32)
    Py_value = tf.cast(tf.math.floor(x/255), dtype=tf.float32)
    Px = tf.squeeze(tf.ones_like(X0)*Px_value)
    Py = tf.squeeze(tf.ones_like(Y0)*Py_value)

    # for each pair of values (Y0, Y1), make a vector, and check to see if it crosses the y-value (Py) either up or down
    a = tf.math.less(Y0, Py)
    b = tf.math.greater_equal(Y1, Py)
    c = tf.logical_and(a, b)
    d = tf.math.greater_equal(Y0, Py)
    e = tf.math.less(Y1, Py)
    f = tf.logical_and(d, e)
    g = tf.logical_or(c, f)
    # makes a boolean bitwise mask

    # calculate the intersection of the line with the y-value, assuming it intersects
    # P.x <= (G.x - R.x) * (P.y - R.y) / (G.y - R.y + R.x) - use tf.divide_no_nan for safe divide
    h = tf.math.less(Px, (tf.math.divide_no_nan(((X1-X0)*(Py-Y0)), (Y1-Y0+X0))))

    # combine using AND with the mask above
    i = tf.logical_and(g, h)

    # tf.count_nonzero
    # reshape to make a column tensor with the same dimensions as the colours
    # divide by 2 using tf.floor_mod (returns remainder of division - any remainder means the value is odd, and hence the point is IN the polygon)
    final_count = tf.cast((tf.math.count_nonzero(i, 1)), dtype=tf.int32)
    twos = tf.ones_like(final_count, dtype=tf.int32)*tf.constant([2], dtype=tf.int32)
    divide = tf.cast(tf.math.floormod(final_count, twos), dtype=tf.int32)

    index = tf.cast(tf.range(0, 32, delta=1), dtype=tf.int32)
    clipped_index = divide*index
    sort = tf.sort(clipped_index)
    reverse = tf.reverse(sort, [-1])
    value = tf.slice(reverse, [0], [1])
    pair = tf.constant([0], dtype=tf.int32)
    slice_tensor = tf.reshape(tf.stack([value, pair, pair], axis=0), [-1])

    output_colour = tf.slice(colours_column, slice_tensor, [1,1,3])
    return output_colour
This is where the 'convert image' function is applied using tf.vectorized_map:
def convert_images(image_to_convert):
    global img_to_convert
    img_to_convert = image_to_convert
    process_list = tf.reshape((tf.range(0, 65536, delta=1, dtype=tf.int32)), [65536, 1])
    output_line = tf.vectorized_map(convert_image, process_list)
    output_line_squeezed = tf.squeeze(output_line)
    output_reshape = (tf.reshape(output_line_squeezed, [256, 256, 3]) / 127.5) - 1
    output = tf.expand_dims(output_reshape, axis=0)
    return output
It is PAINFULLY slow, though: it does not appear to be using the GPU, and it looks to be single threaded as well.
I'm adding this as an answer to my own question because it clearly IS possible to do this numpy function entirely in tensorflow; it just probably shouldn't be done like this.

Passing Arguments in a correct way to scipy minimizer

I am trying to minimize a loglikelihood wrt Fsc, Qsc and Rsc:
def llik_scalars(Fsc, Qsc, Rsc, pred_state, pred_P, y):
    T = len(pred_P)
    #pred_state = np.array([pred_state[t].item() for t in range(len(pred_state))])
    #pred_P = np.array([pred_P[t].item() for t in range(len(pred_P))])
    Sigmat = np.array(pred_P) + Rsc
    Mut = pred_state
    for t in range(T):
        exponent = -0.5 * (y[t]-Mut[t])**2 / Sigmat[t]
        cc = 1 / math.sqrt(2*math.pi*Sigmat[t])
        LL -= math.log(cc*math.exp(exponent))
    return LL
At first I tried to pass my pred_state and pred_P as lists of matrices. These matrices are of size 1x1, so with the code that is commented out I retrieved lists of the numbers in the matrices.
However, since I was not sure the arguments could be passed in that form, and I read that arrays can be passed, the code that is commented out is now performed BEFORE I pass pred_state and pred_P as arguments. I thus pass them as numpy arrays.
I tried to do this using the scipy minimizer:
x0 = [0.5, np.var(y)/3, np.var(y) *2/3]
minimize(llik_scalars, x0, method = 'nelder-mead', args=(pred_state, pred_P, y))
I get this error:
llik_scalars() missing 2 required positional arguments: 'pred_P' and 'y'
Following another topic on stackoverflow I adapted my code to the following, hoping to solve my problem:
def llik_scalars(Fsc, Qsc, Rsc, *args):
    pred_state = args[0]
    pred_P = args[1]
    y = args[2]
    T = len(pred_P)
    #pred_state = np.array([pred_state[t].item() for t in range(len(pred_state))])
    #pred_P = np.array([pred_P[t].item() for t in range(len(pred_P))])
    Sigmat = np.array(pred_P) + Rsc
    Mut = pred_state
    for t in range(T):
        exponent = -0.5 * (y[t]-Mut[t])**2 / Sigmat[t]
        cc = 1 / math.sqrt(2*math.pi*Sigmat[t])
        LL -= math.log(cc*math.exp(exponent))
    return LL
This, however, results in the following error:
pred_P = args[1]
IndexError: tuple index out of range
I don't see how this is not working. Please help me out :)
-- EDIT:--
The first few entries of pred_state, pred_P and y, as I pass them into llik_scalars. Note that the initial guess for the state is 0, and I use a sort of diffuse prior by setting my variance (pred_P) to a million. I obtained my pred_state and pred_P using a Kalman filter with initial guesses for my F, Q and R:
pred_state[:5]
Out[121]: array([ 0. , 0.6097107 , 0.29789331, 0.30998801, -0.33307371])
pred_P[:5]
Out[122]:
array([1.00000000e+06, 1.24999975e+00, 1.13888888e+00, 1.13311688e+00,
1.13280061e+00])
y[:5]
Out[123]: array([ 1.21942262, 0.58464737, 0.90278035, -1.52760793, -0.80572172])
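From the scipy docs, minimize calls the objective as fun(x, *args), with x being the whole parameter vector, so I believe the three scalars have to be unpacked from a single array inside the function rather than declared as separate positional parameters. A rough, untested sketch of that signature (it also initialises LL, which the code above never does):
import math
import numpy as np
from scipy.optimize import minimize

def llik_scalars(params, pred_state, pred_P, y):
    Fsc, Qsc, Rsc = params          # unpack the parameter vector
    Sigmat = np.array(pred_P) + Rsc
    Mut = pred_state
    LL = 0.0
    for t in range(len(pred_P)):
        exponent = -0.5 * (y[t] - Mut[t])**2 / Sigmat[t]
        cc = 1 / math.sqrt(2 * math.pi * Sigmat[t])
        LL -= math.log(cc * math.exp(exponent))
    return LL

x0 = [0.5, np.var(y) / 3, np.var(y) * 2 / 3]
result = minimize(llik_scalars, x0, method='nelder-mead',
                  args=(pred_state, pred_P, y))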
