Printing from a Numba CUDA kernel (in Google Colab) - python

I'm trying to learn CUDA for Python using Numba in a Google Colab Jupyter notebook. To learn how to apply 3D thread allocation to nested loops, I wrote the following kernel:
from numba import cuda as cd

# Kernel to loop over a 3D grid
@cd.jit
def grid_coordinate_GPU():
    i = cd.blockDim.x * cd.blockIdx.x + cd.threadIdx.x
    j = cd.blockDim.y * cd.blockIdx.y + cd.threadIdx.y
    k = cd.blockDim.z * cd.blockIdx.z + cd.threadIdx.z
    print(f"[{i},{j},{k}]")

# Grid dimensions
Nx = 2
Ny = 2
Nz = 2

threadsperblock = (1, 1, 1)
blockspergrid = (Nx, Ny, Nz)

grid_coordinate_GPU[blockspergrid, threadsperblock]()
The problem, however, is that printing the coordinates with an f-string does not work. The exact error I get is:
TypingError: Failed in cuda mode pipeline (step: nopython frontend)
No implementation of function Function(<class 'str'>) found for signature:
>>> str(int64)
There are 10 candidate implementations:
- Of which 8 did not match due to:
Overload of function 'str': File: <numerous>: Line N/A.
With argument(s): '(int64)':
No match.
- Of which 2 did not match due to:
Overload in function 'integer_str': File: numba/cpython/unicode.py: Line 2394.
With argument(s): '(int64)':
Rejected as the implementation raised a specific error:
NumbaRuntimeError: Failed in nopython mode pipeline (step: native lowering)
NRT required but not enabled
During: lowering "s = call $76load_global.17(kind, char_width, length, $const84.21, func=$76load_global.17, args=[Var(kind, unicode.py:2408), Var(char_width, unicode.py:2409), Var(length, unicode.py:2407), Var($const84.21, unicode.py:2410)], kws=(), vararg=None, varkwarg=None, target=None)" at /usr/local/lib/python3.7/dist-packages/numba/cpython/unicode.py (2410)
raised from /usr/local/lib/python3.7/dist-packages/numba/core/runtime/context.py:19
During: resolving callee type: Function(<class 'str'>)
During: typing of call at <ipython-input-12-4a28d7f41e76> (12)
To solve this, I tried a couple of things.
Firstly, I tried to initialise the CUDA simulator by setting the environment variable NUMBA_ENABLE_CUDASIM = 1, following the Numba documentation. This, however, did not change much.
Secondly, I thought the problem lay with the Jupyter notebook's inability to print the result in the notebook instead of the terminal. I tried to solve this by following a GitHub post which instructed me to use wurlitzer. This did not help either.
Lastly, I added cd.synchronize() after the call to the kernel to try and mimic the C++ example I was implementing in the first place. This sadly did not work either.
It would be amazing if someone could help me out!

The simple solution was to skip the formatted string and just use print(i, j, k) inside the kernel instead.
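For reference, here is a minimal sketch of the corrected kernel with the same launch configuration as above. A plain print with comma-separated arguments is supported inside CUDA kernels, whereas an f-string needs str(int64) and the Numba runtime (NRT), which the CUDA target did not provide (per the "NRT required but not enabled" message above):

from numba import cuda as cd

@cd.jit
def grid_coordinate_GPU():
    i = cd.blockDim.x * cd.blockIdx.x + cd.threadIdx.x
    j = cd.blockDim.y * cd.blockIdx.y + cd.threadIdx.y
    k = cd.blockDim.z * cd.blockIdx.z + cd.threadIdx.z
    # plain print works in device code; string formatting does not
    print(i, j, k)

grid_coordinate_GPU[(2, 2, 2), (1, 1, 1)]()
cd.synchronize()  # flush device-side prints before the cell finishes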

Related

Tensorflow C++ API: How to pass input parameters to CUDA kernel

I'm quite new to CUDA/C++ programming and I'm stuck on passing input parameters to the CUDA kernel from the TensorFlow C++ API.
First off, I register the following Op:
REGISTER_OP("Op")
.Attr("T: {float, int64}")
.Input("in: T")
.Input("angles: T")
.Output("out: T");
Afterwards I want to pass the second input (angles) through to the CPU/GPU kernel. Somehow the following implementation works fine on the CPU but throws an error in Python when I run it on my GPU...
Python Error message:
Process finished with exit code -1073741819 (0xC0000005)
This is how I'm trying to access the value of the input. Note that the input for "angles" is always a single value (float or int):
void Compute(OpKernelContext* context) override {
    ...
    const Tensor &input_angles = context->input(1);
    auto angles_flat = input_angles.flat<float>();
    const float N = angles_flat(0);
    ...
}
I call the CPU/GPU kernels as follows:
...
Functor<Device, T>()(
    context->eigen_device<Device>(),
    static_cast<int>(input_tensor.NumElements()),
    input_tensor.flat<T>().data(),
    output_tensor->flat<T>().data(),
    N);
...
As I said before, running this Op on the CPU works just how I want it to, but when I run it on the GPU I always get the above-mentioned Python error... Does someone know how to fix this? I can only guess that I'm trying to access an invalid address on the GPU with angles_flat(0)... So if anybody can help me out here it would be highly appreciated!!

Why is this numba.cuda lookup table implementation failing?

I'm trying to implement a transform which at some stage uses a lookup table of less than 1K in size. This seems to me like it shouldn't pose a problem for a modern graphics card.
But the code below is failing with an unknown error:
from numba import cuda, vectorize
import numpy as np

tmp = np.random.uniform(0, 100, 1000000).astype(np.int16)
tmp_device = cuda.to_device(tmp)

lut = np.arange(100).astype(np.float32) * 2.5
lut_device = cuda.to_device(lut)

@cuda.jit(device=True)
def lookup(x):
    return lut[x]

@vectorize("float32(int16)", target="cuda")
def test_lookup(x):
    return lookup(x)

test_lookup(tmp_device).copy_to_host()  # <-- fails with cuMemAlloc returning UNKNOWN_CUDA_ERROR
What am I doing against the spirit of numba.cuda?
Even replacing lookup with the following simplified code results in the same error:
@cuda.jit(device=True)
def lookup(x):
    return x + lut[1]
Once this error occurs, I am essentially no longer able to use the CUDA context at all. For instance, allocating a new array via cuda.to_device results in:
numba.cuda.cudadrv.driver.CudaAPIError: [719] Call to cuMemAlloc results in UNKNOWN_CUDA_ERROR
Running on: 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2 (2018-01-04)
Driver Version: 390.25
numba: 0.33.0
The above code is fixed by modifying the lookup function as follows:
@cuda.jit(device=True)
def lookup(x):
    lut_device = cuda.const.array_like(lut)
    return lut_device[x]
I ran multiple variations of the code, including simply touching the lookup table from within the kernel without using its output. This, combined with talonmies' assertion that UNKNOWN_CUDA_ERROR usually occurs with invalid instructions, made me think that perhaps a shared memory constraint was causing the issue.
The above code makes the whole thing work. However, I still don't have a deep understanding of why.
If anyone knows and understands why, please feel free to contribute to this answer.
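For completeness, a sketch of the full working version described above, with the lookup table baked into constant memory inside the device function (same data and signatures as the original snippet):

from numba import cuda, vectorize
import numpy as np

lut = np.arange(100).astype(np.float32) * 2.5

@cuda.jit(device=True)
def lookup(x):
    # copy the host-side lut into constant memory at compile time
    lut_const = cuda.const.array_like(lut)
    return lut_const[x]

@vectorize("float32(int16)", target="cuda")
def test_lookup(x):
    return lookup(x)

tmp = np.random.uniform(0, 100, 1000000).astype(np.int16)
result = test_lookup(cuda.to_device(tmp)).copy_to_host()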

Solving linear system using Python with numba and CUDA

I am trying to solve a linear system using Numba with GPU processing via CUDA.
I have installed all the relevant packages and tested them, so it seems that my GPU and CUDA are set up properly.
My code is:
import numpy as np
import time
from numba import vectorize, cuda

@vectorize(['float64(float64, float64)'], target='cuda')
def solver(A, b):
    return np.linalg.solve(A, b)

def main():
    A = np.random.rand(100, 100).astype(np.float64)
    b = np.random.rand(100, 1).astype(np.float64)

    start = time.time()
    C = solver(A, b)
    vector_add_time = time.time() - start

    print("Took " + str(vector_add_time) + " seconds to solve")

if __name__ == '__main__':
    main()
Commenting out the @vectorize... line, the code runs fine. However, when I try to run it with Numba and CUDA, I get a long list of errors, of which I think the most relevant one is:
raise TypingError(msg)
numba.errors.TypingError: Failed at nopython (nopython frontend)
np.linalg.solve() only supported for array types
I assume the problem is that numpy.linalg.solve does not accept the data types required by CUDA.
Am I correct in assuming this? Are there other data types that will work?
In this example problem, the same data type is passed to the function, so I think the problem lies with numpy.linalg.
Am I correct in assuming this?
No
Are there other data types that will work?
No
The problem here is that you cannot use numpy.linalg in code which is targeted to run on the Numba GPU backend.
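As a point of comparison (a sketch, not part of the original question): np.linalg.solve is supported by Numba's CPU nopython backend, so a CPU-jitted version of the same function does compile and run, while the CUDA backend offers no numpy.linalg support at all:

import numpy as np
from numba import njit

@njit
def solver_cpu(A, b):
    # supported on the CPU nopython backend, not on the CUDA target
    return np.linalg.solve(A, b)

A = np.random.rand(100, 100)
b = np.random.rand(100, 1)
C = solver_cpu(A, b)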

TensorFlow nullptr check failed on GPU

I am using the Python API of TensorFlow to train a variant of an LSTM.
For that purpose I use the tf.while_loop function to iterate over the time steps.
When running my script on the CPU, it does not produce any error messages, but on the GPU Python crashes due to:
...tensorflow/tensorflow/core/framework/tensor.cc:885] Check failed: nullptr != b.buf_ (nullptr vs. 00...)
The part of my code that causes this failure (when I comment it out, it works) is in the body of the while loop:
...
h_gathered = h_ta.gather(tf.range(time))
h_gathered = tf.transpose(h_gathered, [1, 0, 2])
syn_t = self.syntactic_weights_ta.read(time)[:, :time]
syn_t = tf.expand_dims(syn_t, 1)
syn_state_t = tf.squeeze(tf.tanh(tf.matmul(syn_t, h_gathered)), 1)
...
where time is zero-based and incremented after each step, h_ta is a TensorArray
h_ta = tf.TensorArray(
    dtype=dtype,
    size=max_seq_len,
    clear_after_read=False,
    element_shape=[batch_size, num_hidden],
    tensor_array_name="fw_output")
and self.syntactic_weights_ta is also a TensorArray
self.syntactic_weights_ta = tf.TensorArray(
    dtype=dtype,
    size=max_seq_len,
    tensor_array_name="fw_syntactic_weights")
self.syntactic_weights_ta = self.syntactic_weights_ta.unstack(syntactic_weights)
What I am trying to achieve in the code snippet is basically a weighted sum over the past outputs, stored in h_ta.
In the end I train the network with tf.train.AdamOptimizer.
I have tested the script again, this time with the swap_memory parameter of the while loop set to False, and it works on the GPU as well, though I'd really like to know why it does not work with swap_memory=True.
This looks like a bug in the way that TensorArray's tensor storage mechanisms interact with the allocation magic that is performed by while_loop when swap_memory=True.
Can you open an issue on TF's GitHub? Please also include:
A full stack trace (TF built with -c dbg is preferable)
A minimal code example to reproduce
Whether the issue requires calling backprop
Whether this is reproducible in TF 1.2 / nightlies / master branch
And respond here with the link to the GitHub issue?
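For reference, a minimal, self-contained sketch (the loop body here is a stand-in, not the original model) showing where the swap_memory flag discussed above is passed:

import tensorflow as tf

i0 = tf.constant(0)
acc0 = tf.zeros([4, 8])

def body(i, acc):
    # stand-in for the real body that reads and writes the TensorArrays
    return i + 1, acc + 1.0

_, acc = tf.while_loop(
    cond=lambda i, acc: i < 10,
    body=body,
    loop_vars=(i0, acc0),
    swap_memory=False)  # the workaround: do not swap GPU tensors out to host memory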

Theano matrix multiplication

I have a piece of code that is supposed to calculate a simple matrix product in Python (using Theano). The matrix that I intend to multiply with is a shared variable.
The example below is the smallest one that demonstrates my problem.
I have made use of two helper functions: floatX converts its input to something of type theano.config.floatX, and init_weights generates a random matrix (of type floatX) with the given dimensions.
The last line causes the code to crash. In fact, it produces so much output on the command line that I can't even scroll to the top of it anymore.
So, can anyone tell me what I'm doing wrong?
import numpy
import theano
import theano.tensor as T

def floatX(x):
    return numpy.asarray(x, dtype=theano.config.floatX)

def init_weights(shape):
    return floatX(numpy.random.randn(*shape))

a = init_weights([3, 3])
b = theano.shared(value=a, name="b")

x = T.matrix()
y = T.dot(x, b)
f = theano.function([x], y)
This works for me, so my guess is that you have a problem with your BLAS installation. Make sure to use the Theano development version:
http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions
It has better defaults for some configurations. If that does not fix the problem, look at the error message: the main part comes after the code dump and the stack trace, and that is normally the most useful part.
You can disable Theano's direct linking to BLAS with this Theano flag: blas.ldflags=
This can cause a slowdown, but it is a quick check to confirm that the problem is BLAS.
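One way to set that flag (an illustrative sketch; it can also go in .theanorc or on the command line) is via the THEANO_FLAGS environment variable, before Theano is imported:

import os

# disable Theano's direct BLAS linking as a quick diagnostic
os.environ["THEANO_FLAGS"] = "blas.ldflags="

import theano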
If you want more help, dump the error message to a text file, put it on the web, and link to it from here.
