Using linalg.block_diag for variable number of blocks - python

So I have some code that generates various matrices. These matrices need to be stored in a block diagonal matrix. This should be fairly simple as I can use scipy's:
scipy.linalg.block_diag(*arrs)
However the problem I have is that I don't know how many matrices will need to be stored like this. I want to keep things as simple as possible (naturally). I thought of doing something like:
scipy.linalg.block_diag( matrix_list[ii] for ii in range(len(matrix_list)) )
But this doesn't work. I can think of a few other ways to do it... but they all become quite convoluted for something I feel should be much simpler.
Does anyone have an idea (or know) a simple way of carrying this out?
Thanks in advance!

When you do:
scipy.linalg.block_diag( matrix_list[ii] for ii in range(len(matrix_list)) )
you're passing a single generator expression to block_diag, which is not how it is meant to be called.
Instead, use the * operator to unpack the list into separate positional arguments in the function call, like:
scipy.linalg.block_diag(*matrix_list)
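A quick sketch of the difference, using a few made-up matrices of varying sizes in place of whatever the code actually generates:

```python
import numpy as np
from scipy.linalg import block_diag

# A list of matrices built up at runtime; the number and sizes
# need not be known in advance.
matrix_list = [np.eye(2), np.array([[5]]), np.ones((2, 2))]

# The * operator unpacks the list, so block_diag receives each
# matrix as its own positional argument.
D = block_diag(*matrix_list)
print(D.shape)  # (5, 5): block sizes 2 + 1 + 2 along each axis
```

This works for a list of any length, which is exactly the "variable number of blocks" case in the question.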

Related

Sympy pretty print matrix

I am using sympy to do symbolic matrix multiplication of 13 2x2 matrices (for optics). The resulting matrix is of course a 2x2 matrix but is huge.
I am using pprint() in order to display stuff in a nice manner.
Problem is that pprint is basically "splitting" the matrix over many rows making it basically unreadable. To put things into perspective, below is the first element of the matrix as it is pretty printed, so imagine how the whole thing is going to look like.
Any tips, tricks to pretty print the matrix in a continuous way?
Many thanks,
P.S.: I am using Jupyter notebook
This is probably a little late. After over an hour searching for this tiny problem, I finally found a fix. As stated in the documentation for pretty_print (pprint is essentially a wrapper for it):
num_columns : int or None, optional (default=None)
Number of columns before line breaking (default to None which reads
the terminal width), useful when using SymPy without terminal.
I would recommend setting the limit to something you will never exceed, e.g. 10,000 or even 100,000. This at least worked for me:
pprint(expression, num_columns=10_000)
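A minimal sketch of the fix, using a hypothetical product of 2x2 symbolic matrices in place of the optics matrices from the question:

```python
import sympy as sp

# Hypothetical stand-in for the chain of 2x2 transfer matrices.
a, b, c, d = sp.symbols('a b c d')
M = sp.Matrix([[a, b], [c, d]]) * sp.Matrix([[d, c], [b, a]])

# A very wide num_columns keeps each matrix row on a single line
# instead of wrapping at the detected terminal width.
sp.pprint(M, num_columns=10_000)
```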

Scipy zoom with complex values

I have a numpy array of values and I wanted to scale (zoom) it. With floats I was able to use scipy.ndimage.zoom but now my array contains complex values which are not supported by scipy.ndimage.zoom. My workaround was to separate the array into two parts (real and imaginary) and scale them independently. After that I add them back together. Unfortunately this produces a lot of tiny artifacts in my 'image'. Does somebody know a better way? Maybe there also exists a python library for this? I couldn't find one.
Thank you!
This is not a good answer but it seems to work quite well. Instead of using the default parameters for the zoom method, I'm using order=0. I then proceed to deal with the real and imaginary part separately, as described in my question. This seems to reduce the artifacts although some smaller artifacts remain. It is by no means perfect and if somebody has a better answer, I would be very interested.
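The workaround described above can be sketched like this, with a small made-up complex array standing in for the real data:

```python
import numpy as np
from scipy.ndimage import zoom

# Made-up complex-valued "image".
arr = np.exp(1j * np.linspace(0, np.pi, 16)).reshape(4, 4)

# Zoom real and imaginary parts separately, then recombine.
# order=0 (nearest neighbour) sidesteps interpolation artifacts
# at the cost of a blockier result.
scaled = zoom(arr.real, 2, order=0) + 1j * zoom(arr.imag, 2, order=0)

print(scaled.shape)  # each axis doubled: (8, 8)
```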

Numpy repeat array attached to array?! what is the syntax here?

So, I currently need to understand the following code properly:
J = p['M'].repeat(p['N'],1).T
p is a dictionary in which the entry under key M is simply an array, the T transposes, that much is clear.
But, the only version I can find for the repeat function is syntax in the form of
numpy.repeat(array , repeats [,axis])
This leaves me wondering what a call of the form array.repeat(something) actually means, and I can find an answer neither in my head nor on the internet for now. This is numpy though, isn't it? It is imported without an 'as' clause.
I am currently on a machine without a python/numpy shell installed to simply try it, so I thought I'd give this a shot: what is repeated, and how many times?
My first interpretation would be p['M'] is repeated p['N'] times along the first axis, then transposed, but every example specifying an axis I find uses something like axis=1.
Thanks a lot =)
There is another version of repeat in numpy: numpy.ndarray.repeat. It is the method form of numpy.repeat, where the array the method is called on takes the place of the first argument.
Please see the documentation here
Hope this helps
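A small sketch of the equivalence, with a made-up 2x2 array standing in for p['M'] and 3 standing in for p['N']:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])

# Method form, axis 1 given positionally -- same shape of call
# as p['M'].repeat(p['N'], 1) in the question.
m1 = a.repeat(3, 1)

# Equivalent function form.
m2 = np.repeat(a, 3, axis=1)

print(m1)
# [[1 1 1 2 2 2]
#  [3 3 3 4 4 4]]
```

So each element is repeated 3 times along axis 1, and the .T in the original line then transposes the result.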

anything wrong with returning more than one thing? - from python function

As a follow up from this post:
How to return more than one value from a function in Python?
A separate question:
As a beginner programmer, I was taught to only return one thing from a function.
a. Is there any hidden problem with returning more than one thing?
b. If so, and I want to return 2 lists from a long function (ie not call 2 separate similar functions), is there anything wrong with making a tuple out of the lists?
Thanks
If returning two things makes sense, yes... return two things.
For example, if you want to split a string based on some criteria, like finding key/value pairs, you might call a function like split_string_to_get_key_pair(), and you would expect it to return a tuple of (key, value).
The question is, is returning a tuple (which is how multiple return values often work) returning one thing or two? An argument can be made either way, but as long as what is returned is consistent and documented, then you can return whatever makes sense for your program.
Python encourages returning multiple values if that makes sense. Go for it.
If you put the two lists into one tuple, you're essentially returning one thing. =D
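For the "2 lists from a long function" case in part b, a minimal sketch (the helper name and split criterion are made up for illustration):

```python
def split_evens_odds(numbers):
    """Partition numbers into two lists, returned together as one tuple."""
    evens = [n for n in numbers if n % 2 == 0]
    odds = [n for n in numbers if n % 2 != 0]
    return evens, odds  # packed into a single tuple

# Tuple unpacking at the call site gives both lists back by name.
evens, odds = split_evens_odds([1, 2, 3, 4, 5])
print(evens, odds)  # [2, 4] [1, 3, 5]
```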
This question is not really specific to Python and is perhaps better suited for Programmers.SE.
Anyway, regarding your questions:
a. Not if these things you are returning are actually a single thing in disguise, and they will be used together most of the time. For example, if you request the color of something, you rarely need only the value of the red channel. So if the returned value is simple enough, or if the wrapping class would not make much sense, you just return a tuple.
b. There isn't, it's a common practice in Python as @Amber has noted.

how do I check that two slices of numpy arrays are the same (or overlapping)?

I would like to check if two ndarrays are overlapping views of the same underlying ndarray.
To check that two slices are exactly the same, I can do something like:
a.base is b.base and a.shape == b.shape and a.data == b.data
The comparison of buffers seemed to work in one simple case -- can anyone tell me if it works in general?
Unfortunately, this won't work for overlapping slices, and I haven't figured out how to extract from the buffer exactly what its offset is in the underlying data -- perhaps someone can help me with this?
Also, say a and b are slices of x, and c is a slice of b. As the underlying data is the same, I would also like to detect overlaps between c and a. It would seem that I should be able to get away with comparing just buffer and shape... if anyone could tell me exactly how, I would be grateful.
numpy.may_share_memory() is the best heuristic that we have at the moment. It is conservative; it may give you false positives, but it will not give you false negatives. I think there might be ways to improve the heuristic to be 100% correct. If they pan out, they will be folded into that function, so that's the best way forward.
It might be possible to compare where the indices live in memory using the ctypes property of the arrays. It might take some work, so you might want to step back and see if there is a different way of solving your problem.
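A quick sketch of both suggestions, using a made-up base array:

```python
import numpy as np

x = np.arange(10)
a = x[2:6]         # view into x
b = x[4:8]         # overlapping view into x
c = np.arange(10)  # independent array with its own buffer

print(np.may_share_memory(a, b))  # True  (overlapping views of x)
print(np.may_share_memory(a, c))  # False (separate allocations)

# The ctypes property exposes where each view starts in memory,
# which is the kind of offset information the question asks about.
print(a.ctypes.data - x.ctypes.data)  # byte offset of a within x
```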
