I am trying to test a simple MPI code in Python with the following code:
from scipy.sparse import csr_matrix
from mpi4py import MPI
comm=MPI.COMM_WORLD
rank=comm.Get_rank()
size=comm.Get_size()
if rank == 0:
    data = [1, 2, 3, 4, 5]
    indices = [1, 3, 2, 1, 0]
    indptr = [0, 2, 3, 4, 5]
    #A=csr_matrix((data,indices,indptr),shape=(4,4))
data=comm.bcast(data, root=0)
indices=comm.bcast(indices, root=0)
indptr=comm.bcast(indptr, root=0)
print rank,data,indices,indptr
which returns the following error:
Traceback (most recent call last):
File "test.py", line 14, in <module>
data=comm.bcast(data, root=0)
NameError: name 'data' is not defined
Traceback (most recent call last):
File "test.py", line 14, in <module>
data=comm.bcast(data, root=0)
NameError: name 'data' is not defined
Traceback (most recent call last):
File "test.py", line 14, in <module>
data=comm.bcast(data, root=0)
NameError: name 'data' is not defined
0 [1, 2, 3, 4, 5] [1, 3, 2, 1, 0] [0, 2, 3, 4, 5]
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[10263,1],1]
Exit code: 1
It seems like the error is due to me not using comm.bcast properly, but that is exactly how it's used in the docs.
You are defining data only inside the if block. What happens when the if condition is false? On those ranks the variable data is never defined, hence the NameError.
from scipy.sparse import csr_matrix
from mpi4py import MPI
comm=MPI.COMM_WORLD
rank=comm.Get_rank()
size=comm.Get_size()
data = []
indices = []
indptr = []
if rank == 0:
    data = [1, 2, 3, 4, 5]
    indices = [1, 3, 2, 1, 0]
    indptr = [0, 2, 3, 4, 5]
    #A=csr_matrix((data,indices,indptr),shape=(4,4))
data=comm.bcast(data, root=0)
indices=comm.bcast(indices, root=0)
indptr=comm.bcast(indptr, root=0)
print rank,data,indices,indptr
This should now work.
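The failure is plain Python scoping, independent of MPI: a name bound only inside an if branch that is not taken never comes into existence. A minimal non-MPI sketch of the same mistake (load_config is a hypothetical function, just for illustration):

```python
def load_config(is_root):
    if is_root:
        data = [1, 2, 3, 4, 5]  # `data` is bound only on this branch
    return data                 # NameError when is_root is False

print(load_config(True))        # [1, 2, 3, 4, 5]
# load_config(False) would raise NameError: name 'data' is not defined,
# exactly like the non-root MPI ranks in the question.
```

Predefining the names before the if (as in the fix above), or initializing them to None on non-root ranks, both avoid the NameError.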
I am a little confused after a couple of attempts at importing operator, each of which gives an error. Along with a couple of examples, I've shared a link to the Python docs for reference below.
What I'm expecting is that operator.mul will multiply 3 * 4 from the data list, so the result starts [3, 12, ...], then multiply 12 by the next element 6 to give [3, 12, 72, ...]. However, importing operator here isn't working as expected.
The Output I'm expecting for this problem is:
[3, 12, 72, 144, 144, 1296, 0, 0, 0, 0]
Running the below code in PythonTutor.com gives me an Error:
ImportError: cannot import name 'operator'
from itertools import operator
data = [3, 4, 6, 2, 1, 9, 0, 7, 5, 8]
list(accumulate(data, operator.mul))
I've gotten the same type of error running this in Jupyter notebook:
ImportError Traceback (most recent call last)
<ipython-input-1-bc61652bebb8> in <module>
----> 1 from itertools import operator
2
3 data = [3, 4, 6, 2, 1, 9, 0, 7, 5, 8]
4 list(accumulate(data, operator.mul))
ImportError: cannot import name 'operator' from 'itertools' (unknown location)
I've spell-checked about 100 times and I've run this on both PythonTutor and Jupyter NB, and both give me errors. Could this be an issue with itertools?
Below is from The Python Docs. I'm using the first case:
operator.mul(a, b)
I'll share for your reference: Here
operator.__mul__(a, b)
Return a * b, for a and b numbers.
Why isn't this working, and how can I fix it?
operator is its own module, not part of itertools:
import itertools
import operator
Note that itertools.accumulate doesn't modify the iterable it is given. It returns a new iterator, which you are discarding above. Consider assigning the result to a variable:
data = [3, 4, 6, 2, 1, 9, 0, 7, 5, 8]
accumulated_list = list(itertools.accumulate(data, operator.mul))
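Putting the two imports together, a complete run of the fix, matching the output the question expects:

```python
import itertools
import operator

data = [3, 4, 6, 2, 1, 9, 0, 7, 5, 8]
# Running product: each element is the previous result times the next value.
accumulated_list = list(itertools.accumulate(data, operator.mul))
print(accumulated_list)  # [3, 12, 72, 144, 144, 1296, 0, 0, 0, 0]
```

Once a 0 appears in the data, every later running product stays 0, which is why the tail of the list is all zeros.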
Whenever I try to import a file named "tttnums.py" I always get this error:
Traceback (most recent call last):
File "C:/Users/Marcsegal/Dropbox/Programs/ttt finished.py", line 1, in <module>
import tttnums
RuntimeError: maximum recursion depth exceeded during compilation
This is the contents of tttnums.py:
tttSets = [
[7, 1, 4, 0, 3, 2, 8, 6, 5, 'L']
[0, 6, 5, 4, 2, 8, 1, 3, 7, 'W']
[2, 8, 0, 5, 6, 7, 4, 3, 1, 'W']
(continued with 40317 more lists)
]
I assume I got this error because I have so many lists in the file (40320 to be exact). How do I fix this error?
If the whole content of tttnums.py is just that data structure, it makes much more sense to store it in a plain-text or .json file and read it in than to import it as a .py file.
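A minimal sketch of that approach, assuming the data is written out once to a file named tttnums.json (filename hypothetical) and loaded wherever `import tttnums` was used before. Only the first two rows from the question are shown here:

```python
import json

# One-off conversion: dump the big list to JSON instead of a .py module.
tttSets = [
    [7, 1, 4, 0, 3, 2, 8, 6, 5, 'L'],
    [0, 6, 5, 4, 2, 8, 1, 3, 7, 'W'],
]
with open('tttnums.json', 'w') as f:
    json.dump(tttSets, f)

# In the main program, replace `import tttnums` with:
with open('tttnums.json') as f:
    tttSets = json.load(f)
```

json.load parses the data instead of compiling it as Python source, so the size of the list is no longer a problem for the compiler.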
I am trying to format an array using the prettytable library. Here is my code:
from prettytable import PrettyTable
arrayHR = [1,2,3,4,5,6,7,8,9,10]
print ("arrayHR:", arrayHR)
x = PrettyTable(["Heart Rate"])
for row in arrayHR:
    x.add_row(row)
This results in the following error:
arrayHR: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Traceback (most recent call last):
File "C:\Users\aag\Documents\Python\test.py", line 7, in <module>
x.add_row(row)
File "C:\Python33\lib\site-packages\prettytable.py", line 817, in add_row
if self._field_names and len(row) != len(self._field_names):
TypeError: object of type 'int' has no len()
What am I doing wrong?
According to the documentation, add_row is expecting a list, not an int, as an argument. Assuming that you want the values in arrayHR to be the first value in each row, you could do:
x = PrettyTable(["Heart Rate"])
for row in arrayHR:
    x.add_row([row])
or adopt the add_column example, also from the documentation:
x = PrettyTable()
x.add_column("Heart Rate", arrayHR)
Why do I get an error here? I am using Python 2.6 and pandas v0.13.1.
In [2]: df = pd.DataFrame({'x': [1, 1, 2, 2, 1, 1], 'y':[1, 2, 2, 2, 2, 1]})
In [3]: print pd.factorize(pd.lib.fast_zip([df.x, df.y]))[0]
---------------------------------------------------------------------------
SystemError Traceback (most recent call last)
<ipython-input-3-d98d985f2794> in <module>()
----> 1 print pd.factorize(pd.lib.fast_zip([df.x, df.y]))[0]
/usr/lib64/python2.6/site-packages/pandas/lib.so in pandas.lib.fast_zip (pandas/lib.c:8026)()
SystemError: numpy/core/src/multiarray/iterators.c:370: bad argument to internal function
You have to use df.x.values and df.y.values instead, in order to access the np.ndarray objects needed in pd.lib.fast_zip():
print(pd.factorize(pd.lib.fast_zip([df.x.values, df.y.values]))[0])
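Note that pd.lib is a private pandas module and has since been removed in later versions. The row-pair factorization itself is simple enough to sketch in plain Python (factorize_pairs is a hypothetical helper, not a pandas API), which also shows the codes expected for this frame:

```python
def factorize_pairs(xs, ys):
    """Assign each distinct (x, y) pair an integer code, in first-seen order."""
    uniques = {}
    return [uniques.setdefault(pair, len(uniques)) for pair in zip(xs, ys)]

# Same x and y columns as the DataFrame in the question.
codes = factorize_pairs([1, 1, 2, 2, 1, 1], [1, 2, 2, 2, 2, 1])
print(codes)  # [0, 1, 2, 2, 1, 0]
```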
>>> from multiprocessing import Array, Value
>>> import numpy as np
>>> a = [(i,[]) for i in range(3)]
>>> a
[(0, []), (1, []), (2, [])]
>>> a[0][1].extend(np.array([1,2,3]))
>>> a[1][1].extend(np.array([4,5]))
>>> a[2][1].extend(np.array([6,7,8]))
>>> a
[(0, [1, 2, 3]), (1, [4, 5]), (2, [6, 7, 8])]
Following the Python multiprocessing example def test_sharedvalues(), I am trying to create a shared Proxy object using the code below:
shared_a = [multiprocessing.Array(id, e) for id, e in a]
but it is giving me an error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.6/multiprocessing/__init__.py", line 255, in Array
return Array(typecode_or_type, size_or_initializer, **kwds)
File "/usr/lib64/python2.6/multiprocessing/sharedctypes.py", line 87, in Array
obj = RawArray(typecode_or_type, size_or_initializer)
File "/usr/lib64/python2.6/multiprocessing/sharedctypes.py", line 60, in RawArray
result = _new_value(type_)
File "/usr/lib64/python2.6/multiprocessing/sharedctypes.py", line 36, in _new_value
size = ctypes.sizeof(type_)
TypeError: this type has no size
OK, the problem is solved. I changed
>>> a = [(i,[]) for i in range(3)]
to
>>> a = [('i',[]) for i in range(3)]
and this solved the TypeError.
I also found that I did not actually need to use i as a counter from range(3) (since Array handles indexing itself); the 'i' is the typecode for c_int in multiprocessing.sharedctypes.
Hope this helps.
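Putting the fix together, a minimal sketch using the non-empty lists from the interpreter session above, with the 'i' (c_int) typecode as the first element of each tuple:

```python
from multiprocessing import Array

# ('i', values): the typecode comes first, the initializer second.
a = [('i', [1, 2, 3]), ('i', [4, 5]), ('i', [6, 7, 8])]
shared_a = [Array(typecode, e) for typecode, e in a]

# Slicing a shared Array copies its contents back out as a plain list.
print([arr[:] for arr in shared_a])  # [[1, 2, 3], [4, 5], [6, 7, 8]]
```

Passing a list as the second argument both sizes the shared array and initializes it, which is what the original `Array(id, e)` call was attempting before the integer id was mistaken for a typecode.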