Stuck converting binary to integers in Python

Hi, I have this code whose output comes out in binary.
import numpy as np

b = np.array(np.zeros((1000, 4)))
for i in range(1000):
    n = "{0:04b}".format(int(Y[i]))    # Y[i] formatted as a 4-digit binary string, e.g. '0001'
    digits = [int(x) for x in str(n)]  # split the string into individual bits
    b[i] = digits
This gives an output like this:
[0, 0, 0, 1]
Please ignore the commas and read it as [0 0 0 1], a binary number.
I'm trying to change it so that it gives integers instead, like:
1
Can anyone help me edit my code so that it outputs integers rather than binary numbers?

Python's built-in int() will do this for you:
binary_num = ''.join(str(int(d)) for d in arr)  # e.g. [0, 0, 0, 1] -> '0001'
regular_num = int(binary_num, 2)                # 2 is the base
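Applied to the question's setup, a minimal sketch (assuming b is the (1000, 4) array built above) would convert each row back to an integer like this:
# Sketch: convert each row of the 0/1 array back to an int
ints = [int(''.join(str(int(d)) for d in row), 2) for row in b]
print(ints[0])  # e.g. 1 for the row [0. 0. 0. 1.]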

Related

How can I correctly convert binary characters in list to decimal

I want to create a function which will convert the binary digits in a list to a decimal number. For that I have reversed the list and used a for loop to iterate over the list items. However, I am unable to get the correct result. Can anyone tell me where I am making the mistake?
def binary_array_to_number(arr):
    #arr = arr.reverse()
    arr = arr[::-1]
    new_num = 0
    for item in arr:
        for i in range(len(arr)):
            new_num = new_num + item * (2 ** i)
    print(new_num)

binary_array_to_number(arr=[0, 0, 0, 1])
Python has built-in conversion from binary to base-10 numbers using the int() constructor. By converting the array to a string representing the binary number, you can use int to convert it to decimal:
binary_arr = [0, 0, 0, 1]
binary_string = ''.join(str(digit) for digit in binary_arr) # this line simply converts the array to a string representing the binary number [0, 0, 0, 1] -> '0001'
decimal_number = int(binary_string, 2) # the 2 here represents the base for the number - base 2 means binary.
print(decimal_number) # prints '1'
You're not checking if the current bit is 1 or not, so you'll always generate 2**n - 1. Also, you've got two loops running instead of one, which will also lead to an incorrect result.
def binary_array_to_number(arr):
    #arr = arr.reverse()
    arr = arr[::-1]
    new_num = 0
    for i in range(len(arr)):
        if arr[i]:
            new_num += (2 ** i)
    print(new_num)

binary_array_to_number(arr=[0, 1, 0, 1])
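For comparison, here is a small sketch (not from the original answers; the function name is illustrative) that accumulates the bits from left to right, so no reversal is needed:
def bits_to_number(bits):
    # Shift the running total left by one bit and add the next digit:
    # [0, 1, 0, 1] -> 0 -> 1 -> 2 -> 5
    result = 0
    for bit in bits:
        result = result * 2 + bit
    return result

print(bits_to_number([0, 1, 0, 1]))  # 5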

How to change a binary numpy array into an int? [duplicate]

I have the task of converting an array of binary digits into a decimal number.
My array code looks like this:
koo = np.random.randint(2, size=10)
print(koo)
Example of the output is:
[1 0 1 0 0 0 0 0 1 0]
And what I'm supposed to do is to turn it into a string (1010000010), in order to then turn it into a decimal number (642). What could be a way to do this?
For this task I have to work with the array; I cannot use np.binary_repr().
I have tried the solution offered here, but the numbers it produced were wildly different from the true values (e.g. 1011111010 came out as 381 according to the program, while in reality it is 762).
Anybody knows how to solve this problem? Thank you in advance!
First, convert the NumPy array to a string by simply appending its elements. Then, using int(), recast the string with a base of 2.
import numpy as np

koo = np.random.randint(2, size=10)
print(koo)

binaryString = ""
for digit in koo:
    binaryString += str(digit)             # append each bit's character
print(binaryString)

decimalNumber = int(binaryString, base=2)  # parse the string as a base-2 number
print(decimalNumber)
An example run:
[0 0 1 0 1 0 0 0 0 0]
0010100000
160
You can use a list comprehension to enumerate the digits from the end to the beginning, multiply each by the corresponding power of two, and then sum the results. For example:
np.sum([digit * 2 ** i for i, digit in enumerate(koo[::-1])])
>>> koo = np.random.randint(2, size=10)
>>> koo
array([1, 1, 0, 0, 0, 0, 1, 1, 1, 1])
>>> binary_string = ''.join(koo.astype('str').tolist())
>>> binary_string
'1100001111'
>>> binary_val = int('1100001111', 2)
>>> binary_val
783
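A fully vectorized alternative (a sketch, not from the answers above) weights each bit by its power of two and sums with a dot product; koo here is assumed to be the same array as in the session above:
import numpy as np

koo = np.array([1, 1, 0, 0, 0, 0, 1, 1, 1, 1])
weights = 2 ** np.arange(koo.size)[::-1]  # [512, 256, ..., 2, 1], most significant bit first
value = int(koo.dot(weights))
print(value)  # 783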

I am getting out of range error on code, see details below

Code:
import random
randomnum = random.randint(1, 256)
enter = "john"
enterlen = (len(enter))
enterarray = [randomnum, enterlen]
i = 0
while i <= enterlen:
    enterarray.append(enter[i])
    i = i + 1
Error:
Traceback (most recent call last):
File "/home/kdogg/PycharmProjects/starter/encode.py", line 9, in <module>
enterarray.append(enter[i])
IndexError: string index out of range
I am trying to make an encryption program using this array. I am supposed to fill up the array starting with a random number, then the length of the word I want to encode, then each letter using a 1-to-1 code (a = 1, b = 2, c = 3, etc.).
I am going to randomly multiply the numbers given (a = 1, b = 2, etc.) to get numbers between two values (a could be between 1 and 25, etc.) and then convert it into hex to dim it down.
Example: let's say you put in the word ABC. It would end up as something like (100)(3)(1545784): first the random number, then the word length, then each letter assigned a random number between two values, then everything converted to hex.
However, my array is not filling up even when using append().
It's simple: indexes start at 0 but len() returns a count of items, so the last valid index is one less than the length. I recommend changing the while condition to i < enterlen. You should also consider using a for loop.
Use while i < enterlen: instead of while i <= enterlen:
In your case i goes up to 4 (the 5th character), whereas "john" only has 4 characters, so it gives an index out of range error.
You are trying to access enter[4], which is not a valid index for that string. Strings are 0-indexed, so the only valid indices for enter are 0, 1, 2, and 3.
To solve your problem, adjust your while loop condition to i < enterlen, so that the loop ends when i reaches 4.
This will now print correctly [98, 4, 'j', 'o', 'h', 'n'].
As you can see in the other answers, you need to use < instead of <= because valid indexes run from 0 to len(enter) - 1.
i = 0
while i < enterlen:
    enterarray.append(enter[i])
    i = i + 1
But instead of while you could use a for loop, and then you have no problem with indexes at all.
for char in enter:
    enterarray.append(char)
For example, to convert ASCII chars to numbers:
for char in enter:
    number = ord(char) - ord('A') + 1
    enterarray.append(number)
A = 1, B = 2, C = 3, etc.; a = 33, b = 34, c = 35, etc.
If you want only lowercase chars then you should use ord('a') instead of ord('A'), and you should convert enter with lower() to make sure you don't have uppercase chars.
enter = enter.lower()
BTW: if you only wanted to add the chars to the array, then you could do
enterarray = [randomnum, enterlen] + list(enter)
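Putting the pieces together, here is a rough sketch of building the array the question describes (random number, word length, then the 1-to-1 letter code); the random multiplication and hex steps are only outlined in the question, so they are left out, and the function name is illustrative:
import random

def build_array(word):
    word = word.lower()
    arr = [random.randint(1, 256), len(word)]  # random number, then word length
    for char in word:
        arr.append(ord(char) - ord('a') + 1)   # a = 1, b = 2, c = 3, ...
    return arr

print(build_array("john"))  # e.g. [98, 4, 10, 15, 8, 14]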

How to print a matrix using Python [duplicate]

I am trying to create a matrix and then print it using python with the following expected output:
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
The code follows:
matrix=[[]]
matrix = [[0 for x in range(4)] for x in range(4)]
print matrix
But the output comes as:
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
Please tell me why I'm getting this kind of output and how to get the correct one.
What you are getting is the correct representation of the matrix. You have created a list that contains 4 lists, each containing four 0s. If you want to print it differently then you have some options, but what you are printing is the default representation of the data structure described above.
for row in matrix:
    print ' '.join(map(str, row))
this should work for you.
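With the 4x4 matrix from the question, this prints:
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0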
Try this:
matrix = "\n".join([" ".join(["0" for x in range(4)]) for x in range(4)])
print matrix
The thing is that you're trying to print a list object instead of a string. This way you join all of the numbers into one large string, separated with spaces for columns and \n (newlines) for rows.
Keep in mind this method will not line up 2-digit numbers unless you pad the 1-digit ones with a leading zero or space. For it to work in all cases, I suggest using the tabulate module.
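As a simple padding sketch (not part of the original answer), each cell can be right-aligned to a fixed width with str.format:
matrix = [[0, 5, 12, 7], [3, 100, 0, 9], [1, 2, 3, 4], [0, 0, 0, 0]]
print("\n".join(" ".join("{:3d}".format(n) for n in row) for row in matrix))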
Maybe something like this:
matrix = [[]]
matrix = [[0 for x in range(4)] for x in range(4)]
for a in matrix:
    ln = ""
    for i in a:
        ln += str(i) + " "
    print(ln)

how to create a range of random decimal numbers between 0 and 1

How do I define a decimal range between 0 and 1 in Python? The range() function in Python returns only int values. I have some variables in my code whose value varies from 0 to 1. I am confused about how to put that in the code. Thank you.
I would add more to my question: there is no step or increment value that would generate the decimal values. I have to use a variable which could have any value from 0 to 1. But the program should know that its boundary is the range from 0 to 1. I hope I made myself clear. Thank you.
http://docs.python.org/library/random.html
If you are looking for a list of random numbers between 0 and 1, I think you will find the random module useful:
>>> import random
>>> [random.random() for _ in range(0, 10)]
[0.9445162222544106, 0.17063032908425135, 0.20110591438189673,
0.8392299590767177, 0.2841838551284578, 0.48562600723583027,
0.15468445000916797, 0.4314435745393854, 0.11913358976315869,
0.6793348370697525]
for i in range(100):
    i /= 100.0
    print i
Also, take a look at decimal.
def float_range(start, end, increment):
    int_start = int(start / increment)
    int_end = int(end / increment)
    for i in range(int_start, int_end):
        yield i * increment
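A quick usage sketch of the generator above:
print(list(float_range(0, 1, 0.25)))  # [0.0, 0.25, 0.5, 0.75]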
This will output decimal numbers between 0 and 1 with a step size of 0.1:
import numpy as np
mylist = np.arange(0, 1, 0.1)
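Printing mylist gives something like the following (note that np.arange stops before the end value, so 1.0 itself is not included):
print(mylist)  # [0.  0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9]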
It seems like list comprehensions would be fairly useful here.
mylist = [x / float(n) for x in range(n)]  # float(n) so this isn't integer division on Python 2
Something like that? My Python's rusty.
>>> from decimal import Decimal
>>> [Decimal(x)/10 for x in xrange(11)]
[Decimal("0"), Decimal("0.1"), Decimal("0.2"), Decimal("0.3"), Decimal("0.4"), Decimal("0.5"), Decimal("0.6"), Decimal("0.7"), Decimal("0.8"), Decimal("0.9"), Decimal("1")]
Edit, given comment on Mark Random's answer:
If you really don't want a smoothly incrementing range, but rather a random number between 0 and 1, use random.random().
