Using Ranges and Multiplying - python

I have started creating a mini program, but I have 1 issue. I've created a variable, and I want to add something like this:
for x in range(0, 12):
    print (rand_no) * (x)
The variable rand_no is defined earlier in my program, but I want to multiply it by x. Please help me.

If you're using Python 3 as your tags indicate, you need to be aware that print is now a function. As a result, this code doesn't do what you're expecting:
for x in range(12):
    # the call to `print` returns `None` and you try to multiply it by `x`
    print(rand_no) * (x)
Instead, you want:
for x in range(12):
    print(rand_no * x)
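To see why the original form fails, note that `print` returns `None`, so multiplying its result raises a `TypeError`. A minimal sketch (the value of `rand_no` here is hypothetical; the question defines it elsewhere):

```python
rand_no = 7  # hypothetical stand-in; defined earlier in the asker's program
result = print(rand_no)   # prints 7, but the call itself evaluates to None
try:
    result * 3            # None * 3
except TypeError as err:
    print("can't multiply:", err)
```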

Related

How do I fix this while loop? [duplicate]

I have noticed that it's common for beginners to have the following simple logical error. Since they genuinely don't understand the problem, a) their questions can't really be said to be caused by a typo (a full explanation would be useful); b) they lack the understanding necessary to create a proper example, explain the problem with proper terminology, and ask clearly. So, I am asking on their behalf, to make a canonical duplicate target.
Consider this code example:
x = 1
y = x + 2
for _ in range(5):
    x = x * 2  # so it will be 2 the first time, then 4, then 8, then 16, then 32
    print(y)
Each time through the loop, x is doubled. Since y was defined as x + 2, why doesn't it change when x changes? How can I make it so that the value is automatically updated, and I get the expected output
4
6
10
18
34
?
Declarative programming
Many beginners expect Python to work this way, but it does not. Worse, they may inconsistently expect it to work that way. Carefully consider this line from the example:
x = x * 2
If assignments were like mathematical formulas, we'd have to solve for x here. The only possible (numeric) value for x would be zero, since any other number is not equal to twice that number. And how should we account for the fact that the code previously says x = 1? Isn't that a contradiction? Should we get an error message for trying to define x two different ways? Or should x blow up to infinity, as the program keeps trying to double the old value of x?
Of course, none of those things happen. Like most programming languages in common use, Python is an imperative language, meaning that lines of code describe actions that occur in a defined order. Where there is a loop, the code inside the loop is repeated; where there is something like if/else, some code might be skipped; but in general, code within the same "block" simply happens in the order that it's written.
In the example, first x = 1 happens, so x is equal to 1. Then y = x + 2 happens, which makes y equal to 3 for the time being. This happened because of the assignment, not because of x having a value. Thus, when x changes later on in the code, that does not cause y to change.
Going with the (control) flow
So, how do we make y change? The simplest answer is: the same way that we gave it this value in the first place - by assignment, using =. In fact, thinking about the x = x * 2 code again, we already have seen how to do this.
In the example code, we want y to change multiple times - once each time through the loop, since that is where print(y) happens. What value should be assigned? It depends on x - the current value of x at that point in the process, which is determined by using... x. Just like how x = x * 2 checks the existing value of x, doubles it, and changes x to that doubled result, so we can write y = x + 2 to check the existing value of x, add two, and change y to be that new value.
Thus:
x = 1
for _ in range(5):
    x = x * 2
    y = x + 2
    print(y)
All that changed is that the line y = x + 2 is now inside the loop. We want that update to happen every time that x = x * 2 happens, immediately after that happens (i.e., so that the change is made in time for the print(y)). So, that directly tells us where the code needs to go.
Defining relationships
Suppose there were multiple places in the program where x changes:
x = x * 2
y = x + 2
print(y)
x = 24
y = x + 2
print(y)
Eventually, it will get annoying to remember to update y after every line of code that changes x. It's also a potential source of bugs, that will get worse as the program grows.
In the original code, the idea behind writing y = x + 2 was to express a relationship between x and y: we want the code to treat y as if it meant the same thing as x + 2, anywhere that it appears. In mathematical terms, we want to treat y as a function of x.
In Python, like most other programming languages, we express the mathematical concept of a function using something called... a function. In Python specifically, we use the def statement to write functions. It looks like:
def y(z):
    return z + 2
We can write whatever code we like inside the function, and when the function is "called", that code will run, much like our existing "top-level" code runs. When Python first encounters the block starting with def, though, it only creates a function from that code - it doesn't run the code yet.
So, now we have something named y, which is a function that takes in some z value and gives back (i.e., returns) the result of calculating z + 2. We can call it by writing something like y(x), which will give it our existing x value and evaluate to the result of adding 2 to that value.
Notice that the z here is the function's own name for the value that was passed in, and it does not have to match our own name for that value. In fact, we don't have to have our own name for that value at all: for example, we can write y(1), and the function will compute 3.
What do we mean by "evaluating to", or "giving back", or "returning"? Simply, the code that calls the function is an expression, just like 1 + 2, and when the value is computed, it gets used in place, in the same way. So, for example, a = y(1) will make a be equal to 3:
The function receives a value 1, calling it z internally.
The function computes z + 2, i.e. 1 + 2, getting a result of 3.
The function returns the result of 3.
That means that y(1) evaluated to 3; thus, the code proceeds as if we had put 3 where the y(1) is.
Now we have the equivalent of a = 3.
For more about using functions, see How do I get a result (output) from a function? How can I use the result later?.
Going back to the beginning of this section, we can therefore use calls to y directly for our prints:
x = x * 2
print(y(x))
x = 24
print(y(x))
We don't need to "update" y when x changes; instead, we determine the value when and where it is used. Of course, we technically could have done that anyway: it only matters that y is "correct" at the points where it's actually used for something. But by using the function, the logic for the x + 2 calculation is wrapped up, given a name, and put in a single place. We don't need to write x + 2 every time. It looks trivial in this example, but y(x) would do the trick no matter how complicated the calculation is, as long as x is the only needed input. The calculation only needs to be written once: inside the function definition, and everything else just says y(x).
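Putting the pieces together, here is a self-contained version of the corrected example that uses the function (the values are collected in a list here only so the full output is easy to see):

```python
def y(z):
    # the x + 2 relationship, written once in a single place
    return z + 2

x = 1
results = []
for _ in range(5):
    x = x * 2
    results.append(y(x))

print(results)  # [4, 6, 10, 18, 34]
```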
It's also possible to make the y function use the x value directly from our "top-level" code, rather than passing it in explicitly. This can be useful, but in the general case it gets complicated and can make code much harder to understand and prone to bugs. For a proper understanding, please read Using global variables in a function and Short description of the scoping rules?.

Issue with function definition for Write and Call Some Simple Functions [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
I know you guys are not here to do my homework for me, but I just don't have enough Python programming knowledge to resolve the assignment. I tried, but it doesn't seem to work. Can you please help me out? Below is the assignment.
Assignment instruction:
Page 1 of 3
Instructions
Examine the starter code in the code editor window and understand what it is doing.
Content of the exercise.py file is below which is in the editor window.
x = 3
y = 4
if x < y:
    min_xy = x
else:
    min_xy = y
print(min_xy)

a = 12.3
b = 13.7
if a < b:
    min_ab = a
else:
    min_ab = b
print(min_ab)

w = -3.9
z = -4.7
if w < z:
    min_wz = w
else:
    min_wz = z
print(min_wz)
Instructions
Now we want to write a Python function that carries out this repeated
operation, so that we can just write it once and call it repeatedly.
In the code editor, at the top of the file before the existing code,
write a function named find_min that takes two inputs and returns the
lesser of the two. Remember to use the def keyword to start the
definition of a new function, indent the body of the function under
the def line, and return the result at the end.
Start the ipython interpreter by typing ipython at the command prompt.
Run the exercise in the interpreter (%run exercise.py): you should see
some variables printed out from the starter code. Type %who to see
that the interpreter also knows about your new function find_min. Test
out your new function interactively within the interpreter, with some
input values of your choosing.
Now we want to reorganize exercise.py so that it does the same thing
as before, except more efficiently by using your new find_min
function.
Replace the appropriate blocks of code with new code that accomplishes
the same thing by calling your find_min function. Do not replace the
variables or change their values.
For example, one such operation will call find_min with the variables
x (with x having a value of 3) and y (with y having a value of 4) as
arguments and assign the returned value to the new variable min_xy.
Verify that your new code runs and produces the same results as the
original code.
My function definition is below.
def find_min(x, y):
    if x < y:
        return x
    else:
        return y
    min_xy = find_min(3, 4)
    print(min_xy)
I am getting the following errors:
Tests that failed:
! Call function find_min to find the lesser of x (x has a value of 3) and y (y has a value of 4) and assign returned value to min_xy
make sure x and y are defined create min_xy
! Call function find_min to find the lesser of a (a has a value of 12.3) and b (b has a value of 13.7) and assign returned value to min_ab
make sure a and b are defined create variable min_ab
! Call function find_min to find the lesser of w (w has a value of -3.9) and z (z has a value of -4.7) and assign returned value to min_wz
make sure w and z are defined create min_wz
Page 2 of 3
Instructions
The code editor should contain your find_min function. We are free to
define more than one function in our code. In the code editor, write a
second function called find_max that takes two inputs and returns the
greater of the two. Do Not Delete the Existing Code in the Code
Editor.
Use your new function to find the greater of x (x has a value of 3)
and y (y has a value of 4) and assign it to the variable max_xy.
Thanks in advance,
Marie
You need to take the call of the function out of the function definition. In your code, the call is indented inside find_min, so it is part of the function body (and placed after the return, so it never runs at all):

def find_min(x, y):
    if x < y:
        return x
    else:
        return y

min_xy = find_min(x, y)
print(min_xy)

Also note that, per the assignment and the test output, the grader expects you to call find_min with the existing variables (x and y, then a and b, then w and z), not with literal values, and to assign each result to the corresponding variable (min_xy, min_ab, min_wz).
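For the second page of the assignment, find_max follows exactly the same pattern as find_min. A sketch, using the variable names the instructions ask for (x and y come from the starter code):

```python
def find_max(a, b):
    # return the greater of the two inputs
    if a > b:
        return a
    else:
        return b

x = 3
y = 4
max_xy = find_max(x, y)
print(max_xy)  # 4
```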

Default Parameters in Function Definition, Syntax problems.

I am having an issue finding a straightforward answer to a question that I have.
I am coding a program that has some default values for certain parameters that do not end up being called by the user. My program is somewhat complicated so I decided to try a simplified problem.
def mult(x = 1, y = 2, z = 3):
    ans = x * y * z
    print(ans)

mult()
In this quick program, the function call would result in 6, which makes sense because it uses the default values I provided. My question is: how can I call this function if, for example, I wanted to define y and not any other variable? What would be the correct syntax in that situation?
My intuition was to call mult(x, 5, z) to indicate default values for x and z but a new value for y. I know that does not work and would like to know what the correct syntax would be.
You can specify which parameter you are supplying by using = at the call site:
mult(y = 5)
You can call it with keyword arguments:
mult(y=7)
mult(z=55)
mult(z=12,y=16,x=5)
mult(x=15)
Although, as an aside, it's probably preferable to return ans instead of just printing it.
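To make both points concrete, here is a sketch of mult that returns its result, called with keyword arguments in a few combinations:

```python
def mult(x=1, y=2, z=3):
    ans = x * y * z
    return ans  # returning, rather than printing, so the caller can use the value

print(mult())         # 6  (all defaults)
print(mult(y=5))      # 15 (x=1 and z=3 stay at their defaults)
print(mult(4, z=10))  # 80 (positional x=4, default y=2, keyword z=10)
```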

solve equations using multiple values of x

I need help with the following: let's say you ask the user for an equation; it can be anything. To illustrate this example, let's say the user chooses this one:
x**5 + 2, and they also need to assign a value of x, so it will look like this:
write the equation:
write the value of x you want to calculate:
My question is: how can you modify the x in the equation the user gave first, to assign the value they want to calculate? In other words, how can I make Python calculate x**5 + 2 for x = any input value?
It looks like you want the user to enter an equation with variables and the value for the variable. And you want your code to evaluate the user input equation with user input value for the variable in the equation. Sounds like a candidate for eval():
In [185]: equation = 'x ** 5 + 2'
In [186]: x = 3
In [187]: eval(equation)
Out[187]: 245
Another example:
In [188]: equation = '(x + 99) * 2'
In [189]: x = 1
In [190]: eval(equation)
Out[190]: 200
Since in the above demonstration, equation is a string, it might as well be user input. When you ask the user to input the equation, store it in a variable (the variable equation here). Then when you ask them for a value for the variable in the equation, cast it to int and just do an eval(equation) in your code. (Keep in mind that eval executes arbitrary Python code, so only use it on input you trust.)
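As a sketch of the whole flow, with literal strings standing in for the input() calls (the variable names are illustrative):

```python
equation = 'x ** 5 + 2'  # stand-in for input("write the equation: ")
x = int('3')             # stand-in for int(input("write the value of x: "))
result = eval(equation)  # evaluates the string using the current value of x
print(result)  # 245
```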
Your question is hard to decipher, but if I understand correctly, you're talking about writing a function. Try the following:
def f(x):
    return x**5 + 2
And you may use the function for any value of x, e.g., f(2).
This is relatively easy to do if you look up each piece of the puzzle (or coding problem, in this case).
First you need user input:
x_string = input("Enter an x value: ")
Then you need to convert the input string into an integer:
x = int(x_string)
Finally you need to calculate the value and print it out:
results = x**5 + 2
print(results)
You seem to be new to Python, so I highly recommend looking up some tutorials (like this one).

How to execute only parts of a function?

I have defined a class which contains some basic matrix functions. My function for transposing a matrix looks like this:
def transpose(self):
    '''
    Transpose a matrix.
    '''
    C = Z  ## creating a new zero matrix
    for i in range(0, self.rows):
        for j in range(0, self.cols):
            C.matrix[i][j] = self.matrix[j][i]
    ## printing the resultant matrix
    C.show()
    return C
So when I call this function from the interpreter, it prints the result after execution (because of the show() function).
However, when I call this function from another function in the same class, I don't want the matrix to be printed, that is, I don't want the C.show() part to execute.
Is there any way to do this? I was thinking on the lines of __name__ == "__main__" but that doesn't really apply here it seems.
Just add another, default, parameter to the function and put the print in an if:
def transpose(self, print_matrix=True):
    '''
    Transpose a matrix.
    '''
    C = Z  ## creating a new zero matrix
    for i in range(0, self.rows):
        for j in range(0, self.cols):
            C.matrix[i][j] = self.matrix[j][i]
    ## printing the resultant matrix
    if print_matrix:
        C.show()
    return C
As it has a default value you don't need to change any current method calls, but you can add another parameter to your new one. Call it as transpose(False), if you don't want to print.
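To see the pattern in isolation, here is a minimal, hypothetical matrix class (much simpler than the one in the question, which we can't see in full) demonstrating the print_matrix flag:

```python
class Matrix:
    """Tiny stand-in for the asker's matrix class (hypothetical)."""

    def __init__(self, rows, cols):
        self.rows = rows
        self.cols = cols
        self.matrix = [[0] * cols for _ in range(rows)]

    def show(self):
        for row in self.matrix:
            print(row)

    def transpose(self, print_matrix=True):
        C = Matrix(self.cols, self.rows)  # new zero matrix
        for i in range(C.rows):
            for j in range(C.cols):
                C.matrix[i][j] = self.matrix[j][i]
        if print_matrix:
            C.show()  # only print when the caller wants output
        return C

m = Matrix(2, 2)
m.matrix = [[1, 2], [3, 4]]
t = m.transpose(print_matrix=False)  # silent: no output printed
print(t.matrix)  # [[1, 3], [2, 4]]
```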
The problem you have is that the calculation and display are both coupled into the same function. In general, tight coupling like this is considered undesirable. Why? Well, you are seeing the problem now: you can't do one part of the function without the other.
Now I could give you crazy answers about how to only print when called from the interpreter, but I would be encouraging bad code. Instead, we should decouple this function into two different functions.
Decoupling is simple, take the two different things your code is doing -- printing and calculating -- and separate them into two different functions.
def transpose(self):
    '''
    Transpose a matrix.
    '''
    C = Z  ## creating a new zero matrix
    for i in range(0, self.rows):
        for j in range(0, self.cols):
            C.matrix[i][j] = self.matrix[j][i]
    return C

def transposeAndPrint(self):
    C = self.transpose()
    C.show()
Now you can call transposeAndPrint when you need to print, and transpose when you don't need to.
Of course I could tell you to add an extra parameter but that won't change the fact that a transpose function should not do printing (except for debug).
def transpose(self):
    '''
    Transpose a matrix.
    '''
    C = Z  ## creating a new zero matrix
    for i in range(0, self.rows):
        for j in range(0, self.cols):
            C.matrix[i][j] = self.matrix[j][i]
    return C
And somewhere else, where you need it:
o.transpose()
o.show()
Another option would be to use Python's logging module. You can set multiple logging levels, which are intended for just this type of situation. In this case, you could make the show function output at the DEBUG level, which you can then easily turn on or off as need be.
See:
http://docs.python.org/2/howto/logging.html
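As a sketch of that idea, using a standalone function rather than the question's class (the matrix math here is simplified with zip just for the demonstration):

```python
import logging

# Change DEBUG to INFO to silence the debug output without touching the code below.
logging.basicConfig(level=logging.DEBUG)

def transpose(matrix):
    transposed = [list(row) for row in zip(*matrix)]
    # Emitted only when the DEBUG level is enabled:
    logging.debug("transposed matrix: %s", transposed)
    return transposed

result = transpose([[1, 2], [3, 4]])
```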
