How to use a for loop in a function with a parameter - Python

I'm new to Python and don't understand how this works.
I have this code:
def main():
a=input("Type number")
e=int(a)
function2(e);
def function2(e):
for h in range(e):
print("X")
main();
It gives me this error:
Traceback (most recent call last):
for h in range(e):
NameError: name 'e' is not defined

That's because of an indentation error: the bodies of your functions and of your for loop need to be indented:
def main():
    a = input("Type number")
    e = int(a)
    function2(e)

def function2(e):
    for h in range(e):
        print("X")

main()
Also, no semicolons are required in Python.

Your problem seems to be a series of misunderstandings in general. Your errors in your code are simple, but let's see if I can't walk you through some concepts so that when I show you the fixed code, you understand it completely. Keep in mind that I expect you already know a lot of it, but I'm writing this answer not just for you, but for any other beginners who stumble upon this page. :)
It seems we can cover the following topics (each concept is simple in itself, but you need to get them completely in order to get the next one):
Variables in programming
Variables in python
Whitespace in python
Functions: Parameters vs Arguments
Solution: fixing your code
Variables in programming
Variables, as you probably know, are simply the labels we give to data. In most languages, you have to declare the variable first so that you can have the appropriate type assigned to it (that is, so the computer knows whether it's an integer, a string, a boolean, etc). Thus, you need the following code:
int myVariable1 = 3;
string myVariable2 = "hello";
bool myVariable3 = true;
(In some languages, you need to declare variables and then assign a value to them.)
Variables in python
Python, unlike many beginner languages, is dynamically typed. This means that the variables (the labels on the data) have no type, but the values do.
That means that your code can look like this:
myVariable1 = 3
myVariable2 = "hello"
myVariable3 = True
And python can figure out what types to use, based on the data assigned to the variables.
(Note: in Python, you don't need ; to end a line, and boolean values are capitalized: True, False.)
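You can watch dynamic typing in action by rebinding the same name to values of different types and inspecting each one with the built-in type():

```python
x = 3
print(type(x))    # <class 'int'>  -- the value is an int
x = "hello"
print(type(x))    # <class 'str'>  -- same name, new value, new type
x = True
print(type(x))    # <class 'bool'>
```

The name x never has a fixed type; only whatever value it currently labels does.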
Whitespace in python
Python was designed to be easy to read. Computers use hints inside a language ((), [], {}, :, ;, etc.) to know what's going on. In Python, whitespace is part of that hinting, or syntax. Most languages ignore whitespace, but because Python does not, it can be used to format your code in a visually pleasing way. In C++,
void myFunction() { std::string myString = "wow such learn good job, many doge wow"; }
and
void myFunction() {
    std::string myString = "wow such learn good job, many doge wow";
}
are the same. You can see how this could confuse a new programmer, as it doesn't even look the same. But in Python, the code has to look like:
def myFunction():
    myString = "wow such learn good job, many doge wow"
And it is this uniformity that makes Python so much easier to work with, for a lot of people.
Functions: Parameters vs Arguments
In every decent language, the use of functions is vital, and understanding them completely is even more vital.
Functions can easily be related to a basic concept from algebra. Functions already exist in algebra, which is why the comparison is so easy.
In Algebra, a function is an equation with variables in it. Inside the function, work is ready to be done with the equation that is set up, and it's just waiting for you to fill in the missing pieces. That is to say,
f(x) = 3 + 2x + x^2
is a function that is ready to go, but it needs you to put in x.
This is the same thing in programming. When I write
def myFunction(x):
    return 3 + 2*x + x**2
I am writing the exact same thing as f(x): a working equation that depends on the information it is given.
A note: Not all programming functions do math exactly, some operate on strings, but they all alter data and that is my point. Some functions don't even need input, because they operate on data independent of what you're doing. Here, the comparison falls apart somewhat, but I hope you're still onboard.
So, what are arguments and what are parameters?
When defining the function and then calling the function:
def myFunction(x):   # defining the function f(x)
    return 3 + 2*x + x**2

print(myFunction(3))   # calling the function f(x) where x=3
The parameter is x in the first line. Parameters are the variables that you put into the definition of a function.
The argument is the 3 that you put in place of the x when you called the function. Arguments are the values you use to fill in the variables in a function.
As such, you are now giving the function the value 3 and it solves the following:
3+2*(3)+(3)^2
3+6+9
9+9
18
The resulting output will of course print:
18.
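As a check, the walk-through really does come out to 18 when run. The sketch below also adds a second, made-up function (greet) to illustrate the earlier note that some functions need no input at all:

```python
def f(x):           # x is the parameter
    return 3 + 2*x + x**2

def greet():        # no parameters: works independently of any input
    return "hello"

print(f(3))         # 3 is the argument -> prints 18
print(greet())      # -> hello
```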
Solution: fixing your code
Now that we've gone over all of the base concepts that led to your errors, here is your original code:
def main():
a=input("Type number")
e=int(a)
function2(e);
def function2(e):
for h in range(e):
print("X")
main();
There are a number of errors here:
Your def main(): is written correctly, but the lines inside it aren't indented. The Python standard (PEP 8) is 4 spaces per level of indentation.
Inside main(), the call function2(e); ends with a ;, which, unlike in lots of other languages, Python doesn't need; just removing it fixes that.
Your def function2(e): has no errors aside from the same indentation problem we saw in def main():
Your function2 makes use of print(), which, while not an error, is a significant syntax difference between Python 2 and Python 3; for this reason, I'm assuming Python 3 here for future-proofing.
When you call main();, the trailing ; is unnecessary and can be removed.
Here is a revised version of your code that works.
def main():
    a = input("Type number")
    e = int(a)
    function2(e)

def function2(e):
    for h in range(e):
        print("X")

main()
Do you understand how it works completely now? Sorry for all the reading, hopefully you are much more comfortable now, having gone through the entire thing!
For any questions, don't hesitate to ask in a comment below.
Happy Coding!
PS - I see that you already picked the best answer. But maybe after reading this one, you'll change your mind ;)

Your function2 body isn't indented, so if you change that bit to:
def function2(e):
    for h in range(e):
        print("X")
You should be good to go.

There is no mistake or syntax error in your code; it works fine.
You can indent the code with a tab, one space, or anything else, as long as you keep the blocks consistent.

Related

I **really** need a function defined by user input. What are my options?

I'm writing a Python script that parses a user-inputted string defining a differential equation, such as 'x\' = 2*x'. My main problem is that I don't want to implement numerical solution methods myself, and instead rely on SciPy's solve_ivp method, for which a function such as
def my_de(t, x):
    return 2*x
is absolutely necessary, since solve_ivp's first argument must be a function. Currently, I'm working around this problem with the following piece of code (in a simplified version):
var = 'x'
de = '2*x'

def my_de(t, y):
    exec(f'{var} = {y}')
    return eval(de)
A quick explanation for this terribleness: I do not know what variable the user's going to use in the input. var may be theta, it may be sleepyjoe, it may be donalddump. The only thing guaranteed is that the only variable on de is var. You can forget about t for the purposes of this post.
My question is, how can I avoid using exec and eval in this context? I know using any of these is a terrible idea, and I don't want to do it. However, I'm not really seeing any other option.
I am already parsing the user input beforehand, so I can try to make this safe (prohibited variable names, etc.), but anyone who wants to abuse this will be able to anyway.
In addition to the previous comments, another possibility is to evaluate the function definition itself:
userInput = "2*x + x**3"  # the input you wish to implement
exec("def test(x): return {}".format(userInput))
print(test(1.))
This will avoid the overhead of evaluating the userInput at each call.
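If you do stay with eval, one common mitigation (not a full sandbox, as the question already concedes) is to compile the expression once and evaluate it with an empty __builtins__ plus only the names you explicitly allow. The make_de helper below is a hypothetical sketch, not part of SciPy:

```python
import math

def make_de(var, expr):
    # compile once; reuse the code object on every call
    code = compile(expr, "<user-input>", "eval")
    allowed = {"__builtins__": {},
               "sin": math.sin, "cos": math.cos, "exp": math.exp}

    def my_de(t, y):
        # only 'allowed' names plus the user's variable are visible
        return eval(code, allowed, {var: y})

    return my_de

f = make_de("x", "2*x")
print(f(0.0, 3.0))   # -> 6.0
```

Compiling up front also avoids re-parsing the string on every call that solve_ivp makes.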

Why must a variable be declared a global variable before it gets assigned?

Why do we have to do this:
global x
x = "Hello World!"
When this is more readable:
global x = "Hello World"
Why is this, is there a reason behind it?
The goal of Python is to be as readable as possible. To reach this goal, the user is forced to act in a clearly defined way, e.g. the strong convention of four spaces per level of indentation. In the same spirit, the language defines the global keyword as a simple statement. This means:
A simple statement is comprised within a single logical line.
Simple Statements
And
Programmer’s note: the global is a directive to the parser. It applies only to code parsed at the same time as the global statement.
The global statement
If you would write this:
global x = 5
You would have two logical operations:
Interpreter please use the global x not a local one
Assign 5 to x
in one line. Also, it would look as though the global declaration only applied to the current line, and not to the whole code block.
TL;DR
It's to force the user to write more readable code, split into single logical operations.
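Both halves of this answer can be checked directly: the two-step form works, and the one-line form is rejected by the parser before anything runs:

```python
x = "start"

def set_global():
    global x             # step 1: declare that x means the global x
    x = "Hello World!"   # step 2: assign to it

set_global()
print(x)                 # -> Hello World!

# the combined one-line form is a syntax error, caught at compile time:
try:
    compile('global x = "Hello World"', "<demo>", "exec")
except SyntaxError as err:
    print("SyntaxError:", err.msg)
```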
The documentation says:
Names listed in a global statement must not be used in the same code block textually preceding that global statement.
CPython implementation detail: The current implementation does not enforce the latter two restrictions, but programs should not abuse this freedom, as future implementations may enforce them or silently change the meaning of the program.
As for the readability question: I think the second one reads like a C statement. Also, it is not syntactically correct.
I like to think it puts your focus squarely on the fact that you are using globals, always a questionable practice in software engineering.
Python definitely isn't about representing a problem solution in the most compact way. Next you'll be saying that we should only indent one space, or use tabs! ;-)

Is it advisable to use print statements in a python function rather than return

Lets say I have the function:
def function(a):
    c = a + b
    print(c)
Is it advisable to use the print statement in the function to display output rather than placing a return statement at the end and using the print(function(a))?
Also what implications would there be if I used both a print statement and a return statement in a function to display the same output? Lets imagine I need to show the answer for c and then use the value of c somewhere else. Does this break any coding conventions?
So the highlight of the question isn't the difference between print and return, but rather if it is considered a good style to use both in the same function and if it has a possible impact on a program. For example in:
def function(a):
    c = a + b
    print(c)
    return c

value = function(a)
print(value)
Would the result be two c's? Assume c = 5; therefore, would the output be(?):
5
5
print and return solve two completely different problems. They appear to do the same thing when running trivial examples interactively, but they are completely different.
If you indeed just want to print a result, use print. If you need the result for further calculation, use return. It's relatively rare to use both, except during a debugging phase where the print statements help see what's going on if you don't use a debugger.
As a rule of thumb I think it's good to avoid adding print statement in functions, unless the explicit purpose of the function is to print something out.
In all other cases, a function should return a value. The function (or person) that calls the function can then decide to print it, write it to a file, pass it to another function, etc.
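A minimal sketch of that division of labour (mean is a made-up example function):

```python
def mean(values):
    # compute only; no printing inside the function
    return sum(values) / len(values)

m = mean([2, 4, 6])
print(m)             # the caller decides to print it -> 4.0
```

The caller could just as easily write m to a file or pass it on; the function doesn't have to care.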
So the highlight of the question isn't the difference between print and return, but rather whether it is considered good style to use both in the same function, and its possible impact on a program.
It's not good style, it's not bad style. There is no significant impact on the program other than the fact you end up printing a lot of stuff that may not need to be printed.
If you need the function to both print and return a value, it's perfectly acceptable. In general, this sort of thing is rarely done in programming. It goes back to the concept of what the function is designed to do. If it's designed to print, there's usually no point in returning a value, and if it's designed to return a value, there's usually no point in printing since the caller can print it if it wants.
Well, return and print are two entirely different processes.
print displays information to the user or on the console, while return hands data back from a method that fulfills a certain purpose, to use later on throughout your program.
And to answer your question: I believe you would see the value twice, since one line prints c inside the function itself, and the other prints the value of c returned to the caller. Correct me if I'm wrong.

When to type-check a function's arguments?

I'm asking about situations where if a wrong type of argument is passed to the function, it could:
Blow up the whole thing.
Return unexpected results
Return nothing
For instance, the function below expects the argument name to be a string. It would throw an exception for any other type that doesn't have a startswith method.
def fruits(name):
    if name.startswith('O'):
        print('Is it Orange?')
There are other cases where a function could halt or cause damage to the system if execution proceeds without type-checking. Whenever there are a lot of functions or functions with a lot of arguments, type checking is tedious and makes the code unreadable. So, is there a standard for doing this? As to 'how to type check' - there are plenty of examples here on stackexchange, but I couldn't find any about where it would be appropriate to do so.
Another example would be:
def fruits(names):
    with open('important_file.txt', 'r+') as fil:
        for name in names:
            if name in fil:
                pass  # Edit the file
Here, if names is a string, each character in it will influence the editing of the file. If it is any other iterable, each element it provides will influence the editing. These two could produce different results.
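The difference is easy to see in isolation: iterating a bare string yields its characters, while iterating a list of strings yields the strings themselves:

```python
print([name for name in "Bob"])     # ['B', 'o', 'b'] -- individual characters
print([name for name in ["Bob"]])   # ['Bob']         -- the whole string
```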
So, when should we type-check an argument and should we not?
The answer off the top of my head would be: it depends where the input comes from.
If the functions are class methods that get invoked internally, or things like that, you can assume the inputs are valid, because you wrote them!
For example
def add(x, y):
    return x + y

def multiply(a, b):
    product = 0
    for i in range(a):
        product = add(product, b)
    return product
In my add function, I could check that there is a + operator for the parameters x and y. But since I wrote the multiply function, and that is the only function that uses add, it is safe to assume the inputs will be int because that's how I wrote it. Now that argument stands on shaky ground for large code bases where you (hopefully) have shared code, so you can't be sure people don't misuse your functions. But that's why you comment them well to describe the correct use of said function.
If it has to read from a file, get user input, etc, then you may want to do some validation first.
I almost never do type checking in Python. In accordance with Pythonic philosophy I assume that me and other programmers are adult people capable of reading the code (or at least the documentation) and using it properly. I assume that we test our code before we let it destroy something important. After all in most cases if you do something wrong, you'll just see an error and Python's error messages are quite informative most of the time.
The only occasion when I sometimes check types is when I want my function to behave differently depending on the argument's type. But although I sometimes feel compelled to do this, I don't consider it a good practice.
Most often it happens when my function iterates over a list of strings and I worry (or intend) that a single string could get passed into it by accident; this won't throw an error right away because, unfortunately, a string is an iterable too.
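The guard for that one case is short. This process_names helper is a made-up illustration of the pattern, not a library function:

```python
def process_names(names):
    # a lone string is iterable too; wrap it so we iterate names, not letters
    if isinstance(names, str):
        names = [names]
    return [name.upper() for name in names]

print(process_names("alice"))           # ['ALICE'], not ['A', 'L', 'I', 'C', 'E']
print(process_names(["alice", "bob"]))  # ['ALICE', 'BOB']
```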

Parameter naming convention in Python

For functions with closely related formal parameters, such as
def add_two_numbers(n1, n2):
    return n1 + n2

def multiply_two_numbers(n1, n2):
    return n1 * n2
Is it a good idea to give the same names to the parameters in both functions, as shown above?
The alternative is to rename the parameters in one of the functions. For example:
def add_two_numbers(num1, num2):
    return num1 + num2
Keeping them the same in both functions looks more consistent since the parameters each one takes are analogous, but is that more confusing?
Similarly, which would be better for the example below?
def count(steps1, steps2):
    a = 0
    b = 0
    for i in range(steps1):
        a += 1
    for j in range(steps2):
        b += 1
    return a, b

def do_a_count(steps1, steps2):
    print("Counting first and second steps...")
    print(count(steps1, steps2))
Otherwise, changing the arguments in the second function gives:
def do_a_count(s1, s2):
    print("Counting first and second steps...")
    print(count(s1, s2))
Again, I'm a little unsure of which way is best. Keeping the same parameter names makes the relation between the two functions clearer, while the second means there is no possibility of confusing parameters in the two functions.
I have done a bit of searching around (including skimming through PEP-8), but couldn't find a definitive answer. (Similar questions on SO I found included:
Naming practice for optional argument in python function
and
In Python, what's the best way to avoid using the same name for a __init__ argument and an instance variable?)
I would keep the names the same unless you have a good reason to use different names ... Remember that even positional arguments can be called by keywords, e.g.:
>>> def foo(a, b):
...     print(a)
...     print(b)
...
>>> foo(b=2, a=1)
1
2
Keeping the "keywords" the same helps in that rare, but legal corner case ...
In general, you should give your function's parameters names that make sense, and not even consider anything to do with other functions when you choose those names. The names of two functions' parameters simply don't have anything to do with each other, even if the functions do similar things.
Although... if the functions do do similar things, maybe that's a sign that you've missed a refactoring? :)
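One way that hinted-at refactoring might look, using the standard operator module (apply_two_numbers is a hypothetical name for this sketch):

```python
import operator

def apply_two_numbers(op, n1, n2):
    # one function covers both add_two_numbers and multiply_two_numbers
    return op(n1, n2)

print(apply_two_numbers(operator.add, 2, 3))   # 5
print(apply_two_numbers(operator.mul, 2, 3))   # 6
```

With the operation passed in as a parameter, the duplicated pair of near-identical functions collapses into one, and the n1/n2 naming question answers itself.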
The important guideline here is that you should give them reasonable names—ideally the first names that you'd guess the parameters to have when you come back to your code a year later.
Every programmer has a set of conventions. Some people like to call integer arguments to binary functions n1 and n2; others like n, m; or lhs, rhs; or something else. As long as you pick one and stick to it (as long as it's reasonable), anyone else can read your code, and understand it, after a few seconds learning your style. If you use different names all over the place, they'll have to do a lot more guessing.
As mgilson points out, this allows you to use keyword parameters if you want. It also means an IDE with auto-complete is more useful—when you see the (…n1…, …n2…) pop up you know you want to pass two integers. But mainly it's for readability.
Of course if there are different meanings, give them different names. In some contexts it might be reasonable to have add_two_numbers(augend, addend) and multiply_two_numbers(factor, multiplicand).
