I'm trying to generate random numbers using user input. This is for a homework question and is structured as the professor instructed. Instead of numbers, I'm getting x copies of this: <function generate at 0x0000021EE6848700>
I feel like this is a stupid question and I'm missing something obvious. When I tried to define main with generate as a parameter, I got an error about a missing positional argument. I have tried both print and return inside generate, and neither works. Am I not defining something correctly?
import random

def generate():
    print(random.randint(-100, 100))

def main():
    howMany = int(input('How many random numbers do you want: '))
    for count in range(howMany):
        print(generate)

main()
You're not calling the generate function - you're printing the function object itself. I think that you meant print(generate()).
That being said, print(generate()) doesn't make sense either, because generate() doesn't return anything - it already prints. If you want to print the result of generate in main, generate should return the result instead of printing it. If you do that, though, the generate function doesn't serve much purpose, since all it would do is return the result of random.randint(-100, 100); in that case, you could just call random.randint directly.
So, the corrected versions could be any one of the following:
def generate():
    print(random.randint(-100, 100))

def main():
    howMany = int(input('How many random numbers do you want: '))
    for count in range(howMany):
        # Let the generate() function print
        generate()
or
def generate():
    return random.randint(-100, 100)

def main():
    howMany = int(input('How many random numbers do you want: '))
    for count in range(howMany):
        print(generate())
However, in this second version, the generate function is kind of pointless; it would be better to just do:
def main():
    howMany = int(input('How many random numbers do you want: '))
    for count in range(howMany):
        print(random.randint(-100, 100))
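For reference, printing a function object without calling it is exactly what produces the output from the question. A quick sketch (the hex address will differ from run to run):

import random

def generate():
    return random.randint(-100, 100)

print(generate)    # <function generate at 0x...> - the symptom from the question
print(generate())  # calls the function, prints an actual random number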
Two questions/clarifications:
Most people ask about the difference between yield and return:
def square(n):
    for i in range(n):
        yield i**2

for i in square(10000000000):
    print(i)
I understand this is a way to run the 'function' as a generator. If we instead directly run print([i**2 for i in range(10000000000)]), or write it this way:
def square():
    return [i**2 for i in range(10000000000)]
Python will run for an almost infinite time. However, if we run it the generator way, we save a lot of memory, and Python can keep printing the squared numbers. In the generator version, we print each result as soon as we get it and never store it anywhere; in the function-return version, we save all the results into a list first, which wastes a lot of memory and time. Am I correct about this?
My other question is about yield vs. print: if we want to save memory and just want to print out the results directly, why don't we simply use print, like this:
def square(n):
    for i in range(n):
        print(i**2)

square(10000000000)
Python will do the same thing as the yield-generator way, right? So what is the difference between yield and print? Thanks!
It depends on the purpose and the caller.
When you print, you are calling the console writer inside the loop; print sends each value to a fixed, forced destination.
When you yield, you are not calling anything; you are returning your output gradually to any caller, which can take each value and do whatever it likes with it.
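To illustrate the "any caller" point, here is a minimal sketch reusing the square generator from the question; the same generator can feed very different consumers, while a print-based version could only ever write to the console (the file name squares.txt is arbitrary):

def square(n):
    for i in range(n):
        yield i**2

print(sum(square(1000)))   # aggregate the values
print(max(square(1000)))   # find the largest value

with open('squares.txt', 'w') as f:   # or write them to a file instead
    for value in square(1000):
        f.write(str(value) + '\n')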
I was using the timeit module to measure the execution time of some of my code, and while checking different scripts I found this big difference for 'the same script' defined in different ways.
Code.py
import timeit

code = '''
def count():
    for i in range(100):
        pass
'''

def count():
    for i in range(100):
        pass

print(timeit.timeit(code))
print(timeit.timeit(count))
Output:
0.17939981000017724
3.7566073690004487
What is really happening under the hood? In both cases the piece of code is exactly the same, but the difference in execution time is huge.
In the string example, you're only defining the function, not calling it, so you're essentially timing the function's creation rather than its execution.
You'll need to append a count() call at the end of your code string for the function to actually run and be included in the timing.
code = '''
def count():
    for i in range(100):
        pass

count()
'''
But note that the string example then times both the function definition and its execution, while the second example times just the function call.
This would be a fairer comparison:
code = '''
def count():
    for i in range(100):
        pass
'''

def count2():
    for i in range(100):
        pass

print(timeit.timeit("count()", setup=code))
print(timeit.timeit(count2))
code is only a string, not a function definition.
code = '''
def count():
    for i in range(100):
        pass
'''
Evaluating it does not execute any calculation or iteration. code is:
'\ndef count():\n for i in range(100):\n pass\n'
but count is:
<function __main__.count()>
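A quick way to see this for yourself (a sketch; absolute timings will vary by machine):

import timeit

body = "def count():\n    for i in range(100):\n        pass"

# Timing the string re-runs only the def statement; the loop body never executes.
print(timeit.timeit(body))

# Timing the call runs the loop body each iteration; the def happens once, in setup.
print(timeit.timeit("count()", setup=body))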
So I'm still fairly new to Python, and I'm trying to create a "word guessing" program. Part of that is a function that takes an inputted letter and compares it to a list of letters; however, Python does not seem to recognize my definition of the variable guess in while (guess != ans):. Any thoughts on why that is?
(Also here's my code as a reference):
def main():
    ans = str("a")
    guess = str("null")
    letterList = [A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,Y,Z]
while (guess != ans):
    if(letterList[0] == ans):
        guess = letterList[0]
main()
Error:
NameError: Name 'guess' is not defined on line 5
Python relies on indentation to distinguish the different blocks of code.
If you don't know what that is, here's something you could read:
https://www.geeksforgeeks.org/indentation-in-python/
The problem in your code lies with the while loop. Your variable guess is defined within the scope of the function main, so only code inside main can see it; your while loop, however, sits in a different scope, the global scope, and that is why your program doesn't recognize guess.
Your letterList also has a problem: each character should be in quotes (" " or ' '), otherwise Python interprets the letters as variables, which are not defined anywhere.
Finally, your loop is infinite, because guess will always differ from ans: neither of them ever changes inside the loop. If you can express what you want to accomplish more clearly, I can help you further.
def main():
    ans = str("a")
    guess = str("null")
    letterList = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z']
    while (guess != ans):
        if(letterList[0] == ans):
            guess = letterList[0]
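For completeness, a minimal sketch of a guessing loop that actually terminates, reading a fresh guess on every iteration (the input prompt is an addition of mine, not from the question):

def main():
    ans = 'a'
    guess = None
    while guess != ans:
        # Read a new guess each time, so the loop condition can change.
        guess = input('Guess a letter: ').lower()
    print('Correct!')

main()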
I am building code to find the largest prime factor of a number. I have written the code below; please point out the changes that need to be made. Any help is appreciated!
I have tried removing the class declaration and just declaring the two functions, as mentioned in the code, but the NameError: 'isPrime' used before declaration persists.
class Soln(object):
    def isPrime(self, num):
        c = 0
        if (num < 2):
            return False
        else:
            for x in range(1, num):
                if (num % x == 0):
                    c += 1
            if (c > 1):
                return False
            else:
                return True

    def nLargest(self, n):
        for x in range(n, 0, -1):
            if(isPrime(x)):
                print("\nLargest Prime Factor is" + str(x))
                exit()

a = Soln()
a.nLargest(12)
The output should be 3 but it gives
NameError: isPrime used before declaration.
You don't have a 'global' function isPrime defined. You probably want the one in the class Soln. To use it, you can write:
self.isPrime(x) (while calling from another method of Soln)
or
a.isPrime(x) (while calling on Soln object a from outside of this object)
or make the method static and use:
Soln.isPrime(x)
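A sketch of that last option; @staticmethod drops the self parameter so the method can be called through the class name (logic kept as in the question):

class Soln(object):
    @staticmethod
    def isPrime(num):            # no self: the method belongs to the class itself
        if num < 2:
            return False
        c = 0
        for x in range(1, num):
            if num % x == 0:
                c += 1
        return c == 1            # a prime's only divisor below itself is 1

    def nLargest(self, n):
        for x in range(n, 0, -1):
            if Soln.isPrime(x):  # qualified with the class name
                print("\nLargest Prime Factor is " + str(x))
                return

a = Soln()
a.nLargest(12)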
Use this code:
def nLargest(self, n):
    for x in range(n, 0, -1):
        if(self.isPrime(x)):
            print("\nLargest Prime Factor is" + str(x))
            exit()
I am making an algorithm that converts numbers into Roman numerals, which involves a list. When I run my code, I tell it to check whether the list exists, and to create it if it doesn't (I use this roundabout method because the code has to restart itself a lot, as you will see).
number = input("What number would you like to convert?")
number = int(number)

def alg(n):
    if 'roman' in locals():
        print("yes")
        if n >= 1000:
            roman = roman + ["M"]
            n = n - 1000
            print(roman)
            print(n)
            #alg(number)
        else:
            print("end")
    else:
        print("no")
        roman = [""]
        print(len(roman))
        print(locals())
        alg(number)

alg(number)
I have tried researching this in the Python documentation and on this site, but to no avail.
Every invocation of a function gets its own set of local variables, so your check will always be false: at the point of the check, you have only just entered the function.
If you want to keep the recursive implementation that you currently have, you should pass the roman variable as a second parameter to the alg function.
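A minimal sketch of that suggestion; only the 1000/"M" case from the question is shown, and the default value and base case are my additions:

def alg(n, roman=None):
    if roman is None:       # first call: create the list explicitly
        roman = []
    if n >= 1000:
        return alg(n - 1000, roman + ["M"])  # pass the updated state down
    print(roman)
    return roman

number = int(input("What number would you like to convert?"))
alg(number)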
Are you sure you want to look for it in locals()? That only looks in the scope of your function alg, which is just starting and therefore cannot yet contain your roman variable.
Maybe you want to look it up among the global variables with globals()?
You check for the name 'roman' in locals; that will never succeed, as you have yet to define roman, and a function's locals do not persist from call to call.
Your decrement
n = n - 1000
doesn't do anything useful (yet). The local copy of n gets changed, but you never use it afterwards. The original (from the main program) is still intact, as integers are immutable.
Note that your recursive call passes number -- the main program's original number. You're always converting the original, never working on the smaller quantity.
I'll stop here: without design, structure, or comments, this code is a bit hard to follow.