I have already tried to solve my issue via existing posts but did not manage to. Thanks in advance for your help.
The objective is to share a variable across threads. Please find the code below. It keeps printing 1 for the account variable, although I want it to print 2. Any suggestions why?
main.py:
account = 1
import threading
import cfg
import time

if __name__ == "__main__":
    thread_cfg = threading.Thread(target=cfg.global_cfg(), args=())
    thread_cfg.start()
    time.sleep(5)
    print(account)
cfg.py:
def global_cfg():
    global account
    account = 2
    return()
Globals are not shared across files.
If we disregard locks and other synchronization primitives, just put account inside cfg.py:
account = 1

def global_cfg():
    global account
    account = 2
    return
And inside main.py:
import threading
import time
import cfg

if __name__ == "__main__":
    thread_cfg = threading.Thread(target=cfg.global_cfg, args=())
    print(cfg.account)
    thread_cfg.start()
    time.sleep(5)
    print(cfg.account)
Running it:
> py main.py
1
2
In more advanced cases, you should use Locks, Queues and other structures, but that's out of scope.
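For illustration, here is a minimal sketch (an addition, not part of the original answer) of how a threading.Lock could guard the shared variable in cfg.py if several threads were writing to it; the names mirror the example above:
import threading

account = 1
_lock = threading.Lock()

def global_cfg():
    global account
    with _lock:  # only one thread mutates account at a time
        account = 2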
I have a Python file named main.py. I am running it on Python 3.9.13 on Windows.
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.post('/c')
async def c(b: str):
    print(a)

if __name__ == '__main__':
    a = load_embeddings('embeddings')
    uvicorn.run('main:app', host='127.0.0.1', port=80)
Running this and then invoking POST /c causes a 500 error with NameError: name 'a' is not defined.
However, a should obviously be defined before the server is run. If I move the assignment of a outside of the if __name__ == '__main__': block, it works, but then load_embeddings is run multiple times (exactly 3 times) for reasons I don't understand. Since load_embeddings takes a long time for me, I do not want the duplicate execution.
I am looking for either of these as a solution to my issue: stop whatever is outside if __name__ == '__main__': from executing multiple times, OR make a defined globally when it is defined under if __name__ == '__main__':.
Note: variable names are intentionally renamed for ease of reading. Please do not advise me on coding style/naming conventions. I know the community is helpful, but that's not the point here, thanks.
You can resolve the issue by moving the assignment of a inside the c function. Then add a check inside the function so the embeddings are loaded only if they have not been loaded yet. You can achieve this with a global flag that keeps track of whether the embeddings have been loaded.
Here is an example:
import uvicorn
from fastapi import FastAPI

app = FastAPI()

a = None
EMBEDDINGS_LOADED = False

def load_embeddings(filename):
    # Load embeddings code here
    ...

@app.post('/c')
async def c(b: str):
    global a, EMBEDDINGS_LOADED
    if not EMBEDDINGS_LOADED:
        a = load_embeddings('embeddings')
        EMBEDDINGS_LOADED = True
    print(a)

if __name__ == '__main__':
    uvicorn.run('main:app', host='127.0.0.1', port=80)
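An alternative sketch, not from the original answer: on FastAPI versions that support startup events, you could load the embeddings once per server process in a startup handler instead of lazily inside the endpoint (load_embeddings is the asker's own function and is assumed to exist):
import uvicorn
from fastapi import FastAPI

app = FastAPI()
a = None

@app.on_event("startup")
def load_once():
    # runs once when the server process starts, not at import time
    global a
    a = load_embeddings('embeddings')

@app.post('/c')
async def c(b: str):
    print(a)

if __name__ == '__main__':
    uvicorn.run('main:app', host='127.0.0.1', port=80)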
Apologies in advance, as this is probably the most basic question to be found here, but I'm the greenest of newbies and cannot get my head around how to call a function in Flask so that it runs when I land on the URL.
My purpose is to get a Python script to run when a GET request is made to the URL from WebCore (for those who don't know it, it's a program that allows you to code smart home functionality for SmartThings) or when I simply land on the URL. I will then tie this to a virtual switch which will start the code that controls a motor in a cat feeder, so I can feed my cat remotely/by voice.
All very frivolous stuff but trying to learn some basics here, can anyone help?
As it stands I have two files, both in a root directory named 'CatFeeder'
catfeeder.py
from flask import Flask
from feed import start

app = Flask(__name__)

@app.route('/')
def feed():
    return feed.start

if __name__ == '__main__':
    app.run(host='0.0.0.0', port='5000', debug=True)
feed.py
import time
import RPi.GPIO as GPIO

def start():
    # Next we set up the pins for use
    GPIO.setmode(GPIO.BCM)
    GPIO.setwarnings(False)
    Motor1Forward = 17
    Motor1Backwards = 18
    GPIO.setup(Motor1Forward, GPIO.OUT)
    GPIO.setup(Motor1Backwards, GPIO.OUT)
    print('Feeding Lola')
    # Makes the motor spin forward for x seconds
    # James Well Beloved - 11 seconds = 28g food (RDA Portion = 52g/4kg cat or 61g/5kg cat)
    # XXXX - X seconds - Xg food
    GPIO.output(Motor1Forward, True)
    GPIO.output(Motor1Backwards, False)
    time.sleep(11)
    print('Lola Fed!')
    GPIO.output(Motor1Forward, False)
    GPIO.output(Motor1Backwards, False)
    GPIO.cleanup()
    quit()
When I set export FLASK_APP=catfeeder.py and then run flask run, the service starts, but nothing happens when I land on the page. I assume there is something wrong in the way I am calling things.
I guess it would be easiest if I just integrated the code from feed.py into catfeeder.py, but I wasn't sure what the syntax for that would be, and it felt like a messy way to go about it.
Thanks in advance!
You've imported the function but didn't actually invoke it, because you left off the brackets (); try return start().
In case you meant to return a function object and not invoke the function, return it by typing return start.
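For illustration, here is a minimal sketch of catfeeder.py with the route actually calling the imported function; the view is renamed to feed_route so it does not shadow anything, and the returned confirmation string is my own addition (a Flask view has to return a response):
from flask import Flask
from feed import start

app = Flask(__name__)

@app.route('/')
def feed_route():
    start()             # run the feeder routine
    return 'Lola fed!'  # a view must return a response

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)
Note that feed.start() ends with quit(), which would stop the whole Flask process after the first feed; in a long-running server you probably want to drop that call.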
I have been trying for the past day to come up with a fix for my current problem.
I have a Python script which is supposed to count up using threads and perform requests based on each thread.
Each thread runs a function called doit(), which contains a while True loop. This loop only breaks when a certain criterion is met, and when it breaks, the next thread breaks as well.
What I want to achieve is that once one of these threads/workers gets status code 200 from its request, all workers/threads should stop. My problem is that they won't stop even though the criterion is met.
Here is my code:
import threading
import requests
import sys
import urllib.parse
import concurrent.futures
import simplejson
from requests.auth import HTTPDigestAuth
from requests.packages import urllib3
from concurrent.futures import ThreadPoolExecutor
def doit(PINStart):
    PIN = PINStart
    while True:
        req1 = requests.post(url, data=json.dumps(data), headers=headers1, verify=False)
        if str(req1.status_code) == "200":
            print(str(PINs))
            c0 = req1.content
            j0 = simplejson.loads(c0)
            AuthUID = j0['UserId']
            print(UnAuthUID)
            AuthReqUser()
            # Kill all threads/workers if any of the threads get here.
            break
        elif PIN > (PINStart + 99):
            break
        else:
            PIN += 1

def main():
    threads = 100
    threads = int(threads)
    Calcu = 10000 / threads
    NList = [0]
    for i in range(1, threads):
        ListAdd = i * Calcu
        if ListAdd == 10000:
            NList.append(int(ListAdd))
        else:
            NList.append(int(ListAdd) + 1)
    with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as executor:
        tGen = {executor.submit(doit, PinS): PinS for PinS in NList}
        for NLister in concurrent.futures.as_completed(tGen):
            PinS = tGen[NLister]

if __name__ == "__main__":
    main()
I understand why this is happening. As I only break the while True loop in one of the threads, the other 99 (I run the code with 100 threads by default) don't break until they finish their count (which means running through the loop 100 times or getting status code 200).
What I originally did was to define a global variable at the top of the code and change the loop condition to while Counter < 10000, meaning all workers run the loop until Counter exceeds 10000, and each pass through the loop increments the global variable. This way, when a worker gets status code 200, I set Counter (my global variable) to, for example, 15000 (something above 10000), so all the other workers stop instead of running the loop 100 times.
This did not work. When I add that to the code, all threads instantly stop; they don't even run through the loop once.
Here is example code for this solution:
import threading
import requests
import sys
import urllib.parse
import concurrent.futures
import simplejson
from requests.auth import HTTPDigestAuth
from requests.packages import urllib3
from concurrent.futures import ThreadPoolExecutor
global Counter
def doit(PINStart):
    PIN = PINStart
    while Counter < 10000:
        req1 = requests.post(url, data=json.dumps(data), headers=headers1, verify=False)
        if str(req1.status_code) == "200":
            print(str(PINs))
            c0 = req1.content
            j0 = simplejson.loads(c0)
            AuthUID = j0['UserId']
            print(UnAuthUID)
            AuthReqUser()
            # Kill all threads/workers if any of the threads get here.
            Counter = 15000
            break
        elif PIN > (PINStart + 99):
            Counter = Counter + 1
            break
        else:
            Counter = Counter + 1
            PIN += 1

def main():
    threads = 100
    threads = int(threads)
    Calcu = 10000 / threads
    NList = [0]
    for i in range(1, threads):
        ListAdd = i * Calcu
        if ListAdd == 10000:
            NList.append(int(ListAdd))
        else:
            NList.append(int(ListAdd) + 1)
    with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as executor:
        tGen = {executor.submit(doit, PinS): PinS for PinS in NList}
        for NLister in concurrent.futures.as_completed(tGen):
            PinS = tGen[NLister]

if __name__ == "__main__":
    main()
Any idea how I can kill all workers once I get status code 200 from one of the requests I am sending out?
The problem is that you're not using a global variable.
To use a global variable in a function, you have to put the global statement in that function, not at the top level. Because you didn't, the Counter inside doit is a local variable. Any variable that you assign to anywhere in a function is local, unless you have a global (or nonlocal) declaration.
And the first time you use that local Counter is right at the top of the while loop, before you've assigned anything to it. So, it's going to raise an UnboundLocalError immediately.
This exception will be propagated back to the main thread as the result of the future. Which you would have seen, except that you never actually evaluate your futures. You just do this:
tGen = {executor.submit(doit, PinS): PinS for PinS in NList}
for NLister in concurrent.futures.as_completed(tGen):
    PinS = tGen[NLister]
So, you get the PinS corresponding to the function you ran, but you don't look at the result or exception; you just ignore it. Hence you don't see that you're getting back 100 exceptions, any of which would have told you what was actually wrong. This is equivalent to having a bare except: pass in non-threaded code. Even if you don't want to check the result of your futures in "production" for some reason, you definitely should do it when debugging a problem.
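For example, a minimal sketch (assuming the same tGen dictionary as above) of how to surface those exceptions while debugging:
for NLister in concurrent.futures.as_completed(tGen):
    PinS = tGen[NLister]
    try:
        NLister.result()  # re-raises any exception from the worker
    except Exception as exc:
        print("worker starting at", PinS, "failed:", exc)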
Anyway, just put the global in the right place, and your bug is fixed.
However, you do have at least two other problems.
First, sharing globals between threads without synchronizing them is not safe. In CPython, thanks to the GIL, you never get a segfault because of it, and you often get away with it completely, but you often don't. You can miss counts because two threads tried to do Counter = Counter + 1 at the same time, so they both incremented it from 42 to 43. And you can get a stale value in the while Counter < 10000: check and go through the loop an extra time.
Second, you don't check the Counter until you've finished downloading and processing a complete request. This could take seconds, maybe even minutes, depending on your timeout settings. And add that to the fact that you might go through the loop an extra time before knowing it's time to quit…
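One way to address both points (my suggestion, not part of the original answer) is to use a threading.Event as the stop signal instead of a shared counter: checking it is cheap and setting it is safe from any thread. A minimal sketch, where do_request is a hypothetical helper wrapping the requests.post call and returning True on status code 200:
import threading
import concurrent.futures

stop_event = threading.Event()

def doit(PINStart):
    PIN = PINStart
    while not stop_event.is_set():
        if do_request(PIN):   # hypothetical helper: True on status code 200
            stop_event.set()  # tell every other worker to stop
            break
        if PIN > PINStart + 99:
            break
        PIN += 1

with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:
    futures = [executor.submit(doit, start) for start in range(0, 10000, 100)]
    for fut in concurrent.futures.as_completed(futures):
        fut.result()  # surface any worker exception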
In a project, I would like to separate the visualization and the calculation into two different modules. The goal is to transfer the variables of the calculation module to a main script, in order to visualize them with the visualization script.
Following this post,
Using global variables between files?,
I am now able to use a config script to transfer a variable between two scripts. But unfortunately, this is not working when using threading. The output of main.py is always "get: 1".
Does anyone have an idea?
main.py:
from threading import Thread
from time import sleep
import viz
import change
add_Thread = Thread(target=change.add)
add_Thread.start()
viz.py:
import config
from time import sleep
while True:
    config.init()
    print("get:", config.x)
    sleep(1)
config.py:
x = 1
def init():
    global x
change.py:
import config
def add():
    while True:
        config.x += 1
        config.init()
OK, found the answer myself. The problem was in main.py: one has to put import viz after starting the thread, because viz.py runs its while True loop at import time, so importing it first blocks main.py before the thread ever starts:
from threading import Thread
from time import sleep
import change
add_Thread = Thread(target=change.add)
add_Thread.start()
import viz
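A slightly more robust variant (my suggestion, not part of the original self-answer) is to wrap the display loop in a function, so that importing viz has no side effects and the import order no longer matters:
viz.py:
import config
from time import sleep

def run():
    while True:
        print("get:", config.x)
        sleep(1)
main.py:
from threading import Thread
import change
import viz

add_Thread = Thread(target=change.add, daemon=True)
add_Thread.start()
viz.run()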
I'm having an issue with threading that I can't solve in any way I've tried. I searched on StackOverflow too, but all I could find were cases that didn't apply to me, or explanations that I didn't understand.
I'm trying to build an app with BottlePy, and one of the features I want requires a function to run in the background. For this, I'm trying to make it run in a thread. However, when I start the thread, it runs twice.
I've read in some places that it is possible to check whether the code is running in the main script or in a module using if __name__ == '__main__':, however I'm not able to do this, since __name__ always returns the name of the module.
Below is an example of what I'm doing right now.
The main script:
# main.py
from MyClass import *
from bottle import *
arg = something
myObject = MyClass(arg)
app = Bottle()
app.run('''bottle args''')
The class:
# MyClass.py
import threading
import time

class MyClass:
    def check_list(self, theList, arg1):
        a_list = something()
        time.sleep(5)
        self.check_list(a_list, arg1)

    def __init__(self, arg1):
        if __name__ == '__main__':
            self.a_list = arg.returnAList()
            t = threading.Thread(target=self.check_list, args=(a_list, arg1))
So what I intend here is to have check_list running in a thread all the time, doing something and waiting some seconds to run again. All this so I can have the list updated, and be able to read it with the main script.
Can you explain to me what I'm doing wrong, why the thread is running twice, and how can I avoid this?
This works fine:
import threading
import time

class MyClass:
    def check_list(self, theList, arg1):
        keep_going = True
        while keep_going:
            print("check list")
            # do stuff
            time.sleep(1)

    def __init__(self, arg1):
        self.a_list = ["1", "2"]
        t = threading.Thread(target=self.check_list, args=(self.a_list, arg1))
        t.start()

myObject = MyClass("something")
Figured out what was wrong thanks to the user Weeble's comment. When he said "something is causing your main.py to run twice", I remembered that Bottle has an argument called reloader. When it is set to True, the application is loaded twice, and thus the thread creation runs twice as well.
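For illustration, a minimal sketch (my addition; the route and port are placeholders) with the reloader disabled, so module-level code, including the thread creation, runs only once:
from bottle import Bottle
from MyClass import MyClass

app = Bottle()
myObject = MyClass("something")  # starts the background thread exactly once

@app.route('/')
def index():
    return "ok"

# reloader=True re-imports the module in a child process, so everything at
# module level (including the Thread) would run twice; keep it False.
app.run(host='localhost', port=8080, reloader=False, debug=True)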