I would like to have a computational simulation running in a background process (started with Redis RQ) where I can query its current state, as well as change parameters, using Django.
For the sake of simplicity: let's say I want to run the following code for a long time (which I would set up through a python worker):
import time

def simulation(a=1):
    value = 0
    while a is not None:
        value += a
        time.sleep(5)
Then, by visiting a URL, it would tell me the current value of value. I could also POST to a URL to change the value of a, e.g. a=None to stop the simulation or a=-10 to change the behavior.
What is the best way to do this?
The best way I've found to do this is using the cache:
import time

from django.core.cache import cache

def simulation(a=1):
    value = 0
    while a is not None:
        value += a
        cache.set('value', value, 3600)
        time.sleep(5)
        a = cache.get('a', None)
This does work, but it's quite slow for my needs. Perhaps there's a method using sockets, but I wasn't able to get it to work: the socket is blocked in the background process.
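For reference, here is a minimal sketch (not part of the original post) of what the Django side of this cache-based approach could look like; the view names and URL wiring are assumptions, and only the cache keys 'value' and 'a' come from the worker code above.

# views.py -- hypothetical endpoints for the cache-based worker above
from django.core.cache import cache
from django.http import JsonResponse
from django.views.decorators.http import require_POST

def current_value(request):
    # GET: report the latest value the worker wrote to the cache
    return JsonResponse({'value': cache.get('value')})

@require_POST
def set_parameter(request):
    # POST a=-10 (or a=None) to change the worker's behavior on its next loop
    raw = request.POST.get('a')
    a = None if raw in (None, '', 'None') else int(raw)
    cache.set('a', a, 3600)
    return JsonResponse({'a': a})

# Enqueueing the job with RQ (e.g. from a management command or shell):
#   from redis import Redis
#   from rq import Queue
#   cache.set('a', 1, 3600)   # seed 'a' so the worker's cache.get doesn't stop it
#   Queue(connection=Redis()).enqueue(simulation, a=1)
#   (raise the job timeout for a long-running job)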
Related
I want to write a program that changes a room's lights. The system runs on a controller that I can access via MQTT and REST. I have a way to change the intensity, but the change is very abrupt. Because I want the rest of the system to keep running while the change is happening (I have sensors driving the rest of the lighting), I can't just use a blocking loop to steadily increase the intensity. I looked into Timers but I can't get them to work properly for what I need. Is there a way to do this?
Here is my problem with the loop:
client.message_callback_add(path("zones",Office,"devices",Sensor1_Presence,"data","presence"), on_message_Presence_callback)
client.message_callback_add(path("zones",Office,"devices",Sensor2_Presence,"data","presence"), on_message_colorchange_callback)
#client.message_callback_add(path("zones","#"), on_message_callback)
startTimer()
WeatherTimer()
client.connect(MQTT_HOST, port=MQTT_PORT)
client.loop_forever()
I want to be able to start and stop the function (preferably with a bool)
I have a change function that changes the specific parameters already:
import requests

def change_parameter(URL, parameter_name, parameter_value):
    r = requests.put(
        f"https://{MQTT_HOST}/rest/v1/{URL}",
        headers=litecom_headers(),
        verify=False,
        json={parameter_name: parameter_value}
    )
    return r.status_code
Is there a way to do what I want to do?
Thanks in advance!
I assume you have the Python script controlling your lights running on your desktop PC?
If so, you most certainly have at least two CPU cores at your disposal and can use Python's ProcessPoolExecutor to run your parameter-changing function in parallel. Then you can gradually change your parameter in small steps until you reach the desired value.
To get a smooth effect you then just have to experiment with the step-size and the sleep value between the steps until you are satisfied with the results.
Pseudo-ish implementation:
import time

def change_param_smooth(URL, parameter_name, target_value, step_size, duration):
    current_value = 0  # get the current value from your device instead
    while current_value < target_value:
        current_value += step_size
        # avoid overshooting
        if current_value > target_value:
            current_value = target_value
        change_parameter(URL, parameter_name, current_value)
        time.sleep(duration)
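To tie this to the ProcessPoolExecutor suggestion above, here is a rough sketch of handing the fade off without blocking the MQTT loop. The callback name, topic URL and parameter values are placeholders, and change_param_smooth is the function above; on platforms that spawn rather than fork, keep the MQTT startup code behind an if __name__ == "__main__" guard.

from concurrent.futures import ProcessPoolExecutor

# create the pool once, near your other setup code; submitted functions
# must be defined at module level so worker processes can import them
executor = ProcessPoolExecutor(max_workers=1)

def on_fade_trigger(client, userdata, message):
    # hand the slow fade to the pool so client.loop_forever() keeps
    # servicing MQTT messages in the meantime
    executor.submit(change_param_smooth,
                    "zones/Office/devices/Light1/intensity",  # placeholder URL
                    "intensity",
                    target_value=80, step_size=5, duration=0.2)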
I need to detect when the minutes of the clock/time change and do something.
This is my code so far for the clock, but I still can't figure out in Python how to detect that the value has changed and run an action afterwards. Any help will be appreciated; I come from a C++ background and my implementations so far aren't working.
from datetime import datetime
import time

while True:
    now = datetime.now()
    print(now.strftime("%M"), end=" ", flush=True)
    time.sleep(1)
    currentMin = now.strftime("%M")
This worked for me:
from datetime import datetime
import time

past_min = None
while True:
    # current minute
    now_min = int(datetime.now().strftime("%M"))
    # first iteration (use "is None" so minute 0 is handled correctly)
    if past_min is None:
        past_min = now_min
    if now_min != past_min:
        # call your function here
        print("Min change detected")
        past_min = now_min
    # print the seconds
    print(datetime.now().strftime("%S"))
    time.sleep(1.5)
I think you can create a class (in the example below, Minute) with a property currentMin to store the current minute value. By using the @<property>.setter decorator, whenever the property is assigned, the setter function is triggered:
from datetime import datetime
import time

class Minute(object):
    def __init__(self):
        self._currentMin = ''

    @property
    def currentMin(self):
        return self._currentMin

    @currentMin.setter
    def currentMin(self, value):
        if value != self._currentMin:
            # ACTION CODE BELOW
            print('Minute changed')
            self._currentMin = value

minute = Minute()
while True:
    now = datetime.now()
    print(now.strftime("%M"), end=" ", flush=True)
    time.sleep(1)
    minute.currentMin = now.strftime("%M")
Well, for the general case with simple variables, you can't simply do it. There are a few options to do something similar:
1. if you control EVERYTHING that writes the variable, make the writers trigger the action
2. write code that regularly checks it and triggers the action when it changes
3. use language tools like a custom setter (see @user696969's answer)
The first case requires you to control everything that could modify that value. At that point, you might not even need a variable, and could just pass the new value around (and you can reverse this by keeping a variable that is always updated). This is a very common pattern, called event-driven programming, and it is heavily used for example in UIs, websites (client-side; see a list of DOM events, for example) and game frameworks (see pygame's documentation on events); a small sketch of this style follows at the end of this answer.
The second case, writing a loop or checking it regularly, can also work, but there are some downsides. You probably don't want an infinite loop that just waits for the value to change, especially not one that also blocks the code that would change the variable, dead-locking the entire program by preventing the very thing it's waiting for. If you just check it regularly between other work, it can be hard to ensure it is checked regardless of what else the program is doing. You could use multiple threads for it, but that brings its own set of problems. You also have to store and update the previous value so you can compare against it, which can be slow or memory-consuming if the variable holds a lot of data.
You can also use language tools such as a custom setter. This is clean, but it cannot be used for arbitrary variables, only for class attributes, so you still need some control over the rest of the program.
Generally I'd use the event-driven approach or the setter one, depending on the wider context. However, for simple cases like this, periodic checking is also fine. The simplest solution might even be to remove the need for this entirely.
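To illustrate the event-driven option (1), here is a minimal, framework-agnostic sketch; every name in it is invented for the example:

class ObservableValue:
    """Holds a value and calls subscribers whenever it is replaced."""
    def __init__(self, initial=None):
        self._value = initial
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set(self, new_value):
        if new_value != self._value:
            old, self._value = self._value, new_value
            for callback in self._subscribers:
                callback(old, new_value)

# usage
minute = ObservableValue()
minute.subscribe(lambda old, new: print(f"minute changed: {old} -> {new}"))
minute.set(41)  # prints "minute changed: None -> 41"
minute.set(41)  # no change, nothing printed
minute.set(42)  # prints "minute changed: 41 -> 42"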
I'm new to both Flask and Python. I've got an application I'm working on to hold weather data, and I'm allowing both GET and POST calls into my Flask application. Unfortunately, the automated calls to my API are not always coming back with the proper results. I'm currently storing my data in a global variable; when a POST is handled, the new data is appended to my existing data. Sometimes, though, when the GET is called it does not receive the most up-to-date version of my global data variable. I believe the change is not being propagated to the global variable before the GET is called, because when I run the GET manually the proper result comes back.
weatherData = []  # filled with data read from a CSV on initialization
class FullHistory(Resource):
    def get(self):
        ret = []
        for row in weatherData:
            val = row['DATE']
            ret.append({"DATE": str(val)})
        return ret

    def post(self):
        global weatherData
        newWeatherData = weatherData
        args = parser.parse_args()
        newVal = int(args['DATE'])
        newWeatherData.append({'DATE': int(args['DATE']),
                               'TMAX': float(args['TMAX']),
                               'TMIN': float(args['TMIN'])})
        weatherData = newWeatherData
        # time.sleep(5)
        return {"DATE": str(newVal)}, 201
class SelectHistory(Resource):
    def get(self, date_id):
        val = int(date_id)
        bVal = False
        # time.sleep(5)
        global weatherData
        for row in weatherData:
            if row['DATE'] == val:
                wd = row
                bVal = True
                break
        if bVal:
            return {"DATE": str(wd['DATE']),
                    "TMAX": float(wd['TMAX']),
                    "TMIN": float(wd['TMIN'])}
        else:
            return "HTTP Error code 404", 404

    def delete(self, date_id):
        val = int(date_id)
        wdIter = None
        for row in weatherData:
            if row['DATE'] == val:
                wdIter = row
                break
        if wdIter is not None:
            weatherData.remove(wdIter)
            return {"DATE": str(val)}, 204
        else:
            return "HTTP Error code 404", 404
Is there any way I can ensure that my global variable is up to date, or make my API wait to respond until I'm sure the update has been applied? This was supposed to be a simple application, and I would really rather not have to learn how to use threads in Python just yet. I've made sure that the GET request doesn't start until after the POST has returned a response. I know one workaround is to use sleep to delay my responses, but I would rather understand why the update isn't visible immediately in the first place.
I believe your problem is the application context. As stated here:
The application context is created and destroyed as necessary. It never moves between threads and it will not be shared between requests. As such it is the perfect place to store database connection information and other things. The internal stack object is called flask._app_ctx_stack. Extensions are free to store additional information on the topmost level, assuming they pick a sufficiently unique name and should put their information there, instead of on the flask.g object which is reserved for user code.
Though it says you can store data at the "topmost level," that isn't reliable, and if you scale your project to use worker processes with uWSGI, for instance, you'll need persistence to share data between workers regardless. You should be using a database, Redis, or at the very least updating your .csv file each time you mutate your data.
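As a rough illustration of the Redis suggestion (not the original poster's code; the key name, the use of redis-py and the JSON serialisation are assumptions), the shared list could be kept out of process like this:

import json
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def load_weather_data():
    # every GET reads the current list from Redis
    raw = r.get('weatherData')
    return json.loads(raw) if raw else []

def append_weather_row(row):
    # every POST rewrites the list, so all worker processes see the update
    data = load_weather_data()
    data.append(row)
    r.set('weatherData', json.dumps(data))

For concurrent POSTs you'd want an atomic structure such as a Redis list (RPUSH/LRANGE) rather than this read-modify-write, but the idea is the same: no Python-level global state.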
My Flask program (the simulation in the view) runs in the following order (detailed code is also attached):
1> Read my variable 'tx_list' from the session: tx_list = session.get('tx_list', None)
2> For each t in tx_list, do something with t.
3> Store tx_list back in the session: session['tx_list'] = tx_list
The reason I use session is because I want to change 'tx_list' every time I invoke this 'simulation' function.
The problem is that if I print (console.log(tx_list)) in the front-end, it only updates a few times, yet when I print the values in the simulation function, they always update. So I suspect the problem is the session?
I've tried adding another 'time_now' variable in the simulation function, which is independent of the session, and the front-end (HTML) always updates 'time_now'. So the problem must be the usage of the session? How can I update my 'tx_list' if the session is not the best way to do it?
-------------------code is below----------------------------
My view is below: I simply read 'tx_list' from the session, do something with it, then store it back in the session.
@app.route('/simulation/<param>')
def simulation(param):
    tx_list = session.get('tx_list', None)
    today = date.today()
    if t0 == '0':
        time_now = today.strftime("%Y-%m-%d %H")
    else:
        time_now = (today + relativedelta(hours=int(param))).strftime("%Y-%m-%d %H")
    return_val = jsonify({'time': time_now, 'tx_list': tx_list_0})
    for t in tx_list:
        ########### I have my code here to change t.
        print(t)
    session['tx_list'] = tx_list
    return return_val
Problem solved once I installed Flask-Session and initialized it.
I'm still puzzled why it updated correctly only a few times without the module installed.
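For reference, a minimal sketch of how Flask-Session is typically initialised; the filesystem backend here is just an example, any server-side backend works:

from flask import Flask
from flask_session import Session

app = Flask(__name__)
app.config['SECRET_KEY'] = 'change-me'
app.config['SESSION_TYPE'] = 'filesystem'  # keep session data server-side
Session(app)

A server-side backend also sidesteps the ~4 KB browser cookie limit of Flask's default cookie-based session, which can matter if tx_list is large.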
I have Python/Django code hosted on dotCloud and Red Hat OpenShift. To handle different users, I use a token and save it in a dictionary. But when I get the value from the dict, it sometimes throws an error (KeyError).
import threading

thread_queue = {}

def download(request):
    dl_val = request.POST["input1"]
    client_token = str(request.POST["pagecookie"])
    # save client token as keys and thread object as value in dictionary
    thread_queue[client_token] = DownloadThread(dl_val, client_token)
    thread_queue[client_token].start()
    return render_to_response("progress.html",
                              {"dl_val": dl_val, "token": client_token})
The code below is executed at one-second intervals via a JavaScript XMLHttpRequest to the server. It checks a variable inside another thread and returns the value to the user's page.
def downloadProgress(request, token):
    # sometimes I use this to check the content of the dict
    # resp = HttpResponse("thread_queue = " + str(thread_queue))
    # return resp
    prog, total = thread_queue[str(token)].getValue()  # problematic line !
    if prog == 0:
        # prevent division by zero
        return HttpResponse("0")
    percent = float(prog) / float(total)
    percent = round(percent * 100, 2)
    if percent >= 100:
        try:
            f_name = thread_queue[token].getFileName()[1]
        except:
            downloadProgress(request, token)
        resp = HttpResponse('<a href="http://' + request.META['HTTP_HOST'] +
                            '/dl/' + token + '/">' + f_name + '</a><br />')
        return resp
    else:
        return HttpResponse(str(percent))
After testing for several days, it sometimes returns:
thread_queue = {}
It sometimes succeeds:
thread_queue = {'wFVdMDF9a2qSQCAXi7za': <DownloadThread(...)>, 'EVukb7QdNdDgCf2ZtVSw': <DownloadThread(...)>, 'C7pkqYRvRadTfEce5j2b': <DownloadThread(...)>, '2xPFhR6wm9bs9BEQNfdd': <DownloadThread(...)>}
I never get this result when I'm running Django locally via manage.py runserver and accessing it with Google Chrome, but when I upload it to dotCloud or OpenShift, it always gives the above problem.
My questions:
How can I solve this problem?
Do dotCloud and OpenShift limit Python CPU usage?
Or is the problem with the Python dictionary?
Thank you.
dotCloud has four worker processes by default for the Python service. When you run the dev server locally, you are only running one process. As @martijn said, your issue stems from the fact that your dict isn't shared between these processes.
To fix this issue, you could use something like Redis or Memcached to store this information instead. If you need a more long-term storage solution, then a database is probably better suited.
dotCloud does not limit CPU usage; the CPU is shared amongst others on the same host and allows bursting, but in the end everyone gets the same amount of CPU.
Looking at your code, you should check that there is a value in the dict before you access it, or at a minimum surround the code with a try/except block to handle the case when the data isn't there:
str_token = str(token)
if str_token in thread_queue:
    prog, total = thread_queue[str_token].getValue()  # problematic line !
else:
    pass  # value isn't there, do something else
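Or, the try/except variant mentioned above (a sketch; what you do in the fallback branch is up to you):

try:
    prog, total = thread_queue[str(token)].getValue()  # problematic line !
except KeyError:
    return HttpResponse("0")  # token not registered (yet) in this process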
Presumably dotCloud and OpenShift run multiple processes of your code; the dict is not going to be shared between these processes.
Note that that also means the extra processes will not have access to your extra thread either.
Use an external database for this kind of information instead. For long-running asynchronous jobs like these you also need to run them in a separate worker process. Look at Celery for an all-in-one solution for asynchronous job handling, for example.
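As a rough sketch of that suggestion (not the poster's code; the Django cache API with a shared backend such as Redis or Memcached is an assumption about the deployment), progress could be published somewhere every process can read:

from django.core.cache import cache  # backed by a shared Redis/Memcached instance

def report_progress(token, prog, total):
    # called from inside DownloadThread instead of keeping state in a module dict
    cache.set('progress:' + token, (prog, total), 3600)

def read_progress(token):
    # any worker process handling the polling request can read it back
    return cache.get('progress:' + token, (0, 0))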