At the moment I'm just running python myprogram.py & and letting this program do its thing:
import urllib2
import threading
import json

url = 'https://something.com'
a = []

def refresh():
    # refresh in 5 minutes
    threading.Timer(300.0, refresh).start()
    # open url
    try:
        data = urllib2.urlopen(url).read(1000)
    except:
        return 0
    # decode json
    q = data.decode('utf-8')
    q = json.loads(q)
    # store in a
    a.append(q['ticker'])
    if len(a) > 288:
        a.pop()
    truc = json.dumps(a)
    f = open('ticker.json', 'w')
    f.write(truc)
    f.close()

refresh()
I have two questions:
How come this works even though I didn't write global a at the start of the function?
Should I use cron for this kind of thing instead of what I'm doing? (I'm using a Debian server.)
There is no issue with accessing the variable a the way you do, because you never assign to it within the refresh function. It is accessed the very same way as the url variable or even the json import is accessed. If you were to assign to a (rather than calling a method such as append on it), then you would create a local variable shadowing the global a. The global keyword avoids the creation of a local variable for assignments.
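A small sketch of the difference described above (illustrative names, not the original code): mutating a global works without the keyword, but assigning creates a local unless global is used.

```python
a = []

def mutate():
    a.append(1)      # reads the global a, then mutates it -- no global needed

def rebind():
    a = [1]          # assignment creates a *local* a that shadows the global

def rebind_global():
    global a
    a = [2]          # global makes the assignment target the module-level a

mutate()
print(a)             # [1]
rebind()
print(a)             # still [1] -- the local a vanished when the call ended
rebind_global()
print(a)             # [2]
```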
It is up to you whether you use a program that sleeps or cron, but here are some things to keep in mind:
Your program keeps state across requests in the variable a. If you were to use cron and invoke your program multiple times, you would need to store this state somewhere else.
If your program crashes (e.g. invalid data is returned and json decoding fails with an exception), cron would start it again, so it would eventually recover. This may or may not be desired.
When run via cron, you lower the memory footprint of the system at the expense of more computation (Python interpreter being initialized every five minutes).
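For reference, if you did go the cron route, the equivalent of the five-minute timer would be a crontab entry along these lines (the interpreter and script paths are assumptions, edit them to match your system):

```
*/5 * * * * /usr/bin/python /home/user/myprogram.py
```

With cron driving the schedule, the threading.Timer call and the recursive refresh would come out of the script, and it would simply fetch once and exit.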
Related
I need to detect when the minutes of the clock/time change and do something.
This is my code so far for the clock, but I still can't figure out in Python how to detect that the value has changed and act on it afterwards. Any help will be appreciated; I come from a C++ background and my attempts so far aren't working.
from datetime import datetime
import time

while True:
    now = datetime.now()
    print(now.strftime("%M"), end=" ", flush=True)
    time.sleep(1)
    currentMin = now.strftime("%M")
This worked for me:
from datetime import datetime
import time

past_min = None
while True:
    # current minute
    now_min = int(datetime.now().strftime("%M"))
    # first iteration (check against None, since minute 0 is falsy)
    if past_min is None:
        past_min = now_min
    if now_min != past_min:
        # call your function here
        print("Min change detected")
        past_min = now_min
    # print the seconds
    print(datetime.now().strftime("%S"))
    time.sleep(1.5)
I think you can create a class (in the example below, Minute) with a property currentMin that stores the current minute value. By decorating a method with @currentMin.setter, the setter runs whenever the property is assigned, so you can trigger your action there.
from datetime import datetime
import time

class Minute(object):
    def __init__(self):
        self._currentMin = ''

    @property
    def currentMin(self):
        return self._currentMin

    @currentMin.setter
    def currentMin(self, value):
        if value != self._currentMin:
            # ACTION CODE BELOW
            print('Minute changed')
            self._currentMin = value

minute = Minute()
while True:
    now = datetime.now()
    print(now.strftime("%M"), end=" ", flush=True)
    time.sleep(1)
    minute.currentMin = now.strftime("%M")
Well, for the general case with simple variables, you can't simply do it. There are a few options for achieving something similar:
- if you control EVERYTHING that writes the variable, make the writers trigger the action
- write code that regularly checks the variable and triggers the action when it changes
- use language tools such as a custom setter (see @user696969's answer)
The first case requires you to control everything that could modify the value. At that point you might not even need the variable at all: just pass the new value to whatever needs it (and you can recover the variable by keeping one that is always updated). This is a very common pattern called event-driven programming, heavily used for example in UIs, websites (client-side; see a list of DOM events for example) and game frameworks (see pygame's documentation on events).
The second case, writing a loop or checking regularly, can also work, but it has downsides. You probably don't want an infinite loop busy-waiting for the change, especially not one that blocks the code that would change the variable, deadlocking the program by preventing the very thing it's waiting for. If you just check the variable now and then between other work, it can be hard to guarantee it gets checked regardless of what else the program is doing. You could use a separate thread for the checking, but that brings its own set of problems. You also have to store and update the previous value so you can compare against it, which can be slow or memory-hungry if the variable holds a lot of data.
You can also use language tools such as custom setters. This is clean, but it only works for class attributes, not arbitrary variables, so you still need some control over the rest of the program.
Generally I'd use the event-driven approach or the setter approach, depending on the wider context. For simple cases, periodic checking is also fine. The simplest solution might even be to remove the need for this entirely.
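The event-driven option can be sketched in a few lines (all names here are illustrative, not from the original post): writers go through one function, which notifies registered callbacks only when the value actually changes.

```python
callbacks = []

def on_change(cb):
    # register a callback to run whenever the value changes
    callbacks.append(cb)

_value = None

def set_value(new):
    # every writer goes through here, so a change always triggers the callbacks
    global _value
    if new != _value:
        _value = new
        for cb in callbacks:
            cb(new)

seen = []
on_change(seen.append)
set_value(42)   # change: callbacks fire
set_value(42)   # no change: nothing fires
set_value(7)    # change: callbacks fire
print(seen)     # [42, 7]
```

This avoids both polling and storing a previous value for comparison, at the cost of requiring every writer to use set_value.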
My Flask program (the simulation is in the view) runs in the following order (detailed code is also attached):
1> read my variable 'tx_list' from the session: tx_list = session.get('tx_list', None)
2> for t in tx_list: do something with t
3> store tx_list back in the session: session['tx_list'] = tx_list
The reason I use the session is that I want to change 'tx_list' every time I invoke this 'simulation' function.
The problem is that if I print (console.log(tx_list)) in the front end, it only updates itself a few times. Yet when I print the values inside the simulation function, they always update. So I suspect the problem is the session?
I tried adding another 'time_now' variable in the simulation function, independent of the session, and the front end (HTML) always updates 'time_now'. So the problem must be the use of the session? How can I update my 'tx_list' if the session is not the best way to do it?
-------------------code is below----------------------------
My view is below: I simply read my var 'tx_list' from the session, do something with it, then store it back in the session.
@app.route('/simulation/<param>')
def simulation(param):
    tx_list = session.get('tx_list', None)
    today = date.today()
    if param == '0':
        time_now = today.strftime("%Y-%m-%d %H")
    else:
        time_now = (today + relativedelta(hours=int(param))).strftime("%Y-%m-%d %H")
    return_val = jsonify({'time': time_now, 'tx_list': tx_list})
    for t in tx_list:
        ########### I have my code here to change t.
        print(t)
    session['tx_list'] = tx_list
    return return_val
Problem solved once I installed Flask-Session and initialized it.
I'm puzzled why it updated OK a few times without the module installed.
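One likely explanation for the "works a few times, then stops" behavior: Flask's default session is stored client-side in a signed cookie, and browsers cap cookies at roughly 4 KB, so once tx_list grows past that the cookie stops being saved and the session no longer updates. Flask-Session stores the data server-side instead, which removes that limit. A minimal setup sketch (the config value is an assumption, pick the backend that suits you):

```python
from flask import Flask
from flask_session import Session

app = Flask(__name__)
app.config["SESSION_TYPE"] = "filesystem"  # keep session data on disk, server-side
Session(app)
```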
Preface -- I'm new to Python and programming in general. I am inheriting code and am attempting to modify existing code to make it more efficient.
The current issue is an existing "for server in serverlist" loop that appends each server's output to a list. It takes too long because it processes the servers one by one, at about 5 seconds per machine, for a total of around 4 minutes.
What do I need to do to execute this single call on all 50 servers at once, getting data back for all 50 within about 5 seconds total? Or more generally, is there anything that can be done to speed this up by querying multiple machines at once instead of one at a time?
Current code example:
def dosomething(server):
    stuff = []
    try:
        # function here
        stuff = # function stuff here
    except Exception:
        pass
    return stuff

def getdata(self):
    serverlist = [servera, serverb, serverc]
    data = []
    for server in serverlist:
        results = dosomething(server)
        data.append(results)
    return data
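One common approach (a sketch, not the original code): since each dosomething call mostly waits on the network, a thread pool from concurrent.futures lets all the calls run at once. The dosomething body below is a stand-in for the real per-server query.

```python
from concurrent.futures import ThreadPoolExecutor

def dosomething(server):
    # stand-in for the real per-server query from the question
    return "data from {0}".format(server)

def getdata(serverlist):
    # run the calls concurrently; pool.map returns results in input order,
    # just like the original sequential loop did
    with ThreadPoolExecutor(max_workers=len(serverlist)) as pool:
        return list(pool.map(dosomething, serverlist))

print(getdata(["servera", "serverb", "serverc"]))
```

With 50 servers at ~5 seconds each, the total wall time should approach the slowest single call rather than the sum of all of them.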
I'm currently working on a project where I need to send data over serial continuously, but occasionally change that data based on new inputs. My issue is that my current loop only runs when raw_input() receives something; nothing runs again until another raw_input() arrives.
My current (very slimmed down) loop looks like this:
while True:
    foo = raw_input()
    print(foo)
I would like for the latest values to be printed (or passed to another function) constantly regardless of how often changes occur.
Any help is appreciated.
The select (or in Python 3.4+, selectors) module can allow you to solve this without threading, while still performing periodic updates.
Basically, you just write the normal loop but use select to determine if new input is available, and if so, grab it:
import select
import sys

foo = None
while True:
    # Polls for availability of data on stdin without blocking
    if select.select((sys.stdin,), (), (), 0)[0]:
        foo = raw_input()
    print(foo)
As written, this would print far more than you probably want; you could either time.sleep after each print, or change the timeout argument to select.select to something other than 0; if you make it 1 for instance, then you'll update immediately when new data is available, otherwise, you'll wait a second before giving up and printing the old data again.
How will you type in your data at the same time while data is being printed?
However, you can use multithreading if you make sure your source of data doesn't interfere with your output of data.
import thread

def give_output():
    while True:
        pass # output stuff here

def get_input():
    while True:
        pass # get input here

thread.start_new_thread(give_output, ())
thread.start_new_thread(get_input, ())
Your source of data could be another program. You could connect them using a file or a socket.
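A more concrete version of the two-thread idea above, using Python 3's threading and queue modules (thread became the low-level _thread in Python 3; threading is the higher-level API). The input thread feeds a queue, and the output loop reprints the latest value whether or not anything new arrived. Here the input source is simulated with a fixed list and the loop is bounded so the sketch terminates; the real program would read input() forever.

```python
import queue
import threading
import time

latest = "no data yet"
inbox = queue.Queue()

def reader():
    # stand-in for the input thread: the real program would loop on input()
    for value in ("hello", "world"):
        inbox.put(value)
        time.sleep(0.1)

threading.Thread(target=reader, daemon=True).start()

printed = []
for _ in range(5):                       # the real program would loop forever
    try:
        latest = inbox.get(timeout=0.2)  # pick up new data if any arrived
    except queue.Empty:
        pass                             # otherwise keep printing the old value
    printed.append(latest)

print(printed)
```

The queue keeps the two threads from touching the same variable at the same time, which is the "doesn't interfere" condition mentioned above.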
I'm new to Python (disclaimer: I'm new to programming and I've been reading python online for two weeks) and I've written a simple multi-processing script that should allow me to use four subprocesses at once. I was using a global variable (YES, I KNOW BETTER NOW) to keep track of how many processes were running at once. Start a new process, increment by one; end a process, decrement by one. This was messy but I was only focused on getting the multi-processes working, which it does.
So far I've been doing the equivalent of:
processes = 0

def function(value):
    global processes
    do stuff to value
    processes -= 1

while read line:
    if processes < 4:
        processes += 1
        create a new subprocess - function(line)
1: I need to keep track of processes in a better way than a global. I saw some use of a 'pool' in python to have 4 workers, but I failed hard at it. I like the idea of a pool but I don't know how to pass each line of a list to the next worker. Thoughts?
2: On general principles, why is my global var decrement not working? I know it's ugly, but I at least expected it to be ugly and successful.
3: I know I'm not locking the var before editing, I was going to add that once the decrementation was working properly.
Sorry if that's horrible pseudo-code, but I think you can see the gist. Here is the real code if you want to dive in:
MAX_THREADS = 4
CURRENT_THREADS = 0
MAX_LOAD = 8

# Iterate through all users in the userlist and call the funWork function on each user
def funReader(filename):
    # I defined the logger in detail above, I skipped about 200 lines of code to get it slimmed down
    logger.info("Starting 'move' function for file \"{0}\"...".format(filename))
    # Read in the entire user list file
    file = open(filename, 'r')
    lines = file.read()
    file.close()
    for line in lines:
        user = line.rstrip()
        funControl(user)

# Accept a username and query system load and current funWork thread count; decide when to start next thread
def funControl(user):
    # Global variables that control whether a new thread starts
    global MAX_THREADS
    global CURRENT_THREADS
    global MAX_LOAD
    # Decide whether to start a new subprocess of funWork for the current user
    print
    logger.info("Trying to start a new thread for user {0}".format(user))
    sysLoad = os.getloadavg()[1]
    logger.info("The current threads before starting a new loop are: {0}.".format(CURRENT_THREADS))
    if CURRENT_THREADS < MAX_THREADS:
        if sysLoad < MAX_LOAD:
            CURRENT_THREADS += 1
            logger.info("Starting a new thread for user {0}.".format(user))
            p = Process(target=funWork, args=(user,))
            p.start()
        else:
            print "Max Load is {0}".format(MAX_LOAD)
            logger.info("System load is too high ({0}), process delayed for four minutes.".format(sysLoad))
            time.sleep(240)
            funControl(user)
    else:
        logger.info("There are already {0} threads running, user {1} delayed for ten minutes.".format(CURRENT_THREADS, user))
        time.sleep(600)
        funControl(user)

# Actually do the work for one user
def funWork(user):
    global CURRENT_THREADS
    for x in range(0, 10):
        logger.info("Processing user {0}.".format(user))
        time.sleep(1)
    CURRENT_THREADS -= 1
Lastly: any errors you see are likely to be transcription mistakes because the code executes without bugs on a server at work. However, any horrible coding practices you see are completely mine.
Thanks in advance!
How about this (not tested):
import multiprocessing

MAX_PROCS = 4

# Actually do the work for one user
def funWork(user):
    for x in range(0, 10):
        logger.info("Processing user {0}.".format(user))
        time.sleep(1)
    return

# Iterate through all users in the userlist and call the funWork function on each user
def funReader(filename):
    # I defined the logger in detail above, I skipped about 200 lines of code to get it slimmed down
    logger.info("Starting 'move' function for file \"{0}\"...".format(filename))
    # Read in the entire user list file, one user per line
    file = open(filename, 'r')
    lines = file.read().splitlines()
    file.close()
    work = []
    for line in lines:
        user = line.rstrip()
        work.append(user)
    pool = multiprocessing.Pool(processes=MAX_PROCS)  # processes are different from threads...
    return pool.map(funWork, work)
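On question 2 (why the global decrement never works): each Process gets its own copy of the parent's memory, so CURRENT_THREADS -= 1 inside funWork only changes the child's copy; the parent never sees it. A counter that must be visible across processes has to live in shared memory, e.g. multiprocessing.Value. An illustrative sketch (not the original code; the "fork" start method is assumed, i.e. a Unix-like system such as the Debian server mentioned):

```python
import multiprocessing

def work(counter):
    # lock before editing, as the asker planned to do with the global
    with counter.get_lock():
        counter.value -= 1

ctx = multiprocessing.get_context("fork")
counter = ctx.Value("i", 4)   # 'i' = C signed int, initial value 4
procs = [ctx.Process(target=work, args=(counter,)) for _ in range(4)]
for p in procs:
    p.start()
for p in procs:
    p.join()
print(counter.value)          # 0 -- the decrements are visible in the parent
```

That said, the Pool approach above sidesteps the counter entirely: the pool itself caps concurrency at MAX_PROCS, which is usually the cleaner fix.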