Python polling MSSQL at set interval

I need to poll an MSSQL database to watch the status of a running job. I want to run a status check every X seconds to see if status = 'done'. I am trying to use the threading module. I have tested the threading module with some simple print statements and it seems to work, but when I try it inside my pymssql script it does not.
def watcher_query(cursor):
    print 'Watching'
    return cursor.execute(""" select *
                              from some_table' """)

def is_it_done(row):
    if row['status'] == 'done':
        return row['the_id'], True
    else:
        return row['the_id'], False

def d(cur):
    watcher_query(cur)
    for row in cur:
        return is_it_done(row)[1]
    threading.Timer(100, d).start()

def job_watch(server):
    with pymssql.connect(server_info) as conn:
        with conn.cursor(as_dict=True) as cur:
            is_done = false
            while is_done:
                is_done = d(cur)
No matter what I set the threading.Timer to I see the 'Watching' statement print constantly. Is there a better way to set the polling timer perhaps?
I have also tried to use Twisted to set up a basic function which makes a function call every X sec until some condition is met. I haven't tried it with MSSQL yet though.

The way your code is written, it doesn't seem to be in working order:
- It doesn't compile because of is_done = false.
- If fixed to is_done = False, it skips the loop straight away.
- Even if the loop is fixed in some reasonable way, you never get to call threading.Timer(100, d).start() and you never examine any other rows, because you return from d straight away after examining the first row with return is_it_done(row)[1].
It doesn't matter what the actual timed worker method does; printing to the console or checking the database should work just the same with the same timer.
What about something like this:
import threading

def is_it_done():
    # get some dummy predictable results
    if not hasattr(is_it_done, 'results'):
        is_it_done.results = iter([False] * 3)
    return next(is_it_done.results, True)

def job_watch():
    is_done = False
    def d():
        is_done = is_it_done()
        print('is_done: {}'.format(is_done))
        timer = threading.Timer(3, d).start()
    d()

job_watch()
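If it helps, here is a minimal sketch of the same idea wired to the actual database check (not tested against a real server; the table and column names, the connection argument and the poll interval are taken from or assumed from the question). The timer is only re-armed while the job is still running, so polling stops once status is 'done':

import threading
import pymssql  # assumed available, as in the question

POLL_INTERVAL = 100  # seconds between checks; adjust as needed

def job_watch(server_info):
    done_event = threading.Event()

    def check():
        # open a short-lived connection for each check
        with pymssql.connect(server_info) as conn:
            with conn.cursor(as_dict=True) as cur:
                cur.execute("select the_id, status from some_table")
                row = cur.fetchone()
                if row and row['status'] == 'done':
                    done_event.set()
                    print('job {} is done'.format(row['the_id']))
                    return
        # not done yet: schedule the next check
        threading.Timer(POLL_INTERVAL, check).start()

    check()
    return done_event  # callers can wait() on this if they need to block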

Related

Best way to output concurrent task results in shell on the go

This is maybe a very basic question, and I can think of solutions, but I wondered if there is a more elegant one I don't know of (quick googling didn't bring up anything useful).
I wrote a script to communicate with a remote device. However, now that I have more than one device of that type, I thought I would just make the communication concurrent with concurrent.futures and handle them simultaneously:
with concurrent.futures.ThreadPoolExecutor(20) as executor:
    executor.map(device_ctl, ids, repeat(args))
So it just calls up to 20 threads of device_ctl with respective IDs and the same args. device_ctl is now printing some results, but since they all run in parallel, it gets mixed up and looks messy. Ideally I could have 1 line per ID that shows the current state of the communication and gets updated once it changes, e.g.:
Dev1 Connecting...
Dev2 Connected! Status: Idle
Dev3 Connected! Status: Updating
However, I don't really know how to solve it nicely. I can think of a status list that outside of the threads gets assembled into one status string, which gets frequently updated. But it feels like there could be a simpler method! Ideas?
Since there was no good answer, I made my own solution, which is quite compact but efficient. I define a class that I use globally. Each thread populates it or updates a value based on its ID. The ID is meant to be the same list entry as the one used for the thread. Here is a simple example of how to use it:
import concurrent.futures
from itertools import repeat

class collect:
    ids = []
    outs = []
    LINE_UP = "\033[1A"
    LINE_CLEAR = "\x1b[2K"
    printed = 0

    def init(list):
        collect.ids = [i for i in list]
        collect.outs = ["" for i in list]
        collect.printall()

    def write(id, out):
        if id not in collect.ids:
            collect.ids.append(id)
            collect.outs.append(out)
        else:
            collect.outs[collect.ids.index(id)] = out

    def writeout(id, out):
        if id not in collect.ids:
            collect.ids.append(str(id))
            collect.outs.append(str(out))
        else:
            collect.outs[collect.ids.index(id)] = str(out)
        collect.printall()

    def append(id, out):
        if id not in collect.ids:
            collect.ids.append(str(id))
            collect.outs.append(str(out))
        else:
            collect.outs[collect.ids.index(id)] += str(out)

    def appendout(id, out):
        if id not in collect.ids:
            collect.ids.append(id)
            collect.outs.append(out)
        else:
            collect.outs[collect.ids.index(id)] += str(out)
        collect.printall()

    def read(id):
        return collect.outs[collect.ids.index(str(id))]

    def readall():
        return collect.outs, "\n".join(collect.outs)

    def printall(filter=""):
        if collect.printed > 0:
            print(collect.LINE_CLEAR + collect.LINE_UP * len(collect.ids), end="")
        print(
            "\n".join(
                [
                    collect.ids[i] + "\t" + collect.outs[i] + " " * 30
                    for i in range(len(collect.outs))
                    if filter in collect.ids[i]
                ]
            )
        )
        collect.printed = len(collect.ids)

def device_ctl(id, args):
    collect.writeout(id, "Connecting...")
    if args.connected:
        collect.writeout(id, "Connected")

# ids and args come from the surrounding script (see the question)
collect.init(ids)
with concurrent.futures.ThreadPoolExecutor(20) as executor:
    executor.map(device_ctl, ids, repeat(args))
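To try the snippet in isolation, the undefined ids and args can be filled in with stand-ins (these are placeholders I made up, not part of the original script):

from types import SimpleNamespace

ids = ["Dev1", "Dev2", "Dev3"]          # hypothetical device IDs
args = SimpleNamespace(connected=True)  # dummy stand-in for the real args object

With those in place, the collect.init / executor.map lines at the end of the snippet run as-is. Note that printall() relies on ANSI cursor-movement escapes, so it needs a real terminal; and if the output ever interleaves, wrapping the collect.writeout calls in a shared threading.Lock is a cheap safeguard.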

How to temporarily disconnect a PyQt5 signal and reconnect it after?

Minimal working code:
step1_failed = False
try:
    print("step #1")
except:
    step1_failed = True
print("step #2") # always happens after step #1 but before step #3, regardless of whether step #1 failed or not
if not step1_failed:
    print("step #3") # only if step #1 executed without error
My question is: is there a better way of doing this that I don't see?
Ideally without any dummy variable like step1_failed.
I thought that maybe finally and else were the answer, but finally happens after the else and I need to do something before the else block.
The use case for this is PyQt5: I want to disconnect a signal, do something, and reconnect it afterwards to avoid unwanted recursion.
But I need to reconnect it only if it was connected in the first place.
Here is my PyQt5 code to understand why I need this:
def somefunction():
    connected_at_first = True # assuming it was connected
    try:
        lineedit1.textChanged.disconnect(somefunction) # throws a TypeError if it is not connected
    except TypeError:
        connected_at_first = False # happens only if lineedit1 wasn't connected
    lineedit1.setText("always happening !")
    # reconnecting lineedit1 if it was connected at the beginning
    if connected_at_first:
        lineedit1.textChanged.connect(somefunction)
I don't know if there's a cleaner way, but your approach can be wrapped in a context manager.
from contextlib import contextmanager

@contextmanager
def tempdisconnect(o, f):
    connected = True
    try:
        o.disconnect(f)
    except TypeError:
        connected = False
    yield
    if connected:
        o.connect(f)

with tempdisconnect(lineedit1.textChanged, somefunction):
    lineedit1.setText("always happening !")
A better API for disconnect would be to return either the function being disconnected (similar to how signal.signal works) or None if nothing was connected. Then tempdisconnect could be written:
@contextmanager
def tempdisconnect(o, f):
    old = o.disconnect(f)
    yield
    o.connect(old)
This also assumes that o.connect(None) is a no-op, so that the signal remains unconnected before and after the body of the with statement.
If you want to avoid recursion, you can use blockSignals():
def somefunction():
    blocked = lineedit1.blockSignals(True)
    lineedit1.setText("always happening !")
    lineedit1.blockSignals(blocked)
Otherwise, use a simple flag:
class SomeClass(QtWidgets.QWidget):
    signalFlag = False
    # ...
    def somefunction(self):
        if self.signalFlag:
            return
        self.signalFlag = True
        self.lineEdit.setText("always happening !")
        self.signalFlag = False
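As a side note, Qt also ships QtCore.QSignalBlocker, which wraps the blockSignals()/restore pattern; assuming a PyQt5 version that exposes it as a context manager (recent releases do), the first variant can be shortened to:

from PyQt5.QtCore import QSignalBlocker

def somefunction():
    # signals from lineedit1 are blocked for the duration of the with block
    # and restored to their previous state on exit
    with QSignalBlocker(lineedit1):
        lineedit1.setText("always happening !")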
Based on chepner's answer, I modified his code to be able to remove duplicate connects of the same function and to handle multiple functions.
from contextlib import contextmanager

@contextmanager
def tempdisconnect(signal, func):
    if not isinstance(func, (tuple, list)):
        func = (func,)
    connected = [True] * len(func)
    for i in range(len(func)):
        a = 0
        try:
            while True:
                signal.disconnect(func[i])
                a += 1
        except TypeError:
            if a == 0:
                connected[i] = False
    yield
    for i in range(len(func)):
        if connected[i]:
            signal.connect(func[i])
Usage:
# Disconnect somefunction (even if it was accidentally connected multiple times)
with tempdisconnect(lineEdit1.textChanged, somefunction):
    lineEdit1.setText("hello")
or
# Disconnect somefunc1, somefunc2, somefunc3
with tempdisconnect(lineEdit1.textChanged, (somefunc1, somefunc2, somefunc3)):
    lineEdit1.setText("hello")

Set with expiration not working inside running Python script

I have this class called DecayingSet which is a deque with expiration
from collections import deque
from time import time  # timestamps for expiration

class DecayingSet:
    def __init__(self, timeout): # timeout in seconds
        self.timeout = timeout
        self.d = deque()
        self.present = set()

    def add(self, thing):
        # Return True if `thing` not already in set,
        # else return False.
        result = thing not in self.present
        if result:
            self.present.add(thing)
            self.d.append((time(), thing))
        self.clean()
        return result

    def clean(self):
        # forget stuff added >= `timeout` seconds ago
        now = time()
        d = self.d
        while d and now - d[0][0] >= self.timeout:
            _, thing = d.popleft()
            self.present.remove(thing)
I'm trying to use it inside a running script that connects to a streaming API.
The streaming API returns URLs that I am trying to put inside the deque, to stop them from entering the next step of the program more than once.
class CustomStreamListener(tweepy.StreamListener):
    def on_status(self, status, include_entities=True):
        longUrl = status.entities['urls'][0]['expanded_url']
        limit = DecayingSet(86400)
        l = limit.add(longUrl)
        print l
        if l == False:
            pass
        else:
            r = requests.get("http://api.some.url/show?url=%s" % longUrl)
When I use this class in an interpreter, everything is good.
But when the script is running and I repeatedly send in the same URL, l returns True every time, indicating that the URL is not inside the set, when it is supposed to be. What gives?
Copying my comment ;-) I think the indentation is screwed up, but it looks like you're creating a brand new limit object every time on_status() is called. Then of course it would always return True: you'd always be starting with an empty limit.
Regardless, change this:
l = limit.add(longUrl)
print l
if l == False:
    pass
else:
    r = requests.get("http://api.some.url/show?url=%s" % longUrl)
to this:
if limit.add(longUrl):
    r = requests.get("http://api.some.url/show?url=%s" % longUrl)
Much easier to follow. It's usually the case that when you're comparing something to a literal True or False, the code can be made more readable.
Edit
I just saw in the interpreter that the variable assignment is the culprit.
How would I use the same object?
You could, for example, create the limit object at the module level. Cut and paste ;-)
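For instance, something along these lines (a sketch of that "cut and paste" suggestion; the 86400-second timeout and the listener come from the question):

# create the set once, at module level, so every call to on_status()
# shares the same 24-hour history of URLs
limit = DecayingSet(86400)

class CustomStreamListener(tweepy.StreamListener):
    def on_status(self, status, include_entities=True):
        longUrl = status.entities['urls'][0]['expanded_url']
        if limit.add(longUrl):
            r = requests.get("http://api.some.url/show?url=%s" % longUrl)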

Conditional if in asynchronous python program with twisted

I'm creating a program that uses the Twisted module and callbacks.
However, I keep having problems because the asynchronous part keeps going wrong.
I have learned (also from previous questions) that the callbacks will be executed at some point, but exactly when is unpredictable.
However, I have a certain program that goes like
j = calc(a)
i = calc2(b)
f = calc3(c)
if s:
    combine(i, j, f)
Now the boolean s is set by a callback fired by calc3. Obviously, this leads to an undefined-variable error because the callback has not executed by the time s is needed.
However, I'm unsure how you SHOULD do if statements with asynchronous programming using Twisted. I've been trying many different things, but can't find anything that works.
Is there some way to use conditionals that require callback values?
Also, I'm using VIFF for secure computations (which uses Twisted): VIFF
Maybe what you're looking for is twisted.internet.defer.gatherResults:
d = gatherResults([calc(a), calc2(b), calc3(c)])
def calculated((j, i, f)):
    if s:
        return combine(i, j, f)
d.addCallback(calculated)
However, this still has the problem that s is undefined. I can't quite tell how you expect s to be defined. If it is a local variable in calc3, then you need to return it so the caller can use it.
Perhaps calc3 looks something like this:
def calc3(argument):
    s = bool(argument % 2)
    return argument + 1
So, instead, consider making it look like this:
from collections import namedtuple

Calc3Result = namedtuple("Calc3Result", "condition value")

def calc3(argument):
    s = bool(argument % 2)
    return Calc3Result(s, argument + 1)
Now you can rewrite the calling code so it actually works:
It's sort of unclear what you're asking here. It sounds like you know what callbacks are, but if so then you should be able to arrive at this answer yourself:
d = gatherResults([calc(a), calc2(b), calc3(c)])
def calculated((j, i, calc3result)):
    if calc3result.condition:
        return combine(i, j, calc3result.value)
d.addCallback(calculated)
Or, based on your comment below, maybe calc3 looks more like this (this is the last guess I'm going to make, if it's wrong and you'd like more input, then please actually share the definition of calc3):
def _calc3Result(result, argument):
    if result == "250":
        # SMTP Success response, yay
        return Calc3Result(True, argument)
    # Anything else is bad
    return Calc3Result(False, argument)

def calc3(argument):
    d = emailObserver("The argument was %s" % (argument,))
    d.addCallback(_calc3Result, argument)
    return d
Fortunately, this definition of calc3 will work just fine with the gatherResults / calculated code block immediately above.
You have to put the if in the callback. You may use a Deferred to structure your callbacks.
As stated in the previous answer, the processing logic should be handled in the callback chain; below is a simple code demonstration of how this could work. C{DelayedTask} is a dummy implementation of a task which happens in the future and fires the supplied deferred.
So we first construct a special object, C{ConditionalTask}, which takes care of storing the multiple results and servicing callbacks.
calc, calc2 and calc3 return deferreds, which have their callbacks pointed to C{ConditionalTask}.x_callback.
Every C{ConditionalTask}.x_callback does a call to C{ConditionalTask}.process, which checks whether all of the results have been registered and fires on a full set.
Additionally, C{ConditionalTask}.c_callback sets a flag indicating whether or not the data should be processed at all.
from twisted.internet import reactor, defer

class DelayedTask(object):
    """
    Delayed async task dummy implementation
    """
    def __init__(self, delay, deferred, retVal):
        self.deferred = deferred
        self.retVal = retVal
        reactor.callLater(delay, self.on_completed)

    def on_completed(self):
        self.deferred.callback(self.retVal)

class ConditionalTask(object):
    def __init__(self):
        self.resultA = None
        self.resultB = None
        self.resultC = None
        self.should_process = False

    def a_callback(self, result):
        self.resultA = result
        self.process()

    def b_callback(self, result):
        self.resultB = result
        self.process()

    def c_callback(self, result):
        self.resultC = result
        """
        Here is an abstraction for your "s" boolean flag, obviously the logic
        normally would go further than just setting the flag, you could
        inspect the result variable and do other strange stuff
        """
        self.should_process = True
        self.process()

    def process(self):
        if None not in (self.resultA, self.resultB, self.resultC):
            if self.should_process:
                print 'We will now call the processor function and stop reactor'
                reactor.stop()

def calc(a):
    deferred = defer.Deferred()
    DelayedTask(5, deferred, a)
    return deferred

def calc2(a):
    deferred = defer.Deferred()
    DelayedTask(5, deferred, a*2)
    return deferred

def calc3(a):
    deferred = defer.Deferred()
    DelayedTask(5, deferred, a*3)
    return deferred

def main():
    conditional_task = ConditionalTask()
    dFA = calc(1)
    dFB = calc2(2)
    dFC = calc3(3)
    dFA.addCallback(conditional_task.a_callback)
    dFB.addCallback(conditional_task.b_callback)
    dFC.addCallback(conditional_task.c_callback)
    reactor.run()

main()

Atomic operations in Redis

How to perform these operations atomically?
def setNickName(nick):
    oldNick = r.get("user:id:1:nick") # r - instance of redis.Redis()
    updated = r.set("user:id:1:nick", nick) if r.hsetnx("user:ref:nick", nick, '1') else False
    if updated and oldNick:
        r.hdel("user:ref:nick", oldNick)
        return True
    return False
You can write a Lua script and execute it with the EVAL command. That will effectively make this whole procedure atomic.
Note that Redis with Lua scripting is not released yet (2.6 is at rc5), but it's pretty stable already.
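As a rough sketch of what that could look like with redis-py's register_script (the key names come from the question; returning 1 whenever the nick is successfully claimed is my own convention, not part of the original code):

import redis

r = redis.Redis()

# KEYS[1] = the user's nick key, KEYS[2] = the nick reference hash, ARGV[1] = new nick
SET_NICK = r.register_script("""
local old = redis.call('GET', KEYS[1])
if redis.call('HSETNX', KEYS[2], ARGV[1], '1') == 1 then
    redis.call('SET', KEYS[1], ARGV[1])
    if old then
        redis.call('HDEL', KEYS[2], old)
    end
    return 1
end
return 0
""")

def setNickName(nick):
    # the whole script runs atomically on the server
    return bool(SET_NICK(keys=["user:id:1:nick", "user:ref:nick"], args=[nick]))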
You would want to do this within a transaction using WATCH: https://github.com/andymccurdy/redis-py/#pipelines
Let me rewrite your code with a pipeline:
def setNickName(nick):
    with r.pipeline() as pipe:
        while 1:
            try:
                # watch both keys so the transaction aborts if another client changes them
                pipe.watch("user:id:1:nick", "user:ref:nick")
                oldNick = pipe.get("user:id:1:nick")  # pipe is in immediate mode while watching
                if pipe.hexists("user:ref:nick", nick):
                    # nick already taken, nothing to update
                    pipe.unwatch()
                    return False
                # queue the updates; execute() applies them atomically
                pipe.multi()
                pipe.hset("user:ref:nick", nick, '1')
                pipe.set("user:id:1:nick", nick)
                if oldNick:
                    pipe.hdel("user:ref:nick", oldNick)
                pipe.execute()
                return True
            except redis.WatchError:
                # a watched key changed before execute(); retry
                continue
