When I do a query with isql I get the following message and the query blocks.
The transaction log in database foo is almost full. Your transaction is being suspended until space is made available in the log.
When I do the same from Python:
cursor.execute(sql)
The query blocks, but I would like to see this message.
I tried:
Sybase.set_debug(sys.stderr)
connection.debug = 1
I'm using:
python-sybase-0.40pre1
Adaptive Server Enterprise/15.5/EBF 18164
EDIT: The question is, "How do I capture this warning message in the python program?"
Good question and I'm not sure I can completely answer it. Here are some ideas.
Sybase.py uses logging. Make sure you are using it. To turn the logging up I would do this:
import sys
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s [%(filename)s] (%(name)s) %(message)s",
                    datefmt="%H:%M:%S", stream=sys.stdout)
log = logging.getLogger('sybase')
log.setLevel(logging.DEBUG)
And apparently (why is beyond me) to get this working in Sybase.py you also need to set a global DEBUG = True (see line 38).
But then if we look at def execute we can see (as you point out) that it's blocking. Well, that sort of answers your question: you aren't going to get anything back while it's blocking. So how do you fix this? Write a non-blocking execute method ;) There are some hints in examples/timeout.py. Apparently someone else has run into this but hasn't really fixed it.
I know this probably didn't help, but I spent 15 minutes looking, so I should at least tell you what I found. You would think that execute would give you some result. Wait, what is the value of result at line 707?
while 1:
    status, result = self._cmd.ct_results()
    if status != CS_SUCCEED:
        break
If status != CS_SUCCEED (which I'm assuming in your case is True), can you simply see what "result" is equal to? I wonder if they simply failed to raise the result as an exception?
Your query may be large which is overflowing the transaction log (e.g. you are inserting large binary files). Or you may not be truncating your transaction logs often enough if that's an option.
At runtime you will want an except clause that catches the Error exception.
See the Sybase Python documentation.
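A minimal sketch of what catching those exceptions might look like (the connect arguments and the placeholder query are illustrative, and the Warning class is the DB-API standard one; whether this particular "log almost full" message actually surfaces as a Python exception rather than the call simply blocking is exactly what the question is about):

import sys

import Sybase

db = Sybase.connect('SYBASE', 'user', 'password', 'foo')   # placeholder credentials
cursor = db.cursor()
sql = 'select count(*) from some_table'                    # placeholder query

try:
    cursor.execute(sql)
except Sybase.Warning as warn:
    # Non-fatal server messages may be reported here, depending on how
    # the driver maps them.
    sys.stderr.write('Server warning: %s\n' % warn)
except Sybase.Error as err:
    # Fatal database errors end up here.
    sys.stderr.write('Database error: %s\n' % err)
    raise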
The reason is obvious: the transaction log is full. You can check the blocking queries in the Sybase MDA table monSysStatement; there you can see which SQL statement is taking high I/O and time, the number of rows affected, etc.
Let's say I would like to create a simple "listener" bot which will print out the result when I send the message "say *". I would imagine that it should look like this:
import requests
import time

key = 'iknowthatyouarenotsupposedtodoitlikethat'
start_time = time.time()

while True:
    result = requests.get(f'https://api.telegram.org/bot{key}/getUpdates?timeout=1').json()
    if result['result'][-1]['message']['date'] < start_time:
        continue  # Ignore old messages
    message = result['result'][-1]['message']['text'].split()
    if message[0] == 'say':
        print(' '.join(message[1:]))
        break
This is by no means an example of a great approach, but it should work fine :).
The problem here is that the result variable is filled like it is supposed to be with messages from the last 24 hours, but starting with the second iteration it only receives the one or two oldest messages, which is super weird. I have found that doing time.sleep(.25) after each iteration seems to fix the issue, but this looks like such a dumb fix and it may not be reliable. If this is simply rate limiting, there should at least be some indication of the error, but the status code is always 200 and there is no indication of the problem.
The same happens when you try doing the request by directly inserting the link into the browser and start mashing F5, which is obvious, but it is easier to see what I am talking about this way.
After looking into the documentation I have found that this issue may be caused by short polling which is "only recommended for testing purposes", but this should be fixed by the timeout argument which I have.
I don't know how to approach this issue further, so maybe there is a solution that I am not seeing?
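For reference, here is a hedged sketch of the long-polling pattern described in the Bot API docs, where each call acknowledges the previous updates via the offset parameter (the key and the message filtering mirror the snippet above; treat it as an illustration of the pattern, not a verified fix for the behaviour described):

import requests

key = 'iknowthatyouarenotsupposedtodoitlikethat'
url = f'https://api.telegram.org/bot{key}/getUpdates'

offset = None
while True:
    params = {'timeout': 30}          # long poll: the request blocks until an update arrives
    if offset is not None:
        params['offset'] = offset     # acknowledge everything before this update_id
    updates = requests.get(url, params=params, timeout=35).json().get('result', [])
    for update in updates:
        offset = update['update_id'] + 1
        text = update.get('message', {}).get('text', '')
        words = text.split()
        if words and words[0] == 'say':
            print(' '.join(words[1:]))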
I'm trying to do everything I would have previously used Bash for in Python, in order to finally learn the language. However, this problem has me stumped, and I haven't been able to find any solutions fitting my use-case.
There is an element of trying to run before I can walk here, though, so I'm looking for some direction.
Here's the issue:
I have a Python script that starts a separate program that creates and writes to a log file.
I want to watch that log file, and print out "Successful Run" if the script detects the "Success" string in the log, and "Failed Run" if the "Failed" string is found instead. The underlying process generally takes about 10 seconds to get to the stage where it'll write "Success" or "Failure" to the log file. Neither string will appear in the log at the same time. It's either a success or a failure; it can't be both.
I've been attempting to do this with a while loop, so I can continue to watch the log file until the string appears, and then exit when it does. I have got it working for just one string, but I'm unsure how to accommodate the other string.
Here's the code I'm running.
import sys

log_path = "test.log"
success = "Success"
failure = "Failed"

with open(log_path) as log:
    while success != True:
        if success in log.read():
            print("Process Successfully Completed")
            sys.exit()
Thanks to the pointers above from alaniwi and David, I've actually managed to get it to work, using the following code. So I must have been quite close originally.
I've wrapped it all in a while True, put the log.read() into a variable, and added an elif. Definitely interested in any pointers on whether this is the most Pythonic way to do it though? So please critique if need be.
while True:
    with open(log_path) as log:
        read_log = log.read()
        if success in read_log:
            print("Process Successfully Completed")
            sys.exit()
        elif failure in read_log:
            print("Failed")
            sys.exit()
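Since you asked for critique: one hedged refinement (a sketch, not the only way to do it) is to poll with a short sleep so the loop doesn't re-read the file as fast as the CPU allows, and to return a status instead of calling sys.exit() inside the loop. The function name and the poll interval are my own choices:

import time

def wait_for_result(log_path, success="Success", failure="Failed", poll_interval=0.5):
    # Poll the log file until either marker string shows up, then report it.
    while True:
        with open(log_path) as log:
            read_log = log.read()
        if success in read_log:
            return "Successful Run"
        if failure in read_log:
            return "Failed Run"
        time.sleep(poll_interval)  # avoid busy-waiting between reads

print(wait_for_result("test.log"))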
I am using Python's built-in logging module. I'd like to log output only when an error occurs, but when it does, to log everything up until that point for debugging purposes.
It would be nice if I could reset this as well, so a long running process doesn't contain gigabytes of logs.
As an example. I have a process that processes one million widgets. Processing a widget can be complicated and involve several steps. If processing fails, knowing all of the logs for that widget up to that point would be helpful.
import logging
from random import randrange

logger = logging.getLogger()

class Widget():
    def process(self, logger):
        logger.info('doing stuff')
        logger.info('do more stuff')
        if randrange(0, 10) == 5:
            logger.error('something bad happened')

for widget in widgetGenerator():
    logger.reset()  # desired (hypothetical) API: discard logs buffered for the previous widget
    widget.process(logger)
1 out of 10 times the following would be printed:
doing stuff
do more stuff
something bad happened
But the normal logs would not be printed otherwise.
Can this be done with the logger as is or do I need to roll my own implementation?
Use a MemoryHandler to buffer records using a threshold of e.g. ERROR, and make the MemoryHandler's target attribute point to a handler which writes to e.g. console or file. Then output should only occur if the threshold (e.g. ERROR) is hit during program execution.
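A minimal sketch of that setup (the capacity value and the per-widget buffer.clear() call, which pokes at MemoryHandler's internal buffer list, are assumptions for illustration; the widget processing is stubbed out):

import logging
import logging.handlers

# Target handler that actually writes the records (console here).
console = logging.StreamHandler()
console.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

# Buffer records in memory; flush them to the console only when a record of
# severity ERROR (or higher) arrives, or when the buffer fills up.
memory = logging.handlers.MemoryHandler(
    capacity=10000,                 # assumed large enough for one widget's logs
    flushLevel=logging.ERROR,
    target=console,
)

logger = logging.getLogger("widgets")
logger.setLevel(logging.DEBUG)
logger.addHandler(memory)

def process_widget(widget_id):
    # Hypothetical per-widget work; stands in for widget.process() above.
    logger.info("doing stuff for widget %s", widget_id)
    logger.info("do more stuff for widget %s", widget_id)
    if widget_id % 10 == 5:
        logger.error("something bad happened to widget %s", widget_id)

for widget_id in range(20):
    process_widget(widget_id)
    # Discard buffered records for widgets that completed without error, so a
    # long-running process does not accumulate gigabytes of buffered logs.
    memory.buffer.clear()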
I have a big problem with a deadlock in an InnoDB table used with sqlalchemy.
sqlalchemy.exc.InternalError: (mysql.connector.errors.InternalError) 1213 (40001): Deadlock found when trying to get lock; try restarting transaction.
I have already serialized the access, but still get a deadlock error.
This code is executed on the first call in every function. Every thread and process should wait here, till it gets the lock. It's simplified, as selectors are removed.
# The work with the index -1 always exists.
f = s.query(WorkerInProgress).with_for_update().filter(
    WorkerInProgress.offset == -1).first()
I have reduced my code to a minimal state. I am currently running only concurrent calls on the method next_slice. Session handling, rollback and deadlock handling are handled outside.
I get deadlocks even though all access is serialized. I did try to increment a retry counter in the offset == -1 entity as well.
def next_slice(self, s, processgroup_id, itemcount):
    # Serialize access: lock the row with offset == -1 first.
    f = s.query(WorkerInProgress).with_for_update().filter(
        WorkerInProgress.offset == -1).first()

    # Take the first matching object if available / maybe some workers failed
    item = s.query(WorkerInProgress).with_for_update().filter(
        WorkerInProgress.processgroup_id != processgroup_id,
        WorkerInProgress.processgroup_id != 'finished',
        WorkerInProgress.processgroup_id != 'finished!locked',
        WorkerInProgress.offset != -1
    ).order_by(WorkerInProgress.offset.asc()).limit(1).first()

    # *****
    # Some code is missing here, as it's not executed in my test case

    # Fetch the latest item and add a new one
    item = s.query(WorkerInProgress).with_for_update().order_by(
        WorkerInProgress.offset.desc()).limit(1).first()

    new = WorkerInProgress()
    new.offset = item.offset + item.count
    new.count = itemcount
    new.maxtries = 3
    new.processgroup_id = processgroup_id

    s.add(new)
    s.commit()

    return new.offset, new.count
I don't understand why the deadlocks are occurring.
I have reduced the number of deadlocks by fetching all items in one query, but I still get deadlocks. Perhaps someone can help me.
Finally I solved my problem. It's all in the documentation, but I had to understand it first.
Always be prepared to re-issue a transaction if it fails due to
deadlock. Deadlocks are not dangerous. Just try again.
Source: http://dev.mysql.com/doc/refman/5.7/en/innodb-deadlocks-handling.html
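In code, that advice boils down to a small retry wrapper. A hedged sketch (the function name, retry count and backoff are illustrative; detecting the deadlock via error code 1213 in the underlying exception text is an assumption about how mysql.connector reports it):

import time

from sqlalchemy.exc import InternalError, OperationalError

def run_with_deadlock_retry(session, func, *args, retries=3, **kwargs):
    # Re-issue the transaction if it fails with an InnoDB deadlock (error 1213).
    for attempt in range(retries):
        try:
            return func(session, *args, **kwargs)
        except (InternalError, OperationalError) as exc:
            session.rollback()
            if '1213' not in str(exc.orig) or attempt == retries - 1:
                raise
            time.sleep(0.1 * (attempt + 1))  # small backoff before trying again

# e.g. offset, count = run_with_deadlock_retry(s, worker.next_slice, processgroup_id, itemcount)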
I have solved my problem by changing the architecture of this part. I still get a lot of deadlocks, but they now appear almost only in the short-running methods.
I have split my worker table into a locking and a non-locking part. The actions on the locking part are now very short, and no data handling happens during the get_slice, finish_slice and fail_slice operations.
The transactions that handle data are now in the non-locking part, without concurrent access to table rows. The results are written back to the locking table in finish_slice and fail_slice.
Finally I found a good description on Stack Overflow too, after identifying the right search terms.
https://stackoverflow.com/a/2596101/5532934
When a raised exception is caught at the root of the call stack, I can see the whole context at every level of the call stack in Sentry.
But when I use captureMessage() I can't see any context in Sentry.
If I use captureException() as in the code below, I can see only the top of the call stack.
try:
    raise Exception('Breakpoint!')
except:
    raven_client.captureException()
In other words I want to see in Sentry a logged message with full stacktrace and context.
The Python SDK has the ability to capture arbitrary stacktraces by passing stack=True to captureMessage:
raven_client.captureMessage('hello world', stack=True)
There is additionally an auto_log_stacks value that can be turned on when configuring the Client:
raven_client = Client(..., auto_log_stacks=True)
Caveat: Automatically logging stacks is useful, but it's not guaranteed accurate in some common situations. It's also a performance hit, albeit a minor one, as it has to constantly call out to inspect.stack().
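For completeness, a hedged sketch combining stack capture with some explicit context (extra and tags are standard raven client keyword arguments; the specific keys and values here are made up for illustration):

raven_client.captureMessage(
    'hello world',                  # the message that shows up in Sentry
    stack=True,                     # attach the current stacktrace
    extra={'widget_id': 42},        # arbitrary key/value context (illustrative)
    tags={'component': 'worker'},   # indexed, filterable tags (illustrative)
)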