I have the following model:
class Foo(DeclarativeBase):
    code = Column(u'code', String(length=255))
    ctype = Column(u'ctype', String(length=255))
I need to validate one field with respect to another one. For example:
if ctype == "bar" and code == "buzz": raise ValueError
In that case no object should be created and no record written to the DB on commit. If no exception was raised, everything should be created as usual.
I've tried to use simple validators, and I also tried to set up a 'before_insert' mapper event and wrote this code:
def validate_foo(mapper, connection, target):
    if target.ctype == "bar" and target.code == "buzz":
        raise ValueError

event.listen(Foo, 'before_insert', validate_foo)
When ctype == "bar" and code == "buzz", it does not create any object in DB. It does not raise any exception. But it creates Foo object instance(without db).
What is the best way to do such validation?
I've ended up with creating trigger in DB(I use postgresql). And catching integrity error in python code.
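A minimal sketch of that final approach, assuming a regular Session object named session and that the trigger surfaces as an IntegrityError at commit time, as described above:

from sqlalchemy.exc import IntegrityError

foo = Foo(ctype="bar", code="buzz")
session.add(foo)
try:
    session.commit()      # the PostgreSQL trigger rejects the row here
except IntegrityError:
    session.rollback()    # discard the pending Foo instance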
I'm currently developing an Azure Function locally that communicates with Microsoft Sentinel, in order to fetch the alert rules from it, and more specifically their respective queries:
credentials = AzureCliCredential()
alert_rules_operations = SecurityInsights(credentials, SUBSCRIPTION_ID).alert_rules
list_alert_rules = alert_rules_operations.list(resource_group_name=os.getenv('RESOURCE_GROUP_NAME'), workspace_name=os.getenv('WORKSPACE_NAME'))
The issue is that when I loop over list_alert_rules and try to read each rule's query, I get an error:
Exception: AttributeError: 'FusionAlertRule' object has no attribute 'query'.
Yet, when I check their type via the type() function:
list_alert_rules = alert_rules_operations.list(resource_group_name=os.getenv(
    'RESOURCE_GROUP_NAME'), workspace_name=os.getenv('WORKSPACE_NAME'))
for rule in list_alert_rules:
    print(type(rule))
##console: <class 'azure.mgmt.securityinsight.models._models_py3.ScheduledAlertRule'>
The weirder issue is that this error only appears when I don't print the attribute. Let me show you:
Print:
for rule in list_alert_rules:
    query = rule.query
    print('query', query)
##console: query YAY I GET WHAT I WANT
No print:
for rule in list_alert_rules:
    query = rule.query
    ...
##console: Exception: AttributeError: 'FusionAlertRule' object has no attribute 'query'.
I posted the issue on the GitHub repo, but I'm not sure whether it's a package bug or a runtime issue. Has anyone run into this kind of problem?
BTW I'm running Python 3.10.8
TIA!
EDIT:
I've tried using a map function, same issue:
def format_list(rule):
    query = rule.query
    # print('query', query)
    # query = query.split('\n')
    # query = list(filter(lambda line: "//" not in line, query))
    # query = '\n'.join(query)
    return rule

def main(mytimer: func.TimerRequest) -> None:
    # results = fetch_missing_data()
    credentials = AzureCliCredential()
    alert_rules_operations = SecurityInsights(
        credentials, SUBSCRIPTION_ID).alert_rules
    list_alert_rules = alert_rules_operations.list(resource_group_name=os.getenv(
        'RESOURCE_GROUP_NAME'), workspace_name=os.getenv('WORKSPACE_NAME'))
    list_alert_rules = list(map(format_list, list_alert_rules))
I tried the same thing you used. After I changed it as below, I get a valid response.
# Management Plane - Alert Rules
alertRules = mgmt_client.alert_rules.list_by_resource_group('<ResourceGroup>')
for rule in alertRules:
    # Try this
    test.query = rule.query  # Get the result
    # print(rule)

if mytimer.past_due:
    logging.info('The timer is past due!')
Instead of this:
for rule in list_alert_rules:
    query = rule.query
Try the below:
for rule in list_alert_rules:
    # Try this
    test.query = rule.query
Sorry for the late answer, as I've been under tons of work these last few days.
Python has an excellent built-in function called hasattr that checks whether an object has a specific attribute.
I've used it in the following way:
for rule in rules:
    if hasattr(rule, 'query'):
        ...
The reason for using this is that the method returns objects of different classes, which all inherit from the same parent class.
Hope this helps.
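If you'd rather filter by type than by attribute, an isinstance check is a possible alternative. A small sketch, assuming ScheduledAlertRule can be imported from azure.mgmt.securityinsight.models (the printed class path above suggests it can) and that only scheduled rules carry a query:

from azure.mgmt.securityinsight.models import ScheduledAlertRule

# keep only the scheduled rules, i.e. the ones that actually have a query
queries = [rule.query for rule in list_alert_rules
           if isinstance(rule, ScheduledAlertRule)]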
I have a Python script and a database, and they are connected. The Python script raises some errors when the inputs are wrong. For example, one of the functions counts classes: if there are fewer than 2 classes, a OneClassError is raised and the program is killed. There are other customized errors like OneClassError.
There is a column in my database table called error_code. It should be updated according to the specific error I get. For instance, it should be 401 if it's a OneClassError and 402 if it's a TestClassMsg. (I made up the error names here just for easier illustration.)
For now, I only know how to update that column with a single hard-coded value. I wonder how to automate the process so that I don't need to specify 401 or 402 myself - the error_code should identify the error and know the code for it.
# here is my simplified example:
import numpy as np

def count_class_sim(inp_data):
    """
    :param inp_data: the input data array
    :return: pass or fail
    """
    unique_cc, count_cc = np.unique(inp_data[inp_data != 0], return_counts=True)
    if len(unique_cc) < 2:
        raise OneClassError()
    else:
        raise TestClassMsg()

data1 = np.array([0,0,1,2,3,3,4,5,5,5])
count_class_sim(data1)  # ---- TestClassMsg: TestingTesting

data2 = np.array([0,0,1])
count_class_sim(data2)  # ---- OneClassError: Data has only 1 non-zero class - rejected.
Here is my database connection code. Note that I only know how to give a single hard-coded value to error_code; I want a more sophisticated way to do it. Can anyone show me how to modify the "set error_code =" command?
try:
    count_class_sim(data2)
except:
    cur.execute("""update lc_2020 set error_code = 401
                   where quad_id = '010059' """)
    conn.commit()
Since you have custom errors, you can put the error code in the custom error class itself.
Example:
class OneClassError(Exception):
    code = 401

    def __init__(self):
        pass
    # remaining logic

try:
    ...  # code that may raise the OneClassError exception
except OneClassError as e:
    print(e.code)
I hope I understood your query and answered it right. If you have any questions, ask in the comment section.
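Building on that answer, a minimal sketch that removes the hard-coded 401 from the question's except block. Table and column names are taken from the question; the tuple of exception classes and the %s parameter style (psycopg2/MySQL drivers) are assumptions:

class OneClassError(Exception):
    code = 401

class TestClassMsg(Exception):
    code = 402

try:
    count_class_sim(data2)
except (OneClassError, TestClassMsg) as e:
    # the exception carries its own code, so nothing is hard-coded here
    cur.execute("update lc_2020 set error_code = %s where quad_id = %s",
                (e.code, '010059'))
    conn.commit()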
I created a repository class for Person with the below method in it:
def find_by_order_id(self, order_id: str) -> [Person]:
    results = []
    client_table = Table(TBL_CLIENT, self._metadata, autoload=True,
                         autoload_with=self._connection)
    order_table = Table(TBL_ORDER, self._metadata, autoload=True,
                        autoload_with=self._connection)
    query = select([client_table.c.first_name, client_table.c.last_name,
                    client_table.c.date_of_birth]).select_from(
        order_table.join(client_table,
                         client_table.c.order_id == order_table.c.order_id)
    ).where(order_table.c.order_id == order_id).distinct()
    for row in self._connection.execute(query).fetchall():
        results.append(dict(row))
    return results
For some odd reason which I have no explanation for, I sometimes get a KeyError from SQLAlchemy on a key that actually exists when I debug the code:
File "/home/tghasemi/miniconda3/envs/myproj/lib/python3.6/site-packages/sqlalchemy/util/_collections.py", line 210, in __getattr__
return self._data[key]
KeyError: 'order_id'
I noticed this only happens when I pull the connection from a connection pool while multithreading (each thread only uses one connection - I know connections are not threadsafe):
if use_pooling:
    self._engine = create_engine(connection_string, pool_size=db_pool_size,
                                 pool_pre_ping=db_pool_pre_ping, echo=db_echo)
else:
    self._engine = create_engine(connection_string, echo=db_echo)
Considering that the key exists (I even checked it when I put a breakpoint where the exception happens), I suspect that loading the table has not completed yet when the query is being constructed.
Does anyone have any idea why something like this can happen? I've given up!
Well, I think I managed to fix the problem.
Something I did not mention in my question (in fact I did not think of it as a possibility) was that both the Engine and the MetaData object come from a Singleton class I created for my Database:
class Database(metaclass=SingletonMetaClass):
    def __init__(self, config: Config, use_pooling: bool = True, logger: Logger = None):
        # Initializing the stuff here ...
        pass

    @property
    def metadata(self):  # used to return self._metadata here
        return MetaData(self._engine, reflect=False, schema=self._db_schema)

    @property
    def engine(self):
        return self._engine
Although SQLAlchemy's official documentation says that the MetaData object is thread-safe for read operations, for some reason in my case (which is a read operation) it was causing this issue. After I stopped sharing this object among my threads the issue went away (I'm not 100% sure it literally went away, but it's not happening anymore).
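If constructing a new MetaData on every property access feels wasteful, keeping one MetaData per thread is a possible middle ground. This is only a sketch using the same (older) MetaData(bind, schema=...) constructor as the code above, not a verified fix:

import threading

from sqlalchemy import MetaData

_local = threading.local()

def get_metadata(engine, schema):
    # each thread lazily builds and then reuses its own MetaData
    if not hasattr(_local, 'metadata'):
        _local.metadata = MetaData(engine, schema=schema)
    return _local.metadata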
@pytest.fixture
def settings():
    with open('../config.yaml') as yaml_stream:
        return yaml.load(stream=yaml_stream)

@pytest.fixture
def viewers(settings):
    try:
        data = requests.get(settings['endpoints']['viewers']).json()
        return data[0]['viewers']
    except Exception:
        print('ERROR retrieving viewers')
        raise(SystemExit)

@pytest.fixture
def viewers_buffer_health(viewers):
    print(viewers)
    viewers_with_buffer_health = {}
    for viewer in viewers:
        try:
            data = requests.get(settings['endpoints']['node_buffer_health']).replace('<NODE_ID>', viewer)
        except Exception as e:
            print('ERROR retrieving buffer_health for {}'.format(viewer))
            raise(SystemExit)
        viewers_with_buffer_health[viewer] = data[0]['avg_buffer_health']
    return viewers_with_buffer_health
The fixture viewers_buffer_health is failing all the time on the requests call with 'function' object is not subscriptable.
Other times I have seen this error, it was because a variable and a function shared the same name, but that's not the case here (or I'm completely blind).
Although it shouldn't matter, the output of viewers is a list like ['a4a6b1c0-e98a-42c8-abe9-f4289360c220', '152bff1c-e82e-49e1-92b6-f652c58d3145', '55a06a01-9956-4d7c-bfd0-5a2e6a27b62b'].
Since viewers_buffer_health() doesn't have a local definition for settings, it is using the function defined previously. If it is meant to work in the same manner as viewers(), then you will need to add settings to its current set of arguments.
settings is a function.
data = requests.get(settings()['endpoints']['node_buffer_health']).replace('<NODE_ID>', viewer)
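A minimal sketch of the first suggestion, i.e. declaring settings as a parameter so pytest injects the fixture's value instead of leaving the name bound to the fixture function. The URL construction and the .json() call are assumptions about the intended code, since <NODE_ID> has to be replaced in the URL string before the request is made:

@pytest.fixture
def viewers_buffer_health(viewers, settings):
    viewers_with_buffer_health = {}
    for viewer in viewers:
        url = settings['endpoints']['node_buffer_health'].replace('<NODE_ID>', viewer)
        data = requests.get(url).json()
        viewers_with_buffer_health[viewer] = data[0]['avg_buffer_health']
    return viewers_with_buffer_health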
I'm using Peewee and bottle.py for a very small web app. Once in a while I get a "MySQL server has gone away" error, which my script doesn't seem to recover from.
According to this answer, I should simply try-catch the error and recover myself.
An example of what I have:
def create_db_con():
    return peewee.MySQLDatabase("db_name", host="host", user="user", passwd="pass")

class ModelObj(peewee.Model):
    # some members omitted
    class Meta:
        database = create_db_con()

@route("/")
def index_htm():
    try:
        mo = ModelObj.filter(foo="bar")
    except OperationalError as oe:
        ModelObj.Meta.database = create_db_con()
This gives me an AttributeError when the OperationalError occurs:
AttributeError: type object 'OrderProdukt' has no attribute 'Meta'
How am I supposed to recover from this situation?
EDIT:
As univerio pointed out, I can access it via ModelObj._meta.database, but simply creating a new connection there doesn't seem to work.
Is this the default Python behavior for nested classes?
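For what it's worth, a hedged sketch of the access path mentioned in the edit (not a verified fix for the gone-away problem): peewee's model metaclass consumes the inner Meta class and stores its options on the mangled _meta attribute, which is why ModelObj.Meta raises an AttributeError. Reconnecting the existing database object, using the close/connect/is_closed methods peewee's Database exposes, would look roughly like this:

db = ModelObj._meta.database  # the database object peewee actually uses

@route("/")
def index_htm():
    try:
        mo = ModelObj.filter(foo="bar")
    except OperationalError:
        if not db.is_closed():
            db.close()
        db.connect()
        mo = ModelObj.filter(foo="bar")  # retry once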