How can I delete entries from a table using pyignite?

I have an Apache Ignite database running which I want to interact with using the Python thin client (pyignite). I've already performed create, read and update operations, but I'm running into problems with delete. For now, even though submitting the delete request does not raise any error, the entries that are supposed to be deleted are not removed.
I've tried deleting those same entries by running the same delete query in a terminal via jdbc:ignite:thin://127.0.0.1/, and this does successfully remove the targeted entries.
Here is how I unsuccessfully tried to delete data:
from pyignite import Client

self.client = Client()
self.client.connect('127.0.0.1', 10800)

patientID = 5
IS_DEFINED_QUERY = "SELECT * FROM Patients WHERE PatientID = ?"
result = self.client.sql(
    IS_DEFINED_QUERY,
    query_args=[patientID]
)
try:
    # Make sure the entry exists before trying to delete it
    next(result)
    DELETE_QUERY = "DELETE FROM Patients WHERE PatientID = ?"
    self.client.sql(
        DELETE_QUERY,
        query_args=[patientID])
except StopIteration:
    raise KeyDoesNotExist()  # custom exception
Any help would be greatly appreciated, thanks!
EDIT: I've got some suggestions saying it might come from database settings that would prevent the thin client from executing deletions. Any thoughts on that?
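One way to narrow this down (a minimal sketch, not a confirmed fix): Ignite reports an affected-row count for SQL DML statements, so consuming the cursor returned by sql() should show whether the DELETE reached the server and matched anything. The assumption that pyignite surfaces the count as the first row of the result is worth verifying against your pyignite version:

from pyignite import Client

client = Client()
client.connect('127.0.0.1', 10800)

DELETE_QUERY = "DELETE FROM Patients WHERE PatientID = ?"
cursor = client.sql(DELETE_QUERY, query_args=[5])

# For DML, the result is expected to be a single row holding the number of
# affected rows; if this prints 0, the statement ran but matched nothing.
print(next(cursor))

client.close()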

Related

Loading of Ensembl data from Python into OrientDB stops every time in the middle of the run with invalid authentication

As a test, I have been trying to load Ensembl gene-transcript-protein data through a Python program into OrientDB (logged in as root). Nodes work fine, and the (transcript-to-protein) edges load tens of thousands of edges fine, but then all remaining edges fail with an error message like:
Invalid authentication info for access to the database com.orientechnologies.orient.core.metadata.security.auth.OTokenAuthInfo#3b531bc6 etc.
Terminal output:
.....
DB name="ensembl"
228190
com.orientechnologies.orient.core.exception.OSecurityAccessException - Invalid authentication info for access to the database com.orientechnologies.orient.core.metadata.security.auth.OTokenAuthInfo#48703c3d
DB name="ensembl"
228191
com.orientechnologies.orient.core.exception.OSecurityAccessException - Invalid authentication info for access to the database com.orientechnologies.orient.core.metadata.security.auth.OTokenAuthInfo#1c80d217
DB name="ensembl"
228192
The debug console (VS Code) and inspection of the input data show nothing unexpected.
Relevant section of the code:
import pandas

# Create the tr2prot edges (dataPath, client and tr2protList are defined earlier)
tr2protTable = pandas.read_table(dataPath + "tr2prot.txt", sep=',')
(s1, s2) = tr2protTable.shape
for i1 in range(s1):
    if i1 % 1000 == 0: print(i1)
    row = tr2protTable.iloc[i1]
    uIDfrom = (row['TranscriptStableID']).replace("'", "\\'")  # escape quotes for the SQL string
    uIDto = (row['ProteinStableID']).replace("'", "\\'")
    commandStr = f"create edge Tr2prot from (select from Transcript where uID= '{uIDfrom}') to (select from Protein where uID = '{uIDto}')"
    tr2protList.append(commandStr)
    try:
        tr2prot = client.command(commandStr)
    except Exception as e:
        print(e)
        print(i1)
In case it's relevant, I tried to set the session token as indicated in the manual:
client.connect("root", root_password)
client.set_session_token(True)

import pyorient

client = pyorient.OrientDB("localhost", 2424)  # host, port

# Reattach to session
if sessionToken != '':
    client.set_session_token(sessionToken)
else:
    # Open database
    client.db_drop("ensembl")
    client.db_open("ensembl", "root", root_password)
    # Fetch session token
    sessionToken = client.get_session_token()
If I generate a batch submission, I do not seem to have the problem, apart from errors when, for some reason, the vertex is not there. Is there something like the Neo4j apoc.load.csv switch failOnError:false in OrientDB? That would also help.
Any suggestions would be great.
Hans
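There is no failOnError-style switch I know of in pyorient itself, but a workaround in the spirit of the question is to look up both endpoint vertices first and skip the edge when one is missing. This is a sketch only: the Tr2prot/Transcript/Protein names mirror the question's schema, and it assumes pyorient's OrientRecord exposes the record id as _rid:

def create_edge_safe(client, uid_from, uid_to):
    # Look up both endpoints first; skip the edge if either is missing,
    # mimicking Neo4j's failOnError:false behaviour.
    src = client.command(f"select from Transcript where uID = '{uid_from}'")
    dst = client.command(f"select from Protein where uID = '{uid_to}'")
    if not src or not dst:
        print(f"skipping edge {uid_from} -> {uid_to}: missing vertex")
        return None
    return client.command(f"create edge Tr2prot from {src[0]._rid} to {dst[0]._rid}")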

cx_Oracle SessionPool root of all Flask problems

I created a web service in Flask running under uWSGI. I thought I would follow good practice and create a SessionPool with 20 connections to be safe. On each call to a web service endpoint, I acquire a connection from the pool, and at the end I release it.
When using Locust to swarm-test the API, I was getting hundreds of failures, nearly 100% on some of the longer responses (30 MB JSON response). Smaller payloads were much better, but with intermittent failures.
The minute I switched back to bad practice and created a brand new connection and cursor within the method itself, all my problems vanished: 100% success on thousands of stress-test calls.
My errors were varied: TNS bad packet, incorrect number of connections from pool, request cancelled by user... you name it, it was there.
So it seems I can't use Oracle connection pooling with Flask, or a single connection at the Flask application level (that generated errors too, I'm not sure why, which is why I switched to connection pooling).
Any advice on creating scalable apps using cx_Oracle in Flask?
My original code was:
import cx_Oracle

pool = cx_Oracle.SessionPool("user", "password", "myserver.company.net:1521/myservice",
                             min=10, max=10, increment=0,
                             getmode=cx_Oracle.SPOOL_ATTRVAL_WAIT, encoding="UTF-8")

def read_products_search(search=None):
    """
    This function responds to a request for /api/products
    with the complete lists of people
    :return: json string of list of people
    """
    conn_ariel = pool.acquire()
    cursor_ariel = conn_ariel.cursor()
    search = search.lower()
    print("product search term is: ", search)
    # Create the list of products from our data
    sql = """
          SELECT DRUG_PRODUCT_ID, PREFERRED_TRADE_NAME, PRODUCT_LINE, PRODUCT_TYPE, FLAG_PASSIVE, PRODUCT_NUMBER
          FROM DIM_DRUG_PRODUCT
          WHERE lower(PREFERRED_TRADE_NAME) LIKE '%' || :search1 || '%' or lower(PRODUCT_LINE) LIKE '%' || :search2 || '%' or lower(PRODUCT_NUMBER) LIKE '%' || :search3 || '%'
          ORDER BY PREFERRED_TRADE_NAME ASC
          """
    cursor_ariel.execute(sql, {"search1": search, "search2": search, "search3": search})
    products = []
    for row in cursor_ariel.fetchall():
        r = reg(cursor_ariel, row, False)  # reg() is a row helper defined elsewhere
        product = {
            "drug_product_id": r.DRUG_PRODUCT_ID,
            "preferred_trade_name": r.PREFERRED_TRADE_NAME,
            "product_line": r.PRODUCT_LINE,
            "product_type": r.PRODUCT_TYPE,
            "flag_passive": r.FLAG_PASSIVE,
            "product_number": r.PRODUCT_NUMBER
        }
        # logging.info("Adding Product: %r", product)
        products.append(product)
    if len(products) == 0:
        products = None
    pool.release(conn_ariel)
    return products
When you create the pool, use threaded=True.
See How to use Python Flask with Oracle Database.
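A minimal sketch of that change, reusing the question's connect string (threaded=True tells the Oracle client libraries to expect calls from multiple threads, which a uWSGI/Flask app will make):

import cx_Oracle

# threaded=True is the key change for a multi-threaded Flask/uWSGI app.
pool = cx_Oracle.SessionPool(
    "user", "password", "myserver.company.net:1521/myservice",
    min=10, max=10, increment=0,
    getmode=cx_Oracle.SPOOL_ATTRVAL_WAIT,
    encoding="UTF-8",
    threaded=True,
)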

Adding methods gives an 'index out of range' error?

When I add the vital methods=["POST", "GET"] argument to my route, my code gives the error:
Line 127, in PatientDashboard
    """.format(Data[0][0]))
IndexError: list index out of range
I understand what this error normally means, but I don't understand how adding methods affects the size of my list.
@app.route("/PatientDashboard.html", methods=["GET", "POST"])
def PatientDashboard():
    Username = request.args.get("Username")
    Connection = sqlite3.connect(DB)
    Cursor = Connection.cursor()
    Data = Cursor.execute("""
        SELECT *
        FROM PatientTable
        WHERE Username = '{}'
        """.format(Username))
    Data = Data.fetchall()
    AllAppointments = Cursor.execute("""
        SELECT Title, Firstname, Surname, TimeSlot, Date, Status
        FROM AppointmentTable
        INNER JOIN DoctorTable ON AppointmentTable.DoctorID = DoctorTable.DoctorID
        WHERE PatientID = '{}'
        """.format(Data[0][0]))
    AllAppointments = AllAppointments.fetchall()
The SQL statements work perfectly (the database isn't empty), and adding print(Data) after the first SQL statement outputs a nested list.
I have tried troubleshooting by looking at various other questions on Stack Overflow, but with no luck.
Thank you ever so much in advance.
EDIT 1:
Username = (request.args.get("Username"))
print("Username: ", Username)
This gives the correct output, e.g. Username: nx_prv, but after using the POST request the output becomes Username: None.
EDIT 2:
I have managed to fix this using flask.sessions. The problem was that request.args.get("Username") was getting 'reset' on every request.
The scenario I envision: the route was tested with a GET method (because there was no methods argument), and everything was fine. The methods argument was added so a POST could be tested, and it "stopped working". But it didn't really stop working; it's just not built to handle a POST request.
From the Flask docs on the request object, the two salient attributes are:
form
    A MultiDict with the parsed form data from POST or PUT requests. Please keep in mind that file uploads will not end up here, but instead in the files attribute.
args
    A MultiDict with the parsed contents of the query string. (The part in the URL after the question mark.)
So a GET request will "populate" args, and a POST request, form. Username will therefore be None from the line Username = request.args.get("Username") on a POST request.
You can determine which method was used by interrogating the method attribute of the request object:
method
The current request method (POST, GET etc.)
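A sketch of a route that serves both verbs (assuming the form posts a field also named Username):

from flask import request

@app.route("/PatientDashboard.html", methods=["GET", "POST"])
def PatientDashboard():
    if request.method == "POST":
        # POSTed form data arrives in request.form...
        Username = request.form.get("Username")
    else:
        # ...while GET parameters arrive in the query string, request.args.
        Username = request.args.get("Username")
    ...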

Python Heroku Scheduler DB commit doesn't change values - DetachedInstanceError

I am using the following code in the free Heroku scheduler add-on to send emails to certain users. After the email is sent, a value in the DB must be changed; to be more precise:
setattr(user, "stats_email_sent", True)
Somehow db_session.commit() is executed but doesn't save the new value. Here is the code:
all_users = User.query.all()
for user in all_users:
    if user.stats_email_sent is False and user.number_of_rooms > 0:
        if date.today() <= user.end_offer_date and date.today() >= user.end_offer_date - relativedelta(days=10):
            print user.id, user.email, user.number_of_rooms, user.bezahlt
            if user.bezahlt is True:
                with app.app_context():
                    print "app context true", user.id, user.email, user.number_of_rooms, user.bezahlt
                    html = render_template('stats_email_once.html', usersname=user.username)
                    subject = u"Update"
                    setattr(user, "stats_email_sent", True)
                    #send_email(user.email, subject, html, None)
            else:
                with app.app_context():
                    print "app context false", user.id, user.email, user.number_of_rooms, user.bezahlt
                    html = render_template('stats_email_once.html', usersname=user.username)
                    subject = u"Update"
                    setattr(user, "stats_email_sent", True)
                    #send_email(user.email, subject, html, None)
print "executing commit"
db_session.commit()
I tried moving db_session.commit() right after setattr; then it works, but only for one user (the first user):
setattr(user, "stats_email_sent", True)
db_session.commit()
And it gives me this in the logs:
sqlalchemy.orm.exc.DetachedInstanceError: Instance <User at 0x7f5c04d462d0> is not bound to a Session; attribute refresh operation cannot proceed
I also found some topics on detaching an instance, but I don't need to detach anything here, do I?
EDIT:
I have now also tried adding db_session.expire_on_commit = False. Sadly this had no effect.
I also looked at bulk updates, which didn't work either.
I even tried to ignore and pass the DetachedInstanceError.
I can't believe that updating multiple rows at once is such an issue. I am running out of ideas here; any help is appreciated. Everything I try either has no effect or runs into the DetachedInstanceError.
EDIT
I solved the issue (I had a similar one yesterday). I assume the query and its variables were consumed, which is why it didn't work.
To solve this I created a list, client_id_list = [], and appended the ids of all users who got the email and whose value needed to be changed.
Then I created a completely new query and ran essentially the same code with the same logic, but here the variables were not consumed, I guess? I won't answer the question myself, because I am not sure whether this is true. Here is the code, appended to the code above, that changes the values:
all_users_again = User.query.all()
for user in all_users_again:
    if user.id in client_id_list:
        setattr(user, "stats_email_sent", True)
db_session.commit()
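For reference, the second pass can also be expressed as a single query-level UPDATE (a sketch assuming the same User model and db_session; the asker reports bulk updates did not work in their setup, so treat this as the textbook form rather than a verified fix):

# Flag all collected users in one statement instead of a second Python loop.
User.query.filter(User.id.in_(client_id_list)).update(
    {"stats_email_sent": True},
    synchronize_session=False,  # don't try to sync in-memory instances
)
db_session.commit()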

Python-ldap search: Size Limit Exceeded

I'm using the python-ldap library to connect to our LDAP server and run queries. The issue I'm running into is that despite setting a size limit on the search, I keep getting SIZELIMIT_EXCEEDED errors on any query that would return too many results. I know that the query itself is working because I will get a result if the query returns a small subset of users. Even if I set the size limit to something absurd, like 1, I'll still get a SIZELIMIT_EXCEEDED on those bigger queries. I've pasted a generic version of my query below. Any ideas as to what I'm doing wrong here?
result = self.ldap.search_ext_s(self.base, self.scope, '(personFirstMiddle=<value>*)', sizelimit=5)
When the LDAP client requests a size-limit, that is called a 'client-requested' size limit. A client-requested size limit cannot override the size-limit set by the server. The server may set a size-limit for the server as a whole, for a particular authorization identity, or for other reasons - whichever the case, the client may not override the server size limit. The search request may have to be issued in multiple parts using the simple paged results control or the virtual list view control.
Here's a Python 3 implementation that I came up with after heavily editing what I found here and in the official documentation. At the time of writing, it works with the pip package python-ldap version 3.2.0.
import ldap
from ldap.controls import SimplePagedResultsControl

def get_list_of_ldap_users():
    hostname = "google.com"
    username = "username_here"
    password = "password_here"
    base = "dc=google,dc=com"

    print(f"Connecting to the LDAP server at '{hostname}'...")
    connect = ldap.initialize(f"ldap://{hostname}")
    connect.set_option(ldap.OPT_REFERRALS, 0)
    connect.simple_bind_s(username, password)

    search_flt = "(personFirstMiddle=<value>*)"  # get all users with a specific middle name
    page_size = 500  # how many users to request per page; depends on the server maximum (default is 1000)
    searchreq_attrlist = ["cn", "sn", "name", "userPrincipalName"]  # change these to the attributes you care about
    req_ctrl = SimplePagedResultsControl(criticality=True, size=page_size, cookie='')
    msgid = connect.search_ext(base=base, scope=ldap.SCOPE_SUBTREE, filterstr=search_flt,
                               attrlist=searchreq_attrlist, serverctrls=[req_ctrl])

    total_results = []
    pages = 0
    while True:  # loop over all of the pages using the same cookie, otherwise the search will fail
        pages += 1
        rtype, rdata, rmsgid, serverctrls = connect.result3(msgid)
        for user in rdata:
            total_results.append(user)
        pctrls = [c for c in serverctrls if c.controlType == SimplePagedResultsControl.controlType]
        if pctrls:
            if pctrls[0].cookie:
                # Copy the cookie from the response control to the request control and ask for the next page
                req_ctrl.cookie = pctrls[0].cookie
                msgid = connect.search_ext(base=base, scope=ldap.SCOPE_SUBTREE, filterstr=search_flt,
                                           attrlist=searchreq_attrlist, serverctrls=[req_ctrl])
            else:
                break
        else:
            break
    return total_results
This will return a list of all matching users, but you can edit it as required to return what you want without hitting the SIZELIMIT_EXCEEDED issue :)
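Usage is then simply (the hard-coded hostname and credentials above are placeholders):

users = get_list_of_ldap_users()
print(f"Fetched {len(users)} entries")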
