Using a QTableWidget to view my SQLite database entries.
The first column of the table is hidden since it's the ID that gets auto-assigned to each entry.
Now I want the user to select the row they want to delete and then press the delete button.
I'm using the following to do that:
class StartScherm(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self):
        super(StartScherm, self).__init__()
        self.setupUi(self)
        self.database_laden()

        # -- Button presses with .connect -- #
        self.tableSnelleKijk.doubleClicked.connect(self.weergave_scherm)

        self.tableSnelleKijk.setSelectionBehavior(
            QtWidgets.QTableView.SelectRows)
        self.tableSnelleKijk.setColumnHidden(0, True)
Now I'm trying to make a function so that once the user clicks the delete button in the GUI, the function reads the ID and I can delete that row from the SQLite database. I use this:
def delete(self):
    index = self.tableSnelleKijk.selectedItems()
    print(index[0].text())
Unfortunately, this gives me the first visible column's data as text instead of the hidden column with the ID.
How do I access the hidden data?
EDIT: changed column to row
The column is hidden in the table view only; in SQLite it's simply there to give me a row ID I can use to access the current row to delete/edit.
Database code:
c.execute("""CREATE TABLE Medailles (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    orde text,
    periode text,
    locatie text,
    beschrijving text
    )""")
The column is hidden in the QTableView with .setColumnHidden.
The database gets populated when the user decides to enter a new entry; this is the function that takes care of that:
def add(self):
    orde = self.orde_comboBox.currentText()
    periode = self.periode_input.text()
    locatie = self.locatie_input.text()
    beschrijving = self.beschrijving_input.toPlainText()
    medaille = Medaille(orde, periode, locatie, beschrijving)
    with conn:
        c.execute("INSERT INTO Medailles (orde, periode, locatie, beschrijving) "
                  "VALUES (:orde, :periode, :locatie, :beschrijving)",
                  {'orde': medaille.orde, 'periode': medaille.periode,
                   'locatie': medaille.locatie, 'beschrijving': medaille.beschrijving})
    self.close()
If the data is in an invisible column, the items in that column cannot be selected. You can still access those indexes (and their data) by using the sibling(row, column) function (or siblingAtColumn(column) for Qt >= 5.11):
def delete(self):
    rows = set()
    # selectedIndexes() returns QModelIndex objects; sibling() gives access
    # to the (hidden) column 0 of the same row, and data() returns its value
    for index in self.tableSnelleKijk.selectedIndexes():
        id = index.sibling(index.row(), 0).data()
        rows.add(id)
    print('IDs to delete: {}'.format(sorted(rows)))
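From there, a minimal sketch of actually performing the deletion, assuming the module-level sqlite3 conn and c from the question's add() function, and assuming database_laden() reloads the table widget from the database:

# not part of the original answer: delete the collected IDs, then refresh
with conn:
    c.executemany("DELETE FROM Medailles WHERE id = ?",
                  [(id,) for id in rows])
self.database_laden()  # assumed to repopulate the table widget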
Do note that if you need to write data to the database or edit its layout, you should consider using a basic QTableView with a QtSql.QSqlTableModel instead; otherwise you might face inconsistencies, bugs, or even data corruption if you're not really careful with your implementation.
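For reference, a minimal sketch of that model-based approach (the database file name here is a placeholder, not from the question):

import sys
from PyQt5 import QtWidgets, QtSql

app = QtWidgets.QApplication(sys.argv)

db = QtSql.QSqlDatabase.addDatabase('QSQLITE')
db.setDatabaseName('medailles.db')  # hypothetical database file
db.open()

model = QtSql.QSqlTableModel()
model.setTable('Medailles')
model.select()

view = QtWidgets.QTableView()
view.setModel(model)
view.setSelectionBehavior(QtWidgets.QTableView.SelectRows)
view.hideColumn(0)  # hide the id column, as before
view.show()

# deleting then goes through the model instead of raw SQL:
# model.removeRow(view.currentIndex().row())
# model.submitAll()

sys.exit(app.exec_())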
To get the value of a hidden column in a QTableWidget (PyQt5):

row = self.tblWidget.currentRow()
col = 0  # position of the hidden column - usually 0
ID = self.tblWidget.item(row, col).text()
I have two input fields that shall allow user selection via
- ID, in the number input
- (sorted) name, in the selection box
Changing one input field should update the other to keep them in sync.
How do you implement that behaviour with streamlit?
What I tried so far
ID selected -> update name selection box:
import streamlit as st

users = [(1, 'Jim'), (2, 'Jim'), (3, 'Jane')]
users.sort(key=lambda user: user[1])  # sort by name

selected_id = st.sidebar.number_input('ID', value=1)
options = ['%s (%d)' % (name, id) for id, name in users]
index = [i for i, user in enumerate(users) if user[0] == selected_id][0]
selected_option = st.sidebar.selectbox('Name', options, index)
Name selected -> update ID number input (using st.empty()):
import re
import streamlit as st

users = [(1, 'Jim'), (2, 'Jim'), (3, 'Jane')]
users.sort(key=lambda user: user[1])  # sort by name

id_input = st.sidebar.empty()
options = ['%s (%d)' % (name, id) for id, name in users]
selected_option = st.sidebar.selectbox('Name', options)
# e.g. get 2 from "Jim (2)"
id = int(re.match(r'\w+ \((\d+)\)', selected_option).group(1))
selected_id = id_input.number_input('ID', value=id)
To keep the widgets in sync, there are two issues that need to be addressed:
We need to be able to tell when either widget has caused the current selection to change; and
We need to update the state of both widgets at the end of the script so that the browser keeps the new values when the script is re-run for visual updates.
For (1), it looks like there's no way of doing it without introducing some kind of persistent state. Without a way to store the current selection between script runs, we can only compare the two widgets' values with each other and with the default value. This causes problems once the widgets have been changed: For example, if the default value is 1, the value of the number input is 2, and the value from the selectbox is 3, we cannot tell whether it is the number input or the selectbox that was most recently changed (and therefore which one to update to the latest value).
For (2), it's a simple matter of using placeholders and refreshing the widgets whenever the selection has changed. Importantly, the widgets should not be refreshed if the selection has not changed, or we will get DuplicateWidgetID errors (since the content of the widgets will not have changed either, and they will have the same generated keys).
Here's some code that shows one way of dealing with both issues and capturing the user's selection at the end. Note that using @st.cache in this way will persist a single global selection across all browser sessions, and will allow anyone to clear the selection via the Streamlit menu -> 'Clear cache', which could be a problem if multiple users are accessing the script at the same time.
import re
import streamlit as st

# Simple persistent state: The dictionary returned by `get_state()` will be
# persistent across browser sessions.
@st.cache(allow_output_mutation=True)
def get_state():
    return {}

# The actual creation of the widgets is done in this function.
# Whenever the selection changes, this function is also used to refresh the
# input widgets so that they reflect their new state in the browser when the
# script is re-run to get visual updates.
def display_widgets():
    users = [(1, "Jim"), (2, "Jim"), (3, "Jane")]
    users.sort(key=lambda user: user[1])  # sort by name
    options = ["%s (%d)" % (name, id) for id, name in users]
    index = [i for i, user in enumerate(users) if user[0] == state["selection"]][0]
    return (
        number_placeholder.number_input(
            "ID", value=state["selection"], min_value=1, max_value=3,
        ),
        option_placeholder.selectbox("Name", options, index),
    )

state = get_state()

# Set to the default selection
if "selection" not in state:
    state["selection"] = 1

# Initial layout
number_placeholder = st.sidebar.empty()
option_placeholder = st.sidebar.empty()

# Grab input and detect changes
selected_number, selected_option = display_widgets()
input_changed = False

if selected_number != state["selection"] and not input_changed:
    # Number changed
    state["selection"] = selected_number
    input_changed = True
    display_widgets()

selected_option_id = int(re.match(r"\w+ \((\d+)\)", selected_option).group(1))

if selected_option_id != state["selection"] and not input_changed:
    # Selectbox changed
    state["selection"] = selected_option_id
    input_changed = True
    display_widgets()

st.write(f"The selected ID was: {state['selection']}")
I'm trying to insert rows into a table after changing its schema in Cassandra, using the cqlengine Python library. Before the change, the model looked like:
class MetricsByDevice(Model):
    device = columns.Text(primary_key=True, partition_key=True)
    datetime = columns.DateTime(primary_key=True, clustering_order="DESC")
    load_power = columns.Double()
    inverter_power = columns.Double()
I've changed the schema to this, adding four columns (DSO, node, park and commercializer):
class MetricsByDevice(Model):
    device = columns.Text(primary_key=True, partition_key=True)
    datetime = columns.DateTime(primary_key=True, clustering_order="DESC")
    DSO = columns.Text(index=True, default='DSO_1'),
    node = columns.Text(index=True, default='Node_1'),
    park = columns.Integer(index=True, default=6),
    commercializer = columns.Text(index=True, default='Commercializer_1'),
    load_power = columns.Double()
    inverter_power = columns.Double()
Then, I've synced the table with a script containing the line
sync_table(MetricsByDate)
I've checked the database and the four columns have been created. The existing rows have these fields set to NULL (as expected).
Then I modified the script in charge of inserting rows in batches so that it includes the values for the new fields. It looks like:
batch = BatchQuery()
for idx, message in enumerate(consumer):
    data = message.value
    ts_to_insert = dateutil.parser.parse(data['timestamp'])
    filters = get_filters(message.partition_key)
    MetricsByDate.batch(batch).create(
        device=device,
        date=str(ts_to_insert.date()),
        time=str(ts_to_insert.time()),
        created_at=now,
        DSO=str(filters['DSO']),
        node=str(filters['node']),
        park=int(filters['park']),
        commercializer=str(filters['commercializer']),
        load_power=data['loadPower'],
        inverter_power=data['inverterPower'],
    )
    if idx % 100 == 0:  # Insert every 100 messages
        batch.execute()
        # Reset batch
        batch = BatchQuery()
I've already checked that the values corresponding to the new fields aren't None and have the correct type. Nevertheless, the rows are inserted correctly except for the new fields, which end up NULL in Cassandra.
The batch insertion does not return any errors. I don't know if I'm missing something, or if I need an extra step to update the schema. I've been looking in the docs, but I can't find anything that helps.
Is there anything I'm doing wrong?
EDIT
After Alex Ott's suggestion, I inserted the rows one by one, changing the code to:
for idx, message in enumerate(consumer):
    data = message.value
    ts_to_insert = dateutil.parser.parse(data['timestamp'])
    filters = get_filters(message.partition_key)
    metrics_by_date = MetricsByDate(
        device=device,
        date=str(ts_to_insert.date()),
        time=str(ts_to_insert.time()),
        created_at=now,
        DSO=str(filters['DSO']),
        node=str(filters['node']),
        park=int(filters['park']),
        commercializer=str(filters['commercializer']),
        load_power=data['loadPower'],
        inverter_power=data['inverterPower'],
    )
    metrics_by_date.save()
If before executing the line metrics_by_date.save() I add these print statements:
print(metrics_by_date.DSO)
print(metrics_by_date.park)
print(metrics_by_date.load_power)
print(metrics_by_date.device)
print(metrics_by_date.date)
The output is:
(<cassandra.cqlengine.columns.Text object at 0x7ff0b492a670>,)
(<cassandra.cqlengine.columns.Integer object at 0x7ff0b492d190>,)
256.99
SQ3-3.2.3.1-70-17444
2020-04-22
For the new fields I'm getting a cassandra column object (wrapped in a tuple), but for the others I get their values. It may be a clue, because it continues to insert NULL in the new columns.
Finally I got it.
It was something silly: in the model definition, for no known reason, I had added commas to separate the fields instead of line breaks. Each trailing comma turns the assignment into a one-element tuple wrapping the column object, which is why the prints above showed tuples and why those attributes were never registered as real columns.
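A quick plain-Python illustration of the effect (the string here just stands in for a cqlengine column object):

DSO = 'stand-in for a column object',  # note the trailing comma
print(type(DSO))  # <class 'tuple'>
print(DSO)        # ('stand-in for a column object',)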
So correcting the model definition to:
class MetricsByDevice(Model):
    device = columns.Text(primary_key=True, partition_key=True)
    datetime = columns.DateTime(primary_key=True, clustering_order="DESC")
    DSO = columns.Text(index=True, default='DSO_1')
    node = columns.Text(index=True, default='Node_1')
    park = columns.Integer(index=True, default=6)
    commercializer = columns.Text(index=True, default='Commercializer_1')
    load_power = columns.Double()
    inverter_power = columns.Double()
It works!!
I'm working on computing the average of x records, and I don't want to include the last one (the record where I trigger the action). I can trigger the action on an existing record or on a new one (not yet in the database).
Here is my code:
@api.one
@api.depends('stc')
def _compute_average_gross(self):
    if self.stc:
        base_seniority = 12
        match_seniority = self.seniority.split()
        total_seniority = int(match_seniority[0]) + int(match_seniority[2]) * 12
        if total_seniority < 12:
            base_seniority = total_seniority if total_seniority else 1  # avoid dividing by 0
        # if the hr.payslip is already in db
        if self._origin.id:
            limit = 13
            # could be self.env.cr.execute()
            sum_sbr = sum(self.search([('employee_id', '=', self.employee_id.id)], order='create_date desc', limit=limit)[1:].mapped('line_ids').filtered(lambda x: x.code == 'SBR').mapped('amount'))
            sum_average_gross = sum(self.search([('employee_id', '=', self.employee_id.id)], order='create_date desc', limit=limit)[1:].mapped('average_gross'))
        else:
            limit = 12
            # could be self.env.cr.execute()
            sum_sbr = sum(self.search([('employee_id', '=', self.employee_id.id)], order='create_date desc', limit=limit).mapped('line_ids').filtered(lambda x: x.code == 'SBR').mapped('amount'))
            sum_average_gross = sum(self.search([('employee_id', '=', self.employee_id.id)], order='create_date desc', limit=limit).mapped('average_gross'))
        self.average_gross = round((sum_sbr + sum_average_gross) / base_seniority, 2)
With that I get an error that self doesn't have _origin; I tried with origin but got the same error. I also tried with self.context['params'].get('id'), but it doesn't work as expected.
Could you help me?
To check whether a record is not yet saved in the database, do this:

if isinstance(self.id, models.NewId):
    # record is not saved in the database
    pass  # do your logic here

# record is saved in the database
if not isinstance(self.id, models.NewId):
    pass  # ....
For all who are coming to this after the accepted answer: the correct check should be this:

if isinstance(self.id, models.NewId) and not self._origin:
    # record is not saved in the database
    pass  # do your logic here

# record is saved in the database
if not isinstance(self.id, models.NewId) or self._origin:
    pass  # ....
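Wrapped up as a small helper (a sketch assuming Odoo 13's models.NewId and _origin semantics; the function name is mine):

from odoo import models

def is_unsaved(record):
    """True if the record exists only in memory: it has a NewId
    and no _origin pointing at a database row."""
    return isinstance(record.id, models.NewId) and not record._origin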
I'm not sure whether _origin already existed in Odoo 10, but I needed the same check in Odoo 13.
I haven't tested this on a single record, but with res.partner and the partner contacts (field child_ids) the problem is that if you open an existing contact and change any field, Odoo transfers the existing record into a new record, and you get a false positive: record.id is a new ID even though the origin exists in the DB.
I haven't tested the copy functionality, but I'm sure Odoo correctly resets the origin in the new record, so my answer should hold.
I have the following code which I would like to do an upsert:
def add_electricity_reading(
    *, period_usage, period_started_at, is_estimated, customer_pk
):
    from sqlalchemy.dialects.postgresql import insert

    values = dict(
        customer_pk=customer_pk,
        period_usage=period_usage,
        period_started_at=period_started_at,
        is_estimated=is_estimated,
    )
    insert_stmt = insert(ElectricityMeterReading).values(**values)
    do_update_stmt = insert_stmt.on_conflict_do_update(
        constraint=ElectricityMeterReading.__table_args__[0].name,
        set_=dict(
            period_usage=period_usage,
            period_started_at=period_started_at,
            is_estimated=is_estimated,
        ),
    )

    conn = DBSession.connection()
    conn.execute(do_update_stmt)
    return DBSession.query(ElectricityMeterReading).filter_by(**dict(
        period_usage=period_usage,
        period_started_at=period_started_at,
        customer_pk=customer_pk,
        is_estimated=is_estimated,
    )).one()
def test_updates_existing_record_for_started_at_if_already_exists():
    started_at = datetime.now(timezone.utc)
    existing = add_electricity_reading(
        period_usage=0.102,
        customer_pk=customer.pk,
        period_started_at=started_at,
        is_estimated=True,
    )
    started_at = existing.period_started_at
    reading = add_electricity_reading(
        period_usage=0.200,
        customer_pk=customer.pk,
        period_started_at=started_at,
        is_estimated=True,
    )
    # existing record was updated
    assert reading.period_usage == 0.200
    assert reading.id == existing.id
In my test I first add a record with period_usage=0.102, then execute the same call again with period_usage=0.200. But when the final query at the bottom returns the record, its period_usage is still 0.102.
Any idea why this could be happening?
This behaviour is explained in "Session Basics" under "What does the Session do?": the session holds references to objects it has loaded in a structure called the identity map, and so ensures that only one unique object per primary-key value exists at a time during a session's lifetime. You can verify this with the following assertion in your own code:
assert existing is reading
The Core insert (or update) statements you are executing do not keep the session in sync with the changes taking place in the database the way, for example, Query.update() does. In order to fetch the new values, you can expire the ORM-loaded state of the unique object:
DBSession.expire(existing) # or reading, does not matter
# existing record was updated
assert reading.period_usage == 0.200
assert reading.id == existing.id
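As a side note (my suggestion, not part of the original answer): Session.refresh() reloads the instance's attributes immediately, instead of waiting for the next attribute access the way expire() does:

# issues a SELECT right away and overwrites the stale attributes
DBSession.refresh(existing)
assert existing.period_usage == 0.200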
I am currently trying to populate two fields. They both already exist in a table that I want to populate with data from existing feature classes. The idea is to copy all data from the desired feature classes that match a particular project number. The rows that match the project number are copied over to a blank template with the matching fields. So far all is good, except I need to push the OBJECTID and the name of the feature class into two fields within the table.
def featureClassName(table_path):
    arcpy.AddMessage("Calculating Feature Class Name...")
    print "Calculating Feature Class Name..."
    featureClass = "FeatureClass"
    SDE_ID = "SDE_ID"
    fc_desc = arcpy.Describe(table_path)
    lists = arcpy.ListFields(table_path)
    print lists
    with arcpy.da.SearchCursor(table_path, featureClass = "\"NAME\"" + " Is NULL") as cursor:
        for row in cursor:
            print row
            if row.FEATURECLASS = str.replace(row.FEATURECLASS, "*", fc):
                cursor.updateRow(row)
                print row
                del cursor, row
            else:
                pass
The code above is my attempt, out of many, to populate the field with the name of the feature class.
I have attempted to do the same with the OID.
for fc in fcs:
    print fc
    if fc:
        print "Making Layer..."
        lyr = arcpy.MakeFeatureLayer_management(fc, r"in_memory\temp", whereClause)
        fcCount = int(arcpy.GetCount_management(lyr).getOutput(0))
        print fcCount
        if fcCount > 0:
            tbl = arcpy.CopyRows_management(lyr, r"in_memory\temp2")
            arcpy.AddMessage("Checking for Feature Class Name...")
            arcpy.AddMessage("Appending...")
            print "Appending..."
            arcpy.Append_management(tbl, table_path, "NO_TEST")
            print "Checking for Feature Class Name..."
            featureClassName(table_path)
            del fc, tbl, lyr, fcCount
            arcpy.Delete_management(r"in_memory\temp")
            arcpy.Delete_management(r"in_memory\temp2")
        else:
            arcpy.AddMessage("Pass... " + fc)
            print ("Pass... " + fc)
            del fc, lyr, fcCount
            arcpy.Delete_management(r"in_memory\temp")
            pass
This code is the main loop over the feature classes within the dataset, where I create a new layer/table to use for copying the data to the table. The feature class name and OID don't have data to push, so that's where I am stuck.
Thanks, everybody.
You have a number of things wrong. First, you are not setting up the cursor correctly: it has to be an UpdateCursor if you are going to update, and you called a SearchCursor, which you called incorrectly, by the way. Second, you used = (assignment) instead of == (equality comparison) in the line if row.FEATURECLASS .... Then, two lines below that, your indentation is messed up on several lines. And it's not clear at all that your function knows the value of fc; pass that as an arg to be sure. A bunch of other problems exist, but let's just give you an example that will work, and you can study it:
def featureClassName(table_path, fc):
    '''Will update the FEATURECLASS field in table_path rows with
    value of fc (string) where FEATURECLASS field is currently null'''
    arcpy.AddMessage("Calculating Feature Class Name...")
    print "Calculating Feature Class Name..."
    # delimit field correctly for the query expression
    df = arcpy.AddFieldDelimiters(fc, 'FEATURECLASS')
    ex = df + " is NULL"
    flds = ['FEATURECLASS']
    # in case we don't get rows, del will bomb below unless we put in a ref
    # to row
    row = None
    # do the work
    with arcpy.da.UpdateCursor(table_path, flds, ex) as cursor:
        for row in cursor:
            row[0] = fc  # or basename, don't know which you want
            cursor.updateRow(row)
    del cursor, row
Notice we are now passing the name of the fc as an arg, so you will have to deal with that in the rest of your code. Also, it's best to use AddFieldDelimiters, since different fc's require different delimiters, and the docs are not clear at all on this (sometimes they are just wrong).
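For example, the call inside your main loop would then pass the feature class along (a sketch; fc and table_path are the variables from your loop):

# inside the fcCount > 0 branch, after the Append step:
featureClassName(table_path, fc)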
good luck, Mike