I have this example of an SQL database. I want to use a certain item from the database in a math calculation, but I can't because the value looks like this: (25.0,) instead of just 25.0. Please see the attached picture:
https://imgur.com/a/j7JOZ5H
import sqlite3
#Create the database:
connection = sqlite3.connect('DataBase.db')
c = connection.cursor()
c.execute('CREATE TABLE IF NOT EXISTS table1 (name TEXT,age NUMBER)')
c.execute("INSERT INTO table1 VALUES('Jhon',25)")
#Pull out the value:
c.execute('SELECT age FROM table1')
data = c.fetchall()
print(data[0])
#simple math calculation:
r = data[0] + 1
print(r)
According to PEP 249, the specification that most Python DB-APIs (including sqlite3) follow, fetchall returns a sequence of sequences, usually a list of tuples. Therefore, to retrieve the single value in the first column for arithmetic, index the result twice: once for the specific row and then once for the position within that row.
data = c.fetchall()
data[0][0]
Alternatively, fetchone returns a single row (the first or next row in the result set), so you only index once, for the position within that single row.
data = c.fetchone()
data[0]
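Putting the two together, a minimal sketch of the question's code with the fix applied (using an in-memory database so it runs standalone):
import sqlite3

connection = sqlite3.connect(':memory:')  # in-memory database stands in for DataBase.db
c = connection.cursor()
c.execute('CREATE TABLE IF NOT EXISTS table1 (name TEXT, age NUMBER)')
c.execute("INSERT INTO table1 VALUES ('Jhon', 25)")

c.execute('SELECT age FROM table1')
row = c.fetchone()   # a single row, e.g. (25,)
age = row[0]         # the value in the first (and only) column: 25
r = age + 1
print(r)             # 26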
The returned data from fetchall always comes back as a list of tuples, even if each tuple contains only one value. Your data variable is actually:
[(25,)]
You need to use:
print(data[0][0])
r = data[0][0] + 1
print(r)
I am trying to retrieve values from a table to use in a calculation, using the code below:
mycursor = mydb.cursor()
mycursor.execute("SELECT number FROM info")
rows = mycursor.fetchall()
print (rows)
This returns the list:
[(Decimal('30.00'),), (Decimal('66.00'),), (Decimal('72.00'),)]
How can I retrieve only the numerical values, in a list or tuple like:
[30.00, 66.00, 72.00]
The original column in mydb is most likely a DECIMAL type, which the connector returns as Python Decimal objects. You can cast the type either in the MySQL query or in the Python code.
1) In the MySQL query:
SELECT CAST(number AS DOUBLE) FROM info
but this still returns the fetched rows as
[(30.0,), (66.0,), (72.0,)], because each tuple holds the columns of one row of the query result.
2) In the Python code:
converted_rows = list(map(lambda row:float(row[0]), rows))
It will return the list [30.0, 66.0, 72.0].
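The same conversion can also be written as a list comprehension with tuple unpacking, assuming each row has exactly one column:
converted_rows = [float(number) for (number,) in rows]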
I have many tables inside a database. I am using pymysql to connect to my db and have fetched all the table names. When I print them, they are stored something like this:
tables = (('table1',), ('table2',), ('table3',), etc...)
What I want to do is turn each table into a dataframe named after that table. This is what I've tried:
for table in tables:
    table[0] = pd.read_sql(f'select * from {table[0]}', con = conn)
I have also tried converting the tuples into a list; however, I think the string is the problem. How do I get rid of the apostrophes so I can use the names as variables, or is there no such way?
I couldn't find any directly relevant question on SO, only some posts that gave me ideas:
TypeError: 'tuple' object does not support item assignment when swapping values
How can I iterate over only the first variable of a tuple
Tuples are immutable objects, so you cannot do this; the result returned by the first query is a tuple of tuples, and item assignment fails:
some_tuple[some_index] = value
This, instead, will generate a tuple of one-element tuples, where each element is a dataframe:
tables = tuple((pd.read_sql(f'select * from {table_name}', con = conn),) for (table_name,) in tables)
EDIT:
You could instead create a tuple of (table_name, dataframe) pairs:
tables = tuple((table_name, pd.read_sql(f'select * from {table_name}', con = conn)) for (table_name,) in tables)
Or, if you want to look each dataframe up by its table name, use a dictionary:
tables = {table_name: pd.read_sql(f'select * from {table_name}', con = conn) for (table_name,) in tables}
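A self-contained sketch of the dictionary approach, using an in-memory SQLite database to stand in for the pymysql connection from the question (the table name and contents are made up):
import sqlite3
import pandas as pd

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE table1 (x INTEGER)')
conn.execute('INSERT INTO table1 VALUES (1), (2)')

tables = (('table1',),)  # the shape a "SHOW TABLES"-style query returns

frames = {table_name: pd.read_sql(f'select * from {table_name}', con=conn)
          for (table_name,) in tables}

print(frames['table1'])  # look a dataframe up by its table name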
I select one column from a table in a database and want to iterate through the results. Why is each result a tuple instead of a single value?
con = psycopg2.connect(…)
cur = con.cursor()
stmt = "SELECT DISTINCT inventory_pkg FROM {}.{} WHERE inventory_pkg IS NOT NULL;".format(schema, tableName)
cur.execute(stmt)
con.commit()
referenced = cur.fetchall()
for destTbl in referenced:#why is destTbl a single element tuple?
print('destTbl: '+str(referenced))
stmt = "SELECT attr_name, attr_rule FROM {}.{} WHERE ppm_table_name = {};".format(schema, tableName, destTbl)#this fails because the where clause gets messed up because ‘destTbl’ has a comma after it
cur.execute(stmt)
Because that's what the DB-API does: it always returns a tuple for each row in the result.
It's pretty simple to refer to destTbl[0] wherever you need to.
Because you are getting rows from your database, and the API is being consistent.
If your query asked for * columns, or a specific number of columns that is greater than 1, you'd also need a tuple or list to hold those columns for each row.
In other words, just because you only have one column in this query doesn't mean the API suddenly will change what kind of object it returns to model a row.
Simply always treat a row as a sequence and use indexing or tuple assignment to get a specific value out. Use:
inventory_pkg = destTbl[0]
or
inventory_pkg, = destTbl
for example.
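Applied to the loop from the question, a sketch that combines tuple unpacking with a bound parameter, which also avoids the quoting problem mentioned in the code comment (it assumes the cursor and the schema/tableName variables from the question):
for (dest_tbl,) in referenced:  # unpack the single-column row
    print('destTbl: ' + dest_tbl)
    stmt = "SELECT attr_name, attr_rule FROM {}.{} WHERE ppm_table_name = %s".format(schema, tableName)
    cur.execute(stmt, (dest_tbl,))  # psycopg2 binds and quotes the value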
I am new to SQLite and I think this question should have been answered before, but I haven't been able to find an answer.
I have a list of around 50 elements that I need to write to an SQLite table with 50 columns.
I went over the documentation at https://docs.python.org/2/library/sqlite3.html, but in the examples the values are specified by ? placeholders (so for writing 3 values, 3 ? marks are specified).
sample code:
row_to_write = range(50)
conn = sqlite3.connect('C:\sample_database\sample_database')
c = conn.cursor()
I tried the following.
Approach 1:
c.execute("INSERT INTO PMU VALUES (?)", row_to_write)
ERROR: OperationalError: table PMU has 50 columns but 1 values were supplied
Approach 2 (a generator that iterates over the list):
def write_row_value_generator(row_to_write):
    for val in row_to_write:
        yield (val,)
c.executemany("INSERT INTO PMU VALUES (?)", write_row_value_generator(row_to_write))
ERROR: OperationalError: table PMU has 50 columns but 1 values were supplied
What is the correct way of doing this?
Assuming that your row_to_write has exactly the same number of items as PMU has columns, you can create the string of ? marks easily using str.join: ','.join(['?'] * len(row_to_write))
import sqlite3
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("create table PMU (%s)" % ','.join("col%d"%i for i in range(50)))
row_to_write = list(range(100,150,1))
row_value_markers = ','.join(['?']*len(row_to_write))
c.execute("INSERT INTO PMU VALUES (%s)"%row_value_markers, row_to_write)
conn.commit()
You need to specify the names of the columns. SQLite will not guess those for you.
columns = ['A', 'B', 'C', ...]
n = len(row_to_write)
sql = "INSERT INTO PMU ({}) VALUES ({})".format(
    ', '.join(columns[:n]), ', '.join(['?'] * n))
c.execute(sql, row_to_write)
Note also that if your rows have a variable number of columns, then you might want to rethink your database schema. Usually each row should have a fixed number of columns, and the variability expresses itself in the number of rows inserted, not the number of columns used.
For example, instead of having 50 columns, perhaps you need just one extra column, whose value is one of 50 names (what used to be a column name). Each value in row_to_write would have its own row, and for each row you would have two columns: the value and the name of the column.
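A sketch of that narrow-table layout (the attribute/value column names here are invented for illustration):
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('CREATE TABLE PMU (attribute TEXT, value REAL)')  # two columns instead of 50

row_to_write = list(range(100, 150))
names = ['col%d' % i for i in range(len(row_to_write))]  # what used to be the column names
c.executemany('INSERT INTO PMU VALUES (?, ?)', zip(names, row_to_write))
conn.commit()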
I have a database table with multiple fields which I am querying and pulling out all data which meets certain parameters. I am using psycopg2 for python with the following syntax:
cur.execute("SELECT * FROM failed_inserts where insertid='%s' AND site_failure=True"%import_id)
failed_sites= cur.fetchall()
This returns the correct values as a list, with the data's integrity and order maintained. However, I want to use the returned list elsewhere in my application, and all I have is this list of values; it is not a dictionary with the field names as keys for those values. Rather than having to do
desiredValue = failed_sites[13]  # where 13 is the arbitrary index of desiredValue
I want to be able to query by the field name like:
desiredValue = failed_sites[fieldName]  # where fieldName is the name of the field I am looking for
Is there a simple way and efficient way to do this?
Thank you!
cursor.description will give you the column information (http://www.python.org/dev/peps/pep-0249/#cursor-objects). You can get the column names from it and use them to create a dictionary.
cursor.execute('SELECT ...')
columns = []
for column in cursor.description:
    columns.append(column[0].lower())
failed_sites = {}
for row in cursor:
    for i in range(len(row)):
        failed_sites[columns[i]] = row[i]
        if isinstance(row[i], basestring):
            failed_sites[columns[i]] = row[i].strip()
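Note that the loop above keeps only the last row's values in failed_sites; if every row is needed, a list-of-dicts variant (a sketch, assuming the query has just been executed) would be:
rows_as_dicts = []
for row in cursor:
    rows_as_dicts.append(dict(zip(columns, row)))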
The "Dictionary-like cursor", part of psycopg2.extras, seems to be what you're looking for.
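For example (a sketch; the connection parameters are placeholders, and import_id is the value from the question):
import psycopg2
import psycopg2.extras

con = psycopg2.connect("dbname=mydb user=me")  # placeholder connection parameters
cur = con.cursor(cursor_factory=psycopg2.extras.DictCursor)
cur.execute("SELECT * FROM failed_inserts WHERE insertid = %s AND site_failure = TRUE", (import_id,))
failed_sites = cur.fetchone()
print(failed_sites['insertid'])  # rows can be addressed by column name as well as by index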