I am having some trouble making my script increment my PK correctly. According to the SQLAlchemy documentation, some special configuration is needed to make autoincrement work with SQLite. Here is my script:
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

def db_stuff():
    engine = create_engine('sqlite:///test.db', echo=True)
    metadata = MetaData()
    db = Table('users', metadata,
               Column('id', Integer, primary_key=True),
               Column('name', String),
               Column('fullname', String),
               Column('password', String),
               sqlite_autoincrement=True)
    metadata.create_all(engine)
    return engine.connect(), db
def add_to_db():
    ret = db_stuff()
    conn = ret[0]
    db = ret[1]
    try:
        conn.execute("INSERT INTO users VALUES ('1','john','smith john','23')")
        result = conn.execute(db.select())
        for row in result:
            print row
    finally:
        conn.close()
It would be cool if you could help me figure out what I'm missing here; I'm starting to get desperate...
The problem is that the "id" is not incremented each time, and I get an error that it should be unique when I run the script twice.
TIA
Try
conn.execute("INSERT INTO users (name, fullname, password) VALUES ('john','smith john','23')")
id is autoincremented, hence we should not pass it; however, we do need to list the columns so the database knows where the values ('john', 'smith john', '23') should go.
It should work.
Do this:
conn.execute("INSERT INTO users (name, fullname, password) VALUES ('john','smith john','23')")
You are setting the id to 1 - always. Just leave it out and it will be filled in due to auto-increment.
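For what it's worth, the same insert can be done without hand-writing the SQL at all, by reusing the Table object the question already builds. This is only a sketch based on the code above, not verified against the original setup:

conn, users = db_stuff()
try:
    # Let SQLite assign the autoincremented id by not mentioning it at all.
    conn.execute(users.insert().values(name='john', fullname='smith john', password='23'))
    for row in conn.execute(users.select()):
        print row
finally:
    conn.close()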
I am using SQL Alchemy to access an SQLite Database. However, I am facing some bizarre behaviour with the where() filter. The table is defined as follows:
from sqlalchemy import Column, MetaData, String, Table, select, Integer

metadata = MetaData()
self._table = Table(
    self._table_name,
    metadata,
    Column("email", String(64), primary_key=True, nullable=False),
    Column("name", String(64), nullable=False),
    Column("password", String(64), nullable=False),
)
And I insert a row using:
query = self._table.insert().values(
    name=user.name, email=user.email, password=user.password
)
session.execute(query)
session.commit()
Now, when I try to fetch the row using the following code, it returns an empty list:
query = select([self._table]).where(self._table.c.email == "toto@toto.com")
print("Compiled Query ", query)
data = self._execute_select(query)
print("Found Data: ", data)
The above code gives output:
Columns are: ImmutableColumnCollection(user.name, user.email, user.password)
Compiled Query SELECT "user".name, "user".email, "user".password
FROM "user"
WHERE "user".email = :email_1
Found Data: []
If I change it slightly to use "name" instead of "email" it seems to work as expected:
(ONLY the second line has been changed!)
print("Columns are: ", self._table.c)
query = select([self._table]).where(self._table.c.name == "pawan")
print("Compiled Query ", query)
data = self._execute_select(query)
print("Found Data: ", data)
The output for the above code is:
Columns are: ImmutableColumnCollection(user.email, user.name, user.password)
Compiled Query SELECT "user".email, "user".name, "user".password
FROM "user"
WHERE "user".name = :name_1
Found Data: [('tototo', 'pawan', 'toto@toto.com')]
No other operations are performed between these two lines so the data is definitely present in the DB the whole time. I noticed that changing the "name" to "password" and value to "tototo" does not find any data either. So essentially, I am ONLY able to search using "name".
Please help me understand this error.
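One debugging step that may help (only a suggestion, not something from the original post): render the statement with its parameters inlined, then run that exact SQL in the sqlite3 shell and compare it with what is actually stored in each column.

# Sketch: compile the query with literal values so it can be copied into the
# sqlite3 shell; assumes the same self._table as above.
query = select([self._table]).where(self._table.c.email == "toto@toto.com")
print(query.compile(compile_kwargs={"literal_binds": True}))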
rollback.py :
def write_to_db(dataset_id):
    try:
        initialize_datamodels()
        EpdUmpPushRollback = datamodels.relational_tables.EpdUmpPushRollback
        with session_scope() as session:
            epdumppushrollback_obj = EpdUmpPushRollback()
            epdumppushrollback_obj.dataset_id = dataset_id
            epdumppushrollback_obj.record_id = ''
            epdumppushrollback_obj.operator_name = 'vis'
            epdumppushrollback_obj.activation_flag = 'active'
            epdumppushrollback_obj.record_creation_time = now()
            epdumppushrollback_obj.start_time = now()
            session.add(epdumppushrollback_obj)
            session.flush()
    except Exception as e:
        #err = "Error in updating the table epd_ump_push_rollback "
        #_log.exception(err)
        _log.exception("Error in updating the table {}".format(e))
table.py :
"""epd_ump_push_rollback_table
Revision ID: 4e4d99a8e544
Revises: c010f4d4b319
Create Date: 2018-12-19 18:04:30.271380
"""
from alembic import op
from sqlalchemy import Column, String, INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, \
Enum, ForeignKey, Sequence, MetaData
# revision identifiers, used by Alembic.
revision = '4e4d99a8e544'
down_revision = '2396e1b7de5c'
branch_labels = None
depends_on = None
meta = MetaData()
seq_obj = Sequence('record_id_seq', metadata=meta)
def upgrade():
activation_flag_state = Enum('active', 'inactive', name="activation_flag_state")
op.create_table('epd_ump_push_rollback',
Column('dataset_id', String, ForeignKey('epd_ip_dataset.dataset_id'),
primary_key=True),
Column('record_id', INTEGER, seq_obj, server_default=seq_obj.next_value(),
primary_key=True),
Column('operator_name', String, nullable=False),
Column('activation_flag', activation_flag_state, nullable=False),
Column('record_creation_time', TIMESTAMP),
Column('start_time', TIMESTAMP),
Column('end_time', TIMESTAMP))
def downgrade():
op.drop_table('epd_ump_push_rollback')
op.execute('DROP type activation_flag_state')
Explanation:
In the rollback.py file I am writing into the db. I am setting up a session with the db (PostgreSQL) using with session_scope() as session:. I am creating an object of the table EpdUmpPushRollback and setting its values appropriately. The column record_id should be generated from a sequence, which I am defining in table.py, and I am using Alembic to upgrade my schema to the new one that will have the table EpdUmpPushRollback.
I have two questions now.
For the column where we have defined the sequence, is it mandatory to set it with epdumppushrollback_obj.record_id = '', or does it get added automatically?
What should be the name of the sequence? Whenever I try to add an entry to the table, it throws this error.
Error: sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError)
"record_id_seq" is not a sequence [SQL: 'INSERT INTO
epd_ump_push_rollback (dataset_id, operator_name, activation_flag,
record_creation_time, start_time, end_time) VALUES (%(dataset_id)s,
%(operator_name)s, %(activation_flag)s, now(), now(), %(end_time)s)
RETURNING epd_ump_push_rollback.record_id'] [parameters:
{'dataset_id': '20181221_1200_mno', 'operator_name': 'vis',
'activation_flag': 'active', 'end_time': None}]
The line
seq_obj = Sequence('record_id_seq', metadata=meta)
is not enough.
You need to add, in upgrade(), above the table creation:
op.execute(schema.CreateSequence(seq_obj))
Also, a DROP SEQUENCE is needed in the downgrade() function.
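Putting that together, the relevant parts of the migration would look roughly like this (a sketch only; seq_obj and the create_table call are the ones already shown in table.py):

from sqlalchemy.schema import CreateSequence, DropSequence

def upgrade():
    # Create the sequence first, so server_default=seq_obj.next_value() on
    # record_id has something to reference.
    op.execute(CreateSequence(seq_obj))
    # ... op.create_table('epd_ump_push_rollback', ...) exactly as above ...

def downgrade():
    op.drop_table('epd_ump_push_rollback')
    op.execute(DropSequence(seq_obj))
    op.execute('DROP type activation_flag_state')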
I already had this problem once, and to solve it I created the sequence manually, so you can try this Postgres statement:
CREATE SEQUENCE record_id_seq
    START WITH 1
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1;
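If you would rather keep that statement inside the migration instead of running it by hand, it can also be issued through Alembic (a sketch; this is the same statement as above):

def upgrade():
    # Create the sequence before the table that uses it as a default.
    op.execute(
        "CREATE SEQUENCE record_id_seq "
        "START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE CACHE 1"
    )
    # ... rest of upgrade() unchanged ...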
I use SQLAlchemy to make changes to a table in a SQL Server database, and would like to get back the number of affected rows.
I know there is the .rowcount attribute of ResultProxy, but, as for example this answer demonstrates, .rowcount is not necessarily the same as the number of affected rows.
SQL Server uses @@ROWCOUNT to access the number of rows affected by the previous statement execution.
Is there a way to modify a SQLAlchemy expression that uses an insert / update statement so that it ends with SELECT @@ROWCOUNT?
For example, given:
from sqlalchemy import Table, Column, Integer, String, MetaData, create_engine
from sqlalchemy import select, text

url = 'mssql+pyodbc://dsn'
engine = create_engine(url)
metadata = MetaData()
users = Table('users', metadata,
              Column('id', Integer, primary_key=True),
              Column('name', String),
              Column('fullname', String),
              )
ins = users.insert().values(name='jack', fullname='Jack Jones')
upd1 = users.update().values(fullname='Jack Doe').where(users.c.name == 'jack')
upd2 = users.update().values(fullname='Jack Doe').where(users.c.name == 'jack')
I could prepend SELECT @@ROWCOUNT to an update statement:
sel = select([text('@@ROWCOUNT')])
sql1 = sel.suffix_with(upd2)
print(sql1.compile(engine, compile_kwargs={"literal_binds": True}))
Yielding "wrong" query:
SELECT @@ROWCOUNT UPDATE users SET fullname='Jack Doe' WHERE users.name = 'jack'
Trying to do the "right" thing:
sql2 = upd2.suffix_with(sel)
Raises AttributeError since 'Update' object has no attribute 'suffix_with'.
So, is there a way to get the desired SQL query:
UPDATE users SET fullname='Jack Doe' WHERE users.name = 'jack';
SELECT @@ROWCOUNT
using the SQL expression language, without fully textual constructs?
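For reference, this is the value the question is trying to improve on, namely the driver-reported rowcount of the update (a minimal sketch reusing engine and upd2 from above):

with engine.begin() as conn:
    result = conn.execute(upd2)
    # As noted above, this is the DBAPI rowcount, which is not guaranteed to
    # match @@ROWCOUNT in every situation.
    print(result.rowcount)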
I am executing some code in Python, but need to access some data from my database. This is the first time I have used SQLAlchemy.
I have a table in my database called reports.bigjoin; it has columns with the following types:
id (varchar)
id2 (varchar)
ts_min (int4)
ts_local_min (int4)
10_meter_cell (int8)
ds (date)
ds_log (date)
ds_local (date)
I need to know the number of rows for a given set of dates. For example, within python I want to execute
x= select count(*) from reports.bigjoin where (ds>='2016-01-01' and ds<='2016-01-04')
My attempt so far has been
from sqlalchemy import create_engine, MetaData, Table, Column
from sqlalchemy import Integer, String, Date
from sqlalchemy.orm import sessionmaker

engine = create_engine(url).connect()
Session = sessionmaker(bind=engine)
session = Session()
metadata = MetaData(engine)
moz_bookmarks = Table('reports.bigjoin', metadata,
                      Column('id', String, primary_key=True),
                      Column('id2', String),
                      Column('ts_min', Integer),
                      Column('ts_local', Integer),
                      Column('10m_cell', Integer),
                      Column('ds', Date),
                      Column('ds_log', Date),
                      Column('ds_local', Date)
                      )
x = session.query(moz_bookmarks).filter(
    (moz_bookmarks.ds >= '2016-01-01', moz_bookmarks.ds <= '2016-01-04')).count()
This has failed. Any help would be greatly appreciated.
from sqlalchemy import func

cnt = (
    session
    .query(func.count('*').label('cnt'))
    .filter(moz_bookmarks.c.ds >= '2016-01-01')
    .filter(moz_bookmarks.c.ds <= '2016-01-04')
)
print(cnt.scalar())  # .scalar() executes the query and returns the single count value
After searching a bit and applying "How to use variables in SQL statement in Python?", I found that
connection = create_engine('url').connect()
result = connection.execute("select count(*) from reports.bigjoin where (ds>= %s and ds<= %s)", (x,y))
solved the problem
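For completeness, the same count can also be written with the expression language instead of a raw string (a sketch only; it assumes the moz_bookmarks Table from the attempt above, which may additionally need schema='reports' rather than a dotted table name):

from sqlalchemy import func, select

# Count rows whose ds falls in the requested date range.
stmt = select([func.count()]).select_from(moz_bookmarks).where(
    moz_bookmarks.c.ds.between('2016-01-01', '2016-01-04')
)
print(connection.execute(stmt).scalar())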
I'm trying to create a composite primary key with SQLAlchemy; however, when adding data it tells me that the columns are not unique, even though, as a pair, I'm sure they are.
Is there something wrong with my syntax?
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine
import dataset

Base = declarative_base()

class Work(Base):
    __tablename__ = 'Work'

    id = Column(String(25), primary_key=True)
    user = Column(String(10), primary_key=False)
    date = Column(Integer, primary_key=False)
    time = Column(Integer, primary_key=False)
    ticket = Column(String(10), primary_key=True)
    updated = Column(Integer, primary_key=False)
    timestamp = Column(Integer, primary_key=False)

engine = create_engine('sqlite:///work_items.db', pool_recycle=3600)
Base.metadata.create_all(engine)
I've defined the id and ticket columns with primary_key=True, which is how I'm supposed to do it according to the docs - I just can't seem to figure out what's causing this issue.
I know that I could simply define the id column as a string composed of a concatenation of id+ticket, but I thought it would be better to use the composite primary key feature, because that's what the feature's for!
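For what it's worth, the same composite key can also be spelled with an explicit PrimaryKeyConstraint in __table_args__. This is only an alternative spelling, sketched below; it does not change the behaviour described in the edits that follow:

from sqlalchemy import Column, Integer, String, PrimaryKeyConstraint
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Work(Base):
    __tablename__ = 'Work'
    # Equivalent to marking id and ticket with primary_key=True.
    __table_args__ = (PrimaryKeyConstraint('id', 'ticket'),)

    id = Column(String(25))
    user = Column(String(10))
    date = Column(Integer)
    time = Column(Integer)
    ticket = Column(String(10))
    updated = Column(Integer)
    timestamp = Column(Integer)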
EDIT: It has been suggested that another question regarding defining a foreign key constraint on a composite primary key serves as an answer to this question. However, it does not: my database has only one table, and therefore no foreign key relationships. Despite that, I am still encountering an error:
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) columns id, ticket are not unique
EDIT2: this is the error I'm getting:
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) columns id, ticket are not unique
[SQL: u'INSERT INTO "Work" (id, user, date, time, ticket, updated, timestamp) VALUES (?, ?, ?, ?, ?, ?, ?)']
[parameters: (u'108-4900', u'kiba', u'1451390400000', u'30', u'S-1527', u'1452863269208', 1458724496050.0)]
Here's the thing: there's only one ticket with that name, and that ticket itself has only one id... so I'm really scratching my head here
EDIT3:
try:
    table['Work'].insert(dict(user=work_item.authorLogin,
                              date=work_item.date,
                              time=work_item.duration,
                              id=work_item.id,
                              ticket=issue.id,
                              updated=issue.updated,
                              timestamp=time.time()*1000.0))
except exc.SQLAlchemyError:
    print 'error'
    print work_item.authorLogin
    print work_item.date
    print work_item.duration
    print work_item.id
    print issue.id
    print issue.updated