I was writing a Python program and used SQLite for it. But after deploying it, I realized I should use PostgreSQL so that the database is available globally.
import os
import psycopg2
DB_Host = os.environ['DB_Host']
DB_Database = os.environ['DB_Database']
DB_User = os.environ['DB_User']
DB_Port = os.environ['DB_Port']
DB_Password = os.environ['DB_Password']
connection = psycopg2.connect(database = DB_Database, user = DB_User, password = DB_Password, host = DB_Host, port = DB_Port)
That is how I connected to my database.
Now, in the following code, I tried to create a table and insert something into it, but those functions don't work.
def sql_table(connection):
    cur = connection.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS tasks(id integer PRIMARY KEY, user_id integer, task text)")
    connection.commit()
    cur.close()

def sql_insert(connection, user_id, new_task):
    cur = connection.cursor()
    cur.execute("INSERT INTO tasks(user_id, task) VALUES(%s, %s)", (user_id, new_task))
    connection.commit()
    cur.close()
Where could the mistake be?
One thing I can see right away is that the INSERT must fail because it does not supply an id. A NULL value will be assigned to that column, but the column is a PRIMARY KEY, hence NOT NULL, so that causes an error.
If you want the id to be auto-generated, use
CREATE TABLE IF NOT EXISTS tasks (
    id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    user_id integer,
    task text
)
or, on older PostgreSQL versions,
...
id serial PRIMARY KEY
...
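If it helps, here is how the two functions from the question might look with the identity column folded in; this is only a sketch, with the connection handling unchanged from the question:
def sql_table(connection):
    # id is generated by Postgres, so inserts can simply omit it
    cur = connection.cursor()
    cur.execute(
        "CREATE TABLE IF NOT EXISTS tasks("
        "id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY, "
        "user_id integer, task text)"
    )
    connection.commit()
    cur.close()

def sql_insert(connection, user_id, new_task):
    # the missing id column is filled in automatically
    cur = connection.cursor()
    cur.execute("INSERT INTO tasks(user_id, task) VALUES (%s, %s)",
                (user_id, new_task))
    connection.commit()
    cur.close()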
Related
In Python I've connected to a Postgres database using the following code:
conn = psycopg2.connect(
    host="localhost",
    port="5432",
    database="postgres",
    user="postgres",
    password="123"
)
cur = conn.cursor()
I have created a table called departments and want to insert data into the database from a CSV file. I read the csv in as follows:
departments = pd.DataFrame(pd.read_csv('departments.csv'))
And I am trying to insert this data into the table with the following code:
for row in departments.itertuples():
    cur.execute('''
        INSERT INTO departments VALUES (?,?,?)
        ''',
        row.id, row.department_name, row.annual_budget)
conn.commit()
which I've seen done in various articles but I keep getting the error:
TypeError: function takes at most 2 arguments (4 given)
How can I correct this, or is there another way to insert the csv?
You have to pass the row information as a tuple. Try this instead:
for row in departments.itertuples():
    cur.execute('''
        INSERT INTO departments VALUES (%s, %s, %s)
        ''',
        (row.id, row.department_name, row.annual_budget))
conn.commit()
See the docs for more info: https://www.psycopg.org/docs/usage.html
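If you would rather push the whole DataFrame in one call instead of row by row, psycopg2's executemany accepts a list of tuples. A rough sketch, assuming the departments table has exactly these three columns in this order:
rows = [(row.id, row.department_name, row.annual_budget)
        for row in departments.itertuples(index=False)]
cur.executemany("INSERT INTO departments VALUES (%s, %s, %s)", rows)
conn.commit()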
I don't run into any error messages anymore, but when I refresh my database nothing is actually inserted. I'm using psycopg2 and pgAdmin 4.
import psycopg2 as p
con = p.connect("dbname =Feedbacklamp user =postgres password= fillpw host=localhost port=5432")
cur = con.cursor()
sql = "INSERT INTO audiolevels(lokaalnummer,audiolevel,tijdstip) VALUES (%s,%s,%s)"
val = "100"
val1 = 100
val2 = "tijdstip"
cur.execute(sql,(val,val1,val2))
con.commit
cur.close
con.close
These are the values I want inserted into my PostgreSQL database (which I check in pgAdmin).
I think your problem is that con.commit() should be a function call. You are missing the parentheses, which makes it an attribute access instead of a call, so nothing is committed. The same goes for the other methods, cur.close() and con.close().
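In other words, the last lines of the script should read like this (a sketch of just that fix, nothing else changed):
cur.execute(sql, (val, val1, val2))
con.commit()   # the parentheses make this an actual call
cur.close()
con.close()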
I am creating a Python app that will store my homework in a database (managed through phpMyAdmin). Here is my problem:
At the moment I store every entry with an ID (1, 2, 3, 4...), a date (23/06/2018...), and a task (read one chapter of a book). Now I would like to sort the entries by date, because when I look up what I have to do, I would prefer to see first whatever is due soonest. For example:
If I have two tasks, one for 25/07/2018 and the other for 11/07/2018, I would like the 11/07/2018 one shown first, even though it was added later than the 25/07/2018 one. I am using Python (3.6), pymysql and phpMyAdmin to manage the database.
One idea I had was to run a Python script every 2 hours that sorts all the elements in the database, but I have no clue how to do that.
Below is the code that inserts the values into the database and then shows them all.
def dba():
    connection = pymysql.connect(host='localhost',
                                 user='root',
                                 password='Adminhost123..',
                                 db='deuresc',
                                 charset='utf8mb4',
                                 cursorclass=pymysql.cursors.DictCursor)
    try:
        with connection.cursor() as cursor:
            # Create a new record
            sql = "INSERT INTO `deures` (`data`, `tasca`) VALUES (%s, %s)"
            cursor.execute(sql, (data, tasca))
        # connection is not autocommit by default. So you must commit to save
        # your changes.
        connection.commit()
        with connection.cursor() as cursor:
            # Read a single record
            sql = "SELECT * FROM `deures` WHERE `data`=%s"
            cursor.execute(sql, (data,))
            resultat = cursor.fetchone()
            print('Has introduït: ' + str(resultat))
    finally:
        connection.close()

def dbb():
    connection = pymysql.connect(host='localhost',
                                 user='root',
                                 password='Adminhost123..',
                                 db='deuresc',
                                 charset='utf8mb4',
                                 cursorclass=pymysql.cursors.DictCursor)
    try:
        with connection.cursor() as cursor:
            # Read a single record
            sql = "SELECT * FROM `deures`"
            cursor.execute(sql)
            resultat = cursor.fetchall()
            for i in resultat:
                print(i)
    finally:
        connection.close()
Can someone help?
You don't sort the database. You sort the results of the query when you ask for data. So in your dbb function you should do:
SELECT * FROM `deures` ORDER BY `data`
assuming that data is the field with the date.
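Applied to the dbb function from the question, only the query string changes. A small sketch, assuming `data` is stored as a DATE/DATETIME column so it orders chronologically:
with connection.cursor() as cursor:
    # let MySQL return the rows already ordered by due date
    sql = "SELECT * FROM `deures` ORDER BY `data`"
    cursor.execute(sql)
    resultat = cursor.fetchall()
    for i in resultat:
        print(i)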
I am quite new to programming. I have written the following code by researching on Stack Overflow and other sites. I am trying to upload a CSV file to MS SQL Server. Every time I run it, it connects and then a message pops up: 'Previous SQL was not a query'. I am not sure how to tackle this. Any suggestions and help will be appreciated.
import pyodbc
import _csv

source_path = r'C:\Users\user\Documents\QA Canvas\module2\Module 2 Challenge\UFO_Merged.csv'
source_expand = open(source_path, 'r')
details = source_expand.readlines
print('Connecting...')
try:
    conn = pyodbc.connect(r'DRIVER={ODBC Driver 13 for SQL Server};'r'SERVER=FAHIM\SQLEXPRESS;'r'DATABASE=Ash;'r'Trusted_Connection=yes')
    print('Connected')
    cur = conn.cursor()
    print('Cursor established')
    sqlquery = """
    IF EXISTS
    (
        SELECT TABLE_NAME, TABLE_SCHEMA FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'UFO_MERGED' AND TABLE_SCHEMA = 'dbo')
    BEGIN
        DROP TABLE [dbo].[UFO_MERGED]
    END
    CREATE TABLE [dbo].[UFO_MERGED]
    ( [ID] smallint
     ,[COMMENTS] varchar(max)
     ,[FIRST OCCURANCE] datetime
     ,[CITY] varchar(60)
     ,[COUNTRY] varchar(20)
     ,[SHAPE] varchar(20)
     ,[SPEED] smallint
     ,[SECOND OCCURANCE] datetime
     PRIMARY KEY(id)
    ) ON [PRIMARY]
    """
    result = cur.execute(sqlquery).fetchall()
    for row in result:
        print(row)
    print("{} rows returned".format(len(result)))
    sqlstr = """
    Insert into [dbo].[UFO_Merged] values ('()','()','()','()','()','()','()','()')
    """
    for row in details[1:]:
        row_data = row.split(',')
        sqlquery = sqlstr.format(row_data[0], row_data[1], row_data[2], row_data[3], row_data[4], row_data[5], row_data[6], row_data[7])
        result = cur.execute(sqlquery)
    conn.commit()
    conn.close()
except Exception as inst:
    if inst.args[0] == '08001':
        print("Cannot connect to the server")
    elif inst.args[0] == '28000':
        print("Login failed - check connection string")
    else:
        print(inst)
Well, make sure the SQL works first, before you try to introduce other technologies (Python, R, C#, etc.) on top of it. The SQL looks a little funky, but I'm not a SQL expert, so I can't say for sure without recreating your setup. Maybe try something a bit less complex, get that working, and then graduate to something more advanced. Does the following work for you?
import pyodbc
user='sa'
password='PC#1234'
database='climate'
port='1433'
TDS_Version='8.0'
server='192.168.1.103'
driver='FreeTDS'
con_string='UID=%s;PWD=%s;DATABASE=%s;PORT=%s;TDS=%s;SERVER=%s;driver=%s' % (user,password, database,port,TDS_Version,server,driver)
cnxn=pyodbc.connect(con_string)
cursor=cnxn.cursor()
cursor.execute("INSERT INTO mytable(name,address) VALUES (?,?)",('thavasi','mumbai'))
cnxn.commit()
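Once a single parameterized insert like that works, looping over the CSV is just more of the same. A rough sketch with the standard csv module; the path is taken from the question, the eight columns are assumed to line up with UFO_MERGED, and the datetime columns may still need explicit conversion:
import csv

with open(r'C:\Users\user\Documents\QA Canvas\module2\Module 2 Challenge\UFO_Merged.csv', newline='') as f:
    reader = csv.reader(f)   # handles quoted commas, unlike str.split(',')
    next(reader)             # skip the header row
    for row in reader:
        cursor.execute(
            "INSERT INTO dbo.UFO_MERGED VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
            row)
cnxn.commit()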
I have a simple database application in Python with SQLite. I wrote a simple program to create a database and insert some values into it. However, the database is created but the new values are not inserted, and I don't know why:
#!/usr/bin/python
# -*- coding: utf-8 -*-

import sqlite3 as lite
import sys

def CreateTable():
    try:
        connection = lite.connect(':memory:')
        with connection:
            cursor = connection.cursor()
            sql = 'CREATE TABLE IF NOT EXISTS Authors' + '(ID INT PRIMARY KEY NOT NULL, FIRSTNAME TEXT, LASTNAME TEXT, EMAIL TEXT)'
            cursor.execute(sql)
            data = '\n'.join(connection.iterdump())
            with open('authors.sql', 'w') as f:
                f.write(data)
    except lite.Error, e:
        if connection:
            connection.rollback()
    finally:
        if connection:
            cursor.close()
            connection.close()

def Insert(firstname, lastname, email):
    try:
        connection = lite.connect('authors.sql')
        with connection:
            cursor = connection.cursor()
            sql = "INSERT INTO Authors VALUES (NULL, %s, %s, %s)" % (firstname, lastname, email)
            cursor.execute(sql)
            data = '\n'.join(connection.iterdump())
            with open('authors.sql', 'w') as f:
                f.write(data)
    except lite.Error, e:
        if connection:
            connection.rollback()
    finally:
        if connection:
            cursor.close()
            connection.close()

CreateTable()
Insert('Tibby', 'Molko', 'tibby.molko#yahoo.co.uk')
You are not calling commit on your connection. You should also not write to the database file yourself; the database engine writes to the file.
Try going through the first few examples in the sqlite3 documentation; it should be clear then.
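A bare-bones version of what a committed insert looks like (a sketch; the file name is arbitrary and the table is assumed to exist already):
import sqlite3

connection = sqlite3.connect('authors.db')
cursor = connection.cursor()
cursor.execute("INSERT INTO Authors VALUES (NULL, ?, ?, ?)",
               ('Tibby', 'Molko', 'tibby.molko#yahoo.co.uk'))
connection.commit()   # without this the insert never reaches the file
connection.close()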
You have misunderstood what connection.iterdump() is for. You are creating SQL text, instructions for SQLite to execute again at a later date. It is not the database itself. If all you wanted was to output SQL statements you can just write your SQL statements directly, there is little point in passing it through SQLite first.
You also cannot 'connect' SQLite to the text file with SQL statements; you'd have to load those statements as text and replay them all. That's not what I think you wanted, however.
You can connect to an existing database file to insert additional rows. Each time you want to add data, just connect:
def CreateTable():
    connection = lite.connect('authors.db')
    try:
        with connection:
            cursor = connection.cursor()
            sql = '''\
                CREATE TABLE IF NOT EXISTS Authors (
                    ID INT PRIMARY KEY NOT NULL,
                    FIRSTNAME TEXT,
                    LASTNAME TEXT,
                    EMAIL TEXT)
                '''
            cursor.execute(sql)
    finally:
        connection.close()

def Insert(firstname, lastname, email):
    connection = lite.connect('authors.db')
    try:
        with connection:
            cursor = connection.cursor()
            sql = "INSERT INTO Authors VALUES (NULL, ?, ?, ?)"
            cursor.execute(sql, (firstname, lastname, email))
    finally:
        connection.close()
Note that using the connection as a context manager already ensures that the transaction is either committed or rolled back, depending on there being an exception.
On the whole, you want to be informed of exceptions here; if you cannot connect to the database, you'd want to know about it, so I simplified the connection handling accordingly. Closing a connection auto-closes any remaining cursors.
Last but far from least, I switched your insertion to using SQL parameters. Never use string interpolation where parameters can be used instead. Using parameters makes it possible for the database to cache statement parse results and most of all prevents SQL injection attacks.
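Put together, calling the two functions and reading a row back might look like this; a small sketch, where the SELECT is only there to show the same parameter style on a query:
CreateTable()
Insert('Tibby', 'Molko', 'tibby.molko#yahoo.co.uk')

connection = lite.connect('authors.db')
try:
    with connection:
        cursor = connection.cursor()
        cursor.execute("SELECT FIRSTNAME, LASTNAME FROM Authors WHERE EMAIL = ?",
                       ('tibby.molko#yahoo.co.uk',))
        print(cursor.fetchone())
finally:
    connection.close()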
You cannot connect to a text file full of SQL commands.
sqlite3.connect expects or creates a binary database file.
You also didn't commit. Writes to the database must be committed; for read (SELECT) operations that is not needed.
try:
    with connection:
        cursor = connection.cursor()
        sql = "INSERT INTO Authors VALUES (NULL, ?, ?, ?)"
        cursor.execute(sql, (firstname, lastname, email))
        connection.commit()  # commit so the insert is actually written
finally:
    connection.close()