I am working on a project in which I am using Python to store sensor data in a SQLite database.
If the table does not exist, it is created like this:
query_db('create table if not exists %s (id INTEGER PRIMARY KEY AUTOINCREMENT, value double, datetimestamp DATETIME, user CHAR(2));' % (sensor_name+'_VALUES'))
Next, I want to insert the values from an HTTP POST request:
sensor_value = request.json['value']
dtstamp = request.json['timestamp']
user_name = request.json['user']
result = query_db('INSERT INTO %s ("value", "datetimestamp", "user") VALUES ( "%s" ,"%s" , "%s");' % (sensor_name + '_VALUES', sensor_value, dtstamp, user_name))
I don't get an error, but it also seems like the insert is not executed.
The request looks like this:
{
"value" : 22,
"timestamp" : "2017-02-17 22:22:22",
"user" : "TE"
}
Anyone know why?
You need to commit the transaction to persist the changes in the database:
db.commit()
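For instance, a minimal self-contained sketch of the flow above, assuming query_db wraps a sqlite3 connection named db, using the sample values from the request (TEMP_VALUES stands in here for sensor_name + '_VALUES'; ? parameter binding replaces the % string formatting to avoid quoting problems):
import sqlite3

db = sqlite3.connect('sensors.db')  # hypothetical database file
db.execute('CREATE TABLE IF NOT EXISTS TEMP_VALUES '
           '(id INTEGER PRIMARY KEY AUTOINCREMENT, value DOUBLE, '
           'datetimestamp DATETIME, user CHAR(2));')
db.execute('INSERT INTO TEMP_VALUES (value, datetimestamp, user) VALUES (?, ?, ?);',
           (22, '2017-02-17 22:22:22', 'TE'))
db.commit()  # without this, the INSERT is lost when the connection closes
db.close()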
Here is some additional information about why and when you need to commit:
python sqlite3, how often do I have to commit?
I am planning to run this script as a Lambda function on a daily basis to get the list of IP addresses, find any suspicious ones, and add them to the WAF block rule. My problem is that in the script I might need to manually change the table name, since it can't overwrite an existing table of the same name on AWS Athena, and I also need to update the S3 input bucket for the current day. Is there a way to automate this as well?
#!/usr/bin/env python3
import boto3

# Function for executing Athena queries
def run_query(query, database, s3_output):
    client = boto3.client('athena')
    response = client.start_query_execution(
        QueryString=query,
        QueryExecutionContext={
            'Database': s3_accesslog1
        },
        ResultConfiguration={
            'OutputLocation': s3_output,
        }
    )
    print('Execution ID: ' + response['QueryExecutionId'])
    return response
#Athena configuration
s3_input = 's3://smathena/athenatest/'
s3_ouput = 's3://python-demo/Test-Athena/'
database = 's3_accesslog1'
table = 'Test_output'
#Athena database and table definition
create_database = "CREATE DATABASE IF NOT EXISTS %s;" % (database)
create_table = \
"""CREATE EXTERNAL TABLE IF NOT EXISTS %s.%s (
`Date` DATE,
Time STRING,
Location STRING,
SCBytes BIGINT,
RequestIP STRING,
Method STRING,
Host STRING,
Uri STRING,
Status INT,
Referrer STRING,
UserAgent STRING,
UriQS STRING,
Cookie STRING,
ResultType STRING,
RequestId STRING,
HostHeader STRING,
Protocol STRING,
CSBytes BIGINT,
TimeTaken FLOAT,
XForwardFor STRING,
SSLProtocol STRING,
SSLCipher STRING,
ResponseResultType STRING,
CSProtocolVersion STRING,
FleStatus STRING,
FleEncryptedFields INT,
CPort INT,
TimeToFirstByte FLOAT,
XEdgeDetailedResult STRING,
ScContent STRING,
ScContentLen BIGINT,
ScRangeStart BIGINT,
ScRangeEnd BIGINT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '%s'
TBLPROPERTIES ('skip.header.line.count' = '2');""" % ( database, table, s3_input )
#Query definitions
query_1 = "SELECT requestip, count(*) FROM %s.%s group by requestip order by count(*) desc" % (database, table)
query_2 = "SELECT * FROM %s.%s where useragent = googlebot" % (database, table)
#Execute all queries
queries = [ create_database, create_table, query_1, query_2 ]
for q in queries:
    print("Executing query: %s" % (q))
    res = run_query(q, database, s3_ouput)
I think the problem is that the command is being run with:
QueryExecutionContext={
    'Database': s3_accesslog1
},
However, the database doesn't exist yet -- in fact, you are trying to create it! (Note also that s3_accesslog1 is an undefined Python variable there; the function receives the database name in its database parameter.)
So, try removing that entry and see whether it works. It means you will need to specify the database/schema in each query, but that is often a good idea to avoid ambiguity.
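For instance, a sketch of run_query with that entry removed (same signature as in the question; QueryExecutionContext is optional, and the queries above already qualify their tables as database.table):
import boto3

def run_query(query, database, s3_output):
    client = boto3.client('athena')
    response = client.start_query_execution(
        QueryString=query,  # the query itself names the database, e.g. SELECT ... FROM db.table
        ResultConfiguration={
            'OutputLocation': s3_output,
        }
    )
    print('Execution ID: ' + response['QueryExecutionId'])
    return response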
I'm using Python 3.7.5 and SQLite3 3.X as well as Tkinter (but that's irrelevant) and I can't seem to update my table called "Account"
try:
    Cursor.execute("""CREATE TABLE Account (
                      Application text,
                      Username text,
                      Password text)""")
except sqlite3.OperationalError:
    Cursor.execute("""UPDATE Account SET
                      Application = :NewApp,
                      Username = :NewUser,
                      Password = :NewPass
                      WHERE oid = :oid""",
                   {"NewApp": NewApplicationE.get(),
                    "NewUser": NewUsernameE.get(),
                    "NewPass": NewPasswordE.get(),
                    "oid": X[3]})
The try bit is just to create the table if there isn't already one; if there is, it goes on to update the table.
I know for a fact there are columns called Application, Username, Password, and the variable.get() calls all return the proper strings.
The oid, X[3], is an integer.
The program runs but it doesn't actually seem to update anything.
Any help with the formatting or just in general would be appreciated
I think that you just need to commit your change.
I assume that you get the cursor from a connection.
For instance, something like this should work:
import sqlite3

conn = sqlite3.connect('example.db')
Cursor = conn.cursor()
try:
    Cursor.execute("""CREATE TABLE Account (
                      Application text,
                      Username text,
                      Password text)""")
    conn.commit()
except sqlite3.OperationalError:
    Cursor.execute("""UPDATE Account SET
                      Application = :NewApp,
                      Username = :NewUser,
                      Password = :NewPass
                      WHERE oid = :oid""",
                   {"NewApp": NewApplicationE.get(),
                    "NewUser": NewUsernameE.get(),
                    "NewPass": NewPasswordE.get(),
                    "oid": X[3]})
    conn.commit()
conn.close()
Reference:
https://docs.python.org/3/library/sqlite3.html
I have written a Flask app which accesses a MySQL database running in a Docker container with the following schema:
CREATE TABLE tasksdb (
    id INTEGER AUTO_INCREMENT PRIMARY KEY,
    task VARCHAR(20),
    is_completed BOOLEAN,
    notify VARCHAR(100)
);
I need to return the id of the task which was last created. Thus, I would like to retrieve the id from the database. I have tried several variants, but none of them worked:
cursor.execute('SELECT id FROM tasksdb WHERE task="%s"', [task])
print(cursor.fetchall())
Output: ()
cursor.execute("SELECT COUNT(*) FROM tasksdb")
Output: ((1,),), then ((2,),), etc., each time the command is run.
However, I am confused, since the database shows this when running the following in the Docker bash shell:
SELECT * from tasksdb;
Empty set (0.00 sec)
Nonetheless, I tried to extract the 1, 2, etc. from the output so that my function can return it:
The problem is that
string_id = str(cursor.fetchall()) leads to the output (). Thus, I have no idea how to access the middle part.
res = int(''.join(map(str, cursor.fetchall()))) leads to
ValueError: invalid literal for int() with base 10: ''
res=cursor.fetchall()[0]
leads to: IndexError: tuple index out of range
Could you help me figure out how to get the id?
Additionally, I am wondering why the id always starts again at 1 when I restart the Flask application and run the test without restarting the MySQL container. I would expect the database to keep assigning new ids even after a restart.
Here is my full code:
from flask import Flask, request, Response
import json
import MySQLdb

app = Flask(__name__)

cursor = None

def get_db_connection():
    global cursor
    if not cursor:
        db = MySQLdb.connect("127.0.0.1", "root", "DockerPasswort!", "demo", port=3306)
        cursor = db.cursor()
    return cursor

# Create a new task
@app.route('/v1/tasks', methods=['POST'])
def post():
    cursor = get_db_connection()
    data = request.get_json()
    if "title" not in data:
        return bulkadd(data)
    is_completed = False
    if "is_completed" in data:
        is_completed = data["is_completed"]
    notify = ""
    if "notify" in data:
        notify = data["notify"]
    task = data["title"]
    sql = 'INSERT INTO tasksdb (task, is_completed, notify) VALUES (%s, %s, %s)'
    cursor.execute(sql, [task, is_completed, notify])
    # Insert any of the attempts to obtain the id here and return it.
    return "True"
Thank you for your help!
-------------------------------------EDIT-----------------------
I just added a task directly in mysql and executed this:
SELECT * from tasksdb;
+----+---------------+--------------+-----------------------------+
| id | task | is_completed | notify |
+----+---------------+--------------+-----------------------------+
| 22 | My First Task | 0            | test@berkeley.edu           |
+----+---------------+--------------+-----------------------------+
1 row in set
The fact that the id is 22 convinces me that the earlier requests were stored. However, I do not understand why only this one row is returned. Should all the former requests not have been saved as well?
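One likely explanation, for what it's worth: MySQLdb does not autocommit by default, so without db.commit() the inserted rows are rolled back when the connection closes, which would account for the empty SELECT results; the id of the row just inserted is available as cursor.lastrowid. A minimal sketch, reusing the connection parameters from the code above:
import MySQLdb

db = MySQLdb.connect("127.0.0.1", "root", "DockerPasswort!", "demo", port=3306)
cursor = db.cursor()
cursor.execute(
    'INSERT INTO tasksdb (task, is_completed, notify) VALUES (%s, %s, %s)',
    ["My First Task", False, "test@berkeley.edu"],
)
new_id = cursor.lastrowid  # AUTO_INCREMENT id assigned to this row
db.commit()                # without this, the row is discarded when the connection closes
print(new_id)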
I'm trying to delete a user from my database using Python and SQLite.
import sqlite3
database_connection = sqlite3.connect('test.db')
delete_username_input = input("Which user you would like to delete?\n\n")
sql = ("DELETE * from USERS where USERNAME = ?")
args = (delete_username_input)
database_connection.execute(sql, args)
database_connection.commit()
database_connection.close()
When running the code above, I get the following error:
sqlite3.OperationalError: near "*": syntax error
Any idea what might be causing this error?
The table that I use has been created using the following syntax:
conn.execute('''CREATE TABLE USERS
(ID INTEGER PRIMARY KEY AUTOINCREMENT,
USERNAME TEXT NOT NULL,
PASSWORD TEXT NOT NULL,
WINS FLOAT NOT NULL,
LOSES FLOAT NOT NULL,
GAMESPLAYED FLOAT NOT NULL,
WINPERCENT FLOAT NOT NULL );''')
Any help will be appreciated.
Your SQL syntax is wrong. It should be
DELETE from USERS where USERNAME = ?
without the *
Check it out here
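Note that there is a second, latent issue: args = (delete_username_input) is just a parenthesized string, not a tuple, and sqlite3 expects a sequence of parameters. A corrected sketch of the whole script:
import sqlite3

database_connection = sqlite3.connect('test.db')
delete_username_input = input("Which user you would like to delete?\n\n")

sql = "DELETE FROM USERS WHERE USERNAME = ?"
args = (delete_username_input,)  # one-element tuple; (x) without a comma is just x

database_connection.execute(sql, args)
database_connection.commit()
database_connection.close()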
I want to insert into the users table only if userId, email, and username do not already exist (I want these to be unique).
userId is the primary key (hash key, data type: Number).
username and email are non-key attributes (both strings).
Here is what I tried:
response = userTable.put_item(
    Item={
        'userId': userIdNext,
        'accType': 0,
        'username': usernameInput,
        'pwd': hashedPwd,
        'email': emailInput
    },
    ConditionExpression="(attribute_not_exists(userIdNext)) AND (NOT (contains(email, :v_email))) AND (NOT (contains(username, :v_username)))",
    ExpressionAttributeValues={
        ":v_email": emailInput,
        ":v_username": usernameInput
    }
)
I tried to follow the AWS documentation for logical operators and condition expressions from here: AWS Conditional Expressions
But it inserts into the table every time, even if the username or email already exists in the db. (I am supplying a new userIdNext each time, as it is the primary key and cannot be a duplicate.)
I am using the Python implementation, boto3.
DynamoDB can enforce uniqueness only for the table's own key attributes (and not for global secondary index keys).
In your case there are a few options:
1) enforce it at the application level - query for records and check for duplicates
2) add another DynamoDB table keyed by the value that must be unique (which the table key does enforce), and write to that table before putting an item into the main table (see the sketch after the links below)
3) use application locks (memcache, etc.)
4) don't use DynamoDB (maybe it doesn't meet your requirements)
Referring you to the answers here:
DynamoDB avoid duplicate non-key attributes
DynamoDB consistent reads for Global Secondary Index
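To illustrate option 2, a rough sketch (the users_email helper table and all names here are hypothetical): make the value that must be unique the hash key of a second table, and claim it with a conditional put before writing to the main table.
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource('dynamodb')
email_table = dynamodb.Table('users_email')  # hypothetical helper table, hash key: email
user_table = dynamodb.Table('users')         # hypothetical main table, hash key: userId

def reserve_email(email):
    # Succeeds only if no item with this email exists yet (atomic on the table key)
    try:
        email_table.put_item(
            Item={'email': email},
            ConditionExpression='attribute_not_exists(email)',
        )
        return True
    except ClientError as e:
        if e.response['Error']['Code'] == 'ConditionalCheckFailedException':
            return False
        raise

if reserve_email('user@example.com'):
    user_table.put_item(Item={'userId': 1, 'email': 'user@example.com'})
else:
    print('email already taken')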
OK, I found the bug in my application.
I was trying to do a conditional put_item to DynamoDB, but I was passing a different primary key from the one that exists in DynamoDB; in that case a new object is created and the condition expression is effectively ignored.
# id is the primary key
id = 123
# email is only an index
email = "email@email.com"

item = {
    "id": id,
    "email": email,
    "info": {
        "gender": "male",
        "country": "Portugal"
    }
}

response = self._table.put_item(
    Item=item,
    ConditionExpression="email <> :v_email",
    ExpressionAttributeValues={":v_email": email}
)
If you want condition expressions to work correctly with DynamoDB put_item, you should send the same primary key as the one that already exists in the DB.
kishorer747, can you post a sample of your code that does this? "I have
Global Secondary Indexes to check if a username or email exists
already, but it's not atomic. So if 2 users query at the same time and
both see that email1 is available, they both insert into the users table. And
if there is no check to verify that the email/username does not already exist,
we have duplicate entries for email/username."