Relatively new to Python scripts, so bear with me.
I have used speedtest-cli before, and I have edited the script so it inserts the values into a SQL table as below; however, I'm having an issue with one of the inserts. It inserts ping and download fine, but the upload is always something like 2.74 or 2.75, and ONLY when run from a crontab, which is very weird.
If I run the Python script from the CLI, it inserts the values fine.
This is my query; the values ping, download and upload come from the speedtest-cli script.
Here is the full script:
import os
import re
import subprocess
import time
import mysql.connector
from mysql.connector import Error
from mysql.connector import errorcode

print "----------------------------------"
print 'Started: {}'.format(time.strftime('%d/%m/%y %H:%M:%S'))

# capture the raw speedtest-cli output for parsing below
response = subprocess.Popen('speedtest-cli --simple', shell=True,
                            stdout=subprocess.PIPE).stdout.read()

ping = re.findall(r'Ping:\s(.*?)\s', response, re.MULTILINE)
download = re.findall(r'Download:\s(.*?)\s', response, re.MULTILINE)
upload = re.findall(r'Upload:\s(.*?)\s', response, re.MULTILINE)

# normalise decimal commas to points
ping[0] = ping[0].replace(',', '.')
download[0] = download[0].replace(',', '.')
upload[0] = upload[0].replace(',', '.')

try:
    if os.stat('/var/www/html/speed/log.txt').st_size == 0:
        print 'Date,Time,Ping (ms),Download (Mbit/s),Upload (Mbit/s)'
except OSError:
    pass

print 'PING: {}, DOWN: {}, UP: {}'.format(ping[0], download[0], upload[0])

try:
    connection = mysql.connector.connect(host='localhost',
                                         database='dev',
                                         user='dev',
                                         password='dev1')
    sql_insert_query = """INSERT INTO speedtest (ping, download, upload) VALUES (%s, %s, %s)"""
    cursor = connection.cursor()
    cursor.execute(sql_insert_query, (ping[0], download[0], upload[0]))
    connection.commit()
    print "Insert success into speedtest tbl"
except mysql.connector.Error as error:
    connection.rollback()  # roll back if any exception occurred
    print "Failed inserting record into speedtest table {}".format(error)
finally:
    # closing database connection
    if connection.is_connected():
        cursor.close()
        connection.close()
        print "MySQL conn closed"

print 'Finished: {}'.format(time.strftime('%d/%m/%y %H:%M:%S'))
The script runs fine manually; it's only from crontab that I get unexpected values. I'm not sure how to solve this.
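One way to narrow this down (a diagnostic sketch, not part of the original script; the log path is an assumption): cron jobs run with a minimal environment, so appending the raw speedtest-cli output that the cron run actually sees to a file lets you diff it against a manual run:

import subprocess
import time

# hypothetical diagnostic: log the raw output each cron run sees
raw = subprocess.Popen('speedtest-cli --simple', shell=True,
                       stdout=subprocess.PIPE).stdout.read()
with open('/tmp/speedtest_raw.log', 'a') as f:  # assumed path
    f.write(time.strftime('%d/%m/%y %H:%M:%S') + '\n' + raw + '\n')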
I have a Python application which reads Python scripts from a table, runs them, and returns the values:
main.py
import mysql.connector

def Exec(id):
    try:
        connection = mysql.connector.connect(host='localhost',
                                             user='root',
                                             password='',
                                             database='mydb')
        # Fetch the Python script from the pythontbl table
        sql_select_Query = "SELECT python FROM mydb.pythontbl WHERE id = %s"
        cursor = connection.cursor()
        cursor.execute(sql_select_Query, (id,))
        # get all records
        script = cursor.fetchall()
        # execute the Python script with arguments
        ??
        # the return value should be saved in out
        out = ???
        print("output", out) ??
    except mysql.connector.Error as e:
        print("Error reading data from MySQL table", e)
    finally:
        if connection.is_connected():
            cursor.close()
            connection.close()
            print("MySQL connection is closed")
How can I execute the Python script that I fetched from my main, passing arguments to it and getting the result back?
I cannot use import script.py, as I am fetching the script through my main.py.
Create another file named fetchscript.py, and in this file create the connection and fetch the script from your table. Call fetchscript.py from main.py to write the fetched code out as pythonscript.py, then import pythonscript and call the desired function from pythonscript.py.
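A minimal sketch of that approach, under some assumptions: the table and column names (pythontbl, python) come from the question, while the module name pythonscript.py and the run() function the stored code is expected to define are illustrative only:

import importlib
import mysql.connector

def fetch_script(script_id):
    # fetch the stored source text for one script
    connection = mysql.connector.connect(host='localhost', user='root',
                                         password='', database='mydb')
    cursor = connection.cursor()
    try:
        cursor.execute("SELECT python FROM pythontbl WHERE id = %s", (script_id,))
        row = cursor.fetchone()
        return row[0] if row else None
    finally:
        cursor.close()
        connection.close()

def run_script(script_id, *args):
    source = fetch_script(script_id)
    # write the fetched source next to main.py so it can be imported normally
    with open('pythonscript.py', 'w') as f:
        f.write(source)
    module = importlib.import_module('pythonscript')
    module = importlib.reload(module)  # pick up a newly fetched version on repeated runs
    return module.run(*args)  # assumes the stored script defines run()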
I have tried a lot, but I am unable to copy data available as a JSON file in an S3 bucket (I have read-only access to the bucket) to a Redshift table using Python boto3. Below is the Python code I am using to copy the data. Using the same code I was able to create the tables into which I am trying to copy.
import configparser
import psycopg2
from sql_queries import create_table_queries, drop_table_queries

def drop_tables(cur, conn):
    for query in drop_table_queries:
        cur.execute(query)
        conn.commit()

def create_tables(cur, conn):
    for query in create_table_queries:
        cur.execute(query)
        conn.commit()

def main():
    try:
        config = configparser.ConfigParser()
        config.read('dwh.cfg')

        # conn = psycopg2.connect("host={} dbname={} user={} password={} port={}".format(*config['CLUSTER'].values()))
        conn = psycopg2.connect(
            host=config.get('CLUSTER', 'HOST'),
            database=config.get('CLUSTER', 'DB_NAME'),
            user=config.get('CLUSTER', 'DB_USER'),
            password=config.get('CLUSTER', 'DB_PASSWORD'),
            port=config.get('CLUSTER', 'DB_PORT')
        )
        cur = conn.cursor()

        # drop_tables(cur, conn)
        # create_tables(cur, conn)

        qry = """copy DWH_STAGE_SONGS_TBL
                 from 's3://udacity-dend/song-data/A/A/A/TRAAACN128F9355673.json'
                 iam_role 'arn:aws:iam::xxxxxxx:role/MyRedShiftRole'
                 format as json 'auto';"""
        print(qry)
        cur.execute(qry)

        # execute a statement
        # print('PostgreSQL database version:')
        # cur.execute('SELECT version()')
        #
        # # display the PostgreSQL database server version
        # db_version = cur.fetchone()
        # print(db_version)

        print("Executed successfully")

        cur.close()
        conn.close()
        # close the communication with the PostgreSQL
    except Exception as error:
        print("Error while processing")
        print(error)

if __name__ == "__main__":
    main()
I don't see any error in the PyCharm console, but I see an Aborted status in the Redshift query console, and I can't see any reason why it was aborted (or I don't know where to look for that).
The other thing I have noticed is that when I run the copy statement in the Redshift query editor, it runs fine and the data gets moved into the table. I tried deleting and recreating the cluster, but no luck. I am not able to figure out what I am doing wrong. Thank you.
Quick read: it looks like you haven't committed the transaction, so the COPY is rolled back when the connection closes. You need to either change the connection configuration to be in "autocommit" mode or add an explicit commit().
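A minimal sketch of both options (the connection values are placeholders; qry is the COPY statement from the question):

import psycopg2

# placeholder connection values, for illustration only
conn = psycopg2.connect(host='localhost', database='dev',
                        user='dev', password='dev', port=5439)
conn.autocommit = True  # option 1: every statement commits as soon as it runs

cur = conn.cursor()
qry = """copy DWH_STAGE_SONGS_TBL
         from 's3://udacity-dend/song-data/A/A/A/TRAAACN128F9355673.json'
         iam_role 'arn:aws:iam::xxxxxxx:role/MyRedShiftRole'
         format as json 'auto';"""
cur.execute(qry)
conn.commit()  # option 2: explicit commit (redundant when autocommit is on)

cur.close()
conn.close()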
I am trying to connect to my database via Python 2.7 with this code:
import csv
import psycopg2

try:
    conn = psycopg2.connect("dbname='student', user='postgres',password='password', host='localhost'")
    cursor = conn_cursor()
    reader = csv.reader(open('last_file.csv', 'rb'))
    print "connected"
except:
    print "not Connected"
It did work last week and we don't think we've changed anything, but now it won't connect.
We've tried using it with the database open and closed; nothing worked.
The database does exist in Postgres.
import psycopg2

try:
    conn = psycopg2.connect("dbname='database_name' user='postgres_user_name' host='localhost' password='user_passwd'")
except:
    print "I am unable to connect to the database"

cur = conn.cursor()
cur.execute("""SELECT * from table_name""")
rows = cur.fetchall()

print "\nShow me the data:\n"
for row in rows:
    print "   ", row[0]
    print "   ", row[1]
To see what the error is, change the except clause like this:

except Exception as ex:
    print "not Connected"
    print "Error: " + str(ex)
Try this:

import csv
import psycopg2

try:
    conn = psycopg2.connect("dbname='student', user='postgres',password='password', host='localhost'")
except:
    print "I am unable to connect to the database."

cursor = conn.cursor()

try:
    reader = csv.reader(open('last_file.csv', 'rb'))
    print "connected"
except:
    print "not Connected"
It seems like there is something wrong with your Postgres. Try looking at the Postgres log. The default location is something like:
tail -f /var/log/postgresql/<>/main/postgresql.log
Also don't forget to check the firewall; maybe someone disabled it by accident.
You could also try the PyGreSQL package (pip install PyGreSQL), since psycopg2 (some versions of it) is under the GPL license, which can be tricky for open-source licensing. Just for your information.
There's something wrong in my Python script: when I try to put some data in my database and print it, it looks like it's working, but when I rerun the code, or if I check phpMyAdmin, there's no data saved in the DB. Does anyone have an idea how to solve this problem?
import mysql.connector
from mysql.connector import Error
from mysql.connector import errorcode

def connect():
    """ Connect to MySQL database """
    try:
        conn = mysql.connector.connect(host='localhost',
                                       database='Temperature',
                                       user='Temperature',
                                       password='mypass')
        if conn.is_connected():
            print('Connected to MySQL database')

        cur = conn.cursor()
        query = "INSERT INTO Temp(temp, humi) " \
                "VALUES(315, 55)"
        try:
            cur.execute(query)
        except mysql.connector.ProgrammingError as e:
            print(e)

        query = "SELECT * FROM Temp"
        try:
            cur.execute(query)
            for reading in cur.fetchall():
                print(str(reading[0]) + " " + str(reading[1]))
        except mysql.connector.ProgrammingError as e:
            print(e)
    except Error as e:
        print(e)
    finally:
        conn.close()

if __name__ == '__main__':
    connect()
You will need to add conn.commit() before conn.close(). That should solve the problem.
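Applied to the script above, the commit goes right after the INSERT (a sketch using the names from the question):

        query = "INSERT INTO Temp(temp, humi) VALUES(315, 55)"
        cur.execute(query)
        conn.commit()  # without this, the INSERT is discarded when conn.close() runs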
I have the following procedure that doesn't load data into a table as expected:
import sys
import time
import psycopg2

def upload_data(parsed_buffer):
    parsed_buffer.seek(0)
    try:
        con = psycopg2.connect(database=database, user=user, password=pw, host=host)
        cur = con.cursor()
        try:
            cur.copy_from(parsed_buffer, 'staging.vcf_ht')
        except StandardError, err:
            con.rollback()
            print(" Caught error (as expected):\n", err)
    except psycopg2.NotSupportedError as e:
        now = time.strftime("%H:%M:%S +0000", time.localtime())
        print("Copy failed at: " + now + " - " + e.pgerror)
        sys.exit(1)
    finally:
        if con:
            con.close()
    now = time.strftime("%H:%M:%S +0000", time.localtime())
    print('Finished loading data at:' + now)
In other posts they discuss adding a seek() after the writes; that is the parsed_buffer.seek(0) call at the top of my procedure, and it did not work. A couple of other things I checked: 1. the string buffer is populated with tab-delimited data; 2. if I redirect the output to a file and use the \copy command in psql, it works as advertised; 3. if I write INSERT statements instead of using a string buffer, this also works (but is bad for performance). The procedure terminates without throwing any errors.
The problem was a missing commit statement. Adding con.commit() fixed it.
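Placed against the procedure above, the commit goes right after the copy_from call (a minimal sketch, using the names from the question):

        cur.copy_from(parsed_buffer, 'staging.vcf_ht')
        con.commit()  # make the copied rows permanent; otherwise they are rolled back when con.close() runs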