pymssql (Python module) losing items when fetching data - python

I have a database named "sina2013", and its columns are Title and Content.
Now I want to use the pymssql module to get the data, using Title as the filename of a txt file and Content as the content of that file.
The strange thing is that the number of files created is less than the number of items in the database.
Where is the error? The code I have tried is:
import pymssql

conn = pymssql.connect(...)
cur = conn.cursor()
cur.execute('SELECT Title,Content FROM sina2013')
count = len(cur.fetchall())  # Will return the right number: 5913
for Title, Content in cur:
    filename = file(str(Title) + r'.txt', r'w')
    filename.write(Content)
    filename.close()
cur.close()
The number of txt files is less than it should be.
What is the reason?

Perhaps changing your for loop into this will fix the issue:

# the cursor's fetchall() method returns all remaining rows from a query
for Title, Content in cur.fetchall():
    ...
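Note that fetchall() consumes the result set: the counting line in the question already drains the cursor, so iterating it (or calling fetchall() a second time) afterwards yields nothing. Below is a minimal sketch of a corrected script that fetches the rows only once and reuses the list; the connection details are placeholders, as in the question:

import pymssql

conn = pymssql.connect(server='<server>', user='<user>',
                       password='<password>', database='<database>')
cur = conn.cursor()
cur.execute('SELECT Title,Content FROM sina2013')
rows = cur.fetchall()          # fetch once; the cursor is exhausted after this call
count = len(rows)              # expected: 5913
for Title, Content in rows:    # iterate over the saved list, not the consumed cursor
    f = open(str(Title) + '.txt', 'w')
    f.write(Content)
    f.close()
cur.close()
conn.close()

If files still come up short after this change, it is also worth checking for duplicate Title values: two rows with the same title write to the same filename, and the second write silently overwrites the first.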

Related

values missing while inserting geojson values to postgres table

I am trying to read a GeoJSON file and insert the records into a Postgres table, using the Python code below.
import json
import psycopg2

conn = psycopg2.connect(host="<<ip_address>>", database="DB1", user="<<id>>", password="pwd")
cur = conn.cursor()

with open('NTA_shape.json') as f:
    Geojson_data = json.load(f)

for feature in Geojson_data['features']:
    type_val = feature['geometry']['type']
    geom = feature['geometry']['coordinates']
    ntaname = feature['properties']['NTAName']
    boroname = feature['properties']['BoroName']
    data = {"type": type_val, "coordinates": geom}
    sql = """INSERT INTO <<Table_NAME>> (geom,ntaname,boroname) VALUES (ST_GeomFromGeoJSON(%s),%s,%s)"""
    nta_boro = (json.dumps(data), ntaname, boroname)
    cur.execute(sql, nta_boro)

conn.commit()
conn.close()
But when I query the table, many records are missing.
If I print the json.dumps(data) variable, it shows all the records.
I am not sure what I am missing during the table insert.
Kindly help.
I was able to fix it with the change below:

nta_boro = (json.dumps(data,), ntaname, boroname)
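Whatever the root cause, it helps to verify how many rows actually land in the table as you insert. Below is a minimal sketch under the same assumptions as the question; the table name nta_table is a hypothetical stand-in for the question's placeholder:

import json
import psycopg2

conn = psycopg2.connect(host="<<ip_address>>", database="DB1", user="<<id>>", password="pwd")
cur = conn.cursor()

inserted = 0
with open('NTA_shape.json') as f:
    for feature in json.load(f)['features']:
        data = {"type": feature['geometry']['type'],
                "coordinates": feature['geometry']['coordinates']}
        cur.execute(
            "INSERT INTO nta_table (geom,ntaname,boroname) "  # nta_table: hypothetical name
            "VALUES (ST_GeomFromGeoJSON(%s),%s,%s)",
            (json.dumps(data),
             feature['properties']['NTAName'],
             feature['properties']['BoroName']))
        inserted += cur.rowcount  # rowcount is 1 for each successful single-row INSERT

conn.commit()
cur.execute("SELECT count(*) FROM nta_table")
print(inserted, cur.fetchone()[0])  # inserts attempted vs. rows now in the table
conn.close()

Comparing the two numbers tells you whether rows are being lost at insert time or simply never committed.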

Python MySQL cursor fails to fetch rows

I am trying to fetch data from an AWS MariaDB:

cursor = self._cnx.cursor()
stmt = 'SELECT * FROM flights'
cursor.execute(stmt)

print(cursor.rowcount)
# prints 2

for z in cursor:
    print(z)
# does not iterate

row = cursor.fetchone()
# row is None

rows = cursor.fetchall()
# raises 'No result set to fetch from.'

I can verify that the table contains data using MySQL Workbench. Am I missing some step?
EDIT, in reply to two answers:

res = cursor.execute(stmt)
# res is None
EDIT:
I created a new Python project with a single file:

import mysql.connector

try:
    cnx = mysql.connector.connect(
        host='foobar.rds.amazonaws.com',
        user='devuser',
        password='devpasswd',
        database='devdb'
    )
    cursor = cnx.cursor()
    #cursor = cnx.cursor(buffered=True)
    cursor.execute('SELECT * FROM flights')
    print(cursor.rowcount)
    rows = cursor.fetchall()
except Exception as exc:
    print(exc)

If I run this code with a simple cursor, fetchall raises "No result set to fetch from". If I run it with a buffered cursor, I can see that the _rows property of the cursor contains my data, but fetchall() returns an empty array.
Your issue may be that cursor.execute(stmt) returns an object holding the results and you're not storing it:

results = cursor.execute(stmt)
print(results.fetchone())  # prints out and pops the first row

(This pattern works with drivers whose execute() returns the cursor itself, such as sqlite3; as the question's edit shows, mysql.connector's execute() returns None.)
For future Googlers with the same problem, I found a workaround which may help in some cases.
I didn't find the source of the problem, but I found a solution which worked for me.
In my case, .fetchone() also returned None no matter what I did against my local database (on my own computer). I tried the exact same code against the database on our company's server, and somehow it worked. So I copied the complete server database onto my local database (using database dumps), just to get the server settings, and afterwards I could also fetch data from my local SQL server with the code that hadn't worked before.
I am an SQL newbie, but maybe some odd setting on my local SQL server prevented me from fetching data. Maybe a more experienced SQL user knows this setting and can explain.
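For reference, here is a minimal end-to-end pattern that normally works with mysql.connector (credentials are placeholders). A buffered cursor fetches the entire result set as part of execute(), which also makes rowcount meaningful immediately:

import mysql.connector

cnx = mysql.connector.connect(host='localhost', user='devuser',
                              password='devpasswd', database='devdb')
cursor = cnx.cursor(buffered=True)  # rows are fetched eagerly during execute()
cursor.execute('SELECT * FROM flights')
print(cursor.rowcount)              # valid immediately for a buffered cursor
for row in cursor.fetchall():
    print(row)
cursor.close()
cnx.close()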

python write to file is breaking

I am writing a Python script to fetch MySQL DB rows and copy them to a CSV file.
The code I am trying on 2.7.6:
from __future__ import print_function
import MySQLdb as mdb

conn = mdb.connect("localhost", "username", "passwd", "db")
table = open("table.csv", 'w')
with conn:
    query = "SELECT NAME FROM mytable WHERE H_ID = 5 ORDER BY NAME;"
    cursor = conn.cursor()
    cursor.execute(query)
    for item in range(cursor.rowcount):
        row = cursor.fetchone()
        print(row[0], file=table)
If I execute that query in MySQL I get 446 rows, and when printing to the Python shell (IDLE) I also get the proper result. But when I try to copy the output to a file, I see only 341 rows. What may be the problem?
I don't know exactly what the problem was, but it got resolved when I removed the with keyword and closed the file manually. Closing the file flushes Python's internal write buffer to disk, which is the likely reason rows were missing before.
The code below works properly:
from __future__ import print_function
import MySQLdb as mdb

conn = mdb.connect("localhost", "username", "passwd", "db")
table = open("table.csv", 'w')
query = "SELECT NAME FROM mytable WHERE H_ID = 5 ORDER BY NAME;"
cursor = conn.cursor()
cursor.execute(query)
for item in range(cursor.rowcount):
    row = cursor.fetchone()
    print(row[0], file=table)
table.close()
From the documentation of the Python connector, for cursor.rowcount:

For nonbuffered cursors, the row count cannot be known before the rows have been fetched. In this case, the number of rows is -1 immediately after query execution and is incremented as rows are fetched.

Therefore, the body of for item in range(cursor.rowcount): will never execute.
Try:

for row in cursor:
    print(row[0], file=table)
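Combining the two answers, here is a minimal sketch that avoids both pitfalls: it iterates over the cursor instead of relying on rowcount, and it uses a with open(...) block so the file is flushed and closed even if an error occurs:

from __future__ import print_function
import MySQLdb as mdb

conn = mdb.connect("localhost", "username", "passwd", "db")
cursor = conn.cursor()
cursor.execute("SELECT NAME FROM mytable WHERE H_ID = 5 ORDER BY NAME;")
with open("table.csv", 'w') as table:  # closing the file flushes buffered writes
    for row in cursor:                 # no dependence on cursor.rowcount
        print(row[0], file=table)
conn.close()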

Confirmation that a postgres 'update' query worked in python

I've written my first UPDATE query in Python. While it seems correct, I'm not sure how to get output back to confirm that it worked.
This is supposed to load a CSV file and replace the values in the first column with those in the second:
import sys
import psycopg2

def main():
    try:
        conn = psycopg2.connect("dbname='subs' user='subs' host='localhost' password=''")
    except:
        print "I am unable to connect to the database."
        sys.exit()
    with open("dne.txt", "r+") as f:
        for line in f:
            old = line.split(',')[0].strip()
            new = line.split(',')[1].strip()
            cur = conn.cursor()
            cur.execute("UPDATE master_list SET subs = '{0}' WHERE subs = '{1}';".format(new, old))
            conn.commit()
            results = cur.fetchall()
            for each in results:
                print str(each)

if __name__ == "__main__":
    main()
I thought the results (UPDATE 1 for each change?) would come back as tuples, but I got an error instead:

psycopg2.ProgrammingError: no results to fetch

I'm not sure if this means my query just didn't work and there were no updates, or if I can't use fetchall() the way I'm trying to.
Any feedback or suggestions welcome!
The UPDATE statement won't return any values, as you are asking the database to update its data, not to retrieve any data.
By far the best way to get the number of rows updated is to use cur.rowcount. This works with other drivers too; with psycopg2 for PostgreSQL the syntax is the same:

cur.execute("UPDATE master SET sub = ('xyz') WHERE sub = 'abc'")
print(cur.rowcount)

A more roundabout way of checking the update is to run a SELECT against the table after updating it; you should get the data returned. In my example below, the first SELECT returns the row(s) where the update will happen. The second SELECT, after the update, should return no rows, since all matching fields have already been updated. The third SELECT should return the rows you have updated, plus any that already existed with the 'xyz' value.
import sqlite3
import sys

def main():
    try:
        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        cur.execute("create table master(id text, sub text)")
        cur.execute("insert into master(id, sub) values ('1', 'abc')")
        cur.execute("insert into master(id, sub) values ('2', 'def')")
        cur.execute("insert into master(id, sub) values ('3', 'ghi')")
        conn.commit()
    except:
        print("I am unable to connect to the database.")
        sys.exit()

    cur.execute("select id, sub from master where sub='abc'")
    print(cur.fetchall())
    cur.execute("UPDATE master SET sub = ('xyz') WHERE sub = 'abc'")
    conn.commit()
    cur.execute("select id, sub from master where sub='abc'")
    print(cur.fetchall())
    cur.execute("select id, sub from master where sub='xyz'")
    print(cur.fetchall())

if __name__ == "__main__":
    main()
In PostgreSQL 9.5 or later you can add RETURNING * to the end of your query; it then returns the modified rows.
PostgreSQL docs: https://www.postgresql.org/docs/9.5/dml-returning.html

Sometimes it is useful to obtain data from modified rows while they are being manipulated. The INSERT, UPDATE, and DELETE commands all have an optional RETURNING clause that supports this. Use of RETURNING avoids performing an extra database query to collect the data, and is especially valuable when it would otherwise be difficult to identify the modified rows reliably.
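Applied to the question's UPDATE, a minimal sketch using psycopg2 (also parameterized, which avoids the SQL-injection risk of building the statement with str.format):

cur.execute(
    "UPDATE master_list SET subs = %s WHERE subs = %s RETURNING *",
    (new, old))
updated = cur.fetchall()  # one tuple per modified row; an empty list means nothing matched
print(len(updated))
conn.commit()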

Sybase sybpydb queries not returning anything

I am currently connecting to a Sybase 15.7 server using sybpydb. It seems to connect fine:

import sys
sys.path.append('/dba/sybase/ase/15.7/OCS-15_0/python/python26_64r/lib')
sys.path.append('/dba/sybase/ase/15.7/OCS-15_0/lib')

import sybpydb
conn = sybpydb.connect(user='usr', password='pass', servername='serv')

Changing any of my connection details results in a connection error, so the connection itself works. I then select a database:

curr = conn.cursor()
curr.execute('use db_1')

However, when I try to run queries, execute always returns None:

print curr.execute('select * from table_1')

I have tried running the use and select statements in the same execute call, I have tried including go commands after each, and I have tried calling curr.connection.commit() after each, all with no success. I have confirmed, using DBArtisan and isql, that the same queries return rows.
Why am I not getting results from my queries in Python?
EDIT:
Just some additional info. In order to get the sybpydb import to work, I had to change two environment variables. I added the lib paths (the same ones I added to sys.path) to $LD_LIBRARY_PATH, i.e.:

setenv LD_LIBRARY_PATH "$LD_LIBRARY_PATH":/dba/sybase/ase/15.7/OCS-15_0/python/python26_64r/lib:/dba/sybase/ase/15.7/OCS-15_0/lib

and I had to change the SYBASE path from 12.5 to 15.7. All of this was done in csh.
If I print conn.error() after every curr.execute(), I get:

("Server message: number(5701) severity(10) state(2) line(0)\n\tChanged database context to 'master'.\n\n", 5701)
I completely understand why you might be confused by the documentation; it doesn't seem to be on par with other DB extensions (e.g. psycopg2).
When connecting with most standard DB extensions you can specify a database. Then, when you want to get data back from a SELECT query, you either use a fetch call (an OK way to do it) or the iterator (the more Pythonic way):

import sybpydb as sybase

conn = sybase.connect(user='usr', password='pass', servername='serv')
cur = conn.cursor()
cur.execute("use db_1")
cur.execute("SELECT * FROM table_1")
print "Query Returned %d row(s)" % cur.rowcount
for row in cur:
    print row

# Alternate, less Pythonic way to read the query results:
# for row in cur.fetchall():
#     print row
Give that a try and let us know if it works.
A working Python 3.x solution:

import sybpydb

try:
    conn = sybpydb.connect(dsn="Servername=serv;Username=usr;Password=pass")
    cur = conn.cursor()
    cur.execute('select * from db_1..table_1')
    # print the table header from the cursor's column descriptions
    header = tuple(col[0] for col in cur.description)
    print('\t'.join(header))
    print('-' * 60)
    res = cur.fetchall()
    for row in res:
        line = '\t'.join(str(col) for col in row)
        print(line)
    cur.close()
    conn.close()
except sybpydb.Error:
    # assumes the connect() call succeeded; otherwise cur is unbound here
    for err in cur.connection.messages:
        print(f'Error {err[0]}, Value {err[1]}')
