Insert GTiff file into PostGIS table using raster2pgsql - python

I am trying to insert a GTiff file into a specific PostGIS table using the raster2pgsql command. So far I have managed to insert the GTiff file into the PostGIS database I am connected to, but this creates a new table named after the GTiff file. I could move the raster data to the target table afterwards, but I suppose there is a more efficient way.
Here is an example:
import psycopg2
import os

tif_path = 'test.tif'
conn = psycopg2.connect(
    host='localhost',
    port=5432,
    user='postgres',
    dbname='gisdb'
)
curs = conn.cursor()
curs.execute("SET postgis.gdal_enabled_drivers = 'ENABLE_ALL';")
os.system('raster2pgsql "%s" > temp.sql' % tif_path)
curs.execute(open('temp.sql', 'r').read())
Is there a way to insert the raster-data directly into an existing table?
I know I can use -a to append the raster to an existing table and specify the column name by using -f. But there doesn't seem to be a way to specify the name of the table.

If you want to specify the table yourself, pass the table name as the positional argument at the end of the command:
raster2pgsql -s 4326 -I -C -M C:\temp\test_1.tif -t 100x100 myschema.mytable > out.sql
If you want to add the raster to an existing table, you are right: you must use the -a flag.
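For completeness, a minimal sketch of the full flow in Python (a hedged example: it assumes raster2pgsql is on the PATH and that myschema.mytable already exists; with -a the target table is the positional argument):
import subprocess
import psycopg2

tif_path = 'test.tif'
conn = psycopg2.connect(host='localhost', port=5432,
                        user='postgres', dbname='gisdb')
curs = conn.cursor()

# -a appends to an existing table; the table name is the positional argument
sql = subprocess.check_output(
    ['raster2pgsql', '-a', tif_path, 'myschema.mytable'], text=True)
curs.execute(sql)
conn.commit()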

Related

I can't use Python input to create MySQL database

I want to take input from the user to create a MySQL database, but I can't use Python input to create the database. Here is what I tried.
I am getting an error; please help.
The execute() method's parameters must be provided as a tuple, dict, or list:
cursor.execute(cdb, (dbname,))
Alternatively, I think you can build and execute your query directly:
%-formatting
cdb = 'CREATE DATABASE %s' % dbname
cursor.execute(cdb)
F-strings
cdb = f'CREATE DATABASE {dbname}'
cursor.execute(cdb)
str.format()
cdb = 'CREATE DATABASE {}'.format(dbname)
cursor.execute(cdb)
Consider using f-strings when dealing with strings that contain variables.
cdb = f'CREATE DATABASE {dbname}'
Try it this way; this works correctly:
try:
    import mysql.connector as con
except ImportError:
    print("⚠ Install mysql.connector correctly")

db = con.connect(host="localhost", user="<username>", passwd="<password>")
cursor = db.cursor()

dbname = input("Enter database name to create: ")
cdb = f"CREATE DATABASE {dbname}"
try:
    cursor.execute(cdb)
except con.Error as e:
    print(e)

Sqlite3 .database command in python

I am trying to view all the databases in sqlite3. It can be done through the command line with the .databases command. I want to do the same thing in Django and render the details in HTML.
The following is the code I wrote in the views file:
import sqlite3
from django.shortcuts import render

def analyzer(request):
    conn = sqlite3.connect("db.sqlite3")
    c = conn.cursor()
    c.execute("SHOW DATABASES")  # fails: SQLite has no SHOW DATABASES statement
    l = c.fetchall()
    print(l)
    return render(request, 'analyzer.html')
You could probably use PRAGMA database_list;. That, like the .databases command, will show all the attached databases.
The tables for the main database can be retrieved with:
SELECT name
FROM sqlite_master
WHERE type = 'table';
For attached dbs, prefix sqlite_master with the attached db's name and a dot (e.g. db2.sqlite_master). You probably want to filter out tables whose names begin with sqlite_.
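Putting both together, a minimal sketch of the view (the context variable names are assumptions about what you want to render):
import sqlite3
from django.shortcuts import render

def analyzer(request):
    conn = sqlite3.connect("db.sqlite3")
    c = conn.cursor()
    # equivalent of the .databases command
    c.execute("PRAGMA database_list;")
    databases = c.fetchall()  # rows of (seq, name, file)
    # tables in the main database, skipping SQLite internals
    c.execute("SELECT name FROM sqlite_master "
              "WHERE type = 'table' AND name NOT LIKE 'sqlite_%';")
    tables = [row[0] for row in c.fetchall()]
    conn.close()
    return render(request, 'analyzer.html',
                  {'databases': databases, 'tables': tables})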

how to fetch multiple tables using spark sql

I am fetching data from MySQL using PySpark, but only for one table. I want to fetch all the tables from the MySQL db without calling the JDBC connection again and again. See the code below.
Is it possible to simplify my code? Thank you in advance.
url = "jdbc:mysql://localhost:3306/dbname"
table_df=sqlContext.read.format("jdbc").option("url",url).option("dbtable","table_name").option("user","root").option("password", "root").load()
sqlContext.registerDataFrameAsTable(table_df, "table1")
table_df_1=sqlContext.read.format("jdbc").option("url",url).option("dbtable","table_name_1").option("user","root").option("password", "root").load()
sqlContext.registerDataFrameAsTable(table_df_1, "table2")
You somehow need to acquire the list of tables you have in MySQL: either find an SQL command that does it, or manually create a file containing them.
Then, assuming you can create a list of table names in Python, tablename_list, you can simply loop over it like this:
url = "jdbc:mysql://localhost:3306/dbname"
reader = (
    sqlContext.read.format("jdbc")
    .option("url", url)
    .option("user", "root")
    .option("password", "root")
)
for tablename in tablename_list:
    reader.option("dbtable", tablename).load().createTempView(tablename)
This will create a temporary view with the same table name. If you want another name, you can change the initial tablename_list into a list of tuples (tablename_in_mysql, tablename_in_spark), as in the sketch below.
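For example, a small hypothetical variant (the name pairs are placeholders):
name_pairs = [("table_name", "table1"), ("table_name_1", "table2")]
for mysql_name, spark_name in name_pairs:
    reader.option("dbtable", mysql_name).load().createTempView(spark_name)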
@Steven already gave a perfect answer. As he said, in order to build the Python list of table names, you can use:
# list of the tables in the server
table_names_list = spark.read.format('jdbc'). \
    options(
        url='jdbc:postgresql://localhost:5432/',  # database url (local, remote)
        dbtable='information_schema.tables',
        user='YOUR_USERNAME',
        password='YOUR_PASSWORD',
        driver='org.postgresql.Driver'). \
    load(). \
    filter("table_schema = 'public'").select("table_name")
# DataFrame[table_name: string]

# table_names_list.collect()
# [Row(table_name='employee'), Row(table_name='bonus')]

table_names_list = [row.table_name for row in table_names_list.collect()]
print(table_names_list)
# ['employee', 'bonus']
Note that this is in PostgreSQL. You can easily change url and driver arguments.
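For the MySQL database in the question, a hedged sketch of the same query might look like this (it assumes MySQL Connector/J is on Spark's classpath, and 'dbname' is a placeholder for your schema):
table_names_df = spark.read.format('jdbc'). \
    options(
        url='jdbc:mysql://localhost:3306/dbname',
        dbtable='information_schema.tables',
        user='root',
        password='root',
        driver='com.mysql.cj.jdbc.Driver'). \
    load(). \
    filter("table_schema = 'dbname'").select("table_name")
table_names_list = [row.table_name for row in table_names_df.collect()]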

copy data from csv to postgresql using python

I am on Windows 7 64-bit.
I have a csv file 'data.csv'.
I want to import data to a PostgreSQL table 'temp_unicommerce_status' via a Python script.
My Script is:
import psycopg2
conn = psycopg2.connect("host='localhost' port='5432' dbname='Ekodev' user='bn_openerp' password='fa05844d'")
cur = conn.cursor()
cur.execute("""truncate table "meta".temp_unicommerce_status;""")
cur.execute("""Copy temp_unicommerce_status from 'C:\Users\n\Desktop\data.csv';""")
conn.commit()
conn.close()
I am getting this error
Traceback (most recent call last):
File "C:\Users\n\Documents\NetBeansProjects\Unicommerce_Status_Update\src\unicommerce_status_update.py", line 5, in <module>
cur.execute("""Copy temp_unicommerce_status from 'C:\\Users\\n\\Desktop\\data.csv';""")
psycopg2.ProgrammingError: must be superuser to COPY to or from a file
HINT: Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.
Use the copy_from cursor method:
f = open(r'C:\Users\n\Desktop\data.csv', 'r')
cur.copy_from(f, 'temp_unicommerce_status', sep=',')
f.close()
The file must be passed as an object, and the table name as a string.
Since you are copying from a csv file, it is necessary to specify the separator, as the default is a tab character.
The way I solved this particular problem was to use the psycopg2 cursor class method copy_expert (docs: http://initd.org/psycopg/docs/cursor.html). copy_expert lets you copy via STDIN, thereby bypassing the need for superuser privileges. Your access to the file then depends on the client (Linux/Windows/Mac) user's access to the file.
From Postgres COPY Docs (https://www.postgresql.org/docs/current/static/sql-copy.html):
Do not confuse COPY with the psql instruction \copy. \copy invokes
COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores the data in
a file accessible to the psql client. Thus, file accessibility and
access rights depend on the client rather than the server when \copy
is used.
You can also leave the permissions set strictly for access to the development_user home folder and the App folder.
csv_file_name = '/home/user/some_file.csv'
sql = "COPY table_name FROM STDIN DELIMITER '|' CSV HEADER"
cursor.copy_expert(sql, open(csv_file_name, "r"))
# sample of code that worked for me
import psycopg2  # import the postgres library

# connect to the database
conn = psycopg2.connect(host='localhost',
                        dbname='database1',
                        user='postgres',
                        password='****',
                        port='****')

# create a cursor object
# cursor object is used to interact with the database
cur = conn.cursor()

# create table with same headers as csv file
cur.execute("CREATE TABLE IF NOT EXISTS test(**** text, **** float, **** float, **** text)")

# open the csv file using python standard file I/O
# copy file into the table just created
with open('******.csv', 'r') as f:
    next(f)  # Skip the header row.
    # f, <table name>, comma-separated; the with block closes the file
    cur.copy_from(f, '****', sep=',')

# Commit changes
conn.commit()
# Close connection
conn.close()
Here is an extract from the relevant PostgreSQL documentation: "COPY with a file name instructs the PostgreSQL server to directly read from or write to a file. The file must be accessible to the server and the name must be specified from the viewpoint of the server. When STDIN or STDOUT is specified, data is transmitted via the connection between the client and the server."
That is the reason why COPY to or from a file is restricted to a PostgreSQL superuser: the file must be present on the server and is loaded directly by the server process.
You should instead use:
with open(r'C:\Users\n\Desktop\data.csv') as f:
    cur.copy_from(f, 'temp_unicommerce_status', sep=',')
as suggested by the other answer, because copy_from internally uses COPY FROM STDIN (note that copy_from takes a file object and the table name as a string, not a bare path).
You can use d6tstack which makes this simple
import d6tstack
import glob
c = d6tstack.combine_csv.CombinerCSV([r'C:\Users\n\Desktop\data.csv']) # single-file
c = d6tstack.combine_csv.CombinerCSV(glob.glob('*.csv')) # multi-file
c.to_psql_combine('postgresql+psycopg2://psqlusr:psqlpwdpsqlpwd@localhost/psqltest', 'tablename')
It also deals with data schema changes, create/append/replace table and allows you to preprocess data with pandas.
I know this question has been answered, but here are my two cents. I am adding a little more description:
You can use the cursor.copy_from method.
First you have to create a table with the same number of columns as your csv file.
Example:
My csv looks like this:
Name,age,college,id_no,country,state,phone_no
demo_name,22,bdsu,1456,demo_co,demo_da,9894321_
First create a table:
import psycopg2
from psycopg2 import Error

connection = psycopg2.connect(user="demo_user",
                              password="demo_pass",
                              host="127.0.0.1",
                              port="5432",
                              database="postgres")
cursor = connection.cursor()

create_table_query = '''CREATE TABLE data_set
    (Name TEXT NOT NULL,
     age TEXT NOT NULL,
     college TEXT NOT NULL,
     id_no TEXT NOT NULL,
     country TEXT NOT NULL,
     state TEXT NOT NULL,
     phone_no TEXT NOT NULL);'''
cursor.execute(create_table_query)
connection.commit()
Now you can simply use cursor.copy_from, which takes three parameters: first the file object, second the table name, and third the separator. You can copy now:
f = open(r'final_data.csv', 'r')
cursor.copy_from(f, 'data_set', sep=',')
f.close()
Done.
I am going to post some of the errors I ran into trying to copy a csv file to a database on a Linux-based system.
here is an example csv file:
Name,Age,Height
bob,23,59
tom,56,67
You must install the psycopg2 library (e.g. pip install psycopg2 or sudo apt install python3-psycopg2).
You must have Postgres installed on your system before you can use psycopg2 (sudo apt install postgresql postgresql-contrib).
Now you must create a database to store the csv, unless you already have Postgres set up with a pre-existing database.
COPY CSV USING POSTGRES COMMANDS
After installing, Postgres creates a default user account, postgres, which gives you access to Postgres commands.
To switch to the postgres account and access the prompt, issue: sudo -u postgres psql
-- command to create a database
create database mytestdb;
-- connect to the database to create a table
\connect mytestdb;
-- create a table with the same csv column names
create table test(name char(50), age char(50), height char(50));
-- copy the csv file into the table
copy test from 'path/to/csv' with csv header;
COPY CSV USING PYTHON
The main issue I ran into when copying the CSV file to a database was that I didn't have the database created yet; however, this can still be done with Python, as sketched below.
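A minimal sketch of creating the database from Python first (connecting to the default postgres maintenance database is an assumption about your setup):
import psycopg2

# connect to the default maintenance database to issue CREATE DATABASE
conn = psycopg2.connect(host='localhost', dbname='postgres',
                        user='postgres', password='')
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction block
cur = conn.cursor()
cur.execute('create database mytestdb;')
cur.close()
conn.close()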
import psycopg2  # import the Postgres library

# connect to the database
conn = psycopg2.connect(host='localhost',
                        dbname='mytestdb',
                        user='postgres',
                        password='')

# create a cursor object
# cursor object is used to interact with the database
cur = conn.cursor()

# create table with same headers as csv file
cur.execute('''create table test(name char(50), age char(50), height char(50));''')

# open the csv file using python standard file I/O
# copy file into the table just created
f = open('file.csv', 'r')
cur.copy_from(f, 'test', sep=',')
f.close()

# persist the table and the copied rows
conn.commit()
Try to do the same as the superuser, postgres. On a Linux system you could change the file's permissions or move the file to /tmp. The problem results from the server process missing credentials to read from the filesystem.

Add new field to Access table using python

I have an Access table to which I am trying to add fields programmatically using Python. It is not a personal geodatabase, just a standard Access database with some tables in it.
I have been able to access the table and get the list of field names and data types.
How do I add a new field and assign its data type in this Access table using Python?
Thanks!
SRP
Using the pyodbc module:
import pyodbc
MDB = 'c:/path/to/my.mdb'
DRV = '{Microsoft Access Driver (*.mdb)}'
PWD = 'my_password'
conn = pyodbc.connect('DRIVER=%s;DBQ=%s;PWD=%s' % (DRV,MDB,PWD))
c = conn.cursor()
c.execute("ALTER TABLE my_table ADD COLUMN my_column INTEGER;")
conn.commit()
c.close()
conn.close()
Edit:
Using win32com.client...
import win32com.client
conn = win32com.client.Dispatch(r'ADODB.Connection')
DSN = 'PROVIDER=Microsoft.Jet.OLEDB.4.0;DATA SOURCE=c:/path/to/my.mdb;'
conn.Open(DSN)
conn.Execute("ALTER TABLE my_table ADD COLUMN my_column INTEGER;")
conn.Close()
