I hope you can help me with this:
I am trying to upload a GeoPackage (.gpkg, geo-referenced) table to a MariaDB (MySQL) server.
Not to PostgreSQL; to MariaDB (MySQL).
If I run the import from the OSGeo4W console, it works perfectly, using a command string, for example:
string=ogr2ogr -f MySQL MySQL:ServerX,host=10.00.00.00,user=userX,password=passX E:\QGIS\cup.shp -nln datatable_name -update -overwrite -lco engine=MYISAM
but I don't see how to integrate it into Python code.
What I do is call that previous string with os.system(string).
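(For reference, the same call via subprocess instead of os.system; just a sketch, assuming ogr2ogr is on the PATH of the Python process, which surfaces the exit code instead of failing silently:)
import subprocess

# Same command as the string above, passed as an argument list;
# check=True raises CalledProcessError if ogr2ogr exits with an error
subprocess.run([
    "ogr2ogr", "-f", "MySQL",
    "MySQL:ServerX,host=10.00.00.00,user=userX,password=passX",
    r"E:\QGIS\cup.shp",
    "-nln", "datatable_name",
    "-update", "-overwrite",
    "-lco", "engine=MYISAM",
], check=True)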
In the module I have imported:
import mysql.connector
from osgeo import ogr
etc
but it always returns an error:
ERROR 1: Unable to find driver `MySQL'.
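(A quick way to check whether the GDAL build behind the Python bindings includes the MySQL driver at all; a minimal diagnostic sketch:)
from osgeo import ogr

# Prints None if the GDAL linked to these Python bindings was built
# without MySQL support, which would explain the "Unable to find driver" error
print(ogr.GetDriverByName('MySQL'))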
Thank You
Related
I'm working on upgrading a legacy script to Python 3, but the script hangs during a database delete command (DELETE FROM). The script shows no error, and the logger contains only the is_connected result, which is true. Here's my test script, based on the main.py file but containing only the call that deletes the contents of a table and resets its auto-increment prior to repopulating the table.
Here's my test.py file.
#!/usr/bin/env python3
import json
import hashlib
from pprint import pprint
import mysql.connector
import configparser
import re
import random
import requests
import sys
import logging
# Load config for database variables
config = configparser.ConfigParser()
config.read("config.ini")
logFile = "logger.log"
# Set up logging
logging.basicConfig(format='%(asctime)s %(message)s', filename=logFile,level=logging.DEBUG)
# Connect to the MySQL database
cnx = mysql.connector.connect(
    host=config["MySQL"]["host"],
    user=config["MySQL"]["user"],
    port=config["MySQL"]["port"],
    passwd=config["MySQL"]["password"],
    database=config["MySQL"]["database"]
)
cur = cnx.cursor()
logging.debug(cnx.is_connected())
# Clear the database ready to re-import
# Clear lookup tables first
cur.execute("DELETE FROM member_additional_skills")
logging.debug("Delete Done")
cur.execute("ALTER TABLE member_additional_skills AUTO_INCREMENT = 1")
cnx.commit()
logging.debug("Finished!")
print("Done")
I've left this running for 20 minutes and still nothing else is logged after it declares True for being connected, and the process is still running. Is there anything I've missed here?
*** EDIT ***
The process is still in htop but is not using CPU, so it seems to have crashed, right? And as I write this I have the following as the output of my python3 test.py command line:
client_loop: send disconnect: Broken pipe
and the process is no longer in htop.
I should point out that this table has no more than 30 rows in it, so I would expect it to complete in milliseconds.
Thanks
You can use TRUNCATE TABLE member_additional_skills, or DROP TABLE member_additional_skills followed by CREATE TABLE member_additional_skills(....);
that is much faster.
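In the test script above that would be the sketch below; note that TRUNCATE implicitly resets AUTO_INCREMENT in MySQL, so the separate ALTER TABLE step is no longer needed:
# TRUNCATE both empties the table and resets its AUTO_INCREMENT counter
cur.execute("TRUNCATE TABLE member_additional_skills")
cnx.commit()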
You might be having a lock issue:
Sometimes with the MySQL connector, table locks aren't released properly, making the table unchangeable.
Try restarting the database and trying again!
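One way to confirm a lock is the culprit is to open a second connection while the script hangs and inspect the server's session list (a sketch; the connection parameters are assumed to come from the same config.ini as the test script):
import configparser
import mysql.connector

config = configparser.ConfigParser()
config.read("config.ini")

# A separate connection so the server can be inspected while the other session hangs
cnx2 = mysql.connector.connect(
    host=config["MySQL"]["host"],
    user=config["MySQL"]["user"],
    port=config["MySQL"]["port"],
    passwd=config["MySQL"]["password"],
    database=config["MySQL"]["database"]
)
cur2 = cnx2.cursor()
cur2.execute("SHOW FULL PROCESSLIST")
for row in cur2.fetchall():
    # Sessions in states like 'Waiting for table metadata lock' identify the blocker
    print(row)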
I am trying to run a simple test script that connects to a MySQL DB using SQLAlchemy.
The code is as follows:
from sqlalchemy import (create_engine, Table, Column, Integer, String, MetaData)
import settings
import sys
try:
    db = create_engine('mysql://daniel:dani@localhost/test')
    db.connect()
except:
    print('opps ', sys.exc_info()[1])
I get the following error:
dlopen(//anaconda/lib/python3.5/site-packages/_mysql.cpython-35m-darwin.so, 2): Library not loaded: libssl.1.0.0.dylib
Referenced from: //anaconda/lib/python3.5/site-packages/_mysql.cpython-35m-darwin.so
Reason: image not found
[Finished in 1.4s]
But running this in the terminal:
locate libssl.1.0.0.dylib
I get:
/Applications/Dtella.app/Contents/Frameworks/libssl.1.0.0.dylib
/Applications/XAMPP/xamppfiles/lib/libssl.1.0.0.dylib
/Users/dpereira14/anaconda/envs/dato-env/lib/libssl.1.0.0.dylib
/Users/dpereira14/anaconda/lib/libssl.1.0.0.dylib
/Users/dpereira14/anaconda/pkgs/openssl-1.0.1k-1/lib/libssl.1.0.0.dylib
/anaconda/lib/libssl.1.0.0.dylib
/anaconda/pkgs/openssl-1.0.2g-0/lib/libssl.1.0.0.dylib
/opt/local/lib/libssl.1.0.0.dylib
/usr/local/Cellar/openssl/1.0.1j/lib/libssl.1.0.0.dylib
I have no clue how to fix this error.
Thanks!
I also had some problems with SQLAlchemy and MySQL. I changed localhost in the create_engine call to 127.0.0.1:port, and also had to use pymysql. It ended up working with this:
engine = create_engine('mysql+pymysql://user:password@127.0.0.1:port/db')
pymysql is installed via pip.
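A complete minimal version for reference (a sketch; user, password, port and db are placeholders to fill in):
# pip install sqlalchemy pymysql
from sqlalchemy import create_engine, text

engine = create_engine('mysql+pymysql://user:password@127.0.0.1:3306/db')
with engine.connect() as conn:
    # A trivial round-trip query to confirm the connection actually works
    print(conn.execute(text("SELECT 1")).scalar())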
You are using Python, so you have to add the mysqldb driver to the mysql URL. Try the code below.
try:
    db = create_engine('mysql+mysqldb://daniel:dani@localhost/test')
    db.connect()
except:
    print('opps ', sys.exc_info()[1])
I have a directory (/home/usuario/Desktop/Example) with one database (MyBBDD.db) and a file (script.py) that runs an UPDATE command.
If in the terminal I'm in the directory "Example", script.py works fine, but if I'm not in the directory "Example" and I execute script.py like this:
"python /home/usuario/Desktop/Example/script.py", it doesn't work; the error is: "no such table: name_table".
Does somebody know what the problem is?
Thanks in advance.
Best regards.
Code, as requested in the comments (script.py):
import urllib
import sqlite3
conn = sqlite3.connect('MyBBDD.db')
c = conn.cursor()
c.execute("UPDATE...")
conn.commit()
c.close()
conn.close()
When you create a connection object with sqlite3 in script.py, use the absolute file path, i.e.
con = sqlite3.connect('/home/usuario/Desktop/Example/MyBBDD.db')
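Alternatively, build the path from the script's own location so it works regardless of the current working directory (a sketch):
import os
import sqlite3

# Resolve the database file relative to this script, not the working directory
db_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'MyBBDD.db')
conn = sqlite3.connect(db_path)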
I wrote a simple Flask app to pass some data to Spark. The script works in IPython Notebook, but not when I try to run it on its own server. I don't think that the Spark context is running within the script. How do I get Spark working in the following example?
from flask import Flask, request
from pyspark import SparkConf, SparkContext

app = Flask(__name__)

conf = SparkConf()
conf.setMaster("local")
conf.setAppName("SparkContext1")
conf.set("spark.executor.memory", "1g")
sc = SparkContext(conf=conf)

@app.route('/accessFunction', methods=['POST'])
def toyFunction():
    posted_data = sc.parallelize([request.get_data()])
    return str(posted_data.collect()[0])

if __name__ == '__main__':
    app.run(port=8080)
In IPython Notebook I don't define the SparkContext because it is automatically configured. I don't remember how I set this up; I followed some blogs.
On the Linux server I have set the .py to always be running and installed the latest Spark by following up to step 5 of this guide.
Edit:
Following the advice by davidism I have now instead resorted to simple programs with increasing complexity to localise the error.
Firstly I created .py with just the script from the answer below (after appropriately adjusting the links):
import sys
try:
    sys.path.append("your/spark/home/python")
    from pyspark import context
    print("Successfully imported Spark Modules")
except ImportError as e:
    print("Can not import Spark Modules", e)
This returns "Successfully imported Spark Modules". However, the next .py file I made returns an exception:
from pyspark import SparkContext
sc = SparkContext('local')
rdd = sc.parallelize([0])
print(rdd.count())
This returns exception:
"Java gateway process exited before sending the driver its port number"
Searching around for similar problems, I found this page, but when I run that code nothing happens: no print on the console and no error messages. Similarly, this did not help either; I get the same Java gateway exception as above. I have also installed Anaconda, as I heard this may help unite Python and Java, but again no success...
Any suggestions about what to try next? I am at a loss.
Okay, so I'm going to answer my own question in the hope that someone out there won't suffer the same days of frustration! It turns out it was a combination of missing code and bad set up.
Editing the code:
I did indeed need to initialise a Spark Context by appending the following in the preamble of my code:
from pyspark import SparkContext
sc = SparkContext('local')
So the full code will be:
from pyspark import SparkContext
sc = SparkContext('local')

from flask import Flask, request
app = Flask(__name__)

@app.route('/whateverYouWant', methods=['POST'])  # can set first param to '/'
def toyFunction():
    posted_data = sc.parallelize([request.get_data()])
    return str(posted_data.collect()[0])

if __name__ == '__main__':
    app.run(port=8080)  # note set to 8080!
Editing the setup:
It is essential that the file (yourfilename.py) is in the correct directory, namely it must be saved to the folder /home/ubuntu/spark-1.5.0-bin-hadoop2.6.
Then issue the following command within the directory:
./bin/spark-submit yourfilename.py
which initiates the service at 10.0.0.XX:8080/accessFunction/.
Note that the port must be set to 8080 or 8081: Spark only allows the web UI on these ports by default, for the master and worker respectively.
You can test the service with a REST client or by opening up a new terminal and sending POST requests with cURL commands:
curl --data "DATA YOU WANT TO POST" http://10.0.0.XX:8080/accessFunction/
I was able to fix this problem by adding the location of PySpark and py4j to the path in my flaskapp.wsgi file. Here's the full content:
import sys
sys.path.insert(0, '/var/www/html/flaskapp')
sys.path.insert(1, '/usr/local/spark-2.0.2-bin-hadoop2.7/python')
sys.path.insert(2, '/usr/local/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip')
from flaskapp import app as application
Modify your .py file as shown in the linked guide 'Using IPython Notebook with Spark', second point. Instead of sys.path.insert, use sys.path.append. Try inserting this snippet:
import sys
try:
    sys.path.append("your/spark/home/python")
    from pyspark import context
    print("Successfully imported Spark Modules")
except ImportError as e:
    print("Can not import Spark Modules", e)
I'm trying to access AWS using Boto, and it's not working. I've installed Boto, and the boto.cfg in /etc. Here's my code:
import requests, json
import datetime
import hashlib
import boto
conn = boto.connect_s3()
Here's the error:
Traceback (most recent call last):
File "boto.py", line 4, in <module>
import boto
File "/home/mydir/public_html/boto.py", line 6, in <module>
conn = boto.connect_s3()
AttributeError: 'module' object has no attribute 'connect_s3'
What the hell? This isn't complicated.
It looks like the file you're working on is called boto.py. I think what's happening here is that your file is importing itself--Python looks for modules in the directory containing the file doing the import before it looks on your PYTHONPATH. Try changing the name to something else.
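A quick way to confirm the shadowing is to check which file Python actually imported (a sketch):
import boto

# If this prints a path inside your project instead of site-packages,
# your own boto.py is still shadowing the installed package
print(boto.__file__)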
Use the Connection classes.
e.g.
from boto.s3.connection import S3Connection
from boto.sns.connection import SNSConnection
from boto.ses.connection import SESConnection
def connect_s3(self):
    return S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

def connect_sns(self):
    return SNSConnection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

def connect_ses(self):
    return SESConnection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
@valdogg21
I am following your instructions and put this into my code:
from boto.s3.connection import S3Connection
conn = S3Connection('<aws access key>', '<aws secret key>')
But despite my good intentions, it results in a small error. I just did sudo pip install boto --upgrade to ensure I have the latest version installed.
This is the error message. Just wondering if I am a lone wolf or if others encounter this issue...
from boto.s3.connection import S3Connection
ImportError: cannot import name S3Connection
You may need to do something similar to how I had to utilize the EC2Connection class in some of my code, which looks like this:
from boto.ec2.connection import EC2Connection
conn = EC2Connection(...)
Also, from their docs (http://boto.s3.amazonaws.com/s3_tut.html):
>>> from boto.s3.connection import S3Connection
>>> conn = S3Connection('<aws access key>', '<aws secret key>')
EDIT: I know that doc page has the shortcut function you're trying to use, but I saw a similar problem when trying to do the same type of shortcut with EC2.
I have tried all of your solutions, but none of them seems to work. I keep going over StackOverflow, as I cannot see anyone else having this rather small issue. The weird thing is that on the server it works like a charm; the issue is only on my Mac.
I had this issue and was facing the same error when using boto3 and moto to mock an S3 bucket:
boto3.connect_s3()
I switched my library back to boto and it worked fine. It looks like boto3 has migrated connect_s3() to resource():
boto.connect_s3()      # works
boto3.resource('s3')   # works
I could resolve a similar issue for AWS Lambda too:
boto.connect_awslambda()   # works
boto3.client('lambda')     # works
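For example, the boto3 equivalents look like this (a sketch; credentials are assumed to come from the usual environment variables or config files):
import boto3

# High-level resource API (replaces boto.connect_s3())
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)

# Low-level client API (replaces boto.connect_awslambda())
lambda_client = boto3.client('lambda')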