I have a directory (/home/usuario/Desktop/Example) with a database (MyBBDD.db) and a script (script.py) that runs an UPDATE command.
If the terminal is in the "Example" directory, script.py works fine, but if I'm not in "Example" and I run it like this:
"python /home/usuario/Desktop/Example/script.py", it doesn't work; the error is: "no such table: name_table".
Does anybody know what the problem is?
Thanks in advance.
Best regards.
script.py (code added from the comments):
import urllib
import sqlite3
conn = sqlite3.connect('MyBBDD.db')
c = conn.cursor()
c.execute("UPDATE...")
conn.commit()
c.close()
conn.close()
When you create a connection object with sqlite3 in script.py, use the absolute file path, i.e.
con = sqlite3.connect('/home/usuario/Desktop/Example/MyBBDD.db')
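The reason is that sqlite3.connect('MyBBDD.db') resolves the relative path against the current working directory, and silently creates a new, empty database there if the file doesn't exist, hence "no such table". Alternatively, you can build the path from the script's own location so it works from any directory; a minimal sketch:

import os
import sqlite3

# Resolve MyBBDD.db relative to this script file, not the current working directory
db_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'MyBBDD.db')
conn = sqlite3.connect(db_path)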
Let's see if you could help me with this:
I am trying to upload a GeoPackage (.gpkg, geo-referenced) file (table) to a MariaDB (MySQL) server.
Not to Postgres; MariaDB (MySQL).
If I run it from the OSGeo4W console, it works perfectly as a string, for example:
string=ogr2ogr -f MySQL MySQL:ServerX,host=10.00.00.00,user=userX,password=passX E:\QGIS\cup.shp -nln datatable_name -update -overwrite -lco engine=MYISAM
but I don't see how to integrate it into Python code.
What I do is call that previous string with os.system and run it: os.system(string).
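For reference, this is roughly how I invoke it (server address, credentials, and paths are placeholders, as in the example above):

import os

# Placeholder server, credentials, and paths from the OSGeo4W example
string = ('ogr2ogr -f MySQL MySQL:ServerX,host=10.00.00.00,user=userX,password=passX '
          r'E:\QGIS\cup.shp -nln datatable_name -update -overwrite -lco engine=MYISAM')
os.system(string)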
I have imported these in the module:
import mysql.connector
from osgeo import ogr
etc
but it always returns an error:
ERROR 1: Unable to find driver `MySQL'.
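A quick way to check whether the GDAL/OGR build that Python is loading actually includes the MySQL driver (a diagnostic sketch; the OSGeo4W ogr2ogr and the Python osgeo bindings may be different GDAL builds):

from osgeo import ogr

# List the vector drivers compiled into the GDAL build that Python uses;
# if 'MySQL' is missing here, this GDAL was built without MySQL support
for i in range(ogr.GetDriverCount()):
    print(ogr.GetDriver(i).GetName())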
Thank You
I'm working on a legacy script upgrade to Python 3; however, the script hangs during a database delete command (DELETE FROM). The script shows no error, and the logger contains only the is_connected result, which is True. Here's my test script, based on the main.py file but with only the call that deletes the contents of a table and resets its auto increment prior to repopulating the table.
Here's my test.py file.
#!/usr/bin/env python3
import json
import hashlib
from pprint import pprint
import mysql.connector
import configparser
import re
import random
import requests
import sys
import logging
# Load config for database variables
config = configparser.ConfigParser()
config.read("config.ini")
logFile = "logger.log"
# Set up logging
logging.basicConfig(format='%(asctime)s %(message)s', filename=logFile,level=logging.DEBUG)
# Connect to the MySQL database
cnx = mysql.connector.connect(
    host=config["MySQL"]["host"],
    user=config["MySQL"]["user"],
    port=config["MySQL"]["port"],
    passwd=config["MySQL"]["password"],
    database=config["MySQL"]["database"]
)
cur = cnx.cursor()
logging.debug(cnx.is_connected())
# Clear the database ready to re-import
# Clear lookup tables first
cur.execute("DELETE FROM member_additional_skills")
logging.debug("Delete Done")
cur.execute("ALTER TABLE member_additional_skills AUTO_INCREMENT = 1")
cnx.commit()
logging.debug("Finished!")
print("Done")
I've left this running for 20 minutes and still nothing else is logged after it declares True to being connected, and the process is still running. Is there anything I've missed here?
*** EDIT ***
The process is still in htop but is not using CPU, so it seems to have crashed, right? And as I write this, I have the following output from my python3 test.py command line:
client_loop: send disconnect: Broken pipe
and the process is no longer in htop.
I should point out that this table has no more than 30 rows in it, so I would expect it to complete in milliseconds.
Thanks
You can use TRUNCATE TABLE member_additional_skills, or DROP TABLE member_additional_skills followed by CREATE TABLE member_additional_skills (...);
that is much faster.
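For example, a minimal sketch of the TRUNCATE variant, reusing cnx and cur from the question:

# TRUNCATE TABLE is DDL in MySQL: it causes an implicit commit and resets
# AUTO_INCREMENT to 1, so the separate ALTER TABLE step becomes unnecessary
cur.execute("TRUNCATE TABLE member_additional_skills")
logging.debug("Truncate done")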
You might be having a lock issue:
sometimes with the MySQL connector, the table locks aren't released properly, making the table unchangeable.
Try resetting the database and trying again!
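As a quick diagnostic before resetting anything, you could check for a blocking session; a minimal sketch (placeholder credentials, assuming the same mysql.connector setup as the question):

import mysql.connector

# Open a second connection so the check isn't blocked by the hanging DELETE itself
diag = mysql.connector.connect(host="...", user="...", passwd="...", database="...")
dcur = diag.cursor()

# A blocked DELETE typically shows a State like "Waiting for table metadata lock"
dcur.execute("SHOW PROCESSLIST")
for row in dcur.fetchall():
    print(row)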
I am trying to run a simple test script to connect to a MySQL DB using SQLAlchemy.
The code is as follows:
from sqlalchemy import (create_engine, Table, Column, Integer, String, MetaData)
import settings
import sys
try:
    db = create_engine('mysql://daniel:dani@localhost/test')
    db.connect()
except:
    print('oops ', sys.exc_info()[1])
I get the following error:
dlopen(//anaconda/lib/python3.5/site-packages/_mysql.cpython-35m-darwin.so, 2): Library not loaded: libssl.1.0.0.dylib
Referenced from: //anaconda/lib/python3.5/site-packages/_mysql.cpython-35m-darwin.so
Reason: image not found
[Finished in 1.4s]
But running the following in the terminal:
locate libssl.1.0.0.dylib
I get:
/Applications/Dtella.app/Contents/Frameworks/libssl.1.0.0.dylib
/Applications/XAMPP/xamppfiles/lib/libssl.1.0.0.dylib
/Users/dpereira14/anaconda/envs/dato-env/lib/libssl.1.0.0.dylib
/Users/dpereira14/anaconda/lib/libssl.1.0.0.dylib
/Users/dpereira14/anaconda/pkgs/openssl-1.0.1k-1/lib/libssl.1.0.0.dylib
/anaconda/lib/libssl.1.0.0.dylib
/anaconda/pkgs/openssl-1.0.2g-0/lib/libssl.1.0.0.dylib
/opt/local/lib/libssl.1.0.0.dylib
/usr/local/Cellar/openssl/1.0.1j/lib/libssl.1.0.0.dylib
I have no clue how to fix this error.
Thanks!
I also had some problems with SQLAlchemy and MySQL. I changed localhost in the create_engine to 127.0.0.1:port and also had to use pymysql. It ended up working with this:
engine = create_engine('mysql+pymysql://user:password@127.0.0.1:port/db')
pymysql is installed via pip.
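A minimal end-to-end sketch of that approach (credentials, port, and database name are placeholders):

from sqlalchemy import create_engine, text

# pip install sqlalchemy pymysql; user, password, port, and db are placeholders
engine = create_engine('mysql+pymysql://user:password@127.0.0.1:3306/db')
with engine.connect() as conn:
    print(conn.execute(text('SELECT 1')).scalar())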
You are using Python, so you have to add MySQLdb alongside mysql. Try the code below.
try:
    db = create_engine('mysql+mysqldb://daniel:dani@localhost/test')
    db.connect()
except:
    print('oops ', sys.exc_info()[1])
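A note on the driver: for Python 3, the MySQLdb module is provided by the mysqlclient package (pip install mysqlclient); I'm assuming that's the driver intended here.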
I wrote a simple Flask app to pass some data to Spark. The script works in an IPython Notebook, but not when I try to run it on its own server. I don't think the Spark context is running within the script. How do I get Spark working in the following example?
from flask import Flask, request
from pyspark import SparkConf, SparkContext

app = Flask(__name__)

conf = SparkConf()
conf.setMaster("local")
conf.setAppName("SparkContext1")
conf.set("spark.executor.memory", "1g")
sc = SparkContext(conf=conf)

@app.route('/accessFunction', methods=['POST'])
def toyFunction():
    posted_data = sc.parallelize([request.get_data()])
    return str(posted_data.collect()[0])

if __name__ == '__main__':
    app.run(port=8080)
In the IPython Notebook I don't define the SparkContext because it is configured automatically. I don't remember how I did this; I followed some blogs.
On the Linux server I have set the .py to always be running and installed the latest Spark by following up to step 5 of this guide.
Edit:
Following the advice from davidism, I have now resorted to simple programs of increasing complexity to localise the error.
Firstly I created a .py with just the script from the answer below (after appropriately adjusting the links):
import sys
try:
    sys.path.append("your/spark/home/python")
    from pyspark import context
    print("Successfully imported Spark Modules")
except ImportError as e:
    print("Can not import Spark Modules", e)
This returns "Successfully imported Spark Modules". However, the next .py file I made returns an exception:
from pyspark import SparkContext
sc = SparkContext('local')
rdd = sc.parallelize([0])
print(rdd.count())
This raises the exception:
"Java gateway process exited before sending the driver its port number"
Searching around for similar problems I found this page, but when I run that code nothing happens: no print on the console and no error messages. Similarly, this did not help either; I get the same Java gateway exception as above. I have also installed Anaconda, as I heard this may help unite Python and Java, but again no success...
Any suggestions about what to try next? I am at a loss.
Okay, so I'm going to answer my own question in the hope that someone out there won't suffer the same days of frustration! It turns out it was a combination of missing code and bad setup.
Editing the code:
I did indeed need to initialise a Spark Context by appending the following in the preamble of my code:
from pyspark import SparkContext
sc = SparkContext('local')
So the full code will be:
from pyspark import SparkContext
sc = SparkContext('local')

from flask import Flask, request
app = Flask(__name__)

@app.route('/whateverYouWant', methods=['POST'])  # can set first param to '/'
def toyFunction():
    posted_data = sc.parallelize([request.get_data()])
    return str(posted_data.collect()[0])

if __name__ == '__main__':
    app.run(port=8080)  # note set to 8080!
Editing the setup:
It is essential that the file (yourfilename.py) is in the correct directory, namely it must be saved to the folder /home/ubuntu/spark-1.5.0-bin-hadoop2.6.
Then issue the following command within the directory:
./bin/spark-submit yourfilename.py
which initiates the service at 10.0.0.XX:8080/accessFunction/ .
Note that the port must be set to 8080 or 8081: by default Spark only allows a web UI on these ports, for the master and worker respectively.
You can test the service with a RESTful client, or by opening a new terminal and sending POST requests with cURL:
curl --data "DATA YOU WANT TO POST" http://10.0.0.XX:8080/accessFunction/
I was able to fix this problem by adding the location of PySpark and py4j to the path in my flaskapp.wsgi file. Here's the full content:
import sys
sys.path.insert(0, '/var/www/html/flaskapp')
sys.path.insert(1, '/usr/local/spark-2.0.2-bin-hadoop2.7/python')
sys.path.insert(2, '/usr/local/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip')
from flaskapp import app as application
Modify your .py file as shown in the linked guide 'Using IPython Notebook with Spark', first part, second point. Instead of sys.path.insert, use sys.path.append. Try inserting this snippet:
import sys
try:
    sys.path.append("your/spark/home/python")
    from pyspark import context
    print("Successfully imported Spark Modules")
except ImportError as e:
    print("Can not import Spark Modules", e)
I'm trying to gather a group of defs in one file so I can just import them whenever I write a script in Python.
I have tried this:
def get_dblink( dbstring):
    """
    Return a database cnx.
    """
    global psycopg2
    try:
        cnx = psycopg2.connect( dbstring)
    except Exception, e:
        print "Unable to connect to DB. Error [%s]" % ( e,)
        exit( )
but I get this error: global name 'psycopg2' is not defined
In my main file script.py I have:
import psycopg2, psycopg2.extras
from misc_defs import *
hostname = '192.168.10.36'
database = 'test'
username = 'test'
password = 'test'
dbstring = "host='%s' dbname='%s' user='%s' password='%s'" % ( hostname, database, username, password)
cnx = get_dblink( dbstring)
Can anyone give me a hand?
You just need to import psycopg2 in your first snippet.
If you need to, there's no problem with also importing it in the second snippet (Python makes sure modules are only imported once). Trying to use globals for this is bad practice.
So: at the top of every module, import every module that is used within that particular module.
Also note that from x import * (with wildcards) is generally frowned upon: it clutters your namespace and makes your code less explicit.
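A minimal sketch of the corrected misc_defs.py, keeping the question's Python 2 style (note the original also never returned cnx, which script.py expects):

import psycopg2

def get_dblink(dbstring):
    """
    Return a database cnx.
    """
    try:
        cnx = psycopg2.connect(dbstring)
    except Exception, e:
        print "Unable to connect to DB. Error [%s]" % (e,)
        exit()
    # Return the connection so callers like script.py actually receive it
    return cnx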