I'm trying to collect a group of function definitions in one file so that I can simply import them whenever I write a Python script.
I have tried this:
def get_dblink(dbstring):
    """
    Return a database cnx.
    """
    global psycopg2
    try:
        cnx = psycopg2.connect(dbstring)
    except Exception, e:
        print "Unable to connect to DB. Error [%s]" % (e,)
        exit()
    return cnx
but I get this error: global name 'psycopg2' is not defined
In my main file script.py I have:
import psycopg2, psycopg2.extras
from misc_defs import *
hostname = '192.168.10.36'
database = 'test'
username = 'test'
password = 'test'
dbstring = "host='%s' dbname='%s' user='%s' password='%s'" % ( hostname, database, username, password)
cnx = get_dblink( dbstring)
Can anyone give me a hand?
You just need to import psycopg2 in your first snippet.
If you need to, there's no problem in also importing it in the second snippet (Python makes sure modules are only imported once). Trying to use globals for this is bad practice.
So: at the top of every module, import every module which is used within that particular module.
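The "imported once" behaviour is easy to verify: Python caches every imported module in sys.modules, so a repeated import simply reuses the cached module object. Here json stands in for psycopg2 so the snippet runs anywhere:

```python
import sys
import json
import json  # the second import is a no-op: Python reuses the cached module

# the module name maps to a single cached module object
assert sys.modules["json"] is json
```

This is why importing psycopg2 in both misc_defs.py and script.py costs essentially nothing.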
Also: note that from x import * (with wildcards) is generally frowned upon: it clutters your namespace and makes your code less explicit.
My query_distinct_data() function executes successfully when run on its own.
But when I try to import query_distinct_data() in a Jupyter notebook from my function page map_distinct_data into my main page, I get the following error:
NameError: name 'athena' is not defined
Below is my main page:
import pandas as pd
import requests
import xml.etree.ElementTree as ET
from datetime import date
import boto3
import time
import geopandas
import folium
from ipynb.fs.full.qld_2 import qld_data
from ipynb.fs.full.vic_2 import vic_data
from ipynb.fs.full.put_to_s3_bucket import put_to_s3_bucket
from ipynb.fs.full.map_distinct_data import query_distinct_data
from ipynb.fs.full.map_distinct_data import distinct_data_df
from ipynb.fs.full.map_distinct_data import create_distinct_data_map
aws_region = "ap-southeast-2"
schema_name = "fire_data"
table_name ='rfs_fire_data'
result_output_location = "s3://camgoo2-rfs-visualisation/query_results/"
bucket='camgoo2-rfs-visualisation'
athena = boto3.client("athena",region_name=aws_region)
qld_data()
vic_data()
put_to_s3_bucket()
execution_id = query_distinct_data()
df = distinct_data_df()
create_distinct_data_map()
Below is the function that I want to import from the map_distinct_data notebook. It executes successfully on its own, but I get the error when trying to import it into my main page.
def query_distinct_data():
    query = "SELECT DISTINCT * from fire_data.rfs_fire_data where state in ('NSW','VIC','QLD')"
    response = athena.start_query_execution(
        QueryString=query,
        ResultConfiguration={"OutputLocation": result_output_location})
    return response["QueryExecutionId"]
I am able to run query_distinct_data() and it executes when run separately, but it fails when I try to import the function.
The other functions that I import using ipynb.fs.full that involve athena execute okay when imported.
It is all about variable visibility scope (1, 2).
In short: the map_distinct_data module knows nothing about the main page's athena variable.
The good and correct way is to pass the athena variable into the function as a parameter:
from ipynb.fs.full.map_distinct_data import create_distinct_data_map
...
athena = boto3.client("athena",region_name=aws_region)
execution_id = create_distinct_data_map(athena)
where create_distinct_data_map should be defined as
def create_distinct_data_map(athena):
    ...
The second way is to set variable inside imported module:
from ipynb.fs.full.map_distinct_data import create_distinct_data_map
from ipynb.fs.full import map_distinct_data
athena = boto3.client("athena",region_name=aws_region)
map_distinct_data.athena = athena
execution_id = create_distinct_data_map()
Even though the second way works, it is really bad style.
Here is some must-know information about encapsulation in Python.
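Applied to the function from the question, the first (parameter-passing) way might look like this. This is only a sketch: FakeAthena is a made-up stand-in so the snippet runs without AWS; in the real notebook you would pass the boto3 client and your result_output_location instead:

```python
def query_distinct_data(athena, result_output_location):
    # the Athena client and output location now arrive as parameters,
    # so the function no longer depends on globals from the main page
    query = ("SELECT DISTINCT * FROM fire_data.rfs_fire_data "
             "WHERE state IN ('NSW','VIC','QLD')")
    response = athena.start_query_execution(
        QueryString=query,
        ResultConfiguration={"OutputLocation": result_output_location})
    return response["QueryExecutionId"]

class FakeAthena:
    # stand-in for boto3.client("athena", ...), used only for this demo
    def start_query_execution(self, QueryString, ResultConfiguration):
        return {"QueryExecutionId": "demo-id"}

print(query_distinct_data(FakeAthena(), "s3://bucket/results/"))  # demo-id
```

The main page then calls query_distinct_data(athena, result_output_location), and the NameError disappears.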
I am diving into the world of Python, practicing by writing a simple inventory application for my DVD collection as a means to get acquainted with sqlite3.
As part of my project, I am using an INI file for settings, and then reading the values from it in a shared library that is called from another file. I am curious about feedback on my methods, especially my use of the config file, and best coding practices around it.
The config file, named config.ini, is formatted as follows:
[main]
appversion = 0.1.0
datasource = data\database.db
My utils library is then formatted as follows:
import os
import sqlite3
from configparser import ConfigParser

CONFIG_PATH = os.path.join(os.path.dirname(__file__), 'config/config.ini')

def get_settings(config_path=CONFIG_PATH):
    config = ConfigParser()
    config.read(config_path)
    return config

def db_connect():
    config = get_settings()
    con = sqlite3.connect(config.get('main', 'datasource'))
    return con
Finally, my test lookup, which does work, is:
from utils import db_connect

def asset_lookup():
    con = db_connect()
    cur = con.cursor()
    cur.execute("SELECT * FROM dvd")
    results = cur.fetchall()
    for row in results:
        print(row)
My biggest question is about building the data connection from within utils.py: first I read the file, then, from within the same script, build the data connection from a setting in the INI file. Other files then use that connection. This was my attempt at being modular, but I am not sure whether it is proper.
Thanks in advance.
To directly answer your question, you could do something like this to cache your objects so you don't create/open them over and over again whenever you call one of the functions in utils.py:
import os
import sqlite3
from configparser import ConfigParser

CONFIG_PATH = os.path.join(os.path.dirname(__file__), 'config/config.ini')

config = None
con = None

def get_settings(config_path=CONFIG_PATH):
    global config
    if config is None:
        config = ConfigParser()
        config.read(config_path)
    return config

def db_connect():
    global con
    if con is None:
        config = get_settings()
        con = sqlite3.connect(config.get('main', 'datasource'))
    return con
While this might solve your problem, it relies heavily on global variables, which might cause problems elsewhere. Typically, that's where you switch to classes as containers for your code parts that belong together. For example:
import os
import sqlite3
from configparser import ConfigParser

class DVDApp:
    CONFIG_PATH = os.path.join(os.path.dirname(__file__), 'config/config.ini')

    def __init__(self):
        self.config = ConfigParser()
        self.config.read(self.CONFIG_PATH)
        self.con = sqlite3.connect(self.config.get('main', 'datasource'))

    def asset_lookup(self):
        cur = self.con.cursor()
        cur.execute("SELECT * FROM dvd")
        results = cur.fetchall()
        for row in results:
            print(row)
Initializing the config and connection objects held in self takes just three lines of code, which makes it almost unnecessary to split your code over several files. And even if you do, it is enough to share the one DVDApp instance between modules, which then holds all the other shared objects.
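The same pattern can be sketched self-contained with an in-memory SQLite database (the table and row here are invented for illustration), which also shows how little the caller has to do:

```python
import sqlite3

class InventoryDemo:
    """Same shape as DVDApp, but needs no config.ini: it connects
    to an in-memory database and seeds one row for the demo."""
    def __init__(self, datasource=":memory:"):
        self.con = sqlite3.connect(datasource)
        self.con.execute("CREATE TABLE dvd (title TEXT)")
        self.con.execute("INSERT INTO dvd VALUES ('Blade Runner')")

    def asset_lookup(self):
        cur = self.con.cursor()
        cur.execute("SELECT * FROM dvd")
        return cur.fetchall()

demo = InventoryDemo()
print(demo.asset_lookup())  # [('Blade Runner',)]
```

Usage collapses to constructing the object once and calling methods on it; every method shares the one connection.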
I want to use multiprocessing for importing modules. The 'conf' variable holds a large list of module names. The function loads class names from each module and checks whether tables for those classes have been created; if not, they are created in the database.
import importlib
from mymodule import exceptions
from datetime import datetime
from multiprocessing import Process

failed = []

def process_import(ormModule):
    print "importing ...", ormModule
    t1 = datetime.now()
    try:
        importlib.import_module(ormModule)
    except ImportError:
        exceptions.log("Unable to import ORM module %s", ormModule)
        failed.append(ormModule)
    except:
        exceptions.log("Exception while importing ORM module %s", ormModule)
        failed.append(ormModule)
    t2 = datetime.now()
    total = t2 - t1
    print "finished importing %s %s" % (ormModule, total.total_seconds())

def loadClasses(conf):
    for ormModule in conf:
        Process(target=process_import, args=(ormModule,)).start()
Some of the modules import fine, but some don't:
sqlalchemy.exc.OperationalError: (OperationalError) server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
'select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where n.nspname=current_sch() and relname=%(name)s' {'name': u'document1'}
How can I fix this problem? I'm using PostgreSQL.
Here's the code I have. I put the shebang line in there because psycopg2 wasn't working without it.
But now, with this line in there, it doesn't let me run the database; it just says "no module named 'flask'".
#!/usr/bin/python3.4
#
# Small script to show PostgreSQL and Psycopg together
#
from flask import Flask, render_template
from flask import request
from flask import *
from datetime import datetime
from functools import wraps
import time
import csv
import psycopg2

app = Flask(__name__)
app.secret_key = 'lukey'

def getConn():
    connStr = ("dbname='test' user='lukey' password='lukey'")
    conn = psycopg2.connect(connStr)
    return conn

@app.route('/')
def home():
    return render_template('index.html')

@app.route('/displayStudent', methods=['GET'])
def displayStudent():
    residence = request.args['residence']
    try:
        conn = None
        conn = getConn()
        cur = conn.cursor()
        cur.execute('SET search_path to public')
        cur.execute('SELECT stu_id,student.name,course.name,home_town FROM student,\
course WHERE course = course_id AND student.residence = %s', [residence])
        rows = cur.fetchall()
        if rows:
            return render_template('stu.html', rows=rows, residence=residence)
        else:
            return render_template('index.html', msg1='no data found')
    except Exception as e:
        return render_template('index.html', msg1='No data found', error1=e)
    finally:
        if conn:
            conn.close()

# @app.route('/addStudent', methods=['GET', 'POST'])
# def addStudent():

if __name__ == '__main__':
    app.run(debug=True)
This is an environment problem, not a Flask, Postgres, or shebang problem. A specific version of Python is being called, and it is not being given the correct path to its libraries.
Depending on what platform you are on, changing your shebang to #!/usr/bin/env python3 can fix the problem, but if not (very likely not, though using env is considered better/more portable practice these days), then you may need to add your Python 3 libs location manually in your code:
sys.path.append("/path/to/your/python/libs")
If you know where your Python libs are (or maybe flask is installed somewhere peculiar?), you can add that location to the path, and imports following that line will include it in their search for modules.
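A quick way to check which interpreter actually runs the script, and where that interpreter searches for modules such as flask, is to print both from inside the script:

```python
import sys

print(sys.executable)  # the interpreter the shebang resolved to
print(sys.path)        # the directories searched when importing flask
```

If flask's install directory is missing from that list, that is the path you need to append.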
I'm trying to import files in a Flask app based on the URL route. I started coding in Python a few days ago, so I have no idea whether I'm doing it well. I wrote this:
@app.route('/<file>')
def call(file):
    __import__('controller.' + file)
    hello = Example('Hello world')
    return hello.msg
And I have another file called example.py in a controller folder that contains this:
class Example:
    def __init__(self, msg):
        self.msg = msg
So I start the app from the terminal and try to visit localhost:5000/example.
I'm trying to show "Hello world" on screen, but I get the following error:
NameError: global name 'Example' is not defined
Thanks for all!
__import__ returns the newly imported module; names from that module are not added to your globals, so you need to get the Example class as an attribute of the returned module. Note that for a dotted name, __import__ returns the top-level package (controller here) unless you pass a fromlist:
module = __import__('controller.' + file, fromlist=[file])
hello = module.Example('Hello world')
__import__ is rather low-level; you probably want to use importlib.import_module() instead:
import importlib
module = importlib.import_module('controller.'+file)
hello = module.Example('Hello world')
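The difference is easy to see with a dotted standard-library name: plain __import__ hands back the top-level package, while importlib.import_module() returns the submodule itself:

```python
import importlib

top = __import__("xml.etree.ElementTree")               # returns the 'xml' package
sub = importlib.import_module("xml.etree.ElementTree")  # returns the submodule

print(top.__name__)  # xml
print(sub.__name__)  # xml.etree.ElementTree
```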
If you need to dynamically get the classname too, use getattr():
class_name = 'Example'
hello_class = getattr(module, class_name)
hello = hello_class('Hello world')
The Werkzeug package (used by Flask) offers a helpful function here: werkzeug.utils.import_string() imports an object dynamically:
from werkzeug.utils import import_string
object_name = 'controller.{}:Example'.format(file)
hello_class = import_string(object_name)
This encapsulates the above process.
You'll need to be extremely careful when accepting names from web requests and using them as module names. Please do sanitise the file argument and allow only alphanumerics, to prevent relative imports from being abused.
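A minimal sanitiser might look like the sketch below; the function name, regex, and error handling are assumptions to adapt to your app:

```python
import re

def safe_module_name(file):
    # allow only letters, digits and underscores, which rejects
    # path tricks such as '..', '/' or leading dots
    if not re.fullmatch(r"\w+", file):
        raise ValueError("invalid module name: %r" % file)
    return "controller." + file

print(safe_module_name("example"))  # controller.example
```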
You could use the werkzeug.utils.find_modules() function to limit the possible values for file here:
from werkzeug.utils import find_modules, import_string

module_name = 'controller.{}'.format(file)
if module_name not in set(find_modules('controller')):
    abort(404)  # no such module in the controller package
hello_class = import_string(module_name + ':Example')
I think you might not have added the directory to the path; add the following code to the previous Python program:
# Add another directory to the module search path
import sys
sys.path.insert(0, '/your_directory')

from example import Example
There are two ways for you to do imports in Python:
import example
e = example.Example('hello world')
or
from example import Example
e = Example('hello world')