I've used startup scripts on Google Cloud Compute Instances:
setsid python home/junaid_athar/pull.py
And I can run the following script on the VM without issue when logged in at the root directory:
setsid python3 home/junaid_athar/btfx.py
However, when I add setsid python3 home/junaid_athar/btfx.py to the startup-script, it throws an error saying:
ImportError: cannot import name 'opentype'
The same script runs fine when I'm logged in, but not when I run it as a startup script. Why, and how do I resolve it?
Update: I'm pretty new to programming, and hack away. Here's the script:
import logging
import time
import sys
import json
from btfxwss import BtfxWss
from google.cloud import bigquery

log = logging.getLogger(__name__)
fh = logging.FileHandler('/home/junaid_athar/test.log')
fh.setLevel(logging.CRITICAL)
sh = logging.StreamHandler(sys.stdout)
sh.setLevel(logging.CRITICAL)
log.addHandler(sh)
log.addHandler(fh)
logging.basicConfig(level=logging.DEBUG, handlers=[fh, sh])

def stream_data(dataset_id, table_id, json_data):
    bigquery_client = bigquery.Client()
    dataset_ref = bigquery_client.dataset(dataset_id)
    table_ref = dataset_ref.table(table_id)
    data = json.loads(json_data)
    # Get the table from the API so that the schema is available.
    table = bigquery_client.get_table(table_ref)
    rows = [data]
    errors = bigquery_client.create_rows(table, rows)

wss = BtfxWss()
wss.start()

while not wss.conn.connected.is_set():
    time.sleep(2)

# Subscribe to some channels
wss.subscribe_to_trades('BTCUSD')

# Do something else
t = time.time()
while time.time() - t < 5:
    pass

# Accessing data stored in BtfxWss:
trades_q = wss.trades('BTCUSD')  # returns a Queue object for the pair.

while True:
    while not trades_q.empty():
        item = trades_q.get()
        if item[0][0] == 'te':
            json_data = {'SEQ': item[0][0], 'ID': item[0][1][0], 'TIMESTAMP': int(str(item[0][1][1])[:10]), 'PRICE': item[0][1][3], 'AMOUNT': item[0][1][2], 'UNIQUE_TS': item[0][1][1], 'SOURCE': 'bitfinex'}
            stream_data('gdax', 'btfxwss', json.dumps(json_data))

# Unsubscribing from channels:
wss.unsubscribe_from_trades('BTCUSD')

# Shutting down the client:
wss.stop()
I'm running it on a Standard 1-CPU 3.75mem machine. (Debian GNU/Linux 9 (stretch)).
I THINK the problem is with the install directory of python3 and its modules, and the difference between how startup scripts are run vs. being logged into the machine -- how do I troubleshoot that?
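One way to see the difference is to log the interpreter and environment from inside the startup script and compare it with an SSH session (a debugging sketch; the log path is arbitrary):

import getpass
import os
import sys

# Write out which interpreter, module search path, and user the script sees.
# Run this both from the startup script and from an SSH session, then diff the two logs.
with open('/tmp/env_probe.log', 'w') as f:
    f.write('executable: {}\n'.format(sys.executable))
    f.write('sys.path: {}\n'.format(sys.path))
    f.write('user: {}\n'.format(getpass.getuser()))
    f.write('cwd: {}\n'.format(os.getcwd()))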
Figured out what was going on. Startup scripts are run as root. I added -u username to the start of the startup script, and it ran as though I were SSH'ed into the server. All is good, thanks all for your help!
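For anyone landing here, the fix amounts to making the startup script run the command as your own user instead of root. A minimal sketch of such a startup script (the sudo -u form is an assumption; adjust the username and path to your setup):

#! /bin/bash
# Run the script as the regular user so it picks up that user's
# installed Python packages and home directory instead of root's.
sudo -u junaid_athar setsid python3 /home/junaid_athar/btfx.py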
I have run into a peculiar issue while using the teradatasql package (installed from pypi). I use the following code (let's call it pytera.py) to query a database:
import os

import pandas as pd
import teradatasql
from dotenv import load_dotenv

# Load the database credentials from .env file
_ = load_dotenv()
db_host = os.getenv('db_host')
db_username = os.getenv('db_username')
db_password = os.getenv('db_password')

def run_query(query):
    """Run query string on teradata and return DataFrame."""
    if query.strip()[-1] != ';':
        query += ';'
    with teradatasql.connect(host=db_host, user=db_username,
                             password=db_password) as connect:
        df = pd.read_sql(query, connect)
    return df
When I import this function in the IPython/Python interpreter or in Jupyter Notebook, I can run queries just fine like so:
import pytera as pt
pt.run_query('select top 5 * from table_name;')
However, if I save the above code in a .py file and try to run it, I get an error message most of the time (not all the time). The error message is below.
E teradatasql.OperationalError: [Version 16.20.0.49] [Session 0] [Teradata SQL Driver] Hostname lookup failed for None
E at gosqldriver/teradatasql.(*teradataConnection).makeDriverError TeradataConnection.go:1046
E at gosqldriver/teradatasql.(*Lookup).getAddresses CopDiscovery.go:65
E at gosqldriver/teradatasql.discoverCops CopDiscovery.go:137
E at gosqldriver/teradatasql.newTeradataConnection TeradataConnection.go:133
E at gosqldriver/teradatasql.(*teradataDriver).Open TeradataDriver.go:32
E at database/sql.dsnConnector.Connect sql.go:600
E at database/sql.(*DB).conn sql.go:1103
E at database/sql.(*DB).Conn sql.go:1619
E at main.goCreateConnection goside.go:229
E at main._cgoexpwrap_e6e101e164fa_goCreateConnection _cgo_gotypes.go:214
E at runtime.call64 asm_amd64.s:574
E at runtime.cgocallbackg1 cgocall.go:316
E at runtime.cgocallbackg cgocall.go:194
E at runtime.cgocallback_gofunc asm_amd64.s:826
E at runtime.goexit asm_amd64.s:2361
E Caused by lookup None on <ip address redacted>: server misbehaving
I am using Python 3.7.3 and teradatasql 16.20.0.49 on Ubuntu (WSL) 18.04.
Perhaps not coincidentally, I run into a similar issue when trying a similar workflow on Windows (using the teradata package with the Teradata Python drivers installed). It works when I connect inside the interpreter or in Jupyter, but not in a script. In the Windows case, the error is:
E teradata.api.DatabaseError: (10380, '[08001] [Teradata][ODBC] (10380) Unable to establish connection with data source. Missing settings: {[DBCName]}')
I have a feeling that there's something basic that I'm missing, but I can't find a solution to this anywhere.
Thanks ravioli for the fresh eyes. Turns out the issue was loading the environment variables using dotenv. My module is in a Python package (a separate folder), while my script and .env file are in the working directory.
dotenv successfully reads the environment variables (.env in my working directory) when I run the code from my original post line by line in the interpreter or in Jupyter. However, when I run the same code in a script, it does not find the .env file in my working directory. That will be a separate question I'll have to find the answer to.
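For what it's worth, the likely reason is that load_dotenv() with no arguments searches for .env starting from the file that calls it, not from the working directory, so it behaves differently in an interactive session than inside a packaged module. A sketch of one way around it (find_dotenv with usecwd=True is provided by the python-dotenv package; the rest mirrors the code above):

import os

from dotenv import find_dotenv, load_dotenv

# Search for .env starting from the current working directory
# instead of the location of the calling module.
load_dotenv(find_dotenv(usecwd=True))
db_host = os.getenv('db_host')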
The updated module now takes the credentials as arguments:
import pandas as pd
import teradatasql

def run_query(query, db_host, db_username, db_password):
    """Run query string on teradata and return DataFrame."""
    if query.strip()[-1] != ';':
        query += ';'
    with teradatasql.connect(host=db_host, user=db_username,
                             password=db_password) as connect:
        df = pd.read_sql(query, connect)
    return df
The code below runs fine in a script now:
import os

import pytera as pt
from dotenv import load_dotenv

_ = load_dotenv()
db_host = os.getenv('db_host')
db_username = os.getenv('db_username')
db_password = os.getenv('db_password')

data = pt.run_query('select top 5 * from table_name;', db_host, db_username, db_password)
It looks like your client can't find the Teradata server, which is why you see that DBCName missing error. This should be the "system name" of your Teradata server (e.g., TDServProdA).
A couple things to try:
If you are trying to connect directly with a hostname, try disabling COP discovery in your connection with the cop=false connection parameter (see the sketch after these steps).
Try updating your hosts file on your local system. From the documentation:
Modifying the hosts File
If your site does not use DNS, you must define the IP address and the Teradata Database name to use in the system hosts file on the computer.
1. Locate the hosts file on the computer. This file is typically located in the following folder: %SystemRoot%\system32\drivers\etc
2. Open the file with a text editor, such as Notepad.
3. Add the following entry to the file: xxx.xx.xxx.xxx sssCOP1, where xxx.xx.xxx.xxx is the IP address and sss is the Teradata Database name.
4. Save the hosts file.
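For the first option, a minimal sketch of what the connection might look like (the host and credentials are placeholders; cop is a documented teradatasql connection parameter):

import pandas as pd
import teradatasql

# Hypothetical host/credentials; cop='false' disables COP hostname discovery,
# so the driver connects to the given host directly.
with teradatasql.connect(host='tdserv.example.com', user='db_username',
                         password='db_password', cop='false') as con:
    df = pd.read_sql('select current_timestamp;', con)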
I've changed a script by commenting out a section and adding some print statements above it for testing purposes. The problem is that when I run the script from any directory other than the one it's in, Python runs the old version of the script.
Clearing __pycache__ had no effect.
Here's the python script in question:
import discord
import watson
import configparser
import os
from discord.ext.commands import Bot
from getpass import getuser

print("WHY ISN'T THIS BEING CALLED!!")
#print(os.getcwd())
#os.chdir("/home/{}/discordBotStaging".format(getuser()))
#print(os.getcwd())
#if(not os.path.isfile("./config.ini")):
#    print("No config file found, make sure you have one in the same directory as this python script\nexiting")
#    quit()

config = configparser.ConfigParser()
config.read("./config.ini")
TOKEN = config["DEFAULT"]["DISCORD_KEY"]
BOT_PREFIX = ("!", "$")

client = Bot(command_prefix=BOT_PREFIX)

@client.command(name="memealyze",
                description="When uploading image include command \"!memealyze\" | \"!Memealyze\" | \"!MemeAlyze\" | \"!ma\" as a comment",
                brief="Neural network put to good use",
                aliases=["Memealyze", "MemeAlyze", "ma"],
                pass_context=True)
async def GetMemeContents(context):
    await client.say("Sending image to the mothership, hold tight.")
    if(not context.message.attachments):
        await client.say(
            "Couldn't find image attachement. Make sure you include \"!memealyze\" or any of it's variants as a comment when submitting an image")
        return
    imageUrl = str(context.message.attachments[0]["url"])
    messageContent = ""
    resultDict = watson.ReturnWatsonResults(imageUrl)
    for key, val in resultDict.items():
        messageContent += "{} : {}%\n".format(key, val)
    await client.say("Done, the boys at IBM said they found this:\n" + messageContent)

client.run(TOKEN)
And here's the issue:
yugnut@RyzenBuild:~$ python3 discordBotStaging/main.py
No config file found, make sure you have one in the same directory as this python script
exiting
yugnut@RyzenBuild:~$
yugnut@RyzenBuild:~/discordBotStaging$ python3 main.py
WHY ISN'T THIS BEING CALLED!!
^Cyugnut@RyzenBuild:~/discordBotStaging$
EDITS:
@ShadowRanger's suggestions:
Try moving the print above all the imports in your script.
This yields promising results: I do get the print output by trying this, but right after that I still run into the same issue.
You can't use relative paths like that if the config file is expected to be in the same directory as the script; you have to do config.read(os.path.join(os.path.dirname(__file__), 'config.ini'))
I think my ignorance is showing :^). I changed this in my script as well.
After making these edits, and even after trying to run the script with my import configparser line commented out, I still get the same error.
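One quick diagnostic that would narrow this down (a sketch, not from the original thread): print exactly which file and interpreter are being used at the very top of the script, before anything else runs:

import os
import sys

# Confirm which copy of the script and which interpreter are actually running;
# a stale duplicate elsewhere on the filesystem would show up here.
print("running:", os.path.abspath(__file__))
print("interpreter:", sys.executable)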
I have been searching for a couple of days for a solution, without success.
We have a Windows service built to copy some files from one location to another.
So I built the code shown below with Python 3.7.
The full code can be found on GitHub.
When I run the service using Python, all is working fine; I can install the service and also start it, using these commands:
Install the service:
python jis53_backup.py install
Run the service:
python jis53_backup.py start
I then compile this code using pyinstaller with the command:
pyinstaller -F --hidden-import=win32timezone jis53_backup.py
After the exe is created, I can install the service, but when trying to start it I get the error:
Error starting service: The service did not respond to the start or
control request in a timely fashion
I have gone through multiple posts on Stack Overflow and Google related to this error, but without success. I don't have the option to install Python 3.7 on the PCs that will need to run this service; that's why we are trying to get a .exe built.
I have made sure to have the path updated according to the information that I found in the different questions.
Image of path definitions:
I also copied the pywintypes37.dll file.
From -> Python37\Lib\site-packages\pywin32_system32
To -> Python37\Lib\site-packages\win32
Does anyone have any other suggestions on how to get this working?
'''
Windows service to copy a file from one location to another
at a certain interval.
'''
import sys
import time
from distutils.dir_util import copy_tree

import servicemanager
import win32serviceutil
import win32service

from HelperModules.CheckFileExistance import check_folder_exists, create_folder
from HelperModules.ReadConfig import (check_config_file_exists,
                                      create_config_file, read_config_file)
from ServiceBaseClass.SMWinService import SMWinservice

sys.path += ['filecopy_service/ServiceBaseClass',
             'filecopy_service/HelperModules']

class Jis53Backup(SMWinservice):
    _svc_name_ = "Jis53Backup"
    _svc_display_name_ = "JIS53 backup copy"
    _svc_description_ = "Service to copy files from server to local drive"

    def start(self):
        self.conf = read_config_file()
        if not check_folder_exists(self.conf['dest']):
            create_folder(self.conf['dest'])
        self.isrunning = True

    def stop(self):
        self.isrunning = False

    def main(self):
        self.ReportServiceStatus(win32service.SERVICE_RUNNING)
        while self.isrunning:
            # Copy the files from the server to a local folder
            # TODO: build function to trigger only when a file is changed.
            copy_tree(self.conf['origin'], self.conf['dest'], update=1)
            time.sleep(30)

if __name__ == '__main__':
    # Guard the argv check so running with no arguments doesn't raise IndexError
    if len(sys.argv) > 1 and sys.argv[1] == 'install':
        if not check_config_file_exists():
            create_config_file()
    if len(sys.argv) == 1:
        servicemanager.Initialize()
        servicemanager.PrepareToHostSingle(Jis53Backup)
        servicemanager.StartServiceCtrlDispatcher()
    else:
        win32serviceutil.HandleCommandLine(Jis53Backup)
I was also facing this issue after compiling with pyinstaller. For me, the issue was that I was building the paths to the config and log files dynamically, for example:
curr_path = os.path.dirname(os.path.abspath(__file__))
configs_path = os.path.join(curr_path, 'configs', 'app_config.json')
opc_configs_path = os.path.join(curr_path, 'configs', 'opc.json')
log_file_path = os.path.join(curr_path, 'logs', 'application.log')
This was working fine when I started the service using python service.py install/start, but after compiling it with pyinstaller, it always failed with the "did not respond in a timely fashion" error.
To resolve this, I made all the dynamic paths static, for example:
configs_path = 'C:\\Program Files (x86)\\ScantechOPC\\configs\\app_config.json'
opc_configs_path = 'C:\\Program Files (x86)\\ScantechOPC\\configs\\opc.json'
debug_file = 'C:\\Program Files (x86)\\ScantechOPC\\logs\\application.log'
After compiling via pyinstaller, it is now working fine without any error. It looks like with dynamic paths, the frozen executable doesn't resolve the actual path to the files, which causes the error.
Hope this solves your problem too. Thanks
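If hard-coding paths is undesirable, a middle ground is to resolve paths relative to the executable when frozen (a sketch, not the original poster's code; sys.frozen is an attribute PyInstaller sets on the bundled executable):

import os
import sys

# PyInstaller sets sys.frozen on the bundled executable; fall back to the
# source file's directory when running as a plain script.
if getattr(sys, 'frozen', False):
    base_dir = os.path.dirname(sys.executable)
else:
    base_dir = os.path.dirname(os.path.abspath(__file__))

configs_path = os.path.join(base_dir, 'configs', 'app_config.json')
log_file_path = os.path.join(base_dir, 'logs', 'application.log')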
I'm trying to run a Python script that simulates traffic sensors sending data in real time to Pub/Sub, on my Google Cloud Shell. I'm getting this error:
Traceback (most recent call last):
File "./send_sensor_data.py", line 87, in <module>
psclient = pubsub.Client()
AttributeError: 'module' object has no attribute 'Client'
Tried printing google.cloud.pubsub.__file__; no duplicate module files exist.
I've been searching everywhere, and the popular consensus was to install the pubsub package into a virtual environment, which I've tried to no avail.
What I've tried so far:
Set VM to clean state
Uninstalled and reinstalled all gcloud components
Updated all gcloud components to the latest version
Uninstalled and reinstalled the Python pubsub library
Installed pubsub inside a virtualenv
Tried from a different project
Tried from a different GCP account
This is my script:
import time
import gzip
import logging
import argparse
import datetime
from google.cloud import pubsub

TIME_FORMAT = '%Y-%m-%d %H:%M:%S'
TOPIC = 'sandiego'
INPUT = 'sensor_obs2008.csv.gz'

def publish(topic, events):
    numobs = len(events)
    if numobs > 0:
        with topic.batch() as batch:
            logging.info('Publishing {} events from {}'.format(
                numobs, get_timestamp(events[0])))
            for event_data in events:
                batch.publish(event_data)

def get_timestamp(line):
    # look at first field of row
    timestamp = line.split(',')[0]
    return datetime.datetime.strptime(timestamp, TIME_FORMAT)

def simulate(topic, ifp, firstObsTime, programStart, speedFactor):
    # sleep computation
    def compute_sleep_secs(obs_time):
        time_elapsed = (datetime.datetime.utcnow() - programStart).seconds
        sim_time_elapsed = (obs_time - firstObsTime).seconds / speedFactor
        to_sleep_secs = sim_time_elapsed - time_elapsed
        return to_sleep_secs

    topublish = list()

    for line in ifp:
        event_data = line  # entire line of input CSV is the message
        obs_time = get_timestamp(line)  # from first column

        # how much time should we sleep?
        if compute_sleep_secs(obs_time) > 1:
            # notify the accumulated topublish
            publish(topic, topublish)  # notify accumulated messages
            topublish = list()  # empty out list

            # recompute sleep, since notification takes a while
            to_sleep_secs = compute_sleep_secs(obs_time)
            if to_sleep_secs > 0:
                logging.info('Sleeping {} seconds'.format(to_sleep_secs))
                time.sleep(to_sleep_secs)

        topublish.append(event_data)

    # left-over records; notify again
    publish(topic, topublish)

def peek_timestamp(ifp):
    # peek ahead to next line, get timestamp and go back
    pos = ifp.tell()
    line = ifp.readline()
    ifp.seek(pos)
    return get_timestamp(line)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Send sensor data to Cloud Pub/Sub in small groups, simulating real-time behavior')
    parser.add_argument('--speedFactor', help='Example: 60 implies 1 hour of data sent to Cloud Pub/Sub in 1 minute', required=True, type=float)
    args = parser.parse_args()

    # create Pub/Sub notification topic
    logging.basicConfig(format='%(levelname)s: %(message)s', level=logging.INFO)
    psclient = pubsub.Client()
    topic = psclient.topic(TOPIC)
    if not topic.exists():
        logging.info('Creating pub/sub topic {}'.format(TOPIC))
        topic.create()
    else:
        logging.info('Reusing pub/sub topic {}'.format(TOPIC))

    # notify about each line in the input file
    programStartTime = datetime.datetime.utcnow()
    with gzip.open(INPUT, 'rb') as ifp:
        header = ifp.readline()  # skip header
        firstObsTime = peek_timestamp(ifp)
        logging.info('Sending sensor data from {}'.format(firstObsTime))
        simulate(topic, ifp, firstObsTime, programStartTime, args.speedFactor)
The pubsub.Client class exists only up to version 0.27.0 of the pubsub Python package, so I just created a virtual environment and installed version 0.27.0 into it.
Here are the commands:
virtualenv venv
source venv/bin/activate
pip install google-cloud-pubsub==0.27.0
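To double-check that the pinned version is the one the environment actually loads (a quick sanity check, not part of the original answer):
pip show google-cloud-pubsub
python -c "from google.cloud import pubsub; print(pubsub.Client)"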
Solution for Google Cloud Platform is:
Modify the send_sensor_data.py file as follows:
a. Comment out the original import statement for pubsub and use the _v1 version:
#from google.cloud import pubsub
from google.cloud import pubsub_v1
b. Find this code and replace it as follows:
#publisher = pubsub.PublisherClient()
publisher = pubsub_v1.PublisherClient()
Then execute your send_sensor_data.py as follows:
./send_sensor_data.py --speedFactor=60 --project=YOUR-PROJECT-NAME
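Note that the rest of the script has to change too, since the v1 API has no topic objects or batch context managers. A minimal sketch of the v1-style publish flow (the project ID is a placeholder; the topic name is the question's):

from google.cloud import pubsub_v1

# The v1 client addresses topics by fully qualified path.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path('your-project-id', 'sandiego')

# publish() takes bytes and returns a future; result() blocks until
# the server has accepted the message.
future = publisher.publish(topic_path, data=b'event payload')
future.result()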
There's no pubsub.Client class. You need to choose a PublisherClient or SubscriberClient; see https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/pubsub/google/cloud/pubsub.py
How do I run OpenERP on uWSGI?
I found this wsgi script online, but I'm not sure where to place it?
import openerp

try:
    import uwsgi
    uwsgi.post_fork_hook = openerp.wsgi.core.on_starting
except:
    openerp.wsgi.core.on_starting()

# Equivalent of --load command-line option
openerp.conf.server_wide_modules = ['web']

# internal TODO: use openerp.conf.xxx when available
conf = openerp.tools.config

# Path to the OpenERP Addons repository (comma-separated for
# multiple locations)
conf['addons_path'] = '/home/openerp/addons/trunk,/home/openerp/web/trunk/addons'

# Optional database config if not using local socket
#conf['db_name'] = 'mycompany'
#conf['db_host'] = 'localhost'
#conf['db_user'] = 'foo'
#conf['db_port'] = 5432
#conf['db_password'] = 'secret'

# OpenERP Log Level
# DEBUG=10, DEBUG_RPC=8, DEBUG_RPC_ANSWER=6, DEBUG_SQL=5, INFO=20,
# WARNING=30, ERROR=40, CRITICAL=50
# conf['log_level'] = 20

# If --static-http-enable is used, path for the static web directory
#conf['static_http_document_root'] = '/var/www'

# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
application = openerp.wsgi.core.application
I installed OpenERP in a virtual environment in /var/www/openerp/venv and I can run it by calling $ openerp-server.
Thanks in advance.
You can just put the script file in the same directory as the openerp-server.py file.
However, when I tested it, it did not work, since gunicorn cannot find openerp in the import openerp statement. The reason is that openerp is not installed as a Python module on the system by the usual installation procedures.
I think it will work when you do an OpenERP install with the DEB package (when you make such an install, you should disable the start script so it only runs from gunicorn).
Let me also make a test install and share the result.
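In the meantime, assuming the script above is saved as openerp-wsgi.py in that same directory and openerp is importable, a typical uWSGI invocation would look something like this (the port, process count, and file name are placeholders):
uwsgi --http :8069 --wsgi-file openerp-wsgi.py --master --processes 2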