I am exporting a Python script that sends commands to the IP addresses of projectors to shut them down. The code works, and the projectors shut down. The list of projectors is stored in a dictionary in a separate file from the script, so that it can be edited and accessed by other scripts.
Once I export my script to an exe using PyInstaller v3.3.1 for Python 3.5.1, the .exe file no longer updates from the .py file that contains the dictionary; instead it has a version already baked in that I cannot update.
How can I make the executable still read the dictionary and pick up changes every time it is run?
Thanks,
Josh
Code:
dictonaryfile.py (reduced for security, but format shown).
projectors = {
    '1': '192.168.0.1'
}
Script that performs shutdown
from pypjlink import Projector
from file3 import projectors

for item in projectors:
    try:
        myProjector = Projector.from_address(projectors[item])
        myProjector.authenticate('PASSWORD REMOVED FOR SECURITY')
        state = myProjector.get_power()
        try:
            if state == 'on':
                myProjector.set_power('off')
                print('Successfully powered off: ', item)
        except:
            print('The projector in ', item, ', raised an error, shutdown may not have occurred')
            continue
    except:
        print(item, 'did not respond to state request, it is likely powered off at the wall.')
        continue
As you noticed, once an exe is made, you can't update it. A workaround for a problem like this is to ask for the location of dictonaryfile.py in your code:
from pypjlink import Projector

projector_location = input('Enter projector file location: ')
with open(projector_location) as f:
    for line in f:
        # extract data from f
        ...
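To fill in the "extract data" step, one option is to keep the mapping in a plain text file as a Python dict literal and parse it with ast.literal_eval. This is a sketch, not the asker's code; the inline string stands in for the contents of the file the user points to:

```python
import ast

# Sketch: the projector map lives in an external text file as a dict
# literal, e.g. {'1': '192.168.0.1'}. Here the string stands in for
# open(projector_location).read(). literal_eval parses it safely,
# without importing or executing the file as a module.
text = "{'1': '192.168.0.1', '2': '192.168.0.45'}"
projectors = ast.literal_eval(text)

for name, address in projectors.items():
    print(name, address)
```

Because the file is read at runtime rather than imported, the frozen exe sees every edit on the next run.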
For applications like these, it's a good idea to use a configuration file (.ini), and Python has configparser to read config files. You could create your config file as:
[Projectors]
1 = 192.168.0.1
2 = 192.168.0.45
; ...and so on
And read projectors from this file with Configparser -
import configparser

config = configparser.ConfigParser()
config.read(projector_location)

projectors = config['Projectors']
for k, v in projectors.items():
    # your code here
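A minimal, self-contained sketch of that configparser flow (note that section names are case-sensitive, and values should not be quoted, because configparser keeps quote characters as part of the value):

```python
import configparser

# Parse an inline config equivalent to the .ini file shown above;
# read_string stands in for config.read(projector_location).
config = configparser.ConfigParser()
config.read_string("""
[Projectors]
1 = 192.168.0.1
2 = 192.168.0.45
""")

projectors = config['Projectors']
for name, address in projectors.items():
    print(name, '->', address)
```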
Related
I'm trying to build a project consisting of multiple Python files. The first file, called "startup.py", is just responsible for opening connections to multiple routers and switches (each device allows only one connection at a time) and saving them to a list. This script should be running all the time so other files can use it.
#startup.py
def validate_connections_to_leaves():
    leaves = yaml_utils.load_yaml_file_from_directory("inventory", topology)["fabric_leaves"]
    leaves_connections = []
    for leaf in leaves:
        leaf_ip = leaf["ansible_host"]
        leaf_user = leaf["ansible_user"]
        leaf_pass = leaf["ansible_pass"]
        leaf_cnx = junos_utils.open_fabric_connection(host=leaf_ip, user=leaf_user, password=leaf_pass)
        if leaf_cnx:
            leaves_connections.append(leaf_cnx)
        else:
            log.script_logger(severity="ERROR", message="Unable to connect to Leaf", data=leaf_ip, debug=debug, indent=0)
    return leaves_connections

if __name__ == '__main__':
    leaves = validate_connections_to_leaves()
    pprint(leaves)
    # Keep script running
    while True:
        time.sleep(10)
Now I want to re-use these opened connections in other Python file(s) without having to establish the connections again. If I just import startup into another file, it will re-execute the startup script one more time.
Can anyone help me identify which part I'm missing here?
You should consider your startup.py file as your entry point where all the logic is. Your other files should be imported and used inside this file:
import otherfile1
import otherfile2
# import other files here

def validate_connections_to_leaves():
    # ...

if __name__ == '__main__':
    leaves = validate_connections_to_leaves()
    otherfile1.do_something_with_the_connection(leaves)
    # Keep script running
    while True:
        time.sleep(10)
And in your other file it will be simply:
def do_something_with_the_connection(leaves):
    # do something with the connections
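The underlying reason importing startup.py re-runs the connection setup is that Python executes all top-level code on import; only the block under the `if __name__ == '__main__':` guard is skipped. A tiny sketch of that behavior, with the real junos connections replaced by placeholder strings:

```python
# startup-style module: safe to import, because the side effects live
# under the __main__ guard.
def validate_connections_to_leaves():
    # placeholder connection objects instead of real device sessions
    return ["conn-leaf1", "conn-leaf2"]

if __name__ == '__main__':
    # runs only when this file is executed directly, not when imported
    leaves = validate_connections_to_leaves()
    print(leaves)
```

When another module does `import startup`, the function definition is executed but the guarded block is not, so no second set of connections is opened.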
I'm pretty new to Python and I have a strange issue which I can't manage to understand on my own. I'm sure it's stupid, but I can't see what it is and have never encountered it before, even having written several Python scripts with lots of subfiles.
For the record, I'm coding and launching my script with Spyder (Python 3.6) on Windows, but I set #!/usr/lib/python2.7/ at the beginning of each file.
My main script is a big file and I wanted to refactor it by moving code out into some other files.
The main is like this:
if __name__ == "__main__":
    configuration = Conf.loadConf(os.path.join(scriptDir, confFile))
    print(configuration)
    loadFavs(configuration, bioses, setDict)
When loadFavs is in the main script, everything works fine.
As soon as I move it into a fav.py file at the same level as my main script, adding import fav and changing the call to fav.loadFavs(configuration, bioses, setDict), it stops working, and Spyder just prints nothing without giving any reason:
In [1]: runfile('C:/DevZone/workspaceFX/scripts4recalbox/BestArcade/fav.py', wdir='C:/DevZone/workspaceFX/scripts4recalbox/BestArcade')
In [2] runfile('C:/DevZone/workspaceFX/scripts4recalbox/BestArcade/fav.py', wdir='C:/DevZone/workspaceFX/scripts4recalbox/BestArcade')
The first line, configuration = Conf.loadConf(os.path.join(scriptDir, confFile)), should print things on screen, and it doesn't even show.
As soon as I put the code back in the main script, it works again.
It happens with several different parts of the script that I tried to move into different files.
I'm at a loss here. What I checked:
having at the beginning of each file:
#!/usr/lib/python2.7/
# -*- coding: utf-8 -*-
always ending the script on an empty line
creating each file within Spyder and not outside
I don't think the code I moved is the issue, as it works fine in the main script and I had the issue with several pieces of code, but here it is:
def parseSetFile(setFile, setDict):
    file = open(setFile, 'r')
    genre = None
    # Parse iniFile in iniFile dir
    for line in file.readlines():
        line = line.rstrip('\n\r ')
        if line.startswith('[') and not line == '[FOLDER_SETTINGS]' and not line == '[ROOT_FOLDER]':
            genre = line
            if genre not in setDict:
                setDict[genre] = []
        else:
            if genre is not None and not line == '':
                setDict[genre].append(line)

def loadFavs(configuration, bioses, setDict):
    print("Load favs small set")
    parseSetFile(os.path.join(configuration['scriptDir'], dataDir, smallSetFile), setDict)
    print("Load favs big set")
    parseSetFile(os.path.join(configuration['scriptDir'], dataDir, bigSetFile), setDict)
    print('Nb Genre : %s' % len(setDict))
    sumGames = 0
    for key in setDict.keys():
        # print(key)
        # print(setDict[key])
        sumGames = sumGames + len(setDict[key])
    print('Nb Games : %s' % sumGames)
    print('Nb Bios : %s' % len(bioses))
OK, I'm effectively massively stupid:
In [1]: runfile('C:/DevZone/workspaceFX/scripts4recalbox/BestArcade/fav.py', wdir='C:/DevZone/workspaceFX/scripts4recalbox/BestArcade')
I was launching my fav.py subscript, not the main one, and of course it doesn't have any main...
This is the code where I'm dumping all the data from a .csv file into MongoDB. What is strange is that it runs perfectly well on my Mac, but when I upload this code to Windows Azure running Ubuntu 12.04.3 LTS, only the main code gets executed and the function is not called. Here's the code I'm using:
import csv, json, glob, traceback
from pymongo import MongoClient
import datetime
import sys
import string

def make_document(column_headers, columns, timestamps):
    #assert len(column_headers) == len(columns)
    lotr = filter(lambda x: x[0] is not None, zip(column_headers, columns))
    final = []
    #print lotr
    if not timestamps == {}:
        for k, v in lotr:
            try:
                tformat = timestamps[k]
                time_val = datetime.datetime.strptime(v, tformat)
                final.append((k, time_val))
            except KeyError:
                final.append((k, v))
        return dict(final)
    else:
        return dict(lotr)

def keep_printable_only(s):
    return filter(lambda x: x in string.printable, s)

def perform(conf):
    client = MongoClient(conf["server"], conf["port"])
    db = client[conf["db"]]
    collection = db[conf["collection"]]
    files = glob.glob(conf["data_form"])
    column_headers = conf["columns"]
    csv_opts = {}
    for k, v in conf["csv_options"].items():
        csv_opts[str(k)] = str(v)
    for infile in files:
        #print conf["csv_options"]
        inCSV = csv.reader(open(infile, 'rU'), **csv_opts)
        counter = 0
        for record in inCSV:
            yield record
            counter += 1
            if counter == 2:
                print record
                #sys.exit(0)
            record = map(keep_printable_only, record)
            try:
                doc = make_document(column_headers, record, conf["timestamp_columns"])
                collection.insert(doc)
            except:
                print "error loading one of the lines : "
                print traceback.format_exc()

if __name__ == '__main__':
    print "reads all data files of same format as given in column mapping and dumps them to a mongo collection"
    print "uses conf.json.test as config file"
    conf = json.load(open('./conf.json.txt'))
    for row in perform(conf):
        record = map(keep_printable_only, row)
When I run this on Azure, the mongo collection is not created and the code terminates after printing the two lines in the main code. I have no idea why this is happening.
Debug output would be very useful, in the form of a stack trace, as commented by @Alfe.
Further than that, it looks like your code stops at the line where you try to access a local file to read the configuration. Make sure that you can access the filesystem that way in Azure; sometimes providers put very strict walls between your code and the actual machine.
You can make your code more portable by using:
import os
import os.path

conf_filehandle = open(os.path.join(os.getcwd(), 'conf.json'))
conf = json.load(conf_filehandle)
Of course, you should also make sure that you have uploaded the JSON file to Azure :)
I tried looking at the documentation for running ZEO on a ZODB database, but it isn't working how they say it should.
I can get a regular ZODB running fine, but I would like to make the database accessible by several processes for a program, so I am trying to get ZEO to work.
I created this script in a folder with a subfolder zeo, which will hold the "database.fs" files created by the make_server function in a different parallel process:
CODE:
from ZEO import ClientStorage
import ZODB
import ZODB.config
import os, time, site, subprocess, multiprocessing

# make the server for the database in a separate process with a windows command
def make_server():
    runzeo_path = site.getsitepackages()[0] + "\Lib\site-packages\zeo-4.0.0-py2.7.egg\ZEO\\runzeo.py"
    filestorage_path = os.getcwd() + '\zeo\database.fs'
    subprocess.call(["python", runzeo_path, "-a", "127.0.0.1:9100", "-f", filestorage_path])

if __name__ == "__main__":
    server_process = multiprocessing.Process(target=make_server)
    server_process.start()
    time.sleep(5)

    storage = ClientStorage.ClientStorage(('localhost', 9100), wait=False)
    db = ZODB.DB(storage)
    connection = db.open()
    root = connection.root()
The program will just block at the ClientStorage line if wait=False is not given.
If wait=False is given, it produces this error:
Error Message:
Traceback (most recent call last):
  File "C:\Users\cbrown\Google Drive\EclipseWorkspace\NewSpectro - v1\20131202\2 - database\zeo.py", line 17, in <module>
    db = ZODB.DB(storage)
  File "C:\Python27\lib\site-packages\zodb-4.0.0-py2.7.egg\ZODB\DB.py", line 443, in __init__
    temp_storage.load(z64, '')
  File "C:\Python27\lib\site-packages\zeo-4.0.0-py2.7.egg\ZEO\ClientStorage.py", line 841, in load
    data, tid = self._server.loadEx(oid)
  File "C:\Python27\lib\site-packages\zeo-4.0.0-py2.7.egg\ZEO\ClientStorage.py", line 88, in __getattr__
    raise ClientDisconnected()
ClientDisconnected
Here is the output from the cmd prompt for my process which runs a server:
------
2013-12-06T21:07:27 INFO ZEO.runzeo (7460) opening storage '1' using FileStorage
------
2013-12-06T21:07:27 WARNING ZODB.FileStorage Ignoring index for C:\Users\cab0008\Google Drive\EclipseWorkspace\NewSpectro - v1\20131202\2 - database\zeo\database.fs
------
2013-12-06T21:07:27 INFO ZEO.StorageServer StorageServer created RW with storages: 1:RW:C:\Users\cab0008\Google Drive\EclipseWorkspace\NewSpectro - v1\20131202\2 - database\zeo\database.fs
------
2013-12-06T21:07:27 INFO ZEO.zrpc (7460) listening on ('127.0.0.1', 9100)
What could I be doing wrong? I just want this to work locally right now so there shouldn't be any need for fancy web stuff.
You should use proper process management and simplify your life. You likely want to look into supervisor, which can be responsible for running/starting/stopping your application and ZEO.
Otherwise, you need to look at the double-fork trick to daemonize ZEO -- but why bother when a process management tool like supervisor does this for you.
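For illustration, a supervisor setup for this could look roughly like the fragment below; the program names, paths, and the ZEO address are assumptions, not taken from the question:

```ini
; supervisord.conf fragment (a sketch): supervisor starts ZEO before the
; app and restarts either process if it dies.
[program:zeo]
command=runzeo -a 127.0.0.1:9100 -f /srv/app/zeo/database.fs
autostart=true
autorestart=true
priority=10

[program:app]
command=python /srv/app/main.py
autostart=true
autorestart=true
priority=20
```

The lower `priority` value makes supervisor start the ZEO server first, so the client never has to sleep and hope the server is up.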
If you are savvy with relational database administration, and already have a relational database at your disposal -- you can also consider RelStorage as a very good ZODB (low-level) storage backend.
In Windows you should use double \\ instead of a single \ in paths. An easy and portable way to accomplish this is to use the os.path.join() function, e.g. os.path.join(os.getcwd(), 'zeo', 'database.fs'). Otherwise, similar code worked OK for me.
I had the same error on Windows; on Linux everything was OK.
Your code is OK. To make this work, change the following:
C:\Python33\Lib\site-packages\ZEO-4.0.0-py3.3.egg\ZEO\zrpc\trigger.py, line 235:
self.trigger.send(b'x')
C:\Python33\Lib\site-packages\ZEO-4.0.0-py3.3.egg\ZEO\zrpc\client.py, lines 458-459: comment them out.
Here are those lines:
if socktype != socket.SOCK_STREAM:
    continue
I'm building a web application (Python and Django) that allows users to upload PDF files for other users to download. How do I prevent a user from uploading a virus embedded in the PDF?
Update:
I found this code on django snippets that uses clamcv. Would this do the job?
def clean_file(self):
    file = self.cleaned_data.get('file', '')
    # check a file in form for viruses
    if file:
        from tempfile import mkstemp
        import pyclamav
        import os
        tmpfile = mkstemp()[1]
        f = open(tmpfile, 'wb')
        f.write(file.read())
        f.close()
        isvirus, name = pyclamav.scanfile(tmpfile)
        os.unlink(tmpfile)
        if isvirus:
            raise forms.ValidationError(
                "WARNING! Virus \"%s\" was detected in this file. "
                "Check your system." % name)
    return file
Well, in general you can use any virus scanning software to accomplish this task. Just
generate a command line string which calls the virus scanner on your file, then
use Python's subprocess to run the command line string, like so:
try:
    command_string = 'my_virusscanner -parameters ' + uploaded_file
    result = subprocess.check_output(command_string, stderr=subprocess.STDOUT, shell=True)
    # if needed, do something with "result"
except subprocess.CalledProcessError as e:
    # if your scanner gives an error code when detecting a virus, you'll end up here
    pass
except:
    # something else went wrong
    # check sys.exc_info() for info
    pass
Without checking the source code, I assume that pyclamav.scanfile is doing more or less the same, so if you trust clamav, you should be fine. If you don't trust it, use the approach above with the virus scanner of your choice.
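With ClamAV's command-line scanner, for example, the exit code already tells you the result (0 = no virus, 1 = virus found, 2 = error), so you can branch on the return code instead of parsing output. A sketch, assuming clamscan is installed and on the PATH:

```python
import subprocess

# clamscan's documented exit codes: 0 = clean, 1 = virus(es) found, 2 = error
def interpret_clamscan(returncode):
    return {0: 'clean', 1: 'infected', 2: 'error'}.get(returncode, 'unknown')

def scan_file(path):
    # --no-summary suppresses the statistics block; we only need the exit code
    proc = subprocess.run(
        ['clamscan', '--no-summary', path],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return interpret_clamscan(proc.returncode)
```

Passing the arguments as a list (rather than shell=True with string concatenation) also avoids shell-injection risks from user-supplied filenames.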
You can use the django-safe-filefield package to validate that the uploaded file extension matches its MIME type. Example:
settings.py

CLAMAV_SOCKET = 'unix://tmp/clamav.sock'  # or tcp://127.0.0.1:3310
CLAMAV_TIMEOUT = 30  # 30 seconds timeout, None by default which means infinite

forms.py

from safe_filefield.forms import SafeFileField

class MyForm(forms.Form):
    attachment = SafeFileField(
        scan_viruses=True,
    )