I'm trying to build a project that consists of multiple Python files. The first file, "startup.py", is only responsible for opening connections to multiple routers and switches (each device allows only one connection at a time) and saving them to a list. This script should run all the time so that the other files can use the connections it holds.
#startup.py
import time
from pprint import pprint

# yaml_utils, junos_utils, log, topology and debug come from the
# project's own helper modules.

def validate_connections_to_leaves():
    leaves = yaml_utils.load_yaml_file_from_directory("inventory", topology)["fabric_leaves"]
    leaves_connections = []
    for leaf in leaves:
        leaf_ip = leaf["ansible_host"]
        leaf_user = leaf["ansible_user"]
        leaf_pass = leaf["ansible_pass"]
        leaf_cnx = junos_utils.open_fabric_connection(host=leaf_ip, user=leaf_user, password=leaf_pass)
        if leaf_cnx:
            leaves_connections.append(leaf_cnx)
        else:
            log.script_logger(severity="ERROR", message="Unable to connect to Leaf", data=leaf_ip,
                              debug=debug, indent=0)
    return leaves_connections

if __name__ == '__main__':
    leaves = validate_connections_to_leaves()
    pprint(leaves)
    # Keep script running
    while True:
        time.sleep(10)
Now I want to re-use these opened connections in other Python files without having to establish the connections again. If I just import startup.py into another file, it will re-execute the startup script one more time.
Can anyone help me identify which part I'm missing here?
You should consider your startup.py file as your entry point where all the logic is. Your other files should be imported and used inside this file.
import otherfile1
import otherfile2
# import other files here

def validate_connections_to_leaves():
    # ...

if __name__ == '__main__':
    leaves = validate_connections_to_leaves()
    otherfile1.do_something_with_the_connection(leaves)
    # Keep script running
    while True:
        time.sleep(10)
And in your other file it will be simply:
def do_something_with_the_connection(leaves):
    # do something with the connections
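If you would rather keep the connection logic importable, another option is to rely on the fact that Python executes a module's top-level code only once per process and caches it in sys.modules. A minimal sketch, assuming a hypothetical connections.py that wraps the function above:

# connections.py (hypothetical) -- imported by any file that needs the devices
_leaves_connections = None

def get_connections():
    # Opened on the first call, then reused by every later caller
    # in the same process.
    global _leaves_connections
    if _leaves_connections is None:
        _leaves_connections = validate_connections_to_leaves()
    return _leaves_connections

Note this only helps within a single process; a script started separately is a new process and cannot reuse sockets held by the first one, which is why the entry-point approach above is the usual answer.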
I have a scenario where I need to download certain image files from different directories on an SFTP server to a local machine.
Example :
/IMAGES/folder1 has img11, img12, img13, img14
/IMAGES/folder2 has img21, img22, img23, img24
/IMAGES/folder3 has img31, img32, img33, img34
And I need to download img12, img23 and img34 from folder 1, 2 and 3 respectively
Right now I go inside each folder and get the images individually, which takes an extraordinary amount of time (since there are 10,000s of images to download).
I have also found out that downloading a single file of the same size (as that of the multiple image files combined) takes a fraction of the time.
My question is: is there a way to get these multiple files together instead of downloading them one after another?
One approach I came up with was to copy all the files to a temp folder on the SFTP server and then download the directory, but SFTP does not allow 'copy', and I cannot use 'rename' because that would move the files to the temp directory.
You could use a process pool to open multiple SFTP connections and download in parallel. For example:
from multiprocessing import Pool

from paramiko import SSHClient

def download_init(host):
    # One SSH/SFTP connection per pool process, stored as globals
    # so download_worker can reuse them.
    global client, sftp
    client = SSHClient()
    client.load_system_host_keys()
    client.connect(host)
    sftp = client.open_sftp()

def download_close(dummy):
    client.close()

def download_worker(params):
    local_path, remote_path = params
    sftp.get(remote_path, local_path)

list_of_local_and_remote_files = [
    ["/client/files/folder1/img11", "/IMAGES/folder1/img11"],
    # ...
]

def downloader(files):
    pool_size = 8
    pool = Pool(pool_size, initializer=download_init,
                initargs=["sftpserver.example.com"])
    result = pool.map(download_worker, files, chunksize=10)
    pool.map(download_close, range(pool_size))

if __name__ == "__main__":
    downloader(list_of_local_and_remote_files)
It's unfortunate that Pool doesn't have a finalizer to undo what was set in the initializer. It's not usually necessary - the exiting process is cleanup enough. In the example I just wrote a separate worker function that cleans things up. By having one work item per pool process, they each get one call.
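On Python 3.7+ the same pattern also fits concurrent.futures, whose ProcessPoolExecutor accepts the same initializer/initargs arguments. A minimal sketch reusing download_init and download_worker from above (the host name is a placeholder):

from concurrent.futures import ProcessPoolExecutor

def downloader_futures(files, host="sftpserver.example.com"):
    with ProcessPoolExecutor(max_workers=8,
                             initializer=download_init,
                             initargs=(host,)) as executor:
        # list() forces the map to complete and re-raises worker errors
        list(executor.map(download_worker, files, chunksize=10))
    # Worker processes exit when the executor shuts down, which closes
    # their SSH connections implicitly.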
I am exporting a script I have made in Python that sends commands to the IP addresses of projectors to shut them down. The functionality of the code works, and the projectors shut down. The list of projectors is stored in a dictionary in a different file from the script, to allow it to be edited and accessed by other scripts.
Once I export my script to an exe using PyInstaller v3.3.1 for Python 3.5.1, the .exe file no longer picks up changes from the .py file that contains the dictionary; instead it uses a version already baked into the executable that I cannot update.
How can I make the executable file still read the dictionary and pick up updates every time it is run?
Thanks,
Josh
Code:
dictonaryfile.py (reduced for security, but format shown).
projectors = {
    '1': '192.168.0.1'
}
Script that performs shutdown
from pypjlink import Projector
from file3 import projectors

for item in projectors:
    try:
        myProjector = Projector.from_address(projectors[item])
        myProjector.authenticate('PASSWORD REMOVED FOR SECURITY')
        state = myProjector.get_power()
        try:
            if state == 'on':
                myProjector.set_power('off')
                print('Successfully powered off: ', item)
        except:
            print('The projector in ', item, ', raised an error, shutdown may not have occurred')
            continue
    except:
        print(item, 'did not respond to state request, it is likely powered off at the wall.')
        continue
As you noticed, once an exe is made, you can't update it. A workaround for a problem like this is to ask for the location of dictonaryfile.py in your code:
from pypjlink import Projector

projector_location = input('Enter projector file location')

with open(projector_location) as f:
    for line in f:
        # extract data from f
        ...
For applications like these, it's a good idea to use a configuration file (.ini), and Python has configparser to read config files. You could create your config file as follows (no quotes around the values: configparser reads values as raw strings, so the quotes would become part of the value):
[Projectors]
1 = 192.168.0.1
2 = 192.168.0.45
; ...and so on
And read the projectors from this file with configparser:
import configparser

config = configparser.ConfigParser()
config.read(projector_location)
projectors = config['Projectors']  # section lookup is case-sensitive

for k, v in projectors.items():
    # your code here
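A related gotcha: a frozen exe's working directory is not always the folder the exe lives in. One way to locate the config next to the executable (a sketch; the file name projectors.ini is made up for the illustration) is:

import os
import sys

if getattr(sys, 'frozen', False):
    # Running from the PyInstaller bundle
    base_dir = os.path.dirname(sys.executable)
else:
    # Running as a plain script
    base_dir = os.path.dirname(os.path.abspath(__file__))

projector_location = os.path.join(base_dir, 'projectors.ini')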
I have been googling to find out how to create a global file that stays open until my application has completed. I need to write the output of all modules in a view to a single file, so that users can download it as a report from the front end once the application has finished running. This is the class I have created:
import time

class FileOperations:
    def __init__(self):
        self.current_time = time.strftime('%Y-%m-%d_%H-%M-%S')
        self.outfile = open("reports/username_" + self.current_time + ".txt", 'w')
        self.outfile.write("Final Report \n")
        self.outfile.write("*****************")
I want this file to be generated when the application starts running and to be available to all modules.
A context manager is a way to safely handle operations such as writing to a file. It also allows you to better trace when a file opens or closes.
I suggest you take the time when the application starts, and reuse that file as I take it you intended. That's probably "safer" than keeping the file open.
import time

def get_time():
    global start_time
    start_time = time.strftime('%Y-%m-%d_%H-%M-%S')

def write_to_file():
    with open('reports/username_{}.txt'.format(start_time), 'a') as f:
        f.write("Final Report \n")
        f.write("*****************")

if 'start_time' not in globals():
    get_time()
The conditional runs when the module is executed; by checking whether start_time is already defined in the module's globals, we make sure it is only set once, even if the module is reloaded.
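Usage from any other module is then just an import and a call; the module names here are made up for the illustration:

# any_other_module.py
import report_utils  # hypothetical module holding the code above

def run():
    # ... produce this module's output ...
    report_utils.write_to_file()

Because Python caches imported modules, start_time is set once per process, so every module appends to the same report file.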
I am trying to get familiar with the multiprocessing module. I am currently having some issues with Pipe(). I devised a small example to illustrate my problem.
I wrote two functions:
One that creates files in a specific folder (spawner)
And another that detects these files and copies them to another folder (cleaner)
They both work fine. I also managed to create a Process for both so that the creation and copying of the files happens simultaneously.
For the next step, I want the spawner to communicate to the cleaner that it has finished creating files so that the latter can terminate.
Here is the code:
import os
from time import sleep
import multiprocessing as mp
from shutil import copy2

def spawner(f_folder, pipeEnd):
    template = 'my_file{}.txt'
    for i in range(10):
        new_file = os.path.join(f_folder, template.format(str(i)))
        with open(new_file, 'w'):
            pass
        sleep(1)
    pipeEnd.send(True)
    return

def cleaner(f_folder, t_folder, pipeEnd):
    state = set()
    while not pipeEnd.recv():
        new_files = set(os.listdir(f_folder)).difference(state)
        state = set(os.listdir(f_folder))
        for file in new_files:
            copy2(os.path.join(f_folder, file), t_folder)
        sleep(3)
    return

if __name__ == '__main__':
    receiver, sender = mp.Pipe()
    from_folder = r'C:\Users\evkouni\Desktop\TEMP\PythonTests\subProcess\from'
    to_folder = r'C:\Users\evkouni\Desktop\TEMP\PythonTests\subProcess\to'
    p = mp.Process(target=spawner, args=(from_folder, sender))
    q = mp.Process(target=cleaner, args=(from_folder, to_folder, receiver))
    p.start()
    q.start()
I just cannot seem to get it to work. Any help would be appreciated.
A Pipe is the wrong solution to your problem. You could use a pipe if you wanted to pass the file names from the spawner to the cleaner, but what you are trying to do is raise a flag. Note that pipeEnd.recv() also blocks until something arrives, so your cleaner sits idle until the spawner sends True, and then exits without ever copying anything. For raising a flag, I would recommend the use of an Event: https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Event
This can be considered a thread-safe (and multiprocess-safe) boolean. You would use it like:
finished = mp.Event()
...
finished.set()                # instead of pipeEnd.send(True)
...
while not finished.is_set():  # instead of while not pipeEnd.recv():
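Put together, a minimal sketch of the two processes with an Event in place of the Pipe (folder paths copied from the question); a final sweep after the loop is added so files created after the last scan are still copied:

import os
from time import sleep
import multiprocessing as mp
from shutil import copy2

def spawner(f_folder, finished):
    template = 'my_file{}.txt'
    for i in range(10):
        with open(os.path.join(f_folder, template.format(i)), 'w'):
            pass
        sleep(1)
    finished.set()  # raise the flag instead of sending on a pipe

def cleaner(f_folder, t_folder, finished):
    state = set()

    def copy_new_files():
        new_files = set(os.listdir(f_folder)).difference(state)
        for file in new_files:
            copy2(os.path.join(f_folder, file), t_folder)
        state.update(new_files)

    while not finished.is_set():
        copy_new_files()
        sleep(3)
    copy_new_files()  # final sweep for files created after the last scan

if __name__ == '__main__':
    finished = mp.Event()
    from_folder = r'C:\Users\evkouni\Desktop\TEMP\PythonTests\subProcess\from'
    to_folder = r'C:\Users\evkouni\Desktop\TEMP\PythonTests\subProcess\to'
    p = mp.Process(target=spawner, args=(from_folder, finished))
    q = mp.Process(target=cleaner, args=(from_folder, to_folder, finished))
    p.start()
    q.start()
    p.join()
    q.join()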
I am working with Robot Operating System (ROS) and am attempting to make a server/client where the server will boot up ROS nodes that are specified by the client. To perform the "boot up" I am using roslaunch based on the recommendations found here: http://wiki.ros.org/roslaunch/API%20Usage
I can run the roscore in a window and then I can run the server which boots up fine. However, as soon as I try to send the node names I want to boot up via the client, I get the following error:
"Error processing request: signal only works in main thread"
It then points to a bunch of files in various areas that I have not yet tracked down.
I have tried using a simple roslaunch call on each of the launch files I made individually for the nodes I want to launch (in this case turtlesim_node and turtle_teleop_key) and they boot up fine and work by just running roslaunch [package] turtlesim_launch.launch, etc.
Below is the code for my server:
#!/usr/bin/env python
#Filename: primary_server.py

import rospy
import roslaunch
from robot_man.srv import *

class StartUpServer(object):
    def __init__(self):
        self._nodes = []

    def handle_startup(self, names):
        # This reads in the information from the srv file sent by the
        # client and separates each node/package pair
        self._temp = names.nodes.split(",")

        # This loop separates the input into pairs of 2 corresponding
        # to the package and node names the user inputs
        for i in range(len(self._temp)):
            self._nodes.append(self._temp[i].split())

        # This loop will launch each node that the user has specified
        for package, executable in self._nodes:
            print('package is: {0}, executable is: {1}'.format(package, executable))
            node = roslaunch.core.Node(package, executable)
            launch = roslaunch.scriptapi.ROSLaunch()
            launch.start()
            process = launch.launch(node)
            print('running node: %s' % executable)

        return StartUpResponse(self._nodes)  # I will change this later

if __name__ == '__main__':
    rospy.init_node('startup_node')
    server = StartUpServer()
    rospy.Service('startup', StartUp, server.handle_startup)
    print('The servers are up and running')
    rospy.spin()
Here is the code for my client:
#!/usr/bin/env python
#Filename: primary_client_startup.py
import rospy
from robot_man.srv import *
def main(nodes):
rospy.wait_for_service('startup')
proxy = rospy.ServiceProxy('startup', StartUp)
response = proxy(nodes)
return response
if __name__ == '__main__':
nodes = input('Please enter the node packages followed by \
node names separated by spaces. \n To activate multiple nodes \
separate entries with a comma > ')
main(nodes)
Here is my srv file:
#This should be a string of space/comma separated values
string nodes
---
#This should return "success" if node/s start up successfully, else "fail"
string status
And finally, here are the two launch files I have made to launch the turtle simulator:
turtlesim_launch.launch
<launch>
<node name="turtlesim_node" pkg="turtlesim" type="turtlesim_node" />
</launch>
turtle_teleop_launch.launch
<launch>
<node name="turtle_teleop_key" pkg="turtlesim" type="turtle_teleop_key" />
</launch>
I have done a bit of google searching and found no similar problems for ROS (though there are some similar errors for Django and the like, but they don't relate).
I appreciate the help!
P.S. It is worth noting that the error occurs when I reach line 34 (process = launch.launch(node)).
Actually, that is mentioned in the documentation:
You're not calling init_node() from the Python Main thread. Python only allows signals to be registered from the Main thread.
See the rospy Overview page "Initialization and Shutdown" on the ROS wiki.
A solution that works (but is quite dirty) is to remove the signal handler registration function:
def dummy_function():
    pass

roslaunch.pmon._init_signal_handlers = dummy_function
This way you'll lose the ability to kill the node with CTRL+C, but at least you can start it.
I believe the problem is that you are trying to launch a node in a callback, which, as user3732793 says, isn't allowed.
If you append the requests to a queue (or just a list) in the callback and then, instead of just rospy.spin(), check for items in the queue from the main thread and launch the nodes there, I believe it will work.
Here is an example of that pattern, sketched below.
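A minimal sketch (untested; it reuses the StartUp service and the roslaunch calls from the question, and replies "queued" immediately rather than reporting launch success):

#!/usr/bin/env python
# Sketch: queue launch requests in the service callback, launch in the
# main thread, where Python allows signal handlers to be registered.
import rospy
import roslaunch
from robot_man.srv import *

try:
    import queue           # Python 3
except ImportError:
    import Queue as queue  # Python 2

requests = queue.Queue()

def handle_startup(names):
    # Runs in the service thread: only record what to launch.
    for pair in names.nodes.split(","):
        requests.put(pair.split())
    return StartUpResponse("queued")

if __name__ == '__main__':
    rospy.init_node('startup_node')
    rospy.Service('startup', StartUp, handle_startup)
    launch = roslaunch.scriptapi.ROSLaunch()
    launch.start()
    while not rospy.is_shutdown():  # main-thread loop instead of rospy.spin()
        try:
            package, executable = requests.get(timeout=0.5)
        except queue.Empty:
            continue
        launch.launch(roslaunch.core.Node(package, executable))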