I am trying to write a simple Python script with Fabric to transfer a file from one host to another using the get() function, but I keep getting this error:
MacBook-Pro-3:PythonsScripts$ fab get:'/tmp/test','/tmp/test'
[hostname] Executing task 'get'
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/fabric/main.py", line 743, in main
*args, **kwargs
File "/Library/Python/2.7/site-packages/fabric/tasks.py", line 387, in execute
multiprocessing
File "/Library/Python/2.7/site-packages/fabric/tasks.py", line 277, in _execute
return task.run(*args, **kwargs)
File "/Library/Python/2.7/site-packages/fabric/tasks.py", line 174, in run
return self.wrapped(*args, **kwargs)
File "/Users/e0126914/Desktop/PYTHON/PythonsScripts/fabfile.py", line 128, in get
get('/tmp/test','/tmp/test') ***This line repeats many times then last error below***
RuntimeError: maximum recursion depth exceeded
My current code is:
from fabric.api import *
from getpass import getpass
from fabric.decorators import runs_once

env.hosts = ['hostname']
env.port = '22'
env.user = 'parallels'
env.password = "password"

def abc(remote_path, local_path):
    abc('/tmp/test', '/tmp/')
Any help would be appreciated!
fabric.api.get is already a function. When you perform from fabric.api import * you import Fabric's get, so defining your own task with the same name shadows it and the task ends up calling itself recursively until the recursion limit is hit. You should rename your get function to avoid the conflict.
From inside the abc function, you then need to call Fabric's get:
def abc(p1, p2):
    get(p1, p2)
EDIT:
When executing functions through Fabric, the arguments are passed on the command line:
i.e. $ fab abc:string1,string2
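Putting it together, a minimal fabfile sketch (the task name fetch is just an illustrative choice; anything that doesn't shadow fabric.api.get works):

from fabric.api import env, get

env.hosts = ['hostname']
env.port = '22'
env.user = 'parallels'

def fetch(remote_path, local_path):
    # resolves to fabric.api.get now that the task has a different name
    get(remote_path, local_path)

which you would invoke as $ fab fetch:/tmp/test,/tmp/test.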
I have tried implementing the basic Flask-RQ2 setup as per the docs, attempting to write to two separate files concurrently, but when the worker tries to perform a job in the Redis queue I get the following error: redis.exceptions.RedisError: ZADD requires an equal number of values and scores.
Here's the full stack trace:
10:20:37: Worker rq:worker:1d0c83d6294249018669d9052fd759eb: started, version 1.2.0
10:20:37: *** Listening on default...
10:20:37: Cleaning registries for queue: default
10:20:37: default: tester('time keeps on slipping') (02292167-c7e8-4040-a97b-742f96ea8756)
10:20:37: Worker rq:worker:1d0c83d6294249018669d9052fd759eb: found an unhandled exception, quitting...
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/rq/worker.py", line 515, in work
self.execute_job(job, queue)
File "/usr/local/lib/python3.6/dist-packages/rq/worker.py", line 727, in execute_job
self.fork_work_horse(job, queue)
File "/usr/local/lib/python3.6/dist-packages/rq/worker.py", line 667, in fork_work_horse
self.main_work_horse(job, queue)
File "/usr/local/lib/python3.6/dist-packages/rq/worker.py", line 744, in main_work_horse
raise e
File "/usr/local/lib/python3.6/dist-packages/rq/worker.py", line 741, in main_work_horse
self.perform_job(job, queue)
File "/usr/local/lib/python3.6/dist-packages/rq/worker.py", line 866, in perform_job
self.prepare_job_execution(job, heartbeat_ttl)
File "/usr/local/lib/python3.6/dist-packages/rq/worker.py", line 779, in prepare_job_execution
registry.add(job, timeout, pipeline=pipeline)
File "/usr/local/lib/python3.6/dist-packages/rq/registry.py", line 64, in add
return pipeline.zadd(self.key, {job.id: score})
File "/usr/local/lib/python3.6/dist-packages/redis/client.py", line 1691, in zadd
raise RedisError("ZADD requires an equal number of "
redis.exceptions.RedisError: ZADD requires an equal number of values and scores
My main is this:
#!/usr/bin/env python3
from flask import Flask
from flask_rq2 import RQ
import time
import tester

app = Flask(__name__)
rq = RQ(app)
default_worker = rq.get_worker()
default_queue = rq.get_queue()
tester = tester.Tester()

while True:
    default_queue.enqueue(tester.tester, args=["time keeps on slipping"])
    default_worker.work(burst=True)
    with open('test_2.txt', 'w+') as f:
        data = f.read() + "it works!\n"
        f.write(data)
    time.sleep(5)

if __name__ == "__main__":
    app.run()
and my tester.py module is thus:
#!/usr/bin/env python3
import time

class Tester:
    def tester(string):
        with open('test.txt', 'w+') as f:
            data = f.read() + string + "\n"
            f.write(data)
        time.sleep(5)
I'm using the following:
python==3.6.7-1~18.04
redis==2.10.6
rq==1.2.0
Flask==1.0.2
Flask-RQ2==18.3
I've also tried the simpler setup from the documentation where you don't specify either queue or worker but rely implicitly on the Flask-RQ2 module defaults... Any help with this would be greatly appreciated.
Reading the docs a bit more deeply, it seems that for this to work with redis<3.0 you need Flask-RQ2==18.2.2. (The traceback shows why: rq 1.2.0 calls pipeline.zadd(self.key, {job.id: score}), the mapping-style signature introduced in redis-py 3.0, while redis-py 2.10.6 still expects flat value/score arguments and so rejects the dict as an unequal number of values and scores.)
I have different errors now, but I can work with this.
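For reference, the fix boils down to pinning the package; a shell one-liner (keeping redis 2.10.6 from the question):

pip install 'Flask-RQ2==18.2.2'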
I'm trying to use an environment variable with hug, but I can't get it to work.
First, I exported the variable:
$ export INTEGER=5
I have this in my main code:
import hug
import os

@hug.get('/')
def foo():
    var = os.environ['INTEGER']
    return {'INT': var}
When I execute
sudo hug -p 80 -f foo.py
and go to localhost/, I get this error:
Traceback (most recent call last):
File "/usr/local/Cellar/python3/3.6.4_2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/wsgiref/handlers.py", line 137, in run
self.result = application(self.environ, self.start_response)
File "/Users/Andres/.local/share/virtualenvs/botCS-HzHaMvtf/lib/python3.6/site-packages/falcon/api.py", line 244, in __call__
responder(req, resp, **params)
File "/Users/Andres/.local/share/virtualenvs/botCS-HzHaMvtf/lib/python3.6/site-packages/hug/interface.py", line 734, in __call__
raise exception
File "/Users/Andres/.local/share/virtualenvs/botCS-HzHaMvtf/lib/python3.6/site-packages/hug/interface.py", line 709, in __call__
self.render_content(self.call_function(input_parameters), request, response, **kwargs)
File "/Users/Andres/.local/share/virtualenvs/botCS-HzHaMvtf/lib/python3.6/site-packages/hug/interface.py", line 649, in call_function
return self.interface(**parameters)
File "/Users/Andres/.local/share/virtualenvs/botCS-HzHaMvtf/lib/python3.6/site-packages/hug/interface.py", line 100, in __call__
return __hug_internal_self._function(*args, **kwargs)
File "repro.py", line 7, in foo
var = os.environ['INTEGER']
File "/Users/Andres/.local/share/virtualenvs/botCS-HzHaMvtf/bin/../lib/python3.6/os.py", line 669, in __getitem__
raise KeyError(key) from None
KeyError: 'INTEGER'
What is wrong?
Your problem is that you are running hug as sudo (something you should never do, by the way) while you exported the environment variable as yourself (a normal user), so the variable isn't present in root's environment.
I'm guessing that you are running as sudo because you want to run on port 80. Run it on port 8080 instead.
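If you really do need port 80, note that sudo resets the environment by default (the env_reset policy), so the variable has to be passed through explicitly; a shell sketch:

# set the variable for the sudo'd process itself
sudo INTEGER=5 hug -p 80 -f foo.py
# or, if your sudoers policy permits it, preserve your whole environment
sudo -E hug -p 80 -f foo.py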
So this works:
shell:
export INTEGER=5
python code:
import hug
import os

@hug.get('/')
def view():
    print(os.environ['INTEGER'])  # or print(os.environ.get('INTEGER'))
shell:
hug -f app.py -p 8080
os.environ['INTEGER'] fails because os.environ has no key called "INTEGER".
You can use this code, and provide an optional default as the second arg to get:
os.environ.get("INTEGER", 0)
This will return 0 (or whatever default you supply) if INTEGER is not found.
The reason INTEGER is missing must be related to where you defined your shell variable: it has to be "in scope", i.e. present in the environment of the process that actually runs the script.
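A defensive version of the view along those lines (a sketch; the default value 0 is an arbitrary choice):

import hug
import os

@hug.get('/')
def foo():
    # falls back to 0 when INTEGER is absent from the environment
    return {'INT': os.environ.get('INTEGER', 0)}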
Hey guys, I'm working on a prototype for a project at my school (I'm a research assistant, so this isn't a graded project). I'm running Celery on a server cluster (with 48 workers/cores), which is already set up and working. In a nutshell, we want to use Celery to do some number crunching over a rather large number of files/tasks.
Because of this it is very important that we save results to an actual file; we have gigs upon gigs of data and it WON'T fit in RAM while running the traditional task queue/backend.
Anyways...
My prototype (with a trivial multiplication task):
task.py

from celery import Celery

app = Celery()

@app.task
def mult(x, y):
    return x * y
And this works great when I execute: $ celery worker -A task -l info
But if I try to add a new backend:

from celery import Celery

app = Celery()
app.conf.update(CELERY_RESULT_BACKEND='file://~/Documents/results')

@app.task
def mult(x, y):
    return x * y
I get a rather large error:
[2017-08-04 13:22:18,133: CRITICAL/MainProcess] Unrecoverable error:
AttributeError("'NoneType' object has no attribute 'encode'",)
Traceback (most recent call last):
File "/home/bartolucci/anaconda3/lib/python3.6/site-packages/kombu/utils/objects.py", line 42, in __get__
return obj.__dict__[self.__name__]
KeyError: 'backend'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/bartolucci/anaconda3/lib/python3.6/site- packages/celery/worker/worker.py", line 203, in start
self.blueprint.start(self)
File "/home/bartolucci/anaconda3/lib/python3.6/site-packages/celery/bootsteps.py", line 115, in start
self.on_start()
File "/home/bartolucci/anaconda3/lib/python3.6/site-packages/celery/apps/worker.py", line 143, in on_start
self.emit_banner()
File "/home/bartolucci/anaconda3/lib/python3.6/site-packages/celery/apps/worker.py", line 158, in emit_banner
' \n', self.startup_info(artlines=not use_image))),
File "/home/bartolucci/anaconda3/lib/python3.6/site-packages/celery/apps/worker.py", line 221, in startup_info
results=self.app.backend.as_uri(),
File "/home/bartolucci/anaconda3/lib/python3.6/site-packages/kombu/utils/objects.py", line 44, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/home/bartolucci/anaconda3/lib/python3.6/site-packages/celery/app/base.py", line 1183, in backend
return self._get_backend()
File "/home/bartolucci/anaconda3/lib/python3.6/site-packages/celery/app/base.py", line 902, in _get_backend
return backend(app=self, url=url)
File "/home/bartolucci/anaconda3/lib/python3.6/site-packages/celery/backends/filesystem.py", line 45, in __init__
self.path = path.encode(encoding)
AttributeError: 'NoneType' object has no attribute 'encode'
I am only 2 days into this project and have never worked with Celery (or a similar library) before (I come from the algorithmic, mathy side of the fence). I'm currently wrangling with Celery's user guide docs, but they're honestly pretty sparse on this detail.
Any help is much appreciated and thank you.
Looking at the Celery code for the filesystem-backed result backend here:
https://github.com/celery/celery/blob/master/celery/backends/filesystem.py#L54
your path needs to start with file:/// (three slashes), while your setting has it starting with file:// (two slashes).
You might also want to use an absolute path instead of ~.
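A sketch of the corrected setting (the expanded home directory here is an assumption based on the paths in your traceback; substitute your own absolute path):

# NOTE: /home/bartolucci is inferred from the traceback paths, not confirmed
app.conf.update(CELERY_RESULT_BACKEND='file:///home/bartolucci/Documents/results')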
I tried to use pp (Parallel Python) like this:

import glob
import subprocess
import pp

def run(cmd):
    print cmd
    subprocess.call(cmd, shell=True)

job_server = pp.Server()
job_server.set_ncpus(8)
jobs = []
for a_file in glob.glob("./*"):
    cmd = "ls"
    jobs.append(job_server.submit(run, (cmd,)))
for j in jobs:
    j()
But I encountered an error saying that subprocess is not defined as a global name:
An error has occured during the function execution
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pp-1.6.1-py2.7.egg/ppworker.py", line 90, in run
__result = __f(*__args)
File "<string>", line 3, in run
NameError: global name 'subprocess' is not defined
I've imported subprocess, so why can't it be used here?
Following abarnert's suggestion, I changed my code to this:
import glob
import pp

def run(cmd):
    print cmd
    subprocess.call(cmd, shell=True)

job_server = pp.Server()
job_server.set_ncpus(8)
jobs = []
for a_file in glob.glob("./*"):
    cmd = "ls"
    jobs.append(job_server.submit(run, (cmd,), modules=("subprocess",)))
for j in jobs:
    j()
But it still doesn't work; it complains like this:
Traceback (most recent call last):
File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
self.run()
File "/usr/lib/python2.6/threading.py", line 484, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/local/lib/python2.6/dist-packages/pp-1.6.1-py2.6.egg/pp.py", line 721, in _run_local
job.finalize(sresult)
UnboundLocalError: local variable 'sresult' referenced before assignment
The documentation explains this pretty well, and each example shows you how to deal with it.
Among the params of the submit method is modules, a "tuple with module names to import". Any module you want to be available in the submitted job has to be listed there.
So, you can do this:
jobs.append(job_server.submit(run, (cmd,), (), ('subprocess',)))
Or this:
jobs.append(job_server.submit(run, (cmd,), modules=('subprocess',)))
Sorry, untested, but did you try:
from subprocess import call
inside the run function, and then using call instead of subprocess.call? That would make call local to the function but still accessible.
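A quick sketch of that variant (equally untested; with the import inside the function, the pp worker no longer needs the modules argument):

def run(cmd):
    # imported locally so the name resolves inside the pp worker process
    from subprocess import call
    print cmd
    call(cmd, shell=True)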
I need to get fabric to set its hosts list by opening and reading a file to get the hosts.
Setting it this way allows me to have a huge list of hosts without needing to edit my fabfile for this data each time.
I tried this:
def set_hosts():
    env.hosts = [line.split(',') for line in open("host_file")]

def uname():
    run("uname -a")
and
def set_hosts():
    env.hosts = open('hosts_file', 'r').readlines

def uname():
    run("uname -a")
I get the following error each time I try to use the set_hosts function:
$ fab set_hosts uname
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/fabric/main.py", line 712, in main
*args, **kwargs
File "/usr/local/lib/python2.7/dist-packages/fabric/tasks.py", line 264, in execute
my_env['all_hosts'] = task.get_hosts(hosts, roles, exclude_hosts, state.env)
File "/usr/local/lib/python2.7/dist-packages/fabric/tasks.py", line 74, in get_hosts
return merge(*env_vars)
File "/usr/local/lib/python2.7/dist-packages/fabric/task_utils.py", line 57, in merge
cleaned_hosts = [x.strip() for x in list(hosts) + list(role_hosts)]
AttributeError: 'builtin_function_or_method' object has no attribute 'strip'
The problem you're hitting here is that you're setting env.hosts to a function object, not a list or iterable. You need the parens after readlines to actually call it:
def set_hosts():
    env.hosts = open('hosts_file', 'r').readlines()
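Note that readlines() keeps the trailing newline on each line, so strings like 'host1\n' can sneak into env.hosts. A slightly more defensive sketch (assuming one host per line in hosts_file):

def set_hosts():
    with open('hosts_file') as f:
        # strip whitespace/newlines and skip blank lines
        env.hosts = [line.strip() for line in f if line.strip()]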