Hi, I installed Perl's Gearman package on Windows 7 and wrote a simple client and worker just to connect to the Gearman server, as follows:
Client.pl
use Gearman::Client;
my $client = Gearman::Client->new;
$client->job_servers('127.0.0.1:7003');
and Worker.pl
use Gearman::Worker;
my $worker = Gearman::Worker->new;
$worker->job_servers('127.0.0.1:7003');
$worker->work while 1;
but when I run Worker.pl, the gearmand command prompt gives this error:
Error accepting incoming connection: Bad file descriptor
I have also tried with a Python worker and client (which work correctly under Cygwin with gearman 0.14) for reversing a string, but it shows the same error.
My Python modules are:
Client.py
import gearman
def check_request_status(job_request):
    if job_request.complete:
        print "Job %s finished! Result: %s - %s" % (job_request.job.unique, job_request.state, job_request.result)
    elif job_request.timed_out:
        print "Job %s timed out!" % job_request.unique
    elif job_request.state == JOB_UNKNOWN:
        print "Job %s connection failed!" % job_request.unique

gm_client = gearman.GearmanClient(['127.0.0.1:7003'])
completed_job_request = gm_client.submit_job("reverse", "Hello World!")
check_request_status(completed_job_request)
and worker.py
import gearman
gm_worker = gearman.GearmanWorker(['127.0.0.1:7003'])
def task_listener_reverse(gearman_worker, gearman_job):
    print 'Reversing string: ' + gearman_job.data
    return gearman_job.data[::-1]
# gm_worker.set_client_id is optional
gm_worker.set_client_id('python-worker')
gm_worker.register_task('reverse', task_listener_reverse)
# Enter our work loop and call gm_worker.after_poll() after each time we timeout/see socket activity
gm_worker.work()
When I run Client.pl, the gearmand command prompt gives an error as follows:
Error: Can't locate object method "CMD_" via package "Gearman::Server::Client" at C:/Perl/site/lib/Gearman/Server/Client.pm line 505.
Can anybody please resolve this issue?
I have this code in my Python script where I pass the IP address of a newly launched server and try to scp a file up to that new server. Since it is a newly launched server, its IP is not yet in the list of known hosts, so it prompts me to add it, and by the time I type yes the code has already failed with an error. When I execute the same code again with the same IP, no error occurs, because the IP was added during the last execution.
I have added -o StrictHostKeyChecking=no to suppress the prompt (and it no longer prompts), but it does not resolve the error.
def scp_script():
    try:
        cmd_txt = "scp -o StrictHostKeyChecking=no script_filename.txt root#" + server_ip + ":/home/"
        output = system(cmd_txt)
    except:
        traceback.print_exc()
        sys.exit(1)
    if output != 0:
        print('Error: ' + strerror(return_code) + ' \n')
        sys.exit(1)

if __name__ == '__main__':
    scp_script()
This script is called as a subprocess from another script. The error I get is:
root#: No such file or directory
Error: Unknown error 256
How can I resolve this error?
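A minimal sketch of one way around this, assuming the `#` before the host was meant to be `@` (scp expects user@host:path, and with `#` the whole string is taken as a local filename, which would match the "root#: No such file or directory" message) and that an empty server_ip would also fail confusingly; the filename and remote path are taken from the question, the IP below is a placeholder:

```python
import subprocess

def build_scp_command(server_ip):
    # scp parses its last argument as user@host:path; a '#' separator makes
    # the whole string a (nonexistent) local file instead.
    if not server_ip:
        raise ValueError("server_ip is empty")
    return ["scp", "-o", "StrictHostKeyChecking=no",
            "script_filename.txt", "root@" + server_ip + ":/home/"]

# Passing the command as a list to subprocess avoids shell quoting issues:
# rc = subprocess.call(build_scp_command("203.0.113.5"))
```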
I'm working on a Telnet client. I started coding on my notebook (Windows) and when it was finished I uploaded it to my server (Debian). Both systems run Python 3. On my notebook the script works well, but on Debian it produces errors.
The Code:
import telnetlib
import sys
try:
    HOST = sys.argv[1]
    user = sys.argv[3]
    password = sys.argv[4]
    cmd = sys.argv[5]
    port = int(sys.argv[2])

    tn = telnetlib.Telnet(HOST, port)
    tn.read_until(b"username: ")
    tn.write(user.encode('ascii') + b"\n")
    if password:
        tn.read_until(b"password: ")
        tn.write(password.encode('ascii') + b"\n")
    tn.write(cmd.encode('ascii') + b"\n")
except ConnectionRefusedError:
    print('ERROR')
else:
    print('OK')
Server(CraftBukkit server with RemoteToolKit):
Mar 05, 2014 12:39:58 PM net.wimpi.telnetd.net.ConnectionManager makeConnection
INFO: connection #1 made.
Unexpected error in shell!
java.net.SocketException: Connection reset
> at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:118)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
> at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
> at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
> at java.io.DataOutputStream.flush(DataOutputStream.java:123)
> at net.wimpi.telnetd.io.TelnetIO.flush(Unknown Source)
> at net.wimpi.telnetd.io.TerminalIO.flush(Unknown Source)
> at net.wimpi.telnetd.io.TerminalIO.write(Unknown Source)
> at com.drdanick.McRKit.Telnet.ConsoleShell.run(ConsoleShell.java:78)
> at net.wimpi.telnetd.net.Connection.run(Unknown Source)
Mar 05, 2014 12:39:58 PM net.wimpi.telnetd.net.ConnectionManager cleanupClosed
INFO: cleanupClosed():: Removing closed connection Thread[Connection1,5,]
Greets miny
EDIT: The error handling works now! THX # Wojciech Walczak
The client doesn't report errors, but the server does. If I run the same code on Windows, there are no errors.
...and are you sure you're using Python 3.3 or later? ConnectionRefusedError was added in Python 3.3.
EDIT:
Given that your client works fine when launched from your laptop, and is catching ConnectionRefusedError on another machine, I would say that the problem is not the script itself. It's rather about server's telnet/firewall settings. Are other telnet clients working in the environment in which your script is failing?
The reason for the traceback when the server is offline is that you are trying to trap a non-existent exception (which is to say that the name ConnectionRefusedError has not yet been assigned a value).
Solely for its educational purpose, I would remove the "try ... except ..." and let the error be raised. Then hopefully you will find out exactly which exception is being raised.
As to the Java traceback, WTF?
The error is in the script. The telnet command on Linux works well.
I ran the code w/o try and except and it doesn't report errors.
I made some tests with byte strings.
If I run this code, the machines display different strings.
Command:
print(b"Hello there!")
Windows:
b"Hello there!"
Linux:
Hello there!
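That output difference is itself diagnostic: Python 3 prints the repr of a bytes literal (with the b'' prefix), while Python 2 treats bytes and str as the same type and prints the raw text, so it would suggest the Debian box is invoking Python 2 for this script even though Python 3 is installed. A quick sketch to confirm which interpreter is actually running:

```python
import sys

# Under Python 3, a bytes literal prints with its b'' prefix;
# under Python 2 (where bytes is an alias of str), it prints as plain text.
print(sys.version)       # shows which interpreter is actually running
print(b"Hello there!")   # b'Hello there!' on Python 3, Hello there! on Python 2
```

This would also explain why ConnectionRefusedError is "non-existent" on that machine: the name does not exist in Python 2 at all.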
Updated Code at Debian (Windows uses still the old code):
import telnetlib
import sys
if(True):
    HOST = sys.argv[1]
    user = sys.argv[3]
    password = sys.argv[4]
    cmd = sys.argv[5]
    port = int(sys.argv[2])

    tn = telnetlib.Telnet(HOST, port)
    tn.read_until(b"username:")
    tn.write(user.encode('ascii') + b"\n")
    if password:
        tn.read_until(b"password:")
        tn.write(password.encode('ascii') + b"\n")
    tn.write(cmd.encode('ascii') + b"\n")
I tested the code on Windows and the telnet server can run the commands.
When I run the code on Debian, it doesn't print anything, but the server says "Connection reset".
I'm using fabric to run a command on one server against another server. Specifically, I'm running a SQL query through the psql command line.
The fabric run() function is throwing a SystemExit exception which I can catch.
If I go to the server and run the psql command directly I am told:
psql: could not connect to server: Connection timed out
Is the server running on host "xyz.example.com" (10.16.16.66) and accepting
TCP/IP connections on port 5432?
So, I know that the command is not working but what I want is to get that text under psql so my code can be explicit about the problem.
I think that the fabric code is fine because if I change the psql command so it executes on the same server but against a different database, I get no exception and the expected answer. So the problem is that the server I'm running psql on cannot communicate with one of the database servers.
Is it possible to get the results of the psql command through fabric after fabric throws the SystemExit exception?
For reference, here's the sample code:
from __future__ import with_statement
from fabric.api import local, settings, abort, run, cd, execute, env
from fabric.contrib.console import confirm
import sys
import os
def test():
    try:
        count = run('psql blah blah blah', timeout=60)
        print('count: {}'.format(count))
    except Exception, ex:
        print('====> Exception type: %s' % ex.__class__)
        print('====> Exception: %s' % ex)
    except SystemExit, ex:
        print('====> Exception type: %s' % ex.__class__)
        print('====> Exception: %s' % ex)

def go():
    print "Working"
    env.host_string = "jobs0.onshift.com"
    execute(test)
Take a look at the settings context manager in the fabric docs, and at the succeeded and failed properties on the object returned by run:
from fabric.api import *
from fabric.context_managers import settings
def test():
    with settings(warn_only=True):
        res = run('psql blah blah blah', timeout=60)
        if res.succeeded:
            print('count: {}'.format(res))

def go():
    print "Working"
    execute(test, hosts=["jobs0.onshift.com"])
I have written a paramiko script to batch-transfer files with sftp. The script works fine on my development machine -- Linux Mint 13, using Python 2.7.
When I moved the script to the production system, I found I had to build Python from scratch on it, since the system Python was too old. So I built Python 2.7 on it (CentOS) and then attempted to run my script. It failed with:
paramiko.SSHException - Errno 110, connection timeout
I've googled for that exception but didn't find anything that seemed to fit. The script seems to 'hang' and then time out at the paramiko.Transport((host, port)) call.
I thought this strange, so I attempted an sftp using OpenSSH from that system, just to confirm the remote host was responsive. It was, and it worked.
So now I've gone back to my script and simplified it to make a bare-bones connection. Still, I get a connection timeout. I don't know how to turn up debug output in paramiko. Any suggestions?
Here's the basic script:
import os.path
import sys
import traceback
import paramiko
host = 'sftp.host.com'
user = 'user'
pw = 'password'
storepath = '/home/ftpservice/download'
is_dir = lambda x: oct(x)[1:3] == '40'
is_file = lambda x: oct(x)[1:3] == '10'
tp = paramiko.Transport((host, 22))
print 'tp is made, connecting '
tp.connect(username=user, password=pw, hostkey=None)
sftp = tp.open_sftp_client()
print 'sftp client made, now listing files'
filelist = sftp.listdir('.')
print filelist
for i in filelist:
    fs = sftp.stat(i)
    print "file is %s " % i
    print "stmode %s" % sftp.stat(i).st_mode
    if is_dir(sftp.stat(i).st_mode):
        print "%s is a directory " % i
    elif is_file(sftp.stat(i).st_mode):
        print "%s is a file " % i
    else:
        print "no clue what %s is " % i
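On the debug question: paramiko logs through the standard logging module, so its wire-level output can be enabled before the Transport is created (paramiko.util.log_to_file is the library's own convenience wrapper around the same mechanism). A small sketch:

```python
import logging

# paramiko emits protocol-level debug messages on the "paramiko" logger;
# DEBUG level shows the banner exchange, key negotiation, and auth steps.
logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(name)s %(levelname)s %(message)s")
logging.getLogger("paramiko").setLevel(logging.DEBUG)
```

With this in place, a hang at paramiko.Transport((host, port)) should at least show whether the TCP connection or the SSH banner exchange is the part that stalls.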
I'm working on a script that spins up a fresh EC2 instance with boto and uses the Paramiko SSH client to execute remote commands on the instance. For whatever reason, the Paramiko client is unable to connect; I get the error:
Traceback (most recent call last):
File "scripts/sconfigure.py", line 29, in <module>
ssh.connect(instance.ip_address, username='ubuntu', key_filename=os.path.expanduser('~/.ssh/test'))
File "build/bdist.macosx-10.3-fat/egg/paramiko/client.py", line 291, in connect
File "<string>", line 1, in connect
socket.error: [Errno 61] Connection refused
I can ssh in fine manually using the same key file and user. Has anyone run into issues using Paramiko? My full code is below. Thanks.
import boto.ec2, time, paramiko, os
# Connect to the us-west-1 region
ec2 = boto.ec2.regions()[3].connect()
image_id = 'ami-ad7e2ee8'
image_name = 'Ubuntu 10.10 (Maverick Meerkat) 32-bit EBS'
new_reservation = ec2.run_instances(
    image_id=image_id,
    key_name='test',
    security_groups=['web'])
instance = new_reservation.instances[0]
print "Spinning up instance for '%s' - %s. Waiting for it to boot up." % (image_id, image_name)
while instance.state != 'running':
    print "."
    time.sleep(1)
    instance.update()
print "Instance is running, ip: %s" % instance.ip_address
print "Connecting to %s as user %s" % (instance.ip_address, 'ubuntu')
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(instance.ip_address, username='ubuntu', key_filename=os.path.expanduser('~/.ssh/test'))
stdin, stdout, stderr = ssh.exec_command('echo "TEST"')
print stdout.readlines()
ssh.close()
I seem to have figured this out by trial and error. Even though the instance status is "running" according to boto, there is a delay before it will actually allow an SSH connection. Adding a time.sleep(30) before the ssh.connect(...) seems to do the trick for me, though this may vary.
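A fixed sleep either wastes time or is still too short on a slow boot. As a sketch of an alternative, one can poll until the instance actually accepts TCP connections on port 22 before calling ssh.connect (the helper name and timing values here are illustrative, not from boto or paramiko):

```python
import socket
import time

def wait_for_ssh(host, port=22, timeout=120, interval=5):
    # Retry a plain TCP connection until the SSH port opens or the
    # deadline passes; returns as soon as sshd is reachable instead
    # of always waiting a fixed 30 seconds.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

# Usage sketch:
# if wait_for_ssh(instance.ip_address):
#     ssh.connect(instance.ip_address, username='ubuntu', ...)
```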
The way to check whether SSH is available is to make sure both of its status checks pass; in the web UI these appear as the instance's two status checks.
Using boto3 (the original question used boto, but that was 5 years ago), we can do:
session = boto3.Session(...)
client = session.client('ec2')
res = client.run_instances(...) # launch instance
instance_id = res['Instances'][0]['InstanceId']
while True:
    statuses = client.describe_instance_status(InstanceIds=[instance_id])
    status = statuses['InstanceStatuses'][0]
    if status['InstanceStatus']['Status'] == 'ok' \
            and status['SystemStatus']['Status'] == 'ok':
        break
    print '.'
    time.sleep(5)
print "Instance is running, you are ready to ssh to it"
Why not use boto.manage.cmdshell instead?
cmd = boto.manage.cmdshell.sshclient_from_instance(instance,
                                                   key_path,
                                                   user_name='ec2_user')
(code taken from line 152 in ec2_launch_instance.py)
For available cmdshell commands have a look at the SSHClient class from cmdshell.py.
I recently ran into this issue. The "correct" way would be to initiate a close() first and then reopen the connection. However, on older versions close() was broken.
With this version or later, it should be fixed:
https://github.com/boto/boto/pull/412
"Proper" method:
newinstance = image.run(min_count=instancenum, max_count=instancenum, key_name=keypair, security_groups=security_group, user_data=instancename, instance_type=instancetype, placement=zone)
time.sleep(2)
newinstance.instances[0].add_tag('Name', instancename)

print "Waiting for public_dns_name..."
counter = 0
while counter < 70:
    time.sleep(1)
    conn.close()
    conn = boto.ec2.connection.EC2Connection(ec2auth.access_key, ec2auth.private_key)
    startedinstance = conn.get_all_instances(instance_ids=str(newinstance.instances[0].id))[0]
    counter = counter + 1
    if str(startedinstance.instances[0].state) == "running":
        break
    if counter == 69:
        print "Timed out waiting for instance to start."

print "Added: " + startedinstance.instances[0].tags['Name'] + " " + startedinstance.instances[0].public_dns_name
I recently reviewed this code and I have a suggestion:
instead of running a while loop to check whether the instance is running, you can try wait_until_running().
Following is the sample code:
client = boto3.resource(
    'ec2',
    region_name="us-east-1"
)

Instance_ID = "<your Instance_ID>"
instance = client.Instance(Instance_ID)
instance.start()
instance.wait_until_running()
After that, proceed with the SSH connection code.