Can't use fabric put - Is there any server configuration needed? - python

I'm using fabric to do a remote deployment of my app on a rackspace server. I've tried my scripts on virtual machines using the same OS (Ubuntu Server 10.04) on my home computer and they all seem to work.
Strangely, all put fabric commands fail on the real server. All other commands (run, cd, sudo, etc) seem to work ok.
This only happens when targeting this specific server. This is the command I execute:
fab test --host remote-server
remote-server is an alias in my ~/.ssh/config. My fabfile:
@task
def test():
    sudo("echo testing")
    put("/tmp/file.txt", "/tmp/")
/tmp/file.txt is just a text file I'm using for my tests.
This is the output:
[remote-server] Executing task 'test'
[remote-server] sudo: echo testing
[remote-server] out: testing
Traceback (most recent call last):
File "/home/user/env/lib/python2.6/site-packages/fabric/main.py", line 712, in main
*args, **kwargs
File "/home/user/env/lib/python2.6/site-packages/fabric/tasks.py", line 298, in execute
multiprocessing
File "/home/user/env/lib/python2.6/site-packages/fabric/tasks.py", line 197, in _execute
return task.run(*args, **kwargs)
File "/home/user/env/lib/python2.6/site-packages/fabric/tasks.py", line 112, in run
return self.wrapped(*args, **kwargs)
File "/home/user/project/fabfile/__init__.py", line 33, in test
put("/tmp/file.txt", "/tmp/")
File "/home/user/env/lib/python2.6/site-packages/fabric/network.py", line 457, in host_prompting_wrapper
return func(*args, **kwargs)
File "/home/user/env/lib/python2.6/site-packages/fabric/operations.py", line 338, in put
ftp = SFTP(env.host_string)
File "/home/user/env/lib/python2.6/site-packages/fabric/sftp.py", line 20, in __init__
self.ftp = connections[host_string].open_sftp()
File "/home/user/env/lib/python2.6/site-packages/ssh/client.py", line 399, in open_sftp
return self._transport.open_sftp_client()
File "/home/user/env/lib/python2.6/site-packages/ssh/transport.py", line 844, in open_sftp_client
return SFTPClient.from_transport(self)
File "/home/user/env/lib/python2.6/site-packages/ssh/sftp_client.py", line 105, in from_transport
chan.invoke_subsystem('sftp')
File "/home/user/env/lib/python2.6/site-packages/ssh/channel.py", line 240, in invoke_subsystem
self._wait_for_event()
File "/home/user/env/lib/python2.6/site-packages/ssh/channel.py", line 1114, in _wait_for_event
raise e
ssh.SSHException: Channel closed.
Disconnecting from root@server.com... done.
Is there anything I need to configure on the remote server to be able to send files using put?

Thanks to @Drake I found out that there was an issue with the sftp server on the remote machine.
To test for this:
$ sftp remote-server
subsystem request failed on channel 0
Couldn't read packet: Connection reset by peer
I read that in order to enable sftp I needed to add the line
Subsystem sftp /usr/lib/openssh/sftp-server
to /etc/ssh/sshd_config and restart (/etc/init.d/ssh restart) the ssh service. But the line was already there and it wasn't working.
Then, after reading http://forums.debian.net/viewtopic.php?f=5&t=42818, I changed that line to
Subsystem sftp internal-sftp
restarted the ssh service, and it is now working:
$ sftp remote-server
Connected to remote-server
sftp>
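Fabric's put() opens an SFTP channel, so the failure can be reproduced without Fabric by checking what the server's sshd_config declares for the sftp subsystem. A minimal sketch of that check; the helper name and the sample config strings are illustrative, not Fabric or OpenSSH APIs:

```python
def sftp_subsystem(sshd_config_text):
    """Return the command configured for the sftp subsystem, or None if absent."""
    for line in sshd_config_text.splitlines():
        parts = line.split()
        # A valid directive looks like: Subsystem sftp <command>
        if len(parts) >= 3 and parts[0] == "Subsystem" and parts[1] == "sftp":
            return " ".join(parts[2:])
    return None

broken = "Subsystem sftp /usr/lib/openssh/sftp-server\n"
fixed = "Subsystem sftp internal-sftp\n"
print(sftp_subsystem(broken))  # /usr/lib/openssh/sftp-server
print(sftp_subsystem(fixed))   # internal-sftp
```

If the configured command is an external binary, the next thing to verify is that the binary actually exists on disk; internal-sftp avoids that dependency entirely, which is why it fixed this case.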

I had the same issue.
I found that sftp was not installed on my server.
I installed openssh and restarted the sshd service:
yum -y install openssh
service sshd restart
Even then, I had the same issue.
I checked the system log /var/log/messages and found the following errors:
Jul 3 04:23:20 <ip> sshd[13996]: subsystem request for sftp
Jul 3 04:23:20 <ip> sshd[13996]: error: subsystem: cannot stat /usr/libexec/sftp-server: No such file or directory
Jul 3 04:23:20 <ip> sshd[13996]: subsystem request for sftp failed, subsystem not found
I located my sftp-server binary at "/usr/libexec/openssh/sftp-server", while sshd was looking for it at "/usr/libexec/sftp-server".
I created a symbolic link and my issue got resolved.
[root@<ip> fabric]# locate sftp-server
/usr/libexec/openssh/sftp-server
/usr/share/man/man8/sftp-server.8.gz
ln -s /usr/libexec/openssh/sftp-server /usr/libexec/sftp-server
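The "cannot stat" error above simply means the path named in the Subsystem line does not exist. A small stdlib sketch of that check, with the paths taken from the log (adjust for your distribution):

```python
import os

def subsystem_binary_ok(path):
    """True if the sftp-server binary named in sshd_config actually exists."""
    return os.path.isfile(path)

# Paths taken from the log above; adjust for your distribution.
configured = "/usr/libexec/sftp-server"       # what sshd was told to exec
actual = "/usr/libexec/openssh/sftp-server"   # where the package installed it

if not subsystem_binary_ok(configured) and subsystem_binary_ok(actual):
    print("sshd_config points at a missing path; symlink or fix the config:")
    print("  ln -s " + actual + " " + configured)
```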


Permission error when trying to connect to MySQL DB using CGI Script

I'm trying to host an application on an apache2 server using the Python CGI framework. The program works fine when run directly, with no errors.
When I try it in the web browser I get this error:
InterfaceError: 2003: Can't connect to MySQL server on 'localhost:3306' (13 Permission denied)
I have tried installing mysql-connector-python and checked if I'm missing anything. Everything seems to be fine. Below is the code with the error in detail.
import mysql.connector as mx  # mx is mysql.connector, per the traceback below

def connectdb():
    mydb = mx.connect(host='localhost', user='******', passwd='********', database='searchdb')
    cur = mydb.cursor()
    return mydb, cur
This is the error when trying to access the program:
Traceback (most recent call last):
File "/var/www/html/ftest.py", line 116, in <module>
mydb,cur=connectdb()
File "/var/www/html/ftest.py", line 55, in connectdb
mx.connect(host='localhost',user='*****',passwd='********',database='searchdb')
File "/usr/lib/python2.7/site-packages/mysql/connector/__init__.py", line 98, in connect
return MySQLConnection(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/mysql/connector/connection.py", line 118, in __init__
self.connect(**kwargs)
File "/usr/lib/python2.7/site-packages/mysql/connector/connection.py", line 382, in connect
self._open_connection()
File "/usr/lib/python2.7/site-packages/mysql/connector/connection.py", line 345, in _open_connection
self._socket_open_connection()
File "/usr/lib/python2.7/site-packages/mysql/connector/network.py", line 386, in _open_connection
errno=2003, values=(self.get_address(), _strioerror(err)))
InterfaceError: 2003: Can't connect to MySQL server on 'localhost:3306' (13 Permission denied)
This is the error that I'm facing. If anyone is familiar with this issue, please let me know.
Thank you.
Some things to check, assuming this is a Linux machine:
Make sure you can connect locally using the same user name and password.
mysql -u [youruser] -p
If SELinux is running, make sure the "httpd_can_network_connect_db" boolean is set to on. Check with sudo getsebool httpd_can_network_connect_db. If it comes back as "off," set it to "on" with sudo setsebool -P httpd_can_network_connect_db on. (Make sure to include the -P, or it will revert back to the original value if the system restarts.)
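The "(13 Permission denied)" inside error 2003 is an OS errno, which is what makes the SELinux boolean the prime suspect: errno 13 is EACCES (a policy denied the connect), while a server that is simply down would typically produce ECONNREFUSED. A hedged sketch of that triage; the helper and its hint strings are illustrative, not part of mysql-connector:

```python
import errno

def triage_mysql_2003(os_errno):
    """Map the OS errno embedded in MySQL error 2003 to a likely cause.

    The hints cover common causes only; they are not a definitive diagnosis.
    """
    if os_errno == errno.EACCES:        # 13: permission denied
        return "blocked by OS policy (on SELinux, check httpd_can_network_connect_db)"
    if os_errno == errno.ECONNREFUSED:  # nothing listening on that host:port
        return "mysqld not listening on that host:port"
    return "other OS-level failure"

print(triage_mysql_2003(13))
```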

How can I connect to ray in docker on Mac M1?

The post is long mainly because of all the error messages. The gist is:
I start a docker container with ray(latest tag has ray version 1.9.2 at the moment)
using docker exec I start a python process within this container
From python I try to connect to ray
The attempt to connect fails on M1 Mac while works on Linux
➜ docker run -it rayproject/ray:latest
$ ray start --head --block --num-gpus=1
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
(base) ray@8346ae81903e:~$ ray start --head --dashboard-host 0.0.0.0 --block --include-dashboard true
Local node IP: 172.17.0.2
2022-01-27 01:22:01,109 INFO services.py:1340 -- View the Ray dashboard at http://172.17.0.2:8265
2022-01-27 01:22:01,119 WARNING services.py:1826 -- WARNING: The object store is using /tmp instead of /dev/shm because /dev/shm has only 67108864 bytes available. This will harm performance! You may be able to free up space by deleting files in /dev/shm. If you are inside a Docker container, you can increase /dev/shm size by passing '--shm-size=1.78gb' to 'docker run' (or add it to the run_options list in a Ray cluster config). Make sure to set this to more than 30% of available RAM.
[mutex.cc : 926] RAW: pthread_getschedparam failed: 1
--------------------
Ray runtime started.
--------------------
Next steps
To connect to this Ray runtime from another node, run
ray start --address='172.17.0.2:6379' --redis-password='XXXXX'
Alternatively, use the following Python code:
import ray
ray.init(address='auto', _redis_password='XXXXX')
To connect to this Ray runtime from outside of the cluster, for example to
connect to a remote cluster from your laptop directly, use the following
Python code:
import ray
ray.init(address='ray://<head_node_ip_address>:10001')
...
Then I use docker exec -it ... bash to connect to the container, run a python repl and try the commands suggested by the previous ray output.
import ray
ray.init(address='auto', _redis_password='XXXXX')
Results in
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/worker.py", line 834, in init
redis_address, _, _ = services.validate_redis_address(address)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/_private/services.py", line 375, in validate_redis_address
address = find_redis_address_or_die()
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/_private/services.py", line 287, in find_redis_address_or_die
"Could not find any running Ray instance. "
ConnectionError: Could not find any running Ray instance. Please specify the one to connect to by setting address.
Attempts to connect to a specific address don't end well either.
ray.init(address='ray://localhost:10001')
[mutex.cc : 926] RAW: pthread_getschedparam failed: 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/worker.py", line 775, in init
return builder.connect()
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/client_builder.py", line 155, in connect
ray_init_kwargs=self._remote_init_kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/util/client_connect.py", line 42, in connect
ray_init_kwargs=ray_init_kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/util/client/init.py", line 228, in connect
conn = self.get_context().connect(*args, **kw_args)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/util/client/init.py", line 88, in connect
self.client_worker._server_init(job_config, ray_init_kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/util/client/worker.py", line 698, in _server_init
f"Initialization failure from server:\n{response.msg}")
ConnectionAbortedError: Initialization failure from server:
Traceback (most recent call last):
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/util/client/server/proxier.py", line 629, in Datapath
"Starting Ray client server failed. See "
RuntimeError: Starting Ray client server failed. See ray_client_server_23000.err for detailed logs.
I had the same issue, and resolved it by setting dashboard_host to 0.0.0.0.
ray.init(dashboard_host="0.0.0.0", dashboard_port=6379)
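If you instead want to connect from the host (outside the container), the relevant ports also have to be published when the container starts; docker exec sidesteps this, but ray.init(address='ray://localhost:10001') from the host cannot. A sketch of such an invocation; the port numbers follow the defaults shown in the output above, and the --shm-size value is an assumption based on the /dev/shm warning:

```python
# Build a `docker run` line that publishes the ports ray uses, so the
# client server at ray://localhost:10001 is reachable from the host.
flags = [
    "docker run -it",
    "--shm-size=2gb",    # avoid the small-/dev/shm warning above
    "-p 10001:10001",    # ray client server port
    "-p 8265:8265",      # dashboard
    "-p 6379:6379",      # redis / cluster address
    "rayproject/ray:latest",
]
cmd = " ".join(flags)
print(cmd)
```

On an M1 Mac the linux/amd64 image additionally runs under emulation (the platform warning above), so even with the ports published, emulation-related failures like the pthread_getschedparam message may persist.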

Unable to connect to Redis instance running on Azure Linux VM from Python

I created an Ubuntu VM on Azure. In its inbound networking filters, I added ports 22 (for SSHing) and 6379 (for Redis). Following this, I SSHed into an instance from my bash shell, downloaded, built and installed Redis from source. The resultant redis.conf file is located in /tmp/redis-stable, so I edited that to comment out the bind 127.0.0.1 rule.
Then I started redis-server redis.conf from the /tmp/redis-stable directory, and it started normally. I then opened another SSH session to the VM, started redis-cli, set some keys and retrieved them: working correctly.
Now in Python I am running this command:
r = redis.Redis(host='same_which_I_use_for_SSHing', port=6379, password='pwd')
It connects immediately (looks weird). But then when I try a simple command like r.get("foo"), I get this error:
>>> r.get("foo")
Traceback (most recent call last):
File "/lib/python3.5/site-packages/redis/connection.py", line 484, in connect
sock = self._connect()
File "/lib/python3.5/site-packages/redis/connection.py", line 511, in _connect
socket.SOCK_STREAM):
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/socket.py", line 732, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/lib/python3.5/site-packages/redis/client.py", line 667, in execute_command
connection.send_command(*args)
File "/lib/python3.5/site-packages/redis/connection.py", line 610, in send_command
self.send_packed_command(self.pack_command(*args))
File "/lib/python3.5/site-packages/redis/connection.py", line 585, in send_packed_command
self.connect()
File "/lib/python3.5/site-packages/redis/connection.py", line 489, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 8 connecting to username@ip_address. nodename nor servname provided, or not known.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/lib/python3.5/site-packages/redis/connection.py", line 484, in connect
sock = self._connect()
File "/lib/python3.5/site-packages/redis/connection.py", line 511, in _connect
socket.SOCK_STREAM):
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/socket.py", line 732, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/lib/python3.5/site-packages/redis/client.py", line 976, in get
return self.execute_command('GET', name)
File "/lib/python3.5/site-packages/redis/client.py", line 673, in execute_command
connection.send_command(*args)
File "/lib/python3.5/site-packages/redis/connection.py", line 610, in send_command
self.send_packed_command(self.pack_command(*args))
File "/lib/python3.5/site-packages/redis/connection.py", line 585, in send_packed_command
self.connect()
File "/lib/python3.5/site-packages/redis/connection.py", line 489, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 8 connecting to username@ip_address. nodename nor servname provided, or not known.
Any idea how to fix this? FYI, the username@ip_address in the error message is the same I use for SSHing from bash and for connecting to Redis from Python respectively:
ssh username@ip_address
r = redis.Redis(host='username@ip_address', port=6379, password='pwd')
I also tried adding bind 0.0.0.0 in the redis.conf file after commenting out the bind 127.0.0.1 line. Same result.
Update: I tried setting protected mode to no in the config file, followed by running sudo ufw allow 6379 in the VM. Still the same result. But now I get weird output when I run redis-server redis.conf: I don't get the typical redis cube that normally shows up as a startup figure.
After this, if I enter redis-cli and issue a simple command like set foo boo, I get this error message:
127.0.0.1:6379> set foo 1
(error) MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
Even shutdown fails after this. I have to run grep to find the process ID of redis-server and kill it manually, then run redis-server again to start it normally. But of course, I still cannot connect from my remote Mac.
I tried to create an Ubuntu VM on Azure, build & configure the redis server from source code, and add a port rule for my VM NSG on the Azure portal; then I connected to the redis server successfully via the same code with the Python redis package.
Here are my steps:
1. Create an Ubuntu 18.04 VM on the Azure portal.
2. Connect to the Ubuntu VM via SSH to install the build packages (including build-essential, gcc and make), download & decompress the tar.gz file of the redis source code from redis.io, and build & test the redis server with make & make test.
3. Configure redis.conf via vim, to change the bind configuration from 127.0.0.1 to 0.0.0.0.
4. Add a new port rule for TCP port 6379 into the NSG for the Network Interface shown in the tab Settings -> Networking of my VM on the Azure portal (screenshots omitted).
Notes: There are two NSGs in my Networking tab; the second one is related to my Network Interface, which can be accessed via the internet.
5. Install redis via pip install redis on my local machine.
6. Open a terminal, type python, connect to the redis server hosted on my Azure Ubuntu VM, and get the value of the foo key successfully.
>>> import redis
>>> r = redis.Redis(host='xxx.xx.xx.xxx')
>>> r.get('foo')
b'bar'
There are two key issues I found in my testing:
For redis.conf, the bind address must be 0.0.0.0. If not, the redis server will run in protected mode and refuse queries.
For the NSG port rules, make sure the new port rule is added to the NSG attached to the current network interface, not the default subnet.
Also note that the Python redis package connects lazily: it only connects to the redis server when the first command request happens.
Hope it helps.
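Two details from this thread are worth automating: the redis package connects lazily (so a "successful" Redis(...) call proves nothing), and the host argument must be a bare hostname or IP (passing 'username@ip' is what produces the getaddrinfo "nodename nor servname" error above). A small stdlib probe, independent of redis-py, to confirm the port is reachable first; the helper is illustrative:

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """True if a plain TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the VM's address (placeholder) before involving redis-py at all.
# Note the host must be a bare hostname or IP, never 'username@ip'.
# print(port_reachable("xx.xx.xx.xx", 6379))
```

If this returns False, the problem is networking (NSG rules, bind address, firewall), not redis authentication or the client library.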

How do I connect via SSH to a docker image without matplotlib's "Invalid DISPLAY variable" error, when a local docker session works fine?

I have a fully working docker image running/hosted on my Ubuntu 18.04 linux machine. However, after connecting to the physical machine via SSH from my Win10 laptop via PowerShell:
ssh username@machine
I get the following error from matplotlib when I try to execute my code remotely via ssh:
Traceback (most recent call last):
File "foo", line 284, in <module>
cnnTrainTestApply.applyStructureDetectionNet(absPathToCsvFiles, absPathToCnnOutputFiles)
File "/home/dev/foo.py", line 702, in bar
plt.figure(figsize=(15, 15))
File "/opt/conda/lib/python3.5/site-packages/matplotlib/pyplot.py", line 539, in figure
**kwargs)
File "/opt/conda/lib/python3.5/site-packages/matplotlib/backend_bases.py", line 171, in new_figure_manager
return cls.new_figure_manager_given_figure(num, fig)
File "/opt/conda/lib/python3.5/site-packages/matplotlib/backend_bases.py", line 177, in new_figure_manager_given_figure
canvas = cls.FigureCanvas(figure)
File "/opt/conda/lib/python3.5/site-packages/matplotlib/backends/backend_qt5agg.py", line 35, in __init__
super(FigureCanvasQTAggBase, self).__init__(figure=figure)
File "/opt/conda/lib/python3.5/site-packages/matplotlib/backends/backend_qt5.py", line 235, in __init__
_create_qApp()
File "/opt/conda/lib/python3.5/site-packages/matplotlib/backends/backend_qt5.py", line 122, in _create_qApp
raise RuntimeError('Invalid DISPLAY variable')
RuntimeError: Invalid DISPLAY variable
Neither this nor ssh -X username@machine yields success. Working directly on my machine without ssh causes no issues. I suppose it is a missing running X server, or something similar.
What do I get wrong about the ssh connection? How can I resolve the issue?
Since matplotlib demands a functioning Qt5 backend, I cannot simply avoid forwarding the host system's X server into docker.
This solution is a workaround that needs a logged-in user with an active X server, which is not optimal.
As supposed, DISPLAY is missing during the SSH connection. To use it with matplotlib we have to do the following:
After the ssh login, on the host system "machine", type the following command before connecting to the docker image:
export DISPLAY=:1
This may depend on your machine setup; as long as you have the X server running with your current user, you may have to put the output of echo $DISPLAY instead of :1 in the export. As long as the same usernames are used, this should work.
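An alternative that avoids the X server entirely: when the code only saves figures to files, matplotlib's Agg backend needs no DISPLAY at all. A hedged sketch of picking a backend from the environment; the helper is illustrative, and only matplotlib.use and the backend names are real matplotlib API:

```python
import os

def pick_backend(env=None):
    """Pick a GUI backend when DISPLAY is set, the file-only Agg backend otherwise."""
    env = os.environ if env is None else env
    return "Qt5Agg" if env.get("DISPLAY") else "Agg"

# Usage (must run before the first `import matplotlib.pyplot`):
#   import matplotlib
#   matplotlib.use(pick_backend())
#   import matplotlib.pyplot as plt
#   plt.figure(figsize=(15, 15))
#   ...
#   plt.savefig("figure.png")   # instead of plt.show()
print(pick_backend({}))                  # Agg
print(pick_backend({"DISPLAY": ":1"}))  # Qt5Agg
```

This sidesteps the workaround above entirely for headless runs, at the cost of not getting interactive windows.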

running dev_appserver.py from linux terminal

I have recently spun up a VM on Google Compute Engine with the view of creating a development environment in the cloud.
I have the source code and installed the Google Cloud SDK and the App Engine SDK. However, when I try to run dev_appserver.py I get the following error, even after ensuring firewall rules are created.
x@dev:~/code$ dev_appserver.py --host dev.cfcmelbourne.org --port=8080 cfc/
INFO 2015-05-20 12:54:22,744 sdk_update_checker.py:229] Checking for updates to the SDK.
INFO 2015-05-20 12:54:23,280 sdk_update_checker.py:273] This SDK release is newer than the advertised release.
INFO 2015-05-20 12:54:23,361 api_server.py:190] Starting API server at: http://localhost:38624
INFO 2015-05-20 12:54:23,441 api_server.py:615] Applying all pending transactions and saving the datastore
INFO 2015-05-20 12:54:23,441 api_server.py:618] Saving search indexes
Traceback (most recent call last):
File "/home/xxx/software/google_appengine/dev_appserver.py", line 83, in <module>
_run_file(__file__, globals())
File "/home/xxx/software/google_appengine/dev_appserver.py", line 79, in _run_file
execfile(_PATHS.script_file(script_name), globals_)
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 1002, in <module>
main()
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 995, in main
dev_server.start(options)
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 798, in start
self._dispatcher.start(options.api_host, apis.port, request_data)
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 189, in start
_module.start()
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/module.py", line 1174, in start
self._balanced_module.start()
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 315, in start
self._start_all_fixed_port(host_ports)
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 352, in _start_all_fixed_port
raise BindError('Unable to bind %s:%s' % self.bind_addr)
google.appengine.tools.devappserver2.wsgi_server.BindError: Unable to bind dev.cfcmelbourne.org:8080
xxx@dev:~/code$
The firewall rules clearly allow 8080 TCP access.
Run netstat -tulpn as the root user to see if there is a process running on port 8080. Type fuser 8080/tcp to get the PID of the process running on port 8080 and kill it, or simply use the -k argument with the fuser command, i.e. fuser -k 8080/tcp, to kill that process. It worked fine for me.
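Note the BindError can have two causes: the port is already held by another process (what the answer above addresses), or the --host name resolves to an address that is not assigned to any local interface, which is common on Compute Engine where the VM only sees its internal IP. The first cause can be checked from Python with the stdlib; the helper is illustrative:

```python
import socket

def port_free(host, port):
    """True if we can bind host:port ourselves, i.e. no other process holds it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError:
        return False
    finally:
        s.close()

# A bind failure here reproduces exactly what wsgi_server.py reports.
# print(port_free("0.0.0.0", 8080))
```

For the second cause, binding to 0.0.0.0 (or the VM's internal IP) instead of the public DNS name usually resolves it.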
