I have recently spun up a VM on Google Compute Engine with a view to creating a development environment in the cloud.
I have the source code and have installed the Google Cloud SDK and the App Engine SDK. However, when I try to run dev_appserver.py I get the following error, even after ensuring the firewall rules are created.
x@dev:~/code$ dev_appserver.py --host dev.cfcmelbourne.org --port=8080 cfc/
INFO 2015-05-20 12:54:22,744 sdk_update_checker.py:229] Checking for updates to the SDK.
INFO 2015-05-20 12:54:23,280 sdk_update_checker.py:273] This SDK release is newer than the advertised release.
INFO 2015-05-20 12:54:23,361 api_server.py:190] Starting API server at: http://localhost:38624
INFO 2015-05-20 12:54:23,441 api_server.py:615] Applying all pending transactions and saving the datastore
INFO 2015-05-20 12:54:23,441 api_server.py:618] Saving search indexes
Traceback (most recent call last):
File "/home/xxx/software/google_appengine/dev_appserver.py", line 83, in <module>
_run_file(__file__, globals())
File "/home/xxx/software/google_appengine/dev_appserver.py", line 79, in _run_file
execfile(_PATHS.script_file(script_name), globals_)
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 1002, in <module>
main()
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 995, in main
dev_server.start(options)
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 798, in start
self._dispatcher.start(options.api_host, apis.port, request_data)
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 189, in start
_module.start()
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/module.py", line 1174, in start
self._balanced_module.start()
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 315, in start
self._start_all_fixed_port(host_ports)
File "/home/xxx/software/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 352, in _start_all_fixed_port
raise BindError('Unable to bind %s:%s' % self.bind_addr)
google.appengine.tools.devappserver2.wsgi_server.BindError: Unable to bind dev.cfcmelbourne.org:8080
xxx@dev:~/code$
The firewall rules clearly allow TCP access on port 8080.
Run netstat -tulpn as the root user to see if there is a process already running on port 8080. Type fuser 8080/tcp to get the PID of the process listening on port 8080 and kill that process, or simply use the -k argument with the fuser command, i.e. fuser -k 8080/tcp, to kill it in one step. It worked fine for me.
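If you want to check from Python whether anything can actually bind the address dev_appserver.py is complaining about, a minimal sketch along these lines may help (the host and port are taken from the error above; nothing here is specific to App Engine):

import socket

# Try to bind the same host:port dev_appserver.py reported in the BindError.
# If bind() fails, either another process already owns the port or the
# hostname does not resolve to an address on this machine.
def can_bind(host, port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((host, port))
        return True
    except socket.error as exc:
        print("bind failed: %s" % exc)
        return False
    finally:
        sock.close()

print(can_bind("dev.cfcmelbourne.org", 8080))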
The post is long mainly because of all the error messages. The gist is:
I start a Docker container with Ray (the latest tag is Ray version 1.9.2 at the moment)
Using docker exec I start a Python process within this container
From Python I try to connect to Ray
The attempt to connect fails on an M1 Mac while it works on Linux
➜ docker run -it rayproject/ray:latest
$ ray start --head --block --num-gpus=1
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
(base) ray@8346ae81903e:~$ ray start --head --dashboard-host 0.0.0.0 --block --include-dashboard true
Local node IP: 172.17.0.2
2022-01-27 01:22:01,109 INFO services.py:1340 -- View the Ray dashboard at http://172.17.0.2:8265
2022-01-27 01:22:01,119 WARNING services.py:1826 -- WARNING: The object store is using /tmp instead of /dev/shm because /dev/shm has only 67108864 bytes available. This will harm performance! You may be able to free up space by deleting files in /dev/shm. If you are inside a Docker container, you can increase /dev/shm size by passing '--shm-size=1.78gb' to 'docker run' (or add it to the run_options list in a Ray cluster config). Make sure to set this to more than 30% of available RAM.
[mutex.cc : 926] RAW: pthread_getschedparam failed: 1
--------------------
Ray runtime started.
--------------------
Next steps
To connect to this Ray runtime from another node, run
ray start --address='172.17.0.2:6379' --redis-password='XXXXX'
Alternatively, use the following Python code:
import ray
ray.init(address='auto', _redis_password='XXXXX')
To connect to this Ray runtime from outside of the cluster, for example to
connect to a remote cluster from your laptop directly, use the following
Python code:
import ray
ray.init(address='ray://<head_node_ip_address>:10001')
...
Then I use docker exec -it ... bash to connect to the container, run a Python REPL, and try the commands suggested by the previous Ray output.
import ray
ray.init(address='auto', _redis_password='XXXXX')
Results in
Traceback (most recent call last):
File "", line 1, in
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/worker.py", line 834, in init
redis_address, _, _ = services.validate_redis_address(address)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/_private/services.py", line 375, in validate_redis_address
address = find_redis_address_or_die()
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/_private/services.py", line 287, in find_redis_address_or_die
"Could not find any running Ray instance. "
ConnectionError: Could not find any running Ray instance. Please specify the one to connect to by setting address.
Attempts to connect to a specific address don't end well either.
ray.init(address='ray://localhost:10001')
[mutex.cc : 926] RAW: pthread_getschedparam failed: 1
Traceback (most recent call last):
File "", line 1, in
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/worker.py", line 775, in init
return builder.connect()
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/client_builder.py", line 155, in connect
ray_init_kwargs=self._remote_init_kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/util/client_connect.py", line 42, in connect
ray_init_kwargs=ray_init_kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/util/client/init.py", line 228, in connect
conn = self.get_context().connect(*args, **kw_args)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/util/client/init.py", line 88, in connect
self.client_worker._server_init(job_config, ray_init_kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/util/client/worker.py", line 698, in _server_init
f"Initialization failure from server:\n{response.msg}")
ConnectionAbortedError: Initialization failure from server:
Traceback (most recent call last):
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/util/client/server/proxier.py", line 629, in Datapath
"Starting Ray client server failed. See "
RuntimeError: Starting Ray client server failed. See ray_client_server_23000.err for detailed logs.
I had the same issue and resolved it by setting dashboard_host to 0.0.0.0:
ray.init(dashboard_host="0.0.0.0",dashboard_port=6379)
I am using a Python script which uses RabbitMQ for input and output. I am able to run the script locally without any errors, but when I Dockerize it and run it, I get the following error:
Traceback (most recent call last):
File "./Kusto_connection_with_rabbitmq_2.py", line 1674, in <module>
main()
File "./Kusto_connection_with_rabbitmq_2.py", line 1668, in main
channel.start_consuming()
File "/usr/local/lib/python3.8/site-packages/pika/adapters/blocking_connection.py", line 1865,
in start_consuming
self._process_data_events(time_limit=None)
File "/usr/local/lib/python3.8/site-packages/pika/adapters/blocking_connection.py", line 2026,
in _process_data_events self.connection.process_data_events(time_limit=time_limit)
File "/usr/local/lib/python3.8/site-packages/pika/adapters/blocking_connection.py", line 824,
in process_data_events
self._flush_output(common_terminator)
File "/usr/local/lib/python3.8/site-packages/pika/adapters/blocking_connection.py", line 523,
in _flush_output
raise self._closed_result.value.error
pika.exceptions.StreamLostError: Transport indicated EOF
Below is my Python code which connects to RabbitMQ:
credentials = pika.PlainCredentials(username, password)
parameters = pika.ConnectionParameters(host=Api_url, virtual_host=rmqvhost, credentials=credentials, heartbeat=0)
print(username, password)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.queue_declare(queue='test', durable=True)
channel.basic_qos(prefetch_size=0, prefetch_count=1)  # this is for acknowledging packets one by one
channel.basic_consume(queue='test', on_message_callback=callback, auto_ack=False)
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
My Dockerfile:
FROM python:3.8
WORKDIR /First_try
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY Kusto_connection_with_rabbitmq_2.py .
CMD ["python","./Kusto_connection_with_rabbitmq_2.py"]
I run my Docker container with
docker run <image_name>
Maybe your connection is being interrupted, and pika is declaring your client dead. Try setting the heartbeat in your parameters to 30 or so.
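A minimal sketch of what that change could look like, reusing the connection code from the question (the credential and host variables are placeholders; heartbeat=30 and blocked_connection_timeout are just suggested values):

import pika

username = "guest"                 # placeholders; use the values from the question
password = "guest"
Api_url = "rabbitmq.example.com"
rmqvhost = "/"

credentials = pika.PlainCredentials(username, password)
# A finite heartbeat lets the client and broker exchange keepalives instead
# of disabling them entirely with heartbeat=0.
parameters = pika.ConnectionParameters(
    host=Api_url,
    virtual_host=rmqvhost,
    credentials=credentials,
    heartbeat=30,
    blocked_connection_timeout=300,
)
connection = pika.BlockingConnection(parameters)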
I suppose your problem is that you don't have access to RabbitMQ from your Docker container.
Here is a simple example of using RabbitMQ from Python with docker-compose; you should try to implement your solution based on this example (see the sketch below).
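A minimal sketch of the Python side of such a setup, assuming the broker runs as a docker-compose service named rabbitmq on the same compose network (the service name, credentials and queue are placeholders):

import pika

# Inside a compose network the broker is reachable by its service name,
# not by localhost; all values below are placeholders for illustration.
credentials = pika.PlainCredentials("guest", "guest")
parameters = pika.ConnectionParameters(host="rabbitmq", credentials=credentials, heartbeat=30)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.queue_declare(queue="test", durable=True)
channel.basic_publish(exchange="", routing_key="test", body="hello")
connection.close()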
I've installed pritunl on my VPS. Now I can't access the web interface, and it's not possible to start the service (please have a look at the logs below).
There's a MariaDB database alongside the MongoDB that comes with pritunl; is that a problem? (I need MariaDB for other applications, that's why.)
I followed the official guide for CentOS. It seems like there are some missing prerequisites, but I'm honestly not sure.
Can someone help me out? :-)
Thanks!
Moejoe
pritunl logs
[undefined][2017-01-30 21:45:51,211][ERROR] Pritunl setup failed
Traceback (most recent call last):
File "/usr/lib/pritunl/lib/python2.7/site-packages/pritunl/setup/__init__.py", line 68, in setup_db
setup_mongo()
File "/usr/lib/pritunl/lib/python2.7/site-packages/pritunl/setup/mongo.py", line 65, in setup_mongo
serverSelectionTimeoutMS=MONGO_SOCKET_TIMEOUT,
File "/usr/lib/pritunl/lib/python2.7/site-packages/pymongo/mongo_client.py", line 345, in __init__
seeds.update(uri_parser.split_hosts(entity, port))
File "/usr/lib/pritunl/lib/python2.7/site-packages/pymongo/uri_parser.py", line 244, in split_hosts
raise ConfigurationError("Empty host "
ConfigurationError: Empty host (or extra comma in host list).
Traceback (most recent call last):
File "/usr/bin/pritunl", line 9, in <module>
load_entry_point('pritunl==1.26.1231.99', 'console_scripts', 'pritunl')()
File "/usr/lib/pritunl/lib/python2.7/site-packages/pritunl/__main__.py", line 264, in main
setup.setup_db()
File "/usr/lib/pritunl/lib/python2.7/site-packages/pritunl/setup/__init__.py", line 68, in setup_db
setup_mongo()
File "/usr/lib/pritunl/lib/python2.7/site-packages/pritunl/setup/mongo.py", line 65, in setup_mongo
serverSelectionTimeoutMS=MONGO_SOCKET_TIMEOUT,
File "/usr/lib/pritunl/lib/python2.7/site-packages/pymongo/mongo_client.py", line 345, in __init__
seeds.update(uri_parser.split_hosts(entity, port))
File "/usr/lib/pritunl/lib/python2.7/site-packages/pymongo/uri_parser.py", line 244, in split_hosts
raise ConfigurationError("Empty host "
pymongo.errors.ConfigurationError: Empty host (or extra comma in host list).
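If it helps, the "Empty host" error can be reproduced directly with pymongo's URI parser, which suggests the MongoDB URI pritunl has been configured with is empty or malformed (a minimal sketch; the host values are just illustrations):

from pymongo import uri_parser

# split_hosts is the function shown in the traceback above.
print(uri_parser.split_hosts("localhost:27017"))  # parses fine
uri_parser.split_hosts("")  # raises ConfigurationError, the "Empty host" error from the logs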
I had exactly the same problem.
The only solution is to (temporarily) shut down Apache:
Obviously sudo service httpd stop, or on Debian sudo service apache stop
After that, you should be able to finish the installation and start pritunl, change the ports so it doesn't use 80 and 443, and finally restart everything like a boss.
For me that was, in this order only:
sudo pritunl set app.redirect_server false
sudo service pritunl stop
sudo service apache start
sudo service pritunl start
I have this error when I run my code on the server; my environment is Debian with Python 2.7.3.
Traceback (most recent call last):
File "fetcher.py", line 4, in <module>
import mirad.fetcher_tasks as tasks
File "/home/mirad/backend/mirad/fetcher_tasks.py", line 75, in <module>
redis_keys = r.keys('*')
File "/home/mirad/backend/venv/local/lib/python2.7/site-packages/redis/client.py", line 863, in keys
return self.execute_command('KEYS', pattern)
File "/home/mirad/backend/venv/local/lib/python2.7/site-packages/redis/client.py", line 534, in execute_command
connection.send_command(*args)
File "/home/mirad/backend/venv/local/lib/python2.7/site-packages/redis/connection.py", line 532, in send_command
self.send_packed_command(self.pack_command(*args))
File "/home/mirad/backend/venv/local/lib/python2.7/site-packages/redis/connection.py", line 508, in send_packed_command
self.connect()
File "/home/mirad/backend/venv/local/lib/python2.7/site-packages/redis/connection.py", line 412, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error -2 connecting to localhost:6379. Name or service not known.
When I run redis-cli it works correctly without any error:
$ redis-cli
127.0.0.1:6379>
It seems that you are trying to connect to Redis using a hostname that your current Debian environment cannot resolve.
From the traceback, I see you are trying to connect using the host name localhost:
r_server=redis.Redis(host="localhost",port=6379)
But your system is unable to resolve "localhost", so make an entry in the hosts file saying that 127.0.0.1 is localhost. Add the line below to /etc/hosts:
127.0.0.1 localhost
Otherwise, connect to Redis using the command below:
r_server=redis.Redis(host="127.0.0.1",port=6379)
I faced a similar problem and later realized that I had not started Redis in the terminal with the command $ redis-server.
Maybe that is the case for you.
I'm using fabric to do a remote deployment of my app on a rackspace server. I've tried my scripts on virtual machines using the same OS (Ubuntu Server 10.04) on my home computer and they all seem to work.
Strangely, all fabric put commands fail on the real server. All other commands (run, cd, sudo, etc.) seem to work OK.
This only happens when targeting this specific server, this is the command I execute:
fab test --host remote-server
remote-server is an alias on my .ssh/config. My fabfile:
@task
def test():
    sudo("echo testing")
    put("/tmp/file.txt", "/tmp/")
/tmp/file.txt is just a text file I'm using for my tests.
This is the output
[remote-server] Executing task 'test'
[remote-server] sudo: echo testing
[remote-server] out: testing
Traceback (most recent call last):
File "/home/user/env/lib/python2.6/site-packages/fabric/main.py", line 712, in main
*args, **kwargs
File "/home/user/env/lib/python2.6/site-packages/fabric/tasks.py", line 298, in execute
multiprocessing
File "/home/user/env/lib/python2.6/site-packages/fabric/tasks.py", line 197, in _execute
return task.run(*args, **kwargs)
File "/home/user/env/lib/python2.6/site-packages/fabric/tasks.py", line 112, in run
return self.wrapped(*args, **kwargs)
File "/home/user/project/fabfile/__init__.py", line 33, in test
put("/tmp/file.txt", "/tmp/")
File "/home/user/env/lib/python2.6/site-packages/fabric/network.py", line 457, in host_prompting_wrapper
return func(*args, **kwargs)
File "/home/user/env/lib/python2.6/site-packages/fabric/operations.py", line 338, in put
ftp = SFTP(env.host_string)
File "/home/user/env/lib/python2.6/site-packages/fabric/sftp.py", line 20, in __init__
self.ftp = connections[host_string].open_sftp()
File "/home/user/env/lib/python2.6/site-packages/ssh/client.py", line 399, in open_sftp
return self._transport.open_sftp_client()
File "/home/user/env/lib/python2.6/site-packages/ssh/transport.py", line 844, in open_sftp_client
return SFTPClient.from_transport(self)
File "/home/user/env/lib/python2.6/site-packages/ssh/sftp_client.py", line 105, in from_transport
chan.invoke_subsystem('sftp')
File "/home/user/env/lib/python2.6/site-packages/ssh/channel.py", line 240, in invoke_subsystem
self._wait_for_event()
File "/home/user/env/lib/python2.6/site-packages/ssh/channel.py", line 1114, in _wait_for_event
raise e
ssh.SSHException: Channel closed.
Disconnecting from root@server.com... done.
Is there anything I need to configure on the remote server to be able to send files using put?
Thanks to @Drake I found out that there was an issue with the sftp server on the remote machine.
To test for this:
$ sftp remote-server
subsystem request failed on channel 0
Couldn't read packet: Connection reset by peer
I read that in order to enable sftp I needed to add the line
Subsystem sftp /usr/lib/openssh/sftp-server
to /etc/ssh/sshd_config and restart (/etc/init.d/ssh restart) the ssh service. But the line was already there and it wasn't working.
Then, after reading http://forums.debian.net/viewtopic.php?f=5&t=42818, I changed that line to
Subsystem sftp internal-sftp
restarted the ssh service, and it is now working:
$ sftp remote-server
Connected to remote-server
sftp>
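If you want to rule fabric out entirely, the same subsystem can be exercised with a few lines of paramiko (a sketch only; the hostname and username are placeholders, and key handling is left to your usual ssh setup):

import paramiko

# If open_sftp() raises "Channel closed." here too, the problem is the
# server-side sftp subsystem, not fabric.
client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("server.example.com", username="root")  # placeholders
sftp = client.open_sftp()
print(sftp.listdir("."))
sftp.close()
client.close()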
I had the same issue.
I found that sftp was not installed on my server.
I installed openssh and restarted the sshd service.
yum -y install openssh
service sshd restart
Then I still had the same issue.
I checked the system log /var/log/messages and found the following error:
Jul 3 04:23:20 <ip> sshd[13996]: subsystem request for sftp
Jul 3 04:23:20 <ip> sshd[13996]: error: subsystem: cannot stat /usr/libexec/sftp-server: No such file or directory
Jul 3 04:23:20 <ip> sshd[13996]: subsystem request for sftp failed, subsystem not found
I located my sftp-server binary, which was at /usr/libexec/openssh/sftp-server, while sshd was looking for it at /usr/libexec/sftp-server.
I created a symbolic link and my issue got resolved.
[root@<ip> fabric]# locate sftp-server
/usr/libexec/openssh/sftp-server
/usr/share/man/man8/sftp-server.8.gz
ln -s /usr/libexec/openssh/sftp-server /usr/libexec/sftp-server