I'm trying to connect Mission Planner so that I can simulate, but I'm getting this error:
(proje) C:\Users\hasan>mavproxy.py --master tcp:127.0.0.1:5760 --out udp:127.0.0.1:20000 --out udp:127.0.0.1:10000
Auto-detected serial ports are:
Connect 0.0.0.0:14550 source_system=255
Loaded module console
Log Directory:
Telemetry log: mav.tlog
Waiting for heartbeat from 0.0.0.0:14550
MAV>
The IP I want to connect to is 127.0.0.1, but it tries to connect to 0.0.0.0:14550. I did some research, but none of the solutions I found solved my problem.
MAVProxy version = 1.8.35
pymavlink version = 2.4.8
I've created a Spark cluster with one master and two slaves, each in a Docker container.
I launch it with the command start-all.sh.
I can reach the UI from my local machine at localhost:8080, and it shows me that the cluster is up and running:
Screenshot of Spark UI
Then I try to submit a simple Python script from my host machine (not from the Docker container) with this spark-submit command: spark-submit --master spark://spark-master:7077 test.py
test.py :
import pyspark
conf = pyspark.SparkConf().setAppName('MyApp').setMaster('spark://spark-master:7077')
sc = pyspark.SparkContext(conf=conf)
But the console returned this error:
22/01/26 09:20:39 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
22/01/26 09:20:40 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-master:7077
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:109)
at org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1$$anon$1.run(StandaloneAppClient.scala:106)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.io.IOException: Failed to connect to spark-master:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:245)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:187)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:198)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
... 4 more
Caused by: java.net.UnknownHostException: spark-master
at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302)
at java.base/java.net.InetAddress.getByName(InetAddress.java:1252)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:146)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:143)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:143)
at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:32)
at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:108)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:202)
at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:48)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:182)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:168)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:985)
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:505)
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:416)
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:475)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518)
at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
I also tried with a simple Scala script, just to try to reach the cluster, but I got the same error.
Do you have any idea how I can reach my cluster with a Python script?
(Edit)
I forgot to specify that I've created a Docker network between my master and my slaves.
So, with the help of MrTshoot and Gaarv, I replaced spark-master (in spark://spark-master:7077) with the IP of my master container (you can get it with the command docker network inspect my-network).
And it works! Thanks!
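For reference, a minimal sketch of the working change, assuming the master container's address came back as 172.18.0.2 (a placeholder; use whatever docker network inspect my-network actually reports):
import pyspark

# 172.18.0.2 is a placeholder for the master container's IP reported by
# `docker network inspect my-network`; the service name "spark-master" is
# not resolvable from the host.
conf = pyspark.SparkConf().setAppName('MyApp').setMaster('spark://172.18.0.2:7077')
sc = pyspark.SparkContext(conf=conf)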
When you specify .setMaster('spark://spark-master:7077'), it means "reach the Spark cluster at the DNS name 'spark-master' on port 7077", which your local machine cannot resolve.
So in order for your host machine to reach the cluster, you must instead specify the Docker DNS / IP address of your Spark cluster: check the "docker0" interface on your local machine and replace "spark-master" with it.
You cannot connect to Docker services from your host by their service name. You should set up DNS or use the service's IP address, or use the following trick:
Expose your Spark cluster ports.
Open /etc/hosts and add the following content to it:
127.0.0.1 localhost spark-master
::1 localhost spark-master
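As a quick sanity check (a sketch, not part of the original answer), you can confirm from Python that the name now resolves on the host before re-running spark-submit:
import socket

# Should print 127.0.0.1 once the /etc/hosts entries above are in place.
print(socket.gethostbyname('spark-master'))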
I'm making a new Telegram bot and want to deploy it to Heroku. I created a Postgres database there (on Heroku). I used to use NordVPN; now I don't, and I've noticed that I can't connect to my database anymore while using other VPNs. When I run my bot (with another VPN), I get this error:
Exception: could not connect to server: Connection timed out (0x0000274C/10060)
Is the server running on host "ec2-18-211-41-246.compute-1.amazonaws.com" (18.211.41.246) and accepting
TCP/IP connections on port 5432?
It works fine without a VPN, though. From what I've read, it's caused by the firewall. But I turned Windows Firewall off and also paused my anti-virus, and I still got the error. Any suggestions on how to make it work with any VPN?
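One way to separate a VPN routing problem from a database problem (a sketch I would try, using the host and port from the error above) is a raw TCP reachability test, run once with the VPN up and once without:
import socket

# Host and port taken from the error message above.
host = "ec2-18-211-41-246.compute-1.amazonaws.com"
try:
    # If this only times out while the VPN is connected, the VPN is not
    # routing traffic to port 5432 and the psycopg2/bot code is not at fault.
    with socket.create_connection((host, 5432), timeout=10):
        print("port 5432 is reachable")
except OSError as exc:
    print("not reachable:", exc)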
Some information:
>>> systeminfo
...
OS Name: Microsoft Windows 10 Home
OS Version: 10.0.19042 N/A Build 19042
...
>>> python --version
Python 3.8.0
>>> pip freeze | findstr psycopg2
psycopg2==2.8.6
>>> netstat -a -n | findstr "5432"
TCP 0.0.0.0:5432 0.0.0.0:0 LISTENING
TCP [::]:5432 [::]:0 LISTENING
>>> telnet 0.0.0.0 5432
Connecting To 0.0.0.0...Could not open connection to the host, on port 5432: Connect failed
>>> heroku version
heroku/7.54.1 win32-x64 node-v12.21.0
>>> heroku pg:info -a my-bot
=== DATABASE_URL
Plan: Hobby-dev
Status: Available
Connections: 0/20
PG Version: 13.3
Created: 2021-08-08 07:47 UTC
Data Size: 8.1 MB/1.00 GB (In compliance)
Tables: 1
Rows: 15/10000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
Continuous Protection: Off
The above commands were run while connected to a VPN.
Note: Telegram is blocked in my country, so in order to work on Telegram bots I have to use a VPN.
I solved my problem by installing OpenVPN and using freeopenvpn configs. I don't know why that helped, but I'm leaving this here for anyone who runs into this problem in the future.
I have a Couchbase 6.0 server running on Linode, and I'm using the Python SDK to insert data into my Couchbase bucket. When run directly on the Linode server, my data gets inserted.
However, when I run my code from a remote machine, I get a network error:
CouchbaseNetworkError, CouchbaseTransientError): <RC=0x2C[The remote host refused the connection.
I have ports 8091, 8092, 8093, and 8094 open on Linode.
from couchbase.cluster import Cluster
from couchbase.cluster import PasswordAuthenticator
# linode ip: 1.2.3.4
cluster = Cluster('couchbase://1.2.3.4:8094')
cluster.authenticate(PasswordAuthenticator('admin', 'password'))
bucket = cluster.open_bucket('test_bucket')
bucket.upsert('1',{"foo":"bar"})
My code executes when run on the server with couchbase://localhost, but it fails when run from a remote machine. Is there any port or configuration I'm missing?
Client-to-node: Between any clients/app-servers/SDKs and all nodes of each cluster they require access to.
Unencrypted: 8091-8096, 11210, 11211
Encrypted: 18091-18096, 11207
Using ports 11210 and 11211 worked for me (source).
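A sketch of what that looks like with the SDK API used in the question: drop the explicit :8094 (by default the search/FTS REST port) from the connection string so the client negotiates the standard data ports (11210/11211); the IP and credentials below are the placeholders from the question:
from couchbase.cluster import Cluster
from couchbase.cluster import PasswordAuthenticator

# No port pinned in the connection string, so the SDK uses the default
# key-value ports (11210/11211) rather than 8094.
cluster = Cluster('couchbase://1.2.3.4')
cluster.authenticate(PasswordAuthenticator('admin', 'password'))
bucket = cluster.open_bucket('test_bucket')
bucket.upsert('1', {"foo": "bar"})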
I'm currently running a Python script inside a Docker container on an Ubuntu machine. The script processes input from a serial device with pyserial.
Every time I start the script, I have no real connection to the device; it always throws this error:
serial.serialutil.SerialException: [Errno 16] could not open port /dev/ttyACM3: [Errno 16] Device or resource busy: '/dev/ttyACM3'
Note: the device isn't used by any other process, so there should be no reason for this error.
import serial
import time

# Keep retrying until the serial port can be opened (excerpt from a method,
# hence the use of self).
while True:
    try:
        self.serialSource = serial.Serial(self.inputDevice)
        self.serialSource.timeout = 0.5
        break
    except serial.serialutil.SerialException as e:
        print(str(e))
        time.sleep(1)
        continue
If I run the script directly on the host machine, it works as expected.
I think there might be a problem with the Docker configuration.
I simply use docker-compose to attach the device to the container like this:
devices:
- "/dev/ttyACM3:/dev/ttyACM3"
setup info:
host os: ubuntu 18.04
docker: 18.06.1-ce, build e68fc7a
docker-compose: 1.22.0, build f46880fe
docker image: "python:3.7.0"
pyserial: 3.4
Any ideas?
I am using Docker to "containerize" a PostgreSQL deployment. I can spin up the container and connect to PostgreSQL via the command line as shown below:
minime2#CEBERUS:~/Projects/skunkworks$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dc176901052a df:pg "docker-entrypoint..." About an hour ago Up About an hour 5432/tcp vigilant_agnesi
minime2#CEBERUS:~/Projects/skunkworks$ CONTAINER_ID=dc176901052a
minime2#CEBERUS:~/Projects/skunkworks$ IP=$(docker inspect -f '{{.NetworkSettings.Networks.bridge.IPAddress}}' $CONTAINER_ID)
minime2#CEBERUS:~/Projects/skunkworks$ echo $IP
172.17.0.2
minime2#CEBERUS:~/Projects/skunkworks$ docker exec -it vigilant_agnesi psql -U postgres -W cookiebox
Password for user postgres:
psql (9.6.5)
Type "help" for help
cookiebox#
Now attempting connection with Python:
Python 3.5.2 (default, Sep 14 2017, 22:51:06)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import psycopg2
>>> conn = psycopg2.connect("dbname='cookiebox' user='postgres' host='172.17.0.2' password='nunyabiznes'")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/minime2/Projects/skunkworks/archivers/env/lib/python3.5/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "172.17.0.2" and accepting
TCP/IP connections on port 5432?
>>>
Can anyone explain why I can't connect to PostgreSQL using Python, even though I'm using the same arguments/parameters that enable a successful connection at the command line (via docker exec)?
[[Additional Info]]
As suggested by #Itvhillo, I tried to use a desktop application to connect to the PG service. I run the container using the following command:
docker run -i -p 5432:5432 --name $CONTAINER_NAME $DOCKER_IMAGE
I am using DbVisualizer to connect to the database and have set the hostname to 'localhost'. I can successfully reach the port, but I still get an error message when I try to connect to the database (possibly a permissions-related error):
An error occurred while establishing the connection:
Long Message:
The connection attempt failed.
Details:
Type: org.postgresql.util.PSQLException
SQL State: 08001
Incidentally, this is the tail end of the output for the PG service instance:
PostgreSQL init process complete; ready for start up.
LOG: could not bind IPv6 socket: Cannot assign requested address
HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
LOG: database system was shut down at 2018-01-30 16:21:59 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
[[Additional Info2]]
Here is the tail end of my Dockerfile:
# modified target locations (checked by login onto Docker container)
# show hba_file;
# show config_file;
#################################################################################
# From here: https://docs.docker.com/engine/examples/postgresql_service/
# Adjust PostgreSQL configuration so that remote connections to the
# database are possible.
RUN echo "host all all 0.0.0.0/0 md5" >> /var/lib/postgresql/data/pg_hba.conf
# And add ``listen_addresses`` to ``/var/lib/postgresql/data/postgresql.conf``
RUN echo "listen_addresses='*'" >> /var/lib/postgresql/data/postgresql.conf
#################################################################################
EXPOSE 5432
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql", "/usr/lib/postgresql/"]
If you are running
$ docker run -i -p 5432:5432 --name $CONTAINER_NAME $DOCKER_IMAGE
then you should be able to connect to localhost:5432 from the host. The easiest way to check whether something is listening on port 5432 is with netcat. On success you should get:
$ nc -zv localhost 5432
Connection to localhost 5432 port [tcp/postgresql] succeeded!
In this case, you should be able to connect using:
>>> psycopg2.connect("dbname='cookiebox' user='postgres' host='localhost' password='nunyabiznes'")
If, on the other hand, you get something like:
$ nc -zv localhost 5432
nc: connect to localhost port 5432 (tcp) failed: Connection refused
then it means that PostgreSQL is not listening, so something is wrong in your Dockerfile, and you'll need to post more details about it to diagnose the problem.
It seems that PostgreSQL couldn't bind a socket to listen for TCP connections for some reason. It still listens on the default UNIX socket inside the container, though, so you can connect to it via docker exec -it $CONTAINER_NAME psql.
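If you prefer to check the same thing from Python rather than psql, here is a sketch (assuming psycopg2 is installed inside the container and the image uses the default socket directory /var/run/postgresql):
import psycopg2

# Passing a directory as "host" makes libpq connect over the UNIX socket in
# that directory instead of TCP; credentials as used elsewhere in the thread.
conn = psycopg2.connect(dbname="cookiebox", user="postgres",
                        password="nunyabiznes", host="/var/run/postgresql")
print("connected:", conn.server_version)
conn.close()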
In the postgresql.conf file, change
listen_addresses = 'localhost' to listen_addresses = '*'
Then try to connect via psql:
psql -h <docker-ip> -U <username> -W
Try to isolate the issue: run a clean postgres instance without any extensions on the default port:
docker run -d --name tst-postgres-01 -p 5432:5432 -e POSTGRES_PASSWORD=mysecretpassword postgres:9.6
and try to connect to it. If you can, then you have to review your Dockerfile accordingly: remove everything from it and then add things back one by one. Otherwise, if it doesn't connect, try running it on another port:
docker run -d --name tst-postgres-01 -p 45432:5432 -e POSTGRES_PASSWORD=mysecretpassword postgres:9.6
and try to connect. If it works this time, then the issue is in the network configuration on your host machine; somehow port 5432 is blocked or in use by another app.
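As a sketch of the connection check itself (assuming the second command above is used, so the test instance is published on host port 45432):
import psycopg2

# Connects to the throwaway postgres:9.6 container started above, mapped to
# host port 45432 with the password from the docker run command.
conn = psycopg2.connect(dbname="postgres", user="postgres",
                        password="mysecretpassword", host="localhost", port=45432)
print("connected:", conn.server_version)
conn.close()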