Why is the smbprotocol connection timing out - Python

I'm trying to transfer some files from Ubuntu to Windows (an AWS EC2 instance), but I'm getting the following error:
ValueError: Failed to connect to '35.154.105.236': timed out
For reference:
import smbclient
import sys
# Optional - register the credentials with a server
print(smbclient.register_session("35.154.105.236", username="Administrator", password="XXXXXXXXXXX"))
May I know what is missing and why it is timing out?
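A "timed out" from register_session usually means the server never answered on TCP port 445, which on an EC2 Windows instance most often points to the security group or the Windows firewall not allowing inbound SMB. As a first check, here is a minimal sketch using only the standard library (nothing specific to smbprotocol) to verify that port 445 is reachable at all before registering the session:

import socket

import smbclient

HOST = "35.154.105.236"

# Quick reachability test: if this times out, the EC2 security group or the
# Windows firewall is most likely blocking inbound TCP 445 (SMB).
try:
    with socket.create_connection((HOST, 445), timeout=10):
        print("Port 445 is reachable")
except OSError as err:
    print("Cannot reach port 445:", err)

# Only worth trying once the port is reachable.
smbclient.register_session(HOST, username="Administrator", password="XXXXXXXXXXX")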

Related

DB2 connection issue in python after change and update of SSL

I am trying to establish a connection to DB2, which was working fine, but after the change and update of the SSL certificate it gives this error: com.ibm.db2.jcc.am.DisconnectNonTransientException: [jcc][t4][2034][11148][4.27.25] Execution failed due to a distribution protocol error that caused deallocation of the conversation. A DRDA Data Stream Syntax Error was detected. Reason: 0x3. ERRORCODE=-4499, SQLSTATE=-58009. Although I updated the cert, set up the connection settings, and the connection test was OK, the code still gives this error. Can anyone advise where I went wrong or what is wrong with my script, please?
import os
from subprocess import Popen, PIPE, run
import jaydebeapi
from project_lib import Project

project = Project.access()
abc_credentials = project.get.connection(name="abc")

abc_connection = jaydebeapi.connect(
    'com.ibm.db2.jcc.DB2Driver',
    '{}://{}:{}/{}:user={};password={};'.format('jdbc:db2',
                                                abc_credentials['host'],
                                                abc_credentials['port'],
                                                abc_credentials['database'],
                                                '/project_data/data_asset/ssl_cert.crt',
                                                abc_credentials['username'],
                                                abc_credentials['password']))
curs = abc_connection.cursor()
Any help is appreciated. Thanks.
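One thing that stands out: the JDBC URL above never enables SSL, and it has only six {} placeholders for seven arguments, so the certificate path ends up being substituted as the user value. A plain (non-SSL) connection against an SSL-only DB2 port is a common cause of ERRORCODE=-4499 DRDA errors. Below is a minimal sketch of an SSL-enabled URL, assuming your JCC driver version supports the sslConnection and sslCertLocation properties; the host, port, and credential values are hypothetical placeholders:

import jaydebeapi

# Hypothetical values - replace with the ones from your connection credentials.
host = "db2.example.com"
port = 50001                       # the SSL port, not the plain-text one
database = "BLUDB"
username = "db2user"
password = "secret"
ssl_cert = "/project_data/data_asset/ssl_cert.crt"

# sslConnection=true switches the driver to TLS; sslCertLocation points it at
# the server certificate (only needed if the cert is not in a trusted keystore).
url = ("jdbc:db2://{}:{}/{}:"
       "user={};password={};"
       "sslConnection=true;"
       "sslCertLocation={};").format(host, port, database,
                                     username, password, ssl_cert)

# Add jars=['/path/to/db2jcc4.jar'] if the driver is not already on the classpath.
conn = jaydebeapi.connect('com.ibm.db2.jcc.DB2Driver', url)
curs = conn.cursor()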

Rasa core agent.handle_channel

I am trying to do a Slack integration for my bot. This is my Python script that will run the bot on Slack:
from rasa_core.channels import HttpInputChannel
from rasa_core.agent import Agent
from rasa_core.interpreter import RasaNLUInterpreter
from rasa_slack_connector import SlackInput

nlu_interpreter = RasaNLUInterpreter('./model/nlu/default/weathernlu')
agent = Agent.load('./model/dialogue', interpreter=nlu_interpreter)

input_channel = SlackInput('*******',   # app verification token
                           '*******',   # bot verification token
                           '********',  # slack verification token
                           True)

agent.handle_channel(HttpInputChannel(5006, '/', input_channel))
My problem is that every time I close the app and try to run it again, I can't use the same port. I started with 5000, and you can see I have reached 5006 because I had to change it every time. If I try to run it using the same port, I get this error:
OSError: [WinError 10048] Only one usage of each socket address
(protocol/network address/port) is normally permitted
Can anyone explain what's going on?
You should check which ports are bound using the netstat command, and also check which processes are still running on your machine.
Closing your app might not kill the process, so a previous instance of your app may still be using the port.
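To make that check concrete, here is a small sketch (independent of Rasa) that tests whether the port is still held before starting the bot; the port number is just the one from the question:

import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on the given port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        return sock.connect_ex((host, port)) == 0

if port_in_use(5006):
    # On Windows: find the owning PID with   netstat -ano | findstr :5006
    # and terminate it with                  taskkill /PID <pid> /F
    print("Port 5006 is still in use by a previous instance.")
else:
    print("Port 5006 is free.")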

Can't log in with Paramiko SSH client

local-host ---> Aterm server (security server) ---> target-machine
I am trying to write code in Python using Paramiko to first SSH from the local host to the target machine. From the target machine, I want to capture some output and store it locally, either as a variable or as a file (I haven't got to that point yet). I found an example on Stack Overflow about using nested SSH with Paramiko, and I followed it, but I get stuck here:
For now I just need to reach the target machine.
My code:
import paramiko
import sys
import subprocess
hostname = '10.10.10.1'
port = 22
username = 'mohamed.hosseny'
password ='Pass#1'
client = paramiko.Transport((hostname, port))
client.connect(username=username, password=password)
client.close()
but I get the error message below:
Traceback (most recent call last):
File "C:/Users/mohamed.hosseny/Desktop/Paramiko.py", line 13, in <module>
client = paramiko.Transport((hostname, port))
File "C:\Python27\lib\site-packages\paramiko\transport.py", line 332, in
__init__
'Unable to connect to {}: {}'.format(hostname, reason))
SSHException: Unable to connect to 10.10.10.1: [Errno 10060] A connection
attempt failed because the connected party did not properly respond after
a period of time, or established connection failed because connected host
has failed to respond
paramiko.Transport is a lower-level API. Don't use it unless you have a good reason. Instead, you can use paramiko.SSHClient.
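For illustration, here is a minimal SSHClient version of the same connection. Note that the [Errno 10060] timeout itself means the host never answered on port 22, so reachability (VPN, firewall, the Aterm jump host) has to be sorted out first; the host and credentials below are the ones from the question, and the test command is hypothetical:

import paramiko

client = paramiko.SSHClient()
# Accept the server's host key automatically (fine for testing; load known
# hosts in production).
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

client.connect('10.10.10.1', port=22,
               username='mohamed.hosseny', password='Pass#1',
               timeout=10)

# Run a command on the target machine and capture its output locally.
stdin, stdout, stderr = client.exec_command('hostname')
print(stdout.read().decode())

client.close()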

Python/P2P - Unable to connect to rendezvous server

I am trying to create a P2P node using python (pyp2p) but I am getting this error:
Eamons-MacBook-Pro:blockchain eamonwhite$ python3 serveralice.py
HTTP Error 404: Not Found
HTTP Error 404: Not Found
HTTP Error 404: Not Found
HTTP Error 404: Not Found
Traceback (most recent call last):
File "/Users/eamonwhite/.pyenv/versions/3.6.3/lib/python3.6/site-packages/pyp2p/net.py", line 732, in start
rendezvous_con = self.rendezvous.server_connect()
File "/Users/eamonwhite/.pyenv/versions/3.6.3/lib/python3.6/site-packages/pyp2p/rendezvous_client.py", line 92, in server_connect
con.connect(server["addr"], server["port"])
File "/Users/eamonwhite/.pyenv/versions/3.6.3/lib/python3.6/site-packages/pyp2p/sock.py", line 189, in connect
self.s.bind((src_ip, 0))
TypeError: str, bytes or bytearray expected, not NoneType
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "serveralice.py", line 10, in <module>
alice.start()
File "/Users/eamonwhite/.pyenv/versions/3.6.3/lib/python3.6/site-packages/pyp2p/net.py", line 735, in start
raise Exception("Unable to connect to rendezvous server.")
Exception: Unable to connect to rendezvous server.
My relevant code looks like this:
from uuid import uuid4
from blockchain import Blockchain
from flask import Flask, jsonify, request
from pyp2p.net import *
import time
#Setup Alice's p2p node.
alice = Net(passive_bind="192.168.1.131", passive_port=44444, interface="en0", node_type="passive", debug=1)
alice.start()
alice.bootstrap()
alice.advertise()
while 1:
    for con in alice:
        for reply in con:
            print(reply)
    time.sleep(1)
...
It is getting stuck on the Net() call right at the beginning - something to do with the rendezvous package. The IP is my IP on my network, and I port-forwarded 44444, although I'm not sure whether I need to do that or not. Thanks.
I am new to this; apparently, the way the server code is configured, it needs a rendezvous server to work (a node that coordinates all the other nodes). It is defined in net.py of the pyp2p package:
# Bootstrapping + TCP hole punching server.
rendezvous_servers = [
    {
        "addr": "162.243.213.95",
        "port": 8000
    }
]
The address was the problem; it is obviously just a placeholder IP. So I realized I needed my own rendezvous server, and I used this code: https://raw.githubusercontent.com/StorjOld/pyp2p/master/pyp2p/rendezvous_server.py.
However, I had to debug this file a little; it ended up needing import sys, import time, and import re statements at the top before it would work. Now I am going to host it on my Raspberry Pi so that it is always up to handle nodes :)
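For anyone following along, one way to point a node at your own server, rather than editing net.py inside site-packages, is to override the module-level list before constructing the node. This is an untested sketch: it assumes pyp2p reads pyp2p.net.rendezvous_servers at connect time, and the address below is a hypothetical placeholder for wherever you host rendezvous_server.py:

import pyp2p.net
from pyp2p.net import Net

# Replace the placeholder rendezvous server with your own
# (e.g. the Raspberry Pi running rendezvous_server.py on port 8000).
pyp2p.net.rendezvous_servers = [
    {"addr": "192.168.1.50", "port": 8000}   # hypothetical address
]

alice = Net(passive_bind="192.168.1.131", passive_port=44444,
            interface="en0", node_type="passive", debug=1)
alice.start()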

Uploading to S3 from Docker Container

I'm trying to get a handle on Docker. I've got a very basic container setup that runs a simple Python script to:
Query a database
Write a CSV file of the query results
Upload the CSV to S3 (using the tinys3 package).
When I run the script from my host, everything works as intended: the query fires, the CSV is created, and the upload succeeds. But when I run it from within my Docker container, tinys3 fails with the following error:
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='my-s3-bucket', port=443): Max retries exceeded with url: /bucket.s3.amazonaws.com/test.csv (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f4f17cf7790>: Failed to establish a new connection: [Errno -2] Name or service not known',))
Everything prior to that works (the query and CSV creation). This answer suggests that there's an incorrect endpoint, but that doesn't seem right, since running the script from my host does not result in an error.
So my question is: am I missing something obvious? Is this an issue with the tinys3 module? Do I need to set something up in my container to allow it to "call out"? Or is there a better way to do this?
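As a side note, [Errno -2] Name or service not known is a DNS resolution failure, so it may be worth confirming from inside the container that the S3 hostname resolves at all. A minimal standard-library sketch, not specific to tinys3 (the bucket hostname is just the one from the error above):

import socket

# Run this inside the container; if resolution fails here, the problem is the
# container's DNS setup (e.g. Docker's --dns option), not tinys3 itself.
for host in ("s3.amazonaws.com", "my-s3-bucket.s3.amazonaws.com"):
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror as err:
        print(host, "failed to resolve:", err)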
Alternatively, you can also use the minio-py client library for the same purpose.
Please find example code for fput_object.py below:
from minio import Minio
from minio.error import ResponseError

client = Minio('s3.amazonaws.com',
               access_key='YOUR-ACCESSKEYID',
               secret_key='YOUR-SECRETACCESSKEY')

# Put an object 'my-objectname-csv' with contents from
# 'my-filepath.csv' as 'application/csv'.
try:
    client.fput_object('my-bucketname', 'my-objectname-csv',
                       'my-filepath.csv', content_type='application/csv')
except ResponseError as err:
    print(err)
Hope it helps.
Disclaimer: I work with Minio
