Instagram is still blocking instabot - python

I use instabot to upload images to Instagram. The problem, in general, is that Instagram keeps blocking my bot, and I always have to use a code sent to my email to restore my account. I know that Instagram does not like bots; of the three bot libraries I tried, instabot is the only one that works at all, but it still has this issue. I tried to make a bot using Instagram-Private-API instead, but the path to a working official Instagram API is long, and I know from past experience that using the APIs of big companies is a real chore, so I want to keep things simple. Is there a way to prevent these blocks, or a better alternative than the official Instagram APIs?
my code:
from instabot import Bot

def upload_to_instagram(caption):
    # Create a Bot instance and log in
    bot = Bot()
    bot.login(username="nature_for_best_day",
              password="""29$5!5U9Ba%C$#M6""")
    # Upload the image with its caption
    media_id = bot.upload_photo("image_with_quote.jpg", caption)
    print("Photo uploaded")
my error:
File "c:\Users\Lukas\Dokumenty\python_scripts\Instagram Quotes\main.py", line 92, in upload_to_instagram
media_id = bot.upload_photo("image_with_quote.jpg",caption)
File "C:\Users\Lukas\Dokumenty\python_scripts\Instagram Quotes\env\bot\lib\site-packages\instabot\bot\bot.py", line 806, in upload_photo
return upload_photo(self, photo, caption, upload_id, from_video, options)
File "C:\Users\Lukas\Dokumenty\python_scripts\Instagram Quotes\env\bot\lib\site-packages\instabot\bot\bot_photo.py", line 26, in upload_photo
result = self.api.upload_photo(
File "C:\Users\Lukas\Dokumenty\python_scripts\Instagram Quotes\env\bot\lib\site-packages\instabot\api\api.py", line 825, in upload_photo
return upload_photo(
File "C:\Users\Lukas\Dokumenty\python_scripts\Instagram Quotes\env\bot\lib\site-packages\instabot\api\api_photo.py", line 168, in upload_photo
response = self.session.post(
File "C:\Users\Lukas\Dokumenty\python_scripts\Instagram Quotes\env\bot\lib\site-packages\requests\sessions.py", line 635, in post
return self.request("POST", url, data=data, json=json, **kwargs)
File "C:\Users\Lukas\Dokumenty\python_scripts\Instagram Quotes\env\bot\lib\site-packages\requests\sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\Lukas\Dokumenty\python_scripts\Instagram Quotes\env\bot\lib\site-packages\requests\sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "C:\Users\Lukas\Dokumenty\python_scripts\Instagram Quotes\env\bot\lib\site-packages\requests\adapters.py", line 563, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='i.instagram.com', port=443): Max retries exceeded with url: /rupload_igphoto/1675010396498_0_9661884500 (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2396)')))
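The SSLEOFError above is a transport-level drop (the server closing the connection mid-handshake or mid-upload), so one low-effort mitigation is to wrap the upload in a retry with exponential backoff; note this will not cure an account-level block. A minimal sketch, independent of instabot (the commented-out wiring at the end shows how it could hypothetically plug into the code above):

```python
import time

def retry_with_backoff(call, retries=3, base_delay=2.0,
                       exceptions=(Exception,)):
    """Invoke call(), retrying on the given exception types with
    exponentially growing pauses; re-raise after the last attempt."""
    for attempt in range(retries):
        try:
            return call()
        except exceptions:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical wiring into the upload function above:
# media_id = retry_with_backoff(
#     lambda: bot.upload_photo("image_with_quote.jpg", caption),
#     exceptions=(requests.exceptions.SSLError,))
```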

Related

PUT Data on HDFS via HTTP WEB API

I'm trying to implement a PUT request to HDFS via the WebHDFS REST API.
So I looked up the documentation on how to do that: https://hadoop.apache.org/docs/r1.0.4/webhdfs.html#CREATE
First do a PUT without following the redirect to get a 307, grab the new URL, and then PUT the data to that URL.
When I do my first PUT, I do get the 307, but the URL is the same as the first one, so I'm not sure whether I'm already on the "good" datanode or not.
Anyway, I take that URL and try to send the data to it, but I get what I understand to be a connection error: the host is cutting the connection.
class HttpFS:
    def __init__(self, url=settings.HTTPFS_URL):
        self.httpfs = url
        self.auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)

    def put(self, local_file, hdfs_dir):
        url = "{}{}".format(self.httpfs, hdfs_dir)
        params = {"op": "CREATE", "overwrite": True}
        print(url)
        # First PUT: no redirect followed, just to obtain the datanode URL
        r = requests.put(url, auth=self.auth, params=params, stream=True,
                         verify=settings.CA_ROOT_PATH, allow_redirects=False)
        # Second PUT: send the file data to the redirect target
        r = requests.put(r.headers['Location'], auth=self.auth,
                         data=open(local_file, 'rb'), params=params,
                         stream=True, verify=settings.CA_ROOT_PATH)
Here is the error given:
r = requests.put(r.headers['Location'], auth=self.auth, data=open(local_file, 'rb'), params=params, stream=True, verify=settings.CA_ROOT_PATH)
File "/home/cdsw/.local/lib/python3.6/site-packages/requests/api.py", line 134, in put
return request('put', url, data=data, **kwargs)
File "/home/cdsw/.local/lib/python3.6/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/home/cdsw/.local/lib/python3.6/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/home/cdsw/.local/lib/python3.6/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/home/cdsw/.local/lib/python3.6/site-packages/requests/adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', BrokenPipeError(32, 'Broken pipe'))
Edit 1:
I also tried the https://github.com/pywebhdfs/pywebhdfs repo, since it is supposed to do exactly what I'm looking for, but I still get the same Broken Pipe error.
from requests_kerberos import OPTIONAL, HTTPKerberosAuth
from pywebhdfs.webhdfs import PyWebHdfsClient
from utils import settings

auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)
url = f"https://{settings.HDFS_HTTPFS_HOST}:{settings.HDFS_HTTPFS_PORT}/webhdfs/v1/"
hdfs_client = PyWebHdfsClient(base_uri_pattern=url,
                              request_extra_opts={'auth': auth,
                                                  'verify': settings.CA_ROOT_PATH})

with open(data_dir + file_name, 'rb') as file_data:
    hdfs_client.create_file(hdfs_path + file_name, file_data=file_data,
                            overwrite=True)
Same error:
hdfs_client.create_file(hdfs_path + file_name, file_data=file_data, overwrite=True)
File "/home/cdsw/.local/lib/python3.6/site-packages/pywebhdfs/webhdfs.py", line 115, in create_file
**self.request_extra_opts)
File "/home/cdsw/.local/lib/python3.6/site-packages/requests/api.py", line 134, in put
return request('put', url, data=data, **kwargs)
File "/home/cdsw/.local/lib/python3.6/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/home/cdsw/.local/lib/python3.6/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/home/cdsw/.local/lib/python3.6/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/home/cdsw/.local/lib/python3.6/site-packages/requests/adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', BrokenPipeError(32, 'Broken pipe'))
Edit 2:
I found out I was sending too much data at once, so now I create the file on HDFS first and then append chunks of data to it. But it is slow... and I can still get the same error as above at random: the bigger the chunks, the more likely I am to get a connection abort. My files are in the 200 MB range, so it takes ages compared to the Hadoop binary "hdfs dfs -put".

How to upload image files from multiple devices (Raspberry Pis) into one Dropbox account?

I have two camera modules connected (wired and configured) to two Raspberry Pis (one for each). The cameras take pictures and save the images onto the Pis.
I have one Dropbox account, and I have code that uploads the image files (saved in a specified folder) from a Pi to that Dropbox account.
So when I give the command, the two Pis start uploading their image files to my Dropbox account at the same time.
And that is where the problem comes into play: one Pi always succeeds in uploading, and the other one fails.
I need help with this. I am planning to have 20 cameras, so 20 Pis should be able to upload images to one single Dropbox account at the same time.
Thank you for your help. I hope the solution is easy and straightforward.
Code:
import dropbox, sys, os
import os.path
from dropbox.files import WriteMode

dbx = dropbox.Dropbox('my authentication key')
image_dir = "/home/pi/somefolder"

for dir, dirs, files in os.walk(image_dir):
    for file in files:
        file_path = os.path.join(dir, file)
        dest_path = os.path.join('/somefolder', file)
        with open(file_path, 'rb') as f:  # binary mode for image data
            dbx.files_upload(f.read(), dest_path, mute=True,
                             mode=WriteMode('add'))
When I run only one Pi, the upload works perfectly, no problem at all.
When both Pis upload at the same time, one of them always malfunctions.
Here I have attached what I get from the malfunctioning Pi.
Error:
Traceback (most recent call last):
File "scanner.py", line 288, in dest_path, mute=True, mode=WriteMode('add'))
File "/usr/local/lib/python2.7/dist-packages/dropbox/base.py", line 2293, in files_upload
f,
File "/usr/local/lib/python2.7/dist-packages/dropbox/dropbox.py", line 274, in request
timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/dropbox/dropbox.py", line 365, in request_json_string_with_retry
timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/dropbox/dropbox.py", line 449, in request_json_string
timeout=timeout,
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 581, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='content.dropboxapi.com', port=443): Max retries exceeded with url: /2/files/upload (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))
I hope someone can help me out. Thank you very much in advance!
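Uploading from several clients to one Dropbox account is permitted, and the traceback's "[Errno -3] Temporary failure in name resolution" points at a transient DNS failure on the failing Pi rather than an account limit. One pragmatic mitigation is to retry each file a few times and keep going with the rest. A sketch with the actual Dropbox call injected as a callable, so the retry loop stands alone (you would pass something like `lambda local, dest: dbx.files_upload(open(local, 'rb').read(), dest, mute=True, mode=WriteMode('add'))`, using the names from the question's code):

```python
import time

def upload_all(pairs, upload_one, retries=3, delay=2.0):
    """Try each (local_path, dest_path) pair; retry transient
    failures and return the pairs that still failed at the end."""
    failed = []
    for local, dest in pairs:
        for attempt in range(retries):
            try:
                upload_one(local, dest)
                break
            except Exception:
                if attempt == retries - 1:
                    failed.append((local, dest))
                else:
                    time.sleep(delay)
    return failed
```

Returning the failed pairs instead of crashing means one bad DNS lookup no longer aborts the whole `os.walk` sweep; the caller can log or re-queue them.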

Not able to connect to the simple International Space Station API using Python

I executed this code to connect to one of the most common APIs:
import requests
response = requests.get("http://api.open-notify.org/iss-now.json")
print(response.status_code)
But it shows this error:
runfile('C:/Users/sanchit.joshi/use case of unassigned tickets/Api try out.py', wdir='C:/Users/sanchit.joshi/use case of unassigned tickets')
Traceback (most recent call last):
  File "<ipython-input-17-39bcdc5917ae>", line 1, in <module>
    runfile('C:/Users/sanchit.joshi/use case of unassigned tickets/Api try out.py', wdir='C:/Users/sanchit.joshi/use case of unassigned tickets')
  File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 668, in runfile
    execfile(filename, namespace)
  File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "C:/Users/sanchit.joshi/use case of unassigned tickets/Api try out.py", line 8, in <module>
    response = requests.get("http://api.open-notify.org/iss-now.json")
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\api.py", line 72, in get
    return request('get', url, params=params, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 512, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 622, in send
    r = adapter.send(request, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\adapters.py", line 513, in send
    raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='api.open-notify.org', port=80): Max retries exceeded with url: /iss-now.json (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001E8E5BCBE80>: Failed to establish a new connection: [Errno 11002] getaddrinfo failed'))
I tried changing the max retry value, but it's not working. It is all the more frustrating because this is the simplest possible code for connecting to an API. Any help is appreciated.
There is nothing wrong with your code; it should work.
You are having a proxy issue. On Windows you can add the URL to your proxy exceptions: open the settings menu in Internet Explorer, then Internet Options, then Connections, then LAN settings, then Advanced, and add the URL to the exceptions. It is typical in a corporate environment or at school for the admins to put you behind a proxy.
Alternatively you can use this Q/A to set the proxy in your request
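For completeness, requests accepts the proxy directly via its `proxies` argument; a small sketch, where the proxy address is a placeholder you would replace with your corporate proxy's host and port:

```python
def make_proxies(proxy_url):
    """Build the mapping requests expects in its `proxies` argument,
    routing both plain-HTTP and HTTPS traffic through one proxy."""
    return {"http": proxy_url, "https": proxy_url}

# Placeholder address; substitute your real proxy:
# import requests
# response = requests.get("http://api.open-notify.org/iss-now.json",
#                         proxies=make_proxies("http://proxy.example.com:8080"),
#                         timeout=10)
```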
Using the JSON and urllib.request libraries
import json
import urllib.request
file = urllib.request.urlopen("http://api.open-notify.org/iss-now.json")
data = json.loads(file.read())
print(data)
Resulting in
{'message': 'success', 'timestamp': 1541059187, 'iss_position': {'longitude': '13.6813', 'latitude': '47.8641'}}

Can't load website using requests in Python [duplicate]

This question already has an answer here:
Requests failing to connect to a TLS server
(1 answer)
Closed 5 years ago.
I'm using Python and I'm trying to scrape this website:
https://online.ratb.ro/info/browsers.aspx
But I'm getting this error:
Traceback (most recent call last): File
"C:\Users\pinguluk\Desktop\Proiecte GIT\RATB Scraper\test2.py", line
3, in
test = requests.get('https://online.ratb.ro/info/browsers.aspx') File "C:\Python27\lib\site-packages\requests\api.py", line 72,
in get
return request('get', url, params=params, **kwargs) File "C:\Python27\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs) File "C:\Python27\lib\site-packages\requests\sessions.py", line 518,
in request
resp = self.send(prep, **send_kwargs) File "C:\Python27\lib\site-packages\requests\sessions.py", line 639, in
send
r = adapter.send(request, **kwargs) File "C:\Python27\lib\site-packages\requests\adapters.py", line 512, in
send
raise SSLError(e, request=request) requests.exceptions.SSLError: ("bad handshake: SysCallError(-1,
'Unexpected EOF')",)
Installed modules:
['appdirs==1.4.3', 'asn1crypto==0.22.0', 'attrs==16.3.0',
'automat==0.5.0', 'beautifulsoup4==4.5.3', 'cairocffi==0.8.0',
'certifi==2017.4.17', 'cffi==1.10.0', 'colorama==0.3.9',
'constantly==15.1.0', 'cryptography==1.8.1', 'cssselect==1.0.1',
'cycler==0.10.0', 'distributedlock==1.2', 'django-annoying==0.10.3',
'django-oauth-tokens==0.6.3', 'django-taggit==0.22.1',
'django==1.11.1', 'enum34==1.1.6', 'facepy==1.0.8',
'functools32==3.2.3.post2', 'futures==3.1.1', 'gevent==1.2.1',
'greenlet==0.4.12', 'grequests==0.3.0', 'html5lib==0.999999999',
'htmlparser==0.0.2', 'httplib2==0.10.3', 'idna==2.5',
'incremental==16.10.1', 'ipaddress==1.0.18', 'lazyme==0.0.10',
'lxml==3.7.3', 'matplotlib==2.0.2', 'mechanize==0.3.3',
'ndg-httpsclient==0.4.2', 'numpy==1.12.1', 'oauthlib==2.0.2',
'olefile==0.44', 'opencv-python==3.2.0.7', 'packaging==16.8',
'parsel==1.1.0', 'pillow==4.0.0', 'pip==9.0.1', 'py2exe==0.6.9',
'pyandoc==0.0.1', 'pyasn1-modules==0.0.8', 'pyasn1==0.2.3',
'pycairo-gtk==1.10.0', 'pycparser==2.17', 'pygtk==2.22.0',
'pyhook==1.5.1', 'pynput==1.3.2', 'pyopenssl==17.0.0',
'pyparsing==2.2.0', 'pypiwin32==219', 'pyquery==1.2.17',
'python-dateutil==2.6.0', 'python-memcached==1.58', 'pytz==2017.2',
'pywin32==221', 'queuelib==1.4.2', 'requests-futures==0.9.7',
'requests-oauthlib==0.8.0', 'requests-toolbelt==0.8.0',
'requests==2.14.2', 'restclient==0.11.0', 'robobrowser==0.5.3',
'selenium==3.4.1', 'service-identity==16.0.0', 'setuptools==35.0.2',
'simplejson==3.10.0', 'six==1.10.0', 'twitter==1.17.0',
'twitterfollowbot==2.0.2', 'urllib3==1.21.1', 'w3lib==1.17.0',
'webencodings==0.5.1', 'werkzeug==0.12.1', 'wheel==0.29.0',
'zope.interface==4.3.3']
Thanks.
I think you will have a hard time solving this problem, since the server you are trying to "scrape" is awfully configured (ssllabs.com gave it a grade F), and it may be that Requests doesn't support any of its cipher suites because they are all insecure. There might be an option of creating a custom HTTPAdapter, so you could try that out.
You can try using:
requests.get(url, verify=False)
if you don't need to check the authenticity of the SSL certificate.
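If `verify=False` alone is not enough, the custom-HTTPAdapter route mentioned above usually comes down to supplying an SSL context with OpenSSL's security level lowered, so the server's old cipher suites are accepted again. A sketch of building such a context (the `@SECLEVEL=1` syntax needs OpenSSL 1.1+; disabling verification carries the same risk caveat as `verify=False`):

```python
import ssl

def legacy_ssl_context():
    """SSL context that re-enables legacy cipher suites and skips
    certificate checks, for talking to badly configured servers."""
    ctx = ssl.create_default_context()
    ctx.set_ciphers("DEFAULT:@SECLEVEL=1")
    ctx.check_hostname = False   # must be disabled before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

Mounting it on a session is then a matter of subclassing `requests.adapters.HTTPAdapter` and passing `ssl_context=legacy_ssl_context()` to `init_poolmanager`, which urllib3 accepts as a pool keyword.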

Write on HDFS using Python

I am trying to write to HDFS from Python.
Right now I am using https://hdfscli.readthedocs.io/en/latest/quickstart.html
but for large files I get back:
File "/home/edge7/venv-dev/local/lib/python2.7/site-packages/hdfs/client.py", line 400, in write
consumer(data)
File "/home/edge7/venv-dev/local/lib/python2.7/site-packages/hdfs/client.py", line 394, in consumer
auth=False,
File "/home/edge7/venv-dev/local/lib/python2.7/site-packages/hdfs/client.py", line 179, in _request
**kwargs
File "/home/edge7/venv-dev/local/lib/python2.7/site-packages/requests/sessions.py", line 465, in request
resp = self.send(prep, **send_kwargs)
File "/home/edge7/venv-dev/local/lib/python2.7/site-packages/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "/home/edge7/venv-dev/local/lib/python2.7/site-packages/requests/adapters.py", line 415, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', gaierror(-2, 'Name or service not known'))
My code for writing is pretty simple:
client = InsecureClient('http://xxxxxxx.co:50070', user='hdfs')
client.write("/tmp/a", stringToWrite)
Can anyone suggest a decent package for writing to HDFS?
Cheers
From the stack trace, it seems to be security related. Are you sure you need to use the InsecureClient and not the Kerberos one? Also, remember that this library is just a binding for HttpFS, so doing a manual test with Postman or curl would let you debug any issue cluster-side.
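One more diagnostic worth running: `gaierror(-2, 'Name or service not known')` means a hostname failed DNS resolution, and with WebHDFS/HttpFS that is often the datanode hostname embedded in the namenode's redirect, which must be resolvable from the client machine. A quick stdlib check (the hostname in the comment is a placeholder):

```python
import socket

def resolvable(host):
    """Return True if `host` resolves to an address from this machine."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# e.g. resolvable("datanode1.example.internal")  # placeholder hostname
```

If the datanode hostnames only resolve inside the cluster, fixing DNS (or /etc/hosts) on the client is the actual cure, regardless of which Python package you pick.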
