This is the code, but it's giving me an error. Maybe get_entity is not working in this code. I have tried changing the entity type and everything else, but that doesn't work either. I pass the parameters the script requires, but it still does nothing except give me the error "some error in adding".
try:
    user_to_add = client.get_entity(int(user['user_id']))
    print(user_to_add)
    client(InviteToChannelRequest(entity, [user_to_add]))
    usr_id = user['user_id']
    print(f'{attempt}{g} Adding {usr_id}{rs}')
    print(f'{sleep}{g} Sleep 20s{rs}')
    time.sleep(20)
except PeerFloodError:
    #time.sleep()
    os.system(f'del {file}')
    sys.exit(f'\n{error}{r} Aborted. Peer Flood Error{rs}')
except UserPrivacyRestrictedError:
    print(f'{error}{r} User Privacy Restriction{rs}')
    continue
except KeyboardInterrupt:
    print(f'{error}{r} Aborted. Keyboard Interrupt{rs}')
    update_list(users, added_users)
    if not len(users) == 0:
        print(f'{info}{g} Remaining users logged to {file}')
        logger = Relog(users, file)
        logger.start()
except:
    print(f'{error}{r} Some Other error in adding{rs}')
    continue

#os.system(f'del {file}')
input(f'{info}{g}Adding complete...Press enter to exit...')
sys.exit()
It gives me this error:
Traceback (most recent call last):
File "C:\Users\Noni\Desktop\python\telegramscraper-main\addbyid1.py", line 90, in <module>
user_to_add = client.get_entity(int(user['user_id']))
File "C:\Users\Noni\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\telethon\sync.py", line 39, in syncified
return loop.run_until_complete(coro)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\asyncio\base_events.py", line 647, in run_until_complete
return future.result()
File "C:\Users\Noni\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\telethon\client\users.py", line 292, in get_entity
inputs.append(await self.get_input_entity(x))
File "C:\Users\Noni\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\telethon\client\users.py", line 466, in get_input_entity
raise ValueError(
ValueError: Could not find the input entity for PeerUser(user_id=807250194) (PeerUser). Please read https://docs.telethon.dev/en/latest/concepts/entities.html to find out more details.
Can you provide the error?
I think you know that you can't add more than 200 subscribers to a channel manually. Please check the current number of subscribers; if it is more than 200, your code will not work.
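The ValueError in the traceback usually means the Telethon session has never "seen" user 807250194, so a bare numeric ID cannot be resolved. A common workaround is to fall back to resolving by username when the ID fails. This is only a sketch: `resolve_user` is a hypothetical helper, and it assumes your scraped user dicts also carry a `'username'` key, which may not match your data.

```python
def resolve_user(client, user):
    """Try the numeric ID first; if the session cache has never seen
    this user, get_entity raises ValueError, so fall back to the
    username (which Telegram can resolve server-side)."""
    try:
        return client.get_entity(int(user['user_id']))
    except ValueError:
        username = user.get('username')  # assumed key in the scraped data
        if not username:
            raise  # nothing else to resolve by; keep the original error
        return client.get_entity(username)
```

If your data has no usernames, the session has to learn the IDs first, e.g. by iterating the source group's participants with the same client before adding.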
Related
I am getting the below error when running the Python code:
sftp.put(local_path, remote_path, callback=track_progress, confirm=True)
But if I set confirm=False, this error does not occur.
Definition of track_progress is as follows:
def track_progress(bytes_transferred, bytes_total):
    total_percent = 100
    transferred_percent = (bytes_transferred * total_percent) / bytes_total
    result_str = f"Filename: {file}, File Size={str(bytes_total)}b |--> " \
                 f"Transfer Details ={str(transferred_percent)}% " \
                 f"({str(bytes_transferred)}b) Transferred"
    #self.logger.info(result_str)
    print(result_str)
Can anyone please help me understand the issue here?
Traceback (most recent call last):
File "D:/Users/prpandey/PycharmProjects/PMPPractise/Transport.py", line 59, in <module>
sftp.put(local_path, remote_path, callback=track_progress, confirm=True)
File "D:\Users\prpandey\PycharmProjects\PMPPractise\venv\lib\site-packages\paramiko\sftp_client.py", line 759, in put
return self.putfo(fl, remotepath, file_size, callback, confirm)
File "D:\Users\prpandey\PycharmProjects\PMPPractise\venv\lib\site-packages\paramiko\sftp_client.py", line 720, in putfo
s = self.stat(remotepath)
File "D:\Users\prpandey\PycharmProjects\PMPPractise\venv\lib\site-packages\paramiko\sftp_client.py", line 495, in stat
raise SFTPError("Expected attributes")
paramiko.sftp.SFTPError: Expected attributes
Paramiko log file:
As suggested, I have tried:
sftp.put(local_path, remote_path, callback=track_progress, confirm=False)
t, msg = sftp._request(CMD_STAT, remote_path)
The t is 101.
When you set confirm=True, SFTPClient.put asks the server for the size of the just-uploaded file. It does that to verify that the remote file size matches the size of the local source file. See also Paramiko put method throws "[Errno 2] File not found" if SFTP server has trigger to automatically move file upon upload.
The size request uses the SFTP "STAT" message, to which the server should return either an "ATTRS" message (file attributes) or a "STATUS" message (101) with an error. Your server seems to return a "STATUS" message with an "OK" status (my guess, based on your data and the Paramiko source code). "OK" is an invalid response to a "STAT" request. Paramiko does not expect such a nonsensical response, so it reports a somewhat unclear error. But ultimately it's a bug in the server. All you can do is disable the verification by setting confirm=False.
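If you still want an upload sanity check with confirm=False, you can verify against the progress callback instead of the server's broken STAT reply. A sketch only: `verified_put` is a hypothetical helper; the only real API it relies on is paramiko's `put(localpath, remotepath, callback=None, confirm=True)` signature.

```python
import os

def verified_put(sftp, local_path, remote_path):
    """Upload with confirm=False (no post-upload STAT), then check that
    the callback saw as many bytes transferred as the local file holds."""
    seen = {'bytes': 0}

    def on_progress(bytes_transferred, bytes_total):
        seen['bytes'] = bytes_transferred  # paramiko calls this as data moves

    sftp.put(local_path, remote_path, callback=on_progress, confirm=False)
    local_size = os.path.getsize(local_path)
    if seen['bytes'] != local_size:
        raise IOError(f"upload incomplete: {seen['bytes']}/{local_size} bytes")
    return local_size
```

This trusts the client-side byte count rather than the server, which is the best you can do when the server's STAT replies are unreliable.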
I have a function that is started in several threads. I tried to print my own error message, but no matter what I do, the traceback is still printed. My function:
def getSuggestengineResultForThree(suggestengine, seed, dynamoDBLocation):
    results[seed][suggestengine] = getsuggestsforsearchengine(seed, suggestengine)
    for keyword_result in results[seed][suggestengine]:
        o = 0
        while True:
            try:
                allKeywords.put_item(
                    Item={
                        'keyword': keyword_result
                    }
                )
                break
            except ProvisionedThroughputExceededException as pe:
                if o > 9:
                    addtoerrortable(keyword_result)
                    print('ProvisionedThroughputExceededException 10 times in getSuggestengineResultForThree for allKeywords')
                    break
                sleep(1)
                o = o + 1
                print("ProvisionedThroughputExceededException in getSugestengineResult")
But for every thread I get output like this:
Exception in thread Thread-562:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/Users/iTom/ownCloud/Documents/Workspace/PyCharm/Keywords/TesterWithDB.py", line 362, in getSuggestengineResultForThree
'keyword': keyword_result
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/boto3/resources/factory.py", line 518, in do_action
response = action(self, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/boto3/resources/action.py", line 83, in __call__
response = getattr(parent.meta.client, operation_name)(**params)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/botocore/client.py", line 252, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/botocore/client.py", line 542, in _make_api_call
raise ClientError(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ProvisionedThroughputExceededException) when calling the PutItem operation: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API
Can someone help me get my own print instead of the traceback? :)
This answer is a bit late, but here it is in case anyone is searching for it.
The exception that boto3 throws is a botocore.exceptions.ClientError, as Neil answered. However, you should also check the response's error code for 'ProvisionedThroughputExceededException', because the ClientError could be for another issue:
from botocore.exceptions import ClientError

try:
    allKeywords.put_item(Item={'keyword': keyword_result})
except ClientError as e:
    if e.response['Error']['Code'] != 'ProvisionedThroughputExceededException':
        raise
    # do something else with 'e'
I am using Python 2.7 which may or may not make a difference. The exception that I receive suggests that boto3 is already doing retries (up to 9 times) which is different from your exception:
An error occurred (ProvisionedThroughputExceededException) when calling the PutItem operation (reached max retries: 9): The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.
It's possible that ProvisionedThroughputExceededException is not actually the error. Try:
except botocore.exceptions.ClientError as pe:
instead.
If that doesn't work, figure out what line the error is occurring on and put the except statement there.
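Putting both answers together, the retry loop can catch ClientError and inspect the error code. The sketch below stands in for botocore's ClientError with a minimal stub of the same shape, so it runs without boto3 installed; in real code you would import it from botocore.exceptions instead, and `put_with_retry` is a hypothetical helper name.

```python
import time

class ClientError(Exception):
    """Stub with the same .response shape as botocore.exceptions.ClientError."""
    def __init__(self, code):
        super().__init__(code)
        self.response = {'Error': {'Code': code}}

def put_with_retry(put_item, item, retries=10, delay=1.0):
    """Retry only on throttling; re-raise any other ClientError at once."""
    for attempt in range(retries):
        try:
            put_item(Item=item)
            return True
        except ClientError as e:
            if e.response['Error']['Code'] != 'ProvisionedThroughputExceededException':
                raise
            time.sleep(delay)
    return False  # throttled on every attempt
```

This keeps the question's "give up after 10 tries" behavior while letting unexpected errors (validation failures, permissions) surface immediately instead of being swallowed.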
In the t.cursor() method, an exception from the Twython library is thrown for a small percentage of ids. However, whenever the exception occurs, the actual line in my code where it's thrown is the for-loop that comes after the try/except block, which prevents the continue from being called.
How is it that an exception can be thrown in the try block, not be caught by the except block, and then show up later during some (mostly) unrelated code?
And yes, it's a 401 error, but that's just the Twitter API returning the wrong code. In reality I'm correctly authenticating. I also know I can just move the except block to after the for-loop, but I just want to know how this can happen at all.
from twython import Twython

t = Twython(...)
# ...
for id in ids:
    try:
        # exception is truly caused by the following line
        followers = t.cursor(t.get_followers_ids, id=id)
    except:
        # this block is never run
        print("Exception with user " + str(id))
        continue

    # this line actually throws the exception, inexplicably
    for follower_id in followers:
        values.append((follower_id, id, scrape_datetime))
# ...
The traceback:
Traceback (most recent call last):
File "/root/twitter/nightly.py", line 5, in <module>
t.get_followers(t.current_tweeters)
File "/root/twitter/tweets.py", line 81, in get_followers
for follower_id in followers:
File "/usr/local/lib/python3.3/dist-packages/twython-3.0.0-py3.3.egg/twython/api.py", line 398, in cursor
content = function(**params)
File "/usr/local/lib/python3.3/dist-packages/twython-3.0.0-py3.3.egg/twython/endpoints.py", line 212, in get_followers_ids
return self.get('followers/ids', params=params)
File "/usr/local/lib/python3.3/dist-packages/twython-3.0.0-py3.3.egg/twython/api.py", line 231, in get
return self.request(endpoint, params=params, version=version)
File "/usr/local/lib/python3.3/dist-packages/twython-3.0.0-py3.3.egg/twython/api.py", line 225, in request
content = self._request(url, method=method, params=params, api_call=url)
File "/usr/local/lib/python3.3/dist-packages/twython-3.0.0-py3.3.egg/twython/api.py", line 195, in _request
retry_after=response.headers.get('retry-after'))
twython.exceptions.TwythonAuthError: Twitter API returned a 401 (Unauthorized), An error occurred processing your request.
Looks like t.cursor(...) returns a generator that doesn't actually execute until you iterate through it. While it might appear that the connection happens at:
followers = t.cursor(t.get_followers_ids, id=id)
It doesn't until you iterate through the generator with your for loop. Sort of mentioned here
If you have a need to defer processing until later but still want to catch the exception, turn the generator into a list. This will exhaust the generator and save the data for later.
followers = t.cursor(t.get_followers_ids, id=id)
followers = list(followers)
What you probably got with
followers = t.cursor(t.get_followers_ids, id=id)
is a cursor to a piece of code that will fetch your list. Because this cursor is lazy, no Twython code has executed yet. The actual fetching code only runs when the cursor is first used, at the line that throws your exception.
So wrap that line in the exception handling as well.
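You can reproduce this lazy behavior without Twitter at all: a generator function's body does not run until the first iteration, so the exception surfaces wherever the generator is first consumed. A minimal stand-alone demonstration (the `cursor` function here is a stand-in, not Twython code):

```python
def cursor():
    """Stands in for t.cursor(): the body doesn't run until iterated."""
    raise RuntimeError("simulated 401")
    yield  # unreachable, but makes this a generator function

followers = cursor()       # no exception here: the body hasn't run yet
print("cursor() returned without raising")

try:
    for _ in followers:    # first next() executes the body and raises
        pass
except RuntimeError:
    print("exception surfaced at the for loop")

# To catch it at the call site instead, force evaluation immediately:
try:
    followers = list(cursor())
except RuntimeError:
    print("list() forces the fetch, so the except block now runs")
```

This is exactly the `followers = list(followers)` fix from the accepted answer: exhausting the generator inside the try block moves the failure to where your except clause can see it.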
Quite often GAE is not able to upload the file and I am getting the following error:
ApplicationError: 2
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 636, in __call__
handler.post(*groups)
File "/base/data/home/apps/picasa2vkontakte/1.348093606241250361/picasa2vkontakte.py", line 109, in post
headers=headers
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 260, in fetch
return rpc.get_result()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 592, in get_result
return self.__get_result_hook(self)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 355, in _get_fetch_result
raise DownloadError(str(err))
DownloadError: ApplicationError: 2
How should I perform retries in case of such an error?
try:
    result = urlfetch.fetch(url=self.request.get('upload_url'),
                            payload=''.join(data),
                            method=urlfetch.POST,
                            headers=headers)
except DownloadError:
    # how to retry 2 more times?
    # and how to verify result here?
    pass
If you can, move this work into the task queue. When tasks fail, they retry automatically. If they continue to fail, the system gradually backs off retry frequency to as slow as once-per hour. This is an easy way to handle API requests to rate-limited services without implementing one-off retry logic.
If you really need to handle requests synchronously, something like this should work:
for i in range(3):
    try:
        result = urlfetch.fetch(...)
        # run success conditions here
        break
    except DownloadError:
        #logging.debug("urlfetch failed!")
        pass
You can also pass deadline=10 to urlfetch.fetch to double the default timeout deadline.
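The synchronous pattern above generalizes to a small helper. This is only a sketch: `fetch_with_retries` is a hypothetical name, and the exception type is parameterized so it runs outside App Engine (in the real code you would pass DownloadError).

```python
import time

def fetch_with_retries(fetch, attempts=3, delay=0.0, retry_on=Exception):
    """Call fetch() up to `attempts` times, sleeping with simple
    exponential backoff between tries; re-raise the last error if
    every attempt fails."""
    last_err = None
    for i in range(attempts):
        try:
            return fetch()
        except retry_on as e:
            last_err = e
            time.sleep(delay * (2 ** i))
    raise last_err
```

Usage in the question's context would be something like `result = fetch_with_retries(lambda: urlfetch.fetch(url, deadline=10), retry_on=DownloadError)`; because the helper returns the first successful result, the "verify result" step happens naturally after the call.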
I've written my first Python application with the App Engine APIs; it is intended to monitor a list of servers and notify me when one of them goes down, by sending a message to my iPhone using Prowl, sending me an email, or both.
The problem is, a few times a week it notifies me that a server is down even when it clearly isn't. I've tested it with servers I know should be up virtually all the time, like google.com or amazon.com, but I get notifications for them too.
I've got a copy of the code running at http://aeservmon.appspot.com; you can see that google.com was added Jan 3rd but is only listed as being up for 6 days.
Below is the relevant section of the code from checkservers.py that does the checking with urlfetch. I assumed that the DownloadError exception would only be raised when the server couldn't be contacted, but perhaps I'm wrong.
What am I missing?
Full source on github under mrsteveman1/aeservmon (i can only post one link as a new user, sorry!)
def testserver(self, server):
    if server.ssl:
        prefix = "https://"
    else:
        prefix = "http://"
    try:
        url = prefix + "%s" % server.serverdomain
        result = urlfetch.fetch(url, headers={'Cache-Control': 'max-age=30'})
    except DownloadError:
        logging.info('%s could not be reached' % server.serverdomain)
        self.serverisdown(server, 000)
        return
    if result.status_code == 500:
        logging.info('%s returned 500' % server.serverdomain)
        self.serverisdown(server, result.status_code)
    else:
        logging.info('%s is up, status code %s' % (server.serverdomain, result.status_code))
        self.serverisup(server, result.status_code)
UPDATE Jan 21:
Today I found one of the exceptions in the logs:
ApplicationError: 5
Traceback (most recent call last):
File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 507, in __call__
handler.get(*groups)
File "/base/data/home/apps/aeservmon/1.339312180538855414/checkservers.py", line 149, in get
self.testserver(server)
File "/base/data/home/apps/aeservmon/1.339312180538855414/checkservers.py", line 106, in testserver
result = urlfetch.fetch(url, headers = {'Cache-Control' : 'max-age=30'} )
File "/base/python_lib/versions/1/google/appengine/api/urlfetch.py", line 241, in fetch
return rpc.get_result()
File "/base/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 501, in get_result
return self.__get_result_hook(self)
File "/base/python_lib/versions/1/google/appengine/api/urlfetch.py", line 331, in _get_fetch_result
raise DownloadError(str(err))
DownloadError: ApplicationError: 5
Other folks have been reporting issues with the fetch service (e.g. http://code.google.com/p/googleappengine/issues/detail?id=1902&q=urlfetch&colspec=ID%20Type%20Status%20Priority%20Stars%20Owner%20Summary%20Log%20Component).
Can you print the exception? It may have more detail, e.g.:
"DownloadError: ApplicationError: 2 something bad"