ScraperException: 4 requests to https://api.twitter.com/2/search/adaptive.json?include_profile_interstitial_type=1&include_blocking=1&include_blocked_by=1&include_followed_by=1&include_want_retweets=1&include_mute_edge=1&include_can_dm=1&include_can_media_tag=1&skip_status=1&cards_platform=Web-12&include_cards=1&include_ext_alt_text=true&include_quote_count=true&include_reply_count=1&tweet_mode=extended&include_entities=true&include_user_entities=true&include_ext_media_color=true&include_ext_media_availability=true&send_error_codes=true&simple_quoted_tweets=true&q=%28from%3AZeeNewsEnglish%29+until%3A2023-01-12+since%3A2023-01-08+-filter%3Areplies&count=100&query_source=spelling_expansion_revert_click&pc=1&spelling_corrections=1&ext=mediaStats%2ChighlightedLabel failed, giving up.
I tried the following code:
import snscrape.modules.twitter as sntwitter
import time

query5 = "(from:BBC) until:2023-01-12 since:2023-01-08 -filter:replies"
news = [query5]
tweets = []

for news_data in news:
    limit = 500
    for tweet in sntwitter.TwitterSearchScraper(news_data).get_items():
        # print(vars(tweet))
        # break
        if len(tweets) == limit:
            break
        else:
            tweets.append([tweet.date, tweet.username, tweet.content])
        time.sleep(2)
You have to install the latest version of snscrape, 0.5.0.20230113.
Faced the same issue. I guess snscrape made use of the Twitter API (elevated access) in the backend. Twitter shut down all the bots that were fetching data through the API; Twitter essentially wants you to make authenticated use of its data. I suggest signing up for a Twitter developer account and requesting the elevated access level. Notice that the first line of the error makes a call to the Twitter API.
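For reference, here is a minimal sketch of running a similar date-bounded search through the official v2 API with Tweepy once you have developer credentials. The bearer token is a placeholder, and note that search_recent_tweets only covers the past seven days, so the fixed date window from the question is illustrative only:

import tweepy

# Placeholder credential; use the bearer token from your own developer App.
BEARER_TOKEN = "YOUR_BEARER_TOKEN"

client = tweepy.Client(bearer_token=BEARER_TOKEN, wait_on_rate_limit=True)

# Same intent as the snscrape query: tweets from @BBC in a date window, excluding replies.
# Recent search only reaches back seven days, so these timestamps must fall in that window.
response = client.search_recent_tweets(
    query="from:BBC -is:reply",
    start_time="2023-01-08T00:00:00Z",
    end_time="2023-01-12T00:00:00Z",
    tweet_fields=["created_at", "author_id"],
    max_results=100,
)

tweets = [[t.created_at, t.author_id, t.text] for t in (response.data or [])]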
Related
I am trying to pull tweets using the Twitter Developer API v2. My access level is Elevated. This is what my code looks like:
import tweepy
consumerKey = "*********mc7K"
consumerSecret = "*******BZNB"
accessToken = "*****-****9eeT"
accessTokenSecret = "****sWAa"
bearerToken="****VRqn"
client = tweepy.Client(
    wait_on_rate_limit=True,
    bearer_token=bearerToken,
    consumer_key=consumerKey,
    consumer_secret=consumerSecret,
    access_token=accessToken,
    access_token_secret=accessTokenSecret,
)

tweets = client.search_all_tweets(
    query=search_query,
    tweet_fields=['context_annotations', 'created_at'],
    max_results=10,
)
This throws me an error saying:
When authenticating requests to the Twitter API v2 endpoints, you must
use keys and tokens from a Twitter developer App that is attached to a
Project. You can create a project via the developer portal.
But if I use tweets = client.search_recent_tweets(query=search_query, tweet_fields=['context_annotations', 'created_at']), it works fine.
What am I missing here?
From the Tweepy Twitter API v2 reference for Client.search_all_tweets:
This endpoint is only available to those users who have been approved
for the Academic Research product track.
The full-archive search endpoint returns the complete history of
public Tweets matching a search query; since the first Tweet was
created March 26, 2006.
The Tweets returned by this endpoint count towards the Project-level Tweet cap.
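For completeness, a sketch of what works on Elevated access without the Academic Research track: stay within the seven-day window of search_recent_tweets and use tweepy.Paginator to collect more than the 10-per-request default. The query string and limit are placeholders, not taken from the question:

import tweepy

# Placeholder bearer token from an App that is attached to a Project.
client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN", wait_on_rate_limit=True)

# Paginator pages through search_recent_tweets; flatten(limit=...) caps the total tweets collected.
tweets = [
    tweet
    for tweet in tweepy.Paginator(
        client.search_recent_tweets,
        query="your search query",
        tweet_fields=["context_annotations", "created_at"],
        max_results=100,
    ).flatten(limit=500)
]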
I am trying to use this code to delete 550 tweets from my account, as I want to use it as my personal account again without the tweets from my bots:
import tweepy
import json
auth = tweepy.OAuthHandler("consumer_key", "consumer_secret")
auth.set_access_token("access_token", "access_token_secret")
api = tweepy.API(auth)
screen_name = "YOUR_TWITTER_HANDLE"
timelines = api.user_timeline(screen_name=screen_name, count=550, include_rts=False)

for status in timelines:
    tweetid = status.id
    api.destroy_status(tweetid)
But every time I run it, I get this error:
tweepy.error.TweepError: [{'message': 'Application cannot perform write actions. Contact Twitter Platform Operations through https://support.twitter.com/forms/platform', 'code': 261}]
When I print tweetid, it successfully grabs the IDs of all my tweets. I can also tweet normally using api.update_status, so why is this not working? What am I doing wrong?
It seems like you have made too many requests to the Twitter API. From my brief research, error code 261 means the token was suspended. Check these links:
Why is the twitter API throwing this error?
https://blog.cotten.io/common-twitter-error-codes-6b324396042e
It sounds like your API key has been restricted (this can happen if you try to perform too many automated actions, see the automation rules https://help.twitter.com/en/rules-and-policies/twitter-automation). You will need to have the app restored via https://help.twitter.com/forms/automation
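Once the app's write access is restored, here is a rough sketch of one way to delete the tweets: page past user_timeline's 200-tweets-per-request cap with tweepy.Cursor and pass the numeric status ID to destroy_status. The credential strings and handle are placeholders from the question; the Cursor usage is my own suggestion, not part of the original answer:

import tweepy

auth = tweepy.OAuthHandler("consumer_key", "consumer_secret")
auth.set_access_token("access_token", "access_token_secret")
api = tweepy.API(auth, wait_on_rate_limit=True)

# user_timeline returns at most 200 tweets per request, so Cursor pages until 550 have been seen.
for status in tweepy.Cursor(api.user_timeline,
                            screen_name="YOUR_TWITTER_HANDLE",
                            count=200,
                            include_rts=False).items(550):
    # destroy_status expects the numeric tweet ID.
    api.destroy_status(status.id)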
I am developing a tool to fetch all my pull requests and their dates in my organisation's repositories. How am I supposed to fetch the pull requests raised from my account?
I have used PyGithub, logging in to my GitHub account with an access token, but I could not fetch my pull requests.
import os
from github import Github
my_token = "da0ab89dc50d9b2354e8f9c76*****74e0111"
ct_gh = Github(base_url="http://github.****.de/api/v3",login_or_token=my_token)
print("Connection to the server is done")
user_id = ct_gh.get_user()
print(user_id.name)
I am able to fetch my username and details, but I could not fetch the numbers of my pull requests.
Found a very straightforward way, but it runs very slowly.
from github import Github

# url = your own GitHub Enterprise API base URL
# token = your own access token
my_pr = []

gh = Github(base_url=url, login_or_token=token)
user = gh.get_user()
repos = user.get_repos()

for repo in repos:
    for pr in repo.get_pulls():
        if pr.user.name == user.name:
            my_pr.append(pr)
After this brute-force loop, all PRs created by your account are stored in my_pr, a list of PullRequest objects.
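If the per-repository loop is too slow, one alternative worth trying is GitHub's issue search, which can filter by author in a single query. A rough sketch, assuming the same url and token placeholders as above; note that search returns Issue objects rather than PullRequest objects:

from github import Github

# url and token are the same placeholders as in the loop above.
gh = Github(base_url=url, login_or_token=token)
me = gh.get_user().login

# One search query instead of iterating every repository; each hit is an Issue representing a PR.
for issue in gh.search_issues(f"is:pr author:{me}"):
    print(issue.repository.full_name, issue.number, issue.created_at)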
I am on the hunt to list all Contacts (including Other Contacts) for a Gmail/G Suite user. The current People API does not support this functionality, as noted in the following threads:
I found this thread confirming the change in the API: Google Contacts API vs People API
The Google team notes it here: https://groups.google.com/forum/#!topic/google-contacts-api/iLsrN23xF6g
The ticket requesting prioritization is here: https://issuetracker.google.com/issues/36757468
Digging deeper, it seems the Contacts API is still functioning and can be used via gdata: https://developers.google.com/contacts/v3/
However, based on the repo at https://github.com/google/gdata-python-client, there is limited documentation on implementation using OAuth2 (userID, token, refreshToken), which is the current stumbling block to getting the list of Other Contacts.
Any help would be greatly appreciated, thanks!
I found this posting from about 7 years ago: https://gist.github.com/jorilallo/3686737. Below is the sample code, which I had to modify a bit to get it working:
import gdata
import gdata.gauth
import gdata.contacts.client
import json
import requests

GOOGLE_CLIENT_ID = 'GOOGLE_CLIENT_ID'          # Provided in the APIs console
GOOGLE_CLIENT_SECRET = 'GOOGLE_CLIENT_SECRET'  # Provided in the APIs console
ACCESS_TOKEN = 'ACCESS_TOKEN'  # From a prior OAuth2 workflow, along with the user ID and refresh token
REFRESH_TOKEN = 'REFRESH_TOKEN'

# GData with access token
token = gdata.gauth.OAuth2Token(
    client_id=GOOGLE_CLIENT_ID,
    client_secret=GOOGLE_CLIENT_SECRET,
    scope='https://www.google.com/m8/feeds',
    user_agent='app.testing',
    access_token=ACCESS_TOKEN,
    refresh_token=REFRESH_TOKEN)

contact_client = gdata.contacts.client.ContactsClient()
token.authorize(contact_client)

feed = contact_client.GetContacts()
for entry in feed.entry:
    print(entry.title.text)
    for e in entry.email:
        print(e.address)

# JSON with access token
r = requests.get('https://www.google.com/m8/feeds/contacts/default/full'
                 '?access_token=%s&alt=json&max-results=50&start-index=0' % ACCESS_TOKEN)
data = json.loads(r.text)
print(data)
I am downloading data from quandl.com with Python, and I have reached my limit of 50 downloads for today. Users with an account can exceed this limit, and I already have an account set up. The error message says I need to include my API key with the request, but as far as I can tell it does not say how.
This is the error message:
quandl.errors.quandl_error.LimitExceededError: (Status 429) (Quandl Error QELx01) You have exceeded the anonymous user limit of 50 calls per day. To make more calls today, please register for a free Quandl account and then include your API key with your requests.
This is the code I am using; it works except for the request limit:
import quandl
import pandas as pd
from datetime import datetime
import pandas.io.data as web

symbols = ['BOE/XUDLTWD', 'BOE/XUDLCDS', 'tvix']
pnls = {}

for i in symbols:
    a = '/' in i
    if a == True:
        data = quandl.get(i)
        t = i.split('/')
        df1 = pnls
        df1[str(t)] = data
    print(a)
This is how to use your API key properly:
data = Quandl.get('what', authtoken='your_api_key')
To get an API key, you first have to log in to Quandl; after logging in, you will get your API key.
Then use data = Quandl.get('what', authtoken='your_api_key') as in the answer above.
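With the lowercase quandl package imported in the question, another way I know of is to set the key once on ApiConfig so every call picks it up; the key string below is a placeholder:

import quandl

# Placeholder; paste the API key from your Quandl account here.
quandl.ApiConfig.api_key = "your_api_key"

symbols = ['BOE/XUDLTWD', 'BOE/XUDLCDS']
pnls = {}

# Every request now carries the key, so the 50-call anonymous limit no longer applies.
for symbol in symbols:
    pnls[symbol] = quandl.get(symbol)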