status = confluence.update_page(
    parent_id=None,
    page_id={con_pageid},
    title={con_title},
    body='Updated Page. You can use <strong>HTML tags</strong>!')
Using this code gives me the following error:
Traceback (most recent call last):
  File "update2.py", line 24, in <module>
    status = confluence.update_page(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/atlassian/confluence.py", line 1513, in update_page
    if not always_update and body is not None and self.is_page_content_is_already_updated(page_id, body, title):
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/atlassian/confluence.py", line 1433, in is_page_content_is_already_updated
    current_title = confluence_content.get("title", None)
AttributeError: 'str' object has no attribute 'get'
Does anyone have an idea how to update a Confluence page using Python? I've tried various solutions, including some posted here, but none of them works for me.
I finally tweaked the code into something that works. I'm providing the whole thing in case anyone needs it later.
import requests
from atlassian import Confluence

# Authenticate with a personal access token (the value below is a placeholder)
confluence = Confluence(
    url='https://confluence.<your domain>.com',
    token='CyberPunk EdgeRunners')

# Sanity check: confirm the page exists and inspect its metadata
status = confluence.get_page_by_title(space='ABC', title='testPage')
print(status)

# Update the page body; parent_id and page_id are the numeric IDs of the
# parent page and the page itself
status = confluence.update_page(
    parent_id=<a number sequence>,
    page_id=<a number sequence>,
    title='testpage',
    body='<h1 id="WindowsSignatureSet2.4.131.3-2-HandoffinstructionstoOperations">A</h1>'
)
print(status)
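As a side note, the traceback above shows the crash happening inside the is_page_content_is_already_updated pre-check, which update_page only runs when always_update is false (see the "if not always_update and ..." line quoted in the traceback). A minimal sketch that sidesteps that check, using the same placeholders as above:

status = confluence.update_page(
    parent_id=<a number sequence>,  # placeholder, as above
    page_id=<a number sequence>,    # placeholder, as above
    title='testpage',
    body='<p>Updated content</p>',
    always_update=True)             # skip the content-comparison pre-check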
Up until this morning, I successfully used snscrape to scrape tweets from Twitter via Python.
The code looks like this:
import snscrape.modules.twitter as sntwitter
query = "from:annewilltalk"
for i, tweet in enumerate(sntwitter.TwitterSearchScraper(query).get_items()):
    # do stuff
But this morning, without any changes, I got the following error:
Traceback (most recent call last):
  File "twitter.py", line 568, in <module>
    Scraper = TwitterScraper()
  File "twitter.py", line 66, in __init__
    self.get_tweets_talkshow(username = "annewilltalk")
  File "twitter.py", line 271, in get_tweets_talkshow
    for i,tweet in enumerate(sntwitter.TwitterSearchScraper(query).get_items()):
  File "/home/pi/.local/lib/python3.8/site-packages/snscrape/modules/twitter.py", line 1455, in get_items
    for obj in self._iter_api_data('https://api.twitter.com/2/search/adaptive.json', _TwitterAPIType.V2, params, paginationParams, cursor = self._cursor):
  File "/home/pi/.local/lib/python3.8/site-packages/snscrape/modules/twitter.py", line 721, in _iter_api_data
    obj = self._get_api_data(endpoint, apiType, reqParams)
  File "/home/pi/.local/lib/python3.8/site-packages/snscrape/modules/twitter.py", line 691, in _get_api_data
    r = self._get(endpoint, params = params, headers = self._apiHeaders, responseOkCallback = self._check_api_response)
  File "/home/pi/.local/lib/python3.8/site-packages/snscrape/base.py", line 221, in _get
    return self._request('GET', *args, **kwargs)
  File "/home/pi/.local/lib/python3.8/site-packages/snscrape/base.py", line 217, in _request
    raise ScraperException(msg)
snscrape.base.ScraperException: 4 requests to https://api.twitter.com/2/search/adaptive.json?include_profile_interstitial_type=1&include_blocking=1&include_blocked_by=1&include_followed_by=1&include_want_retweets=1&include_mute_edge=1&include_can_dm=1&include_can_media_tag=1&include_ext_has_nft_avatar=1&skip_status=1&cards_platform=Web-12&include_cards=1&include_ext_alt_text=true&include_quote_count=true&include_reply_count=1&tweet_mode=extended&include_entities=true&include_user_entities=true&include_ext_media_color=true&include_ext_media_availability=true&include_ext_sensitive_media_warning=true&include_ext_trusted_friends_metadata=true&send_error_codes=true&simple_quoted_tweet=true&q=from%3Aannewilltalk&tweet_search_mode=live&count=20&query_source=spelling_expansion_revert_click&pc=1&spelling_corrections=1&ext=mediaStats%2ChighlightedLabel%2ChasNftAvatar%2CvoiceInfo%2Cenrichments%2CsuperFollowMetadata%2CunmentionInfo failed, giving up.
I found online that the URL must not exceed a length of 500 characters, and the one in the error message is about 800 characters long. Could that be the problem? Why did that change overnight?
How can I fix it?
I have the same problem here. It was working just fine and suddenly stopped; I get the same error.
This is the code I used:
import snscrape.modules.twitter as sntwitter
import pandas as pd
# Creating list to append tweet data to
tweets_list2 = []
# Using TwitterSearchScraper to scrape data and append tweets to list
for i, tweet in enumerate(sntwitter.TwitterSearchScraper('xxxx since:2022-06-01 until:2022-06-30').get_items()):
    if i > 100000:
        break
    tweets_list2.append([tweet.date, tweet.id, tweet.content, tweet.user.username])
# Creating a dataframe from the tweets list above
tweets_df2 = pd.DataFrame(tweets_list2, columns=['Datetime', 'Tweet Id', 'Text', 'Username'])
print(tweets_df2)
tweets_df2.to_csv('xxxxx.csv')
Twitter changed something that broke snscrape's requests several days back. It looks like they reverted to the configuration that worked about 15 minutes ago, though. If you are using the latest snscrape version, everything should be working for you now!
Here is a thread with more info:
https://github.com/JustAnotherArchivist/snscrape/issues/671
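If you are on an older release, upgrading should pick up the fix (assuming you installed snscrape with pip):

pip install --upgrade snscrape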
There was a lot of discussion on the snscrape GitHub, but none of the answers worked for me.
I now have a workaround using the shell client: executing the queries via terminal works for some reason.
So I execute the shell command via subprocess, save the result as JSON, and load that JSON back into Python.
Example shell command: snscrape --jsonl --max-results 10 twitter-search "from:maybritillner" > test_twitter.json
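A minimal sketch of that subprocess round-trip (untested; the username and output file are just the ones from the example command above):

import json
import subprocess

# Run the same shell command as above and write the results to a file
subprocess.run(
    'snscrape --jsonl --max-results 10 twitter-search "from:maybritillner" > test_twitter.json',
    shell=True, check=True)

# --jsonl output is one JSON object per line, so parse the file line by line
tweets = []
with open('test_twitter.json') as f:
    for line in f:
        tweets.append(json.loads(line))
print(len(tweets), 'tweets loaded')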
I'm using notion-py and I'm new to Python. I want to get a page title from one page and post it to another page, but when I try, I'm getting an error.
Traceback (most recent call last):
  File "auto_notion_read.py", line 16, in <module>
    page_read = client.get_block(list_url_read)
  File "/home/lotfi/.local/lib/python3.6/site-packages/notion/client.py", line 169, in get_block
    block = self.get_record_data("block", block_id, force_refresh=force_refresh)
  File "/home/lotfi/.local/lib/python3.6/site-packages/notion/client.py", line 162, in get_record_data
    return self._store.get(table, id, force_refresh=force_refresh)
  File "/home/lotfi/.local/lib/python3.6/site-packages/notion/store.py", line 184, in get
    self.call_load_page_chunk(id)
  File "/home/lotfi/.local/lib/python3.6/site-packages/notion/store.py", line 286, in call_load_page_chunk
    recordmap = self._client.post("loadPageChunk", data).json()["recordMap"]
  File "/home/lotfi/.local/lib/python3.6/site-packages/notion/client.py", line 262, in post
    "message", "There was an error (400) submitting the request."
requests.exceptions.HTTPError: Invalid input.
The code I'm using is:
from notion.client import NotionClient
import time

token_v2 = "my page token"
client = NotionClient(token_v2=token_v2)

list_url_read = 'the URL of the page to read'
page_read = client.get_block(list_url_read)

list_url_post = 'the URL of the page to post to'
page_post = client.get_block(list_url_post)

print(page_read.title)
It isn't recommended to edit the source code of dependencies, as you will most certainly cause a conflict when updating them in the future.
Fix PR 294 has been open since the 6th of March 2021 and has not been merged.
To fix this issue with the currently open PR (pull request) on GitHub, do the following:
pip uninstall notion
Then either:
pip install git+https://github.com/jamalex/notion-py.git#refs/pull/294/merge
OR in your requirements.txt add
git+https://github.com/jamalex/notion-py.git#refs/pull/294/merge
Source for the PR 294 fix
You can find the fix here: https://github.com/jamalex/notion-py/pull/294
In a nutshell, you need to modify two files in the library itself:
store.py
client.py
Find the "limit" value and change it to 100 in both.
I've been trying to get attachment image data from documents in Cloudant.
I can do it successfully once a document is selected (direct extraction via _id, etc.).
But when I try to do it in combination with a "query" operation using a selector, I run into trouble.
Here is my code:
targetName="chibika33"
targetfile="chibitest.png"
#--------------------------------------------------
# get all the documents with the specific nameField
#--------------------------------------------------
myDatabase.create_query_index(fields = ['nameField'])
selector = {'nameField': {'$eq': targetName}}
docs = myDatabase.get_query_result(selector)
#--------------------------------------------------
# get the attachment files to them, save it locally
#--------------------------------------------------
count = 0
for doc in docs:
count=count+1
result_filename="result%03d.png"%(count)
dataContent = doc.get_attachment(targetfile, attachment_type='binary')
dataContentb =base64.b64decode(dataContent)
with open(result_filename,'wb') as output:
output.write(dataContentb)
This causes the following error:
Traceback (most recent call last):
  File "view8.py", line 44, in <module>
    dataContent = doc.get_attachment(targetfile, attachment_type='binary')
AttributeError: 'dict' object has no attribute 'get_attachment'
So far, I've been unable to find any API for converting a dict to a document object in the python-cloudant documentation: http://python-cloudant.readthedocs.io/en/latest/index.html
Any advice would be highly appreciated.
The returned structure from get_query_result(...) isn't an array of documents.
Try:
resp = myDatabase.get_query_result(selector)
for doc in resp['docs']:
    # your code here
See the docs at:
http://python-cloudant.readthedocs.io/en/latest/database.html#cloudant.database.CloudantDatabase.get_query_result
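Building on that, a minimal sketch of the full loop (untested, and assuming python-cloudant's Document class: each row in the query result is a plain dict, so wrap its _id in a Document, which does have get_attachment()):

from cloudant.document import Document

count = 0
for row in resp['docs']:
    count = count + 1
    doc = Document(myDatabase, row['_id'])  # wrap the dict's _id in a Document
    doc.fetch()                             # pull the full document from the server
    # assumption: attachment_type='binary' already returns raw bytes,
    # so no base64 decode is applied here
    dataContent = doc.get_attachment(targetfile, attachment_type='binary')
    with open("result%03d.png" % count, 'wb') as output:
        output.write(dataContent)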
I am trying to submit a form and retrieve some data with dryscrape, but when I execute the program, I get this error:
Traceback (most recent call last):
  File "easyjettest.py", line 22, in <module>
    originairport_field.set(originairport)
AttributeError: 'NoneType' object has no attribute 'set'
I really can't figure out what the problem is. I've read the documentation and searched as much as I could online.
The code is the following:
import dryscrape
import sys

if 'linux' in sys.platform:
    # start xvfb in case no X is running. Make sure xvfb
    # is installed, otherwise this won't work!
    dryscrape.start_xvfb()

originairport = 'Cyprus (Larnaca) LCA'
destinationairport = 'London Gatwick LGW'
odate = '16/08/2016'
adate = '18/08/2016'
adults = '1'

sess = dryscrape.Session(base_url='http://www.easyjet.com/en/')
sess.set_attribute('auto_load_images', False)
sess.visit('/')

originairport_field = sess.at_xpath('.//*[@id="acOriginAirport"]')
originairport_field.set(originairport)
destinationairport_field = sess.at_xpath('.//*[@id="acDestinationAirport"]')
destinationairport_field.set(destinationairport)
odate_field = sess.at_xpath('.//*[@id="oDate"]')
odate_field.set(odate)
rdate_field = session.at_xpath('.//*[@id="rDate"]')
rdate_field.set(rdate)
adults_field = session.at_xpath('.//*[@id="numberOfAdults"]')
adults_field.set(adults)
originairport_field.form().submit()

# extract all links
for link in session.xpath('//a[@href]'):
    print link['href']
Check in which line the error is taking place; probably one of the variables originairport_field, destinationairport_field, odate_field, rdate_field, adults_field is assigned None.
By the way, where does session come from in the lines where you set the values of rdate_field and adults_field? Isn't that sess?
Edit:
From your updated error info, probably sess.at_xpath('.//*[@id="acOriginAirport"]') isn't returning anything.
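A minimal sketch of guarding against that (the XPath is the one from the question; whether the element actually exists depends on how the live page renders):

originairport_field = sess.at_xpath('.//*[@id="acOriginAirport"]')
if originairport_field is None:
    # at_xpath returns None when nothing matches, which is exactly what
    # produces the AttributeError above
    raise RuntimeError('acOriginAirport not found - the form may be loaded '
                       'dynamically or the id may have changed')
originairport_field.set(originairport)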
I have a Python script that takes a generated CSV and uploads it to Google Docs. It can upload it just fine, but I cannot seem to get it to replace the data; it returns an error I cannot find any reference to.
Le Code:
import gdata.auth
import gdata.docs
import gdata.docs.service
import gdata.docs.data
import gdata.docs.client

email = 'admin@domain.com'
CONSUMER_KEY = 'domain.com'
CONSUMER_SECRET = 'blah54545blah'

ms_client = gdata.docs.client.DocsClient('Domain_Doc_Upload')
ms_client.auth_token = gdata.gauth.TwoLeggedOAuthHmacToken(CONSUMER_KEY, CONSUMER_SECRET, email)
url = 'http://docs.google.com/feeds/documents/private/full/sd01blahgarbage'
ms = gdata.data.MediaSource(file_path="C:\\people.csv", content_type='text/csv')
csv_entry2 = ms_client.Update(url, ms)
It returns:
Traceback (most recent call last):
  File "so_test.py", line 19, in <module>
    csv_entry2 = ms_client.Update(ms, url)
  File "build\bdist.win-amd64\egg\gdata\client.py", line 717, in update
AttributeError: 'MediaSource' object has no attribute 'to_string'
I cannot find anything about the 'to_string' attribute, so I am lost on the trace. Any help, much appreciated.
I took a look at the docs, and it looks like the Update method takes (entry, ms), where entry needs to be a gdata.docs.data.DocsEntry object. You should be able to get the DocsEntry object by getting a feed from your client:
feed = client.GetDocList()
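A hedged sketch of that flow (untested; the document title check is hypothetical, and the Update(entry, ms) argument order is the one described above):

feed = ms_client.GetDocList()
for entry in feed.entry:
    if entry.title.text == 'people':  # hypothetical title of the target doc
        csv_entry2 = ms_client.Update(entry, ms)  # entry first, then the MediaSource
        break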