I am unable to load the JSON data and I'm getting the error mentioned below. My code is:
import requests
import json

url = 'https://172.28.1.220//actifio/api/info/lsjobhistory?sessionid=cafc8f31-fb39-4020-8172-e8f0085004fd'
ret = requests.get(url, verify=False)
data = json.load(ret)  # this is the line the traceback points at
print(data)
I get this error:
Traceback (most recent call last):
File "pr.py", line 7, in <module>
data=json.load(ret)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/__init__.py", line 293, in load
return loads(fp.read(),
AttributeError: 'Response' object has no attribute 'read'
You don't actually need to import json; the requests Response object can decode its own JSON body. Try this:
import requests
url = 'https://172.28.1.220//actifio/api/info/lsjobhistory?sessionid=cafc8f31-fb39-4020-8172-e8f0085004fd'
ret = requests.get(url, verify=False)
data = ret.json()
print(data)
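If you'd rather keep the json module, parse the body as a string with json.loads (note the trailing s); json.load is only for file-like objects with a .read() method, which is exactly what the traceback is complaining about:

import requests
import json

ret = requests.get(url, verify=False)
data = json.loads(ret.text)  # parse the response text as a string
# equivalent shortcut: data = ret.json()
print(data)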
Related
First of all, I am getting the error below. When I tried running
pip3 install --upgrade json
in an attempt to resolve it, pip was unable to find any such module.
The segment of code I am working with can be found below the error; some further direction on the code itself would also be appreciated.
Error:
Traceback (most recent call last):
File "Chicago_cp.py", line 18, in <module>
StopWork_data = json.load(BeautifulSoup(StopWork_response.data,'lxml'))
File "/usr/lib/python3.8/json/__init__.py", line 293, in load
return loads(fp.read(),
TypeError: 'NoneType' object is not callable
Script:
#!/usr/bin/python
import json
from bs4 import BeautifulSoup
import urllib3

http = urllib3.PoolManager()

# Define Merge
def Merge(dict1, dict2):
    res = {**dict1, **dict2}
    return res

# Open the URL and the screen name
StopWork__url = "someJsonUrl"
Violation_url = "anotherJsonUrl"

StopWork_response = http.request('GET', StopWork__url)
StopWork_data = json.load(BeautifulSoup(StopWork_response.data, 'lxml'))  # the json.load call from the traceback
Violation_response = http.request('GET', Violation_url)
Violation_data = json.load(BeautifulSoup(Violation_response.data, 'lxml'))

dict3 = Merge(StopWork_data, Violation_data)
print(dict3)
json.load expects a file object, or something else with a read method. The BeautifulSoup object doesn't have a read method. If you ask it for any attribute, it will try to find a child tag with that name, i.e. a <read> tag in this case. When it doesn't find one, it returns None, which causes the error. Here's a demo:
import json
from bs4 import BeautifulSoup
soup = BeautifulSoup("<p>hi</p>", "html5lib")
assert soup.read is None
assert soup.blablabla is None
assert json.loads is not None
json.load(soup)
Output:
Traceback (most recent call last):
File "main.py", line 8, in <module>
json.load(soup)
File "/usr/lib/python3.8/json/__init__.py", line 293, in load
return loads(fp.read(),
TypeError: 'NoneType' object is not callable
If the URL is returning JSON then you don't need BeautifulSoup at all because that's for parsing HTML and XML. Just use json.loads(response.data).
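Applied to the script above, that fix would look something like this (a sketch; it assumes the two placeholder URLs really return JSON bodies):

import json
import urllib3

http = urllib3.PoolManager()
StopWork__url = "someJsonUrl"      # placeholder from the question
Violation_url = "anotherJsonUrl"   # placeholder from the question
StopWork_response = http.request('GET', StopWork__url)
StopWork_data = json.loads(StopWork_response.data)  # parse the raw bytes directly, no BeautifulSoup
Violation_response = http.request('GET', Violation_url)
Violation_data = json.loads(Violation_response.data)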
I tried to download a Tableau workbook view as a PNG image, following the sample from GitHub: https://github.com/tableau/server-client-python/blob/master/samples/download_view_image.py, which uses the tableauserverclient. But I get the error below. I think I may have to import something more; I only imported the tableauserverclient. Does anybody have an idea what I am doing wrong?
Relevant code:
import argparse
import getpass
import logging
import tableauserverclient as TSC
# Step 3: Query the image endpoint and save the image to the specified location
image_req_option = TSC.ImageRequestOptions(imageresolution=TSC.ImageRequestOptions.Resolution.High)
server.views.populate_image(view_item, image_req_option)
with open(args.filepath, "wb") as image_file:
    image_file.write(view_item.image)
print("View image saved to {0}".format(args.filepath))
Full Error Message:
Traceback (most recent call last):
File "pdf2.test.py", line 69, in <module>
main()
File "pdf2.test.py", line 59, in main
image_req_option = TSC.ImageRequestOptions(imageresolution=TSC.ImageRequestOptions.Resolution.High)
AttributeError: 'module' object has no attribute 'ImageRequestOptions'
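One thing worth checking (an assumption on my part, since this thread has no answer here): the installed tableauserverclient may be an older release that predates ImageRequestOptions, while the GitHub samples track the current library. A quick way to see what is actually installed:

from importlib import metadata  # standard library on Python 3.8+
print(metadata.version("tableauserverclient"))  # then compare against the latest release on PyPI

If it is outdated, pip install --upgrade tableauserverclient would be the first thing to try.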
When I tried this, I got an AttributeError: 'Response' object has no attribute 'css'.
I tried with this code:
response.css('h1.ctn-article-title::text').extract()
Can anyone help please?
I'm trying to get the text "Update Primary Care", which is the article title (an h1 with class ctn-article-title whose visible text also includes "CME").
I'm placing my entire console session:
>>> response = requests.get(url, headers=headers)
Traceback (most recent call last):
File "<console>", line 1, in <module>
NameError: name 'requests' is not defined
>>> import requests
>>> response = requests.get(url, headers=headers)
Traceback (most recent call last):
File "<console>", line 1, in <module>
NameError: name 'url' is not defined
>>> url = 'somethingurl'
>>> response = requests.get(url, headers=headers)
>>> response.css('h1.ctn-article-title::text').extract()
Traceback (most recent call last):
File "<console>", line 1, in <module>
AttributeError: 'Response' object has no attribute 'css'
>>> response.css('h1').extract()
Traceback (most recent call last):
File "<console>", line 1, in <module>
AttributeError: 'Response' object has no attribute 'css'
As Tarun pointed out in the comments: You are mixing scrapy and requests code.
If you want to create a scrapy response from requests response you can try:
from scrapy.http import TextResponse
import requests
url = 'http://stackoverflow.com'
resp = requests.get(url)
resp = TextResponse(body=resp.content, url=url)
resp.xpath('//div')
# works!
See the docs for requests.Response and scrapy.http.TextResponse objects.
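With the converted TextResponse in hand, the original selector from the question should work as well (assuming, as above, that the page actually contains that heading):

resp.css('h1.ctn-article-title::text').extract()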
In this case, the line where your error occurs expects a scrapy TextResponse, which provides the .css() method, not a plain requests Response. Build one of those instead of calling .css() on the requests object.
More specifically, use an HtmlResponse, because your response is HTML and not plain text. HtmlResponse is a subclass of TextResponse, so it inherits the missing method.
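A minimal sketch of that suggestion, mirroring the TextResponse example above (the placeholder URL stands in for the real page):

from scrapy.http import HtmlResponse
import requests

raw = requests.get('somethingurl')  # placeholder URL from the question
resp = HtmlResponse(url='somethingurl', body=raw.content)
resp.css('h1.ctn-article-title::text').extract()  # .css() now exists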
Add this line to your code and it will work; also remove any imports of requests from any other package:
from scrapy.http import Request
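For context, if this code runs inside an actual scrapy spider, the response handed to your callback already supports .css(), so no conversion is needed at all. A hypothetical minimal spider for illustration (the spider name is mine, not the asker's):

import scrapy

class TitleSpider(scrapy.Spider):
    name = 'title'
    start_urls = ['somethingurl']  # placeholder URL from the question

    def parse(self, response):
        # scrapy's HtmlResponse provides .css() out of the box
        yield {'title': response.css('h1.ctn-article-title::text').extract_first()}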
I have built a scraper to retrieve concert data from Songkick using their API. However, it takes a lot of time to retrieve all the data for these artists. After scraping for approximately 15 hours, the script was still running but the JSON file wasn't changing anymore. I interrupted the script and checked whether I could access my data with TinyDB. Unfortunately, I get the following error. Does anybody know why this is happening?
Error:
('cannot fetch url', 'http://api.songkick.com/api/3.0/artists/8689004/gigography.json?apikey=###########&min_date=2015-04-25&max_date=2017-03-01')
8961344
Traceback (most recent call last):
File "C:\Users\rmlj\Dropbox\Data\concerts.py", line 42, in <module>
load_events()
File "C:\Users\rmlj\Dropbox\Data\concerts.py", line 27, in load_events
print(artist)
File "C:\Python27\lib\idlelib\PyShell.py", line 1356, in write
return self.shell.write(s, self.tags)
KeyboardInterrupt
>>> mydat = db.all()
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
mydat = db.all()
File "C:\Python27\lib\site-packages\tinydb\database.py", line 304, in all
return list(itervalues(self._read()))
File "C:\Python27\lib\site-packages\tinydb\database.py", line 277, in _read
return self._storage.read()
File "C:\Python27\lib\site-packages\tinydb\database.py", line 31, in read
raw_data = (self._storage.read() or {})[self._table_name]
File "C:\Python27\lib\site-packages\tinydb\storages.py", line 105, in read
return json.load(self._handle)
File "C:\Python27\lib\json\__init__.py", line 287, in load
return loads(fp.read(),
MemoryError
Below you can find my script:
import urllib2
import requests
import json
import csv
import codecs
from tinydb import TinyDB, Query

db = TinyDB('events.json')

def load_events():
    MIN_DATE = "2015-04-25"
    MAX_DATE = "2017-03-01"
    API_KEY = "###############"
    with open('artistid.txt', 'r') as f:
        for a in f:
            artist = a.strip()
            print(artist)
            url_base = 'http://api.songkick.com/api/3.0/artists/{}/gigography.json?apikey={}&min_date={}&max_date={}'
            url = url_base.format(artist, API_KEY, MIN_DATE, MAX_DATE)
            # url = u'http://api.songkick.com/api/3.0/search/artists.json?query='+artist+'&apikey=WBmvXDarTCEfqq7h'
            try:
                r = requests.get(url)
                resp = r.json()
                if(resp['resultsPage']['totalEntries']):
                    results = resp['resultsPage']['results']['event']
                    for x in results:
                        print(x)
                        db.insert(x)
            except:
                print('cannot fetch url', url)

load_events()
db.close()
print("End of script")
MemoryError is a built-in Python exception (https://docs.python.org/3.6/library/exceptions.html#MemoryError), so it looks like the process is running out of memory; this isn't really related to Songkick.
This question probably has the information you need to debug it: How to debug a MemoryError in Python? Tools for tracking memory use?
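One likely mechanism behind the MemoryError (my assumption from the traceback, not something the asker confirmed): TinyDB's default JSON storage re-reads the entire events.json into memory on every access, so after 15 hours of inserts the file may simply be too large to load at once. Appending each event as a line of JSON instead keeps memory flat and lets you stream the data back later; a sketch, not the original storage layer:

import json

def append_event(event, path='events.jsonl'):
    # write: one JSON document per line, appended as events arrive
    with open(path, 'a') as f:
        f.write(json.dumps(event) + '\n')

def read_events(path='events.jsonl'):
    # read: stream line by line instead of loading everything at once
    with open(path) as f:
        for line in f:
            yield json.loads(line)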
I'm trying to read a .plist file on Mac OS X with plistlib.
Sadly, I always get an error when running the script:
Traceback (most recent call last):
File "/Users/johannes/pycharmprojects/adobe-cache-cleaner/test.py", line 6, in <module>
pl = plistlib.load(fp2)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/plistlib.py", line 983, in load
header = fp.read(32)
AttributeError: 'str' object has no attribute 'read'
That's my script:
import plistlib

fp2 = "/Users/Johannes/Pythonproject/test.plist"
pl = plistlib.load(fp2)  # this is the line the traceback points at
print(pl)
It looks like plistlib is expecting a file object, not a path string:
import plistlib

with open("/Users/Johannes/Pythonproject/test.plist", "rb") as file:
    pl = plistlib.load(file)
print(pl)
see https://docs.python.org/3/library/plistlib.html
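If you prefer working with raw bytes, plistlib.loads (note the s) accepts a bytes object instead of an open file, mirroring the json.load/json.loads split from the earlier questions:

import plistlib

with open("/Users/Johannes/Pythonproject/test.plist", "rb") as file:
    pl = plistlib.loads(file.read())  # loads takes bytes, load takes a file object
print(pl)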