I want to scrape a hidden phone number from a website using BeautifulSoup.
https://haraj.com.sa/1194697687
As you can see at this link, the phone number is hidden; it is only shown when you click the "التواصل" (Contact) button.
Here is my code
import requests
from bs4 import BeautifulSoup

url = "https://haraj.com.sa/1199808969"
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36'}

r = requests.get(url, headers=headers)
soup = BeautifulSoup(r.content, features='lxml')
post = soup.find('span', {'class': 'contact'})
print(post)
and here is the output I got
<span class="contact"><button class="sc-bdvvaa AGAbw" type="button"><img src="https://v8-cdn.haraj.com.sa/logos/contact_logo.svg" style="margin-left:5px;filter:brightness(0) invert(1)"/>التواصل</button></span>
BeautifulSoup won't be enough for what you're trying to do - it's just an HTML parser. And Selenium is overkill. The page you're trying to scrape from uses JavaScript to dynamically and asynchronously populate the DOM with content when you press the button. If you make a request to that page in Python, and try to parse the HTML, you're only looking at the barebones template, which would normally get populated later on by the browser. The data for the modal comes from a fetch/XHR HTTP POST request to a GraphQL API, the response of which is JSON. If you use your browser's developer tools to log your network traffic when you press the button, you can see the HTTP request URL, query-string parameters, POST payload, request headers, etc. You just need to mimic that request in Python - fortunately this API seems to be pretty lenient, so you won't have to provide all the same parameters that the browser provides:
def main():

    import requests

    url = "https://graphql.haraj.com.sa"

    params = {
        "queryName": "postContact",
        "token": "",
        "clientId": "",
        "version": ""
    }

    headers = {
        "user-agent": "Mozilla/5.0"
    }

    payload = {
        "query": "query postContact($postId: Int!) {postContact(postId: $postId){contactText}}",
        "variables": {
            "postId": 94697687
        }
    }

    response = requests.post(url, params=params, headers=headers, json=payload)
    response.raise_for_status()

    print(response.json())

    return 0


if __name__ == "__main__":
    import sys
    sys.exit(main())
Output:
{'data': {'postContact': {'contactText': '0562038953'}}}
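If you only need the number itself, you can index into the parsed JSON based on the output shape above:

contact = response.json()["data"]["postContact"]["contactText"]
print(contact)  # 0562038953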
I was able to do this using Selenium and ChromeDriver (https://chromedriver.chromium.org/downloads). Just be sure to change the path to wherever you extract chromedriver, and install Selenium via pip:
pip install selenium
main.py
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
import time

url = "https://haraj.com.sa/1199808969"

def main():
    print(get_value())

def get_value():
    chrome_options = Options()
    chrome_options.add_argument("--headless")
    driver = webdriver.Chrome(r"C:\Developement\chromedriver.exe", chrome_options=chrome_options)
    driver.get(url)
    # click the "التواصل" (Contact) button so the number is rendered in the modal
    driver.find_element(By.CLASS_NAME, "AGAbw").click()
    time.sleep(5)
    val = driver.find_element(By.XPATH, '//*[@id="modal"]/div/div/a[2]/div[2]').text
    driver.quit()
    return val

main()
Output:
[0829/155029.109:INFO:CONSOLE(1)] "HBM Loaded", source: https://v8-cdn.haraj.com.sa/main.bf98551ba68f8bd6bee4.js (1)
[0829/155030.571:INFO:CONSOLE(1)] "[object Object]", source: https://v8-cdn.haraj.com.sa/main.bf98551ba68f8bd6bee4.js (1)
[0829/155030.604:INFO:CONSOLE(1)] "[object Object]", source: https://v8-cdn.haraj.com.sa/main.bf98551ba68f8bd6bee4.js (1)
[0829/155031.143:INFO:CONSOLE(16)] "Yay! SW loaded 🎉", source: https://haraj.com.sa/sw.js (16)
0559559838
I am using Selenium and the Chrome web driver to log into my account on a website, but after the login, I want to use other libraries (such as requests) to interact with the website.
I am using Selenium to attempt to get past reCAPTCHA v3, but I want to use the requests and BeautifulSoup libraries to scrape data from the URL that comes after the login page (the URL that the login page redirects to after logging in).
Here is the code I've written for logging in, and a little snippet at the bottom which I plan to use for scraping the website post-login.
import requests
import os
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.action_chains import ActionChains
chrome_options = Options()
chrome_options.add_argument("--headless")
driver = webdriver.Chrome("chromedriver", options=chrome_options)
action = ActionChains(driver)
url_1 = "https://ais.usvisa-info.com/en-am/niv/users/sign_in"
url_2 = "https://ais.usvisa-info.com/en-am/niv/account/settings/update_email"
email = "email"
password = 'password'
Headers = {
    "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36"
}

def login():
    driver.get(url_1)
    driver.find_element_by_id("user_email").send_keys(email)
    driver.find_element_by_id("user_password").send_keys(password)
    driver.find_elements_by_class_name("icheckbox")[0].click()
    driver.find_elements_by_name("commit")[0].click()
    time.sleep(1)
    print(driver.current_url)

login()

test = requests.get(url_2, headers=Headers)
What logging in actually does is modify your cookies to add a key that verifies you are logged in. We can take that cookie data from the webdriver and reuse it with the Python requests module. Start by extracting the cookies from the webdriver like so:
driver_cookies = driver.get_cookies()
get_cookies() returns a list of cookie dictionaries, while requests expects a simple name-to-value mapping, so convert it before injecting it into future requests through the cookies parameter, like so:
cookies = {c['name']: c['value'] for c in driver_cookies}
test = requests.get(url, headers=Headers, cookies=cookies)
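For a slightly fuller picture, here is a minimal sketch of how this could slot into the question's code, assuming url_2 is the post-login page you want to scrape (names taken from the question); a requests.Session keeps the cookies for every later request:

import requests
from bs4 import BeautifulSoup

session = requests.Session()
session.headers.update(Headers)

# copy every cookie the logged-in webdriver holds into the requests session
for c in driver.get_cookies():
    session.cookies.set(c['name'], c['value'])

driver.quit()

# the session now sends the login cookies automatically
resp = session.get(url_2)
soup = BeautifulSoup(resp.content, 'lxml')
print(soup.title)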
Background
Considering this url:
base_url = "https://www.olx.bg/ad/sobstvenik-tristaen-kamenitsa-1-CID368-ID81i3H.html"
I want to make the ajax call for the telephone number:
ajax_url = "https://www.olx.bg/ajax/misc/contact/phone/7XarI/?pt=e3375d9a134f05bbef9e4ad4f2f6d2f3ad704a55f7955c8e3193a1acde6ca02197caf76ffb56977ce61976790a940332147d11808f5f8d9271015c318a9ae729"
Wanted results
If I press the button on the site in my Chrome browser, in the console I get the wanted result:
{"value":"088 *****"}
Debugging
If I open a new tab and paste the ajax_url, I always get empty values:
{"value":"000 000 000"}
If I try something like:
Bash:
wget $ajax_url
Python:
import requests
json_response= requests.get(ajax_url)
I just receive the HTML of the site's error page.
Ideas
When I open the request in the browser, I have something more. What more do I have? Maybe a cookie?
How do I get the wanted result with Bash/Python?
Edit
The status code of the HTML response is 200.
I have tried with curl and I get the same HTML problem.
Kind of a fix.
I have noticed that if I copy the cookie from the browser and make a request with all the headers, INCLUDING the cookie from the browser, I get the correct result:
# I think the most important header is the cookie
headers = DICT_WITH_HEADERS_FROM_BROWSER
json_response = requests.get(next_url,
                             headers=headers,
                             )
Final question
The only question left is: how can I generate such a cookie through a Python script?
First you should create a requests Session to store cookies.
Then send an HTTP GET request to the page that actually makes the ajax call. Any cookie the website creates is sent in the GET response, and your session stores it.
Then you can easily use the session to call the ajax API.
Important Note 1:
The ajax URL the original website calls is an HTTP POST request! You should not send a GET request to that URL.
Important Note 2:
You also must extract phoneToken from the website's JS code, where it is stored in a variable like var phoneToken = 'here is the pt';
Sample code:
import re
import requests
my_session = requests.Session()
# call html website
base_url = "https://www.olx.bg/ad/sobstvenik-tristaen-kamenitsa-1-CID368-ID81i3H.html"
base_response = my_session.get(url=base_url)
assert base_response.status_code == 200
# extract phone token from base url response
phone_token = re.findall(r'phoneToken\s=\s\'(.+)\';', base_response.text)[0]
# call ajax api
ajax_path = "/ajax/misc/contact/phone/81i3H/?pt=" + phone_token
ajax_url = "https://www.olx.bg" + ajax_path
ajax_headers = {
    'accept': '*/*',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9,fa;q=0.8',
    'sec-fetch-mode': 'cors',
    'sec-fetch-site': 'same-origin',
    'Referer': 'https://www.olx.bg/ad/sobstvenik-tristaen-kamenitsa-1-CID368-ID81i3H.html',
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36'
}
ajax_response = my_session.post(url=ajax_url, headers=ajax_headers)
print(ajax_response.text)
When you run the code above, the result below is displayed:
{"value":"088 558 9937"}
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from bs4 import BeautifulSoup
import time

options = Options()
options.add_argument('--headless')
driver = webdriver.Firefox(options=options)
driver.get(
    'https://www.olx.bg/ad/sobstvenik-tristaen-kamenitsa-1-CID368-ID81i3H.html')

# click the masked phone number so the full value is rendered
driver.find_element_by_xpath(
    "/html/body/div[3]/section/div[3]/div/div[1]/div[2]/div/ul[1]/li[2]/div/strong").click()
time.sleep(2)

source = driver.page_source
soup = BeautifulSoup(source, 'html.parser')
phone = soup.find("strong", {'class': 'xx-large'}).text
print(phone)
Output:
088 558 9937
I am trying to load cookies into my requests session in Python from Selenium-exported cookies; however, when I do, it returns the following error:
"'list' object has no attribute 'extract_cookies'"
def load_cookies(filename):
    with open(filename, 'rb') as f:
        return pickle.load(f)

initial_state = requests.Session()
initial_state.cookies = load_cookies(time_cookie_file)
search_requests = initial_state.get(search_url)
Everywhere I look, this should work; my cookies are a list of dictionaries, which is what I understand all cookies are, and why I assume this works with Selenium. For some reason it does not work with requests. Any help in this regard would be great, as it feels like I am missing something obvious!
Cookies have been dumped from Selenium using:
with open("Filepath.pkl", 'wb') as f:
pickle.dump(driver.get_cookies(), f)
An example of the cookies would be (slightly obfuscated):
[{'domain': '.website.com',
  'expiry': 1640787949,
  'httpOnly': False,
  'name': '_ga',
  'path': '/',
  'secure': False,
  'value': 'GA1.2.1111111111.1111111111'},
 {'domain': 'website.com',
  'expiry': 1585488346,
  'httpOnly': False,
  'name': '__pnahc',
  'path': '/',
  'secure': False,
  'value': '0'}]
I have now managed to load in the cookies as per the answer below; however, it does not seem like the cookies are loaded properly, as they do not remember anything. If I load the cookies in when browsing through Selenium, they work fine.
Cookie
The Cookie HTTP request header contains stored HTTP cookies previously sent by the server with the Set-Cookie header. An HTTP cookie is a small piece of data that a server sends to the user's web browser. The browser may store the cookie and send it back with the next request to the same server. Typically, cookies are used to tell whether two requests came from the same browser, keeping the user logged in.
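As a small illustration of that round trip, here is a sketch using httpbin.org, which simply echoes back the cookies it receives:

import requests

s = requests.Session()
# the server answers with a Set-Cookie header; the session stores it
s.get('https://httpbin.org/cookies/set?sessionid=abc123')
# the next request to the same host sends the stored cookie back automatically
print(s.get('https://httpbin.org/cookies').json())
# {'cookies': {'sessionid': 'abc123'}}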
Demonstration using Selenium
To demonstrate the usage of cookies using Selenium, we stored the cookies using pickle once the user had logged into the website http://demo.guru99.com/test/cookie/selenium_aut.php. In the next step, we opened the same website, added the cookies, and were able to land as a logged-in user.
Code Block to store the cookies:
from selenium import webdriver
import pickle
options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
driver = webdriver.Chrome(options=options, executable_path=r'C:\Utility\BrowserDrivers\chromedriver.exe')
driver.get('http://demo.guru99.com/test/cookie/selenium_aut.php')
driver.find_element_by_name("username").send_keys("abc123")
driver.find_element_by_name("password").send_keys("123xyz")
driver.find_element_by_name("submit").click()
pickle.dump( driver.get_cookies() , open("cookies.pkl","wb"))
Code Block to use the stored cookies for automatic authentication:
from selenium import webdriver
import pickle
options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
driver = webdriver.Chrome(options=options, executable_path=r'C:\Utility\BrowserDrivers\chromedriver.exe')
driver.get('http://demo.guru99.com/test/cookie/selenium_aut.php')
cookies = pickle.load(open("cookies.pkl", "rb"))
for cookie in cookies:
    driver.add_cookie(cookie)
driver.get('http://demo.guru99.com/test/cookie/selenium_cookie.php')
Demonstration using Requests
To demonstrate usage of cookies using session and requests we have accessed the site https://www.google.com, added a new dictionary of cookies:
{'name':'my_own_cookie','value': 'debanjan' ,'domain':'.stackoverflow.com'}
Next, we have used the same requests session to send another request which was successful as follows:
Code Block:
import requests
s1 = requests.session()
s1.get('https://www.google.com')
print("Original Cookies")
print(s1.cookies)
print("==========")
cookie = {'name':'my_own_cookie','value': 'debanjan' ,'domain':'.stackoverflow.com'}
s1.cookies.update(cookie)
print("After new Cookie added")
print(s1.cookies)
Console Output:
Original Cookies
<RequestsCookieJar[<Cookie 1P_JAR=2020-01-21-14 for .google.com/>, <Cookie NID=196=NvZMMRzKeV6VI1xEqjgbzJ4r_3WCeWWjitKhllxwXUwQcXZHIMRNz_BPo6ujQduYCJMOJgChTQmXSs6yKX7lxcfusbrBMVBN_qLxLIEah5iSBlkdBxotbwfaFHMd-z5E540x02-YZtCm-rAIx-MRCJeFGK2E_EKdZaxTw-StRYg for .google.com/>]>
==========
After new Cookie added
<RequestsCookieJar[<Cookie domain=.stackoverflow.com for />, <Cookie name=my_own_cookie for />, <Cookie value=debanjan for />, <Cookie 1P_JAR=2020-01-21-14 for .google.com/>, <Cookie NID=196=NvZMMRzKeV6VI1xEqjgbzJ4r_3WCeWWjitKhllxwXUwQcXZHIMRNz_BPo6ujQduYCJMOJgChTQmXSs6yKX7lxcfusbrBMVBN_qLxLIEah5iSBlkdBxotbwfaFHMd-z5E540x02-YZtCm-rAIx-MRCJeFGK2E_EKdZaxTw-StRYg for .google.com/>]>
Conclusion
Clearly, the newly added dictionary of cookies {'name':'my_own_cookie','value': 'debanjan' ,'domain':'.stackoverflow.com'} is pretty much in use within the second request.
Passing Selenium Cookies to Python Requests
Now, if your use case is to pass Selenium cookies to Python requests, you can use the following solution:
from selenium import webdriver
import pickle
import requests
options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
driver = webdriver.Chrome(options=options, executable_path=r'C:\Utility\BrowserDrivers\chromedriver.exe')
driver.get('http://demo.guru99.com/test/cookie/selenium_aut.php')
driver.find_element_by_name("username").send_keys("abc123")
driver.find_element_by_name("password").send_keys("123xyz")
driver.find_element_by_name("submit").click()
# Storing cookies through Selenium
pickle.dump( driver.get_cookies() , open("cookies.pkl","wb"))
driver.quit()
# Passing cookies to Session
session = requests.session()  # or an existing session
with open('cookies.pkl', 'rb') as f:
    # Selenium returns a list of cookie dicts, so set each one by name/value
    for cookie in pickle.load(f):
        session.cookies.set(cookie['name'], cookie['value'])

search_requests = session.get('https://www.google.com/')
print(session.cookies)
Since you are replacing session.cookies (a RequestsCookieJar) with a list, which doesn't have those attributes, it won't work.
You can import those cookies one by one by using:
for c in your_cookies_list:
    initial_state.cookies.set(name=c['name'], value=c['value'])
I've tried loading the whole cookie, but it seems like requests doesn't recognize them and returns:
TypeError: create_cookie() got unexpected keyword arguments: ['expiry', 'httpOnly']
requests accepts expires instead, and HttpOnly comes nested within rest.
Update:
We can also change the dict keys for expiry and httpOnly so that requests loads them correctly instead of throwing an exception. dict.pop() deletes an item from the dict by key and returns the deleted value, so we add a new key with that value and then unpack and pass the dict as kwargs:
for c in your_cookies_list:
    c['expires'] = c.pop('expiry')
    c['rest'] = {'HttpOnly': c.pop('httpOnly')}
    initial_state.cookies.set(**c)
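Putting the pieces together, a minimal sketch assuming the cookies.pkl file produced by the pickle.dump call in the question (newer Selenium versions may add extra keys such as sameSite, which requests does not accept and which are dropped here):

import pickle
import requests

def load_selenium_cookies(filename, session):
    """Load Selenium-exported cookies from a pickle file into a requests Session."""
    with open(filename, 'rb') as f:
        cookie_list = pickle.load(f)
    for c in cookie_list:
        # rename/nest the keys that requests does not accept directly
        if 'expiry' in c:
            c['expires'] = c.pop('expiry')
        c['rest'] = {'HttpOnly': c.pop('httpOnly', False)}
        c.pop('sameSite', None)  # present in newer Selenium versions, not accepted by requests
        session.cookies.set(**c)
    return session

initial_state = load_selenium_cookies("Filepath.pkl", requests.Session())
search_requests = initial_state.get(search_url)  # search_url as defined in the question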
You can get the cookies and use only their name/value. You'll also need headers. You can get them from dev tools or by using a proxy.
Basic example:
driver.get('https://website.com/')
# ... login or do anything

cookies = {}
for cookie in driver.get_cookies():
    cookies[cookie['name']] = cookie['value']

# Write to a file if needed, or do something with it
# import json
# with open("cookies.txt", 'w') as f:
#     f.write(json.dumps(cookies))
And usage:
# Read cookies from file as Dict
# with open('cookies.txt') as reader:
#     cookies = json.loads(reader.read())

# use cookies
response = requests.get('https://website.com/', headers=headers, cookies=cookies)
Stack Overflow headers example; some headers may be required, some not. You can get the request headers using the dev tools Network tab:
headers = {
    'authority': 'stackoverflow.com',
    'pragma': 'no-cache',
    'cache-control': 'no-cache',
    'dnt': '1',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36',
    'sec-fetch-user': '?1',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'sec-fetch-site': 'same-origin',
    'sec-fetch-mode': 'navigate',
    'referer': 'https://stackoverflow.com/questions/tagged?sort=Newest&tagMode=Watched&uqlId=8338',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'ru,en-US;q=0.9,en;q=0.8,tr;q=0.7',
}
You can create a session. The session class handles cookies between requests.
s = requests.Session()
login_resp = s.post('https://example.com/login', login_data)

# the session keeps the login cookies; they are also available on the response
cookies = login_resp.cookies
cookiedictreceived = requests.utils.dict_from_cookiejar(login_resp.cookies)
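A small usage sketch under the same assumptions (example.com and the account page are placeholders): because the Session keeps the cookie jar, later requests go out as the logged-in user, and the extracted dict can also be passed to a plain requests call:

# the same session sends the stored login cookies automatically
profile_resp = s.get('https://example.com/account')  # placeholder post-login page
print(profile_resp.status_code)

# or reuse the extracted dict explicitly, outside the session
other_resp = requests.get('https://example.com/account', cookies=cookiedictreceived)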
So requests wants all "values" in your cookie to be strings, and possibly the "keys" as well. The cookies parameter also does not want a list, which is what your load_cookies function returns. A cookie jar can be created with requests.utils.cookiejar_from_dict(...).
Let's say I go to "https://stackoverflow.com/" with Selenium and save the cookies as you have done.
from selenium import webdriver
import pickle
import requests

# Go to the website
driver = webdriver.Chrome(executable_path=r'C:\Path\\To\\Your\\chromedriver.exe')
driver.get('https://stackoverflow.com/')

# Save the cookies in a file
with open("C:\Path\To\Your\Filepath.pkl", 'wb') as f:
    pickle.dump(driver.get_cookies(), f)

driver.quit()

# Your function to get the cookies from the file.
def load_cookies(filename):
    with open(filename, 'rb') as f:
        return pickle.load(f)

saved_cookies_list = load_cookies("C:\Path\To\Your\Filepath.pkl")

# Set up the request session
initial_state = requests.Session()

# Function to fix cookie values and add cookies to the request session
def fix_cookies_and_load_to_requests(cookie_list, request_session):
    for index in range(len(cookie_list)):
        for item in cookie_list[index]:
            if type(cookie_list[index][item]) != str:
                print("Fix cookie value: ", cookie_list[index][item])
                cookie_list[index][item] = str(cookie_list[index][item])
        cookies = requests.utils.cookiejar_from_dict(cookie_list[index])
        request_session.cookies.update(cookies)
    return request_session

initial_state_with_cookies = fix_cookies_and_load_to_requests(cookie_list=saved_cookies_list, request_session=initial_state)
search_requests = initial_state_with_cookies.get("https://stackoverflow.com/")
print("search_requests:", search_requests)
Requests also accepts http.cookiejar.CookieJar objects:
https://docs.python.org/3.8/library/http.cookiejar.html#cookiejar-and-filecookiejar-objects
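For example, a minimal sketch assuming you have a Netscape-format cookies.txt file (e.g. exported by a browser extension):

import http.cookiejar
import requests

# load a Netscape/Mozilla-format cookies.txt into a CookieJar
cj = http.cookiejar.MozillaCookieJar('cookies.txt')
cj.load(ignore_discard=True, ignore_expires=True)

# requests accepts the jar directly, either per request...
response = requests.get('https://stackoverflow.com/', cookies=cj)

# ...or merged into a Session's jar for reuse across requests
session = requests.Session()
session.cookies.update(cj)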
If I enter this URL in a browser, it returns the valid XML data that I am interested in scraping.
http://www.facebook.com/ajax/stream/profile.php?__a=1&profile_id=36343869811&filter=2&max_time=0&try_scroll_load=false&_log_clicktype=Filter%20Stories%20or%20Pagination&ajax_log=0
However, if I do it from the server side, it doesn't work as it did before. Now it just returns this error, which seems to be the default error message:
{u'silentError': 0, u'errorDescription': u"Something went wrong. We're working on getting it fixed as soon as we can.", u'errorSummary': u'Oops', u'errorIsWarning': False, u'error': 1357010, u'payload': None}
Here is the code in question; I've tried multiple User-Agents, to no avail:
import urllib2
user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 6.1; he; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3'
uaheader = { 'User-Agent' : user_agent }
wallurl='http://www.facebook.com/ajax/stream/profile.php?__a=1&profile_id=36343869811&filter=2&max_time=0&try_scroll_load=false&_log_clicktype=Filter%20Stories%20or%20Pagination&ajax_log=0'
req = urllib2.Request(wallurl, headers=uaheader)
resp = urllib2.urlopen(req)
pageData=convertTextToUnicode(resp.read())
print pageData #and get that error
What would be the difference between the server calls and my own browser aside from User Agents and IP addresses?
I tried the above URL in both Chrome and Firefox. It works on Chrome but fails on Firefox. On Chrome, I am signed into Facebook, while on Firefox I am not.
This could be the reason for the discrepancy. You will need to provide authentication in the urllib2-based script that you have posted.
There is an existing question on authentication with urllib2.
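A minimal sketch of one way to do that with the question's urllib2 code, assuming you copy the Cookie header from a browser session that is signed in to Facebook (the cookie value below is a placeholder, not a real session):

import urllib2

user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 6.1; he; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3'
wallurl = 'http://www.facebook.com/ajax/stream/profile.php?__a=1&profile_id=36343869811&filter=2&max_time=0&try_scroll_load=false&_log_clicktype=Filter%20Stories%20or%20Pagination&ajax_log=0'

headers = {
    'User-Agent': user_agent,
    # placeholder: paste the Cookie header of a logged-in browser session here
    'Cookie': 'c_user=PLACEHOLDER; xs=PLACEHOLDER',
}

req = urllib2.Request(wallurl, headers=headers)
resp = urllib2.urlopen(req)
print resp.read()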