Fetching Pages with Requests
Before you can scrape a page, you need to download it. The requests library makes this simple.
import requests
response = requests.get("https://example.com")  # send an HTTP GET request
print(response.text)  # the response body, decoded to a string
This fetches the page and gives you its HTML content as a string. The response object also tells you if the request succeeded.
print(response.status_code) # 200 means success
A 200 status means everything worked. A 404 means the page wasn't found. Always check the status before trying to parse the content.
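As a quick illustration, using the same placeholder URL, you can guard the parsing step with a simple status check:

response = requests.get("https://example.com")
if response.status_code == 200:
    html = response.text  # safe to hand off to a parser
else:
    print(f"Request failed with status {response.status_code}")

If you prefer to fail loudly, requests also provides response.raise_for_status(), which raises an exception for any 4xx or 5xx response.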
The requests library handles HTTP details such as cookies, redirects, and headers for you, so you can focus on the data you want.
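To see a couple of those conveniences in action, here is a small sketch; the User-Agent string and timeout value are just example choices, not requirements:

response = requests.get(
    "https://example.com",
    headers={"User-Agent": "my-scraper/1.0"},  # example custom header
    timeout=10,  # seconds to wait before giving up
)
print(response.url)       # final URL after any redirects were followed
print(response.history)   # list of intermediate redirect responses, if any
print(response.cookies)   # cookies the server set, stored for you automatically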
Learn requests and HTTP basics in my Web Scraping course.