Python script to monitor website changes

Last Updated: 03 Apr, 2024

In this article, we are going to discuss how to create a Python script to monitor website changes. The program periodically fetches a website and notifies you if anything has changed. This has many useful applications; for example, if your school website posts an update, you will come to know about it.

Approach: We will follow these steps to write the program:

1. Read the URL you want to monitor.
2. Hash the entire website.
3. Wait for a specified number of seconds.
4. If the new hash differs from the previous one, send a notification; otherwise wait and take the hash again.

Libraries required:

- time: to wait for a specified amount of time.
- hashlib: to hash the content of the entire website.
- urllib: to perform the GET request and load the content of the website.

Implementation:

Python3

# Importing libraries
import time
import hashlib
from urllib.request import urlopen, Request

# setting the URL you want to monitor
# (the scheme is required, otherwise urlopen raises an error)
url = Request('https://www.geeksforgeeks.org',
              headers={'User-Agent': 'Mozilla/5.0'})

# perform a GET request and load the content
# of the website into a variable
response = urlopen(url).read()

# create the initial hash
currentHash = hashlib.sha224(response).hexdigest()
print("running")
time.sleep(10)

while True:
    try:
        # perform the GET request and store the content
        response = urlopen(url).read()

        # create a hash of the current content
        currentHash = hashlib.sha224(response).hexdigest()

        # wait for 30 seconds
        time.sleep(30)

        # perform the GET request again
        response = urlopen(url).read()

        # create a new hash
        newHash = hashlib.sha224(response).hexdigest()

        # check if the new hash is the same as the previous hash
        if newHash == currentHash:
            continue

        # if the hashes differ, the page changed
        else:
            # notify
            print("something changed")

            # read the website again to establish a new baseline
            response = urlopen(url).read()

            # create a fresh hash
            currentHash = hashlib.sha224(response).hexdigest()

            # wait for 30 seconds
            time.sleep(30)
            continue

    # handle exceptions such as network errors
    except Exception as e:
        print("error:", e)

Output: The script prints "running" when it starts and "something changed" whenever the hash of the page differs from the previous one.

Note: time.sleep() takes seconds as a parameter. You can also change how the notification is delivered: instead of printing the status on the terminal, you can write the program to send you an email (a sketch of this idea is shown below).
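As a rough sketch of the email idea: assuming you have an SMTP account available, you could replace the print("something changed") call with a small helper built on Python's standard smtplib and email modules. The server address, port, sender, recipient, and password below are placeholders, not values from the original article; substitute your own provider's details.

# minimal sketch of emailing a notification instead of printing it
# (smtp.example.com, the addresses and the password are placeholders)
import smtplib
from email.message import EmailMessage

def notify_by_email(page_url):
    msg = EmailMessage()
    msg["Subject"] = "Website changed"
    msg["From"] = "monitor@example.com"      # placeholder sender
    msg["To"] = "you@example.com"            # placeholder recipient
    msg.set_content(f"A change was detected on {page_url}")

    # SMTP over SSL; host, port and credentials are assumptions
    with smtplib.SMTP_SSL("smtp.example.com", 465) as server:
        server.login("monitor@example.com", "app-password")
        server.send_message(msg)

# inside the monitoring loop, instead of print("something changed"):
# notify_by_email("https://www.geeksforgeeks.org")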
Code Explanation:

The code starts by importing the required libraries. It then sets up the URL to monitor (www.geeksforgeeks.org) with a browser-like User-Agent header, performs a GET request on the website, and stores the content in a variable called response. An initial hash of that response is created with hashlib.sha224(response).hexdigest() and stored in currentHash. The script then sleeps for 10 seconds before entering an infinite loop. On each iteration it re-reads the page and stores its hash in currentHash, waits 30 seconds, reads the page once more, and stores the result in newHash. If the two hashes are equal, nothing has changed and the loop simply continues; if they differ, print("something changed") notifies the user, the page is read again to establish a fresh baseline hash, and the loop continues after another 30-second wait. Any exception, for example a network error, is caught and reported so the monitor keeps running.
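To make the comparison step concrete, here is a tiny standalone snippet, separate from the monitoring script, showing that sha224 produces a completely different digest as soon as a single byte of the page changes. The two HTML strings are made up purely for illustration.

import hashlib

# two made-up page bodies that differ by one character
old_page = b"<html><body>Results announced on 10 May</body></html>"
new_page = b"<html><body>Results announced on 12 May</body></html>"

old_hash = hashlib.sha224(old_page).hexdigest()
new_hash = hashlib.sha224(new_page).hexdigest()

# the digests differ, so the monitor would report "something changed"
print(old_hash == new_hash)   # False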