
How to Build a Real-Time PagerDuty-Compatible Monitor for GitHub Uptime

GitHub is a pretty critical part of a software developer’s workflow. So if it’s down, you’d want to know ASAP. But it often takes a while for the GitHub status page to update.

That’s one reason why it’s a good idea to set up continuous monitoring for GitHub, so you can be in the know if it goes down. In this article, we’ll show you how to set up monitoring for GitHub that alerts you in PagerDuty and produces a queryable database.

And since ChatGPT is such a useful, popular tool, we also used it to generate the code for this article – and our engineers reviewed said code. See notes for more information. Now let’s get into how to build the monitor. 


What We’ll Do

More specifically, we’ll guide you through setting up continuous monitoring for GitHub with AWS EC2 and RDS. We’ll be using Python to create a testing script that exercises core GitHub functionality every 2 minutes, and sends an alert to PagerDuty when two consecutive tests fail. This script will also store the results in an AWS RDS database for future reporting or analysis.

But before getting started with this project, it’s important to cover some of the basics. So let’s look at some of the prerequisites involving Python, AWS, GitHub, and PagerDuty. 



Prerequisites

Before diving into the tutorial, you will need to have the following:

  • A basic knowledge of Python programming
  • Access to an AWS account with the appropriate permissions
  • A basic knowledge of AWS EC2 and AWS RDS
  • A GitHub account with a repository for testing purposes
  • A PagerDuty account for receiving alerts

If you have these prerequisites, you’re good to get started and you’ll be ready for step 1: setting up your AWS EC2 and RDS instances. 


Step 1: Set Up AWS EC2 and RDS Instances

  1. Launch an AWS EC2 instance with Amazon Linux 2 (A t2.micro instance should be fine.)
  2. Set up an AWS RDS instance with MySQL or PostgreSQL as the database engine
  3. Note the RDS endpoint and credentials
  4. Connect to your EC2 instance via SSH
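If you prefer the command line to the AWS console, the instances above can also be provisioned with the AWS CLI. This is only a sketch: the AMI ID, key pair, security group, and password below are placeholders you’ll need to replace with your own values.

```shell
# Launch a t2.micro EC2 instance running Amazon Linux 2
# (the AMI ID varies by region; look it up in the console first)
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-group-ids sg-xxxxxxxx

# Create a small PostgreSQL RDS instance for the results database
aws rds create-db-instance \
    --db-instance-identifier github-monitor-db \
    --db-instance-class db.t3.micro \
    --engine postgres \
    --master-username monitor \
    --master-user-password 'choose-a-strong-password' \
    --allocated-storage 20

# Once the database is available, fetch the RDS endpoint to note down
aws rds describe-db-instances \
    --db-instance-identifier github-monitor-db \
    --query 'DBInstances[0].Endpoint.Address'
```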

Step 2: Configure Your EC2 Instance

  1. Install Python 3, pip, and any other necessary packages on your EC2 instance.
  2. Install the necessary Python libraries using pip:
pip install requests psycopg2-binary boto3 pagerduty-api github3.py


  3. Set up your GitHub and PagerDuty API keys in a `config.py` file on your EC2 instance
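As an illustration, the `config.py` file might look like the following. The variable names match the ones used by the scripts in this article; the values are placeholders, and the database settings assume you also keep your RDS credentials here.

```python
# config.py -- placeholder values; fill in your own credentials

# GitHub settings
GITHUB_API_KEY = "ghp_your_personal_access_token"
GITHUB_USERNAME = "your-github-username"
GITHUB_REPO_NAME = "your-test-repository"

# PagerDuty settings
PAGERDUTY_API_KEY = "your-pagerduty-api-key"
PAGERDUTY_SERVICE_KEY = "your-pagerduty-service-key"

# RDS database settings (from the endpoint and credentials noted in step 1)
DB_HOST = "your-rds-endpoint.rds.amazonaws.com"
DB_NAME = "monitoring"
DB_USER = "monitor"
DB_PASSWORD = "your-db-password"
```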

Step 3: Create a Function for Testing GitHub Functionality

Create a new file named `github_test.py` and import the required libraries:

import os
import tempfile

from github3 import login

from config import GITHUB_API_KEY, GITHUB_USERNAME, GITHUB_REPO_NAME

# Function to test GitHub functionality
def test_github():
    # Log in to GitHub
    gh = login(token=GITHUB_API_KEY)

    # Get the test repository
    repo = gh.repository(GITHUB_USERNAME, GITHUB_REPO_NAME)

    # Test 1: Pull code
    try:
        repo_contents = repo.file_contents("")  # path of a file that exists in your repo
    except Exception as e:
        return False, f"Failed to pull code: {str(e)}"

    # Test 2: Push code
    test_branch_name = "test_branch"
    try:
        base_ref = repo.ref("heads/main")
        repo.create_ref(f"refs/heads/{test_branch_name}", base_ref.object.sha)

        test_file_path = os.path.join(tempfile.gettempdir(), "test_file.txt")
        with open(test_file_path, "w") as test_file:
            test_file.write("This is a test file for GitHub functionality monitoring")

        with open(test_file_path, "rb") as test_file:
            repo.create_file(path="test_file.txt",
                             message="Add test file for GitHub functionality monitoring",
                             content=test_file.read(),
                             branch=test_branch_name)
    except Exception as e:
        return False, f"Failed to push code: {str(e)}"

    # Test 3: Make a pull request
    try:
        repo.create_pull(title="Test Pull Request",
                         base="main",
                         head=test_branch_name,
                         body="This is a test pull request for GitHub functionality monitoring")

        # Clean up: delete the test branch
        test_branch_ref = repo.ref(f"heads/{test_branch_name}")
        test_branch_ref.delete()
    except Exception as e:
        return False, f"Failed to create a pull request: {str(e)}"

    return True, "GitHub functionality test successful"

This function tests core GitHub functionality (pulling code, pushing code, and creating a pull request) and cleans up the test branch it creates along the way.


Step 4: Set Up PagerDuty Integration

Create a new file named `pagerduty_integration.py` and import the required libraries:

from config import PAGERDUTY_API_KEY
from pagerduty_api import PagerDuty

# Initialize PagerDuty
pager = PagerDuty(api_key=PAGERDUTY_API_KEY)

def trigger_incident(service_key, description):
    pager.trigger_incident(service_key=service_key, description=description)
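If the `pagerduty-api` package doesn’t fit your setup, an alternative sketch is to call PagerDuty’s Events API v2 directly with `requests`. The routing key below is a placeholder, and the function names are our own; separating payload construction from sending keeps the logic easy to test.

```python
PAGERDUTY_EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"

def build_trigger_payload(routing_key, description, source="github-monitor"):
    """Build an Events API v2 'trigger' event for the given incident description."""
    return {
        "routing_key": routing_key,
        "event_action": "trigger",
        "payload": {
            "summary": description,
            "source": source,
            "severity": "critical",
        },
    }

def trigger_incident_v2(routing_key, description):
    """Send the trigger event to PagerDuty and return the HTTP response."""
    import requests
    payload = build_trigger_payload(routing_key, description)
    return requests.post(PAGERDUTY_EVENTS_URL, json=payload, timeout=10)
```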

Step 5: Configure Continuous Monitoring and Alerts

Create a new file for the monitoring script and import the required libraries:

import time

import psycopg2

from config import DB_HOST, DB_NAME, DB_USER, DB_PASSWORD, PAGERDUTY_SERVICE_KEY
from github_test import test_github
from pagerduty_integration import trigger_incident

# Connect to the RDS database (credentials kept in config.py alongside the API keys)
conn = psycopg2.connect(host=DB_HOST, dbname=DB_NAME, user=DB_USER, password=DB_PASSWORD)
cursor = conn.cursor()

# Create the results table if it doesn't exist
cursor.execute('''CREATE TABLE IF NOT EXISTS results
                  (id SERIAL PRIMARY KEY, timestamp TIMESTAMP, status TEXT, message TEXT)''')
conn.commit()

# Initialize variables for continuous monitoring
test_interval = 120  # 2 minutes
consecutive_failures = 0

while True:
    test_success, message = test_github()

    # Insert the test result into the database
    cursor.execute("INSERT INTO results (timestamp, status, message) VALUES (NOW(), %s, %s)",
                   ("success" if test_success else "failure", message))
    conn.commit()

    if test_success:
        consecutive_failures = 0
    else:
        consecutive_failures += 1
        if consecutive_failures == 2:
            trigger_incident(PAGERDUTY_SERVICE_KEY, f"GitHub monitoring failed twice in a row: {message}")

    # Wait before running the next round of tests
    time.sleep(test_interval)


This script sets up continuous monitoring of GitHub functionality, runs tests every 2 minutes, and sends an alert to PagerDuty when two consecutive tests fail. The results of each test are stored in the RDS database.
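The two-consecutive-failures rule in the loop above can be factored into a small pure helper, which makes it easy to unit test in isolation (a sketch; the function name is our own):

```python
def should_alert(consecutive_failures, test_success, threshold=2):
    """Update the failure counter for one test run and decide whether to alert.

    Returns (new_consecutive_failures, alert), where alert is True only at the
    moment the failure count reaches the threshold, so an ongoing outage
    doesn't re-trigger the incident on every cycle.
    """
    if test_success:
        return 0, False
    consecutive_failures += 1
    return consecutive_failures, consecutive_failures == threshold
```

Inside the loop, `consecutive_failures, alert = should_alert(consecutive_failures, test_success)` would replace the if/else block, with the PagerDuty call guarded by `alert`.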


Step 6: Query the Test Results

Create a new file for querying the stored results and import the required libraries:

import psycopg2

from config import DB_HOST, DB_NAME, DB_USER, DB_PASSWORD

# Connect to the RDS database
conn = psycopg2.connect(host=DB_HOST, dbname=DB_NAME, user=DB_USER, password=DB_PASSWORD)
cursor = conn.cursor()

# Define a function to query the test results
def query_test_results():
    cursor.execute("SELECT * FROM results ORDER BY timestamp DESC")
    results = cursor.fetchall()

    for result in results:
        print(f"ID: {result[0]}, Timestamp: {result[1]}, Status: {result[2]}, Message: {result[3]}")

# Call the function to display the test results
query_test_results()

This script connects to your AWS RDS instance and fetches the test results stored in the `results` table. The `query_test_results()` function retrieves the results in descending order by timestamp and displays them in a readable format.
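Beyond printing raw rows, the stored results lend themselves to simple reporting. As an illustration, this hypothetical helper computes an uptime percentage from rows shaped like the ones in the `results` table:

```python
def uptime_percentage(rows):
    """Compute uptime from result rows of the form (id, timestamp, status, message).

    Returns the share of 'success' rows as a percentage, or None when there
    are no rows to judge from.
    """
    if not rows:
        return None
    successes = sum(1 for row in rows if row[2] == "success")
    return 100.0 * successes / len(rows)
```

With psycopg2 you could pass the result of `cursor.fetchall()` straight in.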

To check the test results, you can run this Python script from your EC2 instance whenever needed.

Notes on ChatGPT

As you can see, ChatGPT was able to generate a pretty good solution for this in Python. It’s not perfect, though, and has a few flaws. One such flaw is that, despite being asked to clean up any artifacts it created, it can still leave some behind: the pull request, for example, or even the branch if the pull request fails to be created.

This is one example of why human beings can’t be pulled from the software engineering equation. These AI systems such as ChatGPT, Google Bard, or GitHub Copilot can be very helpful in code generation but anything generated by them should always be reviewed before being let loose in the wild.


Next Steps

GitHub is such an important part of your workflow that it’s important to know whether it’s working. Once you have this system set up, you can stay on top of any outages that might affect your tasks.

Although the process we outlined above is relatively straightforward, it can take some effort. If you want to monitor GitHub quickly and easily, you can do so with Metrist.

Not only will you be able to monitor GitHub in real time, but Metrist also integrates with PagerDuty and offers webhook automations triggered by service health metrics! It’s as simple as downloading Metrist and then configuring your monitors. To learn more, go here – or set up a demo!
