What went down this week? Here are the notable issues with the cloud apps our companies depend on for the week of Monday, August 1st – Sunday, August 7th.
This week, there weren’t many newsworthy outages per se, but in related news, legislation addressing cloud outages is being developed in the US, UK, and EU, according to Information Weekly.
Stay tuned for more information about this legislation, along with some of our observations about the developments. In the meantime, you can catch up on some notable outages.
Notable Metrist-Reported Downtime
While these outages didn’t make the news, these issues caught by Metrist may have affected your company’s app and operations.
- Sentry was slow and issue creation failed for about 40 minutes in AWS-East-1 on Monday, August 1st. (Interestingly, Sentry had a similar issue a week later, on Monday, August 8th, again in the AWS-East-1 region, but also in the AWS-West-1 region.)
- GCP App Engine had several 5-20 minute issues in which it could not create versions in US-West-2 on Wednesday, August 3rd. These on-and-off issues started around midnight Pacific time and lasted until about mid-day.
Notable Metrist-Reported Azure Issues
Azure had several issues, some of which seemed related to each other. Many of the outages were “capacity-related,” according to the Azure error message.
- Azure Virtual Machines (VM) had issues lasting 30 minutes to 4.75 hours, on and off all day, on August 2nd. The platform could not create instances in Azure West US and Canada Central. The platform then had issues in Canada Central again on August 4th, with several on-and-off partial outages throughout the day lasting 30 minutes to 5 hours. Error messages for both occurrences indicated that Azure did not have available resources in the region.
- Azure AKS could not create clusters in Canada Central two separate times lasting 1-2 hours each on August 2nd. The issue is likely related to the Azure VM outage since AKS is dependent on VM. This situation happened again on August 4th, when AKS experienced partial outages that appeared to be related to the concurrent VM outages.
- Azure Cosmos DB had three partial outages lasting 20-31 minutes each on August 4th, all impacting Azure US-East; two of the outages occurred in the morning Pacific time.
- Azure Blob Storage could not create storage accounts for 14 minutes in Azure Canada Central on August 5th. The error message indicated that the issue was capacity-related.
- Azure Cosmos DB could not create Cosmos accounts in Azure-US-East for 21 minutes on August 6th. The error message also indicated that the issue was capacity-related.
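When a provider reports a capacity-related error like the ones above, one common mitigation is to retry provisioning in an alternate region. Here is a minimal sketch of that pattern in Python; `create_vm`, `CapacityError`, and the region names are hypothetical stand-ins, not the real Azure SDK API.

```python
class CapacityError(Exception):
    """Raised when a region cannot satisfy the provisioning request."""

def create_vm(region: str) -> str:
    # Hypothetical stand-in for a real SDK provisioning call.
    # In this sketch, only "eastus" has available capacity.
    if region != "eastus":
        raise CapacityError(f"allocation failed: no capacity in {region}")
    return f"vm-created-in-{region}"

def create_with_fallback(regions: list[str]) -> str:
    """Try each region in order, falling back on capacity errors."""
    last_error = None
    for region in regions:
        try:
            return create_vm(region)
        except CapacityError as err:
            last_error = err  # note the failure and try the next region
    raise RuntimeError(f"all regions exhausted: {last_error}")

print(create_with_fallback(["canadacentral", "westus", "eastus"]))
# prints "vm-created-in-eastus"
```

In production you would also add backoff between retries and pick fallback regions that satisfy your latency and data-residency constraints, but the ordered-fallback loop is the core idea.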
Apps are bound to go down, but as long as we’re aware and have a backup plan, our companies can be more resilient. If you’d like to keep track of the apps you depend on in real time, try Metrist.