Confluent Cloud
Identified - We have identified the cause of the problem as a disruption in one of AWS’s availability zones (mec1-az2). We are taking steps to confirm the safest path to mitigation for any impacted Confluent Cloud services in this region.
Mar 01, 2026 - 18:48 UTC
Investigating - We are experiencing increased error rates in some of our Confluent Cloud services in the AWS me-central-1 region. This began at approximately 12:51 UTC today and is linked to a disruption in one of AWS’s availability zones (mec1-az2). Our team is actively applying mitigation steps and will provide updates as the situation evolves.
Mar 01, 2026 - 18:08 UTC

About This Site

Welcome to Confluent Cloud's status page. Here you will find high-level availability information for the Confluent Cloud managed service. Visit https://confluent.cloud to manage your Confluent Cloud clusters.

Confluent Cloud: Partial Outage
Mar 1, 2026

Unresolved incident: Elevated Error Rates in AWS me-central-1 region.

Feb 28, 2026

No incidents reported.

Feb 27, 2026
Resolved - This incident has been resolved.
Feb 27, 18:16 UTC
Update - We are continuing to monitor for any further issues.
Feb 27, 03:25 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 26, 17:41 UTC
Update - At this time, single-zone clusters in us-south1-a are impacted.
Feb 26, 01:11 UTC
Identified - GCP has identified the issue and is actively working on applying mitigation.
Feb 25, 12:02 UTC
Investigating - We are currently investigating the issue.
Feb 25, 05:01 UTC

Resolved - This incident has been resolved.
Feb 27, 01:57 UTC
Identified - The issue has been identified and a fix is being implemented.
Feb 26, 23:07 UTC
Investigating - Azure is investigating the issue and we will post an update soon.
Feb 26, 19:56 UTC
Feb 26, 2026
Resolved - This incident has been resolved.
Feb 26, 22:24 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 26, 22:23 UTC
Investigating - Provisioning new networks in this region may fail. We are investigating the issue and will post an update soon.
Feb 26, 13:59 UTC
Feb 25, 2026
Resolved - This incident has been resolved.
Feb 25, 01:07 UTC
Identified - The issue has been identified and we are currently deploying the fix.
Feb 25, 00:33 UTC
Update - We are continuing to investigate this issue.
Feb 24, 21:16 UTC
Update - We are currently working on a mitigation.
Feb 24, 21:16 UTC
Investigating - We are experiencing an elevated level of Kafka REST API errors with error code 429 (rate limiting) in AWS us-west-2 and are currently looking into the issue. This issue impacts Kafka eSKU clusters in this region.
Feb 24, 21:15 UTC
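For clients affected by elevated 429 responses, the usual client-side mitigation is to retry with exponential backoff and honor any Retry-After header the server returns. The following is a minimal Python sketch of that pattern; the endpoint URL and payload shape are placeholders for illustration, not the exact Confluent Cloud REST API contract.

    import random
    import time

    import requests

    # Placeholder endpoint: substitute your cluster's actual Kafka REST API
    # produce URL and record payload format.
    URL = "https://<rest-endpoint>/kafka/v3/clusters/<cluster-id>/topics/<topic>/records"

    def produce_with_backoff(session, payload, max_retries=5):
        # POST one record, backing off exponentially on HTTP 429.
        for attempt in range(max_retries):
            resp = session.post(URL, json=payload, timeout=10)
            if resp.status_code != 429:
                resp.raise_for_status()
                return resp.json()
            # Honor Retry-After if the server sent one; otherwise use
            # jittered exponential backoff (1s, 2s, 4s, ...).
            retry_after = resp.headers.get("Retry-After")
            delay = float(retry_after) if retry_after else 2 ** attempt + random.random()
            time.sleep(delay)
        raise RuntimeError(f"still rate-limited after {max_retries} retries")

The jitter spreads out retries from many clients so they do not synchronize and re-trigger the rate limiter at the same instant.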
Feb 24, 2026
Resolved - The issue was resolved successfully. All systems are working as expected.
Feb 24, 09:58 UTC
Monitoring - GCP has identified the root cause and applied a fix. We are currently monitoring.
Feb 24, 07:27 UTC
Update - We are continuing to investigate this issue with GCP. GCP is actively working on applying a mitigation.
Feb 24, 00:23 UTC
Update - We are continuing to investigate this issue.
Feb 23, 22:18 UTC
Update - Starting Feb 21, 2026 at 21:00 UTC, we have been observing elevated Kafka latency in the GCP asia-southeast1 region for less than 0.1% of produce and fetch requests for some customers. Median latencies for both produce and fetch requests are not impacted. We are actively working with GCP to identify the root cause of the issue. We will provide the next update in 2 hours or earlier.
Feb 23, 22:17 UTC
Investigating - We are currently investigating this issue.
Feb 23, 22:12 UTC
Feb 23, 2026

No incidents reported.

Feb 22, 2026

No incidents reported.

Feb 21, 2026

No incidents reported.

Feb 20, 2026

No incidents reported.

Feb 19, 2026

No incidents reported.

Feb 18, 2026

No incidents reported.

Feb 17, 2026

No incidents reported.

Feb 16, 2026

No incidents reported.

Feb 15, 2026

No incidents reported.