Confluent Cloud
Update - AWS regional recovery is expected to take an extended period in both the me-central-1 and me-south-1 regions. Customers requiring immediate restoration in these two regions are encouraged to review regional failover options.
Mar 03, 2026 - 05:34 UTC
Update - Confluent Cloud services in AWS me-south-1 remain stable and operating normally following mitigation. We continue to monitor for any residual issues.
Confluent Cloud services in AWS me-central-1 continue to experience a complete outage due to the ongoing AWS infrastructure failure in that region.

Mar 02, 2026 - 15:37 UTC
Monitoring - The issue affecting Confluent Cloud services in AWS me-south-1 has been mitigated, and we are continuing to monitor.

Confluent Cloud services in AWS me-central-1 remain disrupted due to the ongoing regional outage at AWS.

Mar 02, 2026 - 10:35 UTC
Update - Confluent Cloud services in the AWS me-central-1 region are experiencing a major outage linked to AWS's me-central-1 regional outage.
Mitigation is in progress for Confluent Cloud services in AWS me-south-1.

Mar 02, 2026 - 08:32 UTC
Investigating - We are experiencing increased error rates in some of our Confluent Cloud services in the AWS me-south-1 and me-central-1 regions. This began at approximately 05:00 UTC today and is linked to disruptions in AWS’s availability zones in these regions (mec1-az1, mec1-az3, and mes1-az2). Our team is currently working on mitigation and will provide updates as the situation evolves.
Mar 02, 2026 - 07:19 UTC

About This Site

Welcome to Confluent Cloud's status page. Here you will find high-level availability information for the Confluent Cloud managed service. Visit https://confluent.cloud to manage your Confluent Cloud clusters.

Confluent Cloud Partial Outage
Mar 5, 2026

No incidents reported today.

Mar 4, 2026

No incidents reported.

Mar 3, 2026

Unresolved incident: Elevated error rates in AWS me-south-1 and me-central-1 regions.

Mar 2, 2026
Mar 1, 2026
Resolved - This incident has been resolved. All Confluent Cloud services are now healthy in the me-central-1 region.
Mar 1, 22:05 UTC
Monitoring - The problem has been mitigated as of 21:15 UTC today. All Confluent Cloud services are now healthy in the me-central-1 region, and we will monitor for any residual issues for 1 hour before resolving this incident.
Mar 1, 21:41 UTC
Identified - We have identified the cause of the problem to be disruption in one of AWS’s availability zones (mec1-az2). We are taking steps to confirm the safest path to mitigation for any impacted Confluent Cloud services in this region.
Mar 1, 18:48 UTC
Investigating - We are experiencing increased error rates in some of our Confluent Cloud services in the AWS me-central-1 region. This began at approximately 12:51 UTC today and is linked to a disruption in one of AWS’s availability zones (mec1-az2). Our team is actively applying mitigation steps and will provide updates as the situation evolves.
Mar 1, 18:08 UTC
Feb 28, 2026

No incidents reported.

Feb 27, 2026
Resolved - This incident has been resolved.
Feb 27, 18:16 UTC
Update - We are continuing to monitor for any further issues.
Feb 27, 03:25 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 26, 17:41 UTC
Update - At this time, single-zone clusters in us-south1 zone a are impacted.
Feb 26, 01:11 UTC
Identified - GCP has identified the issue and is actively working on applying mitigation.
Feb 25, 12:02 UTC
Investigating - We are currently investigating the issue.
Feb 25, 05:01 UTC
Resolved - This incident has been resolved.
Feb 27, 01:57 UTC
Identified - The issue has been identified and a fix is being implemented.
Feb 26, 23:07 UTC
Investigating - Azure is investigating the issue and we will post an update soon.
Feb 26, 19:56 UTC
Feb 26, 2026
Resolved - This incident has been resolved.
Feb 26, 22:24 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 26, 22:23 UTC
Investigating - Provisioning new networks in this region may fail. We are investigating the issue and will post an update soon.
Feb 26, 13:59 UTC
Feb 25, 2026
Resolved - This incident has been resolved.
Feb 25, 01:07 UTC
Identified - The issue has been identified and we are currently deploying the fix.
Feb 25, 00:33 UTC
Update - We are continuing to investigate this issue.
Feb 24, 21:16 UTC
Update - We are currently working on a mitigation.
Feb 24, 21:16 UTC
Investigating - We are experiencing an elevated level of Kafka REST API errors with error code 429 (Too Many Requests) in AWS us-west-2 and are currently looking into the issue. This issue impacts Kafka eSKU clusters in this region.
Feb 24, 21:15 UTC
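Note: HTTP 429 responses like those above indicate the Kafka REST Produce API is throttling requests. As an illustrative sketch only (not part of this incident history), a producer client might retry such responses with exponential backoff; the endpoint URL, cluster ID, topic, and credentials below are hypothetical placeholders.

```python
import time
import requests

# Hypothetical values -- substitute your cluster's REST endpoint, cluster ID, topic, and API key/secret.
REST_ENDPOINT = "https://pkc-xxxxx.us-west-2.aws.confluent.cloud:443"
CLUSTER_ID = "lkc-xxxxx"
TOPIC = "orders"
AUTH = ("API_KEY", "API_SECRET")

def produce_with_backoff(payload, max_retries=5):
    """POST one record to the Kafka REST v3 produce endpoint, retrying on HTTP 429 with exponential backoff."""
    url = f"{REST_ENDPOINT}/kafka/v3/clusters/{CLUSTER_ID}/topics/{TOPIC}/records"
    delay = 0.5
    for _ in range(max_retries):
        resp = requests.post(url, json=payload, auth=AUTH, timeout=10)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Rate limited: honor Retry-After if the server sent one, otherwise back off exponentially.
        wait = float(resp.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay *= 2
    raise RuntimeError("Gave up after repeated 429 responses")

# Example usage:
# produce_with_backoff({"value": {"type": "JSON", "data": {"id": 1}}})
```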
Feb 24, 2026
Resolved - The issue was resolved successfully. All systems are working as expected.
Feb 24, 09:58 UTC
Monitoring - GCP has identified the root cause and applied a fix. We are currently monitoring.
Feb 24, 07:27 UTC
Update - We are continuing to investigate this issue with GCP. GCP is actively working on applying mitigation.
Feb 24, 00:23 UTC
Update - We are continuing to investigate this issue.
Feb 23, 22:18 UTC
Update - Starting Feb 21, 2026, at 21:00 UTC, we have been observing elevated Kafka latency in the GCP asia-southeast1 region for less than 0.1% of produce and fetch requests for some customers. Median latencies for both produce and fetch requests are not impacted. We are actively working with GCP to identify the root cause of the issue. We will provide the next update in 2 hours or earlier.
Feb 23, 22:17 UTC
Investigating - We are currently investigating this issue.
Feb 23, 22:12 UTC
Feb 23, 2026
Feb 22, 2026

No incidents reported.

Feb 21, 2026

No incidents reported.

Feb 20, 2026

No incidents reported.

Feb 19, 2026

No incidents reported.