Identified - We have identified the cause of the problem as a disruption in one of AWS’s availability zones (mec1-az2). We are taking steps to confirm the safest path to mitigation for any impacted Confluent Cloud services in this region.
Mar 01, 2026 - 18:48 UTC
Investigating - We are experiencing increased error rates in some of our Confluent Cloud services in the AWS me-central-1 region. This began at approximately 12:51 UTC today and is linked to a disruption in one of AWS’s availability zones (mec1-az2). Our team is actively applying mitigation steps and will provide updates as the situation evolves.
Mar 01, 2026 - 18:08 UTC
Welcome to Confluent Cloud's status page. Here you will find high-level availability information for the Confluent Cloud managed service. Visit https://confluent.cloud to manage your Confluent Cloud clusters.
Confluent Cloud
Partial Outage
Past Incidents
Mar 1, 2026
Unresolved incident: Elevated Error Rates in AWS me-central-1 region.
Resolved -
This incident has been resolved.
Feb 26, 22:24 UTC
Monitoring -
A fix has been implemented and we are monitoring the results.
Feb 26, 22:23 UTC
Investigating -
Provisioning new networks in this region can potentially fail. We are investigating the issue and will post an update soon.
Feb 26, 13:59 UTC
Resolved -
This incident has been resolved.
Feb 25, 01:07 UTC
Identified -
The issue has been identified and we are currently deploying a fix.
Feb 25, 00:33 UTC
Update -
We are continuing to investigate this issue.
Feb 24, 21:16 UTC
Update -
We are currently working on a mitigation.
Feb 24, 21:16 UTC
Investigating -
We are experiencing an elevated level of Kafka REST API errors with error code 429 in AWS us-west-2 and are currently looking into the issue. This issue impacts Kafka eSKU clusters in this region.
Feb 24, 21:15 UTC
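HTTP 429 indicates the server is rate limiting requests. During an incident like the one above, a common client-side mitigation is to retry with capped exponential backoff and jitter, honoring any server-supplied Retry-After hint. A minimal sketch follows; the `send` callable and its `(status, body)` return shape are illustrative assumptions, not Confluent's client API:

```python
import random

def backoff_delay(attempt, base=0.5, cap=30.0, retry_after=None):
    """Seconds to wait before retrying a 429-rate-limited request.

    Honors a server-supplied Retry-After hint when present; otherwise
    uses capped exponential backoff with full jitter to avoid
    synchronized retry storms across clients.
    """
    if retry_after is not None:
        return float(retry_after)
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

def call_with_retries(send, max_attempts=5):
    """Invoke `send()` until it stops returning HTTP 429.

    `send` is any zero-argument callable returning (status_code, body).
    Sleeping between attempts is elided here; real code would call
    time.sleep(backoff_delay(attempt)).
    """
    for attempt in range(max_attempts):
        status, body = send()
        if status != 429:
            return status, body
    return status, body
```

Full jitter (a uniform draw over the whole backoff window) spreads retries out more evenly than a fixed exponential schedule, which matters when many clients are throttled at once.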
Resolved -
The issue has been resolved. All systems are working as expected.
Feb 24, 09:58 UTC
Monitoring -
GCP has identified the root cause and applied a fix. We are currently monitoring the results.
Feb 24, 07:27 UTC
Update -
We are continuing to investigate this issue with GCP. GCP is actively working on applying mitigation.
Feb 24, 00:23 UTC
Update -
We are continuing to investigate this issue.
Feb 23, 22:18 UTC
Update -
Starting Feb 21, 2026, 21:00 UTC, we have been observing elevated Kafka latency in the GCP asia-southeast1 region for less than 0.1% of produce and fetch requests for some customers. Median latencies for both produce and fetch requests are not impacted. We are actively working with GCP to identify the root cause of the issue. We will provide the next update in 2 hours or earlier.
Feb 23, 22:17 UTC
Investigating -
We are currently investigating this issue.
Feb 23, 22:12 UTC