[Scheduled] Core Router Maintenance CZ41
- Network

During this maintenance period, we will be replacing our existing core routing platform in ColoZüri CZ41. This will result in traffic being rerouted multiple times. Thanks to our redundant network design, only selected customers who are directly connected to the core routing platform (e.g. for BGP transit) will experience an outage during maintenance. We will notify these customers directly.
[Scheduled] Core Router Maintenance ES13
- Network

During this maintenance period, we will be replacing our existing core routing platform in NTT ES13. This will result in traffic being rerouted multiple times. Thanks to our redundant network design, only selected customers who are directly connected to the core routing platform (e.g. for BGP transit) will experience an outage during maintenance. We will notify these customers directly.
[Resolved] Degraded service connectivity on Deploio cluster
- Deploio

We're currently experiencing issues on the Deploio cluster. As a result, connections to services such as databases and key-value stores are not working reliably.

To resolve this, we need to replace one of the cluster nodes. During this maintenance, single-replica apps may experience brief downtime.

All issues have been resolved.

We will carry out some adjustments on the Deploio nodes. While we do not expect any impact, there is a small chance of a brief disruption to Deploio apps during this work.

All maintenance work is now complete.
[Resolved] Local Network Redundancy Degraded (NTT E Shelter)
- Network

Network redundancy has been degraded in one of our racks in the NTT (E Shelter) data center. Devices continue to operate properly using redundant paths, and engineers are on the way to the data center to replace the defective device.

We've now resolved the incident by replacing the affected switch in the rack. Thanks for your patience.
[Complete] Login Infrastructure Maintenance (auth.nine.ch)
- General
- Deploio

Our login infrastructure (auth.nine.ch) will be undergoing maintenance between 18:00 and 20:00 CEST. During this time the service will be restarted several times, and you may be temporarily unable to log in. This affects both cockpit and nctl.

The scheduled maintenance is now underway. We'll keep you updated on our progress.

The maintenance is now complete. Thanks for your patience.
[Resolved] Instability with On-Demand Services
- Virtualization
- Managed Kubernetes
- Deploio

We are currently experiencing degraded performance affecting our on-demand services, with KVS being the most impacted. Our engineering team is aware and actively working to resolve the issue. We apologize for any inconvenience this may cause and will share updates as the situation develops.

The incident has been resolved and all services are back up and running. Thank you for your patience.

Root cause: underlying VM restarts triggered new service instances to be spawned, causing interruptions to non-HA services.

Everything is fully operational again.
[Resolved] NKE Storage Backend Issues
- Managed Kubernetes

Due to an issue with our hypervisor, we're currently experiencing problems managing PVCs (Persistent Volume Claims) on NKE clusters. Kubernetes-based workloads with storage requirements (NKE workloads, Deploio, vClusters, and certain managed on-demand services) may be impacted. We've opened a ticket with the vendor and are actively working on a resolution.

The problem has been identified and we're working on a fix together with the vendor.

The issue is still ongoing and we are actively working to resolve it with the vendor. For now, we recommend not launching new deployments or updating existing workloads until further notice.

The issue has been resolved, and any pending tasks that could not complete have been worked through. According to our monitoring, everything is back to normal. We will keep a close eye on the situation and will close this status tomorrow morning if it remains stable.

We are still seeing recurring issues, but we've implemented a workaround to limit the impact on customer workloads. Since the issue has reappeared, we are continuing to work with the vendor on a permanent fix.

We've now resolved the incident. Thanks for your patience.
[Resolved] Instability with On-Demand Services
- General
- Virtualization
- Managed Kubernetes
- Deploio

We are currently experiencing instability affecting our on-demand services. Some requests may fail or respond with increased latency. Our team is investigating the issue and working on a resolution. We will provide updates as the situation develops.

We've now resolved the incident. Thanks for your patience. We are continuing to monitor the situation.
[Complete] Core Router Maintenance ES34
- Network

During this maintenance period, we will be replacing our existing core routing platform in NTT ES34. This will result in traffic being rerouted multiple times. Thanks to our redundant network design, only BGP transit customers who are directly connected to the core routing platform will experience an outage during maintenance. We will notify these customers directly.

The scheduled maintenance is now underway. We'll keep you updated on our progress.

The maintenance is now complete. Thanks for your patience.

Past notices
- No further notices from the past 30 days.