Slowness issue on AI service
Incident Report for Deepip
Postmortem

Summary

On January 27th at 8:43 PM UTC+1, our processing service consumed excessive RAM and triggered an Out-of-Memory (OOM) error. The OOM condition prevented automatic recovery and delayed service restoration.

Incident Timeline (UTC+1)

January 27, 8:43 PM
Customer impact start: document generation requests were slow or failing.

January 27, 9:24 PM
Incident marked as stable

January 27, 9:30 PM
Customer impact end

End User Impact:

Users experienced slow or failed document generation for 47 minutes.

What caused the incident?

The processing service consumed excessive RAM, which triggered an Out-of-Memory (OOM) error. The OOM condition prevented the service from recovering automatically and delayed service restoration.
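As a simplified illustration (a hypothetical sketch, not our actual implementation), an OOM kill terminates the worker process from outside before its own error handling can run, so recovery has to be driven by an external supervisor that restarts the worker:

import subprocess
import sys
import time

# Hypothetical worker entry point and restart delay, for illustration only.
WORKER_CMD = [sys.executable, "worker.py"]
RESTART_DELAY_S = 5

def supervise() -> None:
    """Restart the worker whenever it exits abnormally (e.g. OOM-killed)."""
    while True:
        proc = subprocess.run(WORKER_CMD)
        if proc.returncode == 0:
            break  # clean shutdown, nothing to recover
        # A kernel OOM kill arrives as SIGKILL (returncode -9); the worker
        # never gets the chance to handle the error in-process.
        print(f"worker exited with {proc.returncode}, restarting in {RESTART_DELAY_S}s")
        time.sleep(RESTART_DELAY_S)

if __name__ == "__main__":
    supervise()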

Corrective measures put in place to prevent this from happening again

We improved the reliability of the service's restart behavior so that it recovers more dependably after a failure.

We adjusted resource allocation for the processing service to improve performance and reduce the risk of memory-related crashes.
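As a simplified illustration of the resource-allocation point (the cap value and function names are assumptions, not our production configuration), limiting how many generation jobs run concurrently is one way to bound peak RAM usage:

from concurrent.futures import ThreadPoolExecutor

# Assumed cap, tuned to the RAM available to the service.
MAX_CONCURRENT_GENERATIONS = 4

def generate_document(request_id: str) -> str:
    """Placeholder for the real generation call."""
    return f"document for {request_id}"

def handle_requests(request_ids: list[str]) -> list[str]:
    # Bounding the pool size bounds the number of in-flight jobs and,
    # roughly, the memory they can consume at the same time.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_GENERATIONS) as pool:
        return list(pool.map(generate_document, request_ids))

if __name__ == "__main__":
    print(handle_requests(["req-1", "req-2", "req-3"]))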

We sincerely apologize for any inconvenience this incident may have caused.

Deep IP Team

Posted Jan 31, 2025 - 15:26 UTC

Resolved
Dear Users,

We want to inform you about a service disruption that occurred on January 27, 2025, impacting document generation on our platform and leading to slow or failed generation requests for 47 minutes (8:43 PM - 9:30 PM UTC+1).

Our team promptly investigated and resolved the issue by adjusting system resource allocation and improving service reliability. We have also implemented additional monitoring and preventive measures to minimize the risk of similar incidents in the future.

For full details, please refer to our post-mortem report: https://status.deepip.ai/incidents/k4jgdqkj8w52

We sincerely apologize for any inconvenience this may have caused. If you have any further questions or concerns, please don't hesitate to reach out to our support team at support@deepip.ai.

Best regards,
Deep IP Team
Posted Jan 27, 2025 - 19:30 UTC