Cloud security technologies are involved in a myriad of public and/or private functionalities that include authentication, authorisation, encryption, workload security and access controls.
Gartner predicts that by 2027, more than 70% of enterprises will use industry platforms to accelerate their business initiatives, up from less than 15% in 2023. With this increased use of cloud, organisations must invest more in technology to protect their data, applications and infrastructure services.
So, you get the picture: cloud is popular, on the rise and makes logging easy. But ask: at what cost?
How did 'log-everything' tactics gain traction?
The log-everything approach emerged from on-premises thinking, where capturing data was difficult, so organisations erred on the side of excess. Cloud environments changed that dynamic.
Today, logging is exceptionally easy, and we've mistaken ease of collection for strategic value. A typical cloud environment generates millions of log entries daily, and storage costs reflect this.
Yet when the inevitable breach occurs, security teams, faced with trying to find a needle in a haystack, struggle to reconstruct what happened because data volume has been confused with visibility.
Why less is more
Here's what an extensive logging strategy misses: attackers don't generate more logs than legitimate users; they generate different patterns. That distinction is buried beneath terabytes of routine access logs, API calls and system events that no human will ever review.
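To make the "different patterns, not more logs" point concrete, here is a minimal sketch of pattern-based detection. The identity names, resource names and baseline are hypothetical; the idea is that each event looks routine in isolation, and only a per-identity baseline exposes the anomaly.

```python
# Hypothetical events: each entry looks routine on its own;
# the anomaly is the pattern, not the volume.
events = [
    {"user": "svc-backup", "resource": "s3://backups"},
    {"user": "svc-backup", "resource": "s3://backups"},
    {"user": "svc-backup", "resource": "s3://hr-records"},  # unusual for this identity
]

# Resources each identity normally touches (assumed, built from history).
baseline = {"svc-backup": {"s3://backups"}}

def unusual_accesses(events, baseline):
    """Return events where an identity touches a resource outside its baseline."""
    return [e for e in events if e["resource"] not in baseline.get(e["user"], set())]

flagged = unusual_accesses(events, baseline)
```

A volume-based view would report three successful, authenticated accesses; the baseline comparison surfaces the one that matters.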
The log-everything strategy has become a haystack construction project, one that carries significant performance overheads. Moreover, logging too much data, or failing to remove old logs, can rapidly fill disk space, causing applications or servers to crash.
There is also something many businesses fail to take cognisance of: logging is not free. In cloud environments, high-volume logging increases ingestion and storage costs, which quickly become expensive.
Modern cloud attackers understand logging blind spots better than companies do. They know that while organisations are busy capturing authentication events, they are probably not correlating them with resource access patterns. They recognise that logs show what happened, but not the contextual why that distinguishes malicious from legitimate behaviour.
Most critically, they exploit the time lag between log generation and log analysis, a window that often stretches to hours or days.
Consider a recent engagement my team and I had where an attacker accessed sensitive data through a perfectly legitimate API call, using valid credentials, from an approved IP range. Every log entry appeared normal because the logs captured technical events, not intent.
It is important to understand that standard cloud logs generally record detailed information about activities, system states and events within a cloud environment, typically organised into management, data and network categories.
They are designed to track who did what, where and when, but not why. In this case, the breach wasn't visible until we correlated resource access with business context − information that does not appear in standard cloud logs.
Businesses spend enormous sums of money storing logs they'll never meaningfully analyse, while lacking the contextual data that would detect threats.
The financial impact of log-everything compounds the problem, and compliance frameworks encourage it by mandating retention periods without questioning log utility.
Getting it right
Effective cloud security requires selective, intelligent logging focused on high-value indicators rather than comprehensive capture.
This means logging state changes rather than status confirmations, capturing relationship data between resources, and instrumenting business logic rather than just infrastructure events. In essence, this prioritises quality over quantity.
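One way to picture "state changes over status confirmations" is an ingest filter that admits only state-changing actions to long-term storage. The action names below are illustrative stand-ins for cloud API calls, not a real provider's event catalogue.

```python
# Hypothetical ingest filter: keep state-changing events, drop routine status polls.
STATE_CHANGING = {"CreateRole", "PutBucketPolicy", "ModifyInstanceAttribute", "DeleteKey"}

def should_ingest(event):
    """Admit an event to long-term storage only if it changes system state."""
    return event["action"] in STATE_CHANGING

stream = [
    {"action": "DescribeInstances"},   # status confirmation: drop
    {"action": "PutBucketPolicy"},     # state change: keep
    {"action": "HealthCheck"},         # routine background noise: drop
]
kept = [e for e in stream if should_ingest(e)]
```

Even this crude allow-list inverts the default: instead of storing everything and searching later, the pipeline decides at ingest time what is worth keeping.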
The uncomfortable truth is that attackers succeed not despite logging but often because of it. The logs create a false sense of visibility, while masking what really matters. So, stop logging everything. Start logging what will actually be used to detect the threats that matter.
From a best-practice perspective, organisations should move away from volume-based logging to a context-driven strategy aligned to business risk.
Logging should prioritise high-impact events such as privilege changes, sensitive data access, configuration drift and anomalous API behaviour, rather than routine background activity.
These logs must then be enriched with identity, asset value and business process context so that analysts can distinguish normal operations from suspicious intent. It is equally important to reduce the time between log generation and analysis through automated correlation, alerting and response workflows.
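The enrichment and correlation steps above can be sketched as follows. The asset-value map, privileged-identity set and alert rule are all assumptions for illustration; the point is that the alert decision runs on business context joined to the event, not on the raw log line alone.

```python
# Hypothetical context: asset value and identity privilege, joined at analysis time.
ASSET_VALUE = {"s3://hr-records": "high", "s3://public-assets": "low"}
PRIVILEGED_IDENTITIES = {"admin-jane"}

def enrich(event):
    """Attach asset-value and identity context to a raw log event."""
    enriched = dict(event)
    enriched["asset_value"] = ASSET_VALUE.get(event["resource"], "unknown")
    enriched["privileged_identity"] = event["user"] in PRIVILEGED_IDENTITIES
    return enriched

def needs_alert(event):
    # Alert immediately when a high-value asset is touched
    # by an identity that is not expected to hold privilege.
    return event["asset_value"] == "high" and not event["privileged_identity"]

raw = {"user": "svc-backup", "resource": "s3://hr-records"}
alert = needs_alert(enrich(raw))
```

Running the rule inline at ingest, rather than during a later manual review, is what shrinks the generation-to-analysis window the article warns about.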
Finally, when logging is aligned to risk, context and rapid action, it becomes a detection capability rather than a costly archive of unused data.