
Data fabric focus to drive bandwidth, compute capacity demand

Data fabric solves the challenge of seamlessly accessing and consuming data and functions in distributed and remote ICT and business environments.
By Mervyn Mooi, Director of Knowledge Integration Dynamics (KID), which represents the ICT services arm of the Thesele Group.
Johannesburg, 09 Feb 2022

Data fabric is expected to be a key focus area for organisations this year, as they look to optimise the value of their data.

This is a logical progression, as organisations spread across regional boundaries and increasingly need to integrate external data into their planning and forecasting, and seek to automate ingestion, integration and exploration, embed governance, and enable self-service across the enterprise.

What are data fabrics?

Data fabrics add a semantic layer to data lakes, making the vast volumes of data spread across a complex ecosystem of devices, applications and data infrastructure more readily available for consumption and reducing time to delivery.
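
As a simple illustration of that semantic layer, consider a catalogue that maps business-friendly dataset names to physical locations, so consumers never need to know where the data actually lives. This is a minimal sketch; all system and dataset names are hypothetical.

```python
# A minimal sketch of the "semantic layer" idea: a catalogue that maps
# business-friendly dataset names to their physical locations. All names
# here are hypothetical illustrations, not any specific product.
from dataclasses import dataclass

@dataclass
class DatasetLocation:
    system: str   # e.g. "postgres", "s3", "sharepoint"
    address: str  # physical path, table or URL

# Logical names exposed to users -> physical sources hidden behind them.
CATALOGUE = {
    "sales.orders":     DatasetLocation("postgres", "erp.public.orders"),
    "marketing.events": DatasetLocation("s3", "s3://lake/events/2022/"),
    "hr.headcount":     DatasetLocation("sharepoint", "https://intranet/hc.xlsx"),
}

def resolve(logical_name: str) -> DatasetLocation:
    """Translate a business term into a physical location at query time."""
    try:
        return CATALOGUE[logical_name]
    except KeyError:
        raise LookupError(f"No dataset registered under '{logical_name}'")

print(resolve("sales.orders"))
```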

Unlike data mesh, which connects data on the fly and plugs in various functionalities, a data fabric is architecturally in place, with data interlinked, partitioned and served from a platform.

Data warehouses and data lakes are becoming limited in terms of functionality – they have grown too large and may not hold all the data the organisation needs, including data from external sources such as weather patterns or social media behaviour. Data lakes are normally organisational, whereas data fabric and data mesh promise access to any data, anywhere.

To deliver on the data needs of the future-proof enterprise, organisations will likely harness all three – data lakes, data fabric and data mesh – to bring data together, link it and enable dynamic, on-demand access for analysis.

Most organisations (big or small) have a certain degree of distribution and use different technologies to do different things. Today, with remote and hybrid workforces, organisations need to enable secure and governed access to trusted data anywhere, anytime, from any platform, bringing it into a common interface in the user experience space.

Data fabric solves the challenge of seamlessly accessing and consuming data and functions (applications) in our distributed and remote ICT and business environments.

With data sciences on the rise, the enabling capability that data fabric provides is pivotal.

This entails creating a user experience (a functional and data interface) that remotely connects to more than one data and/or application environment, regardless of the underlying infrastructure and technology, with the connection or access seamlessly set up or already in place between the environments.

Today, the methods to do this include data virtualisation, messaging and streaming, to mention but a few. These methods allow users to connect from anywhere and bring together (consume) or move any data, structured or unstructured. Most modern-day end-user interfacing tools (apps) have this functionality – such as social media applications.
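
As a sketch of the streaming method mentioned above, the snippet below consumes records as they are produced in a remote environment, using the open-source kafka-python client; the broker address and topic name are assumptions for illustration.

```python
# Illustrative streaming consumption with the open-source kafka-python
# client. The broker address and topic name are assumed for the example;
# the point is that data is consumed where it is produced, not copied in bulk.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "customer-events",                         # assumed topic name
    bootstrap_servers="broker.internal:9092",  # assumed broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Each record arrives as it is produced, so downstream analytics can
    # act on data from a remote environment without a batch transfer.
    print(event.get("customer_id"), event.get("action"))
```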

Spurred on by the demands of remote workforces and an increasing dependence on data, organisations are ramping up the use of readily available tools in a more strategically structured way to enable a seamless and automated data experience.

New resource demands

The convenience of enabling such a “wide” user experience cannot be discounted, especially in a world where data discovery and analysis are commonplace.

The extent to which data fabric can be enabled is confined by the security and access rights of the underlying environments. The speed at which data is obtained and moved is also a constraining factor, as users need to see and process data as quickly as possible.
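
The first constraint can be pictured as a fabric layer that always defers to the source system’s own entitlements; the permission model below is a hypothetical simplification, not any particular product’s API.

```python
# Sketch of the point above: a fabric access layer cannot grant more than
# the underlying systems allow, so every request is checked against the
# source system's own entitlements before any data is fetched.
# The permission model here is a hypothetical simplification.
SOURCE_ACLS = {
    "erp.public.orders": {"alice", "bob"},
    "s3://lake/events/": {"alice"},
}

def fetch(user: str, dataset: str) -> str:
    """Serve data only if the underlying environment authorises the user."""
    allowed = SOURCE_ACLS.get(dataset, set())
    if user not in allowed:
        raise PermissionError(f"{user} has no rights on {dataset}")
    return f"rows from {dataset}"  # placeholder for the real retrieval

print(fetch("alice", "erp.public.orders"))  # succeeds
# fetch("bob", "s3://lake/events/")         # would raise PermissionError
```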

But data volumes are increasing exponentially, and data will be required, processed and analysed with increasing frequency. This means a dramatic increase in the resources needed, such as bandwidth, compute power, storage and security.
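
A rough back-of-envelope calculation makes the compounding effect concrete; the volumes and growth rate below are invented for illustration, not measurements.

```python
# Back-of-envelope illustration of the resource pressure described above.
# The starting volume and growth rate are assumptions, not measurements.
daily_volume_gb = 500   # assumed data moved across the fabric per day
annual_growth = 0.40    # assumed 40% year-on-year growth

SECONDS_PER_DAY = 24 * 60 * 60
for year in range(1, 4):
    daily_volume_gb *= 1 + annual_growth
    # Sustained bandwidth needed just to move the day's data evenly:
    mbits_per_sec = daily_volume_gb * 8000 / SECONDS_PER_DAY
    print(f"Year {year}: ~{daily_volume_gb:,.0f} GB/day "
          f"=> ~{mbits_per_sec:,.0f} Mbit/s sustained")
```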

Data security is a prime concern in this environment, because there is a risk that packets of data being moved and examined could become corrupted or be intercepted.

Traditional tools to stream data and pass pockets of data to anyone are already in place, as are data virtualisation solutions, where query and reporting tools can link to any dataset and bring the results into one place for a report or presentation, without having to move the data.
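
A minimal data-virtualisation sketch along those lines, using pandas: two datasets are read from where they live and joined in memory for a report, never copied into a central store. The database file, URL and column names are placeholders.

```python
# Minimal data-virtualisation sketch: two datasets are read from where they
# live (a database and a remote CSV), joined in memory for a report, and
# never copied into a central store. All names below are placeholders.
import sqlite3
import pandas as pd

# Source 1: an operational database (SQLite stands in for any RDBMS here).
conn = sqlite3.connect("operations.db")  # assumed example database
orders = pd.read_sql("SELECT customer_id, amount FROM orders", conn)

# Source 2: a dataset published elsewhere, fetched over HTTP at query time.
customers = pd.read_csv("https://example.org/exports/customers.csv")  # placeholder URL

# The "report" is assembled only at the point of consumption; the assumed
# customers file carries customer_id and region columns.
report = orders.merge(customers, on="customer_id").groupby("region")["amount"].sum()
print(report)
```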

In doing so, organisations need to integrate into the data fabric all the tools necessary for data ingestion, extraction, testing and analytics, ensuring that lineage, observability, governance and compliance are all covered.
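
One way to picture governance built into the fabric itself is a pipeline step that records what it read and wrote as a side effect of running; the sketch below is hypothetical and not tied to any specific tool.

```python
# Hypothetical sketch of governance built into the fabric: every pipeline
# step records what it read and wrote, so lineage is captured as a side
# effect of running rather than as an afterthought. Names are illustrative.
import datetime

LINEAGE_LOG: list[dict] = []

def tracked_step(inputs: list[str], outputs: list[str]):
    """Decorator that logs lineage metadata around any pipeline function."""
    def wrap(func):
        def run(*args, **kwargs):
            result = func(*args, **kwargs)
            LINEAGE_LOG.append({
                "step": func.__name__,
                "inputs": inputs,
                "outputs": outputs,
                "ran_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            return result
        return run
    return wrap

@tracked_step(inputs=["sales.orders"], outputs=["reports.daily_sales"])
def build_daily_sales():
    pass  # placeholder for real ingestion/transformation logic

build_daily_sales()
print(LINEAGE_LOG)
```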

Most reporting and presentation solutions are already based on those methods, and they are commonplace, especially where analysts need to discover and analyse pockets of data.

For an integrated and future-proof data fabric, organisations now need to look at how these systems and tools are architected to deliver a unified view of the consumption layer and meet the needs of the broader enterprise. 
