With more than 200,000 sensors at its Pluto LNG facility in Western Australia, Woodside Energy is using SAP asset and maintenance data to optimise plant performance and maintenance.

Yet, while the company has long been data-driven, its analytics were performed in deep silos of expertise, which limited their efficacy. As part of a wider transformation program, the company formed a Data Sciences Group after its migration to the Cloud, unlocking the value trapped in these silos and sharing the insights across the organisation.

All the data is stored on Amazon S3 and is supplemented with IoT data, time-series and image data, and spectrum analysis data.
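To illustrate how a pipeline like this might land sensor readings in the lake, here is a minimal Python sketch using boto3. The bucket name, key layout, sensor ID, and reading are hypothetical assumptions for illustration, not Woodside's actual implementation.

```python
# Minimal sketch: landing plant sensor readings in S3 as date-partitioned
# JSON objects. Bucket name, key layout, and the reading are hypothetical.
import json
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

def land_reading(sensor_id: str, value: float, unit: str) -> None:
    """Write one sensor reading to a date-partitioned S3 prefix."""
    now = datetime.now(timezone.utc)
    key = (
        f"raw/iot/year={now:%Y}/month={now:%m}/day={now:%d}/"
        f"{sensor_id}-{now:%H%M%S%f}.json"
    )
    payload = {
        "sensor_id": sensor_id,
        "timestamp": now.isoformat(),
        "value": value,
        "unit": unit,
    }
    s3.put_object(
        Bucket="example-plant-data-lake",  # hypothetical bucket
        Key=key,
        Body=json.dumps(payload).encode("utf-8"),
    )

land_reading("pluto-compressor-temp-0042", 87.3, "celsius")
```

Date-partitioned prefixes like these keep raw data queryable by analytics services without moving it out of the lake.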

This Cloud-based data infrastructure is also a driver of innovation and agility.

Getting data to where it's needed, when it's needed, and at the speed of the web is critical to ensuring businesses can run at maximum effectiveness and match the ever-growing expectations of today's customers.

For many organisations carrying large technology debt and inflexible legacy systems, there is the added challenge of marrying data from traditional systems such as ERP with new data sources, often from third parties. And many of these new data sources lack the structure that makes management easier.

That issue is covered in a new research paper from AWS called Modern Data Architecture, which looks at the problem specifically from the perspective of organisations using SAP.

These organisations stand to gain real benefits from the AWS Cloud by combining SAP and non-SAP data, especially by blending semi-structured or even unstructured data with structured SAP data.
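To make that concrete, here is a minimal Python sketch of blending a structured SAP extract with semi-structured event data. The file names, columns, and join key are illustrative assumptions, not a prescribed AWS pattern.

```python
# Minimal sketch: joining a structured SAP extract with semi-structured
# IoT events. File names, columns, and the join key are illustrative.
import pandas as pd

# Structured SAP plant-maintenance extract, e.g. a CSV export of work orders
# with columns such as order_id, equipment_id, status (assumed here).
work_orders = pd.read_csv("sap_work_orders.csv")

# Semi-structured IoT events, one JSON object per line, assumed to carry an
# equipment_id field plus a nested "payload" of sensor values.
events = pd.read_json("iot_events.jsonl", lines=True)

# Flatten the nested payload, then join on the shared equipment key.
payload = pd.json_normalize(events["payload"])
events = pd.concat([events.drop(columns=["payload"]), payload], axis=1)

combined = events.merge(work_orders, on="equipment_id", how="left")
print(combined.head())
```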

Data gravity

Bringing data assets together in a foundational data lake creates both the incentive and the opportunity to move ERP and other applications and data sources closer to the lake.

The business benefit? It becomes much easier to run analytics services across the data in the lake. This is the concept of data gravity.

Data gravity starts with the foundational data lake, which allows organisations to aggregate all transaction data, operational data, and unstructured data, and which ideally provides mechanisms for managing master data.
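As a rough illustration of analytics coming to the data rather than the data moving to the analytics, the following Python sketch runs a SQL query over lake tables with Amazon Athena. The database, table names, and output location are placeholders.

```python
# Minimal sketch: querying lake tables in place with Amazon Athena.
# Database, tables, and the results location are hypothetical.
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="""
        SELECT w.equipment_id,
               COUNT(*)     AS open_orders,
               AVG(e.value) AS avg_reading
        FROM sap_work_orders w
        JOIN iot_events e ON w.equipment_id = e.equipment_id
        WHERE w.status = 'OPEN'
        GROUP BY w.equipment_id
    """,
    QueryExecutionContext={"Database": "example_data_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query started:", response["QueryExecutionId"])
```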

Of course, there are other benefits — such as the chance to build a single security framework across the data lake and all the applications adjacent to it.
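At its simplest, that single security framework can be approximated as one policy over the lake's storage. The sketch below, with placeholder account, role, and bucket names, grants an analytics role read-only access to the whole bucket; a production lake would layer finer-grained controls on top.

```python
# Minimal sketch: one bucket policy governing read access to the whole lake.
# Account ID, role name, and bucket are placeholders.
import json

import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AnalyticsReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/analytics-role"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-plant-data-lake",
                "arn:aws:s3:::example-plant-data-lake/*",
            ],
        }
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-plant-data-lake",
    Policy=json.dumps(policy),
)
```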

According to Shaun Gregory, Executive Vice President Sustainability and Chief Technology Officer at Woodside Energy, “The traditional innovation cycle in our industry is more like six or seven years for a project, and Cloud enabled us to drive that to [at first] six or seven months, and now we’re down to six or seven weeks. Our aspiration is on some of these things in prototype to be in six or seven days.”

Ultimately, data gravity pays a decision-making dividend by empowering executives. As Woodside’s chief data officer Lauchlan Wallace says, “It’s all about people. People are always a valuable asset. Having them wait, or be delayed due to the time it takes to run an algorithm, is not a place where we want to be.”

By enabling broad access to data, and the democratisation of that data, data lakes allow a wider range of stakeholders to extract value from the data. This would not be feasible without a modern data architecture.

Origin Energy, meanwhile, combined its SAP and non-SAP data into a foundational data lake to unlock data trapped in silos. Moving to the Cloud was the only viable option to keep pace with the growth of its enterprise analytics environment while providing centralised data access. Origin says benefits include easy integration of SAP data with other enterprise data sets, and data access times have improved from several days to mere hours.

The energy giant needed to break down huge silos of organisational data, merge the data, and prepare it for analytics. By 2016 it had become clear that, as the volume and complexity of its data increased, the existing infrastructure was struggling to keep up.

James Moor, General Manager of Data and Analytics at Origin Energy, said, “We just couldn’t crunch the numbers fast enough on-premises, and needed an infrastructure configuration that would support what we needed to do.”

The new approach transformed the business, facilitating faster time-to-market for new and highly personalised customer offerings while significantly reducing data costs. It also meant that company data was accessible to functional teams across the organisation, since all the workloads and databases had been consolidated into one powerful engine.

Both these examples demonstrate the power of running SAP on AWS and building a contemporary data architecture that accelerates innovation.

To learn more, read Modern Data Architecture.

This article is published by Which-50’s Digital Intelligence Unit (DIU) on behalf of AWS. DIU Members such as AWS pay to share their expertise and insights with Which-50’s audience of senior executives.
