In the next ten years, globally, we’ll see regulation, accounting and business processes involving the virtual world turned on their head. Here’s why:

This is the second in a series of articles aimed at both professionals and the wider community.

The first explained the notion of a ‘digital twin’ (a computer-generated twin of each object and process, together with all related data), and showcased current technology that is already enabling better decisions, more quickly, at less cost and with much lower risk.

This second article teases out the existential risks we all face in relying on a virtual world that purports to mirror the real world, and that also seamlessly integrates with it.

The third looks at an approach being explored by a National Workgroup to mitigate the risks by establishing a common governance framework to protect privacy, maintain security and enhance trust in the digital world – while also simplifying access. The approach is far from settled, though it is now sufficiently advanced to provide a base for thoughtful discussion, which this series seeks to ignite.

The fourth explores what the proposed approach may mean for how we procure new buildings and infrastructure.

The fifth looks at the ways we can share data to more quickly build our ‘digital infrastructure’.

It also explains why and how to get our models (and related data) on the balance sheet, and identifies a simple business process to keep our models in sync with the real world.

Underlying all these processes is a single problem: how do we ensure trust in the data?

This is a profound concern that goes to the heart of our ability to use Digital Twins for decision-making.

It is even more vexing in regard to our ‘virtual personas’.
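To make the problem concrete, the sketch below shows one basic building block of data trust: a cryptographic tag that lets a recipient check that a record has not been altered since a trusted party produced it. This is an illustrative sketch only, using Python’s standard hmac module and a hypothetical shared key; the governance approach actually being explored is the subject of the next article, and production systems would use public-key signatures and certified identities rather than a shared secret.

```python
# A minimal sketch of data authentication, using only Python's standard library.
# The shared secret key stands in for whatever credential a governance
# framework might issue (hypothetical here, not any real scheme).
import hashlib
import hmac

SECRET_KEY = b"issued-by-a-trusted-authority"  # hypothetical credential

def sign(data: bytes) -> str:
    """Produce a tag that only a holder of the key could have produced."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    """Check that the data is unchanged since it was signed."""
    expected = hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

record = b'{"room": "3.14", "floor_area_m2": 42.0}'
tag = sign(record)

assert verify(record, tag)  # untouched data verifies
assert not verify(b'{"room": "3.14", "floor_area_m2": 55.0}', tag)  # tampering detected
```

Even this trivial example shows the shape of the challenge: trust depends not on the data itself, but on who holds the keys and how they are governed.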

As the first paper showed, not only can we now make realistic models of the built environment, we can also use AI to make very realistic models of ourselves, enabling us to interact naturally in the virtual world.

Unfortunately, the same technology lets ANYONE manipulate our voice to make it appear that we are saying things we never said, persuading people to act against their own interests (in one recent case, persuading a subordinate to transfer $200,000 to a bogus account).

Video too is being manipulated to create ‘deep fakes’, making us appear to do, as well as say, what we never did.

While Supasorn suggests that ‘deep fakes’ still contain identifiable flaws, and ends on an upbeat note, a video from a group of Stanford University students shows we don’t need to wait for ‘deep fakes’ to be perfected: even simple fakes are already distorting reality.

The students’ ‘deepening’ concerns are well expressed in a deeply troubling video by law professor Danielle Citron.

Even when footage is real, the technology permits ‘plausible deniability’: “I never said or did that… it’s FAKE!”, destroying the trust upon which every community is founded.

And, just as we can manipulate the images of people, so we can more easily change our digital models of any building or place to make it appear (say) more attractive, or with a better view, or a room larger than it really is, or with fake material specifications, or anything else that could advantage one party over another.

Worse, as we link our models to sensors, and these to our power, water, energy, transport and telecoms grids, the risks to our welfare grow exponentially from cyber-crime, terrorism and war, as well as from accidental acts, or simply from random fluctuations in systems (that even now no single person, or even group, can possibly comprehend in detail).

Alongside these ‘technical’ developments, identity theft is growing, and is set to become even more of a problem as criminals exploit not only your documents, but your voice and image as well.

Everything is now uncertain in the digital world: people, places, things and events. Nothing can be trusted. No longer can we say that ‘seeing is believing’, or trust our sense that something ‘sounds off’, when ‘fake’ looks and sounds ‘real’.

Countermeasures to this dystopian future will not evolve by themselves. It will take time and deep consideration to shape the virtual world to meet our needs for privacy, security and trust.

In Australia, this work is being undertaken by (among others) a National Workgroup of more than 150 key stakeholders involved in planning, design, costing, manufacturing and construction, as well as asset and facility management, finance, leasing and sale, along with representatives of local and State governments and national and international technology providers, and the legal profession.

While the focus is on assuring ‘truth’ in the digital built environment, the same problems arise in assuring our virtual personas.

The main challenges are no longer seen to be technical (though many technical challenges remain).

The really tough ones relate to the governance framework, business processes and cultural norms within both the virtual and real worlds.

The next article in this series considers how we may mitigate these emerging risks so that we can rely on an ‘authoritative digital built environment’ and ‘self-sovereign identities’ for communication and decision-making. That would enable us to transact business via our avatars with at least the same level of assurance as in the real world (and possibly better), limiting the potential for identity theft and fraud in the process.
