Robots: a terrifying new world?
News
By Rowland Manthorpe, associate editor, Wired
It was, admittedly, a very impressive backflip. But when US robotics firm Boston Dynamics released a video at the end of last year of Atlas, an eager-looking humanoid somersaulting off a box, the reaction made it seem like so much more. One headline called it “The most terrifying video on the internet.” A viral tweet put it simply: “We’re dead.”
In the same month, news broke that US-based AI property management company Zenplace had developed a robot to provide smart guided home viewings. The headlines – which included “robots take over real estate” and “your next agent could be a robot” – were just as sensational.
The truth is, the real threat is unlikely to come from a clumsy machine – in the case of Zenplace an iPad attached to a robotic arm that resembles something between a Segway and a scooter.
Indeed, it is unlikely to come from anything that conforms to our conventional image of a robot. We know this, because we have already seen it happen.
Less than a month after that Boston Dynamics video “broke the internet”, three college-age friends faced a court in Alaska accused of doing just that.
Paras Jha, Josiah White, and Dalton Norman have pleaded guilty to an attack that forced Twitter, Reddit, Netflix and many other sites offline on October 21, 2016.
How? With Mirai, one of the largest, strongest robots that has ever existed, built from a collection of webcams, routers, and set-top boxes.
Mirai was a “botnet,” a giant digital machine assembled from hundreds of thousands of cheap connected devices, which the three hackers commandeered by exploiting basic vulnerabilities in their security settings.
Once they had control, they instructed the devices to throw blasts of meaningless data at a target to overwhelm it. When they attacked Dyn, one of the internet’s essential pieces of infrastructure, the effect was spectacular.
An FBI special agent on the case likened it to the Manhattan Project, the research effort that produced the first nuclear weapons.
Yet this creation is minuscule compared to another robot assembling in the depths of the internet.
Like Jha, White, and Norman’s botnet, it is composed of many smaller “internet of things” devices, from the phones in our pockets to the smart fridges in our kitchens.
Whereas Mirai linked 200,000 to 300,000 computers, this machine could potentially contain every device connected to the internet. Indeed, some people believe it already has.
As security expert Bruce Schneier puts it: “We’re building a world-size robot, and we don’t even realise.”
So what does this mean? And what impact could it have on real estate?
What are we dealing with?
At heart, robots are simple beings, built from a combination of three elements.
First, sensors – robotic eyes and ears tracking everything from pressure to movement to location.
Second, computer processors, which assess the data and decide what to do as a result – the robot’s “brain”.
Finally, actuators form the robot’s hands and feet, taking the processor’s commands and putting them into action.
If you combine these elements, you can build a robot. Think of a home thermostat that measures the temperature, “decides” it’s too hot, and automatically turns down the radiators in the flat.
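To make that concrete, here is a minimal sketch of the sense-decide-act loop in Python. Everything in it, the Room object, the thermostat and its 21-degree set point, is invented for illustration rather than taken from any real product.

from dataclasses import dataclass

@dataclass
class Room:
    temperature_c: float
    heating_on: bool = False

class Thermostat:
    """Sensor, processor and actuator in miniature."""
    def __init__(self, set_point_c: float = 21.0):
        self.set_point_c = set_point_c

    def step(self, room: Room) -> None:
        measured = room.temperature_c           # sensor: read the environment
        too_hot = measured > self.set_point_c   # processor: decide what to do
        room.heating_on = not too_hot           # actuator: switch the radiators

flat = Room(temperature_c=24.0)
Thermostat().step(flat)
print(flat.heating_on)  # False: the flat is too warm, so the heating goes off

The three lines inside step() are the whole story: read, decide, act.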
But the true power comes when these simple machines are plugged into a network. By doing so, you create a different kind of robot – one that is distributed and invisible.
The idea takes some getting used to. For me, it took hold when I spent time last year in Adidas’s new Speedfactory, the first shoe factory built by the brand in Western Europe for more than 40 years.
Thanks to robot production, this plant, now being replicated on a second site in Atlanta, Georgia, can turn raw materials into a trainer in a single day, as opposed to the 60 to 90 days presently needed in Asia, and all with just 150 staff.
But the automation didn’t register, because the robots didn’t look “robotic”. Instead of twirling cobbler automatons, there was a series of glass-fronted grey boxes, which staff manually loaded and unloaded as the shoe moved down the line.
Spend enough time with robots and this disappointment becomes commonplace. Some of the most significant robots yet created – the dishwasher, say, or the washing machine – don’t look like much from the outside.
But at Adidas something else was going on. These machines weren’t just performing their tasks in isolation, but recording and reporting every action for analysis back at headquarters.
Adidas used this information to create a virtual representation of the Speedfactory, known as a “digital twin”. With this tool, the team could observe the plant as it worked, locating errors or hitches in production.
Just as importantly, they could also try out different scenarios, testing the impact of different machines, or running through entire production cycles to examine the costs of a sudden switch.
“It gives us the capability to test and simulate without using actual product,” explained Adidas chief information officer Michael Voegele. First sensing, then processing, then (admittedly often manual) actuation. The Speedfactory was full of robots. But the real robot was the plant itself.
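In rough outline, a digital twin is a simulation kept in step with the readings the physical machines report, which can then be run forward under alternative scenarios. The Python sketch below is a deliberately simplified, hypothetical model of that idea; the machine names and cycle times are invented, and it is not a description of Siemens’ or Adidas’s actual software.

import copy

class MachineTwin:
    """A hypothetical twin of one production machine."""
    def __init__(self, name: str, seconds_per_unit: float):
        self.name = name
        self.seconds_per_unit = seconds_per_unit

    def update_from_sensor(self, measured_seconds_per_unit: float) -> None:
        # Keep the virtual machine in step with what the real one reports.
        self.seconds_per_unit = measured_seconds_per_unit

class LineTwin:
    """A twin of a whole line: the slowest station sets the pace."""
    def __init__(self, machines):
        self.machines = machines

    def units_per_hour(self) -> float:
        bottleneck = max(m.seconds_per_unit for m in self.machines)
        return 3600.0 / bottleneck

    def what_if(self, name: str, new_seconds_per_unit: float) -> float:
        # Try a scenario on a copy, without touching the live twin.
        scenario = copy.deepcopy(self)
        for m in scenario.machines:
            if m.name == name:
                m.seconds_per_unit = new_seconds_per_unit
        return scenario.units_per_hour()

line = LineTwin([MachineTwin("knitting", 40.0), MachineTwin("moulding", 55.0)])
print(line.units_per_hour())           # throughput as the sensors currently report it
print(line.what_if("moulding", 45.0))  # a simulated upgrade; the real plant is untouched

The useful part is the what_if call: the scenario runs on a copy of the twin, so the real line never has to stop.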
The concept of twinning was developed by Nasa during the early days of space travel. But it wasn’t until computer power increased and sensors became more sensitive that these analogue models could be turned into digital ones.
Now engineers are making digital twins of everything from cars (Maserati) to golf clubs (Callaway Golf). There’s a canal bridge in Amsterdam with a twin. General Electric creates twins of wind turbines. And of course there are those twins of each of us, based on our social media utterances and time spent idling online.
Digital twinning
It is digital twinning that can best demonstrate the power of robots on the future of real estate. And it is a million miles away from a cyborg-led property viewing.
“Millions of things will soon have digital twins,” says Klaus Helmrich of Siemens, which helped Adidas build its twins, and worked with Nasa to develop the twins for the Mars Curiosity rover.
“It is possible to create the ‘digital twin’ of a product, but also the ‘digital twin’ of a production process and the ‘digital twin’ of a product’s performance.”
Helmrich explains that this ability to simulate scenarios is like gold dust for companies, including real estate firms, where so much of what is designed, built, masterplanned and maintained increasingly relies on data.
It allows access to information that facilitates planning a response to market changes. “Quite recently,” he says, “a customer told me that he would pay almost anything for having such information.”
The principle is being extended from factories to cities. The government of Singapore is using MindSphere, Siemens’ operating system for connected machines, to digitise its infrastructure.
In July last year, the National Infrastructure Commission launched a contest to create a “digital twin” of Bristol.
The notion is still in its infancy and crucial questions – including who would pay for it – remain unanswered.
Nevertheless, it’s an idea on the rise. “‘Digital Twin’ is the new ‘Smart City’,” wrote Jeremy Morley, the chief geospatial scientist of Ordnance Survey, after watching the teams present.
One appeal of digital twins is that they make it easier to take advantage of machine learning.
For all their growing prowess, the most effective AI techniques, such as reinforcement learning, suffer from one disadvantage: they need many thousands of repetitions to produce their results.
For heavy physical robots, this is less than ideal. By contrast, digital representations can be processed millions of times with barely any friction. Digital twins turn factories and cities into playable game worlds.
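As a crude stand-in for that trial-and-error, the sketch below runs a hypothetical simulated line a hundred thousand times to find a good belt speed, a volume of experimentation no physical plant could tolerate. The “twin” here is just an invented formula, and real reinforcement learning is far more sophisticated, but the economics are the point: repetitions in software are nearly free.

import random

def simulated_line(belt_speed: float) -> float:
    """Invented stand-in for a twin: units per hour at a given belt speed.
    Too slow starves the line; too fast causes defects and rework."""
    return 100 * belt_speed - 12 * belt_speed ** 2

best_speed, best_output = None, float("-inf")
for _ in range(100_000):                # cheap in simulation, ruinous on a real line
    speed = random.uniform(0.0, 8.0)
    output = simulated_line(speed)
    if output > best_output:
        best_speed, best_output = speed, output

print(round(best_speed, 2), round(best_output, 1))  # settles near a speed of 4.17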
The first results of this digitisation are already filtering out into the world. “Generative design”, as it is known, takes the data from sensors (a car going round a racetrack, say, or a brace for scoliosis), then asks algorithms to build products that fit the brief.
The twisted, asymmetric outputs may look weird, but they use less material, and solve the problem more effectively. It is not hard to imagine factory and city organisation being run the same way.
Indeed, generative design has been cited as a bona fide method for the design of future-proofed built environments that are “free of predisposed human bias towards what ‘good’ design is”.
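A toy version of that loop might look like the sketch below: generate thousands of candidate designs at random, discard any that cannot carry the peak load the sensors recorded, and keep the lightest survivor. The load figure, the four-member layout and the strength formula are invented placeholders for a real engineering model.

import random

peak_load_kn = 7.5   # the "brief", e.g. the worst load the sensors recorded

def strength_kn(thicknesses_mm):
    # Invented surrogate model: each member adds strength in proportion
    # to its thickness.
    return 0.9 * sum(thicknesses_mm)

def material_used(thicknesses_mm):
    return sum(thicknesses_mm)

best = None
for _ in range(50_000):
    candidate = [random.uniform(0.5, 5.0) for _ in range(4)]  # four members
    if strength_kn(candidate) < peak_load_kn:
        continue                                              # fails the brief
    if best is None or material_used(candidate) < material_used(best):
        best = candidate                                      # lighter, still strong enough

print([round(t, 2) for t in best], round(material_used(best), 2))

The winning design tends to be uneven across its four members, a small echo of the twisted, asymmetric shapes the technique is known for.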
Picture a city of driverless cars, all optimising their routes according to the dictates of a vast simulation. The prospect is simultaneously dystopian and utopian.
At first glance, the clean, controlled digital twins of blue-chip brands such as Adidas and Maserati might not seem to have much in common with the sprawling, criminal Mirai botnet.
In fact, in form, they are very similar. As robots made of robots, they are superorganisms – and, if they are connected to the internet, they are, in turn, part of the largest superorganism of all.
The world-sized robot identified by Bruce Schneier doesn’t come in an easily definable shape. It doesn’t have a cool name, or really a name at all. (Schneier suggests “the World-Sized Web”; it won’t catch on.)
It’s strange in other ways too: it doesn’t have an owner, or a central repository, a place it “lives”. It’s not designed, it grows. In other words, it’s just like the internet.
Like the devices it is composed of, the robot will no doubt bring many benefits. But if the internet age has taught us anything, it’s that those benefits will not be unalloyed. Governance will be an issue. So will security.
The three young creators of Mirai built their botnet to attack competitors in the computer game Minecraft. That fact should be more terrifying than any Elon Musk fantasy of killer robots or paperclip-producing AIs.
These issues can be addressed, given patience, resolve and clarity of thought. But in order to do so, we will have to focus on the right things.
When it comes to robots, we need to stop fearfully searching for some stronger, more rational version of ourselves. No matter how cool the backflips.
Rowland Manthorpe, associate editor, Wired
What do robots and AI really mean for real estate?
In three key areas AI has enabled computers to go rapidly “from useless to utility”.
Computer vision (face and image recognition), voice recognition and natural language processing have all moved from being 20-30% error-prone a few years ago to now being on a par with – or in some circumstances even surpassing – human ability.
In effect, computers can now see, hear and read as well as we can.
And in the commercial real estate industry that really matters. The ability to understand the built environment by pointing a camera at it, to read the voluminous quantities of paperwork that weave around the industry, and to be able to interact with our customers (everyone who enters into any of our spaces and places) just by listening to what they have to say, is truly a transformational power now at our disposal.
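As one small example of the “reading” part, the open-source spaCy library can pull names, dates and sums of money out of free text in a few lines of Python. The lease clause below is invented, and the sketch assumes the small English model has already been downloaded.

import spacy

# Assumes: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

clause = ("The Tenant shall pay the Landlord an annual rent of £250,000, "
          "commencing on 25 March 2019, for the premises at 1 Example Street, London.")

for ent in nlp(clause).ents:
    print(ent.text, ent.label_)   # e.g. the rent tagged as MONEY, the date as DATE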
In an AI-powered world, the commercial real estate industry will be able to do three things of huge importance that it cannot do today.
Firstly, we will be able to understand exactly how our buildings are working at a granular level, and in so doing we will be able to run them far more efficiently and effectively.
Secondly, we will be able to understand exactly how everyone who uses our buildings, spaces and places really does use them.
Where do they go? What do they use? When do they use it? All this data will enable us to define, refine and curate the UX – the user experience – of all our customers in a way we simply cannot do now; a simple sketch of that kind of analysis follows below.
This will be a true #SpaceAsAService world where we will be able to provide exactly the spaces and services that people need, wherever and whenever they need them.
And finally, we will be able to understand exactly who our customers are, what they need, desire and are pleased by in a way that has not been possible to date.
As we understand how our buildings operate, how they are used, and the needs and requirements of those who use them via having access to vastly more data, we will be able to build a better built environment for them.
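Here is the simple sketch of usage analysis promised above: counting which spaces people visit and when. The badge events, space names and timestamps are entirely invented; in a real building the data would stream in from access control, Wi-Fi or occupancy sensors.

from collections import Counter
from datetime import datetime

# Hypothetical events: (person, space, timestamp)
events = [
    ("p1", "meeting-room-3", datetime(2018, 6, 4, 9, 5)),
    ("p2", "meeting-room-3", datetime(2018, 6, 4, 9, 7)),
    ("p1", "cafe",           datetime(2018, 6, 4, 12, 30)),
    ("p3", "quiet-zone",     datetime(2018, 6, 4, 14, 0)),
]

# Where do they go?  Which spaces get used most?
visits_per_space = Counter(space for _, space, _ in events)

# When do they use them?  Visits by hour of day.
visits_per_hour = Counter(ts.hour for _, _, ts in events)

print(visits_per_space.most_common())   # e.g. [('meeting-room-3', 2), ('cafe', 1), ...]
print(sorted(visits_per_hour.items()))  # e.g. [(9, 2), (12, 1), (14, 1)]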
Apply robotics to the mix and we might also find that how we construct our buildings will change entirely as well.
Antony Slumbers, digital strategist