
Heart of the Matter

Maritime’s culture of self-reliance is at odds with machine-centred automation. Bridging the divide between the physical and tangible, and the digital and virtual, needs humans at its heart.

When Germanwings flight 9525 crashed into the Alps on 24th March this year, the shock and grief of the loss were immediately followed by speculation as to what could possibly have happened.

In that initial 24-hour period before the events onboard became clear, many press reports chose to focus on the highly automated operation of the Airbus A320, and to what extent it could have played a role in the tragedy.

“The A320 is known to be very easy to fly. Actually some pilots even don’t like it because it’s too easy to fly, it’s fully automated and it provides computer assistance to pilots preventing them from overstressing the plane, from slowing down too much. The computer will cut in if you try to make a manoeuvre that might be too extreme,” said pilot Marin Medic speaking to one news outlet. “So basically it’s kind of a nanny-aircraft. If you make a mistake it will try to catch it and correct it. So as far as basic handling of the aircraft goes it should have been even easier to handle than let’s say a classical type like the Boeing.”

A plane that goes out of its way to stop you doing something extreme and is very easy to handle is exactly what we all want, isn’t it? From the original autopilots, to computerised automation that kicked off with the first fly-by-wire aircraft in the 1970s, aviation has been steadily evolving in pursuit of just that aim. But in recent years there have been a growing number of serious questions about the unintended consequences of decades of increasingly sophisticated automation.

Following many experiments and much research, the evidence points to the fact that our desire to dial down the risk of human pilot error is actually creating a situation where it becomes more likely. In recent years the crashes of Air France Flight 447 in the Atlantic and Continental Flight 3407 in Buffalo, both in 2009, and of Asiana Flight 214, which crashed on landing in San Francisco in 2013, have all been linked to pilot errors related to the automation onboard.

It may sound like a rather simple observation, but what it comes down to is that if you allow the plane to fly itself, pilots get out of practice. Between take-off and touch-down most aircraft these days are flown by the computer, and that leads to what Qantas has called ‘automation addiction’. Aviation researcher Matthew Ebbatson describes it as ‘skill fade’: “Flying skills decay quite rapidly towards the fringes of ‘tolerable’ performance without relatively frequent practice,” he says. That becomes a real problem when pilots who are used to sitting back and looking out of the window are suddenly faced with an emergency situation the computer can’t cope with.

But the technology that allows planes to fly themselves, cars to drive themselves and will soon allow ships to sail themselves, is not levelling off. Not by a long chalk. The advances in algorithms, connectivity and machine learning mean that it is growing at an exponential rate. But it is doing more than automating the work we don’t want to do; it is actually altering that work to suit itself.


The cars of the 1970s broke down. A lot. But at least they were designed to be fixed by humans. The automation we’ve introduced since – from cars to aeroplanes – is designed to supplant, not support the human, and that’s beginning to cause problems.

Take the cars of the 1970s. They broke down. Frequently. Particularly if you had one from British Leyland here in the UK. To set the scene for younger readers, they also had windows you had to roll by hand, windscreen wipers that sometimes went on a go-slow, and vinyl interiors which adhered themselves like napalm to bare legs on hot summer days.

But these cars differed in a more fundamental way than comfort. These were cars that you could lift the bonnet of, take a look around and actually fix if the need arose. But as computers got better and started to run the car for you, skills like using the choke, double de-clutching and breaking an egg into the radiator to plug up a leak have gone the way of the Dodo. Today the vast majority of cars on the road will simply give you a fault code that it takes another computer to recognise, and some highly skilled technicians to fix. As a result most people stuck at the side of the road will have zero chance of doing anything useful with their car to get it moving again.

Now no one’s arguing that the world would be a better place if we still had to break eggs into our car radiators occasionally—there are enough people maintaining classics who are handing down the old ways of the British Leyland Jedi—but there’s a wider point here.

The automation we’ve introduced into our cars, as into our aeroplanes, isn’t built around supporting the human being. It is designed to supplant the human being wherever possible. The idea that machines are better than humans has been integral to the development of computing and automation from the outset. And the downside of this machine-centred development is beginning to reveal itself in unexpected ways.

In an article for the Wall Street Journal Nicholas Carr describes what happened to the first patient in the US to die during the recent Ebola outbreak. Thomas Eric Duncan presented at the Texas Health Presbyterian Hospital in Dallas but was misdiagnosed, leading to a lack of appropriate treatment which might have saved his life. The misdiagnosis was of course very sad, but the reasons it happened have bigger implications.

According to research published in the journal Diagnosis, the digital templates used by the hospital’s clinicians to record patient information may have contributed to a dangerous kind of ‘tunnel-vision’. “These highly constrained tools,” the researchers write, “are optimised for data capture but at the expense of sacrificing their utility for appropriate triage and diagnosis, leading users to miss the forest for the trees.” They conclude that medical software can’t be a “replacement for basic history-taking, examination skills, and critical thinking.”

The efficiencies that computing has brought to every industry from aviation to healthcare have meant that in most cases the way in which humans work has been altered to make it easier for the machines. And in a double-whammy, by taking away the routine tasks that are often the building blocks of higher cognitive input—taking histories, flying the plane, or plotting courses on a map—we have allowed our skills to atrophy and compromised the very thing that makes us superior to the machine.

When Professor Stephen Hawking made his comments about the inevitable dominance of artificial intelligence over humans, they were widely reported. But it isn’t the artificial intelligence getting smarter that’s the real problem. The real problem is that to date we’ve voluntarily dumbed down to give it a leg up.

Airline pilots are trained not to override the automated systems unless they’re willing to justify why. Shipping takes a more relaxed view. We just switch stuff off.

The evidence is showing us that our insistence on building automation around the machine and not the human is not sustainable in the long term. And that’s as key for shipping as it is for aviation or medicine. Crewless, autonomous ships are on their way, but right now we’re entering a highly disruptive interim period when the ships are going to get much smarter, and with relentless speed, and we’re still going to have to sail them safely and efficiently.

But in shipping and maritime we have a very different relationship with technology. Airline pilots are well aware that the automated systems on board are there primarily to ensure the safe and efficient operation of the aircraft. As a result they are expected to fully justify any decision to override the automated systems onboard, and can face serious consequences from the airline if they do. Here in shipping, we take a rather more relaxed view. Which quite often takes the form of just switching stuff off.

Ship operators have spent increasing amounts of money on technology solutions, both to comply with safety and environment-related mandates and to improve efficiency and oversight. Privately many are tearing their hair out over the fact that crews just won’t use the technology they’ve been given. Expensive route optimisation software is worthless if the Master decides to head out of port like a scalded cat and sail the ship like it’s been stolen until he gets some miles under his belt, then slow down and take it easy.

Admittedly some of this is generational but a lot more of it is about trust. The bottom line is that many crew don’t trust the technology they’ve got on board. In maritime we have a culture of self-reliance which machine-centred automation strategies are having difficulty challenging. And we all know that culture eats strategy for breakfast.


The overwhelming message of ECDIS is that the gubbins inside the box is more important than you are, and if you want to get it out then it’s up to you to learn to speak its language.

ECDIS, the Electronic Chart Display and Information System, is a perfect example of machine-centred automation, perceived by many as something foisted on the industry by the IMO e-maritime agenda, automating the basic functions of the navigator and thereby degrading their historically vital skill set on the bridge.

A brief look at the dozens of different types of ECDIS, each with its own unique interface and logic, is instructive. The overwhelming message of ECDIS is that the gubbins inside the box is more important than you are, and if you want to get it out then it’s up to you to learn to speak its language.

Humans have accepted that premise in their interactions with technology for some time. It has in part been driven by programmers creating from the machine’s perspective, rather than from the human’s. But there is a new wave of scientists now who are upending the traditional machine-centred model of automation and putting the human at the heart of things instead.

Utilising machines effectively means finding ways to bridge the divide between the physical and tangible and the digital and virtual, allowing us to communicate and work with machines in a way that works for us, first and foremost.

Researchers from Australia’s RMIT University have done just that, developing a system that lets drones communicate with air traffic controllers, not via a screen interface, but using a synthesised voice. The system was developed by RMIT in collaboration with Thales Australia’s Centre for Advanced Studies in Air Traffic Management (CASIA), and software engineering firm UFA Inc. It utilises UFA’s ATVoice Automated Voice Recognition and Response software, allowing drones both to respond verbally to spoken information requests delivered by radio, and to act on clearances granted by air traffic controllers.

“Our project aimed to develop and demonstrate an autonomous capability that would allow a drone to verbally interact with air traffic controllers,” said Dr. Reece Clothier, leader of the RMIT Unmanned Aircraft Systems Research Team. “Using the system we’ve developed, an air traffic controller can talk to, and receive responses from a drone just like they would with any other aircraft.”
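
To make that interaction pattern concrete, here is a deliberately simplified Python sketch of the drone’s side of such a dialogue: transcribed controller speech comes in, a spoken readback goes out, and a clearance is acted on. The callsign, the parsing and the flight-management stubs are all invented for illustration; none of this reflects the ATVoice software itself.

```python
# Purely illustrative: a toy dialogue loop in the spirit of a drone that reads
# back and acts on an air traffic control instruction. Speech recognition and
# synthesis are assumed to happen elsewhere; only the decision logic is shown.

import re

CALLSIGN = "UAV123"        # hypothetical callsign for the example
_target_altitude = 3000    # feet; stub for the drone's flight management state

def set_target_altitude(feet: int) -> None:
    """Stub standing in for a command to the drone's flight management system."""
    global _target_altitude
    _target_altitude = feet

def current_altitude() -> int:
    """Stub: report the altitude the drone is holding."""
    return _target_altitude

def handle_instruction(transcribed: str) -> str:
    """Given the transcribed text of a controller's transmission, act on it
    and return the drone's readback for a speech synthesiser to voice."""
    climb = re.search(r"climb .*?(\d+)\s*feet", transcribed, re.IGNORECASE)
    if climb:
        altitude = int(climb.group(1))
        set_target_altitude(altitude)                       # act on the clearance
        return f"Climbing to {altitude} feet, {CALLSIGN}"   # read it back
    if re.search(r"say (your )?altitude", transcribed, re.IGNORECASE):
        return f"Maintaining {current_altitude()} feet, {CALLSIGN}"
    return f"Say again, {CALLSIGN}"

if __name__ == "__main__":
    print(handle_instruction(f"{CALLSIGN}, climb and maintain 5000 feet"))
    print(handle_instruction(f"{CALLSIGN}, say altitude"))
```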

This kind of human-centred automation is not geared to eventually remove humans from the equation, but to optimise the utility of both human and machine. And there are other developments which are going to make that interaction even more human-focussed.


We’ve written before about SAFFiR, the Shipboard Autonomous Firefighting Robot, a human-sized autonomous robot developed by the US Navy which is capable of finding and suppressing shipboard fires. Whilst it’s also designed to work seamlessly with human firefighters, it’s not hard to see how—in common with all automation to date—it is also expected at some point and in some circumstances to replace them. But whilst SAFFiR is a remarkable piece of technology, there is another piece of firefighting tech which has been developed not to replace, but to work as an extension of, the human.

Called Robot Reins, the small mobile robot—equipped with tactile sensors—will lead the way for firefighters moving through smoke-filled buildings, saving vital seconds by identifying objects and obstacles. Developed by King’s College London and Sheffield Hallam University, with funding from the Engineering and Physical Sciences Research Council (EPSRC), the robot not only acts as a pathfinder but, using haptic feedback, will send vibrations back through the reins to provide data about the size, shape and even stiffness of any object it finds.

This is just one application of haptic technology, the potential of which could be huge in bridging that gap between human and machine. The technology uses an actuator to convert electrical, hydraulic or pneumatic energy into vibrations, which can be managed and controlled by software that determines the duration, frequency, and amplitude. When external forces engage the receptors in our somatosensory system, humans respond to that touch, texture or vibration in a highly-sophisticated way. By translating digital interaction into tactile feedback haptic technology is opening up the possibility of doing away with screens and headgear and making machines touch and speak to us more like we speak to and touch each other.
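
To make that software layer a little more tangible, here is a minimal Python sketch, assuming a hypothetical actuator interface, that describes a haptic cue by exactly those three parameters (duration, frequency and amplitude) and maps a sensed property, such as the stiffness of an obstacle a Robot Reins-style pathfinder reports, onto drive samples for the actuator. The names and the mapping are invented for illustration.

```python
# Illustrative only: software describing a haptic cue by duration, frequency
# and amplitude, then rendering it as drive samples for an actuator.
# HapticPulse, stiffness_to_pulse and render_waveform are hypothetical names.

import math
from dataclasses import dataclass

@dataclass
class HapticPulse:
    frequency_hz: float   # how fast the actuator vibrates
    amplitude: float      # drive strength, 0.0 to 1.0
    duration_s: float     # how long the cue lasts

def stiffness_to_pulse(stiffness: float) -> HapticPulse:
    """Map a sensed object stiffness (0.0 soft to 1.0 rigid) to a tactile cue:
    stiffer obstacles feel like sharper, stronger, shorter buzzes."""
    stiffness = max(0.0, min(1.0, stiffness))
    return HapticPulse(
        frequency_hz=80.0 + 170.0 * stiffness,   # 80 Hz (soft) up to 250 Hz (rigid)
        amplitude=0.3 + 0.7 * stiffness,
        duration_s=0.4 - 0.25 * stiffness,
    )

def render_waveform(pulse: HapticPulse, sample_rate: int = 8000) -> list:
    """Generate the sine-wave drive samples a controller would send to the actuator."""
    n = int(pulse.duration_s * sample_rate)
    return [pulse.amplitude * math.sin(2 * math.pi * pulse.frequency_hz * i / sample_rate)
            for i in range(n)]

if __name__ == "__main__":
    cue = stiffness_to_pulse(0.8)     # a fairly rigid obstacle ahead
    samples = render_waveform(cue)
    print(cue, f"-> {len(samples)} samples")
```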

Consumer products like gaming and even the Apple smartwatch are already making use of this technology, and for us in maritime the potential of haptic feedback for both remote operations and maintenance, and simulation and training could be massive.

Critics of remote operations, autonomy and maintenance often point to the inability of someone on land to feel and experience the swell and movement of a ship at sea. Haptic technology, and in particular a new technique developed by researchers at the University of Bristol using projected ultrasound to directly create floating, 3D shapes that can be seen and felt in mid-air, could change all that.

Building on previous work at the university, the researchers have used an array of ultrasonic transducers to create and focus compound patterns of ultrasound that shape the air at which they are directed.

To make these shapes visible, the manipulated air is directed through a thin curtain of oil and a lamp used to illuminate it. According to the researchers, this results in a system that produces such accurate and identifiable shapes that users can readily match an image of a 3D object to the shape rendered by the prototype ultrasound system.
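
The principle behind that focusing is simple enough to sketch: each transducer in the array is driven with a phase offset chosen so that every wavefront arrives at the chosen focal point in step. The Python below shows only that textbook phased-array relationship for an invented 16-element linear array; it is not the Bristol team’s code.

```python
# A back-of-the-envelope sketch of phased-array focusing: compute the phase
# offset for each transducer so its emission arrives at the focal point at
# the same moment as every other element's. Array geometry is invented.

import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 C
FREQUENCY = 40_000.0     # Hz; 40 kHz is typical for airborne ultrasound transducers

def element_positions(n: int, pitch: float = 0.01) -> list:
    """A simple linear array of n elements spaced `pitch` metres apart along x."""
    offset = (n - 1) * pitch / 2.0
    return [(i * pitch - offset, 0.0, 0.0) for i in range(n)]

def focus_phases(elements: list, focal_point: tuple) -> list:
    """Phase (radians) for each element so all emissions arrive at the focal
    point together: the farthest element fires first, nearer ones are delayed."""
    wavelength = SPEED_OF_SOUND / FREQUENCY
    distances = [math.dist(e, focal_point) for e in elements]
    reference = max(distances)
    return [2 * math.pi * (reference - d) / wavelength for d in distances]

if __name__ == "__main__":
    array = element_positions(16)
    phases = focus_phases(array, focal_point=(0.0, 0.0, 0.20))   # 20 cm above the array
    print([round(p, 2) for p in phases])
```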

“Touchable holograms, immersive virtual reality that you can feel and complex touchable controls in free space, are all possible ways of using this system,” said Dr Ben Long, Research Assistant from the Bristol Interaction and Graphics (BIG) department at the University of Bristol. “In the future, people could feel holograms of objects that would not otherwise be touchable, such as feeling the differences between materials in a CT scan or understanding the shapes of artefacts in a museum.”

That’s pretty clever, but in combination with an algorithm this haptic feedback could be truly revolutionary. Haptic technology will allow us to feel the digital world; add an algorithm and it could allow the machine to feel and interpret the physical.

The Robot Reins not only guides the follower and offers tangible feedback, it also senses any hesitation or resistance and adjusts its pace accordingly. This is adaptive automation, using sensors and algorithms to take mental and physical feedback from the human and translate it into a real-time appreciation of the state of the human being that the machine can understand.

Adaptive automation sees the system translate human feedback and then, based on it, assign responsibility between human and machine either to assist the human or to keep them engaged. That’s precisely what we need to start developing for seafarers.

Using that knowledge the machine can then decide how much, or how little, responsibility to assign to the human, and how much to manage itself—once again in real time. If the system detects that the Master or the pilot is struggling with a difficult procedure, it will allocate more tasks to itself to free up the human. But if it senses the human is losing focus or interest, it will shift more of the workload back again, forcing the human to concentrate their attention on the task and build their skills.
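
A minimal sketch of that allocation logic, assuming invented workload and engagement scores standing in for whatever a real system would actually sense, might look like this:

```python
# Illustrative adaptive-automation loop: watch estimated operator workload and
# engagement and shift tasks between human and machine accordingly.
# The thresholds and task names are invented for the example.

from dataclasses import dataclass, field

@dataclass
class OperatorState:
    workload: float     # 0.0 relaxed to 1.0 overloaded
    engagement: float   # 0.0 disengaged to 1.0 fully focused

@dataclass
class TaskAllocator:
    tasks: list
    machine_tasks: set = field(default_factory=set)

    def rebalance(self, state: OperatorState) -> None:
        """Shift one task towards or away from the machine based on the operator."""
        if state.workload > 0.8:
            # Operator is struggling: the machine takes over another routine task.
            for task in self.tasks:
                if task not in self.machine_tasks:
                    self.machine_tasks.add(task)
                    break
        elif state.engagement < 0.4 and self.machine_tasks:
            # Operator is drifting: hand a task back to keep them in the loop.
            self.machine_tasks.pop()

    def human_tasks(self) -> list:
        return [task for task in self.tasks if task not in self.machine_tasks]

if __name__ == "__main__":
    allocator = TaskAllocator(["position fixing", "collision-avoidance lookout", "radio watch"])
    allocator.rebalance(OperatorState(workload=0.9, engagement=0.7))   # busy approach
    allocator.rebalance(OperatorState(workload=0.2, engagement=0.3))   # quiet watch, attention fading
    print("Machine handles:", sorted(allocator.machine_tasks))
    print("Human handles:", allocator.human_tasks())
```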


Haptic feedback could bridge the gap between human and machine, allowing us to feel the swell of a ship remotely, for example.

The implications of this adaptive automation are far reaching. It could eventually remove the need for traditional training altogether, allowing crews on flight decks and bridges to be trained and upskilled daily as they go about their tasks.

But perhaps most significantly it would change the machine from being an uncommunicative, inflexible boss to adding value as part of a team that seafarers trust, and which the machine itself has learnt to trust.

In fact the Robot Reins are already doing just that. Based on the way the human is moving and on their previous actions, the robot is programmed to predict the human’s next actions. And using its algorithm, in tests with blindfolded volunteers, it could even successfully detect the human’s level of trust in it.

Of course the Germanwings disaster wasn’t the result of error, computer or human; it was something far more wretched. But it doesn’t change the fact that aviation has pushed machine-centred autonomy to a point where it now has to find ways to mitigate the problems that approach has thrown up.

Here in maritime we have a chance to avoid those problems by embracing the potential of human-centred, adaptive autonomy at sea, and building the kind of interdependent trust that is going to characterise successful adoption and management of artificial intelligence in the future.

Machines are getting very smart indeed, but humans must be at the heart of them. They should augment us, not control us. And once we demonstrate that’s possible, we’ll have a chance to build more of the badly needed trust amongst seafarers.

And in reality it isn’t just a chance, it’s a necessity.

Images © Getty/Wikimedia Commons/Raytheon Anschutz/easyJet/EPSRC/University of Bristol

 

This article appeared in the April 2015 issue of Futurenautics.

