
Nervous System

When it comes to technology in shipping, defining the statistical risk isn’t enough. We need a human response that takes into account the social, cultural and emotional context.

I recently gave the keynote address at a conference in London. My keynotes generally don’t pull any punches and, with an audience closely involved in implementing a key item on the maritime technology agenda, this one didn’t either.

Having covered the mega-trends and emerging technologies that were going to shape the industry—from AI and 3D printing to the expectations of Millennial (Gen Y) and upcoming Gen Z crews and employees—I put the argument that autonomous ships with few or no crew were inevitable in certain sectors, and that, as part of a hyper-connected, collaborative industry approach, they provided a real chance to finally reduce the rate of accidents and fatalities in shipping, which has remained stubbornly high for decades.

The job of a keynote should be to get people energised and challenged and to be a little bit provocative. So I was expecting some tougher questions and more entrenched views than I got.

It was clear that a small but significant number of attendees were really keen to hear these arguments being made, and they said so. And when you’re given the premise that shipping has fatality rates ten times OECD best practice, and that 85 per cent of accidents are caused by human factors which smart technology could mitigate, it’s hard to argue.

But I wish those silent dissenters had argued. Instead, over the course of the day as speakers came and went and the audience debated what had been said, one by one attendees cited warnings about the dire potential for danger and disaster at sea driven by technology adoption. They expressed a subtle but confident belief that the technology mega-trends that were going to affect humanity weren’t going to change seafarers, ships or regulators in maritime. And one by one, their colleagues reinforced those prejudices.

Despite being provided with statistical evidence that shipping was very unsafe and that technology could be used to mitigate that, a roomful of professional mariners still persuaded themselves that it was the technology which presented the real danger.

The result was that by the end of the day the premise with which the conference opened, namely that the intelligent deployment of technology offered a once in a generation opportunity to make shipping safer, had been entirely overturned.

The day closed with a solid consensus that shipping had spent a very long time creating the safe environment it currently enjoyed, and that technology was eroding the skills and knowledge which were essential to keeping seafarers, ships and the environment safe.

It was in many respects a fascinating thing to witness: a roomful of experienced and intelligent maritime professionals persuading themselves, and each other, that the technology was the real danger. The consensus they reached was irrational, but very human. Understanding why they reached it, and how we can help to prevent it on an industry-wide basis, is one of the most urgent issues shipping faces.

A few years ago the results of a study by researchers at the Karolinska Institute in Stockholm, Sweden were published in the British Journal of Cancer. It was one of those stories which the media—and particularly the British media—love. The researchers reported that daily consumption of that great British staple, a fry-up, increased your risk of pancreatic cancer by 20 per cent. Naturally, most people heard this news over breakfast, nose to nose with the smoked bacon and oozing snagger on its way to establish an intestinal beach-head from which it was destined ultimately to wreak a horrible revenge on their soft, innocent, pink underbelly.


Headlines warned us that even a bacon sandwich a day was going to take us 20 per cent closer to a cancer which is not only almost symptomless in the early stages but also very difficult to treat, and which therefore has one of the highest mortality rates. Little wonder, then, that the reports made a big impact, and even now, three years on, the idea that sausages or a bacon sandwich will increase your risk of cancer is still widely accepted and referenced.

But what did that report really tell us? According to official figures approximately five in every 400 people develop pancreatic cancer, so if all 400 ate a fry-up every day the number developing cancer would increase by 20 per cent—from five people to six. The absolute risk, however, has only risen from five in 400 to six in 400—an increase of just 0.25 percentage points. Or here’s the way it could have been reported, but never would have been: the number of people who do not get pancreatic cancer has gone from 395 to 394.
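For readers who like to see the arithmetic, the gap between the two framings can be made explicit in a few lines of Python. This is a minimal sketch using the figures above; the `absolute_risk` helper is purely illustrative.

```python
def absolute_risk(cases, population):
    """Risk expressed as a plain fraction of the population."""
    return cases / population

# Figures from the fry-up report: 5 cases per 400 people at baseline,
# and a reported 20 per cent relative increase in risk.
baseline_cases = 5
population = 400
relative_increase = 0.20

new_cases = baseline_cases * (1 + relative_increase)   # 6 cases

baseline = absolute_risk(baseline_cases, population)   # 0.0125, i.e. 1.25%
elevated = absolute_risk(new_cases, population)        # 0.0150, i.e. 1.50%

# The headline number versus the number that matters to a reader:
print(f"Relative increase: {relative_increase:.0%}")                  # 20%
print(f"Absolute increase: {elevated - baseline:.2%} of population")  # 0.25% of population
```

The same 20 per cent headline would hold whether the baseline were five in 400 or five in four million; only the absolute figure tells you how worried to be.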

The other thing that the reports didn’t tell you was that if the stress of deciding whether or not to chow down on a bacon sarnie or hot dog makes you light up a fag, then your risk of all cancers, including pancreatic, goes through the roof.

In an increasingly complex world we human beings have little choice but to rely on the data and statistics produced by researchers, regulators and governments, and to follow the advice and policy derived from them. Despite being carefully researched and mathematically unassailable, however, these statistics still don’t always help us understand the world and what’s risky in it.

The statistician Hans Rosling is famous for pointing out that the average Swede doesn’t have two legs: thanks to those with one, or none, the population as a whole has an average of 1.9999999 legs. There are other statistically correct facts you may not know, such as that the safest year of your life is the one when you’re seven, or that the radiation a CT scan exposes you to is roughly as dangerous as being a mile from the Hiroshima bomb.

Now, you may not have been aware of the risk attached to a CT scan, but most people will temper that new knowledge with the confidence that their treating physician is aware of it. Your physician is therefore going to make that risk evaluation for you, balancing the medical advantage the scan will provide against its potential dangers.

But now that you do know, you might well decide to mention it if you’re referred for a CT scan, even though the risk of that exposure causing you health problems is statistically extremely low.

Which demonstrates another real problem with risk: even accurate statistics won’t persuade human beings of the real level of risk if they’ve persuaded themselves otherwise, and, conversely, will sometimes encourage them to expose themselves to even greater risk.

Evidence of this comes from studies of the aftermath of the 9/11 terrorist attacks in the US. People didn’t want to travel by plane, which led to a leap of around 5 per cent in the number of road miles driven. According to estimates, the resulting increase in road traffic accidents meant that around 1,600 additional people died on America’s roads as a result of the ‘understandable’ bias against flying. That’s six times more people than died on the hijacked planes themselves.

That ‘understandable’ bias is also responsible for the fact that many of us with teenage sons, and particularly daughters, have a second job as a taxi service, ferrying our offspring to and from parties, cinemas and the rest. Statistically the likelihood of your daughter being attacked or abducted is vanishingly small, but it’s a risk most of us aren’t prepared to take, even though actually putting her in a car and driving on public roads is comparatively far riskier to her life and limb.


Heuristics enable us to take quick and instinctive decisions, like how to catch a ball, but when evaluating risk we need a far more comprehensive approach to analysing probabilities.

David Spiegelhalter is the Winton Professor of the Public Understanding of Risk at Cambridge University, and he spoke to the Daily Telegraph about why. “You’re troubled by what we call the asymmetry of regret,” he said. “Statistically, the chances of your child being harmed are minuscule, but because you value her so much, you’re haunted by the potential nightmare of being the unlucky one. If you know the data, but still feel justified in ignoring it – perhaps because it’s a risk to what you hold most dear – then who’s to say you’re wrong?”

When asymmetry of regret results in a few extra quid in petrol and some late-night excursions in the car, in the grand scheme of things maybe it’s no biggie. In fact that visceral, gut reaction to evaluating risk serves a very specific and useful purpose for human beings.

We generally use rules of thumb called heuristics to enable us to take quick and instinctive decisions. When you catch a ball you don’t use mathematical equations to calculate its trajectory; you use intuition and heuristics. If you tried, life would become impossibly complex.

Unfortunately impossibly complex is exactly what technology is making the world, and whereas heuristics are great for catching balls, and a whole host of other things, when it comes to evaluating risk we need a far more comprehensive approach to analysing the probabilities and outcomes of a given situation.

When that situation involves something to which we have an instinctive reaction, it becomes extremely hard. Artificial intelligence is exactly such a thing. The idea of a driverless car, or bus, or ship challenges us at an emotional level, and how that impacts our decision-making and estimates of risk is an important issue, yet one which has rarely been considered by those who manage risk professionally.

Take the British Royal Navy’s nuclear submarines, for example. Among the most complex engineering achievements known to man, a nuclear submarine presents a unique combination of potential hazards in a relatively small space. These include the structural and environmental issues common to all large ships, underwater stability, plus nuclear propulsion, explosives and, in the case of the deterrent submarine, nuclear weapons.

The Royal Navy undertakes a rigorous risk assessment of these submarines, evaluating the risk of a sailor falling into the sea alongside that of a nuclear accident. The ‘As Low As Reasonably Practicable’ (ALARP) principle is deployed to continually reduce the risk from each potential hazard, until the cost of further effort would be grossly disproportionate to the extra safety achieved.

But in practice, despite the comparative risk of nuclear accident being far lower, it attracts a disproportionate response. “Far greater resources are devoted to managing nuclear safety than for other potential submarine hazards with the same risk assessment,” confirms Admiral Nigel Guild. “This is required by a public expectation of far greater risk reduction for a potential nuclear hazard, because it is not generally understood and it is held in significant dread.”


Admiral Guild goes on, “To take a non-nuclear example, the risk of a seamanship accident, such as falling into the sea while working on the casing when the submarine is on the surface, is assessed in a similar way to any workplace potential hazard. In contrast to this, a potential nuclear event requires risk mitigation to achieve two orders of magnitude smaller risk assessment than would be sought for conventional risks. Another way of expressing this is by applying the ALARP principle: the effort required before it would be considered grossly disproportionate to the extra nuclear safety achieved is about 100 times more than for other risks.”

Thanks to a lack of public understanding and the ‘dread risk’ nature of the technology, asymmetry of regret leads the Royal Navy to invest 100 times more effort in mitigating the risk of nuclear accident than any other. So this isn’t really about safety, it’s about perception.

In a classic study, George Loewenstein developed the ‘risk as feelings’ hypothesis, identifying numerous emotionally-driven factors that help to explain how human beings react to risky situations. One of them is the vividness with which outcomes can be described or represented mentally.

Stop for a moment and consider the images which come immediately to mind when you read the word nuclear. If that list doesn’t include mushroom clouds, burning winds, birth deformities, and an invisible deadly radiation threat then you’re in a very small minority.

Now see what comes to mind when you read the words roboship, droneship, crewless or cyberthreat. Chances are those images will be mostly negative too, largely drawn from everything from the Terminator movies to legends like the Mary Celeste. But almost none of you will actually be reacting based on any experience of the technology involved.

“We feel more threatened by certain kinds of risks – ones that are unfamiliar or little understood, that we have no control over – and because we rely heavily on our own experiences and trusted recommendations, we tend to stick with decisions that make no numerical sense,” says David Spiegelhalter.


IMO needs to demonstrate authority, policy and clarity around the adoption of technology in shipping. The introduction of autonomous technology on London’s Docklands Light Railway may offer a blueprint.

And that lack of control is where autonomy is really pushing our buttons. Nuclear is bad enough, because if we get it wrong the consequences could be terrible. But autonomous technology can get it wrong all by itself—by definition we aren’t in control, or at least not in the sense we are evolutionarily accustomed to. Yet despite our visceral reaction to it, autonomy is far less likely to fail than humans are, and humans are failing, with fatal results, at sea every single day.

The speed with which shipping and maritime embrace the technology opportunity is frequently cited as coming down to one thing—policy and regulation. The delegates in London comforted themselves and each other that COLREGs would put the kibosh on autonomous ships, and that sorting that out on its own would take IMO years.

In fact, having launched the e-nautic agenda with e-navigation and the mandatory carriage of ECDIS, a catalyst for technology adoption within shipping, IMO has now taken its foot off the gas completely. With no new policy or working groups underway, the IMO e-navigation drive—a necessary precursor to more sophisticated technologies including autonomy—has fizzled out.

The autonomous ship, like the autonomous truck and car, is going to be one of the greatest changes for humankind, but by not yet engaging with it IMO is storing up trouble for itself. When interviewed by Bloomberg on the subject of unmanned ships in 2013, an IMO spokesperson said that IMO hadn’t received any proposals on unmanned, remote-controlled ships, and yet we know categorically that they are in development, and that flag states and class societies are involved.

The fact that IMO apparently isn’t involved could be an indication that it is dangerously out of touch with how fast technology is moving in shipping, or, perhaps worse, that the innovators involved—from manufacturers to class and flag—believe that IMO involvement could be prejudicial to the adoption of the technology in the industry.

But although flag, class and manufacturers will play a huge part in technology development and adoption, there is still a need for authority, policy and, importantly, clarity around the implementation of this technology. That role is one which should absolutely be filled by IMO, but if it isn’t already grasping the nettle then, on current performance, we could be looking at 15 years before any useful clarity or policy emerges. And that is just going to be too late.

In that time the vacuum could be filled by the development of a multi-tier shipping industry where one tier adheres to basic IMO compliance and the top tier begins to use technology to develop structures where it effectively regulates itself (see our Cutting the Cord article for more detail).

As DNV GL’s Tor Svensen told us in his interview last issue, we have developed a system where a ship can be certified to travel all over the world. Disrupting that without a real plan for what replaces it could have far-reaching consequences not just for shipping but for global trade itself.

There are those who privately wonder whether that might not be a good thing, and that IMO has had its day. But IMO is in a unique position to make a powerful contribution to the way we approach technology and risk perception in the industry, if it can establish between us and it what researchers describe as ‘critical trust’.


We need to be able to rely on IMO as an institution to manage risk, whilst retaining a critical attitude to its effectiveness, motivations and independence. But IMO must earn that critical trust. Its bedrock has to be a deep understanding of the issues and the technology it seeks to regulate.

Nothing has come out of IMO in recent years which gives any indication of a broad grasp of the pace, sophistication or complexity of the technology being developed, much less of how its implications will impact maritime stakeholders. The longer that situation—or perception—goes uncorrected, the more our trust in IMO to regulate successfully will be eroded.

Deep knowledge of the technology landscape is only one part, though. The other is how IMO communicates and influences the narrative on technology. Part of the reason Futurenautics was born was to counter the accepted standard of technology reporting in the maritime sphere. Real investigative reporting of technology is almost non-existent, and too often shaped by the insular prejudices of the industry. As a result, many shipping folk dismissed Rolls-Royce’s intervention on autonomous ships as merely a publicity stunt to improve its profile.

It shouldn’t take a manufacturer—even one as innovative as Rolls-Royce—to drive the technology agenda in maritime. That’s the job of policymakers, but with no steer from IMO there is nothing to report, other than prejudice and emotional response.

With the media not providing the investigative objectivity that would hold industry prejudice to account, technology and prejudice are on a collision course. But IMO could avoid that if it chooses to. The introduction of autonomous technology on London’s Docklands Light Railway shows how.

Opened in 1987, the DLR was the first mass transit system in London to use driverless trains, and it faced an understandably nervous public. But with the benefits of the system overwhelming, the DLR embarked on a proactive communications programme, explaining the system and the extensive and thorough safety trials taking place.

Exhibitions and literature put forward both the positives of the DLR and the negatives of traditional surface level transport, while the press was thoroughly briefed on the potential for the human operator on board to drive the train if required, and the oversight of humans in the control centre.

Perhaps most significantly, according to Mike Esbester of the University of Portsmouth, the novelty of the technology was downplayed. Instead, the DLR’s proponents acknowledged the debts owed to existing automatic technologies that the public might be familiar with, in a bid to demonstrate the tried and tested nature of such systems.

Autonomous ships and the technologies supporting them could bring massive and widespread safety benefits to maritime, but lack of policy, clarity and communication at the top of the industry could impede that. What is certain is that balancing risk and innovation is entering new territory and regulating and managing it is going to require bold vision, knowledge and real leadership.


The lesson of the DLR is that people can be persuaded to evaluate the risks of technology rationally, but it takes critical trust in the regulators—the humans—for them to do so.

Defining the statistical risk isn’t enough; we need a human response that takes into account the social, cultural and emotional context for the fear. If the system itself makes us nervous, the likelihood is we’ll find another. If IMO can grasp that now, it still has a lot of running to do, but it might just change maritime for the better.

I imagine that ‘nervous’ doesn’t begin to describe how passengers felt boarding Germanwings flights the morning after the tragedy of its Barcelona-Düsseldorf flight. What happened to one group was recorded on Facebook by passenger Britta Englisch.

“Yesterday morning at 8:40am, I got onto a Germanwings flight from Hamburg to Cologne with mixed feelings,” she wrote. “But then the captain not only welcomed each passenger separately, he also made a short speech before take-off; not from the cockpit, but standing in the cabin. He spoke about how the accident touched him and the whole crew; about how queasy the crew feels, but that everybody from the crew is here voluntarily; and about his family, and that the crew have families, and that he is going to do everything to be with his family again tonight. It was completely silent. And then everybody applauded. I want to thank this pilot. He understood what everybody was thinking. And he managed to give me, at least, a good feeling for this flight.”

The pilot was Frank Woiton, an experienced captain who had previously flown with Andreas Lubitz, the architect of the Germanwings disaster. He spoke to Germany’s Bild newspaper, recounting how he hugged passengers as they boarded the hushed jet. Asked why, he said: “People should see that in the cockpit there is also another human being.”

If maritime is to really benefit from the technology on the horizon then it needs IMO to come out of the cockpit, and start showing it understands that too.

Images credit © Ministry of Defence/Mayor of London/TFL/Getty Images

This article appeared in the April 2015 issue of Futurenautics

