TEDxManchester: You Are All Bionic Now

I gave the opening talk at TEDxManchester yesterday. It was a cracking event with a great range of speakers, covering everything from parenting in war zones, to freestyle dance, to online dating and musical coding.

Normally when I do a talk like TEDx I post my full script afterwards. But the reality is that my script for this talk ended up quite some way from the words I actually said. In this case the script was really just the bones of a talk against which I planned the slide deck. What I actually said when I got on stage added a lot more detail that only really fell into place during the couple of days before.

So rather than posting my script to accompany the slide deck, I thought I’d post a summary and a few interesting quotes from my research material. You can see the full slide deck here: TEDxManchester — You Are All Bionic Now

Navigate with arrow keys (or just scroll on devices that don’t support the JavaScript behind it). And read on for the thoughts behind it.

We Are All Bionic Now

The central argument of my talk was that we are all bionic now. All cyborgs enhanced by the power of pocket and remote computers to which we have happily outsourced the augmentation of our mental functions. Satnav for our sense of direction. Shared photo and video stores for memories. Calendars and digital assistants. Search engines increasing our knowledge and aiding our recall.

Because our mental image of a cyborg has been defined as the direct interface of man and machine at a physical level — the Terminator, the CyberMen, the Borg — we have missed the fact that technology has overcome the issues that obliged this physical melding when the term ‘cyborg’ was first created back in the Cold War. We now have very high performance interfaces to and between our machines. Not only can they accept rich data from a range of inputs, but they can use their processing power to make assumptions to fill in the blanks. And when they need more power they can access it on demand over fast Internet connections.

Today you no longer need to have a chip in your head or your brain directly connected to mechanical body parts in order to be a cyborg.

And that’s good because the challenges we are facing now are very different to the ones that scientists faced back in the Cold War. In a stable, developed economy like the UK our challenges are much more mental than physical. The places we need augmentation are not primarily in lifting heavy objects or surviving harsh extra-terrestrial environments. Or for that matter, war zones (though this issue drives the continued research of more physical cyborg applications).

We use our cyborg powers today to filter the morass of content that comes our way. To navigate a world that is changing ever faster. And to inform and enrich ourselves with knowledge and media, for pleasure or to help us tackle the challenges of our work.

The next step is for portions of our personality to break off from the physical whole and become semi-autonomous in the cloud and in other devices. In a total reverse of the original idea of a human brain in a robot body, fragments of human thoughts, experiences, preferences and needs will be encapsulated in code and allowed to roam across the Internet doing our bidding.

A limited micro-clone of you will be in your self-driving car, remembering your address, preferences and even preferred driving style.

Another micro-clone will handle mundane shopping tasks, ensuring that not only do you never run out of toilet paper, but that when your preferred brand isn’t available you get the next best thing based on an understanding of you.

Perhaps there should have been a moral debate about whether we want to be cyborgs. But the reality is that we are now. The question we have to address is how far we want it to go. And what we will all do with our cyborg powers.

###

Research

I found these excerpts from academic papers and books on the subject of cyborgs really useful, and fascinating:

“The use of the term ‘cyborg’ to describe a human-machine amalgam originated during the Cold War. It was coined by Manfred Clynes and Nathan Kline in Astronautics (1960) for their imagined man-machine hybrid who could survive in extraterrestrial environments. NASA, which needed an enhanced man for space exploration, sponsored their work. According to the original conception, the cybernetic organisms would remain human in a Cartesian sense; their bodies (like machines) would be altered, whilst their minds could continue their scientific research.”
TechnoFeminism, Judy Wajcman, 2004

“By including gender [in the Turing Test], Turing implied that renegotiating the boundary between human and machine would involve more than transforming the question of “who can think” into “what can think”. It would also necessarily bring into question other characteristics of the liberal subject, for it made the crucial move of distinguishing between the enacted body, present in the flesh on one side of the computer screen, and the represented body, produced through the verbal and semiotic markers constituting it in an electronic environment. This construction necessarily makes the subject into a cyborg, for the enacted and represented bodies are brought into conjunction through the technology that connects them.”
How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics, N Katherine Hayles, 1999

“I believe that these figures embody the libidinal-political dynamics of the consumerist ethos to which young people have been systematically habituated during the contemporary period. …the Cyborg has incorporated the machineries of consumption into its juvenescent flesh.”
Rob Latham, Consuming Youth: Vampires, Cyborgs and the Culture of Consumption, 2002

Why risk is greatest when you’re happy and profitable

Do you run a happy ship? Does everyone in your organisation sing from the same hymn sheet? Are you all aligned to the same goals? Do these happy staff stay with you for years and years?

Then you’re probably in trouble. Because experience has taught me that these characteristics are often the precursors to a fall.

I’ll explain.

There are two types of organisation that typically call me up for consulting engagements. A small number get in touch when they’re doing well and they want to identify the next opportunity. The majority call me when things are not going so well and they’re looking to get back on track with some insight into where their market or sector is going.

These latter organisations often have much in common. When you interview the management and staff, you find a number of key characteristics:

  • Until recently, the organisation has been profitable for a long time — often growing (or in the public sector, funded with a manageable amount of cash, year on year)
  • People stay with the organisation a long time — more than seven years — and often trained with that company
  • Staff are totally sold on the company message. Apart from the usual gripes (IT, inter-departmental communication, their boss) you hear the same story about the company and the market from everyone

In these circumstances, something happens. Or rather it doesn’t. People don’t ask hard questions. Because they don’t want to risk the comfort of the warm bath they’re in. And because with so little exposure to what’s going on outside, they don’t know which questions to ask.

If you run an organisation that sounds like this one, get help. Bring in someone with a fresh pair of eyes. Get them to take a good, hard, critical look. Listen to what they come back with, and act on it.

Crucially, don’t let this analyst stay involved too long. Six months at most. Longer than that and they will be infected by the good will and happiness. They will lose objectivity and start to believe things like “that won’t affect us” or “that doesn’t work in this market.”

Once this analyst has done your diagnosis and made a prescription, bring in other people to help you make the change. Specialists in people, technology and transformation.

Six months later, bring your analyst back and tell them to look again. Don’t be surprised if they raise new criticisms.

Repeat the process.

In this environment, the only way to keep your organisation happy and profitable is through constant evolution.

###

Need an analyst to help you see the darkness in your bright and shiny world? You need an Applied Futurist.

Get in touch and we can help you directly, or introduce you to one of our growing number of partners nationwide.

In An Information Age, Knowledge isn’t Power, It’s a Commodity

When politicians talk about a ‘knowledge economy’, it sounds like information is gold. A durable good that can be stored and trickled out to the market to keep its value high.

It isn’t.

Just a decade ago it might have been true. If you came up with a new product, process or business model, you probably had a few years’ grace before it was replicated. With the wind behind you, you could create a defensible position, for a while at least.

Technology has changed this.

Knowledge isn’t durable like gold. It’s a fast moving consumer good. A low-value, high-volume commodity. The power in knowledge today is not in holding it but managing its flow. Getting it into your business quickly, extracting its value, and moving on.

Most businesses don’t get this. And it’s not the leaders’ fault. We’ve spent years being conditioned into the idea that there is fundamental value at the heart of our businesses. That the way to improve them is to optimise what we do. Boost margins here. Squeeze costs there. Sell more. Charge more.

This is old thinking. Today, agility trumps optimisation.

The value at the heart of your business is constantly being eroded. The gold turned to lead. The power drained from the knowledge.

Technology has lowered the friction in the flow of information to the point where goods and ideas can flow much more easily. Between organisations and across borders.

Other people can do what you do. They can do it faster and cheaper. And do it in completely different ways through totally new channels. Threats can come from nowhere and become existential in a matter of months.

If you want to succeed and sustain in this fast-changing environment you have to make changes.

First, you need to make sure that you are exposed to the information that matters. That inside your market, and in adjacent or relevant markets, you are watching what is happening and taking that learning into your business. Listening to customers, listening to peers, looking for threats and opportunities around the corner. It’s too easy to run with your head down, focused on the challenges inside the walls of your own organisation.

Second, you need to ensure that information flows fast. From the customer, to the decision makers and back again. One of the first questions I ask new clients is about the length of this round trip. The real answer is often around 12 weeks. Too slow.

Ensure that you collect relevant data from your organisation. From customers, partners, departments and suppliers. Make sure that you share this information with the right people, in real time, with maximum clarity. No manual processing. No subcommittees and tiers of review where all meaning gets polished from the data.

Finally, make sure that you equip the people who matter with the power to respond. Take decisions yourself or push power to the edge. Enable people to act on the evidence that they see in a time frame that makes sense. Clue: that time frame is short.

Technology will help you to do some of these things. A well-designed customer interface. Integrated software systems. But these things only add value in a properly-structured organisation with the right behaviours.

Establishing those is much harder than signing a cheque for a new website or social media campaign.

Three Things I Want from Engineered Evolution

At the How To Change the World conference this week we heard from a range of speakers who talked in one way or another about the control we will soon have over our own physical development. Topics included the application of stem cells and other techniques in the regeneration of human tissue and organs — even to defeat ageing. And the use of psychedelic drugs to consciously expand our own thinking and change our brain plasticity to enhance learning. The options are many.

Whether through biology or technology — and frankly the boundaries between the two are blurry, given the importance of quantum physics in both — we are now in control of our own evolution. Natural selection is no longer the force it was. What traits we want to select, we have to choose, or even design. At the conference, Professor Julian Savulescu termed this ‘evolution under reason’, but you could equally call it ‘rational selection’ or ‘engineered evolution’.

This throws up a number of ethical dilemmas, particularly around the prospects for inequality, as today’s debate around gene editing is highlighting.

Assuming we can address those to the satisfaction of most — at least the rational portion — the prospects are rather exciting.

I’ve always been rather squeamish about human modification. Tattoos and piercings are not for me. And no, I’m not interested in the spam adverts for other forms of male enhancement. But there are certainly aspects of my abilities over which I would like greater control. Particularly the mental ones.

Here are three examples that are top of my wishlist.

Focus

Like most people, there are particular times of the day when I am at my best. The exact hours change between summer and winter, but it’s always first thing in the morning. It’s not always possible, or desirable, to be at my desk by 7. And if I miss my window, which may only be three or four hours at most, then my day can be deeply unproductive. I might still plough through some expenses or achieve the rare feat of clearing my inbox, but I likely won’t create anything, and that’s largely what I get paid for.

There are other periods in the day when I get bursts of creativity, but these are less predictable. Even the usual methods of seeking distraction, or inspiration, or just letting my brain freewheel on a walk, often don’t give me more than a few minutes of renewed focus.

But what if I could turn this mind state on and off, with a switch or a pill? What could I achieve then?

There are a few options for this today. I could try drugs like Adderall and Ritalin, but both are illegal without a prescription and have serious potential side effects. Similar drugs pop up as ‘legal highs’, but these carry all the same risks and more. If I were going to pop a pill I’d want it to be very well tested and regulated.

I could also try transcranial direct current stimulation, or tDCS, an increasingly popular alternative to drugs for DIY brain hackers. But again, the science on this is in its early days. While there are enthusiastic proponents, my natural scepticism leads me to want some solid trials before I start to experiment.

The answers aren’t there yet, but there are clear opportunities.

More RAM

Computers have a neat way to deal with a shortage of short-term memory. They dump a chunk of it into long-term memory and then retrieve it when it’s needed.
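That swap-to-disk behaviour — paging — can be sketched in a few lines. This is a toy illustration only, assuming a tiny "fast memory" that spills its least-recently-used item to disk (the class name and capacity are my invention, not anything from this post):

```python
import os
import pickle
import tempfile
from collections import OrderedDict

class SwappingStore:
    """Toy paging model: keep at most `capacity` items in fast
    memory; spill the least-recently-used item to disk and reload
    it transparently when it's needed again."""

    def __init__(self, capacity=2):
        self.capacity = capacity
        self.ram = OrderedDict()        # fast, limited "short-term memory"
        self.disk = tempfile.mkdtemp()  # slow, roomy "long-term memory"

    def _path(self, key):
        return os.path.join(self.disk, f"{key}.pkl")

    def put(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)       # mark as most recently used
        if len(self.ram) > self.capacity:
            old_key, old_value = self.ram.popitem(last=False)
            with open(self._path(old_key), "wb") as f:
                pickle.dump(old_value, f)   # swap out to disk

    def get(self, key):
        if key not in self.ram:
            with open(self._path(key), "rb") as f:
                self.put(key, pickle.load(f))  # swap back in
        self.ram.move_to_end(key)
        return self.ram[key]
```

A notebook or photo library plays the role of the disk here: slower to reach, but with room for everything.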

Humans do something similar. Some can do this with their own minds, with pretty reliable recall. I am not one of those people. Instead I rely on tools: notebooks, apps, my calendar, photos.

I once tried to replicate the computer’s process more precisely. I maintained what I grandly called a ‘livepad’. A single cloud-stored document, always open, on which I could record notes, ideas, my todo list, unfinished blog posts. It worked for a while but my limited interface to it (the keyboard), unreliable connectivity, and simple lack of discipline meant that I dropped it after a while.

Imagine something similar, with a better interface, and a level of intelligence to it. A place where you could record ideas that could be replayed back to you at the right time. The added intelligence in the pad may even help you to find coherence and commonality in those ideas, as well as assisting you with more mundane tasks, like remembering where to be and when.

High Bandwidth Interface

I think in words, more than pictures. Language is my preferred interface, and the way that I record and share language most frequently is via the keyboard.

The keyboard has proven to have incredible longevity. It is perhaps three hundred years old, based on the earliest patents. But it has limitations. I can only communicate words with it (for the most part). It is not that fast — certainly not in my hands.

I could try to learn to touch type, but even then I am limited to a relatively cumbersome interface. I can’t capture my thoughts on the move (though I do a decent job of writing blogs with my thumb while travelling on packed Tube trains). I could use a voice interface, but this isn’t exactly private and could be very annoying for those around me: I talk loud.

Instead I want the words to flow straight from my brain to the page, or the storage system.

This is some way off unfortunately. Though we are reaching the point where we can control artificial limbs with thoughts, the understanding of the brain on which this incredible achievement is based remains limited. For all our comprehension it is still largely a black box to us.

Evolution in Our Control

These are examples of what we might be able to add to human physiology in the years ahead. Even the drugs could be added to new glands as they are in Iain M Banks’ Culture novels. But they are elective and trivial compared to some of the choices we will have to make soon. We will have the capability to eliminate some genetically-carried diseases by selectively editing people’s genomes.

With that sort of power in our hands, we all need to think about the implications*.

###

* If you want to make a start, you could do worse than to watch this video from Professor Julian Savulescu:

Twitter ‘Favorites’: A Case Study of Evolving Social Media Etiquette

How do you use Twitter’s ‘favorite’ button? Twitter itself suggests a couple of ways that people can use it here: https://support.twitter.com/articles/20169874-favoriting-a-tweet

“Favorites, represented by a small star icon in a Tweet, are most commonly used when users like a Tweet. Favoriting a Tweet can let the original poster know that you liked their Tweet, or you can save the Tweet for later.”

Personally, I use favourites1 largely for the latter reason as part of an attempt to overcome what I still believe is one of the biggest problems on the web: discovery.

I want to know who is talking about issues that are important to me, primarily the four categories we cover: the future human, future cities, future business and future communications. I also want to know about great keynotes (and not just TEDTalks), both as a speaker who is always looking to improve, and as a conference organiser with TMRW, thinking about the next event.

I don’t have time to manually scour Twitter for people talking about these things so I use an automated tool that finds and favourites tweets containing certain key phrases2. This creates a shortlist (or sometimes a long list) of tweets for me (and Mason, my colleague who co-curates the feed) to check out.

The phrases we search for are constantly being tweaked but as you can see from looking at the current favourites list, it has a pretty good hit rate of interesting stuff. From it I find new people to follow, interesting articles and things that we might retweet.
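The mechanics of that shortlisting are simple enough to sketch. The post doesn’t name the tool it uses, so this is a minimal, hypothetical illustration of phrase-based matching (the phrase list and function names are mine, not the real configuration):

```python
# Hypothetical key phrases to watch for, in the spirit of the post's
# four categories and keynote interests.
WATCH_PHRASES = [
    "future cities",
    "future of business",
    "keynote speaker",
]

def matches(tweet_text, phrases=WATCH_PHRASES):
    """Return the watch phrases found in a tweet (case-insensitive)."""
    text = tweet_text.lower()
    return [p for p in phrases if p in text]

def shortlist(tweets, phrases=WATCH_PHRASES):
    """Filter a stream of tweet texts down to the ones worth a human
    look, mimicking the 'auto-favourite on key phrase' behaviour."""
    return [t for t in tweets if matches(t, phrases)]
```

The real tool presumably runs something like this against the live Twitter stream and favourites each hit; the favourites list then becomes the human review queue.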

We also find abuse. Like this (excuse the language).

Now this person clearly uses favourites in the other way that Twitter suggests. They’re having a very difficult time. I can absolutely see how favouriting a tweet where they were documenting their problems could be seen as offensive — if you assume that by favouriting the tweet I was ‘liking’ their misfortune.

For me and others (I know I’m not alone in this), the favourite has two meanings. It’s not as simple as a Facebook ‘Like’. But we may well be in the minority, to the extent that in the future our usage of the favourite will not only go unrecognised, but be broadly considered wrong. Maybe that’s the case already?

Either way this is an interesting little case study of how the meaning of simple gestures in social media can evolve rapidly, be interpreted differently by different people, and how that difference in interpretation can clearly cause offence.

1. Dropping the quotes and adding a 'u' from this point on.
2. By the way I'm not trying to hide the fact that the tool that I use to do this is promoted as a marketing tool, that that is how we discovered it, or that it works very well at finding us new followers as well as people to follow. But research is absolutely a key part of its value.