The term ‘automation’ once conjured up images of robots doing manual tasks; now it encompasses intellectual or cognitive tasks being undertaken automatically. We are told that the majority of financial transactions are already carried out not with pencil, paper and calculators, but via algorithms.

The images of robots scurrying around the factory floor building motor vehicles, or fulfilling customer orders in a vast warehouse as happens at Amazon, are easy enough to envisage and understand, although the programming behind these activities would be a mystery to most of us.

How many of us understand how an algorithm works, or even what one is?

Although the concept of an algorithm dates back to the 9th century, it has come into its own in this century, as we seek to automate a multitude of tasks previously done manually.

A simple definition of an algorithm is a self-contained sequence of actions to be performed, beginning with inputs and finishing with outputs.

In computer parlance, an algorithm is a well-defined procedure, a sequence of unambiguous instructions that allows a computer to solve a problem. Algorithms can perform calculations, data processing and automated reasoning tasks.

The most familiar algorithm is a kitchen recipe. It comprises the ingredients (the inputs) and the directions, instructions about how to combine the ingredients to produce the dish (the output).

Wikipedia provides an example of a simple algorithm in mathematics – a set of instructions to find the largest number in a list of numbers arranged in random order. Finding the solution requires looking at every number in the list. This simple algorithm, stated in words, reads:

- If there are no numbers in the set then there is no highest number.
- Assume the first number in the set is the largest number in the set.
- For each remaining number in the set: if this number is larger than the current largest number, consider this number to be the largest number in the set.
- When there are no numbers left in the set to iterate over, consider the current largest number to be the largest number of the set.
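Expressed in code, the same steps are only a few lines. Here is a minimal Python sketch of the procedure above (the function name is our own choice):

```python
def largest(numbers):
    """Return the largest number in a list, or None if the list is empty."""
    if not numbers:
        return None              # no numbers: there is no highest number
    largest_so_far = numbers[0]  # assume the first number is the largest
    for n in numbers[1:]:        # look at each remaining number
        if n > largest_so_far:
            largest_so_far = n   # a bigger number becomes the new largest
    return largest_so_far        # nothing left to examine: this is the answer

print(largest([3, 41, 7, 19]))  # prints 41
```

The input is the list, the output is a single number, and every step is unambiguous – exactly the definition given above.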

Such straightforward mathematical algorithms seem harmless enough. An input is processed and the output is reliably produced.

These days, though, such mathematical calculations are used in commerce and finance, for example in stock market transactions, where the computer programs of stockbrokers compete with one another to accomplish the most advantageous transactions for their clients. There are stories of stockbroking firms using ever-faster computers and building ever-faster transmission lines to the stock exchange to outdo their competitors. Even a few thousandths of a second faster transmission can make all the difference.

At times the speed and number of such competing automated instructions have brought the stock market to a halt – the so-called ‘flash crash’.

We need, though, to get away from the notion that mathematical algorithms are pure and free from bias simply because they use the science of mathematics. Cathy O’Neil, a Harvard PhD and data scientist, tells us why in her recently published book, *Weapons of Math Destruction*.

This idea is at the heart of O’Neil’s thinking on why algorithms can be so harmful. In theory, mathematics is neutral – two plus two equals four regardless of what anyone wishes the answer were. But in practice, mathematical algorithms can be formulated and tweaked to serve powerful interests. According to O’Neil, algorithms can be used to reinforce discrimination and widen inequality, by ‘using people’s fear and trust of mathematics to prevent them from asking questions.’ This can occur when aspects of life other than objective mathematical propositions are the inputs to the algorithm.

O’Neil saw those interests first hand when she was a quantitative analyst on Wall Street. Starting in 2007, she spent four years in finance, two of them working for a hedge fund. There she saw the use of weapons of math destruction, a term O’Neil uses to describe “algorithms that are important, secret and destructive”.

The algorithms that ultimately caused the financial crisis met all of those criteria – they affected large numbers of people, were entirely opaque and destroyed lives. O’Neil left the hedge fund: “I left disgusted by finance because I thought of it as a rigged system and it was rigged for the insiders; I was ashamed by that – as a mathematician I love math and I think math is a tool for good.”

Her book explains how algorithms can do this – such as the ones used to measure the likelihood a convicted person will relapse into criminal behaviour: ‘When someone is classed as “high risk”, they’re more likely to get a longer sentence and find it harder to find a job when they eventually do get out. That person is then more likely to commit another crime, and so the model looks like it got it right.’

O’Neil tells us that ’…contrary to popular opinion that algorithms are purely objective, “models are opinions embedded in mathematics”. Think Trump is hopeless? That will affect your calculations. Think black American men are all criminal thugs? That affects the models being used in the criminal justice system.’

But O’Neil tells us that sometimes it’s hard for non-statisticians to know which questions to ask. Her advice is to be persistent. ‘People should feel more entitled to push back and ask for evidence, but they seem to fold a little too quickly when they’re told that it’s complicated.’ She adds: ‘If someone feels that some formula has affected their lives, at the very least they should be asking, how do you know that this is legal? That it isn’t discriminatory?’

Algorithms have the capability to sort through vast amounts of data – so-called big data. But what data should algorithms be sorting?

We are becoming aware that our Internet browsing history, our Google searches, our contributions to Facebook, Twitter and other social media are being monitored and fed back to us in the form of suggestions about what we might buy or eat or how we should vote.

Viktor Mayer-Schönberger, co-author with Kenneth Cukier of *Big Data: A Revolution That Will Transform How We Live, Work and Think*, cautions us about: ‘… the possibility of using big-data predictions about people to judge and punish them even before they've acted. Doing this negates ideas of fairness, justice and free will. In addition to privacy and propensity, there is a third danger. We risk falling victim to a dictatorship of data, whereby we fetishise the information, the output of our analyses, and end up misusing it. Handled responsibly, big data is a useful tool of rational decision-making. Wielded unwisely, it can become an instrument of the powerful, who may turn it into a source of repression, either by simply frustrating customers and employees or, worse, by harming citizens.’

Mayer-Schönberger presents two very different real-life scenarios to illustrate how algorithms are being used. He explains how the analytics team working for US retailer Target can now calculate whether a woman is pregnant and, if so, when she is due to give birth: ‘They noticed that these women bought lots of unscented lotion at around the third month of pregnancy, and that a few weeks later they tended to purchase supplements such as magnesium, calcium and zinc. The team ultimately uncovered around two-dozen products that, used as proxies, enabled the company to calculate a “pregnancy prediction” score for every customer who paid with a credit card or used a loyalty card or mailed coupons. The correlations even let the retailer estimate the due date within a narrow range, so it could send relevant coupons for each stage of the pregnancy.’
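Target’s actual model is proprietary, but the general idea of a proxy-based score can be sketched in a few lines of Python. The products below are the ones named in the account above; the weights, and the function itself, are entirely invented for illustration:

```python
# Hypothetical illustration only: these weights are made up, not Target's model.
# Each proxy product found in a customer's purchase history nudges the score up.
PROXY_WEIGHTS = {
    "unscented lotion": 0.30,
    "magnesium supplement": 0.25,
    "calcium supplement": 0.25,
    "zinc supplement": 0.20,
}

def pregnancy_prediction_score(purchases):
    """Sum the weights of any proxy products present in a purchase history."""
    return sum(PROXY_WEIGHTS.get(item, 0.0) for item in purchases)

score = pregnancy_prediction_score(
    ["unscented lotion", "bread", "calcium supplement"]
)
print(round(score, 2))  # prints 0.55
```

The unsettling part is not the arithmetic, which is trivial, but the choice of inputs: ordinary shopping items quietly repurposed as evidence about a customer’s private life.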

‘Harmless targeting, some might argue. But what happens, as has already reportedly occurred, when a father is mistakenly sent nappy discount vouchers instead of his teenage daughter whom a retailer has identified is pregnant before her own father knows?’

Mayer-Schönberger's second example of our reliance upon algorithms throws up even more potential dilemmas and pitfalls: ‘Parole boards in more than half of all US states use predictions founded on data analysis as a factor in deciding whether to release somebody from prison or to keep him incarcerated.’

Will we all awake one day and find that our lives are being controlled secretly by forces whose self-interest, not ours, is being served? By forces that want us to buy in a certain way, transact our business in a certain way, view cultural and travel offerings in a certain way, vote in a certain way, behave in a certain way, and even think in a certain way? By forces that selectively benefit those at the top and penalize those at the bottom? By forces that increase the inequality that afflicts the world today?

Does that sound like George Orwell’s *Nineteen Eighty-Four*?

But who will listen? Are our politicians aware of the threat of algorithms? More significantly, are they capable of halting this surreptitious takeover of our world?

Have you noticed that your online searches often mysteriously throw up the very things that interest you?

If so, chances are that you may already be in thrall to the algorithm creators, already slaves to the algorithm.

Are algorithms ruling your world?

Do you feel you are being manipulated through your Internet searches?

Have you had any troublesome experiences using the Internet?

Let us know in comments below.
