John Q. Student
“Can machines think?” – Alan Turing, 1950
This is a question humans have been asking for decades, even centuries, and it is still being explored to this very moment. Alan Turing posed it in his famous essay of 1950. One core function of computers today is the systematic automation of processes, that is, of tasks that people once performed by hand. Yet computers and machines have always worked in a fashion where people command every move they make, and so the computing world is left asking, "Can machines think?" Turing, widely regarded as the father of computer science, made contributions through the years that paved the way for everyone who wishes to know whether machines can do more than what they are programmed to do.
Alan Turing was born in 1912 and lived well before the age of computer domination. As early as 1936 he had already laid out the foundations and blueprint of the computer as we know it today, envisioning the core concept of programming: enabling a machine to "implement instructions for making any calculation". Turing's expertise was not limited to computer science. He also made contributions to mathematics, where he proved that there is no general method for deciding whether a given mathematical statement is provable. That conclusion was in fact assisted by his concept of the computer, which he invented in order to explore and expand the limits of mathematics.
His early conception of a computing machine was composed of three parts: (1) a limitless length of tape, (2) a read-write head, and (3) a rule book. The tape serves as the canvas on which symbols are inscribed; the head reads those symbols back and writes new ones; and the rule book determines what action to take based on the symbol the head reads from the tape. At its core, this is how our computers still work today: a program is stored as symbols, the machine reads an instruction, and a fixed set of rules determines what it does next.
Besides being considered the father of computer science, Turing is also considered the father of artificial intelligence. In the same article in which he asked whether machines can think, he proposed a game that eventually came to be called the Turing Test. The game has three players: two humans and one computer. The computer's aim is to trick Person A into thinking that he or she is conversing with a human (Person B) rather than with the machine. The game is played through an interface, so Person A cannot see who he or she is conversing with.
However, although Turing proposed the idea of the Turing Test in 1950, it was not until late in the 20th century, specifically 1990, that it became possible to actually conduct the experiment. This was due to the limitations of the technology of his time, when the basic and most widespread function of the computer was literally to compute mathematical statements. In 1990 the Loebner Prize for artificial intelligence was founded, and it has been awarded annually ever since through a Turing Test held in a competitive setting. The programs entered in this event were not multimillion-pound projects by large companies worldwide, but mostly small projects whose teams consisted of enthusiasts.
Decades have passed since Turing's bold experimental idea, and in that time people have been left to wonder how close we are to thinking machines. The world of computer science has yet to see a machine that can completely fool a human, let alone a panel of humans. In fact, no program entered in the yearly Turing Test has fooled the panel entirely; only "most human-like" bronze awards have been given out, and the rare individual successes have usually rested on technicalities or a little rule-breaking. Despite this, Loebner refuses to change the mechanics and remains faithful to Turing's words. As he emphasized at this past year's centenary competition, "Your job is to convince the judges that you are the human."
Aside from being a computer scientist, mathematician, and major figure in artificial intelligence, Alan Turing was also deeply versed in cryptanalysis, "the art of deciphering encoded messages". He is widely known for his contributions during World War II, when he served as the leading scientist of the British code-breaking team. He led the effort to break the code of the German navy's Enigma, a considerably complex task, and succeeded. This feat made him a wartime hero for the Allied forces, and he was afterwards awarded an OBE (Order of the British Empire).
After the war, Turing stayed in Manchester for a period of time, during which his house was broken into. In the course of the investigation the police discovered that Turing knew one of the culprit's accomplices from a homosexual encounter. Keep in mind that in this era homosexuality was still considered a sin in many parts of the world, and in Britain it was a crime. The tables were turned, and somehow Turing became the criminal. Upon conviction he was given a choice of punishment: chemical castration by regular hormone injections, or a term in prison. He chose the former. Two years later he was found dead by his housekeeper, a partly eaten apple at his bedside. An official apology was issued by the British Prime Minister some 55 years after Alan Turing's death, quite a long time to wait for an apology for how he was treated.
True enough, Alan Turing molded our world into what it is today. In every nook and corner we see the fruits of his amazing imagination and intelligence. The computers we now use, whether smartphones, tablets, or laptops, are an inseparable part of our daily lives and have made all kinds of work so easy that none of us should be in any position to complain about "too much work". His question "Can machines think?" can be considered the root of all ideas regarding artificial intelligence. In movies we find different interpretations, good and bad, of what would happen if machines had a mind and a will of their own. Computer scientists continue to explore the possibilities of AI through events such as the Turing Test, the development of Apple's Siri, and online chatbots such as Cleverbot and ELIZA. His three-part machine explained the core of how our personal computers work years before it was implemented; it is amazing that he conceived these things at a time when people would probably have laughed at his ideas, had he not been so skilled at proving the relevance of his work. He was influential not only in computer science but also in other fields such as cryptanalysis and mathematics, which makes him all the more admirable. Personally, I find it difficult to accept that he died the way he did. It made me realize that however amazing his contributions to the world were, he was still only human and could do nothing but succumb to the demands and ideals of society. All the great things he did were almost obliterated when it was revealed that he was different. And although an apology was eventually given, I feel it came far too late to count as truly coming clean.
I understand that people in his time were not as liberated and open-minded as they are now, but I cannot help wondering what else Turing could have accomplished had he lived on, led a happy life, and died of natural causes. Nevertheless, he is still considered one of the greatest minds of the modern world, and we should all be thankful that such a man as Alan Turing existed.
References
History: Code Breaking. (n.d.). Retrieved February 19, 2013, from BBC: http://www.bbc.co.uk/history/code_breaking/
Meltzer, T. (2012, June 17). Alan Turing's legacy: how close are we to 'thinking' machines? Retrieved February 19, 2013, from The Guardian: http://www.guardian.co.uk/technology/2012/jun/17/alan-turings-legacy-thinking-machines
Siegfried, T. (2013, January 11). A Methodical Mind: Alan Turing. Retrieved February 19, 2013, from COSMOS: The Science of Everything: http://www.cosmosmagazine.com/features/a-methodical-mind/
- Alan Turing's legacy: how close are we to 'thinking' machines?
This year, the Loebner prize, the annual competition to find a computer that can pass for a human, was held in Turing's former stomping ground of Bletchley Park. How did they do?
- Tom Meltzer
- The Guardian, Sunday 17 June 2012 15.30 EDT
It is 100 years this week since the birth of the revered wartime codebreaker Alan Turing, and 67 years since he was awarded an OBE for leading the team, in Bletchley Park's Hut 8, that cracked the German navy's Enigma code. It has also now been 60 years since he was convicted for gross indecency, after admitting to being in a consensual same-sex relationship, and sentenced to chemical castration by means of regular injections of oestrogen, as an alternative to time in prison. It's 58 years to the month since he killed himself, and just less than three years since a British prime minister saw fit to issue an official apology for his treatment.
Though best known for the story of his wartime heroism and the appalling circumstances of his death, in academic circles, Turing's name carries other connotations. Among philosophers and computer scientists, he is known as the father of artificial intelligence, thanks in part to a single essay penned in 1950, asking the question, "Can machines think?" In the article, published in the philosophical journal Mind, Turing proposed a game capable of providing an answer: a competitive conversation in which a computer and a human attempt to convince a judge that they too are a conscious, feeling, thinking thing.
The game would come to be known as the "Turing test". At the time, it was impossible to conduct: humans had yet to create the necessary networks and software; computer programs were nowhere near intelligent enough to simulate anything resembling conversation. It took another 40 years for Turing's imagined game to become a reality, when in 1990 the American philanthropist Hugh Loebner founded the annual Loebner prize for artificial intelligence, "the first formal instantiation of the Turing test".
The prize is not, by Loebner's own admission, a rigorous academic test. The programs competing are also not necessarily the most impressive in the field: entrants tend to be enthusiasts' passion projects, rather than multimillion-pound ventures, such as the iPhone's talking assistant Siri.
Computers have not evolved quite as Turing expected them to, but Loebner has stayed determined to run the competition to the founding father's precise specifications. To mark the centenary of Turing's birth this year, the contest was held for the first time in its history at Bletchley Park, and I went along to see if a computer could manage to persuade a panel of humans that it was a real person.
"Your job," explains the award's colourful founder Loebner, to his four nervous volunteers, "is to convince the judges that you are the human." Moments later, the four of them will sit down at their screens and begin the first of four competitive online chats. Their opponents hum quietly on the table next to them: four unmanned computers, each set up by a neutral engineer, each with a different conversational software program installed, known as "chatbot", designed by AI enthusiasts to be mistaken for a human being.
Across the hall, in Bletchley Park mansion's cosy Morning Room, four judges sit at another bank of screens. In each of the competition's 25-minute rounds, the judges will hold two online chats simultaneously – one with a volunteer and one with one of the chatbots. They have not been told in advance which is the person and which the computer. If a bot manages to fool two or more of the judges, it will win its creator a gold medal engraved with Turing's image, and $100,000 (£64,000).
This is Loebner's "grand prize", which nobody has ever won. In fact, year on year, with very few exceptions, not a single judge is fooled. The last time a chatbot successfully "passed" – in a single round of the 2010 competition – it did so only because a volunteer didn't follow instructions and chose to imitate a robot. When none of the judges are fooled, a $5,000 "bronze award" is given to the bot they rank "most human-like".
Being here at Bletchley Park, says Loebner, is "like treading on hallowed ground". But Turing might have been a little disappointed with the competitors. When he proposed the game, he predicted computers would be comfortably passing the test "in about 50 years' time". Yet 62 years on, Loebner is disparaging about the competitors. "These are rudimentary," he says. "They have the intelligence of a two-year-old."
It isn't hard to see what he means. The first bot gives itself away just 10 seconds into its opening conversation. "Hi, how are you?" asks the judge in both windows. "I'm fine, thanks for asking," comes one reply, the other: "Please rephrase as a proper question, instead of 'Jim likes P.'" No prizes for spotting the human there.
Another bot blows its cover by asking: "Did you hold funerals for your relatives when they died?" (The judge's response: "No, I normally cut up the bodies and buried them myself.") A third bombards questions: "Have you recently been to a live theatre?", "Have you recently been to the art gallery?", "Do you want a hug? Do you have a child? Do you want a child? I can't."
One tries to confuse a judge by being petulant ("Do you have a point? I must have missed it"), while last year's winner, Talking Angela, does its best to fool them by posing as a teenage girl: "I really like Lady Gaga. I think it's the combination of the sound and the fashion-look that appeals to me," before coming unstuck by claiming: "I'm a cat."
As predicted, the judges aren't taken in at all. "It became apparent quite quickly in all cases," says volunteer judge Michael Stean, who is also a chess grandmaster, though he admitted to being fooled by small patches of one or two of the conversations. "I think if you went through the conversations and you edited out the answers that were obviously wrong, it would be quite a close contest."
David Levy, whose bots have won the bronze prize twice, has managed to fool a judge just once: "The first time I won was 1997. We stayed up and watched the news the night before, and I wrote a script based on that. The news was that Ellen DeGeneres came out as a lesbian." Levy's bot began all its conversations by asking the judge what they made of the news, and even shared its own opinions. "In the first section, one of the judges was completely fooled."
Though he won that year, Levy is keen to stress the many practical applications of the technology. "I think there's an absolute fortune to be made in this field," he explained. "I think already there are areas of medical diagnosis where it's been proven that computers can do better than doctors. The problem is there's a huge amount of litigation. But the logical question is which would you rather be diagnosed by, a human doctor who's 80% right or a computer doctor that's 95% right?"
It's not just medicine either. Levy is confident that in 30 or 40 years, "there will be robots that are very human-like that people will be forming friendships with, and having sex with, and falling in love with".
For now though, even this year's "most human-like computer" is unlikely to be receiving any love letters. With the rankings tallied, the $5,000 prize goes to American chatbot Chip Vivant, the same bot that told one judge, "Please rephrase as a proper question, instead of 'Jim likes P'". For its creator, American programming consultant Mohan Embar, it is success at the fifth attempt. "It feels wonderful, obviously. In the early 2000s, when reading transcripts of previous years' competition, my mouth started to water and I knew I wanted to be a part of this."
For Embar, creating his bot wasn't about deceiving the judges so much as offering them a meaningful conversation. "I'm not interested in creating a chatbot that fools people, but rather one that can empathise with and provide comfort to people who can't or don't want to get it from a real person. I've become keenly aware of the futility of creating a program that comes anywhere close to fooling someone who knows what they're doing."
Before I leave I ask Loebner if he thinks anyone will ever manage it. "It'll come," he says. "Probably long after I die."
- A methodical mind: Alan Turing
COSMOS Magazine
His work helped bring Hitler to his knees, and laid the foundations for our modern, computerised world. But Alan Turing died a victim of prejudice and intolerance.
ARGUABLY, AND IT would be a tough argument to win if you took the negative side, computers have had a greater impact on civilisation than any invention since the wheel. Sure, there was the steam engine, automobile and airplane, the printing press and the mechanical clock. Radios and televisions also created their share of societal waves. But, look around. Computers do everything TVs and radios ever did. And computers tell time, control cars and planes, and have rendered printing presses pretty darn near obsolete.
Computers have invaded every realm of life, from work to entertainment to medicine to education: reading, writing and arithmetic are now all computer-centric activities. Every nook and cranny of human culture is controlled, coloured or monitored by the digital computer. And yet, 100 years ago, no such machine existed. In fact, in 1912 the word computer referred to people (typically women) using pencils and paper or adding machines.
Fittingly, 1912 was when Alan Turing was born. And he’s the person to blame if you don’t like how computers have taken over the world because no one did more to build the foundations of computer science than Turing. In a paper published in 1936, he envisaged the principle behind all of today’s computing devices, sketching out the theoretical blueprint for a machine able to implement instructions for making any calculation.
Turing wasn’t, of course, the first to come up with the idea of a computer. Charles Babbage had grand plans for a computing machine a century earlier, and even he had precursors. Not long after Babbage, 19th century English mathematician and philosopher George Boole developed the underlying binary mathematics that modern digital computers later adopted. Even that was first conceived much earlier, in the 17th century, by Gottfried Leibniz. But it was Turing who combined ideas from abstract mathematical theory and concrete mechanical computation to describe precisely how, in principle, machines could emulate the human brain’s capacity for solving mathematical problems.
“Turing gave a brilliant demonstration that everything that can be reasonably said to be computed by a human computer using a fixed procedure can be computed by a machine,” writes computer scientist Paul Vitányi in a recent book, Alan Turing: His Work and Impact.
Tragically, though, Turing didn’t live to see the computer takeover. He died a victim of prejudice and intolerance. His work lived on, though, and his name remains linked both to the idealised machine he devised and a practical test for machine intelligence; a test that foreshadowed the powers that today’s computers have begun to attain.
BORN IN LONDON on June 23, 1912, Turing grew up in an era when mathematics was in turmoil. Topics such as the nature of infinity, set theory and the logic of axiomatic systems had commandeered the attention of – and confused – both practitioners and philosophers interested in the foundations of mathematics. Constructing an airtight logical basis for proving all mathematical truths had been established as the ultimate goal of mathematical inquiry.
But in 1931, Austrian logician Kurt Gödel dashed that hope, demonstrating that some true statements could not be proved (within any mathematical system sufficiently complex to be good for anything). In other words, no system built on axioms – presumed truths at the core of an argument – could be both complete and internally consistent. That is, you couldn’t prove all true statements about a system by deductions from its axioms.
A second deep question remained, though. Even if not all true statements can be proved, is there always a way to decide whether a given mathematical statement is provable or not?
Turing showed the answer to be ‘no.’ He wasn’t the first to figure that out; as he was finishing his paper, the American logician Alonzo Church, at Princeton University, published his own proof of such ‘undecidability.’ Turing’s triumph was not in the priority of his claim, but rather in the creative way his proof was constructed. He proved the ‘no’ answer by inventing his computer.
He didn’t actually build that computer (at first, anyway), nor did he seek a patent. He conceived a computational machine in his imagination – and outlined the essential principles by which it would work – to explore the limits of mathematics.
Turing’s machine was deceptive in its conceptual simplicity. Its basic design consisted of three parts: a limitless length of tape, marked off by squares on which symbols could be written; a read-write ‘head’ that could inscribe symbols on the tape and decipher them; and a rule book to tell the machine what to do depending on what symbol the head saw on the tape.
These rules would tell the head what to do in response to a given symbol and then which rules to use next. Suppose, for instance, the head detects a 1 on the tape. A possible rule might be to move one square to the left and write a 1; or move one square to the right and write a 0; or stay on the initial square, erase the 1 and leave that square blank. By following well-thought-out rules, such a mechanism could compute any number that could be computed (and write it as a string of 0s and 1s).
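The tape, head, and rule book described above can be sketched in a few lines of Python. This is a minimal illustrative simulator, not code from Turing's paper; the binary-increment rule book is an assumption chosen simply to show the three parts interacting.

```python
# A minimal sketch of Turing's three-part machine: tape, head, and rule book.
# The rules below are illustrative (not from Turing's 1936 paper): they
# increment a binary number written on the tape, e.g. "1011" -> "1100".

def run_turing_machine(tape, rules, state="start", head=0, max_steps=1000):
    """Simulate a Turing machine. `rules` maps (state, symbol) to
    (new_symbol, move, new_state); move is -1 (left) or +1 (right)."""
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        # Extend the "limitless" tape on demand with blanks ('_')
        if head < 0:
            tape.insert(0, "_")
            head = 0
        if head >= len(tape):
            tape.append("_")
        symbol = tape[head]                       # the head reads a symbol
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol                   # the head inscribes a symbol
        head += move                              # the head moves left or right
    return "".join(tape).strip("_")

# Rule book for binary increment: walk right to the end of the number,
# then carry a 1 back toward the left.
rules = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry -> 0, keep carrying left
    ("carry", "0"): ("1", +1, "halt"),    # absorb the carry
    ("carry", "_"): ("1", +1, "halt"),    # overflow into a new leading digit
}

print(run_turing_machine("1011", rules))  # -> 1100
```

Running it on the tape `1011` yields `1100`: the head walks right to find the end of the number, then carries a 1 back to the left, each step dictated entirely by the current state and the symbol under the head.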
One of the prime consequences of Turing’s analysis was his conclusion that some numbers could not be computed. He adopted Gödel’s device of assigning a number to every possible mathematical statement and then showed that this inability to compute all numbers implied that the provability of some statements could not be decided. (And Turing showed that his proof of undecidability was also equivalent to Church’s more complicated proof.) Turing’s result was immediately recognised as exceptional by his professor at the University of Cambridge, who advised Turing to go to Princeton for graduate school to work with Church.
TURING’S IMAGINARY COMPUTER (christened the ‘Turing machine’ by Church) offered additional lessons for future computer scientists. Depending on the type of calculation you wanted to perform, you could choose from Turing machines with different sets of instructions. But, as Turing showed, there was no need for a roomful of machines. A portion of one computer’s tape could contain the rules describing the operations needed for carrying out any particular computation. In other words, you could just give that machine a rule book (today, you’d call it a program) that tells it what to do. Such a ‘universal Turing machine’ could then be used to solve any problem that could be solved.
During his time at Princeton, Turing discussed these ideas with mathematician John von Neumann, who later set out similar principles when describing the stored program general purpose computer, the model for digital computers ever since. Today’s computers, whether Macs or PCs or teraflop supercomputers, are all Turing machines.
“Von Neumann realised that Turing had achieved the goal of defining the notion of universal computing machine, and went on to think about practical implementations of this theoretical computer,” wrote Miguel Angel Martín-Delgado of Universidad Complutense in Madrid in a recent paper in the journal Arbor.
TURNING’S THOUGHTS about his machine went well beyond the practicality of mixing mathematics and mechanics. He was also entranced by the prospect of machines with minds.
When Turing referred to the machine’s configuration as its ‘state of mind,’ he really did consider it analogous to the state of mind of a human computer, using a notepad, pencil and rulebook rather than tape, head and program. Turing’s imaginary machine demonstrated that the computing abilities of the person and the mechanical computer were identical. “What he had done,” wrote his biographer, Andrew Hodges, “was to combine a naive mechanistic picture of the mind with the precise logic of pure mathematics.”
Turing believed that people were machines – that the brain’s magic was nothing more than physics and chemistry ‘computing’ thoughts and behaviours. Those views emerged explicitly years later, when he devised the now-famous test of artificial intelligence that goes by his name. To analyse whether machines can think, Turing argued, the question must be posed in a way that enables an empirical test. As commonly described, the Turing test involves a human posing questions to an unseen respondent, either another human or a computer programmed to pretend to be human. If the computer succeeds in deceiving the interrogator, then – by Turing’s criteria – it qualifies as intelligent.
Actually, Turing’s proposal was a bit more elaborate. First, the interrogator was to pose questions to two unseen humans – one man, one woman – and attempt to determine which one was which. After several trials, either the man or the woman was to be replaced by a computer, and the game repeated, this time the interrogator attempting to tell which respondent was human. If the interrogator succeeded no more often in this task than when the respondents were both human, then the machine passed the thinking test.
Since Turing’s paper appeared, in 1950, multiple objections to his test have been raised, some of which Turing anticipated and responded to in the paper. But the test, nevertheless, inspired generations of computer scientists to make machines smart enough to defeat chess grandmasters and embarrass humans on Jeopardy! Today you can talk to your smartphone and get responses sufficiently humanlike to see that Turing was on to something. He even predicted a scenario similar to something you might see today on a TV commercial. “One day ladies will take their computers for walks in the park and tell each other ‘My little computer said such a funny thing this morning!’” he liked to say.
Turing seeded a future in which machines and people interact at a level that is often undeniably personal. But he was not around to participate in the realisation of his imaginations. Four years after that paper on artificial intelligence appeared, he was dead.
DURING WORLD WAR II, Turing had been the key scientist in the British government’s code-breaking team. His work on cracking the German Enigma code was, of course, a secret at the time, but later was widely recognised as instrumental in the Allies’ defeat of Germany. After the war, Turing returned to computer science, eventually developing software for a sophisticated (at the time) programmable computer at the University of Manchester, in Britain.
While at Manchester, he published his paper on the artificial intelligence test. Later, still at Manchester, he encountered a profound absence of intelligence in the British criminal code. During a police investigation of a break-in at Turing’s home, he acknowledged that he knew the culprit’s accomplice from a homosexual encounter. And so Turing became the criminal, prosecuted for ‘gross indecency’ under a law banning homosexual acts. Upon his conviction, Turing chose the penalty of chemical castration by hormone injection rather than serving a term in prison. His security clearance was revoked.
Two years later, Turing’s housekeeper found him dead in bed, a partly eaten apple at his bedside. It was officially ruled a suicide by cyanide. At the age of 41, the man who played the starring role in saving Western democracy from Hitler became the victim of a more disguised form of evil.
In his tragically truncated life, Turing peered more deeply into reality than most thinkers who had come before him. He saw the profound link between the symbolisms of mathematical abstraction and the concrete physical mechanisms of computations. He saw further how computational mechanisms could mimic the thought and intelligence previously associated only with biology. From his insights sprang an industry that invaded all other industries, and an outlook that today pervades all of society.
Science itself is infused with Turing’s information-processing intuitions: computer science is not merely a branch of the scientific enterprise — it’s at the heart of the enterprise. Modern science reflects Turing’s vision. “He was,” wrote Hodges, “the Galileo of a new science.”