Perceptron: A History of Neural Networks
Perceptron paved the way for GenAI, 60+ years ago
The Perceptron is the ancestor of Neural Networks and Generative AI, but it arrived 60 years too early and, above all, its promise was more than a tad overstated. People started to imagine impossible things. Besides, artificial intelligence was no match for natural human resentment, especially that of Marvin Minsky. The result was the first AI winter. Joe Bloggs might think that ChatGPT is a “revolution”. But for scientists, computer scientists, philosophers and those who like to read them, it is above all an evolution: the evolution of a way of thinking that goes back a very long way. Which comes as no surprise to readers of Kurt Vonnegut (Player Piano, 1952).
A History of Neural Networks: the Perceptron
In a fascinating radio programme on the French public radio station France Culture, Antoine Beauchamp described the invention of the ‘Perceptron’, a deliciously ’50s-sounding name, as the France Culture host pointed out. In the course of that interview, I discovered that the New York Times had gone wild over the invention of the Perceptron. I went looking for traces of a Times piece on the subject, and I found this 1958 article.
It mentions the IBM 704, “a 5-ton computer the size of a room – powered by a series of punched cards.” After 50 attempts, explains Melanie Lefkowitz of Cornell University, “The computer learned to distinguish between cards marked on the left and those marked on the right.”
Here’s the story told by the Times in July 1958.
A Device that Learned by Doing
Back in 1958, the Navy introduced the early stages of an electronic computer to a group of representatives of the news media, a machine the US Navy believed possessed capabilities far beyond our imagination. A wire-service reporter covered that visit in a 1958 New York Times Perceptron piece, still visible today in the Times’s TimesMachine archive.
This computer was deemed “the embryo of a computer” and was expected to “walk, talk, see, write, and even reproduce itself”. More astonishingly, it was “anticipated to be aware of its own existence”.
This technological marvel, the Weather Bureau’s $2,000,000 “704” computer, demonstrated its learning capabilities by distinguishing right from left after just fifty attempts. The Navy conducted the demonstration for the press, showcasing the potential of this embryonic technology (embryonic literally, since they compared the computer to an embryo). The reporter was clearly fascinated to witness such advances in the field of computing back then.
Neural Networks and Anthropomorphism
It’s interesting to see the anthropomorphism bestowed on AI from the very beginning. And it’s still going on today when we call ChatGPT “he” or “she”. And let’s admit it, we all do that. Here the reporter described the Perceptron as an “embryo”, as if computers were a new form of earthlings.
Back then, the Navy announced its plans to use this research to build the first of its “Perceptron thinking machines” (more anthropomorphism). The completion of this ambitious project was expected in about a year, at an estimated cost of $100,000.
The obsession with reading and writing dates back a long way in AI history. It’s strange that Yuval Harari has overlooked that. Just so you know, the first computer-generated poems date back to 1957!
Frank Rosenblatt From Cornell University
The Perceptron was designed by Dr. Frank Rosenblatt, who conducted the demonstration himself. He declared that “the machine would be the first device to think as the human brain. As do human beings, Perceptron will make mistakes at first, but will grow wiser as it gains experience,” the Times reported.
Dr. Rosenblatt, a research psychologist at the Cornell Aeronautical Laboratory in Buffalo, declared that “Perceptrons might be fired to the planets as mechanical space explorers”, according to the Times journalist.
More interestingly, the machine was said to be able to tell right from left by looking at “squares” on the cards it was fed. All this happened without human controls, and the computer showed a stunning ability to “learn by doing”.
In the first fifty trials, the machine made no distinction between them. It then started registering a “Q” for the left squares and an “O” for the right squares. Dr. Rosenblatt said he could explain why the machine learned only in highly technical terms, but that the computer had undergone a “self-induced change in the wiring diagram”.
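That “self-induced change in the wiring diagram” is what we would now call a weight update. As a rough illustration only, not Rosenblatt’s actual hardware, here is a minimal sketch of the perceptron learning rule on hypothetical four-pixel “cards”, where the label says whether the mark sits on the left:

```python
# Minimal perceptron sketch (modern formulation, not the Mark I hardware).
# Each "card" is a tiny binary image; label 1 = marked on the left, 0 = on the right.

def predict(weights, bias, x):
    # Step activation: fire (1) if the weighted sum crosses the threshold.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=50, lr=1.0):
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # The "self-induced change in the wiring diagram":
            # nudge each weight in the direction that reduces the error.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Hypothetical four-pixel cards: mark on the left (1) or on the right (0).
cards = [([1, 1, 0, 0], 1), ([1, 0, 0, 0], 1),
         ([0, 0, 1, 1], 0), ([0, 0, 0, 1], 0)]
w, b = train(cards)
print([predict(w, b, x) for x, _ in cards])  # -> [1, 1, 0, 0]
```

Because left-marked and right-marked cards are linearly separable, a few passes of this rule are enough for the weights to settle, which is the modern reading of what the 704 demo was doing after its fifty trials.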
Self-Learning, Back in 1958
When Geoffrey Hinton declared that he couldn’t understand what GenAI was doing or why, that wasn’t new either, even though this early form of self-learning was arguably very limited.
You can have a look at the full New York Times Perceptron article at this address, should you wish to buy a reprint. I will quote from the article and give my comments hereafter.
The incredible prowess of the Perceptron is remarkable, but so is the overconfidence of its inventor. And it was precisely this smugness that led to the AI winter of 1974. Doesn’t Mr Rosenblatt remind you of another, more contemporary AI character?