Author: Ashley Woodward
Recently it seems that everyone has been jumping on the AI bandwagon. The French philosopher Raymond Ruyer (1902-1987) made a pretty good head start when, in 1954, he published his book Cybernetics and the Origin of Information. The term Artificial Intelligence had not even been invented yet (that came in 1956), but the topic of thinking machines was already being investigated under the name of cybernetics. Recently I have been working on a translation of this book, and have of course been asking myself how Ruyer’s arguments stack up with the benefit of hindsight. As we would expect, much is outdated: he says it would be difficult to imagine machines being successful with crosswords, translation, and chess, all areas where we have since seen great advances. But it is often also remarkable what he did manage to see.
One of the surprisingly prescient things he comments on back in the 1950s is self-driving cars (now understood to be part of the area of AI called Autonomous Systems). In his Cybernetics book, he suggests that ‘an automaton could be set up to drive a car. A radar, or photoelectric cells, could inform the steering effectors; an acoustic device could inform the acceleration effectors, or tiered effectors, for changes of speed.’ This description is pretty close to how self-driving car technology today, such as it is, actually works. Multiple companies are developing such technologies, and most use a variety of inputs – for example, radar, lidar, and cameras. Tesla is alone in using only cameras, connected to a sophisticated AI-driven system for processing machine vision and controlling the car accordingly.
But how autonomous could a car really be?
Ruyer continues in a sceptical vein:
It is difficult to conceive – although technological progress can achieve it asymptotically – an automation for the driving of a vehicle such that the driver only has to press a control button: ‘Maximum speed according to the circumstances’, so that the machine achieves it mechanically and that between the pure will of the driver and its realization, there would be no psycho-physiological intermediary.
To test his intuitions, let’s imagine Ruyer is magically transported to the present, and is offered a test drive of the latest Tesla. What would he make of it? Would it force him to change his mind, and admit that he was wrong?
Since 2014, Elon Musk has been promising Tesla’s investors more or less every year that fully self-driving cars are only a year away. In 2022, Tesla announced Full Self-Driving Capability in its cars. We can imagine Ruyer behind the wheel, switching the car to Autopilot mode, and sitting back and enjoying the ride as it drives itself down the highway.
However, he had better not fall asleep. Or even take his hands off the wheel. Tesla’s website, in response to the Frequently Asked Question ‘Do I still need to pay attention while using Autopilot?’, answers:
Yes. Autopilot is a hands-on driver assistance system that is intended to be used only with a fully attentive driver. It does not turn a Tesla into a self-driving car nor does it make a car autonomous.
Before enabling Autopilot, you must agree to “keep your hands on the steering wheel at all times” and to always “maintain control and responsibility for your car.”
This is important, because self-driving cars are known to make mistakes, which need to be corrected by the intervention of a conscious human driver. In 2022, it was reported that there had been 400 self-driving car crashes in the previous year, with Teslas involved in 70% of them. While some dispute this, it has been reported that accidents are significantly more likely with self-driving cars than with fully human-driven ones. The problem is what’s known as ‘real world AI’, and Musk has recently said that cars won’t be fully autonomous until this problem is solved. The issue is that while AIs perform well in controlled environments, in the messy ‘real world’ there are a seemingly infinite number of unexpected things that can go wrong (a cow crossing the road, something falling from the sky, a stop sign that’s difficult to see, etc.). Self-driving cars are prone to misinterpreting their inputs. A frequently reported problem is sudden braking as if there is an obstacle, when there isn’t one (which admittedly does seem much better than the reverse).
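To put the reported figures in perspective, a quick back-of-the-envelope calculation using the per-million-mile accident rates cited in the note at the end of this article (9.1 for self-driving versus 4.1 for human-driven cars) gives the rough magnitude of the difference. This is an illustrative sketch only, not a statistical analysis:

```python
# Illustrative comparison of the reported accident rates
# (figures as cited in the note below; not a controlled study).
self_driving_rate = 9.1  # accidents per million miles, self-driving
human_rate = 4.1         # accidents per million miles, human-driven

ratio = self_driving_rate / human_rate
print(f"Self-driving accident rate is roughly {ratio:.1f}x the human rate")
# roughly 2.2x
```

On these numbers, self-driving cars would be involved in accidents a little over twice as often per mile driven, though such raw rates say nothing about accident severity or the driving conditions involved.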
So even in the latest Tesla, Ruyer has to keep his hands on the wheel, and pay close attention to the road. He is disappointed, but not too surprised. His arguments throughout Cybernetics and the Origin of Information – and many other works, such as his 1966 book The Paradoxes of Consciousness and the Limits of Automation – insist that technologies can only be auxiliaries for consciousness; that they can only intervene to help human beings achieve purposes which they set for themselves, and can appreciate.
The real test for Ruyer would not even be the car’s ability to drive somewhere he wants to go without him having to pay attention and supervise it. One of his main critical targets is a basic contention of cybernetics, that human beings and machines are fundamentally no different. This belief gives credence to the idea that humans will one day be replaced by machines. Ruyer explains with another reference to the self-driving car:
People who fear the mechanization of humanity through technology seem to believe that, for example, automobiles, through the power of improvement, will first have automatic steering, then will be able to follow a road on their own according to a program, allowing the owner to stay at home while their car travels. They will then be able to choose their own route, according to the roads indicated as scenic by a guide; then they will be able to explore the roads themselves and determine which are scenic.
Of course, even if we are not quite there yet, we can today easily imagine cars driving around by themselves, using image recognition software and algorithms with aesthetic criteria to determine which routes count as ‘scenic’. The cars could then take us on these scenic routes, for us to enjoy. We can even imagine them taking themselves out for a drive for reasons we don’t understand, just as twenty of Cruise’s self-driving taxis spontaneously decided to meet at an intersection in San Francisco.
But what Ruyer is getting at here is that it would be absurd to imagine the cars wanting to go for a drive in order to ‘enjoy’ the scenery for themselves. (It would be like the artificially intelligent car K.I.T.T. from the 80s TV show Knight Rider getting some recreation on its days off from chasing bad guys.)
So what would impress Ruyer – and perhaps force him to admit that he was wrong – would not really be a kind of ‘test-drive’ at all, but if his (more advanced) Tesla took itself out for a pleasant drive while he is at home, just for the fun of it. But he thinks this will never happen:
These fears are entirely childish […] Something transcendent, in man and beyond man, will always frame his industrial machines.
His idea of ‘framing’ here seems as relevant today as it was back in the ‘50s, at the dawn of the information age. For all the talk of Artificial Intelligence and automation today, and all its impressive results, everyone but the occasional credulous Google engineer knows that machines do not really have intelligence in the same way that humans do. They might one day soon be able to emulate many intelligent human behaviours perfectly, or perform them even better than humans (as they already do in some areas). But it is conscious human purposes which still frame what AIs do, by ‘enveloping’ them with their intentions, and having a necessary place at the beginning and end of their processes. It’s humans who create machines in the first place, and who are satisfied or dissatisfied with their results, according to what humans want. It still seems absurd to seriously entertain the idea that machines could really want anything (at least for now).
Raymond Ruyer, Cybernetics and the Origin of Information (1954), translated with Amélie Berger-Soraruff, Andrew J. Illiadis, and Daniel W. Smith, Rowman & Littlefield, forthcoming January 2024.
9.1 accidents per million miles driven for self-driving cars, versus 4.1 for human-driven cars: