A global archive of independent reviews of everything happening from the beginning of the millennium
Bluecar Photo: Francisco J Gonzalez
DRIVERLESS CARS 2014 - PART 2
(JULY - DECEMBER)
Reviewed by ANDRE BEAUMONT
See also Driverless Cars 2014 (January - June) for all the hard technical detail about driverless cars which perhaps we will not have to cover again this year.
9 July 2014
Vincent Bolloré's Bluecar scheme, or Autolib', was introduced in Paris with proper entrepreneur's flair in December 2011 with 250 lightweight electric cars purpose-designed by Pininfarina.
Numbers had reached 1,750 (using 5,000 charging points) in 2012 and will reach 3,000 (using 6,600 charging points) before the end of 2014. They provide an inexpensive way to hire a small electric car.
As with Tesla (who were somewhat later) there was some proprietary battery technology, lithium metal polymer in this case, behind the idea. There was also the concept of using the 'big data' from the movement of the vehicles in numerous ways, such as to assist wider traffic management. In addition, charging the cars would take place at times when the wholesale cost of electricity was lowest. Given the great difficulty in storing electricity on a large scale, the idea of using the batteries of a large fleet of cars as an electricity reserve for demand management was also explored.
Perhaps it is a simple twist of fate that a Bolloré company, Blue Solutions, will soon be bringing these cars to London. Maybe it is like dropping 'a coin into the cup of a blind man at the gate'.
Privately, I have advocated bringing the Google cars to Britain. We are one of two countries in the EU that do not legally require a driver to be at the controls of a car. So we have an exploitable first mover advantage. Yet we do not have the technology, except some nascent expertise with pods. Google and Nissan do. So we should invite one or the other to bring their best technology here.
Bolloré will build out the infrastructure for electric vehicles in London. Then he should simply sell his concession to Google for billions and we can see the Bluecars replaced by driverless cars.
As can be seen, one lightweight electric self-drive car would be replaced by another lightweight self-driving car.
Then we can get on with the experiment of the driverless car coming to Britain - in London.
31 August 2014
When the video of the latest Google car came out four months ago there was something troubling - but one had to treat a promotional prototype as a promotional prototype.
The occupant/car interface was all wrong, an error an experienced car maker would not be making.
20 or more years ago we got rid of lift attendants. When you went to some hotels or department stores a lift attendant would be in the lift, ask you which floor you wanted to go to and operate the lift for you.
Once attendants were dispensed with, people pressed buttons in the lift for the floor they wanted to go to and if they got it wrong they pressed another button or two and eventually got to the floor they wanted. This is how most people still use lifts.
Now there are lifts where it is impossible to change the destination once you are inside. You should have remembered to enter the floor you wanted on a keypad outside the lift before you entered. The confusion caused is often remedied by lift jockeys standing on key floors redirecting and helping people get to the floor they want. This is all wrong, too.
In the car video a passenger presses a button and off the passengers go to a pre-selected destination. Presumably you have to press the button again if you want to stop the car before the destination is reached but that is all you can do.
This is not particularly to criticise Google. I'd like to see it demonstrate an updated interface to me one day.
The principles, though, should perhaps be as follows:
1) A driverless car should operate without a driver.
2) An autonomous car should not be remotely controlled in most aspects of navigation. Autonomous in this context is taken to mean the car makes most of the navigation decisions.
(Remote control of haul trucks around mine workings, or even satellite control of fleets of commercial vehicles running as roadtrains, are quite different cases.)

3) The passenger should be able to modify the destination, or pause and halt the car, when in the car without becoming the driver.
If the passenger has to take over as a conventional driver, perhaps using conventional controls, then it defeats some of the objectives of having robotic automation.
It is like reintroducing dedicated human control of lifts at critical points by using lift jockeys.
If a driverless car has no driver that must remain the case but that should not preclude passengers giving commands to the car from within it.
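The principles above can be made concrete with a small sketch. This is a hypothetical passenger interface, not any manufacturer's design: every name here (`PassengerInterface`, `planner`, `request_safe_stop` and so on) is invented for illustration. The point is structural - the passenger issues intentions, and the car's own autonomy stack decides how to carry them out, so control never passes to a human driver.

```python
from enum import Enum, auto

class RideState(Enum):
    EN_ROUTE = auto()
    PAUSED = auto()
    HALTED = auto()

class PassengerInterface:
    """Hypothetical passenger commands that never hand over driving control.

    The `planner` object stands in for the car's own navigation system,
    which remains responsible for every driving decision."""

    def __init__(self, planner):
        self.planner = planner
        self.state = RideState.EN_ROUTE

    def modify_destination(self, new_destination: str) -> None:
        # Principle 3: change where we are going, not how we drive.
        self.planner.replan(new_destination)

    def pause(self) -> None:
        # Ask the car to pull over safely; the car picks where and how.
        self.planner.request_safe_stop(resumable=True)
        self.state = RideState.PAUSED

    def resume(self) -> None:
        self.planner.resume_route()
        self.state = RideState.EN_ROUTE

    def halt(self) -> None:
        # End the journey early; still the car's manoeuvre, not the passenger's.
        self.planner.request_safe_stop(resumable=False)
        self.state = RideState.HALTED
```

Note that nothing in this interface exposes steering, braking or throttle: the passenger can redirect the journey without ever becoming the driver.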
The issues are quite easily illustrated with an example.
I think the city centre to airport run is the classic type of route where early driverless cars could demonstrate economic viability.
Recently I took an early morning taxi from central London to Heathrow airport Terminal 5. The route was a clear run but the drop-off area was heavily congested. The driver stopped near the beginning of the drop-off area whilst what we really needed was the further end of it.
With a human driver you can ask "could you drive on to the end, please?" but had a driverless car's destination been simply entered as 'Terminal 5', with simple push button start and stop, the real autonomy that people want from a small passenger vehicle - to go wherever they want and to make modifications in the light of conditions and wishes - might not have been deliverable.
In the UK, the Automotive Council's driverless pods project in Milton Keynes will concentrate heavily on the vehicle-user interface.
In my view the early limiting factor for driverless cars is not the robotics, the sensor technology or the automotive technology, all of which can be placed onboard a vehicle to give it autonomy, but the vehicle-user interface.
Once in a driverless car, entering commands by mobile phone is too slow and fiddly for traffic conditions.
The realistic options may be voice command or visual representation of the roadspace ahead on a fairly large screen with touch commands.
Google has considerable experience of both of these; traditional vehicle manufacturers have it in vehicle-user interfaces. It will be interesting to see who gets to robustly useable solutions first.
Visions for driverless vehicle projects can differ but there is a worrying lack of people who can nail down all aspects and not miss things.
When I studied architecture we studied subjects like structures, environmental sciences and building construction not so that we would know how to design a structure as well as a structural engineer but so that none of the engineers or builders we might have to work with could pull the wool over our eyes.
So I, at least, always leant towards being the chairman of the design team, a position often accorded to the architect, rather than the chief designer.
There is no point in a ventilation services engineer producing the most efficient solution if the results are that the ducts run straight across the room at head height. (In fairness, no such thing would be proposed). Everything must be integrated and nothing important must go wrong. New buildings never fall down. (Catastrophic failure like Ronan Point is a once in 50 years folk memory in Britain).
One thing I like about the Bluecar project is that it is oriented towards creating a user experience. It is setting out not to miss things.
16 September 2014
The BBC has reported that the Heathrow Terminal 5 pods have done a million miles running autonomously.
Though they navigate autonomously their route is confined by guide walls as they run the route to business parking 2.4 miles away.
They are also centrally controlled, a principle you can go with until you come to real-life obstacles that only fully autonomous cars, or ones with drivers, can navigate, such as roundabouts with multiple users entering and exiting. In such situations central control, even by satellite, is going to struggle to cope.
Central control in a modified physical environment can, however, be a solution, and we have long said that autonomous vehicles will require modifications to the physical environment and that, inter alia, the UK should trial a modified physical environment in a real, existing context.
In practice Ultra Global PRT, the British company behind the Terminal 5 project, is establishing itself as the world leader in pods. The technology behind them is not particularly advanced in the sense that none of the 3D lidars and advanced mapping capabilities the Google cars have are necessary on these vehicles.
So a different market choice establishes itself - put more money into the physical environment and infrastructure to be able to put less into the technology.
You could put pods onto open roads if the area they ran on was physically predictable and of limited radius. So an area with no roundabouts, railway crossings, bicycles or horses and excluding goods vehicles during the day time would be a good choice for a trial and a possibility in some rural locations.
What the pods would need would be software that brought the vehicles to a halt temporarily if anything crossed their paths - humans particularly.
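That halt-on-obstruction behaviour is simple enough to sketch. The function below is purely illustrative (the names, the stopping envelope and the cruise speed are all assumptions, not Ultra Global's software): the pod stops whenever anything is detected inside its stopping distance and resumes when the path is clear.

```python
def pod_speed_command(obstacle_distances_m, stopping_distance_m=5.0,
                      cruise_speed_mps=5.0):
    """Return a target speed for the pod given ranges (in metres) to
    objects detected in its path corridor.

    Hypothetical sketch: halt whenever anything - a person especially -
    is inside the stopping envelope; otherwise cruise. Resuming is
    implicit: once the path is clear the command returns to cruise."""
    if any(d <= stopping_distance_m for d in obstacle_distances_m):
        return 0.0          # stop until the path is clear again
    return cruise_speed_mps
```

The conservatism is the point: on a guided, fenced route the safe default for any unclassified obstacle is simply to stop.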
Such a trial could be conducted in a sparsely populated rural area but not in a town.
At Heathrow, fences keep humans off the route. They are not primarily there to keep the vehicles in although they obviously perform that function, too, if a malfunction were to arise.
Pod with a footprint not dissimilar to a car's - twelve feet long, five feet wide and seating six with luggage
These types of pods might be ideal in shuttling people to and from tourist sites along protected routes.
In Amritsar, Ultra Global will be building a 4.8 mile elevated circuit.
The route-following part of autonomous navigation is essentially safe when there are no obstacles. Pikes Peak has been done - if not by a pod.
Britain cannot afford dedicated circuits like in Amritsar except where commercial advantages present themselves: Heathrow has estimated that 70,000 bus journeys a year have become unnecessary because of the pods.
However, in a low density rural area of limited radius they could run on lightly modified existing roads. Were changes in mapped routes to take place from time to time, due to unforeseen temporary obstacles, local cue maps could be uploaded to the vehicles' memories as they passed waypoints.
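The waypoint idea amounts to a small merge operation. The sketch below is hypothetical throughout - `roadside_updates` stands in for whatever a roadside unit at the waypoint would serve - but it shows the shape of it: temporary local cues published at a waypoint override the stale entries in the pod's onboard map.

```python
def refresh_cue_map(vehicle_map: dict, waypoint_id: str,
                    roadside_updates: dict) -> dict:
    """Merge any temporary route changes published at this waypoint into
    the vehicle's onboard cue map.

    Illustrative only: `roadside_updates` maps waypoint ids to patches,
    standing in for data served by roadside equipment. Returns a new map
    rather than mutating the one passed in."""
    patch = roadside_updates.get(waypoint_id, {})
    merged = dict(vehicle_map)
    merged.update(patch)    # newer local cues override stale entries
    return merged
```

Because the patch is keyed to a waypoint the pod has just passed, the update is always local and small - no whole-map download is needed mid-route.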
The pods might substitute for the absence of public transport in such an area, so rather than the 21 pods at high-density Heathrow, just 4-6 might be sufficient for a trial.
The pods' terminus might be at a point where users could transfer to a rural bus service for onward travel.
Were a small number of rural villages within a defined radius used for the trial, bicycle owners might be given special training in giving the pods a wide berth, if bicycle use were to be permitted there.
22 September 2014
California has granted licences for 29 autonomous vehicles to use all the public highways - 25 Google-modified Lexus cars, 2 Mercedes cars and 2 Audi cars. The main limiting stipulation is that humans should be able to resume immediate full control at any time.
If the concept of driverless cars is not to take needless setbacks a comprehensive rethink of the interfaces by which humans can resume immediate full or sufficient control of such cars is necessary.
A steering wheel, two or three pedals, a handbrake and a fully qualified driver are essentially an overkill requirement for the interface.
Commercial airliners, depending on make, are piloted using either a control column or a joystick; the pilots sit side by side, each with the same control.
Two small columns with two paddles on each for progressive braking and acceleration could be the core of what might be needed in consumer driverless cars.
Either of the front seat occupants could assume control in an emergency but there would be no requirement that they hold a driving licence.
Were V2V (vehicle-to-vehicle) and V2I (vehicle-to-infrastructure) technology also installed on driverless cars, because governments had already legislated for it irrespective of progress with driverless cars, then the instances where emergency assumption of control on public highways might be necessary would be further curtailed.
Source: NHTSA. Diagrammatic V2V and V2I. (Vehicles talk to each other exchanging information such as vehicle size, position, speed, heading, lateral/longitudinal acceleration, yaw rate, throttle position, brake status, steering angle, wiper status, turn signal status.)
In the case of vehicles approaching and navigating a roundabout - one of the harder situations to write algorithms for - V2V and V2I would tell the driverless car of other vehicles approaching the roundabout before it could see them, and which vehicles were 'nudging' into the roundabout to jockey for position to enter next, whilst the autonomous technology would navigate the car through the roundabout using its sensor data. At the same time it would be giving data about its own movements to other vehicles through V2V and V2I.
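The state each vehicle broadcasts, as described in the NHTSA diagram caption above, can be sketched as a plain record. The field names and units below are illustrative approximations, not the actual DSRC wire format (which is a compact binary encoding defined by standard, not Python):

```python
from dataclasses import dataclass, asdict

@dataclass
class BasicSafetyMessage:
    """Illustrative approximation of the per-vehicle state exchanged
    over V2V. Field list follows the NHTSA description quoted above;
    names and units are this sketch's own, not the standard's."""
    vehicle_length_m: float
    vehicle_width_m: float
    latitude_deg: float
    longitude_deg: float
    speed_mps: float
    heading_deg: float
    lateral_accel_mps2: float
    longitudinal_accel_mps2: float
    yaw_rate_dps: float
    throttle_pct: float
    brake_active: bool
    steering_angle_deg: float
    wipers_on: bool
    turn_signal: str    # "left", "right" or "off"
```

Even in this loose form the roundabout case is visible: a car 'nudging' in announces itself through its position, low speed, throttle and turn signal fields before any onboard sensor has line of sight to it.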
9 October 2014
Speaking of trying not to miss anything, the August 2014 NHTSA document Vehicle-to-vehicle communications: Readiness of V2V technology for application is excellent, tries not to miss anything and convinces me that V2V should be regarded as a complementary technology to driverless car technology.
By emphasising the importance of relative position fixing of vehicles by GPS rather than absolute positioning on Earth it overcomes one of the doubts about GPS: how to get consistently accurate position fixes for highway navigation.
As the table shows, the estimated accuracy of V2V using GPS is given as not within 1.5m. In practice, as discussed in the document, it could be within 1.0m if position fixing were assisted by roadside equipment, but this would be costly. The other six sensor types listed in the table are those that might be used by autonomous vehicles. Individually or in combination these could give position fixes to within an estimated 0.2m.
The field of view of V2V is also greater than for any other individual sensor at any given moment in time though in practice a rotating lidar or a combination of sensors can give a 360 degree field of view.
What the autonomous technology sensors listed cannot do is see round or through obstacles.
By eliminating any mention of earlier proposals that pedestrians and cyclists carry V2V receivers and transmitters (as locating their absolute position on Earth to within 1.0m would still not be accurate enough to ensure their safety), one of the objections to adopting the American version of V2V has been removed.
As the table shows, GPS positioning has difficulty in urban canyons created by tall buildings, in tunnels and under heavy foliage - indeed, anywhere a clear view of the sky is not possible, so underground car parks and multi-storey car parks can be added to the list.
Lidar and radar, on the other hand, have problems in poor weather that V2V essentially will not have.
[We do have some of our own solutions to all these problems (which do not involve WiFi)].
Some autonomous vehicle projects lean very heavily on mapping. If V2V were integrated with the robotic controls, these projects would lean less heavily on map processing whilst still delivering what the market needs - some driverless cars as soon as possible.
The cryptographic framework proposed is convincing and DSRC frequencies are the best means to get the data to the vehicles (up to 150MB per vehicle per day in worst-case scenarios).
Another advantage of V2V to a driverless car industry is that it builds out a network of roadside V2I units. About 19,000 are estimated for a build-out in the U.S.
This vehicle-to-infrastructure equipment could be used to upload local maps to driverless cars and to provide them with low latency networking.
Source: MIT. Proposed site for new vehicle testing environment in Ann Arbor where previous V2V testing was undertaken
7 November 2014
One report in May said that the new Google cars will emerge at the end of the year with joysticks and safety drivers. This would be an advance on traditional controls and closer to the fly-by-wire controls found on Airbus airliners.
17 November 2014
A month ago Audi did some fast circuits of Hockenheim with an autonomous RS7, having already done Pikes Peak in an autonomous TTS four years before.
30 November 2014
Do high level and existential risks exist specifically in relation to autonomous technology applied to road vehicles?
The question will be examined further but the answer is yes.
A few 40 ton trucks crossing a suspension bridge independently driven pose relatively little risk to the structure.
A few 400 ton autonomous commercial vehicle roadtrains crossing might pose more.
If two-thirds of the vehicles on the road were autonomous and their navigation systems were knocked out by an unexpected electromagnetic pulse, perhaps caused by the sun, how would you clear crowded motorways of 'dead' vehicles in a timely manner?
Are driverless cars and other autonomous vehicles really like aeroplanes in that to operate to their full potential they require changes to the built environment and infrastructure?
When aeroplanes were invented they took off from fields. Then some realised that they could take off from straight roads or even, in amphibious mode, from the sea. The early planes, though, mostly used airfields.
Now nearly all use airports and these have required massive changes to the built environment and infrastructure.
When aeroplanes were invented did anyone know that they might pose an existential risk?
No, but during the Cold War, when nuclear-armed bombers were permanently on alert or in the skies, did they not pose one - though in practice they were keeping the peace?
In bringing autonomous technology to the roads is there an existential risk we cannot predict?
Photo: HM Treasury National Infrastructure Plan 2014
5 December 2014
Driverless cars can only benefit from more computing power, which would amount to a neural network if the cars themselves were networked back to central computers or to one another (which is not to say this is necessarily recommended).
There is not enough space onboard individual cars to carry massive computing power and in any event the cost would be prohibitive. So driverless cars will be unable to do natural scene recognition.
Progressing through a long canopy of trees from which a large bird swoops down, apparently heading for the car, might not be a scenario that the car understands in the way a human would understand this scene (because the occurrence of the condition may be too infrequent for an algorithm to have been written to cope with it in advance).
Its lidar, if located in the right place, might track the flight of the bird with millimetre accuracy, and the car might take some evasive action like braking sharply, but this might not be the action a human driver would choose to take, conscious of following vehicles and that most birds will be taking evasive action themselves.
Conscious is, of course, the right word as the AI used by driverless cars will never remotely approach consciousness.
Any action the car takes will be based on algorithmic responses to sensor data acquired.
There is the possibility of machine learning in the development phase of a model but in cars released to the consumer market will this be risked?
Many conventional cars are already programmed to learn the driver's driving style and in F1 there is a great deal of telemetry of this kind of information but will advanced real-time machine learning be permitted for a fleet of cars that is already self-driving?
31 December 2014
At Christmas Dr Who accepted a bit of help from Father Christmas and we tend to forget that the Tardis was more or less the first self-driving vehicle.
So what little presents were dropped off?
Near the Royal Observatory in Greenwich a parcel was dropped off allowing shuttles building on Navia technology to be tested on closed roads and for other vehicles to be tested in Tardis-like simulators by the Transport Research Laboratory.
In Bristol the insurance and legal aspects of self-driving cars will be investigated with a little help from Santa who knows well the risks of autonomous navigation at night with so much space debris around.
Doubtless in Sweden Volvo have already programmed their cars to avoid his reindeer.
In Milton Keynes a shiny new pod toy has been delivered.
Near Coventry a closed circuit will test V2V and V2I gadgets.
Google have been doing what they know best and have tracked Dr Who's location data so Father Christmas has delivered them a Tardis lookalike car (look at that lidar on top of the latest version).
(Can't wait to see it in an episode of Dr Who: you read it first here).
Google confined itself to a relatively unfuturistic statement:
Today (22 December 2014) we're unwrapping the best holiday gift we could've imagined: the first real build of our self-driving vehicle prototype. The vehicle we unveiled in May was an early mockup - it didn't even have real headlights!
Since then, we've been working on different prototypes-of-prototypes, each designed to test different systems of a self-driving car - for example, the typical car parts like steering and braking, as well as the self-driving parts like the computer and sensors. We've now put all those systems together in this fully functional vehicle - our first complete prototype for fully autonomous driving.
We're going to be spending the holidays zipping around our test track, and we hope to see you on the streets of Northern California in the new year. Our safety drivers will continue to oversee the vehicle for a while longer, using temporary manual controls as needed while we continue to test and learn. Happy holidays!
Driverless Cars 2015