Summary of Semester

When I first chose this class, I knew little of autonomous driving and its implications aside from the occasional article with a catchy headline that I clicked. I chose to focus my blog on the technology behind self driving cars, because without the technology, autonomous vehicles are still only in the imaginations of science fiction writers and concept car designers. My initial search for technology related to self driving cars led me to LIDAR.

After learning more about LIDAR, I concluded that, in the minds of most people, LIDAR is the technology on which the progress of AVs is based. As of now, LIDAR can be very expensive, which for economic reasons hinders its ability to be the main technology that AVs rely on. Because of this expense, some companies, like Tesla, are relying on cameras to “see” for their autonomous driving systems. Mobileye, an Israeli company that is also working on developing AVs with solely cameras, boldly states on their website that “From the outset, Mobileye’s philosophy has been that if a human can drive a car based on vision alone – so can a computer.” While some companies will rely on one technology or the other, most people in the industry envision a combination of LIDAR, cameras, and radar helping AVs “see” by collecting a massive amount of data.

What do these cars plan to do with this data? Creating a 3D map of all the roads in the world would be the best use of it, giving cars a “memory” of the roads ahead and making them easier to navigate. There are many companies mapping across the world right now, trying to accelerate the deployment of AVs. Since the cars cannot store all of the data necessary to compile an up-to-date 3D map of the world, they will have to receive a lot of this information remotely. This is why some internet service providers have brought up AVs in the debate over repealing net neutrality. The internet service providers envision “fast lanes” where the data needed for autonomous driving is fed to the car at high rates and high capacity.


When will AVs be on the road?

One might think that after researching the current technology behind AVs, I might have a fair estimate of when we will commonly see these cars on the road, but in reality I’m not quite sure. Right now, you can see AVs on the road, even ones without a safety driver behind the wheel. These cars are mainly being deployed as self driving taxis. As these taxis become more and more common, the apprehension about a car driving itself will disappear as well. What makes it hard for me to give a timeline on when AVs might become available to the masses is the government. As of now, the US government has put very little restriction on AVs, allowing developers to deploy up to 2,500 autonomous vehicles that don’t meet safety standards, with plans in place to increase that to 100,000. My fear is that the first accident involving an autonomous vehicle will trigger a big public backlash calling for stronger restrictions on AVs, thereby hindering their development. My best guess for when AVs will become popular, even with the hindrance of restrictions, would be 25 years.

After taking this class, I see now that the market for autonomous vehicles is one full of research and development and ripe for the taking. Increasingly autonomous features are constantly being brought to market, usually on high end vehicles, but the race for a fully autonomous vehicle is (*pun intended*) a bumpy road. The companies that reach the finish line and become the prominent players in the AV market will not only have to perfect the technology, but also master the marketing, lobbying, and deployment of their product in order to defend their position. I’m excited to witness and to continue to learn about this development of technology that will change the way most everyone goes about their daily lives, but mostly, I’m excited that I’ll be able to sleep while commuting.

The Intersection of Net Neutrality and Self Driving Cars

Autonomous vehicles will use machine learning and vehicle-to-vehicle communication in order to navigate the roads. As remarked in a past blog on mapping roads for autonomous vehicles, many companies are working to collect as much data about roads as they can in order to make autonomous driving easier, allowing AVs to know conditions on the road ahead. To access this data, AVs will need to connect to databases and download this information. This will need to be done at very high speeds, since every fraction of a second is integral to a safe experience. This is where the debate over net neutrality enters.

Net Neutrality Debate

As of now, the internet in the United States abides by net neutrality. This means that internet service providers (ISPs) give access to the internet without censorship or favoring content and applications. In essence, when one pays for the internet, they gain access to the whole thing. The FCC has now moved to repeal net neutrality, which was put in place during the Obama administration. Repeal would enable internet service providers to block certain content or put it behind a paywall. In Portugal, some ISPs have begun doing just this by bundling certain content, as seen below.

The FCC’s deliberations on this topic have been met with broad dissent from the general public, but ISPs and some other entities are pushing for repeal.

So where do self driving cars and net neutrality intersect?

As self driving cars reach the roads, they will require very large amounts of data at very fast speeds. This is where the repeal of net neutrality comes in. In Comcast’s 161-page letter to the FCC, they wrote:

“At the same time, the Commission also should bear in mind that a more flexible approach to prioritization may be warranted and may be beneficial to the public… And paid prioritization may have other compelling applications in telemedicine. Likewise, for autonomous vehicles that may require instantaneous data transmission, black letter prohibitions on paid prioritization may actually stifle innovation instead of encouraging it.”

Self driving cars will NEED the data that is sent wirelessly to the vehicles at very fast speeds. They will need to know if road conditions have worsened on the road ahead or that there is construction around the corner. ISPs have argued that net neutrality will inhibit this data transmission. Mark Cuban, a serial entrepreneur, agrees somewhat with the ISPs.

The repeal of net neutrality has the potential to help autonomous driving become more of a reality quicker, but is it worth it?



Without net neutrality in Portugal, mobile internet is bundled like a cable package

Self Driving Taxis


As self driving cars come closer and closer to reality, the taxi/ride hailing industry prepares for upheaval. nuTonomy kick-started self driving taxis when they put their vehicles on the road in Singapore and quickly paired with Grab, a ride hailing service. Since then, self driving taxi services have spread across the world. Like nuTonomy’s, all self driving taxis have had a safety driver behind the wheel – until now. After eight years of testing their self driving technology across six states, Waymo, Google’s autonomous vehicle play, announced that they will begin offering self driving taxis in Phoenix that don’t require a person behind the wheel.


This is a huge step for autonomous vehicles, and it is telling that the first application of full self driving capabilities is self driving taxis. Many view this new venture as the future of car ownership, where people own cars less and are more likely to hail rides in autonomous vehicles. The opportunity is especially enticing for companies since there will be no driver taking a cut of the income. The two major ride hailing companies in the United States have both become big players in this market: Uber has been developing their own cars and technology, while Lyft plans to partner with companies that have their own cars.

Some companies are developing autonomous vehicle technology geared not towards typical cars but towards higher capacity vehicles and public transportation. Navya is one company planning to do just that. They began testing their vehicles at the University of Michigan in Ann Arbor and have expanded to Las Vegas, even though the vehicle made mistakes early in its debut. Other companies, like Tesla, envision individual car owners lending out their cars when they aren’t using them.

The advent of self driving taxis does not come without drawbacks, though. Every crash in a self driving taxi will receive major press coverage and create media storms, causing people to lose confidence in the services. Self driving taxis will also displace most taxi drivers, of whom there are over 200,000 in the US alone. As the technology progresses, there will be hurdles to cross, but autonomous vehicles are coming soon in the form of self driving taxis.



Mapping for Self Driving Cars

People seeking to develop autonomous vehicles are all interested in collecting as much data as they can. A lot of this data comes in the form of road mapping. Autonomous vehicles are outfitted with multiple sensors and cameras that help the car to “see” its surroundings. The input from these sensors is run through powerful computers that process the data and tell the cars how to drive. These computers sometimes make mistakes or don’t “see” something. But what if these computers could access the input from many other vehicles and already have a map of what the road ahead will look like?


There are many companies out there right now trying to create these maps, bringing autonomous vehicles closer to reality. Companies are outfitting vehicles with LIDAR, radar, and cameras and driving across the world collecting data, creating a 3D mapping of the roads. “As humans, if we are blindfolded and dropped in a new place, we’ll find our bearings—we have millions of years of common sense to help guide our awareness,” says Nikhil Naikal, Mapper’s CEO. “A machine, on the other hand, needs a large amount of up-to-date 3D map data to have foresight of what to expect around the corner” (CNN).

Above is a video of Nokia’s mapped-out version of New Orleans, using data from LIDAR sensors positioned on cars that drove around the city. Mapping the roads can help with more than just creating a simple 3D model of regular roads. Many companies envision maps that are constantly updated, so unusual occurrences like construction or an object in the road can be detected by one car, which can then alert all other vehicles in the area. Driving during inclement weather, such as snow covering the road, will become more of a reality, as the car will know where the lane markers and any important signs are to within a centimeter.

These maps are not infallible, though. A small mistake in the mapping can lead to bigger mistakes by autonomous vehicles. A CNN article cites a mistake Ford made years ago where one pixel was wrong, saying the ground was raised 10 inches where it was flat. This caused all the autonomous vehicles to swerve where there was no reason to. Mistakes like this could lead to accidents in the real world.

Despite these faults, people in the industry see mapping as full of potential and worth billions of dollars. Every road driven will build a clearer, more up-to-date map. Not only will mapping bring autonomous vehicles closer, it will enhance the technology overall.



Autonomous Cars Will Require a Totally New Kind of Map

Cameras: The Eyes of Autonomous Vehicles

Last post, we learned about LIDAR technology helping cars envision their surroundings, but we also noted the drawbacks of the technology. I mentioned that Elon Musk and Tesla, a leader in autonomous vehicles, have chosen not to use LIDAR while developing an autonomous vehicle. So how are future Teslas and other autonomous vehicles going to “see”?

Cameras! Just like you can look through a camera and see a clear picture of the environment around you, cars will be able to do the same thing. By outfitting cars with cameras at all angles, the vehicles will be able to maintain a 360 degree view of their surroundings. Tesla outfits their cars that have the Full Self-Driving Hardware with eight cameras around the car. These cameras take their 2D images and run them through a computer that is 40 times more powerful than the one in Tesla’s previous model. The computer creates a 3D map of the surroundings, allowing the car to navigate itself on the road.

Above is the video featured on Tesla’s website showing a fully autonomous car driving along with the feeds from the cameras installed in the car.
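Tesla hasn’t published the internals of its vision pipeline, so the specifics here are assumptions. But the classic, minimal illustration of how flat 2D images can carry 3D information is stereo triangulation: a feature seen from two cameras shifts by some number of pixels (the disparity), and depth falls straight out of that shift.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic stereo relation: depth = focal length * baseline / disparity.
    A feature that shifts fewer pixels between two camera views is farther away."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers, not Tesla's: a 700-pixel focal length, cameras
# 30 cm apart, and a feature that shifts 10 pixels between the two views.
print(round(depth_from_disparity(700, 0.3, 10), 1))  # about 21 m
```

Production systems lean on neural networks rather than this closed-form rule, but the geometry above is the reason overlapping camera views can be turned into a 3D map at all.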


Advantages of Cameras

As you can see in the above video, the cameras are able to pick up a lot, from lane markings to road signs. This makes cameras advantageous over other technologies employed in self driving cars, because the ability to read signs and see colors will allow cars to navigate modern roads without driver input. Cameras also have a big advantage for both the consumer and the manufacturer of the vehicle – price. When learning about LIDAR in the last post, we saw that one of the technology’s biggest hurdles was cost, with prices reaching into the tens of thousands of dollars. Cameras, on the other hand, cost only a couple hundred to a few thousand dollars. The processing power needed to analyze all the data from the cameras can be expensive, but the whole package is still much cheaper than LIDAR, making it a more suitable candidate for wide adoption. As the quality of cameras and the software interpreting their images advances, cameras are seen as the number one technology for self driving cars, mainly in conjunction with other sensors.


Can Cameras Stand Alone?

Most everyone in the autonomous vehicle industry envisions the AVs of the future incorporating cameras in some capacity, but can cameras be the only technology helping the cars see? Cameras have been found to have trouble navigating in adverse weather conditions, as their “vision” can be blocked or clouded. Many companies’ current attempts at self driving cars suggest that cameras will need to be complemented with either radar or LIDAR. Tesla outfits their cars with a radar on the front, and many others use LIDAR to help the car get a clearer picture of its surroundings. Mobileye, an Israeli company backed by Intel working on helping AVs “see”, states on their website that “From the outset, Mobileye’s philosophy has been that if a human can drive a car based on vision alone – so can a computer.” Mobileye, who used to work with Tesla, believes cameras can be the sole source of vision, but they seem to be in the minority. More likely, cameras will be one of the most important parts of the puzzle of a truly autonomous vehicle – but only a part.





LIDAR (a.k.a. LiDAR or Lidar) is a developing technology commonly used in autonomous vehicles. LIDAR commonly stands for “light detection and ranging” or some variation of that. Like sonar and radar, LIDAR builds an “image” of the device’s surroundings through echolocation-style ranging: it uses lasers to send out light pulses and tracks how long each pulse takes to bounce back. Charles Townes and Arthur Schawlow are thought to be some of the first people to theorize the modern LIDAR, in 1958.
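The pulse-timing idea fits in a few lines of Python (an illustration of the principle only, not any real sensor’s interface): light travels at a known speed, so half the round-trip time of a pulse gives the distance to whatever it bounced off.

```python
C = 299_792_458.0  # speed of light in meters per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in meters.
    Divide by two because the pulse travels out and back."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after about 667 nanoseconds hit something ~100 m away.
print(round(range_from_time_of_flight(667e-9), 1))  # about 100 m
```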



Radar and sonar are both great tools for creating a map of local surroundings, but both have limitations. Radar uses radio waves but is limited at short distances. Sonar, which uses sound waves, is limited at long range. LIDAR works well at both long and short range, making it a great tool for self driving cars, which need to map obstacles and objects on the road both far and near.


How does LIDAR work?

As described above, LIDAR units emit a light pulse and count how long it takes for the pulse to return. This ranging information is coupled with GPS coordinates and readings from an inertial measurement unit (IMU), which measures tilt and angle, allowing a computer to compile an image of the surrounding area. As you can see from the image below, once all the data is compiled, LIDAR produces a clear, comprehensive image. These images were created by a LIDAR sensor on a National Oceanic and Atmospheric Administration aircraft flying over Bixby Bridge in California.

The above video, from 2011, shows how LIDAR can be used with autonomous vehicles to map their surroundings. It allows the cars to “see” all the obstacles around them and navigate around those obstacles.
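As a rough sketch of the geometry involved (toy function names and a simplified sensor frame I’m assuming, not any vendor’s API): each range reading, combined with the laser’s pointing angles at that instant, becomes one point in the 3D point cloud. GPS and IMU data would then rotate and shift these sensor-frame points into world coordinates.

```python
import math

def lidar_point(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one range reading and the laser's pointing angles into a
    3D point in the sensor's own frame (x forward, y left, z up)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A 10 m return, straight ahead and level, sits 10 m down the x axis.
print(lidar_point(10.0, 0.0, 0.0))  # (10.0, 0.0, 0.0)
```

A spinning LIDAR unit repeats this conversion hundreds of thousands of times per second, which is how the dense images in the video above are built up.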


People Using LIDAR for Autonomous Vehicles

LIDAR is not new to cars. It has been used for various reasons and can be found on many vehicles today. In 1992, Mitsubishi used LIDAR technology for distance warning, and since then, many other auto manufacturers have followed suit and further developed the technology. LIDAR is used as the base technology for many adaptive cruise control systems that are seen on the road today. As more automation comes to cars, LIDAR is seen by some as one of the most important technologies for fully autonomous vehicles. Google and Uber are two of the biggest names using LIDAR in their development of autonomous vehicles, and Velodyne is a company at the forefront of further development in LIDAR technology. These LIDAR sensors can be seen on the tops of vehicles scanning the surrounding area.


Limitations for LIDAR in Self Driving Cars

Although LIDAR is seen by many as essential for fully autonomous vehicles, it does have many limitations. Snow and fog have been found to hinder LIDAR’s ability, as can any object that blocks the sensor. Another hurdle for LIDAR is its high cost. Although some LIDAR systems can be made cheaply, the systems needed for autonomous driving can cost well beyond $10,000, with Velodyne’s top sensor (used by Google and Uber) selling for $80,000. These price tags will inhibit the adoption of LIDAR if auto manufacturers attempt to make autonomous vehicles for the masses in the near future. Elon Musk, head of Tesla, has remarked that he thinks LIDAR is unnecessary for autonomous vehicles and has therefore not incorporated it in Tesla’s autonomous vehicle development.


Other uses of LIDAR

  • Mapping Forests
  • Police Speed Trap Guns
  • Mapping coastline
  • Apollo 15 mission
  • Video Game mapping of real life racing tracks





Hi I’m James Quinn! I’m a junior at Tufts University studying Mechanical Engineering. This blog is being written during my ex-college class on self-driving cars.

My interest in self-driving cars lies in the technology used to make the cars autonomous, and in the effects this will have on people with disabilities. I’ve always wondered how engineers will make the cars “see” the road and obstacles using technologies like radar, LIDAR, and GPS. Without the development of this technology, self-driving cars would remain only in science fiction. I’m also interested in the benefits of self-driving cars for disabled people. A couple of people in my family are restricted from driving due to age and epilepsy, and I’ve always hoped that they could regain some of their independence with the advent of self-driving cars.