My research, for a general audience

Posted on November 12, 2016

It’s an oft-repeated trope that computers work with 0s and 1s.

The significance is that 0 is very different from 1. There are no in-between values for computers. Hence, the information is digital, like counting with the digits of our hands: we take discrete jumps from one number to the next. Digital computers fundamentally operate by manipulating discrete signals. Much information is discrete: yes or no values, counting numbers, (typed) text, dates, etc. Computers are well-equipped to handle this kind of data.

But not everything is discrete. Consider time. Between noon and 1pm there is an entire continuous gradation of times. Whereas it is easy to tell the binary 0 from the binary 1, the difference between noon and 1pm is of a different nature. One must make a discrete leap to jump from 0 to 1, but it is possible to slide continuously along a path of possible times from noon to 1pm.

Because digital information is discrete, it can't represent time in a way that captures this phenomenon. In practice, computers represent time with finitely many values which are distinct from each other. For instance, there might be a digital representation for $2^{32}$ different times, evenly spaced between noon and 1pm. When a sensor measures a time and records it, it finds one of these representations which is sufficiently close to the actual time. The process of approximating a continuous space with a discrete one is known as quantization or discretization.
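
To make the idea of quantization concrete, here is a small Python sketch (the $2^{32}$-value grid and the function names are just for illustration) that stores a time between noon and 1pm as the nearest of $2^{32}$ evenly spaced representable values:

```python
# Quantize a time between noon (0 s) and 1 pm (3600 s) onto a grid
# of 2**32 evenly spaced representable values.

LEVELS = 2 ** 32          # number of distinct digital representations
SPAN = 3600.0             # seconds between noon and 1 pm

def quantize(t_seconds):
    """Return the index of the representable value closest to t_seconds."""
    step = SPAN / LEVELS                      # spacing between adjacent values
    index = round(t_seconds / step)           # nearest grid point
    return min(max(index, 0), LEVELS - 1)     # clamp to the valid range

def to_time(index):
    """Map a grid index back to a time in seconds after noon."""
    return index * (SPAN / LEVELS)

# A sensor reading of 1234.56789 s is stored as the nearest grid point,
# which introduces a small (but bounded) quantization error.
i = quantize(1234.56789)
print(to_time(i), abs(to_time(i) - 1234.56789))
```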

Computers compute with other continuous entities, such as sound, images, video, quantity, and probability, by using these discrete approximations. In general, these approximations work pretty well; the success of digital computers in graphics, image processing, and numerical computation indicates as much. There is inevitably some error in comparison to the true answer, but in many cases it seems not to matter much.

But sometimes the error can be a real problem. For instance, in 1991, a U.S. missile missed its target due to rounding error in the computation of its trajectory. Today, safety-critical software controls autonomous cars, drones, rockets, and planes using these approximations, which are inherently unsound.
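
To see how rounding error can creep in, here is a toy Python illustration (my own example, not the actual missile software): the value 0.1 has no exact binary floating-point representation, so adding it repeatedly, as a clock tick for instance, causes a small but growing drift.

```python
# Toy illustration of accumulated rounding error (not the actual missile code).
# 0.1 has no exact binary floating-point representation, so each addition
# introduces a tiny error, and the errors accumulate over many ticks.

t = 0.0
for _ in range(36_000):   # one 0.1-second tick at a time, for one hour
    t += 0.1

print(t)                  # close to, but not exactly, 3600.0
print(abs(t - 3600.0))    # the accumulated drift
```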

My research studies how it is possible to program with continuous spaces themselves, rather than with discretized approximations of them. This would eliminate the errors that those approximations introduce. It would also make it easier to reason about computer programs that manipulate continuous spaces, and to prove that such programs behave as we hope they do.

Given the digital nature of computers, this may sound like a difficult task. The trick is that instead of attempting to encode continuous values as digital data, we encode them as computer programs themselves, which we can then interact with to locate the continuous value. For instance, a real number $x$ is a program which takes as input a positive rational number $\varepsilon$ (which is discrete!) specifying how precisely to estimate that real number, and returns a rational number (also discrete) that differs from $x$ by no more than $\varepsilon$.
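
To make this concrete, here is a minimal Python sketch of the idea (my own illustration using rationals from Python's fractions module, not the actual encoding from my research): a real number is a function that, given a positive rational error bound $\varepsilon$, returns a rational approximation within $\varepsilon$.

```python
from fractions import Fraction

# A "real number" is represented as a function: given a positive rational
# error bound eps, it returns a rational number within eps of the true value.

def sqrt2(eps):
    """Rational approximation of sqrt(2) to within eps, by bisection."""
    lo, hi = Fraction(1), Fraction(2)          # sqrt(2) lies in [1, 2]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo                                   # within eps of sqrt(2)

def add(x, y):
    """Sum of two reals-as-programs, itself a real-as-program."""
    # Asking each input for eps/2 accuracy makes the sum accurate to eps.
    return lambda eps: x(eps / 2) + y(eps / 2)

two_sqrt2 = add(sqrt2, sqrt2)
print(two_sqrt2(Fraction(1, 1000)))            # approximates 2*sqrt(2) within 1/1000
```

One pleasant consequence of this encoding is that arithmetic on real numbers becomes composition of programs: the add function above asks each of its inputs for half the requested accuracy, so its own answer meets the full error bound.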