In a camera, the sensor receives a two-dimensional, time-dependent, continuous distribution of light energy. The outputs of most sensors are continuous voltages, i.e., analog signals. To digitally process or store an image, we need to take a snapshot of that energy and convert it into a digital signal. Therefore, we need to convert both the coordinates of the signal and its amplitudes. This is done through sampling and quantization.
The difference between sampling and quantization is this: sampling converts the coordinates (space) into discrete values, while quantization converts the amplitudes (light or color intensities) into discrete values.
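The distinction can be made concrete with a short sketch. Below, a continuous 1D light distribution (a hypothetical example function, standing in for one row of a scene) is first sampled at discrete coordinates and then quantized to a small number of integer levels:

```python
import numpy as np

# Hypothetical continuous signal: light intensity as a function of
# position x in [0, 1). Any continuous function would illustrate the idea.
def light_intensity(x):
    return 0.5 + 0.5 * np.sin(2 * np.pi * 3 * x)

# Sampling: evaluate the signal only at discrete coordinates
# (here, 16 evenly spaced positions).
num_samples = 16
x = np.arange(num_samples) / num_samples
samples = light_intensity(x)          # amplitudes are still continuous

# Quantization: map each amplitude to one of 2**bits discrete levels.
bits = 4
levels = 2 ** bits
quantized = np.round(samples * (levels - 1)).astype(np.uint8)

print(quantized)   # integers in [0, 15]: a fully digital signal
```

After sampling, the signal exists only at 16 positions but still carries real-valued amplitudes; only after quantization does it become fully digital.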
In spatial sampling, the analog signal is converted into a discrete signal according to the geometry of the sensor elements of the acquisition device. By varying this geometry and the acquisition process, many types of image sensors have been built for different purposes.
In temporal sampling, the signal is measured at regular intervals by recording the amount of light incident on the sensor elements during each interval.
For example, consider the sensors in digital cameras. They work through an electrical charging process triggered by a continuous stream of photons. The signal is then measured by counting the charge built up in each sensor element during the exposure time.
Quantization of a visual image is the conversion of the intensity of light (an analog signal) into a digital value. It determines the intensity levels available in the digital image. To get a sharp, high-quality image, the quantization level should be as high as possible.
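The effect of the quantization level can be demonstrated by re-quantizing a smooth gradient to different bit depths. This is a minimal sketch, assuming a synthetic 8-bit grayscale gradient as the input image:

```python
import numpy as np

# A synthetic 8-bit grayscale "image": a smooth horizontal gradient.
image = np.tile(np.arange(256, dtype=np.uint8), (4, 1))

def requantize(img, bits):
    """Reduce an 8-bit image to 2**bits intensity levels."""
    levels = 2 ** bits
    step = 256 // levels
    return (img // step) * step   # collapse each band of values to one level

coarse = requantize(image, 2)   # only 4 distinct gray levels: visible banding
fine = requantize(image, 6)     # 64 distinct gray levels: nearly smooth

print(len(np.unique(coarse)), len(np.unique(fine)))
```

With only 2 bits, the gradient collapses into four flat bands (false contouring); at 6 bits it already looks close to the original, which is why more quantization levels yield a higher-quality image.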
We can't convert an analog signal into a digital signal exactly, since storing an analog signal would require infinite memory. So we have to approximate, and therefore the digital signal is never identical to the analog one.
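This approximation error is bounded, though. Assuming rounding to the nearest level, the difference between each analog amplitude and its digital value can never exceed half a quantization step, as the sketch below (using random amplitudes in [0, 1]) illustrates:

```python
import numpy as np

# Random "analog" amplitudes in [0, 1], standing in for measured intensities.
rng = np.random.default_rng(0)
analog = rng.uniform(0.0, 1.0, size=1000)

# Quantize to 2**bits levels, then map back to [0, 1] for comparison.
bits = 8
levels = 2 ** bits
digital = np.round(analog * (levels - 1)) / (levels - 1)

# The rounding error is at most half a quantization step.
max_error = np.max(np.abs(analog - digital))
print(max_error)
```

Doubling the number of levels halves the maximum error, which quantifies the trade-off between storage cost and fidelity.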