Linearity Exercise

    In this exercise, you will produce a linearity curve for the NASA/IRTF's NSFCAM 256 x 256 InSb array using the 2850 nm flatfield images shown in lecture and in the background of this webpage (blown up and repeated).

    The linearity_exercise.mat file can be downloaded here and contains all the data you need.  Its contents are described below in Table 1.

Variable saved in linearity_exercise.mat | Array datatype and size | Contents
itime                                    | 40x1 Float32            | integration times (ms)
ff_images                                | 256x256x40 Int32        | 40 flatfield images of size 256 x 256

Table 1 - Contents of the linearity_exercise.mat file
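
If you are working in Matlab, loading the file and confirming its contents is straightforward (this minimal sketch assumes the file sits in your current working directory):

load linearity_exercise.mat;%loads itime and ff_images into the workspace
whos itime ff_images%confirm the sizes and datatypes listed in Table 1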

1.  Ultimately we would like to identify the bad (i.e., "challenged") pixels, such as those on the edge of the array, but first let's just look at the median array response as a function of integration time (itime):
nframes = size(ff_images,3);
med_val = zeros(nframes,1);%preallocate the median, average, and standard deviation vectors
avg_val = zeros(nframes,1);
std_val = zeros(nframes,1);
for i=1:nframes;
    temp = double(ff_images(:,:,i));%grab a frame and convert to double precision
    med_val(i) = median(temp(:));%determine the median of the frame
    avg_val(i) = mean(temp(:));%determine the average of the frame
    std_val(i) = std(temp(:));%determine the standard deviation of the frame
end;


Fig. 1 - Median and Average array response

As you can see in Fig. 1, the median array response starts to deviate from linearity at around 4.5 ms, so let's restrict the remainder of our analysis to integration times below this value. Fig. 1 is a linearity curve (i.e., output digital number as a function of integration time). More specifically, the y-axis values are the median of the entire array and the x-axis values are the integration times stored in itime (milliseconds). The error bars represent the one-sigma standard deviation of the entire array at each integration time. After examining the curve carefully, determine the output DN level where non-linear response begins.
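
For reference, a minimal sketch for producing a plot like Fig. 1, assuming the itime, med_val, avg_val, and std_val vectors from the loop above:

figure;
errorbar(itime,med_val,std_val,'o-');%median response with one-sigma error bars
hold on;
plot(itime,avg_val,'s--');%overplot the average response for comparison
xlabel('Integration time (ms)');
ylabel('Digital number (DN)');
legend('Median','Average','Location','NorthWest');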

2.  Now, let's step through the entire array and determine the gain (slope of pixel response vs. integration time) and offset (pixel response at zero integration time) on a pixel-by-pixel basis. We can also calculate the relative gain and relative offset by determining the slope and offset of a given pixel's response as compared to the median vector calculated above. These relative gain and offset maps, which are partially redundant with the actual gain and offset maps, can be used to flatten our images. I find that the relative gain and relative offset maps are better for flattening images than the standard gain and offset because the median array response is robust to any inherent non-linearities in the detector response. Regardless, we also want to calculate the standard gain and offset in order to determine the detector's linearity with integration time.

The following lines of code will populate the gain and offset maps. While there are many ways to determine the best-fit coefficients for a linear fit between two vectors in Matlab, the example below makes use of the LSCOV function. The LSCOV function solves the generic problem A*x = B and, unlike functions such as polyfit, also returns the one-sigma errors on each coefficient as well as an estimate of the mean squared error. Furthermore, if errors are known for the input vectors (which you will have for portions of LAB 2), you can also pass a vector of weights or a matrix of input variances into LSCOV.
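
As a quick, hedged illustration of that weighted usage (all variable names and values below are invented for this toy example and are not part of the exercise data):

t = (0:9)';%toy independent variable (made up for this example)
x_true = [2; 5];%assumed "true" slope and offset
sigma = 0.5*ones(size(t));%assumed one-sigma measurement errors
A = [t ones(size(t))];%design matrix (A*x = B)
B = A*x_true + sigma.*randn(size(t));%simulated noisy measurements
w = 1./sigma.^2;%LSCOV accepts a vector of weights (inverse variances)...
[x,stdx,mse] = lscov(A,B,w);%...and returns the coefficients, their errors, and the mean squared error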

gain = zeros(256,256); offset = zeros(256,256);%preallocate the absolute gain/offset maps
gain_std = zeros(256,256); offset_std = zeros(256,256); chisq = zeros(256,256);
rgain = zeros(256,256); roffset = zeros(256,256);%preallocate the relative gain/offset maps
rgain_std = zeros(256,256); roffset_std = zeros(256,256); rchisq = zeros(256,256);
ind = find(itime < 4.5);%restrict the data to the linear range
backstr = '';
for i=1:256;
    for j=1:256;
        temp_y = squeeze(ff_images(i,j,:));%get the pixel data
        temp_y = double(temp_y);%convert to double precision
        A = ones(numel(ind),2);%create the A matrix (A*x=B)
        A(:,1) = itime(ind);%populate the A matrix with itime
        B = temp_y(ind);%create the B matrix
        [x,stdx,mse] = lscov(A,B);%solve for the best-fit coefficients
        gain(i,j) = x(1);%populate the gain matrix
        offset(i,j) = x(2);%populate the offset matrix
        gain_std(i,j) = stdx(1);%populate the gain standard deviation
        offset_std(i,j) = stdx(2);%populate the offset standard deviation
        chisq(i,j) = mse;%populate the mean squared error of the fit
        A(:,1) = med_val(ind);%populate the A matrix with the median array response
        B = temp_y(ind);%create the B matrix
        [x,stdx,mse] = lscov(A,B);%solve for the best-fit coefficients
        rgain(i,j) = x(1);%populate the relative gain matrix
        roffset(i,j) = x(2);%populate the relative offset matrix
        rgain_std(i,j) = stdx(1);%relative gain error
        roffset_std(i,j) = stdx(2);%relative offset error
        rchisq(i,j) = mse;%relative mean squared error
    end;
    %print a periodic update to the screen to show progress
    if mod(i,10)==0;
        fprintf(1,backstr);%erase the previous status line
        str = ['Finished with row ' num2str(i) ' of ' num2str(size(ff_images,1))];
        fprintf(1,str);
        backstr = '';%rebuild the backspace string ('\b' is two characters, hence 2*numel(str))
        while numel(backstr) < 2.*numel(str);
            backstr = [backstr '\b'];
        end;
    end;
end;


Fig. 2 - Absolute (left) and Relative (right) gain/offset calculations for Pixel (100,100)
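
As an optional check, a minimal sketch of how a plot like Fig. 2 might be made for a single pixel, assuming the maps, the med_val vector, and the ind index computed above (pixel (100,100) is simply the example from the figure caption):

ii = 100; jj = 100;%example pixel
pix = double(squeeze(ff_images(ii,jj,:)));%this pixel's response at every integration time
figure;
subplot(1,2,1);%absolute fit: response vs. integration time
plot(itime(ind),pix(ind),'o'); hold on;
plot(itime(ind),gain(ii,jj).*itime(ind)+offset(ii,jj),'-');
xlabel('Integration time (ms)'); ylabel('DN');
subplot(1,2,2);%relative fit: response vs. median array response
plot(med_val(ind),pix(ind),'o'); hold on;
plot(med_val(ind),rgain(ii,jj).*med_val(ind)+roffset(ii,jj),'-');
xlabel('Median array response (DN)'); ylabel('DN');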


Now that we have maps of the absolute and relative gain and offset, as well as their respective errors, we can use these arrays to identify good/bad pixels and generate a bad pixel mask. Depending on what your goals are for the detector, you may wish to be more or less stringent in how you define "good" pixels. For the purposes of this exercise, I will define a useable pixel as one that has an absolute offset greater than 0, a relative gain within 20% of the median array response, and a relative mean-squared error less than 1500 DN^2. Please look at the histograms of the various arrays (gain, rgain, offset, roffset, gain_std, rgain_std, chisq, and rchisq) yourself and come up with your own definition of a good/bad pixel; see the short sketch below for one way to do so. Using this definition, 11% of the array is flagged as bad. If we were observing a resolved source, we might try to replace these bad pixels with an interpolation of their nearest neighbors (most likely bi-linear or cubic). If we were planning to observe a point source, we would try to place the target in a section of the field of view that avoids bad pixels so as not to corrupt the point spread function. The lines of code following the histogram sketch show you how to identify bad pixels based on a given set of criteria and generate a pixel mask.
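
As a starting point for that inspection, a minimal sketch assuming the maps computed in the loop above (the bin count and choice of panels are arbitrary):

figure;
subplot(2,2,1); hist(rgain(:),100); xlabel('Relative gain');%most pixels should cluster near 1
subplot(2,2,2); hist(roffset(:),100); xlabel('Relative offset (DN)');
subplot(2,2,3); hist(offset(:),100); xlabel('Offset (DN)');
subplot(2,2,4); hist(rchisq(:),100); xlabel('Relative mean squared error (DN^2)');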

bad = find(gain < 0 | rgain < 0.8 | rgain > 1.2 | offset < 0 | rchisq > 1000);%identify the positions of the bad pixels
mask = ones(size(gain));%generate a mask array (all ones for now)
mask(bad) = 0;%set the bad pixels to zero in the mask
mask(bad) = nan;%...or use NaN so that bad pixels propagate through the arithmetic below (useful for many applications)
ff_images_flat = zeros(size(ff_images));%preallocate the flattened image cube
for i=1:size(ff_images,3);
    temp = double(ff_images(:,:,i));%grab a frame and convert to double precision
    temp_out = (temp-roffset.*mask)./(rgain.*mask);%flatten the frame: (raw - relative offset) / relative gain
    ff_images_flat(:,:,i) = temp_out;%save the flattened frame
end;
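
If you do need to fill in bad pixels for a resolved source, here is a minimal sketch of one simple option, a local-mean replacement rather than the bi-linear or cubic interpolation mentioned above (the frame index and variable names are only for illustration):

temp = ff_images_flat(:,:,15);%example: frame 15 of the flattened cube
[bi,bj] = find(isnan(temp));%bad pixels are NaN after flattening with the NaN mask
temp_filled = temp;
for k=1:numel(bi);
    i0 = max(bi(k)-1,1); i1 = min(bi(k)+1,size(temp,1));%clip the 3x3 window at the array edges
    j0 = max(bj(k)-1,1); j1 = min(bj(k)+1,size(temp,2));
    patch = temp(i0:i1,j0:j1);
    good = patch(~isnan(patch));%use only the good neighbors
    if ~isempty(good);
        temp_filled(bi(k),bj(k)) = mean(good);%replace the bad pixel with the local mean
    end;
end;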

And don't forget to save your work to a ".mat" file so that you can access it later:
save linearity_exercise_results.mat gain gain_std offset offset_std rgain rgain_std roffset roffset_std chisq rchisq mask itime ff_images ff_images_flat


Fig. 3 - Bad pixel mask (upper-left), Relative Gain (upper-middle), and Relative Offset (upper-right) maps, in addition to raw (lower-left) and flattened (lower-middle and lower-right) frame 15 (1.13 ms) response images.


Note that the residual spatial noise in the flattened image (lower-right panel of Fig. 3) is randomly distributed.
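
A minimal sketch for checking this yourself, assuming the flattened cube from above (frame 15 is just the example shown in Fig. 3):

temp = ff_images_flat(:,:,15);%the flattened frame shown in Fig. 3
good = temp(~isnan(temp));%keep only the good (unmasked) pixels
fprintf(1,'Flattened frame: median = %.1f DN, spatial std = %.1f DN\n',median(good),std(good));
figure; hist(good,100);%the histogram should look roughly Gaussian if the residuals are random
xlabel('DN'); ylabel('Number of pixels');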