>

> restart;

> interface(imaginaryunit=_I); # can now use I as a symbol

Verifying the Trichromatic Method of Measuring Chlorophylls

The measurement of algae in natural streams is a tedious and unrewarding task, partly because of the undefined nature of the 'beasts'. Identification and cell counting are very time-consuming, and many researchers opt to use a surrogate instead. One such surrogate is the chlorophylls: chlorophyll a, chlorophyll b and chlorophyll c. Their concentrations may indicate the number of cells or organisms, and the ratios of the chlorophylls may indicate the type of algae.

Here we revisit the trichromatic method of measuring the chlorophylls. The optical depth of a sample is measured at three different wavelengths and, knowing the spectrometric equations (a 3x3 matrix), the concentrations of the three chlorophylls are calculated.

The theory uses Beer's Law, in a form

Optical Density= log[10](I[0]/I) = epsilon*L*c

which derives from the simple statement

'The relative light extinction is proportional to the concentration of absorbing material and the pathlength through which the light travels'

This form is the integrated version, where the extinction coefficient epsilon is constant over the pathlength L. Measurements are made in a spectroscopic cell (cuvette) of pathlength L; the intensity of the incoming light is I[0] and I is the light intensity that emerges from the cell, having passed through the cuvette.
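As a quick single-pigment illustration (the numbers here are assumed for illustration, not measurements from this worksheet): with an extinction coefficient of 87.67 litres/(g*cm), a 1 cm cuvette and a measured optical density of 0.44, Beer's Law gives the concentration directly.

> # Illustration only -- assumed values, not data from this worksheet
> OD = epsilon*L*c;
> subs(OD=0.44, epsilon=87.67, L=1, %);
> solve(%,c)*1000; # concentration in g/litre, converted to mg/litre (about 5 mg/litre)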

With three chlorophylls absorbing light, this equation is applied at three wavelengths (664, 647 and 630 nm), and extinction (absorption) at each wavelength occurs through the combined effects of the three chlorophylls. That is, there are 9 extinction coefficients, and the equations of extinction are

> restart;

> OD[664]=epsilon[664,a]*a*L+epsilon[664,b]*b*L+epsilon[664,c]*c*L;
OD[647]=epsilon[647,a]*a*L+epsilon[647,b]*b*L+epsilon[647,c]*c*L;
OD[630]=epsilon[630,a]*a*L+epsilon[630,b]*b*L+epsilon[630,c]*c*L;

OD[664] = epsilon[664,a]*a*L+epsilon[664,b]*b*L+epsilon[664,c]*c*L

OD[647] = epsilon[647,a]*a*L+epsilon[647,b]*b*L+epsilon[647,c]*c*L

OD[630] = epsilon[630,a]*a*L+epsilon[630,b]*b*L+epsilon[630,c]*c*L

>

Inversion of the Spectroscopic Equations

The above are proper constitutive equations for the light absorption, knowing the concentrations of the 3 chlorophylls. In use, however, this equation set is solved for the chlorophyll concentrations, and the concentrations are determined from three measurements of optical depth: the trichromatic method.

The literature seems to no longer have the original 9 extinction coefficients (see, for example, Jeffrey et al., Phytoplankton Pigments in Oceanography, UNESCO Publishing, 7 place de Fontenoy, 75700 Paris, page 600). Here we invert the spectroscopic equations for the algal concentrations to regenerate the extinction coefficients. The spectroscopic equations come from Standard Methods for the Examination of Water and Wastewater, published by the American Public Health Association, 1015 Fifteenth St., NW, Washington, DC 20005, page 10-19. It is presumed that the optical densities are measured relative to the optical density at 750 nm, to account for the effect of turbidity in the water on the light extinction.

> with(linalg);

Warning, the protected names norm and trace have been redefined and unprotected

[BlockDiagonal, GramSchmidt, JordanBlock, LUdecomp,...

The matrix equation to solve is

> abc=A.X; # non-commuting multiply (&* in Maple V)

for X where X is a matrix of the Optical Depths.

abc = A.X

The elements of the matrices (abc, A and X) must be named properly and have proper indices. A problem is repetition, or multiple typing, which often leads to errors. Clarity is improved by sequence operations; clever procedures and commands also help, and some are introduced here.

Here a neutral operator is introduced, as the improvised symbol `&_`. The standard way is to use the 'expansion' or expression-sequence operator $. The &_ operator is a little more convenient, and it works with an arbitrary list of indices.

> ?dollar

> X:='X'; # unassigns X so that it is just a name

X := 'X'

> X[i] $ i = 1..3;

X[1], X[2], X[3]

but

> X[i] $ i = (1,2,3,a,10);

Error, wrong number (or type) of parameters in function $

> `&_`:=proc(x::name,r::list) local i,s; s:=NULL; for i in r do s:=s,x[op(i)]; od; s end;

The op( ) allows an element of the index list to itself be a list (giving a multiple index)

`&_` := proc (x::name, r::list) local i, s; s := NU...

and

> X&_[1,2,3,a,10];

X[1], X[2], X[3], X[a], X[10]
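Because op( ) is applied to each element of the list, an element that is itself a list produces a multiple index. For example (our illustration, not from the original worksheet):

> X&_[[1,2],3];

X[1, 2], X[3]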

so

> wls:=664,647,630: # sequence of wavelengths of interest

> X:=matrix(3,1,[OD&_[wls]]);

X := matrix([[OD[664]], [OD[647]], [OD[630]]])

> [wls];

[664, 647, 630]

> matrix(3,1,%);

matrix([[664], [647], [630]])

> evalm(%&*matrix(1,3,[a,b,c]));

matrix([[664*a, 664*b, 664*c], [647*a, 647*b, 647*c...

> map(x->V2/V1*L*epsilon[op(x)],%);

matrix([[V2*L*epsilon[664,a]/V1, V2*L*epsilon[664,b...

> evalm(X)=%.matrix(3,1,[a,b,c]);
# the . denotes non-commuting multiply (&* in Maple V)

matrix([[OD[664]], [OD[647]], [OD[630]]]) = matrix(...

This naming can be burdensome, but it is often easier simply to edit directly to make the structure you want. In this case use copy and paste (see ?hotkeys): Ctrl-c copies the output and Ctrl-v pastes in the elements you want.

>

The above is the set of spectroscopic equations for the OD values in terms of the concentrations a, b and c. Next, the trichromatic coefficient matrix from Standard Methods is entered and the system is solved for the OD values, which regenerates the extinction coefficients.

> matrix(3,3,
[[11.85,-1.54,-0.08],
[-5.43,21.03,-2.66],
[-1.67,-7.60,24.52]]);
A:=evalm(%.V1/(V2*L));

matrix([[11.85, -1.54, -.8e-1], [-5.43, 21.03, -2.6...

A := matrix([[11.85*V1/(V2*L), -1.54*V1/(V2*L), -.8...

> abc:=matrix(3,1,[a,b,c]);

> sol:=linsolve(A,abc,X);

abc := matrix([[a], [b], [c]])

sol := matrix([[.7075931367e-7*V2*L*(14447.*c+12385...

> X:=matrix(3,1,[OD&_[wls]]); # X overwritten in linsolve?

X := matrix([[OD[664]], [OD[647]], [OD[630]]])

> evalm(X)=map(expand,sol);

matrix([[OD[664]], [OD[647]], [OD[630]]]) = matrix(...

> evalm(X);

matrix([[OD[664]], [OD[647]], [OD[630]]])

With concentrations of a, b and c in milligrams/litre, the extinction coefficients for the chlorophylls are 87.64, 51.38 and 42.60 in 90% acetone. This is slightly at variance with Jeffrey et al. (1997), in which the extinction coefficients for wavelengths 664, 647 and 630 are, respectively, 87.67, 51.36 and 19.44. Considering the differences between different sources (see Vernon, 1960), this is not unreasonable. It is interesting, however, that the combined extinction coefficient for chlorophylls c1 and c2 given by Jeffrey (1963) is about twice the value obtained above. We would suggest that this is simply an accounting error, whereby the concentrations of the combined chlorophylls might be twice that of a single chlorophyll.

The other matter is the lack of reference for the off-peak absorption, that is, the extinction coefficients at 647 and 630 for chlorophylls b and c, the extinction coefficients at 664 and 630 for chlorophylls a and c, and the extinction coefficients at 664 and 647 for chlorophylls a and b. If one accepts the spectrometric equations, as above, the extinction coefficients are those in the above matrix. Unfortunately, one must accept that this is the case, because the off-peak extinction values don't seem to be available.

Going the other way around, with only the on-peak extinction coefficient values (on the diagonal), we have

> restart; with(linalg):

Warning, the protected names norm and trace have been redefined and unprotected

> matrix(3,3,
[[87.67,0,0],
[0,51.36,0],
[0,0,19.44]]);

matrix([[87.67, 0, 0], [0, 51.36, 0], [0, 0, 19.44]])

> A:=%:

The units of the extinction coefficients are litres/(g*cm). When multiplied by the concentration of a given chlorophyll in g/litre, the result is the absorbance or optical density per cm of cell path. In the case where each wavelength is absorbed exclusively by one of the three chlorophylls, recovering a concentration is simply a division.
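For instance (values assumed purely for illustration), 10 mg/litre of chlorophyll a, i.e. 0.010 g/litre, in a 1 cm cell would give an optical density of roughly 0.88:

> 87.67*0.010*1; # litres/(g*cm) * g/litre * cm, i.e. a dimensionless optical density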

> abc:=matrix(3,1,[a,b,c]);

abc := matrix([[a], [b], [c]])

> X:=matrix(3,1,[OD||(1..3)]);

X := matrix([[OD1], [OD2], [OD3]])

> linsolve(A,X,abc);

matrix([[.1140641040e-1*OD1], [.1947040498e-1*OD2], [.5144032922e-1*OD3]])

>

Digitizing some Spectroscopic Data

There are various sources of data, but none has a tabular listing of the required 9 extinction coefficients with which to validate the data above.

As confirmation we look at some of the composited spectral curves. That figure, digitized, is presented as chlorophyll.gif and chlorophyll.eps in the directory 'Light extinction figures'. It contains the spectra of chlorophylls a, b and c and is derived (digitized) from Halldal's work, Accessory Pigments and Enhancement Effects, in Checcucci, A. and Weale, R. A. (1973), Primary Molecular Events in Photobiology, Elsevier, pg 167. The digitizing is done here with the digitizing program 'capture'.

> libname;

Be careful in assigning the libname. Loss of its main library can cause Maple to crash. If there is a problem, remember that the main library is usually in

Here we take the extreme case, and include Mylib in the library path. Indeed, make sure the image tools are in the directory Mylib\\image-capture. The programs are available from http://maple.murdoch.edu.au/~scott/files/

> libname:=%,
"C:\\Program Files\\Maple 6\\Mylib\\image-capture";

libname :=

> currentdir();

> printf(ssystem("dir Mylib\\image-capture")[2]);

 Volume in drive C has no label.

 Volume Serial Number is A4D2-DA81

 Directory of c:\Program Files\Maple 6\Mylib\image-capture

19/01/2001  02:26p      <DIR>          .

19/01/2001  02:26p      <DIR>          ..

06/12/2000  01:52p             139,491 capture.tar.gz

22/11/2000  01:12p             166,400 CaptureW.exe

22/11/2000  01:12p             166,400 CaptureC.exe

22/11/2000  01:00p               2,095 README

19/01/2001  01:58p                 181 rubbish

               5 File(s)        474,567 bytes

               2 Dir(s)     540,147,712 bytes free

> ssystem("CaptureW -f pipefile");

[0,

The following system command opens the file pipefile in the Windows operating system, under the DOS prompt. The file should be in the place where it was formed, notionally in c:\program files\Maple 6\, for it to be found so quickly. Otherwise it may be necessary to change the current directory. One way to do this is to open Maple first, retaining the current directory under the Maple 6 directory. The DOS editor edit is a very simple editor that will do most simple things, and won't do what you don't ask, like putting in extra carriage returns or changing the file name to add an extension. The menu works from the alt key; for instance, alt-f activates the file menu, and that operation followed by an x exits edit.

Note also that 'pipefile' is a scratchfile and will not retain its contents: whenever the CaptureW command is activated, the file is overwritten. The data are displayed below so that the information is retained. If you want to 'regenerate' the original data, copy the output to an input execution group, add a semicolon, hit return, and regenerate sdata.

Also, the original data are lodged in file pipefile.txt, in the trichromatic directory.

>

> ssystem("edit pipefile");

[0,

> ### WARNING: persistent store makes one-argument readlib obsolete
readlib(readdata): open(pipefile,READ):

> sdata:=readdata(pipefile,[string]);

sdata := [...

> close(pipefile);

> map(parse,sdata);

Error, incorrect syntax in parse: reserved word `end` unexpected (4)

> nops(sdata);

135

One untoward data point at the end.

> sdata[1..134];

[...

> lldata:=map(x->[parse(x)],%); # conversion to listlist

lldata := [[33, 500, yaxis], [34, 193, yaxis], [34,...

> nops(%);

134

>

Checking for 'stutter' in the data set

The procedure stalk( ), derived on pg 130 of 'A Helping Hand', removes redundant data from the data set; that is, any 'stutters' in the digitizing process that have produced more than one point in their turn. Points that repeat one after another are removed. The procedure 'stalks' its prey and remembers the data it has processed: if it acts on two identical pieces of data in succession, the second will be ignored. Resetting of stalk is accomplished by invoking forget( ), which wipes out its memory. A 'bug' in the operation of stalk( ) is that it is necessary to assign an initial value to stalk, which enters its memory table; use a value that is unlikely ever to be used. Also, with earlier Maples, it is necessary to invoke the forget command through readlib(forget).

stalk is a recursive routine that uses some of its own attributes to make further judgements on itself. Recursion is a powerful tool in Maple; it may be used in procedures and in other ways, including manually and in branched multiple statements in Maple execution groups. The linear one-statement-at-a-time approach is easy to communicate, document and use, but it ignores many extraordinary approaches to problem solving; recursion matches some of the parallel and branched patterns of the human mind.

> ### WARNING: persistent store makes one-argument readlib obsolete
readlib(forget);

proc (f) local g, x, tx, val_x, y, i, in_pkg, pkg, ...

> stalk:=proc(x::anything) local rt,last;
if type(op(4,eval(stalk)),table) then      # op(4) of a procedure is its remember table
rt:=op(4,eval(stalk));
entries(rt); last:=op(%);                  # the single value remembered from the previous call
forget(stalk);                             # wipe the table, then remember only the current point
stalk([x]):=x;
if last=x then NULL else x fi;             # drop x if it repeats the previous point
else stalk([x]):=x; x;                     # first call: remember x and return it
fi;
end;

stalk := proc (x::anything) local rt, last; if type...

> stalk(dowf):=doup;

stalk(dowf) := doup

> stalk(weasiness):=gluttonous;

stalk(weasiness) := gluttonous

> type(op(4,eval(stalk)),table);

true

> op(4,eval(stalk)); # reveals the memory table

TABLE([weasiness = gluttonous, dowf = doup])

> entries(%);

[gluttonous], [doup]

> forget(stalk);

> op(4,eval(stalk)); # all erased

> stalk(zz):=whatever:

> map(stalk,[b,2,2,2,3,3,4,5,1,1]); # ignores the b

[b, 2, 3, 4, 5, 1]

> map(stalk,[7,2,2,2,3,3,4,5,1,1]); # gets all the repeats

[7, 2, 3, 4, 5, 1]

> op(4,eval(stalk));

TABLE([[1] = 1])

> data:=map(stalk,lldata);

data := [[33, 500, yaxis], [34, 193, yaxis], [34, 5...

> nops(data);

126

There were 8 repeats. Note that, here, we have ignored 'close encounters'; we might have found many more repeats had we allowed a small, but acceptable, error in the point comparison. This is not done in the book, but the reader may see how to alter stalk to do this -- at least for numbers, as sketched below.
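A possible sketch of such an alteration follows; the procedure name near_repeat and the one-pixel tolerance are ours, not from the book, and the comparison is made only on the first two (numeric) entries of each point.

> near_repeat:=proc(pt::list,tol::numeric) global last_kept;
# Sketch only: drop a point whose first two entries lie within tol of the
# previously kept point; otherwise keep it and remember it.
if type(last_kept,list) and abs(pt[1]-last_kept[1])<=tol
   and abs(pt[2]-last_kept[2])<=tol then NULL
else last_kept:=pt; pt
fi;
end:
> #map(near_repeat,lldata,1); # e.g. a tolerance of one pixel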

> #save data,stalk, "trichromatic.m";
# saves for further use, in a later session.
# the data are gradually built up in this file.
# Don't remove the # from the start unless you want to start over.

Calibration of the data

> restart;

> read "H:\\public_html\\ViewBook\\trichromatic\\trichromatic.m";

> #read "trichromatic.m";

> select(has,data,xaxis);select(has,data,yaxis);

[[34, 500, xaxis], [126, 502, xaxis], [220, 503, xa...

[[33, 500, yaxis], [34, 193, yaxis], [33, 500, yaxi...

> xdata:=select((x,n)->type(op(n,x),numeric),data,3);

xdata := [[33, 500, 400], [126, 502, 500], [220, 50...

The axes data are redundant (over-determined): the vertical or y-axis tick values relate to either zero or full scale, while the horizontal or x-axis tick values relate, in sequence, to 400, 500, 600 and 700 nm wavelength for the absorption or extinction measured. The use of least squares should give best values in both the vertical and horizontal cases. The arguments will be 0 and 1 for the vertical and 400, 500, 600 and 700 for the horizontal.

Four least-squares calculations are required: two for each of the two axes, one for each pixel coordinate.

> with(linalg);

Warning, the protected names norm and trace have been redefined and unprotected

[BlockDiagonal, GramSchmidt, JordanBlock, LUdecomp,...

> xticks:=map2(op,3,xdata);

xticks := [400, 500, 600, 700]

> xaxis_xposition:=map2(op,1,xdata);

xaxis_xposition := [33, 126, 220, 313]

> xaxis_yposition:=map2(op,2,xdata);

xaxis_yposition := [500, 502, 504, 503]

> nn:=nops(%);

nn := 4

Now we make the redundant set of equations by substituting all the corresponding x and y values into the algebraic form y=a*x+b. Note that curly brackets { } represent a set; a set has no particular order and no duplicate elements, as illustrated below.
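For example (illustration only):

> {3,1,2,2,1}; # duplicates collapse; Maple chooses its own ordering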

> eqs:={seq(subs({x=xticks[i],y=xaxis_xposition[i]},
y=a*x+b),
i=1..nn)};

eqs := {126 = 500*a+b, 220 = 600*a+b, 313 = 700*a+b...

> consts:=indets(%);

consts := {a, b}

> leastsqrs(eqs,consts);

{a = 467/500, b = -3407/10}

> xaxis_xeq:=subs(%,xpixel=a*wavelength+b);

xaxis_xeq := xpixel = 467/500*wavelength-3407/10

The x-axis line is determined by combining this equation for the x pixel with the similar one for the y pixel.

> leastsqrs({seq(subs({x=xticks[i],
y=xaxis_yposition[i]},y=a*x+b),i=1..nn)},consts);

{a = 11/1000, b = 2481/5}

> xaxis_yeq:=subs(%,ypixel=a*wavelength+b);

xaxis_yeq := ypixel = 11/1000*wavelength+2481/5

The combination of the last two equations, as parametric equations, yields the x,y equation for the xaxis as a line, with pixel arguments.

> solve(xaxis_xeq,wavelength)=
solve(xaxis_yeq,wavelength);

500/467*xpixel+170350/467 = 1000/11*ypixel-496200/1...

> xeq:=isolate(%,ypixel);

xeq := ypixel = 11/934*xpixel+934397/1868

>

Exercise

Include all the xaxis data in this calculation. The point is that Maple can do this analysis with large data sets, and can carry the analysis through with some intelligence, which is rather unexpected in a non-thinking computer program. The listlist data contains 3 different digitized versions of the axis, with tick marks in the same order.
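A possible starting point is sketched below; the assumption (ours) is that each digitized pass records the tick marks in the repeating order 400, 500, 600, 700, so the tick value can be assigned modulo four.

> # Sketch only -- assumes the ticks cycle 400, 500, 600, 700 in each pass
> xall:=select(has,data,xaxis):
> n:=nops(xall):
> tickcycle:=[400,500,600,700]:
> ticks:=[seq(tickcycle[irem(i-1,4)+1],i=1..n)]:
> eqsall:={seq(subs({x=ticks[i],y=xall[i][1]},y=a*x+b),i=1..n)}:
> leastsqrs(eqsall,{a,b});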

Next we repeat this calculation for the y-axis. These data are minimal and only come from the origin and the top of the scale, notionally given the values of 0 and 1. Here we will do the editing by hand, since there is so little to do. Copy the output on the second line and edit it.

> select(has,data,yaxis); # all the y-axis data

[[33, 500, yaxis], [34, 193, yaxis], [33, 500, yaxi...

> [[33,500,0], [34,193,1], [33,500,0], [34,191,1]];

[[33, 500, 0], [34, 193, 1], [33, 500, 0], [34, 191...

> yticks:=map2(op,3,%);

yticks := [0, 1, 0, 1]

> yaxis_xposition:=map2(op,1,%%);

yaxis_xposition := [33, 34, 33, 34]

> yaxis_yposition:=map2(op,2,%%%);

yaxis_yposition := [500, 193, 500, 191]

> nn:=nops(%);

nn := 4

> leastsqrs({seq(subs({x=yticks[i],
y=yaxis_xposition[i]},y=a*x+b),i=1..nn)},consts);

{a = 1, b = 33}

> yaxis_xeq:=subs(%,xpixel=a*rel_absorbance+b);

yaxis_xeq := xpixel = rel_absorbance+33

> leastsqrs({seq(subs({x=yticks[i],
y=yaxis_yposition[i]},y=a*x+b),i=1..nn)},consts);

{a = -308, b = 500}

> yaxis_yeq:=subs(%,ypixel=a*rel_absorbance+b);

yaxis_yeq := ypixel = -308*rel_absorbance+500

In this instance, the combination of the last two equations, with pixel arguments, determines the slope of the y-axis and the 'drawing line' used to locate the curve, in pixelated coordinates.

> solve(yaxis_xeq,rel_absorbance)=solve(yaxis_yeq,rel_absorbance);

xpixel-33 = -1/308*ypixel+125/77

> yeq:=isolate(%,ypixel);

yeq := ypixel = -308*xpixel+10664

It is interesting that this parametric analysis allows for similar errors in both the x and y values, as one would expect from the way the digitizing is done.

The Absorbance Curves

The pixelated data from the chlorophylls are specified by listlist elements containing a, b and c. Each will be fitted with spline functions.

> select(has,data,a);

[[34, 321, a], [42, 314, a], [45, 308, a], [57, 220...

> map(x->[op(1..2,x)],%); # listlist of a data

[[34, 321], [42, 314], [45, 308], [57, 220], [63, 2...

To be useful in the spline operation, these data must be made into separate lists of x and y values.

> unlink:=proc(x) map2(op,1,x),map2(op,2,x); end;

unlink := proc (x) map2(op,1,x), map2(op,2,x) end p...

> unlink(%%);

[34, 42, 45, 57, 63, 66, 73, 77, 82, 91, 114, 147, ...

Next we consider a spline function as a possible quasi-analytical fit to the data. These functions can be simple lines, or more complex quadratics, cubics, quartics and higher order fits. All the fits force the derivatives of orders less than the order of the fit to be continuous at the knots.
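As a toy illustration (the four points are invented, not the chlorophyll data), a cubic spline through them is a piecewise cubic whose first and second derivatives are continuous at the knots. The line is left commented out so as not to disturb the % and %% references used by the commands that follow.

> # spline([0,1,2,3],[0,1,0,1],x,cubic); # toy example only -- uncomment to inspect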

> spline(%%,x,linear);

Error, (in spline) 1st and 2nd arguments must be two lists or two vectors

The remaining 'error' is that one x value (at the end of the first row in the output above) is repeated. Locating such values is a trial-and-error process, redoing the following commands until the offending data points appear. Note that the 'ditto' here ignores the error raised above. For convenience, the procedure redundant( ) is written to pinpoint repeated x values and show their companion y values. Also, the sequence of lists (separated by a simple comma) is converted into a listlist so that the identity of the elements is obvious and trackable.

> offdata:=[%];

offdata := [[34, 42, 45, 57, 63, 66, 73, 77, 82, 91...

> 28..29;
offdata[1][%];
offdata[2][%%];

28 .. 29

[289, 289]

[341, 384]

> redundant:=proc(x,y) local i,n;
n:=nops(x);
for i to n-1 do
if x[i]=x[i+1] then
print(`Redundant x values `,[x[i],x[i+1]],` @ `,[i,i+1],
` correspond to y values `,[y[i],y[i+1]]);
fi;od;
end;

redundant := proc (x, y) local i, n; n := nops(x); ...

> redundant(op(offdata)); # op remakes the two lists

`Redundant x values       `, [289, 289], ` @ `, [28...

What is happening?

A strange situation: small errors in the digitizing have produced a vertical line, which the spline function cannot cope with. To avoid this, we could break the data into two pieces, leaving out the vertical feature and hoping that it will not interfere with the interpolation process during the analysis of the chlorophyll spectra. Here, for expedience, we 'corrupt' the data by inserting an average value. Later this should be checked to see if there is a significant effect on the resulting absorbance. The procedure combine_data( ) does the work of the multiple subsop( ) commands.

> subsop([2,28]=(341+384)/2,offdata);

[[34, 42, 45, 57, 63, 66, 73, 77, 82, 91, 114, 147,...

> subsop([2,29]=NULL,%); fixed_data:=op(subsop([1,29]=NULL,%));

[[34, 42, 45, 57, 63, 66, 73, 77, 82, 91, 114, 147,...

fixed_data := [34, 42, 45, 57, 63, 66, 73, 77, 82, ...

> combine_data:= proc(rng,x,y)
local r1,r2,n,m,xa,ya;
# replaces the points with indices in the range rng by a single averaged point
r1:=op(1,rng);r2:=op(2,rng); n:=r2-r1+1; m:=nops(x);
convert(x[rng],`+`)/n; convert(y[rng],`+`)/n; # the average x and average y over the range
[op(1..r1-1,x),%%,op(r2+1..m,x)],[op(1..r1-1,y),%,op(r2+1..m,y)]; end:

> fixed_data:=combine_data(28..29,op(offdata));

fixed_data := [34, 42, 45, 57, 63, 66, 73, 77, 82, ...

> spline(%,x,linear);
# appropriate linear spline function
adata:=unapply(%,x): # generates procedure adata( )

PIECEWISE([1403/4-7/8*x, x < 42],[398-2*x, x < 45],...

> plot(-adata(x),x=40..310);

[Maple Plot]

The curve is upside down because the pixel values are counted relative to the upper left of the screen. It is interesting that we can analyse for peak values by simply 'clicking on' the curve at the maxima and reading the pixel values in the Context Menu, on the left side. This is not improved by using higher order interpolation functions, such as quadratic, cubic, quartic or higher order splines.
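As an alternative to clicking on the plot, a peak can also be located numerically (a sketch only; the scan range 220..310 pixels is our guess at the red end of the curve, and the names best and ii are ours):

> # Scan adata( ) for its smallest value (smallest, because pixel y runs downwards)
> best:=220: for ii from 221 to 310 do
  if adata(ii)<adata(best) then best:=ii fi od:
> best, adata(best); # pixel coordinates of the red peak of chlorophyll a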

> spline(fixed_data,x,quadratic): plot(-%,x=40..310);

[Maple Plot]

> spline(fixed_data,x,cubic): plot(-%,x=40..310);

[Maple Plot]

> select(has,data,b): map(x->[op(1..2,x)],%): junk:=unlink(%):

> redundant(%);

`Redundant x values       `, [276, 276], ` @ `, [37...

As before, some offensive data.

> combine_data(37..38,junk);

[33, 40, 44, 48, 56, 59, 61, 66, 72, 77, 83, 89, 93...

> spline(%,x,linear):
# worth having a look (convert to ;)
bdata:=unapply(%,x):

> select(has,data,c): map(x->[op(1..2,x)],%): junk:=unlink(%):redundant(%);

No offensive data

> spline(%,x,linear):

> cdata:=unapply(%,x):

> plot(-[adata,bdata,cdata],40..310); # Maple scales and adds tables

[Maple Plot]

> save data,adata,bdata,cdata,
unlink,redundant,combine_data,stalk, "trichromatic.m";

Reading off the curve

The procedure is to pick a wavelength and calculate the xpixel and ypixel values using xaxis_xeq and xaxis_yeq, respectively. Then run parallel to the y-axis, using yeq, to form a line that must intersect the curve. To avoid vectors and the problem of infinite slopes, note that a parallel shift of a line is equivalent to adding an arbitrary constant to its equation.

> xaxis_xeq; xaxis_yeq;

xpixel = 467/500*wavelength-3407/10

ypixel = 11/1000*wavelength+2481/5

> lhs(yeq)=rhs(yeq)+const;

ypixel = -308*xpixel+10664+const

Let us do this with a single value of wavelength, 650 nanometers.

> subs({subs(wavelength=650,%%%),subs(wavelength=650,%%)},%);

10067/20 = -356936/5+const

> solve(%,const);

1437811/20

> subs(const=%,%%%);

ypixel = -308*xpixel+1651091/20

This line must run through the curve, to find the value of y.

> xpixel=solve(adata(xpixel)=rhs(%),xpixel); subs(%,%%);

xpixel = 1639555/6152

ypixel = 7231629/15380

These values are the x and y pixel values that correspond to the curve in question, for chlorophyll a.

Now we run back to the y axis, along a line parallel to the xaxis.

> lhs(xeq)=rhs(xeq)+const; subs({%%,%%%},%);

ypixel = 11/934*xpixel+934397/1868+const

7231629/15380 = 2892240277/5745968+const

> const=solve(%,const);

const = -952518413/28729840

> subs(%,%%%); # the equation for the line from the curve to the yaxis

ypixel = 11/934*xpixel+13418507447/28729840

> solve({%,yeq},{xpixel,ypixel});

{ypixel = 94011558289/201116570, xpixel = 292956506...

> {isolate(yaxis_yeq,rel_absorbance),isolate(yaxis_xeq,rel_absorbance)};

{rel_absorbance = -1/308*ypixel+125/77, rel_absorba...

> subs(%%,%); # the results of both substitutions are the same

{rel_absorbance = 935246673/8849129080}

So, at 650 nanometers, the relative absorption for chlorophyll a is

> evalf(%,3);

{rel_absorbance = .106}

Roughly, this looks reasonable on the figure. Let us now do this automatically, with a procedure, absorbance( ).

Automatic Fitting of the Data

> absorbance:=proc(nm,abc) local abcdata;
global xeq,yeq,xaxis_xeq,xaxis_yeq,yaxis_xeq,yaxis_yeq;
abcdata:=cat(abc,"data"); # name of pixel procedure
lhs(yeq)=rhs(yeq)+const;
subs({subs(wavelength=nm,xaxis_xeq),
subs(wavelength=nm,xaxis_yeq)},%);
subs(const=solve(%,const),
%%);
xpixel=solve(abcdata(xpixel)=rhs(%),xpixel); subs(%,%%);
lhs(xeq)=rhs(xeq)+const; subs({%%,%%%},%);
const=solve(%,const);subs(%,%%%);
solve({%,yeq},{xpixel,ypixel});
{isolate(yaxis_yeq,rel_absorbance),
isolate(yaxis_xeq,rel_absorbance)};
subs(%%,%); subs(%,rel_absorbance);
evalf(%,3);
end;

absorbance := proc (nm, abc) local abcdata; global ...

> map2(absorbance,670,[a,b,c]);

[.736, .856e-1, .780e-2]

The three relative absorbances at 670 nanometers.

> save data,adata,bdata,cdata,absorbance,
xeq,yeq,xaxis_xeq,xaxis_yeq,yaxis_xeq,yaxis_yeq,
redundant,combine_data,stalk,unlink, "trichromatic.m";

>

Reproduced Spectral Plot

The procedure above is complete and detailed, but cumbersome and inefficient. For plotting and further analysis it is far better to remake the data in 'a fit and proper' form and then fit the data to a functional form. To do this, we convert the data into a simple abscissa-and-ordinate set; the selected wavelength is then used simply to choose the expected absorbance.

To do this, a single point on the curve is considered, in terms of pixels. A line parallel to the y-axis is run vertically from this point to find the intercept with the x-axis, the nanometer value. Another line parallel to the x-axis intersects the y-axis at the rel_absorbance value.

> restart;

> read "H:\\public_html\\ViewBook\\trichromatic\\trichromatic.m";

> #read "trichromatic.m";

> absorbance(680.,a);

.151

> xeq,yeq;

ypixel = 11/934*xpixel+934397/1868, ypixel = -308*x...

> select(has,data,a);

[[34, 321, a], [42, 314, a], [45, 308, a], [57, 220...

> map(x->[op(1..2,x)],%);

[[34, 321], [42, 314], [45, 308], [57, 220], [63, 2...

> dpt:=op(1,%);

dpt := [34, 321]

Take the first data point of the chlorophyll a absorption and fit it to the y-axis drawing line (yeq), offset by a constant.

> op(1,yeq)=op(2,yeq)+const;

ypixel = -308*xpixel+10664+const

> subs(xpixel=dpt[1],ypixel=dpt[2],%);

321 = 192+const

> subs(const=solve(%,const),%%);

ypixel = -308*xpixel+10793

> solve({xeq,%},{xpixel,ypixel});

{ypixel = 13092351/26153, xpixel = 19226927/575366}...

> subs(%,[xaxis_xeq,xaxis_yeq]);

[19226927/575366 = 467/500*wavelength-3407/10, 1309...

> map(solve,%,wavelength);

[115232400/287683, 115232400/287683]

These equations give exactly the same wavelength, corresponding to the pixel values of the digitized data point. Do the same thing with the vertical. First, go horizontal, parallel to the x-axis, to meet the y-axis. The intersection with the y-axis specifies the pixel value along the y-axis. Back-calculating the rel_absorbance gives the y-value of a 'fit and proper' data point, in wavelength and rel_absorbance.

> op(1,xeq)=op(2,xeq)+const;

ypixel = 11/934*xpixel+934397/1868+const

> subs(xpixel=dpt[1],ypixel=dpt[2],%);

321 = 935145/1868+const

> subs(const=solve(%,const),%%);

ypixel = 11/934*xpixel+149720/467

> solve({yeq,%},{xpixel,ypixel});

{xpixel = 9660736/287683, ypixel = 8394984/26153}

> subs(%,[yaxis_xeq,yaxis_yeq]);

[9660736/287683 = rel_absorbance+33, 8394984/26153 ...

> map(solve,%,rel_absorbance);

[167197/287683, 167197/287683]

The procedure below completes the operation. Note that it contains some 'cut and paste', to smooth the operation.

> fit_N_proper:=proc(dpt,a) local x,y;
global xeq,yeq,xaxis_xeq,xaxis_yeq,yaxis_xeq,yaxis_yeq;
ypixel = -308*xpixel+10664+const;

subs(xpixel=dpt[1],ypixel=dpt[2],%);
subs(const=solve(%,const),%%);
solve({xeq,%},{xpixel,ypixel});
subs(%,{xaxis_xeq,xaxis_yeq});
x:=op(map(solve,%,wavelength));
# 'bombs out' if values are not the same
op(1,xeq)=op(2,xeq)+const;
subs(xpixel=dpt[1],ypixel=dpt[2],%);
subs(const=solve(%,const),%%);
solve({yeq,%},{xpixel,ypixel});
subs(%,{yaxis_xeq,yaxis_yeq});
y:=op(map(solve,%,rel_absorbance));
[x,y]; end;

fit_N_proper := proc (dpt, a) local x, y; global xe...

> select(has,data,a):

> map(x->[op(1..2,x)],%):

> a_data:=evalf(map(fit_N_proper,%),3);

a_data := [[401., .581], [409., .604], [412., .624]...

> select(has,data,b):

> map(x->[op(1..2,x)],%):

> b_data:=evalf(map(fit_N_proper,%),3);

b_data := [[400., .133], [407., .189], [412., .234]...

> select(has,data,c):

> map(x->[op(1..2,x)],%):

> c_data:=evalf(map(fit_N_proper,%),3);

c_data := [[400., .227], [405., .270], [410., .358]...

> plot([a_data,b_data,c_data],400..725,
style=line, title="Relative Absorbances",
titlefont=[TIMES,BOLD,16], thickness=2,
legend=["chlorophyll a","chlorophyll b","chlorophyll c"]):

> PLOT(TEXT([720,0],"nanometers",ALIGNBELOW,ALIGNRIGHT),
AXESTICKS([400,450,500,550,650,700],[.2,.4,.6,.8])):

>
plots[display](%,%%); RAP:=%:

[Maple Plot]

>

> save data,adata,bdata,cdata,absorbance,
fit_N_proper,a_data,b_data,c_data,
xeq,yeq,xaxis_xeq,xaxis_yeq,yaxis_xeq,yaxis_yeq,
redundant,combine_data,stalk,unlink, "trichromatic.m";

The Absorbance Maximums

These are the values used in the trichromatic analysis, with a wavelength shift of perhaps 10-20 nanometers, depending on the solvent. The real wavelengths of interest, [664, 647, 630], must correspond to these wavelengths; these are all the second peaks. The easiest way to get the ratios of extinction coefficients for the 6 off-peak values is simply to 'refit' the 'Fit_N_Proper' data.

> restart;

> read "H:\\public_html\\ViewBook\\trichromatic\\trichromatic.m";

> #read "trichromatic.m";

> `chlorophyll a`,429, 0.92,667, 0.76;

> `chlorophyll b`,452, 0.84,650, 0.43;

> `chlorophyll c`,447, 0.92,634, 0.07;

`chlorophyll a`, 429, .92, 667, .76

`chlorophyll b`, 452, .84, 650, .43

`chlorophyll c`, 447, .92, 634, .7e-1

> unlink(a_data);

This has untoward data at 5 and 6 points from the end.

[401., 409., 412., 425., 431., 434., 442., 447., 45...

> subsop(-6=NULL,%[1]),subsop(-6=NULL,-5=evalf((%[2][-5]+%[2][-6])/2,3),%[2]);

[401., 409., 412., 425., 431., 434., 442., 447., 45...

> spline(%,x,linear);

PIECEWISE([-.5718749964+.2874999991e-2*x, x < 409.]...

> a_rel_absorbance:=unapply(%,x):

>

> unlink(b_data); # same error--lucky?

[400., 407., 412., 416., 424., 428., 430., 435., 44...

> subsop(-6=NULL,%[1]),subsop(-6=NULL,-5=evalf((%[2][-5]+%[2][-6])/2,3),%[2]);

[400., 407., 412., 416., 424., 428., 430., 435., 44...

> spline(%,x,linear):

> b_rel_absorbance:=unapply(%,x):

>

> unlink(c_data); # no errors

[400., 405., 410., 421., 432., 439., 443., 447., 45...

> spline(%,x,linear):

> c_rel_absorbance:=unapply(%,x):

The relative absorbances should give a handle on the 'off-peak' values, that is, the contribution of the other chlorophylls to an absorption that is dominated by one of chlorophyll a, b or c. This is done by taking ratios.
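As a cross-check (a sketch only, our own, not part of the original spreadsheet work), the raw Relative*OD entries used below can be read straight from the spline procedures at the solvent-shifted peak wavelengths:

> # Sketch: relative absorbances of a, b and c at 667, 650 and 634 nm
> [seq([w, evalf(a_rel_absorbance(w),4), evalf(b_rel_absorbance(w),4),
      evalf(c_rel_absorbance(w),4)], w=[667,650,634])];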

In this case, a spreadsheet is used. It is obtained through the Menu item Insert/Spreadsheet. Notice that, when you 'click on' a certain cell in the spreadsheet, the Context Menu bar appears at the bottom of the menus, with an open CM window. Copying from the worksheet is accomplished by the normal Ctrl-c, Ctrl-v when the item is placed into the CM window. You must be careful to activate the correct cell and have the attention of the CM window before you paste in the item. Use Enter to complete the operation, and place the item in the cell.

When referring to a datum from another cell, add a ~ before the cell descriptor to alert the spreadsheet that the information is to come from that cell. You will probably have to resize cells now and then. Adjust the lower right corner by 'clicking on' the spreadsheet and grabbing the corner. If the formatting gets out of hand, 'click on' the upper left corner, and resize the column and row widths using Spreadsheet/Column/Width and Spreadsheet/Row/Height.

Of course, you don't need to copy in all the information. Editing and Fill/Down will complete the calculations, provided the relative referencing is done properly. Note that after you have highlighted where the Fill is to happen, you can complete the action with either the right mouse button or the Spreadsheet menu item. Inserting a column (or row) is completed by clicking on the head of the column just after where you want the insertion, and using Spreadsheet/Column/Insert. Formatting, and the number of displayed decimals, is altered under Properties, either from the menu or the right mouse button.

> ?worksheet,spreadsheet,references

Wavelength  Relative*OD[a]  `Divided by max`  Relative*OD[b]  `Divided by max`  Relative*OD[c]  `Divided by max`
667         .756000000      1.000000000       .117666668      .1556437407       .13875556e-1    .1835391005e-1
650         .1049500000     .2452102804       .428000000      1.000000000       .50033333e-1    .1169003107
634         .1090000000     1.487039563       .19200000       2.619372442       .733000000e-1   1.000000000

Wavelength  epsilon[a]   epsilon[b]   epsilon[c]
667         87.64        10.31230678  1.216053728
650         12.59890421  51.38        2.570712650
634         63.34788538  111.5852660  42.60

From the table, it is clear that chlorophyll c is not even the primary absorber at its peak wavelength. Using ratios, the cross-extinction coefficients may be estimated. From above, with a little editing, the standard matrix of extinctions, for concentrations in mg/litre, is as follows:

> E:=matrix([[87.6,6.79,1.02],
[24.33,51.4,5.65],
[13.5,16.3,42.6]]);

E := matrix([[87.6, 6.79, 1.02], [24.33, 51.4, 5.65], [13.5, 16.3, 42.6]])

The generated results are what was expected from the spectra, but they are quite removed from the standard results. One option is to look at the ratios at the actual wavelengths used, ignoring the offsets due to the solvent environment; this doesn't make a lot of difference. Some further corroboration of this disparity is required, perhaps with some other data.

>

Wavelength  Relative*OD[a]  `Divided by max`  Relative*OD[b]  `Divided by max`  Relative*OD[c]  `Divided by max`
664         .746250000      1.000000000       .153166667      .2052484650       .20268889e-1    .2716099028e-1
647         .1018000000     .2422655878       .420200000      1.000000000       .56433333e-1    .1343011257
630         .101080000      1.381818182       .146857143      2.007616446       .7314999999e-1  1.000000000

Wavelength  epsilon[a]   epsilon[b]   epsilon[c]
664         87.64        13.42352670  1.776365432
647         12.44760590  51.38        6.900391838
630         58.86545455  85.52446060  42.60

> evalm(E);

matrix([[87.6, 6.79, 1.02], [24.33, 51.4, 5.65], [13.5, 16.3, 42.6]])

A last Variation

These results show a significant difference in the 'cross extinction' coefficients. It would be better not to rely on the relationship between the curves, and to use only values from within each curve.
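A sketch of that calculation (ours, not from the book) is to scale each chlorophyll's own peak extinction coefficient by the ratio of its relative absorbance at the wavelength of interest to its relative absorbance at its own 'diagonal' wavelength; this should reproduce, to rounding, the epsilon table below.

> # Sketch only: within-curve ratios times the accepted peak coefficients
> wls:=[664,647,630]: epk:=[87.64,51.38,42.60]:
> relOD:=[a_rel_absorbance,b_rel_absorbance,c_rel_absorbance]:
> matrix(3,3,(i,j)->evalf(epk[j]*relOD[j](wls[i])/relOD[j](wls[j]),4));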

>

Wavelength  Relative*OD[a]  `Divided by max`  Relative*OD[b]  `Divided by max`  Relative*OD[c]  `Divided by max`
664         .746250000      1.000000000       .153166668      .3645089672       .20268889e-1    .2770866576
647         .1018000000     .1364154104       .420200000      1.000000000       .56433333e-1    .7714741355
630         .101080000      .1354505863       .146857143      .3494934388       .7314999998e-1  1.000000000

Wavelength  epsilon[a]   epsilon[b]   epsilon[c]
664         87.64        18.72847073  11.80389161
647         11.95544657  51.38        32.86479817
630         11.87088938  17.95697289  42.60

> evalm(E);

matrix([[87.6, 6.79, 1.02], [24.33, 51.4, 5.65], [13.5, 16.3, 42.6]])

The comparison of E, above, with the preceding spreadsheets reveals that there is not a consistent verification of the data in E. In general, however, it is possible that epsilon[630,a], epsilon[630,b], epsilon[664,c], eps... are correct. However, it appears that epsilon[664,b] is perhaps half the value expected from the spectra, and epsilon[647,a] about twice the value expected from the spectra. These data need further confirmation, and probably should not be used quantitatively before the whole matter is reinvestigated.

>

Exercise

The figure containing the spectra of chlorophylls a, b and c is from Halldal, Accessory Pigments and Enhancement Effects, in Checcucci, A. and Weale, R. A. (1973), Primary Molecular Events in Photobiology, Elsevier, pg 167. That figure, digitized, is presented as chlorophyll.gif and chlorophyll.eps in the directory 'Light extinction figures'. An alternative set of figures is presented in J. T. O. Kirk's book, Light and Photosynthesis in Aquatic Ecosystems, 2nd Ed. (1996), Cambridge, pg 229. These figures are also scanned into the same directory. Using your viewer or your browser, repeat the calculations as given above and see whether this alternative data produces the same results. Use the procedures given, as much as possible, to systematise the analysis and, hopefully, minimise your effort. Remember that the figures are generally derived for different solvents, which can shift the curves 10-20 nanometers or so. Hence it is a good idea to record only relative values, perhaps relative to the peak values.