HP Forums
Interpolations for Two and Three Independent Variables - Printable Version




Interpolations for Two and Three Independent Variables - Namir - 11-15-2011

Recently I was looking for information about multivariable interpolation. I was curious about interpolation with two and three independent variables. My first stop was Wikipedia. The algorithm presented there dealt with interpolation based on the coordinates of the independent variables forming a square. That is, the bilinear interpolation used points (x1,y1), (x1,y2), (x2,y1), and (x2,y2).

Further search on the Internet located a paper that dealt with bilinear and trilinear interpolation. The paper showed more elegant equations for square-shaped coordinates for the bilinear interpolation. The paper also included equations for the cube-shaped trilinear interpolation. Implementing these algorithms in BASIC, RPN, or RPL should be easy.

After implementing the bilinear and trilinear interpolations, I asked myself the question, “What if the anchor coordinates for the bilinear interpolation do not form a square? Likewise, what if the anchor coordinates for the trilinear interpolation do not form a cube?”

I proceeded to calculate the coefficients for the multi-variable polynomials used for the bilinear and trilinear interpolations. After a little bit of math wrangling, I realized that my best bet was to use Vandermonde types of matrices. Such matrices would allow me to calculate the coefficients of the interpolating polynomials. In addition, you need fewer anchor points than the square or cube approaches require. One advantage of calculating the interpolating polynomial coefficients is that you can reuse them for additional interpolations that use the same anchor points. This advantage is not available when using interpolation equations that simply yield the final answer.

Using the Vandermonde types of matrices allowed me to easily implement quadratic interpolations for two and three independent variables. If the implementation uses a language that works with matrices, then the calculations are greatly simplified! Such is the case with Matlab, RPL, and the HP-71B with the MATH ROM.

Namir


Re: Interpolations for Two and Three Independent Variables - Marcus von Cube, Germany - 11-15-2011

Namir, do you think the matrix support in WP 34S is strong enough to support your algorithms? Can you post some background and a solution to look at?


Re: Interpolations for Two and Three Independent Variables - Namir - 11-15-2011

In the case of a bilinear interpolation, one can build the Vandermonde matrix A as

+- -+
| 1 x1 y1 |
| |
| 1 x2 y2 |
| |
| 1 x3 y3 |
+- -+

and the B vector as:

+- -+
| f(x1,y1) |
| |
| f(x2,y2) |
| |
| f(x3,y3) |
+- -+

The coefficients in array C are obtained by:

C = inverse(A) * B

To interpolate at point (Xint,Yint) use:

y = C(1) + C(2) * Xint + C(3) * Yint
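
For illustration, here is a minimal sketch of these steps in Python with NumPy (not one of the calculator languages discussed in this thread, but easy to port to Matlab or RPL). The anchor points and values are made up, just to show the mechanics; C(1)..C(3) above become C[0]..C[2] in 0-based indexing:

import numpy as np

# Three anchor points (x, y) and their function values -- made-up data
pts = [(1.0, 1.0), (2.0, 1.0), (1.0, 3.0)]
fv  = [4.0, 7.0, 2.0]

# Vandermonde-type matrix A with rows [1, x, y], and vector B of f values
A = np.array([[1.0, x, y] for (x, y) in pts])
B = np.array(fv)

# Coefficients of the interpolating plane f = C[0] + C[1]*x + C[2]*y
C = np.linalg.inv(A) @ B

# Interpolate at (Xint, Yint)
Xint, Yint = 1.5, 2.0
print(C[0] + C[1] * Xint + C[2] * Yint)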

In the case of a trilinear interpolation, one can build the Vandermonde matrix A as

+- -+
| 1 x1 y1 z1 |
| |
| 1 x2 y2 z2 |
| |
| 1 x3 y3 z3 |
| |
| 1 x4 y4 z4 |
+- -+

and the B vector as:

+- -+
| f(x1,y1,z1) |
| |
| f(x2,y2,z2) |
| |
| f(x3,y3,z3) |
| |
| f(x4,y4,z4) |
+- -+

The coefficients in array C are obtained by:

C = inverse(A) * B

To interpolate at point (Xint,Yint,Zint) use:

y = C(1) + C(2) * Xint + C(3) * Yint + C(4) * Zint
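
The structure is the same as in the bilinear sketch above; only one column and one anchor point are added. Again in Python/NumPy with made-up values:

import numpy as np

# Four anchor points (x, y, z) and their function values -- made-up data
pts = [(1.0, 1.0, 1.0), (2.0, 1.0, 1.0), (1.0, 2.0, 1.0), (1.0, 1.0, 2.0)]
fv  = [5.0, 6.0, 8.0, 3.0]

# Rows are now [1, x, y, z]
A = np.array([[1.0, x, y, z] for (x, y, z) in pts])
C = np.linalg.inv(A) @ np.array(fv)

Xint, Yint, Zint = 1.5, 1.5, 1.5
print(C[0] + C[1]*Xint + C[2]*Yint + C[3]*Zint)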




Re: Interpolations for Two and Three Independent Variables - Marcus von Cube, Germany - 11-15-2011

Namir, thanks. It doesn't look impossible to implement on the 34S. Do you have a benchmark example?

Instead of multiplying by the inverse of the matrix, a linear solve should do as well.
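
In NumPy terms, for instance, that step would look like this (using the same made-up A and B as in the bilinear sketch above):

import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 3.0]])
B = np.array([4.0, 7.0, 2.0])

# Solve A*C = B directly instead of forming inverse(A) explicitly
C = np.linalg.solve(A, B)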


Re: Interpolations for Two and Three Independent Variables - Namir - 11-15-2011

Here are benchmarks for the bilinear and trilinear interpolation:

Bilinear benchmark data:

X   Y   FX
3   2    5
3   4   -1
5   2    9
5   4    3

Interpolation point: X = 4, Y = 3
Z Calc = 4
Z Act  = 4

Trilinear benchmark data:

X   Y   Z   FX
1   1   1   16
1   1   2   21
1   2   1   19
1   2   2   24
2   1   1   14
2   1   2   19
2   2   1   17
2   2   2   22

Interpolation point: X = 1.5, Y = 1.5, Z = 1.5
FX Calc = 19
FX Act  = 19
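
For anyone who wants to check an implementation against these numbers, here is a sketch in Python/NumPy. It assumes the linear models from the earlier post, so it uses only as many anchor points from each table as those models have coefficients (three for the bilinear case, four for the trilinear case):

import numpy as np

# Bilinear benchmark: three of the anchor points from the first table
A2 = np.array([[1.0, 3.0, 2.0],
               [1.0, 3.0, 4.0],
               [1.0, 5.0, 2.0]])
B2 = np.array([5.0, -1.0, 9.0])
C2 = np.linalg.solve(A2, B2)
print(C2[0] + C2[1]*4 + C2[2]*3)                   # expected: 4

# Trilinear benchmark: four anchor points with all three variables varying
A3 = np.array([[1.0, 1.0, 1.0, 1.0],
               [1.0, 1.0, 1.0, 2.0],
               [1.0, 1.0, 2.0, 1.0],
               [1.0, 2.0, 1.0, 1.0]])
B3 = np.array([16.0, 21.0, 19.0, 14.0])
C3 = np.linalg.solve(A3, B3)
print(C3[0] + C3[1]*1.5 + C3[2]*1.5 + C3[3]*1.5)   # expected: 19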

Speed is not a real issue here because there are no iterations.

Namir


Re: Interpolations for Two and Three Independent Variables - Namir - 11-15-2011

In the case of a biquadratic interpolation, one can build the Vandermonde matrix A as

+- -+
| 1 x1 y1 x1^2 y1^2 x1*y1 |
| |
| 1 x2 y2 x2^2 y2^2 x2*y2 |
| |
| 1 x3 y3 x3^2 y3^2 x3*y3 |
| |
| 1 x4 y4 x4^2 y4^2 x4*y4 |
| |
| 1 x5 y5 x5^2 y5^2 x5*y5 |
| |
| 1 x6 y6 x6^2 y6^2 x6*y6 |
| |
+- -+

and the B vector as:

+- -+
| f(x1,y1) |
| |
| f(x2,y2) |
| |
| f(x3,y3) |
| |
| f(x4,y4) |
| |
| f(x5,y5) |
| |
| f(x6,y6) |
+- -+

The coefficients in array C are obtained by:

C = inverse(A) * B

To interpolate at point (Xint,Yint) use:

y = C(1) + C(2) * Xint + C(3) * Yint + C(4)*Xint^2 + C(5) * Yint^2 + C(6) * Xint * Yint
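
A quick Python/NumPy sketch of the biquadratic case (the six anchor points and values are made up, chosen only so the matrix is nonsingular):

import numpy as np

def row2(x, y):
    # Biquadratic basis: 1, x, y, x^2, y^2, x*y
    return [1.0, x, y, x*x, y*y, x*y]

# Six anchor points (x, y) and their function values -- made-up data
pts = [(0, 0), (1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]
fv  = [1.0, 2.0, 3.0, 5.0, 7.0, 4.0]

A = np.array([row2(x, y) for (x, y) in pts])
C = np.linalg.solve(A, np.array(fv))

# Interpolated value at (Xint, Yint)
Xint, Yint = 0.5, 0.5
print(C @ np.array(row2(Xint, Yint)))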

In the case of a triquadratic interpolation, one can build the Vandermonde matrix A as

+- -+
| 1 x1 y1 z1 x1^2 y1^2 z1^2 x1*y1 x1*z1 y1*z1 x1*y1*z1 |
| |
| 1 x2 y2 z2 x2^2 y2^2 z2^2 x2*y2 x2*z2 y2*z2 x2*y2*z2 |
| |
| 1 x3 y3 z3 x3^2 y3^2 z3^2 x3*y3 x3*z3 y3*z3 x3*y3*z3 |
| |
| 1 x4 y4 z4 x4^2 y4^2 z4^2 x4*y4 x4*z4 y4*z4 x4*y4*z4 |
| |
| 1 x5 y5 z5 x5^2 y5^2 z5^2 x5*y5 x5*z5 y5*z5 x5*y5*z5 |
| |
| 1 x6 y6 z6 x6^2 y6^2 z6^2 x6*y6 x6*z6 y6*z6 x6*y6*z6 |
| |
| 1 x7 y7 z7 x7^2 y7^2 z7^2 x7*y7 x7*z7 y7*z7 x7*y7*z7 |
| |
| 1 x8 y8 z8 x8^2 y8^2 z8^2 x8*y8 x8*z8 y8*z8 x8*y8*z8 |
| |
| 1 x9 y9 z9 x9^2 y9^2 z9^2 x9*y9 x9*z9 y9*z9 x9*y9*z9 |
| |
| 1 x10 y10 z10 x10^2 y10^2 z10^2 x10*y10 x10*z10 y10*z10 x10*y10*z10 |
| |
| 1 x11 y11 z11 x11^2 y11^2 z11^2 x11*y11 x11*z11 y11*z11 x11*y11*z11 |
| |
+- -+

and the B vector as:

+- -+
| f(x1,y1,z1) |
| |
| f(x2,y2,z2) |
| |
| f(x3,y3,z3) |
| |
| f(x4,y4,z4) |
| |
| f(x5,y5,z5) |
| |
| f(x6,y6,z6) |
| |
| f(x7,y7,z7) |
| |
| f(x8,y8,z8) |
| |
| f(x9,y9,z9) |
| |
| f(x10,y10,z10) |
| |
| f(x11,y11,z11) |
+- -+

The coefficients in array C are obtained by:

C = inverse(A) * B

To interpolate at point (Xint,Yint,Zint) use:

y = C(1) + C(2) * Xint + C(3) * Yint + C(4) * Zint +
C(5) * Xint^2 + C(6) * Yint^2 + C(7) * Zint^2 +
C(8) * Xint * Yint + C(9) * Xint * Zint + C(10) * Yint * Zint +
C(11) * Xint * Yint * Zint
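
Typing the triquadratic matrix by hand is tedious, so a small helper that builds one row of the matrix keeps an implementation manageable. A Python/NumPy sketch (the helper name and the anchor points are mine, chosen only so the matrix is nonsingular):

import numpy as np

def row3(x, y, z):
    # Triquadratic basis: 1, x, y, z, x^2, y^2, z^2, x*y, x*z, y*z, x*y*z
    return [1.0, x, y, z, x*x, y*y, z*z, x*y, x*z, y*z, x*y*z]

# Eleven anchor points (x, y, z); the values are made-up data
pts = [(0,0,0), (1,0,0), (0,1,0), (0,0,1), (2,0,0), (0,2,0),
       (0,0,2), (1,1,0), (1,0,1), (0,1,1), (1,1,1)]
fv  = [1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 10.0, 5.0, 7.0, 9.0, 11.0]

A = np.array([row3(*p) for p in pts])
C = np.linalg.solve(A, np.array(fv))

# Interpolated value at (Xint, Yint, Zint)
Xint, Yint, Zint = 0.5, 0.5, 0.5
print(C @ np.array(row3(Xint, Yint, Zint)))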




Re: Interpolations for Two and Three Independent Variables - Namir - 11-15-2011

The advantage of using the Vandermonde type of matrices is that you can use them to define your own custom interpolation function. For example, if I want to interpolate using:

Z = a + b/X + c/Y + d*X*Y

Then the Vandermonde matrix will have the following columns:

Column 1: a column of ones.
Column 2: a column of 1/X values.
Column 3: a column of 1/Y values.
Column 4: a column of X*Y values.

Since we have 4 coefficients we need four rows in the Vandermonde matrix.
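
Here is a minimal sketch of that idea in Python/NumPy, with the basis written as a list of functions, one per column of the matrix (the anchor points and values are made up purely for illustration, with nonzero X and Y so that 1/X and 1/Y exist):

import numpy as np

# Basis for Z = a + b/X + c/Y + d*X*Y
basis = [lambda x, y: 1.0,
         lambda x, y: 1.0 / x,
         lambda x, y: 1.0 / y,
         lambda x, y: x * y]

# Four anchor points and values -- made-up data
pts = [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0), (2.0, 2.0)]
fv  = [3.0, 4.0, 5.0, 7.0]

A = np.array([[f(x, y) for f in basis] for (x, y) in pts])
C = np.linalg.solve(A, np.array(fv))

# Interpolate at (Xint, Yint) using the same basis
Xint, Yint = 1.5, 1.5
print(sum(c * f(Xint, Yint) for c, f in zip(C, basis)))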

Another example is using the following model that involves three independent variables X, Y and T:

Z = a + b sin(X) + c sin(2*X) + d ln(Y/T) + e X*T

Then the Vandermonde matrix will have the following columns:

Column 1: a column of ones.
Column 2: a column of sin(X) values.
Column 3: a column of sin(2*X) values.
Column 4: a column of ln(Y/T) values.
Column 5: a column of X*T values.

Since we have 5 coefficients we need five rows in the Vandermonde matrix.
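
With the same approach as the sketch above, only the basis list changes. In that Python notation, for instance (shown just to illustrate the shape of the basis; the anchor data is omitted here):

import numpy as np

# Basis for Z = a + b*sin(X) + c*sin(2*X) + d*ln(Y/T) + e*X*T
basis = [lambda x, y, t: 1.0,
         lambda x, y, t: np.sin(x),
         lambda x, y, t: np.sin(2 * x),
         lambda x, y, t: np.log(y / t),
         lambda x, y, t: x * t]
# Five anchor points (x, y, t) then give a 5x5 Vandermonde-type matrix
# with rows [f(x, y, t) for f in basis], solved exactly as before.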

Thus the Vandermonde matrix approach gives us the power to work with any number of variables that suits our application, and to choose from a wide variety of models (thousands if not millions) for interpolation. This conclusion answers an old question of mine about pushing new limits for interpolation.

Another way to look at this approach is that we are basically using Vandermonde-style matrices to do a curve fit with the minimum required number of data points (meaning the R-squared of the fitted model is always 1).

Namir


Edited: 15 Nov 2011, 5:45 p.m.