1. I have written an executable script that reads a large sample file ("Sample.data") and uses Rbf to create an interpolant function:

- Code: Select all
```python
import numpy as np
from scipy.interpolate import Rbf

x1i, x2i, x3i, x4i, x5i, x6i, x7i, x8i, x9i, x10i, x11i, x12i, x13i, x14i, x15i, di = np.loadtxt('/home/hhamdi/Desktop/RBF_Test/data.txt', unpack=True)

rbfi = Rbf(x1i, x2i, x3i, x4i, x5i, x6i, x7i, x8i, x9i, x10i, x11i, x12i, x13i, x14i, x15i, di)
```

2. The vector of values for interpolation, i.e. (t1, ..., t15), is generated each time by another program and written to a separate text file ("interpolation.dat"). I read (t1, ..., t15) from this file and compute the corresponding interpolated value Y:

- Code: Select all
```python
Y = rbfi(t1, t2, t3, t4, t5, t6, t7, t8, t9, t10, t11, t12, t13, t14, t15)
```

However, this is very slow: each time a new vector (t1, ..., t15) is created, I have to reopen the original fixed sample file ("Sample.data") and rebuild the same fixed interpolant function. Is there a way to create the interpolant function (Rbf) from the sample file only once, save it somewhere (like a native Python function), and then just reuse it for any new vector (t1, ..., t15)? This would be very helpful because it would save the time of opening the sample file on every call.
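One possible approach (not from the original post, just a sketch): with the built-in basis functions, a fitted `Rbf` object is typically picklable, so you can fit it once, serialize it with Python's `pickle` module, and have the per-vector script only load the pickle and evaluate. The file names and the random stand-in data below are illustrative assumptions, not the original "Sample.data".

```python
import pickle
import numpy as np
from scipy.interpolate import Rbf

# --- run ONCE: fit the interpolant and save it to disk ---
# Stand-in for the 15 sample columns and the values di from the sample file.
rng = np.random.default_rng(0)
X = rng.random((50, 15))      # 50 samples, 15 coordinates each
d = X.sum(axis=1)             # stand-in for the sampled values di

rbfi = Rbf(*X.T, d)           # same as Rbf(x1i, ..., x15i, di)

with open('rbfi.pkl', 'wb') as f:
    pickle.dump(rbfi, f)      # persist the fitted interpolant

# --- run PER NEW VECTOR: load the saved interpolant and evaluate ---
with open('rbfi.pkl', 'rb') as f:
    rbfi_loaded = pickle.load(f)

t = rng.random(15)            # stand-in for (t1, ..., t15) from interpolation.dat
Y = rbfi_loaded(*t)           # unpack into t1, ..., t15
```

The expensive step (solving the RBF linear system over the whole sample) happens only in the fitting script; the evaluation script just deserializes the object, which is far cheaper than re-reading the sample file and refitting. If you ever pass a custom `function=` callable to `Rbf`, pickling may fail, in which case saving the raw sample arrays with `np.savez` and refitting once per process is a fallback.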