
From: Macy
Subject: Re: fixed points piecewise-linear fitting
Date: Sat, 17 Mar 2012 11:50:32 -0700

Sorry to answer my own thread, but after posting I read the OP's comments!

I agree that including the human in the loop makes solutions much more 'real'. 
For example, trying to deconvolve a known profile from the data to 'crisp' the 
edges ONLY worked if you sat and watched the results while making changes to 
the coefficients. Alas, laborious.

But to me that means there IS a measure of success; it was just not stated 
well. Otherwise the lms method would have worked.

You might try something: look at the Fourier transform of the resulting 
solution. You should NOT be adding excess energy to the higher-frequency terms 
[in my case the FFT was a spatial FFT, so excess high-frequency energy meant 
physically unrealizable and/or unlikely structures]. Nature tends to have a 
1/f statistical limit. Perhaps applying this very complex additional 
constraint to 'how you measure success' will add enough additional terms to 
move closer to a unique solution, making the summation of the lms more unique. 
For example, the function creating 1 + 4 and the function that created 4 + 1 
will produce completely different FFTs, so you have an 'extra' qualifier for 
any proposed solution, and wildly varying functions can no longer 'appear' as 
the same solution to your optimization.
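A small sketch of that idea (my own illustration, not code from the thread): two residual patterns with identical summed error but very different spectra, plus an assumed penalty weight `lambda` on the high-frequency bins.

```matlab
% Two residual patterns that a plain summation cannot tell apart,
% but whose spectra differ; penalising high-frequency energy
% separates them. lambda is an assumed tuning knob.
r1 = [1 1 1 1 4 4 4 4];               % low-frequency pattern
r2 = [1 4 1 4 1 4 1 4];               % alternating, high-frequency pattern
assert(sum(abs(r1)) == sum(abs(r2))); % identical under a plain summation

F1 = abs(fft(r1));
F2 = abs(fft(r2));
hi = 4:6;                             % bins near Nyquist (n = 8 samples)
lambda = 0.1;                         % assumed penalty weight
cost1 = sum(abs(r1)) + lambda*sum(F1(hi));
cost2 = sum(abs(r2)) + lambda*sum(F2(hi));
% cost2 > cost1: the spectral qualifier now distinguishes the two fits
```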

A bit wordy, but I had been trying to mathematically describe the qualifiers 
added by sitting and simply 'looking' at my solutions. Again, I think an 
optimum function can be achieved if I can just mathematically define 'success' 
better than a simple summation of errors, so that the computer can do the work.



--- address@hidden wrote:

From: Macy <address@hidden>
To: <address@hidden>
Subject: Re: fixed points piecewise-linear fitting
Date: Sat, 17 Mar 2012 11:27:57 -0700

Please accept my apologies, I am WAAAAY out of my element here but reading this 
triggered a memory regarding two points.

1. The mathematical premise was that, in order to get ALL derivatives to 
exist, you convolve the data/function with an 'adjustable' unit-area function 
with exponential edges, with a shape like exp(-x/k) or such. The convolution 
will not appreciably change your basic function, but it has the ability to 
make it 'well-behaved', with every derivative existing. Then check what 
happens as you take the limit k -> 0: taking the limit turns the convolving 
function into an ideal impulse, so the convolution replicates your original 
function. [This is from the classroom of Ron Bracewell at Stanford, author of 
'The Fourier Transform and Its Applications'. He used convolution and taking 
the coefficient to the limit to enable proofs of solutions that otherwise 
could not exist.]
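A minimal numerical sketch of point 1 (my paraphrase, not code from the thread): convolve a step with a unit-area two-sided exponential kernel; as k shrinks, the kernel tends to an impulse and the original function comes back.

```matlab
% Smooth a discontinuous function by convolving with a unit-area
% exponential kernel exp(-|x|/k); shrinking k recovers the original.
dx = 0.01;
x  = -5:dx:5;
f  = double(x >= 0);            % a step: not all derivatives exist

k  = 0.2;                       % adjustable smoothing width
g  = exp(-abs(x)/k);
g  = g / (sum(g)*dx);           % normalise to unit area
fs = conv(f, g, 'same') * dx;   % smoothed step, well-behaved everywhere
% Away from the jump (and from the domain edges, where 'same'
% truncates the kernel), fs already matches f closely; a smaller k
% (e.g. 0.02) pushes fs back toward the original step.
```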

2. I tried and tried to deconvolve a simple known wave shape from data to 
'improve' the data. NEVER COULD! I used all kinds of methods to lower errors, 
etc., yet the solution totally eluded me. I attribute that to the FACT that 
there was more than one solution yielding the same quality of match. In other 
words, the definition of success, or in this case of more closely 
approximating the correct solution, WAS NEVER STATED PROPERLY.

As I read this thread it appeared that the same could be true in this 
instance. You could move closer to the solution and NEVER KNOW it, because a 
summation is pretty blind to the individual contributors. What I'm saying is 
that a HUGE space of identical solutions is created by a summation process. 
1 + 4 is the same as 4 + 1, yet two different functions could generate those 
two patterns. Again, summation 'smushes' all your results/information into 
nonexistence.


Now you see why I prefaced my comments with an apology? But if you can just 
tweak your thinking a bit, change something, you may get closer to the answer.



--- address@hidden wrote:

From: CdeMills <address@hidden>
To: address@hidden
Subject: Re: fixed points piecewise-linear fitting
Date: Sat, 17 Mar 2012 10:30:36 -0700 (PDT)


Sergei Steshenko-2 wrote
> 
> Well, in my case it's just sum(abs(Yf_interpolated - Y)), not
> sum((Yf_interpolated - Y).^2).
> 
> I don't understand your "A linear solution would work".
> 
> 
> My solution is poor man's brute force one.
> 
> I.e. I have a pretty good Yf initial approximation, and I have Y_step.
> 
> I have an outer loop which is iterations and an inner loop on 'k'
> 
> For each Yf(k) I try (Yf(k) + Y_step) and (Yf(k) - Y_step) and check
> whether my sum(abs(Yf_interpolated - Y)) becomes smaller or not.
> 
> If it is smaller, Yf(k) is replaced with Yf(k) +/- Y_step - depending on
> which of them gives smaller sum(abs(Yf_interpolated - Y)).
> 
> The outer loop runs until there is no improvement in
> sum(abs(Yf_interpolated - Y)) or until the number of iterations is
> exhausted; in my case very good fitting is not critical. I never reach the
> iterations limit, since I intentionally make it high; for my data I think I
> never have more than 200 iterations.
> 
> In my case length(Yf) == length(Xf) == 64.
> 
> Of course, I have no mathematical proof that I am reaching the global
> minimum of sum(abs(Yf_interpolated - Y)). Typically
> sum(abs(Yf_interpolated - Y)) becomes about 2 times smaller than it was
> for initial Yf, i.e. in practice I see that my algorithm improves fitting.
> 
> My dirty little secret is that probably in my case initial Yf
> approximation would do :).
> 
> 
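The brute-force coordinate search described in the quoted message could be sketched like this (the names Xf, Yf, Y_step come from the post; using interp1 as the interpolator and wrapping it in a function is my assumption):

```matlab
% Coordinate search over knot heights Yf, as described above.
% For each Yf(k), try +/- Y_step and keep whichever lowers the cost;
% sweep until a full pass gives no improvement or max_iter is hit.
function Yf = coord_search(Xf, Yf, X, Y, Y_step, max_iter)
  cost = @(Yf) sum(abs(interp1(Xf, Yf, X) - Y));
  best = cost(Yf);
  for it = 1:max_iter
    improved = false;
    for k = 1:length(Yf)
      for delta = [Y_step, -Y_step]
        trial    = Yf;
        trial(k) = trial(k) + delta;
        c = cost(trial);
        if c < best
          best = c; Yf = trial; improved = true;
        end
      end
    end
    if ~improved, break; end   % no gain in a full sweep: stop
  end
end
```

As noted in the post, there is no guarantee this reaches the global minimum; it only improves monotonically on the initial approximation.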
Is your problem about robust identification?
- if you want to solve it directly, see
https://en.wikipedia.org/wiki/Least_absolute_deviations

- In any case, the function to be minimised does possess a first derivative,
but that derivative is not continuous. So the second derivative does not exist
everywhere, and the usual optimisation algorithms will not converge, as the
evolution of the cost function over the parameter set is jumpy.
Maybe you could try something similar but continuous, like regression based
on the hyperbolic cosine. In this case, the steps are
1) get an a priori, robust estimate of the variance, S
2) minimise sum(cosh((yinterp - y)/S))
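The two steps might look like this in Octave (my implementation, not Pascal's code; the line model, the MAD-based estimate of S, and scaling the residuals inside cosh are assumptions on my part; fminsearch is core Octave and needs no gradient):

```matlab
% Robust fit via a smooth cosh cost on scaled residuals.
x  = (0:0.1:1)';
y  = 2*x + 1 + 0.05*sin(40*x);              % roughly linear test data

p0 = polyfit(x, y, 1);                      % crude initial fit
r0 = y - polyval(p0, x);
S  = 1.4826 * median(abs(r0 - median(r0))); % 1) robust scale via the MAD

cost = @(p) sum(cosh((polyval(p, x) - y)/S)); % 2) smooth everywhere,
p    = fminsearch(cost, p0);                  %    unlike abs(); minimise
```

Unlike sum(abs(...)), this cost has continuous derivatives of all orders, so derivative-based or simplex optimisers behave well on it.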

Pascal 

--
View this message in context: 
http://octave.1599824.n4.nabble.com/fixed-points-piecewise-linear-fitting-tp4480432p4480927.html
Sent from the Octave - General mailing list archive at Nabble.com.
_______________________________________________
Help-octave mailing list
address@hidden
https://mailman.cae.wisc.edu/listinfo/help-octave





