Planet Octave

June 25, 2016

Francesco Faccio

Midterm week and next steps

Hello! This is my final post before the end of the midterm evaluation, and I would like to show what I've done during these days and what the next steps are.

After the previous post I implemented some of the options that the user can provide through odeset. I wrote a function that lets the user pass the modified Jacobian either as a function returning DF/DY and DF/DYP or as a cell array (if the Jacobian is constant), and I completed the options MaxStep, InitialStep and MaxOrder (commit ea1a311). Together with my mentors I started writing the function decic, which computes initial conditions of the problem F(t,x,x') with some values of x and x' fixed by the user. This function will be submitted as a patch in the next weeks.

I tried to build Octave with the KLU module included (an optional part of IDA which depends on SuiteSparse), but I had some problems with CMake that I hope to solve during the next days. The KLU module will be used to solve sparse problems in ode15i when a sparse Jacobian is supplied. Once I am able to write the "sparse part" of ode15i, I will show some efficiency comparisons between my .oct file, Sundials' MEX and C implementations, and ode15i in Matlab.

After completing this part, I will try to improve the quality of the code by:
  • writing a class whose methods can call the function supplied by the user without global pointers and perform more efficient data conversion (avoiding loops);
  • refactoring the code, moving all the input checks into a separate function.

For the examples I provided in the previous post, see commit 4f60e96.

by Francesco Faccio at June 25, 2016 07:13 PM

June 21, 2016

Barbara Lócsi

Midterm evaluations

You can find my work here:

Project goals

Certain calling forms of the eig function are currently missing, including:
  • preliminary balancing
  • computing left eigenvectors as a third output
  • choosing among generalized eigenvalue algorithms
  • choosing among return value formats of the eigenvalues (vector or matrix) see more here.

Calling forms

The aim for this period was:

  • Finish preliminary balancing if it is not finished, start working on implementing left eigenvector calculation

The preliminary balancing task is done, I wrote a blog post about it:
I started working on implementing the left eigenvector calculation and it is mostly done; some tests still need to be added, which will be completed by the end of this week.

Remaining tasks for the second period:

  1. choosing among generalized eigenvalue algorithms
  2. choosing among return value formats of the eigenvalues (vector or matrix) see more here.

by Barbara Lócsi at June 21, 2016 02:33 AM

June 20, 2016

Chiara Segala

Week 4: augmented matrix

This week I wrote code that implements the augmented matrix described in Theorem 2.1 of [HAM 11]. This matrix will be given as input to expmv and will allow us to implement the exponential integrators by evaluating a single exponential of the augmented matrix, avoiding the need to compute any φ functions. The code is

function [Atilde, eta] = augmat (h, A, V)
  p = size (V, 2);
  W = fliplr (V / diag (h.^(0:p-1)));
  eta = 2^-ceil (log2 (max (norm (W, 1), realmin)));
  %% bottom-left block must be p x n: zeros(p, size(A)) would not work
  Atilde = [A, eta*W; zeros(p, size (A, 2)), diag(ones (p-1, 1), 1)];
endfunction

In [HAM 11], eta is introduced to avoid rounding errors. I made a small change to avoid further errors: I added the max function so that values smaller than realmin are not considered.
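
To see why a single exponential suffices, here is a small pure-Python check of the block identity behind the augmented matrix (an illustration for p = 1 and h = 1 using a naive Taylor-series matrix exponential; this is a sketch of the idea, not the [HAM 11] algorithm itself):

```python
# For p = 1 and h = 1, the top-right block of exp([[A, b], [0, 0]]) equals
# phi_1(A) * b, so one matrix exponential yields the phi-function action
# without forming phi_1 explicitly.
from math import exp, factorial

def mat_mul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mat_exp(X, terms=40):
    # Truncated Taylor series; fine for the tiny, well-scaled matrices here.
    n = len(X)
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        P = mat_mul(P, X)
        for i in range(n):
            for j in range(n):
                E[i][j] += P[i][j] / factorial(k)
    return E

# Scalar example: A = [a], b = [b0].
a, b0 = 0.7, 2.0
Atilde = [[a, b0], [0.0, 0.0]]
top_right = mat_exp(Atilde)[0][1]
phi1_b = (exp(a) - 1.0) / a * b0      # phi_1(a) * b0 in closed form
print(abs(top_right - phi1_b) < 1e-12)
```

The same identity generalizes to p > 1, where the extra columns of the augmented matrix encode phi_2, ..., phi_p actions.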

Now I summarize my work during this month.

Week 1: phi functions. I implemented four functions, based on [BSW 07] (phi1m.m, phi2m.m, phi3m.m, phi4m.m).
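
For illustration, the scalar recurrence behind these phi functions can be sketched in Python (a hedged sketch only: phi1m..phi4m operate on matrices, and [BSW 07] evaluates the functions more carefully than this naive recurrence, which loses accuracy for small |z|):

```python
# Scalar phi functions via the recurrence
#   phi_0(z) = e^z,  phi_{k+1}(z) = (phi_k(z) - 1/k!) / z.
# Not accurate for |z| near 0, where a series/Pade approach is needed.
from math import exp, factorial

def phi(k, z):
    v = exp(z)
    for j in range(k):
        v = (v - 1.0 / factorial(j)) / z
    return v

print(abs(phi(1, 1.0) - (exp(1.0) - 1.0)) < 1e-12)   # phi_1(1) = e - 1
print(abs(phi(2, 1.0) - (exp(1.0) - 2.0)) < 1e-12)   # phi_2(1) = e - 2
```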

Week 2-3: general exponential schemes. I implemented the two schemes for a general exponential Runge-Kutta and Rosenbrock integrator (exprk.m, exprb.m).
As already mentioned, these schemes are not really fast and efficient, but they are working codes that I applied to four different exponential methods (exprk2.m, exprk3.m, exprb3.m, exprb4.m), and I will use them as a reference when I implement the official methods.

Week 4: augmented matrix (augmat.m).

My codes can be found here

by Chiara Segala at June 20, 2016 12:02 PM

Francesco Faccio

Summary of the work of the first month


The mid-term evaluation has now arrived, so it's time to summarize the work I've done and check which goals I have achieved. During this period I enjoyed working with the community, and the advice given by the mentors and by the other members has been really helpful. The most important change to the project is that, after discussing with the mentors, we decided to start implementing ode15i, because it's more general than ode15s, and to build ode15s around it later.
Here you can find the code I've written so far:

The most difficult task in the first part of this project was to get Octave compiled and linked against Sundials. After accomplishing this, I checked the presence and usability of the nvector_serial library, which is required by the IDADENSE and IDABAND modules. I aggregated its build flags with the flags of sundials_ida and included the header nvector_serial.h in the dld-function.

I checked the licenses of Sundials and SUPERLUMT (a package which will be used as a sparse direct solver, independent of Sundials): both have a 3-clause BSD license, so they are compatible with the GPL and can be used.

After configuring the correct flags, I began writing a minimal wrapper of ode15i of the form:

[t, y] = ode15i (odefun, tspan, y0, yp0, options)

The first problem was dealing with Sundials' own types. Sundials uses realtype and N_Vector: an N_Vector is a vector of realtype, while a realtype can be a float, a double or a long double, depending on how Sundials has been built (the default type is double). I assumed the default double realtype and wrote the functions N_Vector ColToNVec (ColumnVector data, long int n) and ColumnVector NVecToCol (N_Vector v, long int n), which convert an Octave ColumnVector to an N_Vector and vice versa.

I checked some minimal input conditions, wrote a few input validation tests, set AbsTol, Reltol, tspan, y0 and yp0 of type realtype or N_Vector.

Once the data was preprocessed, the moment of glory of Sundials' functions arrived. The first call was to IDACreate(), which creates an IDA memory block and returns a pointer that is then passed as the first argument to all subsequent IDA function calls.

Sundials then asks to provide a function which computes the residual function in the DAE. This function must have the form:

int flag = resfun (realtype tt, N_Vector yy, N_Vector yp, N_Vector rr, void *user_data)

As a temporary workaround I wrote a function which converts yy, yp and tt into ColumnVectors, uses feval to evaluate the DAE (passed through a global pointer of type octave_function) and puts the output in rr.

Then calls to IDAInit, IDASVtolerances (or IDASStolerances if AbsTol is a scalar), IDADense and IDADlsSetDenseJacFn (if a Jacobian is supplied) set up the linear solver.

Sundials accepts only a Jacobian function of the form J = DF/DY + cj*DF/DYP, where cj is a scalar proportional to the inverse of the step size (cj is computed by IDA's solver). I used the same workaround as for the residual to evaluate J when it is required.
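
To make the expected Jacobian concrete, here is a small Python sketch (a hypothetical helper, not part of the wrapper) that approximates J = DF/DY + cj*DF/DYP by forward finite differences for the Robertson residual used later in this post:

```python
# Finite-difference approximation of the IDA Jacobian J = dF/dy + cj * dF/dy'.
# cj is supplied by IDA during integration; here it is just a test value.
def res(t, y, yp):
    return [-(yp[0] + 0.04 * y[0] - 1e4 * y[1] * y[2]),
            -(yp[1] - 0.04 * y[0] + 1e4 * y[1] * y[2] + 3e7 * y[1] ** 2),
            y[0] + y[1] + y[2] - 1.0]

def ida_jac(t, y, yp, cj, h=1e-7):
    n = len(y)
    J = [[0.0] * n for _ in range(n)]
    f0 = res(t, y, yp)
    for j in range(n):
        y1 = list(y); y1[j] += h
        yp1 = list(yp); yp1[j] += h
        fy = res(t, y1, yp)      # perturb y  -> column of dF/dy
        fyp = res(t, y, yp1)     # perturb y' -> column of dF/dy'
        for i in range(n):
            J[i][j] = (fy[i] - f0[i]) / h + cj * (fyp[i] - f0[i]) / h
    return J

J = ida_jac(0.0, [1.0, 1e-3, 1e-3], [0.0, 0.0, 0.0], cj=2.0)
print(abs(J[0][0] - (-0.04 - 2.0)) < 1e-3)   # matches the analytic -0.04 - cj
```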

Finally in the main loop a call to IDASolve solves the DAE system and gives the solution in output.

What this wrapper can solve:

This wrapper of ode15i can solve a system of differential equations of the form f(t, y, y') = 0, integrating from t0 to tf with initial conditions y0 and yp0. The output of the function is the solution of the DAE evaluated ONLY at the points supplied in tspan.
It accepts as options a scalar RelTol and a scalar or vector AbsTol. If the user wants to supply a Jacobian, it must be of the form J = DF/DY + cj DF/DYP. Both the DAE system and the Jacobian must be function handles.

I tested this wrapper using the two benchmark problems described in the previous post.

For the Robertson chemical kinetics problem I found the right solution both when passing a Jacobian and when letting Sundials approximate it. The script I used is the following:

function res = robertsidae (t, y, yp)
  res = [-(yp(1) + 0.04*y(1) - 1e4*y(2)*y(3));
         -(yp(2) - 0.04*y(1) + 1e4*y(2)*y(3) + 3e7*y(2)^2);
         y(1) + y(2) + y(3) - 1];
endfunction

function jacc = jacobian (t, y, yp, c)
  jacc = [-0.04-c, 1e4*y(3), 1e4*y(2);
          0.04, -c-6*1e7*y(2)-1e4*y(3), -1e4*y(2);
          1, 1, 1];
endfunction

options = odeset ('RelTol', 1e-3, 'AbsTol', 1e-6, ...
                  'Jacobian', @jacobian);

y0 = [1; 0; 0];
yp0 = [-1e-4; 1e-4; 0];
tspan = [0, 4*logspace(-6, 6)];

[t, y] = ode15i (@robertsidae, tspan, y0, yp0, options);

y(:, 2) = 1e4*y(:, 2);
semilogx (t, y)
ylabel ('species concentration');
title ('Robertson DAE problem with a Conservation Law');
legend ('y1', 'y2', 'y3');

As a result I got this:

The second problem was to find the solution of a 2-D heat equation semidiscretized to a DAE on the unit square, as described in the previous post.

After discretizing the domain, the system of 100 differential-algebraic equations was passed to ode15i (as the sparse methods are still in progress, I used Sundials' dense solver for this problem too, without passing the Jacobian).

Here you can find the script I wrote to solve the problem. A 3-D time dependent plot shows how the solution evolves in time and space.

uu0 = zeros(100, 1);

%Initialize uu in all grid points
for j = 1:10
  yfact = (1 / 9) * (j - 1);
  offset = 10 * (j - 1);
  for i = 1:10
    xfact = (1 / 9) * (i - 1);
    loc = offset + (i - 1);
    uu0(loc + 1) = 16 * xfact * (1 - xfact) * yfact * (1 - yfact);
  endfor
endfor

up0 = zeros(100, 1);

%Set values of uu and up at boundary points
for j = 1:10
  offset = 10 * (j - 1);
  for i = 1:10
    loc = offset + (i - 1);
    if (j == 1 || j == 10 || i == 1 || i == 10)
      uu0(loc + 1) = 0;
      up0(loc + 1) = 0;
    endif
  endfor
endfor

function res = klu (t, uu, up)
res = [ uu(1);
uu(2);
uu(3);
uu(4);
uu(5);
uu(6);
uu(7);
uu(8);
uu(9);
uu(10);
uu(11);
up(12) - 81 * (uu(11) + uu(13) + uu(2) + uu(22) - 4*uu(12));
up(13) - 81 * (uu(12) + uu(14) + uu(3) + uu(23) - 4*uu(13));
up(14) - 81 * (uu(13) + uu(15) + uu(4) + uu(24) - 4*uu(14));
up(15) - 81 * (uu(14) + uu(16) + uu(5) + uu(25) - 4*uu(15));
up(16) - 81 * (uu(15) + uu(17) + uu(6) + uu(26) - 4*uu(16));
up(17) - 81 * (uu(16) + uu(18) + uu(7) + uu(27) - 4*uu(17));
up(18) - 81 * (uu(17) + uu(19) + uu(8) + uu(28) - 4*uu(18));
up(19) - 81 * (uu(18) + uu(20) + uu(9) + uu(29) - 4*uu(19));
uu(20);
uu(21);
up(22) - 81 * (uu(21) + uu(23) + uu(12) + uu(32) - 4*uu(22));
up(23) - 81 * (uu(22) + uu(24) + uu(13) + uu(33) - 4*uu(23));
up(24) - 81 * (uu(23) + uu(25) + uu(14) + uu(34) - 4*uu(24));
up(25) - 81 * (uu(24) + uu(26) + uu(15) + uu(35) - 4*uu(25));
up(26) - 81 * (uu(25) + uu(27) + uu(16) + uu(36) - 4*uu(26));
up(27) - 81 * (uu(26) + uu(28) + uu(17) + uu(37) - 4*uu(27));
up(28) - 81 * (uu(27) + uu(29) + uu(18) + uu(38) - 4*uu(28));
up(29) - 81 * (uu(28) + uu(30) + uu(19) + uu(39) - 4*uu(29));
uu(30);
uu(31);
up(32) - 81 * (uu(31) + uu(33) + uu(22) + uu(42) - 4*uu(32));
up(33) - 81 * (uu(32) + uu(34) + uu(23) + uu(43) - 4*uu(33));
up(34) - 81 * (uu(33) + uu(35) + uu(24) + uu(44) - 4*uu(34));
up(35) - 81 * (uu(34) + uu(36) + uu(25) + uu(45) - 4*uu(35));
up(36) - 81 * (uu(35) + uu(37) + uu(26) + uu(46) - 4*uu(36));
up(37) - 81 * (uu(36) + uu(38) + uu(27) + uu(47) - 4*uu(37));
up(38) - 81 * (uu(37) + uu(39) + uu(28) + uu(48) - 4*uu(38));
up(39) - 81 * (uu(38) + uu(40) + uu(29) + uu(49) - 4*uu(39));
uu(40);
uu(41);
up(42) - 81 * (uu(41) + uu(43) + uu(32) + uu(52) - 4*uu(42));
up(43) - 81 * (uu(42) + uu(44) + uu(33) + uu(53) - 4*uu(43));
up(44) - 81 * (uu(43) + uu(45) + uu(34) + uu(54) - 4*uu(44));
up(45) - 81 * (uu(44) + uu(46) + uu(35) + uu(55) - 4*uu(45));
up(46) - 81 * (uu(45) + uu(47) + uu(36) + uu(56) - 4*uu(46));
up(47) - 81 * (uu(46) + uu(48) + uu(37) + uu(57) - 4*uu(47));
up(48) - 81 * (uu(47) + uu(49) + uu(38) + uu(58) - 4*uu(48));
up(49) - 81 * (uu(48) + uu(50) + uu(39) + uu(59) - 4*uu(49));
uu(50);
uu(51);
up(52) - 81 * (uu(51) + uu(53) + uu(42) + uu(62) - 4*uu(52));
up(53) - 81 * (uu(52) + uu(54) + uu(43) + uu(63) - 4*uu(53));
up(54) - 81 * (uu(53) + uu(55) + uu(44) + uu(64) - 4*uu(54));
up(55) - 81 * (uu(54) + uu(56) + uu(45) + uu(65) - 4*uu(55));
up(56) - 81 * (uu(55) + uu(57) + uu(46) + uu(66) - 4*uu(56));
up(57) - 81 * (uu(56) + uu(58) + uu(47) + uu(67) - 4*uu(57));
up(58) - 81 * (uu(57) + uu(59) + uu(48) + uu(68) - 4*uu(58));
up(59) - 81 * (uu(58) + uu(60) + uu(49) + uu(69) - 4*uu(59));
uu(60);
uu(61);
up(62) - 81 * (uu(61) + uu(63) + uu(52) + uu(72) - 4*uu(62));
up(63) - 81 * (uu(62) + uu(64) + uu(53) + uu(73) - 4*uu(63));
up(64) - 81 * (uu(63) + uu(65) + uu(54) + uu(74) - 4*uu(64));
up(65) - 81 * (uu(64) + uu(66) + uu(55) + uu(75) - 4*uu(65));
up(66) - 81 * (uu(65) + uu(67) + uu(56) + uu(76) - 4*uu(66));
up(67) - 81 * (uu(66) + uu(68) + uu(57) + uu(77) - 4*uu(67));
up(68) - 81 * (uu(67) + uu(69) + uu(58) + uu(78) - 4*uu(68));
up(69) - 81 * (uu(68) + uu(70) + uu(59) + uu(79) - 4*uu(69));
uu(70);
uu(71);
up(72) - 81 * (uu(71) + uu(73) + uu(62) + uu(82) - 4*uu(72));
up(73) - 81 * (uu(72) + uu(74) + uu(63) + uu(83) - 4*uu(73));
up(74) - 81 * (uu(73) + uu(75) + uu(64) + uu(84) - 4*uu(74));
up(75) - 81 * (uu(74) + uu(76) + uu(65) + uu(85) - 4*uu(75));
up(76) - 81 * (uu(75) + uu(77) + uu(66) + uu(86) - 4*uu(76));
up(77) - 81 * (uu(76) + uu(78) + uu(67) + uu(87) - 4*uu(77));
up(78) - 81 * (uu(77) + uu(79) + uu(68) + uu(88) - 4*uu(78));
up(79) - 81 * (uu(78) + uu(80) + uu(69) + uu(89) - 4*uu(79));
uu(80);
uu(81);
up(82) - 81 * (uu(81) + uu(83) + uu(72) + uu(92) - 4*uu(82));
up(83) - 81 * (uu(82) + uu(84) + uu(73) + uu(93) - 4*uu(83));
up(84) - 81 * (uu(83) + uu(85) + uu(74) + uu(94) - 4*uu(84));
up(85) - 81 * (uu(84) + uu(86) + uu(75) + uu(95) - 4*uu(85));
up(86) - 81 * (uu(85) + uu(87) + uu(76) + uu(96) - 4*uu(86));
up(87) - 81 * (uu(86) + uu(88) + uu(77) + uu(97) - 4*uu(87));
up(88) - 81 * (uu(87) + uu(89) + uu(78) + uu(98) - 4*uu(88));
up(89) - 81 * (uu(88) + uu(90) + uu(79) + uu(99) - 4*uu(89));
uu(90);
uu(91);
uu(92);
uu(93);
uu(94);
uu(95);
uu(96);
uu(97);
uu(98);
uu(99);
uu(100);
];
endfunction

tspan = linspace(0, 0.3, 100);
options = odeset('RelTol', 1e-6, 'AbsTol', 1e-8);

[t, y] = ode15i(@klu, tspan, uu0, up0, options);

sol = zeros(10, 10, 100);
for z = 1:100
  for i = 1:10
    sol(:, i, z) = y(z, (((i - 1) * 10) + 1):(i * 10));
  endfor
endfor

for k = 1:100
  surf(sol(:, :, k))
  axis([0 10 0 10 0 1]);
  title('2-D heat equation semidiscretized to a DAE on the unit square');
endfor

This is the solution at time t = 0.0091:

Larger problems will be used to test the efficiency of the code, because these two were solved almost immediately.

by Francesco Faccio at June 20, 2016 01:14 AM

June 19, 2016

Amr Mohamed


Hi all,
I would like to share my experience through the first weeks of GSoC 2016 with GNU Octave.
My project is mainly concerned with computational geometry as it aims at creating 2D polygon functions as a part of the geometry package.
The main function is polybool, which performs boolean operations on polygons.
The project’s bitbucket repository can be found here:

  • We started working on the project before the program's official start and were able to finish the first required functions for converting the representation of polygons between the cell-array format and the NaN-delimited vector format.
    These two functions, polyjoin/polysplit, are implemented as .m files and are used to manipulate the polygons' representation.
  • Then we started implementing the ispolycw function, which checks the orientation of multiple polygons at the same time using the Boost::Geometry library.
    For self-intersecting polygons, the orientation of the polygon is defined as the orientation of the leftmost point and its two neighbouring points.
  • A Makefile was written to compile and link the .cc files.
    The Makefile was inspired by the sockets package for Octave.
  • Currently, we are focusing on implementing polybool, and we have managed to create an initial working version of the function.
    Here are some snapshots of the output of the polybool function:

theta = linspace(0, 2 * pi, 1000);
x1 = cos(theta) - 0.5;
y1 = -sin(theta);
x2 = x1 + 1;
y2 = y1 + 1;

plot(x2, y2)
[xa, ya] = polybool('union', x1, y1, x2, y2);
[xb, yb] = polybool('intersection', x2, y2, x1, y1);
[xc, yc] = polybool('xor', x1, y1, x2, y2);
[xd, yd] = polybool('subtraction', x1, y1, x2, y2);
subplot(2, 3, 1)
axis equal
patch(xa{1}, ya{1}, 'FaceColor', 'r')
axis([-2 2 -2 2])

subplot(2, 3, 2)
axis equal
for k = 1:numel(xb), patch(xb{k}, yb{k}, 'FaceColor', 'r'), end
axis([-2 2 -2 2])

subplot(2, 3, 3)
axis equal
[x1, y1] = polysplit(x1, y1);
for k = 1:numel(x1), patch(x1{k}, y1{k}, 'FaceColor', 'b'), end
axis([-2 2 -2 2])
title('Polygon 1')

subplot(2, 3, 4)
axis equal
for k = 1:numel(xc), patch(xc{k}, yc{k}, 'FaceColor', 'r'), end
axis([-2 2 -2 2])

subplot(2, 3, 5)
axis equal
for k = 1:numel(xd), patch(xd{k}, yd{k}, 'FaceColor', 'b'), end
axis([-2 2 -2 2])

subplot(2, 3, 6)
axis equal
[x2, y2] = polysplit(x2, y2);
for k = 1:numel(x2), patch(x2{k}, y2{k}, 'FaceColor', 'b'), end
axis([-2 2 -2 2])
title('Polygon 2')

by amrkeleg at June 19, 2016 11:32 PM

June 17, 2016

Abhinav Tripathi


The mid-term evaluations of GSoC 2016 are close. This blog post summarizes the complete work done on the 'Symbolic' project under the organization 'Octave'.

I will mention each of the goals that were set for the mid-term and then give an overview of what was done for each:

1a) Octave, Symbolic, and PyTave dev versions installed.
I built Octave from source successfully on Ubuntu 16.04 and then used Symbolic with it. I also built pytave from source using the dev version of Octave.
My Symbolic fork on github can be found at:
My main Pytave fork can be found at:
I also forked pytave from Colin’s fork (to work on experimental features) which can be found at:

1b) Some basic communication working between PyTave and Symbolic. This might use some existing implementations in a non-optimal way (e.g., its ok if Symbolic returns objects as xml strings).
PR #452 added the functionality to convert python types into Octave types including @sym objects using pytave. Then PR #460 fixed the communication when lists/tuples were passed from python to Octave.
Then we also added proper conversion of tuples and booleans from python to Octave types in pytave repo in the PRs #9 and #11 respectively.

1c) Most of Symbolic’s tests and doctests continue passing, although some failures ok at this point.
Tested on Ubuntu 16.04. Most of the tests pass on Linux. There are still many tests failing (with the new IPC), but we will work on them once we have a stable pytave IPC mechanism.
With the use of pytave, the errors are now converted to Python exceptions, which seems to be the main reason many tests fail. Also, sometimes the way we chose to return the outputs conflicts with how the existing IPC mechanisms did it. But these are minor failures and can be taken care of later as the new IPC matures.

1d) The above works on at least one OS (probably GNU/Linux).
It works well on Linux (our local machines).
We are currently trying to integrate use of pytave on the build-bot. The work can be tracked on the PRs #477 & #478 on github.

2a) PyTave converts various Python objects into appropriate Octave objects. PyTave needs to be extended to return proper @pyobj when it cannot convert the object. Also, symbolic must be able to convert such objects to @sym objects by calling proper python functions via PyTave (if they are indeed @sym). That is, bypass the current generating of xml strings.
Instead of editing pytave to convert @sym objects before returning, we incorporated a mechanism in Symbolic itself which converts the objects it gets into @sym objects, if necessary, using the py* functions. The major work was in PR #452.
Groundwork has been laid for storing objects persistently on the Python side using @pyobject from pytave. We will also work on improving @pyobject to allow calling all the attributes of a Python object from Octave.

2b) Improve the BIST test set coverage of both PyTave and Symbolic for any new features added.
Added some tests to pytave with the PRs #10 and #11 on bitbucket.

2c) Improve doctest coverage of PyTave.
Not many doctests were needed, so this part is low priority for now. I might still do it this week.

We also had some stretch goals, which were planned in case we got time; the following is the progress on them:

3) Improve on 2a) with some TBD mechanism: perhaps a pure m-file callback feature in the PyTave export code.
Since the workaround for @sym conversion was already used in Symbolic, this was not required. Moreover, we are working on adding @pyobject support to pytave on Colin's fork:

4) The above works on both GNU/Linux and MS Windows.
Pytave needs to be built on Windows, which might take a while. On Linux, we have it working. For building on Windows, we are having an extensive and comprehensive discussion and are getting closer to the goal every day.
First, Cygwin was tried to build pytave on Windows, but many tools were not available there; moreover, Octave already has an MSYS environment, so we shifted to using something like that. We then tried the MSYS2 environment for building pytave: all the libraries were available, but we got stuck when trying to link with the Octave libraries. Octave was built with GCC 4 while MSYS2 has GCC 5, and due to an ABI incompatibility between the two GCC versions, it was not possible to continue the build.
So now we will try to build it from within Octave, using the MSYS bash shell. But then we need to (probably) build libboost-dev and python-dev using that shell and then move to building pytave with it.

5) Objects passed from Symbolic back to Python should use PyTave mechanisms.
The work can be seen in PR #465, which tries to call py* functions to store the variables in Python. We have written a function which stores the required variables in a Python list. Currently it is not very stable and causes Octave to crash in some tests, but we are working on improving it. It now supports passing lists and 2-D matrices, but it needs to be extended to support other types like cell arrays and n-D matrices.

At the start of the project we made some minor enhancements to Symbolic. Along the way we also made some enhancements on the pytave side which were needed to move ahead with the aforementioned goals.
All of my contributions to the repos can be tracked at following links:
My contributions to octsympy‘s main repo on github
My contributions to pytave‘s main repo on bitbucket
My Contribution to Colin’s pytave fork

This was the summary as per the goals set. Now, a quick summary as per the timeline we decided, just to showcase how closely we are following the path we set at the start of the project:

30th May — Work on adding the sym conversion to PyTave and cleaning up the conversion mechanism in Symbolic.
By 30th May we had already added the mechanism to convert sym objects that come from pytave. We still have to work on some cases, as we have many failing tests, but we have a basic structure that doesn't rely on any XML stuff for object conversion.

15th June — Improve tests and doctests. Work on building PyTave and testing on Windows. No more crufty XML!
By 15th June we had improved the BISTs of pytave. The building of pytave has been underway and we have got rid of the XML stuff in the pytave IPC.

27th June —  Try to get working PyTave with Symbolic on Windows (if needed, use cygwin) 
We have been following the timeline from the start. We also have other smaller goals in progress alongside this (including building pytave on Travis and using pytave to pass variables to Python). We have used MSYS2 to build pytave on Windows, but that led to a conflict between the GCC versions of MSYS2 and the one used by Octave. Finally, it seems that Boost and python-dev have to be built using the MSYS environment of Octave, and only then can pytave be built using those tools.


All of the aforementioned goals and timeline can be found at my wiki page:


For any feedback/comment, feel free to post a comment here…

by genuinelucifer at June 17, 2016 07:02 PM

June 15, 2016

Chiara Segala

Week 2-3: general exponential schemes

During these two weeks I implemented the two schemes for a general exponential Runge-Kutta and Rosenbrock integrator, see the schemes described in my second post.
I also implemented another file for exponential Runge-Kutta integrators based on the following scheme.
Given the problem
u'(t) = F(t,u) = A u(t) + g(t, u(t)),
u(t_0) = u_0,
the new scheme is
U_{ni} = u_n + c_i h_n \varphi_1(c_i h_n A) F(t_n,u_n) + h_n \sum_{j=2}^{i-1} a_{ij}(h_n A) D_{nj} ,
u_{n+1} = u_n + h_n \varphi_1(h_n A) F(t_n,u_n) + h_n \sum_{i=2}^s b_i(h_n A) D_{ni} ,
D_{nj} = g(t_n + c_j h_n, U_{nj}) - g(t_n, u_n).
The main motivation for this reformulation is that the vectors $D_{nj}$ are expected to be small in norm, so the application of matrix functions to these vectors is more efficient.
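
As a minimal illustration of these $\varphi_1$-based updates (a sketch, not one of my m-files): the one-stage case of the scheme, exponential Euler, reproduces the exact solution of a linear problem in a single step of any size.

```python
# One exponential Euler step u_{n+1} = u_n + h * phi_1(h a) * F(t_n, u_n),
# shown for a scalar problem u' = a u + g(t, u); with g = 0 it is exact.
from math import exp

def expo_euler_step(u, h, a, g=lambda t, u: 0.0, t=0.0):
    phi1 = (exp(h * a) - 1.0) / (h * a)
    return u + h * phi1 * (a * u + g(t, u))

u0, a, h = 3.0, -1.5, 0.25
u1 = expo_euler_step(u0, h, a)
print(abs(u1 - u0 * exp(a * h)) < 1e-12)   # exact for the linear part
```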

I applied the general pattern to the third-order exponential Runge-Kutta method with tableau

where $\varphi_{j,k} =\varphi_j (c_k h A) $ and $\varphi_j =\varphi_j (h A) $.

Then I also applied the general Rosenbrock pattern to the fourth-order exponential method with a third-order error estimator. Its coefficients are

where $\varphi_j =\varphi_j (h J_n) $.

I tested the correctness and order of the schemes with the following example, a semilinear parabolic problem
\frac{\partial U}{\partial t} (x,t) - \frac{\partial^2 U}{\partial x^2} (x,t) = \frac{1}{1 + U(x,t)^2} + \Phi (x,t)
for $x \in [0,1]$ and $t \in [0,1]$ with homogeneous Dirichlet boundary conditions. $\Phi$ is chosen in such a way that the exact solution of the problem is $U(x,t) = x(1-x)e^{t}$. I discretized this problem in space by standard finite differences with 200 grid points.
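
As a quick sanity check on this setup (a sketch, not part of my test code): central differences are exact on quadratics, so the discrete second derivative of the spatial profile $x(1-x)$ of the exact solution is exactly $-2$ at every interior point.

```python
# Second-order central difference on 200 interior points of [0, 1] with
# homogeneous Dirichlet boundary values, applied to u(x) = x(1 - x).
N = 200
dx = 1.0 / (N + 1)
x = [(i + 1) * dx for i in range(N)]
u = [xi * (1.0 - xi) for xi in x]
ub = [0.0] + u + [0.0]          # boundary values u(0) = u(1) = 0
d2u = [(ub[i - 1] - 2.0 * ub[i] + ub[i + 1]) / dx ** 2
       for i in range(1, N + 1)]
print(all(abs(v + 2.0) < 1e-8 for v in d2u))   # u'' = -2 recovered exactly
```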

For details see [HO 10] and [HO 05].

Within this week I will upload to Bitbucket the codes of the phi functions and the three general schemes.

by Chiara Segala at June 15, 2016 02:32 AM

June 14, 2016

Barbara Lócsi

Preliminary balancing


I have created a repository, you can take a look at my work here:

Preliminary balancing

Currently, eig in Octave uses preliminary balancing and it can't be turned off, while in Matlab [1] it can be.

  • e = eig(A)
  • [V,D] = eig(A)
  • [V,D,W] = eig(A)
  • e = eig(A,B)
  • [V,D] = eig(A,B)
  • [V,D,W] = eig(A,B)
  • [___] = eig(A,balanceOption)
  • [___] = eig(A,B,algorithm)
  • [___] = eig(___,eigvalOption)
  • lambda = eig (A)
  • lambda = eig (A, B)
  • [V, lambda] = eig (A)
  • [V, lambda] = eig (A, B)

The aim of this task was to change the *geev LAPACK calls to the extended *geevx, which allows us to turn off the balancing. The ability to do this is important because balancing can sometimes be harmful:

“Balancing sometimes seriously degrades accuracy. In particular, one should not balance a matrix after it has been transformed to Hessenberg form. However, we must emphasize that balancing is usually not harmful and often very beneficial. When in doubt, balance.” [2]
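
To illustrate what balancing actually does, here is a simplified Osborne-style iteration in pure Python (a sketch of the idea only; LAPACK's *gebal routines also permute the matrix and are considerably more careful):

```python
# Balance A -> D^-1 A D with a diagonal D of powers of 2, so that each row
# and its matching column have comparable 1-norms. This similarity transform
# preserves eigenvalues while typically improving their conditioning.
def balance(A, iters=50):
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(iters):
        for i in range(n):
            r = sum(abs(A[i][j]) for j in range(n) if j != i)  # row norm
            c = sum(abs(A[j][i]) for j in range(n) if j != i)  # column norm
            if r == 0.0 or c == 0.0:
                continue
            f = 1.0
            while c * f < r / f / 2.0:   # grow f while column is too small
                f *= 2.0
            while c * f > r / f * 2.0:   # shrink f while column is too big
                f /= 2.0
            for j in range(n):           # apply D^-1 A D for coordinate i
                A[i][j] /= f
                A[j][i] *= f
    return A

A = [[1.0, 1e6], [1e-6, 1.0]]
B = balance(A)
# Ratio of the largest to smallest off-diagonal magnitude (2x2 case only).
spread = lambda M: max(abs(M[0][1]), abs(M[1][0])) / min(abs(M[0][1]), abs(M[1][0]))
print(spread(B) < spread(A))   # off-diagonal entries are now comparable
```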


To test the preliminary balancing I used Matlab's example for "Eigenvalues of a Matrix Whose Elements Differ Dramatically in Scale" [3]:

A = [ 3.0     -2.0      -0.9     2*eps;
     -2.0      4.0       1.0    -eps;
     -eps/4    eps/2    -1.0     0;
     -0.5     -0.5       0.1     1.0];
[VN,DN] = eig(A,'balance');

But regardless of whether balancing was on or off, the results were the same.

  -4.4409e-16   2.2204e-16  -2.9055e-16  -1.6653e-16
   8.8818e-16   2.7756e-16   3.1094e-17  -1.3878e-16
   1.7176e-17   1.5361e-18   6.6297e-18   0.0000e+00
  -6.9389e-17  -4.4409e-16   2.2204e-16   6.9389e-17

(which is the more accurate result, the one we want).
It seems the matrix is not balanced. Moreover, balance(A) - A is the zero matrix:


ans =
   0   0   0   0
   0   0   0   0
   0   0   0   0
   0   0   0   0

The reason for this is that the stopping criterion for balancing was changed in LAPACK 3.5.0 [4], so it deals more intelligently with cases where balancing causes loss of accuracy. [5][6][7]

“However, for the case where A is dense and poorly scaled, the new algorithm will still balance the matrix and improve the eigenvalue condition number. If accurate eigenvectors are desired, then one should consider not balancing the matrix.” [5]

Another matrix

A = [3 -2 -0.9 0; -2 4 1 -0; -0 0 -1 0; -0.5 -0.5 0.1 1]; % drop the eps terms for now

Balancing this matrix is not harmful, so LAPACK will balance it as we expect:
[VN,DN] = eig(A,'nobalance');
[VN,DN] = eig(A,'balance');
[VN,DN] = eig(A);

ans =

  -4.4409e-16   2.2204e-16  -2.4947e-16  -1.6653e-16
   8.8818e-16   3.8858e-16  -4.1531e-17   1.3878e-16
   0.0000e+00   0.0000e+00   0.0000e+00   0.0000e+00
  -8.3267e-17  -4.4409e-16   2.2204e-16  -4.1633e-17

ans =

   0.0000e+00  -4.4409e-16   2.2204e-16   1.6653e-16
   0.0000e+00   0.0000e+00  -2.2204e-16  -1.3878e-16
   0.0000e+00   0.0000e+00   0.0000e+00   0.0000e+00
   0.0000e+00   1.3878e-17   0.0000e+00   0.0000e+00

ans =

   0.0000e+00  -4.4409e-16   2.2204e-16   1.6653e-16
   0.0000e+00   0.0000e+00  -2.2204e-16  -1.3878e-16
   0.0000e+00   0.0000e+00   0.0000e+00   0.0000e+00
   0.0000e+00   1.3878e-17   0.0000e+00   0.0000e+00

In this example we can see that eig(A) works like eig(A, 'balance'), which is the behaviour we want.

So the ability to turn off balancing is not as important as it was before LAPACK 3.5.0, but there are still some cases where balancing is harmful, so the ability to turn it off remains important.


by Barbara Lócsi at June 14, 2016 05:18 AM

June 09, 2016

Francesco Faccio

First Goals and next steps

Hello!

I have not written a post for a while because I have had some health issues.

During the first two weeks of GSoC I worked on the Autotools machinery and compiled Octave linked against SUNDIALS. The first step for doing this was to check the presence and usability of ida.h in, so I used the macro OCTAVE_CHECK_LIB, which also sets the flags CPPFLAGS, LDFLAGS and LIBS. Then I set the right configuration variables in the build-aux folder and modified the build-env namespace. Finally I wrote a dld-function which includes ida.h and calls the SUNDIALS function IDACreate, which returns a pointer to the IDA memory structure.
This dld-function generates an oct-file which can be executed from Octave.

All these changes are visible in my public repository on Bitbucket:

In the next few days I will further investigate the recursive dependencies of SUNDIALS and their licenses and set up the correct build flags for those dependencies; I will also write more tests in in order to check the availability of functions and headers of the library.

After discussing with my mentors, we decided to start with the implementation of ode15i, because it's closer to IDA and more general than ode15s. Once ode15i is written, ode15s will be built around it.

We have also decided what the next steps are before the midterm evaluation:
  • implement a minimal .oct wrapper for IDA in Octave with a primitive interface such as $[t , y] = ode15i (odefun, tspan, y0, yp0, Jacobian)$
    that invokes IDA with all options set to default values

  • use two benchmark problems to test the correctness and speed of the code:
    I will compare it with the C implementation of SUNDIALS and with the m-file implementation relying on the mex interface of SUNDIALS 

As benchmark problems we have chosen two examples which deal with dense and sparse methods.

The first one is the Robertson chemical kinetics problem, in which differential equations are given for species $y_{1}$ and $y_{2}$, while an algebraic equation determines $y_{3}$. The equations for the species concentrations $y_{i}(t)$ are:

\begin{eqnarray*} \begin{cases} y_{1}^{'} = -0.04y_{1} + 10^{4}y_{2}y_{3} \\ y_{2}^{'} = 0.04y_{1} - 10^{4}y_{2}y_{3} - 3\cdot 10^{7}y_{2}^{2} \\ 0 = y_{1} + y_{2} + y_{3} - 1 \end{cases} \end{eqnarray*}

The initial values are taken as $y_{1} = 1$, $y_{2} = 0$ and $y_{3} = 0$. This example computes the three concentration components on the interval from $t = 0$ through $t = 4\cdot 10^{10}$.
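In implicit form, the problem hands the solver a residual function F(t, y, y') that must vanish along the solution. A minimal Python/NumPy sketch (for illustration only; the project code itself is a C++ .oct file calling IDA):

```python
import numpy as np

def robertson_residual(t, y, yp):
    """Residual F(t, y, y') of the Robertson DAE in implicit form."""
    return np.array([
        yp[0] - (-0.04 * y[0] + 1e4 * y[1] * y[2]),
        yp[1] - (0.04 * y[0] - 1e4 * y[1] * y[2] - 3e7 * y[1] ** 2),
        y[0] + y[1] + y[2] - 1.0,          # algebraic constraint
    ])

# Consistent initial conditions: y = (1, 0, 0), y' = (-0.04, 0.04, 0)
y0 = np.array([1.0, 0.0, 0.0])
yp0 = np.array([-0.04, 0.04, 0.0])
print(robertson_residual(0.0, y0, yp0))   # -> [0. 0. 0.]
```

Note that the chosen y0 and yp0 are consistent: the residual vanishes at t = 0, which is exactly what decic-style routines compute in the general case.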

This is the plot of the solution (the value of $y_{2}$ is multiplied by a factor of $10^{4}$).

Dense methods of IDA are applied for solving this problem.

The second problem is a $2D$ heat equation, semidiscretized to a DAE. The DAE system arises from the Dirichlet boundary condition $u = 0$, along with the differential equations arising from the discretization of the interior of the region.
The domain is the unit square $\Omega = \{0 \leq x, y \leq 1\}$ and the equations solved are:

\begin{eqnarray*} \begin{cases} \partial u/\partial t = u_{xx} + u_{yy} & (x, y) \in \Omega \\ u = 0 & (x, y) \in \partial \Omega \end{cases} \end{eqnarray*}

The time interval is $0 \leq t \leq 10.24$, and the initial conditions are $u = 16x(1 − x)y(1 − y)$.
We discretize the PDE system (plus boundary conditions) with central differencing on a $10 \times 10$ mesh, so as to obtain a DAE system of size $N = 100$. The dependent variable vector $u$ consists of the values $u(x_{j}, y_{k}, t)$ grouped first by $x$, and then by $y$. Each discrete boundary condition becomes an algebraic equation within the DAE system.

In this problem IDA's sparse direct methods are used and the Jacobian is stored in compressed sparse column (CSC) format.
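The interior discretization can be sketched with SciPy's sparse tools (a hypothetical Python illustration, not the project code: the helper name laplacian_2d is mine, and the real IDA example additionally replaces the boundary rows with the algebraic identities u = 0):

```python
import numpy as np
import scipy.sparse as sp

def laplacian_2d(m, h):
    """Central-difference 2D Laplacian on an m-by-m grid, CSC format."""
    main = -2.0 * np.ones(m)
    off = np.ones(m - 1)
    T = sp.diags([off, main, off], [-1, 0, 1]) / h ** 2   # 1D operator
    I = sp.identity(m)
    # Kronecker sum builds u_xx + u_yy on the tensor grid
    return (sp.kron(I, T) + sp.kron(T, I)).tocsc()

m = 10                       # 10 x 10 mesh -> DAE system of size N = 100
h = 1.0 / (m - 1)
L = laplacian_2d(m, h)
print(L.shape, L.format)     # -> (100, 100) csc
```

The CSC layout of the matrix is the same storage format that IDA's sparse direct (KLU) interface consumes for the Jacobian.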

Regarding the functions which check the ode input:
check_input and set_ode_options, which I started to write before the beginning of the coding period, will be improved after the midterm evaluation.

by Francesco Faccio ( at June 09, 2016 02:49 PM

June 01, 2016

Chiara Segala

Week 1: phi functions


During the first week of GSoC, I wrote four m-files for matrix functions, phi1m, ..., phi4m.
I implemented the four functions based on [BSW 07].
The phi functions are defined by the recurrence relation

\begin{eqnarray*} \varphi_{k+1}(z) = \frac{\varphi_{k}(z) - \frac{1}{k!}}{z}, \qquad \varphi_{0}(z) = e^{z}. \end{eqnarray*}

In my files, I use an algorithm based on a variant of the scaling and squaring approach: after scaling, I use a Padé approximation.
Below is the code of phi1m.m

function [N, PHI0] = phi1m (A, d)

N is φ1(A) and PHI0 is φ0(A); d is the order of the Padé approximation.
First, I scale A by a power of 2 so that its norm is < ½

s = min (ceil (max (0, 1 + log2 (norm (A, inf)))), 1023);

A = A/2^s;

then I use a (d,d)-Padé approximation

where the polynomials are

I write the polynomials in Horner form.

l  = 1;                  # index of the phi function (phi_1)
ID = eye (size (A));

i = d;
Ncoeff = sum (cumprod ([1, d-(0:i-1)])./cumprod ([1, 2*d+l-(0:i-1)]).*(-1).^(0:i)./(factorial ((0:i)).*factorial (l+i-(0:i))));

Dcoeff = ((-1)^(i))*prod ([(d-i+1):1:d])/(prod ([(2*d+l-i+1):1:(2*d+l)])*factorial (i));

N = Ncoeff;

D = Dcoeff;

for i = (d-1):-1:0

   Ncoeff = sum (cumprod ([1, d-(0:i-1)])./cumprod ([1, 2*d+l-(0:i-1)]).*(-1).^(0:i)./(factorial ((0:i)).*factorial (l+i-(0:i))));

   N = A*N + Ncoeff * ID;

   Dcoeff = ((-1)^(i))*prod ([(d-i+1):1:d])/(prod ([(2*d+l-i+1):1:(2*d+l)])*factorial (i));

   D = A*D + Dcoeff * ID;

endfor

N = full (D\N);

and finally I use the scaling relations

PHI0 = A*N + ID;

for i = 1:s

   N = (PHI0 + ID)*N/2;

   PHI0 = PHI0*PHI0;

endfor

endfunction
In the same way I wrote the other three files. As soon as possible, I will create a repository on Bitbucket and put the code there.
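A convenient sanity check for such an implementation is the identity φ0(A) = A φ1(A) + I, i.e. φ1(A) = A^{-1}(e^A − I), which follows from the recurrence. A small Python/SciPy sketch (illustration only; the GSoC code is written in Octave):

```python
import numpy as np
from scipy.linalg import expm, solve

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

PHI0 = expm(A)                          # phi_0(A) = e^A
N = solve(A, PHI0 - np.eye(5))          # phi_1(A) = A^{-1} (e^A - I)

# The pair must satisfy the recurrence phi_0(A) = A * phi_1(A) + I
print(np.allclose(A @ N + np.eye(5), PHI0))   # -> True
```

Any phi1m-style routine should reproduce N up to rounding error on such well-conditioned test matrices.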

by Chiara Segala ( at June 01, 2016 04:25 AM

Phi functions and general exponential schemes


The coding period will start on May 23, and I'm trying to figure out how I will implement my functions before the mid-term evaluation.

With the advice of my mentors Marco and Jacopo, I decided to start with the implementation of the phi functions, necessary to calculate the matrix functions in the two schemes for a general exponential Runge-Kutta and Rosenbrock integrator.
These schemes will not be really fast or efficient, but I will use them as a reference when I implement the official methods. They will be useful to verify the correctness of my code.

As regards the implementation of the phi functions I will refer to

[BSW 07] “EXPINT — A MATLAB Package for Exponential Integrators”, Havard Berland, Bard Skaflestad and Will M. Wright, 2007,
DOI: 10.1145/1206040.1206044, webpage (software without a license).

The general schemes that I'm going to implement are as follows:

  • Exponential Runge-Kutta integrators
Consider a problem of the form




the numerical exponential Runge-Kutta scheme for its solution is


and its coefficients are constructed from exponential functions or approximations of such functions.
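For a semilinear problem u' = Au + g(u), the simplest member of this family is the exponential Euler method, u_{n+1} = e^{hA} u_n + h φ1(hA) g(u_n). The sketch below is my own Python illustration (not the project code) and assumes hA is invertible; for constant g the method reproduces the exact solution:

```python
import numpy as np
from scipy.linalg import expm, solve

def exp_euler(A, g, u0, h, nsteps):
    """Exponential Euler for u' = A u + g(u); exact when g is constant."""
    n = len(u0)
    E = expm(h * A)                          # e^{hA}
    phi1 = solve(h * A, E - np.eye(n))       # phi_1(hA), assuming hA invertible
    u = u0.copy()
    for _ in range(nsteps):
        u = E @ u + h * phi1 @ g(u)
    return u

A = np.diag([-1.0, -2.0])
b = np.array([1.0, 1.0])
u0 = np.zeros(2)
# For u' = A u + b, u(0) = 0, the exact solution is u(t) = A^{-1} (e^{tA} - I) b
u = exp_euler(A, lambda u: b, u0, 0.1, 10)
exact = solve(A, (expm(1.0 * A) - np.eye(2)) @ b)
print(np.allclose(u, exact))   # -> True
```

Higher-order exponential Runge-Kutta schemes add internal stages built from φ2, φ3, ... in the same spirit.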

  • Exponential Rosenbrock integrators
Consider a problem of the form


the numerical exponential Rosenbrock scheme for its solution is


for details about the formulas, see [HO 10].

by Chiara Segala ( at June 01, 2016 04:14 AM

May 31, 2016

Abhinav Tripathi


So, according to the timeline that we decided, the first phase has been completed. The timeline stated:

30th May — Work on adding the sym conversion to PyTave and cleaning up the conversion mechanism in Symbolic.

I think we have done some good work to fulfill the objective. We have made many changes to add proper @sym conversion on the Symbolic side. The following 2 PRs take care of that:

Also, we worked on the PyTave side to fix conversion of tuples and booleans properly from Python to Octave. It can be seen in the following PRs:


As far as the midterm goals are concerned, we have completed 1a), 1b), 1d) & 2a).

Most of 1c) is also done; for now it is acceptable in my view. We now move to 2b) and 2c).

The next goal (as per the timeline) is:
15th June — Improve tests and doctests. Work on building PyTave and testing on Windows. No more crufty XML!

We already got rid of the XML stuff in the new PyTave IPC. We will now be working on adding and improving test coverage of PyTave (both BISTs and doctests). In approximately 10 days, we will move on to start trying to build PyTave on Windows.

by genuinelucifer at May 31, 2016 06:27 PM


We (my mentors and I) decided upon the following timeline to be followed during the course of this project:

30th May — Work on adding the sym conversion to PyTave and cleaning up the conversion mechanism in Symbolic.
15th June — Improve tests and doctests. Work on building PyTave and testing on Windows. No more crufty XML!
27th June — Try to get working PyTave with Symbolic on Windows (if needed, use cygwin) [Mid Term Evaluations]

5th July — Get a successfully working Symbolic with PyTave (all the tests and features working). Continue work on Goal 3.
20th July — Finalize implementation for Goal 3.
31st July — Work on improvements to PyTave such as adding support for Python objects from within Octave (printing, calling methods, saving etc…).
10th August — Recode @sym methods as required to take benefit of PyTave. Also, add some other methods into Symbolic from #215
15th August — “Blue-skies” stuff. Try to fix the unicode utf8 rendering on Windows. Explore possibility of incorporating PyTave into Octave. [gsoc final code submission begins]
23rd August — Finish all the code, test on buildbots and Windows. Submit to Google. [Final deadline for code submission]
Afterwards — Complete any goals which are left. Continue contributing to Octave…


The following goals were set for the midterm evaluations:

1a) Octave, Symbolic, and PyTave dev versions installed.

1b) Some basic communication working between PyTave and Symbolic. This might use some existing implementations in a non-optimal way (e.g., its ok if Symbolic returns objects as xml strings).

1c) Most of Symbolic’s tests and doctests continue passing, although some failures ok at this point.

1d) The above works on at least one OS (probably GNU/Linux).

2a) PyTave converts various Python objects into appropriate Octave objects. PyTave needs to be extended to return proper @pyobj when it cannot convert the object. Also, symbolic must be able to convert such objects to @sym objects by calling proper python functions via PyTave (if they are indeed @sym). That is, bypass the current generating of xml strings.

2b) Improve the BIST test set coverage of both PyTave and Symbolic for any new features added.

2c) Improve doctest coverage of PyTave.

Stretch Goals: 

3) Improve on 2a) with some TBD mechanism: perhaps a pure m-file callback feature in the PyTave export code.

4) The above works on both GNU/Linux and MS Windows.

5) Objects passed from Symbolic back to Python should use PyTave mechanisms.

by genuinelucifer at May 31, 2016 05:59 PM

May 19, 2016

Amr Mohamed


It’s only four days until the GSoC coding period starts, and the project seems to be on the right track.
During the last few weeks, I have finished the first .m files for the polysplit and polyjoin functions.
Currently I am trying to prepare a makefile for compiling the rest of the functions, as they will be .cc files, not .m files.

by amrkeleg at May 19, 2016 09:00 PM

May 17, 2016

Chiara Segala

Presentation and timeline


I'm Chiara Segala, a graduate of the University of Verona, Italy. I completed my bachelor's degree in applied mathematics and I am now in the second year of my master's degree.
I was selected for the GSoC 2016 with the project Exponential Integrators (GNU Octave organization).
Exponential integrators are a class of numerical methods for the solution of partial and ordinary
differential equations.

This is an estimated TIMELINE of my work:

05/05 - 23/05
Study the theory about the exponential Runge-Kutta and Rosenbrock-type integrators:
[HO 10] “Exponential integrators”, Marlis Hochbruck and Alexander Ostermann, 2010, DOI: 10.1017/S0962492910000048
[HO 05] "Explicit exponential Runge-Kutta methods for semilinear parabolic problems", Marlis Hochbruck and Alexander Ostermann, 2005, DOI: 10.1137/040611434, preprint.
Familiarize with the other ODE solvers in Octave, odepkg.
Take a look at
[J 14] “EXPODE - Advanced Exponential Time Integration Toolbox for MATLAB”, Georg Jansing, 2014, webpage.
Study the theory and the features of the expmv code:
[HAM 11] “Computing the Action of the Matrix Exponential, with an Application to Exponential Integrators”, Awad H. Al-Mohy and Nicholas J. Higham, 2011, DOI: 10.1137/100788860.

23/05 - 27/06
week 1 : implementation of phi functions
week 2 : implementation of a scheme for a general exponential Runge-Kutta integrator, see section 2.3 of [HO 10], using the phi functions
week 3 : implementation of a scheme for a general exponential Rosenbrock integrator , see section 2.4 of [HO 10], using the phi functions
week 4 : implementation of a method for the construction of matrix Ã, see theorem 2.1 of [HAM 11]

Mid-term evaluation

27/06 - 15/08
week 1-2 : implementation of an advanced method with adaptive time stepping from Runge-Kutta family, using expmv
week 3-4 : implementation of an advanced method with adaptive time stepping from Rosenbrock-type family, using expmv
week 5-6 : implementation of validation tests, e.g. analyze the order of convergence of the methods and some examples
week 7 : add some improvement and code clean up
week 8 : write documentation

15/08 - 22/08 : review of all the work

by Chiara Segala ( at May 17, 2016 05:39 AM

May 15, 2016

Francesco Faccio



This is a Timeline for the project ode15s.

As discussed with the mentors, our goals for the mid-term evaluations are to build Octave with all the dependencies of SUNDIALS and to create an m-file which deals with the input parameters and the options of a generic ODE/DAE solver.
The final goal is to have a well tested and documented implementation of ode15s.

Community Bonding Period: 
-Familiarize with Autotools and the structure of Octave
-Study the documentation of SUNDIALS and Oct-files

Week 1-2 (May 23 - Jun 5):
-Add SUNDIALS as a dependency and build Octave from source (I will use a dld-function which calls a function of SUNDIALS)

Week 2-3 (Jun 6 - 19):
-Write an m-file which deals with the input of a generic ode/dae solver

Midterm Evaluations

Week 4 (Jun 20 - 26):
-Design the code of ode15s (I will choose which functions of SUNDIALS will be used)

Week 5-6 (Jun 27 - Jul 10):
-Write Oct-files

Week 7 (Jul 11 - 17): 
-Write tests 

Week 8-9 (Jul 18 - 31):
-Test compatibility between Matlab and Octave
-Test the performance of the algorithm

Week 10-11 (Aug 1 - 14):
-Write the documentation and perform more tests

Week 12 (Aug 15 - 21)
-Review of the work

OPTIONAL: If I finish the work early, I will try to write a (slower) version of ode15s which uses DASPK or DASSL (this is for people who don't have SUNDIALS installed)

Final Evaluations

by Francesco Faccio ( at May 15, 2016 02:29 PM

May 07, 2016

Francesco Faccio

Introducing the project


my name is Francesco, I'm a student in Mathematical Engineering at Politecnico di Milano, and during this summer I will work with GNU Octave, as a GSoC student, on the implementation of ode15s, a solver for stiff differential equations and differential algebraic equations.

In the next few days I will upload a timeline with goals for the midterm and final evaluations.

In this blog you will find information about the project's progress.

by Francesco Faccio ( at May 07, 2016 02:28 AM

May 06, 2016

Amr Mohamed



Hello there,
I would like to share my experience during the first weeks of the GSoC program with the great GNU Octave organisation, as things are starting to heat up now.
My project aims at adding multiple polygon functions to the octave-geometry package.
So, I started by looking at the package’s structure, building it and running it on Octave.
I have also created a remote repository on Bitbucket to share my code for future reviews.
Finally, I created a new bookmark (feature bookmark) and committed my implementation of the polysplit function there.

by amrkeleg at May 06, 2016 01:35 AM


I’m Amr Mohamed, a third-year student at the Faculty of Engineering, Ain Shams University, Cairo, Egypt.
I will work during this summer on “Implement boolean operations on polygons” as part of GSoC 2016.
My profile on Octave’s wiki can be found at:


My initial timeline will be:

30-4 / 6-5

Implementing the polysplit function and testing it.

7-5 / 13-5

Implementing the polyjoin function and testing it.

14-5 / 10-6 

Finalize the implementation of the conversion functions (polysplit-polyjoin)

11-6 / 19-6

Implementing and testing the ispolycw function. 

20-6 / 27-6

Midterm Evaluation week

25-6 / 1-7

Working on the poly2cw and ispolycw scripts

2-7 / 8-7

Performing severe testing to all the implemented functions

9-7 / 15-7

Working on the polybool script.

16-7 / 22-7

23-7 / 29-7

Implementing and testing the poly2fv function.

30-7 / 5-8

A 2-week buffer for finalizing the scripts, debugging them and writing the documentation.

6-8 / 12-8

13-8 / 23-8

Tidying the code, writing tests, improving the documentation and submitting the code sample.


by amrkeleg at May 06, 2016 01:30 AM

May 05, 2016

Barbara Lócsi

Hello World!

I am Barbara Lócsi, a Software Engineering student at the Budapest University of Technology and Economics. During this summer as a GSoC student I will work on Generalized eigenvalue problem. I will blog about my progress here.
You can find more information about me and my application here: 

Here you can see my timeline
  • Community Bonding period (Until May 22)
    • I have already started working on implementing preliminary balancing; I would like to finish most of this task before the coding period begins.
  • Week 1-2 (May 23 - Jun 5)
    • Finals, non-coding time
  • Week 3-4 (Jun 6 - Jun 19)
    • Finish preliminary balancing if it is not finished, start working on implementing left eigenvector calculation
  • Midterm evaluations (Jun 20 - Jun 27)
  • Week 6 (Jun 27 - Jul 3)
    • Finish the left eigenvector task (if not finished)
  • Week 7-10 (Jul 4 - Jul 31)
    • algorithm choosing for eigenvalue calculation (chol or qz)
    • creating tests
  • Week 11 (Aug 1 - Aug 7)
    • documenting
  • Week 12-13 (Aug 8 - Aug 28)
    • deciding return value format of the eigenvalues (vector or matrix)
    • testing, documenting

by Barbara Lócsi ( at May 05, 2016 01:00 PM

May 03, 2016

Abhinav Tripathi


This is my blog detailing the progress on the GSoC project – ‘Octave Symbolic Package‘… The results were out on 23rd April 00:30 a.m. (IST – GMT+5.30)

The official octsympy project can be found on github:
My fork of the project can be found on my profile:

After the commencement of GSoC 2016, the following PR has been merged already:
which deals with some tests failing on Windows due to new scripts not being loaded into the Octave cache.

We are currently working on deciding the upcoming goals. The main goal would be to get PyTave working with octsympy and replace the existing approach of using popen2 to communicate with python.

This project has a long way to go before completing its goals…

I look forward to the exciting journey ahead :)

by genuinelucifer at May 03, 2016 10:19 AM

March 01, 2016


Octave in 2016 Google Summer of Code

We’re in GSoC this year, for our second time as an independent organization!

Student applications for the paid summer internships are due 25 March.

Check out the Wiki for potential projects and application instructions.

by Nir at March 01, 2016 02:45 PM

January 14, 2016

Juan Pablo Carbajal

Daring to animate!

We all enjoy a good animation; here we will see how we can animate our data in GNU Octave. I present some examples as motivation:

To achieve this we will get to know the comet function and, to make our own animations, we will look in some detail at the objects generated by the plot function.

The comet function

In its most intuitive form, this function takes three input arguments: 2 vectors (or lists of values) and a scalar value (a number). The two vectors are the pairs of values (x,y) that will be drawn one by one. The scalar argument defines the waiting time before showing the next point. Let's try the following:

T   = 10;
fps = 25;
t   = linspace (0, T, fps*T).';

l = 0.5;
w = 2*pi*0.5;

R = exp (-l * t);
x = R .* cos (w * t);
y = R .* sin (w * t);

fig = figure (1);
set (fig, "Name", "Espiral");

comet (x, y, 1/fps);

If you run the code, you should see something like what is shown in the following video:
Question 1:
Do you understand all the code?

Anatomy of an animation
To make any kind of plot in Octave we need a figure (the window where we see the plot, created with the figure function). This figure contains a pair of coordinate axes (a "child", stored in the field of the figure called "children"); in Octave these are called axes and are created with the function of the same name. Inside these axes we can have graphics objects, such as lines, points and polygons.

As you will notice, figures can have more than one child; that is, a figure can contain several axes, allowing several plots in the same figure.

Summing up: a figure contains one (or more) pairs of coordinate axes, which in turn contain graphics objects.
Let's look at an example

close all
fig = figure (1, "Name", "Figura 1");  # A figure with a name
ax  = axes ();                         # fig adopts the axes automatically.

x = randn (10,1);                      # We plot something
h = plot (ax, x, '.');                 # and put it in the axes we created

Of course, all of this (except the name of the figure) is done automatically simply by running plot (x,'.'). Verify it!
In the last line of the code, we ask the plot function to return the graphics object. We can see the properties of this object by running get(h), which gives us a huge list of the graphics object's properties and their current values. Note that this list contains fields named ydata and xdata. These fields hold the points that are shown in the plot. If we modify these fields, we modify the plot. For example, we can shift the values one place to the right by doing

t = get (h, "xdata");
set (h, "xdata", shift (t, -1));

Question 2
Can you modify the code to produce the animation shown below (click to see the animation)?
animated gif

We have seen the principles of animation: prepare the plot and then modify its contents.
In general it is a bad idea to redraw the whole plot at every instant, since it is very slow and the animations come out badly.
You can animate any property of graphics objects, axes and figures.

If you have an idea and need help, don't hesitate to ask in the forum

by Juan Pablo Carbajal ( at January 14, 2016 12:49 AM

November 28, 2015

Jordi Gutiérrez Hermoso

Octave code sprint 2015

So, let’s get this going!

When should the Octave 2015 code sprint be?

by Jordi at November 28, 2015 02:23 PM

August 31, 2015

Juan Pablo Carbajal

Archiving your work

In this lesson we will see how to create and run an m-file (mfile).

All the commands we run in GNU Octave can be saved in a text file. It is important that the file be plain text (ascii), since this is the format GNU Octave expects for command files. This is true in general for most programming languages.

To begin, open a text file with your preferred editor (in my case, since I work on Linux, I use Gedit). I emphasize that you should not use word processors like Office, because these programs do not save their files as plain text. If you are on Windows you can use Notepad++.

The following video shows how I work on Ubuntu:

Creating and running an mfile in octave-cli

The video shows the creation of an mfile that corresponds to a function. We have not yet learned how to write functions, so for the moment we will look at another type of mfile: the command file (script).

Open a text file and type the following commands:

disp ("Hola, este es un archivo de comandos! (script)");
a = 1:5;
txt = repmat (", %d",1,length(a)-1);
printf (["Los primeros %d números son: %d" txt "\n"],length(a), a);

Save this file, noting the directory where it will be stored (in the video I used the desktop). I will assume that the name you gave the file is script_00.m
Now start an Octave session. Make sure the file is in the current directory: run pwd to see the current directory and ls to see the list of files.
To run the script, we simply call it by its name (without the extension):


Done! You have created your first script in Octave. The following list enumerates the steps for working with scripts:
  • Open an ASCII (plain text) file in which to write the commands.
  • Save the file with the .m extension (noting where it is saved).
  • Call the file from GNU Octave using its name without the extension (make sure pwd matches the place where the file was saved).
It is important to use scripts (it is the recommended way of working), since the commands are kept on record and we can correct errors and run them again without having to type everything from scratch.

If you are using Octave 3.8 or newer, you can use the graphical interface, which includes a text editor. The following video shows how to create a script that prints the numbers from 1 to 10.

Creating and running an mfile in octave-gui

The Octave GUI can be configured in Spanish if you wish. Note that this only changes the language of the menus, not the language of the programming environment (functions, help and messages).

Don't hesitate to write your questions and comments, here below or in the discussion forum.

by Juan Pablo Carbajal ( at August 31, 2015 05:48 PM

August 27, 2015

Asma Afzal

Wrapping up

So, the GSoC period has come to an end now.

Project Goals  
My project was about creating Matlab compatible wrappers for the optim package.  Here is a brief list of my project goals.

1- lsqnonlin wrapping nonlin_residmin and residmin_stat
2- lsqcurvefit wrapping nonlin_curvefit and curvefit_stat
3- nlinfit wrapping nonlin_curvefit (it was initially decided to wrap leasqr but changed to avoid extra computations)
4- quadprog wrapping __qp__ instead of qp and returning lambda in the form of a structure as in Matlab
5- fmincon wrapping nonlin_min
6- Test and demos for the above functions
7- Stretch goals: I previously decided to create other missing functions or perhaps additional backends, but before the midterm I decided instead to add optimoptions to my to-do list.

The functions lsqnonlin, lsqcurvefit and nlinfit are complete with tests and demos and integrated in the optim package. Since nlinfit is from the statistics package in Matlab, additional functions such as statset and statget were required for handling options. These functions are implemented, with minor modifications to optimset, optimget and __all_opts__, as statset, statget and __all_stat_opts__, and are now part of the optim package.

The function quadprog required directly wrapping __qp__ instead of qp for the ordering of lambda. It is in the final stages of review and will soon be integrated.

fmincon has not been thoroughly reviewed yet. I will send it to Olaf after quadprog is committed to optim. 

Hiccups in the stretch goal
I couldn't create optimoptions in the GSoC time frame because it was a bit open-ended and I had to come up with an object-oriented design for the function. I was trying to understand how Matlab implements it for quite some time. Anyway, I didn't pursue it further and shifted my focus to the refinement of my almost complete functions to get them integrated in optim.

Interesting Takeaways
This is my first experience of working with any open source organization and it's definitely a pleasant one. It's delightful to see people using my functions and possibly benefiting from my work [2-3]. :)

I think I have managed to meet all the goals I set before the start of GSoC. (Regrets? Well, I could have saved more time for optimoptions and it would've been better to discuss it way before than being stuck for a while.)

I'm extremely grateful to the Octave community, especially my mentors Olaf Till and Nir Krakauer for their unrelenting support. GSoC wouldn't have been possible without their constructive feedback. I have learned a lot from this experience.


by Asma Afzal ( at August 27, 2015 01:52 PM

August 20, 2015

Piotr Held

Summary of project

I have not written a post in a long while because I have had family issues. Although my progress was hampered, it was not completely halted. I was able to add some more features and clean some of the old code up. I have added two new sections with five new functions:
  • Testing for nonlinearity:
    • surrogates
    • endtoend
    • timerev
  • Spike trains:
    • spikeauto
    • spikespec
They are complete with documentation and some demos. The wiki has also been updated with a demo of surrogates. Thus the original plan has been completed, apart from some of the functions in the final section (Tutorial) and randomize. There are also some additional functions: endtoend, spikeauto, spikespec and av_d2. These were added as logical additions to the existing functions.

What happened to randomize?

The first time I read the documentation and tried to plan the project, I did not realize how big randomize is. Once I realized that it was a toolbox and not a function, I needed to design how this toolbox should work with/in Octave. This took about a week of my time. I had not had any experience designing anything similar to randomize, and I considered it an interesting challenge. I tried to create a hybrid that could use C++ and Octave functions to run simulated annealing. I wanted the main loop to be in C++ because this algorithm cannot be vectorized, so it would run slowly in Octave.

My plan was to create a "runner" that could call abstract methods of a base cost function class. I was able to accomplish this only partly: even though I introduced some polymorphism, I only created one cost function (I did not finish the second one). This one cost function and the runner worked properly (as tested against the results from TISEAN). I was even able to implement a kind of pausing (like in other functions in the package), which allows the user to break the execution without having to kill Octave. All of this worked correctly with the one cost function. My plan was to modify the code to ensure the runner could manage any type of cost function (inheriting from the base cost function class) once the second cost function was completed.

Afterwards, I planned to create a base abstract class in Octave using classdef and similar keywords. Then I would have created an instance of the C++ class that would run the Octave methods using polymorphism. This was not possible, however, because classdef (and similar keywords) are not yet fully supported by Octave: they are not parsed properly by help, are not documented, and not all of the Matlab functionality is present as of now.

Looking back at this attempt at porting the randomize toolbox, I believe that even with a good plan it could have taken more than half of my programming time this summer to create a complete toolbox with good tests and documentation. I do not regret trying to port it, but I think I can say that this part was not completed, even though much work was put into it.

All of the code created is placed in the 'devel' folder. There are also some tests there that use the cgreen (actually cgreen++) framework. It is almost completely undocumented, as I spent most of my time developing new code.

Other thoughts

I am very pleased with the support of the community. I expected they would not have time to help, and at first I had trouble getting the help I needed, but once I understood when (depending on who was online) and on which channel to ask, the community became my biggest help in completing the project. My project needed help from people who knew how Octave is written in C++, not just Octave developers, and there were plenty of people who could assist with exactly that.

One of the most exciting moments of the whole project was when I saw a problem on the mailing list that I encountered a few weeks earlier and could give some ideas on how to solve it.

If I had to do the project all over again and choose between the help of the community and the experience I have today, after working on this project for over 3 months, I would choose the community, because there are so many knowledgeable people there who know the ins and outs of Octave.

by Piotr Held ( at August 20, 2015 07:44 PM

August 19, 2015

Juan Pablo Carbajal

What are vectors good for?

In the class on vectors I discussed how to create a vector in GNU Octave, but the motivation for using vectors was left unaddressed. What use are those little lists of numbers that we call vectors?

Love letters...

I suppose you have at some point sent a letter by postal mail. For the letter to reach its destination, we need to provide certain information besides the addressee's name. Suppose we send the letter within Argentina: we need to indicate the province the letter is going to, the city, the street and the house number. In principle we could organize the information in this way

carta = [ "Buenos Aires","Lomas de Zamora", "Ceferino Namuncura", "150" ];

OK, this already looks like a vector, but the components are strings instead of numbers. No problem: we can simply indicate the province with a number from 1 to 23 (Argentina has 23 provinces) and do something similar with the cities and the streets. The house number we can use directly. That is, the destination of our letter could be represented with the vector

carta = [ 23, 1352, 12345, 150 ];

where each component has been replaced by a number according to a specified table. The first component of the vector indicates the province, the second the city, the third the street, and the fourth the house number. These letters live in a 4-dimensional space!
A set of these letters can be organized as a list of vectors, a matrix:

cartas = [ 23, 1352, 12345, 150; ...
          5, 130, 4, 756; ...
          12, 7, 2341, 29 ];  

Each row vector of that matrix represents a letter. The first column of the matrix gives the provinces, the second the cities, the third the streets, and the fourth the house numbers.

Baker, I want bread!

OK, maybe that example was too abstract. The next example reached me through Facebook; it is also abstract, but perhaps easier to understand.

Let us think for a moment about cooking recipes, in particular about the ingredient list of each recipe. In the table I put two examples of ingredients for making bread

                  Pan de cazuela   Pan fuerte
 1. Flour (g)          250            300
 2. Salt (g)             5             20
 3. Yeast (g)            5             20
 4. Oil (ml)             0             50
 5. Water (ml)         175             50
 6. Honey (ml)           0            100
 7. Eggs                 0              0

This table already shows a representation of these recipes as vectors. In this case they are two column vectors, which in GNU Octave we could write as

recetas = [ 250 300; 5 20; 5 20; 0 50; 175 50; 0 100; 0 0 ];

The ingredients of the pan de cazuela are the first column of this matrix, and those of the pan fuerte are the second. Note that these vectors are 7-dimensional (7 ingredients), but it is worth asking whether the Eggs dimension makes sense in the space of bread recipes... Are there bread recipes that use eggs?

The two previous examples are quite abstract, and if you have understood them I can imagine you already have many ideas about how to represent other things as vectors or matrices.

Spatial trajectories

The following example is paradigmatic in the field of numerical computing, and it is very relevant to this course: representing the position of objects in space!

Imagine an ant walking on a sheet of A4 paper (width: 21 cm, length: 29.7 cm). For our convenience we have marked one of the corners of the sheet as a reference point. At any instant we can read off the distance between the ant and the corner of the sheet along the two dimensions of the paper: width and length. We place the ant at a point on the paper and once per second we record where it is. We might observe something like this:

(Figure: ant trajectory)

Figure 1: Positions of the ant on the paper. Each point represents the position of the ant in the paper's coordinates, observed every 1 second. The dotted line is the ant's trajectory observed more frequently.

We store the position after the first second in a matrix with one row and two columns (a row vector!)

p(1,1:2) = [ 10.7 14.8 ];

After another second we obtain the new position and store it in the second row of the matrix:

p(2,1:2) = [ 11.1 14.8 ];

And so on. After 10 seconds, the matrix has 10 rows and 2 columns. This matrix represents the trajectory of the ant at 1-second intervals. Each row vector gives the position at a given instant in time.

p = [ 10.7 14.8; ...
      11.1 14.8; ...
      11.4 15.1; ...
      11.2 15.0; ...
      11.0 15.3; ...
      11.8 15.8; ...
      11.6 16.8; ...
      11.8 17.0; ...
      11.0 17.3; ...
      10.8 17.2];

I think these three examples should get you excited about thinking of what things can be represented using vectors and matrices, and how. It is also worth asking how we can use these representations. For example, using the ant's matrix: what is the distance between the first and the last position of the ant?
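As a hedged sketch of how that question could be answered in Octave (reusing the trajectory matrix p defined above):

```octave
% Euclidean distance between the first and the last observed positions of
% the ant, using the trajectory matrix p from the example above.
p = [ 10.7 14.8; 11.1 14.8; 11.4 15.1; 11.2 15.0; 11.0 15.3; ...
      11.8 15.8; 11.6 16.8; 11.8 17.0; 11.0 17.3; 10.8 17.2 ];
d = norm (p(end,:) - p(1,:))   % distance in cm, about 2.40
```

Here norm of the difference of two row vectors gives the straight-line distance between the two positions, regardless of the path the ant actually took.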

Can you think of other examples? Write them in the comments!

Note: The attached file is a GNU Octave script that generates the ant's trajectory and creates the figure shown in this class.

by Juan Pablo Carbajal ( at August 19, 2015 07:11 PM

August 16, 2015

Asma Afzal

Week 11 and 12: Integrating existing work in optim package.

A recap of the progress in two weeks:

  • I had to let go of optimoptions (for GSoC) mainly because of
    • time constraints 
    • and also because I don't have much experience with object-oriented programming. For optimoptions, I will have to come up with a design. I started with a class implementation using classdef, as in Matlab, but classdef is in its infancy in Octave and could be a limiting factor.
  • I am refining my existing functions and including tests and demos so they can be integrated in the optim package.
    • lsqnonlin and lsqcurvefit required additional options documentation and the OutputFcn and Display settings. These two functions have now been successfully integrated into the optim package. [1]
    • Functions nlinfit and quadprog are under review.
    • I am working on fmincon now. It still has to be discussed which backend should be used. lm_feasible can return Lagrange multipliers, the gradient and the Hessian, but since it adheres to the constraints in all iterations, it behaves differently (from Matlab's algorithms) and is sometimes less efficient than octave_sqp, which only respects the constraints for the final result.

by Asma Afzal ( at August 16, 2015 08:28 AM

August 03, 2015

Asma Afzal

Week 10: Preliminary work on optimoptions

Thoroughly checking how optimoptions works in  Matlab.

options = optimoptions (SolverName)

Things to do:

  • Identify whether the solver name is a valid string or function handle

  • Cater for multiple Algorithms
     A subset of options for different algorithms.
  • Transfer relevant options of different solvers to modify/create option.
    oldoptions = optimoptions('fmincon','TolX',1e-10)
    newoptions = optimoptions('lsqnonlin',oldoptions)
  • Using dot notation or optimoptions to modify previously created options.
    (Second argument in optimoptions can be old options)
  • Display options:
    Set by user on top (for the current algo)
    Default options
    Options set for other algorithms.
Implementation ideas:

In Matlab, these two calls generate the same options object optim.options.Fmincon:

What would be more appropriate, IMO, would be to have a function optimoptions of the following form (a sketch; the feval-based dispatch is my way of writing "instantiate optim.options.<SolverName>"):

function opts = optimoptions (SolverName, varargin)

    obj = feval (["optim.options." SolverName], varargin{:});

    opts = struct (obj);

endfunction


This function will
  1. Instantiate the relevant class and request the default options from the solver.
  2. Compare the user-provided options and add the relevant ones.
  3. Display the options of the current algorithm.
  4. Return the output in the form of a struct, so it stays compatible with optimget.

by Asma Afzal ( at August 03, 2015 09:21 AM

July 24, 2015

Asma Afzal

Week 8 and 9: The ordering of lambda in quadprog

I was trying to dig through the code to figure out why the ordering of lambda in [1] does not match that of quadprog in Matlab.
The following example shows how the values differ:

C = [0.9501    0.7620    0.6153    0.4057
    0.2311    0.4564    0.7919    0.9354
    0.6068    0.0185    0.9218    0.9169
    0.4859    0.8214    0.7382    0.4102
    0.8912    0.4447    0.1762    0.8936];
d = [0.0578
A =[0.2027    0.2721    0.7467    0.4659
    0.1987    0.1988    0.4450    0.4186
    0.6037    0.0152    0.9318    0.8462];
b =[0.5251
Aeq = [3 5 7 9];
beq = 4;
lb = -0.1*ones(4,1);
ub = 1*ones(4,1);
H = C ' * C;

f = -C ' * d;

[x, obj_qp, INFO, lambda] = qp ([],H,f,Aeq,beq,lb,ub,[],A,b);
lambda = 

Reordering lambda based on the length of constraints resulted in
lambda.eqlin = 
lambda.lower =

lambda.upper =

lambda.ineqlin =

Matlab, however, gave lambda.eqlin = -0.0165 for this example, and lambda.lower =

There were two issues with the ordering:
  1. The sign of the Lagrange multipliers corresponding to the linear equality constraints is always opposite to Matlab's.
  2. The multipliers corresponding to the bound constraints, as underlined above, are swapped.
I tried several different examples to understand what is going on. For all of them, the sign of lambda.eqlin was consistently different. I still can't pinpoint why, but for now I am just multiplying lambda.eqlin by -1.

For the swapping issue, I tried the same example with just the bound constraints:

[x, obj_qp, INFO, lambda] = qp ([],H,f,[],[],lb,ub)

but only considering lower bound constraints gave:
[x, obj_qp, INFO, lambda] = qp ([],H,f,[],[],lb,[])

which is how it is supposed to be. Tracing back, I found out that the ordering of the lambda vector in qp.m [2] is not [equality constraints; lower bounds; upper bounds; other inequality constraints] as I previously assumed. From lines 287 and 288 in qp.m [2], the bound constraints are added to the inequality constraint matrix alternately. So the issue wasn't swapping, but understanding how the constraints are passed to __qp__.

In my code in [1], I had to make significant changes to the original qp.m code, such as:
- The inequality constraint matrix has the order: [linear inequality constraints; lower bounds; upper bounds].
- A check for bounds that are too close together, which form an equality constraint. This causes indexing issues, as the Lagrange multiplier corresponding to the bounds now takes the place of multipliers corresponding to linear equality constraints.
Also, Matlab only accepts too-close bounds when using a medium-scale algorithm; since the lower bound is then approximately equal to the upper bound and the pair is treated as a single equality constraint, the single Lagrange multiplier is placed in the corresponding lambda.upper field while the corresponding lambda.lower value is zero.

Continuing the above example:

lb(4) = 0.3;
ub = 0.3*ones(4,1);

[x, obj_qp, INFO, lambda] = qp ([],H,f,[],[],lb,ub)
lambda =


Here, lb(4) == ub(4), so the constraint is treated as an equality constraint and the value of the corresponding Lagrange multiplier appears on top (underlined).
I added checks for such cases, and now my code in [1] gives the same results as Matlab:


lambda =
  scalar structure containing the fields:
    lower =
    upper =


- qp.m strips off the -Inf constraints before passing them to __qp__. I am doing the same in quadprog. I have added further checks to make sure the multipliers are placed in the right positions in their respective fields.

Plans for the next weeks:
- Get feedback from my mentor on the changes in quadprog.
- Begin initial work on optimoptions.


by Asma Afzal ( at July 24, 2015 05:40 AM

July 20, 2015

Piotr Held

Plans for randomize

This past week I have spent time trying to establish some testing framework for C++ methods and also trying to create a model for what the abstract base classes for cool, cost and permute should look like. I would like them to have the following methods/members:
  • Cost (Cost_fcn):
    • const Matrix *series-> pointer to the series for which a surrogate is generated
    • double cost-> current cost
    • void cost_transform (Matrix *)-> initial transformation which is used for better calculation of cost
    • Matrix cost_invert () const-> assigns to the input variable the inverse of the transformation performed above (to get the actual surrogate, not just a representation of it in a different form)
    • double cost_update (octave_idx_type nn1, octave_idx_type nn2, double cmax, bool &accept)-> perform quick update of cost (for a swap of elements under index nn1 and nn2) and decide if cost is smaller than maximum cost (cmax) if yes, accept new cost and return true otherwise reject new cost and return false
    • double cost_full()-> performs a full calculation of the cost, takes longer than cost_update()
    • getters/setters for cost
  • Cool (Cool_fcn):
    • double temp-> holds the current temperature
    • double cool (double cost, bool accept, bool &end)-> takes current cost, accept which holds whether the last cost_update() was accepted or not and returns the new temp and sets flag end to indicate if the simulated annealing is over
  • Permute (Permute_fcn):
    • Matrix *series-> holds the pointer to the series, and modifies the series only when exch() is called
    • void permute (octave_idx_type &n1, octave_idx_type &n2) const-> generates two indexes n1 and n2 that can be used to calculate Cost_fcn::cost_update()
    • void exch (octave_idx_type n1, octave_idx_type n2)-> exchange element under n1 with element under n2 in the series
Those are the methods I intend to have in the base/abstract classes which will be called by the Simulated Annealing runner code. I have not decided what that code should look like, but the current version seems to be working as well as the randomize program from TISEAN package.

I was also hoping to create a subclass of each of those abstract classes built to call GNU Octave code. This would allow the user to create their own functions without having to write anything in C++. However, this idea might not be practical for the following reasons:
  1. The example of Simulated Annealing provided in the TISEAN package (ver. 3.0.1), takes about 0.7 seconds to run using only C++ code and performs on average 900,000 calls to Cost_fcn::cost_update() and calling a simple function in Octave that many times (using the for loop) took 16 seconds
  2. I have trouble deciding how to neatly pass these functions/classes to randomize along with some parameters the user might want to include. I originally thought of using classdef - the new keyword introduced in Octave 4.0.0. I hoped to create an abstract Octave class and then let anyone subclass it to create their own cost, cool and permute classes. The problem is that classdef and all of the associated keywords are not documented, moreover according to Carnë Draug the help function will not recognize this new type of Octave class. So even if all of the needed functionality was available in Octave I might not be able to document it for the user
If obstacle 2. can be overcome it might still be beneficial for the package to create this type of functionality, regardless of how long the code will execute.

This week I plan to refine the design of the abstract classes as well as port more of the cost function options from TISEAN.

[Update]: I modified the design a bit and updated this post to fit the new code.

by Piotr Held ( at July 20, 2015 09:09 AM

July 14, 2015

Asma Afzal

Week 7: quadprog wrapping __qp__

I have wrapped quadprog on __qp__ instead of qp.m in [1].

Main differences between quadprog in [1] and qp.m.

- Input argument placement
  quadprog(H, f, A, b, Aeq, beq, lb, ub, x0, options)  =  qp (x0, H, f, Aeq, beq, lb, ub, [], A, b, options) 

- Check for empty inputs A and b
qp ([],H,f,Aeq,beq,lb,ub,[],A,[])

This works: qp simply ignores the inequality constraints due to the if checks
in lines 258, 266 and 275 of qp.m. Matlab gives an error if A is empty and b is not, and vice versa.

quadprog (H, f, A, [], Aeq, beq, lb, ub)
Error: The number of rows in A must be the same as the length of b. I have added this check in line 181 in [1].

- Lambda output as a structure instead of a vector as in qp.m.

Ordering of lambda:
  • The order of lambda vector output (qp_lambda) from __qp__(in my code) is [equality constraints; inequality constraints; lower bounds; upper bounds]. 
  • The multipliers are present if the constraints are given as inputs so the size of qp_lambda depends on the size of constraints. 
  • Variables idx_ineq, idx_lb and idx_ub make sure I pick the right values. 

H = diag([1; 0]);
f = [3; 4];
A = [-1 -3; 2 5; 3 4];
b = [-15; 100; 80];
l = zeros(2,1);
[x, obj, info, qp_lambda] = qp ([], H, f, [], [],l,[],[], A, b)
[x,fval,exitflag,output,lambda] = quadprog (H, f, A, b,[],[],l,[])

qp_lambda =


lambda =
scalar structure containing the fields:

    lower =


    upper = [](0x0)
    eqlin = [](0x0)
    ineqlin =


Things to do:
  • Check the sign issue for lambda.eqlin (qp gives values -1* Matlab's)
  • Check whether __qp__ changes the order of constraints. The values of lambda from qp.m in the last example in [2] are present but do not coincide with the respective constraints. 
  • Move on to optimoptions.


by Asma Afzal ( at July 14, 2015 07:28 PM

July 09, 2015

Piotr Held

Progress report and plans

So far my progress has been as planned. Before the end of the midterm evaluation I was able to publish on my repository version 0.2.0 of the package, which included all of the functions from section Dimensions and entropies from the TISEAN documentation. As I mentioned in my previous post the functions that needed to be ported in this section are slightly different from what I wrote in my outline. The ported functions are:
  • d2
  • d1
  • boxcount
  • c2t
  • c2g
  • c2d
  • av_d2
I also wrote demos for most of those functions and updated the tutorial on the wiki page.

The first part of this week I spent improving the build process. The function __c2g__ relies on C++ lambdas, so a configure script needed to be introduced to ensure the compiler has this capability. As John Eaton suggested, I tried to make the impact of lacking that capability as small as possible: currently, if the compiler does not support C++ lambdas, __c2g__ is simply not compiled and the function c2g does not work.

The plans

I was hoping to port all of the functions in the next section, Testing for Nonlinearity, by the end of the week. This might not be possible as randomize turned out to be a bigger function than I anticipated. It is actually not a function at all but, as the author of the TISEAN documentation puts it, "a toolbox". It generates surrogate data using simulated annealing. It needs to be supplied with three functions:
  1. the cost function 
  2. the cooling function -- how the temperature decreases
  3. the permutation function -- what to change every step
So currently, if users want their own version of any of the functions above, they need to write it in FORTRAN. My goal for this project is to allow users to write (and use) their own Octave function. Simulated annealing is an iterative method, so implementing it in interpreted Octave code is not a good idea (each line must be re-parsed when using for or while loops). As far as I understand, the samin routine from the optim package will not suffice, as it does not generate surrogate data and has fewer options. Due to the size of this function it might take me some time to complete it.
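As a rough sketch of the kind of loop the three functions plug into (plain Octave, with invented function-handle names; this is not the actual randomize or TISEAN interface):

```octave
## Hypothetical simulated-annealing skeleton: cost_fcn, cool_fcn and
## permute_fcn are user-supplied handles (all names invented for this sketch).
function s = anneal_sketch (s, cost_fcn, cool_fcn, permute_fcn, T0, Tmin, maxiter)
  T = T0;
  c = cost_fcn (s);
  for k = 1:maxiter
    [i, j] = permute_fcn (s);           # propose swapping two elements
    trial = s;
    trial([i j]) = trial([j i]);
    ct = cost_fcn (trial);
    if ct < c || rand () < exp ((c - ct) / T)   # Metropolis acceptance rule
      s = trial;
      c = ct;
    endif
    T = cool_fcn (T);                   # lower the temperature
    if T < Tmin
      break;
    endif
  endfor
endfunction
```

The sketch also shows why the main loop belongs in C++: every iteration re-evaluates the cost and acceptance test, which is slow in interpreted code.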

I plan to tackle this problem as follows: I will rewrite in C++ the equivalent function to randomize_auto_exp_random and then try to refactor and modify the code to accept other functions. I plan to include all of the functions that are available in TISEAN in the Octave package, either through rewriting them or through linking to them. And I would like to make it easy for new functions to be added.

Further reading on randomize is available in the TISEAN documentation: the General Constrained Randomization section, the Surrogates paper appendix, the randomize description, and the randomize function extension description.

by Piotr Held ( at July 09, 2015 04:27 PM

July 02, 2015

Asma Afzal

Week 5 and 6: Refining fmincon

So my fmincon implementation is coming in shape [1]. 

[x,fval,exitflag,output,lambda,grad,hessian] = 

I came across a few issues which turned out to be bugs. Olaf pushed fixes in the central repository. Listing the issues for the record:
- Setting gradc (the gradient of general equality/inequality functions): a bug in nonlin_min.m (and __nonlin_residmin__.m)
objective_function = @ (p) p(1)^2 + p(2)^2;
pin = [-2; 5];
constraint_function = @ (p) p(1)^2 + 1 - p(2);
gradc = @ (p) [2*p(1); -1];
[p, objf, cvg, outp] = nonlin_min (objective_function, pin, optimset ("equc", {constraint_function, gradc}))

error: function handle type invalid as index value
- Giving linear inequality/equality constraints to lm_feasible: a bug in nonlin_min.m

f = @ (x) -x(1) * x(2) * x(3);
S = [1 -1; 2 -2; 2 -2]
b = [0; 72];
x0 = [10; 10; 10];
[x, fval] = nonlin_min (f, x0, optimset ("inequc", {S, b}))

error: __lm_feasible__: operator -: nonconformant arguments (op1 is 3x1, op2 is 3x0)
- Any zero value in the initial guess vector for nonlin_residmin/nonlin_min gave an error. This required a minor change in __dfdp__.m, since sign(0) == 0.
      k = 1:10;
func = @(x) 2 + 2 * k - exp (k * x(1)) - exp (k * x(2));
x0 = [0;0.5];
x = nonlin_residmin(func,x0)

warning: division by zero
warning: called from
    __dfdp__ at line 367 column 21
    __nonlin_residmin__> at line -1 column -1
    __lm_svd__ at line 191 column 9
    __nonlin_residmin__ at line 1125 column 21
    nonlin_residmin at line 98 column 21
    runlsqnonlin at line 9 column 3
error: svd: cannot take SVD of matrix containing Inf or NaN values

Functionality for returning the Hessian and gradient
New options "ret_objf_grad" and "ret_hessian" are to be introduced in nonlin_min (by Olaf). If either of these options is set to true, the 'outp' structure output argument of nonlin_min will contain the additional fields .objf_grad and .hessian. My code currently checks this.

Rearranging values of lambda in the fields of a structure.
- For lm_feasible, outp will contain an additional field lambda, a structure which contains Lagrange multipliers in fields separated by constraint type.

I added an additional feature in [1] to cater for a nonlinear constraint function set up using deal():
      c = @(x) [x(1) ^ 2 / 9 + x(2) ^ 2 / 4 - 1;
        x(1) ^ 2 - x(2) - 1];
ceq = @(x) tanh (x(1)) - x(2);
nonlinfcn = @(x) deal (c(x), ceq(x));
obj = @(x) cosh (x(1)) + sinh (x(2));
z = fmincon (obj, [0;0], [], [], [], [], [], [], nonlinfcn)

z =

To do:
1- Write test cases/refined examples for lsqnonlin, lsqcurvefit, nlinfit and fmincon.
2- Start wrapping quadprog to __qp__ instead of qp.m (because of the ordering of the lambda output).

by Asma Afzal ( at July 02, 2015 08:52 PM

June 26, 2015

Piotr Held

Progress report

The main goal of this post will be to create a progress report before the coming midterm assessment.
As I mentioned before I planned to complete the Dimensions and entropies section of the TISEAN documentation. This seems to be still a realistic goal.

Currently I have ported d2, av_d2, c2g and c2t, along with documentation and demos for them. The current state of the tests needs improvement, because they rely heavily on external files generated using the corresponding TISEAN programs. Because most of those functions/programs are closely linked, I plan to improve on this once functions from the entire section are ported.

Currently I am working on c1, which already passes its tests. Once I complete it and write the documentation and demo, the only programs/functions left to port are boxcount and c2d. Once they are complete I plan to release version 0.2.0.

My elaborated proposal, located on the Octave wiki, states that I also planned to port c2. Although the source code for such a program does exist in the TISEAN package (ver. 3.0.1), it does not seem to be mentioned in the documentation. Furthermore, installing the package on a computer does not give access to this program, and it seems to be redundant with other programs in the package. Therefore, I will not port it.

by Piotr Held ( at June 26, 2015 03:37 PM

June 24, 2015

Asma Afzal

Progress Update: Midterm Evaluation

Adding functions to the Optim package for Octave using existing back-ends.

Expected deliverables before midterm:
  • 'lsqnonlin' using 'nonlin_residmin'
Done in [1]. There are differences in backends: nonlin_residmin currently uses the "lm_svd" algorithm as its only backend, whereas lsqnonlin in Matlab can choose between the "trust-region-reflective" and "Levenberg-Marquardt" (LM) algorithms.
Another difference is in complex inputs. lm_svd does not support complex-valued inputs, whereas Matlab's LM algorithm can accept complex input arguments. One way of providing complex inputs to lsqnonlin in Octave is to split the real and imaginary parts into separate variables and run the optimization on those.
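A hedged sketch of that real/imaginary split (the model function and data below are invented for illustration, not taken from the actual implementation):

```octave
## Fit a complex exponential by stacking the real and imaginary parts of
## the residual, so that the parameter vector and residuals stay real.
pkg load optim
t = (0:9).';
z = exp ((-0.3 + 2i) * t);                      # synthetic complex "data"
model = @(p, t) exp ((p(1) + 1i * p(2)) * t);
resid = @(p) [real(model (p, t) - z); imag(model (p, t) - z)];
p = nonlin_residmin (resid, [-0.1; 1])          # should approach [-0.3; 2]
```

Stacking real and imaginary parts preserves the least-squares objective, since |r|^2 = real(r)^2 + imag(r)^2 for each residual entry.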
  • 'lsqcurvefit' using 'nonlin_curvefit', 'nonlin_residmin', or 'lsqnonlin'
Done in [2] using nonlin_curvefit. lsqcurvefit is very similar to lsqnonlin, with only a few minor interface differences: lsqcurvefit explicitly takes the independent variables and the observations as inputs, while with lsqnonlin these values can be wrapped inside the objective function. Additional bounds for the optimized parameters can be specified. 
  • 'nlinfit' using 'leasqr',
I wrapped nlinfit around nonlin_curvefit and curvefit_stat, since leasqr repeats the optimization to compute the additional statistics (the Jacobian and covariance matrix), while curvefit_stat saves this computational overhead. I have partially implemented nlinfit in [3] (it hasn't been thoroughly reviewed yet). Two missing features are: 1) error models and error-parameter estimation, and 2) robust weight functions; no such functionality currently exists in Octave's optimization backends. My current implementation supports an array of positive scalar weights for weighted regression.
Since nlinfit is from Matlab's statistics toolbox, it uses statset and statget to create and get options, respectively. I created the additional functions statset, statget and __all_stat_opts__ with minor changes to the code of optimset, optimget and __all_opts__.
  • 'fmincon' using 'nonlin_min',
In progress [4].

Future goals:
    1. Complete fmincon implementation.
    2. Create solver specific options using optimoptions and desirably still be able to use optimget. 
    3. Arranging lambda output for quadprog by wrapping it on __qp__ instead of qp.m
    4. Test cases for all the implemented functions.


by Asma Afzal ( at June 24, 2015 05:46 PM

Juan Pablo Carbajal

Create your own functions

By now you are in a position to write your own functions. But what is a function?

A function is an entity that receives a series of arguments as inputs, operates on them, and then returns certain results. Functions play a fundamental role: they encapsulate tasks. By encapsulating tasks, especially those repeated often, our programs become easier to read and sometimes even more efficient.

Consider the situation of greeting people you meet during your day. Suppose your chosen greeting phrase is "Hola, <nombre>", where <nombre> is the first name of the person we are greeting. Since we are going to do this very often, we could define a function (or routine) that does the following:

1. Take the name of the person to greet.
2. Prepend "Hola, ".
3. Print the greeting.
4. Return a result containing the greeting.

In Octave this function would be

function frase = saludar (nombre)
  frase = ["Hola, " nombre];
  disp (frase);
endfunction

In an Octave session we would use this function as follows

x = "María";
y = saludar (x);

On the screen we would see "Hola, María", and the content of the variable y would be exactly that phrase.

Question 1
Can you define a function that takes two inputs and returns their sum?

Structure of a function

The following scheme shows the structure of a function

function <results> = <function name> ( <inputs> )

The words and symbols in bold are necessary and mandatory.

The keyword function is necessary (and mandatory) to define a function. A file containing a function must always have this keyword as its first executable command. The file where we save the function must always be named <function name>.m; that is, the file name must match the name of the function defined in it.
The variables that will be passed back to the scope from which the function was called (e.g. the Octave session) are returned as a list in <results>. In the case of the function saludar.m above, we returned just one variable, but a function can return many things or nothing at all. Examples:

function [x, y] = ceroyuno ()
  x = 0;
  y = 1;
endfunction

function diceHola ()
  disp ("Hola");
endfunction

Similarly, the input variables are passed as a list in <inputs>. The inputs can be many or none (as in the previous examples). Example:

function z = resta (x,y)
  z = x - y;
endfunction

Question 2:
What is the name of the .m file where we must save the function resta defined in the previous example?

Question 3:
How would we define a function that returns the number of letters "a" in a string?


When we discussed how to save our work, we saw that from Octave we can execute the contents of a text file. The commands in that file have access to the variables that exist in the scope of the Octave session; this means that scripts can "see" and "edit" the variables we create directly in the session, and that if these scripts create new variables, those will remain in our session after the script has finished running.

A function, on the other hand, creates its own variable scope. Variables created inside a function are not accessible from the Octave session, or at least not directly. Except in particular cases, the variables created inside a function are deleted when the function ends.
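A tiny sketch of that scoping behaviour (the function name here is made up for the example):

```octave
function probar_ambito ()
  interna = 42;        # 'interna' exists only inside this function
  disp (interna);
endfunction

probar_ambito ()       # prints 42
exist ("interna")      # 0: the variable did not survive the function call
```

exist returning 0 confirms that nothing defined inside the function leaked into the session's scope.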

With these little tips you can now start creating your own functions. The world of functions is much broader than what I have described here. Don't forget to explore! I look forward to your doubts and questions in the forum.

by Juan Pablo Carbajal ( at June 24, 2015 02:48 PM

June 22, 2015

Asma Afzal

Week4: fmincon wrapping nonlin_min

Time flies.. A third of the way through already..

fmincon/nonlin_min is the most elaborate function of all that I have previously
implemented, so before actual coding I would like to thoroughly check the
mapping of arguments and options.

[x,fval,exitflag,output,lambda,grad,hessian] = fmincon (fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
[x,fval,exitflag,output] = nonlin_min (fun,x0,settings)

A, b, Aeq, beq - Linear inequality and equality constraints
Inequality constraints:
Matlab standard: A * x - b <= 0 ,
Octave standard: m.' * x  + v >= 0. 
This implies: m=-A.' , v=b.
Set in Octave using 
optimset ("inequc", {m,v})
Similar for equality constraints:
optimset ("equc", {m,v})
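As a concrete sketch of the mapping above (A and b are invented for illustration), the conversion is just a transpose and a sign flip:

```octave
% Matlab-style linear inequality constraints: A * x <= b
A = [1  2; -1  1];
b = [3; 1];
% Octave optim standard: m.' * x + v >= 0, hence m = -A.' and v = b
m = -A.';
v = b;
settings = optimset ("inequc", {m, v});
```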

lb, ub - Lower and upper bounds
Set in Octave using optimset
 optimset ("lbound", ..., "ubound",...)

nonlcon - Nonlinear constraint function handle
In Matlab, nonlinear constraints are given in a function with the following format

function [c,ceq,gradc,gradceq] = mycon(x)
c = ...     % Compute nonlinear inequalities at x.  
ceq = ...   % Compute nonlinear equalities at x.
% Optional output arguments:
gradc = ...   % Gradient of c(x).
gradceq = ...   % Gradient of ceq(x).
options = optimoptions('fmincon','GradConstr','on')

Alternative to nonlcon in nonlin_min
  optimset ("equc", {constraint_function})

Options - Options common to both fmincon and nonlin_min
User-supplied Gradient
In Matlab, the objective function must return the gradient as its second output
when the GradObj option is set.
User-supplied Hessian
In Matlab, the objective function must return the Hessian as its third output
when the Hessian option is set.

Lambda - returned as a structure in Matlab, but as a vector field of the "output"
output argument in Octave.

Things to investigate:
1. Algorithm mapping. 
2. Exitflag mapping.
3. Setting gradc (the gradient of general equality/inequality functions). The second 
entry for the equc/inequc setting implements this feature in Octave as stated by 
optim_doc but I was unable to make it work properly.
4. Returning additional statistics (Hessian, Gradient)- I used 
residmin_stat/curvefit_stat previously for lsqnonlin and lsqcurvefit. No such function for nonlin_min.
5. Rearranging values of lambda in the fields of a structure.

P.S. This week I spent quite some time with Mercurial, working out how to contribute to the optim package. There was a slight miscommunication/confusion, but it is clear now and I will continue to publish my code on GitHub.

by Asma Afzal ( at June 22, 2015 08:53 PM

June 18, 2015

Juan Pablo Carbajal

If, else if, else: IF-ELSEIF-ELSE

In the class on logic we discussed how to generate true/false values, usually called Boolean values (named after an Englishman called George Boole).

Almost all programming languages offer a way to control the execution of code using Boolean values as the criterion. That is, a certain part of the code executes only if a logical condition is true. A canonical way of performing this selective execution is with IF-ELSE (if, else), IF-ELSEIF (if, else if), or a combination of both.


In Octave, all the commands inside an if block execute only if the given condition is true.

if (x == 0)
  disp ("X equals zero");
endif

The if block begins on the line where we write this keyword and ends with endif (or simply end, which is the same). In this case, the sentence "X equals zero" is printed to the screen when the comparison x == 0 is true. The condition can be as complex as necessary (although this can make the code hard to understand), for example

if (x == 0 || x == 5)
  disp ("X equals zero or equals five");
endif

If the condition is false, the code inside the block never executes. In either case, once the block has been evaluated (whether or not the corresponding code ran), the program continues. The following example prints "X equals five" depending on the value of x, but always prints "Done."

if (x == 5)
  disp ("X equals five");
endif
disp ("Done.");

Question 1:
In what situation is the message "Crazy!" printed to the screen?

if (x == 0 && x == 5)
  disp ("Crazy!");
endif

Will we ever see the message on screen?


The if block can be extended so that certain code executes only when the condition is false. Consider the following code

disp ("X is: ")

if (x > 0)
    disp ("positive.");
endif

if (x <= 0)
    disp ("negative or zero.");
endif

Since the second condition is the opposite of the first, we can simplify it using else

disp ("X is: ")
if (x > 0)
    disp ("positive.");
else
    disp ("negative or zero.");
endif

Question 2:
How would you separate the previous example into "positive", "zero", and "negative"?


One answer to the previous question comes from elseif (else if), which lets us evaluate an additional condition.

disp ("X is: ")
if (x > 0)
    disp ("positive.");
elseif (x == 0)
    disp ("zero.");
else
    disp ("negative.");
endif

Question 3:
Can you give other examples that produce the same result as the example above?

Note that the code inside an elseif executes if and only if the condition of the if is false. In the following example we will never see the message "Aha!" on screen, even though the condition of the elseif is always true.

if (true)
 disp ("Yes, it is true.");
elseif (true)
 disp ("Aha!");
endif

This means that the conditions in an IF-ELSEIF block should be mutually exclusive (they can never both be true at the same time). This is not strictly necessary, as the following example shows, but the resulting logic is confusing

if (x > 0 && x <= 1)
    disp ("Between zero and one (inclusive).");
elseif (x > 0.5)
    disp ("More than one half.");
endif

Question 4:
Can you simplify the logic of the example above?

I look forward to your questions and doubts in the forum!

by Juan Pablo Carbajal ( at June 18, 2015 10:34 AM

June 14, 2015

Piotr Held

Improving the code

TISEAN was originally written as a set of command line programs. Because of this, the code is not very portable and many variables are global. So far this has been dealt with by creating local variables and extending the number of variables in function calls (in some cases up to 11). This is not optimal for code clarity or ease of maintenance, and because many of the variables are passed by value (they are parameters), it also caused a slight slowdown in execution speed.

Due to all of these downsides I have contemplated possible solutions which I will attempt to describe.

Using structs

One idea that came to mind is to pack all of the global variables into a struct, pass the struct to all of the functions, and obtain the global variables from it. This solution certainly solves the problem of passing so many parameters to functions. However, it does not improve code portability, because every *.cc oct-file function needs its own struct. This solution is also problematic because all of the global variable names now have to be referenced through the struct, so center[i][j] would become something like params.center[i][j] (obviously the name of the struct could be as short as p).

Using classes

Another quite simple solution would be to create a class. This class would have data members that were previously the global variables and function members that were the old functions called from old main(). As there are similarities between different TISEAN programs, it could be possible to even create a prototype class and inherit from it.

There are, however, downsides to this option as well. First of all, the Octave code guidelines specify that classes should be in separate files. This would mean creating 2 more files for each program that was ported using the C++ wrapper. Apart from that, the memory might have to be allocated using new/delete, because the preferred method of using the macro OCTAVE_LOCAL_BUFFER might be difficult (or impossible) to apply in this case. This objection can be worked around in other ways, such as using Array classes to allocate the data and then getting a pointer to it using fortran_vec().


Performing the aforementioned code improvement, although helpful, is not critical. Therefore any attempts to implement it will be deferred until after the functions outlined at the beginning of the project are complete.

Timeline update

So far I have been giving progress reports on the TISEAN porting project. This time, however, I would like to also compare the outlined schedule for the project with the actual progress made.

Since the last post I have additionally ported:
  • xzero
  • lyap_r
  • lyap_k
  • lyap_spec
In one of my first posts I stated that I would like to finish Dimensions and Entropies before the midterm assessment. Currently I have finished up Lyapunov Exponents and I plan to start working on Dimensions and Entropies this week. Since there are 2+ weeks to the Midterm Assessment I believe it is possible to complete all of the goals for this section of the project as planned.

by Piotr Held ( at June 14, 2015 08:58 PM

Asma Afzal

Week3: nlinfit, statset, stat_get and __all_stat_opts__

So this week I achieved the following milestones:

Wrapping nlinfit on nonlin_curvefit

[beta,R,J,CovB,MSE,ErrorModelInfo] = nlinfit(X,Y,modelfun,beta0,options,Name,Value)

Implementation [1]:

I chose not to wrap nlinfit on lsqcurvefit because 
  1. We might end up wrapping lsqcurvefit on lsqnonlin eventually so it is undecided.
  2. The default options for lsqnonlin/lsqcurvefit are different from nlinfit. 
Missing features:
  1. RobustWgtFun - The field RobustWgtFun in options can be provided with a function handle which computes robust weights from the residuals at every iteration. The backend optimization algorithm in Octave currently does not support this functionality.
  2. Name-Value pairs. Currently the only implemented one is "weights",  which takes an array of weights for the weighted optimization. "ErrorModelInfo" and "ErrorParameters" are not implemented. The possible error models include, "constant", "proportional" and "combined". The error model also translates to a weight function which helps in reducing the effect of outliers.   
  3. ErrorModelInfo- output field which gives information about the error variance, and estimates the parameters of Error models. 
Setting options using statset

In Matlab, options for the statistics toolbox are set using statset [2]. The functionality is almost identical to optimset; the separate functions exist because of Matlab's different toolboxes (statset for the statistics toolbox, optimset for the optimization toolbox).

Added functions:
statset.m, statget.m and __all_stat_opts__.m [3]-[5]
Creating these functions was pretty straightforward.

Still to do:
  1. Have to check whether the weighted residual and weighted Jacobian output in Octave is consistent with Matlab, and further refine the functions with the feedback from my mentors.
  2. Move on to fmincon wrapping nonlin_min.

    by Asma Afzal ( at June 14, 2015 08:11 PM

    June 09, 2015

    Asma Afzal

    Week 2: lsqnonlin and lsqcurvefit

    A bit late blogging about week 2. 

    Almost completed functions lsqnonlin and lsqcurvefit. 
    Successfully mapped user-specified Jacobian. 
    In Matlab, if the Jacobian option is set to "on", the model function must return
    the Jacobian (evaluated at the current parameters) as a second output.
    In Octave, the Jacobian function handle is instead given to the dfdp option using optimset.
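    As a sketch of this difference (model and Jacobian invented for illustration; the exact dfdp calling convention should be checked against the optim package documentation):

```octave
% Example model: y = p(1) * exp (p(2) * x)
model = @(p, x) p(1) * exp (p(2) * x);
% Its Jacobian with respect to the parameters p (one column per parameter)
jac = @(p, x) [exp(p(2) * x), p(1) .* x .* exp(p(2) * x)];
% Matlab: model returns the Jacobian as a second output when "Jacobian" is "on".
% Octave: the Jacobian handle goes into the dfdp option instead.
settings = optimset ("dfdp", jac);
```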

    Lsqnonlin function description:

    [x,resnorm,residual,exitflag,output,lambda,jacobian] = ...

    This function maps on:

    [x, residual, exitflag, output] = nonlin_residmin (fun, x0, options)

    Features of lsqnonlin in Octave:
    • Input arguments: Acceptable forms are lsqnonlin(fun,x0), lsqnonlin(fun,x0,lb,ub) and lsqnonlin(fun,x0,lb,ub,options)
    • Outputs
      • x, exitflag, residual and output currently same as nonlin_residmin.
      • resnorm=sum(residual.^2)
      • Lambda is computed using the complementary pivoting in __lm_svd__.
        Its values differ from Matlab's due to the difference in backends.
      • Jacobian is computed using the function residmin_stat ().

    Lsqcurvefit function description:

    [x,resnorm,residual,exitflag,output,lambda,jacobian] = ...

    This function maps on

    [x, fy, exitflag, output] = nonlin_curvefit (fun, x0, xdata, ydata, options)

    Features of lsqcurvefit in Octave:
    • Input arguments: Acceptable forms are lsqcurvefit (fun,x0,xdata,ydata), lsqcurvefit (fun,x0,xdata,ydata,lb,ub) and lsqcurvefit (fun,x0,lb,ub,xdata,ydata,options)
    • Outputs
      • x, exitflag, residual and output currently same as nonlin_curvefit.
      • residual = fy-ydata, resnorm = sum(residual.^2)
      • Lambda and Jacobian same as in lsqnonlin.
    There are only minor interface differences between lsqcurvefit and lsqnonlin.
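    Put together, the output mapping described above amounts to the following sketch (names taken from the description; details assumed):

```octave
% Wrap nonlin_curvefit and derive the extra lsqcurvefit outputs
[x, fy, exitflag, output] = nonlin_curvefit (fun, x0, xdata, ydata, settings);
residual = fy - ydata;            % lsqcurvefit's "residual"
resnorm  = sum (residual .^ 2);   % lsqcurvefit's "resnorm"
```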

    This week's plan:

    • Hopefully, with lsqnonlin and lsqcurvefit wrapped up, I'll move on to nlinfit
    • Three key challenges need to be addressed when wrapping nlinfit using nonlin_curvefit
      and curvefit_stat:
      • Weight functions: Currently, no such functionality exists in nonlin_curvefit,
         where a user can specify weight functions to perform Robust regression
        (weights computed using the specified function in every iteration).
      • Error Models and ErrorModelInfo
      • Setting options using statset instead of optimset or optimoptions. 

    by Asma Afzal ( at June 09, 2015 08:17 PM

    June 07, 2015

    Piotr Held

    Analyzing lfo-run

    I have written tests that compare lfo-run from TISEAN to the ported version lfo_run. The test that uses amplitude.dat works perfectly, but when I analyzed the results both programs/functions gave for henon (Henon maps) I ran into some problems. I will attempt to describe them.

    Input data

    The problems occur when analyzing a 1000-element Henon map (henon(1000)). For all of the implementations, if I used a simple call with default parameters (m = 2, d = 1), the programs would quit due to a matrix singularity. The problems arose when (m = 4, d = 6) was used. With these parameters the program gave varying results across the implementation methods.

    It is important to note that the prediction that I was testing tried to predict 1000 future elements (default for all implementations) on the basis of given 1000 elements.


    There are 3 implementations I used:
    1. The TISEAN implementation (uses lfo-run)
    2. The implementation similar to 1. but compiled as c++ and wrapped in enough code to run as m-file (uses __lfo_run__ and invert_matrix())
    3. The implementation that uses Matrix::solve() method
    I tried to find out whether methods 1. and 2. differ due to a bug that was introduced while porting. I therefore ported it twice (the second time to a very rudimentary stage) and both times the same results were encountered. I do not understand why there is a discrepancy between these two implementations.


    Since the goal of this project is to port TISEAN functions, I have compared implementation 1. with 2. and 1. with 3., to see what differences I come across.

    A comparison between implementation 1. and 2. results in an error from implementation 2. The function generates about 700 elements (of the default 1000) and then throws an error that the forecast has failed.

    The comparison between implementation 1. and 3. is much more fruitful as the results are the same for about 150 elements and then they begin to differ (see Fig. 1.)
    Fig. 1 Comparison between the TISEAN implementation and using Matrix::solve() in TISEAN package from Octave
    These results can be achieved by cloning the repo, doing make run and running the script:

    cd tests/lfo_run/; test_lfo_run


    Before I give my suggestions for what is the cause for these discrepancies I would like to discuss another interesting discrepancy. This discrepancy is the maximum difference between the solution of the equation system obtained from implementation 2. and 3. When using implementation 3. for the forecast this value was 8.5e-14, but when using implementation 2. for the forecast this difference was 5e-13.

    I believe this is because the computational error is accumulated throughout the program. Each new forecast point is dependent on the previous ones. Moreover the Kahan algorithm (compensated summation) is never used in the TISEAN implementations. Even matrix multiplication (as seen e.g. in multiply_matrix())  uses the simple, but error accumulating for(...) sum += vec[i].
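    For reference, the compensated (Kahan) summation mentioned above looks like this in Octave (a generic sketch, not TISEAN code):

```octave
function s = kahan_sum (v)
  % Compensated summation: c carries the low-order bits lost at each step
  s = 0;
  c = 0;
  for i = 1:numel (v)
    y = v(i) - c;      % subtract the previously lost part
    t = s + y;
    c = (t - s) - y;   % recover what was lost in computing s + y
    s = t;
  endfor
endfunction
```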

    As to why implementation 1. and 2. give different results I have two theories: either there still is a bug which I was unable to detect, or some compilation difference (e.g. a linked library) between the TISEAN program written in C (lfo-run) and the TISEAN package function written in C++(__lfo_run__).


    The question that poses itself is whether this warrants rewriting the other TISEAN functions that use simple summation, or whether this problem can be ignored. The authors of TISEAN said in their introduction that blindly using the programs they wrote may produce unintended or even wrong results. Trying to predict 1000 elements of a 1000-element Henon map using first order local linear prediction might be considered a bad use case.

    Progress report

    Since the last post I wrote a tutorial on the wiki. I also ported:
    • lzo_gm
    • lzo_run
    • ikeda
    • lfo_run
    • lfo_ar
    • lfo_test
    • rbf
    • polynom
    During my work I discovered that polynom has similar functions (polynomp, polypar, polyback) which provide extra options for performing polynomial fits. I will not include those functions in the project now, but they have a high priority once I finish all of the functions I outlined for this project.

    These newly ported functions aren't completely polished (some need demos and documentation), but they pass tests and don't have memory leaks. Once I clean these functions up and port xzero, the last function in this section, I intend to create version 0.1.0 of the package. With this version I intend to branch the repo into a 'devel' and a 'stable' branch.

    Afterwards, I will add more information to the tutorials on the wiki page.

    by Piotr Held ( at June 07, 2015 03:16 PM

    June 02, 2015

    Asma Afzal


    Answering some questions to better understand the behavior of optimoptions.
    Although the answers have been discussed in [1], I am pasting some examples for clarity.
    *I was using the terms solver/algorithm in the wrong context before. Solver names are functions such as lsqnonlin, fmincon, etc., and one solver can have multiple algorithms such as interior-point, lev-mar, etc.*

    1) Is there an error or warning if optimoptions is used to assign an option not contained in the specified algorithm? 

    If the option belongs to a different algorithm of the solver, Matlab stores it among the options "not" used by the current algorithm, so if we later switch to the algorithm to which the option belongs, we do not need to set it again.

    For example, the default algorithm of lsqnonlin is trust-region-reflective, but when I try to set the option 'ScaleProblem', which is specific to the 'levenberg-marquardt' algorithm, I get:

    opts = optimoptions ('lsqnonlin', 'ScaleProblem', 'Jacobian')

    opts = 

      lsqnonlin options:

       Options used by current Algorithm ('trust-region-reflective'):
       (Other available algorithms: 'levenberg-marquardt')

       Set by user:
         No options set by user.


     Algorithm: 'trust-region-reflective'
        DerivativeCheck: 'off'
         .               .  
         .               .  
         .               .  
         .               .  
        TolX: 1.0000e-06

       Show options not used by current Algorithm ('trust-region-reflective')

    Set by user:
        ScaleProblem: 'jacobian'

    This gives the same result using dot notation:

    opts = optimoptions ('lsqnonlin')
    opts.ScaleProblem = 'Jacobian'

    2) ... or to assign an option that does not exist in the specified solver or any solver?

    This gives an error. Trying to set the option for SQP Algorithm, which is not used by lsqnonlin:

    opts=optimoptions('lsqnonlin', 'MaxSQPIter', 100)
    Error using optimoptions
    'MaxSQPIter' is not an option for LSQNONLIN

    using dot notation:

    opts.MaxSQPIter = 100;
    No public field MaxSQPIter exists for class optim.options.Lsqnonlin.
    3) If options are transferred to a different solver with optimoptions, are there errors or warnings if the new solver does not have some of these options?

    No errors or warnings. The options common to both solvers are copied. They can be options of different algorithms. For example:

    opts = optimoptions ('fmincon', 'Algorithm', 'sqp', 'TolX', 1e-10) 
    opts_lsq = optimoptions ('lsqnonlin', opts) 

    Options set in opts_lsq are: 
                    PrecondBandWidth: 0 
                    TolX: 1.0000e-10 

    The option PrecondBandWidth belongs to the trust-region algorithm of the fmincon solver.
    Another option copied from opts into opts_lsq belongs to the lev-mar algorithm of lsqnonlin. It is the stored option mentioned in 1).

      ScaleProblem: 'none' 
    4) Can options returned by optimoptions be used with optimset, and   vice versa? 

    This returns an error in both cases:

    opts = optimoptions ('lsqnonlin','MaxIter',100);
    opts = optimset (opts, 'TolX', 1e-6);
    Error using optimset
    Cannot use OPTIMSET to alter options created using OPTIMOPTIONS.
    Use OPTIMOPTIONS instead.

    opts = optimset ('TolX', 1e-6);
    opts = optimoptions ('lsqnonlin', opts);
    Error using optimoptions
    Invalid option name specified. Provide a string (such as 'Display').


    by Asma Afzal ( at June 02, 2015 07:51 PM

    May 31, 2015

    Asma Afzal

    lsqnonlin wrapping nonlin_residmin

    So the first week of GSoC is officially over.

    I was working on lsqnonlin. My code is accessible here:

    [x, resnorm, residual, exitflag, output, lambda, jacobian]...
    = lsqnonlin (fun, x0, lb, ub, options)

    [x, resid, cvg, outp] = nonlin_residmin (f, x0, settings)

    A recap of encountered problems:
    1. output lambda- it wasn't previously returned from the backend [1]
    2. options - In addition to optimset, Matlab uses optimoptions to set options specific to a certain optimization function. optimoptions:
      • Creates and modifies only the options that apply to a solver
      • Shows your option choices and default values for a specific solver/algorithm
      • Displays links for more information on solver options and other available solver algorithms [2].
      Octave currently does not have this functionality. For more discussion on this, check [3].
    Things to do:
    1. Check how the user-specified Jacobian is to be provided.
    2. Check for matrix/complex inputs.
    3. Come up with a plan for writing optimoptions in Octave.

    by Asma Afzal ( at May 31, 2015 07:41 PM

    May 28, 2015

    Juan Pablo Carbajal

    Seek and you shall find

    The discussion forum has a new entry: a question from Florencia, who is working on challenge no. 1.5.

    Floppy's question gives us an opening to discuss a very useful kind of function: functions that let us obtain the position, within a vector, of the elements that satisfy a certain logical condition. This is very similar to what we did in the class True or false? when we looked for even numbers.

    The function find

    Logical operations generate matrices of true/false values. Thus, the following inequality

    tf = [-1,5,-3] > 0

    generates a vector of logical values (true or false), in this case [false,true,false]. These logical vectors are extremely useful: they let us filter matrices or search for elements that satisfy certain logical conditions. To do the latter we use the function find.
    The function find returns the positions of the elements whose value is true. For example:

    find ([false, true, false])

    returns 2, because the second element is true. In Octave, everything that is not zero or empty is considered true. This means that

    find ([0,-3, 0])

    also returns 2, because the second element is the only nonzero one. When several elements are true, find returns all the corresponding positions, that is,

    find ([false, true, true])

    returns [2, 3].
    The function also accepts matrices as input and returns the linear indices of the true elements. In Octave, matrices are stored by columns (column-major order), that is, we count from top to bottom and from left to right. For example, in a 3x4 matrix the linear indices are

       1   4   7  10
       2   5   8  11
       3   6   9  12

    So if we execute

    find ([0 1 0; 0 0 1; 0 1 0])

    we obtain [4,6,8].
    To obtain the rows and columns of the true elements, we have to call the function with two output arguments. If we execute

    [i, j] = find ([0 1 0; 0 0 1; 0 1 0])
    i =
       1
       3
       2
    j =
       2
       2
       3

    we obtain in i the rows and in j the columns of the true values.

    Is it a member?

    Another very useful function is ismember. It takes two input arguments and tells us whether the elements of the first are present in the second. I will only give a short example; I invite you to read the function's help (in Octave: help ismember) and post your questions in the discussion forum.
    Suppose we want to know whether the letter "a" is present in a string (here a Spanish sentence meaning "Are there letters a in this sentence?")

    x = "Hay letras a en esta frase?";

    we can execute

    tf = ismember ("a", x)
      tf =  1

    Of course there are. The function can also be used to obtain the positions of the letters we searched for. To do this, we swap the input arguments and look for the string x in the letter "a".

    pos = find (ismember (x,"a"))
      pos =
          2    9   12   20   24

    Question 1. Can you explain why this little trick works?

    The function ismember can also give us the position of the elements found, but when the element we search for is repeated (as in our case), it only returns the last occurrence

    [tf, pos] = ismember ("a", x)
      tf  = 1
      pos = 24

    This functionality is useful when there are no repetitions, and is best illustrated by the example in ismember's help

    a = [3, 10, 1];
    s = [0:9];
    [tf, pos] = ismember (a, s)
    tf  = [1, 0, 1]
    pos = [4, 0, 2]

    The functions find and ismember are very useful and have several other modes of use. Read the help, explore, and ask in the forum!

    by Juan Pablo Carbajal ( at May 28, 2015 07:26 AM

    May 27, 2015

    Piotr Held


    I am happy to announce that I completed the first two sections.

    Nonlinear Noise Reduction
    From this section I added:
    • ghkss
    Thus the TISEAN program project is deprecated. It is important to note that data allocation in ghkss (only) is done via new/delete because of the way the function was originally written. In the future this will be replaced with the OCTAVE_LOCAL_BUFFER (OLB) macro.

    Closer examination has revealed another interesting function in this section: nrlazy, which according to the documentation is similar to lazy. Because of this similarity porting it has low priority.

    Phase Space Representation
    From this section I added:
    • mutual
    • false_nearest
    • poincare
    The function poincare was discovered while studying the documentation further; it was omitted in the initial plan, but as it seems important in the documentation it was ported.

    The function false_nearest is in a similar situation to ghkss, that is, new/delete is used instead of OLB. This will be improved on in the future.

    Since corr does not need to be ported, this section is complete.

    Nonlinear Prediction
    This next section is well on the way as the following have been ported:
    • upo
    • upoembed
    • lzo_test 
    The function predict turned out to be essentially the same as lzo_test. This had not been verified before, but according to the documentation they do essentially the same thing. The additional option that predict has (flag -v) can easily be replicated using GNU Octave's std when calling lzo_test with parameter r. Therefore porting predict will most likely be unnecessary.

    The function upoembed is closely associated with upo. It takes the output of upo and creates a cell array of delay vectors for each orbit produced by upo. It was not mentioned in the original outline, but it is an important function for the package.

    The state of upo is not optimal. The original implementation only supports input of up to 1e6 data points. This might not be a big problem, as calculating upo on a 1e4 henon map takes about 8 seconds, so 1e6 would take about 800 seconds ~ 15 min. Changing this might be problematic, as the main data in the FORTRAN program lives in a common block (it is stored in a global variable), which cannot contain arrays of variable dimensions. The authors of TISEAN chose 1e6, and because of how the program is written, making it unlimited would not be trivial. It might be beneficial to raise this limit to 1e8 or 1e9, but one must keep in mind that because of how the FORTRAN program is written it will always allocate a local array of the maximum possible size (currently 1e6 elements). If the maximum input length is raised to 1e8 or 1e9, the data allocated by Octave plus the local copy that the FORTRAN program uses can amount to a sizable chunk of memory. Moreover, it is important to note that each data point will be a real*8 (not a real*4). This brings me to the next point.

    FORTRAN data types
    This topic has been problematic for me from the very beginning. I had trouble understanding just how the dummy arguments work. After some research I found that one can pass -freal-4-real-8 and promote all real*4 to real*8. This is beneficial, as the input to a FORTRAN program is passed as doubles. Previously this caused serious issues: I needed to ensure that when calling any TISEAN or SLATEC function/subroutine I used real*4 instead of real*8, and the solution I originally used was to copy the input variables into local variables. Apart from eliminating potential bugs and reducing code complexity, the previously mentioned flag also allows the FORTRAN programs to have the same precision expected of GNU Octave programs.

    Other things
    I also spent my time on other things. I significantly changed the Makefiles, removed all compilation warnings, and everywhere except ghkss and false_nearest moved from new/delete to OCTAVE_LOCAL_BUFFER or some type of Array.

    by Piotr Held ( at May 27, 2015 06:33 PM

    May 26, 2015

    Juan Pablo Carbajal

    True or false?

    In school you were taught something called formal logic (or just logic). If you liked it, you are lucky: your teacher probably was very good and understood the subject (I was very lucky!). Unfortunately, most of the students I talk with hate logic. In this class I will try to give you an example of why logic is super important, and to show that it is not as boring as many believe.

    This class can help you with the level 1 challenges, if you have not solved them yet.

    "Logic is useless"

    How many times you must have heard that phrase or a similar one! Like most mathematical tools, logic is nothing more than a formalization of something we do naturally. Let me take a mundane example: selecting fruit or vegetables when we go shopping.

    A few days ago, on one of my supply trips to the corner market, I watched with some admiration as an elderly woman selected the apples she put into her shopping bag. Her meticulousness was astonishing, and I wondered whether this woman had been a mathematician during her working years.
    Each time the lady picked up an apple, she looked at it from many different angles, tapped it gently with a finger, smelled it, and finally decided whether the apple went back on the shelf or became part of her purchase.
    Podemos hacer un modelito sencillo del proceso de selección y pensar que la señora ejecutaba una función que tomaba como entrada los aspectos de la manzana bajo observación. Esta función evalúa tres condiciones y decide si la manzana se compra o se devuelve:
    1. Tiene buena forma?
    2. Suena bien?
    3. Huele bien?
    La respuesta a estas preguntas es si o no. Podemos re-escribir las preguntas como afirmaciones (o condiciones) y decidir si estas afirmaciones son verdaderas (en inglés: true) o falsa (en inglés: false):
    1. La manzana tiene buena forma.
    2. La manzana suena bien al ser golpeada suavemente.
    3. La manzana huele bien.
    Si la manzana en cuestión hace que todas estas afirmaciones sean verdaderas, entonces la compramos. De lo contrario, si cualquiera de estas afirmaciones es falsa, devolvemos la manzana a la góndola.

    Symbols and operators

    In the example I introduced the two fundamental symbols of logic: true and false. These are the names used in GNU Octave for these values, but you can use any pair of values you like: (T,F); (0,1); (-1,1); (0,5); (white, black); etc. To be able to do logic we need to know how to operate with these values.

    Let's take two statements, one true and one false, as examples.
    1. The true statement: the word "hacha" has 5 letters.
    2. The false statement: the word "hacha" has no letter "h".
    Let's verify the logical values of these statements in GNU Octave. The following code stores the string "hacha" in a variable called p (for palabra, "word") and evaluates the 1st statement.

    p = "hacha";
    # Count the number of letters
    nLetras = length (p);
    # Evaluate the 1st statement
    nLetras == 5

    The first line of code stores the word in the variable. The line after the first comment returns the length of the string p, that is, the number of letters, which we store in the variable nLetras (remember that lines starting with the character # or % are ignored by GNU Octave; they are comment lines and you can ignore them too, they are just there to help). The last line of code asks whether nLetras is equal to 5. Note that the operator == does not assign a value to nLetras; it compares the two values and answers true if they are equal (experiment a bit with this operator!).
    If you run those lines of code in an Octave session you will get the answer

    ans = 1

    In Octave the value true is represented by a 1 (run true in your Octave session to check), so the value of the 1st statement is true. Let's look at the code for the 2nd statement:

    all (p != "h")

    Simple, eh? I leave it as an exercise to understand what is happening here. What you need to know is that all checks whether every element is true, and that the operator != is the opposite of == (it checks inequality instead of equality). What happens when we compare the string p with the character "h"?

    Octave's answer to this line of code is 0, indicating that, indeed, the second statement is false.

    OK, we have one true statement and one false one, and we will store these truth values in a vector:

    afirmacion = [true, false]

    This way afirmacion(1) is true (true, 1) and afirmacion(2) is false (false, 0).

    What would you say about the new statement below?
    • The word "hacha" has 5 letters and the word "hacha" has no letter "h"
    Is it true or false?

    The operator we used to build this new statement is "and", which in logic is called conjunction. In Octave we can use the function and or the operator &, let's see:

    # operator
    afirmacion(1) & afirmacion(2)
    # function
    and (afirmacion(1), afirmacion(2))

    Run this code. Do you agree with Octave about the truth value of the new statement?
    What would the result be if both statements were true?

    Let's look at another statement
    • The word "hacha" has 5 letters or the word "hacha" has no letter "h"
    To build this new statement we used the inclusive disjunction, the "or". Not the exclusive "or", but an "or" that is happy if one thing is true, or the other, or both. In Octave this operator is written | and the function is or:

    # operator
    afirmacion(1) | afirmacion(2)
    # function
    or (afirmacion(1), afirmacion(2))

    Experiment with this operator to understand how it works.

    The operators & and |, like their respective functions, take two statements as input; they are binary operators (in the sense that they take 2 inputs). Other binary operators you already know are + and *: both take two numbers as input and return the result.

    Exercise 1: Can you build the result of the operators & and | for all possible inputs?

    Exercise 2: Can you write the statement that the woman in the story evaluated to select the apples?

    Numerical example

    OK, all this chatter about logic, but how can it help us with the challenges?

    In the level 1 challenges you need to filter or select certain numbers according to their properties. For example, how can we select even numbers?
    What we know is that an even number can be written as

    K = 2*N

    where N can be any integer. That's all fine, but for each number K I'm given, I would have to search among all the integers for an N that satisfies this condition... possible, but rather tedious. Equivalently, we can say that a number is even if the remainder when dividing it by 2 is zero. That is, an even number is divisible by 2. GNU Octave has a function that returns the remainder of an integer division, the function rem (for "remainder"). As a usage example let's divide 7 by 2. 7 can be split between 2 into 3 equal parts, with 1 left over. Let's see what the function says:

    rem (7, 2)
    ans = 1

    The first argument is the number to be divided (the dividend) and the second argument is the divisor.

    Now let's use this function and our knowledge of logic to select the even numbers from a list of numbers:

    l = 0:3:15;
    tf = rem (l,2) == 0
    tf =
       1   0   1   0   1   0

    The list l contains integers from 0 to 15 in steps of 3, and Octave tells us that the statement "the number is divisible by 2" is true for the first number, false for the second, true for the third, and so on.
    Which numbers are they? Clearly 0, 6 and 12. In Octave we can obtain these numbers using the logical vector we called tf. Try the following

    l(tf)

    Keep experimenting with this kind of exercise until you understand how it all works. If you have questions or suggestions, do not hesitate to write them in the discussion forum or in the comments.

    I attach a file with all the commands we used in this class.

    by Juan Pablo Carbajal ( at May 26, 2015 03:17 PM

    Antonio Pino

    On galleries and the beginning of summer

    With the community bonding period over and the coding period starting today, I will briefly describe how my initial proposal has changed: from just implementing new algorithms and adding them to GNU Octave, to first making various modifications to GNU Octave itself so that Higham's toolboxes run smoothly, and adding the new algorithms at the end. Accordingly, I expect to spend most of the first half of the coding period on these modifications (e.g. new bug reports, patches, toolbox fixes). From there, we aim to go as far as we can with matrix functions. These changes are reflected in the new time line.
    During the community bonding period I have been setting up my environment; you might have seen me on freenode. I have become more and more acquainted with GNU Octave, and found out that the gallery function was broken, with unassigned variables and missing auxiliary functions. This function will prove useful for testing matrix functions, because the eigenvalue decomposition strategy (if $A=VDV^{-1}$ then $f\left(A\right) = V f\left( D \right) V^{-1}$) yields a big error on the ill-conditioned matrices gallery provides. Another example is the useful positive definite matrices that have a computable principal p-th root. gallery is indeed interesting for anyone looking for a matrix with a special characteristic to test an edge case of a function.
    Besides, I have also had the chance to see a plethora of Matlab-style short-circuit operators, looping over infinite ranges, and even weird undocumented functions like superiorfloat (it returns either the "single" or "double" string depending on the input), which led Carnë to point me to the Undocumented Matlab blog, where Matlab's unsupported hidden underbelly is documented. More quirks (and their solutions) in the next post.
    A final thank-you goes to the project in general and my mentors (Carnë and Mario) in particular for this opportunity. I hope everyone pleasantly codes their summer away!

    by Antonio Pino Robles ( at May 26, 2015 12:44 AM

    May 25, 2015

    Antonio Pino

    GSoC 2015 - Matrix Functions in GNU Octave

    A brief intro

    First of all, let me introduce myself: I am Antonio Pino Robles—an Electronic Engineering student from the Basque Country—and I will be improving matrix functions in GNU Octave this summer, as part of the Google Summer of Code program.
    The idea behind this is quite simple: given a square matrix $M\in \mathbb{C}^{n \times n}$ and a function $f$, GNU Octave will compute $f\left(M\right)$. You may think of them as an extension to scalar functions, i.e. starting from $f:\mathbb{C}\rightarrow \mathbb{C}$ compute $f:\mathbb{C}^{n \times n}\rightarrow \mathbb{C}^{n \times n}$. Their implementation is quite different, though. (Check Golub and van Loan's book[0] and the Short Course by Higham and Lin[1] for further info.)
    Let me note that matrix functions are already part of Octave: expm, logm and sqrtm in Octave itself, and funm plus the trigonometric and hyperbolic matrix functions in the Linear-Algebra Octave-Forge package. There are also GPLed toolboxes by Nicholas J. Higham, namely the mctoolbox[2] and the mftoolbox[3]; furthermore, GPLed software from the NAMF group—led by N. J. Higham at The University of Manchester—is available as well.
    Hence, in a first stage Octave will be modified so that the toolboxes run smoothly as they are, and then the existing implementations will be improved by updating their algorithms.
    Finally, for a more detailed description of the project please refer to my octave-wiki page:

    Agur bero bat!

    [0] G.H. Golub and C.F. Van Loan. Matrix Computations, 4th Edition. The Johns Hopkins University Press, Baltimore, USA, 2013.
    [1] Nicholas J. Higham and Lin Lijing, Matrix Functions: A Short Course, preprint, (2013).
    [2] N. J. Higham. The Matrix Computation Toolbox.
    [3] N. J. Higham. The Matrix Function Toolbox.

    by Antonio Pino Robles ( at May 25, 2015 11:05 PM

    May 23, 2015

    Asma Afzal

    Quadratic Programming quadprog, qp

    Just a post to review the things I've learned over the past week:

    An intuitive explanation of Lagrange multipliers [1], for a D-dimensional function to be optimized subject to a set of equality constraints:
    • At the optimum, the vector normal to the function must be a scaled version of the vector normal to the constraint function (in other words, the two normals are parallel). This scaling factor is the Lagrange multiplier corresponding to that particular constraint.
    • $\nabla f=\lambda_1\nabla g+\lambda_2 \nabla h$, where $g$ and $h$ are two constraint functions.

    KKT conditions: a generalization of the method of Lagrange multipliers, applicable to inequality constraints $g_i(x)-b_i\geq 0$:

    • Feasibility: $g_i(x^*)-b_i\geq 0$
    • Stationarity: $\nabla f(x^*)-\sum\limits_i\lambda_i^*\nabla g_i(x^*)=0$
    • Complementary slackness: $\lambda_i^*(g_i(x^*)-b_i)=0$
    • Non-negative Lagrange multipliers: $\lambda_i \geq 0, \forall i$

    If we obtain a negative Lagrange multiplier, the constraint corresponding to the most negative multiplier is removed and the optimization is performed again, until all multipliers are non-negative.

    A bit about the active set algorithm [2]:

    • None, some, or all of the constraints may be active at the optimum. We only need to solve for the equality constraints that are active (binding) at the optimum.
    • When we have an active set $S^*$: $x^* \in F$, where $F=\{x \mid A x\leq b\}$, and $\lambda^* \geq 0$, where $\lambda^*$ is the set of Lagrange multipliers for the equality constraints $Ax=b$
    • Start at $x=x_0$ with an initial active set
    • Calculate $x^*_{EQP}, \lambda_{EQP}^*$ which minimize the EQP defined by the current active set. Two possible outcomes:
      1. $x^*_{EQP}$ is feasible ($x_{EQP}^* \in F$). Set $x=x_{EQP}^*$ and check the Lagrange multipliers $\lambda_{EQP}^*$. If all are non-negative, the solution is found! Otherwise, remove the constraint with $\min(\lambda_{EQP}^*)$ and repeat.
      2. $x^*_{EQP}$ is infeasible. We move as far as possible along the line segment from $x_0$ to $x^*_{EQP}$ while staying feasible, and add to $S$ the blocking constraint we encounter that prevents further progress.
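    The EQP solved at each iteration above reduces to a linear KKT system: stationarity $Hx - A^T\lambda = -q$ together with feasibility $Ax = b$. A minimal pure-Python sketch (the solve helper is a hypothetical stand-in for a proper linear solver, and the example problem is made up):

    ```python
    def solve(M, rhs):
        """Tiny Gauss-Jordan elimination with partial pivoting
        (a hypothetical stand-in for a proper linear solver)."""
        n = len(M)
        A = [row[:] + [r] for row, r in zip(M, rhs)]  # augmented matrix
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(A[r][c]))
            A[c], A[p] = A[p], A[c]
            for r in range(n):
                if r != c and A[r][c] != 0.0:
                    f = A[r][c] / A[c][c]
                    A[r] = [a - f * b for a, b in zip(A[r], A[c])]
        return [A[i][n] / A[i][i] for i in range(n)]

    # Made-up EQP: minimize 1/2 x'Hx + q'x subject to A x = b,
    # here: minimize 1/2 (x1^2 + x2^2) subject to x1 + x2 = 2.
    H = [[1.0, 0.0], [0.0, 1.0]]
    q = [0.0, 0.0]
    A = [[1.0, 1.0]]
    b = [2.0]

    # KKT system: H x - A' lam = -q (stationarity), A x = b (feasibility).
    KKT = [[H[0][0], H[0][1], -A[0][0]],
           [H[1][0], H[1][1], -A[0][1]],
           [A[0][0], A[0][1],  0.0]]
    x1, x2, lam = solve(KKT, [-q[0], -q[1], b[0]])
    print(x1, x2, lam)  # minimizer (1, 1) with multiplier 1
    ```

    Since the multiplier comes out non-negative, an active-set iteration would stop here; a negative multiplier would instead trigger the constraint-removal step described above.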

    Quadratic programming:

    $\min\limits_{x}\frac{1}{2} x^THx + x^Tq$
    $ A_{eq} x = b_{eq}$
    $lb \leq x \leq ub$
    $A_{lb} \leq A_{in}x \leq A_{ub}$

    What qp.m does:

     [x, obj, info, lambda] = qp (x0, H, q, Aeq, beq, lb, ub, A_lb, A_in, A_ub)
    • Checks feasibility of initial guess $x_0$
    • Checks size of inputs and that they make sense.
    • Checks whether the bounds lb, ub are too close together, or A_lb and A_ub are too close. If they are very close, the inequality is treated as an equality constraint instead.
    • Checks if any bound is set to Inf or -Inf; qp simply strikes it off.
    • Calls backend solver __qp__ using null space active set algorithm. 
    The ordering of lambda

    • quadprog returns Lagrange multipliers in a structure (with fields upper, lower, eqlin, ineqlin); the multipliers corresponding to constraints that were not provided are left empty.
    • In qp, lambda is a column vector with Lagrange multipliers associated to the constraints in the following order: [equality constraints; lower bounds; upper bounds; other inequality constraints]
    • The length of lambda vector output from qp depends on the number of different constraints provided as  input.
    • Two issues in wrapping qp.m
      1. The order (i.e. the position of the bounds constraints within the inequality constraints) is not specified by qp.m. The code could change, and the ordering with it.
      2. qp.m strips off the INF constraints before calling __qp__ but does not process the lambda (returned by __qp__) accordingly.
      • Solution:
        • If this order were "specified", then we could extract the corresponding parts of lambda. A patch adding Inf checks to qp's output lambda would make things easier, but is not critical.

    by Asma Afzal ( at May 23, 2015 05:40 AM

    May 20, 2015

    Mike Miller

    Birthday Resolutions - Review

    Last year on my birthday I decided to try setting some goals for self-improvement with a deadline of the following birthday. People typically set New Year’s resolutions for themselves, but I wanted to try something different. Partly because I’m a natural contrarian, but also because my birthday last year was unique and offered more than a few reasons for self-reflection. So with another birthday looming, it’s time now to review how this experiment worked out.

    First, because this was a particularly notable birthday I had decided to hold myself to 10 resolutions. So this was almost doomed to failure from the beginning, if success means hitting all 10, which I didn’t. If I want to do this again next year, I should definitely go with a smaller set of goals to better set myself up for success. Obvious.

    Some of my goals were broader than others, which made it harder to define a successful target to aim for. For example, my goal to attend more free software developer conferences (did) was a lot easier to define and complete than my goal to make more time for creative pursuits (didn’t).

    Despite these problems, I like how this experiment turned out. I was able to accomplish about half of my 10 goals (for some definition of “accomplish”), and I’m not one to dwell on the other half that didn’t get done. I also like pinning personal goals to my birthday, rather than arbitrarily to the start of the Western calendar year. It reminds me to not only celebrate my birthday but to keep trying to improve from year to year.

    Anyone else tried this? Any other non-traditional ideas for annual, or more frequent, resolutions and personal goals?

    May 20, 2015 11:33 PM

    May 15, 2015

    Piotr Held

    Progress report

    As I haven't had any significant roadblocks or breakthroughs this week, I wanted to give a little progress report on my work.
    1. Added functionality
    I have managed to add the following functions:

    • mutual
    • spectrum
    • lazy
    • delay
    • pca
    Along with their documentation, tests and a demo (for lazy). I was really happy that, once I had produced some examples of how I want to port these functions, the process of porting each one accelerated rapidly.
    I am especially excited that I now have henon, delay, an equivalent of addnoise, and project available, as this allowed me to create a nice noise reduction demo for project (and for lazy, but the one for project is more impressive). Fig. 1 sums those efforts up.
    Fig. 1 Noisy data and data cleaned up by project.
    2. Functions found to be non-equivalent
    I also spent a lot of my time (almost a week) researching which programs from TISEAN have a GNU Octave equivalent. Apart from the positive identifications, most of which I discussed in previous posts, I have made some negative ones. I found that neither extrema nor polynom has a GNU Octave equivalent.
    There was a suggestion that extrema might be similar to findpeaks from signal. The only problem is that findpeaks searches for (and returns) all peaks, whereas extrema returns either minima or maxima. It might be easier to implement it in Octave than to port it, but this decision has not been made yet.
    The latter program, polynom, was compared to detrend, polyfit and wpolyfit. The results were disappointing. All of the GNU Octave functions attempt to fit a polynomial to the data, whereas polynom tries to make a "polynomial ansatz" for the data. The results are vastly different, as can be seen in Fig. 2.
    Fig. 2 Comparison of original data (green), polyfit fit (red), and polynom prediction (blue).
    Both programs were run with a 4th order polynomial.

    by Piotr Held ( at May 15, 2015 06:18 PM

    May 14, 2015

    Asma Afzal

    Nonlinear Regression and 'nlinfit'

    In MATLAB, all three functions 'lsqnonlin', 'lsqcurvefit' and 'nlinfit' are used to perform non-linear curve fitting.

    To better understand the differences and similarities in these functions, consider the model function:
    $y= \beta_1+\beta_2  \text{exp}(-\beta_3x)$

    We wish to estimate $\beta=\{\beta_1,\beta_2,\beta_3\}$ for the set of independent variables $\{x_i\}$ and observed values $\{y_i\}$ such that the model fits the data.

    Both 'nlinfit' and 'lsqcurvefit' are very similar in that we pass the regression function directly to compute the parameters. 'lsqnonlin', on the other hand, solves optimization problems of the type $\min_{\beta} \sum_k f_k(\beta)^2$, so we cannot specify the regression function directly; instead, an error function has to be provided. This is shown in the code below:

    modelfun = @(b,x)(b(1)+b(2)*exp(-b(3)*x));
    b = [1;3;2]; %actual
    x = exprnd(2,100,1); %independents
    y = modelfun(b,x) + normrnd(0,0.1,100,1); %noisy observation
    beta0 = [2;2;2]; %guess
    beta = nlinfit(x,y,modelfun,beta0)
    beta = lsqcurvefit(modelfun,beta0,x,y)
    beta = lsqnonlin(@(b)err_fun(b,x,y),beta0) %err_fun = modelfun-y

    All three functions return essentially the same estimate, close to the true parameters [1; 3; 2] (the exact values vary with the random noise).
    • lsqcurvefit is more flexible in the sense that we can define bounds for the design variables (unlike nlinfit) while passing the observed values separately (unlike lsqnonlin).
    • nlinfit provides extra statistics, such as the covariance matrix of the fitted coefficients and information about the error model.
    • As an alternative to defining weights for the observed values in 'nlinfit', the 'RobustWgtFun' option can choose among different pre-defined weight functions for robust regression (with robust regression, the fitting criterion is not as vulnerable to unusual data as the least-squares weighting function).


    by Asma Afzal ( at May 14, 2015 10:24 AM

    May 12, 2015

    Mike Miller

    Octave + Python: A New Hope

    As a fan of both Python and Octave for numerical computing, and an active Octave developer, I’m always excited to hear about projects in either environment that create new capabilities or open up new ways of looking for solutions to problems. So I am especially excited about a new project that has the potential to bring Octave and Python much closer together and to give users of either tool full use of the other.

    The broad goal of this project is to provide a two-way interface layer between Octave and Python. What does this mean specifically? Well, I expect a future version of Octave to have a function that will call Python functions, using an embedded Python runtime, with transparent conversion between native Octave types and Python / NumPy types. There will also be a Python module to do the inverse: allow Python code to call Octave functions, invoke an embedded Octave interpreter, and have automatic conversion between Python and Octave types.

    The way in which the seeds of this project came together very quickly is really interesting, and what I want to describe in this post. The first was in a mailing list side discussion in late March about the appropriateness of Octave and Matlab for teaching numerical programming. It was mentioned that recent versions of Matlab have a calling interface to Python. For years they had provided a similar interface to Java, but I had no idea that Python was now an option for Matlab users. I filed that away for later.

    Then there is the Octave symbolic package, which relies heavily on SymPy to do the actual symbolic computation, but interacts with Python and SymPy over a pipe. So that existing package would definitely benefit from having a Python interpreter embedded in Octave or in a loadable oct-file.

    And finally there was a post in early April from fellow Octave developer JordiGH, who wrote:

    I have a wild idea. I like Python, and I think Numpy and Scipy are a great tool. Interfacing Scipy with Octave is also a good thing. … I therefore propose to bring Pytave into Octave proper.

    Pytave is an already-existing project which provides a Python module that can call Octave functions. It worked with older versions of Octave years ago, but has not kept up with the Octave API. Still, it has a lot of useful code for converting between Octave and Python types; lots of good groundwork to start building from.

    I’m not sure what led Jordi to think of this “wild idea” or share it with us, but it definitely inspired me to latch onto this project. The timing of his message, after the other previous uses and mentions of Python, and being just days before the start of my first PyCon experience, read to me like a call to action. This felt like a perfect confluence of events and ideas to bring Octave and Python together in a novel way.

    So, I have already put some effort into this, and am planning to do some more. I hope that I (and any other interested contributors) will be able to make some real progress on this Octave-Python interface during this summer. I will share some more specifics about the project in a followup post soon.

    Thoughts about this project? Interested in following our progress or contributing?

    May 12, 2015 03:41 AM

    May 07, 2015

    Piotr Held

    The problem with 'spectrum'

    The 'spectrum' function from TISEAN most likely either needs to be rewritten in GNU Octave or is not needed at all. Linking to it does not seem like a good idea, because it is suspected not to produce good results for some data inputs.

    1. Where 'spectrum' works
    First it is important to note that 'spectrum' from TISEAN is basically GNU Octave's 'abs(fft(:))' with some additional data manipulation/adjustment. This additional work is not an elegant one-line solution, which might warrant a separate function that translates the output of the respective Octave function into a form similar to the output of 'spectrum'. This might not be necessary, though, since even without adjustment the data obtained from the Octave function is very similar to 'spectrum' (Fig. 1).
    Fig. 1 Unadjusted data from Octave
    After adjusting the data (by analyzing the source code to determine what the TISEAN program actually does) it was possible to get a close fit with only a small difference. An example of this type of adjustment is shown below (Fig. 2).
    Fig. 2 Adjusted data from Octave.

    As converting 'abs (fft (:))' into a format similar to 'spectrum' is not a one-line fix, it will not be shown in this post. It is available in the 'tests/spectrum/test_spectrum.m' function located in the tisean package repo (here).
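    For readers who want to see the core computation without the TISEAN-specific adjustment, here is an illustrative pure-Python DFT magnitude (dft_magnitude is a made-up helper for this sketch; it plays the role of abs (fft (x)) in Octave, not of 'spectrum' itself):

    ```python
    import cmath
    import math

    def dft_magnitude(x):
        """Magnitude of the discrete Fourier transform, computed naively.
        (dft_magnitude is a made-up helper playing the role of
        abs (fft (x)) in Octave, before any 'spectrum'-style rescaling.)"""
        N = len(x)
        return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                        for n in range(N)))
                for k in range(N)]

    # A pure cosine concentrates its energy in two symmetric bins:
    N = 8
    x = [math.cos(2 * math.pi * n / N) for n in range(N)]
    mags = dft_magnitude(x)
    # Bins 1 and N-1 hold magnitude N/2 = 4; the rest are ~0.
    print([round(m, 6) for m in mags])
    ```

    TISEAN's 'spectrum' adds rescaling and output formatting on top of exactly this kind of magnitude spectrum, which is why the unadjusted Octave data in Fig. 1 is already so close.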

    2. Where the problem lies
    The problem is that when 'spectrum' is used on a step response, its results differ substantially from what Octave produces. The way the data looks suggests that there is something wrong with 'spectrum'. The adjusted version is shown below.
    For the most part the data fits perfectly, but there seems to be a shadow at the bottom of the TISEAN data. If there really is a problem with 'spectrum', then its code should not be used in the future Octave package and should be rewritten or omitted (since similar results can be obtained from a simple Octave call).

    by Piotr Held ( at May 07, 2015 07:58 PM

    May 06, 2015

    Piotr Held

    Finding 'histogram' in GNU Octave

    Unlike 'corr', it is quite easy to find a representative for 'histogram' from TISEAN: it is 'hist' from GNU Octave. The data is almost the same, with the exception that the TISEAN program normalizes by default, so one needs to be careful when calling the respective functions. Below I describe the differences in both the data and the usage.

    1. Data comparison
    I have attached a comparison of the two data sets (from 'hist' and 'histogram' on one chart)
    Fig. 1 Comparison between 'hist' (Octave) and 'histogram' (TISEAN)
    When one analyses the data closely there is a slight discrepancy between the values of the 40th and 41st bars. Not only is it slight, it basically means that the two programs assigned a certain value to two different bins, which should not be a major problem. All told, we can say that both functions perform the same task.

    2. Usage comparison
    As mentioned before, the usage of the two functions differs.
     $ histogram amplitude.dat -b#n -o "amplitude_histogram.dat"
     [nn, xx] = hist (amplitude, #n, 1);  
    nn = transpose (nn); xx = transpose (xx)  
    amplitude_hist = [xx, nn];
    This way the data stored in 'amplitude_hist' is essentially the same as 'amplitude_histogram.dat'.
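    The normalization difference is the whole story here, and it is easy to sketch. The following illustrative Python function (hist_norm is a made-up name, loosely mimicking Octave's hist (x, n, norm)) shows how raw counts relate to TISEAN-style normalized ones:

    ```python
    def hist_norm(data, nbins, norm=None):
        """Bin data into nbins equal-width bins; if norm is given, scale
        the counts so they sum to norm, mimicking hist (x, n, 1) in
        Octave and TISEAN histogram's default normalization.
        (hist_norm is a made-up helper for this illustration.)"""
        lo, hi = min(data), max(data)
        width = (hi - lo) / nbins or 1.0   # avoid zero width for flat data
        counts = [0] * nbins
        for v in data:
            i = min(int((v - lo) / width), nbins - 1)  # clamp max into last bin
            counts[i] += 1
        if norm is not None:
            total = sum(counts)
            counts = [norm * c / total for c in counts]
        centers = [lo + (i + 0.5) * width for i in range(nbins)]
        return counts, centers

    raw, _ = hist_norm([0, 1, 1, 2, 2, 2], 3)           # raw counts [1, 2, 3]
    freq, _ = hist_norm([0, 1, 1, 2, 2, 2], 3, norm=1)  # frequencies summing to 1
    print(raw, freq)
    ```

    The binning itself is identical either way; forgetting the norm argument is what produces data that disagrees with TISEAN's default output.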

    by Piotr Held ( at May 06, 2015 10:46 AM

    Finding a 'corr' representative in Octave

    This article describes the methodology used to compare functions from GNU Octave and the TISEAN package. To reproduce the results, the author assumes you have installed the TISEAN package (available here), downloaded amplitude.dat, and installed GNU Octave with the 'signal' package in version 1.3.0 or newer.

    1. Comparison

    Procedure taken to obtain the results:

      1. Generate amp_corr.dat using the TISEAN package 'corr' with the call:
     '$ corr amplitude.dat -D5000 -o "amp_corr.dat"'
      2. Generate similar autocorelation data using (in GNU Octave):
     'load amplitude.dat; [a,b] = xcorr(amplitude, 5000, 'coeff');'
         Then to save the data you can use:
     'idx = [rows(amplitude):2*rows(amplitude)-1]; 
      xcorr_res = a(idx);
      save "xcorr_res.dat" xcorr_res'
    There is a strong difference in the data. This might be because of the different methods used in the two cases (as explained further in Section 2, Methods). Because of those differences, the amplitude of the data generated using 'xcorr' from 'signal' decreases linearly. Thus, to compare the data, the oscillation amplitude of the data generated by 'xcorr' must be amplified. This linear decrease was not proven, but observed on the 'amplitude.dat' data.

    When a linear correction is applied:
     'mult = rows (amplitude) ./ (rows (amplitude) - [0:rows(amplitude)-1]);  
    xcorr_tisean_res = mult .* xcorr_res'
    Fig. 1 Difference between xcorr_tisean_res and amp_corr

    The resultant xcorr_tisean_res is close to the TISEAN 'corr' output, and the difference is smaller than 3% (see Fig. 1). Towards the end the data begins to diverge, most likely because there is no more data past lag 5000, so the results vary. If the autocorrelation is calculated for fewer lags (e.g. 4500 instead of 5000) the difference is much smaller, as can be seen on the chart above.
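    The linear correction factor used above has a simple origin: a biased ('coeff'-style) estimator divides the lag-k sum by N, while an unbiased one divides by N - k, so the two differ exactly by mult = N / (N - k). A small pure-Python sketch (autocorr is a made-up helper, not code from TISEAN or the signal package):

    ```python
    def autocorr(x, maxlag, unbiased=False):
        """Direct (estimation-method) autocorrelation sums.
        The biased estimator divides the lag-k sum by N; the unbiased
        one divides by N - k. (autocorr is a made-up helper, not code
        from TISEAN or the signal package.)"""
        N = len(x)
        out = []
        for k in range(maxlag + 1):
            s = sum(x[n] * x[n + k] for n in range(N - k))
            out.append(s / ((N - k) if unbiased else N))
        return out

    x = [1.0, -2.0, 3.0, -1.0, 2.0]   # made-up data
    N = len(x)
    biased = autocorr(x, 3)
    unbiased = autocorr(x, 3, unbiased=True)
    # The two estimators differ exactly by the factor N / (N - k),
    # which is the 'mult' correction applied above:
    print(all(abs(u - b * N / (N - k)) < 1e-12
              for k, (u, b) in enumerate(zip(unbiased, biased))))  # True
    ```

    This is why the observed "linear decrease" is not a coincidence: the amplification by rows(amplitude) / (rows(amplitude) - k) converts one estimator into the other exactly, and the remaining discrepancy near the edge comes from how few terms enter the sum at large lags.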

    Even better results can be obtained for different data. We can generate a different set using the TISEAN package
     '$ ar-model -s5000 amplitude.dat -p10 -o "amp_ar.dat"'
    When the process described above is applied to this new data set ('amp_ar.dat') the resulting difference between 'xcorr' and 'corr' is shown on Fig. 2.
    Fig. 2 Difference between 'xcorr' and 'corr' on 'amp_ar.dat'

    Similarly to the previous case, the data is the same for small lags ( < 4000), but close to the edge the difference becomes more pronounced.

    2. Methods

    The way TISEAN calculates the autocorrelation in the 'corr' program is by using an estimation method. It is described here:

    On the other hand, the 'xcorr' function from the signal package uses the FFT (Fast Fourier Transform) method (described in the same Wikipedia article: here).

    This difference in methodology is the cause of the differences in the results of the two functions.

    3. Conclusions [edited]
    After more tests we found 'corr' from TISEAN and 'xcorr' from 'signal' to perform the same autocorrelation, and therefore it is not necessary to port it.

    It is important to note the different usage:
     $ corr amplitude.dat -Dn# -n -o "amplitude_tisean.dat" 
     [data, lags] = xcorr (amplitude, n#, 'unbiased')
    data = data(find (lags > 0))
    Both of the calls above produce the same data.

    It is important to note the '-n' in the call to the TISEAN program. It means the data is not normalized. You can obtain similar data even when calling 'corr' with normalization, but it is more tricky:
     $ corr amplitude.dat -Dn# -o "amplitude_tisean.dat" 
     [data, lags] = xcorr (center (amplitude), n#, 'coeff')
    data = data(find (lags >0))  
    data = data .* (n# ./ (n# - (transpose ([0:n#-1]))))
    The results of this can be viewed in tests/corr with the function test_corr.m (note: the 'signal' package is needed), available in the tisean port package repo:

    by Piotr Held ( at May 06, 2015 09:58 AM

    May 05, 2015

    Asma Afzal

    Project Goals

    Here, I list down the project goals as stated in my Wiki page:

              Start of GSoC (May) 
    1. 'lsqnonlin' using 'nonlin_residmin'
    2. 'lsqcurvefit' using 'nonlin_curvefit', 'nonlin_residmin', or 'lsqnonlin',
    3. 'fmincon' using 'nonlin_min',
    4. 'nlinfit' using 'leasqr',
    5. Test cases for the above functions [10] .
    6. Instead of writing wrappers for top-level functions like qp, call the back-end function (__qp__) to be able to extract lambda. See [11].
      Stretch Goals
    7. Further missing functions in the Optim package (see [12]). Implement another back-end algorithm / add a variant.

     Regarding goal 6: quadprog and lsqlin should call a private intermediate function instead of qp.m.
     This private function should do the argument processing for calling __qp__. It could also be configured to call yet-to-be-written alternatives to __qp__.
    Among other things, this should make ordering of the 'lambda' output feasible.

    (I have yet to study __qp__ and how this will be done.)

    by Asma Afzal ( at May 05, 2015 05:41 PM