Planet Octave

June 10, 2021

Abdallah Khaled Elshamy


Hello there. In this blog post, I will describe how to set up a GNU Octave package using the new GitHub template. This is what I did to create the package I will be working on during this GSoC. Let’s get started.

I followed these easy steps to set up my package:

1- Create a repo from the template:

As I am part of the GNU Octave GitHub organization, I was able to do that easily. If you are not a member, join the GNU Octave GitHub organization by asking for an invitation at our Discourse forum.

2- (Optional) Change the license:

The “COPYING” file contains the license text of the package. You may change the license if you want. For my package, I left it as it is.


3- Update the “DESCRIPTION” file:

Update the fields in this file to match your package. For my package, I used the following:

name: pkg-jupyter-notebook 
version: 1.0.0
date: 2021-05-30
author: Abdallah Elshamy <>
maintainer: Kai T. Ohlhus <>, 
 Abdallah Elshamy <>
title: A package to run and fill Jupyter Notebooks within GNU Octave. 
description: A package to run and fill Jupyter Notebooks within GNU Octave. 
 This would enable Jupyter Notebook users to evaluate
 long-running Octave Notebooks on a computing server without 
 a permanent browser connection, which is still a pending issue.
categories: package
depends: pkg-json (>= 1.0.0)
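A tiny shell snippet can sanity-check that none of these fields was forgotten. This is a hypothetical convenience, not part of the template; the field list below simply mirrors the example above:

```shell
# Hypothetical sanity check for a DESCRIPTION file.  The required-field list
# below simply mirrors the example shown in this post; it is not an official
# validator.  First write a sample DESCRIPTION (contents abridged):
cat > DESCRIPTION <<'EOF'
name: pkg-jupyter-notebook
version: 1.0.0
date: 2021-05-30
author: Abdallah Elshamy
maintainer: Kai T. Ohlhus, Abdallah Elshamy
title: A package to run and fill Jupyter Notebooks within GNU Octave.
description: A package to run and fill Jupyter Notebooks within GNU Octave.
categories: package
depends: pkg-json (>= 1.0.0)
EOF
# Report any field that is missing from the file:
for field in name version date author maintainer title description categories depends; do
  grep -q "^$field:" DESCRIPTION || echo "missing field: $field"
done
```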

4- Update your README:

Right now, your README is the same as the template’s. Change it to suit your package.

5- Change the package icon in docs and remove the other images from there:

Currently, your icon is the same as the template’s. Change it to suit your package. There are also some other images that were used in the template’s README that you need to remove.

6- Remove unnecessary files from src:

Initially, this directory contains examples of Octave/MATLAB code, Fortran code, C++ code called via the oct-interface, and C code called via the mex-interface. Remove the files that you don’t need and rename the rest to match your package.

7- (Optional) Amend the initial commit and force push your changes:

The setup is now complete, but you may want to amend the initial commit using git commit --amend if you don’t want the files that you removed to appear in the initial commit. To push those changes to the remote repo, use the -f option, since you have rewritten the initial commit.
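The amend step can be sketched in a throwaway repository. The file names here are made up for illustration; in your real package repo you would finish with git push -f:

```shell
# Demonstration of folding a cleanup into the initial commit, in a scratch
# repository.  File names are made up; in your real package repo you would
# finish with: git push -f
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
touch keep.m unwanted_example.cc
git add -A
git commit -qm "Initial commit"
# Decide the template example file is not needed, remove it, and amend:
rm unwanted_example.cc
git add -A
git commit -q --amend --no-edit
git rev-list --count HEAD    # history is still a single commit
```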

And that’s it! As you can see, setting up a package using the new GitHub template is quick and simple.

by abdallahkelshamy at June 10, 2021 02:10 PM


I am very pleased that I will be working with GNU Octave during Google Summer of Code for the second year in a row! This year, I will be working on the “Jupyter Notebook Integration” project.

The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, MathJax-rendered equations, visualizations, and narrative text. To interactively work with Octave code within Jupyter Notebooks, there already exists an Octave kernel for Jupyter.

This project aims to support the opposite direction: running (and filling) Jupyter Notebooks within GNU Octave. This would enable Jupyter Notebook users to evaluate long-running Octave Notebooks on a computing server without a permanent browser connection, which is still a pending issue.

There are some changes in my timeline, as the schedule of my final exams changed. This is how the timeline looks now:

I’m looking forward to a fruitful and fun summer with GNU Octave.

by abdallahkelshamy at June 10, 2021 01:42 PM

January 22, 2021

Carnë Draug

Manage ImageJ update site on localhost with git


I've discovered that one can use the "file:" protocol to manage ImageJ update sites that are actually git repositories on the local filesystem. This means that I can have the update site pull the changes from somewhere rather than have ImageJ push the changes to the remote update site. This reduces the need for direct access to the server, with the overall goal of automating its deployment.


To make a new release of SIMcheck, an ImageJ plugin, I do the following dance:

  1. build new release of SIMcheck;
  2. install it on a fresh local copy of Fiji;
  3. use the ImageJ updater to update the remote update site.

While most projects have their update site on, we host the SIMcheck update site on one of our own servers — the Micron downloads site — together with some mirrors for other ImageJ update sites. I like to own my infrastructure and I like it distributed [1].

Anyway, I was never very happy with step 3 of this dance, namely the part where ImageJ pushes changes directly to the public website. This is in large part because I'm not happy with the current setup. I don't like having to access the downloads server to upload files. I would much rather have them somewhere else and then configure the server to fetch/mount the files from that somewhere else. I also want to have the downloads site under version control and integrity checks. My plan is to use git-annex, but there's always a lot of work to do, and since infrastructure work is never urgent, it never gets done.


While restructuring our servers is not going to happen overnight, I'm doing it one step at a time. For starters, I created a git repository with the SIMcheck update site. The plan now is to set up the downloads site to serve that git repository and only have to specify the git hash to deploy on the ansible playbook.

But the ImageJ updater only makes changes on remote servers. From its "documentation":

If you have an own server or web space with WebDAV, SFTP or SSH access, you can create a directory in that web space and initialize it as an update site, too.

I could set up one of those services locally, but it seems like too much work when everything is already local. So, despite the documentation, I tried to use file: as the "host" and it worked just fine. This is how it looks in the ImageJ update site manager:

How the configuration looks like on the ImageJ update site manager.

So my dance now is as follows:

  1. download a fresh copy of ImageJ;
  2. configure the updater with a SIMcheck update site that is the local git clone;
  3. install new version of SIMcheck;
  4. update the SIMcheck update site (local git clone) with the new version with the ImageJ updater;
  5. commit and push the changes to the SIMcheck update site;
  6. pull the changes on the public server.

At the command line, this roughly translates into:

$ wget
$ unzip
$ cd
$ ./ImageJ-linux64 --update update
$ ./ImageJ-linux64 --update add-update-site SIMcheck-local \
      file:/home/carandraug/src/SIMcheck-update-site/ \
      file: \
$ ./ImageJ-linux64 --update update
$ mv PATH-TO-SIMCheck-REPO/target/SIMcheck_-1.3.jar plugins/
$ rm plugins/SIMcheck_-1.2.jar
$ ./ImageJ-linux64 --update upload \
      --update-site SIMcheck-local \
$ cd ~/src/SIMcheck-update-site
$ git add plugins/SIMcheck_-1.3.jar-20210121203119
$ git add db.xml.gz
$ git commit -m "SIMcheck release 1.3"
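Steps 5 and 6 of the dance can be sketched with a local stand-in for the public server: a bare repository plays the remote update-site repo, and a second clone plays the server that pulls the deployed state. All paths and file names below are made up for illustration:

```shell
# Local stand-in for steps 5-6: a bare repo is the "remote" update-site
# repository, and a second clone plays the public server.  Paths and file
# names are made up for illustration.
set -e
work=$(mktemp -d)
git init -q --bare "$work/update-site.git"          # the "remote" repo
git clone -q "$work/update-site.git" "$work/local"  # my working clone
cd "$work/local"
git config user.email demo@example.com
git config user.name Demo
mkdir plugins
echo fake-jar > plugins/SIMcheck_-1.3.jar
echo fake-db  > db.xml.gz
git add plugins/SIMcheck_-1.3.jar db.xml.gz
git commit -qm "SIMcheck release 1.3"
git push -q origin HEAD                             # step 5: push the release
git clone -q "$work/update-site.git" "$work/server" # step 6: the server pulls
ls "$work/server/plugins"
```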

Future Ideas

I think it might be interesting to have something like this for automating releases. An automation server can be triggered to build new releases and push them to a git repository for each update site. A site that serves those update sites can be configured to pull and serve a specific commit for each update site.

[1] It's not only me. The ImageJ project itself seems interested in having mirrors of its resources. If you can set up a mirror, check out the thread "Who can mirror ImageJ online resources?"

by David Miguel Susano Pinto at January 22, 2021 12:00 AM

August 28, 2020

Abdallah Khaled Elshamy


Hello there. In this blog post, I will describe my work with GNU Octave during Google Summer of Code 2020. I will also add links to the commits I have made and the repositories I have worked on. Let’s get started.

About my project

JavaScript Object Notation, JSON for short, is a very common human-readable and structured data format. My project aims to provide GNU Octave with built-in support for that data format.

Specifically, my project provides GNU Octave with two functions:

  • jsondecode: This function decodes JSON-formatted strings into Octave objects.
  • jsonencode: This function encodes Octave objects into JSON-formatted strings.

Having JSON support, Octave can improve, for example, its web service functions, which often exchange JSON data these days.

My contribution to GNU Octave during GSoC

For the past few months, I’ve been working with my mentors on my project. This work resulted in the following commits that were pushed into the main repository of Octave:

  • 5da49e37a6c9: This commit contains nearly all of my work during GSoC. It adds the functions jsonencode and jsondecode to GNU Octave.
  • aae9d7f098bd: This commit improves the integration of the functions into Octave’s build system.
  • 34696240591e: This commit improves the documentation of the functions.
  • 0da2fbd3a642: This commit improves the documentation of the functions.
  • 174550af014f: This commit adds more tests and improves the documentation of the functions.

Those commits contain:

  • The code of the two functions.
  • Unit tests.
  • Documentation (Doxygen documentation for the developers and Texinfo documentation for the users).
  • Code to add the new functions to the build system.

I was working in this stand-alone repository; then, in the final stages of GSoC, the code was moved to my fork of the Octave mirror on GitHub.

How to use my project

As I said before, my project is already pushed to the main repository of Octave. To use it, all you have to do is to build the default branch of Octave. This article shows how to do this.

What to do after GSoC

A benchmark was implemented by my mentor to assess the performance of the functions. The benchmark results showed that jsondecode has some performance issues. After some investigation, we were able to pinpoint the bottleneck: the makeValidName function. This function is written in the Octave scripting language, which is much slower than C++. So, I am going to rewrite the core of this function in C++ and have the m-file of makeValidName call it. This should significantly improve the performance of jsondecode.

Finally, I would like to say that I really had a wonderful experience during Google Summer of Code 2020 with GNU Octave, and I am looking forward to contributing more in the future. Also, I would like to thank:

  • The team of the Google Summer of Code program for supporting such an amazing experience.
  • The community of GNU Octave for helping me during this wonderful journey.
  • My co-mentors and my mentor for providing me with continuous support and help that facilitated my work a lot. It has been my pleasure to get to know you.

by abdallahkelshamy at August 28, 2020 12:25 PM

August 13, 2020

Abdallah Khaled Elshamy


Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

That’s what is done
  • My mentors and I are glad to announce that an initial change set has been pushed to the main repository of Octave here. We are waiting for your feedback. (Note: you must have the development version of RapidJSON.)
What I intend to do
  • Disable the PrettyWriter feature if the release version of RapidJSON is used, instead of disabling the two functions.
  • Add docstrings to the interpreter manual.
  • Move the equals function to the string_vector class.

by abdallahkelshamy at August 13, 2020 11:06 PM

August 06, 2020

Abdallah Khaled Elshamy


Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

That’s what is done
  • I finished writing the jsonencode function in the new “standalone” repository. It now passes all 64 tests in the test suite. The function now encodes:
    • logical scalar
    • NaN, Inf and -Inf
    • numeric scalar
    • containers.Map
    • Structure scalar
    • Structure array
    • Cell scalar
    • Cell array
    • numeric array
    • logical array
    • character vector
    • character array
How to compile and run tests on the code

Right now, the code is treated as an external *.oct file. The integration of the code into Octave’s build system will be done at the end of the project. To compile it:

  • cd into the repo’s directory.
  • run the mkoctfile command with the source file name as an argument.

Octave test files are provided for each function. For example, you can run the one that tests jsonencode by running this command:


The log file “log-jsonencode.txt” in “test” in your repo’s directory will have the data of the failed tests.

What I intend to do
  • Write the documentation of both functions.
  • Integrate the test suite with the code (the test suite is already converted into Octave BIST)
  • Start integrating the functions with Octave’s code base.

by abdallahkelshamy at August 06, 2020 07:32 PM

July 24, 2020

Abdallah Khaled Elshamy


Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

That’s what is done
  • I started writing the jsonencode function in the new “standalone” repository. It now passes 28 of the 42 tests in the test suite. The function now encodes:
    • logical scalar
    • NaN, Inf and -Inf
    • numeric scalar
    • containers.Map
    • Structure scalar
    • Structure array
    • Cell scalar
    • Cell array
How to compile and run tests on the code

Right now, the code is treated as an external *.oct file. The integration of the code into Octave’s build system will be done at the end of the project. To compile it:

  • cd into the repo’s directory.
  • run the mkoctfile command with the source file name as an argument.

Octave test files are provided for each function. For example, you can run the one that tests jsonencode by running this command:


The log file “log-jsonencode.txt” in “test” in your repo’s directory will have the data of the failed tests.

What I intend to do

by abdallahkelshamy at July 24, 2020 06:13 PM

July 18, 2020

Abdallah Khaled Elshamy


Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

That’s what is done
  • Added the “ReplacementStyle” and “Prefix” options to the jsondecode function.
  • Extended the test suite for jsondecode.
  • Added Doxygen comments to the internal functions of jsondecode.
  • Made some modifications to jsondecode after taking feedback from my mentors.
  • Made a new “standalone” repository to facilitate communication, and moved my commits and issues from the old repo to the new one, as I think it is better to preserve the commit history. This is how I moved the files with their commits:
    • I made a clone of the json branch in my Octave’s repository.
    • I looked through the history and files and used an “index-filter” to remove everything except the files I wanted. What remained in my clone after that were just some directories containing only my files. Here is the command I used:
git filter-branch --index-filter 'git rm --cached -qr --ignore-unmatch -- . && git reset -q $GIT_COMMIT -- test/json libinterp/corefcn/ libinterp/corefcn/ ' --prune-empty -- --all

After that, I just moved the files to the desired directories in the new repo and committed the changes.
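The index-filter trick can be seen in miniature in a scratch repository (made-up file names; note that recent Git versions recommend git filter-repo as the successor of filter-branch):

```shell
# Miniature version of the index-filter trick in a scratch repository:
# keep only the json/ directory and drop everything else from every commit.
# File names are made up for illustration.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
mkdir json
echo mine > json/keep.txt
echo other > unrelated.txt
git add -A && git commit -qm "first"
echo more >> json/keep.txt
git add -A && git commit -qm "second"
# Rewrite all refs, keeping only the json/ path in each commit's index:
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f --index-filter \
  'git rm --cached -qr --ignore-unmatch -- . && git reset -q $GIT_COMMIT -- json' \
  --prune-empty -- --all
git ls-tree -r --name-only HEAD    # only json/keep.txt survives
```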

How to compile and run tests on the code

Right now, the code is treated as an external *.oct file. The integration of the code into Octave’s build system will be done at the end of the project. To compile it:

  • cd into the repo’s directory.
  • run the mkoctfile command with the source file name as an argument.

Octave test files are provided for each function. For example, you can run the one that tests jsondecode by running this command:


The log file “log-jsondecode.txt” in “test” in your repo’s directory will have the data of the failed tests.

What I intend to do

by abdallahkelshamy at July 18, 2020 06:55 PM

July 09, 2020

Abdallah Khaled Elshamy


Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

That’s what is done
  • I finished writing the jsondecode function in my repository. It now passes all the tests in the test suite. The function now decodes:
    • null values in non-numeric arrays
    • null values in numeric arrays
    • Boolean values
    • Numeric values
    • String values
    • Array of booleans
    • Array of numbers
    • Array of strings
    • JSON objects
    • Array of objects — Same field names
    • Array of objects — Different field names
    • Array — elements that are of different data types
How to compile and run tests on the code

Right now, the code is treated as an external *.oct file. The integration of the code into Octave’s build system will be done at the end of the project. This is how to compile it:

  • “cd” into “libinterp/corefcn” in your Octave’s code base directory
  • run the “mkoctfile” command with the source file name as an argument

Octave test files are provided for each function. You can run the one that tests jsondecode in the “libinterp/corefcn” directory by running this command:


The log file “log.txt” in “test/json” in your Octave’s code base directory will have the data of the failed tests.

What I intend to do

That’s it for this week. See you next one.

by abdallahkelshamy at July 09, 2020 10:00 PM

July 02, 2020

Abdallah Khaled Elshamy


Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

That’s what is done
  • I started writing the jsondecode function in my repository. It now fails only 6 of the 31 tests in the test suite. The function now decodes:
    • null values in non-numeric arrays
    • null values in numeric arrays
    • Boolean values
    • Numeric values
    • String values
    • Array of booleans
    • Array of numbers
    • Array of strings
    • JSON objects
    • Array of objects — Same field names
    • Array of objects — Different field names
How to compile and run tests on the code

Right now, the code is treated as an external *.oct file. The integration of the code into Octave’s build system will be done at the end of the project. This is how to compile it:

  • “cd” into “libinterp/corefcn” in your Octave’s code base directory
  • run the “mkoctfile” command with the source file name as an argument

Octave test files are provided for each function. You can run the one that tests jsondecode in the “libinterp/corefcn” directory by running this command:


The log file “log.txt” in “test/json” in your Octave’s code base directory will have the data of the failed tests.

What I intend to do
  • Finish jsondecode function.
  • Since I like the approach of “get things done first, then enhance it”, I will start checking whether anything in the code can be written better after taking feedback and re-reading the code.

That’s it for this week. See you next one.

by abdallahkelshamy at July 02, 2020 08:36 PM

June 25, 2020

Abdallah Khaled Elshamy


Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

That’s what is done
  • I finished writing tests for the jsonencode function. I wrote, and extracted from the previous implementations, tests for the encoding of:
    • Structure array
    • Cell scalar
    • Cell array
  • I used those tests to assess the previous implementations of JSON encoding/decoding.
  • I identified the reason for failure of each failing test in the previous implementations.
  • Some decisions about the approach we will follow were discussed on the mailing list.
What I intend to do

That’s it for this week. See you next one.

by abdallahkelshamy at June 25, 2020 08:01 PM

June 18, 2020

Abdallah Khaled Elshamy


Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

That’s what is done

My finals started this week so there isn’t much work done. (I organized this with my mentor.)

  • I started writing tests for the jsonencode function. I wrote, and extracted from the previous implementations, tests for the encoding of:
    • logical scalar
    • NaN, Inf and -Inf
    • numeric scalar
    • numeric array
    • logical array
    • character vector
    • character array
    • containers.Map
    • Structure scalar
What I intend to do
  • Finish writing tests for jsonencode.
  • Run those tests on the previous implementations to assess them and their approaches.
  • Make some decisions with the community and the mentors about the approach we will follow for implementing the required functionality.

That’s it for this week. See you next one.

by abdallahkelshamy at June 18, 2020 10:00 PM

June 11, 2020

Abdallah Khaled Elshamy


Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

That’s what is done

My finals started this week so there isn’t much work done. (I organized this with my mentor.)

  • I finished writing tests for the jsondecode function. I wrote, and extracted from the previous implementations, tests for the decoding of:
    • JSON objects
    • Array of objects — Same field names
    • Array of objects — Different field names
    • Array — elements are of different data types
What I intend to do
  • Start writing tests for jsonencode.

That’s it for this week. See you next one.

by abdallahkelshamy at June 11, 2020 09:54 PM

June 04, 2020

Abdallah Khaled Elshamy


Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

That’s what is done

My finals started this week so there isn’t much work done. (I organized this with my mentor.)

  • I made a branch in my git repository called “test-suite”, to which I will push my tests in the directory “octave/test/json-encode-decode”.
  • Since MATLAB compatibility is a core target for my project, I learned how to write script-based unit tests in MATLAB. This will help me write a test suite that MATLAB can run in order to verify compatibility.
  • I started writing tests for the jsondecode function. I wrote, and extracted from the previous implementations, tests for the decoding of:
    • null values in non-numeric arrays
    • null values in numeric arrays
    • Boolean values
    • Number values
    • String values
    • Array of booleans
    • Array of numbers
    • Array of strings
What I intend to do
  • Continue writing tests for jsondecode.

That’s it for this week. See you next one.

by abdallahkelshamy at June 04, 2020 08:57 PM

May 28, 2020

Abdallah Khaled Elshamy


Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

That’s what is done
  • I finished my experiments with RapidJSON and finished reading the library’s documentation. As a small example, I made an Octave function (written in C++) that adds two JSON objects. A JSON object is a set of key-value pairs. The function accepts objects with numeric values only: values that share a key are added together, while the rest remain unchanged. This is a good warm-up before the coding period, as it exercises two things that are important for my project: creating Octave functions written in C++ and using RapidJSON, which I will use in the project. Enough talking; here is the code:
#include "rapidjson/document.h"
#include "rapidjson/writer.h"
#include "rapidjson/stringbuffer.h"
#include "rapidjson/error/en.h"
#include <octave/oct.h>

using namespace rapidjson;

DEFUN_DLD (addJSON, args, , "adds two JSON objects.")
{
  if (args.length () != 2)
    print_usage ();

  if (! (args(0).is_string () && args(1).is_string ()))
    error ("parameters must be character strings");

  std::string first_json = args(0).string_value ();
  std::string second_json = args(1).string_value ();
  Document d1;
  Document d2;

  d1.Parse (first_json.c_str ());
  if (d1.HasParseError ())
    error ("(offset %u): %s\n",
           static_cast<unsigned> (d1.GetErrorOffset ()),
           GetParseError_En (d1.GetParseError ()));

  d2.Parse (second_json.c_str ());
  if (d2.HasParseError ())
    error ("(offset %u): %s\n",
           static_cast<unsigned> (d2.GetErrorOffset ()),
           GetParseError_En (d2.GetParseError ()));

  if (! (d1.IsObject () && d2.IsObject ()))
    error ("parameters must be JSON objects");

  // checking that the first json object has numeric values only
  for (Value::ConstMemberIterator itr = d1.MemberBegin ();
       itr != d1.MemberEnd (); ++itr)
    if (! itr->value.IsNumber ())
      error ("values must be numbers");

  for (Value::ConstMemberIterator itr = d2.MemberBegin ();
       itr != d2.MemberEnd (); ++itr)
    {
      if (! itr->value.IsNumber ())
        error ("values must be numbers");

      if (d1.HasMember (itr->name.GetString ()))
        {
          // the key exists in both objects: add the two values
          Value& s = d1[itr->name.GetString ()];
          if (s.IsDouble () || itr->value.IsDouble ())
            s.SetDouble (s.GetDouble () + itr->value.GetDouble ());
          else
            s.SetInt (s.GetInt () + itr->value.GetInt ());
        }
      else
        {
          // the key exists only in the second object: copy it over
          Value key (itr->name.GetString (), d1.GetAllocator ());
          Value value (itr->value, d1.GetAllocator ());
          d1.AddMember (key, value, d1.GetAllocator ());
        }
    }

  StringBuffer buffer;
  Writer<StringBuffer> writer (buffer);
  d1.Accept (writer);

  return octave_value (buffer.GetString ());
}
  • I discovered this cool Octave command: __run_test_suite__. It runs the complete test suite of Octave (the one that runs at the end of make check). This is very useful for regression testing.
  • I also prepared my checklist for the test suite. My goal is to make the test suite cover all the conversion cases that jsonencode and jsondecode cover in the official MATLAB documentation (e.g., from the JSON boolean data type to a logical scalar), so my checklist is simply the conversions listed at the end of the documentation of both functions (posting them here would overpopulate the post).
Timeline and Milestones

Since coding will start next week, this is a good time to show you my plan for the project. These are my milestones:

  • 26/6: Deliver test suite (first evaluation period starts on 29/6)
  • 20/7: Deliver jsondecode (second evaluation period starts on 27/7)
  • 05/8: Deliver jsonencode (final week starts on 24/8)

Here is my timeline:

  • 01/6 – 21/6* (final exams), 20 days: Preparing the test suite (7-10 hours/week)
  • 21/6 – 03/7, 12 days: Finalizing the test suite, running tests on the libraries, and creating reliable figures (40-45 hours/week)
  • 03/7 – 06/7, 3 days: Analyzing results and taking design decisions with the mentors (40-45 hours/week)
  • 06/7 – 18/7, 12 days: Implementing jsondecode (40-45 hours/week)
  • 18/7 – 20/7, 2 days: Buffering (40-45 hours/week)
  • 20/7 – 03/8, 14 days: Implementing jsonencode (40-45 hours/week)
  • 03/8 – 07/8, 4 days: Buffering and documenting (40-45 hours/week)
  • 07/8 – 12/8, 5 days: Converting the test suite to Octave BIST (40-45 hours/week)
  • 12/8 – 17/8, 5 days: Cleaning the code and preparing the patch (40-45 hours/week)
  • 17/8 – 31/8, 14 days: Perfecting the patch with the community feedback (40-45 hours/week)
My timeline
What I intend to do

That’s it for this week. See you next one.

by abdallahkelshamy at May 28, 2020 08:52 PM

May 21, 2020

Abdallah Khaled Elshamy


Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

That’s what is done
  • I finished my experiments with Oct-Files by working through a simple example that makes some checks on the input, generates some errors, and manipulates a struct inside the function.
  • I refreshed my knowledge of shell scripting using this tutorial. Here is some useful info:
    • grep -r : This option recursively searches for a pattern. It was useful because it showed me where the macro OCTAVE_CHECK_LIB is defined, so I could figure out its job.
    • which : An awesome feature of Octave is that it implements its own version of the “which” command, which shows the file that contains a specific function.
    • A cool best practice I learned is using command substitution (backticks) to improve performance when you want to run an expensive command once and parse various bits of its output:
find / -name "*.html" -print | grep "/index.html$"
find / -name "*.html" -print | grep "/contents.html$"

This code could take a long time to run, and we are doing it twice!
A better solution is:

HTML_FILES=`find / -name "*.html" -print`
echo "$HTML_FILES" | grep "/index.html$" 
echo "$HTML_FILES" | grep "/contents.html$"
  • I got more familiar with GNU Autotools.
  • I started reading about and experimenting with the RapidJSON library.
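The capture-once pattern from the shell tip above can be tried end to end in a self-contained sketch; the directory layout and file names here are made up. ($(...) is the modern, nestable spelling of the backtick substitution:)

```shell
# Capture the expensive command's output once, then grep it twice.
# The directory layout and file names are made up for illustration.
set -e
tree=$(mktemp -d)
mkdir -p "$tree/a" "$tree/b"
touch "$tree/a/index.html" "$tree/a/page.html" "$tree/b/contents.html"
HTML_FILES=$(find "$tree" -name "*.html" -print)   # run find only once
echo "$HTML_FILES" | grep "/index.html$"
echo "$HTML_FILES" | grep "/contents.html$"
```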
What I intend to do
  • Extend file to check for RapidJSON after some discussions on the mailing list about which macros to use and some build options.
  • Finish my experiments with RapidJSON.
  • Describe in details the parts of the test suite.
  • Find out how to do regression testing.

That’s it for this week. See you next one.

by abdallahkelshamy at May 21, 2020 09:18 PM

May 14, 2020

Abdallah Khaled Elshamy


Hello there, this is my weekly report about my work. In this report, I will show what I did this past week and what I intend to do next week. Let’s get started.

That’s what is done
  • I read GSoC student guide.
  • I set up my public blog.
  • I will be using the GitHub mirror of Octave instead of a Mercurial repo, so I set up my public repo and prepared my local environment.
  • The decision on how to add RapidJSON library to Octave was discussed and made on the mailing list.
  • I started reading about and experimenting with Oct-Files to get familiar with the code base.
What I intend to do
  • Get more familiar with GNU Autotools.
  • Extend file to check for RapidJSON.
  • Get familiar with RapidJSON.
  • Finish my experiments with Oct-Files.

That’s it for this week. See you next one.

by abdallahkelshamy at May 14, 2020 09:39 PM

February 20, 2020


Octave in GSoC 2020

Octave is a mentor organization for Google Summer of Code this year. Applications from students are due by March 31. See the Octave wiki for tips on applying.

by Nir at February 20, 2020 09:55 PM

March 04, 2019


Google Summer of Code 2019: Call for Coders

Octave is in GSoC this year, for our fifth time as an independent organization!

Student applications for the paid summer internships are due 9 April.

Check out the Wiki for potential projects and application instructions.

by Nir at March 04, 2019 03:35 PM

February 24, 2019

Jordi Gutiérrez Hermoso

Exercising software freedom on Firefox

I’m a little unusual. I use Emacs.

That alone is unusual. But I get the impression that even amongst Emacs users, I’m in the minority in another way: I use the default keybindings. I love them. A lot of new Emacs users seem to insist on jamming vim keys into Emacs, but not me. These are my friends: C-p C-n C-f C-b C-a C-e C-k; down up left right start end kill.

I’m so gung-ho about Emacs keybindings that I made them the default keybinding of GTK+, which means that any application that uses GTK+ will respect Emacs keybindings for motion. They also work in anything that uses readline or readline-like input, like bash, python, or psql (postgresql’s default CLI client). Being used to Emacs keys has paid off for me. I have a consistent interface across the software that matters to me.

I’m becoming a minority in another way: I use Firefox. And Firefox uses GTK+. That means I can use Emacs keybindings in Firefox.

Ah, but there’s a rub. Firefox binds C-n (or as most people would call it, “ctrl-n”) to new window. This is probably okay for people who don’t have the intersectionality of Emacs keybindings everywhere and Firefox. But for me, it’s intolerable. If I want to move the cursor down, I have to instead perform a very unnatural-feeling motion of moving my right hand to the arrow keys and hitting the down arrow. For those accustomed to using arrow keys, imagine if every time you pressed the down arrow Firefox would open a new window. Imagine software reacting so at odds to your habituation.

Up until Firefox 56 there was an easy workaround. You could download extensions that would let you configure Firefox’s keyboard shortcuts, including disabling some of them. I used to do this. The world, however, marches on and so does Firefox. Many extensions cannot do what they once did and the easy fix was gone.

I tried to cope, for a while. After all, it’s just one key. I can still use the arrow keys. I tried.

But no. It wouldn’t work. I couldn’t help myself. I often wanted to move the cursor down three or four rows and would accidentally open up three or four new windows. It was even worse because I could move in every other direction and it all felt natural, but if I made the mistake of going down, the software would react in the wrong way. Everything else did it right except Firefox. And one day, I had enough.

Software Freedom

Enough was enough. I had accidentally opened a new window for the last time. I want to go down, you donut! And you won’t stop me anymore!


I had the motivation. I have some skill. We can rebuild Firefox. Make it better. More consistent. We have the technology.

I didn’t want to get involved in Firefox’s build drama, though. I didn’t want to figure out how to clone its repo, how to setup a development environment, how to configure the build, what kinds of builds there are, and how to integrate all of this with my operating system. Luckily, someone else has already done all of this work for me: the Debian packagers.

A Debian package knows what dependencies are required to build a package and has all of the tooling ready to build that package and make it fit exactly with my operating system. Right system libraries, right compilation options, everything. I know how to build Debian packages:

  1. Get the source (apt-get source $packagename)
  2. Get the dependencies (sudo apt build-dep $packagename)
  3. Build the package (dpkg-buildpackage)

Easy enough.

Firefox, the behemoth

As I started following the steps above, something was immediately evident. Firefox is huge. Enormous. Gargantuan. The biggest codebase I have ever seen. At a glance I saw a mix of Python, C++, Rust, and XML which I later came to recognise as XUL (“XUL?” I hear you ask. Yes. XUL. More on this below.) I can see why few dare tread in here.

I, on the other hand, with my motivation going strong, felt undaunted. I would tame The Beast of oxidised metal.

But I wouldn’t do it alone. I know that the Mozilla project still has a fairly active IRC network, so I headed down that way. I started talking about my problem, asking for advice. While I waited for replies, I tried to do it on my own. I figured, GTK+, keybindings, C. I was looking for some C or C++ source file that would define the GTK+ keybindings. I would find this file and destroy the keybinding. I have done something similar in the past for other GTK+ programs.

My solo search proved unfruitful. I couldn’t find anything about new window in C++ source files. I even tried the Rust files, maybe they’ve done something there, but again nothing. My grepping did find new window commands in XML files, but I figured those couldn’t still be in use. Everyone knows it, it’s all over the software news: Firefox disabled XUL as part of its move to a Rust engine.

In the meantime, helpful people from IRC pushed me along my quest and pointed me in the right direction. Yes, XUL is all I needed.

There is no Rust. There is only XUL!

Yep! Firefox has been lying to us! It’s still all XUL. All they’ve disabled is the external interface for extensions, but under the hood, Firefox is still the XUL mess it always was. They say they’re ripping it out, yet the process seems slow.

So I followed the advice. I changed a single XML file. I built the Debian package. I was expecting a long compilation time and I got it. I was worried I wouldn’t have enough RAM for the build, but it looks like 16 gigabytes with four cores (Thinkpad X1 Carbon 5th gen) was enough. People in IRC reassured me that it would take about two hours. They were right! Two hours later, I had a new Firefox in a neat little Debian package. I installed it (dpkg -i *.deb) eager to see the results and…

XML parsing error. Undefined entity.

Oh no! I had made a mistake! All I could do was close this error window. Firefox just wouldn’t start.

However, this confirmed two things. One, the XUL really is still being used. In fact, it’s so important that Firefox won’t even start if you get it wrong. And two… I was on the right track. Modifying XUL could very well get me to my goal of disabling one key.

The error window reminded me a lot of similar errors I had seen in the past when XUL was available to 3rd party extension authors. It seems that not as much as advertised has changed.

[Screenshot: “Bad XUL” XUL parsing error]

I tried again. I had removed the key but I hadn’t removed a few references to that key. Another build. Another two hours. In the meantime, Mozilla employees and enthusiasts in IRC kept asking me if I was doing an artifact build. I said no, that I wanted to learn as little as possible about Firefox’s build process. Turns out that an artifact build is an interesting thing where you download pre-built Firefox components and the build just puts them together, greatly reducing the compilation times.

I had the very specific goals of building a Debian package and not wanting to get too involved in build drama, so I politely refused the suggestions of artifact builds.

I just want my cursor to move down, man.

My second try also didn’t work. I had neglected one further reference to the new window key. I didn’t think it was necessary, but the XML again failed to parse because the key for undoing closing a window is defined in terms of the key for opening a new window. I decided that if I wasn’t going to be opening new windows, I also wasn’t going to undo close them, so I also deleted this reference.

By now it was getting late, I had to sleep, and I couldn’t wait for another two-hour build. I made the change, started the build, and went to bed like a kid excited for Christmas morning.

Free at last!

The morning came. My new build was ready. I installed the third Debian package I built.

This time Firefox started. No more XML errors.

Could it be…?

I went to the first website I could think of that had a textarea element I could try typing in.

I typed some text. I hit enter a few times. I pressed C-p to go back up.

The moment of truth!

I hit C-n.

No new window.

The cursor moved down.


[Screenshot: Victoly! Great success!]

The patch

So here’s the patch, for anyone else who wants it. I made it against ESR (currently Firefox 60) because that’s what’s packaged for Debian stable, but all of these modified files are still there in the current Mercurial repository; I just checked.

diff --git a/firefox-esr-60.5.1esr/browser/base/content/ b/firefox-esr-60.5.1esr/browser/base/content/
--- a/firefox-esr-60.5.1esr/browser/base/content/
+++ b/firefox-esr-60.5.1esr/browser/base/content/
@@ -27,7 +27,6 @@
                 <menuitem id="menu_newNavigator"
-                          key="key_newNavigator"
                 <menuitem id="menu_newPrivateWindow"
diff --git a/firefox-esr-60.5.1esr/browser/base/content/ b/firefox-esr-60.5.1esr/browser/base/content/
--- a/firefox-esr-60.5.1esr/browser/base/content/
+++ b/firefox-esr-60.5.1esr/browser/base/content/
@@ -196,10 +196,6 @@
   <keyset id="mainKeyset">
-    <key id="key_newNavigator"
-         key="&newNavigatorCmd.key;"
-         command="cmd_newNavigator"
-         modifiers="accel" reserved="true"/>
     <key id="key_newNavigatorTab" key="&tabCmd.commandkey;" modifiers="accel"
          command="cmd_newNavigatorTabNoEvent" reserved="true"/>
     <key id="focusURLBar" key="&openCmd.commandkey;" command="Browser:OpenLocation"
@@ -378,7 +374,6 @@
     <key id="key_undoCloseTab" command="History:UndoCloseTab" key="&tabCmd.commandkey;" modifiers="accel,shift"/>
-    <key id="key_undoCloseWindow" command="History:UndoCloseWindow" key="&newNavigatorCmd.key;" modifiers="accel,shift"/>
 #ifdef XP_GNOME
diff --git a/firefox-esr-60.5.1esr/browser/components/customizableui/content/ b/firefox-esr-60.5.1esr/browser/components/customizableui/content/
--- a/firefox-esr-60.5.1esr/browser/components/customizableui/content/
+++ b/firefox-esr-60.5.1esr/browser/components/customizableui/content/
@@ -205,7 +205,6 @@
         <toolbarbutton id="appMenu-new-window-button"
                        class="subviewbutton subviewbutton-iconic"
-                       key="key_newNavigator"
         <toolbarbutton id="appMenu-private-window-button"
                        class="subviewbutton subviewbutton-iconic"
diff --git a/firefox-esr-60.5.1esr/browser/locales/en-US/chrome/browser/browser.dtd b/firefox-esr-60.5.1esr/browser/locales/en-US/chrome/browser/browser.dtd
--- a/firefox-esr-60.5.1esr/browser/locales/en-US/chrome/browser/browser.dtd
+++ b/firefox-esr-60.5.1esr/browser/locales/en-US/chrome/browser/browser.dtd
@@ -298,7 +298,6 @@ These should match what Safari and other
 <!ENTITY newUserContext.label             "New Container Tab">
 <!ENTITY newUserContext.accesskey         "B">
 <!ENTITY newNavigatorCmd.label        "New Window">
-<!ENTITY newNavigatorCmd.key        "N">
 <!ENTITY newNavigatorCmd.accesskey      "N">
 <!ENTITY newPrivateWindow.label     "New Private Window">
 <!ENTITY newPrivateWindow.accesskey "W">

So there you have it. You can still alter Firefox’s XUL. You just have to compile it in instead of doing an extension.

by Jordi at February 24, 2019 06:07 PM

February 21, 2019

Jordi Gutiérrez Hermoso

To Translate Is To Lie, So Weave A Good Yarn

I’m not a professional translator, but I know what I like in fiction.

When I was a Mexican kid in the 1980s we used to get old re-runs of the Flintstones in Spanish. Of course, my English wasn’t very good when I was very young, and I didn’t know them as “the Flintstones” at all. They were “Los Picapiedra” (something like “The Pickstones”), and not only that, but I had no idea who Fred or Barney were. Instead, I knew Pedro Picapiedra and Pablo Mármol (something like “Peter Pickstone” and “Paul Marble”). I liked them, and they felt familiar and comfortable. They spoke with a Spanish accent very close to mine and they used expressions that were similar to how my parents spoke.

It wasn’t until I got older and got more experienced that I realised I had been lied to, like many other lies we tell children. Pedro and Pablo weren’t a caricature of my Mexican lifestyle at all, but of a different, 1950s lifestyle from another country up north. I didn’t exactly feel cheated or lied to, but it was another cool new thing to learn about the world. I still felt much endeared to the original names and to this day, if I have to watch the Flintstones, I’d much rather view them as Los Picapiedra instead.

Other Lies I Grew Up With

This wasn’t the only time this happened. Calvin & Hobbes fooled me too. This time their names didn’t change, but their language did. Calvin spoke to me from the comic book pages with a hip, cool Mexico City slang like other kids my age would use to elevate themselves in the eyes of other kids. Calvin talked about the prices of candy and magazines in pesos, with peso amounts appropriate for the time of publication, and used phrases like “hecho la mocha” (something like “made a blur”) when he said he was gonna do something very quickly. His mother sounded like my mother. This time the deception was even better, and for the longest time I honestly thought Calvin was a Mexican kid like me.

And there were others. The Thundercats were Los Felinos Cósmicos (something like “Cosmic Felines”), the Carebears were Los Ositos Cariñositos (something like “The Little Loving Bears”), and The Little Mermaid was La Sirenita (interesting how mythological sirens and mermaids are different in English but not in Spanish).

Again, as I grew up, so did my languages, and I was able to experience the other side of the localisation. It was always a small revelation to realise that the names I had known were an alteration, that the translators had taken liberties, that the stories had been subtly tampered with. In some cases, like with Calvin, I was thoroughly fooled.

The Translator’s Task

I’m of the opinion that the translators and localisers of my youth performed their task admirably. A good translator should be a good illusionist. Making me believe that Calvin was Mexican or that the Flintstones could have been my neighbours is what a good translator should do. Translation is always far more than language, because languages are more than words. A language always comes with a culture, a people, habits and customs. You cannot just translate words alone; you have to translate everything else.

Only bad translators believe in the untranslatable. Despite differences in language, culture, and habits, a translator must seek out the closest points of contact across the divide and build bridges on those points. When no point of contact exists, a translator must build it. A new pun may be needed. The cultural references might need to be altered. If nothing else can be done and if there is time and space for it, a footnote can be the last resort, when a translator admits defeat and explains the terms of their surrender. Nothing went according to keikaku.

The world has changed a lot since I was a child. It has gotten a lot bigger. We have more ways to talk to each other. As a result, it’s getting harder for translators to perform their illusions.

Modern Difficulties of Translation

With the internet and other methods of communication, a more unified global presence has become more important. Translations now have to be more alike to the source material. Big alterations to characters’ names or, worse, to the title of the work, are now out of the question.

Thus we get The Snow Queen becoming Frozen, because it’s good marketing (things didn’t go so well last time we made a title about a princess or a queen), and Frozen she shall be in Spanish as well, leaving Spanish speakers to pronounce it as best they can. As a small concession, we will allow the forgettable and bilingually redundant subtitle “Una Aventura Congelada” (something like “A Frozen Adventure”), but overall, the trademark must be preserved. There’s now far too much communication between Spanish and English speakers to allow the possibility of losing brand recognition.

Something similar and strange happened with the localisation of Japanese pop culture. We went from Japanimation to anime, from comics to manga. The fans will no longer stand for a good lie in their stories, and while we will grandfather in Megaman instead of Rockman or Astroboy instead of Mighty Atom, from now on new material must retain as foreign a feeling as possible, because we now crave the foreign. It doesn’t matter if we really can understand it as closely as the Japanese do, because we crave the experience of the foreign.

The reverse also happens and the Japanese try their best to assimilate the complicated consonants of English into their language, but they have had more practice with this assimilation. Their faux pas have been documented on the web for the amusement of English speakers.

When Lies Won’t do

I should be more fair to translators. Sometimes, a torrent of footnotes is all that will work. Of course, this should be reserved for the written word. Such is the case of the English translation of Master and Margarita. The endless stream of jokes making fun of Soviet propaganda and Soviet life is too much of a you-had-to-be-there. Explaining the jokes sadly makes them no longer funny, but there’s no other recourse except writing a completely different book, far removed from the experience of a modern Russian reading a Soviet satire.

But it doesn’t have to be this way. The Japanese translation of Don Quixote works without burdening the readers with the minutiae of life from a time long, long ago, in a country far, far away. Don Quixote’s exaggerated chivalric speech is rendered in Japanese translations as samurai speech. Tatamis suddenly appear in a place of La Mancha that I don’t care to call to mind.

And that’s the best kind of translation. The one that works and makes the fans love it, that makes them feel like they belong in this translated world.

by Jordi at February 21, 2019 01:59 AM

September 07, 2018

Sudeepam Pandey

GSoC: final post

Welcome to the final post regarding my Google Summer of Code 2018 project. In this post, I'd like to talk about the overall work product and how it corresponds (or varies) from the original plan. Then, I would like to acknowledge some suggestions of my mentors and talk about some new ideas that were recently discussed with them.

However, before talking about any of those things, I'd like to share the code that was written over the last twelve weeks. So here is the link to my public repository where all the code can be found, and here is a patch that can be merged with the main line of development.

Now, coming to the final work product: functionality-wise, the feature turned out to be exactly what it was supposed to be, a fast and accurate way to suggest corrections for typographic errors made while working in the command window of Octave. The difference, however, was in the way of implementation.

My original idea was to make a Neural Network for this problem and I did go to some lengths to make that happen. Precisely, I did collect some data about the most common typographic errors made by Octave users and did code up a small model that could learn the correct spellings of a few commands of Octave. At the time, the motivation behind the Neural Network model was to have an algorithm that could work better than the existing algorithms that are used to compare two strings, in terms of the speed-accuracy trade-off.

However, during the community bonding period, some flaws in my Neural Network implementation were pointed out by a few members of the Octave community. As a student who wants to pursue a career in data science, those counterpoints, and the further research done on the Neural Network approach during the third phase of coding, turned out to be invaluable, for they taught me that 'Neural Networks + data' is not a magical combination that solves every problem in this world. Maybe they can, but sometimes simpler, more optimal solutions exist, and in those times, one must look at those solutions and optimize them further for the problem at hand. Somewhere down the line, this also gave me a better understanding of the nature of Neural Networks.

Now, coming back to the technical details of this project: to summarize it all, I used the faster variation of the edit distance algorithm, the one that uses dynamic programming, and optimized it further by reducing the sample space on which the algorithm had to work. To reduce the sample space, I analyzed the data I had originally collected to make a Neural Network, and based on the results of the analysis, I was able to make certain assumptions about the misspellings. These assumptions, coupled with some clever data organization techniques, helped me code up a fast and yet very accurate version of the edit distance algorithm. One can read about this implementation in great detail in the previous blog posts.
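The post links to the actual code rather than inlining it; purely as an illustration (a Python sketch of my own, not the project's m-scripts or C++), the dynamic-programming edit distance it refers to looks like this:

```python
def edit_distance(a, b):
    """Wagner-Fischer dynamic-programming edit distance in O(len(a)*len(b)).

    Counts the minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b.
    """
    m, n = len(a), len(b)
    prev = list(range(n + 1))          # distances from a[:0] to every prefix of b
    for i in range(1, m + 1):
        curr = [i] + [0] * n           # distance from a[:i] to the empty string
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete a[i-1]
                          curr[j - 1] + 1,     # insert b[j-1]
                          prev[j - 1] + cost)  # substitute (or match for free)
        prev = curr
    return prev[n]

print(edit_distance("kitten", "sitting"))  # → 3
print(edit_distance("plto", "plot"))       # → 2 (plain Levenshtein counts a transposition as two edits)
```

The sample-space reduction described above then only runs this comparison against a pre-filtered subset of the identifier database, rather than against every known command.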

The plan was to replace this algorithm with Neural Networks during the third phase, 'if' they happened to perform better. As of now, however, I found no way to make a Neural Network perform better than what had already been made, and so the suggestion engine still uses my original algorithm.

Additionally, I had to write the documentation and the tests for all of my code during the third phase of coding and I am glad to say that this work has been successfully completed. The main documentation for the m-scripts can be seen in the help text of those scripts. Besides that, I've also written down the documentation for the database file in a markdown file that is included with the database.

I must acknowledge the fact that Nick had guided me very well on how the documentation should be done, during the second phase evaluations. I did keep his guidance in mind while writing the documentation and the tests, and have, hopefully, made a well documented, well tested product.

Now, although the main documentation should be enough for anyone who wishes to understand how the feature works, if any additional help is required by anyone, the previous posts of this blog (which contain a very detailed explanation), and the public mailing list of Octave (which I shall continue to follow), should be a good place to visit.

During the community bonding period, Rik and I had discussed the importance of an on/off switch for this feature. This switch was already created by the time the first evaluations took place, but during the third phase, I took some time to wrap up this toggling command into a nice m-script. The users can now do a simple >>command_correction ("off") to switch off the feature and do a simple >>command_correction ("on") to turn it back on.

Next, I'd like to talk about something that Doug recently mentioned to me. He asked me if I could think of some way in which we can track the identifiers that don't get resolved by my software. Essentially, this problem is directly related to the maintenance of the database file. With Octave under constant development, new identifiers will be created and some identifiers will deprecate as well. To make sure that the correction suggestion feature does not loose its value, the database of the identifiers would have to be updated in some regular intervals of time. Maybe an update every 6 months would be enough.

Currently, I've included a markdown file with the database that explains how this update can be done, and for now, this update could be done manually only. For now, I cannot not think of a way in which the database file gets automatically updated. Later on, maybe I or someone else could come up with a way to make a program read the release notices of Octave and its various packages and then modify the database accordingly. Maybe this could be a GSoC project for a future batch of students?

So in conclusion, the planned part of the project is absolutely complete and we have already started thinking of ways in which this feature can be improved. For further testing of the current implementation of the feature, I'd need the support of the members of the community. I would really appreciate it if anyone could try this feature for themselves and see if they could break it, or find any other kind of bugs, or maybe suggest some changes to the suggestion engine that could speed up the feature, or, maybe do something as small as pointing out some pieces of code where the coding style has not been followed properly.

Finally, I'd like to thank the Octave community. Working with them was an invaluable learning experience and I hope to be able to continue to associate myself with them for the years to come. :)

by Sudeepam Pandey ( at September 07, 2018 04:10 PM

August 13, 2018

Erivelton Gualter

Final Post

The Google Summer of Code program is over, and I am positive that I have gained a great deal of experience in this period. I have also done significant work for GNU Octave on Sisotool. Therefore, in this last post I will go over the project, describe …

by Erivelton Gualter at August 13, 2018 05:00 AM

July 06, 2018

Erivelton Gualter

Second Evaluation - week 8

So, here is my last post before the second evaluation. If you have been following my blog or the Octave blog, you know that the purpose of this Google Summer of Code project is to create an Interactive Tool for Single Input Single Output (SISO) Linear Control System Design. Also …

by Erivelton Gualter at July 06, 2018 04:36 PM

July 03, 2018

Sudeepam Pandey

GSoC project progress: part three

The goal for the second evaluations was to code up a complete, working command line suggestion feature that supports identifiers and graphic properties of core Octave and all the Octave Forge packages. I am happy to say that this goal has been achieved and we do have a working suggestion feature now. The link to the public repository where the code can be found is this.

If you haven't already, you should read my previous posts to find out what the community wanted the feature to look like and how much progress had been already made. You may need that to understand the contents of this post. In this post, I would like to talk about the additional work that has been done and the work that will be done in the days to come.

At the time of the first evaluations, one of my mentors, Nicholas, expressed how he would be interested in seeing how the rest of the project progresses, including the aspects related to user interface and maintainability of the code by other developers. I'd like to address these points first.

So the UI is relatively simple. You enter a misspelling and some suggestions are displayed. We could have tried adding some GUI pop-ups, but I refrained from doing so. There were two primary reasons for that.
  • The first reason is that a GUI pop-up looks very unpleasant when you are working on the CLI of Octave, but honestly, that is more of a personal opinion, I suppose.
  • Second, and the stronger reason, is that adding a GUI pop-up would have been a really complicated task due to the way Octave handles errors, and would have resulted in things like the "undefined near line..." error message being displayed for the misspelled command after the correct command had been executed.
There are some other reasons as well which have been discussed with the members of the community before. Obviously we can try changing things later on, if we really want to, but as of now, suggestions are simply displayed and the user can just use the upward arrow key of their keyboard and edit the previous command to quickly correct their misspelling.

I have accounted for code maintainability as well. I moved a few pieces of code here and there (see the commit log) and have structured the feature so that all the code related to the UI, i.e. how the feature presents itself to the user, is in one file (scripts/help/__suggestions__.m), and all the code related to the suggestion engine, which generates the possible corrections for the misspelled identifier, is in another (scripts/help/__generate__.m). A lot of comments have been included in the code, and the code is simple enough to be read and understood by anyone who knows how the Octave or MATLAB programming language works. Another important point is that all the graphic properties and identifiers of Octave core and Forge with which a misspelling can be compared have been stored in a database file called func.db (examples/data/func.db). I had described this file in my previous post.

Such an implementation makes maintenance very easy. If UI changes are required, changes need only be made to the file __suggestions__.m. If the algorithm of the suggestion engine has to be changed, changing the code of __generate__.m is enough. And if new identifiers are added to Octave (something that will happen constantly), including them in the well-organized database file (which can very easily be done with a load > edit > save) suffices.

Now I'd like to describe the other tasks that have been done in this coding phase. These include adding the support for the remaining packages of Octave forge and adding support for the graphic properties.

Including the remaining packages of Octave Forge was very easy: all I had to do was fetch the list of identifiers, clean up the data a little, and include it in the database file.

The challenging part was adding support for graphic properties, mainly because it required me to write C++ code for a missing_property_hook() function that had to be similar in architecture to the already existing missing_function_hook() function.

In the codebase of Octave, missing_function_hook() is a function that points to a particular m-script which is called when an unknown command is encountered by the parser. Like I had described earlier, I had extended its functionality to trigger the suggestion feature when an unknown identifier was found. The missing_property_hook() had to do something similar, call a certain m-script when an unknown graphic property is encountered.

Rik helped a lot with this part, and finally I was able to code up a missing_property_hook() function that triggers the suggestion feature when an unknown graphic property is encountered. Although the code does what it is supposed to do, I'll be honest here and say that this part is still a bit of a black box to me. I'd appreciate it if some other maintainer who is good with C++ and familiar with the code of the missing_function_hook() function would take a look at missing_property_hook() and point out or fix any issues they find.

I'd like to mention that the suggestion feature differentiates between the levels of parsing, i.e. whether the trigger is an unknown property or an unknown command, by looking at the number of input arguments. The rest of the functionality is same.

With all these things done, I was able to realize a complete, working command line suggestion feature and complete the goal that was set for the phase two evaluations. Future work planned for phase three of coding includes writing the documentation, writing some tests, fixing any and every bug that is reported, and seeing if I could use a better algorithm for the suggestion engine. An additional thing I would like to do is to nicely wrap up the on/off switch and other such user settings into a single m-script for a better user experience.

Since the phase two work is done, I'll start working on these things that have been planned for phase three from tomorrow onwards. I'll publish another post when I make some more significant changes, till then, thank you for reading and goodbye.

by Sudeepam Pandey ( at July 03, 2018 10:09 AM

July 01, 2018

Erivelton Gualter

Edit Compensantor Dynamics

So far, to design a controller using sisotool we need to select the desired feature to add to the compensator, such as real and complex pole or zero. In order to perform this task, we have two options. First, we can go to the main tab and select the feature …

by Erivelton Gualter at July 01, 2018 02:34 PM

June 24, 2018

Erivelton Gualter

Back to Coding

Results from the first evaluation came 9 days ago. All three GSoC students were successful! For the readers of my blog, you can find them at. If you are already a reader of Planet Octave, you are in the right place.

The feedback from my …

by Erivelton Gualter at June 24, 2018 06:19 AM

June 11, 2018

Sudeepam Pandey

GSoC project progress: part two

In my previous post, I talked about all the major discussions that have been made with the community, what the suggestion feature would be like, how I plan to realize this feature, and how I have extended the functionality of the scripts/help/__unimplemented__.m function file to integrate the command line suggestion feature with Octave. In this post, I would like to share my progress and talk about how the current implementation of the suggestion feature is working. The link to the public repository that contains the code for this feature can be found here.

The goal for the first evaluations was to code up a small model that would show how this feature integrates itself with Octave. That part, however, was completed by the time I made my last blog post. I have been working on a full-fledged command line suggestion feature since then, and so far I have been able to complete a working command suggestion feature that supports identifiers from core Octave and 40 Octave Forge packages. Let's start looking at the various parts of the feature.

Whenever the __unimplemented__.m function file fails to identify whatever the user entered as a valid, unimplemented Octave command, it calls one of my m-scripts, __suggestions__.m, and the command suggestion feature gets triggered. This script, __suggestions__.m, does the following things...
  • Firstly, based on the setting of the custom preference (set by the user with the command setpref ("Octave", "autosuggestion", true/false)), it decides whether or not to display any suggestions. If the preference is 'false', it realizes that the user has turned off the feature and so it returns control without calculating or displaying any suggestions. 
  • However, if the preference is true, it checks whether whatever the user has entered is at least a two-letter string. If not, it again returns control without calculating or displaying any suggestions. This is done because a one-letter string is unlikely to be a misspelled form of some command.
  • However, if the string entered by the user is two letters or more, the script goes on to calculate the commands that closely match the misspelling. The work of calculation is done by a different script, and __suggestions__.m only calls that script to get the closest matching commands. These commands are then displayed to the user as potential corrections.
  • If the misspelling is short (length of the misspelling < 5), the script entertains one typo only. However, if the length of the misspelling is greater than or equal to 5, two typos are entertained as well. This essentially means that for short misspellings, commands which are at an edit distance of 1 from the misspelling are shown as potential corrections, and for relatively longer misspellings, commands which are at an edit distance of 2 from the misspelling are also shown as potential corrections.
Commands that closely match the misspelling are calculated by a different m-script, called __generate__.m. It loads a list of possible commands from a database file called func.db and then calculates the edit distance between the misspelling and each entry of the list using another script called edit_distance.m. The commands with an edit distance of one or two are accepted as close matches, and a list of all such commands along with their edit distances is returned to __suggestions__.m, which displays some or all of these suggestions depending on the logic described before.
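The edit distance used throughout is the classic Levenshtein distance, computed with dynamic programming. As an illustrative sketch only (the project's actual implementation is the edit_distance.m m-script in Octave, not this Python):

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b, O(len(a)*len(b))."""
    m, n = len(a), len(b)
    # dist[i][j] = edit distance between a[:i] and b[:j]
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i                      # i deletions
    for j in range(n + 1):
        dist[0][j] = j                      # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,          # deletion
                             dist[i][j - 1] + 1,          # insertion
                             dist[i - 1][j - 1] + cost)   # substitution
    return dist[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```

A candidate command is then accepted as a close match when this distance is 1 or 2, per the thresholds described above.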

I'd like to mention that the strings package of Octave Forge also has a function file that calculates the edit distance, called "editdistance.m". Therefore, to avoid compatibility issues and to avoid having two different function files that do the same thing, I will later fold the edit_distance function that I wrote into the __generate__.m script.

Improving the speed of the generation script 

If we went on and calculated the edit distance between the misspelling and each and every identifier of Octave (core + Forge), our algorithm would take nearly 20 years to generate an output for every typographic error that the user makes. We, however, would like the time to be 20 milliseconds or so. For that, we use some smart techniques that reduce the sample space on which the algorithm has to operate.

To reduce the time, I've made a small assumption: the user never mistypes the first letter of a command. A rough analysis of the misspelling data that I received from Shane of octave-online before the commencement of the project suggests that this is a reasonable assumption and would hardly reduce the accuracy of the suggestion feature. How good is this assumption for the speed? Well, for a misspelling starting with the letter 'n', this small assumption reduces the sample size from 1492 to 36 (and that is not the best case!). The worst case was that of the letter 's', for which 178 out of 1492 commands were left. Even that corresponds to an 88% reduction in the sample size.

It is important to mention that doing this alphabetical separation at run-time would be a redundant task and a stupid idea that would put the algorithm right back at 20 years.

Another thing we should do to improve the speed is to show suggestions from Octave core plus loaded packages only. Obviously it is not a good idea to check among the commands that belong to a package which the user is not currently using (or worse, a package that is not installed on the user's machine).

Keeping these things in mind, I have created the func.db database file in such a way that the commands belonging to different packages are stored in different structures and are alphabetically separated as fields of each structure. For example, func.db contains a structure called control which holds the identifiers from the control package only, another structure core which holds the identifiers of core Octave only, another structure signal which holds the identifiers of the signal package only, and so on. The field a of the control structure (accessed by typing control.a) contains all the identifiers of the control package starting with 'a', the field b (accessed by typing control.b) contains those identifiers of the control package that start with 'b', and so on. This has been repeated for all the packages available.

To make our __generate__.m script memory efficient as well, we load the core structure (which is always required), check for the loaded packages, and load the structures corresponding to the loaded packages only. Then, using a switch statement, we fetch all those commands which have the same first letter as the misspelling (in O(1), thanks to the way the database is arranged) and proceed to the next step.
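To illustrate the idea behind this layout, here is a rough Python sketch with made-up package contents (the real func.db is an Octave database of structures, not Python; all names and entries below are invented for illustration):

```python
# One mapping per package, with identifiers bucketed by first letter
# so that retrieval by first letter is a single O(1) lookup.
database = {
    "core":    {"p": ["plot", "pinv", "pkg"], "z": ["zeros"]},
    "control": {"a": ["acker", "augstate"], "b": ["bode"]},
}

def candidates(misspelling, loaded_packages):
    """Fetch only identifiers sharing the misspelling's first letter,
    from core plus the currently loaded packages."""
    first = misspelling[0]
    result = []
    for pkg in ["core"] + loaded_packages:
        result += database.get(pkg, {}).get(first, [])
    return result

print(candidates("plto", ["control"]))  # ['plot', 'pinv', 'pkg']
```

Only the returned candidates are ever compared against the misspelling with the edit distance routine, which is where the large speedup comes from.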

To understand the next step, we need to understand that if a misspelling is of length p (say), and we are accepting corrections that are at an edit distance of one or two from the misspelling, then the corrections can only have the following lengths...
  • p-2: Two deletes in the misspelling.
  • p-1: One delete and one substitution, or one delete only.
  • p: One delete and one addition, or one or two substitutions.
  • p+1: One addition and one substitution, or one addition only.
  • p+2: Two additions to the misspelling.
This fact allows us to reduce the list further and cuts out some 5-10 more entries for normal-length misspellings. This logic, however, is particularly useful for long misspellings, because commands with large lengths are relatively few in number. If a user misspells the command "suppress_verbose_help_message", the script would take days to suggest a correction without this logic, because the edit distance algorithm is O(n1*n2) with dynamic programming, where n1 and n2 are the lengths of the strings being compared. This O(n1*n2) computation is repeated m times, where m is the number of possible commands that could be close matches. With this logic, however, the possible list is cut down to one or two commands only; thus the value of m is reduced and the close matches are found within one or two iterations.
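The length pruning above can be sketched in a few lines of Python (illustrative only; the function name and sample commands are invented):

```python
def length_filter(misspelling, candidates):
    """Keep only candidates whose length differs from the misspelling's
    length by at most 2 -- the only lengths reachable within edit
    distance 2, per the p-2 .. p+2 cases listed above."""
    p = len(misspelling)
    return [c for c in candidates if abs(len(c) - p) <= 2]

print(length_filter("plto", ["plot", "zeros", "suppress_verbose_help_message"]))
# ['plot', 'zeros']
```

Because this filter is a simple length comparison, it is far cheaper than the O(n1*n2) edit distance it avoids running on hopeless candidates.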

That summarizes all the measures that I have taken to improve the speed of the suggestion feature. The control flow was described before this, and so that concludes the working of the suggestion feature.


This concludes phase one. What's left is to include more forge packages and to include graphic properties within the scope of this feature. Writing the documentation, writing the tests, and debugging also remains but these shall be the tasks for subsequent coding phases. Till then, goodbye, see you in the next blog post. :)

by Sudeepam Pandey at June 11, 2018 10:03 AM

June 10, 2018

Erivelton Gualter

First Evaluation - week 4

So, here is my last post before the first evaluation. If you have been following my blog or the Octave blog, you know that the purpose of this Google Summer of Code project is to create an Interactive Tool for Single Input Single Output (SISO) Linear Control System Design. Also, well-known …

by Erivelton Gualter at June 10, 2018 04:36 PM

Sudeepam Pandey

GSoC project progress: part one


An Initial note....

Alright, so first of all, I would like to apologize for not writing a proper blog post up till now. I had my final examinations during the first week of the coding period, and immediately after that, to catch up, I got so involved with the coding that I forgot to share the progress of the project on the blog. On the positive side, however, I have completed a lot of work. I can safely say that I have completed the goals that were set for the phase 1 evaluations (possible style fixes may be left), but that's not all the good news: the phase two evaluation goal is also halfway done!

Now, I do realize that I have not shared any details of my project until now, and so, in this blog post, I'll share a lot of important details and talk about everything that has been discussed and done so far. I promise to post more often after this 'cumulative' post. Here goes...

The Project Idea....

If you've read the last blog post, you'd know that I plan to add something called a 'command line suggestion feature' to Octave, and you may be wondering what that means. Basically, this feature would do something like this...

Whenever the users make a typographic error while working on Octave's command window, the command line suggestion feature would suggest a correction to them and say something like "The command you entered was not recognized, did you mean any of the following...?"

Now, I could share a detailed timeline explaining 'when' I plan to do 'what', but I believe that not everyone would be interested in reading that, so I'll skip it for now. Instead, I'll quickly talk about the following...
  • What the community wants the overall project to be like.
  • What are the challenging parts of the project.
  • What are my evaluation goals.
  • What discussions have been made, and
  • How much progress has been made.
If you really would like to see my time-line then just ask for it in the comments section and I'll share a link.

The Community Bonding Period...

By the time you finish reading this section, probably the only thing left to talk about would be "How much progress has been made". That is just a glimpse of how much the community has been involved in this project. It also shows how successful GNU Octave is as an open source community, not every open-source community is as open when it comes to discussions.

Now the first thing to understand is that this project is essentially a UX improvement, and as such, Octave is not bound by 'MATLAB compatibility issues'. This is one of the primary reasons why there was so much to discuss in the community bonding period. Here are the main points that summarize the collective decision of the community on what the overall project should be like:
  • First of all, it was decided that the user interface, i.e. the part handling 'how this feature hooks itself to Octave', should be well separated from 'how the suggestions are generated'. This need became apparent as soon as we realized that there are a lot of algorithms available that could be used to generate suggestions. Separating the integration and generation parts lets us make sure that, if a faster or more accurate algorithm for generating suggestions is discovered in the future, replacing the existing implementation becomes easier.
  • Secondly, a few problems, such as a very large output layer size and failure on dynamic package loading, were found with my proposed Neural-Network-based approach. Therefore, we decided to use a well-established approach, the edit distance algorithm, for now, and the Neural-Network-based approach will be the 'research part' of the project. Essentially, the plan is to first use 'smart implementations' of the good old edit distance algorithm to realize this feature, and after that to research and see whether a Neural Network could do better. If we later realize that a Neural Network (or, for that matter, any other approach) really can do better than the edit distance approach, the algorithm can be replaced very easily (thanks to the previous point).
  • Next, we decided to include keywords, functions, and graphic properties within the scope of this feature. Very short keywords, user variables, and internal functions will not be included in its scope. Deprecated functions would also be included in the scope for now. Essentially, corrections would be suggested for typos close to anything that is within the scope of this feature and would not be suggested for anything that isn't.
  • Also, we decided to use the missing_function_hook() to realize the integration part of this feature. More about this later in this post.
  • Lastly, we decided that it is absolutely necessary to include an 'on/off switch' type of command that would let the users decide whether they want to use this feature or not. We plan to use custom preference for now to do this.
That summarizes the most important discussions that took place and with that, we are in a position to talk about how the second point and the last point are directly related to what are the 'challenging parts of the project'. Let's start with that.

Essentially, the second point talks about the algorithm that will be used to generate the corrections that are ultimately shown to the user. The challenging part is that this algorithm should minimize the speed-accuracy trade-off. I did know about the edit distance algorithm beforehand, but I initially believed that a Neural Network would outperform it in terms of that trade-off. Discussing the idea with the community made me realize that there are some critical loopholes in the Neural-Network-based model, and although they could definitely be improved with more research, I should not jeopardize the entire project just to prove that Neural Networks could do better. We therefore decided to do what I described earlier in the second point.

At this point, defining a 'smart implementation' of edit distance remains. Basically, edit distance is a very accurate algorithm that quantifies how dissimilar two strings are. The only problem with it is its speed (my primary reason for initially proposing a trained Neural Network). Essentially, by a smart implementation of the algorithm, we mean an implementation which maximizes the computation speed by reducing the sample space on which the algorithm has to work. This would be done using some clever data management techniques and some probability-based assumptions. Some discussions related to these also took place during the community bonding period, and since then, I have been looking at the suggestion features of many other free and open source programs to devise some clever techniques. Good progress has been made, but I'll share that in another blog post.

The last point talks about a very important 'on/off' feature, the tricky part with this was that Octave comes in both a GUI and a CLI and so a common method that does the job could have been hard to find. However, this problem was solved with relative ease, and we decided to use custom preference to realize this part. This gave us a simple and common command to switch on/ switch off the feature.

These discussions led me to revise my term evaluation goals, which are now as follows:
  • Phase-1 goal: To code up and push an algorithm independent version of the suggestion feature into my public repository. Essentially this would show how this feature integrates itself with Octave.
  • Phase-2 goal: A development version of Octave with a working (but possibly buggy and surely undocumented) command line suggestion feature integrated into it.
  • Phase-3 goal: The primary goal would be to have a well documented, well tested and bug free command line suggestion feature. The secondary goal would be to research and try to produce a Neural Network based correction generation script that outperforms the edit distance algorithm.
...and that, marked the end of the major discussions and the community bonding period.

Progress made so far...

So far, I have coded up the phase-1 goal. The public repository can be seen here. It shows clearly how we have used the missing_function_hook() to integrate the feature with Octave. The following points summarize how it works:
  • Essentially, whenever the parser fails to identify something as a valid Octave command, it calls the missing_function_hook(), which points to an internal function file, '__unimplemented__.m'.
  • This file checks if whatever the user entered is a valid, unimplemented Octave (core or Forge) command, or if it is an implemented command that belongs to an unloaded Forge package. If yes, it returns an appropriate message to the user; if not, it does, or rather, used to do, nothing.
  • To realize the suggestion feature, I have extended its functionality to check for typographic errors whenever the command entered is not identified as a valid unimplemented/Forge command.
By using the missing_function_hook(), the keywords and built-in functions were automatically brought into the scope of this feature. Graphic properties remain, because there is no missing_property_hook() in Octave right now. I have discussed this with the community and I'll try to code it up in the subsequent weeks.
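For illustration, the dispatch chain described in the bullets above can be sketched in Python with invented names and data (Octave's real mechanism is C++ internals calling __unimplemented__.m via missing_function_hook; nothing below is the actual implementation):

```python
# Hypothetical stand-ins for the checks __unimplemented__.m performs.
KNOWN_UNIMPLEMENTED = {"simulink"}       # known but unimplemented in Octave
FORGE_UNLOADED = {"bode": "control"}     # implemented, but package not loaded

def suggest(name):
    # Stand-in for the new suggestion feature (__suggestions__.m).
    return f"'{name}' not recognized; did you mean any of the following...?"

def unimplemented_hook(name):
    # Mimics the extended __unimplemented__.m: check the known cases
    # first, then fall through to the typo-suggestion step.
    if name in KNOWN_UNIMPLEMENTED:
        return f"'{name}' is not implemented in Octave."
    if name in FORGE_UNLOADED:
        pkg = FORGE_UNLOADED[name]
        return f"'{name}' belongs to the '{pkg}' package; load it first."
    return suggest(name)

print(unimplemented_hook("plto"))
```

The key design point is the last line of the hook: only after every legitimate interpretation of the input has been ruled out does the suggestion machinery run.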
Besides that, I have also figured out how the edit distance algorithm can be made 'smart'. I'll push an update and write another blog post as soon as I master and code up the entire thing. For now, thanks for reading, see you in the next post. :)

by Sudeepam Pandey at June 10, 2018 02:37 PM

June 03, 2018

Erivelton Gualter

Getting closer to the First Evaluation - week 3

The first evaluation period is around the corner. As proposed in the timeline from my first post, the work I have been doing is on schedule.

For this past week, I added some functionalities to the Root Locus Editor. This time, the user can add: real poles, complex poles …

by Erivelton Gualter at June 03, 2018 10:48 AM

May 29, 2018

Erivelton Gualter

Plots are Working - week 2

Here we go, one more week of code. This week I continued my work from the previous week on the interface of sisotool. Just reiterating what was done last week: I created a couple of GUIs to understand a little better how the UI Elements work in Octave. For this week …

by Erivelton Gualter at May 29, 2018 05:38 PM

May 21, 2018

Erivelton Gualter

Code begins - week 1

The first week of coding has been completed.

As I mentioned in the last post, the goal of this previous week was to create a fixed layout to study the plot diagrams for the sisotool, as well as to add some UI Element functionality to control the interface. The following …

by Erivelton Gualter at May 21, 2018 04:27 PM

May 15, 2018

Erivelton Gualter

Community Bonding Period

The community bonding period is over. The past 3 weeks were really busy because of my finals week, final projects, and my doctoral research. However, I basically completed everything I wanted to before “Coding officially begins!”:

  • Finished Optimal Control and Intelligent Control System classes;
  • Submitted a conference …

by Erivelton Gualter at May 15, 2018 04:16 PM

April 26, 2018

Sudeepam Pandey

Starting with GSoC 2018.

So this year, I applied to the Google Summer of Code and got in. Google Summer of Code, or GSoC, as it is usually called, is a program funded by Google that has helped open source grow for over a decade. Under this program, Google awards stipends to university students for contributing code to open source organizations during their summer breaks from university. The details of the program can be found here: Starting with Google summer of code.

Now, this year I have been selected to work with GNU Octave. It is a free and open source software / high-level programming language primarily focused on scientific computing. It is largely compatible with MATLAB and is a brilliant open source alternative to it. More details about GNU Octave can be found at Free your numbers! Introducing GNU Octave.

My GSoC project is about adding a Command line suggestion feature to GNU Octave. Stay tuned, I will share the details of the project very soon.

by Sudeepam Pandey at April 26, 2018 12:57 PM

April 23, 2018

Erivelton Gualter

Welcome to Octave and Google Summer of Code

This summer I got accepted to the Google Summer of Code under GNU Octave. This program, administered by Google, facilitates the introduction of students to the open source community. My primary goal in participating in GSoC is to build a long-term relationship with the open source community …

by Erivelton Gualter at April 23, 2018 07:46 AM


March 06, 2018

Jordi Gutiérrez Hermoso

Advent of D

I wrote my Advent of Code in D. The programming language. It was the first time I used D in earnest every day for something substantial. It was fun and I learned things along the way, such as easy metaprogramming, concurrency I could write correctly, and functional programming that doesn’t feel like I have one arm tied behind my back. I would do it all over again.

My main programming languages are C++ and Python. For me, D is the combination of the best of these two: the power of C++ with the ease of use of Python. Or to put it another way, D is the C++ I always wanted. This used to be D’s sales pitch, down to its name. There’s lots of evident C++ heritage in D. It is a C++ successor worthy of consideration.

Why D?

This is the question people always ask me. Whenever I bring up D, I am faced with the following set of standard rebuttals:

  • Why not Rust?
  • D? That’s still around?
  • D doesn’t bring anything new or interesting
  • But the GC…

I’ll answer these briefly: D was easier for me to learn than Rust, yes, it’s still around and very lively, it has lots of interesting ideas, and what garbage collector? I guess there’s a GC, but I’ve never noticed and it’s never gotten in my way.

I will let D speak for itself further below. For now, I would like to address the “why D?” rebuttals in a different way. It seems to me that people would rather not have to learn another new thing. Right now, Rust has a lot of attention and some of the code, and right now it seems like Rust may be the solution we always wanted for safe, systems-level coding. It takes effort to work on a new programming language. So, I think the “why D?” people are mostly saying, “why should I have to care about a different programming language, can’t I just immediately dismiss D and spend time learning Rust instead?”

I posit that no, you shouldn’t immediately dismiss D. If nothing else, try to listen to its ideas, many of which are distilled into Alexandrescu’s The D Programming Language. I recommend this book as good reading material for computer science, even if you never plan to write any D (as a language reference itself, it’s already dated in a number of ways, but I still recommend it for the ideas it discusses). Also browse the D Gems section in the D tour. In the meantime, let me show you what I learned about D while using it.

Writing D every day for over 25 days

I took slightly longer than 25 days to write my advent of code solutions, partly because some stumped me a little and partly because around actual Christmas I wanted to spend time with family instead of writing code. When I was writing code, I would say that nearly every day of advent of code forced me to look into a new aspect of D. You can see my solutions in this Mercurial repository.

I am not going to go too much into details about the abstract theory concerning the solution of each problem. Perhaps another time. I will instead focus on the specific D techniques I learned about or found most useful for each.


  1. Day 1: parsing arguments, type conversions, template constraints
  2. Day 2: functional programming and uniform function call syntax
  3. Day 3: let’s try some complex arithmetic!
  4. Day 4: reusing familiar tools to find duplicates
  5. Day 5: more practice with familiar tools
  6. Day 6: ranges
  7. Day 7: structs and compile-time regexes
  8. Day 8: more compile-time fun with mixin
  9. Day 9: a switch statement!
  10. Day 10: learning what ranges cannot do
  11. Day 11: offline hex coding
  12. Day 12: for want of a set
  13. Day 13: more offline coding
  14. Day 14: reusing older code as a module
  15. Day 15: generators, lambdas, functions, and delegates
  16. Day 16: permutations with primitive tools
  17. Day 17: avoiding all the work with a clever observation
  18. Day 18: concurrency I can finally understand and write correctly
  19. Day 19: string parsing with enums and final switches
  20. Day 20: a physics problem with vector operations
  21. Day 21: an indexable, hashable, comparable struct
  22. Day 22: more enums, final switches, and complex numbers
  23. Day 23: another opcode parsing problem
  24. Day 24: a routine graph-search problem
  25. Day 25: formatted reads to finish off Advent of D

Day 1: parsing arguments, type conversions, template constraints

(problem statement / my solution)

For Day 1, I was planning to be a bit more careful about everything around the code. I was going to carefully parse CLI arguments, produce docstrings and error messages when anything went wrong, and carefully validate template arguments with constraints (comparable to concepts in C++). While I could have done all of this, as days went by I tried to golf my solutions, so I abandoned most of this boilerplate. Instead, I lazily relied on getting D stack traces at runtime or compiler errors when I messed up.

As you can see from my solution, the boilerplate isn't too bad, though, had I kept it up. Template constraints are achieved by adding if(isNumeric!numType), which checks at compile time that my template was given a template argument of the correct type, where isNumeric comes from import std.traits. I also found getopt to be a sufficiently mature standard library module for handling command-line parsing. It's not quite as rich as Python's argparse, merely sufficient. This shows about all it can do:

  string input;
  auto opts = getopt(
    args,
    "input|i", "Input captcha to process", &input
  );

  if (opts.helpWanted) {
    defaultGetoptPrinter("Day 1 of AoC", opts.options);
  }
Finally, a frequent workhorse that appeared from Day 1 was std.conv for parsing strings into numbers. A single function, to, is surprisingly versatile and does much more than that, taking a single template argument for converting (not casting) one type into another. It knows not only how to parse strings into numbers and vice versa, but also how to convert between numerical types keeping as much precision as possible, or how to read list or associative array literals from strings if they are in their standard string representation. It's a good basic example of D's power and flexibility in generic programming.

Day 2: functional programming and uniform function call syntax

(problem statement / my solution)

For whatever reason, probably because I was kind of trying to golf my solutions, I ended up writing a lot of functional-ish code, with lots of map, reduce, filter, and so forth. This started early on with Day 2. D is mostly unopinionated about which style of programming one should use and offers tools for object orientation, functional programming, or just plain procedural programming, presenting no obstacle to mixing these styles. Lambdas are easily written inline with concise syntax, e.g. x => x*x, and the basic standard functional tools like map, reduce, filter and so on are available.

D’s approach to functional programming is quite pragmatic. While I rarely used it, because I wasn’t being too careful for these solutions, D functions can be labelled pure, which means that they can have no side effects. However, this still lets them do local impure things such as reassigning a variable or having a for loop. The only restriction is that all of their impurity must be “on the stack”, and that they cannot call any impure functions themselves.

Another feature that I came to completely fall in love with was what they call uniform function call syntax (UFCS). With some caveats, this basically means that

 foo.bar(baz)

is just sugar for

 bar(foo, baz)

If the function only has one argument, the round brackets are optional and foo.bar is sugar for bar(foo). This very basic syntactic convenience makes it easy and pleasant to chain function calls together, making it more inviting to write functional code. It is also a happy unification of OOP and FP, because syntactically it's the same to give an object a new member function as it is to create a free-standing function whose first argument is the object.

Day 3: let’s try some complex arithmetic!

(problem statement / my solution)

For me, 2-dimensional geometry is often very well described by complex numbers. The spiral in the problem here seemed easy to describe as an associative array from complex coordinates to integer values. So, I decided to give D’s std.complex a try. It was easy to use and there were no big surprises here.

Day 4: reusing familiar tools to find duplicates

(problem statement / my solution)

There weren’t any new D techniques here, but it was nice to see how easy it was to build a simple word counter from D builtins. I was slightly disappointed that this data structure isn't built in like Python's own collections.Counter, but that's hardly an insurmountable problem.

Day 5: more practice with familiar tools

(problem statement / my solution)

Again, not much new D here. I like the relative ease with which it’s possible to read integers into a list using map and

Day 6: ranges

(problem statement / my solution)

There’s usually a fundamental paradigm or structure in a programming language on which everything else depends. Haskell has functions and monads, C has pointers and arrays, C++ has classes and templates, Python has dicts and iterators, Javascript has callbacks and objects, Rust has borrowing and immutability. Ranges are one of D’s fundamental concepts. Roughly speaking, a range is anything that can be iterated over, like an array or a lazy generator. Thanks to D’s powerful metaprogramming, ranges can be defined to satisfy a kind of compile-time duck typing: if it has methods to check for emptiness, get the first element, and get the next element, then it’s an InputRange. This duck typing is kind of reminiscent of type classes in Haskell. D’s general principle of having containers and algorithms on those containers is built upon the range concept. Ranges are intended to be a simpler reformulation of iterators from the C++ standard library.

I had been using ranges all along, as foreach loops are kind of like sugar for invoking those methods on ranges. However, for Day 6 I actually wanted to invoke an std.range function, enumerate. It simply iterates over a range while simultaneously producing a counter. I used it to write some brief code that obtains both the maximum of an array and the index at which it occurs.
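For comparison, Python's enumerate behaves the same way; a quick sketch of the max-plus-index idiom (in Python only for illustration, not the D code from the solution):

```python
def argmax(xs):
    """Return (max value, index of its first occurrence), found by
    tracking the running maximum while enumerating."""
    best_i, best_v = 0, xs[0]
    for i, v in enumerate(xs):
        if v > best_v:
            best_i, best_v = i, v
    return best_v, best_i

print(argmax([3, 7, 2, 7]))  # (7, 1)
```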

Another range-related feature that appears for the first time here is slicing. Certain random-access ranges which allow integer indexing also allow slicing. The typical method to remove elements from an array is to use this slicing. For example, to remove the first five elements and the last two elements from an array:

 arr = arr[5..$-2];

Here the dollar sign is sugar for arr.length and this removal is simply done by moving some start and end pointers in memory — no other bytes are touched.

The D Tour has a good taste of ranges and Programming in D goes into more depth.

Day 7: structs and compile-time regexes

(problem statement / my solution)

My solution for this problem was more complicated, and it forced me to break out an actual tree data structure. Because I wasn’t trying to be particularly parsimonious about memory usage or execution speed, I decided to create the tree by having a node struct with a global associative array indexing all of the nodes.

In D, structs have value semantics and classes have reference semantics. Roughly, this means that structs are on the stack, they get copied around when being passed into functions, while classes are always handled by reference instead and dynamically allocated and destroyed. Another difference between structs and classes is that classes have inheritance (and hence, polymorphic dispatch) but structs don’t. However, you can give structs methods, and they will have an implicit this parameter, although this is little more than sugar for free-standing functions.
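The copy-versus-alias distinction can be seen in a tiny example of my own:

```d
struct SPoint { int x; }
class CPoint { int x; }

void main() {
    auto s1 = SPoint(1);
    auto s2 = s1;   // value semantics: s2 is an independent copy
    s2.x = 99;
    assert(s1.x == 1);

    auto c1 = new CPoint;
    auto c2 = c1;   // reference semantics: c2 aliases the same object
    c2.x = 99;
    assert(c1.x == 99);
}
```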

Enough on OOP. Let’s talk about the really exciting stuff: compile-time regular expressions!

For this problem, there was some input parsing to do. Let’s look at what I wrote:

void parseLine(string line) {
  static nodeRegex = regex(r"(?P<name>\w+) \((?P<weight>\d+)\)( -> (?P<children>[\w,]+))?");
  auto row = matchFirst(line, nodeRegex);
  // init the node struct here
}

The static keyword instructs D that this variable has to be computed at compile time. D’s compiler basically has its own interpreter that can execute arbitrary code, as long as all of the inputs are available at compile time. In this case, the regex is parsed and compiled into the binary. The next line, where I call matchFirst on each line, runs at runtime, but if for whatever reason I had these strings available at compile time (say, defined as a big inline string a few lines above in the same source file), I could also do the regex matching at compile time if I wanted to.

This is really nice. This is one of my favourite D features. Add a static and you can precompute into your binary just about anything. You often don’t even need any extra syntax. If the compiler realises that it has all of the information at compile time to do something, it might just do it. This is known as compile-time function execution, hereafter, CTFE. The D Tour has a good overview of the topic.
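As a small illustration of CTFE (my own example, not from the solutions), any ordinary function can be forced to run at compile time simply by using its result where a compile-time constant is required:

```d
// A perfectly ordinary run-time function...
ulong fib(ulong n) {
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// ...but an enum initialiser must be known at compile time,
// so the compiler evaluates fib(20) via CTFE and bakes the
// result into the binary.
enum fib20 = fib(20);

void main() {
    import std.stdio : writeln;
    writeln(fib20); // 6765
}
```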

Day 8: more compile-time fun with mixin

(problem statement / my solution)

Day 8 was another problem where the most interesting part was parsing. As before, I used a compile-time regex. But the interesting part of this problem was the following bit of code for parsing strings into their corresponding D comparison operation, as I originally wrote it:

auto comparisons = [
  "<":  function(int a, int b) => a < b,
  ">":  function(int a, int b) => a > b,
  "==": function(int a, int b) => a == b,
  "<=": function(int a, int b) => a <= b,
  ">=": function(int a, int b) => a >= b,
  "!=": function(int a, int b) => a != b,
];

Okay, this isn’t terrible. It’s just… not very pretty. I don’t like that it’s basically the same line repeated six times. I furthermore also don’t like that within each line, I have to repeat the operator in the string part and in the function body. Enter the mixin keyword! Basically, string mixins allow you to evaluate any string at compile time. They’re kind of like the C preprocessor, but much safer. For example, string mixins only evaluate complete expressions, so no shenanigans like #define private public are allowed. My first attempt to shorten the above looked like this:

bool function(int,int)[string] comparisons;
static foreach(cmp; ["<", ">", "==", "<=", ">=", "!="]) {
  comparisons[cmp] = mixin("function(int a, int b) => a "~cmp~" b");
}

Since I decided to use a compile-time static loop to populate my array, I now needed a separate declaration of the variable which forced me to spell out its ungainly type: an associative array that takes a string and returns a function with that signature. The mixin here takes a concatenated string that evaluates to a function.

However, this didn’t work for two reasons!

The first one is that static foreach was introduced in September 2017. The D compilers packaged in Debian didn’t have it yet when I wrote that code! The second problem is more subtle: initialisation of associative arrays currently cannot be done statically, because their internal data structures rely on runtime computations, according to my understanding of this discussion. They might fix it some day?

So, next best thing is my final answer:

bool function(int,int)[string] comparisons;

auto getComparisons(Args...)() {
  foreach(cmp; Args) {
    comparisons[cmp] = mixin("function(int a, int b) => a "~cmp~" b");
  }
  return comparisons;
}

shared static this() {
  comparisons = getComparisons!("<", ">", "==", "<=", ">=", "!=");
}

Alright, by size this is hardly shorter than the repetitive original. But I still think it’s better! It has no dull repetition where bugs are most often introduced, and it’s using a variable-argument templated function so that the mixin can have its values available at compile time. It uses the next best thing to compile-time initialisation, which is a module initialiser shared static this() that just calls the function to perform the init.

Day 9: a switch statement!

(problem statement / my solution)

Day 9 was a simpler parsing problem, so simple that instead of using a regex I decided to just use a switch statement. There isn’t anything terribly fancy about switch statements, and they work almost exactly as they do in other languages. The only distinctive features of switch statements in D are that they work on numeric, string, or bool types, and that implicit fallthrough is deprecated. Fallthrough must instead be done explicitly with goto case; or will have to be, once the deprecation is complete.
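A minimal string switch, as a sketch of what this kind of opcode dispatch looks like (hypothetical opcodes, not my actual solution):

```d
import std.stdio : writeln;

void main() {
    auto op = "snd";
    // switch works directly on string values in D.
    switch (op) {
    case "snd":
        writeln("send a value");
        break;
    case "rcv":
        writeln("receive a value");
        break;
    default:
        writeln("unknown opcode");
    }
}
```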

Oh, and you can also specify ranges for a case statement, e.g.

  case 'a': .. case 'z':
    // do stuff with lowercase ASCII

It’s the small conveniences that make this pleasant. Programming in D has a good discussion on switch statements.

Day 10: learning what ranges cannot do

(problem statement / my solution)

So, superficially, you might think that expressions like arr[2..$-2], which is valid, would also allow for things like arr[$-2..1] to traverse the array in reverse order, or some other syntax for a step size other than +1. At least I did. This kind of array indexing is common in numerically-oriented languages such as Octave, Julia, R, or Python’s numpy. So for day 10’s hash, which requires reversing part of an array, I thought I could just do that.

Turns out that the language doesn’t have syntax to allow this, but after a quick trip to the standard library I found the necessary functions. What I thought could be written as

arr[a..b] = arr[b..a];

instead became

 reverse(arr[a..b]);

Other than this minor discovery about ranges, Day 10 was more about getting the algorithm right than using any specialised D utilities. Since real hashes typically allow several sizes, I templated the hash functions with the total size, rounds of hashing, and chunk size, with a template constraint that the chunk size must divide the total size:

auto getHash(int Size=256, int Rounds=64, int ChunkSize=16)(string input)
  if (Size % ChunkSize == 0)
{
  // ...
}

Nothing new here. I just like that template constraints are so easy to write.

Day 11: offline hex coding

(problem statement / my solution)

I did most of Day 11 on paper. It took me a while to figure out a proper hex coordinate system and what the distance function in that coordinate system should be. I had seen hex coordinates from playing Battle for Wesnoth, but it took me a while to figure them out again. Once I had that, the actual D code was pretty simple and used no techniques I hadn’t seen before. I think this is the first time I used the cumulativeFold function, but other than that, nothing to see here. An immutable global associative array populated at module init time, like before,

pure static this(){
  directions = [
    "ne": [1,1],
    "n":  [0,1],
    "nw": [-1,0],
    "sw": [-1,-1],
    "s":  [0,-1],
    "se": [1,0],
  ];
}

and that’s it.

Day 12: for want of a set

(problem statement / my solution)

The only new D technique for this problem was that I decided to use a set structure to keep track of which graph nodes had been visited. The only problem is that D doesn’t have a built-in set structure (yet?), but it does have a setDifference function. It’s a bit clunky: it only works on sorted ranges. But that was sufficient for my purpose here, and probably not much worse than hashing with a traditional set structure would have been.

One further observation: D has an in keyword, which can be used to test membership, like in Python (it also has an unrelated use for defining input and output arguments to functions), but unlike Python, only for associative arrays. This makes sense, because the complexity of testing for membership for other data structures can vary widely depending on the structure and the chosen algorithm, and there isn’t a clear universal choice like there is for associative arrays.
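A quick illustration of in on an associative array (my own example): it returns a pointer to the value, or null when the key is absent, so the membership test doubles as a lookup:

```d
import std.stdio : writeln;

void main() {
    int[string] ages = ["alice": 30, "bob": 25];
    if (auto p = "alice" in ages)
        writeln(*p); // 30
    assert("carol" !in ages);
}
```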

If desired, however, it’s possible to define the in operator for any other class, like so:

bool opBinaryRight(string op : "in")(T elt) {
  // check that elt is in `this`
}

I would assume that’s what you could use to write a set class for D.

Day 13: more offline coding

(problem statement / my solution)

This one is another where I did most of the solution on paper and thus managed to write a very short program. No new D techniques here, just the usual functionalish style that I seem to be developing.

Day 14: reusing older code as a module

(problem statement / my solution)

The problem here is interesting because I’ve solved this labelling of connected components problem before in C++ for GNU Octave. I wrote the initial bwlabeln implementation using union-find. I was tempted to do the same here, but I couldn’t think of a quick way to do so, and talking to others in the #lobsters channel in IRC, I realised that a simpler recursive solution would work without overflowing the stack (because the problem is small enough, not because a stack-based algorithm is clever).

The interesting part is reusing an earlier solution, the hashing algorithm from Day 10. At first blush, this is quite simple: every D file also creates its own module, namespaced if desired by directories. It’s very reminiscent of Python’s import statement and module namespacing. The only snag is that my other file has a void main(string[] args) function and so does this one. The linker won’t like that duplicate definition of symbols. For this purpose, D offers conditional compilation, which in C and C++ is usually achieved via a familiar C preprocessor macro idiom.

In D, this idiom is codified into the language proper via the version keyword, like so:

version(standalone) {
  void main(string[] args){
    // do main things here
  }
}

This instructs the compiler to compile the inside of the version block only if an option called “standalone” is passed in,

gdc -O2 -fversion=standalone app.d -o day10

or, with regrettably slightly different flags,

ldc2 -O2 -d-version=standalone app.d -of day10

There are other built-in arguments for version, such as “linux” or “OSX” to conditionally compile for a particular operating system. This keyword offers quite a bit of flexibility for conditional compilation, and it’s a big improvement over C preprocessor idioms.
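For example (a sketch of my own), platform-specific code can be selected with the built-in identifiers; blocks for versions that aren’t active are never compiled at all:

```d
import std.stdio : writeln;

void main() {
    version (linux)
        writeln("compiled for Linux");
    else version (OSX)
        writeln("compiled for macOS");
    else
        writeln("compiled for something else");
}
```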

Day 15: generators, lambdas, functions, and delegates

(problem statement / my solution)

This problem was an opportunity to test out a new function, generate, which takes a function and builds a lazy range by calling it repeatedly. Haskell calls its analogue iterate, which I think is a better name. Since it’s a lazy generator, you need something like take to say how much of the generator you want to use. For example, the Haskell code

pows = take 11 $ iterate (\x -> x*2) 1

can be translated into D as

auto x = 1;
// yields 1, 2, 4, ..., 1024
auto pows = generate!(() { auto prev = x; x *= 2; return prev; }).take(11);

There are other examples in the documentation.

Let’s also take a moment here to talk about the different anonymous functions in D. The following both declare an anonymous function that squares its input:

function(int a) { return a^^2;}
delegate(int a) { return a^^2;}

The difference is just a question of closure. The delegate version carries a hidden pointer to its enclosing scope, so it can dynamically close over the outer scope variables. If you can’t afford to pay this runtime penalty, the function version doesn’t reference the enclosing scope (no extra pointer). So, for a generator, you typically want to use a delegate, since you want the generator to remember its scoped variables across successive calls, like what I did:

auto generator(ulong val, ulong mult) {
  return generate!(delegate(){
      val = (val * mult) % 2147483647;
      return val;
  });
}

This function returns a generator range where each entry will result in a new entry of this pseudorandom linear congruence generator.

The delegate/function is part of the type, and can be omitted if it can be inferred by context (e.g. when passing a function into another function as an argument). Furthermore, there’s a lambda shorthand that I have been using all along, where the {return foo;} boilerplate can be shortened to just => like so:

  (a) => a^^2

This form is only valid where there’s enough context to infer if it’s a delegate or a function, as well as the type of a itself. More details in the language spec.

Day 16: permutations with primitive tools

(problem statement / my solution)

This permutations problem made me reach for the std.algorithm function bringToFront for cyclically permuting an array in place, like so:

   bringToFront(progs[rot..$], progs[0..rot]);

It’s a surprisingly versatile function that can be used to perform more tricks than cyclic permutations. Its documentation is worth a read.
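For instance (my own toy example), a left rotation by two positions is just:

```d
import std.stdio : writeln;
import std.algorithm : bringToFront;

void main() {
    auto arr = [1, 2, 3, 4, 5];
    // bringToFront moves the back range before the front range,
    // in place and without allocating.
    bringToFront(arr[0 .. 2], arr[2 .. $]);
    writeln(arr); // [3, 4, 5, 1, 2]
}
```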

I also ran into a D bug here. I had to create a character array from an immutable input string, but due to D’s special Unicode-aware handling of characters, I had to cast to ubyte[] instead of char[].

Besides that, for the second part, where you had to realise that permutations cannot have too big an orbit, I also ended up using a string array with canFind from std.algorithm. I would have preferred a string set with hashing instead of linear searching, but it didn’t make a huge difference for a problem of this size.

I really want sets in the D standard library. Maybe I should see what I can do to make them happen.

Day 17: avoiding all the work with a clever observation

(problem statement / my solution)

This puzzle is a variation of the Josephus problem. I needed some help from #lobsters in IRC to figure out how to solve it. There aren’t any new D techniques, just some dumb array concatenation with the tilde operator for inserting elements into an array:

circ = circ[0..pos] ~ y ~ circ[pos..$];

The second part can be solved via the simple observation that you only
need to track one position, the one immediately following zero.

Day 18: concurrency I can finally understand and write correctly

(problem statement / my solution)

This day was an exciting one for me. Discussing this problem with others, it seems many people had much difficulty solving this problem in other programming languages. Most people seemed to have to emulate their own concurrency with no help from their programming language of choice. In contrast, D absolutely shined here, because it is based on the actor concurrency model (message passing), which precisely fits the problem statement. (There are other concurrency primitives in case the actor concurrency model isn’t sufficient, but it was sufficient for me.)

The basic idea of concurrency in D is that each thread of execution localises all of its state. By default, threads share no data. In order to communicate with each other, threads pass messages. A thread can indicate at any time when it’s ready to send or receive messages. Messages can be any type, and each thread says what type it’s expecting to receive. If a thread receives a type it’s not prepared to handle, it will throw an exception.

There are more details, such as what happens if a thread receives too many messages but doesn’t respond to any of them. Let’s not go into that now. Basic idea: threads get spawned, threads send and receive messages.

Let’s spend a little bit of time looking at the relevant functions and types I used, all defined in std.concurrency.

spawn: Starts a thread of execution. The first argument is a reference to the function that this thread will execute, followed by any arguments that function may take. Returns a Tid, a thread id, which is used as the address to send messages to. A thread can refer to its parent thread via the special variable ownerTid.

Unless explicitly declared shared, the arguments of a threaded function must be immutable. This is how the compiler guarantees no race conditions when manipulating those variables. Of course, with shared variables, the programmer is signalling that they are taking over synchronisation of that data, which may require using low-level mutexes.

send: Sends a message to a particular thread. The first argument is the thread id. The other arguments can be anything. It’s up to the receiving thread to handle the arguments it receives.

receiveOnly: Indicates that this thread is ready to receive a single type, and returns the value of that type. The type must of course be specified as a compile-time argument.

receive: Indicates what to do with any of several possible types. The arguments to this function are a collection of functions, whose parameter types will be dynamically matched against the received types. I didn’t need this function, but I wanted to mention that it exists.

receiveTimeout: The problem statement is designed to deadlock. Although there probably is a more elegant solution, timing out on deadlock was the solution I wrote. This function does just that: it listens for a message for a set amount of time. If a message is received in the designated time, its handler function is executed and receiveTimeout returns true. If the timeout happens, it returns false instead.

Armed with these tools, the solution was a breeze to write. I first spawn two threads and save their thread ids,

  auto tid1 = spawn(&runProgram, opcodes, 0);
  auto tid2 = spawn(&runProgram, opcodes, 1);

Each of the two threads defined by runProgram then immediately stops, waiting for a thread id, to know whom to talk to,

void runProgram(immutable string[] opcodes, ulong pid) {
  auto otherProg = receiveOnly!Tid();
  // ...
}

The parent thread then connects the two worker threads to each other,

  send(tid1, tid2);
  send(tid2, tid1);

And off they go, the two threads run through the opcodes in the problem statement, and eventually they deadlock, which I decided to handle with a timeout like so,

    case "rcv":
      if (!receiveTimeout(100.msecs, (long val) { regs[reg] = val; })) {
        goto done;
      }
      // no timeout, handle next opcode
      goto default;

After the thread has timed out, it signals to the parent thread that it’s done,

  send(ownerTid, thisTid, sent);

The parent in turn receives two tuples with thread id and computed value from each thread, and based on that decides what to output, after figuring out which thread is which,

  // Wait for both children to let us know they're done.
  auto done1 = receiveOnly!(Tid, long);
  auto done2 = receiveOnly!(Tid, long);
  if (done1[0] == tid2) {
    // done1 came from the second thread; report accordingly
  } else {
    // done1 came from the first thread; report accordingly
  }

And voilà, concurrency easy-peasy.

Day 19: string parsing with enums and final switches

(problem statement / my solution)

The only new D technique here is the final switch. A final switch is for enum types: it makes the compiler enforce that you write a case for every possible value. That’s what I did here, where I wanted to make sure I matched the up, down, left, and right directions:

    final switch(dir){
    case DIR.d:
      // ...
      break;
    case DIR.u:
      // ...
      break;
    case DIR.l:
      // ...
      break;
    case DIR.r:
      // ...
      break;
    }

The rest of the problem is merely some string parsing.

Day 20: a physics problem with vector operations

(problem statement / my solution)

I have not yet done serious numerical work with D, but I can see that it has all of the necessary ingredients for it. One of the most obvious amongst these is that it has built-in support for writing vector instructions. Given a struct to model a particle in motion,

struct particle {
  double[3] pos;
  double[3] vel;
  double[3] acc;
}

the following function returns another particle in which each vector has been divided by its norm (i.e. normalised to 1):

auto unit(particle p) {
  auto pos = p.pos,
       vel = p.vel,
       acc = p.acc;
  pos[] /= pos.norm;
  vel[] /= vel.norm;
  acc[] /= acc.norm;
  particle u = {pos, vel, acc};
  return u;
}

This vec[] /= scalar notation divides every element of the vector by the given scalar. But that’s not all. You can also add or multiply vectors elementwise with similar syntax,

      double[3] diff1 = p.pos, diff2 = p.vel;
      diff1[] -= p.vel[];
      diff2[] -= p.acc[];

Here diff1 and diff2 give the vector difference between the position and velocity, and, respectively, the velocity and acceleration (I use the criterion of all three of these being mostly collinear to determine whether all particles have escaped the system and thus can no longer interact with any other particle).

This is mostly syntactic sugar, however. Although the D compiler can sometimes turn instructions like these into native vector instructions like AVX, real vectorisation has to be done via some standard library support.

Day 21: an indexable, hashable, comparable struct

(problem statement / my solution)

I was happy to recognise, via some string mixins, that I could solve this problem by considering the dihedral group of the square:

immutable dihedralFun =
"function(ref const Pattern p) {
  auto n = p.dim;
  auto output = new int[][](n, n);
  foreach(i; 0..n) {
    foreach(j; 0..n) {
      output[i][j] = p.grid[%s][%s];
    }
  }
  return output;
}";

immutable dihedralFourGroup = [
  mixin(format(dihedralFun, "i",     "j")),
  mixin(format(dihedralFun, "n-i-1", "j")),
  mixin(format(dihedralFun, "i",     "n-j-1")),
  mixin(format(dihedralFun, "n-i-1", "n-j-1")),
  mixin(format(dihedralFun, "j",     "i")),
  mixin(format(dihedralFun, "n-j-1", "i")),
  mixin(format(dihedralFun, "j",     "n-i-1")),
  mixin(format(dihedralFun, "n-j-1", "n-i-1")),
];

This isn’t a new technique, but I’m really happy with how it turned out. Almost like lisp macros, but without devolving into the lawless chaos of Python or Javascript eval or C preprocessor macros. As an aside, the format function accepts formatted strings with POSIX syntax for positional arguments, but there isn’t anything built-in as nice as Perl string interpolation or Python format strings.
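The positional syntax looks like this (my own minimal example):

```d
import std.format : format;

void main() {
    // %1$s refers to the first argument regardless of position,
    // so arguments can be reused and reordered.
    assert(format("%1$s %2$s %1$s", "a", "b") == "a b a");
}
```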

The real meat of this problem was to implement a grid structure that could be hashed, compared, and indexed. This is all done with a number of utility functions. For indexing and slicing, the basic idea is that for a user-defined type foo,

  foo[bar..baz]

is sugar for

  foo.opIndex(foo.opSlice(bar, baz))

so those are the two functions you need to implement for indexing and slicing. Similarly, for equality comparison and hashing, you implement opEquals and toHash respectively. I relied on the dihedral functions above for comparison for this problem.
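Putting a few of these together, a toy value type (my own sketch, much simpler than the Pattern struct from my solution) might look like:

```d
struct Grid {
    int[][] cells;

    // grid[i] returns one row.
    int[] opIndex(size_t i) { return cells[i]; }

    // Value equality between grids: arrays compare elementwise.
    bool opEquals(const Grid other) const {
        return cells == other.cells;
    }

    // Equal grids must produce equal hashes.
    size_t toHash() const nothrow @safe {
        size_t h;
        foreach (row; cells)
            foreach (c; row)
                h = h * 31 + c;
        return h;
    }
}

void main() {
    auto a = Grid([[1, 2], [3, 4]]);
    auto b = Grid([[1, 2], [3, 4]]);
    assert(a == b);
    assert(a[1] == [3, 4]);
}
```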

After implementing these functions for a struct (recall: like a class, but with value semantics and no inheritance), the rest of the problem was string parsing and a bit of logic to implement the fractal-like growth rule.

Day 22: more enums, final switches, and complex numbers

(problem statement / my solution)

Another rectangular grid problem, which I again decided to represent via complex numbers. The possible infection states given in the problem I turned into an enum, which I then checked with a final switch as before. The grid is then just an associative array from complex grid positions to infection states.

Nothing new here. By now these are all familiar tools and writing D code is becoming habitual for me.

Day 23: another opcode parsing problem

(problem statement / my solution)

This problem wasn’t technically difficult from a D point of view. The usual switches and string parsing techniques of days 8 and 18 work just as well. In fact, I started with the code of day 18 and modified it slightly to fit this problem.

The challenge was to statically analyse the opcode program to determine that it is implementing a very inefficient primality testing algorithm. I won’t go into an analysis of that program here because others have already done a remarkable job of explaining it. Once this analysis was complete, the meat of the problem then becomes to write a faster primality testing algorithm, such as dumb (but not too dumb) trial division,

auto isComposite(long p) {
  // the upper bound must include sqrt(p) itself, or perfect
  // squares of primes would be reported as prime
  auto limit = cast(long) sqrt(cast(double) p);
  return iota(2, limit + 1).filter!(x => p % x == 0).any;
}

and use this test at the appropriate location.

Day 24: a routine graph-search problem

(problem statement / my solution)

This problem required some sort of graph structure, which I implemented as an associative array from node ids to all edges incident to that node. The problem then reduces to some sort of graph traversal (I did depth-first search), keeping track of edge weights.

No new D techniques here either, just more practice with my growing bag of tricks.

Day 25: formatted reads to finish off Advent of D

(problem statement / my solution)

The final problem involved parsing a slightly more verbose DSL. For this, I decided to use formatted strings for reading, like so,

auto branchFmt =
"    - Write the value %d.
    - Move one slot to the %s.
    - Continue with state %s.
";

auto parseBranch(File f) {
  int writeval;
  string movedir;
  char newstate;

  f.readf(branchFmt, &writeval, &movedir, &newstate);
  return Branch(writeval ? true : false, movedir == "left" ? -1 : 1, newstate);
}

This is admittedly a bit brittle. Even the type check between the formatted string and the expected types is done at runtime (but newer D versions have a compile-time version of readf for type-checking the format string). An error here can cause exceptions at runtime.

Other than this, the only new technique here is that I actually wrote a loop to parse the program file:

auto parseInstructions(File f) {
  Instruction[char] instructions;

  while(!f.eof) {
    char state;
    f.readf("In state %s:\n", &state);
    f.readln; // "If the current value is 0:"
    auto if0 = f.parseBranch;
    f.readln; // "If the current value is 1:"
    auto if1 = f.parseBranch;
    f.readln; // Blank line
    instructions[state] = Instruction(if0, if1);
  }
  return instructions;
}

A small comfort here is that checking for eof in the loop condition actually works. This is always subtly wrong in C++ and I can never remember why.

What’s left of the problem is absolutely routine D by now: associative arrays, UFCS, foreach loops, standard library utilities for summing and iterating and so forth. A few of my favourite things.

Concluding remarks

The best part is that my code was also fast! I was comparing my solutions above with someone else who was doing their Advent of Code in C. I could routinely match his execution speed on the problems where we bothered to compare, whenever we wrote similar algorithms. I’m eager to see what D can do when faced with some real number-crunching.

After all this, I have come to appreciate D more, as well as to see some of its weak points. I think I have already raved enough about how much I like its functional style, its standard library, its type-checking, and its compile-time calculation. I also ran into a few bugs and deprecated features, and I have observed some questionable language design choices. Not once did I notice having a garbage collector. It was lots of fun.

Merry belated Christmas!

by Jordi at March 06, 2018 04:24 AM

August 29, 2017

Enrico Bertino

Summary of work done during GSoC

GSoC17 is at the end and I want to thank my mentors and the Octave community for giving me the opportunity to participate in this unique experience.

During this Google Summer of Code, my goal was to implement from scratch the Convolutional Neural Networks package for GNU Octave. It  will be integrated with the already existing nnet package.

This was a very interesting project and a stimulating experience, both for the code implemented and for the theory behind the algorithms involved. One part has been implemented in Octave and another part in Python using the Tensorflow API.

Code repository

All the code implemented during these months can be found in my public repository:

(my username: citti berto, bookmark enrico)

Since I implemented a completely new part of the package, I pushed the entire project in three commits, and I am waiting for the community's approval before preparing a PR for the official package [1].


The first commit (ade115a, [2]) contains the layers. There is a class for each layer, with a corresponding function which calls the constructor. All the layers inherit from a Layer class which lets the user create a layers concatenation, that is the network architecture. Layers have several parameters, for which I have guaranteed the compatibility with Matlab [3].

The second commit (479ecc5 [4]) covers the Python part, including an init file that checks the Tensorflow installation. I implemented a Python module, TFinterface, which includes:

  • an abstract class for layers inheritance
  • layers/ the layer classes that are used to add the right layer to the TF graph
  • a class for managing the datasets input
  • the core class, which initiates the graph and the session, performs the training and the predictions
  • a version of [5] for deepdream implementation (it has to be completed)

The third commit (e7201d8 [6]) includes:
  • trainingOptions: All the options for the training. Up to now, the only optimizer available is the stochastic gradient descent with momentum (sgdm) implemented in the class TrainingOptionsSGDM.
  • trainNetwork: passing the data, the architecture and the options, this function performs the training and returns a SeriesNetwork object
  • SeriesNetwork: class that contains the trained network, including the Tensorflow graph and session. This has three methods
    • predict: predicting scores for regression problems
    • classify: predicting labels for classification problems
    • activations: getting the output of a specific layer of the architecture


Goals not met

I did not manage to implement some features because of a lack of time, due to the bug fixing in the last period. The problem was the considerable time spent testing the algorithms (because of the different random generators in Matlab, Octave and Python/Tensorflow). I will work in the next weeks to implement the missing features, and I plan to continue contributing to the maintenance of this package to keep it up to date with both new Tensorflow versions and new Matlab features.

Missing features, by function:

  • activations: OutputAs (for changing output format)
  • imageInputLayer: DataAugmentation and Normalization
  • trainNetwork: accepted inputs imds or tbl
  • trainNetwork: .mat checkpoints
  • trainNetwork: ExecutionEnvironment 'multi-gpu' and 'parallel'
  • ClassificationOutputLayer: classnames
  • TrainingOptions: WorkerLoad and OutputFcn
  • DeepDreamImages: generalization to any network and AlexNet example


Tutorial for testing the package

  1. Install Python Tensorflow API (as explained in [4])
  2. Install Pytave (following these instructions [5])
  3. Install nnet package (In Octave: install [6] and load [7])
  4. Check the package with make check PYTAVE="pytave/dir/"
  5. Open Octave, add the Pytave dir to the path and run your first network:

### TRAINING ###
# Load the training set
[XTrain,TTrain] = digitTrain4DArrayData();

# Define the layers; the closing layers here are a minimal sketch for a
# 10-class digit problem (fullyConnectedLayer, softmaxLayer and
# classificationLayer are illustrative, not necessarily the original ones)
layers = [imageInputLayer([28 28 1]);
          fullyConnectedLayer(10);
          softmaxLayer();
          classificationLayer()];

# Define the training options
options = trainingOptions('sgdm', 'MaxEpochs', 15, 'InitialLearnRate', 0.04);

# Train the network
net = trainNetwork(XTrain,TTrain,layers,options);

### TESTING  ###
# Load the testing set
[XTest,TTest]= digitTest4DArrayData();

# Predict the new labels
YTestPred = classify(net,XTest);

Future improvements

  • Manage the session saving
  • Save the checkpoints as .mat files and not as TF checkpoints
  • Optimize array passage via Pytave 
  • Categorical variables for classification problems


Repo link:


by Enrico Bertino ( at August 29, 2017 06:23 PM

August 28, 2017

Joel Dahne

Final Report

This is the final report on my work with the interval package during Google Summer of Code 2017. This whole blog has been dedicated to the project, and by reading all the posts you can follow my work from beginning to end.

The work has been challenging and extremely fun! I have learned a lot about interval arithmetic, the Octave and Octave-forge project, and also how to contribute to open-source in general. I have found the whole Octave community to be very helpful and especially I want to thank my mentor, Oliver Heimlich, and co-mentor, Kai Torben Ohlhus, for helping me during the project.

Here I will give a small introduction to Octave and the interval package for new readers, a summary of how the work has gone, and instructions for running the code I contributed.

Octave and the Interval Package

Octave, or GNU Octave, is a free program for scientific computing. It is very similar to Matlab and its syntax is largely compatible with it. Octave comes with a large set of core functionality but can also be extended with packages from Octave Forge. These add new functionality, for example image processing, fuzzy logic or more statistics functions. One of these packages is the interval package, which allows you to compute with interval arithmetic in Octave.
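For readers who have never used the package, a small taste of what interval arithmetic looks like in Octave (a sketch; it assumes the interval package is installed, e.g. via "pkg install -forge interval"):

```octave
% Basic interval arithmetic with the interval package
pkg load interval
x = infsup (1, 2);           % the interval [1, 2]
y = x + 1;                   % [2, 3]: arithmetic is outward-rounded
z = sqrt (infsupdec (2));    % a decorated, guaranteed enclosure of sqrt (2)
display (z)
```

The result of every operation is an interval guaranteed to contain the exact mathematical result, which is the core idea the rest of this report builds on.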

Summary of the Work

The goal of the project was to improve the Octave Forge interval package by implementing support for creating, and working with, N-dimensional arrays of intervals.

The work has gone very well and we have just released version 3.0.0 of the interval package incorporating all the contributions I have made during the project.

The package now has full support for working with N-dimensional arrays in the same way you do with ordinary floating-point numbers in Octave. In addition, I have also fixed some bugs not directly related to N-dimensional arrays; see for example bugs #51783 and #51283.
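To illustrate, a short sketch of the newly supported N-dimensional usage (the functions are real interval-package functions; the exact printed output may differ between versions):

```octave
% N-dimensional interval arrays, new in version 3.0.0
pkg load interval
A = infsup (zeros (2, 2, 2), ones (2, 2, 2));  % a 2x2x2 array of [0, 1]
size (A)                                       % ans = 2 2 2
B = cat (3, infsup (1), infsup (2));           % concatenation along dim 3
s = sum (A, 3);                                % reductions along any dimension
```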

During the project I have made a total of 108 commits. I have made changes to 332 of the package's 666 files. Some of these changes were only coding-style fixes or (mainly) automated addition of tests. Not counting those, I have manually made changes to 110 files.

If you want to take a look at all the commits I contributed, the best way is to download the repository, after which you can see all the commits from GSoC with

hg log -u -d "2017-06-01 to 2017-08-29"

Unfortunately I have not found a good way of isolating commits from a specific period and author on SourceForge, where the package is hosted. Instead you can find a list of all commits at the end of this blog post.

The NEWS file from the release of version 3.0.0 is also a pretty good overview of what I have done. While not all of the changes are a result of GSoC, quite a lot of them are.

Running the Code

As mentioned above, we have just released version 3.0.0 of the interval package. With the new release it is very easy to test the newly added functionality. If you already have Octave installed, the easiest way to install the package is with the command "pkg install -forge interval". This installs the latest release of the package; at the time of writing this is 3.0.0, but that will of course change in the future. You can also download version 3.0.0 directly from Octave Forge.

If you want you can also download the source code from the official repository and test it with "make run" or install it with "make install". To download the repository, update to version 3.0.0 and run Octave with the package on Linux, use the following

 hg clone octave-interval
 cd octave-interval
 hg update release-3.0.0
 make run

A Package for Taylor Arithmetic

The task took less time than planned, so I had time to start a project building on my previous work: a package for Taylor arithmetic, which you can read my blog post about. I created a proof-of-concept implementation as part of my application for Google Summer of Code and I have now started to turn that into a real package. The repository can be found here.

It is still far from complete but my goal is to eventually add it as a package at Octave-Forge. How that goes depends mainly on how much time I have to spend on it the following semesters.

If you want to run the code as it is now, you can pull the repository and then run it with "make run"; this requires that Octave and version 3.0.0 (or higher) of the interval package are installed.

List of Commits

Here is a list of all 108 commits I have done to the interval package
summary:     Added Swedish translation for package metadata
summary:     @infsupdec/factorial.m: Fix decoration (bug #51783)
summary: Cast int to octave_idx_type
summary:     maint: Fix input to source
summary:     @infsupdec/dot.m: Fix decoration on empty input
summary:     @infsup/postpad.m, @infsup/prepad.m, @infsupdec/postpad.m, @infsupdec/prepad.m: Corrected identification of dimension for N-dimensional arrays
summary:, Fixed bug when broadcasting with one size equal to zero
summary:     doc: NEWS.texinfo: Info about vectorization for nthroot and pownrev
summary:     @infsup/pownrev.m, @infsupdec/pownrev.m: Support for vectorization of p
summary:     @infsup/nthroot.m, @infsupdec/nthroot.m, Support for vectorization of n
summary:     doc: NEWS.texinfo: Summarized recent changes from GSoC
summary: Fixed bug when broadcasting with one size equal to zero
summary:     doc: examples.texinfo: Updated example for the latest Symbolic package version
summary:     @infsup/*.m, @infsupdec/*.m: Added missing N-dimensional versions of tests
summary:     @infsup/*.m, @infsupdec/*.m: N-dimensional versions of all ternary tests
summary:     @infsup/*.m, @infsupdec/*.m: N-dimensional versions of all binary tests
summary:     @infsup/*.m, @infsupdec*.m: N-dimensional version of all unary tests
summary:     @infsup/powrev2.m: Fixed bug when called with vector arguments
summary:     @infsup/pownrev.m, @infsupdec/pownrev.m: Reworked vectorization test
summary:     @infsup/pown.m: Added support for N-dimensional arrays
summary:     @infsup/nthroot.n: Reworked vectorization test
summary:     @infsup/nthroot.m, @infsupdec/nthroot.m: Clarified that N must be scalar
summary:     @infup/pow.m, @infsupdec/pow.m: Fixed bug when called with vector arguments
summary:     @infsup/overlap.m, @infsupdec/overlap.m: Fixed formatting of vector test.
summary:     doc: Modified a test so that it now passes
summary:     doc: Fixed formatting of example
summary:     doc: SKIP an example that always fail in the doc-test
summary:     doc: Fixed missed ending of example in Getting Started
summary:     Updated coding style for all infsupdec-class functions
summary:     Updated coding style for all infsup-class functions
summary:     Updated coding style for all non-class functions
summary:     @infsupdec/dot.m: Fixed wrong size of decoration when called with two empty matrices
summary:     Small updates to documentation and comments for a lot of function to account for the support of N-dimensional arrays
summary:     doc: A small update to Examples, the interval Newton method can only find zeros inside the initial interval
summary:     doc: Updates to Getting Started, mainly how to create N-dimensional arrays
summary:     doc: Small updates to Preface regarding N-dimensional arrays and fixed one link
summary:     ctc_intersect.m, ctc_union.m: Fixed bugs when used for vectorization and when called with 0 or 1 output arguments
summary:     @infsup/sumsq.m: Updated to use the new functionality of dot.m
summary:     @infsup/dot.m, @infsupdec/dot.m, Added support for N-dimensional vectors. Moved all vectorization to the oct-file. Small changes to functionality to mimic how the sum function works.
summary:     ctc_intersect.m, ctc_union.m: Added support for N-dimensional arrays
summary:     @infsup/fsolve.m: Added support for N-dimensional arrays. Fixed problem with the function in the example. Improved performance when creating the cell used in vectorization.
summary:     @infsup/disp.m: Fixed wrong enumeration of submatrices
summary:     Fixed typo in NEWS.texinfo
summary:     @infsup/diag.m: Added description of the previous bug fix in the NEWS file
summary:     @infsup/diag.m: Fixed error when called with more than 1 argument
summary:     @infsup/meshgrid.m, @infsupdec/meshgrid.m: Removed these functions, now falls back on standard implementation, also updated index
summary:     @infsup/plot.m: Updated documentation
summary:     @infsup/plot3.m: Small change to allow for N-dimensional arrays as input
summary:     @infsupdec/prod.m: Added support for N-dimensional arrays
summary:     @infsup/prod.m: Added support for N-dimensional arrays. Removed short circuit in simple cases.
summary:     @infsup/sum.m, @infsupdec/sum.m, Added support for N-dimensional vectors. Moved all vectorization to the oct-file. Small changes to functionality to mimic Octaves standard sum function.
summary:     @infsup/fminsearch.m: Updated documentation
summary: Finalized support for N-dimensional arrays with binary functions and added support for it with ternary functions.
summary:     midrad.m: Added tests for N-dimensional arrays
summary:     @infsupdec/infsupdec.m: Added full support for creating N-dimensional arrays and added tests
summary:     @infsup/subset.m, @infsupdec/subset.m: Updated documentation
summary:     @infsup/strictsubset.m, @infsupdec/strictsubset.m: Fixed coding style and updated documentation
summary:     @infsup/strictprecedes.m, @infsupdec/strictprecedes.m: Updated documentation
summary:     @infsup/sdist.m: Updated documentation
summary:     @infsup/precedes.m, @infsupdec/precedes.m: Updated documentation
summary:     @infsup/overlap.m, @infsupdec/overlap.m: Fixed coding style and updated documentation
summary:     @infsup/issingleton.m: Updated documentation
summary:     @infsup/ismember.m: Updated documentation
summary:     @infsup/isentire.m: Updated documentation
summary:     @infsup/isempty.m, @infsupdec/isempty.m: Updated documentation
summary:     @infsup/iscommoninterval.m: Updated documentation
summary:     @infsup/interior.m, @infsupdec/interior.m: Updated documentation
summary:     @infsup/idist.m: Updated documentation
summary:     @infsup/hdist.m: Fixed coding style and updated documentation.
summary:     @infsup/sin.m, @infsupdec/sin.m: Added workaround for bug #51283
summary:     @infsup/gt.m: Updated documentation
summary:     @infsup/ge.m: Updated documentation
summary:     @infsup/lt.m, @infsupdec/lt.m: Updated documentation
summary:     @infsup/le.m, @infsupdec/le.m: Updated documentation
summary:     @infsup/disjoint.m, @infsupdec/disjoint.m: Updated documentation
summary: Added support for N-dimensional arrays for unary functions. Also temporary support for binary functions.
summary: Added support for N-dimensional arrays
summary:     @infsup/infsup.m: Fixed documentation and added missing line continuation
summary:     @infsup/disp.m: Fixed documentation
summary:     @infsup/size.m: Fixed documentation
summary:     @infsup/size.m: Fixes to the documentation
summary:     nai.m: Small fix to one of the tests
summary:     hull.m: Fixes according to Olivers review
summary:     @infsup/display.m: Vectorized loop
summary:     @infsup/disp.m: Fixes according to Olivers review, mainly details in the output
summary:     @infsup/infsup.m: Updated documentation and added test for N-dimensional arrays
summary:     @infsup/infsup.m: Fixed coding style
summary:     @infsup/disp.m: Updated documentation and added more tests for N-dimensional arrays
summary:     exacttointerval.m: Uppdated documentation and added tests for N-dimensional arrays
summary:     @infsup/intervaltotext.m, @infsupdec/intervaltotext.m: Updated documentation and added tests for N-dimensional arrays
summary:     @infsup/intervaltotext.m: Fixed coding style
summary:     @infsup/subsref.m, @infsupdec/subsref.m: Added tests for N-dimensional arrays
summary:     @infsup/size.m: Added support for N-dimensional arrays
summary:     @infsup/end.m: Added support for N-dimensional arrays
summary:     nai.m: Added support for N-dimensional arrays
summary:     @infsup/resize.m, @infsupdec/resize.m: Added support for N-dimensional arrays
summary:     @infsup/reshape.m, @infsupdec/reshape.m: Added support for N-dimensional arrays
summary:     @infsup/prepad.m, @infsupdec/prepad.m: Added small parts to the documentation and tests for N-dimensional arrays
summary:     @infsup/postpad.m, @infsupdec/postpad.m: Added small parts to the documentation and tests for N-dimensional arrays
summary:     @infsup/meshgrid.m, @infsupdec/meshgrid.m: Added support for outputting 3-dimensional arrays
summary:     @infsup/cat.m, @infsupdec/cat.m: Added support for N-dimensional arrays
summary:     hull.m: Added support for N-dimensional arrays
summary:     empty.m, entire.m: Added support for N-dimensional arrays
summary:     @infsup/display.m: Added support for displaying high dimensional arrays
summary:     @infsup/disp.m: Added support for displaying high dimensional arrays
summary:     @infsup/disp.m: Fixed coding style
summary:     @infsupdec/infsupdec.m: Temporary fix for creating high dimensional arrays
summary:     @infsupdec/infsupdec.m: Fixed coding style

by Joel Dahne ( at August 28, 2017 03:50 PM

Michele Ginesi

Final Resume

Summary

During the GSoC I worked on different special functions that needed to be improved or implemented from scratch. Discussing with my mentors and the community, we decided that my work should be pushed to a copy of the source code of Octave in my repository [1], and that I should work on a different bookmark for each function. When functions happened to be related (e.g. gammainc and gammaincinv), I worked on them in the same bookmark. I now present a summary and the bookmarks related to the functions.

Incomplete gamma function

bookmark: gammainc
first commit: d1e03faf080b
last commit: 107dc1d24c1b
added files: /libinterp/corefcn/, /scripts/specfun/gammainc.m, /scripts/specfun/gammaincinv.m
removed files:/libinterp/corefcn/, /liboctave/external/slatec-fn/dgami.f, /liboctave/external/slatec-fn/dgamit.f, /liboctave/external/slatec-fn/gami.f, /liboctave/external/slatec-fn/gamit.f, /liboctave/external/slatec-fn/xdgami.f, /liboctave/external/slatec-fn/xdgamit.f, /liboctave/external/slatec-fn/xgmainc.f, /liboctave/external/slatec-fn/xsgmainc.f
modified files: NEWS, /doc/interpreter/arith.txi, /libinterp/corefcn/, /liboctave/external/slatec-fn/, /liboctave/numeric/, /scripts/specfun/

Summary of the work

On this bookmark I worked on the incomplete gamma function and its inverse.
The incomplete gamma function gammainc had both missing features (the "scaled" options were missing) and problems with inaccurate results (see bug #47800). Part of the work had already been done by Marco and Nir; I had to finish it. We decided to implement it as a single .m file (gammainc.m) which calls (for some inputs) a subfunction written in C++ (
The inverse of the incomplete gamma function was missing in Octave (see bug #48036). I implemented it as a single .m file (gammaincinv.m) which uses a Newton method.
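The Newton idea behind such an inverse can be sketched as follows. This is not the actual gammaincinv.m (which chooses its initial guess and handles edge cases far more carefully); the function name and the crude starting point x = a are illustrative assumptions:

```octave
% Hedged sketch: solve gammainc (x, a) = y for x by Newton's method.
% The derivative of the lower regularized incomplete gamma function is
% x^(a-1) * exp (-x) / gamma (a).
function x = gammaincinv_sketch (y, a)
  x = a;                                    % crude initial guess
  for iter = 1:100
    f  = gammainc (x, a) - y;
    fp = x^(a-1) * exp (-x) / gamma (a);
    dx = f / fp;
    x  = x - dx;
    if (abs (dx) < eps * abs (x))
      break;
    end
  end
end
```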

Bessel functions

bookmark: bessel
first commit: aef0656026cc
last commit: e9468092daf9
modified files: /liboctave/external/amos/README, /liboctave/external/amos/cbesh.f, /liboctave/external/amos/cbesi.f, /liboctave/external/amos/cbesj.f, /liboctave/external/amos/cbesk.f, /liboctave/external/amos/zbesh.f, /liboctave/external/amos/zbesi.f, /liboctave/external/amos/zbesj.f, /liboctave/external/amos/zbesk.f, /liboctave/numeric/, /scripts/specfun/bessel.m

Summary of the work

On this bookmark I worked on the Bessel functions.
There was a bug reporting NaN as output when the argument $x$ was too large in magnitude (see bug #48316). The problem came from the Amos library, which refuses to compute the output in such cases. I started by "unlocking" this library so that it computes the output even when the argument exceeds the limit set by the library. Then I compared the results with other libraries (e.g. Cephes [2], the GNU Scientific Library [3] and the C++ special functions library [4]) and with some implementations of my own. In the end, I found that the "unlocked" Amos routines were the best ones to use, so we decided to keep them (in the "unlocked" form), modifying the error variable to signal the loss of accuracy.

Incomplete beta function

bookmark: betainc
first commit: 712a069d2860
last commit: e0c0dd40f096
added files: /libinterp/corefcn/, /scripts/specfun/betainc.m, /scripts/specfun/betaincinv.m
removed files: /libinterp/corefcn/, /liboctave/external/slatec-fn/betai.f, /liboctave/external/slatec-fn/dbetai.f, /liboctave/external/slatec-fn/xbetai.f, /liboctave/external/slatec-fn/xdbetai.f
modified files: /libinterp/corefcn/, /liboctave/external/slatec-fn/, /liboctave/numeric/, /liboctave/numeric/lo-specfun.h, /scripts/specfun/, /scripts/statistics/distributions/betainv.m, /scripts/statistics/distributions/binocdf.m

Summary of the work

On this bookmark I worked on the incomplete beta function and its inverse.
The incomplete beta function lacked the "upper" version and had reported bugs in input validation (see bug #34405) and inaccurate results (see bug #51157). We decided to rewrite it from scratch. It is now implemented as a single .m file (betainc.m) which performs the input validation; the output is then computed by a continued-fraction evaluation done in a C++ function (
The inverse was present in Octave but lacked the "upper" version (since it was missing in betainc itself). The function is now written as a single .m file (betaincinv.m) which implements a Newton method whose initial guess is computed by a few steps of the bisection method.
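The bisection-then-Newton strategy described above can be sketched like this. The real betaincinv.m handles both tails, vectorizes and validates its inputs; the function name, iteration counts and starting bracket here are illustrative assumptions:

```octave
% Hedged sketch: invert betainc (x, a, b) = y on (0, 1).
% The derivative of the regularized incomplete beta function is
% x^(a-1) * (1-x)^(b-1) / beta (a, b).
function x = betaincinv_sketch (y, a, b)
  lo = 0; hi = 1;
  for k = 1:10                       % a few bisection steps for the guess
    mid = (lo + hi) / 2;
    if (betainc (mid, a, b) < y)
      lo = mid;
    else
      hi = mid;
    end
  end
  x = (lo + hi) / 2;
  for k = 1:100                      % Newton refinement
    f  = betainc (x, a, b) - y;
    fp = x^(a-1) * (1-x)^(b-1) / beta (a, b);
    dx = f / fp;
    x  = x - dx;
    if (abs (dx) < eps * abs (x))
      break;
    end
  end
end
```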

Integral functions

bookmark: expint
first commit: 61d533c7d2d8
last commit: d5222cffb1a5
added files:/libinterp/corefcn/, /scripts/specfun/cosint.m, /scripts/specfun/sinint.m
modified files: /doc/interpreter/arith.txi, /libinterp/corefcn/, /scripts/specfun/expint.m, /scripts/specfun/

Summary of the work

On this bookmark I worked on the exponential integral, sine integral and cosine integral. I had already rewritten the exponential integral before the GSoC. Here I just moved the Lentz algorithm to an external C++ function (, consistent with gammainc and betainc. I also modified the exit criterion for the asymptotic expansion, using [5] (pages 1--4) as a reference.
The functions sinint and cosint were present only in the symbolic package; a numerical implementation was missing in core Octave. I wrote them as .m files (sinint.m and cosint.m). Both use the series expansion near the origin and relations with expint for the other values.
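Since the (modified) Lentz algorithm recurs through this whole report, here is a sketch of it in Octave. The real implementations are in C++; the coefficient handles a and b are illustrative placeholders for the function-specific continued-fraction coefficients:

```octave
% Hedged sketch of the modified Lentz scheme for evaluating the continued
% fraction b(0) + a(1)/(b(1) + a(2)/(b(2) + ...)).
% a and b are function handles returning the j-th coefficients.
function f = lentz_sketch (a, b, maxiter)
  tiny = 1e-30;                      % guard against division by zero
  f = b(0);
  if (f == 0), f = tiny; end
  C = f; D = 0;
  for j = 1:maxiter
    D = b(j) + a(j) * D;
    if (D == 0), D = tiny; end
    C = b(j) + a(j) / C;
    if (C == 0), C = tiny; end
    D = 1 / D;
    delta = C * D;
    f = f * delta;
    if (abs (delta - 1) < eps)       % converged
      break;
    end
  end
end
```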

To do

There is still room for improvement in some of the functions I wrote. In particular, gammainc can be made more accurate for certain pairs of values, and I would like to write a template version of the various Lentz algorithms in C++ so as to avoid code duplication across the functions.
In October I will start a PhD in Computer Science, still here in Verona. This will let me stay in contact with my mentor Marco Caliari, so that we can keep working on these aspects.

[5] N. Bleistein and R.A. Handelsman, "Asymptotic Expansions of Integrals", Dover Publications, 1986.

by Michele Ginesi ( at August 28, 2017 04:46 AM

Piyush Jain

Geometry Package (Octave)

Geometry package: Implement boolean operations on polygons

As part of GSoC 2017, this project is intended to implement a set of boolean operations and supporting functions for acting on polygons. These include the standard set of operations: union/OR, intersection/AND, difference/subtraction and exclusive-or/XOR. The following functions are also to be implemented: polybool, ispolycw, poly2ccw, poly2cw, poly2fv, polyjoin and polysplit.

This repository is a fork of the Geometry package, which is part of the Octave Forge project.

This fork adds new functions to the official Geometry package as part of GSoC (Google Summer of Code) 2017.

The official Geometry Package can be found here

Link to commits on official repo :

Added files and functions

  1. /inst/polygons2d/clipPolygon_mrf.m
  2. /inst/polygons2d/private/_poly2struct_.m
  3. /src/martinez.cpp
  4. /src/polygon.cpp
  5. /src/utilities.cpp
  6. /src/
  7. /inst/polygons2d/funcAliases

Bonding Period

After discussing with my mentor and keeping my proposal in mind, I tried to understand and list the tasks in more detail. I first familiarized myself with the conventions of the organisation, and with the first basic thing I would need throughout this project: how to create an oct-interface so that C++ code can be executed in Octave. My first goal was to explore whether the already implemented mex-interface geometry package could be improved in performance by replacing it with an oct-interface. So, to understand how oct-files work, I started implementing small examples to get familiar with them.

First Coding Phase

As stated, there is an already implemented Geometry 3.0 package, which has a mex interface for its functions. I tried to compare its performance with an oct-interface version. For benchmarking, I first implemented my own polyUnion function (using the Clipper library) with an oct-interface (find it here). Then I compared its performance over a number of different sets of polygons (parametrized by the number of vertices) and recorded the elapsed times with both interfaces. Plotting number of vertices versus elapsed time (for oct and mex) gave the following observations:

  • The oct interface performed better than the mex interface.
  • For 10000 vertices, the oct interface took about 0.008 seconds while the mex interface took about 0.014 seconds; that is, about 8×10^-7 seconds per vertex for oct and about 1.4×10^-6 seconds per vertex for mex.
  • As can be seen from the above data, the oct interface was less than twice as fast as the mex interface. From these observations, it was concluded that changing the interface from mex to oct was not worth it, since the improvement in performance was modest. Our next goal was therefore to incorporate new algorithms.
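The timing comparison above can be sketched as follows. polyUnion_oct and polyUnion_mex are placeholders for the two builds of the function, not names from the actual package:

```octave
% Hedged sketch of the oct-vs-mex benchmark described above.
nverts = [100 1000 10000];
for n = nverts
  t = linspace (0, 2*pi, n)(:);
  P = [cos(t) sin(t)];            % an n-vertex polygon
  Q = P + 0.5;                    % a shifted copy to union with
  tic; polyUnion_oct (P, Q); t_oct = toc;
  tic; polyUnion_mex (P, Q); t_mex = toc;
  printf ("n = %5d: oct %.4f s, mex %.4f s\n", n, t_oct, t_mex);
end
```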

After spending a decent amount of time studying the new algorithm and its implementation, I have now started to implement the polybool function. I also compared its performance with the clipPolygon already in the current geometry package: the new algorithm performs better than the old one.

The implementation of boolean operations on polygons (DIFFERENCE, INTERSECTION, XOR and UNION) with the new algorithm is almost done. A little work on tests, demos and documentation is still needed, and I am working on that.

More about the algorithm by F. Martínez, A.J. Rueda, F.R. Feito

The algorithm is easy to understand, among other things because it can be seen as an extension of the classical plane-sweep algorithm for computing the intersection points of a set of segments. When a new intersection between polygon edges is found, the algorithm subdivides the edges at the intersection point. This produces a plane-sweep algorithm with only two kinds of events, left and right endpoints, making it quite simple. Furthermore, the subdivision of edges provides a simple way of processing degeneracies. The overall sketch of the approach for computing boolean operations on polygons:

  • Subdivide the edges of the polygons at their intersection points.
  • Select those subdivided edges that lie inside the other polygon (or that do not, depending on the operation).
  • Join the edges selected in step 2 to form the result polygon.


    Let n be the total number of edges of all the polygons involved in the boolean operation and k be the number of intersections of all the polygon edges. The whole algorithm runs in O((n + k) log n) time.

After raw-testing this new algorithm on several cases, I am now adding a few tests to the m-script. A demo has also been added; it can be seen with the command demo clipPolygon_mrf. The tests can be run with test clipPolygon_mrf.

Second Coding Phase

After implementing the polybool function and checking it, we are planning to include it in the next release of the geometry package. To move forward, I am first importing some functions from last year's GSoC repo and ensuring their MATLAB compatibility. Functions like poly2ccw, poly2cw, joinpolygons and splitPolygons have been created as aliases while ensuring compatibility with their MathWorks counterparts. After that, some time was invested in understanding the CGAL library and its implementation.

The further plan is to sync the matgeom package with the geometry package.

Third Coding Phase

Proceeding towards the next goal, the idea is to devise some way to partially automate the syncing of the matgeom and geometry packages. The issue is that when a new release of the geometry package is planned, some things have been updated in matgeom but not in their geometry counterparts (if they exist). So, before every release, much time has to be invested in manually checking each edit and syncing it into geometry.

To achieve this, a workaround was first implemented on a dummy repository, dummyMatGeom. Its master branch is the (dummy) matGeom, and another branch (named geometry) contains the (dummy) geometry package. To test the entire procedure, go to the dummyMatGeom repository, pull both branches into different folders, say “dummyMatGeom” for the master branch and “dummyGeom” for the geometry branch, then follow the steps explained on the wiki page.


  • Clearly, the above procedure will only sync the script of the function, not its tests and demos, which are in separate folders in a Matlab package structure. Even if we try to concatenate the corresponding test/demo scripts with the function scripts (as in an Octave package structure), there will be discrepancies, because the notion of writing tests for Octave and Matlab packages is quite different. The way Octave tests work is unique to Octave, as explained here. So, we can't simply concatenate the Matlab test scripts with the functions.

  • Git doesn't preserve the original version of the geometry scripts; it overwrites the whole file. For example:

  1. Original file at matGeom (upstream)

% Bla bla
% bla bla bla

function z = blabla (x,y)
% Help of function
for i=1:length(x)
   z(i) = x(i)*y(i);

  2. Ported to geometry

# Copyright - Somebody
# Bla bla
# bla bla bla

# texinfo
# Help of function

function z = blabla (x,y)
   z = x .* y;

  3. Updated in matGeom

% Bla bla
% bla bla bla

function z = blabla (x,y)
% Help of function
% updated to be more clear
z = zeros (size(x));
for i=1:length(x)
   z(i) = x(i)*y(i);

  4. After syncing, the expected result is something like this:

# Copyright - Somebody
# Bla bla
# bla bla bla

# texinfo
# Help of function
# updated to be more clear

function z = blabla (x,y)
   z = zeros (size(x));
   z = x .* y;

But this doesn't happen as expected: Git just finds the files which have been modified and overwrites them completely. Considering possible solutions, there are tools like git patch or interactive staging which let us select the specific lines to commit, but that would not serve our purpose, as it would be no better than syncing manually, file by file. I am looking for a better solution to handle this!

Now, the further idea is to release geometry, and I am getting involved in it to get a feel for how things are done.

Thus, as the program concludes, it's time to say goodbye to GSoC'17. It was overall a great learning experience.

August 28, 2017 12:00 AM

August 19, 2017

Michele Ginesi

Integral functions

During the last week I made a few modifications to expint.m and wrote sinint.m and cosint.m from scratch. All the work can be found on the bookmark expint of my repository.


As I mentioned here, I rewrote expint.m from scratch before the GSoC. During the last week I moved the Lentz algorithm to a .cc function (to remain consistent with the implementations of gammainc and betainc) and added a few tests.


The sinint function is present in the symbolic package, but no numerical implementation exists in core Octave.
The sine integral is defined as $$ \text{Si} (z) = \int_0^z \frac{\sin(t)}{t}\,dt. $$ To compute it we use the series expansion $$ \text{Si}(z) = \sum_{n=0}^\infty \frac{(-1)^n z^{2n+1}}{(2n+1)(2n+1)!} $$ when the modulus of the argument is smaller than 2. For bigger values we use the following relation with the exponential integral $$ \text{Si}(z) = \frac{1}{2i} \left( E_1(iz) - E_1(-iz) \right) + \frac{\pi}{2}, \quad |\text{arg}(z)| < \frac{\pi}{2}, $$ together with the symmetry relations $$ \text{Si}(-z) = -\text{Si}(z), \qquad \text{Si}(\bar{z}) = \overline{\text{Si}(z)}. $$ The function is written as a single .m file.
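The series for small arguments can be sketched directly in Octave. This is only an illustration of the formula above; the shipped sinint.m is more careful about convergence, vectorization and complex arguments:

```octave
% Hedged sketch of the series Si(z) = sum (-1)^n z^(2n+1) / ((2n+1)(2n+1)!),
% valid for |z| < 2.  The term recurrence avoids computing factorials.
function y = si_series (z)
  y = 0; term = z; n = 0;            % term holds (-1)^n z^(2n+1) / (2n+1)!
  while (n == 0 || abs (term / (2*n + 1)) > eps * abs (y))
    y = y + term / (2*n + 1);
    n = n + 1;
    term = -term * z^2 / ((2*n) * (2*n + 1));
  end
end
```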


Like sinint, cosint is present in the symbolic package, but there is no numerical implementation in core.
The cosine integral is defined as $$ \text{Ci} (z) = -\int_z^\infty \frac{\cos(t)}{t}\,dt. $$ An equivalent definition is $$ \text{Ci} (z) = \gamma + \log z + \int_0^z \frac{\cos t - 1}{t}\,dt. $$ To compute it we use the series expansion $$ \text{Ci}(z) = \gamma + \log z + \sum_{n=1}^\infty \frac{(-1)^n z^{2n}}{(2n)(2n)!} $$ when the modulus of the argument is smaller than 2. For bigger values we use the following relation with the exponential integral $$ \text{Ci}(z) = -\frac{1}{2} \left( E_1(iz) + E_1(-iz) \right), \quad |\text{arg}(z)| < \frac{\pi}{2}, $$ together with the symmetry relations $$ \text{Ci}(-z) = \text{Ci}(z) - i\pi, \quad 0 < \text{arg}(z) < \pi, \qquad \text{Ci}(\bar{z}) = \overline{\text{Ci}(z)}. $$ Like sinint, cosint is written as a single .m file.

by Michele Ginesi ( at August 19, 2017 02:32 AM

August 18, 2017

Joel Dahne

The Final Polish

We are preparing to release version 3.0.0 of the interval package, and this last week has mainly been about fixing minor bugs related to the release. I mention two of the more interesting ones here.

Compact Format

We (Oliver) recently added support for "format compact" when printing intervals. It turns out that the way to determine whether compact format is enabled differs greatly between Octave versions. There are at least three different ways to get the information.

In older releases (< 4.2.0, I believe) you use "get (0, "FormatSpacing")", but there appears to be a bug in versions < 4.0.0 for which this always returns "loose".

For the current tip of the development branch you can use "[~, spacing] = format ()" to get the spacing.

Finally, for the versions in between you use "__compactformat__ ()".

In the end Oliver (probably) found a way to handle this mess, and compact format should now be fully supported for intervals. The function that does this is available here

Dot-product of Empty Matrices

When updating "dot" to support N-dimensional arrays I also modified it so that it behaves similarly to Octave's standard implementation. The difference is in how it handles empty input. Previously we had

> x = infsupdec (ones (0, 2));
> dot (x, x)
ans = 0×2 interval matrix

but with the new version we get

> dot (x, x)
ans = 1×2 interval vector
   [0]_com   [0]_com

which is consistent with the standard implementation.

In the function we use "min" to compute the decoration for the result. Normally "min (x)" and "dot (x, x)" return results of the same size (the dimension along which the reduction is computed is set to 1), but they handle empty input differently. We have

> x = ones (0, 2);
> dot (x, x)
ans =
   0   0
> min (x)
ans = [](0x2)

This meant that the decoration would be incorrect, since the implementation assumed they always had the same size. Fortunately the solution was very simple: if the dimension along which we are computing the dot-product has zero length, the decoration should always be "com". So adding a check for that was enough.

You could argue that "min (ones (0, 2))" should return "[inf, inf]", similarly to how many of the other reductions, like "sum" or "prod", return their unit for empty input. But this would most likely be very confusing for a lot of people, and it is not compatible with how Matlab does it either.
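The unit-of-the-reduction behaviour discussed above is easy to demonstrate in plain Python, where "sum" and "prod" return their identity elements for empty input while "min" raises an error:

```python
import math

empty = []

# Reductions with a natural identity return it for empty input,
# just like "dot" over a zero-length dimension returns 0.
assert sum(empty) == 0        # additive identity
assert math.prod(empty) == 1  # multiplicative identity

# "min" has no finite identity element, so empty input is an error
# rather than inf.
try:
    min(empty)
except ValueError as err:
    print("min of empty input fails:", err)
```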

Updates on the Taylor Package

I have also had some time to work on the Taylor package this week. The basic utility functions are now completed and I have started to work on functions for actually computing with Taylor expansions. At the moment only a limited number of functions are implemented. For example, we can calculate the Taylor expansion of order 4 for the function $\frac{e^x + \log(x)}{1 + x}$ at $x = 5$.

## Create a variable of degree 4 and with value 5
> x = taylor (infsupdec (5), 4)
x = [5]_com + [1]_com X + [0]_com X^2 + [0]_com X^3 + [0]_com X^4

## Calculate the function
> (exp (x) + log (x))./(1 + x)
ans = [25.003, 25.004]_com + [20.601, 20.602]_com X + [8.9308, 8.9309]_com X^2 + [2.6345, 2.6346]_com X^3 + [0.59148, 0.59149]_com X^4

by Joel Dahne ( at August 18, 2017 05:01 PM

August 12, 2017

Michele Ginesi


betaincinv

The inverse of the incomplete beta function was present in Octave, but without the "upper" option (since it was missing in betainc itself). We decided to rewrite it from scratch using Newton's method, as for gammaincinv (see my post on it if you are interested).
To make the code numerically more accurate, we decide which version ("lower" or "upper") to invert depending on the inputs.
At first we compute the trivial values (0 and 1). Then the remaining terms are divided into two sets: those that will be inverted with the "lower" version and those that will be inverted with the "upper" one. In both cases, we perform 10 iterations of the bisection method and then run Newton's method.
The implementation (together with the new implementation of betainc) can be found on my repository, bookmark "betainc".
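The bisection-then-Newton scheme is generic for any smooth monotone function. Here is a hedged Python sketch of the idea, inverting the error function (math.erf) as a stand-in, since a betainc routine is not in the Python standard library:

```python
import math

def invert(f, df, y, lo, hi, bisect_steps=10, newton_steps=20):
    """Find x with f(x) = y for increasing f: 10 bisection steps
    to get a good starting point, then Newton's method."""
    for _ in range(bisect_steps):
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    for _ in range(newton_steps):
        x -= (f(x) - y) / df(x)
    return x

# Invert y = erf(x) on [0, 3]; the derivative is 2/sqrt(pi) * exp(-x^2).
erf_prime = lambda t: 2.0 / math.sqrt(math.pi) * math.exp(-t * t)
x = invert(math.erf, erf_prime, math.erf(1.0), 0.0, 3.0)
print(x)  # very close to 1.0
```

The bisection warm-up is what makes Newton's method safe here: started too far from the root, Newton can diverge, which is presumably why the package uses the same two-phase strategy.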

by Michele Ginesi ( at August 12, 2017 08:52 AM

August 11, 2017

Joel Dahne

Improving the Automatic Tests

Oliver and I have been working on improving the test framework used for the interval package. The package shares a large number of tests with other interval packages through an interval test framework that Oliver created. Here is the repository.

Creating the Tests

Previously these tests were separated from the rest of the package and you usually ran them with the help of the Makefile. Now Oliver has moved them into the m-files and you can run them, together with the other tests for the function, with "test @infsup/function" in Octave. This makes it much easier to test the functions directly.

In addition to making the tests easier to use we also wanted to extend them to test not only scalar evaluation but also vector evaluation. The test data, input and expected output, is stored in a cell array, and when performing the scalar testing we simply loop over that cell array and run the function for each element. The actual code looks like this (in this case for plus)

%! # Scalar evaluation
%! testcases = testdata.NoSignal.infsup.add;
%! for testcase = [testcases]'
%!   assert (isequaln (...
%!     plus (testcase.in{1}, testcase.in{2}), ...
%!     testcase.out));
%! endfor

For testing the vector evaluation we simply concatenate the cell array into a vector and give that to the function. Here is what that code looks like

%! # Vector evaluation
%! testcases = testdata.NoSignal.infsup.add;
%! in1 = vertcat (vertcat (testcases.in){:, 1});
%! in2 = vertcat (vertcat (testcases.in){:, 2});
%! out = vertcat (testcases.out);
%! assert (isequaln (plus (in1, in2), out));

Lastly we also wanted to test evaluation of N-dimensional arrays. This is done by concatenating the data into a vector and then reshaping that vector into an N-dimensional array. But what size should we use for the array? Well, we want at least three dimensions, because otherwise we are not really testing N-dimensional arrays. My solution was to completely factor the length of the vector and use that as the size, "testsize = factor (length (in1))", and if the length of the vector has two or fewer factors we add a few elements to the end until we get at least three factors. This is the code for that

%! # N-dimensional array evaluation
%! testcases = testdata.NoSignal.infsup.add;
%! in1 = vertcat (vertcat (testcases.in){:, 1});
%! in2 = vertcat (vertcat (testcases.in){:, 2});
%! out = vertcat (testcases.out);
%! # Reshape data
%! i = -1;
%! do
%!   i = i + 1;
%!   testsize = factor (numel (in1) + i);
%! until (numel (testsize) > 2)
%! in1 = reshape ([in1; in1(1:i)], testsize);
%! in2 = reshape ([in2; in2(1:i)], testsize);
%! out = reshape ([out; out(1:i)], testsize);
%! assert (isequaln (plus (in1, in2), out));

This works very well, except when the number of test cases is too small. If the number of tests is less than four this will fail. But there are only a handful of functions with that few tests, so I fixed those independently.

Running the tests

Okay, so we have created a bunch of new tests for the package. Do we actually find any new bugs with them? Yes!

The function pow.m failed on the vector test. The problem? In one place "&&" was used instead of "&". For scalar input I believe these behave the same, but they differ for vector input.

Both nthroot.m and pownrev.m failed the vector test: neither allowed vectorization of the integer parameter. For nthroot.m this matches the standard Octave version, so it should perhaps not be treated as a bug. The function pownrev.m uses nthroot.m internally, so it had the same limitation. This time I would however treat it as a bug, because pown.m does allow vectorization of the integer parameter, and if that supports it the reverse function should probably do so too. So I implemented support for vectorization of the integer parameter in both nthroot.m and pownrev.m, and they now pass the test.

No problems were found with the N-dimensional tests that the vector tests did not find. This is a good indication that the support for N-dimensional arrays is at least partly correct. Always good to know!

by Joel Dahne ( at August 11, 2017 03:04 PM

August 02, 2017

Michele Ginesi


betainc

The betainc function has two reported bugs: #34405 on input validation and #51157 on inaccurate results. Moreover, it is missing the "upper" option, which is present in MATLAB.

The function

The incomplete beta function ratio is defined as $$I_x(a,b) = \dfrac{B_x(a,b)}{B(a,b)},\quad 0\le x \le 1,\,a>0,\,b>0,$$ where $B(a,b)$ is the classical beta function and $$B_x(a,b)=\int_0^x t^{a-1}(1-t)^{b-1}\,dt.$$ In the "upper" version the integral goes from $x$ to $1$. To compute this we will use the fact that $$\begin{array}{rcl} I_x(a,b) + I_x^U(a,b) &=& \dfrac{1}{B(a,b)}\left( \int_0^x t^{a-1}(1-t)^{b-1}\,dt + \int_x^1 t^{a-1}(1-t)^{b-1}\,dt\right)\\ &=&\dfrac{1}{B(a,b)}\int_0^1 t^{a-1}(1-t)^{b-1}\,dt\\ &=&\dfrac{B(a,b)}{B(a,b)}\\ &=&1 \end{array}$$ and the relation $$I_x(a,b) + I_{1-x}(b,a) = 1$$ so that $$I_x^U(a,b) = I_{1-x}(b,a).$$

The implementation

Even if it is possible to obtain a Taylor series representation of the incomplete beta function, it seems not to be used. Indeed the MATLAB help cites only the continued fraction representation present in "Handbook of Mathematical Functions" by Abramowitz and Stegun: $$I_x(a,b) = \dfrac{x^a(1-x)^b}{aB(a,b)}\left(\dfrac{1}{1+} \dfrac{d_1}{1+} \dfrac{d_2}{1+}\ldots\right)$$ with $$d_{2m+1} = -\dfrac{(a+m)(a+b+m)}{(a+2m)(a+2m+1)}x$$ and $$d_{2m} = \dfrac{m(b-m)}{(a+2m-1)(a+2m)}x,$$ which seems to be the same strategy used by GSL. To be more precise, this continued fraction is computed directly when $$x < \dfrac{a-1}{a+b-2},$$ otherwise the computed fraction is used to evaluate $I_{1-x}(b,a)$ and then the fact that $$I_x(a,b) = 1-I_{1-x}(b,a)$$ is used. In my implementation I use a continued fraction present in "Handbook of Continued Fractions for Special Functions" by Cuyt, Petersen, Verdonk, Waadeland and Jones, which is more complicated but converges in fewer steps: $$\dfrac{B(a,b)I_x(a,b)}{x^a(1-x)^b} = \mathop{\huge{\text{K}}}_{m=1}^\infty \left(\dfrac{\alpha_m(x)}{\beta_m(x)}\right),$$ where $$\begin{array}{rcl} \alpha_1(x) &=&1,\\ \alpha_{m+1}(x) &=&\dfrac{(a+m-1)(a+b+m-1)(b-m)m}{(a+2m-1)^2}x^2,\quad m\geq 1,\\ \beta_{m+1}(x) &=&a + 2m + \left( \dfrac{m(b-m)}{a+2m-1} - \dfrac{(a+m)(a+b+m)}{a+2m+1} \right)x,\quad m\geq 0. \end{array}$$ This is most useful when $$x\leq\dfrac{a}{a+b},$$ thus the continued fraction is computed directly when this condition is satisfied, while it is used to evaluate $I_{1-x}(b,a)$ otherwise.
The function is now written as a .m file, which checks the validity of the inputs and splits them into the values that need to be rescaled and those that do not. Then the continued fraction is computed by an external .c function. Finally, the .m file recovers $I_x(a,b)$ from the computed fraction.
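As an illustration (not the package's C code), the Cuyt et al. continued fraction above can be evaluated with the modified Lentz algorithm, sketched here in Python:

```python
import math

def betainc_lower(x, a, b, max_iter=200, eps=1e-15, tiny=1e-300):
    """Regularized incomplete beta I_x(a, b) via the Cuyt et al.
    continued fraction, evaluated with the modified Lentz algorithm.
    Intended for x <= a / (a + b); a sketch, not production code."""

    def alpha(m):                 # numerator alpha_m, m >= 1
        if m == 1:
            return 1.0
        k = m - 1                 # alpha_{k+1} formula with k >= 1
        return ((a + k - 1.0) * (a + b + k - 1.0) * (b - k) * k
                / (a + 2.0 * k - 1.0) ** 2) * x * x

    def beta(m):                  # denominator beta_m, m >= 1
        k = m - 1                 # beta_{k+1} formula with k >= 0
        return a + 2.0 * k + (k * (b - k) / (a + 2.0 * k - 1.0)
                              - (a + k) * (a + b + k)
                              / (a + 2.0 * k + 1.0)) * x

    # Modified Lentz evaluation of K_{m>=1}(alpha_m / beta_m).
    f, c, d = tiny, tiny, 0.0
    for m in range(1, max_iter + 1):
        am, bm = alpha(m), beta(m)
        d = bm + am * d
        d = tiny if d == 0.0 else d
        c = bm + am / c
        c = tiny if c == 0.0 else c
        d = 1.0 / d
        delta = c * d
        f *= delta
        if abs(delta - 1.0) < eps:
            break

    # Rescale by x^a (1-x)^b / B(a, b), using log-gamma for B.
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp(a * math.log(x) + b * math.log1p(-x) - log_B) * f

print(betainc_lower(0.5, 2.0, 2.0))   # 0.5 (exact for integer a, b)
print(betainc_lower(0.25, 2.0, 3.0))  # 0.26171875
```

For integer a and b the fraction terminates exactly (some $\alpha_m$ vanishes), which makes the two printed values easy to verify against the closed-form binomial sum.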


The next step will be to write the inverse. It was already present in Octave, but it is missing the "upper" version, so it has to be rewritten.

by Michele Ginesi ( at August 02, 2017 04:12 AM

August 01, 2017

Michele Ginesi

Second period summary

Here I present a brief summary of the work done in this second month.

Bessel functions

The topic of this month was Bessel functions. On the bug tracker only one problem is reported, regarding the "J" variant (see bug #48316), but the same problem is present in every type of Bessel function (they return NaN + NaNi when the argument is too big).

Amos, Cephes, C++ and GSL

Currently, Bessel functions in Octave are computed via the Amos library, written in Fortran. Studying the implementation I discovered that the reported bug follows from the fact that, if the input is too large in modulus, the function zbesj.f sets IERR to 4 (IERR is a variable which describes how the algorithm terminated) and sets the output to zero; then returns NaN when IERR=4. Obviously, the same happens for the other Bessel functions.
What I initially did was to "unlock" these .f files so that they still set IERR=4 but compute the output anyway, and to modify so that it returns the value even when IERR is 4. Then I tested the accuracy, together with other libraries.
On the bug report the Cephes library was suggested, so I also tested it, along with the C++ special mathematical functions library and the GSL (GNU Scientific Library). Unfortunately, all these alternatives work worse than Amos. I also tried to study and implement some asymptotic expansions myself, to use in the cases which give inaccurate results, unfortunately without success.
For completeness, the following tables show some results of the tests (ERR means that Cephes reported a total loss of precision error):

x       1e09          1e10          1e11          1e12          1e13
Amos    1.6257e-16    0.0000e+00    0.0000e+00    1.3379e-16    1.1905e-16
Cephes  2.825060e-08  ERR           ERR           ERR           ERR
GSL     4.8770e-16    2.2068e-16    4.2553e-16    1.3379e-16    1.1905e-16
C++     2.82506e-08   2.68591e-07   1.55655e-05   8.58396e-07   0.000389545

x       1e15          1e20          1e25          1e30
Amos    1.3522e-16    1.6256e-16    15.22810      2.04092
GSL     1.3522e-16    0             15.22810      2.04092

The problem with the double precision

As I explained in my last post, the problem seems to be that there are no efficient algorithms in double precision for arguments this large. In fact, the error made by Amos is quite small if compared with the value computed with SageMath (which I'm using as the reference), but only if we use double precision in Sage as well: using more digits, one can see that even the first digit changes. Here are the tests:
sage: bessel_J(1,10^40).n()
sage: bessel_J(1,10^40).n(digits=16)
sage: bessel_J(1,10^40).n(digits=20)
sage: bessel_J(1,10^40).n(digits=25)
sage: bessel_J(1,10^40).n(digits=30)
sage: bessel_J(1,10^40).n(digits=35)
sage: bessel_J(1,10^40).n(digits=40)
The value stabilizes only when we use more than 30 digits (twice the number of digits used in double precision).

The decision

Even if we are aware that the result is not always accurate, for MATLAB compatibility we decided to unlock the Amos functions, since they are still the most accurate and, even more importantly, the type of the inputs is the same as in MATLAB (while, for example, GSL doesn't accept complex $x$ values). Moreover, in Octave it is possible to obtain the value of IERR as an output, something that is not possible in MATLAB.
You can find the work on the bookmark "bessel" of my repository.


During these last days I also started to implement betainc from scratch; I think it will be ready in the first days of August. Then it will be necessary to also rewrite betaincinv, since the current version doesn't have the "upper" option. This should not be too difficult. I think we can use a simple Newton method (as for gammaincinv); the only problem will be, as for gammaincinv, to find good initial guesses.

by Michele Ginesi ( at August 01, 2017 09:30 AM

July 28, 2017

Joel Dahne

A Package for Taylor Arithmetic

In the last blog post I wrote about what was left to do with implementing support for N-dimensional arrays in the interval package. There are still some things to do but I have had, and most likely will have, some time to work on other things. Before the summer I started to work on a proof of concept implementation of Taylor arithmetic in Octave and this week I have continued to work on that. This blog post will be about that.

A Short Introduction to Taylor Arithmetic

Taylor arithmetic is a way to calculate with truncated Taylor expansions of functions. The main benefit is that it can be used to calculate derivatives of arbitrary order.

Taylor expansions or Taylor series (I will use these terms interchangeably) are well known, and from Wikipedia we have: The Taylor series of a real or complex valued function $f(x)$ that is infinitely differentiable at a real or complex number $a$ is the power series
f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + ....
From the definition it is clear that if we happen to know the coefficients of the Taylor series of $f$ at the point $a$ we can also calculate all derivatives of $f$ at that point by simply multiplying a coefficient with the corresponding factorial.

The simplest example of Taylor arithmetic is addition of two Taylor series. If $f$ has the Taylor series $\sum_{n=0}^\infty (f)_n (x-a)^n$ and $g$ the Taylor series $\sum_{n=0}^\infty (g)_n (x-a)^n$ then $f + g$ will have the Taylor series
\sum_{n=0}^\infty (f + g)_n (x-a)^n = \sum_{n=0}^\infty ((f)_n + (g)_n)(x-a)^n.
If we instead consider the product, $fg$, we get
\sum_{n=0}^\infty (fg)_n (x-a)^n = \sum_{n=0}^\infty \left(\sum_{i=0}^n (f)_i(g)_{n-i}\right)(x-a)^n.

With a bit of work you can find similar formulas for other standard functions. For example the coefficients, $(e^f)_n$, of the Taylor expansion of $\exp(f)$ are given by $(e^f)_0 = e^{(f)_0}$ and for $n > 0$
(e^f)_n = \frac{1}{n}\sum_{i=0}^{n-1}(n-i)(e^f)_i(f)_{n-i}.

When doing the computations on a computer we consider truncated Taylor series: we choose an order and keep only the coefficients up to that order. There is also nothing that stops us from using intervals as coefficients, which allows us to get rigorous enclosures of derivatives of functions.
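The recurrences above translate directly into code. Here is a plain-Python sketch on lists of coefficients (plain floats rather than intervals, so no rigorous enclosures):

```python
import math

def t_add(f, g):
    """Coefficient-wise sum of two truncated Taylor series."""
    return [a + b for a, b in zip(f, g)]

def t_mul(f, g):
    """Cauchy product, truncated to the common order."""
    n = len(f)
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(n)]

def t_exp(f):
    """exp of a truncated Taylor series via the recurrence above."""
    e = [math.exp(f[0])]
    for n in range(1, len(f)):
        e.append(sum((n - i) * e[i] * f[n - i] for i in range(n)) / n)
    return e

x = [0.0, 1.0, 0.0, 0.0]  # the variable x expanded at a = 0, order 3
print(t_exp(x))           # coefficients of e^x: 1, 1, 1/2, 1/6
```

Applying t_exp to the variable x at $a = 0$ recovers the familiar coefficients of $e^x$, which is a handy sanity check for any implementation of the recurrence.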

For a more complete introduction to Taylor arithmetic in conjunction with interval arithmetic see [1], which was my first encounter with it. For another implementation in code take a look at [2].

Current Implementation Status

As mentioned in the last post my repository can be found here

When I started to write on the package, before summer, my main goal was to get something working quickly. Thus I implemented the basic functions needed to do some kind of Taylor arithmetic, a constructor, some help functions and a few functions like $\exp$ and $\sin$.

This last week I have focused on implementing the basic utility functions, for example "size", and rewriting the constructor. In the process I think I have broken the arithmetic functions; I will fix them later.

You can at least create and display Taylor expansions now. For example creating a variable $x$ with value 5 of order 3

> x = taylor (infsupdec (5), 3)
x = [5]_com + [1]_com X + [0]_com X^2 + [0]_com X^3

or a matrix with 4 variables of order 2

> X = taylor (infsupdec ([1, 2; 3, 4]), 2)
X = 2×2 Taylor matrix of order 2

ans(:,1) =

   [1]_com + [1]_com X + [0]_com X^2
   [3]_com + [1]_com X + [0]_com X^2

ans(:,2) =

   [2]_com + [1]_com X + [0]_com X^2
   [4]_com + [1]_com X + [0]_com X^2

If you want to create a Taylor expansion with explicitly given coefficients, you can do that as well

> f = taylor (infsupdec ([1; -2; 3; -4]))
f = [1]_com + [-2]_com X + [3]_com X^2 + [-4]_com X^3

This would represent a function $f$ with $f(a) = 1$, $f'(a) = -2$, $f''(a) = 3 \cdot 2! = 6$ and $f'''(a) = -4 \cdot 3! = -24$.

Creating a Package

My goal is to create a full package for Taylor arithmetic along with some functions making use of it. The most important step is of course to create a working implementation, but there are other things to consider as well. There are a few things I have not completely understood about it. Depending on how much time I have next week I will try to read a bit more about it and probably ask some questions on the mailing list. Here are at least some of the things I have been thinking about

Mercurial vs Git?

I have understood that most of the Octave Forge packages use Mercurial for version control. I was not familiar with Mercurial before, so the natural choice for me was to use Git. Now I feel I could switch to Mercurial if needed, but I would like to understand the potential benefits better; I'm still new to Mercurial so I don't have the full picture. One benefit is of course that it is easier if most packages use the same system, but other than that?

How much work is it?

If I were to manage a package for Taylor arithmetic, how much work is it? This summer I have been working full time with Octave so I have had lots of time, but this will of course not always be the case. I know it takes time if I want to continue to improve the package, but how much, and what kind of, continuous work is there?

What is needed besides the implementation?

From what I have understood there are a couple of things that should be included in a package besides the actual m-files. For example a Makefile for creating the release, an INDEX-file and a CITATION-file. I should probably also include some kind of documentation, especially since Taylor arithmetic is not that well known. Is there anything else I need to think about?

What is the process to get a package approved?

If I were to apply (whatever that means) for the package to go to Octave forge what is the process for that? What is required before it can be approved and what is required after it is approved?

[1] W. Tucker, Validated Numerics, Princeton University Press, 2011.
[2] F. Blomquist, W. Hofschuster, W. Krämer, Real and complex Taylor arithmetic in C-XSC, Preprint 2005/4, Bergische Universität Wuppertal.

by Joel Dahne ( at July 28, 2017 09:56 PM

July 27, 2017

Enrico Bertino

Deep learning functions

Hi there,

the second part of the project is finishing. This period was quite interesting because I had to dive into the theory behind neural networks. In particular [1], [2], [3] and [4] were very useful, and I will sum up some concepts below. On the other hand, coding became more challenging and the focus was on the Python layer, in particular the way to structure the class in order to make everything scalable and generalizable. Summarizing the situation: in the first period I implemented all the Octave classes for the user interface. Those are Matlab compatible and they call some Python functions in a seamless way. On the Python side, the TensorFlow API is used to build the graph of the neural network and perform training, evaluation and prediction.

I implemented the three core functions: trainNetwork, SeriesNetwork and trainingOptions. To do this, I used a Python class in which I initialize an object with the graph of the network, and I store this object as an attribute of SeriesNetwork. That way, I call the methods of this class from trainNetwork to perform the training and from predict/classify to perform the predictions. Since it was quite hard to have a clear vision of the situation, I used a Python wrapper (Keras) that allowed me to focus on the integration, "unpack" the problem and go forth "module" by "module". Now I am removing the dependency on the Keras library, using the TensorFlow API directly. The code is in my repo [5].

Since I have already explained in my last posts how I structured the package, in this post I would like to focus on the theoretical basis of the deep learning functions used in the package. In particular I will present the available layers and the parameters that are available for the training.

Theoretical dive

I. Fundamentals

I want to start with a brief explanation of the perceptron and of backpropagation, two key concepts in the artificial neural network world.


Let's start with the perceptron, which is the starting point for understanding neural networks and their components. A perceptron is simply a "node" that takes several binary inputs, $ x_1, x_2, \ldots $, and produces a single binary output:

The neuron's output, 0 or 1, is determined by whether the linear combination of the inputs $ \omega \cdot x = \sum_j \omega_j x_j $ is less than or greater than some threshold value. That is a simple mathematical model, but it is very versatile and powerful, because we can combine many perceptrons, and by varying the weights and the threshold we can get different models. Moving the threshold to the other side of the inequality and replacing it by what's known as the perceptron's bias, $b = -\text{threshold}$, we can rewrite it as

$ out = \begin{cases} 0 & \omega \cdot x + b \leq 0 \\ 1 & \omega \cdot x + b > 0 \end{cases} $

Using perceptrons as the artificial neurons of a network, it turns out that we can devise learning algorithms which automatically tune the weights and biases. This tuning happens in response to external stimuli, without direct intervention by a programmer, and this enables "automatic" learning.
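A toy Python perceptron makes the threshold model concrete; here the weights are picked by hand (not learned) to compute the AND function:

```python
def perceptron(x, w, b):
    """Binary output: 1 if w . x + b > 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# AND gate: fires only when both inputs are 1.
w, b = [1.0, 1.0], -1.5
for inputs in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(inputs, "->", perceptron(inputs, w, b))
```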

Speaking about learning algorithms, the idea is simple: suppose we make a small change in some weight or bias and observe the corresponding change in the output of the network. If a small change in a weight or bias causes only a small change in the output, then we can use this fact to modify the weights and biases to get our network to behave more in the manner we want. The problem is that this isn't what happens when our network contains perceptrons, since a small change in any single perceptron can sometimes cause its output to completely flip, say from 0 to 1. We can overcome this problem by introducing an activation function: instead of the binary output we use a smooth function of the weights and bias. The most common is the sigmoid function:
$ \sigma (\omega \cdot x + b ) = \dfrac{1}{1 + e^{-(\omega \cdot x + b ) } } $

Figure 1. Single neuron 

With the smoothness of the activation function $ \sigma $ we are able to analytically measure the output changes since $ \Delta out $ is a linear function of the changes $ \Delta \omega $ and $ \Delta b$ :
$ \Delta out \approx \sum_j \dfrac{\partial out}{\partial \omega_j} \Delta \omega_j + \dfrac{\partial out}{\partial b} \Delta b $

Loss function

Let x be a training input and y(x) the desired output. What we'd like is an algorithm which lets us find weights and biases so that the output from the network approximates y(x) for all x. The most used loss function is the mean squared error (MSE):
$ L( \omega, b) = \dfrac{1}{2n} \sum_x || y(x) - out ||^2, $
where n is the total number of training inputs and out is the vector of outputs from the network when x is the input.
To minimize the loss function there are many optimization algorithms. The one we will use is gradient descent, in which every iteration within an epoch is defined as:

$ \omega_k \rightarrow \omega_k' = \omega_k - \dfrac{\eta}{m} \sum_j \dfrac{\partial L_{X_j}}{\partial \omega_k} $
$ b_k \rightarrow b_k' = b_k - \dfrac{\eta}{m} \sum_j \dfrac{\partial L_{X_j}}{\partial b_k} $

where m is the size of the batch of inputs with which we feed the network and $ \eta $ is the learning rate.


The last concept that I would like to emphasize is backpropagation. Its goal is to compute the partial derivatives $ \partial L / \partial \omega $ and $ \partial L / \partial b $ of the loss function L with respect to any weight or bias in the network. The reason is that computing those partial derivatives naively is computationally heavy, and the network training would be excessively slow.

Let $ z^l $ be the weighted input to the neurons in layer l, which can be viewed as a linear function of the activations of the previous layer: $ z^l = \omega^l a^{l-1} + b^l $.
In the fundamental steps of backpropagation we compute:

1) the final error:
$ \delta ^L = \nabla_a L \odot \sigma' (z^L) $
The first term measures how fast the loss is changing as a function of every output activation, and the second term measures how fast the activation function is changing at $ z^L $

2) the error of every layer l:
$ \delta^l = ((\omega^{l+1})^T \delta^{l+1} ) \odot \sigma' (z^l) $

3) the partial derivative of the loss function with respect to any bias in the net
$ \dfrac{\partial L}{\partial b^l_j} = \delta^l_j $

4) the partial derivative of the loss function with respect to any weight in the net
$ \dfrac{\partial L}{\partial \omega^l_{jk}} = a_k^{l-1} \delta^l_j $

We can therefore update the weights and the biases with gradient descent and train the network. Since the inputs can be too numerous, we can use only a random sample of them. Stochastic Gradient Descent (SGD) simply does away with the expectation in the update and computes the gradient of the parameters using only a single or a few training examples. In particular, we will use SGD with momentum, a method that helps accelerate SGD in the relevant direction and dampens oscillations. It does this by adding a fraction γ of the update vector of the past time step to the current update vector.
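The momentum update itself is tiny. Here is a hedged Python sketch minimizing a one-dimensional quadratic loss (the function and the parameter values are made up for illustration; γ is the momentum fraction, η the learning rate):

```python
def sgd_momentum(grad, w0, eta=0.1, gamma=0.9, steps=500):
    """Gradient descent with momentum:
    v <- gamma * v + eta * grad(w);  w <- w - v."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = gamma * v + eta * grad(w)
        w = w - v
    return w

# Minimize L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = sgd_momentum(lambda w: 2.0 * (w - 3.0), w0=0.0)
print(w)  # converges towards the minimizer 3.0
```

With γ = 0 this reduces to plain gradient descent; the momentum term is what carries the iterate through flat regions and damps the zig-zagging.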

II. Layers

Here is a brief explanation of the layers that I am considering in the trainNetwork class


The convolution layer is the core building block of a convolutional neural network (CNN), and it does most of the computational heavy lifting. It derives its name from the "convolution" operator. The primary purpose of convolution is to extract features from the input image, preserving the spatial relationship between pixels, by learning image features using small squares of input data.

Figure 2. Feature extraction with convolution (image taken from
In the example in Fig. 2, the 3×3 matrix is called a "filter" or "kernel", and the matrix formed by sliding the filter over the image and computing the dot product is called the "Convolved Feature" or "Activation Map" (or "Feature Map"). In practice, a CNN learns the values of these filters on its own during the training process (although we still need to specify parameters such as the number of filters, filter size, architecture of the network etc. before the training process). The more filters we have, the more image features get extracted and the better our network becomes at recognizing patterns in unseen images.
The size of the Feature Map depends on three parameters: the depth (that corresponds to the number of filters we use for the convolution operation), the stride (that is the number of pixels by which we slide our filter matrix over the input matrix) and the padding (that consists in padding the input matrix with zeros around the border).
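A minimal valid-mode convolution with a stride parameter, in plain Python for illustration (like most deep-learning frameworks this is really cross-correlation, and padding is omitted):

```python
def conv2d(image, kernel, stride=1):
    """Slide the kernel over the image; dot product at each position
    (valid mode, no padding)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(0, ih - kh + 1, stride):
        row = []
        for c in range(0, iw - kw + 1, stride):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]  # sums the main diagonal of each 2x2 window
print(conv2d(image, kernel))  # [[6, 8], [12, 14]]
```

Increasing the stride shrinks the feature map, and padding the input with zeros before calling this function would preserve the spatial size, exactly as described above.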


ReLU stands for Rectified Linear Unit and is a non-linear operation: $ f(x)=max(0,x) $. Usually this is applied element-wise to the output of some other function, such as a matrix-vector product. It replaces all negative pixel values in the feature map by zero with the purpose of introducing non-linearity in our network, since most of the real-world data we would want to learn would be non-linear.


Neurons in a fully connected layer have full connections to all activations in the previous layer, as seen in regular Neural Networks. Hence their activations can be computed with a matrix multiplication followed by a bias offset. In our case, the purpose of the fully-connected layer is to use these features for classifying the input image into various classes based on the training dataset. Apart from classification, adding a fully-connected layer is also a cheap way of learning non-linear combinations of the features. 


It is common to periodically insert a pooling layer between successive convolution layers. Spatial Pooling (also called subsampling or downsampling) reduces the dimensionality of each feature map but retains the most important information. In particular, pooling:
- makes the input representations (feature dimension) smaller and more manageable
- reduces the number of parameters and computations in the network, therefore controlling overfitting
- makes the network invariant to small transformations, distortions and translations in the input image
- helps us arrive at an almost scale invariant representation of our image
Spatial Pooling can be of different types: Max, Average, Sum etc.

In case of Max Pooling, we define a spatial neighborhood (for example, a 2×2 window) and take the largest element from the rectified feature map within that window. 

Instead of taking the largest element we could also take the average.
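Max pooling over non-overlapping 2×2 windows, sketched in plain Python (replace max with an average to get average pooling):

```python
def max_pool(fmap, size=2):
    """Take the max over each non-overlapping size x size window."""
    return [[max(fmap[r + i][c + j]
                 for i in range(size) for j in range(size))
             for c in range(0, len(fmap[0]) - size + 1, size)]
            for r in range(0, len(fmap) - size + 1, size)]

fmap = [[1, 3, 2, 4],
        [5, 6, 1, 0],
        [1, 2, 3, 1],
        [0, 1, 4, 2]]
print(max_pool(fmap))  # [[6, 4], [2, 4]]
```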


Dropout in deep learning works as follows: one or more neural network nodes are switched off once in a while so that they do not interact with the rest of the network. With dropout, the learned weights of the nodes become somewhat more insensitive to the weights of the other nodes and learn to decide somewhat more on their own. In general, dropout helps the network to generalize better and increases accuracy, since the influence of a single node is decreased.


The purpose of the softmax classification layer is simply to transform all the net activations in the final output layer into a series of values that can be interpreted as probabilities. To do this, the softmax function is applied to the net inputs.
$ \phi_{softmax} (z^i) = \dfrac{e^{z^i}}{\sum_{j=0}^k e^{z_j^i}} $
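The softmax transform in plain Python, including the usual subtract-the-max trick for numerical stability (an implementation detail not mentioned above):

```python
import math

def softmax(z):
    """Map a vector of activations to positive values summing to 1."""
    m = max(z)                      # subtract the max: avoids overflow
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

print(softmax([1.0, 2.0, 3.0]))  # three probabilities summing to 1
```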


The Local Response Normalization (LRN) layer implements lateral inhibition, which in neurobiology refers to the capacity of an excited neuron to subdue its neighbors. This layer is useful when we are dealing with ReLU neurons, because they have unbounded activations and we need LRN to normalize them. We want to detect high frequency features with a large response. If we normalize around the local neighborhood of the excited neuron, it becomes even more sensitive as compared to its neighbors. At the same time, it will dampen the responses that are uniformly large in any given local neighborhood: if all the values are large, then normalizing those values will diminish all of them. So basically we want to encourage some kind of inhibition and boost the neurons with relatively larger activations.

III. training options

The training function takes as input a trainingOptions object that contains the parameters for the training. A brief explanation:

Optimizer chosen for minimizing the loss function. To guarantee Matlab compatibility, only Stochastic Gradient Descent with Momentum ('sgdm') is allowed

Parameter for the sgdm: it corresponds to the contribution of the previous step to the current iteration

Initial learning rate η for the optimizer

These are the settings regulating the learning rate. It is a struct containing three values:
RateSchedule: if it is set to 'piecewise', the learning rate drops by a factor of RateDropFactor every RateDropPeriod epochs.

Regularizers allow applying penalties on layer parameters or layer activity during optimization. This is the factor of the L2 regularization.

Number of epochs for training

Display the information of the training every VerboseFrequency iterations

Random shuffle of the data before training if set to 'once'

Path for saving the checkpoints

Choice of hardware for the training: 'cpu', 'gpu', 'multi-gpu' or 'parallel'. The load is divided between workers of GPUs or CPUs according to the relative division set by WorkerLoad

Custom output functions to call during training after each iteration, passing a struct containing:
the current epoch number, current iteration number, TimeSinceStart, TrainingLoss, BaseLearnRate, TrainingAccuracy (or TrainingRMSE for regression) and State
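Putting the options above together, a call might look like the following sketch (Matlab-style name/value pairs; the parameter names follow Matlab's trainingOptions and the values are purely illustrative, not recommendations):

```matlab
options = trainingOptions ('sgdm', ...
    'Momentum', 0.9, ...              % contribution of the previous step
    'InitialLearnRate', 0.01, ...
    'LearnRateSchedule', 'piecewise', ...
    'LearnRateDropFactor', 0.1, ...   % drop the rate by this factor ...
    'LearnRateDropPeriod', 10, ...    % ... every this many epochs
    'L2Regularization', 1e-4, ...
    'MaxEpochs', 30, ...
    'VerboseFrequency', 50, ...
    'Shuffle', 'once', ...
    'ExecutionEnvironment', 'cpu');
```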


[1] Ian Goodfellow, Yoshua Bengio, Aaron Courville, Deep Learning, MIT Press, 2016.

by Enrico Bertino ( at July 27, 2017 09:34 PM

July 14, 2017

Joel Dahne

Ahead of the Timeline

One of my first posts on this blog was a timeline for my work during the project. Predicting the amount of time something takes is always hard. Often you tend to underestimate the complexity of parts of the work. This time, however, I overestimated the time the work would take.

If my timeline had been correct I would just have started to work on folding functions (or reductions, as they are often called). Instead I have completed the work on them, and also on the functions related to plotting. In addition I have started to work on the documentation for the package, as well as checking everything an extra time.

In this blog post I will go through what I have done this week, what I think is left to do and a little bit about what I might do if I complete the work on N-dimensional arrays in good time.

This Week

The Dot Function

The $dot$-function was the last function in which support for N-dimensional arrays was left to implement. It is very similar to the $sum$-function, so I already had an idea of how to do it. As with $sum$, I moved most of the handling of the vectorization from the m-files to the oct-file, the main reason being improved performance.

The $dot$-function for intervals is actually a bit different from the standard one. First of all, it supports vectorization, which the standard one does not:

> dot ([1, 2, 3; 4, 5, 6], 5)
error: dot: size of X and Y must match
> dot (infsupdec ([1, 2, 3; 4, 5, 6]), 5)
ans = 1x3 interval vector

  [25]_com   [35]_com   [45]_com
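The broadcasting behaviour shown above can be mimicked for ordinary arrays with a few lines of NumPy (`bdot` is an illustrative helper, not the package's API):

```python
import numpy as np

def bdot(x, y, axis=0):
    # broadcast the operands against each other, then reduce along one axis
    x, y = np.broadcast_arrays(x, y)
    return np.sum(x * y, axis=axis)

bdot(np.array([[1, 2, 3], [4, 5, 6]]), 5)  # -> [25, 35, 45]
```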

It also treats empty arrays a little differently, see bug #51333,

> dot ([], [])
ans = [](1x0)
> dot (infsupdec ([]), [])
ans = [0]_com

Package Documentation

I have made the minimal required changes to the documentation. That is, I moved support for N-dimensional arrays from Limitations to Features and added some simple examples of how to create N-dimensional arrays.

Searching for Misses

During the work I have tried to update the documentation for all functions to account for the support of N-dimensional arrays, and I have also tried to update some of the comments in the code. But as always, especially when working with a lot of files, you miss things, both in the documentation and in old comments.

I did a quick grep for the words "matrix" and "matrices", since they are candidates for being changed to "array". Doing this I found 35 files where I had missed things. It was mainly minor things, comments using the word "matrix" which I have now changed to "array", but also some documentation which I had forgotten to update.

What is Left?

Package Documentation - Examples

As mentioned above, I have made the minimal required changes to the documentation. It would be very nice to add some more interesting examples using N-dimensional arrays of intervals in a useful way. Ironically, I have not been able to come up with an interesting example, but I will continue to think about it. If you have an example that you think would be interesting and want to share, please let me know!

Coding Style

As I mentioned in one of the first blog posts, the coding style of the interval package was not following the standard for Octave. During my work I have adapted all the files I have worked with to the Octave coding standard. There are a lot of files I have not needed to change, so they are still using the old style. It would probably be a good idea to update them as well.

Testing - ITF1788

The interval testing framework library ITF1788, developed by Oliver, is used to test the correctness of many of the functions in the package. At the moment it tests evaluation of scalars, but in principle it should be no problem to use it for testing vectorization or even broadcasting. Oliver has already started to work on this.

After N-dimensional arrays?

If I continue at this pace I will finish the work on N-dimensional arrays before the project is over. Of course the things that are left might take longer than expected, they usually do, but there is a chance that I will have time left after everything is done. So what should I do then? There are more things that can be done on the interval package, for example adding more examples to the documentation, but I think I would like to start working on a new package for Taylor arithmetic.

Before GSoC I started to implement a proof of concept for Taylor arithmetic in Octave, which can be found here. I would then start to work on implementing a proper version of it, where I would actually make use of N-dimensional interval arrays. If I want to create a package for this I would also need to learn a lot of other things, one of them being how to manage a package on Octave Forge.

At the moment I will try to finish my work on N-dimensional arrays. Then I can discuss it with Oliver and see what he thinks about it.

by Joel Dahne ( at July 14, 2017 04:59 PM

July 13, 2017

Joel Dahne

Set inversion with fsolve

This week my work has mainly been focused on the interval version of fsolve. I was not sure if and how this could make use of N-dimensional arrays, and to find that out I had to understand the function. In the end it turned out that the only generalization that could be done was trivial and required very few changes. However, I did find some other problems with the function that I have been able to fix. Connected to fsolve are the functions ctc_intersect and ctc_union. They also needed only minor changes to allow for N-dimensional input. I will start by giving an introduction to fsolve, ctc_union and ctc_intersect, and then I will mention the changes I have made to them.

Introduction to fsolve

The standard version of fsolve in Octave is used to solve systems of nonlinear equations. That is, given a function $f$ and a starting point $x_0$ it returns a value $x$ such that $f(x)$ is close to zero. The interval version of fsolve does much more than this. It is used to enclose the preimage of a set $Y$ under $f$. Given a domain $X$, a set $Y$ and a function $f$ it returns an enclosure of the set
$f^{-1}(Y) = \{x \in X: f(x) \in Y\}.$
By letting $Y = 0$ we get similar functionality to the standard fsolve, with the difference that the output is an enclosure of all zeros of the function (compared to one point for which $f$ returns a value close to zero).

Example: The Unit Circle

Consider the function $f(x, y) = \sqrt{x^2 + y^2} - 1$, which is zero exactly on the unit circle. Plugging this into the standard fsolve, with $(0.5, 0.5)$ as a starting guess, we get

> x = fsolve (@(x) f(x(1), x(2)), [0.5, 0.5])
x = 0.70711 0.70711

which indeed is close to a zero. But we get no information about other zeros.

Using the interval version of fsolve with $X = [-3, 3] \times [-3, 3]$ as starting domain we get

> [x paving] = fsolve (f, infsup ([-3, -3], [3, 3]));
> x
x ⊂ 2×1 interval vector

     [-1.002, +1.002]
   [-1.0079, +1.0079]

Plotting the paving we get the picture

which indeed is a good enclosure of the unit circle.

How it works

In its simplest form fsolve uses a simple bisection scheme to find the enclosure. Using interval methods we can find enclosures of images of sets. Given a set $X_0 \subset X$ there are three different possibilities:
  • $f(X_0) \subset Y$ in which case we add $X_0$ to the paving
  • $f(X_0) \cap Y = \emptyset$ in which case we discard $X_0$
  • Otherwise we bisect $X_0$ and continue on the parts
By setting a tolerance of when to stop bisecting boxes we get the algorithm to terminate in a finite number of steps.
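The three cases above can be sketched in plain Python, with boxes represented as tuples of intervals and $f(x, y) = \sqrt{x^2 + y^2} - 1$ from the earlier example (a toy illustration of the bisection scheme, not the package's implementation; all names are made up):

```python
def f_range(box):
    # Tight enclosure of f(x, y) = sqrt(x^2 + y^2) - 1 over a box.
    (x1, x2), (y1, y2) = box

    def sq(lo, hi):  # range of t^2 for t in [lo, hi]
        lo2 = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
        return lo2, max(lo * lo, hi * hi)

    ax1, ax2 = sq(x1, x2)
    ay1, ay2 = sq(y1, y2)
    return (ax1 + ay1) ** 0.5 - 1.0, (ax2 + ay2) ** 0.5 - 1.0

def sivia(box, ylo, yhi, tol=0.1):
    inside, boundary = [], []
    stack = [box]
    while stack:
        b = stack.pop()
        lo, hi = f_range(b)
        if ylo <= lo and hi <= yhi:       # f(b) subset of Y: keep
            inside.append(b)
        elif hi < ylo or lo > yhi:        # f(b) and Y disjoint: discard
            continue
        elif max(b[0][1] - b[0][0], b[1][1] - b[1][0]) < tol:
            boundary.append(b)            # undecided but small: stop bisecting
        else:                             # bisect along the widest dimension
            (x1, x2), (y1, y2) = b
            if x2 - x1 >= y2 - y1:
                m = (x1 + x2) / 2
                stack += [((x1, m), (y1, y2)), ((m, x2), (y1, y2))]
            else:
                m = (y1 + y2) / 2
                stack += [((x1, x2), (y1, m)), ((x1, x2), (m, y2))]
    return inside, boundary

# Enclose the unit disk: f(x, y) in Y = [-1, 0] iff sqrt(x^2 + y^2) <= 1.
inside, boundary = sivia(((-3.0, 3.0), (-3.0, 3.0)), -1.0, 0.0)
area_in = sum((x2 - x1) * (y2 - y1) for (x1, x2), (y1, y2) in inside)
area_bd = sum((x2 - x1) * (y2 - y1) for (x1, x2), (y1, y2) in boundary)
```

The kept boxes are an inner approximation of the disk and the kept plus undecided boxes an outer one, so `area_in` ≤ π ≤ `area_in + area_bd`.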


Using bisection is not always very efficient, especially when the domain has many dimensions. One way to speed up the convergence is with what are called contractors. In short, a contractor is a function that takes the set $X_0$ and returns a set $X_0' \subset X_0$ with the property that $f(X_0 \setminus X_0') \cap Y = \emptyset$. It is a way of making $X_0$ smaller without having to bisect it as many times.

When you construct a contractor you use the reverse operations defined on intervals. I will not go into how this works; if you are interested you can find more information in the package documentation [1] and in these YouTube videos about Set Inversion Via Interval Analysis (SIVIA) [2].

The functions ctc_union and ctc_intersect are used to combine contractors on sets into contractors on unions or intersections of these sets.

Generalization to N-dimensional arrays

How can fsolve be generalized to N-dimensional arrays? The only natural thing to do is to allow the input and output of $f$ to be N-dimensional arrays. This is also no problem to do. While mathematically you would probably say that fsolve is used to do set inversion for functions $f: \mathbb{R}^n \to \mathbb{R}^m$, it can of course also be used, for example, on functions $f: \mathbb{R}^{n_1}\times \mathbb{R}^{n_2} \to \mathbb{R}^{m_1}\times \mathbb{R}^{m_2}$.

This is, however, a bit different when using vectorization. When not using vectorization (and not using contractors) fsolve expects that the function takes one argument, which is an array with each element corresponding to a variable. If vectorization is used it instead assumes that the function takes one argument for each variable. Every argument is then given as a vector with each element corresponding to one value of the variable for which to compute the function. Here we have no use for N-dimensional arrays.


The only change in functionality that I have made to the functions is to allow for N-dimensional arrays as input and output when vectorization is not used. This required only minor changes, essentially changing expressions like
max (max (wid (interval)))
to
max (wid (interval)(:))

It was also enough to make these changes in ctc_union and ctc_intersect to have them support N-dimensional arrays.

I have made no functional changes for when vectorization is used. I have, however, made an optimization in the construction of the arguments to the function. The arguments are stored in an array, but before being given to the function they need to be split up into the different variables. This is done by creating a cell array, with each element being a vector with the values of one of the variables. Previously the construction of this cell array was very inefficient: it split the interval into its lower and upper parts and then called the constructor to create an interval again. Now it copies the intervals into the cell without having to call the constructor. This actually seems to have been quite a big improvement: using the old version, the example with the unit circle from above took around 0.129 seconds, while with the new version it takes about 0.092 seconds. This is of course only one benchmark, but a speed-up of about 40% for this test is promising!

Lastly I noticed a problem in the example used in the documentation of the function. The function used is

# Solve x1 ^ 2 + x2 ^ 2 = 1 for -3 ≤ x1, x2 ≤ 3 again,
# but now contractions speed up the algorithm.
function [fval, cx1, cx2] = f (y, x1, x2)
  # Forward evaluation
  x1_sqr = x1 .^ 2;
  x2_sqr = x2 .^ 2;
  fval = hypot (x1, x2);
  # Reverse evaluation and contraction
  y = intersect (y, fval);
  # Contract the squares
  x1_sqr = intersect (x1_sqr, y - x2_sqr);
  x2_sqr = intersect (x2_sqr, y - x1_sqr);
  # Contract the parameters
  cx1 = sqrrev (x1_sqr, x1);
  cx2 = sqrrev (x2_sqr, x2);
endfunction

Do you see the problem? I think it took me more than a day to realize that the problems I was having were not because of a bug in fsolve but because this function computes the wrong thing. The function is supposed to be $f(x_1, x_2) = x_1^2 + x_2^2$, but when calculating the value it calls hypot, which is given by $hypot(x_1, x_2) = \sqrt{x_1^2 + x_2^2}$. For $f(x_1, x_2) = 1$, which is used in the example, this gives the same result, but otherwise it will of course not work.


by Joel Dahne ( at July 13, 2017 10:24 AM

July 11, 2017

Michele Ginesi

Gnu Scientific Library

Bessel functions

During the second part of GSoC I have to work on Bessel functions. The bug concerns in particular the Bessel function of the first kind and regards the fact that the function is not computed if the $x$ argument is too big.
During this week I studied the actual implementation, i.e. the Amos library, written in Fortran. The problem is that zbesj.f simply refuses to compute the result when the input argument is too large (the same problem happens for all Bessel functions, both for real and complex arguments). What I did was to "unlock" Amos in such a way that it computes the value in any case (which seems to be the strategy used by MATLAB). Then I compared its relative errors with the relative errors made by the GNU Scientific Library (GSL).
First, I would point out some limitations present in GSL:
  • The parameter $x$ must be real, while it can be complex in Amos and MATLAB. The same holds for the parameter $\alpha$.
  • The class of the output does not adapt to the class of the input, returning always a double.
Doing some tests, it seems that Amos works better in terms of accuracy (even if we are talking about errors of the same order). I concentrated on values which Amos usually refuses to compute, since in every other zone of the $x$-$\alpha$ plane it is known that the error is of the order of 1e-15. For $\alpha\in\{-1,0,1\}$ they return the same result, while for other values of $\alpha$, Amos is in general more accurate.
Anyway, I would remark that there are not, as far as I know, accurate algorithms in double precision for arguments of very large magnitude. In fact, for such arguments, both Amos and GSL make a relative error of the order of unity. This problem is evident when using SageMath to compute accurate values, e.g.
sage: bessel_J(1,10^40).n()
sage: bessel_J(1,10^40).n(digits=16)
sage: bessel_J(1,10^40).n(digits=20)
sage: bessel_J(1,10^40).n(digits=25)
sage: bessel_J(1,10^40).n(digits=30)
sage: bessel_J(1,10^40).n(digits=35)
sage: bessel_J(1,10^40).n(digits=40)
The values "stabilize" only when the number of digits is bigger than 30, far away from double precision.

Incomplete gamma function

I also had a look at how to use GSL to eventually improve the incomplete gamma function. It is not possible, however, to use only the GSL functions gsl_sf_gamma_inc_P and gsl_sf_gamma_inc_Q due to their limitations:
  • There is no "scaled" option.
  • The value of $x$ must be real. This may not be a problem, since even MATLAB does not accept complex values (while the version I worked on does).
  • The parameter $x$ must be positive. This is actually a problem of MATLAB compatibility.
  • The class of the output does not adapt to the class of the input, returning always a double.
I tested the accuracy of the GSL functions and they work better when $a\ll1$ and $x\ll1$, so I think I will fix gammainc.m using the algorithm of gsl_sf_gamma_inc_P and gsl_sf_gamma_inc_Q for those values of the input arguments.

Incomplete Beta function

The current incomplete beta function needs to be replaced. I have already studied the continued fraction expansion (which seems to be the best one to use). In GSL it is implemented in a good way, but it still presents two limitations:
  • It does not adapt to the input class (it always works in double precision).
  • There is no "upper" version. This is missing also in the current betainc function, but it is present in MATLAB. Unfortunately, it is not sufficient to compute it as $1 - I_x(a,b)$; it is necessary to find an accurate way to compute it directly.
So I will take inspiration from the GSL version of the function, but I think I will write betainc as a single .m file.
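As a sketch of the continued fraction approach, here is the classic Numerical Recipes-style evaluation of the regularized incomplete beta function $I_x(a,b)$, with the continued fraction evaluated by the modified Lentz method (pure Python, double precision only, so it shares the first limitation listed above):

```python
import math

def betacf(a, b, x, max_iter=200, eps=1e-15, tiny=1e-30):
    # Continued fraction for the incomplete beta (modified Lentz method).
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c, d = 1.0, 1.0 - qab * x / qap
    d = 1.0 / (d if abs(d) > tiny else tiny)
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        # even and odd continued-fraction coefficients for this m
        for aa in (m * (b - m) * x / ((qam + m2) * (a + m2)),
                   -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))):
            d = 1.0 + aa * d
            d = 1.0 / (d if abs(d) > tiny else tiny)
            c = 1.0 + aa / c
            c = c if abs(c) > tiny else tiny
            delta = d * c
            h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def betainc(a, b, x):
    # Regularized incomplete beta I_x(a, b).
    if x <= 0.0 or x >= 1.0:
        return max(0.0, min(1.0, x))
    ln_front = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                + a * math.log(x) + b * math.log(1.0 - x))
    if x < (a + 1.0) / (a + b + 2.0):
        return math.exp(ln_front) * betacf(a, b, x) / a
    # use the symmetry I_x(a,b) = 1 - I_{1-x}(b,a) for faster convergence
    return 1.0 - math.exp(ln_front) * betacf(b, a, 1.0 - x) / b
```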

by Michele Ginesi ( at July 11, 2017 09:01 AM

July 01, 2017

Michele Ginesi

First period resume

First period resume

First period resume

Here I present a brief resume of the work done in this first month.

1st week: Gammainc

The incomplete gamma function gave a series of inaccurate results in the previous implementation, so it was decided to rewrite it from scratch. A big part of the work was done by Marco and Nir (here the discussion). I gave my contribution by fixing some problems in the implementation and by making a functioning commit (thanks to the suggestions given by Carne during the OctConf in Geneva).

2nd and 3rd week: Gammaincinv

The inverse of the incomplete gamma function was completely missing in Octave (and this was a problem of compatibility with Matlab). Now the function is present, written as a single .m file. The implementation consists of a simple, but efficient, Newton's method.
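The idea can be sketched in a few lines of Python: evaluate the regularized lower incomplete gamma function $P(a,x)$ by its power series and invert it with Newton's method, using $P'(a,x) = x^{a-1}e^{-x}/\Gamma(a)$ (a toy illustration of the approach, not the Octave implementation; the starting guess here is deliberately crude):

```python
import math

def gammainc_lower(a, x):
    # Regularized lower incomplete gamma P(a, x) via its power series,
    # P(a, x) = sum_{n>=0} x^(a+n) e^(-x) / Gamma(a+n+1).
    if x <= 0.0:
        return 0.0
    term = math.exp(a * math.log(x) - x - math.lgamma(a + 1.0))
    total, n = term, 0
    while term > total * 1e-16:
        n += 1
        term *= x / (a + n)
        total += term
    return total

def gammaincinv(a, y, iters=60):
    x = a                                  # crude starting guess
    for _ in range(iters):
        f = gammainc_lower(a, x) - y
        dfdx = math.exp((a - 1.0) * math.log(x) - x - math.lgamma(a))
        x_new = x - f / dfdx               # Newton step
        if x_new <= 0.0:
            x_new = x / 2.0                # stay inside the domain x > 0
        if abs(x_new - x) < 1e-14 * max(x_new, 1.0):
            return x_new
        x = x_new
    return x

# invert P(2, .) at y = P(2, 1) = 1 - 2/e; the result should be 1
x = gammaincinv(2.0, 1.0 - 2.0 / math.e)
```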

Last week: Betainc

When I first submitted my timeline, there was only one bug related to betainc, involving the input validation. During the first period of GSoC a new bug of the "incorrect result" type emerged. Because of this, my mentor and I decided that it was necessary to rewrite the function, so I used this last week to study efficient algorithms to evaluate it. In particular, it seems that the most efficient way is to use continued fractions, maybe after a rescaling of the function. I am already working on it, but I will complete it during the first week of August, after finishing the work on Bessel functions.

by Michele Ginesi ( at July 01, 2017 07:58 AM

June 30, 2017

Joel Dahne

One Month In

Now one month of GSoC has passed, and so far everything has gone much better than I expected! According to my timeline this week would have been the first of two where I work on vectorization. Instead I have already mostly finished the vectorization and have started to work on other things. In this blog post I'll give a summary of what work I have completed and what I have left to do. I'll structure it according to where the functions are listed in the $INDEX$-file [1]. The number after each heading is the number of functions in that category.

Since this will mainly be a list of which files have been modified and which are left to do this might not be very interesting if you are not familiar with the structure of the interval package.

Interval constant (3)

All of these have been modified to support N-dimensional arrays.

Interval constructor (5)

All of these have been modified to support N-dimensional arrays.

Interval function (most with tightest accuracy) (63)

Almost all of these functions worked out of the box! At least after the API functions to the MPFR and crlibm libraries were fixed; they are further down in the list.

The only function that did not work immediately was $linspace$. Even though this function could be generalized to N-dimensional arrays, the standard Octave function only works for matrices (I think the Matlab version only allows scalars). This means that adding support for N-dimensional arrays to the interval version is not a priority. I might do it later on, but it is not necessary.

Interval matrix operation (16)

Most of the matrix functions do not make sense for N-dimensional arrays. For example, matrix multiplication and matrix inversion only make sense for matrices. However, all of the reduction functions are also here; they include $dot$, $prod$, $sum$, $sumabs$ and $sumsq$.

At the moment I have implemented support for N-dimensional arrays for $sum$, $sumabs$ and $prod$. The functions $dot$ and $sumsq$ are not ready, I'm waiting to see what happens with bug #51333 [2] before I continue with that work. Depending on the bug I might also have to modify the behaviour of $sum$, $sumabs$ and $prod$ slightly.

Interval comparison (19)

All of these have been modified to support N-dimensional arrays.

Set operation (7)

All of these functions have been modified to support N-dimensional arrays except one, $mince$. The function $mince$ is an interval version of $linspace$, and the reasoning here is the same as that for $linspace$ above.

Interval reverse operation (12)

Like the interval functions above, all of the functions worked out of the box!

Interval numeric function (11)

These functions also worked out of the box, with some small modifications to the documentation for some of them.

Interval input and output (9)

Here there are some functions which require comments; the ones I do not comment on have all gained support for N-dimensional arrays.

I think that this function does not make sense to generalize to N dimensions. It could perhaps take an N-dimensional array as input, but it will always return a row vector. I have left it as it is, for now at least.

$disp$ and $display$
These are functions that might be subject to change later on. At the moment it prints N-dimensional arrays of intervals in the same way Octave does for normal arrays. It's however not clear how to handle the $\subset$ symbol and we might decide to change it.

Interval solver or optimizer (5)

The functions $gauss$ and $polyval$ are not generalizable to N-dimensional arrays. I don't think that $fzero$ can be generalized either; for it to work the function must be real-valued.

The function $fsolve$ can perhaps be modified to support N-dimensional arrays. It uses the SIVIA algorithm [3], and I have to dive deeper into how it works to see if it can be done.

For $fminsearch$ nothing needed to be done, it worked for N-dimensional arrays directly.

Interval contractor arithmetic (2)

Both of these functions are used together with $fsolve$ so they also depend on if SIVIA can be generalized or not.

Verified solver or optimizer (6)

All of these functions work on matrices and cannot be generalized.

Utility function (29)

All of these for which it made sense have been modified to support N-dimensional arrays. Some of them only work for matrices; these are $ctranspose$, $diag$, $transpose$, $tril$ and $triu$. I have left them as they were, though I fixed a bug in $diag$.

API function to the MPFR and crlibm libraries (8)

These are the functions that in general required the most work. The ones I have added full support for N-dimensional arrays in are $crlibm\_function$, $mpfr\_function\_d$ and $mpfr\_vector\_sum\_d$. Some of them cannot be generalized; these are $mpfr\_matrix\_mul\_d$, $mpfr\_matrix\_sqr\_d$ and $mpfr\_to\_string\_d$. The functions $mpfr\_linspace\_d$ and $mpfr\_vector\_dot\_d$ are related to what I mentioned above for $linspace$ and $dot$.


So, summing up, the functions that still require some work are
  • Functions related to $fsolve$
  • The functions $dot$ and $sumsq$
  • The functions $linspace$ and $mince$
Especially the functions related to $fsolve$ might take some time to handle. My goal is to dive deeper into this next week.

Apart from this there are also some more things that need to be considered. The documentation for the package will need to be updated. This includes adding some examples which make use of the new functionality.

The interval package also did not follow the Octave coding style. All the functions I have made changes to have been updated to the correct coding style, but many of the functions that worked out of the box still use the old style. It might be that we want to unify the coding style across all files before the next release.

[1] The $INDEX$ file
[2] Bug #51333
[3] The SIVIA algorithm

by Joel Dahne ( at June 30, 2017 04:38 PM

Enrico Bertino

End of the first work period

Hi all,

this is the end of the first period of GSoC! It was a challenging but very interesting period and I am very excited about the next two months. It is the first time that I have had the opportunity to make something that has real value, albeit small, for someone else's work. Even when I have to do small tasks, like structuring the package or learning the Tensorflow APIs, I do them with enthusiasm, because I have the ultimate goal, and the value that my efforts would bring, very clear in mind. It may seem trivial, but for a student this is not the daily bread :) I really have fun coding for this project and I hope this will last until the end!

Speaking about the project, I've spent some time wondering what the best solution was to test the correct installation of the Tensorflow Python APIs on the machine. The final solution was to put the test in a function __nnet_init__ and call it in the PKG_ADD (code in inst/__nnet_init__ in my repo [1]).

Regarding the code, in these last days I tried to connect the dots, calling a Tensorflow network from Octave in a "Matlab compatible" way. In particular, I use the classes that I made two weeks ago in order to implement a basic version of trainNetwork, which is the core function of this package. As I explained in my post of June 12, trainNetwork takes as input the data and two objects: the layers and the options. I had some difficulty during the implementation of the Layer class due to inheritance and overloading. Eventually, I decided to store the layers in a cell array as an attribute of the Layer class. By overloading subsref, I let the user call a specific layer with '()' access, like a classic array. With this kind of overloading I managed to solve the main problem of this structure, that is the possibility of getting a property of a layer by doing, for example, layers(1).Name

classdef Layer < handle
  properties (Access = private)
    layers = {};
  end

  methods (Hidden, Access = {?Layers})
    function this = Layer (varargin)
      nargin = numel (varargin);
      this.layers = cell (1, nargin);
      for i = 1:nargin
        this.layers{i} = varargin{i};
      end
    end
  end

  methods (Hidden)
    function obj = subsref (this, idx)
      switch idx(1).type
        case '()'
          idx(1).type = '{}';
          obj = builtin ('subsref', this.layers, idx);
        case '{}'
          error ('{} indexing not supported');
        case '.'
          obj = builtin ('subsref', this, idx);
      end
    end

    function obj = numel (this)
      obj = builtin ('numel', this.layers);
    end

    function obj = size (this)
      obj = builtin ('size', this.layers);
    end
  end
end

Therefore, I implemented the same example from my last post in a proper way. In tests/script you can find the function cnn_linear_model, which consists simply of:

1. Loading the datasets

2. Defining layers and options
  layers = [ ...
    imageInputLayer([28 28 1])
  options = trainingOptions('sgdm', 'MaxEpochs', 1);

3. Training
net = trainNetwork(trainImages, trainAngles, layers, options);
4. Prediction
acc = net.predict(testImages, testAngles) 

TrainNetwork is a draft and I have not yet implemented the class seriesNetwork, but I think it's a good start :) In the next weeks I will focus on the Tensorflow backend of the above mentioned functions, with the goal of having a working version at the end of the second period!



by Enrico Bertino ( at June 30, 2017 09:32 AM

June 24, 2017

Enrico Bertino

Package structure

Hello! During the first period of GSoC I have worked mostly on analyzing the Matlab structure of the net package in order to guarantee compatibility throughout the whole project. The focus of the project is on convolutional neural networks, about which I will write in the next post.

Regarding the package structure, the core will be composed by three parts:

  1. Layers: there are 11 types of layers that I defined as Octave classes, using classdef. These layers can be concatenated in order to create a Layer object defining the architecture of the network. This will be the input for the training function. 
  2. Training: the core of the project is the training function, which takes as input the data, the layers and some options and returns the network as output. 
  3. Network: the network object has three methods (activations, classify and predict) that let the user compute the final classification and prediction. 

Figure 1: conv nnet flowchart 

I have already implemented a draft of the first point, the layer classes [1]. Every layer type inherits some attributes and methods from the parent class Layers. This is useful for creating the Layer object: the concatenation of different layers is always a Layer object that will be used as input for the training function. For this purpose, I overloaded the cat, horzcat and vertcat operators for Layers, and subsref for Layer. I still need to finalize some details of the disp methods of these classes.

Figure 2: Layers classes definitions 

 is used in every class for the parameter management and the attributes setting.

The objects of these classes can be instantiated with a corresponding function, implemented in the directory inst/. Here an example for creating a Layer object 

> a = imageInputLayer([2,2,3]); # first layer
> b = convolution2dLayer(1,1); # second layer
> c = dropoutLayer(1); # third layer
> layers = [a b c]; # Layer object from layers concat
> drop = layers(3); # Layer element access
> drop.Probability # Access layer attribute
ans = 0.50000

All functions can be tested with the make check of the package.

The next step is to focus on the Tensorflow integration, via Pytave, writing a complete test for a regression of image angles and comparing the precision and the computational time with Matlab.



by Enrico Bertino ( at June 24, 2017 08:55 PM

Train a Convolutional Neural Network for Regression


I spent the last period working mostly on Tensorflow, studying the APIs and writing some examples in order to explore possible implementations of neural networks. For this goal, I chose an interesting example proposed among the Matlab examples at [1]. The dataset is composed of 5000 images, rotated by an angle α, and a corresponding integer label (the rotation angle α). The goal is to perform a regression to predict the angle of a rotated image and straighten it up.
All files can be found in tests/examples/cnn_linear_model in my repo [2].

I have kept the structure as in the Matlab example, but I generated a new dataset starting from LeCun's MNIST digits (datasets at [3]). Each image was rotated by a random angle between 0° and 70°, in order to keep the right orientation of the digits (code in dataset_generation.m). Fig. 1 shows some rotated digits with the corresponding original digits.

Figure 1. rotated images in columns 1,3,5 and originals in columns 2,4,6

The implemented linear model is:
$ \hat{Y} = \omega X + b $,
where the weights $\omega$ and the bias $b$ will be optimized during the training minimizing a loss function. As loss function, I used the mean square error (MSE):
$  \dfrac{1}{n} \sum_{i=1}^n (\hat{Y_i} - Y_i)^2 $,
where the $Y_i$ are the training labels. 
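As a self-contained illustration of the iterative MSE minimization described above, here is a minimal pure-Python sketch of gradient descent on a one-dimensional linear model $\hat{y} = \omega x + b$. The toy data, learning rate, and epoch count are my own illustrative choices, not values from the post's Tensorflow code.

```python
# Minimal sketch of iteratively minimizing the MSE loss
# (1/n) * sum_i (w*x_i + b - y_i)^2 by gradient descent.

def train(xs, ys, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # gradients of the MSE with respect to w and b
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # exactly y = 2x + 1
w, b = train(xs, ys)        # w, b approach 2 and 1
```

In the real example, $X$ is a batch of flattened 28x28 images and $\omega$ a weight matrix, but the update rule is the same.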

In order to show the effective improvement given by a neural network, I started with a simple regression, feeding the X variable of the model directly with the 28x28 images. Even though a closed form exists for the MSE minimization, I implemented an iterative method in order to explore some Tensorflow features (code in the repo [2]). To evaluate the accuracy of the regression, I consider a prediction correct if the difference between the angles is less than 20°. After 20 epochs, convergence was almost reached, giving an accuracy of $0.6146$.
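The 20° criterion used for the accuracy can be written as a small helper; `angle_accuracy` is a hypothetical name of mine, not a function from the post's code.

```python
def angle_accuracy(predicted, true, tol=20.0):
    # fraction of predictions whose angle differs from the label
    # by less than tol degrees
    hits = sum(1 for p, t in zip(predicted, true) if abs(p - t) < tol)
    return hits / len(predicted)

print(angle_accuracy([10.0, 50.0], [5.0, 10.0]))  # 0.5: one hit, one miss
```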

Figure 2. rotated images in columns 1,3,5 and after the regression in columns 2,4,6

I now want to analyze the improvement given by feature extraction performed with a convolutional neural network (CNN). As in the Matlab example, I used a basic CNN, since the input images are quite simple (only digits on a monochromatic background) and consequently the features to extract are few.
  • INPUT [28x28x1] holds the raw pixel values of the image, in this case an image of width 28 and height 28
  • The CONV layer computes the output of neurons that are connected to local regions in the input, each computing a dot product between its weights and the small region it is connected to in the input volume. This results in a volume such as [12x12x25]: 25 filters, each producing an output of size 12x12
  • The RELU layer applies an element-wise activation function, such as the $max(0,x)$ thresholding at zero. This leaves the size of the volume unchanged ([12x12x25])
  • The FC (i.e. fully-connected) layer computes the final score, resulting in a volume of size [1x1x1], which corresponds to the rotation angle. As in ordinary neural networks, each neuron in this layer is connected to all the numbers in the previous volume
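The volume sizes listed above can be checked with the standard convolution output-size formula. The filter size and stride below are assumptions of mine (any pair giving floor((28 - f)/s) + 1 = 12 is consistent with the [12x12x25] volume); the post does not state which values the Matlab example uses.

```python
# Standard formula for the spatial output size of a convolution layer:
# out = floor((in + 2*padding - filter_size) / stride) + 1

def conv_output_shape(in_h, in_w, filter_size, stride, n_filters, padding=0):
    out_h = (in_h + 2 * padding - filter_size) // stride + 1
    out_w = (in_w + 2 * padding - filter_size) // stride + 1
    return (out_h, out_w, n_filters)

# assumed filter size 6 and stride 2, matching the quoted [12x12x25] volume
print(conv_output_shape(28, 28, 6, 2, 25))  # (12, 12, 25)
```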
Figure 3. CNN linear model architecture

We can visualize the architecture with Tensorboard where the graph of the model is represented.

Figure 4. Model graph generated with Tensorboard

With the implementation in the repo [2], the results are quite satisfying: after 15 epochs, it reached an accuracy of $0.75$ (205 seconds overall). One can see in Fig. 5 the marked improvement of the regression.

Figure 5. rotated images in columns 1,3,5 and after the CNN regression in columns 2,4,6

With the same parameters, Matlab reached an accuracy of $0.76$ in 370 seconds (code in regression_Matlab_nnet.m), so the performance is quite promising.

In the next post (in a few days), I will integrate the work done so far, calling the Python class from Octave and writing a function that simulates the behavior of Matlab. Leveraging the layer classes that I wrote two weeks ago, I will implement a draft of the functions trainNetwork and predict, making the Matlab script callable from Octave as well.

I will also take care of the dependencies of the package: I will add the dependency on Pytave in the package description and write a PKG_ADD test in order to verify the version of Tensorflow during the installation of the package.


by Enrico Bertino at June 24, 2017 02:36 PM

June 22, 2017

Joel Dahne

Vectorization and broadcasting

At the moment I'm actually ahead of my schedule, and this week I started to work on support for vectorization of N-dimensional arrays. By far the biggest challenge was implementing proper broadcasting, and most of this post will be devoted to going through that. At the end I also mention some of the other things I have done during the week.

Broadcasting arrays

At the moment I have implemented support for broadcasting in all binary functions. Since all binary functions behave similarly with respect to broadcasting, I will use $+$ in all my examples below, but it could in principle be any binary function working on intervals.

When adding two arrays, $A, B$, of the same size, the result is just an array of the same size, with each entry containing the sum of the corresponding entries of $A$ and $B$. If $A$ and $B$ do not have the same size, then we try to perform broadcasting. The simplest form of broadcasting is when $A$ is an array and $B$ is a scalar. Then we just take the value of $B$ and add it to every element of $A$. For example

> A = infsupdec ([1, 2; 3, 4])
A = 2×2 interval matrix
   [1]_com   [2]_com
   [3]_com   [4]_com
> B = infsupdec (5)
B = [5]_com
> A + B
ans = 2×2 interval matrix
   [6]_com   [7]_com
   [8]_com   [9]_com

However, broadcasting can be performed not only when one of the inputs is a scalar. Broadcasting is performed separately for each dimension of the input. We require either that the dimensions are equal, in which case no broadcasting is performed, or that one of the inputs has that dimension equal to $1$; we then conceptually repeat this input along that dimension until the two are of equal size. If for example $A$ has dimensions $4\times4\times4$ and $B$ dimensions $4\times4\times1$, we repeat $B$ along the third dimension four times to get two arrays of the same size. Since a scalar has all dimensions equal to 1, we see that it can be broadcast to any size. Both $A$ and $B$ can also be broadcast at the same time, along different dimensions, for example
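The dimension-matching rule just described can be sketched as a small function that computes the result size of a broadcast operation. This is a Python sketch of the rule, not the package's actual code; missing trailing dimensions are treated as 1, as Octave does.

```python
def broadcast_shape(a, b):
    # pad the shorter shape with trailing 1s (Octave-style trailing dims)
    n = max(len(a), len(b))
    a = tuple(a) + (1,) * (n - len(a))
    b = tuple(b) + (1,) * (n - len(b))
    out = []
    for da, db in zip(a, b):
        if da == db or db == 1:
            out.append(da)
        elif da == 1:
            out.append(db)
        else:
            raise ValueError("nonconformant dimensions %d and %d" % (da, db))
    return tuple(out)

print(broadcast_shape((4, 4, 4), (4, 4, 1)))  # (4, 4, 4)
print(broadcast_shape((1, 5, 2), (2, 5)))     # (2, 5, 2), as in the example below
```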

> A = infsupdec (ones (1, 5, 2))
A = 1×5×2 interval array
ans(:,:,1) =
   [1]_com   [1]_com   [1]_com   [1]_com   [1]_com
ans(:,:,2) =
   [1]_com   [1]_com   [1]_com   [1]_com   [1]_com
> B = infsupdec ([1, 2, 3, 4, 5; 6, 7, 8, 9, 10])
B = 2×5 interval matrix
   [1]_com   [2]_com   [3]_com   [4]_com    [5]_com
   [6]_com   [7]_com   [8]_com   [9]_com   [10]_com
> A + B
ans = 2×5×2 interval array
ans(:,:,1) =
   [2]_com   [3]_com   [4]_com    [5]_com    [6]_com
   [7]_com   [8]_com   [9]_com   [10]_com   [11]_com
ans(:,:,2) =
   [2]_com   [3]_com   [4]_com    [5]_com    [6]_com
   [7]_com   [8]_com   [9]_com   [10]_com   [11]_com

The implementation

I'll go through my implementation a little bit. I warn you that I'm not that familiar with the internals of Octave, so some of what I say might be wrong, or at least not totally correct.

Internally, all numerical arrays are stored as a linear vector, and the dimensions are only metadata. This means that the most efficient way to walk through an array is with a linearly increasing index. When $A$ and $B$ have the same size, the most efficient way to sum them is to go through the arrays linearly. In pseudo code

// Calculate C = A + B
for (int i = 0; i < numel (A); i++) {
  C(i) = A(i) + B(i);
}
This works fine and, apart from unrolling the loop or similar optimizations, it is probably the most efficient way to do it.

If $A$ and $B$ are not of the same size, then one way to handle it would be to simply extend $A$ and/or $B$ along the needed dimensions. This would however require copying a lot of data, something we want to avoid (memory access is expensive). Instead we try to be smart with our indexing to access the right data from both $A$ and $B$.

After asking on the IRC channel, I was pointed to this Octave function, which performs broadcasting. My implementation, which can be found here, is heavily inspired by that function.
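The indexing idea, as I understand it, can be sketched in Python: walk the output with one linear index, decompose it into subscripts, and give broadcast (size-1) dimensions a stride of 0 so no data is ever copied. This is an illustrative sketch in column-major (Octave-style) order, not the actual oct-file code.

```python
def broadcast_add(a, shape_a, b, shape_b, shape_out):
    # strides in column-major order; a broadcast (size-1) dim gets stride 0
    def strides(shape):
        s, acc = [], 1
        for d in shape:
            s.append(0 if d == 1 else acc)
            acc *= d
        return s

    sa, sb = strides(shape_a), strides(shape_b)
    n = 1
    for d in shape_out:
        n *= d
    out = [0] * n
    for i in range(n):
        # decompose the output linear index into subscripts and
        # accumulate the matching linear indices into a and b
        ia = ib = 0
        rest = i
        for k, d in enumerate(shape_out):
            sub = rest % d
            rest //= d
            ia += sub * sa[k]
            ib += sub * sb[k]
        out[i] = a[ia] + b[ib]
    return out

# A = [1 2; 3 4] stored column-major, B = scalar 5
print(broadcast_add([1, 3, 2, 4], (2, 2), [5], (1, 1), (2, 2)))  # [6, 8, 7, 9]
```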


Here I compare the performance of the new implementation with the old one. Since the old one could only handle matrices, we are limited to those. We measure the time it takes to add two matrices $A$ and $B$ with the code

tic; A + B; toc;

We do 10 runs for each test and all times are in seconds.

Addition of large matrices

Case 1: A = B = infsupdec (ones (1000, 1000));
       Old         New
       0.324722    0.277179
       0.320914    0.276116
       0.322018    0.276075
       0.318713    0.279258
       0.332041    0.279593
       0.318429    0.279987
       0.323752    0.279089
       0.317823    0.276036
       0.320509    0.280964
       0.320610    0.281123
Mean:  0.32195     0.27854
Case 2: A = B = infsupdec (ones (10, 100000));
        Old         New
        0.299321    0.272691
        0.297020    0.282591
        0.296460    0.274298
        0.294541    0.279661
        0.298306    0.277274
        0.301532    0.275531
        0.298163    0.278576
        0.298954    0.279868
        0.302849    0.275991
        0.297765    0.278806
Mean:   0.29849    0.27753

Case 3: A = B = infsupdec (ones (100000, 10));
        Old         New
        0.286433    0.279107
        0.289503    0.278251
        0.297562    0.279579
        0.292759    0.283311
        0.292983    0.281306
        0.290947    0.282310
        0.293025    0.286172
        0.294153    0.278886
        0.293457    0.278625
        0.296661    0.280804
Mean:   0.29275     0.28084

Broadcasting scalars

Case 4: A = infsupdec (ones (1000, 1000));
             B = infsupdec (1);
        Old         New
        0.298695    0.292419
        0.298158    0.292274
        0.305242    0.296036
        0.295867    0.291311
        0.296971    0.297255
        0.304297    0.292871
        0.298172    0.300329
        0.297251    0.291668
        0.299236    0.294128
        0.300457    0.298005
Mean:   0.29943     0.29463

Case 5: A = infsupdec (1);
             B = infsupdec (ones (1000, 1000));
         Old         New
        0.317276    0.291100
        0.316858    0.296519
        0.316617    0.292958
        0.316159    0.299662
        0.317939    0.301558
        0.322162    0.295338
        0.321277    0.293561
        0.314640    0.291500
        0.317211    0.295487
        0.317177    0.294376
Mean:   0.31773     0.29521

Broadcasting vectors

Case 6: A = infsupdec (ones (1000, 1000));
             B = infsupdec (ones (1000, 1));
        Old         New
        0.299546    0.284229
        0.301177    0.284458
        0.300725    0.276269
        0.299368    0.276957
        0.303953    0.278034
        0.300894    0.275058
        0.301776    0.276692
        0.302462    0.282946
        0.304010    0.275573
        0.301196    0.273109
Mean:   0.30151     0.27833

Case 7: A = infsupdec (ones (1000, 1000));
             B = infsupdec (ones (1, 1000));
         Old         New
        0.300554    0.295892
        0.301361    0.294287
        0.302575    0.299116
        0.304808    0.294184
        0.306700    0.291606
        0.301233    0.298059
        0.301591    0.292777
        0.302998    0.290288
        0.300452    0.291975
        0.305531    0.290178
Mean:   0.30278     0.29384

We see that in all cases the new version is faster than, or at least as fast as, the old version. In the old version the order of the inputs made a slight difference in performance (case 4 vs. case 5). In the new version both inputs are treated in exactly the same way, so we no longer see that difference.

Possible improvements

In theory the cases where we broadcast a scalar could be the fastest ones. If $B$ is a scalar we could, in pseudo code, do something similar to
// Calculate C = A + B with B scalar
for (int i = 0; i < numel (A); i++) {
  C(i) = A(i) + B;
}
This is however not implemented at the moment. Instead we use the ordinary routine to calculate the index into $B$ (since it is a scalar, the index always evaluates to $1$). If we wanted to optimize for this case, we could add a check for whether $A$ or $B$ is a scalar and handle that specially. Of course this would also make the code more complicated, something to watch out for. For the moment I leave it as it is, but if we later want to optimize for that case, it can be done.

Other work

Apart from the work to fix broadcasting for binary functions, there was very little to do for many of the functions. All binary functions that use this code, and all unary functions using even simpler code, worked directly after fixing the oct-files. Some of them required small changes to the documentation, but other than that the Octave scripts were fine. So mainly it has been a matter of going through all the files and checking that they actually work.

Bug #51283

When going through all the functions I noticed a bug in the interval version of $\sin$:

> sin (infsupdec (0))
ans = [0]_com
> sin (infsupdec ([0, 0]))
ans = 1×2 interval vector
   [0, -0]_com   [0, -0]_com

The second result here is wrong: $-0$ should never be allowed as the value of the supremum of an interval. I was able to track this down to how Octave's $\max$ function works, see bug #51283. As Oliver writes there, the exact behaviour of the $\max$ function is not specified in IEEE Std 754-2008, so we cannot rely on it. To solve this I have added a line that manually sets every $-0$ in the supremum of the interval to $+0$.
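The same workaround can be illustrated in a few lines of Python, where `max` on equal arguments is likewise allowed to return either signed zero. The helper name is mine, not the package's.

```python
import math

# Sketch of the fix described above: after taking the max,
# replace a -0.0 supremum with +0.0 so the bound is well formed.

def fix_negative_zero(x):
    # x == 0.0 is true for both signed zeros; an explicit
    # replacement makes the intent clear
    return 0.0 if x == 0.0 else x

sup = max(-0.0, 0.0)            # may evaluate to -0.0 depending on argument order
sup = fix_negative_zero(sup)
print(math.copysign(1.0, sup))  # 1.0: the sign of zero is now positive
```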


by Joel Dahne at June 22, 2017 05:12 PM

June 20, 2017

Michele Ginesi

Timetable: modification


According to my timetable (which you can find here), during this last week of June I should have worked on the input validation of betainc. Since a new bug related to this function has been found and, moreover, the current implementation doesn't accept the "lower" or "upper" tail (as MATLAB does), my mentor and I decided to use this week to start studying how to rewrite betainc (the main references will be [1] and [2]) and to use the last part of the GSoC to actually implement it. This way my timetable remains almost identical (I will use July to work on the Bessel functions) and I will be able to fix this problem as well.

[1] Abramowitz, Stegun "Handbook of Mathematical Functions"
[2] Cuyt, Brevik Petersen, Vendonk, Waadeland "Handbook of Continued Fractions for Special Functions"

by Michele Ginesi at June 20, 2017 07:01 AM