Planet Octave

May 21, 2018

Erivelton Gualter

Code begins - week 1

The first week of coding has been completed.

As I mentioned in the last post, the goal of this past week was to create a fixed layout to study the plot diagrams for the sisotool, as well as to add some UI element functionality to control the interface. The following …

Week 5

With 25th May approaching, I’m confident in saying that I’ve completed my first evaluation (although some grammatical and styling rechecking is left). Now, as directed by Kai, I’ve implemented the wiki_login.m function using the internals of the url-transfer.cc/h classes. I’ve scrapped the earlier design of implementing a cookie_manager.m script and a libcurl_wrapper.cc class for the cookie handling, and have instead implemented __wiki_login__ directly as an internal function in urlwrite.cc. A user is now able to log into api.php.

To test my implementation:

• Make a test account on wiki.octave.space with username and password of your choice.

• hg clone https://me_ydv_5@bitbucket.org/me_ydv_5/octave

• hg up ocs

• cd path/to/your/build/tree/ and execute make -jX

• ./run-octave

• execute wiki_login ("https://wiki.octave.space/api.php", yourUsername, yourPassword);

If all goes well, you should see the prompt Success: logged in as yourUsername.
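
Under the hood, the login is a two-step exchange with the MediaWiki API: first fetch a login token (which also sets session cookies), then POST the credentials together with that token. Here is a rough C++ sketch of that control flow, with a `fetch` callback standing in for the real libcurl transfer; all names in this sketch are illustrative and not the actual implementation, and `fetch` is assumed to return just the relevant field of the server’s reply.

```cpp
// Hedged sketch (not the actual Octave code) of the two-step MediaWiki login
// that wiki_login performs.  The fetch callback stands in for the real HTTP
// transfer, which libcurl handles in the actual implementation.
#include <cassert>
#include <functional>
#include <map>
#include <string>

using params = std::map<std::string, std::string>;
using fetch_fn =
  std::function<std::string (const std::string&, const params&)>;

// Step 1: request a login token; the server also sets session cookies that
// must be sent back in step 2 for the login to succeed.
std::string get_login_token (fetch_fn fetch, const std::string& api)
{
  return fetch (api, {{"action", "query"}, {"meta", "tokens"},
                      {"type", "login"}, {"format", "json"}});
}

// Step 2: POST the credentials together with the token.
std::string wiki_login (fetch_fn fetch, const std::string& api,
                        const std::string& user, const std::string& pass)
{
  std::string token = get_login_token (fetch, api);
  return fetch (api, {{"action", "login"}, {"lgname", user},
                      {"lgpassword", pass}, {"lgtoken", token},
                      {"format", "json"}});
}
```

The real implementation additionally has to parse the JSON reply and carry the session cookies between the two requests, which is exactly where libcurl’s cookie handling comes in.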

Other than that, I’m really grateful to jwe for moving the version.h file from libinterp to liboctave, as it is needed to get OCTAVE_VERSION for the user agent (see week 3 for this). Earlier, it was in libinterp, and liboctave must not use anything from libinterp, since it should compile independently of it.

NOTE: You may need to re-run ./bootstrap and ./configure due to the above changes.

Lastly, I would request anyone who uses Windows (my Windows installation ran into some problems and I’m currently unable to connect to the internet; I’ll reinstall it whenever I get some time) and/or Mac OS to test my implementation and report any errors.

Needless to say I’m always looking forward to constructive feedback/criticism.

Link to BitBucket repo.

May 16, 2018

Week 4

Continuing further, I added more options to libcurl_wrapper.cc. As described in earlier posts, the current implementation of wiki_login.m uses Java’s interface to Octave, and I need to replace it with Octave’s own implementation. I’ve taken two steps in this direction: a user is now able to retrieve a token when executing wiki_login, AND to use the cookies that are set in a temporary .txt file to log into the wiki’s api.php. Currently, there’s a problem in logging in, because the following cookies fail to get added: octave_org_session, octave_orgUserID, octave_orgUserName, octave_orgToken. I learned about these cookies when I tried to execute the curl CLI commands for logging in.
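
For reference, libcurl reads and writes cookies on disk in the Netscape cookie-file format, so the temporary .txt file would contain tab-separated lines roughly like the following. The domains, flags, and values shown here are made up for illustration; an expiry of 0 marks a session cookie.

```text
# Netscape HTTP Cookie File
# domain        include-subdomains  path  secure  expiry  name                  value
.wiki.octave.org	TRUE	/	TRUE	0	octave_org_session	k3v9q...
.wiki.octave.org	TRUE	/	TRUE	0	octave_orgUserID	42
.wiki.octave.org	TRUE	/	TRUE	0	octave_orgUserName	yourUsername
.wiki.octave.org	TRUE	/	TRUE	0	octave_orgToken	a1b2c3...
```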

I also came to understand how the HAVE_CURL macro encapsulates the curl_transfer class: if curl is available on a machine, then this class exists; otherwise it does not. HAVE_CURL is a macro that is set during the ./configure stage of building the software. I will essentially be extending my work in this class in the coming week.
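
The mechanism can be sketched in a few self-contained lines. Note this is only an illustration of configure-time gating, not Octave’s actual url-transfer code; here the macro is defined by hand purely to make the sketch compile on its own, whereas in the real build it comes from ./configure via config.h.

```cpp
// Minimal sketch: how a configure-time macro like HAVE_CURL can gate the
// very existence of a class at compile time.
#include <cassert>
#include <string>

// ./configure defines this when libcurl is detected; defined by hand here
// only so the sketch is self-contained.
#define HAVE_CURL 1

class base_url_transfer
{
public:
  virtual ~base_url_transfer () { }
  virtual std::string name () const { return "base"; }
};

#if defined (HAVE_CURL)
// This derived class is only compiled when libcurl is available.
class curl_transfer : public base_url_transfer
{
public:
  std::string name () const { return "curl"; }
};
#endif
```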

I’ve also added the files in their appropriate directories.

A new dummy wiki has been created by Kai for testing purposes. I’ll be using this from now on.

Link to BitBucket repo.

May 15, 2018

Erivelton Gualter

Community Bonding Period

The community bonding period is over. The past 3 weeks were really busy because of my finals week, final projects, and my doctoral research. However, I basically completed everything I wanted to before the “Coding officially begins!”:

• Finished Optimal Control and Intelligent Control System classes;
• Submitted a conference …

May 08, 2018

Week 3

UPDATE on 12-May-2018: You can now directly test my work by cloning my repo, updating the source tree to my bookmark with hg up ocs, making a build, and calling wiki_login from octave-cli to get a login token in return. I’ve added the files and necessary changes to the codebase itself. There’s no ocode directory now.

If you happen to already have a build of octave, just do the following:

• cd path/to/your/source/tree
• hg pull https://me_ydv_5@bitbucket.org/me_ydv_5/octave in your source tree.
• hg up ocs
• hg up -r 8bbf393
• make -jX in your build tree.

This will save you the time of cloning the entire repo and compiling it.

As mentioned in the previous post, I worked on the __publish_wiki_output__.m and publish.m code. __publish_wiki_output__.m has been added as an internal function in scripts/miscellaneous/private. I skimmed through the parser in publish.m to get the gist of how it actually works. It has three levels of parsing:

1. Extract the overall structure (paragraphs and code sections).

2. Parse the content of each paragraph.

3. Generate the output of the script code and look after the figures produced by the code.
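
As a rough illustration of the first level only, here is a hypothetical, much-simplified line classifier for the publish format, in which a line starting with "%%" opens a new section, other comment lines carry the paragraph text, and everything else is code. This is a sketch of the idea, not publish.m’s real parser, and the names are mine.

```cpp
// Level-1 sketch: split a published Octave script into paragraph and code
// chunks by looking at how each line begins.
#include <cassert>
#include <string>

enum class chunk { section_start, paragraph_text, code };

chunk classify (const std::string& line)
{
  if (line.rfind ("%%", 0) == 0)   // "%%" opens a new section
    return chunk::section_start;
  if (line.rfind ("%", 0) == 0)    // "%" continues the paragraph text
    return chunk::paragraph_text;
  return chunk::code;              // anything else is script code
}
```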

After that, I studied how the url-transfer.h file is implemented: it contains a base class named base_url_transfer, which has a derived class named curl_transfer. One thing that puzzled me while doing so was why there has to be a macro HAVE_CURL in order for curl_transfer to be defined, and why we haven’t defined the url_transfer class itself. I will try to get these doubts resolved this week.

The problem of user agent was solved by selecting the following user agent:

    GNU Octave/OCTAVE_VERSION (https://www.gnu.org/software/octave/ ; help@octave.org) libcurl/LIBCURL_VERSION


where OCTAVE_VERSION and LIBCURL_VERSION correspond to the user’s Octave and libcurl versions, respectively. This code does precisely that for us.
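
As a sketch of how such a string can be assembled, with placeholder version values standing in for the real OCTAVE_VERSION and LIBCURL_VERSION macros (which come from liboctave’s version.h and curl’s curlver.h, respectively):

```cpp
#include <cassert>
#include <string>

// Placeholders only: the real values come from the build system.
#define OCTAVE_VERSION "4.4.0"
#define LIBCURL_VERSION "7.60.0"

// Assemble the user-agent string in the format shown above.
std::string user_agent ()
{
  return std::string ("GNU Octave/") + OCTAVE_VERSION
         + " (https://www.gnu.org/software/octave/ ; help@octave.org)"
         + " libcurl/" + LIBCURL_VERSION;
}
```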

My intended plan for the wrapper is to make a cookie_manager.m file that will process the various user options (like verbose output, timeout settings, the api.php URL, etc.) and pass the values to an internal __curl__.cc function, which will in turn use libcurl_wrapper.cc to do the various tasks (essentially, all the cookie-related work will be handled by it).

Currently, all the code in wiki_login.m has been commented out except the first step of login, i.e., getting a login token from api.php, which it does smoothly as of now. I am assuming that the file which stores the cookies is temporary and should be deleted once the session expires. This is one of the things I will be looking into this week.
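
That delete-on-expiry behaviour could be expressed with a small RAII helper; the following C++ sketch is purely illustrative (the class name and design are mine, not the codebase’s): the cookie file exists for exactly as long as the session object does.

```cpp
#include <cassert>
#include <cstdio>
#include <fstream>
#include <string>

// Hypothetical helper: owns a temporary cookie file and removes it when the
// session object goes out of scope.
class cookie_session
{
public:
  explicit cookie_session (const std::string& path) : m_path (path)
  {
    std::ofstream (m_path).put ('\n');  // create the (empty) cookie jar
  }

  ~cookie_session () { std::remove (m_path.c_str ()); }  // delete on expiry

  const std::string& path () const { return m_path; }

private:
  std::string m_path;
};
```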

I’ve recently migrated all the development from my forked git repo to my mercurial bookmark ocs, so I was not sure where I should put the files in my source tree. Thus, I’ve put all of them in a directory ocode for now.

To test this for yourself:

• Clone my build tree using hg clone https://me_ydv_5@bitbucket.org/me_ydv_5/octave
• Make yourself a build of octave (make -j2, etc.).
• cd octave.
• Update to my bookmark using hg up ocs. (IMPORTANT!)
• cd ocode.
• Execute Makefile in octave.
• Execute wiki_login in octave to get a login token.

All other details of the wrapper’s implementation will be followed in the next post.

The next week will involve the following advances:

• Choosing the right location for the files (after I get a green light for the current developmental path).
• Extending other options in the wrapper for wiki_login’s steps 2 and 3.
• Implementing cookie_manager with other user options.
• Writing help text and test cases, if any.
• Correcting existing work / changing the strategy as advised by my mentor, or anyone else.
• Looking into how I can use the existing base_url_transfer class in the wrapper, and resolving my queries about the HAVE_CURL macro, shared pointers, etc.

I am optimistic that I will be able to complete my first evaluation work by 25th May or so, as I will need to focus on my end-term examinations after that, which start on 1 June. We don’t get holidays in between the exams!

Please let me know whether I am doing it the right way, by either replying to this thread or by simply dropping a message on #octave for <batterylow>. All suggestions are always welcome.

Oh, and not to forget: I got an SSL certificate for my domain, so now all requests are served via HTTPS only!

Stay tuned for next update.

May 01, 2018

Week 2

Week 1 included setting up the whole work environment and the Bitbucket repository for tracking my project’s progress and review, and getting this blog aggregated on planet.octave.org. It also included reading up on the various files that are of concern to the project.

Week 2 will focus on getting my hands dirty refactoring the __publish_wiki_output__.m and (possibly) publish.m code. This will also include looking up what exact methods/functions will be needed to implement the wrapper. Currently, a proof of concept exists as a bash script. My work of writing the wrapper will be highly inspired by this script.

The wrapper is written so that MediaWiki can be communicated with directly from Octave, without any need to use Java’s interface to Octave or the bash script itself. To know how exactly the MediaWiki API works, have a look at this nicely written post. Note that the $wgEnableAPI setting mentioned in the post is deprecated as of MediaWiki version 1.32.0. Another thing that needs looking into is that MediaWiki needs a user agent in order for the client to be identified, so we need to decide what it should be.

Stay tuned for the next update!

April 26, 2018

Sudeepam Pandey

Starting with GSoC 2018

So this year, I applied to the Google Summer of Code and got in. Google Summer of Code, or GSoC, as it is usually called, is a program funded by Google that has helped open source grow for over a decade. Under this program, Google awards stipends to university students for contributing code to open source organizations during their summer breaks from university. The details of the program can be found here: Starting with Google summer of code. This year, I have been selected to work with GNU Octave. It is a free and open source software / high-level programming language which is primarily focused on scientific computing. It is largely compatible with MATLAB and is a brilliant open source alternative to it. More details about GNU Octave can be found at Free your numbers! Introducing GNU Octave. My GSoC project is about adding a command line suggestion feature to GNU Octave. Stay tuned, I will share the details of the project very soon.

April 25, 2018

Sahil Yadav

Week 1

I’ve been selected as a Google Summer of Code 2018 student developer at GNU Octave. GNU Octave is a high-level language, primarily intended for numerical computations. It provides a convenient command line interface for solving linear and nonlinear problems numerically, and for performing other numerical experiments using a language that is mostly compatible with Matlab.
It may also be used as a batch-oriented language. A very heartfelt thanks to Kai T. Ohlhus, Doug Stewart, Ankit Raj and others who saw my potential and chose me for GSoC 2018. My project is Octave Code Sharing. The community bonding period will include reading up on material that’ll be essential for the project.

Abstract: This project aims to come up with a pan-Octave implementation that can be used to connect to wiki.octave.org with appropriate credentials for publishing Octave scripts, which can be hosted for distribution using the MediaWiki API. Currently, there is no formal implementation, only a proof of concept that was implemented as a bash script and later refactored to use Java’s interface to Octave. Since the network connection itself has latency and response times that are large enough to be noticeable, an Octave script that handles the connection will not be much of a performance killer. To work with the stateless HTTP protocol, some required information will be stored as cookies with the help of the libcurl library. All this will lead to a set of RESTful services for GNU Octave, which could be further extended to support compatibility with MATLAB’s RESTful interface.

Timeline:

1. April 23 - May 14: Community Bonding Period. Get to know more about libcurl and its implementation. Learn about HTTP request headers and the stateless transfer protocol. Finalise the location of the files in the codebase that will be needed for code sharing. Study the existing work that has been done by the mentor. Study the publish() and grabcode() functions. Study the octave::url_transfer class. Learn about the MediaWiki API and its implementation (backend working).

2. May 15 - May 23: Implement the libcurl wrapper in libcurl_wrapper.cc/h. This will include the various abstractions such as ‘ALL’, ‘SESS’, ‘FLUSH’, ‘RELOAD’ that can be used by an Octave script to use libcurl cookies to connect to the server. There’s a preliminary implementation of the same, written by the mentor some time back, which can be extended in this phase into a more general design.

3. May 23 - May 31: Add ‘wiki’ as an ‘output_format’ in the publish() function. This will extend the implementation of the private function __publish_wiki_output__.m that is used to format wiki-published code.

4. June 1 - June 10: Non-coding period due to university major examinations.

5. June 11 - June 15: First Phase Evaluation. Write documentation for the work done up to this point and other tests, such as setting up the wiki installation environment for testing the script that will be implemented in point 6. Catch up with the work if anything is lagging behind.

6. June 15 - June 30: Implement wiki_login.m, which will use the libcurl wrapper implemented in point 2. Currently this file uses Java’s interface to Octave. With the wrapper, the file will become an implementation of RESTful services for ‘code sharing’ and will not depend on Java’s interface to Octave.

7. July 1 - July 9: Implement the ‘weboptions’ function. It will return a default ‘weboptions’ object to specify parameters for a request to a web service. This function will use the libcurl wrapper that will already have been implemented. The current scope of the function is restricted to four options, viz., ‘Username’, ‘Password’, ‘KeyName’, ‘KeyValue’.

8. July 10 - July 13: Second Evaluation Phase. Write documentation for the work done up to here and other tests required for the ‘weboptions’ function. Catch up with any previous work if left.

9. July 14 - July 20: Implement the ‘webread’ function. This will read content from a web service and return data formatted as text. Other output arguments (cmap, alpha) will not be supported for now.

10. July 21 - July 27: Implement the ‘webwrite’ function. This will put data in the body of an HTTP POST request to the web service.

11. July 28 - August 6: Buffer period for anything that remains.
Complete the documentation and testing for ‘webwrite’ and ‘webread’.

For now, libcurl_wrapper.cc/h will be placed in libinterp/corefcn, but this could change depending on whether it should be used by default or at the user’s choice, because someone might not want to connect to wiki.octave.org. I have, for now, merged two weeks, i.e., from 15 June to 30 June, because the main task will be the wrapper implementation for connecting to the webserver. This may span up to two weeks, hence the two weeks for the same task.

Additional things to be done after GSoC is over: The web function is already partially implemented. The task will be to finish the implementation with various arguments such as ‘-notoolbar’, ‘-noaddressbox’, ‘-new’ for MATLAB compatibility. ‘websave’, ‘ftp’ and ‘sendmail’, which are a part of RESTful services, will also be implemented. Any other part of GNU Octave that currently might need RESTful services using cookies can be amended to use the implementation that results from the project. The ‘webread’ function will be extended to read values in JSON format and images as well.

Thanks for reading, will be posting the next update soon. Looking forward to a bug-free and code-some summer. :-)

April 23, 2018

Erivelton Gualter

Welcome to Octave and Google Summer of Code

This summer I got accepted to the Google Summer of Code under GNU Octave. This program, administered by Google, facilitates the entry of students into the open source community. My primary goal in participating in the GSoC is to build a long term relationship with the open source community …

March 06, 2018

Jordi Gutiérrez Hermoso

Advent of D

I wrote my Advent of Code in D. The programming language. It was the first time I used D in earnest every day for something substantial.
It was fun and I learned things along the way, such as easy metaprogramming, concurrency I could write correctly, and functional programming that doesn’t feel like I have one arm tied behind my back. I would do it all over again.

My main programming languages are C++ and Python. For me, D is the combination of the best of these two: the power of C++ with the ease of use of Python. Or to put it another way, D is the C++ I always wanted. This used to be D’s sales pitch, down to its name. There’s lots of evident C++ heritage in D. It is a C++ successor worthy of consideration.

Why D?

This is the question people always ask me. Whenever I bring up D, I am faced with the following set of standard rebuttals:

• Why not Rust?
• D? That’s still around?
• D doesn’t bring anything new or interesting
• But the GC…

I’ll answer these briefly: D was easier for me to learn than Rust; yes, it’s still around and very lively; it has lots of interesting ideas; and what garbage collector? I guess there’s a GC, but I’ve never noticed it and it’s never gotten in my way. I will let D speak for itself further below.

For now, I would like to address the “why D?” rebuttals in a different way. It seems to me that people would rather not have to learn another new thing. Right now, Rust has a lot of attention and some of the code, and right now it seems like Rust may be the solution we always wanted for safe, systems-level coding. It takes effort to work on a new programming language. So, I think the “why D?” people are mostly saying, “why should I have to care about a different programming language; can’t I just immediately dismiss D and spend time learning Rust instead?”

I posit that no, you shouldn’t immediately dismiss D. If nothing else, try to listen to its ideas, many of which are distilled into Alexandrescu’s The D Programming Language.
I recommend this book as good reading material for computer science, even if you never plan to write any D (as a language reference itself, it’s already dated in a number of ways, but I still recommend it for the ideas it discusses). Also browse the D Gems section in the D tour. In the meantime, let me show you what I learned about D while using it.

Writing D every day for over 25 days

I took slightly longer than 25 days to write my Advent of Code solutions, partly because some stumped me a little and partly because around actual Christmas I wanted to spend time with family instead of writing code. When I was writing code, I would say that nearly every day of Advent of Code forced me to look into a new aspect of D. You can see my solutions in this Mercurial repository.

I am not going to go too much into the details of the abstract theory concerning the solution of each problem. Perhaps another time. I will instead focus on the specific D techniques I learned about or found most useful for each.

Day 1: parsing arguments, type conversions, template constraints

For Day 1, I was planning to be a bit more careful about everything around the code. I was going to carefully parse CLI arguments, produce docstrings and error messages when anything went wrong, and carefully validate template arguments with constraints (comparable to concepts in C++). While I could have done all of this, as the days went by I tried to golf my solutions, so I abandoned most of this boilerplate. Instead, I lazily relied on getting D stack traces at runtime or compiler errors when I messed up. As you can see from my solution, had I kept it up, the boilerplate isn’t too bad, though.

Template constraints are achieved by adding if(isNumeric!numType), which checks at compile time that my template was given a template argument of the correct type, where isNumeric comes from import std.traits.

I also found that getopt is a sufficiently mature standard library for handling command-line parsing.
It’s not quite as rich as Python’s argparse, merely sufficient. This about shows all it can do:

string input;
auto opts = getopt(
  args,
  "input|i", "Input captcha to process", &input
);
if (opts.helpWanted) {
  defaultGetoptPrinter("Day 1 of AoC", opts.options);
}

Finally, a frequent workhorse that appeared from Day 1 was std.conv for parsing strings into numbers. A single function, to, is surprisingly versatile and does much more than that, by taking a single template argument for converting (not casting) one type into another. It knows not only how to parse strings into numbers and vice versa, but also how to convert numerical types keeping as much precision as possible, or how to read list or associative array literals from strings if they are in their standard string representation. It’s a good basic example of D’s power and flexibility in generic programming.

Day 2: functional programming and uniform function call syntax

For whatever reason, probably because I was kind of trying to golf my solutions, I ended up writing a lot of functionalish code, with lots of map, reduce, filter, and so forth. This started early on with Day 2. D is mostly unopinionated about which style of programming one should use and offers tools for object orientation, functional programming, or just plain procedural programming, presenting no obstacle to mixing these styles. Lambdas are easily written inline with concise syntax, e.g. x => x*x, and the basic standard functional tools like map, reduce, filter and so on are available.

D’s approach to functional programming is quite pragmatic. While I rarely used it, because I wasn’t being too careful in these solutions, D functions can be labelled pure, which means that they can have no side effects. However, this still lets them do local impure things such as reassigning a variable or having a for loop. The only restriction is that all of their impurity must be “on the stack”, and that they cannot call any impure functions themselves.
Another feature that I came to completely fall in love with was what they call uniform function call syntax (UFCS). With some caveats, this basically means that foo.bar(baz) is just sugar for bar(foo, baz). If the function only has one argument, the round brackets are optional and foo.bar is sugar for bar(foo). This very basic syntactic convenience makes it easy and pleasant to chain function calls together, which makes it more inviting to write functional code. It is also a happy unification of OOP and FP, because syntactically it’s the same to give an object a new member function as it is to create a free-standing function whose first argument is the object.

Day 3: let’s try some complex arithmetic!

For me, 2-dimensional geometry is often very well described by complex numbers. The spiral in this problem seemed easy to describe as an associative array from complex coordinates to integer values. So, I decided to give D’s std.complex a try. It was easy to use and there were no big surprises here.

Day 4: reusing familiar tools to find duplicates

There weren’t any new D techniques here, but it was nice to see how easy it was to build a simple word counter from D builtins. I was slightly disappointed that this data structure isn’t built in, like Python’s own collections.Counter, but that was hardly an insurmountable problem.

Day 5: more practice with familiar tools

Again, not much new D here. I like the relative ease with which it’s possible to read integers into a list using map and std.conv.to.

Day 6: ranges

There’s usually a fundamental paradigm or structure in a programming language out of which everything else depends: Haskell has functions and monads, C has pointers and arrays, C++ has classes and templates, Python has dicts and iterators, Javascript has callbacks and objects, Rust has borrowing and immutability. Ranges are one of D’s fundamental concepts.
Roughly speaking, a range is anything that can be iterated over, like an array or a lazy generator. Thanks to D’s powerful metaprogramming, ranges can be defined to satisfy a kind of compile-time duck typing: if a type has methods to check for emptiness, get the first element, and get the next element, then it’s an InputRange. This duck typing is somewhat reminiscent of type classes in Haskell. D’s general principle of having containers and algorithms over those containers is built upon the range concept. Ranges are intended to be a simpler reformulation of iterators from the C++ standard library.

I had been using ranges all along, as foreach loops are kind of like sugar for invoking those methods on ranges. However, for Day 6 I actually wanted to use an std.range function, enumerate. It simply iterates over a range while simultaneously producing a counter. I used this to write some brief code to obtain both the maximum of an array and the index at which it occurs.

Another range-related feature that appears for the first time here is slicing. Certain random-access ranges which allow integer indexing also allow slicing, and the typical way to remove elements from an array is to use it. For example, to remove the first five elements and the last two elements from an array:

arr = arr[5..$-2];

Here the dollar sign is sugar for arr.length and this removal is simply done by moving some start and end pointers in memory — no other bytes are touched.

The D Tour has a good taste of ranges and Programming in D goes into more depth.
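
To make the duck-typing point concrete, here is a toy C++ analogue (not D, and not real Phobos code): any type exposing empty/front/popFront can be consumed by a generic algorithm, with the check happening entirely at compile time via templates. The iota_range and sum names are made up for this sketch.

```cpp
#include <cassert>

// A toy InputRange over a half-open interval of ints.
struct iota_range
{
  int lo, hi;
  bool empty () const { return lo >= hi; }
  int front () const { return lo; }
  void popFront () { ++lo; }
};

// Any type with empty/front/popFront can be consumed generically,
// much like D's compile-time duck typing for ranges.
template <typename R>
int sum (R r)
{
  int total = 0;
  for (; ! r.empty (); r.popFront ())
    total += r.front ();
  return total;
}
```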

Day 7: structs and compile-time regexes

My solution for this problem was more complicated, and it forced me to break out an actual tree data structure. Because I wasn’t trying to be particularly parsimonious about memory usage or execution speed, I decided to create the tree by having a node struct with a global associative array indexing all of the nodes.

In D, structs have value semantics and classes have reference semantics. Roughly, this means that structs are on the stack, they get copied around when being passed into functions, while classes are always handled by reference instead and dynamically allocated and destroyed. Another difference between structs and classes is that classes have inheritance (and hence, polymorphic dispatch) but structs don’t. However, you can give structs methods, and they will have an implicit this parameter, although this is little more than sugar for free-standing functions.
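
The same split exists in C++, just spelled differently. A minimal illustration (names are mine, nothing here is D): a struct passed by value is copied, while passing by reference shares the one object, which is roughly how D class handles behave.

```cpp
#include <cassert>

struct node { int weight; };

// Passing by value copies the struct: the caller's object is untouched.
void bump_copy (node n) { n.weight += 1; }

// Passing by reference mutates the caller's object, the way D class
// handles (or C++ references/pointers) behave.
void bump_ref (node& n) { n.weight += 1; }
```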

Enough on OOP. Let’s talk about the really exciting stuff: compile-time regular expressions!

For this problem, there was some input parsing to do. Let’s look at what I wrote:

void parseLine(string line) {
    static nodeRegex = regex(r"(?P<name>\w+) \((?P<weight>\d+)\)( -> (?P<children>[\w,]+))?");
    auto row = matchFirst(line, nodeRegex);
    // init the node struct here
}

The static keyword instructs D that this variable has to be computed at compile time. D’s compiler basically has its own interpreter that can execute arbitrary code, as long as all of the inputs are available at compile time. In this case, the regex is parsed and compiled into the binary. The next line, where I call matchFirst on each line, runs at runtime, but if for whatever reason I had these strings available at compile time (say, defined as a big inline string a few lines up in the same source file), I could also do the regex matching at compile time if I wanted to.

This is really nice. This is one of my favourite D features. Add a static and you can precompute into your binary just about anything. You often don’t even need any extra syntax. If the compiler realises that it has all of the information at compile time to do something, it might just do it. This is known as compile-time function execution, hereafter, CTFE. The D Tour has a good overview of the topic.
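As a tiny illustration of CTFE (my own example, not from the puzzle code): since the initialiser of an enum constant must be known at compile time, the compiler will happily run ordinary recursive D code to produce it:

```d
int fib(int n) {
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Forced to compile time: the recursion runs inside the compiler,
// and only the constant 55 ends up in the binary.
enum fib10 = fib(10);

static assert(fib10 == 55);
```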

Day 8: more compile-time fun with mixin

Day 8 was another problem where the most interesting part was parsing. As before, I used a compile-time regex. But the interesting part of this problem was the following bit of code for parsing strings into their corresponding D comparison operation, as I originally wrote it:

auto comparisons = [
    "<":  function(int a, int b) => a < b,
    ">":  function(int a, int b) => a > b,
    "==": function(int a, int b) => a == b,
    "<=": function(int a, int b) => a <= b,
    ">=": function(int a, int b) => a >= b,
    "!=": function(int a, int b) => a != b,
];

Okay, this isn’t terrible. It’s just… not very pretty. I don’t like that it’s basically the same line repeated six times, and that within each line I have to repeat the operator in the string key and in the function body. Enter the mixin keyword! Basically, string mixins allow you to evaluate any string as D code at compile time. They’re kind of like the C preprocessor, but much safer: string mixins only evaluate complete expressions, so no shenanigans like #define private public are allowed. My first attempt to shorten the above looked like this:

bool function(int, int)[string] comparisons;
static foreach(cmp; ["<", ">", "==", "<=", ">=", "!="]) {
    comparisons[cmp] = mixin("function(int a, int b) => a " ~ cmp ~ " b");
}

Since I decided to use a compile-time static loop to populate my array, I now needed a separate declaration of the variable which forced me to spell out its ungainly type: an associative array that takes a string and returns a function with that signature. The mixin here takes a concatenated string that evaluates to a function.

However, this didn’t work for two reasons!

The first one is that static foreach was only introduced in September 2017; the D compilers packaged in Debian didn’t have it yet when I wrote that code! The second problem is more subtle: initialisation of associative arrays currently cannot be done statically, because their internal data structures rely on runtime computations, according to my understanding of this discussion. They might fix it some day?

So, next best thing is my final answer:

bool function(int, int)[string] comparisons;

auto getComparisons(Args...)() {
    foreach(cmp; Args) {
        comparisons[cmp] = mixin("function(int a, int b) => a " ~ cmp ~ " b");
    }
    return comparisons;
}

shared static this() {
    comparisons = getComparisons!("<", ">", "==", "<=", ">=", "!=");
}

Alright, by size this is hardly shorter than the repetitive original. But I still think it’s better! It has no dull repetition where bugs are most often introduced, and it’s using a variable-argument templated function so that the mixin can have its values available at compile time. It uses the next best thing to compile-time initialisation, which is a module initialiser shared static this() that just calls the function to perform the init.

Day 9: a switch statement!

Day 9 was a simpler parsing problem, so simple that instead of using a regex I decided to just use a switch statement. There isn’t anything terribly fancy about switch statements, and they work almost exactly the same as they do in other languages. The only distinctive features of switch statements in D are that they work on numeric, string, or bool types and that they have deprecated implicit fallthrough. Fallthrough must instead be done explicitly with goto case; (or will have to be, once the deprecation is complete).

Oh, and you can also specify ranges for a case statement, e.g.

case 'a': .. case 'z':
    // do stuff with lowercase ASCII
    break;

It’s the small conveniences that make this pleasant. Programming in D has a good discussion on switch statements.
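As a quick sketch of those two distinctive features together (a hypothetical example of mine, not from the day 9 solution), here is a switch over strings with an explicit fallthrough:

```d
import std.stdio : writeln;

void move(string dir) {
    switch (dir) {
    case "n":
        goto case;   // explicit fallthrough, since implicit is deprecated
    case "north":
        writeln("going north");
        break;
    default:
        writeln("unknown direction");
    }
}

void main() {
    move("n");       // prints "going north"
}
```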

Day 10: learning what ranges cannot do

So, superficially, you might think that, since expressions like arr[2..$-2] are valid, the language would also allow things like arr[$-2..1] to traverse the array in reverse order, or some other syntax for a step size other than +1. At least I did. These kinds of array indexing are common in numerical array languages such as Octave, Julia, R, or Python’s numpy. So for day 10’s hash, which requires reversing part of an array, I thought I could just do that.

Turns out that the language doesn’t have syntax for this, but after a quick trip to the standard library I found the necessary function. What I thought could be written as

arr[a..b] = arr[b..a];

is instead spelled

reverse(arr[a..b]);

Other than this minor discovery about ranges, Day 10 was more about getting the algorithm right than using any specialised D utilities. Since real hashes typically allow several sizes, I templated the hash functions with the total size, rounds of hashing, and chunk size, with a template constraint that the chunk size must divide the total size:

auto getHash(int Size = 256, int Rounds = 64, int ChunkSize = 16)(string input)
if (Size % ChunkSize == 0)
{
    // ...
}

Nothing new here. I just like that template constraints are so easy to write.

Day 11: offline hex coding

I did most of Day 11 on paper. It took me a while to figure out a proper hex coordinate system and what the distance function in that coordinate system should be. I had seen hex coordinates before, from playing Battle for Wesnoth, but it took me a while to figure them out again. Once I had that, the actual D code was pretty simple and used no techniques I hadn’t seen before. I think this is the first time I used the cumulativeFold function, but other than that, nothing to see here. An immutable global associative array populated at module init time like before,

pure static this() {
    directions = [
        "ne": [1, 1],
        "n":  [0, 1],
        "nw": [-1, 0],
        "sw": [-1, -1],
        "s":  [0, -1],
        "se": [1, 0],
    ];
}

and that’s it.

Day 12: for want of a set

The only new D technique for this problem was that I decided to use a set structure to keep track of which graph nodes had been visited. The only problem is that D doesn’t have a built-in set structure (yet?), but it does have a setDifference function. It’s a bit clunky: it only works on ordered arrays, but that was sufficient for my purpose here, and probably not much worse than a traditional hash-based set structure would have been.

One further observation: D has an in keyword, which can be used to test membership, like in Python (it also has an unrelated use for defining input and output arguments to functions), but unlike Python, only for associative arrays. This makes sense, because the complexity of testing for membership for other data structures can vary widely depending on the structure and the chosen algorithm, and there isn’t a clear universal choice like there is for associative arrays.
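For associative arrays, in is in fact slightly richer than Python’s membership test: it returns a pointer to the value (null when the key is absent), so the test and the lookup can be combined. A small sketch of mine:

```d
void main() {
    int[string] ages = ["alice": 30];

    // `in` yields an int* here: null if the key is missing,
    // else a pointer straight to the stored value.
    if (auto p = "alice" in ages)
        assert(*p == 30);

    assert(("bob" in ages) is null);
}
```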

If desired, however, it’s possible to define the in operator for any other class, like so:

bool opBinaryRight(string op : "in")(T elt) {
    // check that elt is in this
}

I would assume that’s what you could use to write a set class for D.
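For instance, a minimal set type along those lines might look like this (my own sketch, wrapping an associative array of dummy booleans):

```d
struct Set(T) {
    private bool[T] items;

    void add(T elt) { items[elt] = true; }

    // Makes `elt in someSet` work, mirroring the built-in
    // associative array syntax.
    bool opBinaryRight(string op : "in")(T elt) const {
        return (elt in items) !is null;
    }
}

void main() {
    Set!int s;
    s.add(3);
    assert(3 in s);
    assert(!(7 in s));
}
```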

Day 13: more offline coding

This one is another where I did most of the solution on paper and thus managed to write a very short program. No new D techniques here, just the usual functionalish style that I seem to be developing.

Day 14: reusing older code as a module

The problem here is interesting because I’ve solved this labelling of connected components problem before in C++ for GNU Octave. I wrote the initial bwlabeln implementation using union-find. I was tempted to do the same here, but I couldn’t think of a quick way to do so, and talking to others in the #lobsters channel in IRC, I realised that a simpler recursive solution would work without overflowing the stack (because the problem is small enough, not because a stack-based algorithm is clever).

The interesting part is reusing an earlier solution, the hashing algorithm from Day 10. At first blush, this is quite simple: every D file also creates its own module, namespaced if desired by directories. It’s very reminiscent of Python’s import statement and module namespacing. The only snag is that my other file has a void main(string[] args) function and so does this one. The linker won’t like that duplicate definition of symbols. For this purpose, D offers conditional compilation, which in C and C++ is usually achieved via a familiar C preprocessor macro idiom.

In D, this idiom is codified into the language proper via the version keyword, like so:

version(standalone) {
    void main(string[] args) {
        // do main things here
    }
}

This instructs the compiler to compile the inside of the version block only if an option called “standalone” is passed in,

gdc -O2 -fversion=standalone app.d -o day10

or, with regrettably slightly different flags,

ldc2 -O2 -d-version=standalone app.d -of day10

There are other built-in arguments for version, such as “linux” or “OSX” to conditionally compile for a particular operating system. This keyword offers quite a bit of flexibility for conditional compilation, and it’s a big improvement over C preprocessor idioms.

Day 15: generators, lambdas, functions, and delegates

This problem was an opportunity to test out a new function, generate, which takes a function and calls it repeatedly to lazily produce a range. Haskell calls a similar function iterate, which I think is a better name. Since the result is a lazy generator, you need something like take to say how much of it you want to use. For example, the Haskell code

pows = take 11 $ iterate (\x -> x*2) 1

can be translated into D as

auto x = 1;
auto pows = generate!(() { auto old = x; x *= 2; return old; }).take(11);

There are other examples in the documentation.

Let me also take a moment here to talk about the different anonymous functions in D. The following both declare a function that squares its input:

function(int a) { return a^^2; }
delegate(int a) { return a^^2; }

The difference is a question of closure. The delegate version carries a hidden pointer to its enclosing scope, so it can dynamically close over the outer scope’s variables. If you can’t afford to pay this runtime penalty, the function version doesn’t reference the enclosing scope (no extra pointer).

So, for a generator, you typically want to use a delegate, since you want the generator to remember its scoped variables across successive calls, like what I did:

auto generator(ulong val, ulong mult) {
    return generate!(delegate() {
        val = (val * mult) % 2147483647;
        return val;
    });
}

This function returns a generator range where each entry is a new entry of this pseudorandom linear congruential generator. The delegate/function keyword is part of the type, and can be omitted if it can be inferred from context (e.g. when passing a function into another function as an argument). Furthermore, there’s a lambda shorthand that I have been using all along, where the { return foo; } boilerplate can be shortened to just => like so:

(a) => a^^2

This form is only valid where there’s enough context to infer whether it’s a delegate or a function, as well as the type of a itself. More details are in the language spec.

Day 16: permutations with primitive tools

This permutations problem made me reach for the std.algorithm function bringToFront for cyclically permuting an array in place, like so:

bringToFront(progs[rot..$], progs[0..rot]);

It’s a surprisingly versatile function that can be used to perform more tricks than cyclic permutations. Its documentation is worth a read.

I also ran into a D bug here: I had to create a character array from an immutable input string, but due to Unicode-related reasons in D’s special handling of characters, I had to cast to ubyte[] instead of char[].

Besides that, for the second part, where you had to realise that the permutation cannot have too big an orbit, I also ended up using a string array with canFind from std.algorithm. I would have preferred a string set with hashing instead of linear searching, but it didn’t make a huge difference for a problem of this size.

I really want sets in the D standard library. Maybe I should see what I can do to make them happen.

Day 17: avoiding all the work with a clever observation

This puzzle is a variation of the Josephus problem. I needed some help from #lobsters in IRC to figure out how to solve it. There aren’t any new D techniques, just some dumb array concatenation with the tilde operator for inserting elements into an array:

circ = circ[0..pos] ~ y ~ circ[pos..$];

The second part can be solved via the simple observation that you only need to track one position, the one immediately following zero.

Day 18: concurrency I can finally understand and write correctly

This day was an exciting one for me. Discussing this problem with others, it seems many people had much difficulty solving it in other programming languages: most had to emulate their own concurrency with no help from their language of choice. In contrast, D absolutely shined here, because it is based on the actor concurrency model (message passing), which precisely fits the problem statement. (There are other concurrency primitives in case the actor model isn’t sufficient, but it was sufficient for me.)

The basic idea of concurrency in D is that each thread of execution localises all of its state. By default, threads share no data. In order to communicate with each other, threads pass messages. A thread can indicate at any time when it’s ready to send or receive messages. Messages can be of any type, and each thread says what type it’s expecting to receive. If a thread receives a type it’s not prepared to handle, it will throw an exception. There are more details, such as what happens if a thread receives too many messages but doesn’t respond to any of them. Let’s not go into that now. Basic idea: threads get spawned, threads send and receive messages.

Let’s spend a little bit of time looking at the relevant functions and types I used, all defined in std.concurrency.

spawn: Starts a thread of execution. The first argument is a reference to the function that this thread will execute, along with any arguments that function may take. Returns a Tid, a thread id. This is used as the address to send messages to. A thread can refer to its parent thread via the special variable ownerTid. Unless explicitly declared shared, the arguments of a threaded function must be immutable. This is how the compiler guarantees no race conditions when manipulating those variables. Of course, with shared variables, the programmer is signalling that they are taking over synchronisation of that data, which may require using low-level mutexes.

send: Sends a message to a particular thread. The first argument is the thread id. The other arguments can be anything. It’s up to the receiving thread to handle the arguments it receives.

receiveOnly: Indicates that this thread is ready to receive a single type, and returns the value of that type. The type must of course be specified as a compile-time argument.

receive: Indicates what to do with any of several possible types. The arguments to this function are a collection of functions whose parameter types will be dynamically matched against received types. I didn’t need this function, but I wanted to mention that it exists.

receiveTimeout: The problem statement is designed to deadlock. Although there probably is a more elegant solution, timing out on deadlock was the solution I wrote. This function does just that: it listens for a message for a set amount of time. If a message is received in the designated time, its handler function is executed and receiveTimeout returns true. If the timeout happens, it returns false instead.

Armed with these tools, the solution was a breeze to write. I first spawn two threads and save their thread ids,

auto tid1 = spawn(&runProgram, opcodes, 0);
auto tid2 = spawn(&runProgram, opcodes, 1);

Each of the two threads defined by runProgram then immediately stops, waiting for a thread id, to know whom to talk to,

void runProgram(immutable string[] opcodes, ulong pid) {
    auto otherProg = receiveOnly!Tid();
    // ...
}

The parent thread then connects the two worker threads to each other,

send(tid1, tid2);
send(tid2, tid1);

And off they go: the two threads run through the opcodes in the problem statement, and eventually they deadlock, which I decided to handle with a timeout like so,

case "rcv":
    if (!receiveTimeout(100.msecs, (long val) { regs[reg] = val; })) {
        goto done;
    }
    // no timeout, handle next opcode
    goto default;

After the thread has timed out, it signals to the parent thread that it’s done,

done:
    send(ownerTid, thisTid, sent);

The parent in turn receives two tuples with a thread id and computed value from each thread, and based on that decides what to output, after figuring out which thread is which,

// Wait for both children to let us know they're done.
auto done1 = receiveOnly!(Tid, long);
auto done2 = receiveOnly!(Tid, long);
if (done1[0] == tid2) {
    writeln(done1[1]);
}
else {
    writeln(done2[1]);
}

And voilà, concurrency easy-peasy.

Day 19: string parsing with enums and final switches

The only new D technique here is final switches. A final switch is for enum types, to make the compiler enforce that you write a case for each of the possible values. That’s what I did here, where I wanted to make sure I matched the up, down, left, and right directions:

final switch(dir) {
case DIR.d: j++; break;
case DIR.u: j--; break;
case DIR.l: i--; break;
case DIR.r: i++; break;
}

The rest of the problem is merely some string parsing.

Day 20: a physics problem with vector operations

I have not yet done serious numerical work with D, but I can see that it has all of the necessary ingredients for it. One of the most obvious amongst these is its built-in support for writing vector instructions. Given a struct to model a particle in motion,

struct particle {
    double[3] pos;
    double[3] vel;
    double[3] acc;
}

the following function returns another particle where each of the vectors has been divided by its norm (i.e. normalised to 1):

auto unit(particle p) {
    auto pos = p.pos, vel = p.vel, acc = p.acc;
    pos[] /= pos.norm;
    vel[] /= vel.norm;
    acc[] /= acc.norm;
    particle u = {pos, vel, acc};
    return u;
}

This vec[] /= scalar notation divides the whole vector by the given scalar. But that’s not all. You can also add or multiply vectors elementwise with similar syntax,

double[3] diff1 = p.pos, diff2 = p.vel;
diff1[] -= p.vel[];
diff2[] -= p.acc[];

Here diff1 and diff2 give the vector differences between the position and velocity, and between the velocity and acceleration, respectively (I use the criterion that all three of these vectors are mostly collinear to determine whether a particle has escaped the system and thus can no longer interact with any other particle). This is mostly syntactic sugar, however. Although the D compiler can sometimes turn instructions like these into native vector instructions like AVX, real vectorisation has to be done via some standard library support.

Day 21: an indexable, hashable, comparable struct

I was happy to recognise, via some string mixins, that I could solve this problem by considering the dihedral group of the square:

immutable dihedralFun = "function(ref const Pattern p) {
    auto n = p.dim;
    auto output = new int[][](n, n);
    foreach(i; 0..n) {
        foreach(j; 0..n) {
            output[i][j] = p.grid[%s][%s];
        }
    }
    return output;
}";

immutable dihedralFourGroup = [
    mixin(format(dihedralFun, "i", "j")),
    mixin(format(dihedralFun, "n-i-1", "j")),
    mixin(format(dihedralFun, "i", "n-j-1")),
    mixin(format(dihedralFun, "n-i-1", "n-j-1")),
    mixin(format(dihedralFun, "j", "i")),
    mixin(format(dihedralFun, "n-j-1", "i")),
    mixin(format(dihedralFun, "j", "n-i-1")),
    mixin(format(dihedralFun, "n-j-1", "n-i-1")),
];

This isn’t a new technique, but I’m really happy with how it turned out. Almost like Lisp macros, but without devolving into the lawless chaos of Python or Javascript eval or C preprocessor macros. As an aside, the format function accepts formatted strings with POSIX syntax for positional arguments, but there isn’t anything built-in as nice as Perl string interpolation or Python format strings.

The real meat of this problem was to implement a grid structure that could be hashed, compared, and indexed. This is all done with a number of utility functions. For indexing and slicing, the basic idea is that for a user-defined type foo,

foo[bar..baz]

is sugar for

foo.opIndex(foo.opSlice(bar, baz))

so those are the two functions you need to implement for indexing and slicing. Similarly, for equality comparison and hashing, you implement opEquals and toHash respectively. I relied on the dihedral functions above for comparison for this problem. After implementing these functions for a struct (recall: like a class, but with value semantics and no inheritance), the rest of the problem was string parsing and a bit of logic to implement the fractal-like growth rule.

Day 22: more enums, final switches, and complex numbers

Another rectangular grid problem, which I again decided to represent via complex numbers. The possible infection states given in the problem I turned into an enum, which I then checked with a final switch as before. The grid is then just an associative array from complex grid positions to infection states. Nothing new here. By now these are all familiar tools and writing D code is becoming habitual for me.

Day 23: another opcode parsing problem

This problem wasn’t technically difficult from a D point of view. The usual switches and string parsing techniques of days 8 and 18 work just as well. In fact, I started with the code of day 18 and modified it slightly to fit this problem. The challenge was to statically analyse the opcode program to determine that it is implementing a very inefficient primality testing algorithm. I won’t go into an analysis of that program here because others have already done a remarkable job of explaining it. Once this analysis was complete, the meat of the problem becomes writing a faster primality test, such as dumb (but not too dumb) trial division,

auto isComposite(long p) {
    return iota(2, sqrt(cast(double) p)).filter!(x => p % x == 0).any;
}

and using this test at the appropriate location.

Day 24: a routine graph-search problem

This problem required some sort of graph structure, which I implemented as an associative array from node ids to all edges incident to that node. The problem then reduces to some sort of graph traversal (I did depth-first search), keeping track of edge weights. No new D techniques here either, just more practice with my growing bag of tricks.

Day 25: formatted reads to finish off Advent of D

The final problem involved parsing a slightly more verbose DSL. For this, I decided to use formatted strings for reading, like so,

auto branchFmt = "  - Write the value %d.
  - Move one slot to the %s.
  - Continue with state %s.
";

auto parseBranch(File f) {
    int writeval;
    string movedir;
    char newstate;
    f.readf(branchFmt, &writeval, &movedir, &newstate);
    return Branch(writeval ? true : false,
                  movedir == "left" ? -1 : 1,
                  newstate);
}

This is admittedly a bit brittle. Even the type check between the formatted string and the expected types is done at runtime (but newer D versions have a compile-time version of readf for type-checking the format string). An error here can cause exceptions at runtime. Other than this, the only new technique here is that I actually wrote a loop to parse the program file:

auto parseInstructions(File f) {
    Instruction[char] instructions;
    while(!f.eof) {
        char state;
        f.readf("In state %s:\n", &state);
        f.readln; // "If the current value is 0:"
        auto if0 = f.parseBranch;
        f.readln; // "If the current value is 1:"
        auto if1 = f.parseBranch;
        f.readln; // Blank line
        instructions[state] = Instruction(if0, if1);
    }
    return instructions;
}

A small comfort here is that checking for eof in the loop condition actually works. This is always subtly wrong in C++ and I can never remember why. What’s left of the problem is absolutely routine D by now: associative arrays, UFCS, foreach loops, standard library utilities for summing and iterating and so forth. A few of my favourite things.

Concluding remarks

The best part is that my code was also fast! I was comparing my solutions above with someone else who was doing their Advent of Code in C. I could routinely match his execution speed on the problems where we bothered to compare, whenever we wrote similar algorithms. I’m eager to see what D can do when faced with some real number-crunching.

After all this, I have come to appreciate D more, and also to see some of its weak points. I think I have already raved enough about how much I like its functional style, its standard library, its type-checking, and its compile-time computation. I also ran into a few bugs and deprecated features, and observed some questionable language design choices. Not once did I notice the garbage collector. It was lots of fun. Merry belated Christmas!

August 29, 2017

Enrico Bertino

Summary of work done during GSoC

GSoC17 is at its end and I want to thank my mentors and the Octave community for giving me the opportunity to participate in this unique experience. During this Google Summer of Code, my goal was to implement from scratch a Convolutional Neural Networks package for GNU Octave. It will be integrated with the already existing nnet package. This was a very interesting project and a stimulating experience, both for the implemented code and for the theoretical basis behind the algorithms involved. One part has been implemented in Octave and another part in Python using the Tensorflow API.
Code repository

All the code implemented during these months can be found in my public repository (my username: cittiberto, bookmark: enrico). Since I implemented a completely new part of the package, I pushed the entire project in three commits, and I am waiting for the community’s approval before preparing a PR for the official package [1].

Summary

The first commit (ade115a, [2]) contains the layers. There is a class for each layer, with a corresponding function which calls the constructor. All the layers inherit from a Layer class which lets the user create a concatenation of layers, that is, the network architecture. Layers have several parameters, for which I have guaranteed compatibility with Matlab [3].

The second commit (479ecc5, [4]) is about the Python part, including an init file checking the Tensorflow installation. I implemented a Python module, TFintefrace, which includes:

• layer.py: an abstract class for layer inheritance
• layers/layers.py: the layer classes that are used to add the right layer to the TF graph
• dataset.py: a class for managing the dataset inputs
• trainNetwork.py: the core class, which initiates the graph and the session, and performs the training and the predictions
• deepdream.py: a version of [5] for the deepdream implementation (it has to be completed)

The third commit (e7201d8, [6]) includes:

• trainingOptions: all the options for the training. Up to now, the only optimizer available is stochastic gradient descent with momentum (sgdm), implemented in the class TrainingOptionsSGDM.
• trainNetwork: passing the data, the architecture, and the options, this function performs the training and returns a SeriesNetwork object.
• SeriesNetwork: the class that contains the trained network, including the Tensorflow graph and session. This has three methods:
• predict: predicting scores for regression problems
• classify: predicting labels for classification problems
• activations: getting the output of a specific layer of the architecture

Goals not met

I did not manage to implement some features because of the lack of time due to bug fixing in the last period. The problem was the considerable time spent testing the algorithms (because of the different random generators in Matlab, Octave, and Python/Tensorflow). I will work in the next weeks to implement the missing features, and I plan to continue maintaining this package to keep it up to date with both new Tensorflow versions and new Matlab features.

Function                    Missing features
activations                 OutputAs (for changing output format)
imageInputLayer             DataAugmentation and Normalization
trainNetwork                Accepted inputs: imds or tbl
trainNetwork                .mat checkpoints
trainNetwork                ExecutionEnvironment: 'multi-gpu' and 'parallel'
ClassificationOutputLayer   classnames
TrainingOptions             WorkerLoad and OutputFcn
DeepDreamImages             Generalization to any network and AlexNet example

Tutorial for testing the package

1. Install the Python Tensorflow API (as explained in [4])
2. Install Pytave (following these instructions [5])
3. Install the nnet package (in Octave: install [6] and load [7])
4. Check the package with make check PYTAVE="pytave/dir/"
5. Open Octave, add the Pytave dir to the path, and run your first network:

### TRAINING ###
# Load the training set
[XTrain,TTrain] = digitTrain4DArrayData();

# Define the layers
layers = [imageInputLayer([28 28 1]);
          convolution2dLayer(5,20);
          reluLayer();
          maxPooling2dLayer(2,'Stride',2);
          fullyConnectedLayer(10);
          softmaxLayer();
          classificationLayer()];

# Define the training options
options = trainingOptions('sgdm', 'MaxEpochs', 15, 'InitialLearnRate', 0.04);

# Train the network
net = trainNetwork(XTrain,TTrain,layers,options);

### TESTING ###
# Load the testing set
[XTest,TTest] = digitTest4DArrayData();

# Predict the new labels
YTestPred = classify(net,XTest);

Future improvements

• Manage the session saving
• Save the checkpoints as .mat files and not as TF checkpoints
• Optimize array passing via Pytave
• Categorical variables for classification problems

Links

August 28, 2017

Joel Dahne

Final Report

This is the final report on my work with the interval package during Google Summer of Code 2017. This whole blog has been dedicated to the project, and by reading all the posts you can follow my work from beginning to end. The work has been challenging and extremely fun! I have learned a lot about interval arithmetic, the Octave and Octave-Forge projects, and also how to contribute to open source in general. I have found the whole Octave community to be very helpful, and I especially want to thank my mentor, Oliver Heimlich, and co-mentor, Kai Torben Ohlhus, for helping me during the project. Here I will give a small introduction to Octave and the interval package for new readers, a summary of how the work has gone, and instructions for running the code I have contributed.

Octave and the Interval Package

Octave, or GNU Octave, is a free program for scientific computing. It is very similar to Matlab and its syntax is largely compatible with it. Octave comes with a large set of core functionality but can also be extended with packages from Octave-Forge.
These add new functionality, for example image processing, fuzzy logic or more statistic functions. One of those packages is the interval package which allows you to compute with interval arithmetic in Octave. Summary of the Work The goal with the project was to improve the Octave-forge interval package by implementing support for creating, and working with, N-dimensional arrays of intervals. The work has gone very well and we have just released version 3.0.0 of the interval package incorporating all the contributions I have made during the project. The package now has full support for working with N-dimensional arrays in the same way you do with ordinary floating point numbers in Octave. In addition I have also fixed some bugs not directly related to N-dimensional arrays, see for example bug #51783 and #51283. During the project I have made a total of 108 commits. I have made changes to 332 of the packages 666 files. Some of these changes have only been changes in coding style or (mainly) automated adding of tests. Not counting these I have, manually, made changes to 110 files. If you want to take a look at all the commits I have contributed with the best way is to download the repository after which you can see all the commits from GSoC with hg log -u joeldahne@hotmail.com -d "2017-06-01 to 2017-08-29" Unfortunately I have not found a good way of isolating commits from a specific period and author on sourceforge where the package is hosted. Instead you can find a list of all commits at the end of this blog post. The NEWS-file from the release of version 3.0.0 is also a pretty good overview of what I have done. While not all of the changes are a result of GSoC quite a lot of them are. Running the Code As mentioned above we have just released version 3.0.0 of the interval package. With the new release it is very easy to test the newly added functionality. 
If you already have Octave installed, the easiest way to install the package is with the command "pkg install -forge interval". This will install the latest release of the package; at the time of writing this is 3.0.0, but that will of course change in the future. You can also download version 3.0.0 directly from Octave-Forge. If you want, you can also download the source code from the official repository and test it with "make run" or install it with "make install". To download the repository, update to version 3.0.0 and run Octave with the package on Linux, use the following:

    hg clone http://hg.code.sf.net/p/octave/interval octave-interval
    cd octave-interval
    hg update release-3.0.0
    make run

A Package for Taylor Arithmetic

The task took less time than planned, so I had time to work on a project that builds on my previous work. I started a project for Taylor arithmetic; you can read my blog post about it. I created a proof-of-concept implementation as part of my application for Google Summer of Code and I have now started to turn that into a real package. The repository can be found here. It is still far from complete, but my goal is to eventually add it as a package at Octave-Forge. How that goes depends mainly on how much time I have to spend on it during the following semesters. If you want to run the code as it is now you can pull the repository and then run it with "make run"; this requires that Octave and version 3.0.0 (or higher) of the interval package are installed.
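The defining feature of both packages is that every operation rounds outward, so the computed enclosure is guaranteed to contain the exact result. As a rough illustration of that idea, here is a minimal Python sketch (not the package's Octave/MPFR implementation; `iv_add` is a hypothetical name):

```python
import math

def iv_add(a, b):
    """Add two intervals (given as (lo, hi) tuples), widening the result
    by one ulp in each direction so it is guaranteed to contain the
    exact sum.  The real interval package gets tight bounds from MPFR's
    directed rounding; nextafter is a crude but always-safe stand-in."""
    lo = a[0] + b[0]
    hi = a[1] + b[1]
    return (math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf))

x = (0.1, 0.1)
y = (0.2, 0.2)
lo, hi = iv_add(x, y)
# The real number 0.3 lies inside [lo, hi], even though the floating
# point sum 0.1 + 0.2 is not exactly 0.3.
```

math.nextafter needs Python 3.9 or later. Widening both endpoints unconditionally is wasteful compared with directed rounding, but it is sound, which is the essential contract of interval arithmetic.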
List of Commits

Here is a list of all 108 commits I have made to the interval package.

• https://sourceforge.net/p/octave/interval/ci/68eb1b3281c9 summary: Added Swedish translation for package metadata
• https://sourceforge.net/p/octave/interval/ci/52d6a2565ed2 summary: @infsupdec/factorial.m: Fix decoration (bug #51783)
• https://sourceforge.net/p/octave/interval/ci/cf97d56a8e2a summary: mpfr_function_d.cc: Cast int to octave_idx_type
• https://sourceforge.net/p/octave/interval/ci/5ed7880c917f summary: maint: Fix input to source
• https://sourceforge.net/p/octave/interval/ci/637c532ea650 summary: @infsupdec/dot.m: Fix decoration on empty input
• https://sourceforge.net/p/octave/interval/ci/51425fd67692 summary: @infsup/postpad.m, @infsup/prepad.m, @infsupdec/postpad.m, @infsupdec/prepad.m: Corrected identification of dimension for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/b4e7f1546e36 summary: mpfr_function_d.cc, mpfr_vector_dot_d.cc: Fixed bug when broadcasting with one size equal to zero
• https://sourceforge.net/p/octave/interval/ci/02db5199932c summary: doc: NEWS.texinfo: Info about vectorization for nthroot and pownrev
• https://sourceforge.net/p/octave/interval/ci/197d033fc878 summary: @infsup/pownrev.m, @infsupdec/pownrev.m: Support for vectorization of p
• https://sourceforge.net/p/octave/interval/ci/7f8d1945264c summary: @infsup/nthroot.m, @infsupdec/nthroot.m, mpfr_function_d.cc: Support for vectorization of n
• https://sourceforge.net/p/octave/interval/ci/e841c37383d6 summary: doc: NEWS.texinfo: Summarized recent changes from GSoC
• https://sourceforge.net/p/octave/interval/ci/bc3e9523d42a summary: mpfr_vector_dot_d.cc: Fixed bug when broadcasting with one size equal to zero
• https://sourceforge.net/p/octave/interval/ci/0160ff2c2134 summary: doc: examples.texinfo: Updated example for the latest Symbolic package version
• https://sourceforge.net/p/octave/interval/ci/f37e1074c3ce summary: @infsup/*.m, @infsupdec/*.m: Added missing N-dimensional versions of tests
• https://sourceforge.net/p/octave/interval/ci/eb6b9c01bf6a summary: @infsup/*.m, @infsupdec/*.m: N-dimensional versions of all ternary tests
• https://sourceforge.net/p/octave/interval/ci/de71e0ad0a6e summary: @infsup/*.m, @infsupdec/*.m: N-dimensional versions of all binary tests
• https://sourceforge.net/p/octave/interval/ci/9440d748b4aa summary: @infsup/*.m, @infsupdec/*.m: N-dimensional version of all unary tests
• https://sourceforge.net/p/octave/interval/ci/cb5b7b824400 summary: @infsup/powrev2.m: Fixed bug when called with vector arguments
• https://sourceforge.net/p/octave/interval/ci/a3e8fdb85b97 summary: @infsup/pownrev.m, @infsupdec/pownrev.m: Reworked vectorization test
• https://sourceforge.net/p/octave/interval/ci/aae143789b81 summary: @infsup/pown.m: Added support for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/f05814665570 summary: @infsup/nthroot.m: Reworked vectorization test
• https://sourceforge.net/p/octave/interval/ci/c4c27574e331 summary: @infsup/nthroot.m, @infsupdec/nthroot.m: Clarified that N must be scalar
• https://sourceforge.net/p/octave/interval/ci/3bfd4e7e6600 summary: @infsup/pow.m, @infsupdec/pow.m: Fixed bug when called with vector arguments
• https://sourceforge.net/p/octave/interval/ci/e013c55a4e2a summary: @infsup/overlap.m, @infsupdec/overlap.m: Fixed formatting of vector test.
• https://sourceforge.net/p/octave/interval/ci/d700a1ad4855 summary: doc: Modified a test so that it now passes
• https://sourceforge.net/p/octave/interval/ci/2404686b07fb summary: doc: Fixed formatting of example
• https://sourceforge.net/p/octave/interval/ci/c078807c6b32 summary: doc: SKIP an example that always fail in the doc-test
• https://sourceforge.net/p/octave/interval/ci/598dea2b8eb8 summary: doc: Fixed missed ending of example in Getting Started
• https://sourceforge.net/p/octave/interval/ci/66ca24edfba4 summary: Updated coding style for all infsupdec-class functions
• https://sourceforge.net/p/octave/interval/ci/4c629b9000fc summary: Updated coding style for all infsup-class functions
• https://sourceforge.net/p/octave/interval/ci/3898a7ad5d93 summary: Updated coding style for all non-class functions
• https://sourceforge.net/p/octave/interval/ci/f911db83df14 summary: @infsupdec/dot.m: Fixed wrong size of decoration when called with two empty matrices
• https://sourceforge.net/p/octave/interval/ci/8e6a9e37ee40 summary: Small updates to documentation and comments for a lot of function to account for the support of N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/a2ef249d54ae summary: doc: A small update to Examples, the interval Newton method can only find zeros inside the initial interval
• https://sourceforge.net/p/octave/interval/ci/76431745772f summary: doc: Updates to Getting Started, mainly how to create N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/9821eadca12b summary: doc: Small updates to Preface regarding N-dimensional arrays and fixed one link
• https://sourceforge.net/p/octave/interval/ci/0f1aadf5f13a summary: ctc_intersect.m, ctc_union.m: Fixed bugs when used for vectorization and when called with 0 or 1 output arguments
• https://sourceforge.net/p/octave/interval/ci/0e01dc19dc75 summary: @infsup/sumsq.m: Updated to use the new functionality of dot.m
• https://sourceforge.net/p/octave/interval/ci/2fba2056ed31 summary: @infsup/dot.m, @infsupdec/dot.m, mpfr_vector_dot_d.cc: Added support for N-dimensional vectors. Moved all vectorization to the oct-file. Small changes to functionality to mimic how the sum function works.
• https://sourceforge.net/p/octave/interval/ci/05fc90112ea9 summary: ctc_intersect.m, ctc_union.m: Added support for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/2a4d1e9fa43e summary: @infsup/fsolve.m: Added support for N-dimensional arrays. Fixed problem with the function in the example. Improved performance when creating the cell used in vectorization.
• https://sourceforge.net/p/octave/interval/ci/7a96c346225a summary: @infsup/disp.m: Fixed wrong enumeration of submatrices
• https://sourceforge.net/p/octave/interval/ci/933117890e45 summary: Fixed typo in NEWS.texinfo
• https://sourceforge.net/p/octave/interval/ci/67734688adb1 summary: @infsup/diag.m: Added description of the previous bug fix in the NEWS file
• https://sourceforge.net/p/octave/interval/ci/07ebc81867cd summary: @infsup/diag.m: Fixed error when called with more than 1 argument
• https://sourceforge.net/p/octave/interval/ci/554c34fb3246 summary: @infsup/meshgrid.m, @infsupdec/meshgrid.m: Removed these functions, now falls back on standard implementation, also updated index
• https://sourceforge.net/p/octave/interval/ci/4bad2e7451f5 summary: @infsup/plot.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/5e0100cdc25f summary: @infsup/plot3.m: Small change to allow for N-dimensional arrays as input
• https://sourceforge.net/p/octave/interval/ci/695a223ccbb7 summary: @infsupdec/prod.m: Added support for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/55f570c13f01 summary: @infsup/prod.m: Added support for N-dimensional arrays. Removed short circuit in simple cases.
• https://sourceforge.net/p/octave/interval/ci/445d7e5150aa summary: @infsup/sum.m, @infsupdec/sum.m, mpfr_vector_sum_d.cc: Added support for N-dimensional vectors. Moved all vectorization to the oct-file. Small changes to functionality to mimic Octave's standard sum function.
• https://sourceforge.net/p/octave/interval/ci/f27ad212735e summary: @infsup/fminsearch.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/19724b3f581e summary: mpfr_function_d.cc: Finalized support for N-dimensional arrays with binary functions and added support for it with ternary functions.
• https://sourceforge.net/p/octave/interval/ci/023e2788e445 summary: midrad.m: Added tests for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/617484113019 summary: @infsupdec/infsupdec.m: Added full support for creating N-dimensional arrays and added tests
• https://sourceforge.net/p/octave/interval/ci/f1c28a3d48b7 summary: @infsup/subset.m, @infsupdec/subset.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/8e9990e3ed13 summary: @infsup/strictsubset.m, @infsupdec/strictsubset.m: Fixed coding style and updated documentation
• https://sourceforge.net/p/octave/interval/ci/5ca9b26b580d summary: @infsup/strictprecedes.m, @infsupdec/strictprecedes.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/1c76721003f1 summary: @infsup/sdist.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/e044dea7c820 summary: @infsup/precedes.m, @infsupdec/precedes.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/d48ca9206299 summary: @infsup/overlap.m, @infsupdec/overlap.m: Fixed coding style and updated documentation
• https://sourceforge.net/p/octave/interval/ci/1ad6b47f9335 summary: @infsup/issingleton.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/11dccbcfd97e summary: @infsup/ismember.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/ade5da12cf88 summary: @infsup/isentire.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/9fc874b71533 summary: @infsup/isempty.m, @infsupdec/isempty.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/0c5c4eaf263b summary: @infsup/iscommoninterval.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/c980da83b634 summary: @infsup/interior.m, @infsupdec/interior.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/f67d78651aee summary: @infsup/idist.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/70a389bd59f5 summary: @infsup/hdist.m: Fixed coding style and updated documentation.
• https://sourceforge.net/p/octave/interval/ci/e34401d2a6d6 summary: @infsup/sin.m, @infsupdec/sin.m: Added workaround for bug #51283
• https://sourceforge.net/p/octave/interval/ci/fea4c6516101 summary: @infsup/gt.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/ce973432d240 summary: @infsup/ge.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/a435986d4b0c summary: @infsup/lt.m, @infsupdec/lt.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/23fa89e3c461 summary: @infsup/le.m, @infsupdec/le.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/730397b9e339 summary: @infsup/disjoint.m, @infsupdec/disjoint.m: Updated documentation
• https://sourceforge.net/p/octave/interval/ci/682ad849fc98 summary: mpfr_function_d.cc: Added support for N-dimensional arrays for unary functions. Also temporary support for binary functions.
• https://sourceforge.net/p/octave/interval/ci/9661e0256c51 summary: crlibm_function.cc: Added support for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/0ec3d29c0779 summary: @infsup/infsup.m: Fixed documentation and added missing line continuation
• https://sourceforge.net/p/octave/interval/ci/ebb00e763a46 summary: @infsup/disp.m: Fixed documentation
• https://sourceforge.net/p/octave/interval/ci/1ab2749da374 summary: @infsup/size.m: Fixed documentation
• https://sourceforge.net/p/octave/interval/ci/507ca8478b72 summary: @infsup/size.m: Fixes to the documentation
• https://sourceforge.net/p/octave/interval/ci/667da39afece summary: nai.m: Small fix to one of the tests
• https://sourceforge.net/p/octave/interval/ci/3eb13f91065a summary: hull.m: Fixes according to Olivers review
• https://sourceforge.net/p/octave/interval/ci/20d784a6605d summary: @infsup/display.m: Vectorized loop
• https://sourceforge.net/p/octave/interval/ci/fab7aa26410f summary: @infsup/disp.m: Fixes according to Olivers review, mainly details in the output
• https://sourceforge.net/p/octave/interval/ci/4959566545db summary: @infsup/infsup.m: Updated documentation and added test for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/b25fbf0ad324 summary: @infsup/infsup.m: Fixed coding style
• https://sourceforge.net/p/octave/interval/ci/486a73046d5e summary: @infsup/disp.m: Updated documentation and added more tests for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/b6a0435da31f summary: exacttointerval.m: Updated documentation and added tests for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/bdad0e6da132 summary: @infsup/intervaltotext.m, @infsupdec/intervaltotext.m: Updated documentation and added tests for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/55a3e708aef4 summary: @infsup/intervaltotext.m: Fixed coding style
• https://sourceforge.net/p/octave/interval/ci/536b0c5023ee summary: @infsup/subsref.m, @infsupdec/subsref.m: Added tests for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/c9654a4dcd8d summary: @infsup/size.m: Added support for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/636709d06194 summary: @infsup/end.m: Added support for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/e6451e037120 summary: nai.m: Added support for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/837cccf1d627 summary: @infsup/resize.m, @infsupdec/resize.m: Added support for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/11cb9006b9ea summary: @infsup/reshape.m, @infsupdec/reshape.m: Added support for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/c276af6c42ae summary: @infsup/prepad.m, @infsupdec/prepad.m: Added small parts to the documentation and tests for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/c92c829dc946 summary: @infsup/postpad.m, @infsupdec/postpad.m: Added small parts to the documentation and tests for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/0f8f6864123e summary: @infsup/meshgrid.m, @infsupdec/meshgrid.m: Added support for outputting 3-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/b7faafef6030 summary: @infsup/cat.m, @infsupdec/cat.m: Added support for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/d416a17a6d3d summary: hull.m: Added support for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/031831f6bdfd summary: empty.m, entire.m: Added support for N-dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/2816b68b83c4 summary: @infsup/display.m: Added support for displaying high dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/79c8dfa8ac54 summary: @infsup/disp.m: Added support for displaying high dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/cc87924e52ac summary: @infsup/disp.m: Fixed coding style
• https://sourceforge.net/p/octave/interval/ci/29a6b4ecda2a summary: @infsupdec/infsupdec.m: Temporary fix for creating high dimensional arrays
• https://sourceforge.net/p/octave/interval/ci/039abcf7623d summary: @infsupdec/infsupdec.m: Fixed coding style

Michele Ginesi

Final Resume

Summary

During the GSoC I worked on different special functions that needed to be improved or implemented from scratch. Discussing with my mentors and the community, we decided that my work should be pushed to a copy of the source code of Octave on my repository [1], and that I should work with a different bookmark for each function. When different functions happened to be related (e.g. gammainc and gammaincinv), I worked on them on the same bookmark. I now present a summary and the bookmarks related to the functions.

Incomplete gamma function

bookmark: gammainc
first commit: d1e03faf080b
last commit: 107dc1d24c1b
added files: /libinterp/corefcn/__gammainc_lentz__.cc, /scripts/specfun/gammainc.m, /scripts/specfun/gammaincinv.m
removed files: /libinterp/corefcn/gammainc.cc, /liboctave/external/slatec-fn/dgami.f, /liboctave/external/slatec-fn/dgamit.f, /liboctave/external/slatec-fn/gami.f, /liboctave/external/slatec-fn/gamit.f, /liboctave/external/slatec-fn/xdgami.f, /liboctave/external/slatec-fn/xdgamit.f, /liboctave/external/slatec-fn/xgmainc.f, /liboctave/external/slatec-fn/xsgmainc.f
modified files: NEWS, /doc/interpreter/arith.txi, /libinterp/corefcn/module.mk, /liboctave/external/slatec-fn/module.mk, /liboctave/numeric/lo-specfun.cc, /scripts/specfun/module.mk

Summary of the work

On this bookmark I worked on the incomplete gamma function and its inverse. The incomplete gamma function gammainc had both missing features (the "scaled" options were missing) and problems of inaccurate results (see bug #47800). Part of the work had already been done by Marco and Nir; I had to finish it.
We decided to implement it as a single .m file (gammainc.m) which calls (for some inputs) a subfunction written in C++ (__gammainc_lentz__.cc). The inverse of the incomplete gamma function was missing in Octave (see bug #48036). I implemented it as a single .m file (gammaincinv.m) which uses a Newton method.

Bessel functions

bookmark: bessel
first commit: aef0656026cc
last commit: e9468092daf9
modified files: /liboctave/external/amos/README, /liboctave/external/amos/cbesh.f, /liboctave/external/amos/cbesi.f, /liboctave/external/amos/cbesj.f, /liboctave/external/amos/cbesk.f, /liboctave/external/amos/zbesh.f, /liboctave/external/amos/zbesi.f, /liboctave/external/amos/zbesj.f, /liboctave/external/amos/zbesk.f, /liboctave/numeric/lo-specfun.cc, /scripts/specfun/bessel.m

Summary of the work

On this bookmark I worked on the Bessel functions. There was a bug reporting NaN as output when the argument $x$ was too large in magnitude (see bug #48316). The problem was caused by the Amos library, which refuses to compute the output in such cases. I started "unlocking" this library, so that the output is computed even when the argument is over the limit set by the library. Then I compared the results with other libraries (e.g. Cephes [2], the GNU Scientific Library [3] and the C++ special functions library [4]) and some implementations I made. In the end, I discovered that the "unlocked" Amos routines were the best ones to use, so we decided to keep them (in the "unlocked" form), modifying the error variable to explain the loss of accuracy.
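As an aside, the Newton iteration described above for gammaincinv can be sketched in a few lines. The following is an illustrative Python version, not the actual Octave code: the simple power series used here for the regularized lower incomplete gamma function stands in for the more careful series/continued-fraction switching done by gammainc.m and __gammainc_lentz__.cc.

```python
import math

def gammainc_lower(x, a, max_terms=200):
    """Regularized lower incomplete gamma P(a, x) via its power series:
    P(a, x) = x^a e^{-x} / Gamma(a) * sum_n x^n / (a (a+1) ... (a+n)).
    Adequate for moderate x; a real implementation switches methods."""
    if x <= 0.0:
        return 0.0
    s = term = 1.0 / a
    for n in range(1, max_terms):
        term *= x / (a + n)
        s += term
        if term < 1e-17 * s:
            break
    return s * math.exp(a * math.log(x) - x - math.lgamma(a))

def gammaincinv(y, a, tol=1e-12):
    """Solve P(a, x) = y for x by Newton's method.  The derivative of
    P(a, .) is the gamma density x^{a-1} e^{-x} / Gamma(a), so no
    numerical differentiation is needed."""
    x = a  # crude initial guess
    for _ in range(100):
        f = gammainc_lower(x, a) - y
        df = math.exp((a - 1.0) * math.log(x) - x - math.lgamma(a))
        step = f / df
        x -= step
        if abs(step) < tol * max(abs(x), 1.0):
            break
    return x
```

For well-behaved inputs the iteration converges quadratically; a robust implementation additionally guards against Newton steps that leave the domain (here the iterate could in principle go negative), which this sketch does not.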
Incomplete beta function

bookmark: betainc
first commit: 712a069d2860
last commit: e0c0dd40f096
added files: /libinterp/corefcn/__betainc_lentz__.cc, /scripts/specfun/betainc.m, /scripts/specfun/betaincinv.m
removed files: /libinterp/corefcn/betainc.cc, /liboctave/external/slatec-fn/betai.f, /liboctave/external/slatec-fn/dbetai.f, /liboctave/external/slatec-fn/xbetai.f, /liboctave/external/slatec-fn/xdbetai.f
modified files: /libinterp/corefcn/module.mk, /liboctave/external/slatec-fn/module.mk, /liboctave/numeric/lo-specfun.cc, /liboctave/numeric/lo-specfun.h, /scripts/specfun/module.mk, /scripts/statistics/distributions/betainv.m, /scripts/statistics/distributions/binocdf.m

Summary of the work

On this bookmark I worked on the incomplete beta function and its inverse. The incomplete beta function was missing the "upper" version and had reported bugs on input validation (see bug #34405) and inaccurate results (see bug #51157). We decided to rewrite it from scratch. It is now implemented as a single .m file (betainc.m) which does the input validation; the output is then computed using a continued fraction evaluation, done by a C++ function (__betainc_lentz__.cc). The inverse was present in Octave but missed the "upper" version (since it was missing also in betainc itself). The function is now written as a single .m file (betaincinv.m) which implements a Newton method where the initial guess is computed by a few steps of the bisection method.

Integral functions

bookmark: expint
first commit: 61d533c7d2d8
last commit: d5222cffb1a5
added files: /libinterp/corefcn/__expint_lentz__.cc, /scripts/specfun/cosint.m, /scripts/specfun/sinint.m
modified files: /doc/interpreter/arith.txi, /libinterp/corefcn/module.mk, /scripts/specfun/expint.m, /scripts/specfun/module.mk

Summary of the work

On this bookmark I worked on the exponential integral, sine integral and cosine integral. I had already rewritten the exponential integral before the GSoC.
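The kind of continued-fraction evaluation mentioned above for betainc can be illustrated with the modified Lentz method. This is a Python sketch following the classic incomplete-beta continued fraction (in the spirit of the well-known Numerical Recipes `betacf` routine), not the actual __betainc_lentz__.cc:

```python
import math

def betacf(a, b, x, max_iter=200, eps=3e-15):
    """Evaluate the continued fraction from the classic expansion of
    the incomplete beta function with the modified Lentz method."""
    tiny = 1e-300  # replaces near-zero denominators, as in modified Lentz
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    d = 1.0 / (d if abs(d) > tiny else tiny)
    h = d
    for m in range(1, max_iter):
        m2 = 2 * m
        # even step of the continued fraction
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        d = 1.0 / (d if abs(d) > tiny else tiny)
        c = 1.0 + aa / c
        c = c if abs(c) > tiny else tiny
        h *= d * c
        # odd step
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        d = 1.0 / (d if abs(d) > tiny else tiny)
        c = 1.0 + aa / c
        c = c if abs(c) > tiny else tiny
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def betainc(x, a, b):
    """Regularized incomplete beta I_x(a, b)."""
    if x == 0.0 or x == 1.0:
        return x
    front = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                     + a * math.log(x) + b * math.log(1.0 - x))
    if x < (a + 1.0) / (a + b + 2.0):
        return front * betacf(a, b, x) / a
    return 1.0 - front * betacf(b, a, 1.0 - x) / b
```

The symmetry I_x(a, b) = 1 - I_{1-x}(b, a) is used to keep the continued fraction in its fast-converging region, the same trick a robust betainc implementation relies on.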
Here I just moved the Lentz algorithm to an external C++ function (__expint_lentz__.cc), consistently with gammainc and betainc. I have also modified the exit criterion for the asymptotic expansion using [5] (pages 1-4) as reference. The functions sinint and cosint were present only in the symbolic package of Octave; a numerical implementation was missing in the core. I wrote them as .m files (sinint.m and cosint.m). Both codes use the series expansion near the origin and relations with expint for the other values.

To do

There is still room for improvement for some of the functions I wrote. In particular, gammainc can be improved in accuracy for certain pairs of values, and I would like to make a template version of the various Lentz algorithms in C++ so as to avoid code duplication in the functions. In October I will start a PhD in Computer Science, still here in Verona. This will permit me to remain in contact with my mentor Marco Caliari, so that we can work on these aspects.

[1] https://bitbucket.org/M_Ginesi/octave
[2] http://www.netlib.org/cephes/
[3] https://www.gnu.org/software/gsl/
[4] http://en.cppreference.com/w/cpp/numeric/special_math
[5] N. Bleistein and R.A. Handelsman, "Asymptotic Expansions of Integrals", Dover Publications, 1986.

Piyush Jain

Geometry Package (Octave)

Geometry package: Implement boolean operations on polygons

As part of GSoC 2017, this project is intended to implement a set of boolean operations and supporting functions for acting on polygons. These include the standard set of operations: union/OR, intersection/AND, difference/subtraction, and exclusive-or/XOR. Other things to be implemented are the following functions: polybool, ispolycw, poly2ccw, poly2cw, poly2fv, polyjoin, and polysplit. This repository is a clone (fork) of the Geometry package, which is a part of the Octave Forge project. This fork adds new functions to the official Geometry package as part of GSoC (Google Summer of Code) 2017.
The official Geometry package can be found here

Link to commits on official repo :

Added files and functions

1. /inst/polygons2d/clipPolygon_mrf.m
2. /inst/polygons2d/private/_poly2struct_.m
3. /src/martinez.cpp
4. /src/polygon.cpp
5. /src/utilities.cpp
6. /src/polybool_mrf.cc
7. /inst/polygons2d/funcAliases

Bonding Period

After discussion with my mentor, and keeping my proposal in mind, I tried to understand and list the tasks more specifically and in detail. I first tried to understand the conventions and other practices of the organisation. I then studied the first basic thing I would need throughout this project, i.e. how to create an oct-interface so that C++ code is executable in Octave. My first goal was to explore whether the already implemented mex-interface geometry package could be improved in performance by replacing it with an oct-interface. So, to understand how oct-files work, I started implementing small things in oct-files and got familiar with them.

First Coding Phase

As stated, there is an already implemented Geometry 3.0 package. This has a mex interface for its functions. I tried to compare its performance with an oct-interface version. For benchmarking, I first implemented my own function for polyUnion (using the Clipper library) with an oct-interface (find it here). Then, I compared its performance over a number of different sets of polygons (parametrized over the number of vertices) and recorded the elapsed times with both interfaces. On plotting a graph of number of vertices vs. elapsed time (for oct and mex), the following observations were made:

• The oct interface had better performance than mex.
• For 10000 vertices, the oct interface took about 0.008 seconds while the mex interface took about 0.014 seconds. This means the oct interface took 0.008 seconds / 10^4 vertices, i.e. 8 x 10^-7 seconds per vertex. For mex, it was 14 x 10^-7 seconds per vertex.
• As can be seen from the above data, the oct interface was not more than twice as fast as the mex interface.

From these observations, it was concluded that it is not worthwhile to change the interface from mex to oct, since there was not much improvement in performance. Thus, our next goal became to incorporate new algorithms. After spending a decent amount of time studying the new algorithm and understanding its implementation, I have now started to implement the polybool function. I also compared its performance with the already implemented clipPolygon in the current geometry package. The new algorithm has better performance than the old one. The implementation of boolean operations on polygons, like DIFFERENCE, INTERSECTION, XOR and UNION, with the new algorithm is almost done. A little work on tests, demos and documentation is still needed and I am working on that.

More about the algorithm by F. Martínez, A.J. Rueda, F.R. Feito

The algorithm is very easy to understand, among other things because it can be seen as an extension of the classical plane-sweep algorithm for computing the intersection points between a set of segments. When a new intersection between the edges of polygons is found, the algorithm subdivides the edges at the intersection point. This produces a plane-sweep algorithm with only two kinds of events, left and right endpoints, making the algorithm quite simple. Furthermore, the subdivision of edges provides a simple way of processing degeneracies.

Overall sketch of the approach for computing boolean operations on polygons:

• Subdivide the edges of the polygons at their intersection points.
• Select those subdivided edges that lie inside the other polygon (or those that lie outside, depending on the operation).
• Join the edges selected in step 2 to form the result polygon.

Complexity

Let n be the total number of edges of all the polygons involved in the boolean operation and k be the number of intersections of all the polygon edges. The whole algorithm runs in time O((n + k) log n).

After raw testing of this new algorithm on several cases, I added a few tests in the m-script, and a demo has also been added. The demo can be seen with the command demo clipPolygon_mrf, and the tests can be run with test clipPolygon_mrf.

Second Coding Phase

After implementing the polybool function and checking it, we are planning to include it in the next release of the geometry package. Now, to move forward, I am first importing some functions from last year's GSoC repo and ensuring their MATLAB compatibility. Functions like poly2ccw, poly2cw, joinpolygons, splitPolygons have been created as aliases while ensuring their compatibility with their MathWorks counterparts. After that, some time was invested in understanding the CGAL library and its implementation. The further plan is to sync the matgeom package with the geometry package.

Third Coding Phase

Proceeding towards the next goal, the idea is to devise some way to automate, at least partially, the process of syncing the matgeom and geometry packages. The issue is that when a new release of the geometry package is planned, there are some things which have been updated in matgeom but not in their geometry counterparts (where they exist). So, every time before releasing, much time has to be invested in manually checking each edit and syncing it into geometry. To achieve this, a workaround is first implemented on a dummy repository - dummyMatGeom. Its master branch is matGeom (dummy) and another branch (named geometry) is created which contains the geometry package (dummy).
To test the entire procedure, go to the dummy repository dummyMatGeom, pull both branches into different folders, say "dummyMatGeom" for the master branch and "dummyGeom" for the geometry branch. Then follow the steps explained on the wiki page.

Challenges

• Clearly, the above procedure will only sync the script of the function, not its tests and demo, which are in separate folders in a Matlab package structure. Even if we try to concatenate the corresponding test/demo scripts with the function scripts (as it is done in an Octave package structure), there will be discrepancies, because the notions of writing tests for Octave and Matlab packages are quite different. The way Octave allows tests to work is unique to Octave, as explained here. So, we can't simply concatenate the Matlab test scripts with the functions.
• Git doesn't preserve the original version of the geometry scripts and overwrites the whole file completely. For example:

1. Original file at matGeom (upstream):

    % Bla bla
    % bla bla bla
    function z = blabla (x,y)
    % Help of function
    for i=1:length(x)
      z(i) = x(i)*y(i);
    end

2. Ported to geometry:

    # Copyright - Somebody
    # Bla bla
    # bla bla bla
    # texinfo
    # Help of function
    function z = blabla (x,y)
      z = x .* y;
    endfunction

3. Updated in matGeom:

    % Bla bla
    % bla bla bla
    function z = blabla (x,y)
    % Help of function
    % updated to be more clear
    z = zeros (size(x));
    for i=1:length(x)
      z(i) = x(i)*y(i);
    end

4. After syncing, the expected result is something like this:

    # Copyright - Somebody
    # Bla bla
    # bla bla bla
    # texinfo
    # Help of function
    # updated to be more clear
    function z = blabla (x,y)
      z = zeros (size(x));
      z = x .* y;
    endfunction

But this doesn't happen as expected. Git just finds the files which have been modified and overwrites those files completely.
Considering possible solutions, there are options like git patch or git interactive, which allow us to select specifically the lines we want to commit, but that would not serve our purpose, as it would not be better than syncing manually, file by file. We are looking for a better solution to handle this!

Now, the further idea is to release geometry, and I am getting involved in it to get a feel of how things are done. Thus, as the time concludes, it's time to say goodbye to GSoC'17. It was overall a great learning experience.

August 19, 2017

Michele Ginesi

Integral functions

During the last week I made a few modifications to expint.m and wrote sinint.m and cosint.m from scratch. All the work done can be found on the bookmark expint of my repository.

expint

As I mentioned here, I rewrote expint.m from scratch before the GSoC. During the last week I moved the Lentz algorithm to a .cc function (in order to remain consistent with the implementations of gammainc and betainc) and added a few tests.

sinint

The sinint function is present in the symbolic package, but a numerical implementation is not present in the core. The sine integral is defined as $$\text{Si} (z) = \int_0^z \frac{\sin(t)}{t}\,dt.$$ To compute it we use the series expansion $$\text{Si}(z) = \sum_{n=0}^\infty \frac{(-1)^n z^{2n+1}}{(2n+1)(2n+1)!}$$ when the modulus of the argument is smaller than 2. For bigger values we use the following relation with the exponential integral $$\text{Si}(z) = \frac{1}{2i} (E_1(iz)-E_1(-iz)) + \frac{\pi}{2},\quad |\text{arg}(z)| < \frac{\pi}{2},$$ and the following symmetry relations $$\text{Si}(-z) = -\text{Si}(z),$$ $$\text{Si}(\bar{z}) = \overline{\text{Si}(z)}.$$ The function is written as a single .m file.

cosint

As with the sinint function, cosint is present in the symbolic package, but there is no numerical implementation in the core.
The cosine integral is defined as $$\text{Ci} (z) = -\int_z^\infty \frac{\cos(t)}{t}\,dt.$$ An equivalent definition is $$\text{Ci} (z) = \gamma + \log z + \int_0^z \frac{\cos t - 1}{t}\,dt.$$ To compute it we use the series expansion $$\text{Ci}(z) = \gamma + \log z + \sum_{n=1}^\infty \frac{(-1)^n z^{2n}}{(2n)(2n)!}$$ when the modulus of the argument is smaller than 2. For bigger values we use the following relation with the exponential integral $$\text{Ci}(z) = -\frac{1}{2} \left(E_1(iz)+E_1(-iz)\right),\quad |\arg(z)| < \frac{\pi}{2},$$ and the symmetry relations $$\text{Ci}(-z) = \text{Ci}(z) - i\pi,\quad 0 < \arg(z) \le \pi,$$ $$\text{Ci}(\bar{z}) = \overline{\text{Ci}(z)}.$$ As for sinint, cosint is written as a single .m file.

August 18, 2017

Joel Dahne

The Final Polish

We are preparing to release version 3.0.0 of the interval package and this last week has mainly been about fixing minor bugs related to the release. I mention two of the more interesting bugs here.

Compact Format

We (Oliver) recently added support for "format compact" when printing intervals. It turns out that the way to determine whether compact format is enabled differs very much between different versions of Octave. There are at least three different ways to get the information. In the older releases (< 4.2.0 I believe) you use "get (0, "FormatSpacing")", but there appears to be a bug for versions < 4.0.0 for which this always returns "loose". For the current tip of the development branch you can use "[~, spacing] = format ()" to get the spacing. Finally, in between these two versions you use "__compactformat__ ()". In the end Oliver, probably, found a way to handle this mess, and compact format should now be fully supported for intervals. The function that does this is available here: https://sourceforge.net/p/octave/interval/ci/default/tree/inst/@infsup/private/__loosespacing__.m.
Dot-product of Empty Matrices

When updating "dot" to support N-dimensional arrays I also modified it so that it behaves similarly to Octave's standard implementation. The difference is in how it handles empty input. Previously we had

> x = infsupdec (ones (0, 2));
> dot (x, x)
ans = 0×2 interval matrix

but with the new version we get

> dot (x, x)
ans = 1×2 interval vector

   [0]_com   [0]_com

which is consistent with the standard implementation. In the function we use "min" to compute the decoration for the result. Normally "min (x)" and "dot (x, x)" return results of the same size (the dimension along which they are computed is set to 1), but they handle empty input differently. We have

> x = ones (0, 2);
> dot (x, x)
ans =

   0   0

> min (x)
ans = [](0x2)

This meant that the decoration would be incorrect, since the implementation assumed they always had the same size. Fortunately the solution was very simple: if the dimension along which we are computing the dot-product is zero, the decoration should always be "com", so just adding a check for that was enough. You could argue that "min (ones (0, 2))" should return "[inf, inf]", similarly to how many of the other reductions, like "sum" or "prod", return their unit for empty input. But this would most likely be very confusing for a lot of people, and it is not compatible with how Matlab does it either.

Updates on the Taylor Package

I have also had some time to work on the Taylor package this week. The basic utility functions are now completed and I have started to work on functions for actually computing with Taylor expansions. At the moment only a limited number of functions is implemented. For example, we can calculate the Taylor expansion of order 4 for the function $\frac{e^x + \log(x)}{1 + x}$ at $x = 5$.
## Create a variable of degree 4 and with value 5
> x = taylor (infsupdec (5), 4)
x = [5]_com + [1]_com X + [0]_com X^2 + [0]_com X^3 + [0]_com X^4

## Calculate the function
> (exp (x) + log (x))./(1 + x)
ans = [25.003, 25.004]_com + [20.601, 20.602]_com X + [8.9308, 8.9309]_com X^2 + [2.6345, 2.6346]_com X^3 + [0.59148, 0.59149]_com X^4

August 12, 2017

Michele Ginesi

betaincinv

The inverse of the incomplete beta function was present in Octave, but without the "upper" option (since it was missing in betainc itself). We decided to rewrite it from scratch using Newton's method, as for gammaincinv (see my post on it if you are interested). To make the code numerically more accurate, we decide which version ("lower" or "upper") to invert depending on the inputs. At first we compute the trivial values (0 and 1). Then the remaining terms are divided into two sets: those that will be inverted with the "lower" version, and those that will be inverted with the "upper" one. In both cases, we perform 10 iterations of the bisection method and then apply Newton's method. The implementation (together with the new implementation of betainc) can be found on my repository, bookmark "betainc".

August 11, 2017

Joel Dahne

Improving the Automatic Tests

Oliver and I have been working on improving the test framework used for the interval package. The package shares a large number of tests with other interval packages through an interval test framework that Oliver created. Here is the repository.

Creating the Tests

Previously these tests were separated from the rest of the package and you usually ran them with the help of the Makefile. Now Oliver has moved them to the m-files and you can run them, together with the other tests for the function, with test @infsup/function in Octave. This makes it much easier to test the functions directly. In addition to making the tests easier to use, we also wanted to extend them to test not only scalar evaluation but also vector evaluation.
The test data, input and expected output, is stored in a cell array, and when performing the scalar testing we simply loop over that cell array and run the function for each element. The actual code looks like this (in this case for plus):

%!test
%! # Scalar evaluation
%! testcases = testdata.NoSignal.infsup.add;
%! for testcase = [testcases]'
%!   assert (isequaln (...
%!     plus (testcase.in{1}, testcase.in{2}), ...
%!     testcase.out));
%! endfor

For testing the vector evaluation we simply concatenate the cell array into a vector and give that to the function. Here is what that code looks like:

%! # Vector evaluation
%! testcases = testdata.NoSignal.infsup.add;
%! in1 = vertcat (vertcat (testcases.in){:, 1});
%! in2 = vertcat (vertcat (testcases.in){:, 2});
%! out = vertcat (testcases.out);
%! assert (isequaln (plus (in1, in2), out));

Lastly we also wanted to test evaluation of N-dimensional arrays. This is done by concatenating the data into a vector and then reshaping that vector into an N-dimensional array. But what size should we use for the array? Well, we want at least three dimensions, because otherwise we are not really testing N-dimensional arrays. My solution was to completely factor the length of the vector and use that as the size, testsize = factor (length (in1)), and if the length of the vector has two or fewer factors we add a few elements to the end until we get at least three factors. This is the code for that:

%!test
%! # N-dimensional array evaluation
%! testcases = testdata.NoSignal.infsup.add;
%! in1 = vertcat (vertcat (testcases.in){:, 1});
%! in2 = vertcat (vertcat (testcases.in){:, 2});
%! out = vertcat (testcases.out);
%! # Reshape data
%! i = -1;
%! do
%!   i = i + 1;
%!   testsize = factor (numel (in1) + i);
%! until (numel (testsize) > 2)
%! in1 = reshape ([in1; in1(1:i)], testsize);
%! in2 = reshape ([in2; in2(1:i)], testsize);
%! out = reshape ([out; out(1:i)], testsize);
%!
assert (isequaln (plus (in1, in2), out));

This works very well, except when the number of test cases is too small: if there are fewer than four tests this will fail. But there are only a handful of functions with that few tests, so I fixed those independently.

Running the Tests

Okay, so we have created a bunch of new tests for the package. Do we actually find any new bugs with them? Yes! The function pow.m failed the vector test. The problem? In one place && was used instead of &. For scalar input I believe these behave the same, but they differ for vector input. Both the function nthroot.m and the function pownrev.m failed the vector test. Neither allowed vectorization of the integer parameter. For nthroot.m this matches the standard Octave version, so it should perhaps not be treated as a bug. The function pownrev.m uses nthroot.m internally, so it had the same limitation. This time I would however treat it as a bug, because the function pown.m does allow vectorization of the integer parameter, and if that supports it the reverse function should probably support it as well. So I implemented support for vectorization of the integer parameter for both nthroot.m and pownrev.m, and they now pass the test. No problems were found with the N-dimensional tests that the vector tests did not find. This is a good indication that the support for N-dimensional arrays is at least partly correct. Always good to know!

August 02, 2017

Michele Ginesi

betainc

The betainc function has two bugs reported: #34405 on the input validation and #51157 on inaccurate results. Moreover, it is missing the "upper" version, which is present in MATLAB.

The function

The incomplete beta function ratio is defined as $$I_x(a,b) = \dfrac{B_x(a,b)}{B(a,b)},\quad 0\le x \le 1,\,a>0,\,b>0,$$ where $B(a,b)$ is the classical beta function and $$B_x(a,b)=\int_0^x t^{a-1}(1-t)^{b-1}\,dt.$$ In the "upper" version the integral goes from $x$ to $1$.
To compute this we will use the fact that $$\begin{array}{rcl} I_x(a,b) + I_x^U(a,b) &=& \dfrac{1}{B(a,b)}\left( \int_0^x t^{a-1}(1-t)^{b-1}\,dt + \int_x^1 t^{a-1}(1-t)^{b-1}\,dt\right)\\ &=&\dfrac{1}{B(a,b)}\int_0^1 t^{a-1}(1-t)^{b-1}\,dt\\ &=&\dfrac{B(a,b)}{B(a,b)}\\ &=&1 \end{array}$$ and the relation $$I_x(a,b) + I_{1-x}(b,a) = 1,$$ so that $$I_x^U(a,b) = I_{1-x}(b,a).$$

The implementation

Even if it is possible to obtain a Taylor series representation of the incomplete beta function, it seems not to be used. Indeed, the MATLAB help cites only the continued fraction representation present in "Handbook of Mathematical Functions" by Abramowitz and Stegun: $$I_x(a,b) = \dfrac{x^a(1-x)^b}{aB(a,b)}\left(\dfrac{1}{1+} \dfrac{d_1}{1+} \dfrac{d_2}{1+}\ldots\right)$$ with $$d_{2m+1} = -\dfrac{(a+m)(a+b+m)}{(a+2m)(a+2m+1)}x$$ and $$d_{2m} = \dfrac{m(b-m)}{(a+2m-1)(a+2m)}x,$$ which seems to be the same strategy used by GSL. To be more precise, this continued fraction is computed directly when $$x < \dfrac{a-1}{a+b-2},$$ otherwise the continued fraction is used to compute $I_{1-x}(b,a)$, and then we use the fact that $$I_x(a,b) = 1-I_{1-x}(b,a).$$ In my implementation I use a continued fraction present in "Handbook of Continued Fractions for Special Functions" by Cuyt, Petersen, Verdonk, Waadeland and Jones, which is more complicated but converges in fewer steps: $$\dfrac{B(a,b)I_x(a,b)}{x^a(1-x)^b} = \mathop{\huge{\text{K}}}_{m=1}^\infty \left(\dfrac{\alpha_m(x)}{\beta_m(x)}\right),$$ where $$\begin{array}{rcl} \alpha_1(x) &=&1,\\ \alpha_{m+1}(x) &=&\dfrac{(a+m-1)(a+b+m-1)(b-m)m}{(a+2m-1)^2}x^2,\quad m\geq 1,\\ \beta_{m+1}(x) &=&a + 2m + \left( \dfrac{m(b-m)}{a+2m-1} - \dfrac{(a+m)(a+b+m)}{a+2m+1} \right)x,\quad m\geq 0. \end{array}$$ This is most useful when $$x\leq\dfrac{a}{a+b};$$ thus, the continued fraction is computed directly when this condition is satisfied, while it is used to evaluate $I_{1-x}(b,a)$ otherwise.
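As a sanity check, the Cuyt et al. continued fraction above can be evaluated in plain Python (a rough sketch with a fixed number of terms evaluated by backward recursion; not the package's .cc implementation, which also handles input validation and the switch to $I_{1-x}(b,a)$):

```python
import math

def betainc_cf(x, a, b, terms=60):
    """I_x(a,b) via the continued fraction K(alpha_m / beta_m);
    intended for x <= a / (a + b)."""
    # Evaluate the continued fraction bottom-up (backward recursion).
    tail = 0.0
    for m in range(terms, 0, -1):
        if m == 1:
            alpha = 1.0
        else:
            k = m - 1  # alpha_{k+1} with k >= 1
            alpha = ((a + k - 1) * (a + b + k - 1) * (b - k) * k
                     / (a + 2 * k - 1) ** 2) * x ** 2
        k = m - 1      # beta_{k+1} with k >= 0
        first = k * (b - k) / (a + 2 * k - 1) if k > 0 else 0.0
        beta = a + 2 * k + (first - (a + k) * (a + b + k) / (a + 2 * k + 1)) * x
        tail = alpha / (beta + tail)
    # The fraction equals B(a,b) * I_x(a,b) / (x^a (1-x)^b)
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return tail * x ** a * (1 - x) ** b / math.exp(log_beta)

print(betainc_cf(0.3, 2, 3))  # exact value is 0.3483
```

For $I_{0.3}(2,3)$ the integral can be done by hand, $12(x^2/2 - 2x^3/3 + x^4/4) = 0.3483$ at $x = 0.3$, which the sketch reproduces.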
The function is now written as a .m file, which checks the validity of the inputs and divides them into the values that need to be rescaled and those that don't. Then the continued fraction is computed by an external .cc function. Finally, the .m file recovers $I_x(a,b)$ explicitly.

betaincinv

The next step will be to write the inverse. It was already present in Octave, but is missing the upper version, so it has to be rewritten.

August 01, 2017

Michele Ginesi

Second period resume

Here I present a brief resume of the work done in this second month.

Bessel functions

The topic of this month was Bessel functions. On the bug tracker only a problem regarding the "J" one is reported (see bug 48316), but the same problem is present in every type of Bessel function (they return NaN + NaNi when the argument is too big).

Amos, Cephes, C++ and GSL

Currently, Bessel functions in Octave are computed via the Amos library, written in Fortran. Studying the implementation I discovered that the reported bug follows from the fact that if the input is too large in modulus, the function zbesj.f sets IERR to 4 (IERR is a variable which describes how the algorithm terminated) and sets the output to zero; then lo-specfun.cc returns NaN when IERR=4. Obviously, the same happens for the other Bessel functions. What I initially did was to "unlock" these .f files in such a way that they still report IERR=4 but compute the output anyway, and to modify lo-specfun.cc in order to return the value even if IERR is 4. Then I tested the accuracy, together with other libraries. On the bug report the Cephes library was suggested, so I tested it too, along with the C++ special mathematical functions library and the GSL (GNU Scientific Library). Unfortunately, both these alternatives work worse than Amos. I also tried to study and implement some asymptotic expansions by myself to use in the cases which give inaccurate results, unfortunately without success.
For completeness, the following tables give some results of the tests (ERR for Cephes refers to its total loss of precision error):

         1e09          1e10         1e11         1e12         1e13
Amos     1.6257e-16    0.0000e+00   0.0000e+00   1.3379e-16   1.1905e-16
Cephes   2.825060e-08  ERR          ERR          ERR          ERR
GSL      4.8770e-16    2.2068e-16   4.2553e-16   1.3379e-16   1.1905e-16
C++      2.82506e-08   2.68591e-07  1.55655e-05  8.58396e-07  0.000389545

         1e15          1e20         1e25         1e30
Amos     1.3522e-16    1.6256e-16   15.22810     2.04092
GSL      1.3522e-16    0            15.22810     2.04092

The problem with the double precision

As I explained in my last post, the problem seems to be that there are no efficient algorithms in double precision for arguments so large. In fact, the error made by Amos is quite small if compared with the value computed with SageMath (which I'm using as the reference), but only if we use double precision in Sage too: using more digits, one can see that even the first digit changes. Here are the tests:

sage: bessel_J(1,10^40).n()
7.95201138653707e-21
sage: bessel_J(1,10^40).n(digits=16)
-7.522035532561300e-21
sage: bessel_J(1,10^40).n(digits=20)
-5.7641641376196142462e-21
sage: bessel_J(1,10^40).n(digits=25)
1.620655257788838811378880e-21
sage: bessel_J(1,10^40).n(digits=30)
1.42325562284462027823158499095e-21
sage: bessel_J(1,10^40).n(digits=35)
1.4232556228446202782315849909515310e-21
sage: bessel_J(1,10^40).n(digits=40)
1.423255622844620278231584990951531026335e-21

The value stabilizes only when we use more than 30 digits (twice the number of digits used in double precision).

The decision

Even if we are aware of the fact that the result is not always accurate, for MATLAB compatibility we decided to unlock the Amos functions, since they are still the most accurate and, even more important, the type of the inputs is the same as in MATLAB (while, for example, GSL doesn't accept complex $x$ values). Moreover, in Octave it is possible to obtain the value of IERR as an output, something that is not possible in MATLAB. You can find the work on the bookmark "bessel" of my repository.
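The double-precision limitation can be made concrete without any Bessel code: for large $x$, $J_1(x)$ oscillates with period $2\pi$, so any algorithm needs $x$ modulo $2\pi$, but near $10^{40}$ the gap between consecutive doubles is astronomically larger than $2\pi$, so the phase (and hence even the sign of the result) is undefined at machine precision. A quick check (illustrative; math.ulp requires Python ≥ 3.9):

```python
import math

x = 1e40
gap = math.ulp(x)   # distance from x to the next representable double
print(gap)          # 2**80, about 1.2e24: ~23 orders of magnitude above 2*pi

# The oscillation period of J_1 is 2*pi, far below this resolution, so x and
# math.nextafter(x, math.inf) already correspond to unrelated phases.
assert gap > 2 * math.pi
```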
Betainc

During these last days I also started to implement betainc from scratch; I think it will be ready in the first days of August. Then it will be necessary to rewrite betaincinv as well, since the current version doesn't have the "upper" option. This should not be too difficult: I think we can use a simple Newton's method (as for gammaincinv); the only problem will be, as for gammaincinv, to find good initial guesses.

July 28, 2017

Joel Dahne

A Package for Taylor Arithmetic

In the last blog post I wrote about what was left to do in implementing support for N-dimensional arrays in the interval package. There are still some things to do, but I have had, and most likely will have, some time to work on other things. Before the summer I started to work on a proof-of-concept implementation of Taylor arithmetic in Octave, and this week I have continued to work on that. This blog post will be about that.

A Short Introduction to Taylor Arithmetic

Taylor arithmetic is a way to calculate with truncated Taylor expansions of functions. The main benefit is that it can be used to calculate derivatives of arbitrary order. Taylor expansions or Taylor series (I will use these words interchangeably) are well known and from Wikipedia we have:

The Taylor series of a real or complex valued function $f(x)$ that is infinitely differentiable at a real or complex number $a$ is the power series $$f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots.$$

From the definition it is clear that if we happen to know the coefficients of the Taylor series of $f$ at the point $a$, we can also calculate all derivatives of $f$ at that point by simply multiplying each coefficient with the corresponding factorial. The simplest example of Taylor arithmetic is addition of two Taylor series.
If $f$ has the Taylor series $\sum_{n=0}^\infty (f)_n (x-a)^n$ and $g$ the Taylor series $\sum_{n=0}^\infty (g)_n (x-a)^n$, then $f + g$ will have the Taylor series $$\sum_{n=0}^\infty (f + g)_n (x-a)^n = \sum_{n=0}^\infty ((f)_n + (g)_n)(x-a)^n.$$ If we instead consider the product, $fg$, we get $$\sum_{n=0}^\infty (fg)_n (x-a)^n = \sum_{n=0}^\infty \left(\sum_{i=0}^n (f)_i(g)_{n-i}\right)(x-a)^n.$$ With a bit of work you can find similar formulas for other standard functions. For example, the coefficients, $(e^f)_n$, of the Taylor expansion of $\exp(f)$ are given by $(e^f)_0 = e^{(f)_0}$ and, for $n > 0$, $$(e^f)_n = \frac{1}{n}\sum_{i=0}^{n-1}(n-i)(e^f)_i(f)_{n-i}.$$ When doing the computations on a computer we consider truncated Taylor series: we choose an order and keep only coefficients up to that order. There is also nothing that stops us from using intervals as coefficients, which allows us to get rigorous enclosures of derivatives of functions. For a more complete introduction to Taylor arithmetic in conjunction with interval arithmetic see [1], which was my first encounter with it. For another implementation of it in code take a look at [2].

Current Implementation Status

As mentioned in the last post, my repository can be found here. When I started to write the package, before summer, my main goal was to get something working quickly. Thus I implemented the basic functions needed to do some kind of Taylor arithmetic: a constructor, some helper functions and a few functions like $\exp$ and $\sin$. This last week I have focused on implementing the basic utility functions, for example $size$, and rewriting the constructor. In the process I think I have broken the arithmetic functions; I will fix them later. You can at least create and display Taylor expansions now.
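The coefficient formulas above are easy to prototype with plain floating-point coefficients (a toy sketch in Python, not the package itself, which uses interval coefficients; both inputs are assumed to be truncated to the same order):

```python
import math

def tadd(f, g):
    """Coefficients of f + g (truncated Taylor series as lists)."""
    return [a + b for a, b in zip(f, g)]

def tmul(f, g):
    """Coefficients of f * g: Cauchy product, truncated to len(f) terms."""
    n = len(f)
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(n)]

def texp(f):
    """Coefficients of exp(f) via the recurrence above."""
    n = len(f)
    e = [math.exp(f[0])]
    for k in range(1, n):
        e.append(sum((k - i) * e[i] * f[k - i] for i in range(k)) / k)
    return e

# exp of the identity function x at a = 0: coefficients 1/n!
print(texp([0.0, 1.0, 0.0, 0.0]))  # [1.0, 1.0, 0.5, 0.16666666666666666]
```

Multiplying each coefficient by the corresponding factorial then recovers the derivatives, as described at the start of the post.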
For example, creating a variable $x$ with value 5 of order 3:

> x = taylor (infsupdec (5), 3)
x = [5]_com + [1]_com X + [0]_com X^2 + [0]_com X^3

or a matrix with 4 variables of order 2:

> X = taylor (infsupdec ([1, 2; 3, 4]), 2)
X = 2×2 Taylor matrix of order 2

ans(:,1) =

   [1]_com + [1]_com X + [0]_com X^2
   [3]_com + [1]_com X + [0]_com X^2

ans(:,2) =

   [2]_com + [1]_com X + [0]_com X^2
   [4]_com + [1]_com X + [0]_com X^2

If you want to create a Taylor expansion with explicitly given coefficients, you can do that as well:

> f = taylor (infsupdec ([1; -2; 3; -4]))
f = [1]_com + [-2]_com X + [3]_com X^2 + [-4]_com X^3

This would represent a function $f$ with $f(a) = 1$, $f'(a) = -2$, $f''(a) = 3 \cdot 2! = 6$ and $f'''(a) = -4 \cdot 3! = -24$.

Creating a Package

My goal is to create a full package for Taylor arithmetic along with some functions making use of it. The most important step is of course to create a working implementation, but there are other things to consider as well, and a few things I have not completely understood. Depending on how much time I have next week, I will try to read a bit more and probably ask some questions on the mailing list. Here are at least some of the things I have been thinking about.

Mercurial vs Git? I have understood that most of the Octave Forge packages use Mercurial for version control. I was not familiar with Mercurial before, so the natural choice for me was to use Git. Now I feel I could switch to Mercurial if needed, but I would like to know the potential benefits better; I'm still new to Mercurial so I don't have the full picture. One benefit is of course that it is easier if most packages use the same system, but other than that?

How much work is it? If I were to manage a package for Taylor arithmetic, how much work would it be? This summer I have been working full time with Octave so I have had lots of time, but this will of course not always be the case.
I know it takes time if I want to continue to improve the package, but how much, and what, continuous work is there?

What is needed besides the implementation? From what I have understood, there are a couple of things that should be included in a package besides the actual m-files, for example a Makefile for creating the release, an INDEX file and a CITATION file. I should probably also include some kind of documentation, especially since Taylor arithmetic is not that well known. Is there anything else I need to think about?

What is the process to get a package approved? If I were to apply (whatever that means) for the package to go to Octave Forge, what is the process for that? What is required before it can be approved and what is required after it is approved?

[1] W. Tucker, Validated Numerics, Princeton University Press, 2011.
[2] F. Blomquist, W. Hofschuster, W. Krämer, Real and complex Taylor arithmetic in C-XSC, Preprint 2005/4, Bergische Universität Wuppertal.

July 27, 2017

Enrico Bertino

Deep learning functions

Hi there, the second part of the project is finishing. This period was quite interesting because I had to dive into the theory behind neural networks. In particular [1], [2], [3] and [4] were very useful, and I will sum up some concepts below. On the other hand, coding became more challenging and the focus was on the Python layer, in particular the way to structure the class in order to make everything scalable and generalizable. Summarizing the situation: in the first period I implemented all the Octave classes for the user interface. Those are Matlab compatible and they call some Python functions in a seamless way. On the Python side, the TensorFlow API is used to build the graph of the neural network and perform training, evaluation and prediction. I implemented the three core functions: trainNetwork, SeriesNetwork and trainingOptions.
To do this, I used a Python class in which I initialize an object with the graph of the network and store this object as an attribute of SeriesNetwork. That way, I call the methods of this class from trainNetwork to perform the training, and from predict/classify to perform the predictions. Since it was quite hard to have a clear vision of the situation, I used a Python wrapper (Keras) that allowed me to focus on the integration, "unpack" the problem and go forth "module" by "module". Now I am removing the dependency on the Keras library, using the TensorFlow API directly. The code is in my repo [5]. Since I have already explained in the last posts how I structured the package, in this post I would like to focus on the theoretical basis of the deep learning functions used in the package. In particular, I will present the available layers and the parameters that are available for the training.

Theoretical dive

I. Fundamentals

I want to start with a brief explanation of the perceptron and backpropagation, two key concepts in the artificial neural networks world.

Neurons

Let's start from the perceptron, which is the starting point for understanding neural networks and their components. A perceptron is simply a "node" that takes several binary inputs, $x_1, x_2, \ldots$, and produces a single binary output. The neuron's output, 0 or 1, is determined by whether the linear combination of the inputs $\omega \cdot x = \sum_j \omega_j x_j$ is less than or greater than some threshold value. That is a simple mathematical model, but a very versatile and powerful one, because we can combine many perceptrons and, varying the weights and the threshold, we can get different models.
Moving the threshold to the other side of the inequality and replacing it by what's known as the perceptron's bias, $b = -\text{threshold}$, we can rewrite the output as $$out = \bigg\{ \begin{array}{rl} 0 & \omega \cdot x + b \leq 0 \\ 1 & \omega \cdot x + b > 0 \\ \end{array}$$ Using perceptrons as the artificial neurons of a network, it turns out that we can devise learning algorithms which automatically tune the weights and biases. This tuning happens in response to external stimuli, without direct intervention by a programmer, and this enables "automatic" learning. Speaking about learning algorithms, the idea is simple: we suppose we make a small change in some weight or bias and observe the corresponding change in the output of the network. If a small change in a weight or bias causes only a small change in the output, then we could use this fact to modify the weights and biases to get our network to behave more in the manner we want. The problem is that this isn't what happens when our network contains perceptrons, since a small change to any single perceptron can sometimes cause the output of that perceptron to completely flip, say from 0 to 1. We can overcome this problem by introducing an activation function: instead of the binary output we use a function depending on the weights and bias. The most common is the sigmoid function: $$\sigma (\omega \cdot x + b) = \dfrac{1}{1 + e^{-(\omega \cdot x + b)}}$$

Figure 1. Single neuron

With the smoothness of the activation function $\sigma$ we are able to analytically measure the output changes, since $\Delta out$ is approximately a linear function of the changes $\Delta \omega$ and $\Delta b$: $$\Delta out \approx \sum_j \dfrac{\partial out}{\partial \omega_j} \Delta \omega_j + \dfrac{\partial out}{\partial b} \Delta b$$

Loss function

Let $x$ be a training input and $y(x)$ the desired output. What we'd like is an algorithm which lets us find weights and biases so that the output of the network approximates $y(x)$ for all $x$.
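The perceptron-vs-sigmoid distinction above can be sketched in a few lines of Python (illustrative only; the project itself builds these through the TensorFlow API):

```python
import math

def perceptron(w, x, b):
    """Step-activated neuron: the output flips abruptly between 0 and 1."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def sigmoid_neuron(w, x, b):
    """Sigmoid-activated neuron: the output varies smoothly with w and b."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

w, x = [0.5, -0.5], [1.0, 1.0]
print(perceptron(w, x, 0.1))      # 1
print(sigmoid_neuron(w, x, 0.0))  # 0.5: z = 0 sits exactly on the threshold
```

Nudging b slightly changes the sigmoid output slightly, while the perceptron output can only jump, which is exactly why the smooth activation makes gradient-based learning possible.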
The most commonly used loss function is the mean squared error (MSE): $$L(\omega, b) = \dfrac{1}{2n} \sum_x \| y(x) - out \|^2,$$ where $n$ is the total number of training inputs and $out$ is the vector of outputs from the network when $x$ is the input. To minimize the loss function there are many optimization algorithms. The one we will use is gradient descent, in which every iteration of an epoch is defined as: $$\omega_k \rightarrow \omega_k' = \omega_k - \dfrac{\eta}{m} \sum_j \dfrac{\partial L_{X_j}}{\partial \omega_k}$$ $$b_k \rightarrow b_k' = b_k - \dfrac{\eta}{m} \sum_j \dfrac{\partial L_{X_j}}{\partial b_k}$$ where $m$ is the size of the batch of inputs with which we feed the network and $\eta$ is the learning rate.

Backpropagation

The last concept that I would like to emphasize is backpropagation. Its goal is to compute the partial derivatives $\partial L / \partial \omega$ and $\partial L / \partial b$ of the loss function $L$ with respect to any weight or bias in the network. The reason is that those partial derivatives are computationally heavy and the network training would otherwise be excessively slow. Let $z^l$ be the weighted input to the neurons in layer $l$, which can be viewed as a linear function of the activations of the previous layer: $z^l = \omega^l a^{l-1} + b^l$.
In the fundamental steps of backpropagation we compute:

1) the final error: $$\delta^L = \nabla_a L \odot \sigma' (z^L)$$ The first term measures how fast the loss is changing as a function of every output activation, and the second term measures how fast the activation function is changing at $z^L$;

2) the error of every layer $l$: $$\delta^l = ((\omega^{l+1})^T \delta^{l+1}) \odot \sigma' (z^l)$$

3) the partial derivative of the loss function with respect to any bias in the net: $$\dfrac{\partial L}{\partial b^l_j} = \delta^l_j$$

4) the partial derivative of the loss function with respect to any weight in the net: $$\dfrac{\partial L}{\partial \omega^l_{jk}} = a_k^{l-1} \delta^l_j$$

We can therefore update the weights and the biases with gradient descent and train the network. Since the inputs can be too numerous, we can use only a random sample of them. Stochastic Gradient Descent (SGD) simply does away with the expectation in the update and computes the gradient of the parameters using only a single or a few training examples. In particular, we will use SGD with momentum, a method that helps accelerate SGD in the relevant direction and dampens oscillations. It does this by adding a fraction $\gamma$ of the update vector of the past time step to the current update vector.

II. Layers

Here is a brief explanation of the functions that I am considering in the trainNetwork class.

Convolution2DLayer

The convolution layer is the core building block of a convolutional neural network (CNN) and it does most of the computational heavy lifting. It derives its name from the "convolution" operator. The primary purpose of convolution is to extract features from the input image, preserving the spatial relationship between pixels, by learning image features using small squares of input data.

Figure 2.
Feature extraction with convolution (image taken from https://goo.gl/h7pXxf)

In the example in Fig. 2, the 3×3 matrix is called a 'filter' or 'kernel', and the matrix formed by sliding the filter over the image and computing the dot product is called the 'Convolved Feature' or 'Activation Map' (or the 'Feature Map'). In practice, a CNN learns the values of these filters on its own during the training process (although we still need to specify parameters such as the number of filters, the filter size, the architecture of the network etc. before the training process). The more filters we have, the more image features get extracted and the better our network becomes at recognizing patterns in unseen images. The size of the Feature Map depends on three parameters: the depth (which corresponds to the number of filters we use for the convolution operation), the stride (the number of pixels by which we slide our filter matrix over the input matrix) and the padding (which consists in padding the input matrix with zeros around the border).

ReLULayer

ReLU stands for Rectified Linear Unit and is a non-linear operation: $f(x) = \max(0, x)$. Usually this is applied element-wise to the output of some other function, such as a matrix-vector product. It replaces all negative pixel values in the feature map by zero, with the purpose of introducing non-linearity into our network, since most of the real-world data we would want to learn is non-linear.

FullyConnectedLayer

Neurons in a fully connected layer have full connections to all activations in the previous layer, as in regular neural networks. Hence their activations can be computed with a matrix multiplication followed by a bias offset. In our case, the purpose of the fully connected layer is to use these features for classifying the input image into various classes based on the training dataset.
Apart from classification, adding a fully connected layer is also a cheap way of learning non-linear combinations of the features.

Pooling

It is common to periodically insert a pooling layer in between successive convolution layers. Spatial pooling (also called subsampling or downsampling) reduces the dimensionality of each feature map but retains the most important information. In particular, pooling:

• makes the input representations (feature dimension) smaller and more manageable;
• reduces the number of parameters and computations in the network, therefore controlling overfitting;
• makes the network invariant to small transformations, distortions and translations in the input image;
• helps us arrive at an almost scale-invariant representation of our image.

Spatial pooling can be of different types: max, average, sum etc.

MaxPooling2DLayer

In the case of max pooling, we define a spatial neighborhood (for example, a 2×2 window) and take the largest element from the rectified feature map within that window.

AveragePooling2DLayer

Instead of taking the largest element we could also take the average.

DropoutLayer

Dropout in deep learning works as follows: one or more neural network nodes are switched off once in a while so that they do not interact with the network. With dropout, the learned weights of the nodes become somewhat more insensitive to the weights of the other nodes and learn to decide somewhat more on their own. In general, dropout helps the network to generalize better and increases accuracy, since the influence of a single node is decreased.

SoftmaxLayer

The purpose of the softmax classification layer is simply to transform all the net activations in your final output layer into a series of values that can be interpreted as probabilities.
To do this, the softmax function is applied to the net inputs: $ \phi_{softmax} (z^i) = \dfrac{e^{z^i}}{\sum_{j=0}^k e^{z_j^i}} $

CrossChannelNormalizationLayer

The Local Response Normalization (LRN) layer implements lateral inhibition, which in neurobiology refers to the capacity of an excited neuron to subdue its neighbors. This layer is useful when we are dealing with ReLU neurons, because they have unbounded activations and we need LRN to normalize them. We want to detect high frequency features with a large response. If we normalize around the local neighborhood of the excited neuron, it becomes even more sensitive compared to its neighbors. At the same time, it will dampen responses that are uniformly large in any given local neighborhood: if all the values are large, then normalizing those values will diminish all of them. So basically we want to encourage some kind of inhibition and boost the neurons with relatively larger activations.

III. training options

The training function takes as input a trainingOptions object that contains the parameters for the training. A brief explanation:

SolverName: the optimizer chosen to minimize the loss function. To guarantee Matlab compatibility, only Stochastic Gradient Descent with Momentum ('sgdm') is allowed.

Momentum: parameter for sgdm; it corresponds to the contribution of the previous step to the current iteration.

InitialLearnRate: initial learning rate η for the optimizer.

LearnRateScheduleSettings: the settings regulating the learning rate. It is a struct containing three values; if RateSchedule is set to 'piecewise', the learning rate drops by a factor RateDropFactor every RateDropPeriod epochs.

L2Regularization: regularizers allow applying penalties on layer parameters or layer activity during optimization. This is the factor of the L2 regularization.
MaxEpochs: number of epochs for training.

Verbose: display information about the training every VerboseFrequency iterations.

Shuffle: random shuffle of the data before training if set to 'once'.

CheckpointPath: path for saving the checkpoints.

ExecutionEnvironment: choice of hardware for the training: 'cpu', 'gpu', 'multi-gpu' or 'parallel'. The load is divided between workers on GPUs or CPUs according to the relative division set by WorkerLoad.

OutputFcn: custom output functions to call during training after each iteration, passing a struct containing: current epoch number, current iteration number, TimeSinceStart, TrainingLoss, BaseLearnRate, TrainingAccuracy (or TrainingRMSE for regression), State.

Peace, Enrico

[1] Ian Goodfellow, Yoshua Bengio, Aaron Courville: Deep Learning, MIT Press, http://www.deeplearningbook.org, 2016
[2] http://ufldl.stanford.edu/tutorial/supervised/ConvolutionalNeuralNetwork/
[3] http://web.stanford.edu/class/cs20si/syllabus.html
[4] http://neuralnetworksanddeeplearning.com/
[5] https://bitbucket.org/cittiberto/octave-nnet/src

July 14, 2017 Joel Dahne

Ahead of the Timeline

One of my first posts on this blog was a timeline for my work during the project. Predicting the amount of time something takes is always hard, and often you tend to underestimate the complexity of parts of the work. This time, however, I overestimated the time the work would take. Had my timeline been correct, I would just have started to work on folding functions (or reductions, as they are often called). Instead I have completed the work on them, as well as on the functions related to plotting. In addition I have started to work on the documentation for the package and am checking everything an extra time. In this blog post I will go through what I have done this week, what I think is left to do and a little bit about what I might do if I complete the work on N-dimensional arrays in good time.
This Week

The Dot Function

The $dot$ function was the last function left in which to implement support for N-dimensional arrays. It is very similar to the $sum$ function, so I already had an idea of how to do it. As with $sum$, I moved most of the handling of the vectorization from the m-files to the oct-file, the main reason being improved performance.

The $dot$ function for intervals is actually a bit different from the standard one. First of all, it supports broadcasting, which the standard one does not:

> dot ([1, 2, 3; 4, 5, 6], 5)
error: dot: size of X and Y must match
> dot (infsupdec ([1, 2, 3; 4, 5, 6]), 5)
ans = 1x3 interval vector

   [25]_com   [35]_com   [45]_com

It also treats empty arrays a little differently, see bug #51333:

> dot ([], [])
ans = [](1x0)
> dot (infsupdec ([]), [])
ans = [0]_com

Package Documentation

I have done the minimal required changes to the documentation. That is, I moved support for N-dimensional arrays from Limitations to Features and added some simple examples of how to create N-dimensional arrays.

Searching for Misses

During the work I have tried to update the documentation of all functions to account for the support of N-dimensional arrays, and I have also tried to update some of the comments in the code. But as always, especially when working with a lot of files, you miss things, both in the documentation and in old comments. I did a quick grep for the words "matrix" and "matrices", since they are candidates for being changed to "array". Doing this I found 35 files where I had missed things. It was mainly minor things, comments using "matrix" which I have now changed to "array", but also some documentation which I had forgotten to update.

What is Left?

Package Documentation - Examples

As mentioned above, I have done the minimal required changes to the documentation. It would be very nice to add some more interesting examples using N-dimensional arrays of intervals in a useful way.
Ironically, I have not been able to come up with an interesting example, but I will continue to think about it. If you have an example that you think would be interesting and want to share, please let me know!

Coding Style

As I mentioned in one of the first blog posts, the coding style of the interval package did not follow the standard for Octave. During my work I have adapted all files I have worked with to the Octave coding standard. A lot of the files did not need any changes, so they are still using the old style. It would probably be a good idea to update them as well.

Testing - ITF1788

The interval test framework ITF1788, developed by Oliver, is used to test the correctness of many of the functions in the package. At the moment it tests evaluation of scalars, but in principle it should be no problem to use it for testing vectorization or even broadcasting. Oliver has already started to work on this.

After N-dimensional arrays?

If I continue at this pace I will finish the work on N-dimensional arrays before the project is over. Of course, the things that are left might take longer than expected, they usually do, but there is a chance that I will have time left after everything is done. So what should I do then? There are more things that can be done on the interval package, for example adding more examples to the documentation, however I think I would like to start working on a new package for Taylor arithmetic. Before GSoC I started to implement a proof of concept for Taylor arithmetic in Octave, which can be found here. I would then start to work on implementing a proper version of it, where I would actually make use of N-dimensional interval arrays. If I want to create a package for this I would also need to learn a lot of other things, one of them being how to manage a package on Octave Forge. For the moment I will try to finish my work on N-dimensional arrays. Then I can discuss it with Oliver and see what he thinks about it.
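For readers unfamiliar with Taylor arithmetic: the idea is to compute with truncated vectors of Taylor coefficients, where for instance multiplication becomes a Cauchy product. A minimal, hypothetical sketch (this is not the proof-of-concept linked above, just an illustration of the operation):

```python
def taylor_mul(a, b):
    """Multiply two truncated Taylor series given as coefficient lists
    [c0, c1, ..., cn] via the Cauchy product, truncated to the same order."""
    n = max(len(a), len(b))
    c = [0.0] * n
    for k in range(n):
        c[k] = sum(a[i] * b[k - i]
                   for i in range(k + 1)
                   if i < len(a) and k - i < len(b))
    return c

# (1 + x) * (1 + x) = 1 + 2x + x^2, truncated to order 2
print(taylor_mul([1.0, 1.0, 0.0], [1.0, 1.0, 0.0]))  # [1.0, 2.0, 1.0]
```

In a proper package each coefficient would be an interval rather than a float, which is exactly where N-dimensional interval arrays would come in.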
July 13, 2017 Joel Dahne

Set inversion with fsolve

This week my work has mainly been focused on the interval version of fsolve. I was not sure if and how it could make use of N-dimensional arrays, and to find that out I had to understand the function. In the end it turned out that the only generalization that could be done was trivial and required very few changes. However, I did find some other problems with the function that I have been able to fix. Connected to fsolve are the functions ctc_intersect and ctc_union. They also needed only minor changes to allow N-dimensional input. I will start by giving an introduction to fsolve, ctc_union and ctc_intersect, and then I will describe the changes I have made to them.

Introduction to fsolve

The standard version of fsolve in Octave is used to solve systems of nonlinear equations. That is, given a function $f$ and a starting point $x_0$, it returns a value $x$ such that $f(x)$ is close to zero. The interval version of fsolve does much more than this. It is used to enclose the preimage of a set $Y$ under $f$. Given a domain $X$, a set $Y$ and a function $f$, it returns an enclosure of the set $$f^{-1}(Y) = \{x \in X: f(x) \in Y\}.$$ By letting $Y = \{0\}$ we get similar functionality as the standard fsolve, with the difference that the output is an enclosure of all zeros of the function (compared to one point for which $f$ returns something close to zero).

Example: The Unit Circle

Consider the function $f(x, y) = \sqrt{x^2 + y^2} - 1$, which is zero exactly on the unit circle. Plugging this into the standard fsolve with $(0.5, 0.5)$ as a starting guess we get

> x = fsolve (@(x) f(x(1), x(2)), [0.5, 0.5])
x =

   0.70711   0.70711

which indeed is close to a zero. But we get no information about other zeros.
Using the interval version of fsolve with $X = [-3, 3] \times [-3, 3]$ as starting domain we get

> [x paving] = fsolve (f, infsup ([-3, -3], [3, 3]));
> x
x ⊂ 2×1 interval vector

   [-1.002, +1.002]
   [-1.0079, +1.0079]

Plotting the paving we get a picture which indeed is a good enclosure of the unit circle.

How it works

In its simplest form fsolve uses a simple bisection scheme to find the enclosure. Using interval methods we can find enclosures of images of sets. Given a set $X_0 \subset X$ we have three different possibilities:

• $f(X_0) \subset Y$, in which case we add $X_0$ to the paving;

• $f(X_0) \cap Y = \emptyset$, in which case we discard $X_0$;

• otherwise we bisect $X_0$ and continue on the parts.

By setting a tolerance for when to stop bisecting boxes we get the algorithm to terminate in a finite number of steps.

Contractors

Using bisection is not always very efficient, especially when the domain has many dimensions. One way to speed up the convergence is with what's called contractors. In short, a contractor is a function that takes the set $X_0$ and returns a set $X_0' \subset X_0$ with the property that $f(X_0 \setminus X_0') \cap Y = \emptyset$. It's a way of making $X_0$ smaller without having to bisect it that many times. When you construct a contractor you use the reverse operations defined on intervals. I will not go into how this works; if you are interested you can find more information in the package documentation [1] and in these YouTube videos about Set Inversion Via Interval Analysis (SIVIA) [2]. The functions ctc_union and ctc_intersect are used to combine contractors on sets into contractors on unions or intersections of these sets.

Generalization to N-dimensional arrays

How can fsolve be generalized to N-dimensional arrays? The only natural thing to do is to allow the input and output of $f$ to be N-dimensional arrays. This is also no problem to do.
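The bisection-only scheme described under "How it works" above can be sketched in plain Python for the unit-circle example (a hypothetical illustration, not the package's implementation; since here $Y=\{0\}$ has empty interior, no box is ever classified as fully inside, so the paving consists of the small undecided boxes along the circle):

```python
import math

def sqr_range(lo, hi):
    """Exact range of x^2 over the interval [lo, hi]."""
    if lo <= 0.0 <= hi:
        return 0.0, max(lo * lo, hi * hi)
    return min(lo * lo, hi * hi), max(lo * lo, hi * hi)

def f_range(box):
    """Enclosure of f(x, y) = sqrt(x^2 + y^2) - 1 over a box
    ((x_lo, x_hi), (y_lo, y_hi)); sqrt is monotone, so this is exact."""
    (xlo, xhi), (ylo, yhi) = box
    alo, ahi = sqr_range(xlo, xhi)
    blo, bhi = sqr_range(ylo, yhi)
    return math.sqrt(alo + blo) - 1.0, math.sqrt(ahi + bhi) - 1.0

def sivia(box, tol):
    """Bisection-only SIVIA for f = 0: discard boxes whose image excludes 0,
    bisect the rest until their width is below tol. Returns the paving."""
    paving, stack = [], [box]
    while stack:
        b = stack.pop()
        lo, hi = f_range(b)
        if lo > 0.0 or hi < 0.0:            # image excludes 0: discard
            continue
        (xlo, xhi), (ylo, yhi) = b
        if max(xhi - xlo, yhi - ylo) < tol:  # small undecided box: keep
            paving.append(b)
            continue
        if xhi - xlo >= yhi - ylo:           # bisect the widest direction
            m = 0.5 * (xlo + xhi)
            stack += [((xlo, m), (ylo, yhi)), ((m, xhi), (ylo, yhi))]
        else:
            m = 0.5 * (ylo + yhi)
            stack += [((xlo, xhi), (ylo, m)), ((xlo, xhi), (m, yhi))]
    return paving

paving = sivia(((-3.0, 3.0), (-3.0, 3.0)), 0.1)
# Every kept box lies close to the unit circle.
```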
While mathematically you would probably say that fsolve is used to do set inversion for functions $f: \mathbb{R}^n \to \mathbb{R}^m$, it can of course also be used, for example, on functions $f: \mathbb{R}^{n_1}\times \mathbb{R}^{n_2} \to \mathbb{R}^{m_1}\times \mathbb{R}^{m_2}$. This is however a bit different when using vectorization. When not using vectorization (and not using contractions), fsolve expects that the function takes one argument, which is an array with each element corresponding to a variable. If vectorization is used, it instead assumes that the function takes one argument for each variable. Every argument is then given as a vector with each element corresponding to one value of the variable for which to compute the function. Here we have no use for N-dimensional arrays.

Modifications

The only change in functionality I have made is to allow N-dimensional arrays as input and output when vectorization is not used. This required only minor changes, essentially changing expressions like

max (max (wid (interval)))

to

max (wid (interval)(:))

It was also enough to do these changes in ctc_union and ctc_intersect to have them support N-dimensional arrays. I have made no functional changes for the case when vectorization is used. I have however made an optimization in the construction of the arguments to the function. The arguments are stored in an array, but before being given to the function they need to be split up into the different variables. This is done by creating a cell array with each element being a vector with the values of one of the variables. Previously the construction of this cell array was very inefficient: it split the interval into its lower and upper parts and then called the constructor to create an interval again. Now it copies the intervals into the cell array without having to call the constructor.
This actually seems to have been quite a big improvement: with the old version the unit-circle example from above took around 0.129 seconds, and with the new version it takes about 0.092 seconds. This is of course only one benchmark, but a speed-up of about 40% for this test is promising!

Lastly, I noticed a problem in the example used in the documentation of the function. The function used is

# Solve x1 ^ 2 + x2 ^ 2 = 1 for -3 ≤ x1, x2 ≤ 3 again,
# but now contractions speed up the algorithm.
function [fval, cx1, cx2] = f (y, x1, x2)
  # Forward evaluation
  x1_sqr = x1 .^ 2;
  x2_sqr = x2 .^ 2;
  fval = hypot (x1, x2);
  # Reverse evaluation and contraction
  y = intersect (y, fval);
  # Contract the squares
  x1_sqr = intersect (x1_sqr, y - x2_sqr);
  x2_sqr = intersect (x2_sqr, y - x1_sqr);
  # Contract the parameters
  cx1 = sqrrev (x1_sqr, x1);
  cx2 = sqrrev (x2_sqr, x2);
endfunction

Do you see the problem? I think it took me more than a day to realize that the problems I was having were not caused by a bug in fsolve but by this function computing the wrong thing. The function is supposed to be $f(x_1, x_2) = x_1^2 + x_2^2$, but when calculating the value it calls hypot, which is given by $hypot(x_1, x_2) = \sqrt{x_1^2 + x_2^2}$. For $f(x_1, x_2) = 1$, which is used in the example, this gives the same result, but otherwise it will of course not work.

[1] https://octave.sourceforge.io/interval/package_doc/Parameter-Estimation.html#Parameter-Estimation
[2] https://www.youtube.com/watch?v=kxYh2cXHdNQ

July 11, 2017 Michele Ginesi

GNU Scientific Library

Bessel functions

During the second part of GSoC I have to work on Bessel functions. The bug is related in particular to the Bessel function of the first kind and regards the fact that the function is not computed if the $x$ argument is too big. During this week I studied the actual implementation, i.e. the Amos library, written in Fortran.
The problem is that zbesj.f simply refuses to compute the result when the input argument is too large (the same problem happens for all Bessel functions, both for real and complex arguments). What I did was to "unlock" Amos in such a way that the value is computed in any case (which seems to be the strategy used by MATLAB). Then I compared the relative errors with the relative errors made by the GNU Scientific Library (GSL). First, I would point out some limitations present in GSL:

• The parameter $x$ must be real, while it can be complex in Amos and MATLAB. The same holds for the parameter $\alpha$.

• The class of the output does not adapt to the class of the input; a double is always returned.

Doing some tests, it seems that Amos works better in terms of accuracy (even if we are talking about errors of the same order). I concentrated on values for which Amos usually refuses to compute, since in every other zone of the $x$-$\alpha$ plane it is known that the error is on the order of 1e-15. For $\alpha\in\{-1,0,1\}$ they return the same result, while for other values of $\alpha$, Amos is in general more accurate. Anyway, I would remark that there are, as far as I know, no accurate algorithms in double precision for arguments of very large magnitude. In fact, for such arguments, both Amos and GSL make a relative error on the order of unity. This problem is evident when using SageMath to compute accurate values, e.g.
sage: bessel_J(1,10^40).n()
7.95201138653707e-21
sage: bessel_J(1,10^40).n(digits=16)
-7.522035532561300e-21
sage: bessel_J(1,10^40).n(digits=20)
-5.7641641376196142462e-21
sage: bessel_J(1,10^40).n(digits=25)
1.620655257788838811378880e-21
sage: bessel_J(1,10^40).n(digits=30)
1.42325562284462027823158499095e-21
sage: bessel_J(1,10^40).n(digits=35)
1.4232556228446202782315849909515310e-21
sage: bessel_J(1,10^40).n(digits=40)
1.423255622844620278231584990951531026335e-21

The values "stabilize" only when the number of digits is bigger than 30, far beyond double precision.

Incomplete gamma function

I also had a look at how GSL could be used to improve the incomplete gamma function. It is not possible, however, to rely only on the GSL functions gsl_sf_gamma_inc_P and gsl_sf_gamma_inc_Q, due to their limitations:

• There is no "scaled" option.

• The value of $x$ must be real. This may not be a problem, since even MATLAB does not accept complex values (while the version I worked on does).

• The parameter $x$ must be positive. This is actually a problem for MATLAB compatibility.

• The class of the output does not adapt to the class of the input; a double is always returned.

I tested the accuracy of the GSL functions and they work better when $a\ll1$ and $x\ll1$, so I think I will fix gammainc.m using the algorithm of gsl_sf_gamma_inc_P and gsl_sf_gamma_inc_Q for those values of the input arguments.

Incomplete Beta function

The actual incomplete beta function needs to be replaced. I've already studied the continued fraction expansion (which seems to be the best one to use). In GSL it is implemented in a good way but still presents two limitations:

• It does not adapt to the input class (it always works in double precision).

• There is no "upper" version. This is missing also in the current betainc function, but it is present in MATLAB. Unfortunately, it is not sufficient to compute it as $1 - I_x(a,b)$; it is necessary to find an accurate way to compute it directly.
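For reference, the continued-fraction evaluation mentioned above can be sketched as follows. This is a hypothetical pure-Python version of the classic modified-Lentz scheme with the standard coefficients of the expansion of $I_x(a,b)$ — not the GSL or Octave code:

```python
import math

def betacf(a, b, x, max_iter=200, eps=3e-12):
    """Continued fraction for the regularized incomplete beta function,
    evaluated with the modified Lentz method."""
    tiny = 1e-300
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c, d = 1.0, 1.0 - qab * x / qap
    d = 1.0 / (d if abs(d) >= tiny else tiny)
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))        # even step
        d = 1.0 + aa * d
        d = 1.0 / (d if abs(d) >= tiny else tiny)
        c = 1.0 + aa / c
        c = c if abs(c) >= tiny else tiny
        h *= d * c
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))  # odd step
        d = 1.0 + aa * d
        d = 1.0 / (d if abs(d) >= tiny else tiny)
        c = 1.0 + aa / c
        c = c if abs(c) >= tiny else tiny
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def betainc(x, a, b):
    """Regularized incomplete beta function I_x(a, b) for 0 <= x <= 1."""
    if x in (0.0, 1.0):
        return x
    front = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                     + a * math.log(x) + b * math.log1p(-x))
    # Use the expansion where it converges fast, else the symmetry
    # I_x(a, b) = 1 - I_{1-x}(b, a).
    if x < (a + 1.0) / (a + b + 2.0):
        return front * betacf(a, b, x) / a
    return 1.0 - front * betacf(b, a, 1.0 - x) / b

print(betainc(0.5, 2.0, 2.0))  # ~0.5, since I_0.5(2,2) = 1/2 by symmetry
```

The symmetry switch is exactly why a dedicated "upper" version is needed for accuracy: computing $1 - I_x(a,b)$ in floating point loses digits when $I_x(a,b)$ is close to 1.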
So I will take inspiration from the GSL version of the function, but I think I will write betainc as a single .m file.

July 01, 2017 Michele Ginesi

First period resume

Here I present a brief resume of the work done in this first month.

1st week: Gammainc

The incomplete gamma function presented a series of inaccurate results in the previous implementation, so it was decided to rewrite it from scratch. A big part of the work was done by Marco and Nir (here the discussion). I gave my contribution by fixing some problems in the implementation and by making a functioning commit (thanks to the suggestions given by Carnë during the OctConf in Geneva).

2nd and 3rd week: Gammaincinv

The inverse of the incomplete gamma function was completely missing in Octave (and this was a problem of compatibility with Matlab). Now the function is present, written as a single .m file. The implementation consists of a simple, but efficient, Newton's method.

Last week: Betainc

When I first submitted my timeline, there was only a bug related to betainc, involving input validation. During the first period of GSoC a new bug of the "incorrect result" type emerged. Because of this, my mentor and I decided that it is necessary to rewrite the function, so I used this last week to study efficient algorithms to evaluate it. In particular, it seems that the most efficient way is to use continued fractions, maybe after a rescaling of the function. I'm already working on it, but I will complete it during the first week of August, after finishing the work on Bessel functions.

June 30, 2017 Joel Dahne

One Month In

Now one month of GSoC has passed, and so far everything has gone much better than I expected! According to my timeline this week would have been the first of two where I work on vectorization. Instead I have already mostly finished the vectorization and have started to work on other things.
In this blog post I'll give a summary of what work I have completed and what is left to do. I'll structure it according to how the functions are listed in the $INDEX$ file [1]. The number after each heading is the number of functions in that category. Since this will mainly be a list of which files have been modified and which are left to do, it might not be very interesting if you are not familiar with the structure of the interval package.

Interval constant (3)

All of these have been modified to support N-dimensional arrays.

Interval constructor (5)

All of these have been modified to support N-dimensional arrays.

Interval function (most with tightest accuracy) (63)

Almost all of these functions worked out of the box! At least after the API functions to the MPFR and crlibm libraries were fixed; they are further down in the list. The only function that did not work immediately was $linspace$. Even though this function could be generalized to N-dimensional arrays, the standard Octave function only works for matrices (I think the Matlab version only allows scalars). This means that adding support for N-dimensional vectors in the interval version is not a priority. I might do it later on, but it is not necessary.

Interval matrix operation (16)

Most of the matrix functions do not make sense for N-dimensional arrays. For example, matrix multiplication and matrix inversion only make sense for matrices. However, all of the reduction functions are also here; they include $dot$, $prod$, $sum$, $sumabs$ and $sumsq$. At the moment I have implemented support for N-dimensional arrays for $sum$, $sumabs$ and $prod$. The functions $dot$ and $sumsq$ are not ready; I'm waiting to see what happens with bug #51333 [2] before I continue with that work. Depending on the bug I might also have to modify the behaviour of $sum$, $sumabs$ and $prod$ slightly.

Interval comparison (19)

All of these have been modified to support N-dimensional arrays.
Set operation (7)

All of these functions have been modified to support N-dimensional arrays except one, $mince$. The function $mince$ is an interval version of $linspace$, and the reasoning here is the same as for $linspace$ above.

Interval reverse operation (12)

Like the interval functions above, all of these functions worked out of the box!

Interval numeric function (11)

These functions also worked out of the box, with some small modifications to the documentation for some of them.

Interval input and output (9)

Here there are some functions which require comments; the ones I do not comment on have all gotten support for N-dimensional arrays.

$interval\_bitpack$: I think that this function does not make sense to generalize to N dimensions. It could perhaps take an N-dimensional array as input, but it will always return a row vector. I have left it as it is, for now at least.

$disp$ and $display$: these are functions that might be subject to change later on. At the moment they print N-dimensional arrays of intervals in the same way Octave does for normal arrays. It's however not clear how to handle the $\subset$ symbol and we might decide to change it.

Interval solver or optimizer (5)

The functions $gauss$ and $polyval$ are not generalizable to N-dimensional vectors. I don't think that $fzero$ can be generalized either; for it to work the function must be real-valued. The function $fsolve$ can perhaps be modified to support N-dimensional vectors. It uses the SIVIA algorithm [3] and I have to dive deeper into how it works to see if it can be done. For $fminsearch$ nothing needed to be done; it worked for N-dimensional arrays directly.

Interval contractor arithmetic (2)

Both of these functions are used together with $fsolve$, so they also depend on whether SIVIA can be generalized or not.

Verified solver or optimizer (6)

All of these functions work on matrices and cannot be generalized.

Utility function (29)

All of these for which it made sense have been modified to support N-dimensional arrays.
Some of them only work for matrices; these are $ctranspose$, $diag$, $transpose$, $tril$ and $triu$. I have left them as they were, though I fixed a bug in $diag$.

API function to the MPFR and crlibm libraries (8)

These are the functions that in general required the most work. The ones I have added full support for N-dimensional arrays to are $crlibm\_function$, $mpfr\_function\_d$ and $mpfr\_vector\_sum\_d$. Some of them cannot be generalized; these are $mpfr\_matrix\_mul\_d$, $mpfr\_matrix\_sqr\_d$ and $mpfr\_to\_string\_d$. The functions $mpfr\_linspace\_d$ and $mpfr\_vector\_dot\_d$ are related to what I mentioned above for $linspace$ and $dot$.

Summary

Summing up, the functions that still require some work are:

• functions related to $fsolve$;

• the functions $dot$ and $sumsq$;

• the functions $linspace$ and $mince$.

Especially the functions related to $fsolve$ might take some time to handle. My goal is to dive deeper into this next week. Apart from this there are also some more things that need to be considered. The documentation for the package will need to be updated. This includes adding some examples which make use of the new functionality. The interval package also did not follow the coding style for Octave. All the functions I have made changes to have been updated to the correct coding style, but many of the functions that worked out of the box still use the old style. It might be that we want to unify the coding standard for all files before the next release.

[1] The $INDEX$ file https://sourceforge.net/u/urathai/octave/ci/default/tree/INDEX
[2] Bug #51333 https://savannah.gnu.org/bugs/index.php?51333
[3] The SIVIA algorithm https://www.youtube.com/watch?v=kxYh2cXHdNQ

Enrico Bertino

End of the first work period

Hi all, this is the end of the first period of GSoC! It was a challenging but very interesting period and I am very excited about the next two months.
It is the first time that I have the opportunity to make something that has a real value, albeit small, for someone else's work. Even when I have to do small tasks, like structuring the package or learning the Tensorflow APIs, I do them with enthusiasm, because I have a very clear idea of the ultimate goal and the value that my efforts would bring. It may seem trivial, but for a student this is not the daily bread :) I really have fun coding for this project and I hope this will last until the end!

Speaking about the project, I've spent some time wondering what was the best solution to test the correct installation of the Tensorflow Python APIs on the machine. The final solution was putting the test in a function __nnet_init__ and calling it in the PKG_ADD (code in inst/__nnet_init__ in my repo [1]). Regarding the code, in these last days I tried to connect the dots, calling a Tensorflow network from Octave in a "Matlab compatible" way. In particular, I use the classes that I made two weeks ago in order to implement a basic version of trainNetwork, which is the core function of this package. As I explained in my post of June 12, trainNetwork takes as input the data and two objects: the layers and the options. I had some difficulty during the implementation of the Layer class due to the inheritance and the overloading. Eventually, I decided to store the layers in a cell array as an attribute of the Layer class. By overloading subsref, I let the user call a specific layer with the '()' access, like a classic array.
With this kind of overloading I managed to solve the main problem of this structure, that is the possibility of getting a property of a layer by doing, for example, layers(1).Name

classdef Layer < handle

  properties (Access = private)
    layers = {};
  endproperties

  methods (Hidden, Access = {?Layers})
    function this = Layer (varargin)
      nargin = numel (varargin);
      this.layers = cell (1, nargin);
      for i = 1:nargin
        this.layers{i} = varargin{i};
      end
    endfunction
  endmethods

  methods (Hidden)
    function obj = subsref (this, idx)
      switch idx(1).type
        case '()'
          idx(1).type = '{}';
          obj = builtin ('subsref', this.layers, idx);
        case '{}'
          error ('{} indexing not supported');
        case '.'
          obj = builtin ('subsref', this, idx);
      endswitch
    endfunction
    function obj = numel (this)
      obj = builtin ('numel', this.layers);
    endfunction
    function obj = size (this)
      obj = builtin ('size', this.layers);
    endfunction
  endmethods

endclassdef

Therefore, I implemented the same example as in my last post in a proper way. In tests/script you can find the function cnn_linear_model, which consists simply of:

1. Loading the datasets

load('tests/examples/cnn_linear_model/train_70.mat')
load('tests/examples/cnn_linear_model/test_70.mat')

2. Defining layers and options

layers = [ ...
  imageInputLayer([28 28 1])
  convolution2dLayer(12,25)
  reluLayer
  fullyConnectedLayer(1)
  regressionLayer];
options = trainingOptions('sgdm', 'MaxEpochs', 1);

3. Training

net = trainNetwork(trainImages, trainAngles, layers, options);

4. Prediction

acc = net.predict(testImages, testAngles)

trainNetwork is a draft and I have not yet implemented the class seriesNetwork, but I think it's a good start :) In the next weeks I will focus on the Tensorflow backend of the above mentioned functions, with the goal of having a working version at the end of the second period!

Peace, Enrico

[1] https://bitbucket.org/cittiberto/octave-nnet/src

June 24, 2017 Enrico Bertino

Package structure

Hello!
During the first period of GSoC I have worked mostly on analyzing the Matlab structure of the nnet package in order to guarantee compatibility throughout the whole project. The focus of the project is on convolutional neural networks, about which I will write in the next post. Regarding the package structure, the core will be composed of three parts:

1. Layers: there are 11 types of layers that I defined as Octave classes, using classdef. These layers can be concatenated in order to create a Layer object defining the architecture of the network. This will be the input for the training function.

2. Training: the core of the project is the training function, which takes as input the data, the layers and some options and returns the network as output.

3. Network: the network object has three methods (activations, classify and predict) that let the user compute the final classification and prediction.

Figure 1: conv nnet flowchart

I have already implemented a draft of the first point, the layer classes [1]. Every layer type inherits some attributes and methods from the parent class Layers. This is useful for creating the Layer object: the concatenation of different layers is always a Layer object that will be used as input for the training function. For this purpose, I overloaded the cat, horzcat and vertcat operators for Layers and subsref for Layer. I still need to finalize some details of the disp methods of these classes.

Figure 2: Layers classes definitions

inputParser is used in every class for parameter management and attribute setting. The objects of these classes can be instantiated with a corresponding function, implemented in the directory inst/.
Here is an example of creating a Layer object:

> # TEST LAYERS
> a = imageInputLayer([2,2,3]);   # first layer
> b = convolution2dLayer(1,1);    # second layer
> c = dropoutLayer(1);            # third layer
> layers = [a b c];               # Layer object from layers concat
> drop = layers(3);               # Layer element access
> drop.Probability                # Access layer attribute
ans = 0.50000

All functions can be tested with the make check of the package. The next step is a focus on the Tensorflow integration with Pytave, writing a complete test for a regression of image angles and comparing the precision and the computational time with Matlab.

[1] https://bitbucket.org/cittiberto/octave-nnet/src/35e4df2a6f887582216fc266fc75976a816f04fc/inst/classes/?at=default

Train a Convolutional Neural Network for Regression

Hello, I spent the last period working mostly on Tensorflow, studying the APIs and writing some examples in order to explore the possible implementations of neural networks. For this goal, I chose an interesting example proposed in the Matlab examples at [1]. The dataset is composed of 5000 images, rotated by an angle α, and a corresponding integer label (the rotation angle α). The goal is to make a regression to predict the angle of a rotated image and straighten it up. All files can be found in tests/examples/cnn_linear_model in my repo [2]. I have kept the structure as in the Matlab example, but I generated a new dataset starting from LeCun's MNIST digits (datasets at [3]). Each image was rotated by a random angle between 0° and 70°, in order to keep the right orientation of the digits (code in dataset_generation.m).
Fig. 1 shows some rotated digits with the corresponding original digits.

Figure 1. rotated images in columns 1,3,5 and originals in columns 2,4,6

The implemented linear model is $\hat{Y} = \omega X + b$, where the weights $\omega$ and the bias $b$ will be optimized during the training by minimizing a loss function. As loss function, I used the mean square error (MSE): $\dfrac{1}{n} \sum_{i=1}^n (\hat{Y}_i - Y_i)^2$, where the $Y_i$ are the training labels.

In order to show the effective improvement given by a Neural Network, I started by making a simple regression, feeding the X variable of the model directly with the 28x28 images. Even though a closed form exists for the MSE minimization, I implemented an iterative method in order to explore some Tensorflow features (code in regression.py). To evaluate the accuracy of the regression, I consider a regression correct if the difference between the angles is less than 20°. After 20 epochs, convergence was almost reached, giving an accuracy of $0.6146$.

Figure 2. rotated images in columns 1,3,5 and after the regression in columns 2,4,6

I now want to analyze the improvement given by a feature extraction performed with a convolutional neural network (CNN). As in the Matlab example, I used a basic CNN, since the input images are quite simple (only numbers with a monochromatic background) and consequently the features to extract are few.

• INPUT [28x28x1] will hold the raw pixel values of the image, in this case an image of width 28, height 28
• CONV layer will compute the output of neurons that are connected to local regions in the input, each computing a dot product between their weights and a small region they are connected to in the input volume. This results in a volume such as [12x12x25]: 25 filters of size 12x12
• RELU layer will apply an element-wise activation function, such as the $max(0,x)$ thresholding at zero. This leaves the size of the volume unchanged ([12x12x25]).
• FC (i.e.
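The iterative MSE minimization can be sketched without any framework. Here is a plain-Python gradient-descent fit of $\hat{y} = \omega x + b$ on hypothetical 1-D toy data (the actual regression.py feeds 28x28 images through TensorFlow; this only illustrates the loss and update rule):

```python
# Minimal sketch: fit y_hat = w*x + b by gradient descent on the MSE loss
# (1/n) * sum((y_hat - y)^2). Toy 1-D data, not the MNIST images.

def fit_linear(xs, ys, lr=0.02, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        preds = [w * x + b for x in xs]
        # gradients of the MSE with respect to w and b
        dw = 2.0 / n * sum((p - y) * x for p, y, x in zip(preds, ys, xs))
        db = 2.0 / n * sum(p - y for p, y in zip(preds, ys))
        w -= lr * dw
        b -= lr * db
    return w, b

# data generated from y = 3x + 1, so the fit should recover w ~ 3, b ~ 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3.0 * x + 1.0 for x in xs]
w, b = fit_linear(xs, ys)
```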
fully-connected) layer will compute the class scores, resulting in a volume of size [1x1x1], which corresponds to the rotation angle. As with ordinary Neural Networks, each neuron in this layer is connected to all the numbers in the previous volume.

Figure 3. CNN linear model architecture

We can visualize the architecture with Tensorboard, where the graph of the model is represented.

Figure 4. Model graph generated with Tensorboard

With the implementation in regression_w_CNN.py, the results are quite satisfying: after 15 epochs, it reached an accuracy of $0.75$ (205 seconds overall). One can see in Fig. 5 the marked improvement of the regression.

Figure 5. rotated images in columns 1,3,5 and after the CNN regression in columns 2,4,6

With the same parameters, Matlab reached an accuracy of $0.76$ in 370 seconds (code in regression_Matlab_nnet.m), so the performance is quite promising.

In the next post (in a few days), I will integrate the work done up to now, calling the Python class within Octave and making a function that simulates the behavior of Matlab. Leveraging the layers classes that I made two weeks ago, I will implement a draft of the functions trainNetwork and predict, making the Matlab script callable also in Octave. I will also take care of the dependencies of the package: I will add the dependency on Pytave in the package description and write a test in PKG_ADD in order to verify the version of Tensorflow during the installation of the package.

Peace,
Enrico

[3] http://yann.lecun.com/exdb/mnist/

June 22, 2017
Joel Dahne

Vectorization and broadcasting

At the moment I'm actually ahead of my schedule, and this week I started to work on support for vectorization of N-dimensional arrays. By far the biggest challenge was to implement proper broadcasting, and most of this post will be devoted to going through that. At the end I also mention some of the other things I have done during the week.
Broadcasting arrays

At the moment I have implemented support for broadcasting in all binary functions. Since all binary functions behave similarly with respect to broadcasting, I will use $+$ in all my examples below, but this could in principle be any binary function working on intervals.

When adding two arrays $A, B$ of the same size, the result is just an array of the same size with each entry containing the sum of the corresponding entries in $A$ and $B$. If $A$ and $B$ do not have the same size, then we try to perform broadcasting. The simplest form of broadcasting is when $A$ is an array and $B$ is a scalar. Then we just take the value of $B$ and add it to every element of $A$. For example

    > A = infsupdec ([1, 2; 3, 4])
    A = 2×2 interval matrix
       [1]_com   [2]_com
       [3]_com   [4]_com
    > B = infsupdec (5)
    B = [5]_com
    > A + B
    ans = 2×2 interval matrix
       [6]_com   [7]_com
       [8]_com   [9]_com

However it is not only when one of the inputs is a scalar that broadcasting can be performed. Broadcasting is performed separately for each dimension of the input. We require either that the dimensions are equal, in which case no broadcasting is performed, or that one of the inputs has that dimension equal to $1$; we then conceptually concatenate this input along that dimension until the two are of equal size. If for example $A$ has dimension $4\times4\times4$ and $B$ dimension $4\times4\times1$, we concatenate $B$ with itself along the third dimension four times to get two arrays of the same size. Since a scalar has all dimensions equal to 1, we see that it can be broadcast to any size.
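The per-dimension rule above can be written down compactly. Here is a small Python sketch (not the package's code) that computes the size of a broadcast result, treating missing trailing dimensions as singletons the way Octave does:

```python
# Sketch of the dimension-matching rule: two sizes are compatible when
# each dimension pair is equal or one of the two extents is 1.

def broadcast_size(size_a, size_b):
    # pad the shorter size with trailing singleton dimensions
    ndims = max(len(size_a), len(size_b))
    size_a = list(size_a) + [1] * (ndims - len(size_a))
    size_b = list(size_b) + [1] * (ndims - len(size_b))
    result = []
    for da, db in zip(size_a, size_b):
        if da == db or db == 1:
            result.append(da)
        elif da == 1:
            result.append(db)
        else:
            raise ValueError("nonconformant sizes %s and %s" % (size_a, size_b))
    return result

print(broadcast_size([4, 4, 4], [4, 4, 1]))  # -> [4, 4, 4]
print(broadcast_size([1, 5, 2], [2, 5]))     # -> [2, 5, 2]
```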
Both $A$ and $B$ can also be broadcast at the same time, along different dimensions, for example

    > A = infsupdec (ones (1, 5, 2))
    A = 1×5×2 interval array
    ans(:,:,1) =
       [1]_com   [1]_com   [1]_com   [1]_com   [1]_com
    ans(:,:,2) =
       [1]_com   [1]_com   [1]_com   [1]_com   [1]_com
    > B = infsupdec ([1, 2, 3, 4, 5; 6, 7, 8, 9, 10])
    B = 2×5 interval matrix
       [1]_com   [2]_com   [3]_com   [4]_com   [5]_com
       [6]_com   [7]_com   [8]_com   [9]_com   [10]_com
    > A + B
    ans = 2×5×2 interval array
    ans(:,:,1) =
       [2]_com   [3]_com   [4]_com   [5]_com   [6]_com
       [7]_com   [8]_com   [9]_com   [10]_com   [11]_com
    ans(:,:,2) =
       [2]_com   [3]_com   [4]_com   [5]_com   [6]_com
       [7]_com   [8]_com   [9]_com   [10]_com   [11]_com

The implementation

I'll go through my implementation a little bit. I warn you that I'm not that familiar with the internals of Octave, so some things I say might be wrong, or at least not totally correct. Internally, all numerical arrays are stored as a linear vector, and the dimensions are only metadata. This means that the most efficient way to walk through an array is with a linearly increasing index. When $A$ and $B$ have the same size, the most efficient way to sum them is to go through the arrays linearly. In pseudo code

    // Calculate C = A + B
    for (int i = 0; i < numel (A); i++) {
      C(i) = A(i) + B(i);
    }

This works fine, and apart from unrolling the loop or doing optimizations like that, it is probably the most efficient way to do it. If $A$ and $B$ are not of the same size, then one way to do it would be to simply extend $A$ and/or $B$ along the needed dimensions. This would however require copying a lot of data, something we want to avoid (memory access is expensive). Instead we try to be smart with our indexing to access the right data from both $A$ and $B$. After asking on the IRC channel I got pointed to this Octave function which performs broadcasting. My implementation, which can be found here, is heavily inspired by that function.

Performance

Here I compare the performance of the new implementation with the old one.
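The "smart indexing" idea is that a broadcast dimension simply contributes a stride of zero, so the same elements get reused without copying. A toy Python model of that arithmetic (not the actual oct-file code; column-major, as in Octave's internal storage):

```python
# For each linear index into the result, compute the matching linear index
# into an input array, without materializing any copies. A dimension being
# broadcast (extent 1 in the input, > 1 in the output) gets stride 0.
# Both sizes are assumed already padded to the same number of dimensions.

def column_major_strides(size):
    strides, s = [], 1
    for d in size:
        strides.append(s)
        s *= d
    return strides

def broadcast_indices(out_size, in_size):
    strides = column_major_strides(in_size)
    # stride 0 means the same input element is reused along that dimension
    eff = [0 if din == 1 and dout > 1 else st
           for din, dout, st in zip(in_size, out_size, strides)]
    total = 1
    for d in out_size:
        total *= d
    indices = []
    for lin in range(total):
        idx, rem = 0, lin
        for dout, st in zip(out_size, eff):
            idx += (rem % dout) * st
            rem //= dout
        indices.append(idx)
    return indices

# a 2x1 input broadcast to a 2x3 result: its single column repeats
print(broadcast_indices([2, 3], [2, 1]))  # -> [0, 1, 0, 1, 0, 1]
```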
Since the old one could only handle matrices we are limited by that. We measure the time it takes to add two matrices $A$, $B$ with the code

    tic; A + B; toc;

We do 10 runs for each test and all times are in seconds.

Addition of large matrices

Case 1: A = B = infsupdec (ones (1000, 1000));

    Old       New
    0.324722  0.277179
    0.320914  0.276116
    0.322018  0.276075
    0.318713  0.279258
    0.332041  0.279593
    0.318429  0.279987
    0.323752  0.279089
    0.317823  0.276036
    0.320509  0.280964
    0.320610  0.281123
    Mean: 0.32195  0.27854

Case 2: A = B = infsupdec (ones (10, 100000));

    Old       New
    0.299321  0.272691
    0.297020  0.282591
    0.296460  0.274298
    0.294541  0.279661
    0.298306  0.277274
    0.301532  0.275531
    0.298163  0.278576
    0.298954  0.279868
    0.302849  0.275991
    0.297765  0.278806
    Mean: 0.29849  0.27753

Case 3: A = B = infsupdec (ones (100000, 10));

    Old       New
    0.286433  0.279107
    0.289503  0.278251
    0.297562  0.279579
    0.292759  0.283311
    0.292983  0.281306
    0.290947  0.282310
    0.293025  0.286172
    0.294153  0.278886
    0.293457  0.278625
    0.296661  0.280804
    Mean: 0.29275  0.28084

Broadcasting scalars

Case 4: A = infsupdec (ones (1000, 1000)); B = infsupdec (1);

    Old       New
    0.298695  0.292419
    0.298158  0.292274
    0.305242  0.296036
    0.295867  0.291311
    0.296971  0.297255
    0.304297  0.292871
    0.298172  0.300329
    0.297251  0.291668
    0.299236  0.294128
    0.300457  0.298005
    Mean: 0.29943  0.29463

Case 5: A = infsupdec (1); B = infsupdec (ones (1000, 1000));

    Old       New
    0.317276  0.291100
    0.316858  0.296519
    0.316617  0.292958
    0.316159  0.299662
    0.317939  0.301558
    0.322162  0.295338
    0.321277  0.293561
    0.314640  0.291500
    0.317211  0.295487
    0.317177  0.294376
    Mean: 0.31773  0.29521

Broadcasting vectors

Case 6: A = infsupdec (ones (1000, 1000)); B = infsupdec (ones (1000, 1));

    Old       New
    0.299546  0.284229
    0.301177  0.284458
    0.300725  0.276269
    0.299368  0.276957
    0.303953  0.278034
    0.300894  0.275058
    0.301776  0.276692
    0.302462  0.282946
    0.304010  0.275573
    0.301196  0.273109
    Mean: 0.30151  0.27833

Case 7: A = infsupdec (ones (1000, 1000)); B = infsupdec (ones (1, 1000));

    Old       New
    0.300554  0.295892
    0.301361  0.294287
    0.302575  0.299116
    0.304808  0.294184
    0.306700  0.291606
    0.301233  0.298059
    0.301591  0.292777
    0.302998  0.290288
    0.300452  0.291975
    0.305531  0.290178
    Mean: 0.30278  0.29384

We see that in all cases the new version is faster than, or at least as fast as, the old version. In the old version the order of the inputs made a slight difference in performance (case 4 vs case 5). In the new version both inputs are treated in exactly the same way, so we no longer see that difference.

Possible improvements

In theory the cases where we broadcast a scalar could be the fastest ones. If $B$ is a scalar we could, in pseudo code, do something similar to

    // Calculate C = A + B with B scalar
    for (int i = 0; i < numel (A); i++) {
      C(i) = A(i) + B;
    }

This is however not implemented at the moment. Instead we use the ordinary routine to calculate the index for $B$ (since it is a scalar it will always evaluate to $1$). If we wanted to optimize more for this case, we could add a check for whether $A$ or $B$ is a scalar and then specialize for that. Of course this would also make the code more complicated, something to watch out for. At the moment I leave it like this, but if we later want to optimize for that case it can be done.

Other work

Apart from the work to fix broadcasting for binary functions, there was very little to do for many of the functions. All binary functions that use this code, and all unary functions using an even simpler code, worked directly after fixing the oct-files. Some of them required small changes to the documentation, but other than that the Octave scripts were fine. So mainly it has been a matter of actually going through all the files and checking that they actually did work.
Bug #51283

When going through all the functions I noticed a bug in the interval version of $\sin$:

    > sin (infsupdec (0))
    ans = [0]_com
    > sin (infsupdec ([0, 0]))
    ans = 1×2 interval vector
       [0, -0]_com   [0, -0]_com

The second version here is wrong: $-0$ should never be allowed as a value for the supremum of an interval. I was able to track this down to how Octave's $\max$ function works, see bug #51283. As Oliver writes there, the exact behaviour of the $\max$ function is not specified in IEEE Std 754-2008, so we cannot rely on that. To solve this I have added a line to manually set all $-0$ to $+0$ in the supremum of the interval.

June 20, 2017
Michele Ginesi

Timetable: modification

According to my timetable (which you can find here), during this last week of June I should have worked on the input validation of betainc. Since a new bug related to this function has been found and, moreover, the actual implementation doesn't accept the "lower" or "upper" tail (as MATLAB does), my mentor and I decided to use this week to start studying how to rewrite betainc (the main references will be [1] and [2]) and to use the last part of the GSoC to actually implement it. In this way, my timetable remains almost identical (I will use July to work on Bessel functions) and I will be able to fix this problem as well.

[1] Abramowitz, Stegun "Handbook of Mathematical Functions"
[2] Cuyt, Brevik Petersen, Verdonk, Waadeland "Handbook of Continued Fractions for Special Functions"

June 16, 2017
Joel Dahne

Construction and Printing

This week I have started to work on methods for constructing and printing N-dimensional arrays of intervals. In my timeline I estimated that this work would take two weeks. However, in this first week I have managed to complete most of the work. I will give some comments on how I have worked with the Mercurial repository, how the work went, and different things I encountered along the way.
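The fix relies on the IEEE 754 quirk that $-0$ and $+0$ compare equal while being distinct values. A small Python sketch of the same normalization (illustrative only; the package does this on the sup field of its interval arrays):

```python
# Force any -0.0 in a list of suprema to +0.0. In IEEE 754 arithmetic
# 0.0 == -0.0, so an equality test catches both; copysign tells them apart.
import math

def fix_negative_zero(sup):
    return [0.0 if x == 0.0 else x for x in sup]

fixed = fix_negative_zero([-0.0, 0.0, 1.5])
print([math.copysign(1.0, x) for x in fixed])  # -> [1.0, 1.0, 1.0]
```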
Working with Mercurial

This is essentially the first time I'm using Mercurial for revision control, though I have used git before. I quickly learned how to use it for the basic tasks that I need: committing, comparing files and checking the history. As mentioned in a previous post, you can find my repository here [1].

Coding style

When I started to work with the files I realized that they did not follow Octave's coding standard [2]. After a short discussion on the mailing list we decided that I will update the files I change to follow the standard coding style. Usually it is not a good idea to change coding style and add functionality in the same commit. However, most of the changes to coding style are only whitespace changes, so they can be ignored using the -w flag in Mercurial. Thus we decided that as long as the coding style changes are such that they are ignored with -w, I will do them in the same commit as the added functionality. If there are coding style changes that are not only whitespace, the most common example being too long lines, I first do a commit with only changes to the coding style. So if you want to take a look at the functionality I have added, you will probably want to use the -w flag. Note however that I have not updated the coding style for any files I have not otherwise changed.

Committing

Normally I do one commit for each file, though in many cases the bare intervals and the decorated intervals have almost identical functions, and in that case I commit changes to both of them at the same time. Of course it also happens that I have to go back and make more changes to a file; in that case I just do another commit.

The actual work

The work went much faster than I expected. The main reason for this is that Octave has very good support for indexing. For example, expressions like

    isnai(x.inf <= x.sup) = false;

work just as well for matrices as for N-dimensional arrays.
In fact, the constructor for bare intervals even worked for N-dimensional arrays from the beginning; there I only had to make slight modifications to the documentation and add some tests! Not all functions were that easy though. Some functions that have not been updated in a while clearly assumed the input was a matrix, for example in $hull$:

    sizes1 = cellfun ("size", l, 1);
    sizes2 = cellfun ("size", l, 2);

In most cases I only needed to add more general indexing, oftentimes even making the code clearer. In some functions all I had to do was to remove the check on the input data so that they would accept N-dimensional arrays. This was true for example in $cat$, where all I had to do was remove the check and make some minor modifications to the documentation. I can conclude by saying that Octave has great support for working with N-dimensional arrays. Since internally the data for intervals is stored simply as arrays, I could make good use of it!

Noteworthy things

While most functions were straightforward to modify, some required some thought. How should they even work for N-dimensional input?

Disp

When modifying the $disp$ function I chose to mimic how Octave handles displaying N-dimensional arrays. I noticed that this is different from how Matlab handles it. In Matlab we have

    > x = zeros (2, 2, 2)
    x(:,:,1) =
       0   0
       0   0
    x(:,:,2) =
       0   0
       0   0

while in Octave it's

    > x = zeros (2, 2, 2)
    x =
    ans(:,:,1) =
       0   0
       0   0
    ans(:,:,2) =
       0   0
       0   0

I don't know the reasoning behind Octave's version. At least at first glance I think I prefer the way Matlab does it. But since I'm working in Octave I chose that style.

The next question was how to handle the subset symbol, $\subset$. The interval package uses $=$ or $\subset$ depending on whether the string representation is exact or not. For example

    > x = infsup (1/2048, 1 + 1/2048);
    > format short; x
    x ⊂ [0.00048828, 1.0005]
    > format long; x
    x = [0.00048828125, 1.00048828125]

How should this be handled for N-dimensional arrays?
One way would be to switch all $=$ to $\subset$ if the representation is not exact. Another would be to use $\subset$ on all submatrices that do not have an exact string representation. The third way, and how it is implemented now, is to only change the first $=$ to $\subset$, the one after the variable name. Like this:

    > x(1,1,1:2) = infsup (1/2048, 1 + 1/2048)
    x ⊂ 1×1×2 interval array
    ans(:,:,1) = [0.00048828, 1.0005]
    ans(:,:,2) = [0.00048828, 1.0005]

This might look a bit odd at first: in some places we use $=$ and in some $\subset$. Though I think it somehow makes sense: we are saying that $x$ is a subset of the $1\times1\times2$ interval array given by

    ans(:,:,1) = [0.00048828, 1.0005]
    ans(:,:,2) = [0.00048828, 1.0005]

which actually is true. Anyway, I will leave it like this for now and then we might decide to switch it up later.

linspace and mince

The standard implementation of $linspace$ only supports scalar or vector input. It could be generalized to N-dimensional arrays by, for example, returning an N+1-dimensional array where the last dimension corresponds to the linearly spaced elements. But since this has not been done in the standard implementation, I will at least wait with adding it for intervals.

The function $mince$ can be seen as an interval generalization of $linspace$. It takes an interval and returns an array of intervals whose union covers it. This could similarly be expanded to N dimensions by creating the array along the N+1 dimension. But again we choose to at least wait with adding this.

meshgrid and ndgrid

The interval package already has an implementation of $meshgrid$. But since it previously did not support 3-dimensional arrays, it had to fit 3-d data into a 2-d matrix. Now that it supports 3-d data it can output that instead. Currently the interval package does not implement $ndgrid$. When I looked into it I realized that the standard implementation of $ndgrid$ actually works for interval arrays as well.
I have not looked into the internals, but in principle it should only need the $cat$ function, which is implemented for intervals. Further, I noticed that the standard $meshgrid$ also works for intervals. However, the interval implementation differs in that it converts all input to intervals, whereas the standard implementation allows for non-uniform output. Using the interval implementation of $meshgrid$ we have

    > [X Y] = meshgrid (infsup (1:3), 4:6)
    X = 3×3 interval matrix
       [1]   [2]   [3]
       [1]   [2]   [3]
       [1]   [2]   [3]
    Y = 3×3 interval matrix
       [4]   [4]   [4]
       [5]   [5]   [5]
       [6]   [6]   [6]

but if we fall back to the standard implementation (by removing the interval implementation) we get

    > [X Y] = meshgrid (infsup (1:3), 4:6)
    X = 3×3 interval matrix
       [1]   [2]   [3]
       [1]   [2]   [3]
       [1]   [2]   [3]
    Y =
       4   4   4
       5   5   5
       6   6   6

Note that the last matrix is not an interval matrix. So the question is: should we implement a version of $ndgrid$ that converts everything to intervals, or should we remove the implementation of $meshgrid$? It is at least most likely not a good idea that the two functions behave differently. I think that removing the implementation of $meshgrid$ makes the most sense. First of all, it's less code to maintain, which is always nice. Secondly, you can manually convert all input to the function to intervals if you want uniform output. If you do not want uniform output, then the standard implementation works whereas the interval implementation does not, so the standard implementation is more general in a sense. We have to choose what to do, but for now I leave it as it is.
Non-generalizable functions

From what I have found, there is no way to create a 3-dimensional array in Octave in the same way you can create a 2-dimensional one with, for example,

    M = [1, 2; 3, 4];

Instead, higher dimensional arrays have to be created using other functions, for example $reshape$ or $zeros$, or by specifying the submatrices directly

    M(:,:,1) = [1, 2; 3, 4];
    M(:,:,2) = [5, 6; 7, 8];

This means that the function __split_interval_literals__, which is used to split a string like "[1, 2; 3, 4]" into its separate components, cannot really be generalized to N dimensions.

[1] https://sourceforge.net/u/urathai/octave/ci/default/tree/
[2] http://wiki.octave.org/Octave_style_guide

Michele Ginesi

gammaincinv

The gammaincinv function: the inverse of the incomplete gamma function

Definition

Given two real values $y\in[0,1]$ and $a>0$, gammaincinv(y,a,tail) gives the value $x$ such that $P(x,a) = y$ or $Q(x,a) = y$, depending on the value of tail ("lower" or "upper").

Computation

The computation is carried out via the standard Newton method, solving for $x$ either $y - P(x,a) = 0$ or $y - Q(x,a) = 0$. The Newton method uses as exit criterion a control on the residual and on the maximum number of iterations. For numerical stability, the initial guess and which gammainc function is inverted depend on the values of $y$ and $a$. We use the following property: $\text{gammaincinv}(y,a,\text{"upper"})=\text{gammaincinv}(1-y,a,\text{"lower"})$, which follows immediately from the property $P(x,a) + Q(x,a) = 1$.

We start by computing the trivial values, i.e. $a=1$, $y=0$ and $y=1$. Then we rename the variables as follows (in order to obtain a more classical notation): if tail = "lower", then $p=y$, $q=1-y$; while if tail = "upper", $q=y$, $p=1-y$. We will invert with $p$ as $y$ in the cases in which we invert the lower incomplete gamma function, and with $q$ as $y$ in the cases in which we invert the upper one.
Now, the initial guesses (and the corresponding version of the incomplete gamma function to invert) are the following:

1. $p < \dfrac{(0.2(1+a))^a}{\Gamma(1+a)}$. We define $r = (p\,\Gamma(1+a))^{\frac{1}{a}}$ and the parameters \begin{align*} c_2 &= \dfrac{1}{a+1}\\ c_3 &= \dfrac{3a+5}{2(a+1)^2(a+2)}\\ c_4 &= \dfrac{8a^2+33a+31}{3(a+1)^3(a+2)(a+3)}\\ c_5 &= \dfrac{125a^4+1179a^3+3971a^2+5661a+2888}{24(1+a)^4(a+2)^2(a+3)(a+4)} \end{align*} Then the initial guess is $x_0 = r + c_2\,r^2+c_3\,r^3+c_4\,r^4+c_5\,r^5$ and we invert the lower incomplete gamma function.

2. $q < \dfrac{e^{-a/2}}{\Gamma(a+1)}$, $a\in(0,10)$. The initial guess is $x_0 = - \log (q) - \log(\Gamma(a))$ and we invert the upper incomplete gamma function.

3. $a\in(0,1)$. The initial guess is $x_0 = (p\,\Gamma(a+1))^{\frac{1}{a}}$ and we invert the lower incomplete gamma function.

4. Other cases. The initial guess is $x_0 = a\,t^3$, where $t = 1-d-\sqrt{d}\,\text{norminv}(q)$, $d=\dfrac{1}{9a}$ (where norminv is the inverse of the normal distribution function), and we invert the upper incomplete gamma function.

The first three initial guesses can be found in the article "Efficient and accurate algorithms for the computation and inversion of the incomplete gamma function ratios" by Gil, Segura and Temme, while the last one can be found in the documentation of the function igami.c of the cephes library. You can find my work on my repository on bitbucket here, bookmark "gammainc".

Comments

Working on this function I found some problems in gammainc. To solve them, I changed the flag for condition 5 and added separate functions to treat the Inf cases.
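The Newton iteration itself is short. Here is a self-contained Python sketch (illustrative only, not the package's code): it evaluates $P(x,a)$ with the small-argument series and then solves $P(x,a) = y$ with Newton steps, using $\frac{\partial P}{\partial x} = \frac{x^{a-1}e^{-x}}{\Gamma(a)}$; the safeguards (initial-guess selection, tail switching) of the real implementation are left out.

```python
# Sketch of the Newton inversion of the lower regularized incomplete
# gamma function P(x, a). Fine for modest x and a; no tail handling.
import math

def gammainc_lower(x, a):
    # P(x, a) = e^-x * x^a * sum_k x^k / Gamma(a + 1 + k)
    total, k = 0.0, 0
    while True:
        term = x ** k / math.gamma(a + 1.0 + k)
        total += term
        if term < 1e-17 * total or k > 500:
            break
        k += 1
    return math.exp(-x) * x ** a * total

def gammaincinv_lower(y, a, x0=1.0, tol=1e-12, maxit=100):
    # Newton on f(x) = P(x, a) - y, with f'(x) = x^(a-1) e^-x / Gamma(a)
    x = x0
    for _ in range(maxit):
        f = gammainc_lower(x, a) - y
        fp = x ** (a - 1.0) * math.exp(-x) / math.gamma(a)
        xnew = x - f / fp
        if xnew <= 0.0:       # halve instead if Newton leaves the domain
            xnew = x / 2.0
        done = abs(xnew - x) < tol * abs(xnew)
        x = xnew
        if done:
            break
    return x

# round trip: computing P(2.5, 3) and inverting should give back 2.5
y = gammainc_lower(2.5, 3.0)
print(abs(gammaincinv_lower(y, 3.0) - 2.5) < 1e-8)  # -> True
```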
June 06, 2017
Michele Ginesi

gammainc

The gammainc function: the incomplete gamma function

Definition

There are actually four types of incomplete gamma functions:

• lower (which is the default one) $$P(a,x) = \frac{1}{\Gamma(a)}\int_{0}^{x}e^{-t}t^{a-1}\,dt$$
• upper $$Q(a,x) = \frac{1}{\Gamma(a)}\int_{x}^{+\infty}t^{a-1}e^{-t}\,dt$$
• scaled lower and scaled upper, which return the corresponding incomplete gamma function scaled by $$\Gamma(a+1)\frac{e^x}{x^a}$$

Computation

To compute the incomplete gamma function there is a total of seven strategies, depending on the position of the arguments in the $x$-$a$ plane.

1. $x=0$, $a=0$. In this case the output is given as follows:
   • for "lower" and "scaledlower" it is $1$
   • for "upper" and "scaledupper" it is $0$
2. $x=0$, $a\neq0$. In this case:
   • for "lower" it is $0$
   • for "upper" and "scaledlower" it is $1$
   • for "scaledupper" it is $+\infty$
3. $x\neq0$, $a=0$. In this case:
   • for "lower" it is $1$
   • for "scaledlower" it is $e^x$
   • for "upper" and "scaledupper" it is $0$
4. $x\neq0$, $a=1$. In this case:
   • for "lower" it is $1-e^{-x}$
   • for "upper" it is $e^{-x}$
   • for "scaledlower" it is $\frac{e^x-1}{x}$
   • for "scaledupper" it is $\frac{1}{x}$
5. $0 < x\leq36$, $a\in\{2,3,4,\ldots,17,18\}$. In this case we use the following expansion (which holds true only for $n\in\mathbb{N}$): denoting $\gamma(a,x):=\Gamma(a)P(a,x)$, $$\gamma(n,x) = (n-1)!\left( 1-e^{-x}\sum_{k=0}^{n-1} \frac{x^k}{k!}\right).$$
6. $x+0.25 < a$, or $x < 0$, or $x$ not real, or $|x| < 1$, or $a < 5$. Here we use the expansion $$\gamma(a,x) = e^{-x} x^a \sum_{k=0}^{+\infty} \frac{\Gamma(a)}{\Gamma(a+1+k)}x^k.$$
7. For all the remaining cases a continued fraction expansion is used. Calling $\Gamma(a,x) = \Gamma(a) Q(a,x)$, the expansion is $$\Gamma(a,x) = e^{-x}x^a\left( \frac{1}{x+1-a-}\ \frac{1(1-a)}{x+3-a-}\ \frac{2(2-a)}{x+5-a-}\cdots \right).$$ This case is handled by an external .cc function.
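The continued fraction in case 7 is usually evaluated with the modified Lentz method. A self-contained Python sketch of that evaluation (illustrative only; the package does this in a compiled .cc helper, and the tiny-value guards follow the common textbook recipe):

```python
# Evaluate Q(a, x) = Gamma(a, x) / Gamma(a) from the continued fraction
# above using the modified Lentz method. Intended for the region where
# the continued fraction converges well (roughly x larger than a).
import math

def gammainc_upper_cf(x, a, eps=1e-15, maxit=200):
    tiny = 1e-300                 # guard against division by ~0
    b = x + 1.0 - a               # first denominator: x + 1 - a
    c = 1.0 / tiny
    d = 1.0 / b
    h = d
    for i in range(1, maxit + 1):
        an = -i * (i - a)         # numerators 1(1-a), 2(2-a), ...
        b += 2.0                  # denominators x+3-a, x+5-a, ...
        d = an * d + b
        if abs(d) < tiny:
            d = tiny
        c = b + an / c
        if abs(c) < tiny:
            c = tiny
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    # Gamma(a, x) = e^-x x^a * h, so Q(a, x) = e^-x x^a / Gamma(a) * h
    return math.exp(-x + a * math.log(x) - math.lgamma(a)) * h

# Q(2, 5) has the closed form (1 + 5) e^-5
print(abs(gammainc_upper_cf(5.0, 2.0) - 6.0 * math.exp(-5.0)) < 1e-10)  # -> True
```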
My contribution

Actually, I didn't participate from the start in the (re)making of this function: most of the work done on it is due to Nir and Marco (see the discussion on the bug tracker). My contribution was, before the GSoC, to adapt the code in such a way as to make the type of the output (and the tolerances inside the algorithms) consistent with the type of the input (single, int or double). Then I corrected a few small bugs and made a patch, helped by Carnë, during the OctConf in Geneva. During this first week of GSoC I fixed the input validation and added tests for this function, finding some problems in the implementation. Most of them depended on the fact that the continued fractions were used in cases in which they are not optimal. To solve these problems I changed the conditions so as to use the series expansion instead of the continued fractions. Now gammainc works properly also with complex arguments (for the $x$ variable), while Matlab doesn't accept non-real input. You can find my work on my repository on bitbucket here, bookmark "gammainc". I will work on this same bookmark for the next two weeks while I work on gammaincinv.

Carnë Draug

Function help text in scripts and command line

I guess I'm one of the few [[citation needed]] that write Octave programs instead of only things meant to run from the Octave console. You end up with a few hundred lines of Octave and tens of functions in a single file. For debugging you just source the whole thing. I always thought that Octave's help function only handled functions in their own m files, the help text being the first or second block of comments that does not start with Author or Copyright. But I was wrong. You can document functions in the Octave command line and you can also document functions in a script. Octave seems to pick a block of comments before a function definition as that function's help text. Like so:

octave-cli:1> ## This is the help text of the foo function.
octave-cli:1> function x = foo (y, z), x = y+z; endfunction
octave-cli:2> ## This is the help text of the bar function.
octave-cli:2> ##
octave-cli:2> ## Multi-line help text are not really a problem,
octave-cli:2> ## just carry on.
octave-cli:2> function x = bar (y, z), x = y+z; endfunction
octave-cli:3> man foo
'foo' is a command-line function

 This is the help text of the foo function.

octave-cli:4> man bar
'bar' is a command-line function

 This is the help text of the bar function.

 Multi-line help text are not really a problem,
 just carry on.

It also works in a script file, with one exception: the first function of the file, which picks up the first comment of the file, i.e. the shebang line:

$ cat > an-octave-script << END
> #!/usr/bin/env octave
>
> ## This is the help text of the foo function.
> function x = foo (y, z)
>    x = y+z;
> endfunction
>
> ## This is the help text of the bar function.
> ##
> ## Multi-line help text are not really a problem,
> ## just carry on.
> function x = bar (y, z)
>   x = y+z;
> endfunction
> END
$ octave-cli
octave-cli:1> source ("an-octave-script")
octave-cli:2> man bar
'bar' is a command-line function

 This is the help text of the bar function.

 Multi-line help text are not really a problem,
 just carry on.

octave-cli:3> man foo
'foo' is a command-line function

 !/usr/bin/env octave

I reported a bug about it (bug #51191), but I was pleasantly surprised that function help texts were being identified at all.

June 01, 2017
Carnë Draug

My Free software activities in May 2017

Octave

Bugs and patches:

• Fixed (bug #47115) where label2rgb in the Image package would not work with labelled images of type uint32 and uint64. The reason was that behind the scenes ind2rgb from the Octave core was being used, and Octave imitates Matlab in that labelled images can only be up to uint16. That sounds like a silly limitation, so I dropped it from core in ind2rgb and ind2gray. However, there are other places where that limitation is in place and needs to be lifted, such as cmunique, cmpermute, and the display functions.
• Fixed hist3 handling of special cases with NaN values on the rows that define the histogram edges, and centring of values in a histogram when there's only one value. Started as a side effect of fixing (bug #51059).
• (bug #44799) - document that imrotate does not work with the spline method.
• Fixed input check for fspecial e65762ee9414
• Reviewed wiener2, a function for adaptive noise reduction, from Hartmut, and extended it to N dimensions.
• Fixed mean2 Matlab incompatibilities for some weird cases (bug #51144)
• Made imcast support class logical.
• Reviewed otsuthresh, a function to compute the threshold value of a histogram via Otsu's method, by Avinoam Kalma.
• Reviewed affine2 and affine3, classes to represent 2d and 3d affine transforms, submitted by Motherboard and Avinoam. Started generalising the code to N dimensions to avoid code duplication. Only one method missing (maybe you wanna help?).

As sysadmin:

• Finished updating the wiki for Octave, due to issues with pygments.
Debian

• Started packaging libsystem-info-perl, got stuck due to a licensing issue

May 31, 2017
Mike Miller

Blog Relaunch, More Pythonic With Pelican

I just finished converting this blog to Pelican, a static site generator built entirely on Python. The conversion process was so easy, and Pelican is such a pleasure to work with, that I wish I had done this much sooner instead of letting this blog languish for over a year and a half. To reboot the blog and kick things off again, I'll describe just how easy it was to convert to Pelican, compare it to what I was working with before, and lay out some ideas for the future.

Starting with Pelican

So Pelican is a static site generator, which means it builds a complete website from a template and a bunch of files representing content. The content is usually written in a plain text markup language like Markdown. Pelican has explicit support for common blog features like articles, pages, categories, comments, and syndication. The blog was already written in Markdown and built using a static site generator, and I wanted to try redoing it with Pelican. Creating a Pelican site is simple: just run pelican-quickstart, fill in some content, and run pelican to compile it. I only had to copy in the existing Markdown content and massage the metadata headers into Pelican's format. I also wanted to preserve the URL scheme I had already been using, and found that it was trivial to tweak the default config files to do exactly that. The result was a new site that mostly replicated the content and URLs of the existing blog.

I noticed a small difference in the rules that Pelican uses for transforming a page title into a URL. There is one post whose title contains a "+" plus sign. My previous blog software converted this into the word "plus", while Pelican drops it by default. Fortunately, it's simple to add custom URL transformation rules, and I got this post where it needed to be.
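Tweaks like these amount to a few lines in pelicanconf.py. The fragment below is illustrative rather than my actual config: the URL pattern is a hypothetical example, and SLUG_SUBSTITUTIONS is, as far as I can tell, the Pelican 3.x setting that covers the "+"-to-"plus" case:

```python
# pelicanconf.py fragment (illustrative): pin down the URL scheme and
# customize how titles are turned into slugs.

# where article pages live and where their files are written
ARTICLE_URL = 'blog/{date:%Y}/{date:%m}/{date:%d}/{slug}/'
ARTICLE_SAVE_AS = 'blog/{date:%Y}/{date:%m}/{date:%d}/{slug}/index.html'

# map "+" to "plus" when generating slugs, matching the old blog's URLs
SLUG_SUBSTITUTIONS = [('+', 'plus')]
```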
If generic transformation rules don't work, Pelican also allows an article to specify exactly what its URL should be in its metadata.

I also wanted to be able to add a delimiter in articles to mark the beginning of the article as a summary. By default, Pelican pulls out the first 50 words as the summary, probably cutting off in mid-sentence. I wanted more control, without having to duplicate the summary in a header field. Fortunately, there's a plugin for that. The pelican-plugins repository has dozens of plugins, including one that recognizes a delimiter comment marking the end of the page summary.

At this point, setting aside appearance, the old blog was converted perfectly, preserving all of the content, URLs, summaries, feeds, tags, and comments. I had wanted to know if I could use Pelican to publish my blog, and in about an hour I had answered that easily, and it was basically ready to use. Of course the next question was, what else can I do?

Ditching Octopress

I'll take a small detour here and try to explain why I did this conversion in the first place. When I started this blog three years ago, I wanted to use a static site generator. I settled on Octopress at the time because it seemed simple and included all of the features I wanted. Octopress is based on Jekyll, which was and is probably the most popular static site generator. Octopress looked like Jekyll with a bunch of integrated plugins and theming problems already solved. It looked like a quick way to get started with Jekyll without having to learn Ruby and mess around with CSS, just write and publish. And yes, it was pretty easy.

One problem that was quickly apparent is that Octopress was basically a preconfigured Jekyll project. To customize Octopress for your own blog, you essentially have to fork Octopress and make changes to it. There was no way to run upstream Octopress against your own configuration and content.
Another problem, a direct corollary: Octopress is based on a now-ancient version of Jekyll. It's impossible to take advantage of any Jekyll improvements over the years because Octopress used Jekyll 0.12 from 2012. If Octopress updates to a newer version, the only way to take advantage would be to rebase your local fork onto Octopress upstream changes and hope for the best.

Not only are configuration changes made in the Octopress repository itself, but content is put there by default. So a typical Octopress blog is a fork of the Octopress project, with local config files, and with all articles and pages mixed in there too. I actually created a subrepo for just the content, to try to maintain the content separately from the Octopress repo. This seemed better at first, but I still had config changes in one repo and content in the other, and they're not exactly independent. Add in a customized fork of a theme, and that's three separate repositories to maintain one simple blog.

I saw that Octopress had been working on a redesign to fix these problems and work with the latest Jekyll, but it seems like development has stalled. Maybe Jekyll is good enough now that Octopress isn't really needed anymore? Maybe the authors moved on and the Octopress community dissolved? Whatever the state of the project, I had a blog built with a tool that was cumbersome to maintain, wasn't being updated, and I didn't really understand. I stopped updating my blog due to a confluence of many life events, and being difficult to maintain wasn't helping.

I was at PyCon 2017 last week, and one of the strongest messages that I took with me was to try to find a Pythonic way to do everything. Since Python can do so many things and make them so easy, why not do more things in Python? I made a note to find and try out a best-practices Pythonic static site generator to reboot my blog, and I found Pelican. And here we are.
Customizing Pelican

Pelican feels extremely Pythonic to me, because it made it extremely easy to build a replacement blog from scratch using sane defaults and working with my existing Markdown. The default behavior is easy and intuitive and makes something that just works. Now, beyond using the defaults, customizing Pelican is done with Python modules, and there is an active community building reusable plugins and themes for Pelican. To add plugins and choose a theme, I only needed to point at the pelican-plugins and pelican-themes repositories and add a little configuration. Most of the themes are listed with screenshots and live demos at pelicanthemes.com.

I was very happy to find the pelican-bootstrap3 theme because it is clean and minimal, it's based on the widely used Bootstrap framework, and it is extremely customizable. With just a little exploration of the config space, I now have something that behaves and looks more like I wanted the original blog to and less like the Pelican default theme.

I had to stop myself from customizing too much, write this post, and update this blog after a long hiatus. It's functional and pretty much does what I want for now, but there is still more customization to be done. I would like to install a favicon. I want to get the built-in site search feature working. I would like to look at the Bootswatch themes for Bootstrap that can easily be dropped into this theme. I might add a tag cloud once I have some more content.

For now the most important thing is that the blog is up and running again. I hope the switch to Pelican will lower the barrier to updating a bit and help keep things moving. Sometimes the right tools make the difference. I'm also curious what else Pelican can be used for. Are there any downsides that I haven't run into yet? Other Pythonic tools for compiling content into web sites, documents, presentations?

Joel Dahne

The first day and my repository

Yesterday I handed in the last exam for the semester(!)
and today I started to work on the project full time.

I began by setting up a repository for my code. At the moment I have only cloned the interval package; I have yet to make my own changes to it.

I started to read the constructor for intervals to see how it works. While doing that I realized that there is much less to do than I thought. The constructor can actually handle creating intervals from numerical arrays of any dimension. The function displaying the interval can only handle up to two dimensions, but if you look at the internal state it actually has more dimensions. It is not perfect though: it does not work for decorated intervals and it cannot handle strings and cells as input. But still, it makes things much easier for me.

I will continue to work and hopefully I will have actually committed something by the end of the week. Next week I will be away, but after that I will code for the rest of the summer.

May 30, 2017

Michele Ginesi

Timetable

During this first period I searched for the bugs related to special functions in the bug tracker. I found these four bugs that need to be fixed:

1. gammainc: the upper version rounds down to zero if the output is small. For this bug there is a lot of work already done by Marco and Nir. I will just fix the last few things (like the input validation).

2. gammaincinv: after finishing gammainc, I should start working on the inverse (which is missing in Octave).

3. betainc: it is necessary to fix the input validation.

4. besselj: it gives NaN if the input is too large. Actually, other versions of the Bessel functions have the same problem too.
My idea of the timetable is the following:

• First part (May 30 - June 26: 4 weeks):
  • 1st week: finish fixing gammainc (input validation, documentation, add tests)
  • 2nd and 3rd weeks: write gammaincinv (via Newton's method)
  • 4th week: fix the input validation for betainc (and, if needed, add tests and fix the documentation)
• Second part (July 1 - July 24: 3 weeks): fix the Bessel functions. Some ideas include testing other libraries in addition to Amos (the library currently in use) and trying to implement new algorithms. Add tests and revise the documentation.
• Final part (July 29 - August 21: 3 weeks): add tests for the other special functions to make sure they work properly, try to fix the problems that may be found, and revise the documentation if needed.

May 28, 2017

Joel Dahne

Timeline

This post will be about specifying a timeline for my work this summer. As I mentioned in the introductory post, the work will be about implementing support for higher dimensional arrays in the interval package. To begin with I have divided the work into 5 different parts:

1. Construction and Printing

This part will be about implementing functions for creating and printing arrays. It will mainly consist of modifying the standard constructor and all the different functions used for printing intervals so they can handle higher dimensional arrays.

2. Vectorized Functions

Here I will work on generalizing all functions supporting vectorization to also support arrays of higher dimensions.

3. Folding Functions

Here I will work on generalizing all functions implementing some sort of folding to support higher dimensions. By folding I mean taking a multidimensional array as input and returning an array of lower dimension. Examples of these functions are $sum$, $prod$ and $dot$.

4.
Plotting

I'm not sure what support Octave has for plotting higher dimensional arrays, but if there are some functions which could also be useful for intervals I will try to implement them here.

5. Documentation

I'll write the documentation alongside the rest of the work. In the end I will try to add some usage examples and integrate it better with the standard documentation.

So these are the parts into which I have divided my work, and the timeline will be:

• Phase 1 (30/5 - 30/6)
  • Week 1: Setting up
  • Week 2-3: Construction and Printing
  • Week 4: Vectorized Functions
• Phase 2 (3/7 - 28/7)
  • Week 5: Continue on Vectorized Functions
  • Week 6-7: Folding Functions
  • Week 8: Plotting
• Phase 3 (31/7 - 25/8)
  • Week 9: Continue on Plotting
  • Week 10-11: Documentation
  • Week 12: Finishing up!

My first week will be rather short since I have an exam in the middle of the week. After that I will also be away for a week, not working on the project; I have not counted that week in the timeline above.

May 21, 2017

Joel Dahne

Introduction

Hi! I'm Joel Dahne and I will be using this blog to share the progress of my work for Google Summer of Code 2017 under GNU Octave, where I will be working on improving the interval package.

About me

Currently I'm a master's student in mathematics at Uppsala University, Sweden. I'm just about to complete my first year and have one more to go. I first encountered interval numerics with my bachelor thesis, during which I tried to generalize the method described in this paper to higher dimensions. The code from the project is available on GitHub, https://github.com/Urathai/generateZeros. Working on that project ignited my interest in interval numerics and I want to work to increase its availability to the common user.

The project

In short, my project is about adding support for N-dimensional arrays in the interval package. At the moment the package only has support for up to 2-dimensional arrays.
Most of the time this is enough but sometimes it is limiting. The goal is to have the syntax identical to the one for normal arrays. Switching from floats to intervals should be as easy as adding $infsup$ in front of the variable.

During my time preparing I have also noticed that some of the folding functions (e.g. $sum$, $prod$ and $dot$) handle some edge cases differently than the standard implementation. Hopefully this can be resolved at the same time.

This blog

During my project I will try to update this blog with my current progress. It's my first time writing a blog, so we will see how it goes. The next step for me is to create a more detailed plan of how I should structure my work during the summer; this will most likely be the subject of my next post.

May 16, 2017

Enrico Bertino

Introductory Post

Hello, I'm Enrico Bertino, and this is the blog I'll be using to track my work for Google Summer of Code 2017. The project will consist of redoing the neural network package with a focus on convolutional neural networks. Here is my proposal.

Something about myself: I'm a Mathematical Engineering student and I live in Milan. After a BSc at Politecnico di Milano I did a double master's degree at both Ecole Centrale de Nantes, France, and Politecnico di Milano. In the two years in France I attended engineering classes with a specialization in virtual reality, and in Italy I am enrolled in the last year of the MSc, majoring in Statistics and Big Data.

During my project, I will leverage the Pytave project in order to use Python code for the implementation of the package. This is almost necessary because nowadays there is a strong tendency toward open research in the deep learning field, where even the biggest companies release their frameworks and open their code. The level of these resources is very high, mostly for frameworks and APIs in Python, such as Tensorflow.
This integration will allow us to keep up continuously with improvements in neural networks while putting greater focus on the Octave interface. In this sense, letting Octave coexist with Tensorflow, we will exploit their synergy and reach consistent goals quickly.

As of now, I have tested the integration of Tensorflow via Pytave. I have come across some problems with variable initialization on Ubuntu 16.04, whereas everything works fine on macOS Sierra. I will perform different tests in order to be sure it works before starting the implementation of the package. Here is my repository.

May 13, 2017

Michele Ginesi

expint

The exponential integral

This is the first function I worked on in Octave: you can find the discussion here. Even though it will not be part of my GSoC project, I think it is interesting to show how the function has been rewritten.

Definition and main properties

The canonical definition of the exponential integral is $$E_i (z) = -\int_{-z}^{+\infty} \frac{e^{-t}}{t}dt$$ but, for Matlab compatibility, expint computes the function $$E_1(z) =\int_{z}^{+\infty}\frac{e^{-t}}{t}dt.$$ The two definitions are related, for positive real values $z$, by $$E_1(-z) = -E_i(z) - \mathbb{i}\pi.$$ More generally, the function $E_n$ is defined as $$E_n(z) = \int_1^{+\infty}\frac{e^{-zt}}{t^n}dt$$ and the following recurrence relation holds true: $$E_{n+1}(z) = \frac{1}{n}\left(e^{-z}-zE_n(z)\right).$$ Moreover $$E_n(\bar{z}) = \overline{E_n(z)}.$$

Computation

To compute $E_1$, I implemented three different strategies:

• Taylor series (Abramowitz, Stegun, "Handbook of Mathematical Functions", formula 5.1.11, p 229): $$E_1(z) = -\gamma -\log z -\sum_{n=1}^{\infty}\frac{(-1)^nz^n}{nn!}$$ where $\gamma\approx 0.57721$ is Euler's constant.
• Continued fractions (Abramowitz, Stegun, "Handbook of Mathematical Functions", formula 5.1.22, p 229): $$E_1(z) = e^{-z}\left(\frac{1}{z+}\frac{1}{1+}\frac{1}{z+}\frac{2}{1+}\frac{2}{z+}\cdots\right)$$ or, in a more explicit notation, $$E_1(z)=e^{-z}\frac{1}{z+\frac{1}{1+\frac{1}{z+\frac{2}{1+\frac{2}{z+\ddots}}}}}$$ This formulation has been implemented using the modified Lentz algorithm ("Numerical Recipes in Fortran 77", p 165).

• Asymptotic series ("Selected Asymptotic Methods with Application to Magnetics and Antennas", formula A.10, p 161): $$E_1(z)\approx \frac{e^{-z}}{z}\sum_{n=0}^{N}\frac{(-1)^n n!}{z^n}.$$ A difficulty with this approximation is that the series is divergent, which is why the sum goes up to a certain "big" $N$ and not up to infinity.

Then I tested them in the square $[-100,100]\times[-100,100]$ of the complex plane, comparing the results with the ones given by the Octave symbolic package. With this information, I identified three zones of the complex plane: in each zone one strategy is better than the others. The final function thus divides the input into three parts, according to where each element lies in the complex plane, and computes the function using the appropriate strategy.

May 10, 2017

Michele Ginesi

Introduction to the project

About me

My name is Michele Ginesi. I obtained a Bachelor's degree in Applied Mathematics in Verona, Italy. Now I am getting a Master's degree in Mathematics at the same university. I was selected to participate in the Google Summer of Code under GNU Octave for the project Make specfuns special again.

About the project

Special functions are an interesting and important topic in Mathematics, so it is fundamental to have a way to compute them accurately. Some examples of special functions (with some important applications of the same) are:

• Gamma function $\Gamma$.
This is one of the most important: it can be viewed as the extension of the factorial function ($\Gamma(n)=(n-1)!$, for $n\in\mathbb{N}$) and it is a component in various distribution functions in probability theory.

• Beta function $B$. This was the first known scattering amplitude in string theory.

• Bessel functions. These are the canonical solutions $u(r)$ of Bessel's differential equation $$r^2\frac{d^2u}{dr^2}+r\frac{du}{dr}+(r^2-\alpha^2)u=0$$ for any complex number $\alpha$. At first it may seem that they are an end in themselves, but actually the Bessel equation describes the radial component of the two dimensional wave equation ($u_{tt}-c^2\Delta u = 0$) applied on a disc.

The most common strategies used to approximate special functions are Taylor series and continued fractions, sometimes asymptotic series. Some particular functions can be implemented via recurrence formulas or other relations with other functions (for example, $B(z,w) = \Gamma(z)\Gamma(w)/\Gamma(z+w)$).

This project will be divided into three main parts:

1. Fix the already known bugs related to special functions (e.g. #48316, #47800, #48036).

2. Once the known bugs are fixed, I will proceed to add new tests to make sure that all the functions are accurate.

3. Fix the new problems/bugs that may be found during the second phase.

The main references for this work will be the Handbook of Mathematical Functions by Irene Stegun and Milton Abramowitz, which contains all the functions I will work on together with (almost) all the expansions that are needed to implement them, and the Handbook of Continued Fractions for Special Functions by Annie Cuyt, Vigdis Brevik Petersen, Brigitte Verdonk, Haakon Waadeland and William B. Jones. To test the functions I will use, in addition to the Handbook of Mathematical Functions, Sage and the Octave symbolic package to get reference values.

Here you can find my repository on which I will work.

March 14, 2017

John W.
Eaton

Timeline of Octave Releases

Someone asked me in private email why randi isn't available in Octave. They were using 3.2.4 on Ubuntu 12.something and quoted a 6-year old email from the mailing list that said it was added in Octave 3.4. Maybe they are a time traveler from the past? Anyway, after fumbling around for the n-th time trying to find out when 3.2.4 was released (the answer is, a long time ago) I created this page to make those kinds of searches easier.

Silent Running, or, Back to the X-Term of the Future

I bought one of these: put 8GB of memory and a 120GB SSD in it and mounted it on the wall. So now I have (more or less) a modern-day quad-core X-Terminal with a gigabit network connection running Debian and Gnome.

Next, I moved my former noisy, heat-generating deskside system to the basement and, voila, sweet silence! All I hear now is the residual ringing in my ears that is probably the result of attending too many loud rock-n-roll shows in my youth. But that's a story for a different time…

My home directory remains on the system that used to sit beside the desk. It has a 6-core AMD CPU and a pair of disks in a RAID-1 configuration. Accessing the files over the gigabit network seems fine, at least for normal usage. And even though the new ZBOX has pretty modest specs, it is more than sufficient for most of the things I do: email, web browsing, and running Octave for testing. I don't use it for building Octave. For that I have a couple of other systems also located down in the basement where I can't hear them.

Total cost was about $250.  I figured that wasn't too much to pay for silence.  And if it turned out that the performance wasn't great, I could always save up a few more pennies to buy a different model that has a faster processor and/or more memory and then reuse this box as a firewall – it does have 2 separate ethernet ports, as if it were intended to be used for that purpose.
Or maybe as an HTPC since it also has an HDMI output and Intel graphics that are supposed to support up to 4k graphics (I don’t know, my monitor works great with this system but it is just 1980 x 1200).

Setup was pretty easy and Debian installed without any problems from a USB stick.

It takes about 35 seconds to boot up to a gdm3 login screen.  That includes a few seconds each for POST and grub delays.  I could eliminate those and save a few seconds but then it would be harder to enter the BIOS or change boot parameters.  And how often do I reboot anyway?

From the login screen to a terminal window is another 15 seconds or so.  And that’s all with a 1.6GHz Celeron processor.  If I’d splurged and spent a few hundred more for the Core i5 version, I guess I’d be up and running in about 30 seconds?

August 30, 2016

Cristiano Dorigo

CGS, TFQMR and final revision of all the codes

Dear all,

During these weeks I worked on improving CGS, which I rewrote from scratch. Then I implemented TFQMR, which was not available in Octave. Finally, I revised the code I had improved in the past weeks.
All the revised files (gmres, pcg, bicg, bicgstab) and the new ones (cgs and tfqmr) are available here:

https://CrisDorigo@bitbucket.org/CrisDorigo/socis16-octave-iter_meth

in the folder definitive_codes

CGS

As written in my second post, the current Octave CGS implementation is not very clear. Indeed, I did not understand most of the code (I was not able to recognize how the algorithm was adapted). Moreover, I tested it and the results are not correct (my second post contains an example of this).
I therefore decided to rewrite it from scratch, closely following the algorithm given by Saad in the suggested reference book.
I wrote it following the pattern that I used in the other codes (i.e. three subscripts, one for each of the different cases: all matrices, all function handles, or mixed). It also has the same characteristics as the other improved methods, for example: the approximation returned is the one with the minimum residual, and there are five types of flag (0 convergence, 1 maximum number of iterations reached, 2 preconditioner singular, 3 stagnation, 4 division by zero during the process). I will not go deep into these details here, to keep this post short, since they are already explained in my previous posts.
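The calling convention described above (a solution plus a convergence flag) can be tried out with any CGS implementation; as a rough illustration, here is a sketch in Python using SciPy's `cgs` on a small nonsymmetric system. This is SciPy's solver, not the Octave code discussed in this post, and its `info` flag is coarser than the five Octave flags (0 means convergence, a positive value means the iteration limit was reached).

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cgs

# Small nonsymmetric, diagonally dominant tridiagonal test system.
n = 50
A = diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# cgs returns the approximation and a convergence flag.
x, info = cgs(A, b, maxiter=200)

residual = np.linalg.norm(A @ x - b)
print(info, residual)  # info == 0 on convergence
```

For a diagonally dominant matrix like this one, CGS converges quickly; on harder nonsymmetric systems its residual history can oscillate strongly, which is one motivation for TFQMR below.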

TFQMR

In my last post I wrote that the mentors told me to implement a new algorithm not available in Octave. Since I had no suggestion about which method to write, and the community had no particular needs, I chose the transpose-free QMR algorithm because it is the only method treated in the Saad book that is not available in Octave.
Also for this algorithm I used the same pattern as the others (i.e. the three subscripts) and made it consistent so that it has the same types of input and output as the others.

A note about the number of iterations actually performed: in the Saad book, the method behaves differently depending on whether the iteration number is even or odd, but the approximation is computed at every iteration. Running some tests with Matlab's tfqmr, I noticed that when the method converges, the vector of residuals is twice as long as the number of iterations that Matlab reports as necessary for convergence. I therefore think that Matlab counts the block (odd iteration + even iteration) as a single iteration, i.e. Matlab_it = odd_Saad_it + even_Saad_it, so the effective iterations performed are two times the reported ones. I decided to count the iterations in the same way to be compatible.
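For comparison, SciPy also ships a transpose-free QMR solver with the same (x, info) convention; a minimal sketch with `scipy.sparse.linalg.tfqmr` (available since SciPy 1.7), again unrelated to the Octave implementation described in this post:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import tfqmr

# Same nonsymmetric tridiagonal test system as before.
n = 50
A = diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Like cgs, tfqmr returns (x, info); info == 0 signals convergence.
x, info = tfqmr(A, b)
print(info, np.linalg.norm(A @ x - b))
```

TFQMR typically produces a much smoother residual history than CGS on the same system, at essentially the same cost per iteration and, like CGS, without needing products with the transpose of A.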

LAST REVISION

Finally, I checked (one more time) the codes that I improved in these weeks (i.e. pcg, gmres, bicg, bicgstab).
I updated the documentation (for example: making it all similar, giving more details about preconditioning, adding the reference book, ...).
Then I went over the code, trying to make it more readable and to adapt it to the Octave conventions.
I also added tests to all the codes, mostly to check that all the subscripts work (i.e. every combination of A matrix/function, M1 matrix/function, M2 matrix/function), but also to check that the flags work correctly (only flags 0, 1 and 2, since the others are difficult to reproduce) and that these methods also solve complex linear systems.

The mentors suggested that I make a patch with the improved methods (pcg, gmres, bicg, bicgstab and cgs), so that reviewing them is easier and, if there is time, (maybe) they can be included in Octave 4.2.
This patch is available here

https://CrisDorigo@bitbucket.org/CrisDorigo/socis16-octave-iter_meth

and it is the file SoCiS16_Improve_iterative_methods.diff

I made this patch from the Octave changeset 22406:dc4e2203cd15 of the Octave 4.1.0+ version.

THANKS TO OCTAVE

Since the official deadline for the SoCiS is tomorrow (the 31st of August), this is my last post on this blog.

To give a small summary: during this summer of code I improved most of the iterative methods for solving linear systems. In particular I improved pcg, gmres, bicg, bicgstab and cgs.
For each of these methods I wrote three subscripts to make them more efficient, separating the particular cases "matrix", "function handle" and "mixed", so that, according to the input data, the method uses the appropriate subscript for its computations.
I tried to make these methods more compatible with Matlab (for example, noticing that Matlab returns the approximation with the minimum residual and not the last one computed, as Octave used to, or adding the possibility of "flag = 2", i.e. singular preconditioner).
I tried to fix some unclear situations that I noticed: for example, pcg did not detect that a matrix was not Hermitian positive definite when the matrix was complex, and bicg, bicgstab and cgs had to face the problem of a division by zero.
Then I implemented tfqmr, which is available in Matlab but not in Octave.

Unfortunately, there are two other methods that need improvement (pcr and qmr), but due to unexpected problems with the above methods I did not have enough time to improve them too. When my school commitments are lighter, I want to try to improve these two methods as well.

I want to thank the whole Octave community for giving me this fantastic opportunity to face a big project like this and to learn many theoretical and practical things about these iterative solvers for linear systems.
I especially want to thank the mentors, who were always available with suggestions, advice and ideas on how to proceed with the work, above all when unexpected problems arose.
I hope this work will be useful to Octave; please contact me if something is unclear or wrong.
I also hope to keep helping the Octave community in the future.

One more thanks to all of you.

August 23, 2016

Francesco Faccio

Summary of the work - GSoC

Hello!

In this final blog post I want to summarize the work done so far for GSoC. First of all I want to thank the Octave community for their support:
whenever I had a problem, I always found someone on the mailing list or in the IRC channel willing to help me.

In my public repository I pushed three final commits:

https://bitbucket.org/Francesco_Faccio/octave-gsoc/commits/all

The first commit I pushed regards the work we did in the first part of GSoC: it is a patch for Octave which allows including the IDA module of the Sundials library as a dependency of Octave.

I wrote some configuration tests, checking for sundials_ida and sundials_nvecserial headers and functions, and a macro to test whether sundials_ida has been configured with double-precision realtype. I included a test for the KLU library in SuiteSparse, because it is required for Sundials' sparse solver.
If all these conditions are satisfied, you can use ode15i and ode15s in Octave (see my previous post on how to configure Sundials and Octave correctly and how to run benchmark problems).

The second commit regards my work on the solvers ode23 and ode45, which were already in Octave.
Since there was a lot of duplicated code, we decided to refactor the solvers, exporting the option checking into two private functions:

$[defaults, classes, attributes] = odedefaults (n, t0, tf)$

is a function which returns a struct defaults, containing the default values for all the solvers, and structs classes and attributes, containing the arguments for validateattributes.

$options = odemergeopts (useroptions, options, classes, attributes, fun_name);$

is a function which loops over the default fields and, if a user provides a new value, uses validateattributes or validatestring to test the value and overwrites the default one.

odeset.m and odeget.m have been modified so that the user can add new special fields to an ODE option structure. They no longer depend on the private functions ode_struct_value_check and known_option_names, but use inputParser to perform input checking. This version of odeset doesn't yet catch misspelled option names, so 'Rel' doesn't automatically become 'RelTol'.
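The validate-and-merge idea behind odedefaults/odemergeopts is easy to picture outside Octave; here is a hypothetical Python sketch of the same pattern (plain dicts standing in for Octave structs, simple predicates standing in for validateattributes), not the actual private functions:

```python
def merge_options(user_options, defaults, validators):
    """Return defaults overridden by validated user-supplied options."""
    options = dict(defaults)
    for name, value in user_options.items():
        if name not in defaults:
            raise ValueError(f"unknown option: {name}")
        if not validators[name](value):
            raise ValueError(f"invalid value for {name}: {value!r}")
        options[name] = value          # user value wins over the default
    return options

# Illustrative defaults mirroring a few ODE options from this post.
defaults = {"RelTol": 1e-3, "AbsTol": 1e-6, "MaxOrder": 5}
validators = {
    "RelTol":   lambda v: isinstance(v, float) and v > 0,
    "AbsTol":   lambda v: isinstance(v, float) and v > 0,
    "MaxOrder": lambda v: v in (1, 2, 3, 4, 5),
}

opts = merge_options({"RelTol": 1e-6}, defaults, validators)
print(opts["RelTol"], opts["AbsTol"])  # 1e-06 1e-06
```

Note that, exactly as described for odeset above, this sketch rejects an unknown option name outright rather than trying to complete a misspelled one like 'Rel'.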

We wrote function decic, which can compute consistent initial conditions for ode15i, close to initial guesses y0 and yp0.

The third commit contains the files ode15i.m and ode15s.m and a common interface, __ode15__.cc.

ode15i

The main functionalities of ode15i have been achieved. Here's a list of the options accepted by the solver:

RelTol: You can specify a relative error tolerance as a positive scalar. The default value is 1e-3.

AbsTol: You can specify an absolute error tolerance as a positive scalar or vector. The default value is 1e-6.

NormControl: Sundials controls the relative error using the norm of the solution at each step, so option NormControl is always "on".

OutputFcn, OutputSel: You can specify them to plot the components of the solution you want after each successful integration step. The default output function is $@odeplot$, but you can provide a custom output function.

Refine: You can increase the number of output points using the refine factor. __ode15__.cc uses the IDAGetDky function to reach the highest possible order of interpolation.

Stats: Only standard statistics are printed (number of successful steps, number of failed attempts and the number of function evaluations).

Events: If you provide an event function, it must be of the form $[value,isterminal,direction] = EventFcn(t,y,yp)$. The solver localizes a sign change of your event function and uses linear interpolation to estimate the time when the event occurs. A way to improve this functionality would be to use the rootfinding feature of Sundials in order to get a higher order of interpolation.
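Event handling of this kind (detect a sign change of the event function, then refine the crossing time) is the same mechanism SciPy's solve_ivp uses; a small illustrative sketch, not the Octave/Sundials code from this post:

```python
from scipy.integrate import solve_ivp

# y' = -1 with y(0) = 1: the solution y(t) = 1 - t crosses zero at t = 1.
def rhs(t, y):
    return [-1.0]

# The event function changes sign at the crossing; the solver then
# refines the sign change to locate the event time precisely.
def hit_zero(t, y):
    return y[0]

hit_zero.terminal = True    # isterminal: stop integrating at the event
hit_zero.direction = -1     # only report decreasing crossings

sol = solve_ivp(rhs, [0.0, 5.0], [1.0], events=hit_zero)
print(sol.t_events[0][0])   # event time, close to 1.0
```

The `terminal` and `direction` attributes play the same role as the isterminal and direction outputs of an Octave EventFcn.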

MaxStep: Use this option to specify an upper bound for the integration step. The default value is $0.1*abs(tf - t0)$.

InitialStep: Use this option to specify the first integration step of the solver. Its default value depends on the tspan and the initial slope.

Jacobian: You can specify the Jacobian as a function which returns the modified Jacobian $[\frac{\partial F}{\partial y}, \frac{\partial F}{\partial y'}] = jac (t, y, y')$ or as a cell array containing the two constant Jacobian matrices $\{[\frac{\partial F}{\partial y}], [\frac{\partial F}{\partial y'}] \}$.

In both cases, if the two parts of the Jacobian are sparse, the Jacobian will be stored in CCS format.
If you don't provide the Jacobian, it will be estimated internally by Sundials using a difference-quotient approximation.
In this implementation you cannot yet provide only one part of the Jacobian; you have to specify both parts.

JPattern, Vectorized: These options haven't been developed yet.

MaxOrder: You can specify an upper bound for the order of the solver. The default value is 5.

This solver can be extended in order to increase compatibility with Matlab by:
- completing the missing options
- allowing the user to solve equations with complex numbers
- allowing the user to use single-precision variables.
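As a small sketch of how the options above fit together, here is a hedged example (not the official test suite) that calls ode15i on the classical Robertson DAE; the tolerance values are only illustrative:

```octave
% Minimal sketch: solving the Robertson DAE with ode15i,
% exercising RelTol, AbsTol and MaxOrder from the option list above.
robertson = @(t, y, yp) ...
  [ -0.04*y(1) + 1e4*y(2)*y(3) - yp(1);
     0.04*y(1) - 1e4*y(2)*y(3) - 3e7*y(2)^2 - yp(2);
     y(1) + y(2) + y(3) - 1 ];

opt = odeset ("RelTol", 1e-4, "AbsTol", [1e-8, 1e-14, 1e-6], ...
              "MaxOrder", 5);
y0  = [1; 0; 0];          % consistent initial values
yp0 = [-0.04; 0.04; 0];   % consistent initial slopes
[t, y] = ode15i (robertson, [0, 4e10], y0, yp0, opt);
```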

ode15s

This solver uses the same dld-function as ode15i. Inside the m-file, the semi-explicit DAE is rewritten as an implicit one and solved as ode15i does.
The solver accepts the same options as ode15i, with the following additions:

NonNegative: This option has not been implemented yet. It could be implemented using Sundials' IDASetConstraints function.

Events: Same as ode15i, but it must have the form $[value,isterminal,direction] = EventsFcn(t,y)$.

Jacobian: You can provide a function which returns the Jacobian of the form $[\frac{\partial F}{\partial y}] = jac(t,y)$ or a constant Jacobian matrix.

Mass: You can provide a constant Mass matrix or a function handle of the form $M(t,y)$ or $M(t)$. If the Mass matrix depends on the state, the Jacobian matrix will be computed internally by Sundials. If you provide a sparse Jacobian and a sparse Mass matrix, they will be stored in CCS format.

MStateDependence: Set MStateDependence to "strong" or "weak" if the Mass matrix has the form $M(t,y)$. Use the value "none" if the Mass matrix has the form $M(t)$ or is constant.

MvPattern: This option has not been developed yet.

MassSingular: This option has not been developed yet. The solver currently handles problems with singular and nonsingular mass matrices in the same way.

InitialSlope: You can specify the initial slope of the problem. The default value is a vector of zeros.

BDF: This implementation of ode15s always uses the BDF method.
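To illustrate the Mass-related options above, here is a hedged sketch; the system and its matrices are invented purely for illustration:

```octave
% Sketch: a semi-explicit DAE  M * y' = f(t, y)  solved with ode15s,
% using the Mass, MStateDependence and InitialSlope options above.
M = [1 0; 0 0];                        % singular constant mass matrix
f = @(t, y) [y(2); y(1) + y(2) - 1];   % illustrative right-hand side
opt = odeset ("Mass", M, "MStateDependence", "none", ...
              "InitialSlope", [0; 0]);
[t, y] = ode15s (f, [0, 1], [1; 0], opt);   % y0 satisfies the constraint
```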

This solver can be extended in order to increase compatibility with Matlab by:
- completing the missing options
- allowing the user to solve equations with complex numbers
- allowing the user to use single-precision variables.

This code has not been merged yet, since Octave is concluding a release process.

In conclusion, I really enjoyed working on this project, and there is still a lot of work to do. In the next weeks I will work to extend the functionality of ode15i and ode15s. I plan to continue contributing to Octave development, perhaps both on the ode solvers and on some statistics project.

Chiara Segala

Final Evaluation

Dear all,

GSoC 16 is coming to an end; I want to thank you for giving me the opportunity to take part in this unique experience.

My goal was to add two solvers to the odepkg package, two exponential integrators, which belong to a class of numerical methods for the solution of partial and ordinary differential equations.

• Here is a summary of the work done during these months:

For the midterm evaluation (repository of the midterm work):

After a first period of study, I started with the implementations.
I began with the implementation of the φ functions (phi1m.m, phi2m.m, phi3m.m, phi4m.m), which are necessary to calculate the matrix functions in the two schemes (exprk.m, exprb.m) for general exponential Runge-Kutta and Rosenbrock integrators. These schemes have been very useful to verify the correctness of the official methods. I applied them to four different exponential methods (exprk2.m, exprk3.m, exprb3.m, exprb4.m).
Then I wrote the code that implements the augmented matrix Ã (augmat.m) described in Theorem 2.1 of [HAM 11]. This matrix is given as input to expmv and allows us to implement the exponential integrators by evaluating a single exponential of the augmented matrix, avoiding the need to compute any φ functions.
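The augmented-matrix idea can be illustrated with a small sketch using Octave's built-in expm (the matrix A and vector b are arbitrary): for φ1 alone, the exponential of the augmented matrix contains φ1(A)b in its upper-right block, so a single exponential replaces an explicit φ function.

```octave
% Illustration: exp([A b; 0 0]) holds phi1(A)*b in its top-right block.
A = [-2 1; 0 -1];
b = [1; 1];
n = rows (A);
E = expm ([A, b; zeros(1, n), 0]);
phi1_b = E(1:n, n+1);                       % phi1(A)*b
phi1_ref = A \ (expm (A) - eye (n)) * b;    % check: phi1(z) = (e^z - 1)/z
```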

For the final evaluation (REPOSITORY OF THE FINAL WORK):

After the midterm evaluation, I implemented the official exponential method that will be in odepkg: an adaptive-time-stepping third-order method from the Runge-Kutta family. Unlike the general schemes, for this code I used the function expmv so that I could evaluate a single exponential of an augmented matrix instead of computing any φ functions. I wrote the code for the stepper (exp_runge_kutta_23) and for the solver (odexprk23), according to odepkg standards.
Then I changed the file augmat.m to calculate a "double" augmented matrix; in this way I have been able to compute, with a single call to expmv, both the solution at the next step and the lower-order solution for the estimation of the error.
In the same way I implemented the official fourth-order exponential method with adaptive time stepping from the Rosenbrock family (exp_rosenbrock_34, odexprb34).
Finally, I tested the order of my methods and translated into C++ a for loop (__taylor__.cc) of Nick Higham's expmv function to improve its speed.

• In the repository of the final work you will find a patch for odepkg.

I added the following files to the package:

/inst/odexprk23.m : Solve a set of stiff Ordinary Differential Equations (stiff ODEs) with a third-order exponential Runge-Kutta method.
/inst/odexprb34.m : Solve a set of stiff Ordinary Differential Equations (stiff ODEs) with a fourth-order exponential Rosenbrock method.
/inst/steppers/exp_runge_kutta_23.m : Stepper for the odexprk23 solver.
/inst/steppers/exp_rosenbrock_34.m : Stepper for the odexprb34 solver.
/inst/utilities/expmv.m : Compute the matrix exponential times vector or matrix (2010 Awad H. Al-Mohy and Nicholas J. Higham).
/inst/utilities/augmat.m, estimates.m, expmv_tspan.m, select_taylor_degree_aug.m, NormAm.m, select_taylor_degree.m, theta_taylor, theta_taylor_half, theta_taylor_single : Functions related to expmv.
/src/__taylor__.cc : for loop of expmv.

• I exported the patch from
odepkg changeset 542:3f74a6e911e1,
Octave version 4.1.0+, changeset 22348:9deb86bb5632.
My patch exponential_integrators.diff requires a changeset >= 22345.
To apply the patch, follow these commands:

hg import exponential_integrators.diff
hg archive ../odepkg-0.9.1.tar.gz

run Octave 4.1.0+

pkg install odepkg-0.9.1.tar.gz

To test the patch, run:

demo odexprk23
demo odexprb34 (slightly slower)

If everything goes well, each demo produces a plot showing the convergence of the methods.

•  The demos give the following warnings:
warning: odexprk23: unknown field 'LinOp' in ODE options
warning: odexprk23: unknown field 'GFun' in ODE options

warning: odexprb34: unknown field 'DFdt' in ODE options
I haven't fixed these warnings because another GSoC 16 student is working on the package and because they are outside the scope of my project, but the problem can quickly be solved collaboratively after the official end of this experience.

GNU Octave – Geometry Package – GSoC 2016

This repository (https://bitbucket.org/amr_keleg/octave-geometry) is a clone (fork) of the Geometry package, which is part of the Octave Forge project (http://octave.sourceforge.net/).

This fork adds new functions to the official Geometry Package as part of GSoC (Google Summer of Code) 2016.

The official Geometry package can be found here: https://sourceforge.net/p/octave/geometry/ci/default/tree/

• polyjoin: Convert cell arrays of polygons into NaN-delimited column vectors.
• polysplit: Convert NaN-separated polygon vectors into cell arrays of polygons.
• ispolycw: Check whether the polygon(s) have clockwise or counterclockwise orientation.
• poly2cw: Convert polygons to clockwise contours (polygons).
• poly2ccw: Convert polygons to counterclockwise contours (polygons).
• polybool: Perform Boolean operations on polygons.
Valid operations are: (intersection, and, &), (union, or, |, +, plus), (subtraction, minus, -), (exclusive-or, xor).
• poly2fv: Convert polygons into a faces and vertices representation.
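As a hedged usage sketch of the functions above (coordinates invented; polybool follows the Matlab-style calling form "polybool (operation, x1, y1, x2, y2)"):

```octave
% Intersect and unite two overlapping squares (clockwise outer rings,
% following the package conventions).
x1 = [0 0 2 2];  y1 = [0 2 2 0];   % clockwise unit-aligned square
x2 = [1 1 3 3];  y2 = [1 3 3 1];   % overlapping clockwise square
[xi, yi] = polybool ("intersection", x1, y1, x2, y2);
[xu, yu] = polybool ("union", x1, y1, x2, y2);
```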

Timeline

Community Bonding:

As mentioned in my proposal, I started working on the project three weeks before the official start of the GSoC program to compensate for the three weeks that overlap with my final examinations.
I started by implementing the polysplit and polyjoin functions, as they don't depend on other functions.
These functions change the representation of the polygons from one form to another.
Since the polysplit and polyjoin functions don't depend on Boost Geometry, they are implemented as .m files and placed in the inst/polygons2d/ directory.

First Coding Period:

During the first two weeks of this coding period, I continued the implementation of the polysplit/polyjoin functions, adding help messages and regression tests.
Then, I worked on the first function implemented as a .cc file: the ispolycw function.

As all the package functions had been implemented as .m files, I created a new src/ directory so that I could add .cc functions to the package.

The ispolycw function depends on calculating the signed area of the polygon using the Boost Geometry function area(polygon).
The orientation of self-intersecting polygons is defined as the respective orientation at the leftmost top point. If these points are collinear, the signed area of the polygon is used to determine its orientation.
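The signed-area test can be sketched in a few lines of Octave (an illustration of the idea, not the package's Boost-based code):

```octave
% Orientation via the signed (shoelace) area: a negative signed area
% corresponds to a clockwise ring under the package conventions.
x = [0 0 3 3];
y = [0 3 3 0];
a = sum (x .* circshift (y, -1) - circshift (x, -1) .* y) / 2;
is_cw = (a < 0);   % true for this clockwise square
```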

To compile the ispolycw function and generate a .oct file that can be run by Octave, a new Makefile was added to the src/ directory.
The Makefile was initially based on the sockets package's Makefile
(https://sourceforge.net/p/octave/sockets/ci/default/tree/src/Makefile).

Two weeks after the start of GSoC we were already ahead of schedule, so we decided to start working on the polybool function, as it is the main function of the whole project.
We decided to start by implementing polybool to support Boolean operations between only two polygons.

Second Coding Period:

By the start of the second coding period, we had changed the implementation of the polybool function to use the Boost Geometry multipolygon concept, so that it can support applying Boolean operations to multiple polygons with inner holes.

During the first three weeks of the second coding period, simple benchmarks were used to test the polybool function,
and consequently multiple implementation bugs were fixed. One of these bugs was that a single inner ring was initially matched to multiple outer rings that surround it.
A boolean array is now used to ensure that each inner ring is matched to exactly one outer ring.

Then, I wrote two simple functions, poly2cw and poly2ccw, which toggle the orientation of polygons and thus convert inner rings (holes) into outer ones and vice versa.

Then, in the fourth week of the second coding period, I decided to gain a better understanding of Makefiles: how to use them to build only changed files, how to organize dependencies between targets, and how to use variables.
I also tried to use the dissolve extension of Boost Geometry to handle self-intersecting polygons.
It took me a few days to get it working, as the Boost Geometry version on my system (Ubuntu 14.04) was 1.54, while the dissolve extension is developed against Boost Geometry 1.60.
After that, I adapted the ClipperLib benchmarks to Octave so that we can compare my implementation of these functions to the performance of ClipperLib and other programs.

By the end of July, I started working on the poly2fv function, which triangulates polygons so that they can be better visualized using the patch command.
This function returns a 2D matrix containing the triangles' vertices and another 2D matrix containing the vertex indices of each triangle.

For the first two weeks of August, I reviewed all the newly implemented package functions, added missing regression tests, improved the help messages, added more input format checks, and added warning messages for when some Boolean operations fail, mainly because of self-intersections that can't be resolved.

Then, during the last two weeks of the program, I had to learn how to use autotools so that I could use a configure script to test whether the Boost library is installed on the system and check whether the dissolve function can be used to resolve self-intersecting polygons.
It was a great experience to use autotools to manage the generation of the final Makefile that is required to generate .oct files from the implemented .cc files.

Week / Task / Working hours per week:

• 22-4 / 29-4: Understanding the organisation conventions – preparing a complete plan for each function (input/output requirements – optimal implementation based on memory/time metrics) – 20 hours/week
• 30-4 / 6-5: Implementing the polysplit function and testing it – 20 hours/week
• 7-5 / 13-5: Implementing the polyjoin function and testing it – 14 hours/week
• 14-5 / 10-6: * Finalizing the implementation of the conversion functions (polysplit, polyjoin) – 7 hours/week
• 11-6 / 19-6: Implementing and testing the ispolycw function * – 40 hours/week
• 20-6 / 27-6: Midterm evaluation week – 40 hours/week
• 25-6 / 1-7: Working on the poly2cw and ispolycw scripts – 40 hours/week
• 2-7 / 8-7: Thorough testing of all the implemented functions – 30 hours/week
• 9-7 / 15-7: Working on the polybool script – 30 hours/week
• 16-7 / 22-7: (continued) – 40 hours/week
• 23-7 / 29-7: Implementing and testing the poly2fv function – 40 hours/week
• 30-7 / 5-8: A two-week buffer for finalizing the scripts, debugging them and writing the documentation – 40 hours/week
• 6-8 / 12-8: (continued) – 40 hours/week
• 13-8 / 23-8: Tidying the code, writing tests, improving the documentation and submitting the code sample – 40 hours/week

* Final Examinations – Schedule isn’t announced yet – will inform the mentor whenever it is announced.

Conventions

All the newly added functions follow these conventions:

1. Outer rings of a polygon have a clockwise orientation, while inner rings have a counterclockwise orientation.
2. A polygon is described by two arguments, X and Y. Each argument can be represented using cell arrays or NaN-delimited vectors.
3. The functions don't require the last point to be the same as the first one.

Example: To represent a square with a square hole, as shown in the figure:

— NaN Delimited Vectors:

x=[0 0 3 3 NaN 1 2 2 1];

y=[0 3 3 0 NaN 1 1 2 2];

— Cell Arrays:

x={[0 0 3 3],[1 2 2 1]};

y={[0 3 3 0],[1 1 2 2]};
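The two representations above can be converted into each other with the new functions; a hedged sketch (signatures mirror the Matlab ones):

```octave
% Round-trip between NaN-delimited vectors and cell arrays.
x = [0 0 3 3 NaN 1 2 2 1];
y = [0 3 3 0 NaN 1 1 2 2];
[xc, yc] = polysplit (x, y);    % -> cell arrays, one cell per ring
[xv, yv] = polyjoin (xc, yc);   % -> back to NaN-delimited vectors
```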

Dependencies

polybool & poly2fv functions use the Boost Libraries.

The recommended version is 1.60

Getting Started

To test the updates to the Geometry package:

• Make sure you have Boost Libraries installed.
• Clone the package
hg clone https://amr_keleg@bitbucket.org/amr_keleg/octave-geometry

• Run ./bootstrap in the root directory of the package.
./bootstrap

• Create a tar file
tar -zcf geometry.tar.gz octave-geometry/

• Run Octave.
• In Octave Command Prompt, write
pkg install geometry.tar.gz

• Load the package
pkg load geometry

Tests

All the benchmarks were run on an Ubuntu 16.04 computer with Boost library 1.60.

All the scripts can be found here:

As my blog doesn’t support tables, Kindly check the Tests/Notes which can be found as a single pdf file here:

Notes

- poly2fv depends on the polytri library, which was not designed to throw exceptions when it can't triangulate polygons, so some poly2fv calls may cause Octave to crash.

Abhinav Tripathi

genuinelucifer

During Google Summer of Code 2016, I worked on the Symbolic project under the organisation Octave. The project was aimed at adding a new IPC mechanism to Symbolic – Pytave. During the course of GSoC, work was done in two repositories (octsympy on GitHub and pytave on Bitbucket).

Code Changes:

All of the changes done by me can be viewed at the following links:

Summary of changes:

As we were to use Pytave in Symbolic, we worked on it to make it more mature, adding more functionality to provide a better interface for Symbolic to deal with. Many new features for conversion to and from Python objects were also introduced. The most important addition is @pyobject and its functionality.
On the Symbolic side, we used the features of Pytave as much as possible to our benefit. We currently only have extra conversions for @sym objects, which are specific to Symbolic. For the rest of the Octave types we use Pytave's internal conversion.

Detailed PR wise changes:

To know about changes done before August, please read my other posts on this blog in the sequence: post-1, post-2, post-3 and post-4. Here I will summarise PR wise work done in August only.

In pytave, PRs #33, #31 and #32, which were under review in July, were merged. PR #33 did some cleanup and added tests. PR #31 adapted existing tests, since after the inclusion of @pyobject we now convert Python lists, dicts and tuples to @pyobject in Octave. PR #32 fixed some doctests which either did not work in Python 3 or gave errors due to the latest @pyobject changes.
PR #24 added new functionality to specify a namespace to be passed to the py* functions, ensuring that any changes to symbols/objects happen only in that namespace. This required some refactoring, which was done in the same PR in a separate commit.
Then PR #40 added a method to allow comparison of two pyobjects. This is useful for writing tests that need a desired output.

On the Symbolic side, PR #512 is under review; it updates our existing methods to use the new pytave features. We have had a good amount of changes in pytave which now allow us to do most things without executing any Python code on our own (using pyexec). We now do it using py.* calls.
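A rough illustration of the py* calls mentioned above (behaviour depends on the pytave revision; return types follow the new @pyobject conversions):

```octave
% Sketch of the pytave entry points: pyexec runs code, pyeval
% evaluates an expression, pycall invokes a Python function.
pyexec ("import math");             % execute Python code
v = pyeval ("math.sqrt (2)");       % evaluate and return a value
o = pycall ("range", int32 (5));    % with the new conversion rules,
                                    % this returns a @pyobject
```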

Goals not met:

Almost all the goals of the project were met. Only one major goal was not: building pytave on Windows. Efforts were made towards it, but it later proved more fruitful to add new functionality that would make the projects more mature. I will definitely try to work on this afterwards, and hopefully in the near future we will have a fully working pytave on Windows too.

Barbara Lócsi

Summary

GSoC 2016 is about to end, so in this post I will sum up the work done during this period. I will write about three major topics: eig, gsvd and chol2inv. The original project was about eig, and when it was done we moved on to the others.

eig

The goals of the project were to add the calling forms of the eig function that were missing:
• preliminary balancing ✓
• computing left eigenvectors as a third output ✓
• choosing among generalized eigenvalue algorithms ✓
• choosing among return value formats of the eigenvalues (vector or matrix) ✓

Calling forms:

We finished these goals before the end of GSoC, and most of the work is already merged into the main Octave repository.
In my repository there are some other commits, still to be reviewed, that reduce code duplication through the use of templates.
You can find some information about the individual tasks on this blog in previous posts, and also here:
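For reference, the newly supported calling forms look roughly like this (a hedged sketch following Matlab's eig interface, which the project targets):

```octave
% Sketch of the added eig calling forms (option names per Matlab).
A = magic (3);
B = eye (3);
[V, D, W] = eig (A);               % left eigenvectors as third output
lambda = eig (A, "vector");        % eigenvalues as a vector
D2 = eig (A, "matrix");            % eigenvalues as a diagonal matrix
[Vg, Dg] = eig (A, B, "qz");       % choose the generalized algorithm
[Vn, Dn] = eig (A, "nobalance");   % skip preliminary balancing
```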

gsvd

The gsvd function was imported from the Octave-Forge linear-algebra package. However, it uses the deprecated LAPACK functions *ggsvd. Moreover, it is not compatible with Matlab's gsvd. For more information, check these discussions:
Work already merged to the main Octave repository:
Other commits that are not yet merged:

chol2inv

chol2inv couldn’t handle sparse matrixes (bug # 36437), we submitted a fix for this bug and tests for chol2inv.
Commits:

GSoC 2016 with GNU Octave

This was an exceptional opportunity to gain experience, get into open source development and learn about Octave. I am really happy with this project and the time we spent working on it. I would like to thank my mentors Carnë Draug and Nir Krakauer; without their help and guidance none of this would have been possible.

August 16, 2016

Francesco Faccio

How to replicate test

Hi!

Here's a little guide to replicate the tests I made for ode15i.

STEP 1

Download the latest version of Sundials (only the IDA module is required, IDA version > 2.8.0, but SundialsTB, idas and kinsol will be used for some tests). Follow the Sundials install guide to build the IDA module with KLU (you will have to specify the lib and include directories where you previously installed KLU). Use SUNDIALS_PRECISION double (the default) in the configuration.

STEP 2

Clone my Octave repo, which you can find here: https://bitbucket.org/Francesco_Faccio/octave
and build Octave from source. KLU is already an Octave dependency, but you can specify other versions you have installed at configuration time.

STEP 3

Here you can find the scripts and C functions I used to test ode15i.
You can test my oct-file by running the scripts in the Octave folder, and the same scripts in the Matlab folder for testing Matlab's ode15i.

To test the C examples of Sundials, compile the C files in folder C, linking against libklu, libsundials_ida and libsundials_nvecserial. If you installed these libraries in the default path, type:

$ gcc -c idaHeat2D_klu.c idaRoberts_dns.c
$ gcc -o idaHeat2D_klu idaHeat2D_klu.o -lsundials_ida -lsundials_nvecserial -lklu -lm
$ gcc -o idaRoberts_dns idaRoberts_dns.o -lsundials_ida -lsundials_nvecserial -lklu -lm
$ ./idaRoberts_dns
$ ./idaHeat2D_klu

Sundials provides some mex interfaces for solving the benchmark problems. In order to test them, you have to edit the file SundialsTB/install_STB.m, changing the mex compiler (near row 20):

mexcompiler = 'mkoctfile --mex';

Install Octave and run the install_STB.m script from Octave. Choose to compile the IDAS and kinsol interfaces and install the toolbox. A startup file is generated where you chose to install SundialsTB: run it.

You should now be able to run the script in the MEX folder, which uses the Sundials mex interface for solving the dense problem.

For the ida_klu problem in the Octave and Matlab folders, you can choose to test the solver with a sparse Jacobian function, a dense Jacobian function, a dense Jacobian cell or a sparse Jacobian cell.

Since I recently added the cell Jacobian option to ode15i and changed some parameters in the benchmark problems, I have some new results:

Problem Robertson eq (tspan from zero to 4e10, abstol = [1e-8, 1e-14,1e-6], reltol = 1e-4, dense methods):

Matlab: 0.41s
Octave: 0.06s
C impl: 0.002s
Mex impl: 0.12s

Without jacobian:

Matlab: 0.45s
Octave: 0.07s
Mex impl: 0.15s

Problem Heat2D (tspan from zero to 10.24, reltol = 1e-5, abstol = 1e-8, sparse methods):

With a sparse jacobian function:

Matlab: 0.34s
Octave: 0.13s
C impl: 0.005s

With a dense jacobian function:

Matlab: 0.44s
Octave: 0.15s

With a dense jacobian cell:

Matlab: 0.45s
Octave: 0.08s

With a sparse jacobian cell:

Matlab: 0.34s
Octave: 0.07s

Without a jacobian:

Matlab: 0.56s
Octave: 0.47s

These seem like quite good results, considering that the C and Mex implementations are specifically designed for solving these problems. Since I use an m-interface before calling my oct-file, there is a little overhead, but it will be negligible for bigger, more complex problems.

For these tests I configured Octave with the -O2 flag, and all the results refer to the code up to this commit.

August 03, 2016

Amr Mohamed

Blog Post

Hi all,
I am so happy that our project is on the right track.
We have managed to implement the final function (poly2fv) on time.

This function performs constrained Delaunay triangulation, which is useful for plotting polygons with inner holes.
The function is implemented as a .cc source file, and it uses the polytri library, which can be found here:
https://github.com/mhearne-usgs/polytri

We have also adapted some of the ClipperLib Benchmarks to Octave and here is a screenshot for one of them:

August 02, 2016

Cristiano Dorigo

BICG and BICGSTAB

Dear all,

In this last period I worked on the BICG and BICGSTAB algorithms.
You can find the code in the folder bicg_bicgstab at
https://CrisDorigo@bitbucket.org/CrisDorigo/socis16-octave-iter_meth

Since the main loops of both have more or less 30-40 lines of code, I wrote them from scratch, because in my opinion it was easier than modifying and adapting the original code to the many situations that occurred.

The strategy I used is the same as for PCG and GMRES: for each algorithm I wrote three sub-scripts to subdivide the cases in which (a) the matrix and the preconditioners are all matrices, (b) all are function handles, or (c) some are matrices and some are function handles.

As for PCG and GMRES, I tried to make these codes compatible with Matlab.

My BICG and BICGSTAB codes don't have documentation yet, but I'll add it as soon as possible.

Main modifications on BICGSTAB and BICG

Many of the changes that I applied to BICGSTAB and BICG are quite similar to those I made in PCG and GMRES. Here is a little list to summarize them:

• Now the approximation of the solution X that is returned is the one with the minimum residual, not the last one computed by the algorithm. Also, the output variable ITER contains the iteration at which X was computed, not the number of iterations performed.
• In the original Octave BICGSTAB code the stagnation criterion is
resvec(end) == resvec(end - 1),
while in the BICG code it is res0 <= res1 (where res1 is the current residual and res0 the previous one).
I changed this criterion to:
norm(x - x_pr) <= norm(x)*eps
(where x_pr is the approximation at the previous iteration).
This criterion is more effective, since it checks the variation of the approximations rather than the variation of the residuals, and it seems closer to the "stagnation" described in the Matlab help. Moreover, in BICGSTAB (and in BICG) the residuals fluctuate a lot (they are not decreasing in general), so it's better to check the approximations directly.
Due to these fluctuations, the BICG criterion is wrong: with it, the Octave BICG often stops even when it is not necessary.
• I added the "flag = 2" criterion, the one that Matlab describes as "preconditioner matrix ill-conditioned", which Octave skips. The check I implemented tests for singularity of the preconditioner matrix M, since it seems that in Matlab this flag appears only in that situation.
Now I will focus on some situations regarding BICGSTAB:
•  I added the "half iterations" as in Matlab: studying the Saad implementation, I noticed that the approximation X is built in two steps at every iteration; indeed it is defined as
x = x + alpha*p + omega*s
To be compatible with Matlab (since alpha*p and omega*s are unrelated and computed in different situations), I set x = x + alpha*p (and its residual) as the half iteration, and x = x + alpha*p + omega*s as the complete iteration.
In this way, the "real" total number of iterations performed (recalling that ITER contains the iteration at which X was computed) is
(length(RESVEC) - 1) / 2.
• There was a problem when the matrices A, M1, M2 and/or the right-hand side b are complex: many times the residual blows up and the method does not converge. In the Saad reference book (from which the Octave BICGSTAB was taken, I think), the method is formulated only for the real case. The only difference is that in this formulation some inner products are swapped (e.g. it uses <v, w> instead of <w, v>). In the real case there is no problem, since the real inner product is symmetric, but in the complex case there is, since the result of <v, w> is the complex conjugate of <w, v>.
• In the Matlab help I noticed that there is the possibility of "flag = 4". I tried to reproduce it, and then used the same input data in the Octave BICGSTAB: the approximation X was entirely composed of NaNs, and so were the residuals. I then studied the Octave code and the reference book in depth. I discovered that in this situation there were divisions by zero, probably caused by the non-symmetric Lanczos biorthogonalization (which is used in both BICG and BICGSTAB).
With this Lanczos algorithm a "serious breakdown" is possible (without going into deep detail, it happens when a certain inner product is zero instead of a non-zero number), and there are no easy examples that reproduce it. The reference book suggests two possibilities to avoid this situation: restarting the Lanczos biorthogonalization, or using a difficult variant called "Look-Ahead Lanczos".
I tried to study the "Look-Ahead", but the Saad book only mentions the general idea, and it is not simple to implement or to understand quickly. Then I tried the restarting strategy, but it is not efficient (an easy experiment: run BICGSTAB to find the last iteration without NaNs, then run BICGSTAB again with the output X as initial guess); the method does not converge and the residual does not decrease.
So (since Matlab does not mention whether it uses Look-Ahead or not), in the BICGSTAB implementation that I wrote I don't use any strategy, and I set "flag = 4" when the guilty inner product is zero.
To see an example of this situation in the Octave BICGSTAB see the file example_bicgistab.m
During the implementation of BICG, instead, no strange situations have emerged (so far). For both BICG and BICGSTAB I tried the same tests given in their Octave implementations, and I added some tests for the "flag = 2" and "flag = 4" situations.
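The stagnation criterion described above can be sketched in a few lines (an illustration, not the exact solver code):

```octave
% Compare successive iterates rather than successive residuals:
% when the update is below roundoff relative to x, the method
% is considered stagnated.
x_pr = [1; 2; 3];                  % previous approximation
x = x_pr + 1e-20;                  % new approximation, tiny update
if (norm (x - x_pr) <= norm (x) * eps)
  flag = 3;                        % stagnation (Matlab convention)
endif
```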

DURING THE REMAINING TIME

I know that fixing PCG, GMRES, BICG and BICGSTAB took a lot of time. This is because a lot of strange situations emerged in their implementations, requiring a deep revision of these algorithms and often also a discussion (with the mentors or the community) about how to solve particular problems. The mentors also told me that they expected the improvement and revision of the already-implemented codes to take less time/work.
They also told me to spend the remaining time of the SoCiS implementing some methods not present in Octave. For this, I want to ask you which of the suggested algorithms (minres, symmlq, symmetr, tfqmr, bicgstabl or lsqr) you would prefer me to implement.

As usual, if there is something wrong/not clear, please contact me via e-mail or write a comment.

August 01, 2016

Abhinav Tripathi

genuinelucifer

Work in the past month has been distributed over symbolic and pytave side.

The work that was merged into Symbolic in this period was mostly for converting objects from Python to Octave types; PRs #474 and #491 worked in this regard. PR #490 helped make the tests compatible with the Pytave IPC. We also added the Pytave IPC to the build system (Travis) with PR #489.
We also worked on improving the passing of variables to Python by using pytave. PR #465 has mostly achieved this goal, but it is failing on Travis. The cause was finally detected in PR #502 to be the passing of structs to pytave. I couldn't find much time to work on this, as I was working on pytave itself, and hopefully when pytave matures this error will be resolved in pytave itself.

In pytave, there were some fixes for bugs which caused Octave to segfault. In PR #12, the issue was addressed where an Octave value that could not be converted to Python threw an exception and crashed Octave. In PR #15, the issue where the exception text from Python could not be properly extracted was handled. This was later improved in PR #30 for the cases of crashes with Python 3. In PR #29, the issue was handled where converting a large integer from Python to Octave made Boost throw a positive_overflow exception.

There were also some feature improvements. PR #13 added a check for the existence of a function called via pycall in both the main namespace and the builtins namespace. After pyobject was merged into pytave, most of the work was related to pyobject. PR #17 completed the functionality of returning a pyobject from the py* functions by adding to pycall the same mechanism that was implemented in pyeval. PR #16 added the ability to check the length of a pyobject. PR #25 dropped the existing conversion mechanism which converted Python lists, tuples and dicts to Octave matrices and structs; they now return a pyobject instead.

A few PRs are currently under review. PR #33 removes some unused code from pycall and adds a few BIST tests showcasing the changes from PR #25. PR #32 fixes some doctests affected by the same changes. PR #31 adapts all the methods of pyobject to the dropped conversion of lists, dicts and tuples. In PR #24, we are trying to add a new feature that lets users specify the namespace in which pyexec/pyeval run their code, so that different pieces of code can use the same names/functions in isolation by running in different namespaces. It is currently buggy and doesn't work as expected: symbols in a local namespace are also accessible from the global namespace, and any two local namespaces share the same symbols. It only works in the case of one global and one local namespace; in that case they get separate symbol tables (dictionaries, actually, since it is Python).
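The isolation PR #24 is aiming for maps naturally onto Python's own machinery: `exec`/`eval` take a globals dictionary, and giving each named namespace its own dictionary keeps their symbols separate. The function names below are illustrative, not pytave's actual API; this is just a minimal sketch of the intended semantics.

```python
# One dictionary per named namespace; exec/eval against that dictionary
# keeps symbols in one namespace invisible to the others.
_namespaces = {}

def pyexec_in(code, ns_name="__global__"):
    ns = _namespaces.setdefault(ns_name, {})
    exec(code, ns)

def pyeval_in(expr, ns_name="__global__"):
    ns = _namespaces.setdefault(ns_name, {})
    return eval(expr, ns)

pyexec_in("x = 1", "A")
pyexec_in("x = 2", "B")
print(pyeval_in("x", "A"), pyeval_in("x", "B"))  # prints: 1 2
```

With this design the buggy cases described above cannot occur: two local namespaces never share symbols, because each name is bound in exactly one dictionary.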

During this time we had one setback: we tried hard to build pytave on Windows but failed. The effort is on hold for now and we will resume it after GSoC ends. Tatsuro helped a lot here; thanks to him we have gotten really close to a successful Windows build. The complete details of what was tried and which errors occurred can be found on my wiki page, where I describe building pytave on Windows.

The project has been going smoothly thanks to great participation from the mentors and the Octave community. Hopefully we will have a working release of symbolic with Pytave IPC (at least for Linux) before the end of GSoC.

Francesco Faccio

Performance test

Hello!

As the KLU library is now linked to ode15i, I could test the performance of the code in both the dense and sparse cases. I tested my oct-file against the Matlab ode15i m-file, the C implementation from the SUNDIALS examples, and an m-file relying on the MEX interface of SUNDIALS. Although the code still contains some inefficient use of memory, I think the results are pretty interesting.

Before going into details, I should point out that the C and MEX implementations are written to solve these specific problems, so their code is highly optimized: the specific Jacobian matrices are constructed directly inside the functions required by IDA, while my oct-file receives a function that, when evaluated, returns the Jacobian matrices to pass to IDA. In other words, solving a general problem necessarily takes more time than solving one particular, hard-coded problem.

The first test regards the solution of the Robertson chemical kinetics problem, in which the Jacobian is a 3x3 dense matrix. I found a solution for RelTol = 1e-4, AbsTol = [1e-8, 1e-14, 1e-6], tspan = [0, 0.4], and I got (on average):

0.015s for the oct-file
0.13s for the mex-file
0.002s for the C-implementation
0.19s for Matlab ode15i.
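For reference, the Robertson problem can be written in the implicit form F(t, y, y') = 0 that ode15i expects. The sketch below is in Python rather than Octave, using the standard rate constants (0.04, 1e4, 3e7) of this classic benchmark; the exact formulation used in the timed tests above may differ.

```python
def robertson_residual(t, y, yp):
    """Robertson DAE residual F(t, y, y') with the usual rate constants.
    The third equation is the algebraic mass-balance constraint."""
    return [
        yp[0] + 0.04 * y[0] - 1e4 * y[1] * y[2],
        yp[1] - 0.04 * y[0] + 1e4 * y[1] * y[2] + 3e7 * y[1] ** 2,
        y[0] + y[1] + y[2] - 1.0,
    ]

# Consistent initial conditions y = [1, 0, 0], y' = [-0.04, 0.04, 0]
# make all three residual components vanish at t = 0.
print(robertson_residual(0.0, [1.0, 0.0, 0.0], [-0.04, 0.04, 0.0]))
# -> [0.0, 0.0, 0.0]
```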

The second test regards the solution of a 2D heat equation. In this problem the Jacobian was a 100x100 sparse matrix with 330 nonzero elements. I tested with RelTol = 1e-5, AbsTol = 1e-8 and tspan = [0, 0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28, 2.56, 5.12, 10.24] and got:

0.42s for the oct-file
0.005s for the C-implementation
0.45s for Matlab ode15i.
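To see why a sparse solver pays off here: a 5-point-stencil Laplacian on a 10x10 grid gives a 100x100 Jacobian with only a few hundred nonzeros out of 10,000 entries. The sketch below builds such a structure in pure Python with dict-of-keys storage; the boundary handling is a simplifying assumption, so the nonzero count need not match the 330 quoted above exactly.

```python
M = 10  # grid is M x M, so the Jacobian is (M*M) x (M*M)

def idx(i, j):
    """Map a 2D grid point to its row/column index in the matrix."""
    return i * M + j

J = {}  # dict-of-keys sparse storage: (row, col) -> value
for i in range(M):
    for j in range(M):
        r = idx(i, j)
        if i in (0, M - 1) or j in (0, M - 1):
            J[(r, r)] = 1.0       # boundary rows kept as identity
        else:
            J[(r, r)] = -4.0      # 5-point stencil: center
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                J[(r, idx(i + di, j + dj))] = 1.0  # 4 neighbors

print(len(J), "nonzeros out of", (M * M) ** 2, "entries")
```

A dense LU factorization would touch all 10,000 entries, while KLU works only with the stored nonzeros, which is why the sparse path matters as the grid grows.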

Unfortunately SUNDIALS doesn't provide a MEX interface for KLU, so I couldn't test a sparse MEX-file version of this problem.

We can now make some observations:
- The oct-file is 10 times faster than the mex-file, which confirms that MEX files cannot reach the performance of an equivalent (in this case, even more general) oct-file.
- Our solver is already faster than the Matlab m-file.
- The C implementation is the fastest, but its code is designed to solve only one specific problem. We can try to get closer to its speed.

To improve the performance of the solver, the next steps are to reduce looping and memory use, and to write a DAE class that uses functors and lambda functions to avoid global pointers.
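The idea behind "functors instead of global pointers" can be sketched briefly. The real implementation would be a C++ oct-file passing a functor to IDA; the Python class and names below are purely illustrative, showing the design rather than the actual code: the user's residual function lives inside a DAE object, and the solver callback closes over that object instead of reading a global.

```python
class DAE:
    """Holds the user's residual function as instance state,
    so no global pointer is needed to reach it from a callback."""

    def __init__(self, residual):
        self.residual = residual   # stored per instance

    def make_callback(self):
        # The returned closure plays the role of a C++ lambda/functor:
        # it captures `self`, so the solver can call it without globals.
        return lambda t, y, yp: self.residual(t, y, yp)

# y' = y written in implicit form F(t, y, y') = y' - y = 0
dae = DAE(lambda t, y, yp: [yp[0] - y[0]])
cb = dae.make_callback()
print(cb(0.0, [1.0], [1.0]))  # residual vanishes for y = y' = e^t at t = 0
```

Two DAE instances can then coexist with different residuals, which a single global pointer cannot support; that is the main payoff of the planned refactoring.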