Planet Octave

July 09, 2018

Sahil Yadav

Week 12

I’ve implemented the weboptions function. I also added relevant comments wherever needed, including in the code from the first evaluation. This completes the second phase evaluation task, i.e., implementing the wiki_upload script and weboptions. There was an unprecedented delay of a week while I was trying to upload the images to the server. Completing the task without that delay would have given me a head start, but I am still on track. One thing is left: the help text that I wrote for weboptions isn’t displayed when I issue help weboptions, because it’s a classdef and not a function. Other than this, I feel the task has been executed correctly.

Coming to the next task, I plan to implement the webread and webwrite functions with MATLAB compatibility. I think I can change the wiki_upload script into a generic one and then use it for both the wiki upload and webwrite. This would let us extend it with other functionality in the future.

Note that a few of the weboptions fields are currently placeholders for MATLAB compatibility, as described in the previous post. Also, the second phase evaluation will take place this week. Hoping not to give Kai a chance to complain! :-)


Link to BitBucket repo.

July 09, 2018 03:02 PM

July 06, 2018

Erivelton Gualter

Second Evaluation - week 8

So, here is my last post before the second evaluation. If you have been following my blog or the Octave blog, you know that the purpose of this Google Summer of Code project is to create an Interactive Tool for Single Input Single Output (SISO) Linear Control System Design. Also …

by Erivelton Gualter at July 06, 2018 12:00 AM

July 03, 2018

Sudeepam Pandey

GSoC project progress: part three

The goal for the second evaluations was to code up a complete, working, command line suggestion feature that supports identifiers and graphic properties of core Octave and all the Octave Forge packages. I am happy to say that this goal has been achieved and we do have a working suggestion feature now. The link to the public repository where the code can be found is this.

If you haven't already, you should read my previous posts to find out what the community wanted the feature to look like and how much progress had already been made. You may need that to understand the contents of this post. In this post, I would like to talk about the additional work that has been done and the work that will be done in the days to come.

At the time of the first evaluations, one of my mentors, Nicholas, expressed an interest in seeing how the rest of the project progresses, particularly the aspects related to the user interface and the maintainability of the code by other developers. I'd like to address these points first.

So the UI is relatively simple. You enter a misspelling and some suggestions are displayed. We could have tried adding GUI pop-ups, but I refrained from doing so for two primary reasons.
  • First, a GUI pop-up looks very unpleasant when you are working in the CLI of Octave, although honestly, that is more of a personal opinion, I suppose.
  • Second, and the stronger reason, adding a GUI pop-up would have been a really complicated task due to the way Octave handles errors, and would have resulted in things like the "undefined near line..." error message being displayed for the misspelled command after the correct command has been executed.
There are some other reasons as well, which have been discussed with the members of the community before. Obviously we can try changing things later on if we really want to, but as of now, suggestions are simply displayed and the user can use the up-arrow key and edit the previous command to quickly correct their misspelling.

I have accounted for code maintainability as well. I moved a few pieces of code here and there (see the commit log) and structured the feature so that all the code related to the UI, i.e. how the feature presents itself to the user, is in one file (scripts/help/__suggestions__.m), and all the code related to the suggestion engine, which generates the possible corrections for the misspelled identifier, is in another (scripts/help/__generate__.m). A lot of comments have been included, and the code is simple enough to be read and understood by anyone who knows how the Octave or MATLAB programming language works. Another important point is that all the graphic properties and identifiers of Octave core and Forge with which a misspelling can be compared have been stored in a database file called func.db (examples/data/func.db). I described this file in my previous post.

Such an implementation makes maintenance straightforward. If UI changes are required, only the file __suggestions__.m needs major changes. If the algorithm of the suggestion engine has to be changed, changing the code of __generate__.m is enough. And if new identifiers are added to Octave (something that will happen constantly), including them in the well-organized database file is enough, and can be done very easily with a load > edit > save.

Now I'd like to describe the other tasks that have been done in this coding phase. These include adding support for the remaining Octave Forge packages and for graphic properties.

Including the remaining Octave Forge packages was very easy: all I had to do was fetch the list of identifiers, clean up the data a little, and include it in the database file.

The challenging part was adding support for graphic properties, mainly because it required writing C++ code for a missing_property_hook() function, similar in architecture to the already existing missing_function_hook() function.

In the codebase of Octave, missing_function_hook() points to a particular m-script which is called when the parser encounters an unknown command. As I described earlier, I extended its functionality to trigger the suggestion feature when an unknown identifier is found. The missing_property_hook() had to do something similar: call a certain m-script when an unknown graphic property is encountered.
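You can inspect (or redirect) this hook from the Octave prompt. A quick check like the one below should show that it points at __unimplemented__.m by default; the output shown here is what I would expect, so treat it as illustrative:

>>    missing_function_hook ()

      ans = __unimplemented__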

Rik helped a lot with this, and finally I was able to code up a missing_property_hook() function which triggers the suggestion feature when an unknown graphic property is encountered. Although the code does what it is supposed to, I'll be honest and say that this part is still a bit of a black box to me. I'd appreciate it if some other maintainer who is good with C++ and familiar with the code of missing_function_hook() would take a look at missing_property_hook() and point out or fix any issues that they find.

I'd like to mention that the suggestion feature differentiates between the levels of parsing, i.e. whether the trigger is an unknown property or an unknown command, by looking at the number of input arguments. The rest of the functionality is the same.
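A minimal sketch of that dispatch; the signature and the messages are my own assumptions, not the actual code:

    function __suggestions__ (name, prop)
      ## One input argument means an unknown command was the trigger;
      ## two means an unknown graphic property of object 'name'.
      if (nargin == 1)
        printf ("suggesting corrections for command '%s'\n", name);
      else
        printf ("suggesting corrections for property '%s' of '%s'\n", prop, name);
      endif
    endfunction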

With all these things done, I was able to realize a complete and working command line suggestion feature and meet the goal that was set for the phase two evaluations. The work planned for phase three of coding includes writing the documentation, writing some tests, fixing any and every bug that is reported, and seeing if I can use a better algorithm for the suggestion engine. An additional thing that I would like to do is to nicely wrap up the on/off button and other such user settings into a single m-script for a better user experience.

Since the phase two work is done, I'll start working on these phase three tasks from tomorrow onwards. I'll publish another post when I make some more significant changes. Till then, thank you for reading, and goodbye.

by Sudeepam Pandey (noreply@blogger.com) at July 03, 2018 10:09 AM

July 01, 2018

Sahil Yadav

Week 11

This week I fixed some styling in the previous codebase and other minor issues. Kai and I had a very fruitful discussion this week about the roadmap ahead. We discussed many things.

Kai verified that wiki_upload.m (an important dependency of publish_to_wiki.m) works as expected, including the checks for uploading the same image again: MediaWiki itself does a hash check before uploading and sends an error if the image is identical to the previous one, or a warning if the image already exists on the wiki server. You can check this by calling:

publish_to_wiki ("script", "username", "password")

where script, username and password correspond to the script that you want to publish and your username and password for wiki.octave.space (this will be changed to wiki.octave.org later), respectively.

One problem that I noted was that Octave hangs if the connection is lost in the middle of the transfer. This is not specific to this function; it affects everything that uses libcurl’s interface to Octave. Nevertheless, I tried to look into it, with the CURLOPT_TIMEOUT option and others as well. Unfortunately, I couldn’t get a viable answer, because the timeout value of that option is measured from the start of the transfer and NOT from when the connection is lost. So if someone has a large transfer, setting this value is potentially dangerous, because their transfer will get cancelled in the middle of the process even though the connection is good. There are also provisions to exit the API if the speed drops below a given threshold, but this again cannot be guaranteed, because the user may simply have a slow internet connection. So the idea of having Octave recover on its own was dropped altogether; it’s again up to the user to get a proper internet connection!

Other than this, there was the question of writing test cases for wiki_upload.m, for which I suggested measuring the Content-Length of the query string that is formed as part of the transfer. This again has problems, because the libcurl API for Octave doesn’t let the developer see the actual query string which is transferred over the connection. FYI, the query string formation takes place here. Exposing it would only add a big overhead to the values returned from the C++ API to the Octave code, because all of the functions that use perform () (the function that actually performs the transfer) would need to be changed. I initially suggested this because, during debugging, I had actually changed the form_query_string function according to my needs. But then I realised that this is not feasible because:

  • As mentioned above, there would be unnecessary overhead in changing all the functions because of the change in the form_query_string function. Also, a number of other already implemented functions use it, so regressions could occur.

  • We cannot easily measure the actual Content-Length, because the content is percent-encoded before transfer, which replaces non-alphanumeric characters with escape sequences; a whitespace, for example, becomes %20. Of course, there are ways to compute the encoded length of every character by running various loops and adding offsets to given values, etc. But I do not think this is a good way to test the main job of wiki_upload, which essentially is to check whether the file has been uploaded to the wiki or not.

I’ll see if there’s anything else I can do about this; otherwise I will need to manually check at various times whether the file has been uploaded correctly or not.

Lastly, the commenting part, which I don’t want to lose marks for, is left. I’ll be doing that in the coming days, before the evaluation, so that Kai doesn’t get a chance to bash me on this. ;-)

THIS COMPLETES THE FIRST HALF OF THE PROJECT, i.e., OCTAVE CODE SHARING.


Next up is setting up RESTful web services for Octave.

We had a good discussion over this too. First to be implemented is weboptions. I initially thought of implementing it using a struct in the backend, but Kai suggested using a classdef in Octave and making an object. Of course, he was right; one simple reason is that weboptions in MATLAB doesn’t display the Password field in plain text, and with a struct there is no way to store the password while displaying it obscured.
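To illustrate the idea with a toy example (my own sketch, not the actual implementation; the class and property names are assumptions), a classdef can hold the password in plain text internally while overloading the display so it is always shown obscured:

    ## Saved as toy_weboptions.m
    classdef toy_weboptions
      properties
        Username = "";
        Password = "";        # kept in plain text internally
      endproperties
      methods
        function disp (obj)
          ## Overloaded display: never print the password itself.
          printf ("  Username: %s\n", obj.Username);
          if (isempty (obj.Password))
            printf ("  Password: \n");
          else
            printf ("  Password: **********\n");
          endif
        endfunction
      endmethods
    endclassdef

A struct, by contrast, displays whatever is stored in it, so the password would either have to be stored obscured (and thus be unusable) or be displayed in plain text.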

Here is what I’ve implemented for this. It acts almost like the weboptions in MATLAB, with subtle differences. For example, when you call the following, it will show the answer that follows:

>>    d = weboptions

      <object weboptions>

MATLAB shows the values of the fields instead. For that I’ve created a method values, so that when you call d.values or values(d), you get to see all the values that are set for the object. There are a few problems which will most probably get rectified after Kai and I have another meeting on IRC. Basically, I’ll need to find a way to represent a cell string in its input form, i.e., something like

 ans =   {"foo", "bar"}

and not

ans =
    {
        [1,1] = "foo"
        [1,2] = "bar"
    }
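One possible way to get that compact input form is to quote and join the elements by hand; a small sketch, with a hypothetical helper name:

    function s = __cellstr_input_form__ (c)
      ## Format a cell array of strings the way it would be typed,
      ## e.g. {"foo", "bar"}.
      quoted = cellfun (@(x) ['"', x, '"'], c, "UniformOutput", false);
      s = ["{", strjoin(quoted, ", "), "}"];
    endfunction

For example, __cellstr_input_form__ ({"foo", "bar"}) returns the string {"foo", "bar"}.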

You can set values for the various fields and members of the object by calling d.field_name = your_desired_value;. This will do as desired. Some of the fields are currently placeholders for MATLAB compatibility and will be dealt with later, like ContentReader, MediaType, CertificateFilename, etc.

Oh, and kindly strip your latest commit on ocs (a merge commit) in your local repo, else it’ll create a new head. Sorry about that!

Wrapping up for now! Apologies for such a long post.


Link to BitBucket repo.

July 01, 2018 04:31 PM

Erivelton Gualter

Edit Compensator Dynamics

So far, to design a controller using sisotool, we need to select the desired feature to add to the compensator, such as real and complex poles or zeros. In order to perform this task, we have two options. First, we can go to the main tab and select the feature …

by Erivelton Gualter at July 01, 2018 02:34 PM

June 24, 2018

Erivelton Gualter

Back to Coding

Results from the first evaluation came 9 days ago. All three GSoC students were successful! For the readers of my blog, you can find them at http://planet.octave.org/. If you are already a reader of Planet Octave, you are in the right place.

The feedback from my …

by Erivelton Gualter at June 24, 2018 06:19 AM

June 23, 2018

Sahil Yadav

Week 9 & 10

UPDATE: I am more than happy to announce that I’ve been able to upload the images to the wiki server!! The problem was not in form_data_post but in http_post (in my version of Octave’s repository). There were two faults:

  • I was calling http_action, which in turn called http_post, which then POSTed the data, despite the fact that there was no POSTFIELD or POSTable data; it was a FORM submission instead. I’ve now added the perform() call in form_data_post itself so that I won’t need to interact with the former.
  • I had set the Content-Length parameter from the length of POSTFIELDS, which isn’t needed here.

Nevertheless, I’m pumped up again for furthering my work!


There has been some problem in completing the upload_to_wiki script. Unfortunately, I’m unable to send the images to the server. Other than that, everything is working fine. I’ve been trying to do this for the past week, but still with no success, even after digging through cURL’s source code.

Basically, we need a linked list of pointers to the information that we want to send as form data. I tried to traverse it (it is internal to cURL and unseen by the end user), and everything looks fine there. The only bottleneck that I can observe for now is that the form_data_post function is not working as desired. I’ve tried all the other alternatives to check where my code is not responding as per standards, but I think only the above function can have the problem.

I also discussed this issue with the mentors and other people on the IRC channel. However, there’s one more thing to it. The functionality that I’m currently using (CURLOPT_HTTPPOST) is deprecated as of cURL version 7.56, and the MIME API is used instead. (See this for more.) I asked Kai about this and he agreed that I should keep backwards compatibility, but then jwe and andy suggested using the newer MIME API and making the feature available only to those who have cURL 7.56+. I’ll ask my mentor once more whether I should first solve the current problem and switch to the MIME API once the work is over, or switch right away.

Other than that, I will start implementing the RESTful services from now on, the other half of the project. We’ll see how many of the functions that I’ve implemented so far get reused.

Until then! :-)


Link to BitBucket repo.

June 23, 2018 12:38 PM

June 11, 2018

Sahil Yadav

Week 8

Hi! Now that my exams are over, I am back on track. The first evaluation will take place this week. I will continue from the last task, i.e., wiki_login.m, completing its implementation and then using it in the publish_to_wiki function, which will let us upload published scripts to the user’s online wiki. After that I plan to implement the RESTful services part of the project, starting with the weboptions function.

I’ll update this blog with any subsequent progress.


Link to BitBucket repo.

June 11, 2018 03:15 PM

Sudeepam Pandey

GSoC project progress: part two

In my previous post, I talked about all the major discussions that have been held with the community, what the suggestion feature would be like, how I plan to realize this feature, and how I have extended the functionality of the scripts/help/__unimplemented__.m function file to integrate the command line suggestion feature with Octave. In this post, I would like to share my progress and talk about how the current implementation of the suggestion feature works. The link to the public repository that contains the code for this feature can be found here.

The goal for the first evaluations was to code up a small model that would show how this feature integrates itself with Octave. That part, however, was completed by the time I made my last blog post. I have been working on a full-fledged command line suggestion feature since then, and by now I have a working command suggestion feature that supports identifiers from core Octave and 40 Octave Forge packages. Let's start looking at the various parts of the feature.

Whenever the __unimplemented__.m function file fails to identify whatever the user entered as a valid, unimplemented Octave command, it calls one of my m-scripts, called __suggestions__.m, and the command suggestion feature gets triggered. This script, __suggestions__.m, does the following things...
  • Firstly, based on the setting of the custom preference (set by the user with the command setpref ("Octave", "autosuggestion", true/false)), it decides whether or not to display any suggestions. If the preference is 'false', it realizes that the user has turned off the feature, and so it returns control without calculating or displaying any suggestions.
  • However, if the preference is true, it checks whether whatever the user entered is at least a two-letter string. If not, it again returns control without calculating or displaying any suggestions. This is done because it is unlikely that a one-letter string is a misspelled form of some command.
  • However, if the string entered by the user is two lettered or more, the script goes on to calculate the commands that closely match the misspelling. The calculation is done by a different script; __suggestions__.m only calls that script to get the closest matching commands. These commands are then displayed to the user as potential corrections.
  • If the misspelling is short (length < 5), the script entertains one typo only. However, if the length of the misspelling is greater than or equal to 5, two typos are entertained as well. This essentially means that for short misspellings, commands at an edit distance of 1 from the misspelling are shown as potential corrections, and for relatively longer misspellings, commands at an edit distance of 2 are also shown as potential corrections.
Commands that closely match the misspelling are calculated by a different m-script, called __generate__.m. It loads a list of possible commands from a database file called func.db and then calculates the edit distance between the misspelling and each entry of the list using another script called edit_distance.m. The commands with an edit distance of one or two are accepted as close matches, and a list of all such commands along with their edit distances is returned to __suggestions__.m, which displays some or all of these suggestions according to the logic described above.
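Putting those pieces together, a simplified sketch of this control flow might look as follows; this is my own condensation with assumed names and messages, not the actual __suggestions__.m:

    function __suggestions__ (misspelling)
      ## Respect the user's on/off preference.
      if (! getpref ("Octave", "autosuggestion", true))
        return;
      endif
      ## One-letter strings are too ambiguous to correct.
      if (numel (misspelling) < 2)
        return;
      endif
      ## One typo for short misspellings, two for length >= 5.
      max_dist = 1 + (numel (misspelling) >= 5);
      [matches, dists] = __generate__ (misspelling);   # the engine described above
      matches = matches(dists <= max_dist);
      if (! isempty (matches))
        printf ("Did you mean:\n");
        printf ("  %s\n", matches{:});
      endif
    endfunction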

I'd like to mention that the strings package of Octave Forge also has a function file that calculates the edit distance, called "editdistance.m". Therefore, to avoid compatibility issues, and to avoid having two different function files that do the same thing, I will later fold the edit_distance function that I wrote into the __generate__.m script.
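For reference, the classic dynamic-programming (Wagner-Fischer) formulation of edit distance looks roughly like this; it is a generic sketch consistent with the O(n1*n2) complexity discussed below, not necessarily the exact code of edit_distance.m:

    function d = edit_distance (s, t)
      ## D(i+1, j+1) holds the edit distance between s(1:i) and t(1:j).
      n1 = numel (s);
      n2 = numel (t);
      D = zeros (n1 + 1, n2 + 1);
      D(:, 1) = (0:n1)';
      D(1, :) = 0:n2;
      for i = 1:n1
        for j = 1:n2
          cost = (s(i) != t(j));                    # 0 if the characters match
          D(i+1, j+1) = min ([D(i, j+1) + 1, ...    # deletion
                              D(i+1, j) + 1, ...    # insertion
                              D(i, j) + cost]);     # substitution
        endfor
      endfor
      d = D(end, end);
    endfunction

For example, edit_distance ("plot", "polt") returns 2, since plain edit distance has no transposition operation and counts the swapped letters as two substitutions.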

Improving the speed of the generation script 

If we were to calculate the edit distance between the misspelling and each and every identifier of Octave (core + Forge), our algorithm would take nearly 20 years to generate an output for each typographic error that the user makes. We, however, would like the time to be 20 milliseconds or so. For that, we use some smart techniques that reduce the sample space on which the algorithm has to operate.

To reduce the time, I've made a small assumption: the user never mistypes the first letter of a command. A rough analysis of the misspelling data that I received from Shane of octave-online before the commencement of the project suggests that this is a reasonable assumption and would hardly reduce the accuracy of the suggestion feature. How good is this assumption for speed? Well, for a misspelling starting with the letter 'n', this small assumption reduces the sample size from 1492 to 36 (and that is not the best case!). The worst case was that of the letter 's', in which 178 out of 1492 commands were left. Even that corresponds to an 88% reduction in the sample size.

It is important to mention that doing this alphabetical separation at run time would be redundant, and a stupid idea that would have the algorithm taking 20 years again.

Another thing we can do to improve the speed is to show suggestions from Octave core plus the loaded packages only. Obviously it is not a good idea to check among the commands that belong to a package which the user is not currently using (or worse, a package that is not installed on the user's machine).

Keeping these things in mind, I have created the func.db database file in such a way that the commands belonging to different packages are stored in different structures and are alphabetically separated as fields of those structures. So, for example, func.db contains a structure called control which holds the identifiers of the control package only, another structure core which holds the identifiers of core Octave only, another structure signal which holds the identifiers of the signal package only, and so on. The field a of the control structure (accessed by typing control.a) contains all the identifiers of the control package starting with 'a', the field b (accessed by typing control.b) contains those identifiers of the control package that start with 'b', and so on. This has been repeated for all the packages available.

To make our __generate__.m script memory efficient as well, we load the core structure (which is always required), check for the loaded packages, and load only the structures corresponding to those packages. Then, using a switch case, we fetch all the commands which have the same first letter as the misspelling (in O(1), thanks to the way the database is arranged) and proceed to the next step.
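As a rough illustration of that lookup (my own sketch; the structure layout follows the description above, while the surrounding variable names are assumed):

    misspelling = "plto";                        # an example typo of "plot"
    db = load ("func.db", "core", "control");    # core plus one loaded package
    first = misspelling(1);
    ## Dynamic field access is O(1) per structure.
    candidates = [db.core.(first), db.control.(first)];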

To understand the next step, note that if a misspelling is of length p (say), and we accept corrections that are at an edit distance of one or two from the misspelling, then the corrections can only have the following lengths...
  • p-2: Two deletes in the misspelling,
  • p-1: One delete and one substitution, or one delete only.
  • p: One delete and one addition, or one or two substitutions.
  • p+1: One addition and one substitution, or one addition only.
  • p+2: Two additions to the misspelling.
This fact allows us to reduce the list further, cutting out some 5-10 more entries for normal-length misspellings. This logic, however, is particularly useful for long misspellings, because commands with large lengths are very few in number. If a user misspells the command "suppress_verbose_help_message", the script would take days to suggest a correction without this logic, because the edit distance algorithm is O(n1*n2) with dynamic programming, where n1 and n2 are the lengths of the strings being compared, and this O(n1*n2) work is repeated m times, where m is the number of possible commands that could be close matches. With this logic, however, the candidate list is cut down to one or two commands only; thus the value of m is reduced and the close matches are found within one or two iterations.
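In code, this length pre-filter is essentially one line over the candidate list; a sketch continuing the assumed variable names from above:

    p = numel (misspelling);
    max_dist = 2;                                   # up to two edits entertained
    lens = cellfun (@numel, candidates);
    ## Keep only candidates whose length is reachable within max_dist edits.
    candidates = candidates(abs (lens - p) <= max_dist);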

That summarizes all the measures that I have taken to improve the speed of the suggestion feature. The control flow was described before this, so that concludes the working of the suggestion feature.

Conclusion

This concludes phase one. What's left is to include more Forge packages and to bring graphic properties within the scope of this feature. Writing the documentation, writing the tests, and debugging also remain, but these shall be the tasks for subsequent coding phases. Till then, goodbye, and see you in the next blog post. :)

by Sudeepam Pandey (noreply@blogger.com) at June 11, 2018 10:03 AM

June 10, 2018

Erivelton Gualter

First Evaluation - week 4

So, here is my last post before the first evaluation. If you have been following my blog or the Octave blog, you know that the purpose of this Google Summer of Code project is to create an Interactive Tool for Single Input Single Output (SISO) Linear Control System Design. Also, well-known …

by Erivelton Gualter at June 10, 2018 04:36 PM

Sudeepam Pandey

GSoC project progress: part one


An Initial note....


Alright, so first of all, I would like to apologize for not writing a proper blog post up till now. I had my final examinations during the first week of the coding period, and immediately after that, to catch up, I got so involved with the coding that I forgot to share the progress of the project on the blog. On the positive side, however, I have completed a lot of work. I can safely say that I have completed the goals that were set for the phase 1 evaluations (possible style fixes may be left), but that’s not all the good news: the phase two evaluation goal is also halfway done!

Now, I do realize that I have not shared any details of my project until now, and so, in this blog post, I’ll share a lot of important details and talk about everything that has been discussed and done so far. I promise to post more often after this ‘cumulative’ post. Here goes...

The Project Idea....


If you've read the last blog post, you'd know that I plan to add something called a 'command line suggestion feature' to Octave, and you may be wondering what that means. Basically, this feature would do something like this...

Whenever the users make a typographic error while working on Octave's command window, the command line suggestion feature would suggest a correction to them and say something like "The command you entered was not recognized, did you mean any of the following...?"

Now, I could share a detailed time-line explaining 'when' I plan to do 'what', but I believe that not everyone would be interested in reading that, so I'll skip it for now. Instead, I'll quickly talk about the following...
  • What the community wants the overall project to be like.
  • What are the challenging parts of the project.
  • What are my evaluation goals.
  • What discussions have been made, and
  • How much progress has been made.
If you really would like to see my time-line then just ask for it in the comments section and I'll share a link.

The Community Bonding Period...


By the time you finish reading this section, probably the only thing left to talk about will be "How much progress has been made". That is just a glimpse of how much the community has been involved in this project. It also shows how successful GNU Octave is as an open source community; not every open-source community is as open when it comes to discussions.

Now the first thing to understand is that this project is essentially a UX improvement, and as such, Octave is not bound by 'MATLAB compatibility issues'. This is one of the primary reasons why there was so much to discuss in the community bonding period. Here are the main points that summarize the collective decision of the community on what the overall project should be like:
  • First of all, it was decided that the user interface, i.e. the part handling how this feature hooks itself into Octave, should be well separated from how the suggestions are generated. This need became clear once we realized that there are a lot of algorithms available that could be used to generate suggestions. Separating the integration and generation parts makes sure that if, in the future, a faster or more accurate algorithm to generate suggestions is discovered, replacing the existing implementation will be easy.
  • Secondly, a few problems, such as a very large output layer size and failure on dynamic package loading, were found with my proposed Neural Network based approach. Therefore, we decided to use a well-established approach called the edit distance algorithm for now, and the Neural Network based approach will be the 'research part' of the project. Essentially, the plan is to first use 'smart implementations' of the good old edit-distance algorithm to realize this feature, and then research whether a Neural Network could do better. If we later realize that a Neural Network (or, for that matter, any other approach) really can do better than the edit-distance approach, the algorithm can be replaced very easily (thanks to the previous point).
  • Next, we decided to include keywords, functions, and graphic properties within the scope of this feature. Very short keywords, user variables, and internal functions will not be included in its scope. Deprecated functions would also be included in the scope for now. Essentially, corrections would be suggested for typos close to anything that is within the scope of this feature and would not be suggested for anything that isn't.
  • Also, we decided to use the missing_function_hook() to realize the integration part of this feature. More about this later in this post.
  • Lastly, we decided that it is absolutely necessary to include an 'on/off switch' type of command that lets the users decide whether they want to use this feature or not. We plan to use a custom preference for this, for now.
That summarizes the most important discussions that took place. With that, we are in a position to talk about how the second point and the last point relate directly to the 'challenging parts of the project'. Let's start with that.

Essentially, the second point is about the algorithm that will be used to generate the corrections that are ultimately shown to the user. The challenge is that this algorithm should provide a minimal speed-accuracy trade-off. I knew about the edit-distance algorithm beforehand, but I initially believed that a Neural Network would outperform it in terms of the speed-accuracy trade-off. Discussing the idea with the community made me realize that there are some critical loopholes in the Neural Network based model, and although they could definitely be addressed with more research, I should not jeopardize the entire project just to prove that Neural Networks could do better. We therefore decided to do what I described earlier in the second point.

At this point, defining a 'smart implementation' of edit distance remains. Basically, edit distance is a very accurate algorithm that quantifies how dissimilar two strings are. The only problem with it is its speed (my primary reason for initially proposing a trained Neural Network). By a smart implementation of the algorithm, we mean one which maximizes computation speed by reducing the sample space on which the algorithm has to work. This will be done using some clever data management techniques and some probability-based assumptions. Some discussions related to these also took place during the community bonding period, and since then I have been looking at the suggestion features of a lot of other free and open source software to devise some clever techniques. Good progress has been made, but I'll share that in another blog post.

The last point is about the very important 'on/off' feature. The tricky part here was that Octave comes in both a GUI and a CLI, so a common method that does the job could have been hard to find. However, this problem was solved with relative ease: we decided to use a custom preference to realize this part, which gives us a simple, common command to switch the feature on or off.

These discussions led me to reset my term evaluation goals, which are now as follows:
  • Phase-1 goal: To code up and push an algorithm independent version of the suggestion feature into my public repository. Essentially this would show how this feature integrates itself with Octave.
  • Phase-2 goal: A development version of Octave with a working (but maybe bugged and surely undocumented) command line suggestion feature integrated into it.
  • Phase-3 goal: The primary goal would be to have a well documented, well tested and bug free command line suggestion feature. The secondary goal would be to research and try to produce a Neural Network based correction generation script that outperforms the edit distance algorithm.
...and that marked the end of the major discussions and the community bonding period.

Progress made so far...


So far, I have coded up the phase-1 goal. The public repository can be seen here. It shows very well how we have used the missing_function_hook() to integrate the feature with Octave. The following points summarize how it works:
  • Essentially, whenever the parser fails to identify something as a valid Octave command, it calls the missing_function_hook(), which points to an internal function file, '__unimplemented__.m'.
  • This file checks whether whatever the user entered is a valid, unimplemented Octave (core or Forge) command, or an implemented command that belongs to an unloaded Forge package. If yes, it returns an appropriate message to the user; if not, it does, or rather used to do, nothing.
  • To realize the suggestion feature, I have extended its functionality to check for typographic errors whenever the entered command is not identified as a valid unimplemented/Forge command.
By using the missing_function_hook(), the keywords and built-in functions were automatically brought into the scope of this feature. Graphic properties remain, because there is no missing_property_hook() in Octave right now. I have discussed this with the community and I'll try to code it up in the subsequent weeks.

Besides that, I have also figured out how the edit distance algorithm can be made 'smart'. I'll push an update and write another blog post as soon as I master and code up the entire thing. For now, thanks for reading, see you in the next post. :)

by Sudeepam Pandey (noreply@blogger.com) at June 10, 2018 02:37 PM

June 03, 2018

Erivelton Gualter

Getting closer to the First Evaluation - week 3

The First Evaluation period is around the corner. As proposed in the timeline from my first post, the work I have been doing is on schedule.

For this past week, I added some functionalities to the Root Locus Editor. This time, the user can add: real poles, complex poles …

by Erivelton Gualter at June 03, 2018 10:48 AM

May 29, 2018

Erivelton Gualter

Plots are Working - week 2

Here we go, one more week of code. This week I continued my work from the previous week on the interface of sisotool. Just reiterating what was done last week: I created a couple of GUIs to understand a little better how the UI Elements work in Octave. For this week …

by Erivelton Gualter at May 29, 2018 05:38 PM

Sahil Yadav

Week 6 & 7

As I’ve already completed my first evaluation work and had it reviewed by my mentor, I won’t be doing much work in these two weeks due to my end semester exams. However, I’ll definitely resume posting in week 8.


Link to BitBucket repo.

May 29, 2018 05:26 PM

May 21, 2018

Erivelton Gualter

Code begins - week 1

The first week of coding has been completed.

As I mentioned in the last post, the goal of this previous week was to create a fixed layout to study the plot diagrams for the sisotool, as well as to add some UI Element functionality to control the interface. The following …

by Erivelton Gualter at May 21, 2018 04:27 PM

Sahil Yadav

Week 5

With 25th May approaching, I can confidently say that I’ve completed my first evaluation (although some grammatical and styling rechecking is left). Now, as directed by Kai, I’ve implemented the wiki_login.m function using the internals of the url-transfer.cc/h class. I’ve scrapped the earlier design of implementing a cookie_manager.m script and a libcurl_wrapper.cc class for the cookie handling, and instead implemented __wiki_login__ directly as an internal function in urlwrite.cc. A user is now able to log into api.php.

To test my implementation:

  • Make a test account on wiki.octave.space with username and password of your choice.

  • hg clone https://me_ydv_5@bitbucket.org/me_ydv_5/octave

  • hg up ocs

  • cd path/to/your/build/tree/ and execute make -jX

  • ./run-octave

  • execute wiki_login ("https://wiki.octave.space/api.php", yourUsername, yourPassword);

If it is all successful, you should see a prompt Success: logged in as yourUsername.

Other than that, I’m really grateful to jwe for moving the version.h file from libinterp to liboctave, as it is needed to get the OCTAVE_VERSION for the user agent (see week 3 for this). Earlier it was in libinterp, and we must not use anything from libinterp in liboctave, because the latter should compile independently of the former.

NOTE: You may need to re-run ./bootstrap and ./configure due to the above changes.

Lastly, I would ask anyone who uses Windows (my Windows installation ran into some problem and I’m now unable to connect to the internet; I’ll reinstall it whenever I get some time) and/or Mac OS to test my implementation and report any errors.

Needless to say I’m always looking forward to constructive feedback/criticism.


Link to BitBucket repo.

May 21, 2018 11:06 AM

May 16, 2018

Sahil Yadav

Week 4

Continuing further, I added more options to libcurl_wrapper.cc. As described in earlier posts, the current implementation of wiki_login.m uses Java’s interface to Octave, and I need to replace it with Octave’s own implementation. So I’ve taken two steps in this direction. A user is now able to retrieve a token when executing wiki_login AND use the cookies that are set in a temporary .txt file to log into the api.php wiki. Currently there’s a problem in logging in, because the following cookies fail to get added: octave_org_session, octave_orgUserID, octave_orgUserName, octave_orgToken. I got to know about these cookies when I tried to execute the curl CLI commands for logging in.

I also understood how the HAVE_CURL macro encapsulates the curl_transfer class, i.e., if cURL is available on a machine, then this class exists, else it does not. HAVE_CURL is a macro that is set during the ./configure stage of building the software. I will essentially be extending my work in this class in the coming week.

I’ve also added the files in their appropriate directories.

A new dummy wiki has been created by Kai for testing purposes. I’ll be using this from now on.


Link to BitBucket repo.

May 16, 2018 04:16 PM

May 15, 2018

Erivelton Gualter

Community Bonding Period

The community bonding period is over. The past 3 weeks were really busy because of my finals week, final projects, and my doctoral research. However, I basically completed everything I wanted to before the “Coding officially begins!”:

  • Finished Optimal Control and Intelligent Control System classes;
  • Submitted a conference …

by Erivelton Gualter at May 15, 2018 04:16 PM

May 08, 2018

Sahil Yadav

Week 3

UPDATE on 12-May-2018: You can now directly test my work by cloning my repo, updating the source tree to my bookmark with hg up ocs, making a build, and calling wiki_login from octave-cli to get a login token in return. I’ve added the files and necessary changes to the codebase itself. There’s no directory ocode now.

If you happen to already have a build of octave, just do the following:

  • cd path/to/your/source/tree
  • hg pull https://me_ydv_5@bitbucket.org/me_ydv_5/octave in your source tree.
  • hg up ocs
  • hg up -r 8bbf393
  • make -jX in your build tree.

This will save you the time of cloning the entire repo and compiling.


As mentioned in the previous post, I worked on the __publish_wiki_output__.m and publish.m code. __publish_wiki_output__.m has been added as an internal function in scripts/miscellaneous/private. I skimmed through the parser in publish.m to get a gist of how it actually works. It has three levels of parsing:

  1. Extract the overall structure (paragraphs and code sections).

  2. Parsing the content of a paragraph.

  3. Generate the output of the script code and look after figures produced in code.


After that, I studied how the url-transfer.h file is implemented: it contains a base class named base_url_transfer with a derived class named curl_transfer. One thing that puzzled me while doing so was why there has to be a HAVE_CURL macro in order for curl_transfer to be defined, and why we haven’t defined the url_transfer class itself that way. I will try to get these doubts resolved this week.

The problem of the user agent was solved by selecting the following:

    GNU Octave/OCTAVE_VERSION (https://www.gnu.org/software/octave/ ; help@octave.org) libcurl/LIBCURL_VERSION

where OCTAVE_VERSION and LIBCURL_VERSION correspond to the user’s Octave and libcurl versions, respectively. This code does precisely that for us.

My intended plan for the wrapper is to make a cookie_manager.m file that will process the various user options (like verbose output, timeout settings, the api.php URL, etc.) and pass the values to an internal __curl__.cc function, which will in turn use libcurl_wrapper.cc to do the various tasks (essentially, all the work related to cookies will be handled by it).

Currently, all the code in wiki_login.m has been commented out except the first step of login, i.e., getting a login token from api.php, which it does smoothly as of now. I am assuming that the file which stores the cookies is temporary and should be deleted once the session expires. This is one of the things I will be looking into this week.

I’ve recently migrated all the development from my forked git repo to my mercurial bookmark ocs, so I was not sure where I should put the files in my source tree. Thus, I’ve put all of them in a directory ocode for now.

To test this for yourself:

  • Clone my build tree using hg clone https://me_ydv_5@bitbucket.org/me_ydv_5/octave
  • Make yourself a build of octave (make -j2, etc.).
  • cd octave.
  • Update to my bookmark using hg up ocs. (IMPORTANT!)
  • cd ocode.
  • Execute Makefile in octave.
  • Execute wiki_login in octave to get a login token.

All other details of the wrapper’s implementation will be followed in the next post.

Next week will bring the following:

  • Choosing the right location for the files (after I get a green light for the current development path).
  • Extending other options in the wrapper for wiki_login’s steps 2 and 3.
  • Implementing cookie_manager with other user options.
  • Writing help text and test cases, if any.
  • Correcting existing work / changing the strategy as advised by my mentor or anyone else.
  • Looking into how I can use the existing base_url_transfer class in the wrapper, and resolving my queries about the HAVE_CURL macro, shared pointers, etc.

I am optimistic that I will be able to complete my first evaluation work by 25th May or so, as I will need to focus on my end term examinations after that, which start on 1 June. We don’t get holidays in between the exams!

Please let me know whether I am doing this the right way, either by replying to this thread or by simply dropping a message on #octave for <batterylow>. All suggestions are always welcome.

Oh, and not to forget: I got my domain an SSL certificate, so now all requests are served via HTTPS only!

Stay tuned for next update.

May 08, 2018 05:16 AM

May 01, 2018

Sahil Yadav

Week 2

Week 1 included setting up the work environment, a Bitbucket repository for tracking my project’s progress and review, and setting up and getting this blog aggregated on planet.octave.org. It also included reading various files that are of concern to the project.

Week 2 will focus on getting my hands dirty refactoring the __publish_wiki_output__.m and (possibly) publish.m code. This will also include looking up which exact methods/functions will be needed to implement the wrapper. Currently, a proof of concept exists as a bash script; my wrapper will be highly inspired by this script.

The wrapper is written so that MediaWiki can be communicated with directly from Octave, without any need for Java’s interface to Octave or the bash script itself. To know how exactly the MediaWiki API works, have a look at this nicely written post. Note that the $wgEnableAPI setting described in the post is deprecated as of MediaWiki version 1.32.0. Another thing that needs looking into is that MediaWiki needs a user agent in order for the client to be identified, so we need to decide what it should be.

Stay tuned for next update!

May 01, 2018 05:51 AM

April 26, 2018

Sudeepam Pandey

Starting with GSoC 2018.

So this year, I applied to the Google summer of code and got in. Google summer of code, or GSoC, as it is usually called, is a program funded by Google that has helped open source grow for over a decade. Under this program, Google awards stipends to university students for contributing code to open source organizations during their summer breaks from the university. The details of the program can be found here: Starting with Google summer of code.

Now, this year I have been selected to work with GNU Octave. It is a free and open source high-level programming language, primarily focused on scientific computing. It is largely compatible with MATLAB and is a brilliant open source alternative to it. More details about GNU Octave can be found at Free your numbers! Introducing GNU Octave.

My GSoC project is about adding a Command line suggestion feature to GNU Octave. Stay tuned, I will share the details of the project very soon.

by Sudeepam Pandey (noreply@blogger.com) at April 26, 2018 12:57 PM

April 25, 2018

Sahil Yadav

Week 1

I’ve been selected as a Google Summer of Code 2018 student developer at GNU Octave. GNU Octave is a high-level language, primarily intended for numerical computations. It provides a convenient command line interface for solving linear and nonlinear problems numerically, and for performing other numerical experiments using a language that is mostly compatible with MATLAB. It may also be used as a batch-oriented language.

A very heartfelt thanks to Kai T. Ohlhus, Doug Stewart, Ankit Raj, and others who saw my potential and chose me for GSoC 2018.

My project is Octave Code Sharing. Community bonding period would include reading up material that’d be essential for the project.


Abstract:

This project aims to come up with a pan-Octave implementation that can connect to wiki.octave.org with appropriate credentials and publish Octave scripts, which can then be hosted for distribution using the MediaWiki API. Currently there is no formal implementation, only a proof of concept implemented as a bash script and later refactored to use Java’s interface to Octave. Since the network connection itself has latency and response times large enough to be noticeable, an Octave script handling the connection will not be much of a performance killer. To work with the stateless HTTP protocol, the needed session information will be stored as cookies with the help of the libcurl library. All this would set up RESTful services for GNU Octave, which could be further extended to support compatibility with MATLAB’s RESTful interface.


Timeline:

  1. April 23 - May 14

    Community Bonding Period: Get to know more about Libcurl and its implementation. Learn about HTTP request headers and stateless transfer protocol. Finalise location of files in the codebase that will be needed for code sharing. Study the existing work that has been done by the mentor. Study publish() and grabcode() functions. Study octave::url_transfer class. Learn about MediaWiki API and its implementation (backend working).

  2. May 15 - May 23

    Implement the libcurl wrapper in libcurl_wrapper.cc/h. This would include the various abstractions, such as ‘ALL’, ‘SESS’, ‘FLUSH’, ‘RELOAD’, that Octave scripts can use to handle libcurl cookies when connecting to the server. There’s a preliminary implementation, written by the mentor some time back, which can be extended in this phase into a more general design.

  3. May 23 - May 31

    Add ‘wiki’ as an ‘output_format’ in the publish() function. This would extend the implementation of the private function __publish_wiki_output__.m, which is used to format wiki-published code.

  4. June 1 - June 10

    Non Coding period due to University major examinations.

  5. June 11 - June 15

    First Phase Evaluation - Write documentation of the work done up to this point and other tests, such as setting up the wiki installation environment for testing the script to be implemented in point 6. Catch up with the work if anything is lagging behind.

  6. June 15 - June 30

    Implement wiki_login.m, which will use the libcurl wrapper implemented in point 2. Currently this file uses Java’s interface to Octave, but with the wrapper it will become an implementation of RESTful services for ‘code sharing’ and will no longer depend on Java’s interface to Octave.

  7. July 1 - July 9

    Implement the ‘weboptions’ function. It will return a default ‘weboptions’ object to specify parameters for a request to a web service. This function will use the libcurl wrapper that would already be implemented by then. The current scope of the function is restricted to four options, viz., ‘Username’, ‘password’, ‘Keyname’, ‘KeyValue’.

  8. July 10 - July 13

    Second Evaluation Phase - Write documentation of the work done up to here and other tests required for the ‘weboptions’ function. Catch up with any previous work if left.

  9. July 14 - July 20

    Implement ‘webread’ function. This will read content from a web service and return data formatted as text. Other output arguments (cmap, alpha) will not be supported currently.

  10. July 21 - July 27

    Implement ‘webwrite’ function. This will put data in the body of an HTTP POST request to the web service.

  11. July 28 - August 6

    Buffer period for anything that remains. Complete documentation and testing for ‘webwrite’ and ‘webread’.

For now, libcurl_wrapper.cc/h will be placed in libinterp/corefcn, but this could change depending on whether it should be used by default or at the user’s choice, because someone might not want to connect to wiki.octave.org. I have, for now, merged two weeks, i.e., from 15 June to 30 June, because the main task will be to include the wrapper implementation for connecting to the web server. This may span up to two weeks, hence the two weeks for the same task.

Additional Things to be done after GSoC is over:

The web function is already partially implemented. The task will be to finish the implementation with various arguments, such as ‘-notoolbar’, ‘-noaddressbox’, ‘-new’, for MATLAB compatibility. ‘websave’, ‘ftp’ and ‘sendmail’, which are part of the RESTful services, will also be implemented. Any other part of GNU Octave which currently might need RESTful services using cookies can be amended to use the implementation resulting from this project. The ‘webread’ function will be extended to read values in JSON format, and images as well.


Thanks for reading, will be posting next update soon. Looking forward to a bug-free and code-some summer. :-)

April 25, 2018 07:16 AM

Week 1

I’ve been selected as a Google Summer of Code , 2018 student developer at GNU Octave. GNU Octave is a high-level language, primarily intended for numerical computations. It provides a convenient command line interface for solving linear and nonlinear problems numerically, and for performing other numerical experiments using a language that is mostly compatible with Matlab. It may also be used as a batch-oriented language.

A very heartful thanks to Kai T. Ohlhus, Doug Stewart, Ankit Raj and others who saw my potential and chose me for GSoC 2018.

My project is Octave Code Sharing. Community bonding period would include reading up material that’d be essential for the project.


Abstract:

This project aims to come up with a pan-octave implementation that could be used to connect to wiki.octave.org with appropriate credentials for publishing octave scripts that could be hosted for distribution using the MediaWiki API. Currently, no formal implementation is there but a proof of concept which was implemented as a bash script, later refactored to use JAVA’s interface to Octave. Since the network connection itself has latency and response time which are significantly large to get into notice, an Octave script, which will help in connection, will not be much of a performance killer. To maintain a stateless HTTP protocol, some information that would be needed will be stored as cookies with the help of Libcurl library. All this would lead to set up of RESTful services for GNU Octave which could be further extended to support the compatibility with MATLAB’s RESTful interface.


Timeline:

  1. April 23 - May 14

    Community Bonding Period: Get to know more about Libcurl and its implementation. Learn about HTTP request headers and stateless transfer protocol. Finalise location of files in the codebase that will be needed for code sharing. Study the existing work that has been done by the mentor. Study publish() and grabcode() functions. Study octave::url_transfer class. Learn about MediaWiki API and its implementation (backend working).

  2. May 15 - May 23

    Implement the Libcurl wrapper in libcurl_wrapper.cc/h. This would include the various abstractions such as ‘ALL’, ‘SESS’, ‘FLUSH’, ‘RELOAD’ that could be used by octave’s script to use Libcurl cookies to connect to the server. There’s a preliminary implementation for the same which was implemented by the mentor sometime back, which can be extended in this phase for a more general design.

  3. May 23 - May 31

    Add ‘wiki’ as an ‘output_format’ in publish() function. This would extend the implementation of private function __ publish_wiki_output__.m that is to be used to format a wiki published code.

  4. June 1 - June 10

    Non Coding period due to University major examinations.

  5. June 11 - June 15

    First Phase Evaluation - Write Documentation of the work done upto this point and other tests such as setting up the wiki installation environment for testing the script that would be implemented in point 6. Catch up with the work if anything is lagging behind.

  6. June 15 - June 30

    Implement wiki_login.m which would use the Libcurl wrapper implemented in point 2. Currently this file uses Java’s interface to octave. But with the wrapper, the file would become an implementation of RESTful services for ‘code sharing’ and will not be dependent on Java’s interface to octave.

  7. July 1 - July 9

    Implement the ‘weboptions’ function. It will return a default ‘weboptions’ object to specify parameters for a request to a web service. This function will use the libCURL wrapper that would’ve been already implemented. The current scope of the function is restricted to four options, viz., ‘Username’, ‘Password’, ‘KeyName’, ‘KeyValue’.

  8. July 10 - July 13

    Second Evaluation Phase - Write documentation of the work done up to this point and the tests required for the ‘weboptions’ function. Catch up on any previous work that is left.

  9. July 14 - July 20

    Implement ‘webread’ function. This will read content from a web service and return data formatted as text. Other output arguments (cmap, alpha) will not be supported currently.

  10. July 21 - July 27

    Implement ‘webwrite’ function. This will put data in the body of an HTTP POST request to the web service.

  11. July 28 - August 6

    Buffer period for anything that remains. Complete documentation and testing for ‘webwrite’ and ‘webread’.

For now, libcurl_wrapper.cc/h would be placed in libinterp/corefcn, though whether it is enabled by default or at the user’s choice could still change, because someone might not want to connect to wiki.octave.org. I’ve, for now, merged two weeks, i.e., June 15 to June 30, because the main task there is the wrapper implementation for connecting to the webserver. This may span up to two weeks, which is why the same task covers both.
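
To make the webread/webwrite goal concrete, here is roughly the MATLAB-compatible usage the finished functions should enable (only a sketch; the URL, credentials and request parameters below are placeholders):

  options = weboptions ('Username', 'wikibot', 'Password', 'secret');
  data = webread ('https://wiki.octave.org/api.php', 'action', 'query', options);
  response = webwrite ('https://wiki.octave.org/api.php', 'action', 'edit', options);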

Additional Things to be done after GSoC is over:

The web function is already partially implemented. The task will be to finish the implementation with various arguments such as ‘-notoolbar’, ‘-noaddressbox’, ‘-new’ for MATLAB compatibility. ‘websave’, ‘ftp’ and ‘sendmail’, which are a part of RESTful services, will also be implemented. Any other part of GNU Octave which currently might need RESTful services using cookies can be amended to use the implementation that would result from the project. The ‘webread’ function will be extended to read values in JSON format and images as well.


Thanks for reading, will be posting next update soon. Looking forward to a bug-free and code-some summer. :-)

April 25, 2018 07:16 AM

April 23, 2018

Erivelton Gualter

Welcome to Octave and Google Summer of Code

This summer I got accepted to the Google Summer of Code under GNU Octave. This program, administered by Google, facilitates the introduction of students to the open source community. My primary goal in participating in the GSoC is to build a long term relationship with the open source community …

by Erivelton Gualter at April 23, 2018 07:46 AM


March 06, 2018

Jordi Gutiérrez Hermoso

Advent of D

I wrote my Advent of Code in D. The programming language. It was the first time I used D in earnest every day for something substantial. It was fun and I learned things along the way, such as easy metaprogramming, concurrency I could write correctly, and functional programming that doesn’t feel like I have one arm tied behind my back. I would do it all over again.

My main programming languages are C++ and Python. For me, D is the combination of the best of these two: the power of C++ with the ease of use of Python. Or to put it another way, D is the C++ I always wanted. This used to be D’s sales pitch, down to its name. There’s lots of evident C++ heritage in D. It is a C++ successor worthy of consideration.

Why D?

This is the question people always ask me. Whenever I bring up D, I am faced with the following set of standard rebuttals:

  • Why not Rust?
  • D? That’s still around?
  • D doesn’t bring anything new or interesting
  • But the GC…

I’ll answer these briefly: D was easier for me to learn than Rust, yes, it’s still around and very lively, it has lots of interesting ideas, and what garbage collector? I guess there’s a GC, but I’ve never noticed and it’s never gotten in my way.

I will let D speak for itself further below. For now, I would like to address the “why D?” rebuttals in a different way. It seems to me that people would rather not have to learn another new thing. Right now, Rust has a lot of attention and some of the code, and right now it seems like Rust may be the solution we always wanted for safe, systems-level coding. It takes effort to work on a new programming language. So, I think the “why D?” people are mostly saying, “why should I have to care about a different programming language, can’t I just immediately dismiss D and spend time learning Rust instead?”

I posit that no, you shouldn’t immediately dismiss D. If nothing else, try to listen to its ideas, many of which are distilled into Alexandrescu’s The D Programming Language. I recommend this book as good reading material for computer science, even if you never plan to write any D (as a language reference itself, it’s already dated in a number of ways, but I still recommend it for the ideas it discusses). Also browse the D Gems section in the D tour. In the meantime, let me show you what I learned about D while using it.

Writing D every day for over 25 days

I took slightly longer than 25 days to write my advent of code solutions, partly because some stumped me a little and partly because around actual Christmas I wanted to spend time with family instead of writing code. When I was writing code, I would say that nearly every day of advent of code forced me to look into a new aspect of D. You can see my solutions in this Mercurial repository.

I am not going to go too much into details about the abstract theory concerning the solution of each problem. Perhaps another time. I will instead focus on the specific D techniques I learned about or found most useful for each.

Contents

  1. Day 1: parsing arguments, type conversions, template constraints
  2. Day 2: functional programming and uniform function call syntax
  3. Day 3: let’s try some complex arithmetic!
  4. Day 4: reusing familiar tools to find duplicates
  5. Day 5: more practice with familiar tools
  6. Day 6: ranges
  7. Day 7: structs and compile-time regexes
  8. Day 8: more compile-time fun with mixin
  9. Day 9: a switch statement!
  10. Day 10: learning what ranges cannot do
  11. Day 11: offline hex coding
  12. Day 12: for want of a set
  13. Day 13: more offline coding
  14. Day 14: reusing older code as a module
  15. Day 15: generators, lambdas, functions, and delegates
  16. Day 16: permutations with primitive tools
  17. Day 17: avoiding all the work with a clever observation
  18. Day 18: concurrency I can finally understand and write correctly
  19. Day 19: string parsing with enums and final switches
  20. Day 20: a physics problem with vector operations
  21. Day 21: an indexable, hashable, comparable struct
  22. Day 22: more enums, final switches, and complex numbers
  23. Day 23: another opcode parsing problem
  24. Day 24: a routine graph-search problem
  25. Day 25: formatted reads to finish off Advent of D

Day 1: parsing arguments, type conversions, template constraints

(problem statement / my solution)

For Day 1, I was planning to be a bit more careful about everything around the code. I was going to carefully parse CLI arguments, produce docstrings and error messages when anything went wrong, and carefully validate template arguments with constraints (comparable to concepts in C++). While I could have done all of this, as days went by I tried to golf my solutions, so I abandoned most of this boilerplate. Instead, I lazily relied on getting D stack traces at runtime or compiler errors when I messed up.

As you can see from my solution, had I kept it up, the boilerplate isn’t too bad, though. Template constraints are achieved by adding if(isNumeric!numType), which checks at compile time that my template was given a template argument of the correct type, where isNumeric comes from import std.traits. I also found that getopt was a sufficiently mature standard-library tool for handling command-line parsing. It’s not quite as rich as Python’s argparse, merely sufficient. This about shows all it can do:

  string input;
  auto opts = getopt(
    args,
    "input|i", "Input captcha to process", &input
  );

  if (opts.helpWanted) {
    defaultGetoptPrinter("Day 1 of AoC", opts.options);
  }

Finally, a frequent workhorse that appeared from Day 1 was std.conv for parsing strings into numbers. A single function, to, is surprisingly versatile and does much more than that, by taking a single template argument for converting (not casting) one type into another. It knows not only how to parse strings into numbers and vice versa, but also how to convert numerical types keeping as much precision as possible, or to read list or associative array literals from strings if they are in their standard string representation. It’s a good basic example of D’s power and flexibility in generic programming.
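
A small sketch of what I mean (illustrative values, of course):

import std.conv : to;

void main() {
  auto n = to!int("42");             // parse a string into an int
  auto s = to!string(3.14);          // and back: a double into a string
  auto a = to!(int[])("[1, 2, 3]");  // even array literals in string form
  assert(n == 42 && a == [1, 2, 3]);
}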

Day 2: functional programming and uniform function call syntax

(problem statement / my solution)

For whatever reason, probably because I was kind of trying to golf my solutions, I ended up writing a lot of functionalish code, with lots of map, reduce, filter, and so forth. This started early on with Day 2. D is mostly unopinionated about which style of programming one should use and offers tools to do object-orientation, functional programming, or just plain procedural programming, presenting no obstacle to mixing these styles. Lambdas are easily written inline with concise syntax, e.g. x => x*x, and the basic standard functional tools like map, reduce, filter and so on are available.

D’s approach to functional programming is quite pragmatic. While I rarely used it, because I wasn’t being too careful for these solutions, D functions can be labelled pure, which means that they can have no side effects. However, this still lets them do local impure things such as reassigning a variable or having a for loop. The only restriction is that all of their impurity must be “on the stack”, and that they cannot call any impure functions themselves.

Another feature that I came to completely fall in love with was what they call uniform function call syntax (UFCS). With some caveats, this basically means that

 foo.bar(baz)

is just sugar for

 bar(foo, baz)

If the function only has one argument, the round brackets are optional and foo.bar is sugar for bar(foo). This very basic syntactic convenience makes it so easy and pleasant to chain function calls together, lending itself to making it more inviting to write functional code. It also is a happy unification between OOP and FP, because syntactically it’s the same to give an object a new member function as it is to create a free-standing function whose first argument is the object.
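
For instance, a chain like the following (a made-up example, not from my solutions) reads naturally left to right:

import std.algorithm : filter, map, sum;
import std.stdio : writeln;

void main() {
  // sum of the squares of the even numbers, as one UFCS chain
  [1, 2, 3, 4, 5].filter!(a => a % 2 == 0)
                 .map!(a => a * a)
                 .sum
                 .writeln;  // prints 20
}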

Day 3: let’s try some complex arithmetic!

(problem statement / my solution)

For me, 2-dimensional geometry is often very well described by complex numbers. The spiral in the problem here seemed easy to describe as an associative array from complex coordinates to integer values. So, I decided to give D’s std.complex a try. It was easy to use and there were no big surprises here.
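
The idea, schematically (not my actual solution code): a position and a heading are both complex numbers, and turning is just multiplication by i.

import std.complex : complex;

void main() {
  auto pos = complex(0.0, 0.0);   // where we are on the grid
  auto dir = complex(1.0, 0.0);   // heading east
  pos += dir;                     // take a step
  dir *= complex(0.0, 1.0);       // multiplying by i turns 90 degrees left
  pos += dir;
  assert(pos == complex(1.0, 1.0));
}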

Day 4: reusing familiar tools to find duplicates

(problem statement / my solution)

There weren’t any new D techniques here, but it was nice to see how easy it was to build a simple word counter from D builtins. I was slightly disappointed that this data structure isn’t built in like Python’s collections.Counter, but that’s hardly an insurmountable problem.
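
The whole counter is essentially this (a schematic version, with made-up input):

void main() {
  int[string] counts;
  foreach (word; ["abcde", "fghij", "abcde"])
    counts[word]++;  // a missing key springs into existence as 0 first
  assert(counts["abcde"] == 2 && counts["fghij"] == 1);
}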

Day 5: more practice with familiar tools

(problem statement / my solution)

Again, not much new D here. I like the relative ease with which it’s possible to read integers into a list using map and std.conv.to.

Day 6: ranges

(problem statement / my solution)

There’s usually a fundamental paradigm or structure in programming languages on which everything else depends. Haskell has functions and monads, C has pointers and arrays, C++ has classes and templates, Python has dicts and iterators, Javascript has callbacks and objects, Rust has borrowing and immutability. Ranges are one of D’s fundamental concepts. Roughly speaking, a range is anything that can be iterated over, like an array or a lazy generator. Thanks to D’s powerful metaprogramming, ranges can be defined to satisfy a kind of compile-time duck typing: if it has methods to check for emptiness, get the first element, and get the next element, then it’s an InputRange. This duck typing is kind of reminiscent of type classes in Haskell. D’s general principle of having containers and algorithms on those containers is built upon the range concept. Ranges are intended to be a simpler reformulation of iterators from the C++ standard library.
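
To illustrate the duck typing (a toy example of mine, not from the solutions): any struct with those three members is an input range, with no interface or base class required.

import std.range.primitives : isInputRange;

struct Naturals {
  int n;
  enum empty = false;                        // never runs out: an infinite range
  @property int front() const { return n; }
  void popFront() { ++n; }
}

static assert(isInputRange!Naturals);        // checked at compile time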

I have been using ranges all along, as foreach loops are kind of like sugar for invoking those methods on ranges. However, for day 6 I actually wanted to use an std.range function, enumerate. It simply iterates over a range while simultaneously producing a counter. This I used to write some brief code to obtain both the maximum of an array and the index at which it occurs.
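
Schematically, that argmax code looks something like this (illustrative, not the solution verbatim):

import std.algorithm : maxElement;
import std.range : enumerate;

void main() {
  auto arr = [3, 1, 4, 1, 5, 9, 2];
  // enumerate pairs each element with its index; compare pairs by value
  auto best = arr.enumerate.maxElement!(p => p.value);
  assert(best.index == 5 && best.value == 9);
}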

Another range-related feature that appears for the first time here is slicing. Certain random-access ranges which allow integer indexing also allow slicing. The typical method to remove elements from an array is to use this slicing. For example, to remove the first five elements and the last two elements from an array:

 arr = arr[5..$-2];

Here the dollar sign is sugar for arr.length and this removal is simply done by moving some start and end pointers in memory — no other bytes are touched.

The D Tour has a good taste of ranges and Programming in D goes into more depth.

Day 7: structs and compile-time regexes

(problem statement / my solution)

My solution for this problem was more complicated, and it forced me to break out an actual tree data structure. Because I wasn’t trying to be particularly parsimonious about memory usage or execution speed, I decided to create the tree by having a node struct with a global associative array indexing all of the nodes.

In D, structs have value semantics and classes have reference semantics. Roughly, this means that structs are on the stack, they get copied around when being passed into functions, while classes are always handled by reference instead and dynamically allocated and destroyed. Another difference between structs and classes is that classes have inheritance (and hence, polymorphic dispatch) but structs don’t. However, you can give structs methods, and they will have an implicit this parameter, although this is little more than sugar for free-standing functions.

Enough on OOP. Let’s talk about the really exciting stuff: compile-time regular expressions!

For this problem, there was some input parsing to do. Let’s look at what I wrote:

void parseLine(string line) {
 static nodeRegex = regex(r"(?P<name>\w+) \((?P<weight>\d+)\)( -> (?P<children>[\w,]+))?");  
 auto row = matchFirst(line, nodeRegex);
 // init the node struct here
}

The static keyword instructs D that this variable has to be computed at compile-time. D’s compiler basically has its own interpreter that can execute arbitrary code as long as all of the inputs are available at compile time. In this case, it parses and compiles the regex into the binary. The next line, where I call matchFirst on each line, is done at runtime, but if for whatever reason I had these strings available at compile time (say, defined as a big inline string a few lines above in the same source file), I could also do the regex matching at compile time if I wanted to.

This is really nice. This is one of my favourite D features. Add a static and you can precompute into your binary just about anything. You often don’t even need any extra syntax. If the compiler realises that it has all of the information at compile time to do something, it might just do it. This is known as compile-time function execution, hereafter, CTFE. The D Tour has a good overview of the topic.
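
A tiny demonstration of the principle (my example, not AoC code): an ordinary recursive function evaluated entirely inside the compiler.

ulong fib(int n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// the recursion runs at compile time; the binary just contains 55
static immutable answer = fib(10);
static assert(answer == 55);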

Day 8: more compile-time fun with mixin

(problem statement / my solution)

Day 8 was another problem where the most interesting part was parsing. As before, I used a compile-time regex. But the interesting part of this problem was the following bit of code for parsing strings into their corresponding D comparison operation, as I originally wrote it:

auto comparisons = [
  "<": function(int a, int b) => a  < b,
  ">": function(int a, int b) => a > b,
  "==": function(int a, int  b) => a == b,
  "<=": function(int a, int b) => a <= b,
  ">=": function(int a, int b) => a >= b,
  "!=": function(int a, int b) => a  != b,
];

Okay, this isn’t terrible. It’s just… not very pretty. I don’t like that it’s basically the same line repeated six times. I furthermore also don’t like that within each line, I have to repeat the operator in the string part and in the function body. Enter the mixin keyword! Basically, string mixins allow you to evaluate any string at compile time. They’re kind of like the C preprocessor, but much safer. For example, string mixins only evaluate complete expressions, so no shenanigans like #define private public are allowed. My first attempt to shorten the above looked like this:

bool function(int,int)[string] comparisons;
static foreach(cmp; ["<", ">", "==", "<=", ">=", "!="]) {
  comparisons[cmp] = mixin("function(int a, int b) => a "~cmp~" b");
}

Since I decided to use a compile-time static loop to populate my array, I now needed a separate declaration of the variable which forced me to spell out its ungainly type: an associative array that takes a string and returns a function with that signature. The mixin here takes a concatenated string that evaluates to a function.

However, this didn’t work for two reasons!

The first one is that static foreach was introduced in September 2017. The D compilers packaged in Debian didn’t have it yet when I wrote that code! The second problem is more subtle: initialisation of associative arrays currently cannot be statically done because their internal data structures rely on runtime computations, according to my understanding of this discussion. They might fix it some day?

So, next best thing is my final answer:

bool function(int,int)[string] comparisons;

auto getComparisons(Args...)() {
  foreach(cmp; Args) {
    comparisons[cmp] = mixin("function(int a, int b) => a "~cmp~" b");
  }
  return comparisons;
}

shared static this() {
  comparisons = getComparisons!("<", ">", "==",  "<=", ">=", "!=");
}

Alright, by size this is hardly shorter than the repetitive original. But I still think it’s better! It has no dull repetition where bugs are most often introduced, and it’s using a variable-argument templated function so that the mixin can have its values available at compile time. It uses the next best thing to compile-time initialisation, which is a module initialiser shared static this() that just calls the function to perform the init.

Day 9: a switch statement!

(problem statement / my solution)

Day 9 was a simpler parsing problem, so simple that instead of using a regex I decided to just use a switch statement. There isn’t anything terribly fancy about switch statements, and they work almost exactly the same as they do in other languages. The only distinctive features of switch statements in D are that they work on numeric, string, or bool types and that they have deprecated implicit fallthrough. Fallthrough instead must be explicitly done with goto case; or will be once the deprecation is complete.
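
Schematically (with invented opcodes, not the puzzle’s):

string classify(string opcode) {
  switch (opcode) {
  case "snd":
  case "rcv":
    return "io";   // empty cases may still fall through
  case "jgz":
    goto case;     // non-empty fallthrough must be explicit
  case "jnz":
    return "jump";
  default:
    return "other";
  }
}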

Oh, and you can also specify ranges for a case statement, e.g.

  case 'a': .. case 'z':
    // do stuff with lowercase ASCII
    break;

It’s the small conveniences that make this pleasant. Programming in D has a good discussion on switch statements.

Day 10: learning what ranges cannot do

(problem statement / my solution)

So, superficially, you might think that expressions like arr[2..$-2], which is valid, would also allow for things like arr[$-2..1] to traverse the array in reverse order, or some other syntax for having a different step size than +1. At least I did. These kinds of array indexing are common in numerics-oriented languages such as Octave, Julia, R, or Python’s numpy. So for day 10’s hash, which requires reversing an array, I thought I could just do that.

Turns out that the language doesn’t have syntax to allow this, but after a quick trip to the standard library I found the necessary functions. What I thought could be written as

arr[a..b] = arr[b..a];

instead became

reverse(arr[a..b]);
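
If a lazily reversed view is enough, the standard library also offers std.range.retro:

import std.array : array;
import std.range : retro;

void main() {
  auto arr = [1, 2, 3, 4, 5];
  // retro is a reversed *view* of the array; .array copies it out if needed
  assert(arr.retro.array == [5, 4, 3, 2, 1]);
}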

Other than this minor discovery about ranges, Day 10 was more about getting the algorithm right than using any specialised D utilities. Since real hashes typically allow several sizes, I templated the hash functions with the total size, rounds of hashing, and chunk size, with a template constraint that the chunk size must divide the total size:

auto getHash(int Size=256, int Rounds=64, int ChunkSize=16)(string input)
 if( Size % ChunkSize == 0)
{
  // ...
}

Nothing new here. I just like that template constraints are so easy to write.

Day 11: offline hex coding

(problem statement / my solution)

I did most of Day 11 on paper. It took me a while to figure out a proper hex coordinate system and what the distance function in that coordinate system should be. I had seen hex coordinates from playing Battle for Wesnoth, but it took me a while to figure them out again. Once I had that, the actual D code is pretty simple and used no techniques I hadn’t seen before. I think this is the first time I used the cumulativeFold function, but other than that, nothing to see here. An immutable global associative array populated at module init time like before,

pure static this(){
  directions = [
    "ne": [1,1],
    "n": [0,1],
    "nw": [-1,0],
    "sw": [-1,-1],
    "s": [0,-1],
    "se": [1,0],
  ];
}

and that’s it.

Day 12: for want of a set

(problem statement / my solution)

The only new D technique for this problem was that I decided to use a set structure to keep track of which graph nodes had been visited. The only problem is that D doesn’t have a built-in set structure (yet?), but it does have a setDifference function. It’s a bit clunky. It only works on ordered arrays, but that was sufficient for my purpose here, and probably not much worse than hashing with a traditional set structure would have been.

One further observation: D has an in keyword, which can be used to test membership, like in Python (it also has an unrelated use for defining input and output arguments to functions), but unlike Python, only for associative arrays. This makes sense, because the complexity of testing for membership for other data structures can vary widely depending on the structure and the chosen algorithm, and there isn’t a clear universal choice like there is for associative arrays.
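
For associative arrays, in actually yields a pointer to the value (or null), which doubles as the membership test:

void main() {
  int[string] visited = ["start": 1];
  assert("start" in visited);             // pointer to the value, truthy
  assert(("nowhere" in visited) is null);
}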

If desired, however, it’s possible to define the in operator for any other class, like so:

bool opBinaryRight(string op : "in")(T elt) {
  // check that elt is in `this`
}

I would assume that’s what you could use to write a set class for D.

Day 13: more offline coding

(problem statement / my solution)

This one is another where I did most of the solution on paper and thus managed to write a very short program. No new D techniques here, just the usual functionalish style that I seem to be developing.

Day 14: reusing older code as a module

(problem statement / my solution)

The problem here is interesting because I’ve solved this labelling of connected components problem before in C++ for GNU Octave. I wrote the initial bwlabeln implementation using union-find. I was tempted to do the same here, but I couldn’t think of a quick way to do so, and talking to others in the #lobsters channel in IRC, I realised that a simpler recursive solution would work without overflowing the stack (because the problem is small enough, not because a stack-based algorithm is clever).

The interesting part is reusing an earlier solution, the hashing algorithm from Day 10. At first blush, this is quite simple: every D file also creates its own module, namespaced if desired by directories. It’s very reminiscent of Python’s import statement and module namespacing. The only snag is that my other file has a void main(string[] args) function and so does this one. The linker won’t like that duplicate definition of symbols. For this purpose, D offers conditional compilation, which in C and C++ is usually achieved via a familiar C preprocessor macro idiom.

In D, this idiom is codified into the language proper via the version keyword, like so

version(standalone) {
  void main(string[] args){
    // do main things here
  }
}

This instructs the compiler to compile the inside of the version block only if an option called “standalone” is passed in,

gdc -O2 -fversion=standalone app.d -o day10

or, with regrettably slightly different flags,

ldc2 -O2 -d-version=standalone app.d -of day10

There are other built-in arguments for version, such as “linux” or “OSX” to conditionally compile for a particular operating system. This keyword offers quite a bit of flexibility for conditional compilation, and it’s a big improvement over C preprocessor idioms.

Day 15: generators, lambdas, functions, and delegates

(problem statement / my solution)

This problem was an opportunity to test out a new function, generate, which takes a function and calls it repeatedly to produce a range. Haskell calls this one iterate, which I think is a better name. It’s also a lazy generator, so you need something like take to say how much of the generator you want to use. For example, the Haskell code

pows = take 11 $ iterate (\x -> x*2) 1

can be translated into D as

auto x = 1;
auto pows = generate!(() { auto y = x; x *= 2; return y; }).take(11);

There are other examples in the documentation.

Let us also take a moment here to talk about the different anonymous functions in D. The following both declare a function that squares its input:

function(int a) { return a^^2;}
delegate(int a) { return a^^2;}

The difference is just a question of closure. The delegate version carries a hidden pointer to its enclosing scope, so it can dynamically close over the outer scope variables. If you can’t afford to pay this runtime penalty, the function version doesn’t reference the enclosing scope (no extra pointer). So, for a generator, you typically want to use a delegate, since you want the generator to remember its scoped variables across successive calls, like what I did:

auto generator(ulong val, ulong mult) {
  return generate!(delegate(){
      val = (val * mult) % 2147483647;
      return val;
    }
  );
}

This function returns a generator range where each entry will result in a new entry of this pseudorandom linear congruence generator.

The delegate/function is part of the type, and can be omitted if it can be inferred by context (e.g. when passing a function into another function as an argument). Furthermore, there’s a lambda shorthand that I have been using all along, where the {return foo;} boilerplate can be shortened to just => like so:

  (a) => a^^2

This form is only valid where there’s enough context to infer if it’s a delegate or a function, as well as the type of a itself. More details in the language spec.

Day 16: permutations with primitive tools

(problem statement / my solution)

This permutations problem made me reach for the std.algorithm function bringToFront for cyclically permuting an array in place like so,

   bringToFront(progs[rot..$], progs[0..rot]);

It’s a surprisingly versatile function that can be used to perform more tricks than cyclic permutations. Its documentation is worth a read.

I also ran into a D bug here. I had to create a character array from an immutable input string, but due to the special Unicode-related handling that D gives characters, I had to cast to ubyte[] instead of char[].

Besides that, for the second part where you had to realise that permutations cannot have too big of an orbit, I also ended up using a string array with the canFind from std.algorithm. I would have preferred a string set with hashing instead of linear searching, but it didn’t make a huge difference for the size of this problem.

I really want sets in the D standard library. Maybe I should see what I can do to make them happen.

Day 17: avoiding all the work with a clever observation

(problem statement / my solution)

This puzzle is a variation of the Josephus problem. I needed some help from #lobsters in IRC to figure out how to solve it. There aren’t any new D techniques, just some dumb array concatenation with the tilde operator for inserting elements into an array:

circ = circ[0..pos] ~ y ~ circ[pos..$];

The second part can be solved via the simple observation that you only need to track one position, the one immediately following zero.

Day 18: concurrency I can finally understand and write correctly

(problem statement / my solution)

This day was an exciting one for me. Discussing this problem with others, it seems many people had much difficulty solving this problem in other programming languages. Most people seemed to have to emulate their own concurrency with no help from their programming language of choice. In contrast, D absolutely shined here, because it is based on the actor concurrency model (message passing), which precisely fits the problem statement. (There are other concurrency primitives in case the actor concurrency model isn’t sufficient, but it was sufficient for me.)

The basic idea of concurrency in D is that each thread of execution localises all of its state. By default, threads share no data. In order to communicate with each other, threads pass messages. A thread can indicate at any time when it’s ready to send or receive messages. Messages can be any type, and each thread says what type it’s expecting to receive. If a thread receives a type it’s not prepared to handle, it will throw an exception.

There are more details, such as what happens if a thread receives too many messages but doesn’t respond to any of them. Let’s not go into that now. Basic idea: threads get spawned, threads send and receive messages.

Let’s spend a little bit of time looking at the relevant functions and types I used, all defined in std.concurrency.

spawn
Starts a thread of execution. The first argument is a reference to the function that this thread will execute, along with any arguments that function may take. Returns a Tid type, a thread ID. This is used as the address to send messages to. A thread can refer to its parent thread via the special variable ownerTid.

Unless explicitly declared shared, the arguments of a threaded function must be immutable. This is how the compiler guarantees no race conditions when manipulating those variables. Of course, with shared variables, the programmer is signalling that they are taking over synchronisation of that data, which may require using low-level mutexes.

send
Send a message to a particular thread. The first argument is the thread id. The other arguments can be anything. It’s up to the receiving thread to handle the arguments it receives.
receiveOnly
Indicate that this thread is ready to receive a single type, and returns the value of that type. The type must of course be specified as a compile-time argument.
receive
Indicates what to do with any of several possible types. The arguments to this function are a collection of functions, whose parameters types will be dynamically type-matched with received types. I didn’t need this function, but I wanted to mention it exists.
receiveTimeout
The problem statement is designed to deadlock. Although there probably is a more elegant solution, timing out on deadlock was the solution I wrote. This function does just that: listens for a message for a set amount of time. If a message is received in the designated time, its handler function is executed and receiveTimeout returns true. If the timeout happens, it returns false instead.

Armed with these tools, the solution was a breeze to write. I first spawn two threads and save their thread ids,

  auto tid1 = spawn(&runProgram, opcodes, 0);
  auto tid2 = spawn(&runProgram, opcodes, 1);

Each of the two threads defined by runProgram then immediately stops, waiting for a thread id, to know whom to talk to,

void runProgram(immutable string[] opcodes, ulong pid) {
  auto otherProg = receiveOnly!Tid();
 // ...
}

The parent thread then connects the two worker threads to each other,

  send(tid1, tid2);
  send(tid2, tid1);

And off they go, the two threads run through the opcodes in the problem statement, and eventually they deadlock, which I decided to handle with a timeout like so,

    case "rcv":
      if (!receiveTimeout(100.msecs, (long val) {regs[reg] = val;})) {
        goto done;
      }
      // no timeout, handle next opcode
      goto default;

After the thread has timed out, it signals to the parent thread that it’s done,

  done:
  send(ownerTid, thisTid, sent);

The parent in turn receives two tuples with thread id and computed value from each thread, and based on that decides what to output, after figuring out which thread is which,

  // Wait for both children to let us know they're done.
  auto done1 = receiveOnly!(Tid, long);
  auto done2 = receiveOnly!(Tid, long);
  if (done1[0] == tid2) {
    writeln(done1[1]);
  }
  else {
    writeln(done2[1]);
  }

And voilà, concurrency easy-peasy.

Day 19: string parsing with enums and final switches

(problem statement / my solution)

The only new D technique here is final switches. A final switch is for enum types, to make the compiler enforce that you write a case matching each of the possible values. That’s what I did here, where I wanted to make sure I matched the up, down, left, and right directions:

    final switch(dir){
    case DIR.d:
      j++;
      break;
    case DIR.u:
      j--;
      break;
    case DIR.l:
      i--;
      break;
    case DIR.r:
      i++;
      break;
    }
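
For reference, dir here is a value of a small enum along these lines (a sketch; the names in my actual solution may differ):

enum DIR { u, d, l, r }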

The rest of the problem is merely some string parsing.

Day 20: a physics problem with vector operations

(problem statement / my solution)

I have not yet done serious numerical work with D, but I can see that it has all of the necessary ingredients for it. One of the most obvious amongst these is that it has built-in support for writing vector instructions. Given a struct to model a particle in motion,

struct particle {
  double[3] pos;
  double[3] vel;
  double[3] acc;
}

the following function returns another particle where each of the vectors has been divided by its norm (i.e. normalised to 1):

auto unit(particle p) {
  auto
    pos = p.pos,
    vel = p.vel,
    acc = p.acc;
  pos[] /= pos.norm;
  vel[] /= vel.norm;
  acc[] /= acc.norm;
  particle u = {pos, vel, acc};
  return u;
}

This vec[] /= scalar notation divides all of the vector by the given scalar. But that’s not all. You can also add or multiply vectors elementwise with similar syntax,

      double[3] diff1 = p.pos, diff2 = p.vel;
      diff1[] -= p.vel[];
      diff2[] -= p.acc[];

Here diff1 and diff2 give the vector difference between the position and velocity, and between the velocity and acceleration, respectively (I use the criterion that all three of these are mostly collinear to determine if a particle has escaped the system and thus can no longer interact with any other particle).

This is mostly syntactic sugar, however. Although the D compiler can sometimes turn instructions like these into native vector instructions like AVX, real vectorisation has to be done via some standard library support.

Day 21: an indexable, hashable, comparable struct

(problem statement / my solution)

I was happy to recognise that I could solve this problem by considering the dihedral group of the square, which I generated via some string mixins:

immutable dihedralFun =
"function(ref const Pattern p) {
  auto n = p.dim;
  auto output = new int[][](n, n);
  foreach(i; 0..n) {
    foreach(j; 0..n) {
      output[i][j] = p.grid[%s][%s];
    }
  }
  return output;
}"
;

immutable dihedralFourGroup = [
  mixin(format(dihedralFun, "i",     "j")),
  mixin(format(dihedralFun, "n-i-1", "j")),
  mixin(format(dihedralFun, "i",     "n-j-1")),
  mixin(format(dihedralFun, "n-i-1", "n-j-1")),
  mixin(format(dihedralFun, "j",     "i")),
  mixin(format(dihedralFun, "n-j-1", "i")),
  mixin(format(dihedralFun, "j",     "n-i-1")),
  mixin(format(dihedralFun, "n-j-1", "n-i-1")),
];

This isn’t a new technique, but I’m really happy with how it turned out. Almost like lisp macros, but without devolving into the lawless chaos of Python or Javascript eval or C preprocessor macros. As an aside, the format function accepts formatted strings with POSIX syntax for positional arguments, but there isn’t anything built-in as nice as Perl string interpolation or Python format strings.

The real meat of this problem was to implement a grid structure that could be hashed, compared, and indexed. This is all done with a number of utility functions. For indexing and slicing, the basic idea is that for a user-defined type foo

  foo[bar..baz]

is sugar for

  foo.opIndex(foo.opSlice(bar, baz))

so those are the two functions you need to implement for indexing and slicing. Similarly, for equality comparison and hashing, you implement opEquals and toHash respectively. I relied on the dihedral functions above for comparison for this problem.
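
A minimal sketch of that protocol for a hypothetical one-dimensional grid (names and layout are mine, not the actual solution code):

struct Grid {
  int[] data;

  // grid[a..b] lowers to grid.opIndex(grid.opSlice!0(a, b))
  size_t[2] opSlice(size_t dim : 0)(size_t a, size_t b) const { return [a, b]; }
  const(int)[] opIndex(size_t[2] s) const { return data[s[0] .. s[1]]; }
  int opIndex(size_t i) const { return data[i]; }

  // equality and hashing, so Grid can serve as an associative-array key
  bool opEquals(const Grid rhs) const { return data == rhs.data; }
  size_t toHash() const nothrow @safe { return data.hashOf; }
}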

After implementing these functions for a struct (recall: like a class, but with value semantics and no inheritance), the rest of the problem was string parsing and a bit of logic to implement the fractal-like growth rule.

Day 22: more enums, final switches, and complex numbers

(problem statement / my solution)

Another rectangular grid problem, which I again decided to represent via complex numbers. The possible infection states given in the problem I turned into an enum, which I then checked with a final switch as before. The grid is then just an associative array from complex grid positions to infection states.

Nothing new here. By now these are all familiar tools and writing D code is becoming habitual for me.

Day 23: another opcode parsing problem

(problem statement / my solution)

This problem wasn’t technically difficult from a D point of view. The usual switches and string parsing techniques of days 8 and 18 work just as well. In fact, I started with the code of day 18 and modified it slightly to fit this problem.

The challenge was to statically analyse the opcode program to determine that it is implementing a very inefficient primality testing algorithm. I won’t go into an analysis of that program here because others have already done a remarkable job of explaining it. Once this analysis was complete, the meat of the problem then becomes to write a faster primality testing algorithm, such as dumb (but not too dumb) trial division,

auto isComposite(long p) {
  // iota's upper bound is exclusive, so go one past sqrt(p)
  return iota(2, cast(long) sqrt(cast(double) p) + 1).filter!(x => p % x == 0).any;
}

and use this test at the appropriate location.

Day 24: a routine graph-search problem

(problem statement / my solution)

This problem required some sort of graph structure, which I implemented as an associative array from node ids to all edges incident to that node. The problem then reduces to some sort of graph traversal (I did depth-first search), keeping track of edge weights.

No new D techniques here either, just more practice with my growing bag of tricks.

Day 25: formatted reads to finish off Advent of D

(problem statement / my solution)

The final problem involved parsing a slightly more verbose DSL. For this, I decided to use formatted strings for reading, like so,

auto branchFmt =
"    - Write the value %d.
    - Move one slot to the %s.
    - Continue with state %s.
"
;

auto parseBranch(File f) {
  int writeval;
  string movedir;
  char newstate;

  f.readf(branchFmt, &writeval, &movedir, &newstate);
  return Branch(writeval ? true : false, movedir == "left" ? -1 : 1, newstate);
}

This is admittedly a bit brittle. Even the type check between the formatted string and the expected types is done at runtime (but newer D versions have a compile-time version of readf for type-checking the format string). An error here can cause exceptions at runtime.

Other than this, the only new technique here is that I actually wrote a loop to parse the program file:

auto parseInstructions(File f) {
  Instruction[char] instructions;

  while(!f.eof) {
    char state;
    f.readf("In state %s:\n", &state);
    f.readln; // "If the current value is 0:"
    auto if0 = f.parseBranch;
    f.readln; // "If the current value is 1:"
    auto if1 = f.parseBranch;
    f.readln; // Blank line
    instructions[state] = Instruction(if0, if1);
  }
  return instructions;
}

A small comfort here is that checking for eof in the loop condition actually works. This is always subtly wrong in C++ and I can never remember why.

What’s left of the problem is absolutely routine D by now: associative arrays, UFCS, foreach loops, standard library utilities for summing and iterating and so forth. A few of my favourite things.

Concluding remarks

The best part is that my code was also fast! I was comparing my solutions above with someone else who was doing their Advent of Code in C. I could routinely match his execution speed on the problems where we bothered to compare, whenever we wrote similar algorithms. I’m eager to see what D can do when faced with some real number-crunching.

After all this, I have come to appreciate D more, as well as seeing some of its weak points. I think I have already raved enough about how much I like its functional style, its standard library, its type-checking, and its compile-time calculation. I also ran into a few bugs and deprecated features, and I have observed some questionable language design choices. Not once did I notice having a garbage collector. It was lots of fun.

Merry belated Christmas!

by Jordi at March 06, 2018 04:24 AM

August 29, 2017

Enrico Bertino

Summary of work done during GSoC

GSoC17 is at the end and I want to thank my mentors and the Octave community for giving me the opportunity to participate in this unique experience.

During this Google Summer of Code, my goal was to implement from scratch the Convolutional Neural Networks package for GNU Octave. It will be integrated with the already existing nnet package.

This was a very interesting project and a stimulating experience for both the implemented code and the theoretical base behind the algorithms treated. A part has been implemented in Octave and an other part in Python using the Tensorflow API.


Code repository

All the code implemented during these months can be found in my public repository: 

 https://bitbucket.org/cittiberto/gsoc-octave-nnet/commits/all

(my username: citti berto, bookmark enrico)

Since I implemented a completely new part of the package, I pushed the entire project in three commits, and I am waiting for the community’s approval before preparing a PR for the official package [1].


Summary

The first commit (ade115a, [2]) contains the layers. There is a class for each layer, with a corresponding function which calls the constructor. All the layers inherit from a Layer class which lets the user create a concatenation of layers, that is, the network architecture. Layers have several parameters, for which I have guaranteed compatibility with Matlab [3].


The second commit (479ecc5 [4]) is about the Python part, including an init file checking the Tensorflow installation. I implemented a Python module, TFintefrace, which includes:

  • layer.py: an abstract class for layers inheritance
  • layers/layers.py: the layer classes that are used to add the right layer to the TF graph
  • dataset.py: a class for managing the datasets input
  • trainNetwork.py: the core class, which initiates the graph and the session, performs the training and the predictions
  • deepdream.py: a version of [5] for deepdream implementation (it has to be completed)


The third commit (e7201d8 [6]) includes:
  • trainingOptions: All the options for the training. Up to now, the only optimizer available is the stochastic gradient descent with momentum (sgdm) implemented in the class TrainingOptionsSGDM.
  • trainNetwork: passing the data, the architecture and the options, this function performs the training and returns a SeriesNetwork object
  • SeriesNetwork: class that contains the trained network, including the Tensorflow graph and session. This has three methods
    • predict: predicting scores for regression problems
    • classify: predicting labels for classification problems
    • activations: getting the output of a specific layer of the architecture

 

Goals not met

I did not manage to implement some features because of the lack of time due to the bug fixing in the last period. The problem was the considerable time spent testing the algorithms (because of the different random generators in Matlab, Octave and Python/Tensorflow). I will work in the next weeks to implement the missing features, and I plan to continue maintaining this package to keep it up to date with both new Tensorflow versions and new Matlab features.


Function                     Missing features
---------------------------  -------------------------------------------------
activations                  OutputAs (for changing output format)
imageInputLayer              DataAugmentation and Normalization
trainNetwork                 Accepted inputs: imds or tbl
trainNetwork                 .mat Checkpoints
trainNetwork                 ExecutionEnvironment: 'multi-gpu' and 'parallel'
ClassificationOutputLayer    classnames
TrainingOptions              WorkerLoad and OutputFcn
DeepDreamImages              Generalization to any network and AlexNet example

 

Tutorial for testing the package

  1. Install Python Tensorflow API (as explained in [7])
  2. Install Pytave (following these instructions [8])
  3. Install nnet package (In Octave: install [9] and load [10])
  4. Check the package with make check PYTAVE="pytave/dir/"
  5. Open Octave, add the Pytave dir to the path and run your first network:

### TRAINING ###
# Load the training set
[XTrain,TTrain] = digitTrain4DArrayData();
 

# Define the layers
layers = [imageInputLayer([28 28 1]);
          convolution2dLayer(5,20);
          reluLayer();
          maxPooling2dLayer(2,'Stride',2);
          fullyConnectedLayer(10);
          softmaxLayer();
          classificationLayer()];

# Define the training options
options = trainingOptions('sgdm', 'MaxEpochs', 15, 'InitialLearnRate', 0.04);

# Train the network
net = trainNetwork(XTrain,TTrain,layers,options);


### TESTING  ###
# Load the testing set
[XTest,TTest]= digitTest4DArrayData();

# Predict the new labels
YTestPred = classify(net,XTest);


Future improvements

  • Manage the session saving
  • Save the checkpoints as .mat files and not as TF checkpoints
  • Optimize array passage via Pytave 
  • Categorical variables for classification problems

Links


Repo link: https://bitbucket.org/cittiberto/gsoc-octave-nnet/commits/all

[1] https://sourceforge.net/p/octave/nnet/ci/default/tree/
[2] https://bitbucket.org/cittiberto/gsoc-octave-nnet/commits/ade115a0ce0c80eb2f617622d32bfe3b2a729b13
[3] https://it.mathworks.com/help/nnet/classeslist.html?s_cid=doc_ftr
[4] https://bitbucket.org/cittiberto/gsoc-octave-nnet/commits/479ecc5c1b81dd44c626cc5276ebff5e9f509e84 
[5] https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/tutorials/deepdream
[6] https://bitbucket.org/cittiberto/gsoc-octave-nnet/commits/e7201d8081ca3c39f335067ca2a117e7971b5087
[7] https://www.tensorflow.org/install/
[8] https://bitbucket.org/mtmiller/pytave
[9] https://www.gnu.org/software/octave/doc/interpreter/Installing-and-Removing-Packages.html
[10] https://www.gnu.org/software/octave/doc/v4.0.1/Using-Packages.html

by Enrico Bertino (noreply@blogger.com) at August 29, 2017 06:23 PM

August 28, 2017

Joel Dahne

Final Report

This is the final report on my work with the interval package during Google Summer of Code 2017. This whole blog has been dedicated to the project, and by reading all the posts you can follow my work from beginning to end.

The work has been challenging and extremely fun! I have learned a lot about interval arithmetic, the Octave and Octave-forge projects, and also how to contribute to open source in general. I have found the whole Octave community to be very helpful, and I especially want to thank my mentor, Oliver Heimlich, and co-mentor, Kai Torben Ohlhus, for helping me during the project.

Here I will give a small introduction to Octave and the interval package for new readers, as well as a summary of how the work has gone and how you can run the code I have contributed.

Octave and the Interval Package

Octave, or GNU Octave, is a free program for scientific computing. It is very similar to Matlab and its syntax is largely compatible with it. Octave comes with a large set of core functionality but can also be extended with packages from Octave forge. These add new functionality, for example image processing, fuzzy logic or more statistics functions. One of those packages is the interval package, which allows you to compute with interval arithmetic in Octave.

Summary of the Work

The goal with the project was to improve the Octave-forge interval package by implementing support for creating, and working with, N-dimensional arrays of intervals.

The work has gone very well and we have just released version 3.0.0 of the interval package incorporating all the contributions I have made during the project.

The package now has full support for working with N-dimensional arrays in the same way you do with ordinary floating point numbers in Octave. In addition I have also fixed some bugs not directly related to N-dimensional arrays; see for example bugs #51783 and #51283.
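
For a quick, hypothetical taste of what that means in practice (assuming the package is installed as described below):

 pkg load interval
 A = infsup (zeros (2, 2, 2), ones (2, 2, 2));  # a 2x2x2 array of intervals
 B = reshape (infsup (1 : 8), 2, 2, 2);
 C = A + B;                                     # elementwise, along all dimensions
 size (C)                                       # ans = 2   2   2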

During the project I have made a total of 108 commits. I have made changes to 332 of the package’s 666 files. Some of these changes have only been changes in coding style or (mainly) automated addition of tests. Not counting these I have, manually, made changes to 110 files.

If you want to take a look at all the commits I have contributed, the best way is to download the repository, after which you can see all the commits from GSoC with

hg log -u joeldahne@hotmail.com -d "2017-06-01 to 2017-08-29"

Unfortunately I have not found a good way of isolating commits from a specific period and author on SourceForge, where the package is hosted. Instead you can find a list of all commits at the end of this blog post.

The NEWS file from the release of version 3.0.0 is also a pretty good overview of what I have done. While not all of the changes are a result of GSoC, quite a lot of them are.

Running the Code

As mentioned above, we have just released version 3.0.0 of the interval package. With the new release it is very easy to test the newly added functionality. If you already have Octave installed, the easiest way to install the package is with the command “pkg install -forge interval”. This will install the latest release of the package; at the time of writing this is 3.0.0, but that will of course change in the future. You can also download version 3.0.0 directly from Octave-forge.

If you want you can also download the source code from the official repository and test it with "make run" or install it with "make install". To download the repository, update to version 3.0.0 and run Octave with the package on Linux, use the following

 hg clone http://hg.code.sf.net/p/octave/interval octave-interval
 cd octave-interval
 hg update release-3.0.0
 make run



A Package for Taylor Arithmetic

The task took less time than planned, so I had time to work on a project that builds upon my previous work: I started a project for Taylor arithmetic, and you can read my blog post about it. I created a proof of concept implementation as part of my application for Google Summer of Code and I have now started to turn that into a real package. The repository can be found here.

It is still far from complete, but my goal is to eventually add it as a package at Octave-Forge. How that goes depends mainly on how much time I have to spend on it during the following semesters.

If you want to run the code as it is now, you can pull the repository and then run it with "make run"; this requires that Octave and version 3.0.0 (or higher) of the interval package are installed.

List of Commits

Here is a list of all 108 commits I have made to the interval package

https://sourceforge.net/p/octave/interval/ci/68eb1b3281c9
summary:     Added Swedish translation for package metadata

https://sourceforge.net/p/octave/interval/ci/52d6a2565ed2
summary:     @infsupdec/factorial.m: Fix decoration (bug #51783)

https://sourceforge.net/p/octave/interval/ci/cf97d56a8e2a
summary:     mpfr_function_d.cc: Cast int to octave_idx_type

https://sourceforge.net/p/octave/interval/ci/5ed7880c917f
summary:     maint: Fix input to source

https://sourceforge.net/p/octave/interval/ci/637c532ea650
summary:     @infsupdec/dot.m: Fix decoration on empty input

https://sourceforge.net/p/octave/interval/ci/51425fd67692
summary:     @infsup/postpad.m, @infsup/prepad.m, @infsupdec/postpad.m, @infsupdec/prepad.m: Corrected identification of dimension for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/b4e7f1546e36
summary:     mpfr_function_d.cc, mpfr_vector_dot_d.cc: Fixed bug when broadcasting with one size equal to zero

https://sourceforge.net/p/octave/interval/ci/02db5199932c
summary:     doc: NEWS.texinfo: Info about vectorization for nthroot and pownrev

https://sourceforge.net/p/octave/interval/ci/197d033fc878
summary:     @infsup/pownrev.m, @infsupdec/pownrev.m: Support for vectorization of p

https://sourceforge.net/p/octave/interval/ci/7f8d1945264c
summary:     @infsup/nthroot.m, @infsupdec/nthroot.m, mpfr_function_d.cc: Support for vectorization of n

https://sourceforge.net/p/octave/interval/ci/e841c37383d6
summary:     doc: NEWS.texinfo: Summarized recent changes from GSoC

https://sourceforge.net/p/octave/interval/ci/bc3e9523d42a
summary:     mpfr_vector_dot_d.cc: Fixed bug when broadcasting with one size equal to zero

https://sourceforge.net/p/octave/interval/ci/0160ff2c2134
summary:     doc: examples.texinfo: Updated example for the latest Symbolic package version

https://sourceforge.net/p/octave/interval/ci/f37e1074c3ce
summary:     @infsup/*.m, @infsupdec/*.m: Added missing N-dimensional versions of tests

https://sourceforge.net/p/octave/interval/ci/eb6b9c01bf6a
summary:     @infsup/*.m, @infsupdec/*.m: N-dimensional versions of all ternary tests

https://sourceforge.net/p/octave/interval/ci/de71e0ad0a6e
summary:     @infsup/*.m, @infsupdec/*.m: N-dimensional versions of all binary tests

https://sourceforge.net/p/octave/interval/ci/9440d748b4aa
summary:     @infsup/*.m, @infsupdec*.m: N-dimensional version of all unary tests

https://sourceforge.net/p/octave/interval/ci/cb5b7b824400
summary:     @infsup/powrev2.m: Fixed bug when called with vector arguments

https://sourceforge.net/p/octave/interval/ci/a3e8fdb85b97
summary:     @infsup/pownrev.m, @infsupdec/pownrev.m: Reworked vectorization test

https://sourceforge.net/p/octave/interval/ci/aae143789b81
summary:     @infsup/pown.m: Added support for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/f05814665570
summary:     @infsup/nthroot.n: Reworked vectorization test

https://sourceforge.net/p/octave/interval/ci/c4c27574e331
summary:     @infsup/nthroot.m, @infsupdec/nthroot.m: Clarified that N must be scalar

https://sourceforge.net/p/octave/interval/ci/3bfd4e7e6600
summary:     @infup/pow.m, @infsupdec/pow.m: Fixed bug when called with vector arguments

https://sourceforge.net/p/octave/interval/ci/e013c55a4e2a
summary:     @infsup/overlap.m, @infsupdec/overlap.m: Fixed formatting of vector test.

https://sourceforge.net/p/octave/interval/ci/d700a1ad4855
summary:     doc: Modified a test so that it now passes

https://sourceforge.net/p/octave/interval/ci/2404686b07fb
summary:     doc: Fixed formatting of example

https://sourceforge.net/p/octave/interval/ci/c078807c6b32
summary:     doc: SKIP an example that always fail in the doc-test

https://sourceforge.net/p/octave/interval/ci/598dea2b8eb8
summary:     doc: Fixed missed ending of example in Getting Started

https://sourceforge.net/p/octave/interval/ci/66ca24edfba4
summary:     Updated coding style for all infsupdec-class functions

https://sourceforge.net/p/octave/interval/ci/4c629b9000fc
summary:     Updated coding style for all infsup-class functions

https://sourceforge.net/p/octave/interval/ci/3898a7ad5d93
summary:     Updated coding style for all non-class functions

https://sourceforge.net/p/octave/interval/ci/f911db83df14
summary:     @infsupdec/dot.m: Fixed wrong size of decoration when called with two empty matrices

https://sourceforge.net/p/octave/interval/ci/8e6a9e37ee40
summary:     Small updates to documentation and comments for a lot of function to account for the support of N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/a2ef249d54ae
summary:     doc: A small update to Examples, the interval Newton method can only find zeros inside the initial interval

https://sourceforge.net/p/octave/interval/ci/76431745772f
summary:     doc: Updates to Getting Started, mainly how to create N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/9821eadca12b
summary:     doc: Small updates to Preface regarding N-dimensional arrays and fixed one link

https://sourceforge.net/p/octave/interval/ci/0f1aadf5f13a
summary:     ctc_intersect.m, ctc_union.m: Fixed bugs when used for vectorization and when called with 0 or 1 output arguments

https://sourceforge.net/p/octave/interval/ci/0e01dc19dc75
summary:     @infsup/sumsq.m: Updated to use the new functionality of dot.m

https://sourceforge.net/p/octave/interval/ci/2fba2056ed31
summary:     @infsup/dot.m, @infsupdec/dot.m, mpfr_vector_dot_d.cc: Added support for N-dimensional vectors. Moved all vectorization to the oct-file. Small changes to functionality to mimic how the sum function works.

https://sourceforge.net/p/octave/interval/ci/05fc90112ea9
summary:     ctc_intersect.m, ctc_union.m: Added support for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/2a4d1e9fa43e
summary:     @infsup/fsolve.m: Added support for N-dimensional arrays. Fixed problem with the function in the example. Improved performance when creating the cell used in vectorization.

https://sourceforge.net/p/octave/interval/ci/7a96c346225a
summary:     @infsup/disp.m: Fixed wrong enumeration of submatrices

https://sourceforge.net/p/octave/interval/ci/933117890e45
summary:     Fixed typo in NEWS.texinfo

https://sourceforge.net/p/octave/interval/ci/67734688adb1
summary:     @infsup/diag.m: Added description of the previous bug fix in the NEWS file

https://sourceforge.net/p/octave/interval/ci/07ebc81867cd
summary:     @infsup/diag.m: Fixed error when called with more than 1 argument

https://sourceforge.net/p/octave/interval/ci/554c34fb3246
summary:     @infsup/meshgrid.m, @infsupdec/meshgrid.m: Removed these functions, now falls back on standard implementation, also updated index

https://sourceforge.net/p/octave/interval/ci/4bad2e7451f5
summary:     @infsup/plot.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/5e0100cdc25f
summary:     @infsup/plot3.m: Small change to allow for N-dimensional arrays as input

https://sourceforge.net/p/octave/interval/ci/695a223ccbb7
summary:     @infsupdec/prod.m: Added support for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/55f570c13f01
summary:     @infsup/prod.m: Added support for N-dimensional arrays. Removed short circuit in simple cases.

https://sourceforge.net/p/octave/interval/ci/445d7e5150aa
summary:     @infsup/sum.m, @infsupdec/sum.m, mpfr_vector_sum_d.cc: Added support for N-dimensional vectors. Moved all vectorization to the oct-file. Small changes to functionality to mimic Octaves standard sum function.

https://sourceforge.net/p/octave/interval/ci/f27ad212735e
summary:     @infsup/fminsearch.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/19724b3f581e
summary:     mpfr_function_d.cc: Finalized support for N-dimensional arrays with binary functions and added support for it with ternary functions.

https://sourceforge.net/p/octave/interval/ci/023e2788e445
summary:     midrad.m: Added tests for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/617484113019
summary:     @infsupdec/infsupdec.m: Added full support for creating N-dimensional arrays and added tests

https://sourceforge.net/p/octave/interval/ci/f1c28a3d48b7
summary:     @infsup/subset.m, @infsupdec/subset.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/8e9990e3ed13
summary:     @infsup/strictsubset.m, @infsupdec/strictsubset.m: Fixed coding style and updated documentation

https://sourceforge.net/p/octave/interval/ci/5ca9b26b580d
summary:     @infsup/strictprecedes.m, @infsupdec/strictprecedes.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/1c76721003f1
summary:     @infsup/sdist.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/e044dea7c820
summary:     @infsup/precedes.m, @infsupdec/precedes.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/d48ca9206299
summary:     @infsup/overlap.m, @infsupdec/overlap.m: Fixed coding style and updated documentation

https://sourceforge.net/p/octave/interval/ci/1ad6b47f9335
summary:     @infsup/issingleton.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/11dccbcfd97e
summary:     @infsup/ismember.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/ade5da12cf88
summary:     @infsup/isentire.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/9fc874b71533
summary:     @infsup/isempty.m, @infsupdec/isempty.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/0c5c4eaf263b
summary:     @infsup/iscommoninterval.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/c980da83b634
summary:     @infsup/interior.m, @infsupdec/interior.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/f67d78651aee
summary:     @infsup/idist.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/70a389bd59f5
summary:     @infsup/hdist.m: Fixed coding style and updated documentation.

https://sourceforge.net/p/octave/interval/ci/e34401d2a6d6
summary:     @infsup/sin.m, @infsupdec/sin.m: Added workaround for bug #51283

https://sourceforge.net/p/octave/interval/ci/fea4c6516101
summary:     @infsup/gt.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/ce973432d240
summary:     @infsup/ge.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/a435986d4b0c
summary:     @infsup/lt.m, @infsupdec/lt.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/23fa89e3c461
summary:     @infsup/le.m, @infsupdec/le.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/730397b9e339
summary:     @infsup/disjoint.m, @infsupdec/disjoint.m: Updated documentation

https://sourceforge.net/p/octave/interval/ci/682ad849fc98
summary:     mpfr_function_d.cc: Added support for N-dimensional arrays for unary functions. Also temporary support for binary functions.

https://sourceforge.net/p/octave/interval/ci/9661e0256c51
summary:     crlibm_function.cc: Added support for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/0ec3d29c0779
summary:     @infsup/infsup.m: Fixed documentation and added missing line continuation

https://sourceforge.net/p/octave/interval/ci/ebb00e763a46
summary:     @infsup/disp.m: Fixed documentation

https://sourceforge.net/p/octave/interval/ci/1ab2749da374
summary:     @infsup/size.m: Fixed documentation

https://sourceforge.net/p/octave/interval/ci/507ca8478b72
summary:     @infsup/size.m: Fixes to the documentation

https://sourceforge.net/p/octave/interval/ci/667da39afece
summary:     nai.m: Small fix to one of the tests

https://sourceforge.net/p/octave/interval/ci/3eb13f91065a
summary:     hull.m: Fixes according to Olivers review

https://sourceforge.net/p/octave/interval/ci/20d784a6605d
summary:     @infsup/display.m: Vectorized loop

https://sourceforge.net/p/octave/interval/ci/fab7aa26410f
summary:     @infsup/disp.m: Fixes according to Olivers review, mainly details in the output

https://sourceforge.net/p/octave/interval/ci/4959566545db
summary:     @infsup/infsup.m: Updated documentation and added test for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/b25fbf0ad324
summary:     @infsup/infsup.m: Fixed coding style

https://sourceforge.net/p/octave/interval/ci/486a73046d5e
summary:     @infsup/disp.m: Updated documentation and added more tests for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/b6a0435da31f
summary:     exacttointerval.m: Uppdated documentation and added tests for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/bdad0e6da132
summary:     @infsup/intervaltotext.m, @infsupdec/intervaltotext.m: Updated documentation and added tests for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/55a3e708aef4
summary:     @infsup/intervaltotext.m: Fixed coding style

https://sourceforge.net/p/octave/interval/ci/536b0c5023ee
summary:     @infsup/subsref.m, @infsupdec/subsref.m: Added tests for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/c9654a4dcd8d
summary:     @infsup/size.m: Added support for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/636709d06194
summary:     @infsup/end.m: Added support for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/e6451e037120
summary:     nai.m: Added support for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/837cccf1d627
summary:     @infsup/resize.m, @infsupdec/resize.m: Added support for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/11cb9006b9ea
summary:     @infsup/reshape.m, @infsupdec/reshape.m: Added support for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/c276af6c42ae
summary:     @infsup/prepad.m, @infsupdec/prepad.m: Added small parts to the documentation and tests for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/c92c829dc946
summary:     @infsup/postpad.m, @infsupdec/postpad.m: Added small parts to the documentation and tests for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/0f8f6864123e
summary:     @infsup/meshgrid.m, @infsupdec/meshgrid.m: Added support for outputting 3-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/b7faafef6030
summary:     @infsup/cat.m, @infsupdec/cat.m: Added support for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/d416a17a6d3d
summary:     hull.m: Added support for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/031831f6bdfd
summary:     empty.m, entire.m: Added support for N-dimensional arrays

https://sourceforge.net/p/octave/interval/ci/2816b68b83c4
summary:     @infsup/display.m: Added support for displaying high dimensional arrays

https://sourceforge.net/p/octave/interval/ci/79c8dfa8ac54
summary:     @infsup/disp.m: Added support for displaying high dimensional arrays

https://sourceforge.net/p/octave/interval/ci/cc87924e52ac
summary:     @infsup/disp.m: Fixed coding style

https://sourceforge.net/p/octave/interval/ci/29a6b4ecda2a
summary:     @infsupdec/infsupdec.m: Temporary fix for creating high dimensional arrays

https://sourceforge.net/p/octave/interval/ci/039abcf7623d
summary:     @infsupdec/infsupdec.m: Fixed coding style

by Joel Dahne (noreply@blogger.com) at August 28, 2017 03:50 PM

Michele Ginesi

Final Resume

Summary

During the GSoC I worked on different special functions that needed to be improved or implemented from scratch. Discussing with my mentors and the community, we decided that my work should be pushed to a copy of the source code of Octave in my repository [1], and that I would work on a different bookmark for each function. When functions happened to be related (e.g. gammainc and gammaincinv), I worked on them on the same bookmark. I now present a summary and the bookmarks related to each function.

Incomplete gamma function

bookmark: gammainc
first commit: d1e03faf080b
last commit: 107dc1d24c1b
added files: /libinterp/corefcn/__gammainc_lentz__.cc, /scripts/specfun/gammainc.m, /scripts/specfun/gammaincinv.m
removed files: /libinterp/corefcn/gammainc.cc, /liboctave/external/slatec-fn/dgami.f, /liboctave/external/slatec-fn/dgamit.f, /liboctave/external/slatec-fn/gami.f, /liboctave/external/slatec-fn/gamit.f, /liboctave/external/slatec-fn/xdgami.f, /liboctave/external/slatec-fn/xdgamit.f, /liboctave/external/slatec-fn/xgmainc.f, /liboctave/external/slatec-fn/xsgmainc.f
modified files: NEWS, /doc/interpreter/arith.txi, /libinterp/corefcn/module.mk, /liboctave/external/slatec-fn/module.mk, /liboctave/numeric/lo-specfun.cc, /scripts/specfun/module.mk

Summary of the work

On this bookmark I worked on the incomplete gamma function and its inverse.
The incomplete gamma function gammainc had both missing features (the "scaled" options were missing) and problems with inaccurate results (see bug #47800). Part of the work had already been done by Marco and Nir; I had to finish it. We decided to implement it as a single .m file (gammainc.m) which calls (for some inputs) a subfunction written in C++ (__gammainc_lentz__.cc).
The inverse of the incomplete gamma function was missing in Octave (see bug #48036). I implemented it as a single .m file (gammaincinv.m) which uses Newton's method.

Bessel functions

bookmark: bessel
first commit: aef0656026cc
last commit: e9468092daf9
modified files: /liboctave/external/amos/README, /liboctave/external/amos/cbesh.f, /liboctave/external/amos/cbesi.f, /liboctave/external/amos/cbesj.f, /liboctave/external/amos/cbesk.f, /liboctave/external/amos/zbesh.f, /liboctave/external/amos/zbesi.f, /liboctave/external/amos/zbesj.f, /liboctave/external/amos/zbesk.f, /liboctave/numeric/lo-specfun.cc, /scripts/specfun/bessel.m

Summary of the work

On this bookmark I worked on Bessel functions.
There was a bug reporting NaN as output when the argument $x$ was too large in magnitude (see bug #48316). The problem was caused by the Amos library, which refuses to compute the output in such cases. I started by "unlocking" this library, so that it computes the output even when the argument is over the limit set by the library. Then I compared the results with other libraries (e.g. Cephes [2], the GNU Scientific Library [3] and the C++ special functions library [4]) and with some implementations of my own. In the end, I found that the "unlocked" Amos routines were the best ones to use, so we decided to keep them (in the "unlocked" form), modifying the error variable to signal the loss of accuracy.

Incomplete beta function

bookmark: betainc
first commit: 712a069d2860
last commit: e0c0dd40f096
added files: /libinterp/corefcn/__betainc_lentz__.cc, /scripts/specfun/betainc.m, /scripts/specfun/betaincinv.m
removed files: /libinterp/corefcn/betainc.cc, /liboctave/external/slatec-fn/betai.f, /liboctave/external/slatec-fn/dbetai.f, /liboctave/external/slatec-fn/xbetai.f, /liboctave/external/slatec-fn/xdbetai.f
modified files: /libinterp/corefcn/module.mk, /liboctave/external/slatec-fn/module.mk, /liboctave/numeric/lo-specfun.cc, /liboctave/numeric/lo-specfun.h, /scripts/specfun/module.mk, /scripts/statistics/distributions/betainv.m, /scripts/statistics/distributions/binocdf.m

Summary of the work

On this bookmark I worked on the incomplete beta function and its inverse.
The incomplete beta function was missing the "upper" version and had reported bugs on input validation (see bug #34405) and inaccurate results (see bug #51157). We decided to rewrite it from scratch. It is now implemented as a single .m file (betainc.m) which performs the input validation; the output is then computed using a continued fraction evaluation, done by a C++ function (__betainc_lentz__.cc).
The inverse was present in Octave but missed the "upper" version (since it was missing in betainc itself). The function is now written as a single .m file (betaincinv.m) which implements Newton's method, with the initial guess computed by a few steps of the bisection method.

Integral functions

bookmark: expint
first commit: 61d533c7d2d8
last commit: d5222cffb1a5
added files: /libinterp/corefcn/__expint_lentz__.cc, /scripts/specfun/cosint.m, /scripts/specfun/sinint.m
modified files: /doc/interpreter/arith.txi, /libinterp/corefcn/module.mk, /scripts/specfun/expint.m, /scripts/specfun/module.mk

Summary of the work

On this bookmark I worked on the exponential integral, sine integral and cosine integral. I had already rewritten the exponential integral before the GSoC. Here I just moved the Lentz algorithm to an external C++ function (__expint_lentz__.cc), consistently with gammainc and betainc. I've also modified the exit criterion for the asymptotic expansion, using [5] (pages 1 -- 4) as reference.
The functions sinint and cosint were present only in the symbolic package of Octave; a numerical implementation was missing in the core. I wrote them as .m files (sinint.m and cosint.m). Both use the series expansion near the origin and relations with expint for the other values.

To do

There is still room for improvement for some of the functions I wrote. In particular, gammainc can be improved in accuracy for certain pairs of values, and I would like to write a templated C++ version of the various Lentz algorithms to avoid code duplication across the functions.
In October I will start a PhD in Computer Science, still here in Verona. This will allow me to remain in contact with my mentor Marco Caliari, so that we can keep working on these aspects.

[1] https://bitbucket.org/M_Ginesi/octave
[2] http://www.netlib.org/cephes/
[3] https://www.gnu.org/software/gsl/
[4] http://en.cppreference.com/w/cpp/numeric/special_math
[5] N. Bleistein and R.A. Handelsman, "Asymptotic Expansions of Integrals", Dover Publications, 1986.

by Michele Ginesi (noreply@blogger.com) at August 28, 2017 04:46 AM

Piyush Jain

Geometry Package (Octave)

Geometry package: Implement boolean operations on polygons

As part of GSoC 2017, this project is intended to implement a set of boolean operations and supporting functions for acting on polygons. These include the standard set of operations: union/OR, intersection/AND, difference/subtraction, and exclusive-or/XOR. The following functions are also to be implemented: polybool, ispolycw, poly2ccw, poly2cw, poly2fv, polyjoin, and polysplit.

This repository is a clone (fork) of the Geometry package, which is part of the Octave Forge project.

This fork adds new functions to the official Geometry package as part of GSoC (Google Summer of Code) 2017.

The official Geometry Package can be found here

Link to commits on official repo:

Added files and functions

  1. /inst/polygons2d/clipPolygon_mrf.m
  2. /inst/polygons2d/private/_poly2struct_.m
  3. /src/martinez.cpp
  4. /src/polygon.cpp
  5. /src/utilities.cpp
  6. /src/polybool_mrf.cc
  7. /inst/polygons2d/funcAliases

Bonding Period

After discussion with my mentor and keeping my proposal in mind, I tried to understand and list the tasks in more detail. I first familiarized myself with the conventions of the organisation, and with the first basic thing I would need throughout this project, i.e. how to create an oct-interface so that C++ code can be executed from Octave. My first goal was to explore whether the already implemented mex-interface of the geometry package could be improved in performance by replacing it with an oct-interface. So, to understand how oct-files work, I started implementing small things as oct-files to get familiar with them.

First Coding Phase

As stated, there is an already implemented Geometry 3.0 package, which has a mex interface for its functions. I tried to compare its performance with an oct-interface version. For benchmarking, I first implemented my own function for polyUnion (using the Clipper library) with an oct-interface (find it here). Then, I compared its performance over a number of different sets of polygons (parametrized by the number of vertices) and recorded the elapsed times with both interfaces. On plotting number of vertices vs. elapsed time (for oct and mex), the following observations were made (a sketch of such a benchmark loop is given after the list):

  • The oct interface had a better performance than mex.
  • For 10000 vertices, the oct interface took about 0.008 seconds while the mex interface took about 0.014 seconds. This means the oct interface took 8×10^-3 seconds per 10^4 vertices, i.e. 8×10^-7 seconds per vertex. For mex, it was 14×10^-7 seconds per vertex.
  • As can be seen from the above data, the oct interface was less than twice as fast as the mex interface. From these observations, it was concluded that it is not worth changing the interface from mex to oct, since there was not much improvement in performance. Thus, our next goal became to incorporate new algorithms.
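A minimal sketch of such a benchmark loop (polyUnion_oct and polyUnion_mex are hypothetical stand-ins for the oct- and mex-interface functions, not the actual names in the repository):

## Time polygon union for growing vertex counts with both interfaces.
nverts = round (logspace (2, 4, 10));
t = zeros (numel (nverts), 2);
for k = 1:numel (nverts)
  theta = linspace (0, 2*pi, nverts(k))';
  P = [cos(theta), sin(theta)];           ## polygon 1: unit circle
  Q = [0.5 + cos(theta), sin(theta)];     ## polygon 2: shifted copy
  tic; polyUnion_oct (P, Q); t(k,1) = toc;
  tic; polyUnion_mex (P, Q); t(k,2) = toc;
endfor
plot (nverts, t);
legend ("oct", "mex");
xlabel ("Number of vertices");
ylabel ("Elapsed time (s)");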

After spending a decent amount of time studying the new algorithm and understanding its implementation, I have now started to implement the polybool function. I also compared its performance with the clipPolygon already present in the current geometry package. The new algorithm has a better performance than the old one.

The implementation of the boolean operations on polygons (DIFFERENCE, INTERSECTION, XOR and UNION) with the new algorithm is almost done. A little work on tests, demos and documentation is still needed, and I am working on that.

More about the algorithm by F. Martínez, A.J. Rueda, F.R. Feito

The algorithm is very easy to understand, among other things because it can be seen as an extension of the classical plane-sweep algorithm for computing the intersection points between a set of segments. When a new intersection between the edges of the polygons is found, the algorithm subdivides the edges at the intersection point. This produces a plane-sweep algorithm with only two kinds of events, left and right endpoints, making the algorithm quite simple. Furthermore, the subdivision of edges provides a simple way of processing degeneracies. Overall sketch of the approach for computing Boolean operations on polygons:

  • Subdivide the edges of the polygons at their intersection points.
  • Select those subdivided edges that lie inside the other polygon—or that do not, depending on the operation.
  • Join the edges selected in step 2 to form the result polygon.

    Complexity

    Let n be the total number of edges of all the polygons involved in the Boolean operation and k be the number of intersections of all the polygon edges. The whole algorithm runs in time O((n+k) log n).

After rough testing of this new algorithm on several cases, I am now adding a few tests to the m-script. A demo has also been added; it can be seen with the command demo clipPolygon_mrf. The tests can be run with test clipPolygon_mrf.

Second Coding Phase

After implementing and checking the polybool function, we are planning to include it in the next release of the geometry package. To move forward, I am first importing some functions from last year's GSoC repo and ensuring their MATLAB compatibility. Functions like poly2ccw, poly2cw, joinpolygons and splitPolygons have been created as aliases while ensuring compatibility with their MathWorks counterparts. After that, some time was invested in understanding the CGAL library and its implementation.

The further plan is to sync the matgeom package with the geometry package.

Third Coding Phase

Proceeding towards the next goal, the idea is to devise some way to at least partly automate the syncing of the matgeom and geometry packages. The issue is that when a new release of the geometry package is planned, there are things which have been updated in matgeom but not in their geometry counterparts (when they exist). So, before every release, a lot of time has to be invested in manually checking each edit and syncing it into geometry.

To achieve this, a workaround was first implemented on a dummy repository - dummyMatGeom. Its master branch is matGeom (dummy) and another branch (named geometry) contains the geometry package (dummy). To test the entire procedure, go to the dummy repository dummyMatGeom, pull both branches into different folders, say "dummyMatGeom" for the master branch and "dummyGeom" for the geometry branch, and then follow the steps explained on the wiki page.

Challenges

  • Clearly, the above procedure will only sync the script of a function, not its tests and demos, which are in separate folders in a Matlab package structure. Even if we try to concatenate the corresponding test/demo scripts with the function scripts (as in an Octave package structure), there will be discrepancies, because the notions of writing tests for Octave and Matlab packages are quite different. The way Octave allows tests to work is unique to Octave, as explained here. So, we can't simply concatenate the Matlab test scripts with the functions.

  • Git doesn’t preserve the original version of the geometry scripts and overwrites the whole file completely. For example:

  1. Original file at matGeom (upstream)

% Bla bla
% bla bla bla

function z = blabla (x,y)
% Help of function
for i=1:length(x)
   z(i) = x(i)*y(i);
end

  2. Ported to geometry

# Copyright - Somebody
# Bla bla
# bla bla bla

# texinfo
# Help of function

function z = blabla (x,y)
   z = x .* y;
endfunction

  3. Updated in matGeom

% Bla bla
% bla bla bla

function z = blabla (x,y)
% Help of function
% updated to be more clear
z = zeros (size(x));
for i=1:length(x)
   z(i) = x(i)*y(i);
end

  4. After syncing, the expected result is something like this:

# Copyright - Somebody
# Bla bla
# bla bla bla

# texinfo
# Help of function
# updated to be more clear

function z = blabla (x,y)
   z = zeros (size(x));
   z = x .* y;
endfunction

But this doesn’t happen as expected: Git just finds the files which have been modified and overwrites them completely. Considering possible solutions, there are tools like Git's interactive patch selection (e.g. git add -p) which allow us to select specifically the lines we want committed, but that would not serve our purpose, as it would be no better than syncing manually, file by file. Looking for a better solution to handle this!

Now, the further idea is to release geometry, and I am getting involved in it to get a feel of how things are done.

Thus, as the program concludes, it’s time to say goodbye to GSoC’17. It was overall a great learning experience.

August 28, 2017 12:00 AM

August 19, 2017

Michele Ginesi

Integral functions

During the last week I made a few modifications to expint.m and wrote sinint.m and cosint.m from scratch. All the work done can be found on the bookmark expint of my repository.

expint

As I mentioned here, I rewrote expint.m from scratch before the GSoC. During the last week I moved the Lentz algorithm to a .cc function (in order to remain consistent with the implementations of gammainc and betainc) and added a few tests.

sinint

The sinint function is present in the symbolic package, but a numerical implementation is not present in the core.
The sine integral is defined as $$ \text{Si} (z) = \int_0^z \frac{\sin(t)}{t}\,dt. $$ To compute it we use the series expansion $$ \text{Si}(z) = \sum_{n=0}^\infty \frac{(-1)^n z^{2n+1}}{(2n+1)(2n+1)!} $$ when the modulus of the argument is smaller than 2. For bigger values we use the following relation with the exponential integral $$ \text{Si}(z) = \frac{1}{2i} (E_1(iz)-E_1(-iz)) + \frac{\pi}{2},\quad |\arg(z)| < \frac{\pi}{2},$$ and the symmetry relations $$ \text{Si}(-z) = -\text{Si}(z), $$ $$ \text{Si}(\bar{z}) = \overline {\text{Si}(z)}. $$ The function is written as a single .m file.
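As an illustration (this is a direct transcription of the series, not the package code), the small-argument branch could look like:

## Truncated series for Si(z), adequate when |z| < 2; nterms is an assumption.
function y = si_series (z, nterms = 25)
  y = 0;
  for n = 0:nterms
    y += (-1)^n * z.^(2*n+1) / ((2*n+1) * factorial (2*n+1));
  endfor
endfunction

For example, si_series (1) gives approximately 0.9461, which matches Si(1).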

cosint

As with sinint, cosint is present in the symbolic package, but a numerical implementation is missing in the core.
The cosine integral is defined as $$ \text{Ci} (z) = -\int_z^\infty \frac{\cos(t)}{t}\,dt. $$ An equivalent definition is $$ \text{Ci} (z) = \gamma + \log z + \int_0^z \frac{\cos t - 1}{t}\,dt. $$ To compute it we use the series expansion $$ \text{Ci}(z) = \gamma + \log z + \sum_{n=1}^\infty \frac{(-1)^n z^{2n}}{(2n)(2n)!} $$ when the modulus of the argument is smaller than 2. For bigger values we use the following relation with the exponential integral $$ \text{Ci}(z) = -\frac{1}{2} (E_1(iz)+E_1(-iz)),\quad |\arg(z)| < \frac{\pi}{2},$$ and the symmetry relations $$ \text{Ci}(-z) = \text{Ci}(z) - i\pi,\quad 0 < \arg(z) < \pi, $$ $$ \text{Ci}(\bar{z}) = \overline{\text{Ci}(z)}. $$ As for sinint, cosint is written as a single .m file.

by Michele Ginesi (noreply@blogger.com) at August 19, 2017 02:32 AM

August 18, 2017

Joel Dahne

The Final Polish

We are preparing for releasing version 3.0.0 of the interval package and this last week have mainly been about fixing minor bugs related to the release. I mention two of the more interesting bugs here.

Compact Format

We (Oliver) recently added support for "format compact" when printing intervals. It turns out that the way to determine whether compact format is enabled differs very much between Octave versions. There are at least three different ways to get the information.

In older releases (< 4.2.0, I believe) you use "get (0, "FormatSpacing")", but there appears to be a bug in versions < 4.0.0 where this always returns "loose".

For the current tip of the development branch you can use "[~, spacing] = format ()" to get the spacing.

Finally, in between these two versions you use "__compactformat__ ()".

In the end Oliver, probably, found a way to handle this mess and compact format should now be fully supported for intervals. The function to do this is available here https://sourceforge.net/p/octave/interval/ci/default/tree/inst/@infsup/private/__loosespacing__.m.
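For reference, a rough sketch of that version dispatch (not the actual implementation linked above) could look like:

## Sketch: detect loose/compact spacing across Octave versions.
function loose = islooseformat ()
  if (compare_versions (OCTAVE_VERSION (), "4.2.0", "<"))
    ## old interface; buggy in versions < 4.0.0, where it always
    ## reports "loose"
    loose = strcmp (get (0, "FormatSpacing"), "loose");
  elseif (exist ("__compactformat__"))
    ## intermediate versions
    loose = ! __compactformat__ ();
  else
    ## current development branch
    [~, spacing] = format ();
    loose = strcmp (spacing, "loose");
  endif
endfunction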

Dot-product of Empty Matrices

When updating "dot" to support N-dimensional arrays I also modified it so that it behaves similar to Octaves standard implementation. The difference is in how it handles empty input. Previously we had

> x = infsupdec (ones (0, 2));
> dot (x, x)
ans = 0×2 interval matrix

but with the new version we get

> dot (x, x)
ans = 1×2 interval vector
   [0]_com   [0]_com

which is consistent with the standard implementation.

In the function we use "min" to compute the decoration for the result. Normally "min (x)" and "dot (x, x)" return results of the same size (the dimension along which they are computed is set to 1), but they handle empty input differently. We have

> x = ones (0, 2);
> dot (x, x)
ans =
   0   0
> min (x)
ans = [](0x2)

This meant that the decoration would be incorrect, since the implementation assumed they always had the same size. Fortunately the solution was very simple: if the dimension along which we are computing the dot product is zero, the decoration should always be "com". So just adding a check for that was enough.

You could argue that "min (ones (0, 2))" should return "[inf, inf]", similarly to how many of the other reductions, like "sum" or "prod", return their unit for empty input. But this would most likely be very confusing for a lot of people, and it is not compatible with how Matlab does it either.

Updates on the Taylor Package

I have also had some time to work on the Taylor package this week. The basic utility functions are now completed and I have started to work on functions for actually computing with Taylor expansions. At the moment only a limited number of functions are implemented. For example, we can calculate the Taylor expansion of order 4 of the function $\frac{e^x + \log(x)}{1 + x}$ at $x = 5$.

## Create a variable of degree 4 and with value 5
> x = taylor (infsupdec (5), 4)
x = [5]_com + [1]_com X + [0]_com X^2 + [0]_com X^3 + [0]_com X^4

## Calculate the function
> (exp (x) + log (x))./(1 + x)
ans = [25.003, 25.004]_com + [20.601, 20.602]_com X + [8.9308, 8.9309]_com X^2 + [2.6345, 2.6346]_com X^3 + [0.59148, 0.59149]_com X^4



by Joel Dahne (noreply@blogger.com) at August 18, 2017 05:01 PM

August 12, 2017

Michele Ginesi

betaincinv

The inverse of the incomplete beta function was present in Octave, but without the "upper" option (since it was missing in betainc itself). We decided to rewrite it from scratch using Newton's method, as for gammaincinv (see my post on it if you are interested).
To make the code numerically more accurate, we decide which version ("lower" or "upper") to invert depending on the inputs.
First we compute the trivial values (0 and 1). Then the remaining entries are divided into two sets: those that will be inverted with the "lower" version and those that will be inverted with the "upper" one. In both cases, we perform 10 iterations of the bisection method and then run Newton's method.
The implementation (together with the new implementation of betainc) can be found on my repository, bookmark "betainc".
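To illustrate the scheme, here is a minimal sketch (not the package code) of the bisection-plus-Newton approach for a generic increasing function, where f and df are handles for the function and its derivative:

## Sketch: solve f(x) = y for an increasing f on [lo, hi].
function x = invert_monotone (f, df, y, lo, hi)
  ## ten bisection steps give a safe initial guess
  for k = 1:10
    mid = (lo + hi) / 2;
    if (f (mid) < y)
      lo = mid;
    else
      hi = mid;
    endif
  endfor
  ## Newton iterations from the midpoint of the remaining bracket
  x = (lo + hi) / 2;
  for k = 1:40
    step = (f (x) - y) / df (x);
    x -= step;
    if (abs (step) <= eps * abs (x))
      break;
    endif
  endfor
endfunction

For example, invert_monotone (@(t) betainc (t, 2, 3), @(t) 12*t.*(1-t).^2, 0.5, 0, 1) recovers the point where the lower incomplete beta ratio with a = 2, b = 3 equals 0.5.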

by Michele Ginesi (noreply@blogger.com) at August 12, 2017 08:52 AM

August 11, 2017

Joel Dahne

Improving the Automatic Tests

Oliver and I have been working on improving the test framework used for the interval package. The package shares a large number of tests with other interval packages through an interval test framework that Oliver created. Here is the repository.

Creating the Tests

Previously these tests were separated from the rest of the package and you usually ran them with the help of the Makefile. Now Oliver has moved them to the m-files and you can run them, together with the other tests for the function, with test @infsup/function in Octave. This makes it much easier to test the functions directly.

In addition to making the tests easier to use, we also wanted to extend them to test not only scalar evaluation but also vector evaluation. The test data, input and expected output, is stored in a cell array, and when performing the scalar testing we simply loop over that cell and run the function for each element. The actual code looks like this (in this case for plus)

%!test
%! # Scalar evaluation
%! testcases = testdata.NoSignal.infsup.add;
%! for testcase = [testcases]'
%!   assert (isequaln (...
%!     plus (testcase.in{1}, testcase.in{2}), ...
%!     testcase.out));
%! endfor

For testing the vector evaluation we simply concatenate the cell array into a vector and give that to the function. Here is what that code looks like

%! # Vector evaluation
%! testcases = testdata.NoSignal.infsup.add;
%! in1 = vertcat (vertcat (testcases.in){:, 1});
%! in2 = vertcat (vertcat (testcases.in){:, 2});
%! out = vertcat (testcases.out);
%! assert (isequaln (plus (in1, in2), out));

Lastly we also wanted to test evaluation of N-dimensional arrays. This is done by concatenating the data into a vector and then reshaping that vector into an N-dimensional array. But what size should we use for the array? Well, we want at least three dimensions, because otherwise we are not really testing N-dimensional arrays. My solution was to completely factor the length of the vector and use that as the size, testsize = factor (length (in1)); if the length of the vector has two or fewer factors, we add a few elements to the end until we get at least three factors. This is the code for that

%!test
%! # N-dimensional array evaluation
%! testcases = testdata.NoSignal.infsup.add;
%! in1 = vertcat (vertcat (testcases.in){:, 1});
%! in2 = vertcat (vertcat (testcases.in){:, 2});
%! out = vertcat (testcases.out);
%! # Reshape data
%! i = -1;
%! do
%!   i = i + 1;
%!   testsize = factor (numel (in1) + i);
%! until (numel (testsize) > 2)
%! in1 = reshape ([in1; in1(1:i)], testsize);
%! in2 = reshape ([in2; in2(1:i)], testsize);
%! out = reshape ([out; out(1:i)], testsize);
%! assert (isequaln (plus (in1, in2), out));

This works very well, except when the number of test cases is too small. If there are fewer than four tests this will fail. But there are only a handful of functions with that few tests, so I fixed those independently.

Running the tests

Okay, so we have created a bunch of new tests for the package. Do we actually find any new bugs with them? Yes!

The function pow.m failed on the vector test. The problem? In one place && was used instead of &. For scalar input I believe these behave the same, but they differ for vector input.

Both nthroot.m and pownrev.m failed the vector test. Neither allowed vectorization of the integer parameter. For nthroot.m this matches the standard Octave version, so it should perhaps not be treated as a bug. The function pownrev.m uses nthroot.m internally, so it had the same limitation. This time I would however treat it as a bug, because pown.m does allow vectorization of the integer parameter, and if that supports it the reverse function should probably also do so. So I implemented support for vectorization of the integer parameter for both nthroot.m and pownrev.m and they now pass the test.

No problems were found by the N-dimensional tests that the vector tests had not already found. This is a good indication that the support for N-dimensional arrays is at least partly correct. Always good to know!

by Joel Dahne (noreply@blogger.com) at August 11, 2017 03:04 PM

August 02, 2017

Michele Ginesi

betainc

The betainc function has two reported bugs: #34405 on input validation and #51157 on inaccurate results. Moreover, it is missing the "upper" version, which is present in MATLAB.

The function

The incomplete beta function ratio is defined as $$I_x(a,b) = \dfrac{B_x(a,b)}{B(a,b)},\quad 0\le x \le 1,\,a>0,\,b>0,$$ where $B(a,b)$ is the classical beta function and $$B_x(a,b)=\int_0^x t^{a-1}(1-t)^{b-1}\,dt.$$ In the "upper" version the integral goes from $x$ to $1$. To compute this we will use the fact that $$\begin{array}{rcl} I_x(a,b) + I_x^U(a,b) &=& \dfrac{1}{B(a,b)}\left( \int_0^x t^{a-1}(1-t)^{b-1}\,dt + \int_x^1 t^{a-1}(1-t)^{b-1}\,dt\right)\\ &=&\dfrac{1}{B(a,b)}\int_0^1 t^{a-1}(1-t)^{b-1}\,dt\\ &=&\dfrac{B(a,b)}{B(a,b)}\\ &=&1 \end{array}$$ and the relation $$I_x(a,b) + I_{1-x}(b,a) = 1$$ so that $$I_x^U(a,b) = I_{1-x}(b,a).$$
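A quick numerical sanity check of this last relation (assuming a betainc that already accepts the "upper" option, as the new implementation does):

## I_x^U(a, b) should equal I_{1-x}(b, a); the difference should be ~0.
x = 0.3;  a = 2.5;  b = 4;
betainc (x, a, b, "upper") - betainc (1 - x, b, a)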

The implementation

Even if it is possible to obtain a Taylor series representation of the incomplete beta function, it seems not to be used. Indeed the MATLAB help cites only the continued fraction representation present in "Handbook of Mathematical Functions" by Abramowitz and Stegun: $$I_x(a,b) = \dfrac{x^a(1-x)^b}{aB(a,b)}\left(\dfrac{1}{1+} \dfrac{d_1}{1+} \dfrac{d_2}{1+}\ldots\right)$$ with $$d_{2m+1} = -\dfrac{(a+m)(a+b+m)}{(a+2m)(a+2m+1)}x$$ and $$d_{2m} = \dfrac{m(b-m)}{(a+2m-1)(a+2m)}x,$$ which seems to be the same strategy used by GSL. To be more precise, this continued fraction is computed directly when $$x < \dfrac{a-1}{a+b-2},$$ otherwise it is used to compute $I_{1-x}(b,a)$ together with the fact that $$I_x(a,b) = 1-I_{1-x}(b,a).$$ In my implementation I use a continued fraction present in "Handbook of Continued Fractions for Special Functions" by Cuyt, Petersen, Verdonk, Waadeland and Jones, which is more complicated but converges in fewer steps: $$\dfrac{B(a,b)I_x(a,b)}{x^a(1-x)^b} = \mathop{\huge{\text{K}}}_{m=1}^\infty \left(\dfrac{\alpha_m(x)}{\beta_m(x)}\right),$$ where $$\begin{array}{rcl} \alpha_1(x) &=&1,\\ \alpha_{m+1}(x) &=&\dfrac{(a+m-1)(a+b+m-1)(b-m)m}{(a+2m-1)^2}x^2,\quad m\geq 1,\\ \beta_{m+1}(x) &=&a + 2m + \left( \dfrac{m(b-m)}{a+2m-1} - \dfrac{(a+m)(a+b+m)}{a+2m+1} \right)x,\quad m\geq 0. \end{array}$$ This is most useful when $$x\leq\dfrac{a}{a+b},$$ so the continued fraction is computed directly when this condition is satisfied, and used to evaluate $I_{1-x}(b,a)$ otherwise.
The function is now written as a .m file which checks the validity of the inputs and separates the values that need to be rescaled from those that don't. Then the continued fraction is computed by an external C++ function (__betainc_lentz__.cc). Finally, the .m file recovers $I_x(a,b)$ from the fraction.
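The continued fraction itself can be evaluated with the modified Lentz algorithm; the following generic sketch (alpha and beta are handles returning the m-th partial numerator and denominator; this is an illustration, not the code of __betainc_lentz__.cc) shows the idea:

## Modified Lentz algorithm for K_{m=1}^inf (alpha_m / beta_m).
function f = lentz (alpha, beta, maxit = 100)
  tiny = 1e-30;                 ## replaces zeros to avoid division by zero
  f = tiny;  C = f;  D = 0;
  for m = 1:maxit
    D = beta (m) + alpha (m) * D;
    if (D == 0), D = tiny; endif
    C = beta (m) + alpha (m) / C;
    if (C == 0), C = tiny; endif
    D = 1 / D;
    delta = C * D;
    f *= delta;
    if (abs (delta - 1) < eps)  ## converged
      break;
    endif
  endfor
endfunction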

betaincinv

The next step will be to write the inverse. It was already present in Octave, but it misses the "upper" version, so it has to be rewritten.

by Michele Ginesi (noreply@blogger.com) at August 02, 2017 04:12 AM

August 01, 2017

Michele Ginesi

Second period resume

Here I present a brief resume of the work done in this second month.

Bessel functions

The topic of this month was the Bessel functions. On the bug tracker only a problem regarding the "J" one is reported (see bug 48316), but the same problem is present in every type of Bessel function: they return NaN + NaNi when the argument is too big.

Amos, Cephes, C++ and GSL

Currently, Bessel functions in Octave are computed via the Amos library, written in Fortran. Studying the implementation I discovered that the reported bug follows from the fact that if the input is too large in modulus, the function zbesj.f sets IERR to 4 (IERR is a variable which describes how the algorithm terminated) and sets the output to zero; lo-specfun.cc then returns NaN when IERR=4. Obviously, the same happens for the other Bessel functions.
What I initially did was to "unlock" these .f files so that they still set IERR=4 but compute the output anyway, and to modify lo-specfun.cc to return the value even if IERR is 4. Then I tested the accuracy, together with other libraries.
On the bug report the Cephes library was suggested, so I tested it too, as well as the C++ special mathematical functions library and the GSL (GNU Scientific Library). Unfortunately, both these alternatives work worse than Amos. I also tried to study and implement some asymptotic expansions by myself, to use in the cases which give inaccurate results, unfortunately without success.
For completeness, the following table shows some results of the tests (ERR in the Cephes row denotes its "total loss of precision" error):

Argument   1e09          1e10          1e11          1e12          1e13
Amos       1.6257e-16    0.0000e+00    0.0000e+00    1.3379e-16    1.1905e-16
Cephes     2.825060e-08  ERR           ERR           ERR           ERR
GSL        4.8770e-16    2.2068e-16    4.2553e-16    1.3379e-16    1.1905e-16
C++        2.82506e-08   2.68591e-07   1.55655e-05   8.58396e-07   0.000389545


Argument   1e15          1e20          1e25          1e30
Amos       1.3522e-16    1.6256e-16    15.22810      2.04092
GSL        1.3522e-16    0             15.22810      2.04092

The problem with the double precision

As I explained in my last post, the problem seems to be that there are no efficient algorithms in double precision for arguments this large. In fact, the error made by Amos is quite small when compared with the value computed with SageMath (which I'm using as the reference), but only if we also use double precision in Sage: using more digits, one can see that even the first digit changes. Here are the tests:
sage: bessel_J(1,10^40).n()
7.95201138653707e-21
sage: bessel_J(1,10^40).n(digits=16)
-7.522035532561300e-21
sage: bessel_J(1,10^40).n(digits=20)
-5.7641641376196142462e-21
sage: bessel_J(1,10^40).n(digits=25)
1.620655257788838811378880e-21
sage: bessel_J(1,10^40).n(digits=30)
1.42325562284462027823158499095e-21
sage: bessel_J(1,10^40).n(digits=35)
1.4232556228446202782315849909515310e-21
sage: bessel_J(1,10^40).n(digits=40)
1.423255622844620278231584990951531026335e-21
The value stabilizes only when we use more than 30 digits (twice the number of digits used in double precision).

The decision

Even if we are aware that the result is not always accurate, for MATLAB compatibility we decided to unlock the Amos routines, since they are still the most accurate and, even more importantly, the supported input types are the same as in MATLAB (GSL, for example, doesn't accept complex values of $x$). Moreover, in Octave it is possible to obtain the value of IERR as an output, something that is not possible in MATLAB.
You can find the work on the bookmark "bessel" of my repository.

Betainc

During these last days I also started to implement betainc from scratch; I think it will be ready in the first days of August. Then it will be necessary to rewrite betaincinv as well, since the current version doesn't have the "upper" option. This should not be too difficult: I think we can use a simple Newton method (as for gammaincinv); the only problem will be, as for gammaincinv, to find good initial guesses.

by Michele Ginesi (noreply@blogger.com) at August 01, 2017 09:30 AM

July 28, 2017

Joel Dahne

A Package for Taylor Arithmetic

In the last blog post I wrote about what was left to do with implementing support for N-dimensional arrays in the interval package. There are still some things to do but I have had, and most likely will have, some time to work on other things. Before the summer I started to work on a proof of concept implementation of Taylor arithmetic in Octave and this week I have continued to work on that. This blog post will be about that.

A Short Introduction to Taylor Arithmetic

Taylor arithmetic is a way to calculate with truncated Taylor expansions of functions. The main benefit is that it can be used to calculate derivatives of arbitrary order.

Taylor expansions or Taylor series (I will use these words interchangeably) are well known, and from Wikipedia we have: the Taylor series of a real or complex valued function $f(x)$ that is infinitely differentiable at a real or complex number $a$ is the power series
$$
f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + ....
$$
From the definition it is clear that if we happen to know the coefficients of the Taylor series of $f$ at the point $a$ we can also calculate all derivatives of $f$ at that point by simply multiplying a coefficient with the corresponding factorial.

The simplest example of Taylor arithmetic is addition of two Taylor series. If $f$ has the Taylor series $\sum_{n=0}^\infty (f)_n (x-a)^n$ and $g$ the Taylor series $\sum_{n=0}^\infty (g)_n (x-a)^n$ then $f + g$ will have the Taylor series
$$
\sum_{n=0}^\infty (f + g)_n (x-a)^n = \sum_{n=0}^\infty ((f)_n + (g)_n)(x-a)^n.
$$
If we instead consider the product, $fg$, we get
$$
\sum_{n=0}^\infty (fg)_n (x-a)^n = \sum_{n=0}^\infty \left(\sum_{i=0}^n (f)_i(g)_{n-i}\right)(x-a)^n.
$$
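The product formula is just the Cauchy product (a convolution) of the coefficient sequences, so truncated Taylor multiplication can be checked in Octave with conv. A small example with $f(x) = 1 + x$ and $g(x) = 1 + 2x + 3x^2$ around $a = 0$:

## Entry k of a coefficient vector holds the coefficient of (x-a)^(k-1).
f = [1; 1; 0];
g = [1; 2; 3];
fg = conv (f, g);
fg = fg(1:numel (f))   ## truncated product: 1 + 3x + 5x^2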

With a bit of work you can find similar formulas for other standard functions. For example the coefficients, $(e^f)_n$, of the Taylor expansion of $\exp(f)$ are given by $(e^f)_0 = e^{(f)_0}$ and for $n > 0$
$$
(e^f)_n = \frac{1}{n}\sum_{i=0}^{n-1}(n-i)(e^f)_i(f)_{n-i}.
$$
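In Octave the recursion translates into a short double loop; the following is a minimal sketch (not the package implementation), where f(k) holds the coefficient $(f)_{k-1}$:

## Coefficients of exp(f) from the coefficients of f via the recursion above.
function e = taylor_exp (f)
  N = numel (f);
  e = zeros (size (f));
  e(1) = exp (f(1));           ## (e^f)_0 = e^{(f)_0}
  for n = 1:N-1
    s = 0;
    for i = 0:n-1
      s += (n - i) * e(i+1) * f(n-i+1);
    endfor
    e(n+1) = s / n;
  endfor
endfunction

For f = [0; 1; 0; 0], the coefficients of the function $x$, this returns [1; 1; 1/2; 1/6], the expected coefficients of $e^x$.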

When doing the computations on a computer we consider truncated Taylor series: we choose an order and keep only coefficients up to that order. There is also nothing that stops us from using intervals as coefficients, which allows us to get rigorous enclosures of derivatives of functions.

For a more complete introduction to Taylor arithmetic in conjunction with interval arithmetic see [1], which was my first encounter with it. For another implementation of it in code, take a look at [2].

Current Implementation Status

As mentioned in the last post my repository can be found here

When I started to write on the package, before summer, my main goal was to get something working quickly. Thus I implemented the basic functions needed to do some kind of Taylor arithmetic, a constructor, some help functions and a few functions like $\exp$ and $\sin$.

This last week I have focused on implementing the basic utility functions, for example size, and on rewriting the constructor. In the process I think I have broken the arithmetic functions; I will fix them later.

You can at least create and display Taylor expansions now. For example creating a variable $x$ with value 5 of order 3

> x = taylor (infsupdec (5), 3)
x = [5]_com + [1]_com X + [0]_com X^2 + [0]_com X^3

or a matrix with 4 variables of order 2

> X = taylor (infsupdec ([1, 2; 3, 4]), 2)
X = 2×2 Taylor matrix of order 2

ans(:,1) =

   [1]_com + [1]_com X + [0]_com X^2
   [3]_com + [1]_com X + [0]_com X^2

ans(:,2) =

   [2]_com + [1]_com X + [0]_com X^2
   [4]_com + [1]_com X + [0]_com X^2

If you want you can create a Taylor expansion with explicitly given coefficients you can do that as well

> f = taylor (infsupdec ([1; -2; 3; -4]))
f = [1]_com + [-2]_com X + [3]_com X^2 + [-4]_com X^3

This would represent a function $f$ with $f(a) = 1$, $f'(a) = -2$, $f''(a) = 3 \cdot 2! = 6$ and $f'''(a) = -4 \cdot 3! = -24$.
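Multiplying the coefficients with the corresponding factorials, as described in the introduction, recovers the derivatives:

> c = [1; -2; 3; -4];
> factorial ((0:3)') .* c   ## f(a), f'(a), f''(a), f'''(a)
ans =

     1
    -2
     6
   -24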

Creating a Package

My goal is to create a full package for Taylor arithmetic along with some functions making use of it. The most important step is of course a working implementation, but there are other things to consider as well, a few of which I have not completely understood yet. Depending on how much time I have next week I will try to read a bit more about them and probably ask some questions on the mailing list. Here are some of the things I have been thinking about.

Mercurial vs Git?

I have understood that most of the Octave Forge packages use Mercurial for version control. I was not familiar with Mercurial before, so the natural choice for me was to use Git. Now I feel I could switch to Mercurial if needed, but I would like to understand the potential benefits better; I'm still new to Mercurial so I don't have the full picture. One benefit is of course that it is easier if most packages use the same system, but other than that?

How much work is it?

If I were to manage a package for Taylor arithmetic, how much work is it? This summer I have been working full time with Octave, so I have had lots of time, but this will of course not always be the case. I know it takes time if I want to continue to improve the package, but how much, and what kind of, continuous work is there?

What is needed besides the implementation?

From what I have understood there are a couple of things that should be included in a package besides the actual m-files, for example a Makefile for creating the release, an INDEX file and a CITATION file. I should probably also include some kind of documentation, especially since Taylor arithmetic is not that well known. Is there anything else I need to think about?

What is the process to get a package approved?

If I were to apply (whatever that means) for the package to be included in Octave Forge, what is the process for that? What is required before it can be approved, and what is required after it is approved?


[1] W. Tucker, Validated Numerics, Princeton University Press, 2011.
[2] F. Blomquist, W. Hofschuster, W. Krämer, Real and complex taylor arithmetic in C-XSC, Preprint 2005/4, Bergische Universität Wuppertal.

by Joel Dahne (noreply@blogger.com) at July 28, 2017 09:56 PM

July 27, 2017

Enrico Bertino

Deep learning functions

Hi there,

the second part of the project is finishing. This period was quite interesting because I had to dive into the theory behind neural networks. In particular [1], [2], [3] and [4] were very useful, and I will sum up some concepts below. On the other hand, coding became more challenging, and the focus was on the Python layer, in particular on how to structure the class in order to make everything scalable and generalizable. Summarizing the situation: in the first period I implemented all the Octave classes for the user interface. Those are Matlab compatible and they call some Python functions in a seamless way. On the Python side, the TensorFlow API is used to build the graph of the neural network and perform training, evaluation and prediction.

I implemented the three core functions: trainNetwork, SeriesNetwork and trainingOptions. To do this, I used a Python class in which I initialize an object with the graph of the network, and I store this object as an attribute of SeriesNetwork. That way, I call the methods of this class from trainNetwork to perform the training and from predict/classify to perform the predictions. Since it was quite hard to keep a clear vision of the situation, I used a Python wrapper (Keras) that allowed me to focus on the integration, "unpack" the problem and proceed "module" by "module". Now I am removing the dependency on the Keras library, using the TensorFlow API directly. The code is in my repo [5].

Since I have already explained in my last posts how I structured the package, in this post I would like to focus on the theoretical basis of the deep learning functions used in the package. In particular I will present the available layers and the parameters available for training.


Theoretical dive

I. Fundamentals

I want to start with a brief explanation of the perceptron and of backpropagation, two key concepts in the world of artificial neural networks.

Neurons

Let's start from the perceptron, which is the starting point for understanding neural networks and their components. A perceptron is simply a "node" that takes several binary inputs, $ x_1, x_2, ... $, and produces a single binary output.

The neuron's output, 0 or 1, is determined by whether the linear combination of the inputs $ \omega \cdot x = \sum_j \omega_j x_j $ is less than or greater than some threshold value. This is a simple mathematical model, but it is very versatile and powerful, because we can combine many perceptrons and, by varying the weights and the threshold, obtain different models. Moving the threshold to the other side of the inequality and replacing it by what's known as the perceptron's bias, b = −threshold, we can rewrite it as


$ out = \begin{cases} 0 & \omega \cdot x + b \leq 0 \\ 1 & \omega \cdot x + b > 0 \end{cases} $

Using perceptrons as the artificial neurons of a network, it turns out that we can devise learning algorithms which automatically tune the weights and biases. This tuning happens in response to external stimuli, without direct intervention by a programmer, and it is what enables "automatic" learning.

Speaking about learning algorithms, the idea is simple: we make a small change in some weight or bias and observe the corresponding change in the output of the network. If a small change in a weight or bias causes only a small change in the output, then we can use this fact to modify the weights and biases so that the network behaves more in the manner we want. The problem is that this isn't what happens when our network contains perceptrons, since a small change in any single perceptron can cause its output to completely flip, say from 0 to 1. We can overcome this problem by introducing an activation function: instead of the binary output we use a function depending on the weights and bias. The most common is the sigmoid function:
$ \sigma (\omega \cdot x + b ) = \dfrac{1}{1 + e^{-(\omega \cdot x + b ) } } $


Figure 1. Single neuron 

With the smoothness of the activation function $ \sigma $ we are able to analytically measure the output changes, since $ \Delta out $ is a linear function of the changes $ \Delta \omega $ and $ \Delta b $:
$ \Delta out \approx \sum_j \dfrac{\partial out}{\partial \omega_j} \Delta \omega_j + \dfrac{\partial out}{\partial b} \Delta b $
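As a concrete illustration of the single neuron in Figure 1, a forward pass in Octave (weights, bias and inputs are made-up example values):

## Forward pass of one sigmoid neuron: out = sigma(w . x + b).
sigma = @(z) 1 ./ (1 + exp (-z));
w = [0.5; -1.2; 0.3];   ## weights
b = 0.1;                ## bias
x = [1; 0; 1];          ## inputs
out = sigma (w' * x + b)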

Loss function

Let x be a training input and y(x) the desired output. What we'd like is an algorithm which lets us find weights and biases so that the output from the network approximates y(x) for all x. The most commonly used loss function is the mean squared error (MSE):
$ L( \omega, b) = \dfrac{1}{2n} \sum_x || y(x) - out ||^2 $ ,
where n is the total number of training inputs and out is the vector of outputs from the network when x is the input.
To minimize the loss function there are many optimization algorithms. The one we will use is gradient descent, for which each iteration of an epoch is defined as:

$ \omega_k \rightarrow \omega_k' = \omega_k - \dfrac{\eta}{m} \sum_j \dfrac{\partial L_{X_j}}{\partial \omega_k} $
$ b_k \rightarrow b_k' = b_k - \dfrac{\eta}{m} \sum_j \dfrac{\partial L_{X_j}}{\partial b_k} $

where m is the size of the batch of inputs with which we feed the network and $ \eta $ is the learning rate.

Backpropagation

The last concept that I would like to emphasize is backpropagation. Its goal is to compute the partial derivatives $ \partial L / \partial \omega $ and $ \partial L / \partial b $ of the loss function L with respect to any weight or bias in the network. The reason is that those partial derivatives are computationally heavy, and without backpropagation the network training would be excessively slow.

Let $ z^l $ be the weighted input to the neurons in layer l, which can be viewed as a linear function of the activations of the previous layer: $ z^l = \omega^l a^{l-1} + b^l $.
In the fundamental steps of backpropagation we compute:

1) the final error:
$ \delta ^L = \nabla_a L \odot \sigma' (z^L) $
The first term measures how fast the loss is changing as a function of every output activation, and the second term measures how fast the activation function is changing at $ z^L $.

2) the error of every layer l:
$ \delta^l = ((\omega^{l+1})^T \delta^{l+1} ) \odot \sigma' (z^l) $


3) the partial derivative of the loss function with respect to any bias in the net
$ \dfrac{\partial L}{\partial b^l_j} = \delta^l_j $


4) the partial derivative of the loss function with respect to any weight in the net
$ \dfrac{\partial L}{\partial \omega^l_{jk}} = a_k^{l-1} \delta^l_j $
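To make the four steps concrete, here is a minimal Octave sketch of one backpropagation pass for a two-layer network with sigmoid activations and MSE loss. All shapes and values are hypothetical; this only illustrates the equations above and is not the package code.

sigmoid  = @(z) 1 ./ (1 + exp (-z));
dsigmoid = @(z) sigmoid (z) .* (1 - sigmoid (z));

x = [0.2; 0.7];  y = 1;             # one training example (hypothetical)
W1 = randn (3, 2);  b1 = zeros (3, 1);
W2 = randn (1, 3);  b2 = 0;

## Forward pass, storing the weighted inputs z^l
z1 = W1 * x + b1;   a1 = sigmoid (z1);
z2 = W2 * a1 + b2;  a2 = sigmoid (z2);

## 1) final error; for the MSE loss the gradient w.r.t. a^L is (a^L - y)
delta2 = (a2 - y) .* dsigmoid (z2);
## 2) error of the previous layer
delta1 = (W2' * delta2) .* dsigmoid (z1);
## 3) partial derivatives w.r.t. the biases
db2 = delta2;   db1 = delta1;
## 4) partial derivatives w.r.t. the weights
dW2 = delta2 * a1';   dW1 = delta1 * x';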


We can therefore update the weights and the biases with gradient descent and train the network. Since the inputs can be too numerous, we can use only a random sample of them: Stochastic Gradient Descent (SGD) simply does away with the expectation in the update and computes the gradient of the parameters using only a single or a few training examples. In particular, we will use SGD with momentum, a method that accelerates SGD in the relevant direction and damps oscillations. It does this by adding a fraction γ of the update vector of the past time step to the current update vector.
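A toy Octave sketch of the momentum update (the hyper-parameters and the one-dimensional loss are made up for illustration):

gamma = 0.9;  eta = 0.01;      # hypothetical momentum and learning rate
w = 0;  v = 0;                 # parameter and velocity
for k = 1:100
  grad = 2 * (w - 3);          # gradient of the toy loss (w - 3)^2
  v = gamma * v + eta * grad;  # add a fraction of the past update vector
  w = w - v;                   # parameter step
endfor
w                              # approaches the minimizer 3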


II. Layers

Here is a brief explanation of the layer functions that I am considering for the trainNetwork class.

Convolution2DLayer

The convolution layer is the core building block of a convolutional neural network (CNN) and it does most of the computational heavy lifting. It derives its name from the "convolution" operator. The primary purpose of convolution is to extract features from the input image while preserving the spatial relationship between pixels, by learning image features using small squares of input data.


Figure 2. Feature extraction with convolution (image taken from https://goo.gl/h7pXxf) 
In the example in Fig.2, the 3×3 matrix is called a 'filter' or 'kernel' and the matrix formed by sliding the filter over the image and computing the dot product is called the 'Convolved Feature' or 'Activation Map' (or the 'Feature Map'). In practice, a CNN learns the values of these filters on its own during the training process (although we still need to specify parameters such as the number of filters, the filter size, the architecture of the network etc. before training). The more filters we have, the more image features get extracted and the better our network becomes at recognizing patterns in unseen images.
The size of the Feature Map depends on three parameters: the depth (that corresponds to the number of filters we use for the convolution operation), the stride (that is the number of pixels by which we slide our filter matrix over the input matrix) and the padding (that consists in padding the input matrix with zeros around the border).
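As a sketch of the operation in Octave, conv2 computes such a feature map for one filter (note that conv2 flips the kernel, i.e. it performs a true convolution, whereas CNN implementations usually compute a cross-correlation; the sizes below are hypothetical):

img    = rand (28, 28);                 # hypothetical input image
kernel = rand (5, 5);                   # one 5x5 filter
fmap   = conv2 (img, kernel, "valid");  # no padding, stride 1
size (fmap)                             # 24x24, i.e. (28 - 5 + 1) in each direction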

ReLULayer


ReLU stands for Rectified Linear Unit and is a non-linear operation: $ f(x)=max(0,x) $. It is usually applied element-wise to the output of some other function, such as a matrix-vector product. It replaces all negative pixel values in the feature map by zero, with the purpose of introducing non-linearity into the network, since most of the real-world data we would want to learn is non-linear.
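In Octave the operation is a one-liner:

relu = @(x) max (0, x);   # element-wise rectification
relu ([-2, -0.5, 0, 3])   # ans = 0   0   0   3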

FullyConnectedLayer

Neurons in a fully connected layer have full connections to all activations in the previous layer, as seen in regular Neural Networks. Hence their activations can be computed with a matrix multiplication followed by a bias offset. In our case, the purpose of the fully-connected layer is to use these features for classifying the input image into various classes based on the training dataset. Apart from classification, adding a fully-connected layer is also a cheap way of learning non-linear combinations of the features. 

Pooling

It is common to periodically insert a pooling layer in-between successive convolution layers. Spatial Pooling (also called subsampling or downsampling) reduces the dimensionality of each feature map but retains the most important information. In particular, pooling
  • makes the input representations (feature dimension) smaller and more manageable
  • reduces the number of parameters and computations in the network, therefore controlling overfitting
  • makes the network invariant to small transformations, distortions and translations in the input image
  • helps us arrive at an almost scale invariant representation of our image
Spatial Pooling can be of different types: Max, Average, Sum etc.

MaxPooling2DLayer
In case of Max Pooling, we define a spatial neighborhood (for example, a 2×2 window) and take the largest element from the rectified feature map within that window. 

AveragePooling2DLayer
Instead of taking the largest element we could also take the average.
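A minimal Octave sketch of both variants with a 2x2 window and stride 2 (it assumes the input has even dimensions and is not the package implementation):

A = magic (4);                        # hypothetical 4x4 feature map
Pmax = zeros (2, 2);  Pavg = zeros (2, 2);
for i = 1:2
  for j = 1:2
    blk = A(2*i-1:2*i, 2*j-1:2*j);    # 2x2 pooling window
    Pmax(i, j) = max (blk(:));        # max pooling
    Pavg(i, j) = mean (blk(:));       # average pooling
  endfor
endfor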

DropoutLayer

Dropout in deep learning works as follows: one or more nodes of the network are switched off once in a while so that they do not interact with the network. With dropout, the learned weights of the nodes become somewhat more insensitive to the weights of the other nodes and learn to decide somewhat more on their own. In general, dropout helps the network to generalize better and increases accuracy, since the influence of a single node is decreased.

SoftmaxLayer

The purpose of the softmax classification layer is simply to transform all the net activations in the final output layer to a series of values that can be interpreted as probabilities. To do this, the softmax function is applied to the net inputs.
$ \phi_{softmax} (z_i) = \dfrac{e^{z_i}}{\sum_{j=0}^k e^{z_j}} $
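A numerically stable Octave sketch (subtracting the maximum before exponentiating avoids overflow and does not change the result):

softmax = @(z) exp (z - max (z)) ./ sum (exp (z - max (z)));
softmax ([1; 2; 3])   # probabilities that sum to 1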


CrossChannelNormalizationLayer

The Local Response Normalization (LRN) layer implements lateral inhibition, which in neurobiology refers to the capacity of an excited neuron to subdue its neighbors. This layer is useful when we are dealing with ReLU neurons, because they have unbounded activations and we need LRN to normalize that. We want to detect high frequency features with a large response. If we normalize around the local neighborhood of the excited neuron, it becomes even more sensitive compared to its neighbors. At the same time, it will dampen responses that are uniformly large in any given local neighborhood: if all the values are large, then normalizing those values will diminish all of them. So basically we want to encourage some kind of inhibition and boost the neurons with relatively larger activations.

III. training options

The training function takes as input a trainingOptions object that contains the parameters for the training. A brief explanation:

SolverName
The optimizer chosen to minimize the loss function. To guarantee Matlab compatibility, only Stochastic Gradient Descent with Momentum ('sgdm') is allowed.

Momentum
Parameter for the sgdm: it corresponds to the contribution of the previous step to the current iteration

InitialLearnRate
Initial learning rate η for the optimizer

LearnRateScheduleSettings
These are the settings for regulating the learning rate. It is a struct containing three values:
RateSchedule: if it is set to 'piecewise', the learning rate will drop by a factor RateDropFactor every RateDropPeriod epochs. 

L2Regularization
Regularizers allow applying penalties on layer parameters or layer activity during optimization. This is the factor of the L2 regularization.

MaxEpochs
Number of epochs for training

Verbose
Display information about the training every VerboseFrequency iterations

Shuffle
Random shuffle of the data before training if set to 'once'

CheckpointPath
Path for saving the checkpoints

ExecutionEnvironment
Choice of the hardware for the training: 'cpu', 'gpu', 'multi-gpu' or 'parallel'. The load is divided between workers on GPUs or CPUs according to the relative division set by WorkerLoad.

OutputFcn
Custom output functions to call during training after each iteration, passing a struct containing: the current epoch number, the current iteration number, TimeSinceStart, TrainingLoss, BaseLearnRate, TrainingAccuracy (or TrainingRMSE for regression) and State.

Peace,
Enrico

[1] Ian Goodfellow, Yoshua Bengio, Aaron Courville, Deep Learning, MIT Press, http://www.deeplearningbook.org, 2016
[2] http://ufldl.stanford.edu/tutorial/supervised/ConvolutionalNeuralNetwork/
[3] http://web.stanford.edu/class/cs20si/syllabus.html
[4] http://neuralnetworksanddeeplearning.com/
[5] https://bitbucket.org/cittiberto/octave-nnet/src

by Enrico Bertino (noreply@blogger.com) at July 27, 2017 09:34 PM

July 14, 2017

Joel Dahne

Ahead of the Timeline

One of my first posts on this blog was a timeline for my work during the project. Predicting the amount of time something takes is always hard. Often you tend to underestimate the complexity of parts of the work. This time, however, I overestimated the time the work would take.

Had my timeline been correct, I would have just started to work on folding functions (or reductions, as they are often called). Instead I have completed the work on them and also on the functions regarding plotting. In addition I have started to work on the documentation for the package as well as checking everything an extra time.

In this blog post I will go through what I have done this week, what I think is left to do and a little bit about what I might do if I complete the work on N-dimensional arrays in good time.

This Week

The Dot Function

The $dot$-function was the last function left in which to implement support for N-dimensional arrays. It is very similar to the $sum$-function so I already had an idea of how to do it. As with $sum$ I moved most of the handling of the vectorization from the m-files to the oct-file, the main reason being improved performance.

The $dot$-function for intervals is actually a bit different from the standard one. First of all it supports broadcasting, which the standard one does not:

> dot ([1, 2, 3; 4, 5, 6], 5)
error: dot: size of X and Y must match
> dot (infsupdec ([1, 2, 3; 4, 5, 6]), 5)
ans = 1x3 interval vector

  [25]_com   [35]_com   [45]_com

It also treats empty arrays a little differently, see bug #51333:

> dot ([], [])
ans = [](1x0)
> dot (infsupdec ([]), [])
ans = [0]_com



Package Documentation

I have done the minimal required changes to the documentation. That is, I moved support for N-dimensional arrays from Limitations to Features and added some simple examples of how to create N-dimensional arrays.

Searching for Misses

During the work I have tried to update the documentation for all functions to account for the support of N-dimensional arrays, and I have also tried to update some of the comments in the code. But as always, especially when working with a lot of files, you miss things, both in the documentation and in old comments.

I did a quick grep for the words "matrix" and "matrices" since they are candidates for being changed to "array". Doing this I found 35 files where I had missed things. It was mainly minor things, comments using "matrix" which I changed to "array", but also some documentation which I had forgotten to update.

What is Left?

Package Documentation - Examples

As mentioned above I have done the minimal required changes to the documentation. It would be very nice to add some more interesting examples using N-dimensional arrays of intervals in a useful way. Ironically I have not been able to come up with an interesting example myself, but I will continue to think about it. If you have an example that you think would be interesting and want to share, please let me know!

Coding Style

As I mentioned in one of my first blog posts, the coding style of the interval package did not follow the standard for Octave. During my work I have adapted all files I have worked with to the Octave coding standard. A lot of the files did not need any other changes, so they still use the old style. It would probably be a good idea to update them as well.

Testing - ITF1788

The interval test framework library ITF1788, developed by Oliver, is used to test the correctness of many of the functions in the package. At the moment it tests evaluation of scalars, but in principle it should be no problem to use it for testing vectorization or even broadcasting. Oliver has already started to work on this.

After N-dimensional arrays?

If I continue at this pace I will finish the work on N-dimensional arrays before the project is over. Of course the things that are left might take longer than expected, they usually do, but there is a chance that I will have time left after everything is done. So what should I do then? There are more things that can be done on the interval package, for example adding more examples to the documentation; however, I think I would like to start working on a new package for Taylor arithmetic.

Before GSoC I started to implement a proof of concept for Taylor arithmetic in Octave, which can be found here. I would then start to work on implementing a proper version of it, where I would actually make use of N-dimensional interval arrays. If I want to create a package for this I would also need to learn a lot of other things, one of them being how to manage a package on Octave Forge.

At the moment I will try to finish my work on N-dimensional arrays. Then I can discuss it with Oliver and see what he thinks about it.

by Joel Dahne (noreply@blogger.com) at July 14, 2017 04:59 PM

July 13, 2017

Joel Dahne

Set inversion with fsolve

This week my work has mainly been focused on the interval version of fsolve. I was not sure if and how this could make use of N-dimensional arrays, and to find that out I had to understand the function. In the end it turned out that the only generalization that could be done was trivial and required very few changes. However, I did find some other problems with the function that I have been able to fix. Connected to fsolve are the functions ctc_intersect and ctc_union. They also needed only minor changes to allow for N-dimensional input. I will start by giving an introduction to fsolve, ctc_union and ctc_intersect and then I will mention the changes I have made to them.

Introduction to fsolve

The standard version of fsolve in Octave is used to solve systems of nonlinear equations. That is, given a function $f$ and a starting point $x_0$ it returns a value $x$ such that $f(x)$ is close to zero. The interval version of fsolve does much more than this. It is used to enclose the preimage of a set $Y$ under $f$. Given a domain $X$, a set $Y$ and a function $f$ it returns an enclosure of the set
$$
f^{-1}(Y) = \{x \in X: f(x) \in Y\}.
$$
By letting $Y = 0$ we get similar functionality to the standard fsolve, with the difference that the output is an enclosure of all zeros of the function (compared to one point for which $f$ returns a value close to zero).

Example: The Unit Circle

Consider the function $f(x, y) = \sqrt{x^2 + y^2} - 1$, which is zero exactly on the unit circle. Plugging this into the standard fsolve with $(0.5, 0.5)$ as a starting guess we get

> x = fsolve (@(x) f(x(1), x(2)), [0.5, 0.5])
x = 0.70711 0.70711

which indeed is close to a zero. But we get no information about other zeros.

Using the interval version of fsolve with $X = [-3, 3] \times [-3, 3]$ as starting domain we get

> [x paving] = fsolve (f, infsup ([-3, -3], [3, 3]));
> x
x ⊂ 2×1 interval vector

     [-1.002, +1.002]
   [-1.0079, +1.0079]

Plotting the paving we get the picture

which indeed is a good enclosure of the unit circle.

How it works

In its simplest form fsolve uses a simple bisection scheme to find the enclosure. Using interval methods we can find enclosures of images of sets. Given a set $X_0 \subset X$ we have three different possibilities:
  • $f(X_0) \subset Y$ in which case we add $X_0$ to the paving
  • $f(X_0) \cap Y = \emptyset$ in which case we discard $X_0$
  • Otherwise we bisect $X_0$ and continue on the parts
By setting a tolerance of when to stop bisecting boxes we get the algorithm to terminate in a finite number of steps.
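A minimal recursive sketch of this bisection scheme using the interval package may look as follows; here f maps an interval box (an interval vector built with infsup) to an interval, Y is the target set and tol the minimal box width. This is a simplification for illustration, not the actual fsolve implementation.

function boxes = sivia (f, X, Y, tol)
  fX = f (X);
  if (all (subset (fX, Y)))
    boxes = {X};                          # f(X) inside Y: keep the box
  elseif (all (disjoint (fX, Y)))
    boxes = {};                           # f(X) misses Y: discard the box
  elseif (max (wid (X)) < tol)
    boxes = {X};                          # undecided but small: keep
  else
    [~, k] = max (wid (X));               # bisect the widest component
    m = mid (X(k));
    X1 = X;  X1(k) = infsup (inf (X(k)), m);
    X2 = X;  X2(k) = infsup (m, sup (X(k)));
    boxes = [sivia(f, X1, Y, tol), sivia(f, X2, Y, tol)];
  endif
endfunction

Called with, for example, the unit-circle function from above, a starting box and a tolerance, it returns a cell array of boxes covering the solution set.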

Contractors

Using bisection alone is not always very efficient, especially when the domain has many dimensions. One way to speed up the convergence is with what are called contractors. In short, a contractor is a function that takes the set $X_0$ and returns a set $X_0' \subset X_0$ with the property that $f(X_0 \setminus X_0') \cap Y = \emptyset$. It is a way of making $X_0$ smaller without having to bisect it as many times.

When you construct a contractor you use the reverse operations defined on intervals. I will not go into how this works; if you are interested you can find more information in the package documentation [1] and in these YouTube videos about Set Inversion Via Interval Analysis (SIVIA) [2].

The functions ctc_union and ctc_intersect are used to combine contractors on sets into contractors on unions or intersections of these sets.

Generalization to N-dimensional arrays

How can fsolve be generalized to N-dimensional arrays? The only natural thing to do is to allow the input and output of $f$ to be N-dimensional arrays. This is also no problem to do. While mathematically you would probably say that fsolve is used to do set inversion for functions $f: \mathbb{R}^n \to \mathbb{R}^m$, it can of course also be used, for example, on functions $f: \mathbb{R}^{n_1}\times \mathbb{R}^{n_2} \to \mathbb{R}^{m_1}\times \mathbb{R}^{m_2}$.

This is however a bit different when using vectorization. When not using vectorization (and not using contractors) fsolve expects that the function takes one argument which is an array with each element corresponding to a variable. If vectorization is used it instead assumes that the function takes one argument for each variable. Every argument is then given as a vector with each element corresponding to one value of the variable for which to compute the function. Here we have no use for N-dimensional arrays.

Modifications

The only change in functionality that I have made to the functions is to allow N-dimensional arrays as input and output when vectorization is not used. This required only minor changes, essentially changing expressions like
max (max (wid (interval)))

to
max (wid (interval)(:))

It was also enough to do these changes in ctc_union and ctc_intersect to have these support N-dimensional arrays.

I have made no functional changes for when vectorization is used. I have however made an optimization in the construction of the arguments to the function. The arguments are stored in an array, but before being given to the function they need to be split up into the different variables. This is done by creating a cell array with each element being a vector with the values of one of the variables. Previously the construction of this cell array was very inefficient: it split the interval into its lower and upper parts and then called the constructor to create an interval again. Now it copies the intervals into the cell without having to call the constructor. This actually seems to have been quite a big improvement: with the old version the example with the unit circle from above took around 0.129 seconds and with the new version it takes about 0.092 seconds. This is of course only one benchmark, but a speed-up of about 40% for this test is promising!

Lastly I noticed a problem in the example used in the documentation of the function. The function used is

# Solve x1 ^ 2 + x2 ^ 2 = 1 for -3 ≤ x1, x2 ≤ 3 again,
# but now contractions speed up the algorithm.
function [fval, cx1, cx2] = f (y, x1, x2)
  # Forward evaluation
  x1_sqr = x1 .^ 2;
  x2_sqr = x2 .^ 2;
  fval = hypot (x1, x2);
  # Reverse evaluation and contraction
  y = intersect (y, fval);
  # Contract the squares
  x1_sqr = intersect (x1_sqr, y - x2_sqr);
  x2_sqr = intersect (x2_sqr, y - x1_sqr);
  # Contract the parameters
  cx1 = sqrrev (x1_sqr, x1);
  cx2 = sqrrev (x2_sqr, x2);
endfunction

Do you see the problem? I think it took me more than a day to realize that the problems I was having were not because of a bug in fsolve but because this function computes the wrong thing. The function is supposed to be $f(x_1, x_2) = x_1^2 + x_2^2$ but when calculating the value it calls hypot, which is given by $\mathrm{hypot}(x_1, x_2) = \sqrt{x_1^2 + x_2^2}$. For $f(x_1, x_2) = 1$, which is used in the example, it gives the same result, but otherwise it will of course not work.

[1] https://octave.sourceforge.io/interval/package_doc/Parameter-Estimation.html#Parameter-Estimation
[2] https://www.youtube.com/watch?v=kxYh2cXHdNQ


by Joel Dahne (noreply@blogger.com) at July 13, 2017 10:24 AM

July 11, 2017

Michele Ginesi

Gnu Scientific Library

Bessel functions

During the second part of GSoC I have to work on Bessel functions. The bug relates in particular to the Bessel function of the first kind and regards the fact that the function is not computed if the argument $x$ is too large.
During this week I studied the actual implementation, i.e. the Amos library, written in Fortran. The problem is that zbesj.f simply refuses to compute the result when the input argument is too large (the same problem happens for all Bessel functions, both for real and complex arguments). What I did was to "unlock" the Amos code so that it computes the value in any case (which seems to be the strategy used by MATLAB). Then I compared its relative errors with those of the GNU Scientific Library (GSL).
First, I would point out some limitations present in GSL:
  • The parameter $x$ must be real, while it can be complex in Amos and MATLAB. The same holds for the parameter $\alpha$.
  • The class of the output does not adapt to the class of the input, returning always a double.
Doing some tests, it seems that the Amos routines work better in terms of accuracy (even if the errors are of the same order of magnitude). I concentrated on values for which Amos usually refuses to compute, since in every other zone of the $x$-$\alpha$ plane it is known that the error is of the order of 1e-15. For $\alpha\in\{-1,0,1\}$ they return the same result, while for other values of $\alpha$ Amos is in general more accurate.
Anyway, I would remark that there are, as far as I know, no accurate double-precision algorithms for arguments of very large magnitude. In fact, for such arguments, both Amos and GSL make a relative error of the order of unity. This problem is evident when using SageMath to compute accurate values, e.g.
sage: bessel_J(1,10^40).n()
7.95201138653707e-21
sage: bessel_J(1,10^40).n(digits=16)
-7.522035532561300e-21
sage: bessel_J(1,10^40).n(digits=20)
-5.7641641376196142462e-21
sage: bessel_J(1,10^40).n(digits=25)
1.620655257788838811378880e-21
sage: bessel_J(1,10^40).n(digits=30)
1.42325562284462027823158499095e-21
sage: bessel_J(1,10^40).n(digits=35)
1.4232556228446202782315849909515310e-21
sage: bessel_J(1,10^40).n(digits=40)
1.423255622844620278231584990951531026335e-21
The values "stabilize" only when the number of digits is bigger than 30, far away from double precision.

Incomplete gamma function

I also had a look at how GSL could be used to improve the incomplete gamma function. It is not possible, however, to use only the GSL functions gsl_sf_gamma_inc_P and gsl_sf_gamma_inc_Q due to their limitations:
  • There is no "scaled" option.
  • The value of $x$ must be real. This may not be a problem, since even MATLAB does not accept complex values (while the version I worked on does).
  • The parameter $x$ must be positive. This is actually a problem of MATLAB compatibility.
  • The class of the output does not adapt to the class of the input, returning always a double.
I tested the accuracy of GSL and it works better when $a\ll1$ and $x\ll1$, so I think I will fix gammainc.m using the algorithm of gsl_sf_gamma_inc_P and gsl_sf_gamma_inc_Q for those values of the input arguments.

Incomplete Beta function

The current incomplete beta function needs to be replaced. I've already studied the continued fraction expansion (which seems to be the best one to use). In GSL it is implemented well, but it still presents two limitations:
  • It does not adapt to the input class (it always works in double precision).
  • There is no "upper" version. This is missing also in the current betainc function, but it is present in MATLAB. Unfortunately, it is not sufficient to compute it as $1 - I_x(a,b)$; it is necessary to find an accurate way to compute it directly.
So I will take inspiration from the GSL version of the function, but I think I will write betainc as a single .m file.

by Michele Ginesi (noreply@blogger.com) at July 11, 2017 09:01 AM

July 01, 2017

Michele Ginesi

First period resume

Here I present a brief resume of the work done in this first month.

1st week: Gammainc

The incomplete gamma function gave a series of inaccurate results in the previous implementation, so it was decided to rewrite it from scratch. A big part of the work was done by Marco and Nir (the discussion is here). I gave my contribution by fixing some problems in the implementation and by making a functioning commit (thanks to the suggestions given by Carne during the OctConf in Geneva).

2nd and 3rd week: Gammaincinv

The inverse of the incomplete gamma function was completely missing in Octave (and this was a problem of compatibility with MATLAB). Now the function is present, written as a single .m file. The implementation consists of a simple, but efficient, Newton's method.
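A sketch of the idea in Octave (hypothetical inputs, not the actual gammaincinv code): to solve gammainc (x, a) = y for x, iterate Newton's method using the derivative of the regularized lower incomplete gamma function, which is x^(a-1) e^(-x) / gamma(a).

a = 2;  y = 0.5;                           # hypothetical inputs
x = a;                                     # crude starting guess
for k = 1:20
  fx  = gammainc (x, a) - y;               # residual
  dfx = x^(a - 1) * exp (-x) / gamma (a);  # derivative of the lower tail
  x   = x - fx / dfx;                      # Newton step
endfor
x                                          # gammainc (x, a) is now ~0.5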

Last week: Betainc

When I first submitted my timeline, there was only one bug related to betainc, involving the input validation. During the first period of GSoC a new bug of the "incorrect result" type emerged. Because of this, my mentor and I decided that it was necessary to rewrite the function, so I used this last week to study efficient algorithms to evaluate it. In particular, it seems that the most efficient way is to use continued fractions, maybe after a rescaling of the function. I'm already working on it, but I will complete it during the first week of August, after finishing the work on Bessel functions.

by Michele Ginesi (noreply@blogger.com) at July 01, 2017 07:58 AM

June 30, 2017

Joel Dahne

One Month In

Now one month of GSoC has passed and so far everything has gone much better than I expected! According to my timeline this week would have been the first of two where I work on vectorization. Instead I have already mostly finished the vectorization and have started to work on other things. In this blog post I'll give a summary of what work I have completed and what I have left to do. I'll structure it according to where the functions are listed in the $INDEX$-file [1]. The number after each heading is the number of functions in that category.

Since this will mainly be a list of which files have been modified and which are left to do this might not be very interesting if you are not familiar with the structure of the interval package.

Interval constant (3)

All of these have been modified to support N-dimensional arrays.

Interval constructor (5)

All of these have been modified to support N-dimensional arrays.

Interval function (most with tightest accuracy) (63)

Almost all of these functions worked out of the box! At least after the API functions to the MPFR and crlibm libraries were fixed; those are further down in the list.

The only function that did not work immediately was $linspace$. Even though this function could be generalized to N-dimensional arrays, the standard Octave function only works for matrices (I think the Matlab version only allows scalars). This means that adding support for N-dimensional arrays in the interval version is not a priority. I might do it later on but it is not necessary.

Interval matrix operation (16)

Most of the matrix functions do not make sense for N-dimensional arrays. For example matrix multiplication and matrix inversion only make sense for matrices. However all of the reduction functions are also here; they include $dot$, $prod$, $sum$, $sumabs$ and $sumsq$.

At the moment I have implemented support for N-dimensional arrays for $sum$, $sumabs$ and $prod$. The functions $dot$ and $sumsq$ are not ready; I'm waiting to see what happens with bug #51333 [2] before I continue with that work. Depending on the bug I might also have to modify the behaviour of $sum$, $sumabs$ and $prod$ slightly.

Interval comparison (19)

All of these have been modified to support N-dimensional arrays.

Set operation (7)

All of these functions have been modified to support N-dimensional arrays except one, $mince$. The function $mince$ is an interval version of $linspace$ and the reasoning here is the same as that for $linspace$ above.

Interval reverse operation (12)

Like the interval functions above, all of the functions worked out of the box!

Interval numeric function (11)

Also these functions worked out of the box, with some small modifications to the documentation for some of them.

Interval input and output (9)

Here there are some functions which require comments; the ones I do not comment on have all gotten support for N-dimensional arrays.

$interval\_bitpack$
I think that this function does not make sense to generalize to N dimensions. It could perhaps take an N-dimensional array as input, but it will always return a row vector. I have left it as it is, for now at least.

$disp$ and $display$
These are functions that might be subject to change later on. At the moment they print N-dimensional arrays of intervals in the same way Octave does for normal arrays. It is however not clear how to handle the $\subset$ symbol and we might decide to change it.

Interval solver or optimizer (5)

The functions $gauss$ and $polyval$ are not generalizable to N-dimensional arrays. I don't think that $fzero$ can be generalized either; for it to work the function must be real-valued.

The function $fsolve$ can perhaps be modified to support N-dimensional arrays. It uses the SIVIA algorithm [3] and I have to dive deeper into how it works to see if it can be done.

For $fminsearch$ nothing needed to be done, it worked for N-dimensional arrays directly.

Interval contractor arithmetic (2)

Both of these functions are used together with $fsolve$ so they also depend on whether SIVIA can be generalized or not.

Verified solver or optimizer (6)

All of these functions work on matrices and cannot be generalized.

Utility function (29)

All of these for which it made sense have been modified to support N-dimensional arrays. Some of them only work for matrices; these are $ctranspose$, $diag$, $transpose$, $tril$ and $triu$. I have left them as they were, though I fixed a bug in $diag$.

API function to the MPFR and crlibm libraries (8)

These are the functions that in general required the most work. The ones I have added full support for N-dimensional arrays to are $crlibm\_function$, $mpfr\_function\_d$ and $mpfr\_vector\_sum\_d$. Some of them cannot be generalized; these are $mpfr\_matrix\_mul\_d$, $mpfr\_matrix\_sqr\_d$ and $mpfr\_to\_string\_d$. The functions $mpfr\_linspace\_d$ and $mpfr\_vector\_dot\_d$ are related to what I mentioned above for $linspace$ and $dot$.

Summary

So summing up the functions that still require some work are
  • Functions related to $fsolve$
  • The functions $dot$ and $sumsq$
  • The functions $linspace$ and $mince$
Especially the functions related to $fsolve$ might take some time to handle. My goal is to dive deeper into this next week.

Apart from this there are also some more things that need to be considered. The documentation for the package will need to be updated. This includes adding some examples which make use of the new functionality.

The interval package also did not follow the coding style for Octave. All the functions to which I have made changes have been updated with the correct coding style, but many of the functions that worked out of the box still use the old style. It might be that we want to unify the coding style for all files before the next release.

[1] The $INDEX$ file https://sourceforge.net/u/urathai/octave/ci/default/tree/INDEX
[2] Bug #51333 https://savannah.gnu.org/bugs/index.php?51333
[3] The SIVIA algorithm https://www.youtube.com/watch?v=kxYh2cXHdNQ

by Joel Dahne (noreply@blogger.com) at June 30, 2017 04:38 PM

Enrico Bertino

End of the first work period

Hi all,

this is the end of the first period of GSoC! It was a challenging but very interesting period and I am very excited about the next two months. It is the first time I have the opportunity to make something that has real value, albeit small, for someone else's work. Even when I have to do small tasks, like structuring the package or learning the Tensorflow APIs, I do them with enthusiasm because I have very clear in mind the ultimate goal and the value that my efforts would bring. It may seem trivial, but for a student this is not the daily bread :) I really have fun coding for this project and I hope this will last until the end!

Speaking about the project, I've spent some time wondering about the best way to test the correct installation of the Tensorflow Python APIs on the machine. The solution I settled on was to put the test in a function __nnet_init__ and call it in PKG_ADD (code in inst/__nnet_init__ in my repo [1]).

Regarding the code, in these last days I tried to connect the dots, calling a Tensorflow network from Octave in a "Matlab compatible" way. In particular, I use the classes that I made two weeks ago in order to implement a basic version of trainNetwork, which is the core function of this package. As I explained in my post of June 12, trainNetwork takes as input the data and two objects: the layers and the options. I had some difficulty during the implementation of the Layer class due to the inheritance and the overloading. Eventually, I decided to store the layers in a cell array as an attribute of the Layer class. By overloading subsref, I let the user access a specific layer with '()' indexing, like a classic array. With this kind of overloading I managed to solve the main problem of this structure, that is, the possibility to get a property of a layer by doing, for example, layers(1).Name.

classdef Layer < handle
  properties (Access = private)
    layers = {};
  endproperties

  methods (Hidden, Access = {?Layers})
    function this = Layer (varargin)
      n = numel (varargin);       # avoid shadowing the built-in nargin
      this.layers = cell (1, n);
      for i = 1:n
        this.layers{i} = varargin{i};
      endfor
    endfunction
  endmethods

  methods (Hidden)
    function obj = subsref(this, idx)
      switch idx(1).type
          case '()'
              idx(1).type = '{}';
              obj = builtin('subsref',this.layers,idx);
          case '{}'
              error('{} indexing not supported');
          case '.'
            obj = builtin('subsref',this,idx);
      endswitch
    endfunction

    function obj = numel(this)
      obj = builtin('numel',this.layers);
    endfunction

    function obj = size(this)
      obj = builtin('size',this.layers);
    endfunction
  endmethods
endclassdef


Therefore, I implemented the same example as in my last post in a proper way. In tests/script you can find the function cnn_linear_model, which consists simply of:

1. Loading the datasets
  load('tests/examples/cnn_linear_model/train_70.mat')
  load('tests/examples/cnn_linear_model/test_70.mat')


2. Defining layers and options
  layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(12,25)
    reluLayer
    fullyConnectedLayer(1)
    regressionLayer];
  options = trainingOptions('sgdm', 'MaxEpochs', 1);


3. Training
net = trainNetwork(trainImages, trainAngles, layers, options);
4. Prediction
acc = net.predict(testImages, testAngles) 

trainNetwork is a draft and I have not yet implemented the seriesNetwork class, but I think it's a good start :) In the coming weeks I will focus on the Tensorflow backend of the above mentioned functions, with the goal of having a working version at the end of the second period!

Peace,
Enrico

[1] https://bitbucket.org/cittiberto/octave-nnet/src

by Enrico Bertino (noreply@blogger.com) at June 30, 2017 09:32 AM

June 24, 2017

Enrico Bertino

Package structure

Hello! During the first period of GSoC I have worked mostly on analyzing the Matlab structure of the nnet package in order to guarantee compatibility throughout the whole project. The focus of the project is on convolutional neural networks, about which I will write in the next post.

Regarding the package structure, the core will be composed by three parts:

  1. Layers: there are 11 types of layers that I defined as Octave classes, using classdef. These layers can be concatenated in order to create a Layer object defining the architecture of the network. This will be the input for the training function. 
  2. Training: the core of the project is the training function, which takes as input the data, the layers and some options and returns the network as output. 
  3. Network: the network object has three methods (activations, classify and predict) that let the user compute the final classification and prediction. 

Figure 1: conv nnet flowchart 


I have already implemented a draft of the first point, the layer classes [1]. Every layer type inherits some attributes and methods from the parent class Layers. This is useful for creating the Layer object: the concatenation of different layers is always a Layer object that will be used as input for the training function. For this purpose, I overloaded the cat, horzcat and vertcat operators for Layers and subsref for Layer. I still need to finalize some details of the disp methods of these classes.

Figure 2: Layers classes definitions 

inputParser is used in every class for parameter management and attribute setting.

The objects of these classes can be instantiated with corresponding functions, implemented in the directory inst/. Here is an example of creating a Layer object:


> # TEST LAYERS
> a = imageInputLayer([2,2,3]); # first layer
> b = convolution2dLayer(1,1); # second layer
> c = dropoutLayer(1); # third layer
> layers = [a b c]; # Layer object from layers concat
> drop = layers(3); # Layer element access
> drop.Probability # Access layer attribute
ans = 0.50000


All functions can be tested with the package's make check.


The next step is to focus on the Tensorflow integration with Pytave, writing a complete test for a regression of image angles and comparing the precision and the computational time with Matlab.

[1] https://bitbucket.org/cittiberto/octave-nnet/src/35e4df2a6f887582216fc266fc75976a816f04fc/inst/classes/?at=default



by Enrico Bertino (noreply@blogger.com) at June 24, 2017 08:55 PM

Train a Convolutional Neural Network for Regression

Hello,

I spent the last period working mostly on Tensorflow, studying the APIs and writing some examples in order to explore possible implementations of neural networks. For this goal, I chose an interesting example from the Matlab examples at [1]. The dataset is composed of 5000 images, each rotated by an angle α, with a corresponding integer label (the rotation angle α). The goal is to make a regression to predict the angle of a rotated image and straighten it up.
All files can be found in tests/examples/cnn_linear_model in my repo [2].

I have kept the structure as in the Matlab example, but I generated a new dataset starting from LeCun's MNIST digits (datasets at [3]). Each image was rotated by a random angle between 0° and 70°, in order to keep the right orientation of the digits (code in dataset_generation.m). Fig. 1 shows some rotated digits with the corresponding original digits.

Figure 1. rotated images in columns 1,3,5 and originals in columns 2,4,6

The implemented linear model is:
$ \hat{Y} = \omega X + b $,
where the weights $\omega$ and the bias $b$ will be optimized during the training by minimizing a loss function. As loss function, I used the mean square error (MSE):
$  \dfrac{1}{n} \sum_{i=1}^n (\hat{Y_i} - Y_i)^2 $,
where the $Y_i$ are the training labels. 

In order to show the effective improvement given by a neural network, I started by making a simple regression, feeding the X variable of the model directly with the 28x28 images. Even if a closed form exists for the MSE minimization, I implemented an iterative method in order to discover some Tensorflow features (code in regression.py). To evaluate the accuracy of the regression, I consider a regression correct if the difference between angles is less than 20°. After 20 epochs, convergence was almost reached, giving an accuracy of $0.6146$.
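For comparison, the closed form mentioned above takes only a couple of Octave lines; the variables below are toy stand-ins for the flattened images and the angle labels:

X = rand (100, 784);              # 100 images of 28*28 = 784 pixels each
y = rand (100, 1) * 70;           # angles between 0 and 70 degrees
Xb    = [X, ones(rows (X), 1)];   # append a ones column for the bias b
theta = Xb \ y;                   # least-squares solution, minimizes the MSE
yhat  = Xb * theta;               # predicted angles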


Figure 2. rotated images in columns 1,3,5 and after the regression in columns 2,4,6

I now want to analyze the improvement given by feature extraction performed with a convolutional neural network (CNN). As in the Matlab example, I used a basic CNN since the input images are quite simple (only numbers with a monochromatic background) and consequently the features to extract are few.
  • INPUT [28x28x1] will hold the raw pixel values of the image, in this case an image of width 28, height 28
  • CONV layer will compute the output of neurons that are connected to local regions in the input, each computing a dot product between their weights and a small region they are connected to in the input volume. This results in volume such as [12x12x25]: 25 filters of size 12x12
  • RELU layer will apply an element-wise activation function, such as the $max(0,x)$ thresholding at zero. This leaves the size of the volume unchanged ([12x12x25]).
  • FC (i.e. fully-connected) layer will compute the class scores, resulting in volume of size [1x1x1], which corresponds to the rotation angle. As with ordinary Neural Networks, each neuron in this layer will be connected to all the numbers in the previous volume.
Figure 3. CNN linear model architecture

We can visualize the architecture with Tensorboard where the graph of the model is represented.

Figure 4. Model graph generated with Tensorboard

With the implementation in regression_w_CNN.py, the results are quite satisfying: after 15 epochs, it reached an accuracy of $0.75$ (205 seconds overall). One can see in Fig. 5 the marked improvement of the regression.

Figure 5. rotated images in columns 1,3,5 and after the CNN regression in columns 2,4,6

With the same parameters, Matlab reached an accuracy of $0.76$ in 370 seconds (code in regression_Matlab_nnet.m), so the performance is quite promising.

In the next post (in a few days), I will integrate the work done up to now, calling the Python class from within Octave and making a function that simulates the behavior of Matlab. Leveraging the layer classes that I made two weeks ago, I will implement a draft of the functions trainNetwork and predict, making the Matlab script callable also in Octave.

I will also take care of the dependencies of the package: I will add the dependency on Pytave in the package description and write a test in PKG_ADD in order to verify the version of Tensorflow during the installation of the package.

Peace,
Enrico
[3] http://yann.lecun.com/exdb/mnist/

by Enrico Bertino (noreply@blogger.com) at June 24, 2017 02:36 PM

June 22, 2017

Joel Dahne

Vectorization and broadcasting

At the moment I'm actually ahead of my schedule and this week I started to work on support for vectorization of N-dimensional arrays. The by far biggest challenge was to implement proper broadcasting and most of this post will be devoted to going through that. At the end I also mention some of the other things I have done during the week.

Broadcasting arrays

At the moment I have implemented support for broadcasting in all binary functions. Since all binary functions behave similarly with respect to broadcasting I will use $+$ in all my examples below, but this could in principle be any binary function working on intervals.

When adding two arrays, $A, B$, of the same size the result is just an array of the same size with each entry containing the sum of the corresponding entries in $A$ and $B$. If $A$ and $B$ do not have the same size then we try to perform broadcasting. The simplest form of broadcasting is when $A$ is an array and $B$ is a scalar. Then we just take the value of $B$ and add it to every element in $A$. For example

> A = infsupdec ([1, 2; 3, 4])
A = 2×2 interval matrix
   [1]_com   [2]_com
   [3]_com   [4]_com
> B = infsupdec (5)
B = [5]_com
> A + B
ans = 2×2 interval matrix
   [6]_com   [7]_com
   [8]_com   [9]_com

However it is not only when one of the inputs is a scalar that broadcasting can be performed. Broadcasting is performed separately for each dimension of the input. We require either that the dimensions are equal, in which case no broadcasting is performed, or that one of the inputs has that dimension equal to $1$; we then concatenate this input along that dimension until the two are of equal size. If for example $A$ has dimension $4\times4\times4$ and $B$ dimension $4\times4\times1$ we concatenate $B$ with itself along the third dimension four times to get two arrays of the same size. Since a scalar has all dimensions equal to 1 it can be broadcast to any size. Both $A$ and $B$ can also be broadcast at the same time, along different dimensions, for example

> A = infsupdec (ones (1, 5, 2))
A = 1×5×2 interval array
ans(:,:,1) =
   [1]_com   [1]_com   [1]_com   [1]_com   [1]_com
ans(:,:,2) =
   [1]_com   [1]_com   [1]_com   [1]_com   [1]_com
> B = infsupdec ([1, 2, 3, 4, 5; 6, 7, 8, 9, 10])
B = 2×5 interval matrix
   [1]_com   [2]_com   [3]_com   [4]_com    [5]_com
   [6]_com   [7]_com   [8]_com   [9]_com   [10]_com
> A + B
ans = 2×5×2 interval array
ans(:,:,1) =
   [2]_com   [3]_com   [4]_com    [5]_com    [6]_com
   [7]_com   [8]_com   [9]_com   [10]_com   [11]_com
ans(:,:,2) =
   [2]_com   [3]_com   [4]_com    [5]_com    [6]_com
   [7]_com   [8]_com   [9]_com   [10]_com   [11]_com

The implementation

I'll go through my implementation a little. I warn you that I'm not that familiar with the internals of Octave, so some things I say might be wrong, or at least not totally correct.

Internally all numerical arrays are stored as a linear vector and the dimensions are only metadata. This means that the most efficient way to walk through an array is with a linearly increasing index. When $A$ and $B$ have the same size the most efficient way to sum them is to go through the arrays linearly. In pseudo code

// Calculate C = A + B
for (int i = 0; i < numel (A); i++) {
  C(i) = A(i) + B(i);
}

This works fine, and apart from unrolling the loop or doing optimizations like that it is probably the most efficient way to do it.

If $A$ and $B$ are not of the same size then one way to do it would be to simply extend $A$ and/or $B$ along the needed dimensions. This would however require copying a lot of data, something we want to avoid (memory access is expensive). Instead we try to be smart with our indexing to access the right data from both $A$ and $B$.

After asking on the IRC-channel I got pointed to this Octave function which performs broadcasting. My implementation, which can be found here, is heavily inspired by that function.

Performance

Here I compare the performance of the new implementation with the old one. Since the old one could only handle matrices we are limited to those. We can measure the time it takes to add two matrices $A$ and $B$ with the code

tic; A + B; toc;

We do 10 runs for each test and all times are in seconds.

Addition of large matrices

Case 1: A = B = infsupdec (ones (1000, 1000));
       Old         New
       0.324722    0.277179
       0.320914    0.276116
       0.322018    0.276075
       0.318713    0.279258
       0.332041    0.279593
       0.318429    0.279987
       0.323752    0.279089
       0.317823    0.276036
       0.320509    0.280964
       0.320610    0.281123
Mean:  0.32195     0.27854
Case 2: A = B = infsupdec (ones (10, 100000));
        Old         New
        0.299321    0.272691
        0.297020    0.282591
        0.296460    0.274298
        0.294541    0.279661
        0.298306    0.277274
        0.301532    0.275531
        0.298163    0.278576
        0.298954    0.279868
        0.302849    0.275991
        0.297765    0.278806
Mean:   0.29849    0.27753

Case 3: A = B = infsupdec (ones (100000, 10));
        Old         New
        0.286433    0.279107
        0.289503    0.278251
        0.297562    0.279579
        0.292759    0.283311
        0.292983    0.281306
        0.290947    0.282310
        0.293025    0.286172
        0.294153    0.278886
        0.293457    0.278625
        0.296661    0.280804
Mean:   0.29275     0.28084

Broadcasting scalars

Case 4: A = infsupdec (ones (1000, 1000));
             B = infsupdec (1);
        Old         New
        0.298695    0.292419
        0.298158    0.292274
        0.305242    0.296036
        0.295867    0.291311
        0.296971    0.297255
        0.304297    0.292871
        0.298172    0.300329
        0.297251    0.291668
        0.299236    0.294128
        0.300457    0.298005
Mean:   0.29943     0.29463

Case 5: A = infsupdec (1);
             B = infsupdec (ones (1000, 1000));
         Old         New
        0.317276    0.291100
        0.316858    0.296519
        0.316617    0.292958
        0.316159    0.299662
        0.317939    0.301558
        0.322162    0.295338
        0.321277    0.293561
        0.314640    0.291500
        0.317211    0.295487
        0.317177    0.294376
Mean:   0.31773     0.29521

Broadcasting vectors

Case 6: A = infsupdec (ones (1000, 1000));
             B = infsupdec (ones (1000, 1));
        Old         New
        0.299546    0.284229
        0.301177    0.284458
        0.300725    0.276269
        0.299368    0.276957
        0.303953    0.278034
        0.300894    0.275058
        0.301776    0.276692
        0.302462    0.282946
        0.304010    0.275573
        0.301196    0.273109
Mean:   0.30151     0.27833

Case 7: A = infsupdec (ones (1000, 1000));
             B = infsupdec (ones (1, 1000));
         Old         New
        0.300554    0.295892
        0.301361    0.294287
        0.302575    0.299116
        0.304808    0.294184
        0.306700    0.291606
        0.301233    0.298059
        0.301591    0.292777
        0.302998    0.290288
        0.300452    0.291975
        0.305531    0.290178
Mean:   0.30278     0.29384

We see that in all cases the new version is faster or at least as fast as the old version. In the old version the order of the inputs made a slight difference in performance (case 4 vs case 5). In the new version both inputs are treated in exactly the same way so we no longer see that difference.

Possible improvements

In theory the cases where we broadcast a scalar could be the fastest ones. If $B$ is a scalar we could, in pseudo code, do something similar to
// Calculate C = A + B with B scalar
for (int i = 0; i < numel (A); i++) {
  C(i) = A(i) + B;
}

This is however not implemented at the moment. Instead we use the ordinary routine to calculate the index for $B$ (since it is a scalar it will always evaluate to $1$). If we want to optimize for this case we could add a check for whether $A$ or $B$ is a scalar and handle that separately. Of course this would also make the code more complicated, something to watch out for. For the moment I leave it as it is, but if we later want to optimize for that case it can be done.

Other work

Apart from the work to fix the broadcasting for binary functions there was very little to do for many of the functions. All binary functions that use this code, and all unary functions using an even simpler code, worked directly after fixing the oct-files. Some of them required small changes to the documentation but other than that the Octave scripts were fine. So mainly it has been a matter of actually going through all files and checking that they actually work.

Bug #51283

When going through all the functions I noticed a bug in the interval version of $\sin$,

 > sin (infsupdec (0))
ans = [0]_com
> sin (infsupdec ([0, 0]))
ans = 1×2 interval vector
   [0, -0]_com   [0, -0]_com

The second version here is wrong; $-0$ should never be allowed as a value for the supremum of an interval. I was able to track this down to how Octave's $\max$ function works, see bug #51283. As Oliver writes there, the exact behaviour of the $\max$-function is not specified in IEEE Std 754-2008 so we cannot rely on that. To solve this I have added a line to manually set all $-0$ to $+0$ in the supremum of the interval.
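The fix relies on the fact that in IEEE 754 doubles $-0 == 0$ is true, so assigning a literal zero replaces any negative zeros; a minimal sketch of such a line:

u = [0, -0, 1];   # suppose the supremum contains a negative zero
u(u == 0) = 0;    # -0 == 0 is true, so every zero becomes +0
1 ./ u            # gives Inf Inf 1 instead of Inf -Inf 1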

 

by Joel Dahne (noreply@blogger.com) at June 22, 2017 05:12 PM

June 20, 2017

Michele Ginesi

Timetable: modification

According to my timetable (which you can find here), during this last week of June I should have worked on the input validation of betainc. Since a new bug related to this function has been found and, moreover, the current implementation doesn't accept the "lower" or "upper" tail (as MATLAB does), my mentor and I decided to use this week to start studying how to rewrite betainc (the main references will be [1] and [2]) and to use the last part of GSoC to actually implement it. In this way my timetable remains almost identical (I will use July to work on Bessel functions) and I will be able to fix this problem as well.

[1] Abramowitz, Stegun "Handbook of Mathematical Functions"
[2] Cuyt, Brevik Petersen, Vendonk, Waadeland "Handbook of Continued Fractions for Special Functions"

by Michele Ginesi (noreply@blogger.com) at June 20, 2017 07:01 AM

June 16, 2017

Joel Dahne

Construction and Printing

This week I have started to work on methods for constructing and printing N-dimensional arrays of intervals. In my timeline I estimated that this work would take two weeks. However, in this first week I have managed to complete most of the work. I will give some comments on how I have worked with the Mercurial repository, how the work went and different things I encountered along the way.

Working with Mercurial

This is essentially the first time I'm using Mercurial for revision control, though I have used git before. I quickly learned how to use it for the basic tasks that I need: committing, comparing files and checking the history. As mentioned in a previous post you can find my repository here [1].

Coding style

When I started to work with the files I realized that they did not follow Octave's coding standard [2]. After a short discussion on the mailing list we decided that I will update the files I change to follow the standard coding style. Usually it is not a good idea to change coding style and add functionality in the same commit. However, most of the changes to coding style are only whitespace changes, so they can be ignored using the -w flag in Mercurial. Thus we decided that as long as the coding style changes are such that they are ignored with -w I will do them in the same commit as the added functionality. If there are coding style changes that are not only whitespace, the most common example being too-long lines, I do a commit with only the changes to the coding style first. So if you want to take a look at the functionality I have added you will probably want to use the -w flag. Note however that I have not updated the coding style of any files I have not otherwise changed.

Committing

Normally I do one commit for each file, though in many cases the bare intervals and the decorated intervals have almost identical functions and in that case I commit changes to both at the same time. Of course it also happens that I have to go back and make more changes to a file; in that case I just do another commit.

The actual work

The work went much faster than I expected. The main reason for this is that Octave has very good support for indexing. For example expressions like

isnai(x.inf <= x.sup) = false;

work just as well for matrices as for N-dimensional arrays. In fact the constructor for bare intervals even worked for N-dimensional arrays from the beginning; there I only had to make slight modifications to the documentation and add some tests!

Not all functions were that easy though. Some functions that have not been updated in a while clearly assumed the input was a matrix, for example in $hull$

sizes1 = cellfun ("size", l, 1);
sizes2 = cellfun ("size", l, 2);


In most cases I only needed to add more general indexing, often times even making the code clearer.

In some functions all I had to do was to remove the check on the input data so that they would accept N-dimensional arrays. This was true for example in $cat$, where all I had to do was to remove the check and make some minor modifications to the documentation.

I can conclude by saying that Octave has great support for working with N-dimensional arrays. Since internally the data for intervals is stored as plain arrays I could make good use of it!

Noteworthy things

While most functions were straightforward to modify, some required some thought. How should they even work for N-dimensional input?

Disp

When modifying the $disp$-function I chose to mimic how Octave handles displaying N-dimensional arrays. I noticed that this is different from how Matlab handles it. In Matlab we have

> x = zeros (2, 2, 2)

x(:,:,1) =

     0     0
     0     0


x(:,:,2) =

     0     0
     0     0


while in Octave it's

> x = zeros (2, 2, 2)
x =

ans(:,:,1) =

   0   0
   0   0

ans(:,:,2) =

   0   0
   0   0


I don't know the reasoning behind Octave's version. At first glance I think I prefer the way Matlab does it, but since I'm working in Octave I chose that style.

The next question was how to handle the subset symbol, $\subset$. The interval package uses $=$ or $\subset$ depending on whether the string representation is exact. For example

> x = infsup (1/2048, 1 + 1/2048);
> format short; x
x ⊂ [0.00048828, 1.0005]
> format long; x
x = [0.00048828125, 1.00048828125]

How should this be handled for N-dimensional arrays? One way would be to switch all $=$ to $\subset$ if the representation is not exact. Another would be to use $\subset$ for all submatrices that do not have an exact string representation. The third way, and how it is implemented now, is to only change the first $=$ to $\subset$, the one after the variable name. Like this

> x(1,1,1:2) = infsup (1/2048, 1 + 1/2048)
x ⊂ 1×1×2 interval array

ans(:,:,1) =   [0.00048828, 1.0005]
ans(:,:,2) =   [0.00048828, 1.0005]


This might look a bit odd at first: in some places we use $=$ and in others $\subset$. Though I think it somehow makes sense: we are saying that $x$ is a subset of the $1\times1\times2$ interval array given by

ans(:,:,1) =   [0.00048828, 1.0005]
ans(:,:,2) =   [0.00048828, 1.0005]

which actually is true. Anyway, I will leave it like this for now and we might decide to change it later.

linspace and mince

The standard implementation of $linspace$ only supports scalar or vector input. It could be generalized to N-dimensional arrays by, for example, returning an N+1-dimensional array where the last dimension corresponds to the linearly spaced elements. But since this has not been done in the standard implementation, I will at least wait with adding it for intervals.

The function $mince$ can be seen as an interval generalization of $linspace$. It takes an interval and returns an array of intervals whose union covers it. This could similarly be expanded to N dimensions by creating the array along the N+1 dimension. But again we chose to at least wait with adding this.
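
For reference, here is what the current two-dimensional behaviour looks like (a small usage sketch, assuming the interval package is loaded):

pkg load interval
x = infsup (0, 1);
parts = mince (x, 4)   # 1x4 interval vector whose union covers [0, 1]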

meshgrid and ndgrid

The interval package already has an implementation of $meshgrid$. But since it previously did not support 3-dimensional arrays it had to fit 3-d data in a 2-d matrix. Now that it supports 3-d data it can output that instead.

Currently the interval package does not implement $ndgrid$. When I looked into it I realized that the standard implementation of $ndgrid$ actually works for interval arrays as well. I have not looked into the internals, but in principle it should only need the $cat$ function, which is implemented for intervals. Further, I noticed that the standard $meshgrid$ also works for intervals. However, the interval implementation differs in that it converts all input to intervals, whereas the standard implementation allows for non-uniform output. Using the interval implementation of $meshgrid$ we have

> [X Y] = meshgrid (infsup (1:3), 4:6)
X = 3×3 interval matrix

   [1]   [2]   [3]
   [1]   [2]   [3]
   [1]   [2]   [3]

Y = 3×3 interval matrix

   [4]   [4]   [4]
   [5]   [5]   [5]
   [6]   [6]   [6]

but if we fall back to the standard implementation (by removing the interval implementation) we get

> [X Y] = meshgrid (infsup (1:3), 4:6)
X = 3×3 interval matrix

   [1]   [2]   [3]
   [1]   [2]   [3]
   [1]   [2]   [3]

Y =

   4   4   4
   5   5   5
   6   6   6

Note that the last matrix is not an interval matrix.

So the question is: should we implement a version of $ndgrid$ that converts everything to intervals, or should we remove the implementation of $meshgrid$? It is most likely not a good idea for the two functions to behave differently. I think that removing the implementation of $meshgrid$ makes the most sense. First of all, it's less code to maintain, which is always nice. Secondly, you can manually convert all input to intervals if you want uniform output. If you do not want uniform output, the standard implementation works whereas the interval implementation does not, so the standard implementation is more general in a sense.

We have to choose what to do, but for now I leave it as it is.

Non-generalizable functions

From what I have found there is no way to create a 3-dimensional array in Octave in the same way you can create a 2-dimensional one with for example

M = [1, 2; 3, 4];

Instead higher dimensional arrays have to be created using other functions, for example $reshape$ or $zeros$, or by specifying the submatrices directly

M(:,:,1) = [1, 2; 3, 4];
M(:,:,2) = [5, 6; 7, 8];
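
The same array can also be built in one call with $reshape$ (keeping in mind that Octave fills arrays in column-major order):

M = reshape ([1, 3, 2, 4, 5, 7, 6, 8], 2, 2, 2);   # same M as above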

This means that the function $\_\_split\_interval\_literals\_\_$, which is used to split a string like $"[1, 2; 3, 4]"$ into its separate components, cannot really be generalized to N dimensions.


[1] https://sourceforge.net/u/urathai/octave/ci/default/tree/
[2] http://wiki.octave.org/Octave_style_guide

by Joel Dahne (noreply@blogger.com) at June 16, 2017 02:07 PM

Michele Ginesi

gammaincinv

The gammaincinv function

The inverse of the incomplete gamma function

Definition

Given two real values $y\in[0,1]$ and $a>0$, gammaincinv(y,a,tail) gives the value $x$ such that \[P(x,a) = y,\quad\text{ or }\quad Q(x,a) = y\] depending on the value of tail ("lower" or "upper").

Computation

The computation is carried out via the standard Newton method, solving for $x$ \[ y - P(x,a) = 0 \quad\text{ or }\quad y - Q(x,a) = 0. \] The Newton method uses as exit criteria a control on the residual and on the maximum number of iterations.
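
A minimal sketch of such an iteration for the "lower" tail (my own illustration, not the actual implementation; it uses the derivative $\frac{d}{dx}P(x,a) = e^{-x}x^{a-1}/\Gamma(a)$):

function x = newton_gammaincinv (y, a, x0)
  ## Solve y - P(x,a) = 0 by Newton's method; exit on a small residual
  ## or after a maximum number of iterations, as described above.
  x = x0;
  for it = 1 : 20
    res = y - gammainc (x, a);           # residual y - P(x,a)
    if (abs (res) < 10 * eps * abs (y))
      break;
    endif
    dPdx = exp (-x + (a - 1) * log (x) - gammaln (a));  # dP/dx
    x += res / dPdx;                     # Newton step
  endfor
endfunction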
For numerical stability, the initial guess and which gammainc function is inverted depend on the values of $y$ and $a$. We use the following property: \[\text{gammaincinv}(y,a,\text{`upper'})=\text{gammaincinv}(1-y,a,\text{`lower'})\] which follows immediately from the identity \[P(x,a) + Q(x,a) = 1.\] We start by computing the trivial values, i.e. $a=1$, $y=0$ and $y=1$. Then we rename the variables as follows (in order to obtain a more classical notation): if tail = "lower", then $p=y,q=1-y$; while if tail = "upper", $q=y,p=1-y$. We invert with $p$ as $y$ in the cases in which we invert the lower incomplete gamma function, and with $q$ as $y$ in the cases in which we invert the upper one.
Now, the initial guesses (and the corresponding version of the incomplete gamma function to invert) are the following:
  1. $p<\dfrac{(0.2(1+a))^a}{\Gamma(1+a)}$. We define \[r = (p\,\Gamma(1+a))^{\frac{1}{a}}\] and the parameters \begin{align*} c_2 &= \dfrac{1}{a+1}\\ c_3 &= \dfrac{3a+5}{2(a+1)^2(a+2)}\\ c_4 &= \dfrac{8a^2+33a+31}{3(a+1)^3(a+2)(a+3)}\\ c_5 &= \dfrac{125a^4+1179a^3+3971a^2+5661a+2888}{24(1+a)^4(a+2)^2(a+3)(a+4)}\\ \end{align*} Then the initial guess is \[x_0 = r + c_2\,r^2+c_3\,r^3+c_4\,r^4+c_5\,r^5\] and we invert the lower incomplete gamma function.
  2. $q<\dfrac{e^{-a/2}}{\Gamma(a+1)}$, $a\in(0,10)$. The initial guess is \[x_0 = - \log (q) - \log(\Gamma(a))\] and we invert the upper incomplete gamma function.
  3. $a\in(0,1)$. The initial guess is \[x_0 = (p\Gamma(a+1))^{\frac{1}{a}}\] and we invert the lower incomplete gamma function.
  4. Other cases. The initial guess is \[x_0 = a\,t^3\] where \[t = 1-d-\sqrt{d}\,\text{norminv}(q),\quad d=\dfrac{1}{9a}\] (where norminv is the inverse of the normal distribution function) and we invert the upper incomplete gamma function.
The first three initial guesses can be found in the article "Efficient and accurate algorithms for the computation and inversion of the incomplete gamma function ratios" by Gil, Segura and Temme, while the last one can be found in the documentation of the function igami.c of the Cephes library.
You can find my work on my repository on bitbucket here, bookmark "gammainc".

Comments

Working on this function I found some problems in gammainc. To solve them, I changed the flag for condition 5 and added separate functions to treat the Inf cases.

by Michele Ginesi (noreply@blogger.com) at June 16, 2017 10:39 AM

June 06, 2017

Michele Ginesi

gammainc

The gammainc function

The incomplete gamma function

Definition

There are actually four types of incomplete gamma functions:
  • lower (which is the default one) $$P(a,x) = \frac{1}{\Gamma(a)}\int_{0}^{x}e^{-t}t^{a-1}\,dt$$
  • upper$$Q(a,x) = \frac{1}{\Gamma(a)}\int_{x}^{+\infty}t^{a-1}e^{-t}dt$$
  • scaled lower and scaled upper which return the corresponding incomplete gamma function scaled by $$\Gamma(a+1)\frac{e^x}{x^a}$$

Computation

To compute the incomplete gamma function, there is a total of seven strategies depending on the position of the arguments in the $x-a$ plane.
  1. $x=0,a=0$. In this case, the output is given as follows:
    • for "lower" and "scaledlower" is $1$
    • for "upper" and "scaledupper" is $0$
  2. $x=0,a\neq0$. In this case:
    • for "lower" is $0$
    • for "upper" and "scalelower" is $1$
    • for "scaledupper" is $+\infty$
  3. $x\neq0,a=0$. In this case:
    • for "lower" is $1$
    • for "scalelower" is $e^x$
    • for "upper" and "scaledupper" is $0$
  4. $x\neq0,a=1$. In this case:
    • for "lower" is $1-e^x$
    • for "upper" is $e^{-x}$
    • for "scaledlower" is $\frac{e^x-1}{x}$
    • for "scaledupper" is $\frac{1}{x}$
  5. $0<x\leq36,a\in\{2,3,4,\ldots,17,18\}$. In this case we used the following expansion (which holds true only for $n\in\mathbb{N}$): denote $\gamma(a,x):=\Gamma(a)P(a,x)$, then $$\gamma(n,x) = (n-1)!\left( 1-e^{-x}\sum_{k=0}^{n-1} \frac{x^k}{k!}\right).$$ A quick numerical check of this identity is sketched after this list.
  6. $x+0.25<a$, or $x<0$, or $x$ not real, or $|x|<1$, or $a<5$. Here we used the expansion $$\gamma(a,x) = e^{-x} x^a \sum_{k=0}^{+\infty} \frac{\Gamma(a)}{\Gamma(a+1+k)}x^k.$$
  7. for all the remaining cases was used a continued fraction expansion. Call $\Gamma(a,x) = \Gamma(a) Q(a,x)$, the expansion is $$\Gamma(a,x) = e^{-x}x^a\left( \frac{1}{x+1-a-}\frac{1(1-a)}{x+3-a-}\frac{2(2-a)}{x+5-a-}\cdots \right).$$ This case is handled by an external .cc function.
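
As promised, a quick numerical check of the case 5 identity (a throwaway snippet, not part of the implementation):

a = 5;  x = 10;
k = 0 : a-1;
Pval = 1 - exp (-x) * sum (x.^k ./ factorial (k));  # gamma(a,x) / Gamma(a)
assert (Pval, gammainc (x, a), -1e-12)              # negative tol = relative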

My contribution

Actually I didn't participate from the start in the (re)making of this function: most of the work done on it is due to Nir and Marco (see the discussion on the bug tracker).
My contribution, before the GSoC, was to adapt the code so that the type of the output (and the tolerances inside the algorithms) is consistent with the type of the input (single, int or double). Then I corrected a few small bugs and made a patch, helped by Carnë, during the OctConf in Geneva.
During this first week of GSoC I fixed the input validation and added tests to this function, finding some problems in the implementation. Most of them depended on the fact that the continued fractions were used in cases in which they are not optimal. To solve these problems I changed the conditions so that the series expansion is used instead of the continued fractions.
Now gammainc also works properly with a complex argument (for the $x$ variable), while Matlab doesn't accept non-real input.
You can find my work on my repository on bitbucket here, bookmark "gammainc". I will work on this same bookmark the next two weeks while I will work on gammaincinv.

by Michele Ginesi (noreply@blogger.com) at June 06, 2017 09:09 AM

Carnë Draug

Function help text in scripts and command line

I guess I'm one of the few [[citation needed]] that writes Octave programs instead of only things meant to run from the Octave console. You end up with a few hundred lines of Octave and tens of functions in a single file. For debugging you just source the whole thing.

I always thought that Octave's help function only handled functions in their own m files, the help text being the first or second block of comments that does not start with Author or Copyright. But I was wrong. You can document functions in the Octave command line and you can also document functions in a script. Octave picks the block of comments before a function definition as that function's help text. Like so:

octave-cli:1> ## This is the help text of the foo function.
octave-cli:1> function x = foo (y, z), x = y+z; endfunction
octave-cli:2> ## This is the help text of the bar function.
octave-cli:2> ##
octave-cli:2> ## Multi-line help text are not really a problem,
octave-cli:2> ## just carry on.
octave-cli:2> function x = bar (y, z), x = y+z; endfunction
octave-cli:3> man foo
 'foo' is a command-line function

 This is the help text of the foo function.

octave-cli:4> man bar
 'bar' is a command-line function

 This is the help text of the bar function.

 Multi-line help text are not really a problem,
 just carry on.

It also works in a script file, with one exception: the first function of the file, which picks up the first comment block in the file, and that will be the shebang line:

$ cat > an-octave-script << END
> #!/usr/bin/env octave
>
> ## This is the help text of the foo function.
> function x = foo (y, z)
>    x = y+z;
> endfunction
>
> ## This is the help text of the bar function.
> ##
> ## Multi-line help text are not really a problem,
> ## just carry on.
> function x = bar (y, z)
>   x = y+z;
> endfunction
> END
$ octave-cli
octave-cli:1> source ("an-octave-script")
octave-cli:2> man bar
'bar' is a command-line function

This is the help text of the bar function.

Multi-line help text are not really a problem,
just carry on.

octave-cli:3> man foo
'foo' is a command-line function

!/usr/bin/env octave

I reported a bug about it (bug #51191), but I was pleasantly surprised that function help texts were being identified at all.

June 06, 2017 12:00 AM

June 01, 2017

Carnë Draug

My Free software activities in May 2017

Octave

Bugs and patches:

  • Fixed (bug #47115), where label2rgb in the Image package would not work with labelled images of type uint32 and uint64. The reason was that behind the scenes ind2rgb from Octave core was being used, and Octave imitates Matlab in that labelled images can only be up to uint16. That sounds like a silly limitation, so I dropped it from core in ind2rgb and ind2gray. However, there are other places where that limitation is in place and needs to be lifted, such as cmunique, cmpermute, and the display functions.
  • Fixed hist3 handling of special cases with NaN values in the rows that define the histogram edges, and centring values in a histogram when there's only one value. Started as a side effect of fixing (bug #51059).
  • (bug #44799) - documented that imrotate does not work with the spline method.
  • Fixed the input check for fspecial (e65762ee9414).
  • Reviewed wiener2, a function for adaptive noise reduction, from Hartmut, and extended it to N dimensions.
  • Fixed mean2 Matlab incompatibilities for some weird cases (bug #51144).
  • Made imcast support the logical class.
  • Reviewed otsuthresh, a function to compute the threshold value of a histogram via Otsu's method, by Avinoam Kalma.
  • Reviewed affine2 and affine3, classes to represent 2d and 3d affine transforms, submitted by Motherboard and Avinoam. Started generalising the code to N dimensions to avoid duplication. Only one method missing (maybe you wanna help?).

As sysadmin:

  • Finished updating the wiki for Octave, due to issues with pygments.

Debian

  • Started packaging libsystem-info-perl, got stuck due to a licensing issue

June 01, 2017 12:00 AM

May 31, 2017

Mike Miller

Blog Relaunch, More Pythonic With Pelican

I just finished converting this blog to Pelican, a static site generator built entirely on Python. The conversion process was so easy, and Pelican is such a pleasure to work with, that I wish I had done this much sooner instead of letting this blog languish for over a year and a half.

To reboot the blog and kick things off again, I'll describe just how easy it was to convert to Pelican, compare it to what I was working with before, and lay out some ideas for the future.

Starting with Pelican

So Pelican is a static site generator, which means it builds a complete website from a template and a bunch of files representing content. The content is usually written in a plain text markup language like Markdown. Pelican has explicit support for common blog features like articles, pages, categories, comments, and syndication. The blog was already written in Markdown and built using a static site generator, and I wanted to try redoing it with Pelican.

Creating a Pelican site is simple, just run pelican-quickstart, fill in some content, and run pelican to compile it. I only had to copy in the existing Markdown content and massage the metadata headers into Pelican's format. I also wanted to preserve the URL scheme I had already been using, and found that it was trivial to tweak the default config files to do exactly that. The result was a new site that mostly replicated the content and URLs of the existing blog.

I noticed a small difference in the rules that Pelican uses for transforming a page title into a URL. There is one post whose title contains a "+" plus sign. My previous blog software converted this into the word "plus", while Pelican drops it by default. Fortunately, it's simple to add custom URL transformation rules, and I got this post where it needed to be. If generic transformation rules don't work, Pelican also allows an article to specify exactly what its URL should be in its metadata.

I also wanted to be able to add a delimiter in articles to mark the beginning of the article as a summary. By default, Pelican pulls out the first 50 words as the summary, probably cutting off in mid-sentence. I wanted more control, without having to duplicate the summary in a header field. Fortunately, there's a plugin for that. The pelican-plugins repository has dozens of plugins, including one that recognizes a delimiter comment marking the end of the page summary.

At this point, setting aside appearance, the old blog was converted perfectly, preserving all of the content, URLs, summaries, feeds, tags, and comments. I had wanted to know if I could use Pelican to publish my blog, and in about an hour I had answered that easily, and it was basically ready to use. Of course the next question was, what else can I do?

Ditching Octopress

I'll take a small detour here and try to explain why I did this conversion in the first place.

When I started this blog three years ago, I wanted to use a static site generator. I settled on Octopress at the time because it seemed simple and included all of the features I wanted. Octopress is based on Jekyll, which was and is probably the most popular static site generator. Octopress looked like Jekyll with a bunch of integrated plugins and theming problems already solved. It looked like a quick way to get started with Jekyll without having to learn Ruby and mess around with CSS, just write and publish. And yes, it was pretty easy.

One problem that was quickly apparent is that Octopress was basically a preconfigured Jekyll project. To customize Octopress for your own blog, you essentially have to fork Octopress and make changes to it. There was no way to run upstream Octopress against your own configuration and content.

Another problem, a direct corollary of the first: Octopress is based on a now-ancient version of Jekyll. It's impossible to take advantage of any Jekyll improvements over the years because Octopress used Jekyll 0.12 from 2012. If Octopress updated to a newer version, the only way to take advantage would be to rebase your local fork onto Octopress upstream changes and hope for the best.

Not only are configuration changes made in the Octopress repository itself, but content is put there by default. So a typical Octopress blog is a fork of the Octopress project, with local config files, and with all articles and pages mixed in there too. I actually created a subrepo for just the content, to try to maintain the content separately from the Octopress repo. This seemed better at first, but I still had config changes in one repo and content in the other, and they're not exactly independent. Add in a customized fork of a theme, and that's three separate repositories to maintain one simple blog.

I saw that Octopress had been working on a redesign to fix these problems and work with the latest Jekyll, but it seems like development has stalled. Maybe Jekyll is good enough now that Octopress isn't really needed anymore? Maybe the authors moved on and the Octopress community dissolved? Whatever the state of the project, I had a blog built with a tool that was cumbersome to maintain, wasn't being updated, and that I didn't really understand. I stopped updating my blog due to a confluence of many life events, and being difficult to maintain wasn't helping.

I was at PyCon 2017 last week, and one of the strongest messages that I took with me was to try to find a Pythonic way to do everything. Since Python can do so many things and make them so easy, why not do more things in Python? I made a note to find and try out a best-practices Pythonic static site generator to reboot my blog, and I found Pelican. And here we are.

Customizing Pelican

Pelican feels extremely Pythonic to me, because it made it extremely easy to build a replacement blog from scratch using sane defaults and working with my existing Markdown. The default behavior is easy and intuitive and makes something that just works. Now, beyond using the defaults, customizing Pelican is done with Python modules, and there is an active community building reusable plugins and themes for Pelican.

To add plugins and choose a theme, I only needed to point at the pelican-plugins and pelican-themes repositories and add a little configuration. Most of the themes are listed with screenshots and live demos at pelicanthemes.com. I was very happy to find the pelican-bootstrap3 theme because it is clean and minimal, it's based on the widely used Bootstrap framework, and it is extremely customizable. With just a little exploration of the config space, I now have something that behaves and looks more like I wanted the original blog to and less like the Pelican default theme.

I had to stop myself from customizing too much, write this post, and update this blog after a long hiatus. It's functional and pretty much does what I want for now, but there is still more customization to be done. I would like to install a favicon. I want to get the built-in site search feature working. I would like to look at the Bootswatch themes for Bootstrap that can easily be dropped into this theme. I might add a tag cloud once I have some more content.

For now the most important thing is that the blog is up and running again. I hope the switch to Pelican will lower the barrier to updating a bit and help keep things moving. Sometimes the right tools make the difference. I'm also curious what else Pelican can be used for. Are there any downsides that I haven't run into yet? Other Pythonic tools for compiling content into web sites, documents, presentations?

by Mike Miller at May 31, 2017 10:40 PM

Joel Dahne

The first day and my repository

Yesterday I handed in the last exam for the semester(!) and today I started to work on the project full time. I began by setting up a repository for my code. At the moment I have only cloned the interval package; I have yet to make my own changes to it.

I started to read the constructor for intervals to see how it works. While doing that I realized that there is much less to do than I thought. The constructor can actually handle creating intervals from numerical arrays of any dimension. The function displaying the interval can only handle up to two dimensions, but if you look at the internal state it actually has more dimensions. It is not perfect though: it does not work for decorated intervals and it cannot handle strings and cells as input. But still, it makes things much easier for me.
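
For example, something like this already works (a quick test of mine, run against the development version of the package):

pkg load interval
x = infsup (zeros (2, 2, 2));   # bare interval from a 3-d numeric array
size (x)                        # ans = 2 2 2, even though the display
                                # cannot show the third dimension yet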

I will continue to work and hopefully I have actually committed something by the end of the week. Next week I will be away but after that I will code for the rest of the summer.

by Joel Dahne (noreply@blogger.com) at May 31, 2017 05:38 PM

May 30, 2017

Michele Ginesi

Timetable

Timetable

During this first period I searched the bug tracker for bugs related to special functions. I found these four bugs that need to be fixed:
  1. gammainc: the upper version rounds down to zero if the output is small. For this bug a lot of work has already been done by Marco and Nir. I will just fix the last few things (like the input validation).
  2. gammaincinv: after finishing gammainc, I should start working on the inverse (which is missing in Octave).
  3. betainc: it is necessary to fix the input validation.
  4. besselj: it gives NaN if the input is too large. Other versions of the Bessel functions actually have the same problem.
My idea of the timetable is the following
  • First part (May 30 - June 26: 4 weeks):
    • 1st week: finish to fix gammainc (input validation, documentation, add tests)
    • 2nd and 3rd weeks: write gammaincinv (via Newton's method)
    • 4th week: fix the input validation for betainc (and, if needed, add tests and fix the documentation)
  • Second part (July 1 - July 24: 3 weeks): fix the Bessel functions. Some ideas include testing other libraries in addition to Amos (the library currently in use) and trying to implement new algorithms. Add tests and revise the documentation.
  • Final part (July 29 - August 21: 3 weeks): add tests to the other special functions to make sure they work properly, fixing any problems that may be found, and revise the documentation if needed.

by Michele Ginesi (noreply@blogger.com) at May 30, 2017 03:54 AM

May 28, 2017

Joel Dahne

Timeline

This post will be about specifying a timeline for my work this summer. As I mentioned in the introductory post the work will be about implementing support for higher dimensional arrays in the interval package. To begin with I have divided the work into 5 different parts:

1. Construction and Printing

This part will be about implementing functions for creating and printing arrays. It will mainly consist of modifying the standard constructor and all the different functions used for printing intervals so they can handle higher dimensional arrays.

2. Vectorized Functions

Here I will work on generalizing all functions supporting vectorization to also support arrays of higher dimensions.

3. Folding Functions

Here I will work on generalizing all functions implementing some sort of folding to support higher dimensions. By folding I mean taking a multidimensional array as input and returning an array of lower dimension. Examples of these functions are $sum$, $prod$ and $dot$.

4. Plotting

I'm not sure what support Octave has for plotting higher dimensional arrays, but if there are some functions which could also be useful for intervals I will try to implement them here.

5. Documentation

I'll write the documentation alongside the rest of the work. In the end I will try to add some usage examples and integrate it better with the standard documentation.

So these are the parts into which I have divided my work, and the timeline will be

  • Phase 1 (30/5 - 30/6)
    • Week 1: Setting up
    • Week 2-3: Construction and Printing
    • Week 4: Vectorized Functions
  • Phase 2 (3/7 - 28/7)
    • Week 5: Continue on Vectorized Functions 
    • Week 6-7: Folding Functions
    • Week 8: Plotting
  • Phase 3 (31/7 - 25/8)
    • Week 9: Continue on Plotting
    • Week 10-11: Documentation
    • Week 12: Finishing up!

My first week will be rather short since I have an exam in the middle of the week. After that I will also be away for a week, not working on the project; I have not counted that week in the timeline above.

by Joel Dahne (noreply@blogger.com) at May 28, 2017 08:30 PM

May 21, 2017

Joel Dahne

Introduction

Hi! I'm Joel Dahne and I will be using this blog to share the progress of my work for Google Summer of Code 2017 under GNU Octave where I will be working on improving the interval package.

About me

Currently I'm a master's student in mathematics at Uppsala University, Sweden; I'm just about to complete my first year and have one more to go. I first encountered interval numerics with my bachelor thesis, during which I tried to generalize the method described in this paper to higher dimensions. The code from the project is available on GitHub, https://github.com/Urathai/generateZeros. Working on that project ignited my interest in interval numerics and I want to work to increase its availability to the common user.

The project

In short my project is about adding support for N-dimensional arrays in the interval package. At the moment the package only has support for up to 2-dimensional arrays. Most of the time this is enough but sometimes it is limiting. The goal is to have the syntax identical to the one for normal arrays. Switching from floats to intervals should be as easy as adding $infsup$ in front of the variable.
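
To make this concrete, here is the kind of code that should eventually work unchanged (hypothetical usage; folds over higher dimensions are precisely what is missing today):

A = reshape (1 : 8, 2, 2, 2);   # an ordinary 3-d array
s = sum (A, 3);                 # fold along the third dimension

B = infsup (A);                 # the same data as intervals
t = sum (B, 3);                 # should work identically once N-d support lands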

During my time preparing I have also noticed that some of the folding functions (e.g. $sum$, $prod$ and $dot$) handle some edge cases differently than the standard implementation. Hopefully this can be resolved at the same time.

This blog

During my project I will try to update this blog with my current progress. It's my first time writing a blog, so we will see how it goes. The next step for me is to create a more detailed plan of how I should structure my work during the summer; this will most likely be the subject of my next post.

by Joel Dahne (noreply@blogger.com) at May 21, 2017 10:36 PM

May 16, 2017

Enrico Bertino

Introductory Post

Hello, I'm Enrico Bertino, and this is the blog I'll be using to track my work for Google Summer of Code 2017. The project will consist of redoing the neural network package, with a focus on convolutional neural networks. Here is my proposal.


Something about myself: I'm a Mathematical Engineering student and I live in Milan. After a BSc at Politecnico di Milano I did a double master's degree between École Centrale de Nantes, France, and Politecnico di Milano. During the two years in France I attended engineering classes with a specialization in virtual reality, and in Italy I am enrolled in the last year of an MSc majoring in Statistics and Big Data.

During my project, I will leverage the Pytave project in order to use Python code in the implementation of the package. This is almost necessary because nowadays there is a strong tendency toward open research in the deep learning field, where even the biggest companies release their frameworks and open their code. The quality of these resources is very high, especially for Python frameworks and APIs such as Tensorflow. This integration will allow us to keep up with neural network improvements continuously and to put a greater focus on the Octave interface. In this sense, by letting Octave coexist with Tensorflow, we will exploit their synergy and reach solid goals quickly.

As of now, I have tested the integration of Tensorflow via Pytave. I have come across some problems with variable initialization on Ubuntu 16.04, whereas everything works fine on macOS Sierra. I will perform different tests in order to be sure it works before starting the implementation of the package. Here is my repository.
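
For the curious, the round trip I am testing looks roughly like this (a sketch assuming Pytave's pyexec/pyeval are on the Octave path and Tensorflow is installed; this is not code from the package):

pyexec ("import tensorflow as tf")
pyexec ("hello = tf.constant ('Hello from Tensorflow')")
pyexec ("sess = tf.Session ()")
greeting = pyeval ("sess.run (hello)")   # result comes back as an Octave value
disp (greeting)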

by Enrico Bertino (noreply@blogger.com) at May 16, 2017 01:53 PM

May 13, 2017

Michele Ginesi

expint

The expint function

The exponential integral

This is the first function I worked on in Octave: you can find the discussion here. Even if it will not be part of my GSoC project, I think it would be interesting to show how the function has been rewritten.

Definition and main properties

The canonical definition for the exponential integral is $$E_i (z) = -\int_{-z}^{+\infty} \frac{e^{-t}}{t}dt$$ but, for Matlab compatibility, expint computes the function $$E_1(z) =\int_{z}^{+\infty}\frac{e^{-t}}{t}dt.$$ The two definitions are related, for positive real values $z$, by $$E_1(-z) = -E_i(z) - \mathbb{i}\pi.$$ More generally, the function $E_n$ is defined as $$E_n(z) = \int_1^{+\infty}\frac{e^{-zt}}{t^n}dt$$ and the following recurrence relation holds true: $$E_{n+1}(z) = \frac{1}{n}\left(e^{-z}-zE_n(z)\right).$$ Moreover $$E_n(\bar{z}) = \overline{E_n(z)}.$$

Computation

To compute $E_1$, I implemented three different strategies:
  • Taylor series (Abramowitz, Stegun, "Handbook of Mathematical Functions", formula 5.1.11, p 229): $$E_1(z) = -\gamma -\log z -\sum_{n=1}^{\infty}\frac{(-1)^nz^n}{n\,n!}$$ where $\gamma\approx 0.57721$ is Euler's constant (a small prototype of this branch is sketched after this list).
  • Continued fractions (Abramowitz, Stegun, "Handbook of Mathematical Functions", formula 5.1.22, p 229): $$E_1(z) = e^{-z}\left(\frac{1}{z+}\frac{1}{1+}\frac{1}{z+}\frac{2}{1+}\frac{2}{z+}\cdots\right)$$ or, in a more explicit notation $$E_1(z)=e^{-z}\frac{1}{z+\frac{1}{1+\frac{1}{z+\frac{2}{1+\frac{2}{z+\ddots}}}}}$$ This formulation has been implemented using the modified Lentz algorithm ("Numerical recipes in Fortran 77" p.165).
  • Asymptotic series ("Selected Asymptotic Methods with Application to Magnetics and Antennas", formula A.10, p 161): $$E_1(z)\approx \frac{e^{-z}}{z}\sum_{n=0}^{N}\frac{(-1)^n n!}{z^n}.$$ A difficulty with this approximation is that the series is divergent, which is why the sum goes up only to a certain "big" $N$ and not up to infinity.
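
As a small illustration, the Taylor-series branch can be prototyped in a few lines (a rough sketch with a fixed number of terms, not the actual implementation, which controls the truncation error):

function y = expint_taylor (z, nterms = 50)
  ## E_1(z) = -gamma - log(z) - sum_{n>=1} (-1)^n z^n / (n n!)
  n = 1 : nterms;
  y = -0.577215664901532861 - log (z) - sum ((-z).^n ./ (n .* factorial (n)));
endfunction

expint_taylor (0.5)   # ans = 0.5598, matching expint (0.5)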
Then I tested them in the square $[-100,100]\times[-100,100]$ of the complex plane, comparing the results with the ones given by the Octave symbolic package. With this information, I identified three zones of the complex plane: in each zone one strategy is better than the others.
The final function then divides the input into three parts, according to their position in the complex plane, and computes the function using the appropriate strategy.

by Michele Ginesi (noreply@blogger.com) at May 13, 2017 07:16 AM